Device Posture API to include "Semantic Postures"? #94

Closed
diekus opened this issue Sep 23, 2021 · 11 comments
Comments

diekus (Member) commented Sep 23, 2021

Yesterday Microsoft announced a new device that sets different "postures" even though the screen itself doesn't really change, fold, or physically transform (see image below).

[image]

This got me thinking about the concept and functionality of the Device Posture API itself, its relation to these new types of "semantic" postures, and how we should support them. It is becoming increasingly common for desktop OSs to react to whether a device is being used with a keyboard (normal laptop) or like a tablet (with touch). Generally these changes amount to spacing out the UI, and I think it would be really interesting to have something similar for websites.

I want to open a discussion on including these kinds of "semantic postures" in the API, since I think we increasingly realize that device posture is about much more than foldable screens and dual-screen devices.

Think about a video streaming website: when you browse to it you get the normal, traditional site, but when you set your device into a "studio", "playing", "consume", or "create" posture (whatever name it ends up with; I am interested in the action and posture the device is in), it enhances the layout, maybe by making the controls bigger, spacing them out, and hiding unnecessary menus.

@media (device-posture: studio) { /* add more padding and margins, hide UI elements */ }
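For illustration, a minimal script-side sketch of the same idea, assuming a hypothetical "studio" value were ever added to the device-posture media feature (the value name and the CSS class are illustrative, not part of the current spec):

// Hypothetical: "studio" is not a device-posture value in the current spec.
const studioQuery = window.matchMedia("(device-posture: studio)");

function applyStudioLayout(matches) {
  // Space out controls and hide secondary menus while in the "studio" posture.
  document.body.classList.toggle("studio-layout", matches);
}

applyStudioLayout(studioQuery.matches);
studioQuery.addEventListener("change", (e) => applyStudioLayout(e.matches));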

To elaborate a bit more, I think "continuous" doesn't really make the cut when it comes to device posture: for example, the device listed above and many other 2-in-1s would all fall into the same bucket, and it would be a missed opportunity to provide layouts with better accessibility and usability.

anssiko (Member) commented Sep 23, 2021

@diekus thanks for the proposal, added to our TPAC agenda w3c/devicesensors-wg#47 for discussion.

kenchris (Contributor) commented Sep 23, 2021

Yes, I think this was the first big test of our API, and I don't think we went far enough in decoupling it from displays.

Postures are much more about intended use cases, and they may not depend only on hardware configuration (like folded state or primary input) but also be partly controlled by the operating system; for example, some devices enter tablet mode when the keyboard is detached or when the user toggles it.

I heard talk about three postures for the new Surface Laptop Studio: Stage, Canvas, and Compose. Microsoft has talked about Canvas and Compose before for the Surface Neo.

Compose = primary input is keyboard/mouse/trackpad; the screen might allow touch/pen
Canvas = primary input is touch/pen; a physical keyboard is not easily accessible
Stage = more of a presentation/entertainment mode; the keyboard is not easily accessible, useful for watching movies or playing games with touch or an external controller

In many ways Stage covers similar use-cases to the old "tent" mode.

I suggest that we follow the display locking spec and encourage people to check with startsWith, so that, for example, startsWith("portrait") matches both "portrait-primary" and "portrait-secondary".

As I was never a fan of "folded-over", I would suggest:

stage-primary (stage mode with the primary or only screen; covers tent mode with only the main screen on)
stage-secondary (covers tent mode with the secondary screen)
stage-tent (covers regular tent mode)

We could also do stage-single and stage-dual

canvas-flat (covers regular tablets, phones and non-folded foldable devices)
canvas-folded (a folded foldable device)

compose-flat (covers laptop and laptop-like modes, like the Surface Neo with a virtual keyboard taking up a whole screen)
compose-folded (covers laptops that have vertically folded display, like this one: https://bgr.com/tech/intel-dual-screen-laptop-honeycomb-glacier-prototype-at-computex/)
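A rough sketch of the prefix matching this naming would enable, assuming the proposed stage-*/canvas-*/compose-* values above were exposed to script as a posture string (e.g. via navigator.devicePosture.type; both the values and the class names below are illustrative, not part of the current spec):

// Illustrative only: stage-*/canvas-*/compose-* are proposed values, not spec.
const posture = navigator.devicePosture ? navigator.devicePosture.type : "";

// Prefix matching, mirroring how startsWith("portrait") covers
// both "portrait-primary" and "portrait-secondary".
document.body.classList.toggle("entertainment-layout", posture.startsWith("stage"));
document.body.classList.toggle("touch-first-layout", posture.startsWith("canvas"));
document.body.classList.toggle("keyboard-first-layout", posture.startsWith("compose"));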

lauramorinigo (Contributor) commented:

Thanks a lot for bringing up this topic. Even though it's something we can discuss, I disagree with pushing for a change right now. We have a release next month with the current form of the Device Posture API that includes demos, codelabs, and even partnerships that can help us understand developer approval. I would love to have developers' and users' feedback before considering another change. Besides this, we just had a positive TAG review of the API in its current form w3ctag/design-reviews#575 (comment).
Anyway, happy to brainstorm this during TPAC and bring back whatever feedback we are able to gather.

chris480 commented:

I'm wondering if there is a fundamental language set we could use to allow developers to define their own posture combinations. What happens when we get more unique multi-display setups? How would an LG flip be defined, if at all? How about a detached secondary screen? Pamphlet folds might necessitate a stage-n()?

diekus (Member, Author) commented Sep 23, 2021

@kenchris I'll work on a list of 'postures' and 'semantic postures' I can come up with to rework the postures.

@lauramorinigo As an experimental API in dev trials, I would prefer to iterate quickly and land on a design that is future-proof. I tend to agree with Kenneth that we need to review postures to bring an API that isn't DOA.
Having said that, you've mentioned you have ongoing partnerships; out of curiosity, do you have any signals from developers or trials at Samsung through which you've managed to collect user feedback? This would be invaluable as another data point to consider for designing this API. I do know there is evidence that suggests adding "semantic" postures, and I am very keen on reading all the feedback to make the necessary adjustments.

@chris480 just to see if I understand your point: hypothetically, how would this LG flip not fall into a predefined category (and we could add more categories if a new device form factor appears and becomes popular)? Also, a detached secondary screen might be in scope for the Presentation API / Remote Playback API? Is there any device at the moment with native capabilities for detachable screens that we'd want to mimic?

chris480 commented:

I think for the LG Flip it was more to bring up that some screen orientations might not be folded, but rather rotated. Primary and secondary seem to change based on orientation. Based on the current proposals, what would be appropriate for such a device?

As far as detached screens go, another LG product comes to mind: the LG V60 ThinQ. I also recall seeing some detached dual-screen laptops at CES in the past. There's also the less popular option of having a mini display above or near the keyboard; I think Asus had a laptop with something that amounted to a giant MacBook-esque touch bar.

reillyeon (Member) commented Sep 24, 2021

The continued innovation in this space makes me reconsider the use case for expressing a high-level concept like "posture" instead of low-level details about the screen geometry and positioning, as proposed by the Window Segments Enumeration API, plus hints about which input modes (touch, mouse, keyboard, pen, etc.) are available or preferred given the device configuration. The question I'd like to see answered is whether these high-level postures provide additional information, or whether web content would be more adaptable to new device configurations if it made decisions based on the individual low-level properties.

Edited to add: Please take my comments here with a grain of salt since I am mostly an observer in this space and others have been giving it more thought. I think "is the device folded" is another useful piece of low-level information in addition to what I mentioned above. More semantic posture information on the other hand seems like a bit of scope creep.

torgo (Contributor) commented Sep 24, 2021

@diekus with respect, I don't think this is the right time to make breaking changes. The TAG review has just completed successfully. We (Samsung) have produced an implementation which is currently shipping in our beta. We're also actively working with partners and intend to bring developer feedback on the current API back into the process. We're also about to present this API at our developer conference. Iteration requires input, which we don't have yet. So by all means let's work to evolve things, but for now can we hold off on breaking changes until we have some developer input on which to base them?

diekus (Member, Author) commented Sep 24, 2021

Hola @torgo! On your points: I don't think a positive TAG review means we should stop improving something, and just as food for thought, I am proposing we consider a class of device/behavior that's not really new. It wasn't considered (☹️) because when the API was initially designed it was all about the physical folding capabilities of a screen (it was initially called, after all, the "Fold Angle API"). This doesn't really cover postures of devices without these folding screens, and I think at worst the implication would be a new set of values for the media query, hopefully nothing too "breaking".

I'd be thrilled to see the developer feedback from Samsung along with information from the other involved parties so we can continue making improvements towards an API that works best with all devices! So, if I may (as originally intended) ask, what do you (Samsung) think about the concept of "semantic postures" exposed above? Think about a user browsing with Samsung Internet on their Samsung tablet using DeX vs. removing the keyboard. Do you think it would be useful for the layout of a form or site to adapt to this? Maybe space out content? Scale content to make it easier to touch?

Thanks for your feedback ✨

darktears (Contributor) commented:

It was discussed at TPAC, but I'll put it here for the record.

Assuming that the difference between regular laptop mode and the other two modes (Stage and Canvas) is whether Windows triggers tablet mode, then we're already covered by the pointer/hover/any-pointer/any-hover media features. In Chromium they are dynamic and wired up to Windows' tablet mode, and this works today on, say, a Surface Book/Surface Pro (see what happens when you detach the keyboard).
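For example, a minimal sketch using only the existing interaction media features (the class name is illustrative):

// A coarse primary pointer with no hover is a reasonable proxy for tablet mode.
const tabletLike = window.matchMedia("(pointer: coarse) and (hover: none)");

function updateLayout(matches) {
  // Space out touch targets when touch is the primary input.
  document.body.classList.toggle("touch-first", matches);
}

updateLayout(tabletLike.matches);
tabletLike.addEventListener("change", (e) => updateLayout(e.matches));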

diekus (Member, Author) commented Oct 28, 2021

The functionality can already be achieved with the pointer media features. Thanks @darktears for the heads-up!

diekus closed this as completed Oct 28, 2021