
UI Components for VR

If you are new to VR, the process can be intimidating and complex. However, all the basics of design still hold true. Here are a few points to consider as you build out your immersive user experience.

Context

In another digital medium like mobile, tablet, or desktop, you can assume that the user is in front of the device. If a user wants to switch modes and perform various tasks, we present them with a way to access screens that are specific to each task. Seems pretty basic.

In VR, the user is navigating an entirely new world in three dimensions. With the ability to move around, there is a new signal by which we can make assumptions about what they are doing: proximity. The distance between a user and an object in space can tell us a lot about their intent. In a 3D immersive experience, users can switch modes by actually moving around.

In VR, the designer can infer context from the distance between a user and an object

Two aspects of proximity that are useful for the designer are layout and exposure. In responsive web design, the designer infers context from screen size and, from there, hides, shows, and modifies elements. In VR, the designer can do the same, inferring context from the distance between a user and an object.

For example, if a user is close enough to grab an object with their virtual hands, highlight the object. If they are farther than arm’s length, don’t show a highlight. Instead, show a signage element that signals the object is there, and encourage the user to move closer to it so they can pick it up.
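To make that concrete, here is a minimal TypeScript sketch assuming a three.js scene. The GRAB_RANGE threshold and the setHighlight/setSignpost helpers are hypothetical stand-ins for whatever highlight and signage treatment your app uses.

```typescript
import * as THREE from 'three';

// Hypothetical threshold: roughly arm's length, in meters.
const GRAB_RANGE = 0.75;

// Hypothetical helpers: toggle a highlight or a signage element on the item.
function setHighlight(item: THREE.Object3D, on: boolean): void {
  item.userData.highlighted = on; // e.g. swap to an emissive material
}
function setSignpost(item: THREE.Object3D, on: boolean): void {
  item.userData.signposted = on; // e.g. show a floating label
}

// Run every frame: swap the affordance based on the user's proximity.
function updateAffordance(user: THREE.Object3D, item: THREE.Object3D): void {
  const withinReach = user.position.distanceTo(item.position) <= GRAB_RANGE;
  setHighlight(item, withinReach); // close enough to grab: highlight it
  setSignpost(item, !withinReach); // too far: invite the user closer
}
```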

Breakpoints help establish the meaning of different distances.

Use breakpoints to define the meaning of different distances. For example, in a social VR application, the area closest to a person might be defined as personal space, beyond that conversational space, and beyond that peripheral space. Different UI elements can then be shown or hidden to reinforce each zone of the experience.
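One way to encode those zones is a distance-breakpoint lookup, analogous to min-width breakpoints in responsive CSS. This is a sketch; the zone boundaries are made-up values, not a standard.

```typescript
type Zone = 'personal' | 'conversational' | 'peripheral';

// Hypothetical breakpoints, in meters; tune these for your experience.
const ZONE_BREAKPOINTS: Array<[number, Zone]> = [
  [1.2, 'personal'],        // inside ~1.2 m
  [3.6, 'conversational'],  // out to ~3.6 m
  [Infinity, 'peripheral'], // everything beyond
];

// Return the first zone whose limit the distance falls under.
function zoneFor(distance: number): Zone {
  return ZONE_BREAKPOINTS.find(([limit]) => distance <= limit)![1];
}

// The UI can then show or hide elements per zone, e.g. a chat panel in
// personal space, a name tag in conversational space.
console.log(zoneFor(0.8)); // "personal"
console.log(zoneFor(2.5)); // "conversational"
```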

Control

During this last wave of mobile adoption, the spectrum of interaction has been between cursors and touch. In VR the spectrum changes slightly. Touch is virtually non-existent without technology that provides more complete haptic feedback.

In VR, the emergent spectrum of controls runs between cursors and hands. Though hands are related to touch, hand-based interactions are much different. Users can virtually grab, drop, shake, spin, rotate, scale, and move objects, which opens the door to new and exciting forms of interaction. Most of the time these interactions are mediated by controllers with several buttons in each hand, as well as newer technology that tracks the user’s hands in 3D space, recognizing gestures and hand positions. All of this is far from standard.

The spectrum of VR controls is between cursors and hands.

There is, however, some standardization occurring closer to the cursor end of the spectrum. First, there is gaze-only. The user selects a target by aiming a ray that extends from their forehead out into the space directly in front of them. It’s about as precise as a laser pointer glued to their forehead. This is by far the most accessible and widely used control scheme because it doesn’t require a controller. Most lower-end mobile VR solutions use gaze selection, and even higher-end systems sometimes leverage the simplicity of this approach. IMO this is something akin to mobile-first design: in its simplest state, every VR UI system should support a gaze-only model before expanding to support more complex control schemes.
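Mechanically, gaze selection is just a raycast from the center of the view. Here is a sketch using three.js’s Raycaster; the camera and the selectables list are assumed to exist elsewhere in the app.

```typescript
import * as THREE from 'three';

const raycaster = new THREE.Raycaster();
const center = new THREE.Vector2(0, 0); // dead center of the viewport

// Assumed to be set up elsewhere: the VR camera and the selectable objects.
declare const camera: THREE.PerspectiveCamera;
declare const selectables: THREE.Object3D[];

// Cast a ray straight out from the user's head and return the nearest hit.
function gazeTarget(): THREE.Object3D | null {
  raycaster.setFromCamera(center, camera);
  const hits = raycaster.intersectObjects(selectables, true);
  return hits.length > 0 ? hits[0].object : null;
}
```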

Second is the 3DOF (3 Degrees of Freedom) controller. 3DOF tracks only rotation (pitch, roll, and yaw). The experience is similar to a laser pointer with little to no arm movement. This is the simplest of the tracked controllers, but developers are getting creative with different physical metaphors (fishing pole, arm extension, etc.) that are proving 3DOF to be a powerful solution. Most of the higher-end mobile/untethered solutions now come standard with 3DOF controllers.

It’s important that the designer makes it clear what the system can do

The final scheme is 6DOF, which tracks both rotation (pitch, roll, and yaw) and position (length, width, and depth). At that point, the controller is fully tracked in space and can act as a proxy for hand interaction. This is by far the most immersive type of VR experience. Done with elegance, the user will no longer think about the controller as a proxy and will simply be sword fighting a skeleton warrior, climbing a mountain, or painting in thin air.

In any case, it’s important that the designer makes it clear what the system can do. If an object is selectable, the user should easily discover that. If there are certain aspects of movement or discovery that are essential, the designer should spell them out for the user. In a more hands-based interaction system, the designer should use their best judgement about how to explain how it works. It’s a time of wild experimentation, so have fun with it. Just do your best to keep it simple.

On the more established cursor-based end of the spectrum, there are ways to make the system more intuitive. The first is cursor state change. When the user hovers over something, the cursor should change to something more specific that helps reveal what might happen next. This pattern exists on the web: when a user hovers over a link, the cursor goes from an arrow icon to a hand icon. It works the same way in VR, except the icon might have to do with moving in space, collecting an object, or getting information about something.
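A sketch of that idea: map each target’s advertised intent to a reticle icon. The intent names and the setCursorIcon helper are hypothetical.

```typescript
// Hypothetical interaction intents a target can advertise.
type Intent = 'move' | 'grab' | 'info' | 'none';

const CURSOR_ICONS: Record<Intent, string> = {
  move: 'icon-footsteps', // a point the user can travel to
  grab: 'icon-hand',      // an object the user can collect
  info: 'icon-question',  // something with more detail available
  none: 'icon-dot',       // neutral reticle
};

// Hypothetical helper: swap the reticle sprite to the named icon.
function setCursorIcon(name: string): void {
  console.log(`cursor -> ${name}`);
}

// Call whenever the gaze target changes, mirroring the web's
// arrow-to-hand cursor change on link hover.
function updateCursor(target: { intent?: Intent } | null): void {
  setCursorIcon(CURSOR_ICONS[target?.intent ?? 'none']);
}
```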

Another thing to consider is hit areas, hover states, and selection states. In VR, the designer has the user’s entire field of view to work with, so the hit area of a link or target action should be big and bold. It’s also a good idea to make the target hit area larger than the visible target; that makes the system more forgiving. Remember that the user, in some cases, is navigating with something similar to a laser pointer glued to their forehead. Big, bold, round hit areas with obvious, colorful hover and selection states make the system as easy to navigate as possible.
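One way to sketch that forgiveness: attach an invisible collider padded beyond the visible mesh, and raycast against the collider instead. The padding factor here is an arbitrary value for illustration.

```typescript
import * as THREE from 'three';

// Make the hit area ~40% larger than the visible target (arbitrary choice).
const HIT_PADDING = 1.4;

function addHitArea(target: THREE.Mesh): THREE.Mesh {
  // Size an invisible sphere past the visible geometry's bounds.
  const bounds = new THREE.Box3().setFromObject(target);
  const radius = bounds.getSize(new THREE.Vector3()).length() * 0.5 * HIT_PADDING;
  const hitArea = new THREE.Mesh(
    new THREE.SphereGeometry(radius),
    new THREE.MeshBasicMaterial({ visible: false })
  );
  target.add(hitArea); // raycast against hitArea rather than target
  return hitArea;
}
```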

Comfort

As stated previously, all of the basics of design hold true. Designing for VR is largely about following the standards associated with environmental graphic design and signage. The new layers of proximity breakpoints and helpful hints allow the designer to provide an intuitive and dynamic experience. The final element to pay close attention to in VR is comfort.

Bad design can literally make people vomit. So, in this new reality, it’s important to consider a few things. First is personal space and movement. Along the lines of proximity, make sure that content remains at a comfortable distance from users. In general, I’d recommend that content meant to sit in someone’s personal space be placed below eye level and rotated slightly upward, so it feels more like a console than a dialog that’s blocking content or in the user’s face.
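A placement sketch along those lines; all of the numbers are assumptions for illustration, not standards.

```typescript
import * as THREE from 'three';

const EYE_HEIGHT = 1.6; // typical standing eye height, in meters (assumed)
const DROP = 0.4;       // how far below eye level the panel sits
const DISTANCE = 1.0;   // comfortable reading distance
const TILT = 0.35;      // slight upward tilt, in radians

// Place a console-style panel below the gaze line, tipped toward the eyes.
function placeConsole(panel: THREE.Object3D): void {
  panel.position.set(0, EYE_HEIGHT - DROP, -DISTANCE);
  panel.rotation.x = -TILT; // rotate the panel's face up toward the user
}
```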

In terms of movement, keep it limited, and keep some elements fixed so that the entire environment isn’t moving. Also, if the user is going to move, it helps if they can see where they are going before they transition to that point. Changing the rotation of a person when they move is not recommended: it’s disorienting, and every control system I’ve played with that attempted it was confusing to learn. I always prefer to avoid it. Simple, slow, and incremental is better.

VR layout works best with centered elements that are grouped together.

The next has to do with layout. The conventions of rectangular screens don’t always work, particularly with regard to scale and proportion. If you port content straight from the web into VR, it can be very hard to read and can feel clunky. Instead, I like to use the following rule: if it’s too much for a mobile screen, it’s probably too much for a 360 scene. Take the amount of content you have on mobile, make the elements proportionally larger, and keep them centered in the viewport.

If it’s too much for a mobile screen, it’s probably too much for a 360 scene

It also helps to center elements such as buttons, logos, and graphics within the layout, for two reasons. One, there is no rectangle to anchor those elements to like we have on standard, rectangular screens. Two, a centered layout holds up better when viewing positions vary from side to side: whether the user views the layout from the left or the right, its proportions remain consistent, with no elements sitting closer to a person on one side and farther from someone on the other.

An important thing to consider in terms of comfort is the position of the user and what kind of head movement is required to navigate the system. It helps to keep layouts centered in view, and to keep interactions grouped together so that users don’t have to move their head between decision points. For example, if a dialog leads to a confirmation, place the new dialog in the same position as the previous one so that the interaction chain doesn’t require the user to traverse the scene multiple times.
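A sketch of that chain-of-dialogs idea: parent every dialog in a flow to one shared anchor, so each new step inherits the exact position of the last.

```typescript
import * as THREE from 'three';

// One anchor per dialog flow; position it once, in comfortable view.
const dialogAnchor = new THREE.Group();

// Swap dialogs in place so a confirmation appears exactly where the
// previous dialog was, sparing the user extra head movement.
function showDialog(next: THREE.Object3D): void {
  dialogAnchor.clear(); // remove the previous dialog, if any
  dialogAnchor.add(next);
}
```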

Final Thoughts

In this brave new world of VR, there are really no rules. New ideas are conjured up every day, providing whole new types of experiences and ways to interact. What I’ve outlined represents emerging patterns that I’ve observed and helped to create. The patterns are stated to help encourage consistency but are organic and rapidly changing. In many ways, this is all a bridge to the moment when holograms are indistinguishable from real objects. One day soon, you’ll just reach out and touch them. When that happens, all bets will be off as we head into a brand new paradigm of spatial computing. The bounds of reality will move forward, and new things will become possible. Good luck as we all move toward the next level of human experience.
