Why do you think Google, Facebook, Microsoft, Apple, and other global tech giants keep investing billions in VR development?
You’re right if you believe virtual reality will significantly change our world in the near future.
According to Zippia, the global VR market is projected to reach $26.9 billion by 2027. There are 57.4 million VR users in the U.S. (15% of the country’s population) and 90.9 million AR users; worldwide, there are over 171 million VR users. The VR industry’s market size is $7.72 billion, and 47% of Americans say they are familiar with VR.
And the attitude of large companies to this technology is proof of that.
VR devices will become as popular as smartphones since they allow users to watch movies and TV shows, attend public events, shop, and much more.
According to XR Today, nearly 1 in 3 consumers own an AR/VR device, and 15 percent plan to buy one in 2022: per the 2022 GroupM Annual Survey on Consumer Attitudes Towards Technology, 32 percent of respondents said they own an AR or VR device, and another 15 percent said they were looking to buy a device in this category in the next 12 months. According to techjury, 70% of VR headset owners have bought a game on their headset, and 14 million AR and VR devices were sold in 2019.
So something significant is visible on the horizon. And if you’re not just an observer but a participant in the story with the ability to influence its outcome, that’s very cool.
This article covers VR design fundamentals, written especially for professionals who create interaction systems.
The first introduction to the VR design fundamentals
First, add a pause at the beginning of the simulation — let the user do nothing for a few seconds in the scene. It allows users to adapt to the new environment and get oriented in the space. After that, they can concentrate on the task. There’s no need to demand any actions from the user right away.
And here we’re talking about notifications, voiceovers, and animations. Even the appearance of the scene itself can be revealed gradually (fading in from black, or assembling from fragments around the user).
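The pause-then-fade idea above can be sketched as a simple timing function. This is a minimal sketch, not tied to any engine; the two-second pause and the fade duration are assumed values you would tune per project.

```python
def fade_in_alpha(elapsed: float, pause: float = 2.0, fade_duration: float = 1.5) -> float:
    """Scene opacity over time: hold darkness for `pause` seconds so the
    user can adapt to the new environment, then fade the scene in
    linearly over `fade_duration` seconds."""
    if elapsed <= pause:
        return 0.0  # nothing is demanded from the user yet
    t = (elapsed - pause) / fade_duration
    return min(t, 1.0)
```

Your render loop would call this every frame with the time since scene start and multiply the result into the scene's brightness.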
Getting new users to take their first step into VR can be pretty tricky. Therefore, along with training on controlling the motion system, add the task of regular physical movement in the scene.
At first, users may not be very good at determining the position of objects in depth. So at the start, it is better to place interactive elements so that they are always at arm’s length or, on the contrary, much further from the initial position of the person. Users must understand that they need to take several steps to the object.
In most cases, if users experience motion sickness in any VR project, it is the developer’s fault. However, there are many discussions on this topic, and developers continue to explore it. So you should pay attention to the study of this issue since it’s possible to select the most appropriate way to move in space depending on the particular task.
In this case, the most common option is teleportation: the user moves instantly in space by pointing at the destination with the controller or with the direction of their gaze. The second most popular mechanic is flight in all its variations.
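Pointer-based teleportation like the one described above usually boils down to casting the controller's ray against the floor. A minimal sketch, assuming a flat floor at y = 0 and an arbitrary maximum teleport range:

```python
def teleport_target(origin, direction, max_distance=10.0):
    """Cast the controller's pointing ray against the floor plane (y == 0)
    and return the teleport destination, or None if the ray points level/up
    or the hit point is out of range."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy >= 0:          # pointing level or upward: no floor hit
        return None
    t = -oy / dy         # distance along the ray to the floor
    hit = (ox + dx * t, 0.0, oz + dz * t)
    if (hit[0] - ox) ** 2 + (hit[2] - oz) ** 2 > max_distance ** 2:
        return None      # too far: reject to avoid disorienting jumps
    return hit
```

On a valid hit, the application instantly moves the user's rig to the returned point.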
Since users are often confused by the controller buttons when using VR glasses for the first time, they can accidentally move long distances, which will cause constant disorientation in space.
If you decide to implement a free movement system (the traditional way of controlling camera movement with the buttons on the controller), then provide an alternative option for the part of the audience whose vestibular system will not be ready for it.
With a free motion system, you can reduce the effects of motion sickness by structuring the scene or game level so that the user predominantly moves forward.
Controllers, hands, and display of interactive content
If the controller/hand touches an interactive object, you can illuminate it and indicate that it is already possible to take it to the next step.
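The touch-to-illuminate behavior above is essentially a proximity check between the hand and each object's grab sphere. A minimal sketch with assumed radii:

```python
import math

def update_highlight(hand_pos, objects, touch_radius=0.08):
    """Return the names of objects whose grab sphere the hand/controller
    is touching; those should be illuminated as 'ready to grab'.
    `objects` maps name -> (position, grab_radius)."""
    lit = set()
    for name, (pos, radius) in objects.items():
        if math.dist(hand_pos, pos) <= radius + touch_radius:
            lit.add(name)
    return lit
```

In a real project this would run every frame, turning the highlight material on for the returned set and off for everything else.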
The controller/hand model is usually hidden when users take an interactive object. It’s better not to visualize a half-immersed controller in the geometry of the object with which the user interacts.
If the interaction is done with hand models, then when a user picks up an object, you need to develop a unique set of wrist positions for different objects so that each one looks natural in the hand. Keep in mind that this requires time and budget, with a not entirely obvious advantage over the simpler option where the controller or hand disappears during pickup.
However, the situation may be somewhat different with Oculus Touch because of its ergonomics. According to Lifewire, Oculus Touch is the motion controller system used in the Oculus Rift, Rift S, and Quest virtual reality (VR) systems. Each Oculus Touch set consists of a pair of controllers, one for each hand. Together they operate like a single gamepad, allowing the Oculus Rift to provide full motion tracking of a player’s hands in VR space. The Oculus Quest and Quest 2 are also popular, reliable VR devices on the market.
Use the side buttons of the HTC Vive controller only when necessary, or better not at all, because they are rather unobvious controls in this system.
The fewer controller buttons used in the project, the better. With a zone-based grab method, you can interact with objects using only one button on the controller. This works well when you want to move objects within a limited area; in other cases, you need to develop separate handling, for example, for the area where an object is picked up and returned.
When there’s a separate area where the object is picked up via a trigger pull and replaces the controller model, subsequent trigger pulls launch that object’s own logic. This approach reduces the number of buttons used for similar types of action. Otherwise, you’ll have to dedicate one button to object pickup and another to use, which raises questions about other actions that also require buttons, for example, teleportation.
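The single-button scheme described above is a small state machine: what the trigger does depends on whether something is held and which zone the controller is in. A minimal sketch with hypothetical zone names:

```python
class TriggerInteraction:
    """One-button interaction: a trigger pull inside a pickup zone grabs
    the object (which replaces the controller model); subsequent pulls run
    the held object's own action; a pull inside a return zone puts it back."""

    def __init__(self):
        self.held = None

    def on_trigger(self, zone, obj=None):
        if self.held is None:
            if zone == "pickup" and obj is not None:
                self.held = obj
                return f"grabbed {obj}"
            return "nothing"
        if zone == "return":
            released, self.held = self.held, None
            return f"returned {released}"
        return f"used {self.held}"
```

The same trigger thus covers pickup, use, and return, freeing the remaining buttons for actions like teleportation.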
If you visualize the hands, it is better to fade their ends to transparency or use the “empty glove” effect: a glove that takes the shape of a hand but is empty inside if you look in from the cuff.
Don’t aim for a realistic visualization of hands, since it almost always looks unpleasant.
Hand transparency solves the ephemerality problem when the user can pass their hand through the scene’s geometry.
The current generation of VR motion controllers has one compelling advantage over classic input devices: a low entry threshold due to the simplicity of the controller’s design. And here we’re talking about natural gestures and body movements. A user, for example, can bite into a donut with their controller, which plays an animation of the character biting into the donut.
Because such actions mirror the real world, they are immediately clear, and they also become a more substantial part of the gameplay and the new user experience.
But there’s one interesting issue with such solutions: speed. In some cases, speed will be the decisive factor. In others, it will be the process of physical interaction carried over from the real world.
Users want to interact with anything within reach that looks like something they can pick up or click on. Thus, it’s essential to meet their expectations by considering this specificity when designing and filling the scene with objects.
For example, place only interactive objects within reach, don’t have static and interactive objects in the same area, etc. Another way to solve this issue is to set a clear visual indication for interactive objects (animated markers that appear on interactive objects at arm’s length).
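The "animated markers at arm's length" idea above amounts to filtering interactive objects by distance from the user each frame. A minimal sketch; the 0.75 m reach value is an assumption:

```python
def markers_to_show(head_pos, interactives, arm_reach=0.75):
    """Return the interactive objects close enough to the user (roughly
    arm's length) to deserve an animated 'you can grab me' marker.
    `interactives` maps name -> position."""
    out = []
    for name, pos in interactives.items():
        d2 = sum((a - b) ** 2 for a, b in zip(head_pos, pos))
        if d2 <= arm_reach ** 2:   # compare squared distances, no sqrt needed
            out.append(name)
    return sorted(out)
```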
Users can view different indicators of points of interest in areas they have not yet visited by pressing the teleport button.
How do you let users pick up objects from the floor without banging their real controllers on it? If an object has fallen to the floor, it should fly up slightly when the user reaches for it. To avoid disorienting the user, it is essential to add a clear visual indication of the object “levitating.”
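The "levitating" pickup above can be driven by how far the hand has progressed toward the object, with easing so the lift reads as smooth rather than a sudden jump. A minimal sketch; the 0.25 m lift is an assumed value:

```python
def levitate_height(reach_progress: float, lift: float = 0.25) -> float:
    """Height to raise a fallen object as the user's hand approaches it.
    `reach_progress` goes 0 -> 1 as the hand closes in; smoothstep easing
    keeps the levitation gentle so the user isn't disoriented."""
    p = max(0.0, min(1.0, reach_progress))
    eased = p * p * (3 - 2 * p)   # smoothstep easing curve
    return lift * eased
```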
Placement of control elements, Main Menu
The main menu is the first thing the user will see in the project. You can’t place it in a void on a black background. It is better to spend some time on the abstract environment, which is already better than the complete absence of any space.
But that’s not always enough, either. For example, menus can reflect the concept of the project and be a lobby before moving directly to the simulation. Games have long used it, but few services pay attention to it.
Link the interface to the stage area. Even if it’s the main menu bar hanging in the space, justify its existence in the environment. For example, it could be a projector displaying a hologram in the scene, a reflection of light from a hologram on elements in the background, etc.
When creating the user workspace for long-term interaction with interface elements, don’t place pieces for interaction with controllers at gaze level, since it’s very tiring for the hands and shoulders.
So divide the space into an information area at the upper body level and an interaction area at the level of the user’s stomach. In the first case, there will be all sorts of status screens, notifications, and input fields, and in the second — interactive blocks, keyboards, menu elements, etc.
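The two-zone split above can be expressed as a simple layout helper that derives the zone heights from the user's height. This is a rough sketch; the body-proportion factors are assumptions, not anthropometric standards:

```python
def workspace_layout(user_height: float) -> dict:
    """Split the workspace vertically: an information area around upper-body
    level (status screens, notifications, input fields) and an interaction
    area around stomach level (keyboards, menus, interactive blocks)."""
    return {
        "information_area_y": round(user_height * 0.88, 2),
        "interaction_area_y": round(user_height * 0.60, 2),
    }
```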
Avoid attaching interface elements to the user’s camera. So you can place them at a small distance in front of the camera and set the movement possibility behind it with some inertia.
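The "follow with some inertia" behavior above is commonly implemented as exponential smoothing toward an anchor point in front of the camera. A minimal per-frame sketch; the distance and smoothing values are assumptions:

```python
def follow_with_inertia(panel_pos, camera_pos, camera_forward,
                        distance=1.2, smoothing=0.1):
    """Move a UI panel one step toward its anchor point in front of the
    camera. `smoothing` < 1 gives the lag/inertia that avoids the
    'glued to the face' effect of hard camera attachment."""
    target = tuple(c + f * distance for c, f in zip(camera_pos, camera_forward))
    return tuple(p + (t - p) * smoothing for p, t in zip(panel_pos, target))
```

Called every frame, the panel trails the user's view smoothly instead of snapping with every head movement.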
It is not always appropriate to visualize buttons and other tools in the form of any natural objects in the space of a scene. There is nothing wrong with virtual panels if high-quality feedback is available to users: buttons respond to pointing and pressing (illuminated, a shift in depth, react to pressure). It’s convenient to place control elements around the controllers.
You can often find solutions by placing control elements on the wrist as a hologram or some panel similar to a smartwatch. Here it is important not to overload such a block of interface with functionality since it’s hard to keep your hand in balance to interact with the interactive elements. Nevertheless, this solution is excellent for notifications and simple sets of actions.
Avatars and network interaction
The first-person user sees only their hands. However, other users in the scene can see a fully rendered avatar of their interlocutor since the mismatch of real and virtual body movements is not as critical for the observer as it is for the owner of the virtual body.
When it comes to the stylistics of avatars for networking, it is better to avoid realistic images of people. Non-photorealistic visual styles are more appropriate here.
In addition to exaggerated visualization, developers often play with the faces and facial expressions of user avatars in various ways, for example hiding them behind goggles, masks, and spacesuit helmets. As a way to save resources and compensate for the inability to track facial expressions, this is completely justified.
Even without mouth animation, lip-syncing or its equivalent should be available for avatars that communicate through speech. One option is to add a glow or an additional UI indicator while the user is speaking.
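The speaking indicator above can be driven by the loudness of the incoming voice-chat audio. A minimal sketch, assuming audio frames arrive as lists of samples in the range -1..1 and using an arbitrary threshold:

```python
def speaking_indicator(samples, threshold=0.05):
    """Cheap lip-sync substitute: light an avatar's 'speaking' indicator
    when the RMS level of a voice-chat audio frame crosses a threshold."""
    if not samples:
        return False
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    return rms >= threshold
```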
We could say that the success of VR design depends entirely on designers. Humanity has reached a turning point: VR design is becoming less and less dependent on technical constraints and more and more shaped by the designers’ imagination. It is a new frontier that opens up limitless possibilities. So use the fundamentals and recommendations described in this article to handle your work tasks more effectively.
Visit RRSlide to download free PowerPoint templates, and stay here with the RRGraph Design Blog to keep up to date on the best pitch deck template collections and design advice from our PowerPoint experts.
Christine Tomas is a tech expert, consultant, and aspiring writer. She writes for different news portals and thematic blogs that help her stay at the heart of the programming and technology news. Such work gives her the opportunity to write articles on the most relevant topics today.