Mitigating Cybersickness in Virtual Reality Systems through Foveated Depth-of-Field Blur

Sensors, 2021, Vol 21 (12), pp. 4006
Author(s):  
Razeen Hussain ◽  
Manuela Chessa ◽  
Fabio Solari

Cybersickness is one of the major roadblocks to the widespread adoption of mixed reality devices. Prolonged exposure to these devices, especially virtual reality devices, can cause users to feel discomfort and nausea, spoiling the immersive experience. Incorporating spatial blur in stereoscopic 3D stimuli has been shown to reduce cybersickness. In this paper, we develop a technique to incorporate spatial blur in VR systems inspired by the human physiological system. The technique makes use of concepts from foveated imaging and depth-of-field. The developed technique can be applied to any eye-tracker-equipped VR system as a post-processing step to provide an artifact-free scene. We verify the usefulness of the proposed system by conducting a user study on cybersickness evaluation. We used a custom-built rollercoaster VR environment developed in Unity and an HTC Vive Pro Eye headset. A Simulator Sickness Questionnaire was used to measure the induced sickness while gaze and heart rate data were recorded for quantitative analysis. The experimental analysis highlighted the aptness of our foveated depth-of-field effect in reducing cybersickness in virtual environments, lowering sickness scores by approximately 66%.
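The abstract describes combining foveated imaging (blur increasing with distance from the gaze point) with depth-of-field (blur increasing with distance from the fixated depth) as a post-processing blend. A minimal sketch of that idea follows; the function names, parameters, and the simple linear blend are illustrative assumptions, not the authors' published implementation:

```python
import numpy as np

def foveated_dof_weights(h, w, gaze, depth, focal_depth,
                         ecc_scale=0.004, dof_scale=0.5):
    """Per-pixel blend weight in [0, 1] between a sharp frame and a
    pre-blurred copy: 0 = fully sharp (at the gaze point, in focus),
    1 = fully blurred (far periphery or far from the fixated depth)."""
    ys, xs = np.mgrid[0:h, 0:w]
    # Foveation term: blur grows with eccentricity from the gaze point.
    ecc = np.hypot(xs - gaze[0], ys - gaze[1]) * ecc_scale
    # Depth-of-field term: blur grows as depth departs from the fixated depth.
    dof = np.abs(depth - focal_depth) * dof_scale
    return np.clip(ecc + dof, 0.0, 1.0)

def apply_foveated_blur(sharp, blurred, weights):
    """Linearly blend the sharp and pre-blurred frames with the weight map."""
    w = weights[..., None]  # broadcast the weight map over colour channels
    return (1.0 - w) * sharp + w * blurred
```

In a real pipeline the gaze point and fixated depth would come from the headset's eye tracker each frame, and the blend would run in a fragment shader rather than on the CPU.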

Author(s):  
Randall Spain ◽  
Benjamin Goldberg ◽  
Jeffrey Hansberger ◽  
Tami Griffith ◽  
Jeremy Flynn ◽  
...  

Recent advances in technology have made virtual environments, virtual reality, augmented reality, and simulations more affordable and accessible to researchers, companies, and the general public, which has led to many novel use cases and applications. A key objective of human factors research and practice is determining how these technology-rich applications can be designed and applied to improve human performance across a variety of contexts. This session will demonstrate some of the distinct and diverse uses of virtual environments and mixed reality environments in an alternative format. The session will begin with each demonstrator providing a brief overview of their virtual environment (VE) and a description of how it has been used to address a particular problem or research need. Following the description portion of the session, each VE will be set up at a demonstration station in the room, and session attendees will be encouraged to directly interact with the virtual environment, ask demonstrators questions about their research, and inquire about the effectiveness of using VE for research, training, and evaluation purposes. The overall objective of this alternative session is to increase awareness of how human factors professionals use VE technologies and of the capabilities and limitations of VE in supporting the work of HF professionals.


Author(s):  
Brandon J. Newendorp ◽  
Christian Noon ◽  
Joe Holub ◽  
Eliot H. Winer ◽  
Stephen Gilbert ◽  
...  

In order to adapt to an ever-changing set of threats, military forces need to find new methods of training. The prevalence of commercial game engines combined with virtual reality (VR) and mixed reality environments can prove beneficial to training. Live, virtual and constructive (LVC) training combines live people, virtual environments and simulated actors to create a better training environment. However, integrating virtual reality displays, software simulations and artificial weapons into a mixed reality environment poses numerous challenges. A mixed reality environment known as The Veldt was constructed to research these challenges. The Veldt consists of numerous independent displays, along with movable walls, doors and windows. This allows The Veldt to simulate numerous training scenarios. Several challenges were encountered in creating this system. Displays were precisely located using the tracking system, then configured using VR Juggler. The ideal viewpoint for each display was configured based on the expected location from which users would view it. Finally, the displays were accurately aligned to the virtual terrain model. This paper describes how the displays were configured in The Veldt, as well as how it was used for two training scenarios.


Author(s):  
Rongkai Shi ◽  
Hai-Ning Liang ◽  
Yu Wu ◽  
Difeng Yu ◽  
Wenge Xu

Using virtual reality (VR) head-mounted displays (HMDs) can induce VR sickness. VR sickness can cause strong discomfort, decrease users' presence and enjoyment, especially in games, shorten the duration of the VR experience, and can even pose health risks. Previous research has explored different VR sickness mitigation methods by adding visual effects or elements. Field of View (FOV) reduction, Depth of Field (DOF) blurring, and adding a rest frame into the virtual environment are examples of such methods. Although useful in some cases, they might result in information loss. This research is the first to compare VR sickness, presence, workload to complete a search task, and information loss of these three VR sickness mitigation methods in a racing game with two levels of control. To do this, we conducted a mixed factorial user study (N = 32) with degree of control as the between-subjects factor and the VR sickness mitigation techniques as the within-subjects factor. Participants were required to find targets with three difficulty levels while steering or not steering a car in a virtual environment. Our results show that there are no significant differences in VR sickness, presence and workload among these techniques under two levels of control in our VR racing game. We also found that changing FOV dynamically or using DOF blur effects would result in information loss while adding a target reticule as a rest frame would not.
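Of the three mitigation methods this abstract compares, dynamic Field of View reduction is the most mechanically simple: a vignette narrows as locomotion speed rises, trading peripheral information for comfort. A brief sketch of that mapping follows; the parameter names and the linear speed-to-FOV rule are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def vignette_mask(h, w, fov_fraction):
    """Radial mask keeping pixels inside a circle whose radius is
    `fov_fraction` of the half-diagonal; shrinking it narrows the FOV."""
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(xs - w / 2, ys - h / 2)
    r_max = np.hypot(w / 2, h / 2)
    return (r <= fov_fraction * r_max).astype(np.float32)

def dynamic_fov(speed, min_fov=0.4, max_fov=1.0, sensitivity=0.1):
    """Map locomotion speed to an FOV fraction: faster motion, narrower view.
    Clamped so the view never shrinks below `min_fov`."""
    return float(np.clip(max_fov - sensitivity * speed, min_fov, max_fov))
```

The study's finding that dynamic FOV changes cause information loss follows directly from this construction: any target falling outside the vignette while the user moves is simply not rendered.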


i-com, 2020, Vol 19 (2), pp. 87-101
Author(s):  
Robin Horst ◽  
Fabio Klonowski ◽  
Linda Rau ◽  
Ralf Dörner

Asymmetric Virtual Reality (VR) applications are a substantial subclass of multi-user VR that does not offer all participants the same interaction possibilities with the virtual scene. While one user might be immersed using a VR head-mounted display (HMD), another user might experience the VR through a common desktop PC. In an educational scenario, for example, learners can use immersive VR technology to inform themselves at different exhibits within a virtual scene. Educators can use a desktop PC setup for following and guiding learners through virtual exhibits while still being able to pay attention to safety aspects in the real world (e.g., preventing learners from bumping into a wall). In such scenarios, educators must ensure that learners have explored the entire scene and have been informed about all virtual exhibits in it. Appropriate visualization techniques can support educators and facilitate conducting such VR-enhanced lessons. One common technique is to render the view of the learners on the 2D screen available to the educators. We refer to this solution as the shared view paradigm. However, this straightforward visualization involves challenges. For example, educators have no control over the scene, and collaboration in the learning scenario can be tedious. In this paper, we differentiate between two classes of visualizations that can help educators in asymmetric VR setups. First, we investigate five techniques that visualize the view direction or field of view of users (view visualizations) within virtual environments. Second, we propose three techniques that can support educators in understanding what parts of the scene learners have already explored (exploration visualizations). In a user study, we show that our participants preferred a volume-based rendering and a view-in-view overlay solution for view visualizations. Furthermore, we show that our participants tended to use combinations of different view visualizations.


Author(s):  
Randall Spain ◽  
Benjamin Goldberg ◽  
Pete Khooshabeh ◽  
David Krum ◽  
Joshua Biro ◽  
...  

Virtual reality, augmented reality, and other forms of virtual environments have the potential to dramatically change how individuals work, learn, and interact with each other. A key objective of human factors research and practice is to determine how these environments should be designed to maximize performance efficiency, ensure health and safety, and circumvent potential human virtual environment interaction problems. This session will demonstrate some of the distinct and diverse uses of virtual reality, mixed reality, and virtual environments in an alternative format. The session will begin with each demonstrator providing a brief overview of their virtual environment and describing how it has been used to address a particular problem or research need. Following the description portion of the session, all demonstrations will be set-up around the room, and session attendees will be encouraged to directly interact with the environment and ask demonstrators questions about their research and inquire about the effectiveness of using their virtual environment for research, training, and evaluation purposes. The overall objective of this alternative session is to provoke ideas among the attendees for how virtual reality, mixed reality, and virtual environments can help address their research, training, education or business needs.


Author(s):  
S Leinster-Evans ◽  
J Newell ◽  
S Luck

This paper looks to expand on the INEC 2016 paper ‘The future role of virtual reality within warship support solutions for the Queen Elizabeth Class aircraft carriers’ presented by Ross Basketter, Craig Birchmore and Abbi Fisher from BAE Systems in May 2016 and the EAAW VII paper ‘Testing the boundaries of virtual reality within ship support’ presented by John Newell from BAE Systems and Simon Luck from BMT DSL in June 2017. BAE Systems and BMT have developed a 3D walkthrough training system that supports the teams working closely with the QEC Aircraft Carriers in Portsmouth, and this work was presented at EAAW VII. Since then the work has been extended to demonstrate the art of the possible on Type 26. This latter piece of work is designed to explore the role of 3D immersive environments in the development and fielding of support and training solutions, across the range of support disciplines. The combined team is looking at how this digital thread leads from the design of platforms, both surface and subsurface, through build into in-service support and training. The paper proposes ways in which this rich data could be used across the whole lifecycle of the ship, from design and development (used for spatial acceptance, HazID, etc.) through to operational support and maintenance (in conjunction with big data coming off the ship, coupled with digital tech docs for maintenance procedures), using constantly developing technologies such as 3D, Virtual Reality, Augmented Reality and Mixed Reality. The drive towards gamification in the training environment to keep younger recruits interested and to shorten course lengths will be explored. The paper develops the options and looks at how this technology can be used and where the value proposition lies.


Author(s):  
Robin Horst ◽  
Ramtin Naraghi-Taghi-Off ◽  
Linda Rau ◽  
Ralf Dörner

Every Virtual Reality (VR) experience has to end at some point. While there already exist concepts to design transitions for users entering a virtual world, their return to the physical world should be considered as well, as it is part of the overall VR experience. We call the latter outro-transitions. In contrast to the offboarding of VR experiences, which takes place after taking off VR hardware (e.g., HMDs), outro-transitions are still part of the immersive experience. Such transitions occur more frequently when VR is experienced periodically and for only short times. One example where transition techniques are necessary is an auditorium where the audience has individual VR headsets available, for example, in a presentation using PowerPoint slides together with brief VR experiences sprinkled between the slides. The audience must put on and take off HMDs frequently every time they switch from common presentation media to VR and back. In such a one-to-many VR scenario, it is challenging for presenters to manage multiple people coming back from the virtual to the physical world at once. Direct communication may be constrained while VR users are wearing an HMD. Presenters need a tool to signal VR users to stop the VR session and switch back to the slide presentation. Virtual visual cues can help presenters or other external entities (e.g., automated/scripted events) request that VR users end a VR session. Such transitions become part of the overall experience of the audience and thus must be considered. This paper explores visual cues as outro-transitions from a virtual world back to the physical world and their utility in enabling presenters to request that VR users end a VR session. We propose and investigate eight transition techniques. We focus on their usage in short consecutive VR experiences and include both established and novel techniques.
The transition techniques are evaluated within a user study to draw conclusions on the effects of outro-transitions on the overall experience and presence of participants. We also take into account how long an outro-transition may take and how comfortable our participants perceived the proposed techniques to be. The study points out that participants preferred non-interactive outro-transitions over interactive ones, except for a transition that allowed VR users to communicate with presenters. Furthermore, we explore the presenter-VR user relation within a presentation scenario that uses short VR experiences. The study indicates that involving presenters who can stop a VR session was not only acceptable but preferred by our participants.


2020, Vol 11 (1), pp. 99-106
Author(s):  
Marián Hudák ◽  
Štefan Korečko ◽  
Branislav Sobota

Recent advances in the field of web technologies, including the increasing support of virtual reality hardware, have allowed for shared virtual environments, reachable by just entering a URL in a browser. One contemporary solution that provides such a shared virtual reality is LIRKIS Global Collaborative Virtual Environments (LIRKIS G-CVE). It is a web-based software system, built on top of the A-Frame and Networked-Aframe frameworks. This paper describes LIRKIS G-CVE and introduces its two original components. The first one is the Smart-Client Interface, which turns smart devices, such as smartphones and tablets, into input devices. The advantage of this component over the standard way of user input is demonstrated by a series of experiments. The second component is the Enhanced Client Access layer, which provides access to the positions and orientations of clients that share a virtual environment. The layer also stores a history of connected clients and provides limited control over the clients. The paper also outlines an ongoing experiment aimed at evaluating LIRKIS G-CVE in the area of virtual prototype testing.
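The Enhanced Client Access layer is described as a registry of client poses plus a connection history. A minimal language-agnostic sketch of that idea follows (the class and method names are hypothetical; the actual layer is JavaScript running on top of Networked-Aframe, not this Python code):

```python
import time
from collections import defaultdict

class ClientAccessLayer:
    """Sketch of an 'Enhanced Client Access'-style registry: keeps the
    latest pose per connected client and a history of connection times."""

    def __init__(self):
        self.poses = {}                   # client_id -> (position, rotation)
        self.history = defaultdict(list)  # client_id -> [connect timestamps]

    def connect(self, client_id):
        """Record that a client joined the shared environment."""
        self.history[client_id].append(time.time())

    def update_pose(self, client_id, position, rotation):
        """Store the client's latest position and orientation."""
        self.poses[client_id] = (position, rotation)

    def pose_of(self, client_id):
        """Return the last known pose, or None for unknown clients."""
        return self.poses.get(client_id)
```

In the web-based system, pose updates would arrive over the Networked-Aframe synchronization channel rather than through direct method calls.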

