A vision-based head tracker for fish tank virtual reality - VR without head gear

Author(s):  
J. Rekimoto
Author(s):  
Denis Bienroth ◽  
Hieu T. Nim ◽  
Dimitar Garkov ◽  
Karsten Klein ◽  
Sabrina Jaeger-Honz ◽  
...  

Spatially resolved transcriptomics is an emerging class of high-throughput technologies that enable biologists to systematically investigate gene expression together with spatial information. Upon data acquisition, one major hurdle is the subsequent interpretation and visualization of the acquired datasets. To address this challenge, VR-Cardiomics is presented: a novel data visualization system with interactive functionalities designed to help biologists interpret spatially resolved transcriptomic datasets. By implementing the system in two separate immersive environments, fish tank virtual reality (FTVR) and head-mounted display virtual reality (HMD-VR), biologists can interact with the data in novel ways not previously possible, such as visually exploring the gene expression patterns of an organ and comparing genes based on their 3D expression profiles. Further, a biologist-driven use case is presented in which the immersive environments help biologists explore and compare the heart expression profiles of different genes.
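As a rough illustration of comparing genes by their 3D expression profiles, one could sample each gene's expression on a common grid of organ positions and score similarity between the resulting vectors, for example with cosine similarity. The metric and all names below are illustrative assumptions; the abstract does not state how VR-Cardiomics performs the comparison.

```python
import math

def expression_similarity(profile_a, profile_b):
    """Cosine similarity between two genes' spatial expression profiles.

    Each profile is a flat list of expression values sampled at the
    same 3D grid of organ positions. This metric is an assumption for
    illustration, not the method used by VR-Cardiomics.
    """
    dot = sum(a * b for a, b in zip(profile_a, profile_b))
    norm_a = math.sqrt(sum(a * a for a in profile_a))
    norm_b = math.sqrt(sum(b * b for b in profile_b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0  # a gene with no detected expression matches nothing
    return dot / (norm_a * norm_b)
```

Identical spatial patterns score 1.0; non-overlapping patterns score 0.0, regardless of absolute expression level.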


2019 ◽  
Vol 6 ◽  
pp. 205566831984130
Author(s):  
Nahal Norouzi ◽  
Luke Bölling ◽  
Gerd Bruder ◽  
Greg Welch

Introduction: A large body of research in the field of virtual reality is focused on making user interfaces more natural and intuitive by leveraging natural body movements to explore a virtual environment. For example, head-tracked user interfaces allow users to naturally look around a virtual space by moving their head. However, such approaches may not be appropriate for users with temporary or permanent limitations of their head movement. Methods: In this paper, we present techniques that allow these users to gain the benefits of natural virtual rotations from a reduced range of physical movement. Specifically, we describe two techniques that augment virtual rotations relative to physical movement thresholds. Results: We describe how each of the two techniques can be implemented with either a head tracker or an eye tracker, e.g., in cases where no physical head rotations are possible. Conclusions: We discuss their differences and limitations, and provide guidelines for the practical use of such augmented user interfaces.
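One way to realize such threshold-based rotation augmentation is a piecewise mapping from physical to virtual yaw: one-to-one inside a comfort threshold, amplified beyond it. The threshold, gain, and function name below are illustrative assumptions, not the paper's actual parameters or technique.

```python
import math

def augmented_yaw(physical_yaw_deg, threshold_deg=15.0, gain=3.0):
    """Map a user's physical head yaw to an amplified virtual yaw.

    Within +/- threshold_deg the mapping is one-to-one; beyond the
    threshold, additional physical rotation is multiplied by `gain`,
    so a limited physical range can cover a larger virtual range.
    All constants here are illustrative, not taken from the paper.
    """
    sign = math.copysign(1.0, physical_yaw_deg)
    magnitude = abs(physical_yaw_deg)
    if magnitude <= threshold_deg:
        return physical_yaw_deg
    return sign * (threshold_deg + (magnitude - threshold_deg) * gain)
```

With these defaults, turning the head 20 degrees yields a 30-degree virtual rotation (15 one-to-one, plus 5 amplified by 3); the same mapping could be driven by gaze angle from an eye tracker instead of head yaw.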


2019 ◽  
Vol 25 (5) ◽  
pp. 2040-2049 ◽  
Author(s):  
Qian Zhou ◽  
Georg Hagemann ◽  
Dylan Fafard ◽  
Ian Stavness ◽  
Sidney Fels

Author(s):  
Frank Steinicke ◽  
Timo Ropinski ◽  
Gerd Bruder ◽  
Klaus Hinrichs

1995 ◽  
Vol 4 (1) ◽  
pp. 1-23 ◽  
Author(s):  
Warren Robinett ◽  
Richard Holloway

The visual display transformation for virtual reality (VR) systems is typically much more complex than the standard viewing transformation discussed in the literature for conventional computer graphics. The process can be represented as a series of transformations, some of which contain parameters that must match the physical configuration of the system hardware and the user's body. Because of the number and complexity of the transformations, a systematic approach and a thorough understanding of the mathematical models involved are essential. This paper presents a complete model for the visual display transformation for a VR system; that is, the series of transformations used to map points from object coordinates to screen coordinates. Virtual objects are typically defined in an object-centered coordinate system (CS), but must be displayed using the screen-centered CSs of the two screens of a head-mounted display (HMD). This particular algorithm for the VR display computation allows multiple users to independently change position, orientation, and scale within the virtual world, allows users to pick up and move virtual objects, uses the measurements from a head tracker to immerse the user in the virtual world, provides an adjustable eye separation for generating two stereoscopic images, uses the off-center perspective projection required by many HMDs, and compensates for the optical distortion introduced by the lenses in an HMD. The implementation of this framework as the core of the UNC VR software is described, and the values of the UNC display parameters are given. We also introduce the vector-quaternion-scalar (VQS) representation for transformations between 3D coordinate systems, which is specifically tailored to the needs of a VR system. The transformations and CSs presented comprise a complete framework for generating the computer-graphic imagery required in a typical VR system. 
The model presented here is deliberately abstract in order to be general purpose; thus, issues of system design and visual perception are not addressed. While the mathematical techniques involved are already well known, there are enough parameters and pitfalls that a detailed description of the entire process should be a useful tool for someone interested in implementing a VR system.
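The VQS representation described above can be sketched as a small class holding a translation vector v, a unit rotation quaternion q, and a uniform scale s, applied to a point as p' = s * rotate(q, p) + v. The (w, x, y, z) quaternion layout and right-to-left composition order are conventions assumed for illustration; the paper defines its own.

```python
import math

class VQS:
    """Vector-quaternion-scalar transform: p' = s * rotate(q, p) + v.

    A sketch of the VQS representation named in the abstract; the API
    and conventions (w-first quaternion, right-to-left composition)
    are illustrative assumptions, not the paper's definition.
    """
    def __init__(self, v=(0.0, 0.0, 0.0), q=(1.0, 0.0, 0.0, 0.0), s=1.0):
        self.v, self.q, self.s = v, q, s

    @staticmethod
    def _qmul(a, b):
        # Hamilton product of two quaternions (w, x, y, z).
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    @staticmethod
    def _rotate(q, p):
        # Rotate point p by unit quaternion q: q * (0, p) * conj(q).
        w, x, y, z = q
        qp = VQS._qmul(q, (0.0, *p))
        r = VQS._qmul(qp, (w, -x, -y, -z))
        return r[1:]

    def apply(self, p):
        rp = self._rotate(self.q, p)
        return tuple(self.s * c + o for c, o in zip(rp, self.v))

    def compose(self, other):
        # self.compose(other): apply `other` first, then `self`, so the
        # combined translation is `other`'s origin pushed through `self`.
        return VQS(self.apply(other.v),
                   self._qmul(self.q, other.q),
                   self.s * other.s)
```

Composing two VQS transforms yields another VQS transform, which is why a chain of coordinate-system mappings (object to world to head to eye) stays in the same compact form.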

