This talk examines how humans adapt to novel physical situations with unknown gravitational acceleration in immersive virtual environments. We designed four virtual reality experiments with different tasks for participants to complete: strike a ball to hit a target, trigger a ball to hit a target, predict the landing location of a projectile, and estimate the flight duration of a projectile. The first two experiments compared human behavior in the virtual environment with real-world performance reported in the literature. The last two experiments tested the human ability to adapt to novel gravity fields by measuring performance in trajectory prediction and time estimation tasks. The experimental results show that: 1) based on brief observation of a projectile’s initial trajectory, humans accurately predict the landing location even under novel gravity fields, and 2) humans’ time estimates in a familiar Earth-gravity environment fluctuate around the ground-truth flight duration, whereas their estimates under unknown gravity fields show a bias toward Earth’s gravity.
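As background for the prediction tasks, the following is a minimal sketch of the underlying projectile kinematics (the launch values are hypothetical, not the study's actual stimuli), showing how both flight duration and landing location depend on the gravitational acceleration g:

```python
# Minimal projectile-kinematics sketch (hypothetical values, not the study's stimuli).
# Illustrates how flight duration and landing location shift with gravity g.

import math

def flight_time(v0, angle_deg, g, h0=0.0):
    """Time until the projectile returns to ground level from launch height h0."""
    vy = v0 * math.sin(math.radians(angle_deg))
    # Solve h0 + vy*t - 0.5*g*t^2 = 0 for the positive root.
    return (vy + math.sqrt(vy ** 2 + 2.0 * g * h0)) / g

def landing_distance(v0, angle_deg, g, h0=0.0):
    """Horizontal distance traveled before landing."""
    vx = v0 * math.cos(math.radians(angle_deg))
    return vx * flight_time(v0, angle_deg, g, h0)

if __name__ == "__main__":
    v0, angle = 8.0, 45.0           # launch speed (m/s) and elevation angle (deg)
    for g in (9.81, 4.9, 19.6):     # Earth gravity vs. weaker/stronger fields
        t = flight_time(v0, angle, g)
        d = landing_distance(v0, angle, g)
        print(f"g = {g:5.2f} m/s^2  ->  flight time {t:.2f} s, landing at {d:.2f} m")
```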
As the focus of virtual reality technology shifts from single-person experiences to multi-user interactions, it becomes increasingly important to accommodate multiple co-located users within a shared real-world space. For locomotion and navigation, having multiple users moving both virtually and physically creates additional challenges related to potential user-to-user collisions. In this work, we define the extent of these challenges in order to apply redirected walking to two users immersed in virtual reality experiences within a shared physical tracked space. Using a computer simulation framework, we explore the costs and benefits of splitting the available physical space between users versus algorithmically preventing user-to-user collisions. We also examine fundamental components of collision prevention, such as steering the users away from each other, forced stopping, and user re-orientation, analyzing each component for the number of potential disruptions to the flow of the virtual experience. Finally, we develop a novel collision prevention algorithm that reduces overall interruptions by 17.6% and collision prevention events by 58.3%. Our results show that sharing space using our collision prevention method is superior to subdividing the tracked space.
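As a rough illustration of the kind of check such a simulation framework might perform, here is a hedged sketch of a user-to-user collision test based on predicted closest approach; the function names, thresholds, and response policy are hypothetical and are not the paper's algorithm:

```python
# Illustrative sketch of a user-to-user collision check in a shared tracked space.
# NOT the paper's algorithm; thresholds and the response policy are hypothetical.

import math

SAFETY_RADIUS = 1.0   # metres of clearance to keep between two users
LOOKAHEAD = 2.0       # seconds of predicted motion to consider

def time_to_closest_approach(p1, v1, p2, v2):
    """Time at which two users moving with constant velocity are closest."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]
    speed_sq = vx * vx + vy * vy
    if speed_sq < 1e-9:
        return 0.0
    return max(0.0, -(rx * vx + ry * vy) / speed_sq)

def collision_imminent(p1, v1, p2, v2):
    """True if the predicted separation drops below the safety radius soon,
    in which case a prevention event (steer, stop, or re-orient) would fire."""
    t = min(time_to_closest_approach(p1, v1, p2, v2), LOOKAHEAD)
    dx = (p2[0] + v2[0] * t) - (p1[0] + v1[0] * t)
    dy = (p2[1] + v2[1] * t) - (p1[1] + v1[1] * t)
    return math.hypot(dx, dy) < SAFETY_RADIUS

if __name__ == "__main__":
    # Two users walking toward each other along the x axis -> imminent collision.
    print(collision_imminent((0.0, 0.0), (1.0, 0.0), (3.0, 0.0), (-1.0, 0.0)))  # True
    # Two users walking in parallel with 5 m separation -> no intervention needed.
    print(collision_imminent((0.0, 0.0), (1.0, 0.0), (0.0, 5.0), (1.0, 0.0)))   # False
```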
In the past few years, advances have been made in using mixed reality humans (MRHs) for interpersonal communication skills training for medical teams; however, little research has examined how MRHs can influence communication skills during training. One way to influence communication skills is to leverage MRHs as models of communication behavior. We have created multiple mixed reality medical team training exercises designed to impact communication behaviors that are critical for patient safety, and we found that MRHs can influence healthcare providers' behavior. This talk will discuss the results from these studies as well as the impact this work has on expanding the boundaries of how VR can be used.
In this presentation, Dario will share what he looks for when clients come to him to explore VR as a road to reaching an audience. The process can be exciting and confusing at the same time for all parties involved, from clients to production studios. He'll share examples of Zero Code’s work and their approach, and where he sees VR going within the advertising world.
We have developed a pipeline capable of semi-automatically generating a digital version of a specific person in 20 minutes, with no artistic or technical intervention. This 3D construct includes an animatable face, body, and fingers. Numerous commercial and research applications would benefit from being able to simulate a specific (or recognizable) person in a 3D environment, including social VR, virtual try-on, and digital communication. We discuss the development of this pipeline, its future directions, and its applications.
Across graphics, audio, video, and physics, the NVIDIA VRWorks suite of technologies helps developers maximize performance and immersion for VR applications. We'll explore the latest features of VRWorks, explain the VR-specific challenges they address, and provide application-level tips and tricks to take full advantage of these features. Special focus will be given to the details and inner workings of our latest VRWorks feature, Lens Matched Shading, along with the latest VRWorks integrations into Unreal Engine and Unity.
I have been working in VR and interactive 3D for a long time. I have had the pleasure of knowing and working with many of the people that are directly responsible for creating the magic in the world that we live in today. These are the people that started with a virtual blank page and created their own reality. Their vision defined a vector into their future that we have had the privilege of extending into ours. Knowing where this vector started gives us an incredible perspective on where it is today and where it is going. I will describe my personal journey along this vector over the last 35 years, demonstrate a few things I am working on today, and speculate about where this vector into the future may take us.
Focus depth cues, a wide field of view, and ever-higher resolutions all present major hardware design challenges for near-eye displays. Optimizing a design to overcome one of these challenges typically leads to trade-offs in the others. We tackle this problem by introducing an all-in-one solution: a new wide-field-of-view, gaze-tracked near-eye display for augmented reality applications. The key component of our solution is a single see-through, varifocal, deformable membrane mirror for each eye, reflecting a display. The membranes are controlled by airtight cavities that change their effective focal power, presenting a virtual image at a target depth plane determined by the gaze tracker. The benefits of using the membranes include a wide field of view (100° diagonal) and fast depth switching (from 20 cm to infinity within 20 ms). Our subjective experiment verifies the prototype and demonstrates its potential benefits for see-through near-eye displays.
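As a back-of-the-envelope illustration of the depth-switching claim (not the paper's optical model), the focal power needed to place a virtual image at a fixation depth of d metres is roughly 1/d diopters, so sweeping from 20 cm to optical infinity spans about 5 diopters:

```python
# Rough sketch (not the paper's optical model): the focal power needed to place
# a virtual image at a fixation depth of d metres is approximately 1/d diopters.

def required_power_diopters(fixation_depth_m):
    """Focal power (diopters) to present the virtual image at the gazed depth."""
    if fixation_depth_m == float("inf"):
        return 0.0
    return 1.0 / fixation_depth_m

# Switching from the nearest depth plane (0.2 m) to optical infinity therefore
# spans about 5 diopters, which the membrane covers within roughly 20 ms.
print(required_power_diopters(0.2))           # 5.0 D
print(required_power_diopters(float("inf")))  # 0.0 D
```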
The use of first-person self-avatars in immersive virtual environments (VEs) has grown in recent years. It is unknown, however, how visual feedback from a self-avatar influences a user's online actions and subsequent calibration of actions within an immersive VE. The current work uses a prism-adaptation throwing paradigm to test the role of a self-avatar arm or full body on action calibration in a VE. Participants' throwing accuracy to a target on the ground was measured first in a normal viewing environment, then with the visual field rotated clockwise about their vertical axis by 17 degrees (prism simulation), and then again in the normal viewing environment with the prism distortion removed. Participants experienced either no avatar, a first-person avatar arm and hand, or a first-person full-body avatar during the entire experimental session, in a between-subjects manipulation. Results showed similar throwing error and adaptation during prism exposure in all conditions, but a reduced aftereffect (displacement relative to the target in the direction opposite the prism exposure) when the avatar arm or full body was present. The results are discussed in the context of how an avatar can provide a visual frame of reference to aid action calibration.
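For intuition about the prism simulation, here is a hypothetical sketch (not the study's code) of how a fixed 17-degree yaw offset could be applied to the rendered viewing direction:

```python
# Hypothetical sketch of a "virtual prism": rotate the rendered viewing direction
# by a fixed yaw offset about the vertical axis. Not the study's implementation.

import math

PRISM_YAW_DEG = 17.0  # clockwise rotation of the visual field, as in the experiment

def apply_prism(view_dir_xz, yaw_deg=PRISM_YAW_DEG):
    """Rotate a 2D (x, z) viewing direction by yaw_deg degrees (clockwise from above)."""
    x, z = view_dir_xz
    a = math.radians(-yaw_deg)   # negative angle = clockwise in this convention
    return (x * math.cos(a) - z * math.sin(a),
            x * math.sin(a) + z * math.cos(a))

if __name__ == "__main__":
    # A throw aimed straight ahead (+z) appears rotated under the distortion,
    # so participants must recalibrate their throwing direction to hit the target.
    print(apply_prism((0.0, 1.0)))
```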
VR and AR hold enormous promise as paradigm-shifting ubiquitous technologies. The investment in these technologies by leading IT companies, as well as the buy-in and general excitement from outside investors, technologists, and content producers, has never been more palpable. There are good reasons to be excited about the field. The real question is whether these technologies can add enough value to people’s lives to establish themselves as more than niche products. My path in this presentation leads from a personal estimation of what matters for the adoption of new technologies to important innovations we have witnessed on the road to anywhere/anytime use of immersive technologies. In recent years, one track of research in my lab has been concerned with simulating possible future capabilities in AR. With the goal of conducting controlled user studies that evaluate technologies that are not yet possible (such as a truly wide-field-of-view augmented reality display), we turn to high-end VR to simulate, predict, and assess these possible futures. In the far future, when technological hurdles such as real-time reconstruction of photorealistic environment models are removed, VR and AR naturally converge. Until then, we have a very interesting playing field full of technological constraints to have fun with.
A conversation on how the most important screen in this new virtual world is the 2D one, and how it brings experiences together with audiences using fundamental techniques and platforms.