Participants will come away with hands-on experience operating the Matterport 3D Vision System with both the Matterport 3D Pro and the Tango-enabled Lenovo Phab 2 Pro running Matterport Scenes, and creating 3D & VR content (shooting on location at VRLA). They will work in groups for 3D scanning practice. They will then be guided through the Matterport Workshop (processing time will not allow the models they just created to be edited), and see how VR Editor mode works, as well as the measurement tool and OBJ file export. We will also briefly touch on advanced techniques, such as importing OBJ files into Unity and the HTC Vive, and the Matterport VR SDK. The day after the session, participants will be emailed links to the models they created, which they can continue to edit in Workshop or share with friends.
You can sign into your Streampoint account here and skip to step two of registration to access the workshops.
This talk examines how humans adapt to novel physical situations with unknown gravitational acceleration in immersive virtual environments. We designed four virtual reality experiments with different tasks for participants to complete: strike a ball to hit a target, trigger a ball to hit a target, predict the landing location of a projectile, and estimate the flight duration of a projectile. The first two experiments compared human behavior in the virtual environment with real-world performance reported in the literature. The last two experiments aimed to test the human ability to adapt to novel gravity fields by measuring their performance in trajectory prediction and time estimation tasks. The experimental results show that: 1) based on brief observation of a projectile’s initial trajectory, humans are accurate at predicting the landing location even under novel gravity fields, and 2) humans’ time estimation in a familiar earth environment fluctuates around the ground-truth flight duration, whereas time estimation under unknown gravity fields shows a bias toward earth’s gravity.
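The physics underlying the prediction and timing tasks is standard projectile motion with the gravitational acceleration g as the only free parameter. A minimal sketch (function names and values are illustrative, not taken from the study):

```python
import math

def flight_time(v0y, h0, g):
    """Time for a projectile launched upward at speed v0y from
    height h0 to reach the ground under gravitational acceleration g."""
    # Positive root of h0 + v0y*t - 0.5*g*t**2 = 0.
    return (v0y + math.sqrt(v0y**2 + 2.0 * g * h0)) / g

def landing_distance(v0x, v0y, h0, g):
    """Horizontal distance travelled before landing."""
    return v0x * flight_time(v0y, h0, g)

# The same launch lasts longer (and lands farther) under weaker gravity.
t_earth = flight_time(3.0, 1.5, 9.81)
t_novel = flight_time(3.0, 1.5, 1.62)  # lunar-like field
```

A bias toward earth's gravity would show up here as participants reporting durations closer to `t_earth` even when the simulated field uses the weaker g.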
As the focus of virtual reality technology shifts from single-person experiences to multi-user interactions, it becomes increasingly important to accommodate multiple co-located users within a shared real-world space. For locomotion and navigation, the introduction of multiple users moving both virtually and physically creates additional challenges related to potential user-to-user collisions. In this work, we focus on defining the extent of these challenges, in order to apply redirected walking to two users immersed in virtual reality experiences within a shared physical tracked space. Using a computer simulation framework, we explore the costs and benefits of splitting available physical space between users versus attempting to algorithmically prevent user-to-user collisions. We also explore fundamental components of collision prevention such as steering the users away from each other, forced stopping, and user re-orientation. Each component was analyzed for the number of potential disruptions to the flow of the virtual experience. We also develop a novel collision prevention algorithm that reduces overall interruptions by 17.6% and collision prevention events by 58.3%. Our results show that sharing space using our collision prevention method is superior to subdividing the tracked space.
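The components listed above (steering apart, forced stopping) can be thought of as escalating, distance-triggered interventions. The following sketch illustrates that idea only; the thresholds and names are hypothetical and are not the paper's actual algorithm:

```python
import math

# Illustrative thresholds; the paper's actual parameters are not given here.
STOP_RADIUS = 1.0    # metres: force a stop before users can collide
STEER_RADIUS = 2.5   # metres: begin steering users away from each other

def collision_action(user_a, user_b):
    """Choose the least disruptive intervention for two tracked users,
    given their (x, y) positions in the shared physical space."""
    d = math.hypot(user_a[0] - user_b[0], user_a[1] - user_b[1])
    if d < STOP_RADIUS:
        return "forced_stop"   # most disruptive, used as a last resort
    if d < STEER_RADIUS:
        return "steer_apart"   # inject redirected-walking gains
    return "none"              # no intervention needed
```

Counting how often each branch fires over a simulated walk is one way to compare interruption rates between space-splitting and shared-space strategies.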
In the past few years, advances have been made in how mixed reality humans (MRHs) can be used for interpersonal communication skills training for medical teams; however, little research has looked at how MRHs can influence communication skills during training. One way to influence communication skills is to leverage MRHs as models of communication behavior. We have created multiple mixed reality medical team training exercises designed to impact communication behaviors that are critical for patient safety, and found that MRHs can influence healthcare providers' behavior. This talk will discuss the results from these studies as well as the impact this work has on expanding the boundaries for how VR can be used.
In this presentation, Dario will share what he looks for when clients come to him to explore the road to bringing VR to an audience. It can be exciting and confusing at the same time for all parties involved, from clients to production studios. He'll share examples of Zero Code’s work and their approach, and where he sees VR going within the advertising world.
We have developed a pipeline capable of generating a digital version of a specific person in 20 minutes semi-automatically with no artistic or technical intervention. This 3D construct includes an animatable face, body and fingers. There are numerous commercial and research applications that would benefit from being able to simulate a specific (or recognizable) person in a 3D environment, including social VR, virtual try-on, and digital communication. We discuss the development of this pipeline, its future directions and applications.
Across graphics, audio, video, and physics, the NVIDIA VRWorks suite of technologies helps developers maximize performance and immersion for VR applications. We'll explore the latest features of VRWorks, explain the VR-specific challenges they address, and provide application-level tips and tricks to take full advantage of these features. Special focus will be given to the details and inner workings of our latest VRWorks feature, Lens Matched Shading, along with the latest VRWorks integrations into Unreal Engine and Unity.
I have been working in VR and interactive 3D for a long time. I have had the pleasure of knowing and working with many of the people that are directly responsible for creating the magic in the world that we live in today. These are the people that started with a virtual blank page and created their own reality. Their vision defined a vector into their future that we have had the privilege of extending into ours. Knowing where this vector started gives us an incredible perspective on where it is today and where it is going. I will describe my personal journey along this vector over the last 35 years, demonstrate a few things I am working on today, and speculate about where this vector into the future may take us.
Focus depth cues, a wide field of view, and ever-higher resolutions all present major hardware design challenges for near-eye displays. Optimizing a design to overcome one of these challenges typically leads to a trade-off in the others. We tackle this problem by introducing an all-in-one solution – a new wide field of view gaze-tracked near-eye display for augmented reality applications. The key component of our solution is the use of a single see-through, varifocal, deformable membrane mirror for each eye, reflecting a display. The membranes are controlled by airtight cavities and change their effective focal power to present a virtual image at a target depth plane determined by the gaze tracker. The benefits of using the membranes include wide field of view (100° diagonal) and fast depth switching (from 20 cm to infinity within 20 ms). Our subjective experiment verifies the prototype and demonstrates its potential benefits for near-eye see-through displays.
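The quoted depth-switching range translates directly into optical power: focusing at 20 cm requires 5 dioptres, optical infinity requires 0 dioptres, so switching within 20 ms implies a rate of 250 D/s. A sketch of that arithmetic (the helper name is ours, not from the paper):

```python
def dioptres(distance_m):
    """Optical power (in dioptres) needed to focus at a given distance."""
    return 0.0 if distance_m == float("inf") else 1.0 / distance_m

near_power = dioptres(0.20)            # focusing at 20 cm needs 5.0 D
far_power = dioptres(float("inf"))     # optical infinity needs 0.0 D
switch_rate = (near_power - far_power) / 0.020   # 5 D in 20 ms = 250 D/s
```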
The use of first-person self-avatars in immersive virtual environments (VEs) has grown over recent years. It is unknown, however, how visual feedback from a self-avatar influences a user's online actions and subsequent calibration of actions within an immersive VE. The current paper uses a prism throwing adaptation paradigm to test the role of a self-avatar arm or full body on action calibration in a VE. Participants' throwing accuracy to a target on the ground was measured first in a normal viewing environment, then with the visual field rotated clockwise about their vertical axis by 17 degrees (prism simulation), and then again in the normal viewing environment with the prism distortion removed. Participants experienced either no avatar, a first-person avatar arm and hand, or a first-person full-body avatar during the entire experimental session, in a between-subjects manipulation. Results showed similar throwing error and adaptation during the prism exposure for all conditions, but a reduced aftereffect (displacement with respect to the target in the opposite direction of the prism exposure) when the avatar arm or full body was present. The results are discussed in the context of how an avatar can provide a visual frame of reference to aid in action calibration.
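The prism simulation amounts to rotating the rendered visual field by 17° about the viewer's vertical axis, so a throw aimed straight at the target appears deflected to one side. A sketch of the equivalent 2D rotation (illustrative only, not the study's code):

```python
import math

def prism_rotate(x, y, degrees=17.0):
    """Rotate a horizontal aim vector clockwise (viewed from above) to
    mimic a prism shift of the visual field about the vertical axis.
    Coordinates: x = rightward, y = forward."""
    a = math.radians(degrees)
    # Clockwise rotation in an x-right, y-forward plane.
    return (x * math.cos(a) + y * math.sin(a),
            -x * math.sin(a) + y * math.cos(a))

# A throw aimed straight ahead (0, 1) appears deflected to the right.
dx, dy = prism_rotate(0.0, 1.0)
```

Adaptation means participants gradually re-aim to cancel this deflection; the aftereffect is the residual opposite-direction error once the rotation is removed.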
Immersive technologies are showing up in out-of-home venues around the world. In this panel we'll discuss the advantages, challenges, and business models afforded by AR and VR in amusement parks, movie theaters, and family entertainment centers.
VR and AR hold enormous promise as paradigm-shifting ubiquitous technologies. The investment in these technologies by leading IT companies, as well as the buy-in and general excitement from outside investors, technologists, and content producers, has never been more palpable. There are good reasons to be excited about the field. The real question will be whether the technologies can add sufficient value to people’s lives to establish themselves as more than just niche products. My path in this presentation will lead from a personal estimation of what matters for adoption of new technologies to important innovations we have witnessed on the road to anywhere/anytime use of immersive technologies. In recent years, one track of research in my lab has been concerned with the simulation of possible future capabilities in AR. With the goal of conducting controlled user studies evaluating technologies that are just not possible yet (such as a truly wide-field-of-view augmented reality display), we turn to high-end VR to simulate, predict, and assess these possible futures. In the far future, when technological hurdles, such as real-time reconstruction of photorealistic environment models, are removed, VR and AR naturally converge. Until then, we have a very interesting playing field full of technological constraints to have fun with.
Justin Roiland, the “Rick & Morty” creator and newly-minted founder of the VR studio Squanchtendo aims to dive into the surreally funny possibilities of the medium in his keynote, remarking "What does the future of VR hold? Will there be more wizard games? Are grandmas real? What IS a wizard really? Are there wizard grandmas? How does this factor into VR? I did all this (simple) math and then made a power point presentation that I **think** maaaayyybe has these questions (and more) all figured out. Please come to my incredible keynote address on the state of VR! You juuuust might learn something, maybe, I don't know. I can't make any promises on that because you may already know everything."
Building on last year’s highly successful VRLA audio workshop, this event will cover the latest developments and tools as audio spatialization becomes a ubiquitous part of VR platforms & experiences. The two-part workshop will open with a short primer on 360 spatial audio principles and approaches to cinematic, documentary-style, commercial, music-centric, game-based & room-scale VR. The remainder of the workshop will be dedicated to hands-on experience for all attendees in the art and craft of spatial sound. Guided by the co-founders of ECCO VR, with special guests from Safari Riot, the workshop will cover the following topics:
A conversation on how the most important screen in this new virtual world is the 2D one, and how it brings together experiences with audiences, using fundamental techniques and platforms.
Hear from some of the industry's leading creators of VR and immersive experiences as they discuss and debate:
- Which brands are using VR or immersion in the right way, and why
- The choices and challenges of creating for headset vs. mobile
- How to approach the topic of ROI and the new tools and metrics now available
- The best way to brief a VR or immersive experience
- What trends and technologies in the sector they think will have the biggest impact on reach and engagement
In the emerging field of AR/VR, both the tools and the techniques for creating content are still in a nascent state. Ways of thinking through the creative process and application workflows are surfacing and evolving in real time alongside new product development, making this a very exciting time to be a content creator (and a content consumer), but one that also includes potential gaps and problems, as standard best practices are still developing.
Join our panel of audio experts, who together make up some of the biggest names in the industry!
This panel features a selection of some of the most experienced sound designers and mixers in 360 video productions today. During our session, learn answers to questions such as:
• How should I approach audio for VR and when does it take place during the production timeline?
• How do I lobby effectively for audio in a typical production?
• What are the best audio production workflows for VR today? What works? What doesn’t work, and why?
• What features are needed in today’s audio production tools for VR?
• Are we ready for some basic content workflow and distribution standardization?