Enhancing Spatial Perception and Presence in Immersive Virtual Environments

Enabling accurate spatial perception in VR is critical to ensuring the integrity of immersive virtual environments as a practical tool for architectural design and evaluation. Since our 2006 discovery of the key role that cognitive factors play in distance perception accuracy in VR, our group has been developing strategies to facilitate veridical spatial understanding by evoking affordances for natural, embodied interaction in HMD-based VR. Our current efforts focus on providing people with a high-fidelity self-embodiment using simple, low-cost technology.


Predicting Destination using Head Orientation and Gaze Direction During Locomotion in VR, Jonathan Gandrud and Victoria Interrante (2016) ACM Symposium on Applied Perception, pp. 31-38. [PDF] [abstract]

This paper reports preliminary investigations into the extent to which future directional intention might be reliably inferred from head pose and eye gaze during locomotion. Such findings could help inform the more effective implementation of realistic detailed animation for dynamic virtual agents in interactive first-person crowd simulations in VR, as well as the design of more efficient predictive controllers for redirected walking. In three different studies, with a total of 19 participants, we placed people at the base of a T-shaped virtual hallway environment and collected head position, head orientation, and gaze direction data as they set out to perform a hidden target search task across two rooms situated at right angles to the end of the hallway. Subjects wore an nVisor ST50 HMD equipped with an Arrington Research ViewPoint eye tracker; positional data were tracked using a 12-camera Vicon MX40 motion capture system. The hidden target search task was used to blind participants to the actual focus of our study, which was to gain insight into how effectively head position, head orientation, and gaze direction data might predict people's eventual choice of which room to search first. Our results suggest that eye gaze data do have the potential to provide additional predictive value over the use of 6DOF head-tracked data alone, despite the relatively limited field of view of the display we used.
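
As a rough illustration of how such added predictive value might be quantified (this is not the analysis used in the paper; the features, data, and classifier are assumptions), the sketch below compares a simple logistic-regression predictor trained on head yaw alone against one trained on head yaw plus gaze yaw, using synthetic per-trial features.

```python
# Illustrative sketch only -- not the analysis used in the paper.
# Hypothetical per-trial features: mean head yaw and mean gaze yaw (degrees,
# negative = left, positive = right) over the final approach to the hallway
# junction, with a binary label for which room was searched first.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60
room = rng.integers(0, 2, n)                         # 0 = left room, 1 = right room
head_yaw = rng.normal((room * 2 - 1) * 5, 8, n)      # weak directional bias in the head
gaze_yaw = rng.normal((room * 2 - 1) * 12, 10, n)    # stronger bias in the gaze

X_head = head_yaw.reshape(-1, 1)                     # head orientation alone
X_both = np.column_stack([head_yaw, gaze_yaw])       # head orientation + eye gaze

clf = LogisticRegression()
print("head only  :", cross_val_score(clf, X_head, room, cv=5).mean())
print("head + gaze:", cross_val_score(clf, X_both, room, cv=5).mean())
```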

Towards Achieving Robust Video Self-avatars under Flexible Environment Conditions, Loren Puchalla Fiore and Victoria Interrante (2012) International Journal of Virtual Reality (special issue featuring papers from the IEEE VR Workshop on Off-The-Shelf Virtual Reality), 11(3), pp. 33-41. [PDF] [abstract]

The user's sense of presence within a virtual environment is very important, as it affects their propensity to experience the virtual world as if it were real. A common method of immersion is to use a head-mounted display (HMD), which gives the user a stereoscopic view of the virtual world encompassing their entire field of vision. However, the disadvantage of using an HMD is that the user's view of the real world is completely blocked, including the view of his or her own body, thereby removing any sense of embodiment in the virtual world. Without a body, the user is left feeling that they are merely observing a virtual world, rather than experiencing it. We propose using a video-based see-through HMD (VSTHMD) to capture video of the user's view of the real world, and then to segment the user's body from that video and composite it into the virtual environment. We have developed a VSTHMD using commercial off-the-shelf components, and have developed a preliminary algorithm to segment and composite the user's arms and hands. This algorithm works by building probabilistic models of the appearance of the room within which the VSTHMD is used, and of the user's body. These models are then used to classify input video pixels in real time into foreground and background layers. The approach has promise, but additional measures need to be taken to more robustly handle situations in which the background contains skin-colored objects such as wooden doors. We propose several methods to eliminate such false positives, and discuss the initial results of using the 3D data from a Kinect to identify false positives.
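
A minimal sketch of the general per-pixel classification idea described above, assuming hypothetical Gaussian color models for the user's skin and the room; this illustrates the approach in principle and is not the paper's implementation (the color space, model parameters, and compositing are placeholders).

```python
# Minimal sketch of per-pixel foreground/background classification using
# Gaussian color models -- an illustration of the general approach, not the
# paper's implementation.  The model parameters below are placeholders that
# would normally be estimated from training frames of the room and the user.
import numpy as np

# Hypothetical Gaussian color models in normalized RGB, one per class.
skin_mean, skin_cov = np.array([0.55, 0.32, 0.13]), np.eye(3) * 0.01
room_mean, room_cov = np.array([0.35, 0.35, 0.30]), np.eye(3) * 0.02

def log_gaussian(x, mean, cov):
    """Per-pixel log density of a multivariate Gaussian."""
    d = x - mean
    inv = np.linalg.inv(cov)
    norm = -0.5 * (np.log(np.linalg.det(cov)) + x.shape[-1] * np.log(2 * np.pi))
    return norm - 0.5 * np.einsum('...i,ij,...j->...', d, inv, d)

def segment_frame(frame):
    """Return a boolean mask: True where a pixel looks more like skin than room."""
    rgb = frame.astype(np.float64) / 255.0
    return log_gaussian(rgb, skin_mean, skin_cov) > log_gaussian(rgb, room_mean, room_cov)

def composite(frame, virtual_frame):
    """Overlay the segmented body pixels onto a rendered view of the virtual scene."""
    mask = segment_frame(frame)
    return np.where(mask[..., None], frame, virtual_frame)
```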

Correlations Between Physiological Response, Gait, Personality, and Presence in Immersive Virtual Environments, Lane Phillips, Victoria Interrante, Michael Kaeding, Brian Ries and Lee Anderson (2012) Presence: Teleoperators and Virtual Environments 21(3), Spring 2012, pp. 119-141. [PDF] [abstract]

In previous work, we have found significant differences in the accuracy with which people make initial spatial judgments in different types of head-mounted display-based immersive virtual environments (IVEs; Phillips, Interrante, Kaeding, Ries, & Anderson, 2010). In particular, we have found that people tend to less severely underestimate egocentric distances in a virtual environment that is a photorealistic replica of a real place that they have recently visited than when the virtual environment is either a photorealistic replica of an unfamiliar place, or a nonphotorealistically (NPR) portrayed version of a familiar space. We have also noted significant differences in the effect of environment type on distance perception accuracy between individual participants. In this paper, we report the results of two experiments that seek further insight into these phenomena, focusing on factors related to depth of presence in the virtual environment. In our first experiment, we immersed users (between-subjects) in one of the three different types of IVEs and asked them to perform a series of well-defined tasks along a delimited path, first in a control version of the environment, and then in a stressful variant in which the floor around the marked path was cut away to reveal a 20-ft drop. We assessed participants' sense of presence during each trial using a diverse set of measures, including questionnaires, recordings of heart rate and galvanic skin response, and gait metrics derived from tracking data. We computed the differences in each of these measures between the stressful and nonstressful versions of each environment, and then compared the changes due to stress between the different virtual environment conditions. Pooling the data over all participants in each group, we found significant physiological indications of stress after the appearance of the pit in all three environments, but we did not find significant differences in the magnitude of the stress response between the different virtual environment locales. We also did not find any significant difference in the level of subjective presence reported in each environment. However, we did find significant differences in gait: participants in the photorealistic replica room showed a significantly greater reduction in stride speed and stride length between the control and pit version of the room than did participants in either the photorealistically rendered nonreplica environment or the NPR replica environment conditions. Our second experiment, conducted with a new set of participants, sought to more directly investigate potential correlations between distance estimation accuracy and personality, stress response, and reported sense of presence, comparatively across different immersive virtual environment conditions. We used pretest questionnaires to assess a variety of personality measures, and then randomly immersed participants (between-subjects) in either the photorealistic replica or photorealistic non-replica environment and assessed the accuracy of their egocentric distance judgments in that IVE, followed by control trials in a neutral, real-world location. We then had participants go through the same set of tasks as in our first experiment while we collected physiological measures of their stress level and tracked their gait, and we compared the changes in these measures between the neutral and pit-enhanced versions of the environment.
Finally, we had people fill out a brief presence questionnaire. Analyzing all of these data, we found that participants made significantly greater distance estimation errors in the unfamiliar room environment than in the replica room environment, but no other differences between the two environments were significant. We found significant positive correlations among several of the personality measures, but we did not find any notable correlations between personality and presence, or between either personality or presence and gait changes or distance estimation accuracy. These results suggest to us that the relationship between personality, presence, and performance in IVEs is complicated and not easily captured by existing measures.
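
The "change due to stress" comparison described above can be sketched on hypothetical data: a paired test within each environment condition, followed by a between-condition comparison of the per-participant changes. The measurements, group sizes, and statistical tests shown are assumptions for illustration, not the study's actual data or analysis.

```python
# Sketch of the difference-due-to-stress comparison on hypothetical data --
# not the study's actual measurements or statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
conditions = ["photorealistic replica", "photorealistic non-replica", "NPR replica"]
deltas = {}  # per-participant change in a measure (e.g., heart rate): pit - control

for cond in conditions:
    control = rng.normal(72, 6, size=12)          # hypothetical control-trial heart rates
    pit = control + rng.normal(4, 3, size=12)     # hypothetical response to the pit
    deltas[cond] = pit - control
    t, p = stats.ttest_rel(pit, control)          # did the pit raise the measure at all?
    print(f"{cond}: mean change {deltas[cond].mean():+.1f} bpm, paired t p={p:.3f}")

# Between conditions: do the magnitudes of the changes differ?
f, p = stats.f_oneway(*deltas.values())
print(f"one-way ANOVA on changes across conditions: p={p:.3f}")
```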

Avatar Self-Embodiment Enhances Distance Perception Accuracy in Non-Photorealistic Immersive Virtual Environments, Lane Phillips, Brian Ries, Michael Kaeding and Victoria Interrante (2010) IEEE Virtual Reality 2010, pp. 115-118. [PDF] [abstract]

Non-photorealistically rendered (NPR) immersive virtual environments (IVEs) can facilitate conceptual design in architecture by enabling preliminary design sketches to be previewed and experienced at full scale, from a first-person perspective. However, it is critical to ensure the accurate spatial perception of the represented information, and many studies have shown that people typically underestimate distances in most IVEs, regardless of rendering style. In previous work we have found that while people tend to judge distances more accurately in an IVE that is a high-fidelity replica of their concurrently occupied real environment than in an IVE that is a photorealistic representation of a real place that they've never been to, significant distance estimation errors re-emerge when the replica environment is represented in an NPR style. We have also previously found that distance estimation accuracy can be improved, in photorealistically rendered novel virtual environments, when people are given a fully tracked, high-fidelity first-person avatar self-embodiment. In this paper we report the results of an experiment that seeks to determine whether providing users with a high-fidelity avatar self-embodiment in an NPR virtual replica environment will enable them to perceive the 3D spatial layout of that environment more accurately. We find that users who are given a first-person avatar in an NPR replica environment judge distances more accurately than do users who experience the NPR replica room without an embodiment, but not as accurately as users whose distance judgments are made in a photorealistically rendered virtual replica room. Our results provide a partial solution to the problem of facilitating accurate distance perception in NPR virtual environments, while supporting and expanding the scope of previous findings that giving people a realistic avatar self-embodiment in an IVE can help them to interpret what they see through an HMD in a way that is more similar to how they would interpret a corresponding visual stimulus in the real world.

A Further Assessment of Factors Correlating with Presence in Immersive Virtual Environments, Lane Phillips, Victoria Interrante, Michael Kaeding, Brian Ries and Lee Anderson (2010) Joint Virtual Reality Conference of EGVE - EuroVR - VEC, pp. 55-63. [PDF] [abstract]

In previous work, we have found significant differences in participants' distance perception accuracy in different types of immersive virtual environments (IVEs). Could these differences be an indication of, or consequence of, differences in participants' sense of presence under these different virtual environment conditions? In this paper, we report the results of an experiment that seeks further insight into this question.
    In our experiment, users were fully tracked and immersed in one of three different IVEs: a photorealistically rendered replica of our lab, a non-photorealistically rendered replica of our lab, or a photorealistically rendered room that had similar dimensions to our lab, but was texture-mapped with photographs from a different real place. Participants in each group were asked to perform a series of tasks, first in a normal (control) version of the IVE and then in a stress-enhanced version in which the floor surrounding the marked path was cut away to reveal a two-story drop. We assessed participants' depth of presence in each of these IVEs using a questionnaire, recordings of heart rate and galvanic skin response, and gait metrics derived from tracking data, and then compared the differences between the stressful and non-stressful versions of each environment. Pooling the data over all participants in each group, we found significant physiological indications of stress after the appearance of the pit in all three environments, but did not find significant differences in the magnitude of the physiological stress response between the different environment conditions. However, we did find significant differences in the change in gait: participants in the photorealistic replica room group walked significantly slower, and with shorter strides, after exposure to the stressful version of the environment than did participants in either the photorealistically rendered unfamiliar room or the NPR replica room conditions.

Gait Parameters in Stressful Virtual Environments, Lane Phillips, Brian Ries, Michael Kaeding and Victoria Interrante (2010) IEEE VR 2010 Workshop on Perceptual Illusions in Virtual Environments, pp. 19-22. [PDF] [abstract]

We share the results of a preliminary experiment in which participants performed a simple task in a control immersive virtual environment (IVE) followed by a stressful IVE. Participants' gaits were recorded with a motion capture system. We computed speed, stride length, and stride width for each participant and found that participants took significantly shorter strides in the stressful environment, while stride width and walking speed did not show a significant difference. In a future experiment we will continue to study how gait parameters relate to a user's experience of a virtual environment. We hope to find parameters that can be used as metrics for comparing a user's level of presence in different virtual environments.
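
As a rough sketch of how such gait parameters might be derived from motion capture data (the marker layout, heel-strike detection, and processing details here are assumptions, not the study's pipeline):

```python
# Illustrative derivation of the three gait parameters from tracked positions;
# marker layout, heel-strike detection, and units are assumptions.
import numpy as np

def gait_parameters(left_strikes, right_strikes, pelvis_xy, t):
    """
    left_strikes, right_strikes : (K, 2) horizontal heel-strike positions per foot (m)
    pelvis_xy                   : (N, 2) tracked pelvis positions over the trial (m)
    t                           : (N,) timestamps (s)
    Returns mean stride length (m), mean stride width (m), and walking speed (m/s).
    """
    # Stride length: distance between successive heel strikes of the same foot.
    stride_length = np.mean([
        np.linalg.norm(np.diff(left_strikes, axis=0), axis=1).mean(),
        np.linalg.norm(np.diff(right_strikes, axis=0), axis=1).mean(),
    ])
    # Walking direction estimated from the overall pelvis trajectory.
    direction = pelvis_xy[-1] - pelvis_xy[0]
    direction /= np.linalg.norm(direction)
    normal = np.array([-direction[1], direction[0]])
    # Stride width: mean lateral separation between left and right footfalls.
    k = min(len(left_strikes), len(right_strikes))
    stride_width = np.abs((left_strikes[:k] - right_strikes[:k]) @ normal).mean()
    # Walking speed: pelvis path length divided by elapsed time.
    speed = np.linalg.norm(np.diff(pelvis_xy, axis=0), axis=1).sum() / (t[-1] - t[0])
    return stride_length, stride_width, speed
```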

Distance Perception in NPR Immersive Virtual Environments, Revisited, Lane Phillips, Brian Ries, Michael Kaeding and Victoria Interrante (2009) ACM/SIGGRAPH Symposium on Applied Perception in Graphics and Visualization, pp. 11-14. [PDF] [abstract]

Non-photorealistic rendering (NPR) is a representational technique that makes it possible to communicate the essence of a design while giving the viewer the sense that the design is open to change. Our research aims to address the question of how to effectively use non-photorealistic rendering in immersive virtual environments to enable the intuitive exploration of early architectural design concepts at full scale. Previous studies have shown that people typically underestimate egocentric distances in immersive virtual environments, regardless of rendering style, although we have recently found that distance estimation errors are minimized in the special case that the virtual environment is a high-fidelity replica of a real environment that the viewer is presently in or has recently been in. In this paper we re-examine the impact of rendering style on distance perception accuracy in this co-located virtual environment context. Specifically, we report the results of an experiment that seeks to assess the accuracy with which people judge distances in a non-photorealistically rendered virtual environment that is a directly derived stylistic abstraction of the actual environment that they are currently in. Our results indicate that people tend to underestimate distances to a significantly greater extent in a co-located virtual environment when it is rendered using a line-drawing style than when it is rendered using high-fidelity textures derived from photographs.

Investigating the Physiological Effects of Self-Embodiment in Stressful Virtual Environments, Brian Ries, Victoria Interrante, Cassandra Ichniowski and Michael Kaeding (2009) IEEE VR 2010 Workshop on Perceptual Illusions in Virtual Environments, pp. 176-198. [PDF] [abstract]

In this paper we explore the benefits that self-embodied virtual avatars provide to a user's sense of presence while wearing a head-mounted display in an immersive virtual environment (IVE). Recent work has shown that providing a user with a virtual avatar can increase their performance when completing tasks such as egocentric distance judgment. The results of this research imply that a heightened sense of presence is responsible for the improvement. However, there is an ambiguity in interpreting the results. Are users merely gaining additional scaling information about their environment by using the representation of their body as a metric, or is the virtual avatar heightening their sense of presence and thereby increasing their accuracy? To investigate this question, we conducted a between-subjects experiment to analyze physiological differences between users who were given virtual avatars and those who were not. If the virtual avatars are increasing a user's sense of presence, their physiological data should indicate a higher level of stress when presented with a stressful environment.

Analyzing the Effect of a Virtual Avatar's Geometric and Animation Fidelity on Ego-centric Spatial Perception in Immersive Virtual Environments, Brian Ries, Michael Kaeding, Lane Phillips and Victoria Interrante (2009) ACM Symposium on Virtual Reality Software and Technology, pp. 59-66. [PDF] [abstract]

Previous work has shown that giving a user a first-person virtual avatar can increase the accuracy of their egocentric distance judgments in an immersive virtual environment (IVE). This result provides one of the rare examples of a manipulation that can enable improved spatial task performance in a virtual environment without potentially compromising the ability for accurate information transfer to the real world. However, many open questions about the scope and limitations of the effectiveness of IVE avatar self-embodiment remain. In this paper, we report the results of a series of four experiments, involving a total of 40 participants, that explore how important a high level of geometric and motion fidelity in the avatar representation is to the desired outcome of enhanced spatial perception accuracy. In these studies, we assess participants' abilities to estimate egocentric distances in a novel virtual environment under four different conditions of avatar self-embodiment: a) no avatar; b) a fully tracked, custom-fitted, high-fidelity avatar, represented using a textured triangle mesh; c) the same avatar as in b) but implemented with single-point rather than full-body tracking; and d) a fully tracked but simplified avatar, represented by a collection of small spheres at the raw tracking marker locations. The goal of these investigations is to gain insight into what specific characteristics of a virtual avatar representation are most important to facilitating accurate spatial perception, and what cost-saving measures in the avatar implementation might be possible. Our results indicate that each of the simplified avatar implementations we tested is significantly less effective than the full avatar in facilitating accurate distance estimation; in fact, the participants who were given the simplified avatar representations performed only marginally (but not significantly) more accurately than the participants who were given no avatar at all. These findings suggest that the beneficial impact of providing users with a high-fidelity avatar self-representation may stem less directly from the low-level size and motion cues that the avatar embodiment makes available to them than from the cognitive sense of presence that the self-embodiment supports.

Transitional Environments Enhance Distance Perception in Immersive Virtual Reality Systems, Frank Steinicke, Gerd Bruder, Klaus Hinrichs, Markus Lappe, Brian Ries and Victoria Interrante (2009) ACM/SIGGRAPH Symposium on Applied Perception in Graphics and Visualization, pp. 19-26. [PDF] [abstract]

Several experiments have provided evidence that ego-centric distances are perceived as compressed in immersive virtual environments relative to the real world. The principal factors responsible for this phenomenon have remained largely unknown. However, recent experiments suggest that when the virtual environment (VE) is an exact replica of a user's real physical surroundings, the person's distance perception improves. Furthermore, it has been shown that when users start their virtual reality (VR) experience in such a virtual replica and then gradually transition to a different VE, their sense of presence in the actual virtual world increases significantly. In this case the virtual replica serves as a transitional environment between the real and virtual world.
    In this paper we examine whether a person's distance estimation skills can be transferred from a transitional environment to a different VE. We have conducted blind walking experiments to analyze whether starting the VR experience in a transitional environment can improve a person's ability to estimate distances in an immersive VR system. We found that users significantly improve their distance estimation skills when they enter the virtual world via a transitional environment.

Elucidating Factors that can Facilitate Veridical Spatial Perception in Immersive Virtual Environments, Victoria Interrante, Brian Ries, Jason Lindquist, Michael Kaeding and Lee Anderson (2008) Presence: Teleoperators and Virtual Environments 17(2), April 2008, pp. 176-198. [PDF] [abstract]

Ensuring veridical spatial perception in immersive virtual environments (IVEs) is an important yet elusive goal. In this paper, we present the results of two experiments that seek further insight into this problem. In the first of these experiments, initially reported in Interrante, Ries, Lindquist, and Anderson (2007), we seek to disambiguate two alternative hypotheses that could explain our recent finding (Interrante, Anderson, and Ries, 2006a) that participants appear not to significantly underestimate egocentric distances in HMD-based IVEs, relative to in the real world, in the special case that they unambiguously know, through first-hand observation, that the presented virtual environment is a high-fidelity 3D model of their concurrently occupied real environment. Specifically, we seek to determine whether people are able to make similarly veridical judgments of egocentric distances in these matched real and virtual environments because (1) they are able to use metric information gleaned from their exposure to the real environment to calibrate their judgments of sizes and distances in the matched virtual environment, or because (2) their prior exposure to the real environment enabled them to achieve a heightened sense of presence in the matched virtual environment, which leads them to act on the visual stimulus provided through the HMD as if they were interpreting it as a computer-mediated view of an actual real environment, rather than just as a computer-generated picture, with all of the uncertainties that that would imply. In our second experiment, we seek to investigate the extent to which augmenting a virtual environment model with faithfully-modeled replicas of familiar objects might enhance people's ability to make accurate judgments of egocentric distances in that environment.

The Effect of Self-Embodiment on Distance Perception in Immersive Virtual Environments, Brian Ries, Victoria Interrante, Michael Kaeding and Lee Anderson (2008) ACM Symposium on Virtual Reality Software and Technology, pp. 167-170. [PDF] [abstract]

Previous research has shown that egocentric distance estimation suffers from compression in virtual environments when viewed through head-mounted displays. Though many possible variables and factors have been investigated, the source of the compression has yet to be fully identified. Recent experiments have hinted that an insufficient sense of presence may be the cause. This paper investigates this presence hypothesis by exploring the benefit of providing self-embodiment to the user in the form of a virtual avatar, presenting an experiment that compares errors in egocentric distance perception, measured via direct blind walking, between subjects with a virtual avatar and those without. The results of this experiment show a significant improvement in egocentric distance estimation for users equipped with a virtual avatar over those without.
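
For context, direct blind walking accuracy is commonly summarized as the ratio of walked distance to target distance, where 1.0 is veridical and values below 1.0 indicate compression; this is a standard convention, not necessarily the exact measure used in this paper. A minimal sketch with hypothetical numbers:

```python
# Hypothetical trials -- not data from the study.  Ratio of walked distance
# to target distance: 1.0 = veridical, < 1.0 = compression (underestimation).
import numpy as np

target_m = np.array([3.0, 4.0, 5.0, 6.0])   # presented egocentric distances
walked_m = np.array([2.6, 3.4, 4.2, 5.1])   # distances walked without vision

ratios = walked_m / target_m
print(f"mean estimation ratio: {ratios.mean():.2f} "
      f"({100 * (ratios.mean() - 1):+.0f}% signed error)")
```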

Elucidating Factors that can Facilitate Veridical Spatial Perception in Immersive Virtual Environments, Victoria Interrante, Brian Ries, Jason Lindquist and Lee Anderson (2007) IEEE Virtual Reality, pp. 11-17. [PDF] [abstract]

Enabling veridical spatial perception in immersive virtual environments (IVEs) is an important yet elusive goal, as even the factors implicated in the often-reported phenomenon of apparent distance compression in HMD-based IVEs have yet to be satisfactorily elucidated. In recent experiments, we have found that participants appear less prone to significantly underestimate egocentric distances in HMD-based IVEs, relative to in the real world, in the special case that they unambiguously know, through first-hand observation, that the presented virtual environment is a high-fidelity 3D model of their concurrently occupied real environment. We had hypothesized that this increased veridicality might be due to participants having a stronger sensation of 'presence' in the IVE under these conditions of co-location, a state of mind that leads them to act on their visual input in the IVE similarly to how they would in the real world (the presence hypothesis). However, alternative hypotheses are also possible. Primary among these is the visual calibration hypothesis: participants could be relying on metric information gleaned from their exposure to the real environment to calibrate their judgments of sizes and distances in the matched virtual environment. It is important to disambiguate between the presence and visual calibration hypotheses because they suggest different directions for efforts to facilitate veridical distance perception in general (non-co-located) IVEs. In this paper, we present the results of an experiment that seeks novel insight into this question. Using a mixed within- and between-subjects design, we compare participants' relative ability to accurately estimate egocentric distances in three different virtual environment models: one that is an identical match to the occupied real environment; one in which each of the walls in our virtual room model has been surreptitiously moved ~10% inward towards the center of the room; and one in which each of the walls has been surreptitiously moved ~10% outwards from the center of the room. If the visual calibration hypothesis holds, then we should expect to see a degradation in the accuracy of people's distance judgments in the surreptitiously modified models, manifested as an underestimation of distances when the IVE is actually larger than the real room and as an overestimation of distances when the IVE is smaller. However, what we found is that distances were significantly underestimated in the virtual environment relative to in the real world in each of the surreptitiously modified room environments, while remaining reasonably accurate (consistent with our previous findings) in the case of the faithfully size-matched room environment. In a post-test survey, participants in each of the three room size conditions reported equivalent subjective levels of presence and did not indicate any overt awareness of the room size manipulation.
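
To make the visual calibration prediction concrete (the specific dimensions below are hypothetical, not from the paper), a viewer who rescales the virtual scene to match remembered real-room dimensions should misjudge every distance by the ratio of the real size to the virtual size:

```python
# Hypothetical worked example of the visual calibration prediction.
real_wall_m = 6.0                      # remembered distance to a real wall
virtual_wall_m = real_wall_m * 1.10    # walls surreptitiously moved ~10% outward
scale = real_wall_m / virtual_wall_m   # ~0.91: everything judged smaller/closer

target_m = 3.3
judged_m = target_m * scale
print(f"a target at {target_m:.1f} m would be judged at ~{judged_m:.2f} m "
      f"({100 * (scale - 1):+.0f}% error)")   # about -9%: underestimation
```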

Distance Perception in Immersive Virtual Environments, Revisited, Victoria Interrante, Lee Anderson and Brian Ries (2006) IEEE Virtual Reality, pp. 3-10. [PDF] [abstract]

Numerous previous studies have suggested that distances appear to be compressed in immersive virtual environments presented via head mounted display systems, relative to in the real world. However, the principal factors that are responsible for this phenomenon have remained largely unidentified. In this paper we shed some new light on this intriguing problem by reporting the results of two recent experiments in which we assess egocentric distance perception in a high fidelity, low latency, immersive virtual environment that represents an exact virtual replica of the participant's concurrently occupied real environment. Under these novel conditions, we make the startling discovery that distance perception appears not to be significantly compressed in the immersive virtual environment, relative to in the real world.


Selected Posters


A Little Unreality in a Realistic Replica Environment Degrades Distance Estimation Accuracy, Lane Phillips and Victoria Interrante (2011) IEEE Virtual Reality, pp. 235-236. [PDF] [poster] [abstract]

Users of IVEs typically underestimate distances during blind walking tasks, even though they are accurate at this task in the real world. The cause of this underestimation is still not known. Our previous work found an exception to this effect: when the virtual environment was a realistic, co-located replica of the concurrently occupied real environment, users did not significantly underestimate distances. However, when the replica was rendered in an NPR style, we found that users underestimated distances. In this study we explore whether the inaccuracy in distance estimation could be due to a lack of size and distance cues in our NPR IVE, or to a lack of presence. We ran blind walking trials in a new replica IVE that combined features of the previous two IVEs. Participants significantly underestimated distances in this environment.

Lack of 'Presence' May be a Factor in the Underestimation of Egocentric Distances in Immersive Virtual Environments, Victoria Interrante, Lee Anderson and Brian Ries (2005) Journal of Vision [Abstract], 5(8): 527a. [poster] [abstract]

We report the results of a study intended to investigate the possibility that cognitive dissonance in ‘presence’ may play a role in the widely reported phenomenon of underestimation of egocentric distances in immersive virtual environments. In this study, we compare the accuracy of egocentric distance estimates, obtained via direct blind walking, across two cognitively different immersion conditions: one in which the presented virtual environment consists of a perfectly registered, high fidelity 3D model of the same space in which the user is physically located, and one in which the presented virtual environment is a high-fidelity 3D model of a different real space. In each space, we compare distance estimates obtained in the immersive virtual environment with distance estimates obtained in the corresponding physical environment. We also compare distance perception accuracy across two different exposure conditions: one in which the participant experiences the virtual space before s/he is exposed to the real space, and one in which the participant experiences the real space first. We find no significant difference in the accuracy of distance estimates obtained in the real vs. virtual environments when the virtual environment represents the same space as the occupied real environment, regardless of the order of exposure, but, consistent with previously reported findings by others, we find that distances are significantly underestimated in the virtual world, relative to the real world, when the virtual world represents a different place than the occupied real world. In the case of the non-co-located environment only, we also find a significant effect of previous experience in the represented space, i.e. participants who complete the experiment in the real world first exhibit less distance underestimation in the corresponding virtual environment than do participants who complete the experiment in the virtual world first.