Visual-Auditory Redirection: Multimodal Integration of Incongruent Visual and Auditory Cues for Redirected Walking

Detailed bibliography
Published in: 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 639-648
Main authors: Gao, Peizhong; Matsumoto, Keigo; Narumi, Takuji; Hirose, Michitaka
Medium: Conference paper
Language: English
Publication details: IEEE, 01.11.2020
Description
Summary: In this paper, we present a study of redirected walking (RDW) that shifts the positional relationship between visual and auditory cues during curvature manipulation. It has been shown that, when presented with incongruent visual and auditory spatial cues during a localization task, human observers integrate that information according to each cue's relative reliability, which determines their final perception of the target object's location. This multimodal integration model is known as maximum likelihood estimation (MLE). By using auditory cues to shift where users perceive visual objects in virtual reality (VR) during redirection manipulation, we expect fewer users to notice the manipulation, which helps increase the usable curvature gain. Most existing studies on MLE in multimodal integration have used random-dot stereograms as visual cues under stable motion states. In the present study, we first investigated whether this model holds while walking in a VR environment. Our results indicate that, in a walking state, users' perception of the target object's location shifts toward the auditory cue as the reliability of vision decreases, in keeping with the trend shown in previous studies on MLE. Based on this result, we then investigated the detection threshold of curvature gains during redirection manipulation under a condition with congruent visual-auditory cues and under a condition in which users' perception of the target object's location is assumed to be shifted by an incongruent auditory cue. We found that the detection threshold of curvature gains was higher with incongruent visual-auditory cues than with congruent cues. These results show that incongruent multimodal cues in VR may have a promising application in redirected walking.
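
To make the MLE model in the summary concrete: under MLE, the fused location estimate weights each cue by its reliability, i.e., the inverse of its variance. The short Python sketch below is illustrative only; the function name and the example numbers are hypothetical and not taken from the paper.

# Minimal sketch of MLE (inverse-variance-weighted) cue integration as
# described in the summary; values are illustrative, not from the study.
def integrate_cues(x_visual, var_visual, x_auditory, var_auditory):
    # Reliability of each cue is the inverse of its variance.
    w_visual = 1.0 / var_visual
    w_auditory = 1.0 / var_auditory
    # Fused estimate: reliability-weighted average of the two cues.
    x_hat = (w_visual * x_visual + w_auditory * x_auditory) / (w_visual + w_auditory)
    # Variance of the fused estimate; never larger than either cue's variance.
    var_hat = 1.0 / (w_visual + w_auditory)
    return x_hat, var_hat

# Reliable vision keeps the percept near the visual location ...
print(integrate_cues(0.0, 1.0, 10.0, 4.0))  # -> (2.0, 0.8)
# ... but as visual noise grows, the percept shifts toward the auditory cue,
# the effect the study exploits during curvature manipulation.
print(integrate_cues(0.0, 9.0, 10.0, 4.0))  # -> (~6.92, ~2.77)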
DOI: 10.1109/ISMAR50242.2020.00092