Misperceptions in Stereoscopic Displays: A Vision Science Perspective

Detailed Bibliography
Published in: ACM Transactions on Graphics, Volume 2008; p. 23
Main authors: Held, Robert T; Banks, Martin S
Format: Journal Article
Language: English
Published: United States, 01.01.2008
Subjects: Depth perception; Visualization; 3D displays; Virtual Reality
ISSN: 0730-0301
Abstract 3D shape and scene layout are often misperceived when viewing stereoscopic displays. For example, viewing from the wrong distance alters an object's perceived size and shape. It is crucial to understand the causes of such misperceptions so one can determine the best approaches for minimizing them. The standard model of misperception is geometric. The retinal images are calculated by projecting from the stereo images to the viewer's eyes. Rays are back-projected from corresponding retinal-image points into space and the ray intersections are determined. The intersections yield the coordinates of the predicted percept. We develop the mathematics of this model. In many cases its predictions are close to what viewers perceive. There are three important cases, however, in which the model fails: 1) when the viewer's head is rotated about a vertical axis relative to the stereo display (yaw rotation); 2) when the head is rotated about a forward axis (roll rotation); 3) when there is a mismatch between the camera convergence and the way in which the stereo images are displayed. In these cases, most rays from corresponding retinal-image points do not intersect, so the standard model cannot provide an estimate for the 3D percept. Nonetheless, viewers in these situations have coherent 3D percepts, so the visual system must use another method to estimate 3D structure. We show that the non-intersecting rays generate vertical disparities in the retinal images that do not arise otherwise. Findings in vision science show that such disparities are crucial signals in the visual system's interpretation of stereo images. We show that a model that incorporates vertical disparities predicts the percepts associated with improper viewing of stereoscopic displays. Improving the model of misperceptions will aid the design and presentation of 3D displays.
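The "standard model" in the abstract is described procedurally: back-project a ray from each eye through the corresponding retinal-image (here, screen) point and take the intersection as the predicted percept. The sketch below is only an illustration of that idea under simplifying assumptions (pinhole eyes, a display plane at z = 0, example coordinates in centimeters); it is not the authors' implementation, and the function names back_project and predicted_percept are hypothetical. Because the paper's failure cases (head yaw, head roll, convergence mismatch) produce rays that do not intersect, the sketch returns the least-squares closest point between the two rays together with the residual gap, which is zero only when a true intersection exists.

```python
# Illustrative sketch of the geometric "standard model" of stereoscopic
# misperception: back-project one ray per eye through corresponding image
# points and intersect them.  Assumptions (not from the paper): pinhole eyes,
# display plane at z = 0, units in centimeters.
import numpy as np


def back_project(eye_pos, image_point):
    """Unit direction of the ray from an eye through a point on the display."""
    d = np.asarray(image_point, float) - np.asarray(eye_pos, float)
    return d / np.linalg.norm(d)


def predicted_percept(left_eye, right_eye, left_point, right_point):
    """Closest point between the two back-projected rays (their intersection
    when one exists), plus the gap between the rays at closest approach."""
    dL = back_project(left_eye, left_point)
    dR = back_project(right_eye, right_point)
    pL, pR = np.asarray(left_eye, float), np.asarray(right_eye, float)
    # Solve for ray parameters t, s minimizing |(pL + t*dL) - (pR + s*dR)|.
    A = np.array([[dL @ dL, -(dL @ dR)],
                  [dL @ dR, -(dR @ dR)]])
    b = np.array([(pR - pL) @ dL, (pR - pL) @ dR])
    t, s = np.linalg.solve(A, b)
    qL, qR = pL + t * dL, pR + s * dR
    # Midpoint of the shortest segment between the rays; gap > 0 means the
    # rays are skew (the situation that generates vertical disparities).
    return (qL + qR) / 2.0, np.linalg.norm(qL - qR)


# Example: eyes 6.2 cm apart, 50 cm from the screen, viewing two
# corresponding screen points with a small horizontal disparity.
left_eye = np.array([-3.1, 0.0, 50.0])
right_eye = np.array([3.1, 0.0, 50.0])
percept, gap = predicted_percept(left_eye, right_eye,
                                 np.array([-0.5, 0.0, 0.0]),
                                 np.array([0.5, 0.0, 0.0]))
print(percept, gap)  # gap ~ 0 here; the percept lies behind the screen plane
```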
Author Held, Robert T (University of California, Berkeley; University of California, San Francisco)
Banks, Martin S (University of California, Berkeley)
BackLink https://www.ncbi.nlm.nih.gov/pubmed/24683290 (View this record in MEDLINE/PubMed)
ContentType Journal Article
DOI 10.1145/1394281.1394285
Discipline Engineering
ExternalDocumentID 24683290
Genre Journal Article
GrantInformation_xml – fundername: NEI NIH HHS
  grantid: R01 EY014194
ISICitedReferencesCount 58
ISSN 0730-0301
IsPeerReviewed true
IsScholarly true
Keywords Depth perception
Visualization
3D displays
Virtual Reality
Language English
PMID 24683290
PublicationDate 2008-01-01
PublicationPlace United States
PublicationTitle ACM transactions on graphics
PublicationTitleAlternate ACM Trans Graph
PublicationYear 2008
StartPage 23
Title Misperceptions in Stereoscopic Displays: A Vision Science Perspective
URI https://www.ncbi.nlm.nih.gov/pubmed/24683290
https://www.proquest.com/docview/1826593405
Volume 2008