Scene semantics affects allocentric spatial coding for action in naturalistic (virtual) environments
| Title: | Scene semantics affects allocentric spatial coding for action in naturalistic (virtual) environments |
|---|---|
| Authors: | Bianca R. Baltaretu, Immo Schuetz, Melissa L.-H. Võ, Katja Fiehler |
| Source: | Scientific Reports, Vol 14, Iss 1, Pp 1-12 (2024) |
| Publisher Information: | Nature Portfolio, 2024. |
| Publication Year: | 2024 |
| Collection: | LCC: Medicine; LCC: Science |
| Subject Terms: | Spatial coding, Scene semantics, Scene perception, Memory-guided action, Virtual reality, Medicine, Science |
| Description: | Interacting with objects in our environment requires determining their locations, often with respect to surrounding objects (i.e., allocentrically). According to the scene grammar framework, these usually small, local objects are movable within a scene and represent the lowest level of a scene’s hierarchy. How do higher hierarchical levels of scene grammar influence allocentric coding for memory-guided actions? Here, we focused on the effect of large, immovable objects (anchors) on the encoding of local object positions. In a virtual reality study, participants (n = 30) viewed one of four possible scenes (two kitchens or two bathrooms), with two anchors connected by a shelf onto which three local objects (congruent with one anchor) were presented (Encoding). The scene was re-presented (Test) with (1) the local objects missing and (2) one of the anchors shifted (Shift) or not (No shift). Participants then saw a floating local object (target), which they grabbed and placed back on the shelf in its remembered position (Response). Eye-tracking data revealed that both local objects and anchors were fixated, with a preference for local objects. Additionally, anchors guided allocentric coding of local objects despite being task-irrelevant. Overall, anchors implicitly influence spatial coding of local object locations for memory-guided actions within naturalistic (virtual) environments. |
| Document Type: | article |
| File Description: | electronic resource |
| Language: | English |
| ISSN: | 2045-2322 |
| Relation: | https://doaj.org/toc/2045-2322 |
| DOI: | 10.1038/s41598-024-66428-9 |
| Access URL: | https://doaj.org/article/85ca0455bfeb4eb893024fa916c9970d |
| Accession Number: | edsdoj.85ca0455bfeb4eb893024fa916c9970d |
| Database: | Directory of Open Access Journals |