NEUROCOMPUTATIONAL MODELLING OF DISTRIBUTED LEARNING FROM VISUAL STIMULI

Bibliographic Details
Published in: Asian Journal of Pharmaceutical and Clinical Research, Vol. 10, No. 13, p. 225
Main Authors: Rai, Ankush; R, Jagadeesh Kannan
Format: Journal Article
Language: English
Published: 01.04.2017
ISSN: 0974-2441
Description
Summary: Neurocomputational modeling of visual stimuli can not only help identify the neural substrates of attention but also test cognitive theories of attention, with applications in visual media, robotics, and related areas. However, while much research has been devoted to cognitive models of linguistics, studies on the cognitive modeling of learning mechanisms for visual stimuli lag behind. Based on the operating principles of cognitive functionalities in human vision processing, this study presents the development of a neurocomputational cognitive model for visual perception with detailed algorithmic descriptions. Four essential questions of cognition and visual attention are considered and logically compressed into one unified neurocomputational model: (i) segregation of special classes of stimuli and attention modulation, (ii) the relation between gaze movements and visual perception, (iii) the mechanism of selective stimulus processing and its encoding in neuronal cells, and (iv) the mechanism of visual perception through autonomous relation proofing.
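
The record does not reproduce the paper's algorithms, so the sketch below is only a rough, hypothetical illustration of point (i), attention modulation: it gates a visual stimulus by a center-surround contrast map, under the assumption that "attention modulation" can be approximated as multiplicative saliency gating. The function names (box_blur, attention_map, modulate), the parameters, and the patch example are illustrative and not taken from the paper.

    # Minimal sketch (assumed, not from the paper): saliency-style attention
    # modulation of a 2-D visual stimulus using center-surround contrast.
    import numpy as np

    def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
        """Separable box blur used as a cheap stand-in for a Gaussian surround."""
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

    def attention_map(stimulus: np.ndarray, surround_radius: int = 8) -> np.ndarray:
        """Center-surround contrast, normalised to [0, 1], as a crude saliency map."""
        contrast = np.abs(stimulus - box_blur(stimulus, surround_radius))
        return (contrast - contrast.min()) / (contrast.max() - contrast.min() + 1e-9)

    def modulate(stimulus: np.ndarray, gain: float = 1.0) -> np.ndarray:
        """Gate the stimulus by its own saliency: salient regions are amplified."""
        return stimulus * (1.0 + gain * attention_map(stimulus))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.random((64, 64))
        img[24:40, 24:40] += 2.0  # a salient patch ("special class of stimuli")
        ratio = modulate(img) / img
        mask = np.zeros_like(img, dtype=bool)
        mask[24:40, 24:40] = True
        print("mean gain inside patch :", ratio[mask].mean())   # close to 1 + gain
        print("mean gain outside patch:", ratio[~mask].mean())  # close to 1

Under this assumption, attended (high-contrast) regions receive a larger multiplicative gain than the background, which is one simple way to realize the stimulus-segregation and attention-modulation behavior the abstract names.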
DOI: 10.22159/ajpcr.2017.v10s1.19645