Multi-Modal Convergence Maps: From Body Schema and Self-Representation to Mental Imagery

Publication Type: Journal Article
Year of Publication: 2013
Authors: Lallee, S., Dominey, P. F.
Journal: Adaptive Behavior

Understanding the world involves extracting the regularities that define the interaction of the behaving organism within this world, and computing the statistical structure characterizing these regularities. This can be based on contingencies of phenomena at various scales, ranging from correlations between sensory signals (e.g., motor-proprioceptive loops) to high-level conceptual links (e.g., vocabulary grounding). Multiple cortical areas contain neurons whose receptive fields are tuned for signals co-occurring in multiple modalities. Moreover, the hierarchical organization of the cortex, described within the Convergence Divergence Zone framework, defines an ideal architecture for extracting and exploiting contingency at increasing levels of complexity. We present an artificial neural network model of the early cortical amodal computations, which we have demonstrated on the humanoid robot iCub. This model explains and predicts findings in neurophysiology and neuropsychology, and also serves as an efficient tool for controlling the robot. In particular, through exploratory use of the body, the system learns a form of body schema in terms of specific modalities (e.g., arm proprioception, gaze proprioception, vision) and their multimodal contingencies. Once multimodal contingencies have been learned, the system is capable of generating and exploiting internal representations, or mental images, based on inputs in any one of these multiple dimensions. The system thus provides insight into a possible neural substrate for mental imagery within the context of multimodal convergence.
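The learning scheme described in the abstract can be illustrated with a minimal sketch: a simulated two-joint arm performs exploratory "motor babbling", recording co-occurring proprioceptive and visual signals; a convergence map fitted to these pairs then lets proprioception alone evoke a visual "mental image" of the hand. The arm geometry, the polynomial features, and the linear least-squares readout below are all illustrative assumptions, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 0.3, 0.25  # link lengths in metres (assumed, for illustration)

def vision(q):
    """Hand position as seen by the camera (planar forward kinematics)."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

def feats(q):
    """Low-order polynomial expansion of the joint angles, so a linear
    readout can approximate the smooth nonlinear contingency."""
    return np.column_stack([np.ones(len(q)), q, q**2, q[:, :1] * q[:, 1:]])

# Exploratory phase: random postures, recording both modalities together.
q = rng.uniform(-0.5, 0.5, size=(500, 2))   # proprioception (joint angles)
v = vision(q)                               # co-occurring visual signal

# Fit the convergence weights mapping proprioceptive features to vision.
W, *_ = np.linalg.lstsq(feats(q), v, rcond=None)

# Imagery phase: proprioception alone evokes a visual "mental image".
q_test = rng.uniform(-0.5, 0.5, size=(100, 2))
v_imagined = feats(q_test) @ W
err = np.abs(v_imagined - vision(q_test)).max()
print(f"max imagery error: {err:.4f} m")
```

In this toy setting the imagined hand position stays within a few centimetres of the true one over the explored workspace; the same associative machinery could in principle be run in the opposite direction (vision evoking proprioception), in keeping with the bidirectional character of multimodal convergence described above.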
