Multimodal Cognition at CSRI

Multimodal Cognition

We contribute theory, tools, and experimental methodologies for exploring and modeling the interaction between language, perception, and the motor system. Our work comprises common-sense reasoners for visual scene understanding and verbal human-robot interaction, visuomotor parsers based on a sensorimotor generative action grammar, and large-scale semantic memory modules.
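To make the idea of a generative action grammar concrete, here is a minimal sketch: high-level actions expand into sequences of motor primitives, and a parser checks whether an observed primitive stream realizes a given action. The grammar symbols, primitives, and function names below are illustrative assumptions, not CSRI's actual formalism.

```python
# Hypothetical toy "generative action grammar": non-terminal actions
# expand into ordered child actions; symbols absent from the grammar
# are terminal motor primitives.
GRAMMAR = {
    "make_coffee": ["grasp_cup", "pour", "stir"],
    "grasp_cup":   ["reach", "close_hand"],
    "pour":        ["tilt", "hold"],
    "stir":        ["insert_spoon", "rotate"],
}

def expand(symbol):
    """Recursively expand a symbol into its terminal motor primitives."""
    if symbol not in GRAMMAR:          # terminal primitive
        return [symbol]
    out = []
    for child in GRAMMAR[symbol]:
        out.extend(expand(child))
    return out

def parses_as(observed, action):
    """True iff the observed primitive sequence realizes the action."""
    return observed == expand(action)

print(expand("grasp_cup"))                              # ['reach', 'close_hand']
print(parses_as(["reach", "close_hand"], "grasp_cup"))  # True
```

A visuomotor parser over real data would of course work from noisy, continuous observations rather than exact symbol matches; this sketch only shows the generative structure.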

Embodied Language Processing

We take a new theoretical and computational look at language as an active system in multimodal cognition applications. We are developing the first suite of embodied language processing tools, along with new semantic lexicons that bring state-of-the-art research closer to experimental findings on how the human brain works.
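One way to picture an embodied semantic lexicon is as lexical entries that pair a word with sensorimotor attributes alongside its symbolic meaning. The entry structure, feature names, and values below are illustrative assumptions for the sake of the example.

```python
# Hypothetical sketch of an "embodied" lexical entry: a verb annotated
# with the effector it primarily recruits and coarse sensorimotor
# features, rather than a purely symbolic definition.
from dataclasses import dataclass, field

@dataclass
class EmbodiedEntry:
    lemma: str
    effector: str                      # body part primarily recruited
    features: dict = field(default_factory=dict)

LEXICON = {
    "kick":  EmbodiedEntry("kick",  "foot",  {"force": "high"}),
    "grasp": EmbodiedEntry("grasp", "hand",  {"force": "medium"}),
    "lick":  EmbodiedEntry("lick",  "mouth", {"force": "low"}),
}

def shared_effector(verb_a, verb_b):
    """Toy congruency check: do two verbs recruit the same effector?"""
    return LEXICON[verb_a].effector == LEXICON[verb_b].effector

print(shared_effector("kick", "grasp"))   # False
```

Effector-based congruency checks of this kind echo behavioral findings that action verbs engage effector-specific motor representations.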

Multisensory Perception

We conduct experimental research on the fundamental mechanisms regulating multisensory event and time perception, and on the role of language in its interaction with these mechanisms. Our experiments span a number of topics, including the 'unity effect', time-space synaesthesia, the role of action goals and effects in learning new activities, co-speech exploratory acts, and object affordances.
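A standard way to model the 'unity effect' is with a temporal binding window: audio and visual onsets close enough in time are judged as one event, and assuming a common cause widens the window. The following minimal sketch uses illustrative window sizes and parameter names, not values from our experiments.

```python
# Hypothetical temporal-binding-window model for audiovisual events.
# If the audio-visual onset asynchrony falls inside the window, the two
# signals bind into a single perceived event; a "unity" assumption
# widens the window, as reported in the unity-effect literature.
def perceived_as_unified(asynchrony_ms, unity_assumed=False,
                         base_window_ms=100.0, unity_gain=1.5):
    """Return True if audio and visual onsets bind into one event."""
    window = base_window_ms * (unity_gain if unity_assumed else 1.0)
    return abs(asynchrony_ms) <= window

print(perceived_as_unified(120))                      # False
print(perceived_as_unified(120, unity_assumed=True))  # True
```

The same asynchrony can thus fall outside the window under neutral instructions but inside it when observers assume a common source, which is the signature of the effect.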