Multimodal Cognition

Multimodal Cognition at CSRI


Language interacts closely with Perception and the Motor System, and the dynamics of this interaction feed, and are fed by, the Semantic Memory, among other systems; the distributed, associative nature of these dynamics regulates Learning and Reasoning processes. Our research in this direction aims to model the autonomous, developmental acquisition of sensorimotor experiences and symbols. We contribute theory, tools, and experimental methodologies for exploring and modeling such processes, including large-scale semantic memory modules, embodied lexicons, common-sense reasoners, and cognitive semantic similarity metrics. The intelligent technology developed along these lines has a wide range of applications, including Visual Scene Understanding, Multimodal Discourse Analysis and Generation, and Audiovisual Indexing, Retrieval, and Summarization for Big Data Processing.