The Institute

The Cognitive Systems Research Institute (CSRI) is a non-governmental, non-profit research organization that specializes in the highly interdisciplinary field of Cognitive Systems. Its core activities comprise theoretical, experimental, and computational research and development aimed at exploring and modeling fundamental mechanisms of human cognition.

Vision

To develop cognitive systems with human-level intelligence for the betterment of our everyday life. 

Mission

  • to establish highly interdisciplinary research labs with integrated competencies and expertise spanning Engineering/Computer Science, Humanities/Linguistics, Life Sciences/Neuroscience, and Cognitive Psychology;
  • to develop human-centric, biology-inspired intelligent technology that addresses societal needs and improves the quality of life;
  • to bring science to society through the transfer of interdisciplinary knowledge and research practices to students at all levels of education and to the general public.

Philosophy

  • we promote scientific excellence and integrity; 
  • we encourage initiative, innovation, and creativity; 
  • we believe in active collaboration, continuous learning, and interdisciplinary exploration;    
  • we consider the development of systems with human-level intelligence feasible primarily through the exploration of human cognition;
  • we view intelligent technology as a platform for exploring how the human brain works and as a means of addressing grand societal challenges;
  • we actively support the open-source movement by providing our software and data resources freely to the public.

Expertise

Multimodal Cognition

Language interacts closely with Perception and the Motor System, and the dynamics of this interaction feed and are fed by, among others, Semantic Memory; the distributed and associative nature of these dynamics regulates Learning and Reasoning processes. Our research in this direction aims at modeling the autonomous, developmental acquisition of sensorimotor experiences and symbols. We contribute theory, tools, and experimental methodologies for exploring and modeling such processes, including large-scale semantic memory modules, embodied lexicons, common-sense reasoners, and cognitive semantic similarity metrics. The intelligent technology developed along these lines has a wide range of applications, including Visual Scene Understanding, Multimodal Discourse Analysis and Generation, and Audiovisual Indexing, Retrieval, and Summarization for Big Data Processing.
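
To make the notion of a cognitive semantic similarity metric more concrete, the short Python sketch below computes a cosine similarity over hand-crafted feature vectors; the toy lexicon, vectors, and function name are illustrative assumptions, not CSRI's actual large-scale resources.

    import numpy as np

    # Toy lexicon for illustration only: each word maps to a small feature vector.
    # A real semantic memory module would derive such representations from
    # multimodal data at a much larger scale.
    LEXICON = {
        "cup": np.array([0.9, 0.1, 0.7]),
        "mug": np.array([0.8, 0.2, 0.8]),
        "car": np.array([0.1, 0.9, 0.0]),
    }

    def semantic_similarity(word_a, word_b):
        """Cosine similarity between two lexicon entries (1.0 = same direction)."""
        a, b = LEXICON[word_a], LEXICON[word_b]
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(semantic_similarity("cup", "mug"))  # close to 1: related concepts
    print(semantic_similarity("cup", "car"))  # much lower: unrelated concepts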

Embodied Language Processing

Natural language processing does not take place in a cognitive vacuum, isolated from perception and action. Contrary to traditional approaches to computational language analysis and generation that operate in a 'language-only' space, we introduce a new theoretical and computational view of language as an active system in multimodal cognition applications. We develop the first suite of embodied language processing tools and new enactive lexicons that bring state-of-the-art research closer to experimental findings on how the human brain works. Our tools aim at bridging the gap between natural language and the sensorimotor space, allowing intelligent systems to go beyond using language as an interface medium and to take full advantage of its potential for behavior generalization, creativity, and intention attribution. In doing so, we bring the notion of referentiality to the core of language analysis, because it plays a key role in language's interaction with perception, the motor system, and generalization and learning in semantic memory. We capitalize on fundamental mechanisms of the language system, such as the productivity mechanisms of derivation and compounding and the notion of irregularity; such mechanisms render natural language not just another symbol system, but a highly powerful one.
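
As a rough illustration of what an embodied or enactive lexicon entry might contain, the Python sketch below links a lemma to perceptual features and motor affordances; the class, field names, and example values are hypothetical and chosen only to show the idea of grounding a word beyond a bare symbol, not the actual format of our resources.

    from dataclasses import dataclass, field

    @dataclass
    class EmbodiedLexicalEntry:
        """Illustrative entry linking a lemma to sensorimotor information."""
        lemma: str
        perceptual_features: dict = field(default_factory=dict)  # e.g. shape, size
        motor_affordances: list = field(default_factory=list)    # e.g. grasp types
        derivational_family: list = field(default_factory=list)  # productivity links

    cup = EmbodiedLexicalEntry(
        lemma="cup",
        perceptual_features={"shape": "concave", "typical_size_cm": 10},
        motor_affordances=["power grasp", "precision grasp by the handle"],
        derivational_family=["cupful", "cup-shaped"],
    )

    # A language-understanding component could ground "cup" by retrieving these
    # features instead of treating the word as an isolated symbol.
    print(cup.motor_affordances)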

Multisensory Perception

Human perception is multisensory and multimodal. We focus our experimental research on two main axes: the fundamental mechanisms regulating multisensory event and time perception, and the role of language in its interaction with these mechanisms. The former comprises mainly research on synchrony, duration, and ordering, while the latter involves research on language-modulated perception of object saliency and attention, as well as co-speech exploration of object affordances through active touch. Our contributions provide ground for establishing an emerging research direction that incorporates language dynamically into the exploration of multisensory perception. We incorporate the corresponding findings into the development of intelligent artificial agents, thus bridging experimental research and technology development. Our experimental research spans a number of topics, including the 'unity effect', time-space synaesthesia, the role of action goals and effects in learning new activities, and co-speech exploratory acts and object affordances.
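
For readers unfamiliar with how synchrony judgments are typically summarized, the Python sketch below fits a cumulative Gaussian psychometric function to temporal order judgment data and reads off the point of subjective simultaneity (PSS) and the just noticeable difference (JND); it is a generic, textbook-style analysis with invented numbers, not CSRI's experimental data or pipeline.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Invented example data: stimulus onset asynchronies in ms (positive = audio
    # leads video) and the proportion of "audio first" responses at each SOA.
    soa = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240], dtype=float)
    p_audio_first = np.array([0.02, 0.10, 0.25, 0.40, 0.55, 0.70, 0.82, 0.93, 0.99])

    def psychometric(x, pss, sigma):
        """Cumulative Gaussian; pss is the point of subjective simultaneity."""
        return norm.cdf(x, loc=pss, scale=sigma)

    (pss, sigma), _ = curve_fit(psychometric, soa, p_audio_first, p0=(0.0, 50.0))
    jnd = norm.ppf(0.75) * sigma  # 75% threshold relative to the PSS
    print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")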