Across all language modalities, action verbs carry the core information that must be understood in order to make sense of a sentence, and that must be processed in instructions given to artificial systems. Nevertheless, no one-to-one correspondence can be established between action predicates and action concepts, and this poses serious problems for natural language understanding. In fact, the most frequent action verbs extend to actions belonging to different ontological types (e.g. crossing the fingers vs. crossing the street), and languages categorize action in their own ways (in Italian, respectively, incrociare le dita and attraversare la strada). Conversely, the variation of an action verb shows the range of activities that humans categorize in the same way, despite argument variation and the modification of the action schemas required for their performance (e.g. crossing the street / the river / the ocean vs. crossing the fingers / the arms / the legs).
The referential variation of an action verb can then be structured in a two-dimensional model: (a) a linguistic concept varies across different action categories (vertical variation); (b) each action category is a cognitive concept that can correspond to different individual performances (horizontal variation).
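The two dimensions of variation can be sketched as a nested mapping. This is only a hypothetical illustration: the category labels (`traverse_a_space`, `interlace_body_parts`) are invented for the example and are not drawn from the IMAGACT ontology.

```python
# Sketch of the two-dimensional variation model (hypothetical labels).
# Vertical variation: one verb ("cross") extends over several action
# categories, i.e. distinct cognitive concepts.
# Horizontal variation: each category covers many concrete performances.
action_verb = {
    "cross": {                         # linguistic concept
        "traverse_a_space": [          # action category (invented label)
            "cross the street",
            "cross the river",
            "cross the ocean",
        ],
        "interlace_body_parts": [      # action category (invented label)
            "cross the fingers",
            "cross the arms",
            "cross the legs",
        ],
    }
}

# Vertical variation: number of distinct categories for the verb.
vertical = len(action_verb["cross"])                         # 2 categories

# Horizontal variation: performances within a single category.
horizontal = len(action_verb["cross"]["traverse_a_space"])   # 3 instances
```

The Italian data in the text shows why the vertical dimension is language specific: the two categories under English "cross" split into two different verbs, incrociare and attraversare.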
The general objective of the MODELACT project is to specify a model of human action categorization, in terms of both linguistic and cognitive encoding. To this end, the project exploits the IMAGACT ontology, which identifies action categories (i.e. the entities of reference for linguistic concepts) by means of prototypical scenes. Scenes are language independent and their semantic value can be understood naturally, allowing appropriate encoding in different languages (English, Italian, Spanish and Mandarin Chinese in the first release).
Although it constitutes an essential milestone, the identification of the categories of action is not equivalent to their definition, and the ontology cannot be exploited for technological applications as is. MODELACT aims to move from the identification of action concepts to their definition, providing a model that can be shown valid for natural language processing and human-machine interaction purposes. This problem is approached from the semantic, the acquisitional, and the computational perspectives, which are integrated in the project workflow.
Modeling action concepts in accordance with machine requirements requires studying the link between action concepts and their physical performance through the body. To this end, the project exploits motion capture systems for the analysis of human actions. The language acquisition perspective is also essential for the study of the body-concept link: during child development, gestures anticipate linguistic behaviors and are in close continuity with the actions they represent. Moreover, the study of sign language gives a specific contribution to the definition of the action-act-word flow in the acquisition process.
The modeling of action categories distinguishes a pragmatic and a semantic level, and must explain both the human faculty of categorizing actions within the pragmatic continuum and the possibility for a single lexical expression to extend over a set of different concepts. The semantic model is essential for automatic disambiguation (e.g. giving instructions to a machine through natural language), while the pragmatic model is needed to build robotic systems that perform actions in accordance with human-centered design requirements.
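A toy sketch of the disambiguation use case: given a verb and its argument, select the intended action category. The lexicon and category names below are invented for illustration; this is not the IMAGACT disambiguation procedure, only an indication of why a semantic model is needed before a machine can act on a natural-language instruction.

```python
# Hypothetical verb-sense lexicon: (verb, direct object) -> action category.
# Both the pairs and the category labels are invented for this sketch.
LEXICON = {
    ("cross", "street"):  "traverse_a_space",
    ("cross", "river"):   "traverse_a_space",
    ("cross", "ocean"):   "traverse_a_space",
    ("cross", "fingers"): "interlace_body_parts",
    ("cross", "arms"):    "interlace_body_parts",
    ("cross", "legs"):    "interlace_body_parts",
}

def disambiguate(verb: str, obj: str) -> str:
    """Return the action category for a verb-object pair, or 'unknown'.

    A robot receiving "cross the street" must select the locomotion
    concept, not the body-configuration one, before planning an action.
    """
    return LEXICON.get((verb, obj), "unknown")
```

For example, `disambiguate("cross", "street")` yields `"traverse_a_space"`, while an unseen argument such as `disambiguate("cross", "desert")` falls through to `"unknown"`, showing that a flat lookup cannot generalize the way the project's concept model is meant to.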
Abstracts and presentations available
MODELACT Conference on Action, Language and Cognition
June 6-7, 2016