Presented by Cristóbal Pagán Cánovas, Ramón y Cajal Assistant Research Professor, Department of English Philology, University of Murcia; Alexander von Humboldt Fellow, Quantitative Linguistics, University of Tübingen
Abstract: How should we model the daunting complexity of human communication? When we communicate, do we merely add up multimodal information and verbal form-meaning pairings, facilitating gestalt recognition, or do we instead reuse flexible multimodal patterns that we have internalized through usage-based statistical inference? This talk, on co-speech gesture and semantic distinctions, explores the second possibility, which proposes a holistic view of the communicative signal as a multimodal flow of low-level features (articulatory, gestural, acoustic, etc.) that anchor meaning directly in action, with no need for intermediate, discrete units in the mind. I take advantage of the unprecedented opportunities offered by the Red Hen Lab’s NewsScape Library, which allows us to analyze multiple utterances of the same phrase or n-gram with manual and computational processing tools; of statistical techniques such as generalized linear models and generalized additive mixed models; and of theoretical frameworks such as conceptual integration, enactive cognition, and discriminative learning for implicit grammar. Worried about all the technicalities? Don’t be: I will walk you through the details and showcase what can be done with a minimal data science background such as mine.
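To give a flavor of the generalized linear models mentioned in the abstract, here is a minimal sketch of the kind of analysis one might run on utterance-level annotations. Everything in it is hypothetical: the data frame, its columns (duration_s, pitch_range, speaker, gesture), and the simulated outcome are invented placeholders standing in for measurements that could be extracted from a corpus like NewsScape, not the speaker’s actual data or pipeline.

```python
# Hypothetical sketch: a logistic GLM predicting whether a co-speech gesture
# accompanies an utterance of a given phrase, from a few utterance features.
# All variables and data below are simulated placeholders, not real corpus data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "duration_s": rng.uniform(0.2, 2.0, n),     # utterance duration in seconds
    "pitch_range": rng.normal(80.0, 20.0, n),   # F0 range in Hz
    "speaker": rng.choice(["A", "B", "C"], n),  # speaker identity
})

# Simulate a binary outcome: does a gesture co-occur with the utterance?
logit = -1.0 + 1.2 * df["duration_s"] + 0.01 * df["pitch_range"]
df["gesture"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit a binomial GLM with speaker as a categorical control.
model = smf.glm(
    "gesture ~ duration_s + pitch_range + C(speaker)",
    data=df,
    family=sm.families.Binomial(),
).fit()
print(model.summary())
```

A generalized additive mixed model would extend this idea by replacing the linear terms with smooth functions of the predictors and treating speaker as a random effect, which suits the nonlinear, repeated-measures structure of multimodal corpus data.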