Published: Sept. 12, 2018

The Department of Linguistics is pleased to welcome back Claire Bonial, a graduate of our own PhD program who now works for the Computational and Information Sciences Directorate at the Army Research Laboratory (ARL). In this Ling Circle talk, Dr. Bonial will share some of her work on human-robot dialogue. 

Title: "Event semantics in text constructions, vision, and human-robot dialogue"
When: Monday, October 15
Where: Hellems 237 

Abstract: “Ok, robot, make a right and take a picture” – a simple instruction like this exemplifies some of the obstacles in our research on human-robot dialogue: how are make and take to be interpreted? What precise actions should be executed? In this presentation, I explore three challenges: 1) interpreting the semantics of constructions in which verb meanings are extended in novel usages, 2) recognizing activities and events in images/video by employing information about the objects and participants typically involved, and 3) mapping natural language instructions to the physically situated actions executed by a robot. Across these distinct research areas, I leverage both Neo-Davidsonian styles of event representation and the principles of Construction Grammar to address these challenges of interpretation and execution.

Speaker Bio: Claire Bonial is a computational linguist specializing in the murky world of event semantics. In her efforts to make this world computationally tractable, she has collaborated on a variety of Natural Language Processing semantic role labeling projects, including PropBank, VerbNet, and Abstract Meaning Representation. A focused contribution to these projects has been her theoretical and psycholinguistic research on both the syntax and semantics of English light verb constructions (e.g., take a walk, make a mistake). Bonial received her Ph.D. in Linguistics and Cognitive Science in 2014 from the University of Colorado Boulder. She began her current position in the Computational and Information Sciences Directorate of the Army Research Laboratory (ARL) in 2015. Since joining ARL, she has expanded her research portfolio to include multi-modal representations of events (text and imagery/video), as well as human-robot dialogue.