Strand 1
Understanding and Facilitating Collaborations
Strand 1 researchers are working to answer the foundational question: What AI advances are needed to understand and facilitate collaborative learning conversations? Foundational AI research in natural language understanding, multimodal processing, and knowledge representation is needed to develop AI models that can autonomously monitor the unfolding collaborative learning discourse at multiple levels (the content, the conversational dynamics, gestures, and social signals such as facial expressions) and learn to generate appropriate conversational moves to be an effective partner in the learning conversation. Strand 1 develops mechanisms to sift through and integrate information from multiple student-AI conversations, both within a class and over time. The main areas of focus for Strand 1 are: Speech Processing and Diarization, Content Analysis & Dialogue Management (also known as MMIA: Multimodal Interactive Agent), and Situated Grounding.
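To make the idea of multi-level monitoring concrete, here is a minimal Python sketch of how a single utterance might carry annotations at each of these levels. The class and field names are hypothetical and greatly simplified; this is not the institute's actual data model.

from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical, simplified representation of one utterance in a collaborative
# learning conversation, annotated at the levels described above.
@dataclass
class UtteranceAnnotation:
    speaker_id: str                  # who is speaking (from diarization)
    start: float                     # start time in seconds
    end: float                       # end time in seconds
    text: str                        # ASR transcript of the utterance
    content_concepts: List[str] = field(default_factory=list)  # key lesson concepts detected
    dialogue_act: Optional[str] = None                          # conversational move, e.g. "question"
    gestures: List[str] = field(default_factory=list)           # e.g. "pointing", "nodding"
    social_signals: List[str] = field(default_factory=list)     # e.g. "smiling", "confused"

# A session is then an ordered list of annotated utterances that downstream
# components (dialogue management, situated grounding) can reason over.
Session = List[UtteranceAnnotation]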
The Speech Processing and Diarization team is working to enable our AI Partners to better understand students as they talk, and to identify who is speaking and when, by improving Automatic Speech Recognition and speaker diarization models for classrooms.
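As a simplified sketch of the "who said what, and when" problem, the Python snippet below pairs timestamped ASR output with speaker turns from a diarization model using temporal overlap. The data layout, function names, and toy values are hypothetical, and real classroom audio (overlapping speech, noise, many voices) is far harder than this example suggests.

from typing import List, Tuple

# Hypothetical inputs: speaker turns from a diarization model and timestamped
# segments from an ASR model. The tuple layouts are illustrative only.
SpeakerTurn = Tuple[str, float, float]   # (speaker_id, start_sec, end_sec)
AsrSegment = Tuple[str, float, float]    # (text, start_sec, end_sec)

def overlap(a_start: float, a_end: float, b_start: float, b_end: float) -> float:
    """Length of the temporal overlap between two intervals, in seconds."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def attribute_speech(turns: List[SpeakerTurn],
                     segments: List[AsrSegment]) -> List[Tuple[str, str]]:
    """Assign each transcribed segment to the speaker turn it overlaps most."""
    attributed = []
    for text, seg_start, seg_end in segments:
        best = max(turns, key=lambda t: overlap(seg_start, seg_end, t[1], t[2]), default=None)
        speaker = best[0] if best and overlap(seg_start, seg_end, best[1], best[2]) > 0 else "unknown"
        attributed.append((speaker, text))
    return attributed

# Toy example:
turns = [("student_A", 0.0, 3.5), ("student_B", 3.5, 7.0)]
segments = [("I think the gear turns left", 0.2, 3.0), ("No, it turns right", 3.8, 6.5)]
print(attribute_speech(turns, segments))
# [('student_A', 'I think the gear turns left'), ('student_B', 'No, it turns right')]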
The Content Analysis & Dialogue Management (MMIA) theme is dedicated to helping our AI Partners make sense of what they are hearing and seeing and to determining optimal interactions between students and teachers. This work helps the partners understand the key content words and concepts uttered by students.
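One small piece of content analysis can be illustrated with a Python sketch that spots curriculum concepts in a transcribed utterance using a lesson-specific lexicon. The lexicon entries, concept labels, and function name are hypothetical placeholders for the richer language models this theme develops.

import re
from typing import Dict, Set

# Hypothetical lesson vocabulary mapping surface terms to curriculum concepts.
CONCEPT_LEXICON: Dict[str, str] = {
    "gear": "mechanical advantage",
    "ratio": "proportional reasoning",
    "force": "Newtonian mechanics",
}

def detect_concepts(utterance: str,
                    lexicon: Dict[str, str] = CONCEPT_LEXICON) -> Set[str]:
    """Return the curriculum concepts whose trigger terms appear in the utterance."""
    tokens = re.findall(r"[a-z]+", utterance.lower())
    return {concept for term, concept in lexicon.items() if term in tokens}

print(detect_concepts("The big gear changes the ratio, right?"))
# e.g. {'mechanical advantage', 'proportional reasoning'}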
Students and teachers establish common ground when interacting with one another through behavioral and verbal cues, as well as through shared prior goals, expectations, and beliefs. The Situated Grounding theme is identifying this common ground through discourse and gesture.
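A minimal Python sketch of one such grounding cue: resolving a deictic word such as "this" by pairing the utterance with the pointing gesture closest to it in time. The event format, time window, and function name are hypothetical; actual situated grounding combines many more discourse and gesture signals than this single heuristic.

from typing import List, Optional, Tuple

# Hypothetical gesture events from a vision model: (target_object, timestamp_sec).
GestureEvent = Tuple[str, float]

DEICTIC_WORDS = {"this", "that", "here", "there"}

def resolve_referent(utterance: str, utterance_time: float,
                     gestures: List[GestureEvent], window: float = 2.0) -> Optional[str]:
    """If the utterance contains a deictic word, return the object pointed at
    closest in time (within `window` seconds) as one simple grounding cue."""
    if not DEICTIC_WORDS.intersection(utterance.lower().split()):
        return None
    nearby = [(abs(t - utterance_time), obj) for obj, t in gestures
              if abs(t - utterance_time) <= window]
    return min(nearby)[1] if nearby else None

gestures = [("small_gear", 11.8), ("worksheet", 20.3)]
print(resolve_referent("put this on the axle", 12.1, gestures))  # small_gear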