Our speaker, currently a post-doctoral scholar in the CLEAR group here at CU-Boulder, will give the last LingCircle talk of the semester, sharing his work on neural modeling.
Neural Models for Word Inflection and Phonology
Monday, April 30, 2018
3:00-4:30pm
Linguistics Conference Room (Hellems 285)
Abstract:
Numerous recent studies have applied neural network methods to traditional language processing tasks, often with remarkable success. Neural network models have pushed the state of the art in tasks like part-of-speech tagging, syntactic parsing, and machine translation to a new level. For linguistics and natural language processing, this both opens new avenues of computational analysis into problems that have been beyond the effective reach of traditional methods and raises entirely new research questions.
In this talk, I discuss the deep learning success story and its implications for natural language processing and for linguistics. The first part focuses on applying deep learning methods to the task of learning data-driven word inflection, a task vital for natural language generation in languages with rich inflectional systems. The second part focuses on phonology and morphophonology, and particularly on applying distributional techniques, normally used in lexical semantics, to the study of phoneme representations. This is of interest both from a theoretical point of view, since it supports the structuralist hypothesis that all levels of language can be defined in distributional terms, and from the standpoint of better understanding why neural networks succeed. I show that many articulatory features, such as vowel quality and consonant sonority, turn out to have distributional correlates, as do assimilation and harmony processes; neural models pick up on these patterns very quickly, which partly explains their performance.
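The distributional idea in the second part can be illustrated with a minimal sketch: build raw co-occurrence vectors for phonemes and compare them with cosine similarity, the same recipe used for word vectors in lexical semantics. The toy corpus, the one-symbol context window, and the phoneme inventory below are illustrative assumptions, not the speaker's actual data or method.

```python
# Minimal sketch: distributional phoneme vectors from co-occurrence counts.
# Vowels tend to occur in consonantal contexts (and vice versa), so vowel
# vectors should resemble each other more than they resemble consonants.
import numpy as np

# Hypothetical "corpus" of words as phoneme sequences.
corpus = [
    list("kala"), list("talo"), list("lato"),
    list("kilo"), list("tili"), list("kota"),
]

phonemes = sorted({p for word in corpus for p in word})
index = {p: i for i, p in enumerate(phonemes)}

# Count neighbors within a +/-1 window; each phoneme's row of counts
# serves as its distributional vector.
counts = np.zeros((len(phonemes), len(phonemes)))
for word in corpus:
    for i, p in enumerate(word):
        for j in (i - 1, i + 1):
            if 0 <= j < len(word):
                counts[index[p], index[word[j]]] += 1

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# Vowel vs. vowel similarity should come out higher than vowel vs. consonant.
print(cosine(counts[index["a"]], counts[index["o"]]))  # vowel / vowel
print(cosine(counts[index["a"]], counts[index["k"]]))  # vowel / consonant
```

On realistic data one would swap the raw counts for PPMI-weighted or neural embeddings, but even this toy version shows how a class distinction like vowel vs. consonant falls out of distribution alone.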