
Stats, Optimization, and Machine Learning Seminar - Bo Waggoner

Bo Waggoner
Department of Computer Science, University of Colorado Boulder

Toward a Characterization of Loss Functions for Distribution Learning

 

A common machine-learning task is to learn a probability distribution over a very large domain. Examples include natural language processing and generative adversarial networks. But how should the learned distribution be evaluated? A natural approach is to draw test samples and apply a loss function. However, no loss function, not even the popular log loss, can satisfy a set of natural axioms (inspired by the literature on evaluating human forecasters). We show this impossibility can be overturned, and many simple loss functions can enjoy strong usefulness guarantees, by using "one weird trick" -- calibration, a classical forecasting requirement. These results imply that requiring learning algorithms to be calibrated, a kind of regularization, allows us to provably evaluate them while picking a loss function tailored to the setting.
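To make the evaluation setup in the abstract concrete, here is a minimal Python sketch (toy numbers and a small domain, purely illustrative and not the construction from the talk) of scoring a learned distribution by log loss on test samples and inspecting its calibration, i.e., whether items assigned probability roughly q actually occur with empirical frequency roughly q:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy discrete domain (the talk concerns very large domains;
    # a small one keeps this sketch runnable). Both distributions are made up.
    domain_size = 5
    true_p = np.array([0.40, 0.30, 0.15, 0.10, 0.05])   # unknown data distribution
    learned_q = np.array([0.35, 0.30, 0.20, 0.10, 0.05])  # output of some learner

    # Draw test samples from the true distribution and score the learner
    # with the (empirical) log loss: the average of -log q(x) over samples x.
    samples = rng.choice(domain_size, size=10_000, p=true_p)
    log_loss = -np.log(learned_q[samples]).mean()
    print(f"empirical log loss: {log_loss:.4f}")

    # Rough calibration check: compare predicted probability q(x)
    # to the empirical frequency of x among the test samples.
    for x in range(domain_size):
        empirical = (samples == x).mean()
        print(f"x={x}: predicted {learned_q[x]:.2f}, empirical {empirical:.2f}")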

Joint work with Nika Haghtalab and Cameron Musco.

Speaker Bio: Bo Waggoner is a new Assistant Professor of Computer Science at CU Boulder working at the interface of game theory, machine learning, and theoretical CS. Prior to Colorado, he held postdoc positions at Microsoft Research NYC and the University of Pennsylvania, and received his PhD from Harvard in 2016.