Seminar 31/5: ‘Representation Costs of Linear Neural Networks: Analysis and Design’

On May 31st, the seminar series welcomed Prof. Mina Karzand of the University of California, Davis. Prof. Karzand presented work, first introduced at NeurIPS 2021, on the representation costs of linear neural networks.

Title:

Representation Costs of Linear Neural Networks: Analysis and Design

Abstract:

For different parameterizations (mappings from parameters to predictors), we study the regularization cost in predictor space induced by l_2 regularization on the parameters (weights). We focus on linear neural networks as parameterizations of linear predictors and identify the representation cost of certain sparse linear ConvNets and residual networks. In order to get a better understanding of how the architecture and parameterization affect the representation cost, we also study the reverse problem, identifying which regularizers on linear predictors (e.g., l_p quasi-norms, group quasi-norms, the k-support norm, elastic net) can be the representation cost induced by simple l_2 regularization, and designing the parameterizations that do so.
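As a brief illustration of the representation cost the abstract refers to (a standard example from this line of work, not a summary of the specific results presented in the talk): for a parameterization \theta \mapsto f_\theta, the representation cost of a linear predictor f(x) = w^\top x is

R(w) = \min_{\theta \,:\, f_\theta = f} \|\theta\|_2^2 .

For a depth-2 "diagonal" linear network, w = u \odot v with \theta = (u, v), the AM-GM inequality gives u_i^2 + v_i^2 \ge 2|u_i v_i| = 2|w_i|, with equality when |u_i| = |v_i| = \sqrt{|w_i|}, so

R(w) = \min_{u \odot v = w} \big( \|u\|_2^2 + \|v\|_2^2 \big) = 2\|w\|_1 ,

i.e., plain l_2 weight decay on the parameters of this architecture induces sparsity-promoting l_1 regularization on the linear predictor it represents.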

Biography:

Mina Karzand is an assistant professor in the Department of Statistics at the University of California, Davis. Her research interests include statistical learning theory, generalization in overparameterized models, online learning, and learning graphical models. Prior to coming to Davis, she was a research assistant professor at the Toyota Technological Institute at Chicago, a postdoctoral research associate at the University of Wisconsin-Madison, and a postdoctoral associate at MIT. She received her PhD in Electrical Engineering and Computer Science from MIT.
