Presentation: Sparse Additive Modeling and Misspecified Smoothness
Speaker: Noah Simon, Ph.D., Assistant Professor of Biostatistics, University of Washington
Abstract: Predictive methods must balance three objectives: predictive performance, computational tractability, and, in many applications, interpretability. In this talk we will discuss a broad class of models which effectively balance these objectives in high-dimensional problems: sparse additive models induced by combining a structural semi-norm and a sparsity penalty. These are more flexible than the standard penalized linear model, but maintain its interpretability and computational tractability. We will show when these penalties can and cannot be combined to induce the desired structure and sparsity, and we will give an efficient algorithm for fitting a wide class of these models. Finally, we will give a rate of convergence for these estimators, which, under some conditions, matches the minimax lower bound.
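To fix ideas, the following is a minimal numpy sketch of the kind of estimator the abstract describes, not the speaker's actual algorithm: each feature is expanded in a small polynomial basis (a stand-in for a spline basis), smoothness is encouraged by a simple ridge term on the basis coefficients (a stand-in for a structural semi-norm), and sparsity across features comes from a group-lasso penalty, fit by proximal gradient descent with groupwise soft-thresholding. All names and tuning values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 10, 5             # samples, features, basis functions per feature

# Design: each feature expanded in a low-degree polynomial basis
# (a cheap stand-in for a spline basis).
X = rng.uniform(-1, 1, size=(n, p))
B = np.concatenate([X[:, [j]] ** np.arange(1, k + 1) for j in range(p)], axis=1)
B = (B - B.mean(0)) / B.std(0)   # center and scale basis columns

# True signal depends on only the first two features.
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)
y = y - y.mean()

lam = 0.2                        # sparsity (group lasso) weight -- illustrative
mu = 0.1                         # smoothness (ridge) weight -- illustrative
step = 1.0 / (np.linalg.norm(B, 2) ** 2 / n + mu)   # 1 / Lipschitz constant

beta = np.zeros(p * k)
for _ in range(500):
    # Gradient of the smooth part: (1/2n)||y - B beta||^2 + (mu/2)||beta||^2
    grad = -B.T @ (y - B @ beta) / n + mu * beta
    z = beta - step * grad
    for j in range(p):           # proximal step: groupwise soft-threshold
        g = z[j * k:(j + 1) * k]
        nrm = np.linalg.norm(g)
        if nrm > 0:
            z[j * k:(j + 1) * k] = max(0.0, 1 - step * lam / nrm) * g
    beta = z

active = [j for j in range(p) if np.linalg.norm(beta[j * k:(j + 1) * k]) > 1e-8]
print("selected features:", active)
```

The group penalty zeroes out entire per-feature coefficient blocks, so irrelevant features are dropped while the retained components remain smooth univariate functions, which is what preserves interpretability.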
The asymptotic behavior of these estimators has been studied primarily in the case where the smoothness induced by the penalty matches the smoothness of the true underlying regression function. In this talk we will also give upper bounds on convergence rates when our penalties induce too much smoothness (e.g., when estimating a non-differentiable piecewise constant function with a smoothing spline). In addition, we empirically explore when these bounds do and do not appear to be tight.
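The misspecification in the parenthetical example can be seen in a small simulation. The sketch below, an illustrative assumption rather than the speaker's experiment, uses a discrete second-difference roughness penalty as a stand-in for the cubic-smoothing-spline semi-norm and fits it to a one-jump step function: the smoothness penalty cannot represent the jump, so the error concentrates near the discontinuity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = np.linspace(0, 1, n)
f = (x > 0.5).astype(float)          # piecewise constant truth with one jump
y = f + 0.1 * rng.standard_normal(n)

# Discrete analogue of a smoothing spline: minimize ||y - g||^2 + lam ||D2 g||^2,
# where D2 is the second-difference operator (a stand-in for the cubic
# smoothing-spline roughness penalty).  lam is an illustrative choice.
D2 = np.diff(np.eye(n), 2, axis=0)
lam = 1e4
g = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

err = np.abs(g - f)
near = err[np.abs(x - 0.5) < 0.05].mean()   # error near the jump
far = err[np.abs(x - 0.5) > 0.2].mean()     # error away from the jump
print(f"mean abs error near jump: {near:.3f}, away from jump: {far:.3f}")
```

Away from the jump the penalized fit averages the noise effectively, but near the jump it smears the discontinuity over the smoother's bandwidth, which is the bias mechanism behind the slower convergence rates the talk bounds.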