Sara van de Geer - Sharp Oracle Inequalities for Non-Convex Loss
From Kathryn Gentilello on September 7th, 2018
There will be three lectures, which in principle are independent units. Their common theme is exploiting sparsity in high-dimensional statistics. Sparsity means that the statistical model is allowed to have a great many parameters, but that most of these parameters are believed to be irrelevant. We let the data themselves decide which parameters to keep by applying a regularization method. The aim is then to derive so-called sparsity oracle inequalities.
In the first lecture, we consider a statistical procedure called M-estimation. "M" stands here for "minimum": one tries to minimize a risk function in order to obtain the best fit to the data. Least squares is a prominent example. Regularization is done by adding a sparsity-inducing penalty that discourages too good a fit to the data. An example is the L1-penalty, which together with least squares gives rise to an estimation procedure called the Lasso. We address the question: why does the L1-penalty lead to sparsity oracle inequalities, and how does this generalize to other norms?
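To make the abstract concrete: the Lasso minimizes the penalized least-squares objective (1/(2n))||y - Xb||^2 + lambda ||b||_1, and the L1-penalty sets many coefficients exactly to zero. The following is a minimal illustrative sketch (not the lecturer's material) solving this objective by coordinate descent with the soft-thresholding operator; all function names and the simulated data are hypothetical choices for illustration.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding: the proximal map of the L1 norm.
    # Shrinks z toward zero by t, setting small values exactly to zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/(2n))||y - X b||^2 + lam * ||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed from the fit.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta

# Hypothetical sparse example: only 2 of 10 coefficients are nonzero.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.zeros(10)
beta_true[0], beta_true[1] = 3.0, -2.0
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = lasso_cd(X, y, lam=0.1)
```

For lambda at or above max_j |X_j^T y| / n, the estimate is exactly zero, while for moderate lambda the relevant coefficients are recovered (with a small shrinkage bias) and the irrelevant ones are driven to or near zero; this exact-zero behavior is what "the data decide which parameters to keep" means for the Lasso.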