Gradient Descent: The Mother of All Algorithms? - Aleksander Mądry
From Kathryn Gentilello on May 22nd, 2018
More than half a century of research in theoretical computer science has brought us a great wealth of advanced algorithmic techniques. These techniques can be combined in a variety of ways to produce sophisticated, often beautifully elegant algorithms. This diversity of methods is truly stimulating and intellectually satisfying. But is it also necessary?
In this talk, I will address this question by discussing one of the most fundamental continuous optimization techniques, if not the most fundamental: the gradient descent method. I will briefly describe how this method can be applied, sometimes in a quite non-obvious manner, to a number of classic algorithmic tasks, such as the maximum flow problem, the bipartite matching problem, the k-server problem, as well as matrix scaling and balancing. The resulting perspective will provide us with a broad, unifying view of this diverse set of problems — a perspective that was key to making the first progress in decades on each of these tasks.

https://mediaspace.gatech.edu/media/madry.mpg/1_19peupgr
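For readers unfamiliar with the method the talk is built around, here is a minimal sketch of gradient descent in its simplest form: repeatedly step in the direction opposite the gradient of the objective. The function names, the quadratic objective, and the fixed step size are illustrative choices, not drawn from the talk (which applies far more sophisticated variants to combinatorial problems).

```python
def gradient_descent(grad, x0, step=0.1, iters=100):
    """Fixed-step gradient descent: x <- x - step * grad(x), repeated."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Minimize the convex quadratic f(x) = (x - 3)^2, whose gradient is 2(x - 3).
# The iterates converge geometrically toward the minimizer x* = 3.
minimizer = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The talk's point is that this elementary update rule, suitably instantiated, underlies state-of-the-art algorithms for problems that look nothing like continuous optimization.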