Graduate Seminar on Numerical Simulation, SS 19
The approximation theory of deep learning
Content: One of the most important topics in state-of-the-art mathematical research on machine learning is the analysis of complex deep learning models. Here, the capacity - or approximation power - of a deep learning model class is an important indicator of the power of the underlying algorithm. We will discuss recent results on the approximation power of deep ReLU networks and variants thereof; sufficient approximation power proves to be a necessary requirement for their success in deep learning.
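To give a concrete flavour of such results, below is a minimal NumPy sketch of Yarotsky's well-known construction: the square function x^2 is approximated on [0,1] by a deep ReLU network built from repeated compositions of a "hat" function, with error decaying exponentially in the depth. The function and variable names are illustrative, not taken from any seminar material.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def hat(x):
        # The "hat" function g(x) = 2 min(x, 1 - x) on [0, 1],
        # written as a shallow ReLU network with three units:
        # g(x) = 2 relu(x) - 4 relu(x - 1/2) + 2 relu(x - 1)
        return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

    def approx_square(x, m):
        # Yarotsky's construction:
        #   f_m(x) = x - sum_{s=1}^m g_s(x) / 2^(2s),
        # where g_s is the s-fold composition of the hat function.
        # Each composition adds network depth, and on [0, 1] one has
        # |f_m(x) - x^2| <= 2^(-2m-2).
        out = x.copy()
        g = x.copy()
        for s in range(1, m + 1):
            g = hat(g)                # one more layer of depth
            out -= g / 2 ** (2 * s)
        return out

    x = np.linspace(0.0, 1.0, 1001)
    for m in (1, 3, 6):
        err = np.max(np.abs(approx_square(x, m) - x ** 2))
        print(f"m = {m}: sup error ~ {err:.2e}")

Running the sketch shows the sup-norm error dropping by a factor of four per extra composition, i.e. exponentially in depth, which is the prototype of the depth-efficiency results discussed in the seminar.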
Schedule
Each session starts at 14:15 in room 2.035, Endenicher Allee 19b.
- 23.05.
  Kai Echelmeyer: Introduction to Deep Networks and the universal approximation theorem
  Juleana Villegas: Bounds on the network width
- 06.06.
  Marco Ronchese: Bounds for DNN approximations of Sobolev smooth functions
  Rico Krause: Bounds for DNN approximations of piecewise smooth functions
- 27.06.
  Lennert de Smet: Tensor Networks 1
  Jure Taslak: Tensor Networks 2 (Handout)
  Jing Li: Information Bottleneck and DNNs
- 04.07.
  Dinesh Kannan: DNNs and differential equations
  Le Anh Dung: Understanding Convolutional Networks