
Beyond Lazy Training for Over-parameterized Tensor Decomposition

Jun 10, 2021
37:13

Rong Ge, Duke University
Mini-symposium on Low-Rank Models and Applications
http://www.fields.utoronto.ca/activities/20-21/constraint-rank
Date and Time: Wednesday, June 9, 2021, 11:00am to 11:40am

Abstract: Over-parametrization is an important technique in training neural networks. In both theory and practice, training a larger network allows the optimization algorithm to avoid bad local optima. This talk studies a closely related tensor decomposition problem: given an l-th order tensor in (R^d)^{⊗l} of rank r (where r ≪ d), can variants of gradient descent find a rank-m decomposition where m > r? We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least m = Ω(d^{l−1}) components, while a variant of gradient descent can find an approximate tensor when m = O(r^{2.5l} log d). Our results show that gradient descent on an over-parametrized objective can go beyond the lazy training regime and exploit low-rank structure in the data.
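To make the setup concrete, here is a minimal sketch of the over-parameterized objective for a symmetric third-order tensor (l = 3): plain gradient descent on ||Σ_j x_j^{⊗3} − T||_F², with m components fitting a rank-r target. This is only an illustration of the objective, not the specific gradient-descent variant analyzed in the talk; the dimensions, learning rate, step count, and small initialization scale are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: dimension d, true rank r, over-parameterized width m > r.
d, r, m = 20, 3, 30
lr, steps = 0.05, 3000

# Ground-truth rank-r symmetric tensor T = sum_i a_i ⊗ a_i ⊗ a_i.
A = rng.standard_normal((r, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)
T = np.einsum('ia,ib,ic->abc', A, A, A)

# Over-parameterized components with small initialization (assumed here;
# small init is one way to leave the lazy/NTK-like regime).
X = 0.01 * rng.standard_normal((m, d))

for t in range(steps):
    # Residual tensor R = sum_j x_j^{⊗3} − T.
    R = np.einsum('ia,ib,ic->abc', X, X, X) - T
    # Gradient of ||R||_F^2 w.r.t. component x_j is 6 * R(x_j, x_j, ·),
    # using the symmetry of R.
    G = 6 * np.einsum('abc,ja,jb->jc', R, X, X)
    X -= lr * G
    if t % 500 == 0:
        print(f"step {t}: loss = {np.linalg.norm(R)**2:.6f}")

Under small initialization the dynamics initially resemble a tensor power iteration driven by −T, so components tend to align with the low-rank directions before the loss drops, which is the kind of structure-exploiting behavior the abstract contrasts with lazy training.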

