Leonid Berlyand, Department of Mathematics, Penn State University
Enhancing Accuracy in Deep Learning Using Random Matrix Theory
We discuss applications of random matrix theory (RMT) to the training of deep neural networks (DNNs). Our focus is on the pruning of DNN parameters, guided by the Marchenko-Pastur spectral approach from RMT. Our numerical results show that this pruning drastically reduces the number of parameters without reducing the accuracy of DNNs and convolutional neural networks (CNNs). Moreover, pruning of fully connected DNNs increases the accuracy and decreases the variance over random initializations of the DNN parameters. Finally, we provide a theoretical understanding of these results by proving the Pruning Theorem, which establishes a rigorous relation between the accuracies of the pruned and non-pruned DNNs.
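To give a concrete sense of the idea, the following is a minimal sketch of Marchenko-Pastur-guided pruning: singular values of a weight matrix that fall inside the MP bulk (the spectrum expected of a pure-noise random matrix) are treated as noise and removed, while those above the bulk edge are kept as signal. This is an illustrative sketch, not the authors' exact algorithm; the function name `mp_prune` and the choice of noise scale `sigma` are assumptions.

```python
import numpy as np

def mp_prune(W, sigma=1.0):
    """Zero out singular values of W inside the Marchenko-Pastur bulk.

    For an n-by-m matrix with i.i.d. entries of variance sigma^2, the
    singular values of W / sqrt(max(n, m)) concentrate below the MP
    bulk edge sigma * (1 + sqrt(min(n, m) / max(n, m))). Singular
    values above that edge are kept as signal; the rest are pruned.
    Illustrative sketch only.
    """
    n, m = W.shape
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    ratio = min(n, m) / max(n, m)            # MP aspect ratio
    edge = sigma * (1 + np.sqrt(ratio))      # upper edge of the MP bulk
    keep = s / np.sqrt(max(n, m)) > edge     # signal singular values
    s_pruned = np.where(keep, s, 0.0)
    return (U * s_pruned) @ Vt, int(keep.sum())
```

On a pure-noise Gaussian matrix essentially all singular values lie inside the bulk and are pruned, while a planted low-rank "signal" component produces a singular value above the edge that survives the pruning.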
This is joint work with E. Sandier (U. Paris 12), Y. Shmalo (PSU student), and L. Zhang (Jiao Tong U.). The talk will provide a brief mathematical introduction to DNNs; no prior knowledge of DNNs is necessary.
More information about this speaker may be found at