Optimization Online

Lossless Compression of Deep Neural Networks

Thiago Serra (thiago.serra***at***bucknell.edu)
Abhinav Kumar (abhinav.kumar***at***cs.utah.edu)
Srikumar Ramalingam (srikumar***at***cs.utah.edu)

Abstract: Deep neural networks have been successful in many predictive modeling tasks, such as image and language recognition, where large neural networks are often used to obtain good accuracy. Consequently, it is challenging to deploy these networks under limited computational resources, such as in mobile devices. In this work, we introduce an algorithm that removes units and layers of a neural network without changing the output that it produces, which implies a lossless compression. This algorithm, which we denote as LEO (Lossless Expressiveness Optimization), relies on Mixed-Integer Linear Programming (MILP) to identify Rectified Linear Units (ReLUs) with linear behavior over the input domain. By using L1 regularization to induce such behavior, we can benefit from training over a larger architecture than we would later use in the environment where the trained neural network is deployed.
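The pruning criterion the abstract describes, finding ReLUs whose pre-activation keeps a single sign over the whole input domain so that the unit is either always zero or always affine, can be illustrated with a small MILP. The sketch below is not the authors' LEO implementation: the toy weights, the [-1, 1] input box, and the big-M constant are assumptions made for the example, and PuLP with its bundled CBC solver merely stands in for whatever MILP solver one prefers.

# Minimal sketch (assumed toy example, not the authors' code): test whether
# one ReLU in the second hidden layer of a small network is "stable" over
# the input box [-1, 1]^2, i.e., whether its pre-activation keeps one sign.
import pulp

W1 = [[1.0, -2.0], [0.5, 1.5]]   # first-layer weights (assumed toy values)
b1 = [0.2, -0.3]                 # first-layer biases (assumed)
w2 = [1.0, -1.0]                 # weights into the unit under test (assumed)
b2 = -3.5                        # bias of the unit under test (assumed)
M = 10.0                         # big-M constant; must upper-bound |pre-activations|

def preactivation_bound(sense):
    # Maximize or minimize the tested unit's pre-activation over all inputs.
    prob = pulp.LpProblem("relu_stability", sense)
    x = [pulp.LpVariable(f"x{i}", -1.0, 1.0) for i in range(2)]      # inputs
    h = [pulp.LpVariable(f"h{j}", 0.0, M) for j in range(2)]         # ReLU outputs
    z = [pulp.LpVariable(f"z{j}", cat="Binary") for j in range(2)]   # ReLU phases
    for j in range(2):
        pre = pulp.lpSum(W1[j][i] * x[i] for i in range(2)) + b1[j]
        # Standard big-M encoding of h_j = max(0, pre_j):
        prob += h[j] >= pre
        prob += h[j] <= pre + M * (1 - z[j])
        prob += h[j] <= M * z[j]
    prob += pulp.lpSum(w2[j] * h[j] for j in range(2)) + b2  # objective
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective)

ub = preactivation_bound(pulp.LpMaximize)
lb = preactivation_bound(pulp.LpMinimize)
if ub <= 0:
    print(f"stably inactive (max pre-activation {ub:.2f}): always outputs 0, removable")
elif lb >= 0:
    print(f"stably active (min pre-activation {lb:.2f}): affine, can be folded away")
else:
    print("unstable: the unit must be kept")

With these assumed numbers the maximum pre-activation works out to -0.30, so the unit is stably inactive and can be removed without affecting any output of the network. The L1 regularization mentioned in the abstract pushes trained networks toward having many such stable units, which is why training a larger architecture and then compressing it losslessly can pay off.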

Keywords: Deep Learning; Mixed-Integer Linear Programming; Neural Network Pruning; Neuron Stability; Rectified Linear Unit

Category 1: Applications -- Science and Engineering (Other)

Category 2: Integer Programming ((Mixed) Integer Linear Programming)

Citation:

Download: [PDF]

Entry Submitted: 01/01/2020
Entry Accepted: 01/01/2020
Entry Last Modified: 01/24/2020
