On the Number of Linear Regions of Deep Neural Networks

Authors: Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio

arXiv: 1402.1869v1 [stat.ML]
12 pages

Abstract: We study the complexity of functions computable by deep feedforward neural networks with piecewise linear activations in terms of the number of regions of linearity that they have. Deep networks are able to sequentially map portions of each layer's input space to the same output. In this way, deep models compute functions with a compositional structure that is able to re-use pieces of computation exponentially often in terms of the network's depth. This note investigates the complexity of such compositional maps and contributes new theoretical results regarding the advantage of depth for neural networks with piecewise linear activation functions.
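To make the depth claim concrete, here is a minimal sketch in Python; it is not code from the paper, and the helper names fold_exact and count_regions are illustrative. It uses the standard one-dimensional "folding" construction, in the spirit of the paper's argument: the tent map g(y) = 2*relu(y) - 4*relu(y - 1/2) on [0, 1] is computable by a single hidden layer of two rectifier units, and composing L such layers yields a piecewise linear function with 2^L linear regions. The sketch tracks the breakpoints of the composition exactly rather than sampling.

```python
# A minimal, self-contained sketch (not the paper's construction): it counts
# the linear regions of a composition of L "folding" layers exactly, by
# tracking the breakpoints of the resulting piecewise linear function.

def fold_exact(xs, ys):
    """Apply the tent map g(y) = 2y for y <= 1/2 and g(y) = 2 - 2y otherwise
    to the piecewise linear function with breakpoints xs and values ys.
    On [0, 1], g is computable by one hidden layer of two rectifier units:
    g(y) = 2*relu(y) - 4*relu(y - 1/2)."""
    new_xs, new_ys = [xs[0]], [ys[0]]
    for i in range(len(xs) - 1):
        x0, x1, y0, y1 = xs[i], xs[i + 1], ys[i], ys[i + 1]
        # If a segment crosses the kink of g at y = 1/2, the composition
        # gains a new breakpoint; insert it at the crossing point.
        if (y0 - 0.5) * (y1 - 0.5) < 0:
            t = (0.5 - y0) / (y1 - y0)
            new_xs.append(x0 + t * (x1 - x0))
            new_ys.append(0.5)
        new_xs.append(x1)
        new_ys.append(y1)
    g = lambda y: 2.0 * y if y <= 0.5 else 2.0 - 2.0 * y
    return new_xs, [g(y) for y in new_ys]

def count_regions(xs, ys):
    """Count maximal intervals on which the function is affine."""
    slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
              for i in range(len(xs) - 1)]
    return 1 + sum(abs(a - b) > 1e-9 for a, b in zip(slopes, slopes[1:]))

xs, ys = [0.0, 1.0], [0.0, 1.0]  # start from the identity on [0, 1]
for L in range(1, 11):
    xs, ys = fold_exact(xs, ys)
    print(f"depth {L}: {count_regions(xs, ys)} linear regions (2**{L} = {2 ** L})")
```

Running the loop prints 2, 4, 8, ..., 1024: each folding layer doubles the region count, because every linear piece of the current function crosses the kink of the tent map once and is thus split in two. By contrast, a single hidden layer of 2L rectifier units on a one-dimensional input contributes at most 2L breakpoints and hence at most 2L + 1 linear pieces, which illustrates the exponential gap between deep and shallow networks that the abstract refers to.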

Submitted to arXiv on 08 Feb. 2014
