Performance capacity of a complex neural network

Physics 16, 108

A new theory allows researchers to determine the ability of arbitrarily complex neural networks to perform recognition tasks on data with complex structure.


Figure 1: Researchers have developed a method that allows them to determine the performance capacity of a complex neural network.

Every day, our brain recognizes and discriminates among the many thousands of sensory signals it encounters. Today’s best AI models, many of which are inspired by neural circuits in the brain, have similar capabilities. For example, the deep convolutional neural networks used for object recognition and classification are inspired by the layered structure of the visual cortex. However, scientists have yet to develop a comprehensive mathematical understanding of how biological or artificial neural networks achieve this recognition capability. Now SueYeon Chung of the Flatiron Institute in New York and her colleagues have developed a more detailed description of how the geometric representation of objects in biological and artificial neural networks relates to the networks’ performance in classification tasks [1] (Fig. 1). The researchers show that their theory can accurately estimate the classification capacity of an arbitrarily complex neural network, a problem that other methods have struggled to solve.

Neural networks provide coarse-grained descriptions of the complex circuitry of biological neurons in the brain. They consist of highly simplified neurons that signal to each other via synapses, connections between pairs of neurons. The strengths of synaptic connections change as a network is trained to perform a particular task.

During a task, groups of neurons receive input from many other neurons in the network and fire when their activity exceeds a certain threshold. This activation produces a pattern of activity, which can be represented as a point in a high-dimensional state space in which each neuron corresponds to a different dimension. The collection of activity patterns evoked by a specific input forms a “manifold” in that state space. The geometric properties of the manifold representations in a neural network depend on how information is distributed across the network, and the evolution of these manifolds during a task is shaped by the algorithms that train the network to perform that task.
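As a toy illustration of this picture (not the authors' code), one can represent each activity pattern as a point in the neural state space and summarize a manifold by its centroid, mean radius, and an effective dimension. The participation ratio used below is one common proxy for effective dimension and is an assumption of this sketch; the theory discussed here uses related but task-specific geometric quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "manifold": activity patterns of a 50-neuron network in response
# to 200 variations (pose, size, background, ...) of the same object.
# Each row is one activity pattern, i.e., one point in the
# 50-dimensional state space.
n_neurons, n_patterns = 50, 200
patterns = rng.normal(size=(n_patterns, n_neurons))

# Basic manifold geometry: centroid and mean radius about the centroid.
centroid = patterns.mean(axis=0)
mean_radius = np.linalg.norm(patterns - centroid, axis=1).mean()

# Participation ratio of the covariance eigenvalues: a common proxy
# for the manifold's effective dimension (assumed here, not the
# paper's exact definition).
eigvals = np.linalg.eigvalsh(np.cov((patterns - centroid).T))
participation_ratio = eigvals.sum() ** 2 / (eigvals ** 2).sum()

print(round(mean_radius, 2), round(participation_ratio, 1))
```

For these isotropic random patterns the effective dimension comes out close to the number of neurons; structured inputs would compress it onto far fewer dimensions.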

The manifold geometries of a network also limit the network’s ability to perform tasks such as invariant object recognition, the ability of a network to accurately recognize objects regardless of variations in their appearance, such as size, location, or background (Fig. 2). In an earlier attempt to understand these constraints, Chung and a different group of colleagues studied simple binary classification tasks, in which the network must sort stimuli into two groups according to some classification rule [2]. In such tasks, the capacity of a network is defined as the number of objects it can correctly classify when the objects are randomly assigned category labels.

Figure 2: The image shows representations of three neural manifolds in the state space of a three-neuron network. Each manifold represents a different object.

For networks in which each object corresponds to a single point in the state space, a single-layer network with N neurons can classify 2N objects before the classification error becomes equivalent to that of random guessing. The formalism developed by Chung and her colleagues allowed them to study the performance of complex, deep (multilayered) neural networks trained for object classification. Constructing the manifold representations from the images used to train such a network, they found that the mean radius and the number of manifold dimensions estimated from the data decreased more sharply in deeper layers of the network than in shallower ones. This decrease was accompanied by an increased classification capacity of the network [2–4].
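The 2N capacity for point-like objects can be checked with Cover's classic function-counting result, which gives the exact fraction of random labelings of p points in general position that an n-dimensional linear classifier can realize. The sketch below is a standalone illustration of that counting formula, not code from the paper.

```python
from math import comb

def separable_fraction(p: int, n: int) -> float:
    """Fraction of the 2**p random labelings of p points in general
    position in n dimensions that a linear classifier can realize
    (Cover's function-counting theorem)."""
    c = 2 * sum(comb(p - 1, k) for k in range(n))
    return c / 2 ** p

n = 25
print(separable_fraction(n, n))       # 1.0: well below capacity
print(separable_fraction(2 * n, n))   # exactly 0.5 at the capacity p = 2n
print(separable_fraction(4 * n, n))   # near 0: well above capacity
```

The sharp drop from 1 to 0 around p = 2n is what makes "capacity" a well-defined quantity: below it almost every random labeling is linearly separable, above it almost none is.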

This previous study and others, however, did not consider the correlations between different object representations when calculating network capacity. Object representations in biological and artificial neural networks are known to have complex correlations, which arise from structural features in the underlying data. These correlations can have important consequences for many tasks, including classification, because they are reflected in different levels of similarity between pairs of classes in the neural state space. For example, in a network tasked with classifying whether an animal is a mammal, the manifold representations of a dog and a wolf would be more similar to each other than to those of an eagle or a hawk.
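A minimal numerical picture of such correlations (a hypothetical construction, not the paper's model): build class centroids that share a common "canine" direction, so related classes end up with correlated, similar representations while unrelated classes do not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical centroids in a 100-neuron state space: dog and wolf
# share a common "canine" component plus class-specific noise, while
# eagle is drawn independently.
n_neurons = 100
canine = rng.normal(size=n_neurons)
dog = canine + 0.3 * rng.normal(size=n_neurons)
wolf = canine + 0.3 * rng.normal(size=n_neurons)
eagle = rng.normal(size=n_neurons)

def cosine(u, v):
    """Cosine similarity between two centroid vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(round(cosine(dog, wolf), 2))   # high: correlated centroids
print(round(cosine(dog, eagle), 2))  # near 0: uncorrelated centroids
```

Capacity calculations that assume uncorrelated representations miss exactly this kind of structure, which is what the new theory accounts for.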

Now Chung’s group has generalized the computation of deep-neural-network classification capacity to include correlations between classes of objects [1]. The team derived a set of self-consistent equations that can be solved to give the capacity of a network with homogeneous correlations between the axes (the dimensions along which a manifold varies) and the centroids (the centers of the manifolds) of different manifolds. The researchers show that axis correlations between manifolds increase the capacity, while centroid correlations push manifolds closer to the origin of the neural state space, decreasing the capacity.

In recent years, the study of neural networks has seen many interesting developments, and more and more data-analysis tools are being developed to better characterize the geometry of the representations obtained from neural data. The new results make a substantial contribution to this area, as they can be used to study the properties of learned representations in networks trained to perform a wide variety of tasks in which correlations in the input data can play a crucial role in learning and performance. Such tasks include those related to motor coordination, natural language, and probing the relational structure of abstract knowledge.


  1. A. J. Wakhloo et al., “Linear classification of neural manifolds with correlated variability,” Phys. Rev. Lett. 131, 027301 (2023).
  2. S. Chung et al., “Classification and geometry of general perceptual manifolds,” Phys. Rev. X 8, 031003 (2018).
  3. E. Gardner, “The space of interactions in neural network models,” J. Phys. A: Math. Gen. 21, 257 (1988).
  4. U. Cohen et al., “Separability and geometry of object manifolds in deep neural networks,” Nat. Commun. (2019).

About the author


Julia Steinberg received her PhD in physics from Harvard University in 2020, where she studied the dynamical properties of quantum phases of matter with strong electronic correlations. Until March 2023 she was a fellow at the Princeton University Center for the Physics of Biological Function. She is interested in how the brain encodes information in a way that is amenable to higher cognitive tasks and that persists through transformations and rearrangements of neural circuits over time.

Subject Areas

Biological Physics, Computational Physics
