Chaos & Separation in Biological Neural Networks

Biological neural networks, as non-linear systems with numerous degrees of freedom, have a high tendency for chaotic behavior. Utilizing the high computational power of such networks requires a thorough understanding of their dynamics. The universality of chaos allows circumventing the complexity of the biological model and offers straightforward techniques for “communicating” with the neural network. This open channel allows performing computational tasks such as classification and pattern recognition without the need for external readout modules to interpret the network state. The key is the separation quality of neural networks: the path from order to chaos gives rise to separation between distinct input signals. An external driving force with a fixed frequency synchronizes the network activity, bringing it into a periodic steady state where the separation between the signals is clear and simple. The main drawback is the lack of an adaptive task-oriented learning mechanism. Yet the absence of such a “smart” readout module may be compensated by a multi-frequency, multi-network array, which has the potential for executing highly complex computations.


Chaos & Separation

The computational power of neural networks relates to their separation quality, as it plays a critical role in classification and pattern recognition tasks. The basic notion is that two distinct input time series should drive the network into two separable states. Separation is strongly linked with chaotic dynamics.

Neural networks are dissipative systems. Their fading-memory property requires perpetual input for the network to maintain activity; otherwise, the network relaxes to a single resting state. In dissipative systems, chaos is the key to separation. A periodic external force synchronizes the system activity and makes the separation not only clear but also simple and meaningful.

In a state of pure order no separation is discernible: all initial conditions relax to a single steady state, no matter how far apart they are. The path from order to chaos introduces more complex steady states which allow the desirable separation. The space of initial conditions is divided into basins: close initial conditions relax to the same state, but initial conditions which dwell in different basins settle down to separable states. At the onset of chaos the system is no longer able to relax to a steady state with a finite period. The collapse of the basin structure is accompanied by extreme sensitivity to initial conditions and reduced immunity to noise.
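As a universal analogue of this basin picture (a toy example, not the biological model itself), consider an overdamped particle in a double-well potential: every initial condition relaxes to one of two steady states, and the basin index alone separates the inputs.

```python
# Toy analogue of basin separation in a dissipative system: an
# overdamped particle in the double-well potential V(x) = x^4/4 - x^2/2
# relaxes to x = +1 or x = -1, depending only on which basin the
# initial condition falls in.

def relax(x0, dt=0.01, steps=5000):
    """Integrate dx/dt = -(x^3 - x) and return the steady state."""
    x = x0
    for _ in range(steps):
        x -= dt * (x**3 - x)
    return x

def basin(x0):
    """Basin index: which attractor does x0 settle into?"""
    return 1 if relax(x0) > 0 else -1
```

Nearby initial conditions in the same well give the same basin index, while initial conditions on opposite sides of the barrier separate cleanly, mirroring the basin division described above.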

The figure below displays an example of a two-dimensional basin map for an initial-condition stimulus consisting of two jittered spikes.


Computational Tasks
Segment Classification:

The separation property of neural networks is the key to performing many computational tasks. A basic task which demonstrates the computational power of the system is segment classification. The analysis of chaotic dynamics in neural networks yields straightforward methods for performing this task. Each segment is mapped to a distinct initial condition. As the network is driven by a periodic driving force, it eventually reaches a steady state. Segments which correspond to initial conditions in different basins are easily separated and classified. This is a direct method, as opposed to readout-based computations, which require an external module for interpreting the network state.

In the current algorithm, the emphasis shifts to finding the right network and driving frequency that give the desired separation. For classification with two segment templates, a frequency with a binary separation can work fine. More complicated tasks require multiple separation. One way to achieve it is by selecting a frequency with a higher number of basins. A more advanced method is to use an incorporated array of networks working at different frequencies. For a given vector of initial conditions, classification with a specific network and frequency produces an output vector containing the corresponding basin indices. For a multi-network, multi-frequency analysis the result is a classification matrix, which consists of the output vectors of the various parallel processes. The desired separation is then achieved by cross-classification.
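The bookkeeping of the multi-network, multi-frequency scheme can be sketched as follows; the `classify` routine is a hypothetical stand-in for running one network at one driving frequency and reading off the basin index.

```python
# Sketch of the classification-matrix bookkeeping. `classify` is a
# hypothetical stand-in for a (network, frequency) process that maps a
# segment to the basin index it relaxes into.

def build_classification_matrix(classify, nets_freqs, segments):
    """Rows: (network, frequency) parallel processes; columns: segments.
    Entry [i][j] is the basin index of segment j under process i."""
    return [[classify(net, freq, seg) for seg in segments]
            for net, freq in nets_freqs]

def cross_classify(matrix):
    """Cross-classification: a segment's signature is its column of
    basin indices; two segments are separated if any process puts
    them in different basins."""
    return [tuple(row[j] for row in matrix)
            for j in range(len(matrix[0]))]
```

Even when no single process separates all segments, the column signatures of the matrix can, which is the point of the cross-classification step.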

The main drawback of the method is the lack of an adaptive task-oriented learning mechanism, such as the readout module. However, the wide diversity in nature and behavior among randomly generated networks driven by various frequencies indicates the vast dimensions of the possible separation space. Together with an algorithm for smart cross-classification, this method holds much promise for handling complex computations.
Language Recognition:

Language recognition is an intricate task. Although the human ear is highly sensitive to language patterns, conventional computational methods have failed to demonstrate satisfactory performance. Neural networks, with their pattern-recognition capabilities, are believed to hold the key in this field.

We were given a set of 80 speech samples, half in English and half in Japanese. The task was to use the samples as a training set for building a reliable classification procedure between the two languages. In interpreting the samples as network stimuli, two encoding methods were considered: time-domain encoding and FFT encoding. We used a standard network with N=135 neurons and lambda=5. Following the classification algorithm, the network was driven by a periodic signal and the steady-state basin division was recorded. A frequency sweep of 0.8:0.2:20 Hz (0.8 to 20 Hz in steps of 0.2 Hz) produced a classification matrix of 97×80 (97 frequencies × 80 samples).
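Expanding the MATLAB-style range 0.8:0.2:20 shows where the 97 rows of the matrix come from; a small sketch of the expansion:

```python
# Inclusive MATLAB-style range start:step:stop, rounded to suppress
# floating-point drift in the step accumulation.

def sweep(start, step, stop):
    n = int(round((stop - start) / step)) + 1
    return [round(start + step * k, 10) for k in range(n)]

freqs = sweep(0.8, 0.2, 20.0)  # the 97 driving frequencies of the sweep
```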


A close-up view of the lower left part of the matrix:


In building a classification tree, we tried to identify recurrent patterns in the matrix which lead to a distinction between the languages. One method was to scan along the rows and, in each row, look for basin numbers which appear in only one language group. If a significant part of one group shares the same basin while only a few samples of the other group do, this basin can be classified as a common feature of the relevant language.
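The row scan can be sketched as follows; the threshold values `min_share` and `max_other` are illustrative assumptions, since the report's exact criterion is not spelled out here.

```python
# Sketch of the common-feature row scan over the classification matrix.
# The thresholds min_share / max_other are assumed, not from the report.

def find_common_features(matrix, labels, min_share=0.5, max_other=0.1):
    """matrix[f][s]: basin index of sample s at frequency row f.
    labels[s]: language tag of sample s ('en' or 'jp').
    Returns (frequency_row, basin, language) triples: basins shared by
    a large part of one language group but few of the other."""
    features = []
    for f, row in enumerate(matrix):
        for lang in ('en', 'jp'):
            in_group = [b for b, l in zip(row, labels) if l == lang]
            out_group = [b for b, l in zip(row, labels) if l != lang]
            for basin in set(in_group):
                share = in_group.count(basin) / len(in_group)
                other = out_group.count(basin) / len(out_group)
                if share >= min_share and other <= max_other:
                    features.append((f, basin, lang))
    return features
```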

With one network using FFT encoding we managed to classify 28 out of 40 Japanese samples, with 8 common features defined. By combining the results from a total of 6 classification matrices (from various networks, using both time encoding and FFT encoding), we succeeded in defining common features for 38 out of 40 Japanese speech samples, i.e. 95%. In conclusion, we managed to build a classification tree between English and Japanese with a 5% error. Additional networks and a broader spectrum sweep are likely to produce even higher performance.

The classification was achieved using a training set of 80 speech samples, 40 in each language. The true test of the tree's reliability is examining its performance against a large set of test samples. However, as common features had to meet a strict criterion, the probability of error is likely to be relatively low, especially for test samples which share a group of common features rather than only one.

Chaos & Readout Modules:

The readout module is an adaptive task-oriented learning mechanism. It offers a binary separation map which can be adjusted according to the desired separation; multiple-separation tasks require more than one module. Using a periodic driving signal, classification tasks can be performed relatively fast: the training stage is done offline, and real-time separation is achieved during the transient phase, without the need to wait for the network to reach a steady state. The main drawback is the high sensitivity to noise, as jittered initial conditions tend to corrupt the recurrent classification pattern. In some cases, partial reconstruction can be achieved using various filters. With readouts, just as with biological networks, high performance is reached at intermediate states on the route from order to chaos.
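The report does not fix the readout's training rule; as an illustration only, here is a minimal perceptron-style binary readout (an assumed rule, one common choice for such adaptive modules).

```python
# Minimal perceptron-style binary readout (assumed training rule):
# adjusts a linear separation map over network-state vectors.

def readout(w, state):
    """Binary separation map: sign of the linear projection."""
    s = sum(wi * xi for wi, xi in zip(w, state))
    return 1 if s >= 0 else -1

def train_readout(states, targets, lr=0.1, epochs=100):
    """states: list of network-state vectors; targets: +/-1 labels.
    Returns the weight vector of the trained readout."""
    w = [0.0] * len(states[0])
    for _ in range(epochs):
        for x, t in zip(states, targets):
            if readout(w, x) != t:  # misclassified: nudge toward target
                w = [wi + lr * t * xi for wi, xi in zip(w, x)]
    return w
```

Training happens offline on recorded (state, label) pairs, after which `readout` can be applied to transient states in real time, matching the workflow described above.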

In evaluating the computational power of biological neural networks, an attempt was made to establish the relation between the chaotic dynamics of the network and its performance in executing various computational tasks.

Following the notion of computation at the edge of chaos, we have shown in both biological networks and readouts that high performance may be associated with intermediate states on the route from order to chaos. In biological networks this was expressed by a multiple but limited division of the basin map, which paved the way to performing not only basic tasks like segment classification, but also highly complex procedures such as building reliable classification trees for language recognition. With readout modules, intermediate states provided a long network relaxation time with a corresponding active window for classification, side by side with reduced jitter sensitivity due to the limited basin division.

In investigating the chaotic nature of the system we followed the universal footsteps of chaos. Analogy with a relatively simple system, such as a damped driven pendulum, provided powerful methods for exploring the dynamics of neural networks without the need to deal with the complex biological model itself.

Summing up our various findings, one can portray a highly complex map of the network dynamics. Nonetheless, steady-state analysis has revealed new paths for reading the network state without the need for mediating readout modules. Direct access to the basin map of the network can lead the way to developing advanced classification algorithms, like the one we used for language recognition.

We are grateful to our project supervisors Igal Raichelhauz and Karina Odinaev for their help and guidance throughout this work. We are also grateful to Johanan Erez and the Ollendorff Minerva Center Fund for supporting this project.