Edge of Chaos in Biological Neural Networks

In this project we analyze the significance of the edge of chaos for real-time computations in neural microcircuit models consisting of spiking neurons and dynamic synapses. We examine whether the edge of chaos is a good predictor of the circuit-parameter values that yield maximal computational power. We have set three goals for the project:

  1. Find a definition of the edge of chaos for our neural network system
  2. Analyze our system and find the corresponding parameters that yield the edge of chaos
  3. Simulate the system and determine whether these parameters achieve maximal computational power


Object recognition and classification problems are at the forefront of research in the field of computer vision. Systems with the ability to classify objects can be used for numerous purposes, from tumor detection to suspect identification in airports. A natural source of inspiration for such problems is biological systems, in particular the human brain. The human brain can classify different types of objects, distinguish between different faces, and in general is an efficient system for object recognition and classification. While historically the brain has been viewed as a type of computer, and vice versa, this is true only in the loosest sense. Computers do not provide accurate hardware for describing the brain (even though it is possible to describe a logical process as a computer program, or to simulate a brain using a computer), as they do not possess the parallel processing architecture found in the brain. Even multiprocessor computers do not distribute their functions nearly as widely as the brain does.
In this project, we study a liquid state machine’s computational abilities and how its calculations can be optimized by tuning the network to its “edge of chaos”.


The approach
This project’s goal was to study chaos in neural circuits and its effect on their ability to classify natural images. To benchmark the circuits’ abilities we used the results of our previous project, “Object Recognition and Classification in Biological Neural Networks”. We add noise (jitter) to the images and check whether the system still manages to classify them correctly.

Figure 1 – The performance results for increasing jitter
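As an illustration, jitter of this kind can be modeled as a small random displacement of each image point. The sketch below is our own illustrative Python, not the project’s actual Matlab/CSIM code; the function name `add_jitter` and the point-list representation are assumptions made for the example.

```python
import random

def add_jitter(points, max_jitter, seed=None):
    """Displace each (x, y) point by an independent uniform offset
    of at most max_jitter pixels in each direction."""
    rng = random.Random(seed)
    return [(x + rng.uniform(-max_jitter, max_jitter),
             y + rng.uniform(-max_jitter, max_jitter))
            for (x, y) in points]

# Example: jitter two edge points by up to 2 pixels each.
pts = [(10.0, 10.0), (20.0, 15.0)]
noisy = add_jitter(pts, max_jitter=2.0, seed=1)
```

Increasing the jitter amplitude makes the classification task progressively harder, which is how the benchmark in Figure 1 is swept.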

We then built, trained, and tested several systems, each with a different-sized neuron network. For each system we measured its performance rating and a property called the separation property. Suffice it to say that this property is minimized when the system is between chaos and order, the so-called edge of chaos (for more details see the full report). We then constructed the following graph:

Figure 2 – Performance and chaos as a function of the size of the network

Since both the performance scale and the separation property are inverted scales, we look for the minimum of each, and we can see that the two minima coincide! Therefore, with respect to this parameter, the 200-neuron network lies at the edge of chaos, and at this point we have indeed achieved maximal performance.
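For concreteness, the separation property can be thought of as an average distance between the liquid-state trajectories that two different input streams produce. The Python sketch below is only an illustrative stand-in for the measure defined in the full report; the state vectors here are plain lists, not CSIM recordings.

```python
import math

def separation(states_u, states_v):
    """Average Euclidean distance between two liquid-state
    trajectories, given as one state vector per time step."""
    dists = [math.dist(su, sv) for su, sv in zip(states_u, states_v)]
    return sum(dists) / len(dists)

# Two one-step trajectories whose states differ by a 3-4-5 triangle.
d = separation([[0.0, 0.0]], [[3.0, 4.0]])  # → 5.0
```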

We then continued to study within the set of 200-neuron systems, varying the second most influential parameter of the network: the probability that a connection exists between two neurons in consecutive layers. This parameter defines the form of the network. Plotting the results, we can see that in both cases there is indeed some correlation between the scales:


Figure 3 – Performance and chaos as a function of the form of the network


Although this graph is not as decisive as the previous one, we can still clearly see that the mutual minimum is at 80%, and again, by choosing the parameter according to the edge-of-chaos criterion, we have achieved optimal performance.
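The form parameter can be illustrated by drawing a random connectivity pattern in which each possible connection between two consecutive layers exists independently with probability p. This is a hypothetical Python sketch of the idea only; the actual networks were generated by CSIM, and the function name and layer sizes are our own assumptions.

```python
import random

def connect_layers(n_pre, n_post, p, seed=None):
    """Return a 0/1 connectivity matrix: entry [i][j] is 1 iff
    neuron i of one layer connects to neuron j of the next layer,
    with each connection drawn independently with probability p."""
    rng = random.Random(seed)
    return [[1 if rng.random() < p else 0 for _ in range(n_post)]
            for _ in range(n_pre)]

# p = 0.8 corresponds to the mutual minimum found above.
mat = connect_layers(4, 5, p=0.8, seed=0)
```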

Further on, we took a different point of view on the definition of chaos and checked how strongly the system’s initial state affects its output after a long series of identical inputs. In this case we use the fading memory property, or Dfade (defined in the full report), which in essence reaches zero at the edge of chaos. We then summarized a large number of simulations in a graph showing the amount of fading memory, i.e. the lasting effect of the initial state on the output, as can be seen:


Figure 4 – Dfade coefficient versus the number of neurons in the system

We can see that the absolute value nearest zero is indeed obtained for the 200-neuron network, as anticipated by our previous results. In fact, the results are quite parallel. In conclusion, both measures of chaos lead to the same bottom line: we have found optimal system performance at the point where the system lies on the brink between chaos and order.
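The fading-memory test behind Dfade can be illustrated with a toy state-update rule: run the same long input sequence from two different initial states and measure how far apart the final states end up. The leaky update below is a hypothetical contracting system, not our neural circuit; in a system with fading memory this residual distance approaches zero.

```python
import math

def d_fade(step, x0_a, x0_b, inputs):
    """Distance between the final states of two runs that start
    from different initial states but receive the same inputs."""
    xa, xb = x0_a, x0_b
    for u in inputs:
        xa, xb = step(xa, u), step(xb, u)
    return math.dist(xa, xb)

def leaky_step(x, u):
    """Toy contracting update: the initial state decays away."""
    return [0.5 * xi + u for xi in x]

# After 50 identical inputs the two runs become indistinguishable.
residual = d_fade(leaky_step, [1.0, 0.0], [0.0, 1.0], [0.3] * 50)
```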


CSIM: the neural Circuit SIMulator. CSIM is a tool for simulating heterogeneous networks composed of different models of neurons and synapses. The simulator is written in C++ with a MEX interface to Matlab.

Learning-Tool: a set of Matlab scripts that allows us to assess the real-time computing capability of neural microcircuit models. Learning-Tool is based on the theoretical framework for analyzing real-time computations in neural microcircuits: the Liquid State Machine.


It has previously been suggested that such optimization can be found on the border between order and chaos in the system’s response to input.

We have seen that our system did in fact achieve better performance once we tuned it to execute its calculations at the edge of chaos.

Treating the parameter-space axes as independent, we achieved better performance from our system by tuning it to process information at the edge between chaos and order. Beyond that, we have also seen a strong correlation (in size, throughout the scale; in form, in a wide neighborhood of our starting point) between the performance of the system and the separation property, which we defined to measure how “far” the system is from the edge of chaos.

In conclusion, in the cases we have checked, tuning the system to work at the edge of chaos produced, as presumed, not only better performance than the initial (ordered or chaotic) state, but the best performance in the parameter-space neighborhood of the initial state. We hope this research will influence further studies, propagate into multiple applications, and further advance the science of biologically inspired machine learning.


We would like to thank our supervisor Karina Odinaev for her guidance. We would also like to thank the VISL lab staff for their assistance and support, especially Johanan Erez.
We are also grateful to the Ollendorf Minerva Center Fund for supporting this project.