This project researches object recognition and classification using neural networks. Different sets of pictures (such as airplanes, ships, faces, etc.) form a database. Using this database of pictures, we investigated how we can harness an LSM’s computational power in order to obtain a system that can classify images into the different sets. We also researched different patterns of scanning the database in order to improve the LSM’s abilities.
Object recognition and classification problems are at the forefront of research in the field of computerized vision. Systems with the ability to classify objects can be used for numerous purposes, from discovering tumors to suspect identification in airports. Natural candidates for solving such problems are biological systems, possibly based on the human brain. The human brain can classify different types of objects, distinguish between different faces, and can in general be an efficient system for object recognition and classification. While historically the brain has been viewed as a type of computer, and vice versa, this is true only in the loosest sense. Computers do not provide us with accurate hardware for describing the brain (even though it is possible to describe a logical process as a computer program or to simulate a brain using a computer), as they do not possess the parallel processing architectures that have been described in the brain. Even in multiprocessor computers, the functions are not nearly as distributed as in the brain.
This project’s goals were to study neural circuits and research their ability to classify natural images into groups, working within the mathematical framework of the LSM. As explained, the LSM’s input is time-dependent spike trains. Therefore, our first step towards understanding the LSM’s computational power was to find a map from images to time-dependent spike trains. After some research, we found that an efficient way to preserve the inner-frame correlation was the Hilbert scan.
As we can observe from the following graph, the Hilbert scan becomes efficient when dealing with images larger than 40×40 (1,600) pixels, such as those we use in this research.
Figure 2 – The autocorrelation within the pixel-sequence as a function of the distance between pixels
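As an illustration of the scanning step, here is a minimal sketch (in Python, not the project’s Matlab/C++ toolchain) of the standard Hilbert-curve index-to-coordinate conversion used to read a square image into a 1-D pixel sequence. The function names `d2xy` and `hilbert_scan` are ours, not from the report:

```python
import numpy as np

def d2xy(n, d):
    """Convert an index d along the Hilbert curve into (x, y)
    coordinates on an n x n grid (n must be a power of two)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(img):
    """Read a square image along the Hilbert curve, producing a
    1-D pixel sequence in which neighbouring samples are also
    neighbouring pixels in the image."""
    n = img.shape[0]
    return np.array([img[d2xy(n, d)] for d in range(n * n)])
```

Because consecutive curve indices always map to adjacent pixels, local image structure survives the flattening, which is exactly what the autocorrelation curve of Figure 2 measures.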
Later on, we used an elaborate method based on the Hilbert scan to convert the pictures into proper time-dependent inputs to the system (please see the full report for details). To test the system’s performance, we arbitrarily chose a few similar pictures, added noise to their signatures (as measured in the Jitter column), and checked the system’s performance. As can be seen, the success rates we achieved are indeed high (another performance measure is the MAE, for which lower is better; see the report for details).
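The report does not spell out the exact encoding here, so the following is only an illustrative sketch, assuming a simple latency coding in which every above-threshold pixel of the (Hilbert-ordered) sequence emits one spike, plus Gaussian jitter for the noise experiments. The names `to_spike_train` and `add_jitter` are hypothetical:

```python
import numpy as np

def to_spike_train(pixel_seq, dt=1e-3, threshold=0.5):
    """Illustrative encoding: pixel i of the scanned sequence
    emits a spike at time i*dt if it is brighter than threshold."""
    return np.array([i * dt for i, p in enumerate(pixel_seq) if p > threshold])

def add_jitter(spike_times, sigma, seed=None):
    """Perturb each spike time with Gaussian noise of standard
    deviation sigma, mimicking the Jitter column of the tests."""
    rng = np.random.default_rng(seed)
    return np.sort(spike_times + rng.normal(0.0, sigma, size=spike_times.shape))
```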
We used a database of 4 different sets of images: watches, grand pianos, motorbikes and airplanes, each set containing circa 100 pictures. 80% of them were used for the training simulations and 20% for the test runs. We ran numerous tests to check how well the system classifies a given image into one of the four previously learned sets. The results are summarized in the following table:
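The 80%/20% split described above can be sketched as follows (a minimal illustration in Python rather than the project’s Matlab scripts; `split_dataset` is our own name):

```python
import random

def split_dataset(images_by_class, train_frac=0.8, seed=0):
    """Shuffle each class independently and split it into training
    and test subsets (80%/20% by default), keeping the class label
    attached to every image."""
    rng = random.Random(seed)
    train, test = [], []
    for label, images in images_by_class.items():
        imgs = list(images)
        rng.shuffle(imgs)
        k = int(train_frac * len(imgs))
        train += [(img, label) for img in imgs[:k]]
        test += [(img, label) for img in imgs[k:]]
    return train, test
```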
Figure 4 – The final success rates for comparing different groups
We can see that we achieved very good results. Further study of the system’s classification properties has also led us to the conclusion that the Hilbert curve indeed preserves the local autocorrelation in natural images.
CSIM: the neural Circuit SIMulator. CSIM is a tool for simulating heterogeneous networks composed of different model neurons and synapses. This simulator is written in C++ with an MEX interface to Matlab.
Learning-Tool: The Learning-Tool is a set of Matlab scripts that allows us to assess the real-time computing capabilities of neural microcircuit models. Learning-Tool is based on the theoretical framework for analyzing real-time computations in neural microcircuits: the Liquid State Machine.
As can be seen, we achieved good results, implying that our system has indeed succeeded in extracting and learning the characteristics of each object class we gave it. We must note, however, that, as we presumed, the results were not perfect: for certain groups the success rate was as low as 57%, although this result is an outlier compared with the other cases, most of which were successful in more than 80% of the cases.
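For concreteness, the two performance measures quoted in this report can be written down explicitly; this is a minimal sketch (function names ours, not the Learning-Tool’s):

```python
import numpy as np

def success_rate(predicted, actual):
    """Fraction of test images assigned to the correct set."""
    return float(np.mean(np.array(predicted) == np.array(actual)))

def mae(outputs, targets):
    """Mean absolute error between the readout's raw outputs and
    the target values (lower is better)."""
    return float(np.mean(np.abs(np.array(outputs) - np.array(targets))))
```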
To improve the system’s performance for applications, we suggest increasing the number of training sequences and carefully selecting the number of neurons in each pool of the system, as the effect of the network’s size and shape on performance differs greatly between even slightly different applications.
Moreover, further mathematical study of how the static inner NMC works might lend insight into how to optimize the construction of the neuron pools around it, and perhaps into a further optimized input preprocessing algorithm.
During the work on this project we have also come to understand the significance of the different parameters of the parts of our complex system (the Liquid State Machine and the neural network). For future development of our system, we suggest researching each parameter’s effect on the complex system as a whole and finding the state in which the system’s performance is maximized. It has previously been suggested that such an optimum can be found on the border between order and chaos in the system’s response to input.
We would like to thank our supervisor Karina Odinaev for her guidance. We would also like to thank the VISL lab staff for their assistance and support, especially Johanan Erez for his help and support.
We are also grateful to the Ollendorf Minerva Center Fund for supporting this project.