Automatic Recognition of “offside” Situation in Soccer Game


Abstract
This project aims to find the ball on a football field and to keep tracking it in order to identify the moment when the ball is kicked.
It is part of a larger project that aims to detect offside situations in a football game.
An offside occurs when a player kicks the ball towards a teammate who is positioned beyond the last line of the defense (excluding the goalkeeper).

The Problem
We need to find the ball, which is rather small, in a large picture that contains many details and objects. Furthermore, we need to follow the ball even though it is sometimes hidden behind a player's head or foot, which makes tracking harder.
We also need to follow the ball's movement so that we can detect a change in its direction.
This project can be combined with projects that find the players (and the referee) and the field lines in order to detect an offside situation.

The Solution
Our algorithm is based on simple edge detection, which is performed on a B/W picture.
We create this picture from the RGB picture while leaving out the green color (the grass), so the ball appears even brighter in comparison with its surroundings.
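As an illustration only, a minimal MATLAB sketch of this step could look as follows; the exact channel weighting and the file name frame001.tif are assumptions, since the report does not specify them:

    % Read one frame (file name is hypothetical) and normalize it.
    rgb = double(imread('frame001.tif')) / 255;
    % Suppress the green (grass) contribution so the bright ball stands out.
    bw = max(rgb(:,:,1) + rgb(:,:,3) - rgb(:,:,2), 0);
    % Simple gradient-magnitude edge map of the resulting B/W picture.
    [gx, gy] = gradient(bw);
    edges = sqrt(gx.^2 + gy.^2);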
We decided that the ball's edges look like this:

[Figure: the expected pattern of the ball's edges]

In our search we look for this edge pattern. Since we know that the ball is brighter than its surroundings, we can choose a threshold value for the edges in order to get a binary picture.
Then we search for our pattern.
This is done both in the whole picture and in a small window around the ball's previous location (if it is known from former frames).
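A possible MATLAB sketch of this search step is shown below; the threshold value, the 3*3 edge template and the window size are assumptions, not the exact values used in the project:

    % Binarize the edge map with an assumed threshold.
    thresh = 0.2;
    binary = double(edges > thresh);
    % Hypothetical 3x3 ball-edge template (a bright ring of edge pixels).
    tmpl = [1 1 1; 1 0 1; 1 1 1];
    % Correlate the template with the binary picture (whole-picture search).
    score = conv2(binary, rot90(tmpl, 2), 'same');
    [mx, idx] = max(score(:));
    [rowFull, colFull] = ind2sub(size(score), idx);
    % Repeat the search in a small window around the previous location (r0, c0).
    win = 10;
    r1 = max(r0 - win, 1);  r2 = min(r0 + win, size(score, 1));
    c1 = max(c0 - win, 1);  c2 = min(c0 + win, size(score, 2));
    sub = score(r1:r2, c1:c2);
    [mx, idx] = max(sub(:));
    [rowWin, colWin] = ind2sub(size(sub), idx);
    rowWin = rowWin + r1 - 1;  colWin = colWin + c1 - 1;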
Besides that, we predict the ball's position using the last five frames, giving more weight to the most recent ones: the estimate relies more on the last frame than on the one before it, and on that one more than on the frame before it.
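For example, a weighted estimate of this kind might look as follows in MATLAB; the linearly increasing weights are an assumption, since the report does not give the exact weighting:

    % lastPos is a 5x2 matrix of the ball's (row, col) positions in the last
    % five frames, oldest first; more recent frames get larger weights.
    w = (1:5)';  w = w / sum(w);
    pred = round(sum(lastPos .* [w w], 1));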
Once we have all of the results we decide where the ball is. If there is no disagreement between the three results, we assume that the ball has been found and that the probability that this is indeed the ball is very high.
If the results differ, we rely on the prediction and try to locate the ball again in the next frame.
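A minimal sketch of this decision rule, with an assumed agreement tolerance of two pixels, could be:

    % pFull, pWin and pPred are the (row, col) estimates from the whole-picture
    % search, the window search and the prediction, respectively.
    pFull = [rowFull colFull];  pWin = [rowWin colWin];  pPred = pred;
    tol = 2;                    % assumed tolerance in pixels
    agree = max(abs(pFull - pWin)) <= tol & max(abs(pFull - pPred)) <= tol;
    if agree
        ball = pFull;           % all three estimates agree: high confidence
    else
        ball = pPred;           % disagreement: fall back on the prediction
    end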

Tools
This project was not aimed at real-time operation, so we used the MATLAB environment (version 6.0).
When real-time processing is needed, C or C++ is recommended.
We used the Adobe Premiere software to separate the video film into its individual frames, which we then processed with MATLAB.
The frames were saved in TIFF format with a size of 288*352 pixels.
In these videos the ball's size was roughly 3*3 pixels.
We watched the videos, recorded the ball's true location in each frame, and kept it in a reference file in order to evaluate our output.

Results
In the first film, the ball was found in 120 out of 125 frames. Only one of the five misses was completely off the ball; in the other four misses we were off by one or two pixels.
First film: movie1.avi
In the second film, we missed the ball in 4 out of 86 frames, but those misses were hard cases, since the ball was almost completely hidden in those frames.
Second film: movie2.avi
So the results are certainly above the required 90%.

Ideas for future
Certain improvements should be made to the quality of the films. We could also use films taken with a stationary camera, so that we would not have to account for the camera's movement, only for the absolute movement of the ball; tracking the ball and re-estimating its location would then be much easier.
We could also use the field lines and the players' coordinates so that we do not confuse them with the ball (players' heads can be very confusing in that matter...). Also, if we know that the ball is near a player, then the ball will most likely be kicked by that player, so changes in the ball's movement would be easier to predict.

Acknowledgments
We would like to thank our supervisor, Guy Gilboa, for his guidance and support of this project. We would also like to thank the staff of the VISL lab, especially Johanan Erez for all their help.
We would also like to thank the Ollendorff Minerva Center, which supported this project.