Abstract
Saving an average 256X256 color image requires about 0.2 Mbytes of space, which is far more than we would like to use. For example, one second of moving picture (movie or film) takes 30 pics/sec X 256 X 256 ~ 2 Mbytes of space per second. Sizes like this are prohibitive in real-life applications such as computer networks, the Internet, video transmission, and videoconferencing, which would otherwise require enormous databases. One of the most effective tools for reducing the required capacity is data compression, and this is where the various image compression algorithms come in.
The Problem
The question is how to do it: how can we take a large amount of data and compress it so that (almost) the same data is recovered after decompression?
In order to compress, we need to know the rules of compression and the tools an algorithm can use. It is also necessary to know ways to "cheat" the human eye, so that a person would not notice the differences between the two images, before and after compression.
The Solution
We present two algorithms that address the problem of image compression.
The first algorithm is image compression based on the DPCM method. The second algorithm is Lempel-Ziv compression using Peano-Hilbert scanning.
The Algorithm
The DPCM method is a compression algorithm based on prediction, quantization, and Huffman coding. For each pixel (or block) a prediction is calculated, and only the error between the pixel (block) and its prediction is transmitted. A quantizer may be added to obtain a stronger, lossy compression.
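The prediction-and-error idea can be sketched as follows. This is a minimal illustration, not the project's actual implementation: it assumes a simple previous-pixel predictor along one row and a uniform quantizer with step size `step`, and omits the Huffman coding stage.

```python
# Minimal DPCM sketch (assumed details): previous-pixel prediction,
# uniform quantization of the prediction error. Huffman coding of the
# quantized errors is omitted for brevity.

def dpcm_encode(row, step=4):
    """Encode one image row: transmit only quantized prediction errors."""
    codes = []
    prediction = 0                 # predictor state, mirrored by the decoder
    for pixel in row:
        error = pixel - prediction
        q = round(error / step)    # uniform quantizer (lossy when step > 1)
        codes.append(q)
        prediction += q * step     # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=4):
    """Rebuild the row by accumulating the dequantized errors."""
    row = []
    prediction = 0
    for q in codes:
        prediction += q * step
        row.append(prediction)
    return row
```

With `step=1` the quantizer is exact and the scheme is lossless; a larger step trades reconstruction error (at most half the step size per pixel) for smaller error values that Huffman coding can represent compactly.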
In the Lempel-Ziv algorithm we scan the image along the Peano-Hilbert plane-filling curve and compress the resulting sequence using Lempel-Ziv compression. This is a lossless compression method.
Block diagram of the DPCM algorithm
Block diagram of Lempel-Ziv algorithm with Peano Hilbert Scan
Conclusions
The DPCM algorithm with a quantizer is a lossy compression, whereas the Lempel-Ziv algorithm is a lossless compression. We concluded that the LZW-based algorithm performs very well on written document files, where we obtained better results than with the JPEG algorithm. Likewise, for images with 8 gray levels or fewer (such as a written document), the LZW algorithm is preferable to JPEG.
Lossless compression is of great importance in medical applications such as ultrasound and Roentgen (X-ray) images, where special regulations require compression without any loss. (Such regulations exist in the U.S.A. and Western Europe.)
Acknowledgment
We would like to thank our supervisor Tsachy Weissman, for his patience and for guiding us throughout the project. We would also like to thank Johanan Erez and the rest of the laboratory staff for their help and support. Finally, we wish to thank the Ollendorff Minerva Center for their support.