
Lossy versus Lossless Images: What is the difference?

As "analog" 35 mm cameras (and the film used for it!) become more and obsolete, digital cameras and the storage and transmission of digital images are rapidly becoming the de facto standard for today's photography needs.

The resolution of a camera - e.g. 6 MP (megapixels) or 10 MP - determines the number of pixels the camera uses to represent the "continuous" scene (e.g. a mountain, or your smiling significant other) that it is sampling.

Thus the digital camera samples the continuous signal with a period $ T $ (the shutter speed) and stays on for a duration $ \tau $ (related to the aperture -- how much light is absorbed):

$ X_s(t) = s_{\tau}(t)x(t) $ (Note: an image is actually a two-dimensional signal; one-dimensional notation is used here for simplicity.)
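
The short Python sketch below (not part of the original page) is one way to picture this sampling model in one dimension: a fine time grid stands in for the continuous signal, and a pulse train that is "on" for a duration $ \tau $ out of every period $ T $ multiplies it. The signal, period, and pulse width are arbitrary illustrative values.

 import numpy as np
 
 # Illustrative 1-D sketch (not the camera's actual electronics): "sample" a
 # continuous signal x(t) by multiplying it with a pulse train s_tau(t) that
 # equals 1 for a duration tau at the start of each period T, and 0 otherwise.
 t = np.linspace(0, 1, 10_000)          # fine grid standing in for continuous time
 x = np.sin(2 * np.pi * 5 * t)          # an arbitrary example signal
 
 T = 0.05                               # sampling period
 tau = 0.01                             # pulse width ("shutter open" time)
 s_tau = ((t % T) < tau).astype(float)  # pulse-train sampling function
 
 x_s = s_tau * x                        # x_s(t) = s_tau(t) * x(t)
 print(f"fraction of time the sampler is on: {s_tau.mean():.2f}")  # roughly tau/T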

A digital camera also quantizes the sampled values, because an infinite amount of storage space (i.e. bits) is not available to represent every pixel. A typical digital camera will allocate 24 bits per pixel, thus allowing $ 2^{24} = 16,777,216 $ possible color representations.
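
As a small illustration (a hypothetical Python/NumPy sketch, not from the original page), quantizing each of the three color channels to 8 bits gives the 24 bits per pixel mentioned above:

 import numpy as np
 
 # Hypothetical sketch: quantize floating-point RGB intensities in [0, 1]
 # to 8 bits per channel, i.e. 24 bits per pixel and 2**24 possible colors.
 rng = np.random.default_rng(0)
 pixels = rng.random((4, 4, 3))                  # a tiny 4x4 "image" with 3 channels
 
 levels = 2 ** 8                                 # 256 levels per channel
 quantized = np.round(pixels * (levels - 1)).astype(np.uint8)
 
 print(quantized.dtype, quantized.shape)         # uint8 (4, 4, 3) -> 24 bits/pixel
 print(2 ** 24)                                  # 16777216 representable colors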

We have all heard of the various image formats in use - for example, JPEG, GIF, TIFF, RAW, and BMP. Of these, TIFF, RAW, and BMP are referred to as lossless, meaning that the original image data can be reconstructed exactly, bit for bit, from the stored file. Lossless image storage matters for applications such as medical imaging, where the resolution of the original image must be preserved through compression and decompression. For example, lossless compression maintains the high contrast and fine detail in an MRI scan of brain tissue, so that an accurate diagnosis can be made!
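
One way to see the distinction concretely is to round-trip the same pixel data through a lossless format (BMP) and a lossy one (JPEG) and compare against the original. The sketch below assumes the NumPy and Pillow libraries are available; the file names and JPEG quality setting are arbitrary choices.

 import numpy as np
 from PIL import Image   # assumes the Pillow library is installed
 
 # Sketch of lossless vs. lossy: round-trip the same pixel data through
 # BMP (lossless) and JPEG (lossy), then compare against the original.
 rng = np.random.default_rng(0)
 original = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
 
 Image.fromarray(original).save("test.bmp")                # lossless format
 Image.fromarray(original).save("test.jpg", quality=75)    # lossy format
 
 bmp_back = np.array(Image.open("test.bmp"))
 jpg_back = np.array(Image.open("test.jpg"))
 
 print("BMP  identical:", np.array_equal(original, bmp_back))   # True
 print("JPEG identical:", np.array_equal(original, jpg_back))   # False (lossy)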

A raw, uncompressed digital image that is 10 MP with 24 bits allocated per pixel will therefore occupy: $ 10 \times 10^6 \text{ pixels} \times 24 \text{ bits/pixel} \times \frac{1 \text{ MB}}{8 \times 10^6 \text{ bits}} = 30 \text{ MB} $
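
The same arithmetic written out as a quick Python check (the 10 MP and 24 bits/pixel figures are the ones used above):

 # Raw (uncompressed) size estimate for a 10 MP, 24-bit image,
 # taking 1 MB = 10**6 bytes as in the calculation above.
 megapixels = 10
 bits_per_pixel = 24
 
 total_bits = megapixels * 10**6 * bits_per_pixel
 size_mb = total_bits / (8 * 10**6)
 
 print(size_mb)   # 30.0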
