Getnet T. Email:[email protected] , College of Informatics , University of Gondar, August 2025
College of Informatics
Department of Computer Science
Computer Vision and Image Processing (CoSc4113)
Chapter Six: Image Compression
University of Gondar
Image Compression
Objectives
By the end of this lesson, you will be able to:
- Define and explain the basic concept of image compression
- Describe data redundancy
- Describe and demonstrate lossy and lossless compression methods
Contents
1. Introduction to Image Compression
2. Data Redundancy
3. Basic Information Theory
4. Compression Methods
5. Huffman Coding
Introduction to Image Compression
Image compression is a method to reduce redundancies in image representation in order to decrease data storage requirements. It compresses an image without visibly reducing its quality. Note the distinction between data and information: the goal is to represent an image at the same quality level, but in a more compact form.
Why Compression
Multimedia data has large storage requirements. Video and image files consume large amounts of data, requiring very high-bandwidth networks for transmission and increasing communication costs.
Aims of Image Compression
- Reduce data storage while maintaining visual image quality.
- Increase the speed of transmission by exploiting the repetition in the data.
The goal is to represent an image at the same quality level, but in a more compact form.
General JPEG Compression
Image Compression General Models
A general compression model consists of two parts: an encoder and a decoder.
Image Compression (Encoder)
The JPEG encoder pipeline consists of three stages:
1. Image transformation: the image is separated into color components (Y, Cb, Cr) and each block undergoes an 8x8 DCT transform.
2. Quantization: scalar uniform quantization of the transformed coefficients, controlled by a quantization table.
3. Entropy coding: the quantized coefficients are zig-zag reordered; the DC coefficient is difference-encoded and Huffman-coded using a DC Huffman table, while the AC coefficients are Huffman-coded using an AC Huffman table. The output is the compressed bit-stream.
Block Transform Coding System
An image is divided into non-overlapping blocks of 8x8 pixels.
Each block is transformed by a transform function (e.g. the DCT).
Compression is achieved during the quantization of the transformed coefficients.
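The block transform and quantization steps above can be sketched in Python. This is a minimal, naive 2-D DCT-II (not JPEG's optimized implementation), and the quantization step `q = 16` is a hypothetical uniform threshold chosen for illustration:

```python
import math

def dct2(block):
    """Orthonormal 2-D DCT-II of an NxN block (naive O(N^4) form)."""
    N = len(block)
    def a(k):  # normalization factor
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * N))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * N)))
            out[u][v] = a(u) * a(v) * s
    return out

# Level-shift an 8x8 block (as in JPEG), transform, then quantize.
block = [[100.0] * 8 for _ in range(8)]
T = dct2([[p - 128 for p in row] for row in block])
q = 16                                           # hypothetical uniform step
Tq = [[round(c / q) for c in row] for row in T]  # the lossy step
```

For this flat block all the energy lands in the DC coefficient T[0][0] = 8 x (100 - 128) = -224; every other coefficient is (numerically) zero, which is exactly the redundancy the quantizer then discards cheaply.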
Compression Ratio (CR)
Compression is measured by the compression ratio:

    CR = Uncompressed File Size / Compressed File Size

Example: the original image is 256x256 pixels, 8 bits per pixel, grayscale, so the file is 65536 bytes (64 KB) in size. After compression the image file is 6554 bytes. The compression ratio is

    CR = 65536 / 6554 ≈ 10
Relative Data Redundancy
The relative data redundancy RD can be determined as:

    RD = 1 - 1/CR

For the example above, RD = 1 - 1/10 = 0.9, indicating that 90% of the data is redundant. The higher the RD value, the more redundant the data and the more it can be compressed. If RD = 0, there is no redundant data.
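The two measures can be computed together; a small sketch using the file sizes from the example above (the helper name `compression_stats` is illustrative):

```python
def compression_stats(uncompressed_bytes, compressed_bytes):
    """Return (CR, RD) for the given file sizes."""
    cr = uncompressed_bytes / compressed_bytes  # compression ratio
    rd = 1 - 1 / cr                             # relative data redundancy
    return cr, rd

cr, rd = compression_stats(65536, 6554)
print(round(cr, 1), round(rd, 1))  # 10.0 0.9
```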
Types of Redundancy
There are three types of redundancy:
- Coding redundancy
- Interpixel redundancy
- Psychovisual redundancy
Coding Redundancy
A code is a set of symbols (letters, numbers, bits) used to represent a body of information. Each piece of information is assigned a sequence of code symbols, called a code word. The number of symbols in a code word is its length. Symbols with higher probabilities of occurrence are assigned shorter codes.
- Symbol: the units humans use to represent information, e.g. Chinese characters, English letters, ...
- Coding: transforming symbols into more efficient codes, e.g. A: 00, B: 01, C: 10, ...
Coding Redundancy
Focus on gray-value images. The histogram shows the frequency of occurrence of each gray level. Normalizing the histogram gives a pdf; let rk be the random variable:

    pr(rk) = nk / n,   k = 0, 1, ..., L-1

where L is the number of gray-level values. Let l(rk) be the number of bits used to represent rk. The average number of bits required to encode one pixel is

    Lavg = sum_{k=0}^{L-1} l(rk) pr(rk)

For an M x N image, the number of bits required is M N Lavg. For an image using a fixed-length 8-bit code, l(rk) = 8 and Lavg = 8.
Coding Redundancy: Example
With a fixed-length code the average bit length is Lavg1 = 3 bits; with a variable-length code it is Lavg2 = 2.7 bits. The compression ratio is 3/2.7 ≈ 1.11 and the relative redundancy is ≈ 0.1 (0.099 when computed from the rounded ratio 1.11).
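The Lavg computation can be reproduced directly. The probabilities and code lengths below are assumed for illustration (the slide's underlying table is not reproduced here); they are chosen so the averages match the slide's numbers:

```python
# Illustrative symbol probabilities and code lengths (assumed values).
probs    = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
fixed    = [3] * 8                    # fixed-length 3-bit code
variable = [2, 2, 2, 3, 4, 5, 6, 6]  # a variable-length code

def l_avg(lengths, probs):
    """Average code length: sum of l(rk) * pr(rk)."""
    return sum(l * p for l, p in zip(lengths, probs))

lavg1 = l_avg(fixed, probs)     # 3.0 bits
lavg2 = l_avg(variable, probs)  # 2.7 bits
cr = lavg1 / lavg2              # ≈ 1.11
rd = 1 - 1 / cr                 # = 0.1
```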
Inter-pixel Redundancy
Also called spatial redundancy, geometric redundancy, or inter-frame redundancy. It results from structural or geometric relationships between objects in the image. Adjacent pixels are usually highly correlated (each pixel is similar or very close to its neighbors), so information is unnecessarily replicated in the representation. In natural images, a local area usually contains pixels of the same or similar gray level.
Inter-pixel Redundancy: Example
(Figures: pixel profiles of line 128 of the Tiffany and wheel test images.)
Inter-pixel Redundancy: Example
For line 128 of the wheel image, the similar segments (gray level, run length) are: (77, 1), (206, 30), (121, 88), (77, 18), (121, 88), (206, 30), (77, 1).
Coding each segment with 16 bits:

    CR = (256 x 8) / (7 x 16) ≈ 18.3
    RD = 1 - 1/18.3 ≈ 0.95

Here n1 is the amount of data in the original representation and n2 the amount of data needed to represent the same information (CR = n1/n2).
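The run-length coding used in this example can be sketched as follows; the line is reconstructed from the runs given on the slide:

```python
def rle_encode(pixels):
    """Run-length encode a sequence into (value, run-length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [tuple(r) for r in runs]

# Line 128 of the wheel image, reconstructed from the slide's runs.
line = [77] + [206] * 30 + [121] * 88 + [77] * 18 + [121] * 88 + [206] * 30 + [77]
runs = rle_encode(line)
cr = (len(line) * 8) / (len(runs) * 16)  # (256 x 8) / (7 x 16) ≈ 18.3
```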
Psychovisual Redundancy
For human visual perception, certain information has less relative importance. For example, appropriate quantization of gray levels does not noticeably impact visual quality (e.g. the Tiffany image). Such information can be eliminated without significant quality loss using lossy (irreversible) compression, i.e. quantization.
Basic Information Theory
The amount of information carried by a symbol with probability p is

    I(p) = log(1/p) = -log p

The average amount of information is the entropy:

    H(S) = -sum_i p_i log p_i

Theoretically, this is the lowest average number of bits required to represent one symbol.
Basic Information Theory: Example

Example 1:
    Symbol  Probability
    A       1/4
    B       1/4
    C       1/4
    D       1/4

    H(S) = -4 x (1/4) log2(1/4) = 2

On average, a minimum of 2 bits per symbol is required.

Example 2:
    Symbol  Probability
    A       1/2
    B       1/8
    C       1/8
    D       1/4

    H(S) = -(1/2) log2(1/2) - 2 x (1/8) log2(1/8) - (1/4) log2(1/4) = 1.75

On average, a minimum of 1.75 bits per symbol is required.
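Both entropy calculations can be checked with a few lines of Python:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H(S) = -sum(p * log2 p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1/4, 1/4, 1/4, 1/4]))  # 2.0
print(entropy([1/2, 1/8, 1/8, 1/4]))  # 1.75
```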
Compression Methods
There are two types of compression methods:
- Lossy image compression
- Lossless image compression
Lossy Compression Methods
Lossy image compression methods are required to achieve high compression ratios for complex images. Lossy compression can be performed in both the spatial and transform domains. The quantization-dequantization process introduces loss in the reconstructed image and is inherently responsible for the "lossy" nature of the compression scheme. The quantized transform coefficient is computed by

    T_pq = Round(B_pq / Q_pq)

where B_pq is the frequency-domain image signal at coordinates (p, q) of the k-th block and Q_pq is the corresponding quantization threshold.
Lossy Compression Methods
During dequantization, approximate DCT coefficients are obtained by multiplying the quantized coefficient by the corresponding quantization threshold:

    B'_pq = T_pq x Q_pq
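The quantize-dequantize round trip above is where the loss happens. A minimal sketch, with a hypothetical coefficient value and threshold:

```python
def quantize(B, Q):
    """T = Round(B / Q): the lossy step."""
    return round(B / Q)

def dequantize(T, Q):
    """B' = T * Q: approximate reconstruction."""
    return T * Q

B, Q = 187.6, 16             # hypothetical DCT coefficient and threshold
T = quantize(B, Q)           # 12
B_approx = dequantize(T, Q)  # 192: close to 187.6, but not equal (lossy)
```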
Lossless Compression Methods
Lossless compression methods are needed in digital imaging applications where no information may be lost, such as medical and X-ray imaging. Lossless compression techniques include run-length coding, Huffman coding, and lossless predictive coding.
Huffman Coding
Huffman coding is a well-known method that uses variable-length code (VLC) tables for compressing data. Given a set of data symbols and their probabilities, the method creates a set of variable-length codewords with the shortest average length and assigns them to the symbols.
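The method above can be sketched with a heap-based merge of the two least-probable subtrees; this is a minimal illustration of the algorithm, not JPEG's canonical code tables. Using the probabilities from the entropy example earlier (A: 1/2, B: 1/8, C: 1/8, D: 1/4):

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code table {symbol: bitstring} from {symbol: probability}."""
    # Heap entries: (weight, tiebreak, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Merge the two least-probable subtrees, prefixing "0"/"1".
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes({"A": 0.5, "B": 0.125, "C": 0.125, "D": 0.25})
# Codeword lengths: A = 1 bit, D = 2 bits, B = C = 3 bits.
```

The resulting average length, 0.5x1 + 0.25x2 + 2x0.125x3 = 1.75 bits, matches the entropy computed for these probabilities earlier, and the code is prefix-free, so the bit-stream can be decoded unambiguously.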
Huffman Coding: Examples
(Figures: two worked Huffman coding examples, including the binary codeword tree representation for example 2.)
Note: log2(x) = log10(x) x log2(10) ≈ 3.3219 log10(x)
Summary of Huffman Coding
- Achieves minimal redundancy subject to the constraint that the source symbols are coded one at a time.
- Sorting symbols in descending order of probability is the key step in source reduction.
- The codeword assignment is not unique: exchanging the "0" and "1" labels at any node of the binary codeword tree produces another solution that works equally well.
- Works only for a source with a finite number of symbols (otherwise it does not know where to start).