CG
3/1/2023
LECTURE No. (7)
Dr. Majid D. Y.
dr.majid@uomosul.edu.iq
Surface Detection Methods
Why?
We must determine what is visible within a
scene from a chosen viewing position
For 3D worlds this is known as visible
surface detection or hidden surface
elimination
Two Main Approaches
Visible surface detection algorithms are
broadly classified as:
– Object Space Methods: Compare objects and parts of objects to each other within the
scene definition to determine which surfaces are visible
– Image Space Methods: Visibility is decided
point-by-point at each pixel position on the
projection plane
Image space methods are by far the more
common
Back-Face Detection
The simplest thing we can do is find the
faces on the backs of polyhedra and
discard them
Back-Face Detection Method
• A fast and simple object-space method for identifying the back faces of a polyhedron
is based on the "inside-outside" test. A point (x, y, z) is "inside" a polygon surface
with plane parameters A, B, C, and D if

    Ax + By + Cz + D < 0

• When an inside point is along the line of sight to the surface, the polygon must be
a back face (we are inside that face and cannot see the front of it from our viewing
position).
Back-Face Detection Method
• We can simplify this test by considering the normal vector N to a polygon surface,
which has Cartesian components (A, B, C). In general, if V is a vector in the viewing
direction from the eye (or "camera") position, then this polygon is a back face if

    V · N > 0
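To make the test concrete, here is a minimal Python sketch of the V · N test, assuming
the normal N = (A, B, C) and the viewing-direction vector V are available as plain
(x, y, z) tuples; the function name is illustrative.

def is_back_face(normal, view_dir):
    # Back-face test: the polygon faces away from the viewer when the
    # viewing direction V and the surface normal N point the same way,
    # i.e. V . N > 0.
    a, b, c = normal            # Cartesian components (A, B, C) of N
    vx, vy, vz = view_dir       # vector V in the viewing direction
    return a * vx + b * vy + c * vz > 0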
N calculation
For a polygon with vertices P1(X1, Y1, Z1), P2(X2, Y2, Z2) and P3(X3, Y3, Z3), the
normal is

    N = (P1 - P2) × (P3 - P2)

          | i       j       k     |
      =   | X1-X2   Y1-Y2   Z1-Z2 |
          | X3-X2   Y3-Y2   Z3-Z2 |

so its components N = (A, B, C) are

    A = (Y1 - Y2)(Z3 - Z2) - (Z1 - Z2)(Y3 - Y2)
    B = (Z1 - Z2)(X3 - X2) - (X1 - X2)(Z3 - Z2)
    C = (X1 - X2)(Y3 - Y2) - (Y1 - Y2)(X3 - X2)
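As a minimal sketch, the same calculation in Python, assuming the three vertices are
given as (x, y, z) tuples (the function name is illustrative):

def polygon_normal(p1, p2, p3):
    # N = (P1 - P2) x (P3 - P2), returning the components (A, B, C)
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    x3, y3, z3 = p3
    a = (y1 - y2) * (z3 - z2) - (z1 - z2) * (y3 - y2)
    b = (z1 - z2) * (x3 - x2) - (x1 - x2) * (z3 - z2)
    c = (x1 - x2) * (y3 - y2) - (y1 - y2) * (x3 - x2)
    return a, b, c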
Back-Face Detection Method
• Ensure we have a right-handed system with the viewing direction along the negative
z-axis. Now we can simply say that if the z component (C) of the polygon's normal is
less than zero, the surface cannot be seen.

    V = (0, 0, Vz)  =>  V · N = Vz · C

so we only need to consider the sign of C, the z component of the normal vector N.
Back-Face Detection Method
• In a right-handed viewing system with the viewing direction along the negative
z-axis, the polygon is a back face if C <= 0.
• In a left-handed viewing system with the viewing direction along the positive
z-axis, the polygon is a back face if C > 0.
Back-Face Detection (cont…)
In general back-face detection can be
expected to eliminate about half of the
polygon surfaces in a scene from further
visibility tests
More complicated surfaces, though, scupper us! We need better techniques to handle
these kinds of situations
Depth-Buffer Method
Compares surface depth values throughout
a scene for each pixel position on the
projection plane
Usually applied to scenes only containing
polygons
As depth values can be computed easily,
this tends to be very fast
Also often called the z-buffer method
Depth-Buffer Method (cont…)
(Images taken from Hearn & Baker, "Computer Graphics with OpenGL", 2004)
Depth-Buffer Algorithm
1. Initialise the depth buffer and frame buffer
so that for all buffer positions (x, y)
depthBuff(x, y) = 1.0
frameBuff(x, y) = backgroundColour
Depth-Buffer Algorithm (cont…)
2. Process each polygon in a scene, one at
a time
– For each projected (x, y) pixel position of a
polygon, calculate the depth z (if not already
known)
– If z < depthBuff(x, y), compute the surface
colour at that position and set
depthBuff(x, y) = z
frameBuff(x, y) = surfColour(x, y)
After all surfaces are processed depthBuff
and frameBuff will store correct values
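A minimal Python sketch of the two steps above; it assumes each polygon object can
report its projected pixels with a depth and colour (the rasterise method and the
normalised depth range [0, 1] are illustrative assumptions, not part of any
particular API):

def z_buffer(polygons, width, height, background_colour):
    # Step 1: initialise the depth buffer and the frame buffer
    depth_buff = [[1.0] * width for _ in range(height)]             # 1.0 = farthest depth
    frame_buff = [[background_colour] * width for _ in range(height)]

    # Step 2: process each polygon in the scene, one at a time
    for poly in polygons:
        for x, y, z, colour in poly.rasterise(width, height):       # projected pixels
            if z < depth_buff[y][x]:                                 # closer than stored depth?
                depth_buff[y][x] = z                                 # record new nearest depth
                frame_buff[y][x] = colour                            # and the surface colour

    return frame_buff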
A-Buffer Method
The A-buffer method is an extension of the
depth-buffer method
The A-buffer method is a visibility-detection method developed at Lucasfilm Studios
for the rendering system REYES (Renders Everything You Ever Saw)
A-Buffer Method (cont…)
The A-buffer expands on the depth buffer
method to allow transparencies
The key data structure in the A-buffer is the
accumulation buffer
A-Buffer Method (cont…)
If depth is >= 0, then the surface data field
stores the depth of that pixel position as
before
If depth < 0 then the data field stores a pointer to a linked list of surface data
A-Buffer Method (cont…)
Surface information in the A-buffer includes:
– RGB intensity components
– Opacity parameter
– Depth
– Percent of area coverage
– Surface identifier
– Other surface rendering parameters
The algorithm proceeds just like the depth
buffer algorithm
The depth and opacity values are used to
determine the final colour of a pixel
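A rough Python sketch of one accumulation-buffer cell holding this information; the
field names are illustrative, not taken from any particular implementation:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SurfaceFragment:
    rgb: Tuple[float, float, float]   # RGB intensity components
    opacity: float                    # opacity parameter (0 = transparent, 1 = opaque)
    depth: float
    coverage: float                   # percent of pixel area covered
    surface_id: int                   # surface identifier

@dataclass
class ABufferCell:
    # depth >= 0: a single opaque surface, stored exactly as in the depth buffer
    depth: float = 1.0
    # depth < 0 flags that several surfaces overlap this pixel, and their data
    # is kept in this list (the "pointer to a linked list of surface data")
    fragments: List[SurfaceFragment] = field(default_factory=list)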
Scan-Line Method
An image space method for identifying
visible surfaces
Computes and compares depth values
along the various scan-lines for a scene
Depth-Sorting Method
A visible surface detection method that uses
both image-space and object-space
operations
Basically, the following two operations are
performed
– Surfaces are sorted in order of decreasing
depth
– Surfaces are scan-converted in order, starting
with the surface of greatest depth
The depth-sorting method is often also known
as the painter’s method
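A minimal Python sketch of these two operations, assuming each polygon exposes its
greatest depth and that a scan_convert routine draws one polygon into the frame
buffer (both names are illustrative); nearer surfaces are drawn later and so
overwrite farther ones:

def painters_algorithm(polygons, scan_convert):
    # Sort surfaces in order of decreasing depth, then scan-convert them in
    # that order, starting with the surface of greatest depth.
    for poly in sorted(polygons, key=lambda p: p.max_depth, reverse=True):
        scan_convert(poly)            # draw the deepest remaining surface first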
