Visible Surface Detection
Visible Surface Detection
●
Visible surface detection or hidden surface
removal.
●
Realistic scenes: closer objects occlude the others.
●
Classification:
– Object space methods
– Image space methods
Object Space Methods
●
Algorithms that determine, in 3D coordinates, which parts of the shapes are to be rendered.
●
Methods based on comparing the 3D positions and dimensions of objects with respect to the viewing position.
●
For N objects, this may require N*N comparison operations.
●
Efficient for a small number of objects, but difficult to implement.
●
Depth sorting, area subdivision methods.
Image Space Methods
●
Based on the pixels to be drawn in 2D: for each pixel, try to determine which object should contribute to it.
●
Running time complexity is the number of pixels times the number of objects.
●
Space complexity is two times the number of pixels:
– One array of pixels for the frame buffer
– One array of pixels for the depth buffer
●
Coherence properties of surfaces can be used.
●
Depth-buffer and ray casting methods.
Depth Cueing
●
Hidden surfaces are not removed, but are displayed with different effects such as intensity, color, or shadow, to give a hint of the object's third dimension.
●
Simplest solution: use different colors and intensities based on the depth of the shapes.
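As a rough illustration of this idea (the linear fall-off and the 0.7 factor below are arbitrary assumptions, not taken from the slides), a color could be attenuated with its normalized depth like this:

def depth_cue(color, z):
    """Dim an RGB color according to normalized depth z in [0, 1]
    (0 = nearest, 1 = farthest). Linear fall-off is an illustrative choice."""
    factor = 1.0 - 0.7 * z          # nearer surfaces stay brighter
    return tuple(factor * c for c in color)

# The same red surface, drawn near and far from the viewer:
print(depth_cue((1.0, 0.0, 0.0), z=0.1))   # bright
print(depth_cue((1.0, 0.0, 0.0), z=0.9))   # dimmed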
Back-Face Detection
●
Back-face detection of a 3D polygon surface is easy.
●
Recall the polygon surface equation: a point (x, y, z) is behind the surface if
Ax + By + Cz + D < 0
●
We need to also consider the viewing direction when determining whether a surface is a back face or a front face.
●
The normal of the surface is given by:
N = (A, B, C)
Back-Face Detection
●
A polygon surface is a back face if:
V_view · N > 0
●
However, remember that after application of the viewing transformation we are looking down the negative z-axis. Therefore a polygon is a back face if:
(0, 0, −1) · N > 0, or if C < 0
Back-Face Detection
●
We will also be unable to see surfaces with C = 0. Therefore, we can identify a polygon surface as a back face if:
C ≤ 0
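A minimal sketch of this test, assuming each polygon supplies three of its vertices in counter-clockwise order as seen from its front side (the function names are illustrative):

def surface_normal(v1, v2, v3):
    """Normal N = (A, B, C) of the plane through three vertices."""
    ux, uy, uz = v2[0] - v1[0], v2[1] - v1[1], v2[2] - v1[2]
    wx, wy, wz = v3[0] - v1[0], v3[1] - v1[1], v3[2] - v1[2]
    return (uy * wz - uz * wy,   # A
            uz * wx - ux * wz,   # B
            ux * wy - uy * wx)   # C

def is_back_face(v1, v2, v3):
    """Back face if C <= 0, i.e. the surface cannot face a viewer
    looking along the negative z-axis."""
    _, _, c = surface_normal(v1, v2, v3)
    return c <= 0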
Back-Face Detection
●
Back-face detection can identify all the hidden surfaces in a scene that contains non-overlapping convex polyhedra.
●
But for scenes that contain objects overlapping along the line of sight, we have to apply further tests to determine which objects obscure which.
Depth-Buffer Method
●
Also known as z-buffer method.
●
It is an image space approach
– Each surface is processed separately, one pixel position at a time across the surface
– The depth values for a pixel are compared, and the closest (smallest z) surface determines the color to be displayed in the frame buffer
– Applied very efficiently to polygon surfaces
– Surfaces can be processed in any order
Depth-Buffer Method
●
Two buffers are used
– Frame Buffer
– Depth Buffer
●
The z-coordinates (depth values) are usually
normalized to the range [0,1]
Depth-Buffer Algorithm
●
Initialize the depth buffer and frame buffer so that for all buffer positions (x, y),
depthBuff(x, y) = 1.0, frameBuff(x, y) = bgColor
●
Process each polygon in a scene, one at a time
– For each projected (x, y) pixel position of a polygon, calculate the depth z.
– If z < depthBuff(x, y), compute the surface color at that position and set
depthBuff(x, y) = z, frameBuff(x, y) = surfCol(x, y)
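A minimal sketch of the algorithm above; the Polygon helpers projected_pixels, depth_at and surf_color are assumptions, and a real rasterizer would visit only the pixels actually covered by each projected polygon:

def depth_buffer_render(polygons, width, height, bg_color=(0, 0, 0)):
    # Step 1: initialize both buffers for every position (x, y).
    depth_buff = [[1.0] * width for _ in range(height)]      # 1.0 = far plane
    frame_buff = [[bg_color] * width for _ in range(height)]

    # Step 2: process each polygon, in any order.
    for poly in polygons:
        for (x, y) in poly.projected_pixels(width, height):  # assumed helper
            z = poly.depth_at(x, y)                          # from the plane equation
            if z < depth_buff[y][x]:                         # closer than stored depth
                depth_buff[y][x] = z
                frame_buff[y][x] = poly.surf_color(x, y)     # assumed shading helper
    return frame_buff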
Calculating depth values efficiently
●
We know the depth values at the vertices. How can we calculate the depth at any other point on the surface of the polygon?
●
Using the polygon surface equation:
z = (−Ax − By − D) / C
Calculating depth values efficiently
●
For any scan line, adjacent horizontal x positions or vertical y positions differ by 1 unit.
●
The depth value of the next position (x + 1, y) on the scan line can be obtained using
z′ = (−A(x + 1) − By − D) / C = z − A/C
Calculating depth values efficiently
●
For adjacent scan lines we can compute the x value using the slope of the projected line and the previous x value:
x′ = x − 1/m  ⇒  z′ = z + (A/m + B) / C
Depth-Buffer Method
●
Is able to handle cases such as the one shown below.
[Figure: view from the right side]
Z-Buffer and Transparency
●
We may want to render transparent surfaces (alpha ≠ 1) with a z-buffer.
●
However, we must render in back-to-front order.
●
Otherwise, we would have to store at least the first opaque polygon behind the transparent one.
[Figure: a partially transparent surface and two opaque surfaces, viewed from the front. Left, "OK, no problem": the surfaces are rendered back to front, 1st, 2nd, 3rd. Right, "Problematic ordering": the transparent surface is rendered 1st or 2nd, so its color and depth must be recalled, and the surface rendered 3rd needs the depths of the 1st and 2nd.]
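A small sketch of why back-to-front order helps: each transparent fragment can then be composited directly over whatever color is already in the frame buffer. The "over" blend below is standard alpha compositing; the surrounding render loop is not shown.

def blend_over(src_color, src_alpha, dst_color):
    """Composite a transparent fragment over the color already stored
    in the frame buffer: out = alpha * src + (1 - alpha) * dst."""
    return tuple(src_alpha * s + (1.0 - src_alpha) * d
                 for s, d in zip(src_color, dst_color))

# Rendering back to front, the farthest transparent surface is blended first,
# so no earlier colors or depths need to be remembered.
print(blend_over((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0)))   # 50% red over blue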
A-Buffer Method
●
Extends the depth-buffer algorithm so that
each position in the buffer can reference a
linked list of surfaces.
●
More memory is required
●
However, we can correctly compose different
surface colors and handle transparent
surfaces.
A-Buffer Method
●
Each position in the A-buffer has two fields:
– a depth field
– a surface data field, which holds either the data of a single surface or a pointer to a linked list of surfaces that contribute to that pixel position
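A rough sketch of such a buffer position (the field names are illustrative; one common convention stores a negative depth to signal that the second field points to a fragment list rather than to a single surface's data):

class Fragment:
    """Data for one surface contributing to a pixel."""
    def __init__(self, color, opacity, depth):
        self.color = color
        self.opacity = opacity
        self.depth = depth
        self.next = None            # link to the next contributing fragment

class ABufferCell:
    """One A-buffer position: a depth field plus a surface-data field that
    holds either a single surface's data or the head of a linked list."""
    def __init__(self):
        self.depth = 1.0
        self.surface_data = None    # Fragment, or head of a Fragment list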
Scan Line Method
●
Extension of the scan-line algorithm for filling polygon interiors:
– For all polygons intersecting each scan line:
  – Processed from left to right
  – Depth calculations for each overlapping surface
  – The intensity of the nearest position is entered into the refresh buffer
Tables for the Various Surfaces
●
Edge table
– Coordinate endpoints for each line
– Slope of each line
– Pointers into the polygon table
  – Identify the surfaces bounded by each line
●
Polygon table
– Coefficients of the plane equation for each surface
– Intensity information for the surfaces
– Pointers into the edge table
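These tables could be represented roughly as follows (the class and field names are illustrative assumptions):

class EdgeEntry:
    """One row of the edge table."""
    def __init__(self, p0, p1, surfaces):
        self.p0, self.p1 = p0, p1                 # coordinate endpoints
        dx, dy = p1[0] - p0[0], p1[1] - p0[1]
        self.slope = dy / dx if dx else None      # slope of the line
        self.surfaces = surfaces                  # pointers into the polygon table

class PolygonEntry:
    """One row of the polygon table."""
    def __init__(self, plane, intensity, edges):
        self.plane = plane                        # (A, B, C, D) plane coefficients
        self.intensity = intensity                # intensity/shading information
        self.edges = edges                        # pointers into the edge table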
Active List & Flag
●
Active list
– Contains only the edges that cross the current scan line
– Sorted in order of increasing x
●
Flag for each surface
– Indicates whether we are inside or outside of the surface
– At the leftmost boundary of a surface, the surface flag is turned on
– At the rightmost boundary of a surface, the surface flag is turned off
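A compressed sketch of how one scan line could be processed with the active list and surface flags (x_at, depth_at, intensity and surfaces are assumed attributes of the EdgeEntry/PolygonEntry sketches above):

def process_scan_line(y, active_edges, frame_row):
    """Walk the active edges from left to right, toggling each surface's
    flag at its boundaries; between crossings, write the intensity of
    the nearest surface whose flag is on."""
    active_edges.sort(key=lambda e: e.x_at(y))       # sorted by increasing x
    inside = set()                                   # surfaces with flag turned on
    prev_x = 0
    for edge in active_edges:
        x = int(edge.x_at(y))
        if len(inside) == 1:                         # only one surface: no depth test
            surf = next(iter(inside))
            frame_row[prev_x:x] = [surf.intensity] * (x - prev_x)
        elif len(inside) > 1:                        # overlap: nearest surface wins
            for px in range(prev_x, x):
                nearest = min(inside, key=lambda s: s.depth_at(px, y))
                frame_row[px] = nearest.intensity
        for surf in edge.surfaces:                   # leftmost boundary: flag on,
            inside ^= {surf}                         # rightmost boundary: flag off
        prev_x = x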