Explaining basic 3D theory

This article explains all of the basic 3D theory that's useful to know when you are first getting started working with 3D.

Coordinate system

3D is essentially all about representations of shapes in a 3D space, with a coordinate system used to calculate their position.

Right hand coordinate system (x, y, z) with a blue cube.

WebGL uses a right-handed coordinate system — the x axis points to the right, the y axis points up, and the z axis points out of the screen, as seen in the above diagram. (Right-handed means that if you curl the fingers of your right hand from the positive x axis toward the positive y axis, your thumb points along the positive z axis.)
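For example, the 8 corners of a unit cube (like the one in the diagram) could be written down in this coordinate system as follows. This is only a minimal sketch — the Float32Array layout matches what WebGL vertex buffers expect, but the cube's size is an arbitrary choice:

```typescript
// The 8 corners of a unit cube in WebGL's right-handed coordinate system:
// +x to the right, +y up, +z toward the viewer (out of the screen).
const cubeVertices = new Float32Array([
  //  x,    y,    z
  -0.5, -0.5,  0.5, // front bottom left
   0.5, -0.5,  0.5, // front bottom right
   0.5,  0.5,  0.5, // front top right
  -0.5,  0.5,  0.5, // front top left
  -0.5, -0.5, -0.5, // back bottom left
   0.5, -0.5, -0.5, // back bottom right
   0.5,  0.5, -0.5, // back top right
  -0.5,  0.5, -0.5, // back top left
]);
```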

Rendering pipeline

The rendering pipeline is the process by which images are prepared and output onto the screen. The graphics rendering pipeline takes the 3D objects built from primitives described using vertices, applies processing, calculates the fragments and renders them on the 2D screen as pixels.

All shapes are built from vertices. Every vertex is described by these attributes (a sketch of them follows the list):

  • Position: Identifies the vertex in 3D space (x, y, z).
  • Color: Holds an RGBA value (R, G and B for the red, green, and blue channels, alpha for transparency — all values range from 0.0 to 1.0).
  • Normal: A way to describe the direction the vertex is facing.
  • Texture: A 2D image that the vertex can use to decorate the surface it is part of instead of a simple color.
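Here is a sketch of those four attributes as a plain TypeScript structure. The interface and field names are illustrative choices, not part of any WebGL API; in real code the values end up packed into typed-array buffers, but the shape of the data is the same:

```typescript
// One vertex with the four attributes described above.
// `uv` holds the 2D texture coordinates that point into the texture image.
interface Vertex {
  position: [number, number, number];      // x, y, z
  color: [number, number, number, number]; // r, g, b, a — each in 0.0–1.0
  normal: [number, number, number];        // the direction the vertex faces
  uv: [number, number];                    // where the texture is sampled
}

const vertex: Vertex = {
  position: [0.5, -0.5, 0.5],
  color: [1.0, 0.0, 0.0, 1.0], // opaque red
  normal: [0.0, 0.0, 1.0],     // facing the viewer
  uv: [1.0, 0.0],
};
```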

Other terminology worth knowing is as follows:

  • A Primitive: An input to the pipeline — it's built from vertices and can be a triangle, point or line.
  • A Pixel: A point on the 2D screen grid, which holds an RGB color.
  • A Fragment: A 3D projection of a pixel, which has all the same attributes as a pixel.

Objects

A face of a given shape is a plane between vertices. For example, a cube has 8 vertices (points in space) and 6 faces, each constructed from 4 vertices. Connecting the points also creates the edges of the cube. The geometry is built from vertices and faces, while the material is a texture, which uses an image. If we connect the geometry with the material we get a mesh.
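A minimal sketch of that idea in Three.js (one of the libraries covered by the tutorials linked at the end); the box size and color are arbitrary example values:

```typescript
import * as THREE from 'three';

// Geometry: vertices and faces describing a 1×1×1 cube.
const geometry = new THREE.BoxGeometry(1, 1, 1);

// Material: here a simple solid color; a texture map could be used instead.
const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });

// Mesh: the geometry connected with the material.
const mesh = new THREE.Mesh(geometry, material);
// scene.add(mesh) would then place it in the world.
```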

The rendering pipeline consists of vertex processing and fragment processing. Both are programmable — you can write your own shaders that manipulate the output.
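For example, a minimal shader pair could look like the following. The GLSL source is embedded as strings, which is how WebGL consumes it; names such as aPosition, uModelViewProjection, and uColor are illustrative choices, not fixed WebGL names:

```typescript
// Vertex shader: runs once per vertex during vertex processing.
const vertexShaderSource = `
  attribute vec4 aPosition;
  uniform mat4 uModelViewProjection;

  void main() {
    // Transform the vertex into clip space.
    gl_Position = uModelViewProjection * aPosition;
  }
`;

// Fragment shader: runs once per fragment during fragment processing.
const fragmentShaderSource = `
  precision mediump float;
  uniform vec4 uColor;

  void main() {
    // Output the final color of this fragment.
    gl_FragColor = uColor;
  }
`;
```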

Transformation matrix

Vertex processing is all about transforming the coordinates and projecting them onto the screen. A transform converts a vertex from one space to another, and is done by multiplying the vector by the transformation matrix.
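Here is a sketch of that core operation — multiplying a 4-component vector by a 4×4 matrix stored in column-major order, which is WebGL's conventional layout:

```typescript
type Vec4 = [number, number, number, number];
type Mat4 = Float32Array; // 16 entries, column-major

// Multiply vector v by matrix m: each output component is the dot product
// of one matrix row with the vector.
function transform(m: Mat4, v: Vec4): Vec4 {
  const [x, y, z, w] = v;
  return [
    m[0] * x + m[4] * y + m[8] * z + m[12] * w,
    m[1] * x + m[5] * y + m[9] * z + m[13] * w,
    m[2] * x + m[6] * y + m[10] * z + m[14] * w,
    m[3] * x + m[7] * y + m[11] * z + m[15] * w,
  ];
}
```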

There are four stages to this processing: arranging the objects in the world (called world or model transformation), positioning and setting the orientation of the camera (view transformation), defining the camera settings (projection transformation) and outputting the image (viewport transformation).

Model (world) transformation

Objects are drawn in local space, so they need to be transformed into the global world space. This is done with affine transforms — rotation and scaling are linear transformations, while translation is affine but not linear.
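A minimal sketch using the popular gl-matrix library, composing translation, rotation, and scale into one model matrix; the specific values are arbitrary examples:

```typescript
import { mat4 } from 'gl-matrix';

const modelMatrix = mat4.create(); // starts as the identity matrix
mat4.translate(modelMatrix, modelMatrix, [2, 0, -5]);  // move into world space
mat4.rotateY(modelMatrix, modelMatrix, Math.PI / 4);   // rotate 45° around y
mat4.scale(modelMatrix, modelMatrix, [1.5, 1.5, 1.5]); // uniform scale
```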

View transformation

View transformation is about placing the camera in the 3D space. The camera has three parameters — location, direction, and orientation — which define the view matrix, used to transform world-space coordinates into camera (view) space.
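With gl-matrix, the three parameters map directly onto a lookAt call; the eye, target, and up vectors below are example values:

```typescript
import { mat4 } from 'gl-matrix';

const viewMatrix = mat4.create();
mat4.lookAt(
  viewMatrix,
  [0, 2, 5], // eye: the camera's location in world space
  [0, 0, 0], // center: the point the camera looks at (direction)
  [0, 1, 0]  // up: the camera's orientation
);
```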

Projection (perspective) transformation

Projection sets up what can be seen by the camera — the configuration includes the field of view, the aspect ratio, and optionally the near and far planes. Objects outside of the camera's view are not visible, and are skipped during rendering to boost performance. If an object is partially visible, it is clipped to the camera's visible area. Projection transforms individual vertices.
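Using gl-matrix again, all of those settings appear directly as the parameters of a perspective projection matrix; the values below are arbitrary examples:

```typescript
import { mat4 } from 'gl-matrix';

const aspectRatio = 16 / 9; // usually canvas.width / canvas.height

const projectionMatrix = mat4.create();
mat4.perspective(
  projectionMatrix,
  Math.PI / 3, // vertical field of view: 60°, in radians
  aspectRatio,
  0.1,         // near plane: anything closer is clipped
  100.0        // far plane: anything farther is clipped
);
```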

Rasterization (viewport) transformation

Rasterization converts primitives into a set of fragments and maps them onto the 2D viewport.
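A sketch of what this mapping does for a single point, going from normalized device coordinates (x and y each in the range -1 to 1) to window coordinates in pixels:

```typescript
// Map a point from normalized device coordinates to window coordinates.
// In WebGL, the window origin is the bottom-left corner of the viewport.
function ndcToViewport(
  x: number,
  y: number,
  width: number,
  height: number
): [number, number] {
  const px = ((x + 1) / 2) * width;
  const py = ((y + 1) / 2) * height;
  return [px, py];
}

// In WebGL itself, this mapping is configured with a single call:
// gl.viewport(0, 0, canvas.width, canvas.height);
```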

Fragment processing

Fragment processing focuses on textures and lighting. It calculates the final color of each fragment based on the given parameters.
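For example, a fragment shader can combine a texture sample with a simple diffuse lighting term. A sketch in GLSL, with illustrative uniform and varying names:

```typescript
const fragmentShaderSource = `
  precision mediump float;

  uniform sampler2D uTexture;   // the 2D image applied to the surface
  uniform vec3 uLightDirection; // normalized, pointing toward the light

  varying vec2 vUv;             // texture coordinates from the vertex stage
  varying vec3 vNormal;         // surface normal from the vertex stage

  void main() {
    vec4 texel = texture2D(uTexture, vUv);
    float diffuse = max(dot(normalize(vNormal), uLightDirection), 0.0);
    gl_FragColor = vec4(texel.rgb * diffuse, texel.a);
  }
`;
```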

Output manipulation

During output manipulation we use the z-buffer (also known as the depth buffer) to work out which fragments are actually visible. Removing everything that is hidden behind another object can greatly increase the performance of the application.

If one object is in front of another and is not entirely opaque (its material has transparency), the object behind it has to be rendered with that in mind — alpha blending can be used to calculate the proper colors of the objects in this situation.
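In WebGL, both techniques are switched on through rendering state. A minimal sketch, assuming gl is an existing WebGLRenderingContext:

```typescript
// Assumed to exist: a WebGL context obtained from a <canvas> element.
declare const gl: WebGLRenderingContext;

// Depth (z-) buffering: fragments hidden behind closer ones are discarded.
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL); // a fragment passes if it is at least as close

// Alpha blending: transparent fragments are mixed with what is behind them.
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
```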

Lighting

The color we see on the screen is a result of the light source interacting with the surface color of the object's material. Light might be absorbed or reflected. The standard Phong lighting model, commonly implemented in WebGL shaders, combines four basic types of lighting (summed in the sketch after this list):

  • Diffuse: A distant directional light, like the sun.
  • Specular: A point of light, just like a light bulb in a room or a flashlight.
  • Ambient: A constant light applied to everything in the scene.
  • Emissive: The light emitted directly by the object.
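A sketch of how these four contributions are typically combined in a Phong-style fragment shader — all uniform and varying names below are illustrative choices:

```typescript
// Phong-style lighting: emissive + ambient + diffuse + specular.
const phongFragmentSource = `
  precision mediump float;

  uniform vec3 uEmissiveColor;
  uniform vec3 uAmbientColor;
  uniform vec3 uDiffuseColor;
  uniform vec3 uSpecularColor;
  uniform vec3 uLightDirection; // normalized, pointing toward the light
  uniform float uShininess;

  varying vec3 vNormal;
  varying vec3 vViewDirection;  // from the surface toward the camera

  void main() {
    vec3 n = normalize(vNormal);

    // Diffuse: strongest where the surface faces the light.
    float diff = max(dot(n, uLightDirection), 0.0);

    // Specular: strongest where reflected light heads toward the viewer.
    vec3 reflected = reflect(-uLightDirection, n);
    float spec = pow(max(dot(reflected, normalize(vViewDirection)), 0.0), uShininess);

    vec3 color = uEmissiveColor
               + uAmbientColor
               + uDiffuseColor * diff
               + uSpecularColor * spec;

    gl_FragColor = vec4(color, 1.0);
  }
`;
```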

Conclusion

Now you know the basic theory behind 3D manipulation. If you want to move on to practice and see some demos in action, follow up with the tutorials below:

  • Building up a basic demo with Three.js
  • Building up a basic demo with Babylon.js
  • Building up a basic demo with PlayCanvas
  • Building up a basic demo with A-Frame

Go ahead and create some cool cutting-edge 3D experiments yourself!

Revision Source

<p>This article explains all of the basic 3D theroy that's useful to know when you are first getting started working with 3D.</p>

<h2 id="Coordinate_system">Coordinate system</h2>

<p>3D essentially is all about representations of shapes in a 3D space, with a coordinate system used to calculate their position.</p>

<p><img alt="Right hand coordinate system (x, y, z) with a blue cube." src="https://mdn.mozillademos.org/files/12974/coordinate-system.png" style="height:450px; width:600px" /></p>

<p>WebGL uses the right-hand coordinate system — the <code>x</code> axis points to the right, the <code>y</code> axis points up, and the <code>z</code> axis points out of the screen, as seen in the above diagram. It is counter-clockwise to cartesian coordinates.</p>

<h2 id="Rendering_pipeline">Rendering pipeline</h2>

<p>The rendering pipeline is the process by which images are prepared and output onto the screen. The graphics rendering pipeline takes the 3D objects built from <strong>primitives</strong> described using <strong>vertices</strong>, applies processing, calculates the <strong>fragments</strong> and renders them on the 2D screen as <strong>pixels</strong>.</p>

<p>All the shapes are built from <strong>vertices</strong>. Every vertex is described by these attributes:</p>

<ul>
 <li><strong>Position</strong>: Identifies it in a 3D space (<code>x</code>, <code>y</code>, <code>z</code>).</li>
 <li><strong>Color</strong>: Holds an RGBA value (R, G and B for the red, green, and blue channels, alpha for transparency — all values range from <code>0.0</code> to <code>1.0</code>).</li>
 <li><strong>Normal:</strong> A way to describe the direction the vertex is facing.</li>
 <li><strong>Texture</strong>: A 2D image that the vertex can use to decorate its surface instead of a simple color.</li>
</ul>

<p>Other terminology worth know is as follows:</p>

<ul>
 <li>A <strong>Primitive</strong>: An input to the pipeline — it's built from vertices and can be a triangle, point or line.</li>
 <li>A <strong>Pixel</strong>: A point on the screen arranged in the 2D grid, which holds an RGB color.</li>
 <li>A <strong>Fragment</strong>: A 3D projection of a pixel, and has all the same attributes as a pixel.</li>
</ul>

<h3 id="Objects">Objects</h3>

<p>A face of the given shape is a plane between vertices. For example, a cube has 8 different vertices (points in space) and 6 different faces, each constructed out of 4 vertices. Also, by connecting the points we're creating the edges of the cube. The geometry is built from a vertex and the face, while material is a texture, which uses an image. If we connect the geometry with the material we will get a mesh.</p>

<p>Rendering pipeline consists of vertex and fragment processing. They are programmable — you can <a href="https://developer.mozilla.org/en-US/docs/Games/Techniques/3D_on_the_web/GLSL_Shaders">write your own shaders</a> that manipulate the output.</p>

<h2 id="Transformation_matrix">Transformation matrix</h2>

<p>Vertex processing is all about transforming the coordinates and projecting them onto the screen. A transform converts a vertex from one space to another, and is done by multiplying the vector with the transformation matrix.</p>

<p>There are four stages to this processing: arranging the objects in the world (called world or model transformation), positioning and setting the orientation of the camera (view transformation), defining the camera settings (projection transformation) and outputting the image (viewport transformation).</p>

<h3 id="Model_(world)_transformation">Model (world) transformation</h3>

<p>Objects are drawn in local space, so they need to be transformed to be drawn in the global, world space. It is done with the affine transforms — for example the rotation and scaling belong to the linear transformation, while translation is not linear.</p>

<h3 id="View_transformation">View transformation</h3>

<p>View transformation is about placing the camera in the 3D space. The camera has three parameters: location, direction, and orientation. The view matrix is used to transform the camera space to the world space.</p>

<h3 id="Projection_(perspective)_transformation">Projection (perspective) transformation</h3>

<p>Projection sets up what can be seen by the camera — the configuration includes field of view, aspect ratio and optional near and far planes. Objects outside of the view are not visible, and are ignored in the rendering process to boost performance. If an object is partially visible it is clipped to the camera's visible area. Projection transforms individual vertices.</p>

<h3 id="Rasterization_(viewport)_transformation">Rasterization (viewport) transformation</h3>

<p>Rasterization converts primitives to a set of fragments and maps them to the 3D viewport.</p>

<h2 id="Fragment_processing">Fragment processing</h2>

<p>Fragment processing focuses on textures and lightning. It calculates final colors based on the given parameters.</p>

<h3 id="Output_manipulation">Output manipulation</h3>

<p>During the output manipulation we use the z-buffer, or depth-buffer. Removing everything that is not visible because it was hidden behind another object can greatly increase the performance of the application.</p>

<p>If one object is in front of the other and it's not entirely opaque (the material has transparency), the object behind it has to be rendered with that in mind — alpha blending can be used to calculate the proper colors of the objects in this situation.</p>

<h3 id="Lightning">Lighting</h3>

<p>The color we see on the screen is a result of the light source interacting with the surface color of the object's material. Light might be absorbed or reflected. The standard Phong Lightning Model implemented in WebGL has four basic types of lighting:</p>

<ul>
 <li><strong>Diffuse</strong>: A distant directional light, like the sun.</li>
 <li><strong>Specular</strong>: A point of light, just like a light bulb in a room or a flash light.</li>
 <li><strong>Ambient</strong>: The constant light applied to everything on the scene.</li>
 <li><strong>Emissive</strong>: The light emitted directly by the object.</li>
</ul>

<h2 id="Conclusion">Conclusion</h2>

<p>Now you know the basic theory behind 3D manipulation. If you want to move on to practice and see some demos in action, follow up with the tutorials below:</p>

<ul>
 <li><a href="https://developer.mozilla.org/en-US/docs/Games/Techniques/3D_on_the_web/Building_up_a_basic_demo_with_Three.js">Building up a basic demo with Three.js</a></li>
 <li><a href="https://developer.mozilla.org/en-US/docs/Games/Techniques/3D_on_the_web/Building_up_a_basic_demo_with_Babylon.js">Building up a basic demo with Babylon.js</a></li>
 <li><a href="https://developer.mozilla.org/en-US/docs/Games/Techniques/3D_on_the_web/Building_up_a_basic_demo_with_PlayCanvas">Building up a basic demo with PlayCanvas</a></li>
 <li><a href="https://developer.mozilla.org/en-US/docs/Games/Techniques/3D_on_the_web/Building_up_a_basic_demo_with_A-Frame">Building up a basic demo with A-Frame</a></li>
</ul>

<p>Go ahead and create some cool cutting-edge 3D experiments yourself!</p>
Revert to this revision