Lab 7: Environment Mapping and Normal Mapping

Expected due date: 26-10-2022

Learning Objectives: The purpose of this set of exercises is to become familiar with the concepts behind environment mapping and bump mapping. We will use environment mapping to render a curved reflector and, in the process, learn how to use cube maps and the reflect function. We will also use bump mapping to add small-scale surface detail.

Tasks:


Part 1: Cube Map

Cube maps are a way of texturing an environment so that each of the six faces of a cube is projected onto the scene. A good example is the following image:

![[Pasted image 20221114161209.png|450]]

We build upon the textured Gouraud-shaded sphere from [[CG Lab 6#Part 3: Texture Mapping| Lab 6 - Part 3]], changing the texturing from a single 2D image file to six image files. To do this we first change the texture binding to the cube-map target:

gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture);
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);

We also have to change the texture wrapping and filtering parameters to match the new cube-map target, as shown below:

// Set the texture in the fragment shader
gl.uniform1i(gl.getUniformLocation(program, "texMap"), 0);

gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

All that remains in order to get the correct textures to the fragment shader is to load the image files correctly. Previously, we waited for a created image resource to finish loading before running the WebGL code that uploads it. We use the same process for this exercise; however, we now need to load six images and attach each one to the correct target. A target is the cube-map face that the file will represent. For example, we could map the front of the cube map to its texture in the following manner:

var target = gl.TEXTURE_CUBE_MAP_NEGATIVE_Z;
var filepath = '/res/cm/original/cm_front.png'; // Negative Z

var image = document.createElement('img');
image.crossOrigin = 'anonymous';
image.onload = e => {
    gl.texImage2D(target, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, e.target);
};
image.src = filepath;

We simply repeat this process for the other five faces of the cube map to get all six images properly loaded into the application. We have some design freedom here, as there are several ways to structure this code; in a real application you would probably write a helper function for loading whole cube maps. For now, we can loop over an array of plain JavaScript objects, as shown in the code below:

for (const { target, filepath } of CUBE_MAP_OBJ) {
    // const keeps each face's target in scope when onload fires later;
    // with var, every callback would see only the last face's target.
    const image = document.createElement('img');
    image.crossOrigin = 'anonymous';
    image.onload = e => {
        gl.texImage2D(target, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, e.target);
    };
    image.src = filepath;
}

Where CUBE_MAP_OBJ is defined as shown below (for reference, see WebGL Fundamentals):

const CUBE_MAP_OBJ = [
    {
        target: gl.TEXTURE_CUBE_MAP_POSITIVE_X,
        filepath: '/res/cm/house/house_posx.png' // Positive X
    },
    {
        target: gl.TEXTURE_CUBE_MAP_NEGATIVE_X,
        filepath: '/res/cm/house/house_negx.png' // Negative X
    },
    {
        target: gl.TEXTURE_CUBE_MAP_POSITIVE_Y,
        filepath: '/res/cm/house/house_posy.png' // Positive Y
    },
    {
        target: gl.TEXTURE_CUBE_MAP_NEGATIVE_Y,
        filepath: '/res/cm/house/house_negy.png' // Negative Y
    },
    {
        target: gl.TEXTURE_CUBE_MAP_POSITIVE_Z,
        filepath: '/res/cm/house/house_posz.png' // Positive Z
    },
    {
        target: gl.TEXTURE_CUBE_MAP_NEGATIVE_Z,
        filepath: '/res/cm/house/house_negz.png' // Negative Z
    },
];
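In a larger application the six-entry table above could be generated rather than written by hand. The sketch below is a hypothetical helper (`cubeMapFaces` is not part of the lab code): it relies only on the fact, fixed by the WebGL specification, that `gl.TEXTURE_CUBE_MAP_POSITIVE_X` is `0x8515` and the five remaining face targets are numbered consecutively after it.

```javascript
// Hypothetical helper: builds the {target, filepath} table for a cube map
// from a directory and a file prefix. The numeric enum values are the ones
// defined by the WebGL specification (TEXTURE_CUBE_MAP_POSITIVE_X = 0x8515,
// with the five remaining faces numbered consecutively).
function cubeMapFaces(dir, prefix) {
    const suffixes = ['posx', 'negx', 'posy', 'negy', 'posz', 'negz'];
    const TEXTURE_CUBE_MAP_POSITIVE_X = 0x8515;
    return suffixes.map((suffix, i) => ({
        target: TEXTURE_CUBE_MAP_POSITIVE_X + i, // e.g. 0x8516 = NEGATIVE_X
        filepath: `${dir}/${prefix}_${suffix}.png`,
    }));
}

// Example: reproduce the house table above in one call.
const houseFaces = cubeMapFaces('/res/cm/house', 'house');
```

Because the helper is pure (no WebGL calls), it can be unit-tested without a GL context; only the upload loop needs the real `gl` object.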

We can now use the old texMap variable in the fragment shader, changed from a sampler2D to a samplerCube, to map the texture onto the sphere. Here we can use the world-space normals directly as lookup directions, since the cube map only selects among the six images. The fragment is mapped as shown below:

gl_FragColor = fColor * textureCube(texMap, fNormal.xyz);

The fColor is left over from the previous lab, as the sphere is still lit by Gouraud shading computed in the vertex shader.

Result


Part 2: Environment

Since cube maps are a technique for environment texturing, we can use the same approach as above, but instead of mapping the cube map onto a sphere we map it onto a cube that surrounds the camera: the camera sits inside the cube and the texture is applied to its interior faces. To do this we create the cube at the following coordinates:

var skyboxVertices = [
    vec3(-1.0, -1.0, 0.999),
    vec3(0.999, -1.0, 0.999),
    vec3(0.999, 0.999, 0.999),
    vec3(-1.0, 0.999, 0.999),
    vec3(-1.0, -1.0, -1.0),
    vec3(0.999, -1.0, -1.0),
    vec3(0.999, 0.999, -1.0),
    vec3(-1.0, 0.999, -1.0),
];

We then reuse the code from the book to build the cube out of quadrilaterals, as shown below:

function cube() {
    quad(2, 1, 5, 6);
    quad(6, 5, 8, 7);
    quad(5, 1, 4, 8);
    quad(2, 6, 7, 3);
    quad(4, 3, 7, 8);
    quad(1, 2, 3, 4);
}

function quad(a, b, c, d) {
    var indices = [a, b, c, a, c, d];
    for (var i = 0; i < indices.length; ++i) {
        pointsArray.push(skyboxVertices[indices[i] - 1]); // book indices are 1-based
    }
}
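As a quick sanity check, the triangulation above can be exercised in plain JavaScript with no WebGL context (unit-cube corners here for simplicity): each quad contributes two triangles, so the skybox should yield 6 × 2 × 3 = 36 vertices, matching the count that gl.drawArrays consumes.

```javascript
// Standalone sanity check of the quad()/cube() triangulation above.
// Corners are listed in the same order as skyboxVertices (1-based in quad()).
const corners = [
    [-1, -1,  1], [1, -1,  1], [1, 1,  1], [-1, 1,  1],
    [-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
];
const tris = [];

function quad(a, b, c, d) {
    // One quad becomes triangles (a, b, c) and (a, c, d).
    for (const i of [a, b, c, a, c, d]) tris.push(corners[i - 1]);
}

// Same six faces as the book's cube() function.
[[2, 1, 5, 6], [6, 5, 8, 7], [5, 1, 4, 8],
 [2, 6, 7, 3], [4, 3, 7, 8], [1, 2, 3, 4]].forEach(q => quad(...q));

// tris.length is 36: 6 faces x 2 triangles x 3 vertices.
```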

Up to this point we have rendered shapes using MVP-transformed vertices; now, however, we need the texture coordinates themselves to produce the reflection effect. We load the shape and texture the same way as in the previous part, but the render method now has to treat the two shapes differently. The sphere is rendered as before, with only a slight adjustment to the index of its points. The skybox, on the other hand, must be textured from the inside, so a dedicated texMatrix is used to compute the correct texture lookup. This texMatrix maps the inner faces of the box to the texture: because the camera is enveloped by the box, the lookup direction has to be transformed back from clip space into world space, which makes the texMatrix essentially the inverse of the view and model matrices. This is cube mapping proper: directional lookups into a cube map of square images, each face covering a 90-degree field of view. Our render method therefore creates three matrices.
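One way to read the texMatrix computation in the render method: writing $M$ for the model matrix and $P$ for the perspective matrix, the texture matrix is the inverse of their product, so applying it to a skybox vertex $p$ undoes the camera transform and yields a world-space lookup direction $d$:

$$ \text{texMatrix} = M^{-1} P^{-1} = (P\,M)^{-1}, \qquad d = \text{texMatrix}\; p $$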

Using the above information we can define our render method as follows:

function render() {
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    theta += 0.57;
    phi += 0.00;
    radius += 0.00;

    var R = mat4(); // World rotation matrix
    R = mult(R, rotateY(theta)); // Rotate around the Y axis
    R = mult(R, rotateX(phi)); // Rotate around the X axis
    R = mult(R, rotateZ(radius)); // Rotate around the Z axis

    // Create the matrices
    var modelMatrix = mult(translate(0, 0, -3), R);
    var viewMatrix = perspective(fovY, aspect, near, far);
    var texMatrix = mult(inverse(modelMatrix), inverse(viewMatrix));

    // Send the matrices to the shaders
    gl.uniformMatrix4fv(uniformLocations.worldMatrix, false, flatten(modelMatrix));
    gl.uniformMatrix4fv(uniformLocations.viewMatrix, false, flatten(viewMatrix));
    gl.uniformMatrix4fv(uniformLocations.texMatrix, false, flatten(texMatrix));

    // Draw the background
    var cubeIndex = pointsArray.length - index;
    gl.drawArrays(gl.TRIANGLES, 0, cubeIndex);

    // Reset the texture matrix
    gl.uniformMatrix4fv(uniformLocations.texMatrix, false, flatten(mat4()));

    // Draw the sphere
    gl.drawArrays(gl.TRIANGLES, cubeIndex, pointsArray.length - cubeIndex);
    gl.uniform1i(gl.getUniformLocation(program, "texMap"), 0);

    window.requestAnimationFrame(render);
}

We now have all the information the shaders need to render the data. Our fragment shader does not change; the vertex shader simply computes the new world-space normals. For the skybox it uses the inverse texMatrix to map the reflected texture; this is also why we sent an identity matrix for the sphere, so that it keeps its normal texturing. The vertex shader boils down to the two lines shown below:

fNormal = texMatrix * vec4(vPosition, 1.0);
gl_Position = viewMatrix * modelMatrix * fNormal;

We now have a textured sphere rotating inside a cube whose textures are inverted, resulting in a reflection-like effect.

Result

Rotation Controls:

X-Rotation

Y-Rotation

Z-Rotation

The above controls increase/decrease the speed at which the camera rotates around the sphere about the labeled axis. Warning: there is no limit to the speed!

Part 3: Reflection

In the last part we kept referring to the reflection as a simple effect. This is because the sphere is not reflecting the environment based on its normals; it simply shows one of the six faces of the textured cube map, since the cube-map lookup picks the closest texture in a cardinal direction. Instead, we want the skybox texture to determine what the sphere shows. To do that we compute the eye position and send it to the fragment shader, which calculates the reflected direction using the built-in reflect function. We also need to send a Boolean to the fragment shader to distinguish reflective objects from non-reflective ones (the background). The render method changes to the following code:

function render(){
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    theta += 0.57;
    phi += 0.00;
    radius += 0.00;

    var R = mat4(); // World rotation matrix
    R = mult(R, rotateY(theta)); // Rotate around the Y axis
    R = mult(R, rotateX(phi)); // Rotate around the X axis
    R = mult(R, rotateZ(radius)); // Rotate around the Z axis

    // Create the matrices 
    var modelMatrix = mult(translate(0, 0, -2), R); // Move the sphere 2 units away from the camera
    var viewMatrix = perspective(fovY, aspect, near, far);
    var eye = mult(inverse(modelMatrix), vec4(0, 0, 0, 1));

    var texMatrix = mult(inverse(modelMatrix), inverse(viewMatrix));

    // Send the matrices to the shaders
    gl.uniformMatrix4fv(uniformLocations.modelMatrix, false, flatten(modelMatrix));
    gl.uniformMatrix4fv(uniformLocations.viewMatrix, false, flatten(viewMatrix));
    gl.uniform4fv(uniformLocations.eyePos, flatten(eye));
    gl.uniformMatrix4fv(uniformLocations.texMatrix, false, flatten(texMatrix));
    gl.uniform1i(uniformLocations.reflective, 0); // Set reflective to false

    // Draw the background
    var cubeIndex = pointsArray.length - index;
    gl.drawArrays(gl.TRIANGLES, 0, cubeIndex);

    gl.uniform1i(uniformLocations.reflective, 1); // Set reflective to true
    gl.uniformMatrix4fv(uniformLocations.texMatrix, false, flatten(mat4())); // Reset the texture matrix

    // Draw the sphere
    gl.drawArrays(gl.TRIANGLES, cubeIndex, pointsArray.length - cubeIndex);
    
    window.requestAnimationFrame(render);
}
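The eye-position line can be read as follows: the camera sits at the origin of view space, so transforming the origin of the scene by the inverse model matrix expresses the eye position in the same space as the fragment positions used by the shader:

$$ e = M^{-1} \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} $$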

This change follows the same principle as the previous part, where we differentiated the skybox via the texMatrix. We now simply turn the Boolean on when we need to tell the fragment shader that the object being drawn is reflective. The fragment shader is the only other thing that changes; its code is shown below:

vec3 texCoords = normalize(fPosition.xyz);

// Distinguish reflective objects (the sphere) from other objects (the background)
if (reflective) {
    vec4 Iw = fPosition - eyePosition;
    texCoords = reflect(Iw.xyz, fPosition.xyz);
}

gl_FragColor = textureCube(texMap, texCoords);

This code is translated from the lecture on reflective environment mapping. The original formula given by the lecturer is as follows, where $f_p$ is the world-space position and $e$ is the eye position:

$$ I_w = \frac{f_p - e}{\lVert f_p - e \rVert} $$

This $I_w$ is then used in the formula for the reflected ray direction $r_w$, which also takes $n_w$, the world-space normal of the fragment. This is exactly what the built-in reflect function computes. The ray formula is shown below:

$$ r_w = I_w - 2(n_w \cdot I_w)\, n_w $$
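As a sanity check, the formula can be evaluated directly in plain JavaScript. The `reflect` function below is a hypothetical stand-in for the GLSL built-in, which expects the incident vector and a unit-length normal:

```javascript
// Plain-JS version of GLSL's reflect(I, n) = I - 2 * dot(n, I) * n,
// usable as a sanity check outside the shader. n must be unit length.
function dot(u, v) { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }

function reflect(I, n) {
    const d = 2 * dot(n, I);
    return [I[0] - d*n[0], I[1] - d*n[1], I[2] - d*n[2]];
}

// A ray travelling straight into a surface facing it bounces straight back:
const r = reflect([0, 0, -1], [0, 0, 1]);
// r is [0, 0, 1]
```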

Result

Rotation Controls:

X-Rotation

Y-Rotation

Z-Rotation

The above controls increase/decrease the speed at which the camera rotates around the sphere about the labeled axis. Warning: there is no limit to the speed!

Part 4: Bump Mapping

Now that the sphere correctly reflects the texture of the environment around it, we can look at the third type of texture mapping introduced in the course: bump mapping. Bump mapping is a technique that gives objects the appearance of fine surface detail without adding new vertex data for the shader to render. It is handled entirely in the fragment shader, which perturbs the normals used for the texture lookup.

We can look directly at the fragment shader, as it is the part of the code with the most changes. So far we have used the sphere's own coordinates to look up textures; the bump-map normals, however, are given in tangent space and need to be rotated into world space. This is done by passing the cube-map lookup direction and the tangent-space bump normal to the helper function provided by the teacher, shown below:

vec3 rotate_to_normal(vec3 normal, vec3 v) {
    float a = 1.0/(1.0 + normal.z);
    float b = -normal.x*normal.y*a;
    return vec3(1.0 - normal.x*normal.x*a, b, -normal.x)*v.x
        + vec3(b, 1.0 - normal.y*normal.y*a, -normal.y)*v.y
        + normal*v.z;
}
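The helper builds an orthonormal frame whose z-axis is the given unit normal (it assumes normal.z > -1, since `a` would otherwise divide by zero). A direct JavaScript port (hypothetical, written only for checking the helper outside the shader) makes its behaviour easy to verify: an unperturbed tangent-space normal (0, 0, 1) should map onto the surface normal itself.

```javascript
// JS port of the teacher's rotate_to_normal helper. It rotates v from
// tangent space (where the surface normal is +z) into the frame whose
// z-axis is `normal`. Assumes `normal` is unit length with normal.z > -1.
function rotateToNormal(normal, v) {
    const [nx, ny, nz] = normal;
    const a = 1.0 / (1.0 + nz);
    const b = -nx * ny * a;
    return [
        (1.0 - nx * nx * a) * v[0] + b * v[1] + nx * v[2],
        b * v[0] + (1.0 - ny * ny * a) * v[1] + ny * v[2],
        -nx * v[0] - ny * v[1] + nz * v[2],
    ];
}

// A flat bump normal (0, 0, 1) is rotated onto the surface normal:
const n = [0.6, 0, 0.8];
const out = rotateToNormal(n, [0, 0, 1]);
// out is [0.6, 0, 0.8]
```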

We also need to make sure the bump-map lookup coordinates are computed with an inverse spherical mapping, so that each fragment samples the right texel of the bump map before its normal is rotated. Inverse spherical mapping is illustrated by the image below:

![[Pasted image 20221118181809.png]]

In code this is shown below as part of the fragment shader:

precision mediump float;

varying vec4 fPosition;

uniform vec4 eyePosition;
uniform bool reflective;

uniform samplerCube texMap;
uniform sampler2D bumpMap;

void main() {
    float PI = 3.1415926535897932384626433832795;

    vec3 texCoords = normalize(fPosition.xyz);

    if (reflective) {
        vec4 v = fPosition - eyePosition;
        texCoords = normalize(reflect(v.xyz, fPosition.xyz));

        // Inverse spherical mapping: texture coordinates for the bump map.
        float phi = atan(fPosition.x, fPosition.z) / (2.0 * PI) + 0.5;
        float theta = fPosition.y * 0.5 + 0.5;
        vec2 texCoord = vec2(phi, theta);
        vec4 bumpVal = texture2D(bumpMap, texCoord);
        vec4 bumpNorm = 2.0 * bumpVal - 1.0;

        // Rotate the tangent-space bump normal onto the lookup direction.
        vec3 worldCoords = rotate_to_normal(texCoords.xyz, bumpNorm.xyz);
        gl_FragColor = textureCube(texMap, worldCoords);
    } else {
        gl_FragColor = textureCube(texMap, texCoords);
    }
}
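The phi/theta computation in the shader above can be checked outside GLSL as well. The sketch below is a plain-JS version (GLSL's two-argument `atan(y, x)` corresponds to `Math.atan2`); for instance, the point on the unit sphere facing +z should land in the middle of the texture.

```javascript
// Plain-JS version of the inverse spherical mapping used in the shader:
// maps a point on the unit sphere to (u, v) texture coordinates in [0, 1].
function sphericalUV(p) {
    const u = Math.atan2(p[0], p[2]) / (2 * Math.PI) + 0.5; // phi in the shader
    const v = p[1] * 0.5 + 0.5;                             // theta in the shader
    return [u, v];
}

// The point facing +z lands in the middle of the texture:
const uv = sphericalUV([0, 0, 1]);
// uv is [0.5, 0.5]
```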

Loading the texture for the bump map is done the same way as texture loading in Lab 6. However, I ran into a bug where the page initialized too quickly, so the texture setup ran before the image resource had loaded. This was worked around by wrapping the image load in an async function with a small delay. The bump map is loaded by the code below:

async function loadBumpMap() {
    await new Promise(resolve => setTimeout(resolve, 5));
    var image = document.createElement('img');
    image.crossOrigin = 'anonymous';
    image.onload = function () {
        var texture = gl.createTexture();
        gl.activeTexture(gl.TEXTURE1);
        gl.bindTexture(gl.TEXTURE_2D, texture);
        gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    };
    image.src = '/res/textures/normalmap.png'; // set src after onload is attached
}

Result


Selected Sky-Box

Rotation Controls:

X-Rotation

Y-Rotation

Z-Rotation

The above controls increase/decrease the speed at which the camera rotates around the sphere about the labeled axis. Warning: there is no limit to the speed!

Next Lab: Worksheet 8


Lab Finished!

Report Finished!

Report Merged!