WebGPU is the next-generation graphics API and future web standard for graphics and compute, aiming to provide modern 3D graphics and computation capabilities with GPU acceleration.

We just published a WebGPU course on the freeCodeCamp.org YouTube channel.

Dr. Jack Xu developed this course. Dr. Xu has over 25 years of programming experience and has published multiple books about WebGPU.

In this course, you will learn the basics of WebGPU by building 10 WebGPU projects. You will learn how to create each project from scratch and how to add 3D graphics to your web applications.

3D Sinc Surface created with WebGPU

By the end of this course, you will understand how to create advanced 3D graphics using GPU computing on the web with the WebGPU API. Here are the sections covered in this course:

  • Development Environment
  • Create a Colorful Triangle
  • Create a Square with GPU Buffer
  • Cube with Distinct Face Colors
  • Animation and Camera Control
  • Light Model
  • Cube with Lighting Effects
  • Colormap
  • 3D Simple Surfaces
  • 3D Sinc Surface

Watch the full course below or on the freeCodeCamp.org YouTube channel (2-hour watch).

Transcript

(autogenerated)

Dr. Jack Xu is the author of many programming books, including one about WebGPU. In this course, he will teach you the basics of WebGPU.

Hi, I'm Jack. In this video, you will learn the basics of WebGPU graphics programming by building ten separate WebGPU projects.

Each project is built on top of the previous one, progressing from simple primitives to complicated 3D graphics. Our end product will be a beautiful 3D sinc surface, as shown here.

We will create these WebGPU projects from scratch and show you how to add 3D graphics with GPU acceleration to your web applications.

Here, I want to thank Beau and freeCodeCamp for publishing this video. freeCodeCamp is a great resource for the programming community, and thank you for all that you do.

Before getting into this video, I'd like to quickly tell you a little bit about my background. I got my PhD in theoretical physics and have over 25 years of programming experience in C, C++, C# .NET and web development.

I have published over 20 books about practical programming on a variety of topics, including graphics programming, machine learning, quantitative finance, numerical computation methods, and web applications.

Recently, I created a YouTube channel, Practical Programming, based on my books.

On this channel, I will present several step-by-step video series that emphasize the usefulness of the example code in real-world applications.

The first video series is about WebGPU graphics programming and is based on my recently published book, "Practical WebGPU Graphics".

The projects presented here are also selected from this book.

My channel, Practical Programming with Dr. Xu, is new, and I plan to update it every week. I would greatly appreciate it if you checked out my channel and subscribed.

Now let's go to WebGPU Graphics Programming.

Now let's start project one, in which we will set up the development environment.

The source code used in this project can be downloaded from the GitHub link shown here; this project uses a specific commit version.

So, what is WebGPU? WebGPU is the next-generation graphics API. It is the future web standard for graphics and compute. WebGPU will provide modern 3D graphics and computation capabilities with GPU acceleration.

In this video, we will use the following development tools to build our WebGPU applications.

First are Visual Studio Code and Node.js. We will use TypeScript as our programming language, and we will use webpack as our module bundler.

In order to run WebGPU applications, currently, we have three options.

The first one is to use Chrome Canary to test our WebGPU applications.

The second option is to run WebGPU applications in regular Chrome through an origin trial. Starting from Chrome 94, WebGPU is available as an origin trial in regular Chrome. You'll need to register for the origin trial and request a token for your origin.

The third option is that Chrome will officially support WebGPU in May of this year. So after this May, you will be able to run your WebGPU applications in regular Chrome.

In this video series, I will use Chrome Canary to test our WebGPU projects.

Here, I assume you have already installed Visual Studio Code and Node.js, and have installed TypeScript globally on your machine.

Let's start programming.

Now let's open a command prompt window, make a directory called gpuapp, and cd into it. Then we run this command: npm init -y to create a package.json file.

Okay, this file will store our dependencies used in this project.

Now, we are going to install webpack and its command-line interface using this command: npm install webpack webpack-cli.

webpack is a module bundler that bundles relevant files together to generate static assets that can be used by web applications. Here we use webpack mainly for transpiling TypeScript files; we don't use it to bundle other types of files, such as HTML and image files.

Now the installation is completed.

At this point, we are going to start Visual Studio Code with the command: code .

Here is the Visual Studio Code interface. You can see we only have a package.json file, so there is not much happening here. Now open a terminal window by pressing Ctrl+J.

Now, let's install some dependency packages.

First, we will install jQuery and its corresponding types package: npm install jquery @types/jquery.

We will use jQuery to manipulate the DOM elements in our applications.

Next, we want to install the style, CSS, and TS loaders. If our project contains a style sheet file, we need the style loader and CSS loader to bundle it.

In order to create TypeScript modules in our project, we also need to install TypeScript locally: npm install typescript. We already installed it globally earlier.

Now, open the package.json file. You can see the scripts section here only contains the test attribute, which is not very useful. We are going to replace this section with code that lets you run webpack in different modes: development, production, and watch mode.

Now we can save this file and close it.

Now all the installed packages are going to be stored in the node_modules folder.

The package-lock.json file is automatically generated by npm operations, so we don't need to do anything with it.

The next step is to initialize the TypeScript configuration with the command tsc --init. This operation creates a configuration file for TypeScript called tsconfig.json.

Now we open this configuration file and replace its content. You can see here we set the target to es6; also note that we define the root directory as src and the output directory as dist, the distribution folder.

Now save this file and close it.

We then create some folder structures for our project.

First, let's create the src folder.

So here we create a folder called src.

We will store all the source files in this folder.

Next, we create a dist folder, which will contain all files that will be uploaded to the web server for running our applications.

In this folder, there is at least one HTML file.

Let's add an index.html file to this folder: add a new file, index.html, and add some code to it.

You can see here we add the title "WebGPU Project". The h1 title here reads "Check whether your browser supports WebGPU or not". We also have an h2 DOM element here, and we set its id to id_gpu_check. This h2 element will be used to display the text string returned by a TypeScript function that we will create later.

Now we are about to finish the development environment setup, but we have missed a key step: we need to install the WebGPU types package. The WebGPU working group has created a type definition package for the WebGPU standard.

This package matches the work-in-progress WebGPU API, which is currently not very stable and can change almost weekly, so be careful when using it.

Anyway, let's go ahead and install it with this command. Here we specify version 0.1.12, the latest version at the time of recording, of the package called @webgpu/types.

Because WebGPU is not backward compatible, you had better use the same version to run our project; otherwise, you will need to make some modifications to your code to run the application. So I suggest you use the same version used here, 0.1.12.

After installation we need to configure it.

Open the TypeScript configuration file. You can see here we have already configured it: we added the WebGPU types and node to the types section. Since we have the node type here, we need to install it as well; use the command npm install @types/node to install the node types.

Okay, the WebGPU types are now available for our project.

Now let's add some TypeScript files to our project. First, add a new file called helper.ts to the src folder and add some code to it.

You can see here we use the navigator.gpu object to check GPU availability; this object is defined by the WebGPU API. You can also see that we use the fat-arrow syntax to define the TypeScript function.
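As a rough sketch of what this helper might look like (the exact wording of the returned messages is illustrative, not the book's actual code):

```ts
// helper.ts — a minimal sketch of the check described above.
export const checkWebGPU = (): string => {
  // navigator.gpu exists only in browsers that expose the WebGPU API
  // (the @webgpu/types package provides the TypeScript declarations).
  return navigator.gpu
    ? 'Great, your current browser supports WebGPU!'
    : 'Your current browser does not support WebGPU!';
};
```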

Next, we need to create an entry point file called main.ts. Okay, save the helper file first and close it. Let's add another new file to the src folder, called main.ts, and add some code to it: we import jQuery and also import the checkWebGPU function from the helper.ts file we just implemented.

Here we use id_gpu_check to display the string returned by the checkWebGPU function. This id refers to the h2 DOM element defined in the index.html file; make sure it is the same as the ID defined in that file.
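A minimal sketch of this entry point, assuming the element id is id_gpu_check as described above:

```ts
// main.ts — a plausible sketch of the entry point described above.
import $ from 'jquery';
import { checkWebGPU } from './helper';

// Display the support message in the h2 element from index.html.
$('#id_gpu_check').text(checkWebGPU());
```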

Next, we also need to create a webpack configuration file.

Now add a new file called webpack.config.js to the root directory.

You can see here we define the bundle output directory as dist, and the entry point, named main, points to the src/main.ts file. Note that the output file name always ends with a .bundle.js suffix.
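Under those assumptions, the webpack configuration might look roughly like this (a sketch, not the book's exact file):

```js
// webpack.config.js — a hedged sketch matching the description above.
const path = require('path');

module.exports = {
  entry: { main: './src/main.ts' },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].bundle.js',   // produces dist/main.bundle.js
  },
  resolve: { extensions: ['.ts', '.js'] },
  module: {
    rules: [
      // Transpile TypeScript with ts-loader.
      { test: /\.ts$/, use: 'ts-loader', exclude: /node_modules/ },
      // Bundle style sheets with the style and CSS loaders.
      { test: /\.css$/, use: ['style-loader', 'css-loader'] },
    ],
  },
};
```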

Now we can save this file and close it.

Now we can run the following command from the terminal window to bundle our TypeScript files in production mode: npm run prod. Okay, it's finished.

You can check the bundle file in the dist folder.

You can see here we have several main.bundle.js-related files: this is our bundle file, this is the license file, and this is the source map file. The main.bundle.js file is referenced in the index.html file; you can see here, this is main.bundle.js, the bundle file generated by webpack.

In order to test our application we also need a server.

Here we will use the Live Server extension.

Click on the Extensions icon and search for Live Server; the first result is Live Server. I have already installed it. If you haven't, just click the Install button to install Live Server. From here on, I assume you have it installed.

Now we can open the index.html file, right-click anywhere, and choose Open with Live Server.

This will open our default browser to display the web page.

Unfortunately, if your default browser is regular Chrome, it will display the message: your current browser does not support WebGPU.

So, in order to run WebGPU applications, here we will install Chrome Canary. Search for Chrome Canary and click the result; this is the nightly build for developers. Just download Chrome Canary and install it. I have already installed it on my machine. Please remember the location where your Chrome Canary is installed. Okay, I assume you have installed it as well.

Let's go back to Visual Studio Code.

We want to change the default browser to Chrome Canary, which will make it easy to test our WebGPU applications. So press Ctrl+Shift+P, type "Preferences: Open Settings", and open the settings file.

Here, you need to add these three lines. The first is the Live Server "don't show info message" setting; I set it to true, but this one doesn't matter much. I also set the root directory to dist, because our index.html file is in that directory. The custom browser command-line setting here defines the default browser; you need to replace this value by copying your Chrome Canary .exe file location and pasting it here, changing it to match your own setup.

Now we can save this file and close it.

Now start Chrome Canary, go to chrome://flags, and search for WebGPU. Enable it; I have already enabled it here. If you haven't, you just need to enable this flag to make WebGPU available in Chrome Canary.

Okay, we can close it.

Okay finally we can test our development environment for WebGPU applications.

Here we have a Go Live link in the status bar area. This time it will start the Chrome Canary browser. Click this link. You can see the message: "Great, your current browser supports WebGPU." Congratulations, we have successfully set up the development environment for WebGPU applications.

We have now completed our first WebGPU project, which shows how to set up the development environment.

This is project 2 of this video series.

Here I will explain how to create a colorful triangle in WebGPU as shown here.

This is a colorful triangle.

This project is based on our first project.

You can open project one directly in Visual Studio Code, or clone the source code from the GitHub repository with git clone; this is the GitHub link for the project one source code.

I also put this project 2 source code into the GitHub repository.

Here is the link.

So you can download the source code used in this project from this link.

Now we will open the project one directly from Visual Studio Code.

Here we first add a CSS file called site.css: from the src folder, add a new file called site.css. Type some code into this file, then save and close it.

Just like in WebGL, the first thing we need to do to use WebGPU for graphics rendering is to create a canvas element.

From the dist folder, open index.html file.

Here we need to replace the h2 element. We define a canvas element with the ID canvas-webgpu.

Later we will create WebGPU Graphics on this canvas.

WebGPU programming consists of two parts: one is the control code, written in TypeScript or JavaScript, and the other is the shader code that runs on the GPU. Here we will use the official shader language for WebGPU, called WGSL, which is short for WebGPU Shading Language. To take advantage of code highlighting in VS Code, we will treat shader files with the .wgsl extension as TypeScript-like modules.

To do this, we need to install a shader loader for webpack.

Now open a new terminal window.

Run the following command to install the shader loader: npm install ts-shader-loader. This loader was originally created for loading GLSL shader code, but we will make it work for WGSL here.

Next, open the webpack.config.js file and add the following code to it. Files with the wgsl, glsl, vs (vertex shader), or fs (fragment shader) extensions will be treated as shader code by this shader loader.
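The added rule might look roughly like this (a sketch; only the shader rule is new here):

```js
// webpack.config.js — the extra rule for shader files (a sketch).
module.exports = {
  // ...the entry, output, and resolve settings stay as before...
  module: {
    rules: [
      { test: /\.ts$/, use: 'ts-loader', exclude: /node_modules/ },
      // Load wgsl/glsl/vs/fs files as strings via ts-shader-loader.
      { test: /\.(wgsl|glsl|vs|fs)$/, loader: 'ts-shader-loader' },
    ],
  },
};
```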

We then need to install the WGSL extension in VS Code. Click the Extensions icon and search for WGSL. The first one, WGSL, provides syntax highlighting; I have already installed this extension, so you can see the code highlighting here, which makes it easy to write shader code. If you haven't installed it, click the Install button to install it.

Next we need to add some type declarations for our shader.

Add a new subfolder called types to the src folder, then add a new file called shader.d.ts. Inside this file, we declare files with the wgsl, glsl, vs, or fs extensions as modules so TypeScript can import them.
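A plausible sketch of these declarations (the exact shape may differ from the book's file):

```ts
// src/types/shader.d.ts — each shader file type is declared as a module
// whose default export is the shader source string.
declare module '*.wgsl' {
  const shader: string;
  export default shader;
}
// ...repeat the same declaration for '*.glsl', '*.vs', and '*.fs'...
```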

So right now we can save this file and close it.

Now we can write our shader code.

In the src folder, add a new file called shader.wgsl, then add some code to it. From this code, you can see that there are two shaders here: one is the vertex shader and the other is the fragment shader. We have the vertex shader entry function called vs_main (vs means vertex shader) and the fragment shader entry function called fs_main.

The vertex shader takes vertex data, including world position, color, and texture coordinates, as input; the output is the position in clip coordinates, while the other outputs, such as color and texture coordinates, are passed to the fragment shader.

These values will then be interpolated over the fragments to produce a smooth color gradient.

Now back to our example.

In the vertex shader, we first define the output structure, which contains the built-in position and the output color called vColor. Inside the vs_main function, we first define the three vertices of our triangle using floating-point vec2 vectors.

So we have three vertices.

We also specify a color for each vertex using floating-point vec3 vectors. You can see the first one is (1, 0, 0), which means red; the second color is (0, 1, 0), green; and the final one is (0, 0, 1), which is blue. So we define a different color for each vertex.

Next, we define the output using the output structure. We then convert the 2D position vector into a vec4, a four-dimensional vector, setting the z component to zero and the w component to one. Similarly, we convert the 3D color vector into a vec4 here; the one represents the alpha component of the color. In the fragment shader, we take the output color from the vertex shader as input and return it as the fragment color.

Okay, this is the shader we will use when creating our colorful triangle in WebGPU.
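For reference, here is a sketch of such a shader in current WGSL syntax (the attribute syntax has evolved since the video was recorded, so the course's own file looks slightly different):

```wgsl
// A sketch of a hardcoded triangle shader in current WGSL syntax.
struct Output {
    @builtin(position) position : vec4<f32>,
    @location(0) vColor : vec4<f32>,
}

@vertex
fn vs_main(@builtin(vertex_index) vertexIndex : u32) -> Output {
    var pos = array<vec2<f32>, 3>(
        vec2<f32>(0.0, 0.5),
        vec2<f32>(-0.5, -0.5),
        vec2<f32>(0.5, -0.5)
    );
    var color = array<vec3<f32>, 3>(
        vec3<f32>(1.0, 0.0, 0.0),   // red
        vec3<f32>(0.0, 1.0, 0.0),   // green
        vec3<f32>(0.0, 0.0, 1.0)    // blue
    );
    var output : Output;
    // Promote the 2D position to a vec4: z = 0, w = 1.
    output.position = vec4<f32>(pos[vertexIndex], 0.0, 1.0);
    // Promote the 3D color to a vec4: alpha = 1.
    output.vColor = vec4<f32>(color[vertexIndex], 1.0);
    return output;
}

@fragment
fn fs_main(@location(0) vColor : vec4<f32>) -> @location(0) vec4<f32> {
    return vColor;
}
```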

Now we can save this file and close it.

Next, we'll use TypeScript to write WebGPU control code.

Open the main.ts file.

And replace its content with the following new code. Here we first import the checkWebGPU function from the helper.ts file. Then we import our shader from the shader.wgsl file, and we also import the site.css file.

Here we will use it to configure our web page layout.

Next, we create a new function called createTriangle. This function must be async, because the WebGPU API itself is asynchronous.

Inside this function we first check whether your browser supports WebGPU or not.

In WebGPU, we can access the GPU by calling requestAdapter. Once we have the adapter, we can call its requestDevice method to get the GPU device. This device provides a context to work with the hardware and an interface to create GPU objects, such as buffers and textures; we can also run commands on this GPU device. As with WebGL, we need a context for our canvas element that will be used to display the graphics.

Here we use the canvas element to request the WebGPU context. You can see we call getContext with the name 'webgpu'; this is the keyword used by WebGPU to create the context. We then configure this context with the device and the format.
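A condensed sketch of these initialization steps, written against the current WebGPU API (it must run inside an async function; names are illustrative):

```ts
// Initialization sketch: adapter, device, canvas context, and format.
const canvas = document.getElementById('canvas-webgpu') as HTMLCanvasElement;

const adapter = await navigator.gpu.requestAdapter();
const device = await adapter!.requestDevice();   // assumes an adapter exists

const context = canvas.getContext('webgpu') as GPUCanvasContext;
const format = navigator.gpu.getPreferredCanvasFormat();
context.configure({ device, format });
```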

Next, we define the render pipeline by calling the createRenderPipeline method. Inside this pipeline descriptor we have several attributes. The first one is vertex, where we assign the shader to the module, with the entry point set to the vs_main function created in our shader file. Similarly, for the fragment attribute, we assign the shader to the module with the entry point set to the fs_main function. Another required attribute is primitive; it contains a field called topology, which we set to 'triangle-list' because we want to create a triangle in this project.

Next, we create the command encoder and the render pass by calling beginRenderPass. This render pass accepts a parameter of type GPURenderPassDescriptor as the render pass options. Here we only use the colorAttachments, which are used to store image information; in our example, it stores the background color in the clearValue field. There is another field called loadValue; it will be removed from WebGPU in an upcoming version, so we don't use it here. Next, we assign the pipeline to our render pass and draw our triangle by calling the draw method. We then call the end function to finish the current render pass, meaning that no more instructions will be sent to the GPU.

Finally, we submit all instructions to the queue of the GPU device for execution.

After running the command, our triangle will be written to the GPU context and displayed on our canvas element.

Now we have finished the implementation of our createTriangle method. This method demonstrates the basic procedure for creating WebGPU applications: initialize the WebGPU API, create the shader program, set up the render pipeline, build the render pass, call the draw function, and submit the instructions to the GPU for execution. Finally, we call the createTriangle function to generate our triangle; a condensed sketch of the whole procedure follows.
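This sketch is written against the current WebGPU API (for example, layout: 'auto' is required nowadays); device, context, and format come from the initialization sketch above, and the clear color is illustrative:

```ts
// Pipeline, render pass, draw, and submit — a hedged sketch.
import shader from './shader.wgsl';

const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: {
    module: device.createShaderModule({ code: shader }),
    entryPoint: 'vs_main',
  },
  fragment: {
    module: device.createShaderModule({ code: shader }),
    entryPoint: 'fs_main',
    targets: [{ format }],
  },
  primitive: { topology: 'triangle-list' },
});

const commandEncoder = device.createCommandEncoder();
const renderPass = commandEncoder.beginRenderPass({
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),
    clearValue: { r: 0.2, g: 0.25, b: 0.3, a: 1.0 }, // background color
    loadOp: 'clear',
    storeOp: 'store',
  }],
});
renderPass.setPipeline(pipeline);
renderPass.draw(3);                    // three vertices, one triangle
renderPass.end();                      // finish the render pass
device.queue.submit([commandEncoder.finish()]);  // run on the GPU
```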

Up to now we have finished our programming for this project.

We can now run the following command in the terminal window: npm run prod to bundle our TypeScript code in production mode.

Okay finished.

Our Bundle file is created successfully.

Now we can click the go-live link from here to open Chrome Canary to view our triangle.

Click this link.

Here is our colorful triangle.

You can see here is red, green and blue.

From one vertex to another, you can see the color changes smoothly, because WebGPU interpolates the color internally, which gives a smooth color gradient.

Now we have completed project 2.

This is project three, in which we will create a colorful square using a GPU buffer. In the last project, we created a colorful triangle in WebGPU by writing the vertex and color data directly in the shader code.

This approach is only possible for creating very simple shapes.

It is almost impossible to use this direct approach for creating complex 3D graphics with different colors and textures.

In this project, I will introduce GPU buffer and show you how to use it to store vertex and color information.

In this example, we will create a colorful square to explain the concept of the GPU buffer.

Here we will introduce several new concepts that will be the foundation for creating complex 3D graphics objects.

A GPU buffer in WebGPU represents a block of memory that can be used to store data for GPU operations.

In this project, I will show you how to use the GPU buffer to store vertex position and color data.

You can download the source code used in this project from this GitHub repository link.

You can see we have a specific commit version; this version is specific to our current project.

Now let's start Visual Studio Code and open project 2 that we built in the last video.

Okay, here's the project 2 we used in the last video.

Now let's first make some changes to the index.html file.

From the dist folder, open index.html file.

Now we need to change the h1 title to "Create Square Using GPU Buffer".

So we don't need to change the other part of the code.

So we can save this file.

Next from the src folder, open the helper.ts file.

Now we need to add a new function called initGPU to this helper file, so we add some code here. You can see we place all the WebGPU initialization code in this initGPU function. This avoids code duplication, because the initialization is the same for all our WebGPU projects.

This function is an async function, and it returns the device, canvas, format, and context, which will be used when we create a render pipeline and render pass.

We also want to add another function, called createGPUBuffer, to this helper file. You can see this function takes three inputs: the device, the data, and the usage flag. Here we set the default usage flag to VERTEX | COPY_DST (vertex or copy destination).

Here you can see we call the device's createBuffer method to create a GPU buffer, setting the size to the data's byteLength to allocate room for the buffer data. We set the other attribute here, mappedAtCreation, to true. This means that the GPU buffer is initially owned by the CPU and is accessible for read/write from TypeScript code. Once the GPU buffer is mapped, the application can ask for access to a range of its contents with the getMappedRange method and set it to the data to be stored in the buffer. Please note that a mapped GPU buffer cannot be used directly by the GPU; it must be unmapped by calling the unmap method. Here, our createGPUBuffer method returns the unmapped buffer, which can be used by the GPU.

So be careful here about the GPU buffer mapped and unmapped states.
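A sketch of what this helper might look like, following the description above (the helper name and defaults are taken from the transcript):

```ts
// helper.ts — a hedged sketch of createGPUBuffer.
export const createGPUBuffer = (
  device: GPUDevice,
  data: Float32Array,
  usage: GPUBufferUsageFlags = GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST
): GPUBuffer => {
  // Create the buffer mapped, so the CPU can write the initial data.
  const buffer = device.createBuffer({
    size: data.byteLength,
    usage,
    mappedAtCreation: true,
  });
  // Copy the data into the mapped range, then unmap so the GPU can use it.
  new Float32Array(buffer.getMappedRange()).set(data);
  buffer.unmap();
  return buffer;
};
```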

Now we can save this file and close it.

Next, we need to make changes to the main.ts file; from the src folder, open the main.ts file.

We need to replace its content with the new code. You can see here we first import initGPU and createGPUBuffer from the helper.ts file, and then we import the shader.

The shader code will be discussed in a moment.

Inside the createSquare method, we first call the initGPU method to create a GPU object and then get the device from it.

Next, we define the vertex data and the color data here.

You can see this is our vertex data and this is our color data. Here are the vertex coordinates of our square: a, b, c, d. We divide this square into two triangles, a-b-d and d-b-c; the vertices of each triangle must be arranged in counter-clockwise order. You can see here: a-b-d, d-b-c. We also assign a color to each vertex: a is red, b is green, c is blue, and d is yellow.

These data are consistent with the coordinates shown in this figure. Of course, you can also divide the square into two triangles the other way around, for example a-b-c and c-d-a; you draw the diagonal the other way and get another pair of triangles. It doesn't matter.

Next, we create two buffers by calling createGPUBuffer: one is the vertex buffer and the other is the color buffer.

Next, we create a pipeline by calling the createRenderPipeline method. The new code we add here is the buffers attribute, which we did not have before. It is a two-element array: the first element is for the vertex position, and the second is for the color.

You can see that for the vertex position we set the arrayStride to 8, since each vertex is represented using two float32 elements, and each float32 number requires four bytes. The color of each vertex is represented using three float32 elements, so its arrayStride is set to 12. Another important parameter here is shaderLocation: for the vertex position we set the shader location to 0, and for the color we set it to 1. The other parts of the code are very similar to those used in the previous project.

Here we set the primitive topology to the triangle list.

One new thing here is that, after setting the pipeline on the render pass, we also need to set the vertex and color buffers on the render pass. You can see we set the vertex buffer at slot 0 and the color buffer at slot 1.

In the draw function here, we set the number of vertices to six because we have two triangles now.
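Putting these pieces together, a sketch of the buffer-related code (the coordinate and color values are illustrative stand-ins for the course's actual data):

```ts
// Vertex and color data for two triangles, a-b-d and d-b-c (sketch).
const vertexData = new Float32Array([
  -0.5, -0.5,   0.5, -0.5,  -0.5, 0.5,   // a, b, d
  -0.5,  0.5,   0.5, -0.5,   0.5, 0.5,   // d, b, c
]);
const colorData = new Float32Array([
  1, 0, 0,  0, 1, 0,  1, 1, 0,           // a red, b green, d yellow
  1, 1, 0,  0, 1, 0,  0, 0, 1,           // d yellow, b green, c blue
]);

const vertexBuffer = createGPUBuffer(device, vertexData);
const colorBuffer = createGPUBuffer(device, colorData);

// The pipeline's vertex stage gains a buffers attribute with two layouts.
const buffers: GPUVertexBufferLayout[] = [
  { arrayStride: 8,   // 2 float32s x 4 bytes
    attributes: [{ shaderLocation: 0, offset: 0, format: 'float32x2' }] },
  { arrayStride: 12,  // 3 float32s x 4 bytes
    attributes: [{ shaderLocation: 1, offset: 0, format: 'float32x3' }] },
];

// During the render pass:
renderPass.setVertexBuffer(0, vertexBuffer); // slot 0: positions
renderPass.setVertexBuffer(1, colorBuffer);  // slot 1: colors
renderPass.draw(6);                          // six vertices, two triangles
```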

Now we finished the modification to this file.

Now we can save this file.

Next we need to make modifications to our shader code.

From the src folder, open the shader.wgsl file. Here we need to replace its content with new code. You can see we didn't define any vertices or colors in this shader code, because we store these data in GPU buffers. Instead, we introduce two inputs: pos at location 0 and color at location 1. These must be consistent with the shaderLocation definitions in the pipeline in our main.ts file, where, as you may remember, we set the shader location to 0 for the position and 1 for the color.

So here the position location is 0, and here the color location is 1.

This is how we set up the relationship between the vertex shader and the GPU buffers.

Note that we define them here as vec4<f32>; the color is the same, vec4<f32>. This may look inconsistent with the data stored in the vertex and color buffers, where the position is a 2D vector and the color is a 3D vector. This is because the WGSL shader is smart enough to automatically convert them into vec4 types, setting the missing z component to zero and the w component to one. You can see we also define two outputs, position and vColor, using a structure. Inside the vs_main function, we process the position and color data by assigning pos to the position and color to vColor.

Inside the fragment shader, we take vColor from the vertex shader and return it as the fragment color.

Okay, this is the shader we will use when creating our square.
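A sketch of this buffer-driven shader in current WGSL syntax (again, the attribute syntax has changed since the recording):

```wgsl
// A sketch of the square shader fed by GPU buffers.
struct Output {
    @builtin(position) position : vec4<f32>,
    @location(0) vColor : vec4<f32>,
}

@vertex
fn vs_main(@location(0) pos : vec4<f32>,
           @location(1) color : vec4<f32>) -> Output {
    var output : Output;
    output.position = pos;   // z and w are filled in automatically
    output.vColor = color;
    return output;
}

@fragment
fn fs_main(@location(0) vColor : vec4<f32>) -> @location(0) vec4<f32> {
    return vColor;
}
```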

We can now save this file and close it.

Now we can run the following command in the terminal window to bundle our TypeScript code in production mode: npm run prod.

Okay, the bundle file is created successfully.

Now we can click the Go Live link to open Chrome Canary to view our square.

So click this go live link.

Here is our colorful square with different vertex colors.

Here is red, green, blue and yellow.

From one vertex to another, you can see the color changes smoothly, because WebGPU interpolates the color internally, which gives a smooth color gradient.

Okay, now we have finished project three.

In projects two and three, I demonstrated how to create a triangle and a square graphics in WebGPU applications.

In fact, these two objects are flat and two-dimensional; we have not created any real 3D graphics yet.

In project 4, I will explain how to create a 3D cube with distinct face colors.

This is the first real 3d object we are going to create in a WebGPU application.

You can download the source code used in this project from this GitHub repository link.

Here we have a specific commit version for this project.

This example involves a lot of code and mathematics.

So please get ready to digest it.

In order to create real 3D objects in WebGPU, you need a math background in 3D matrices and transformations. Since our computer screen is two-dimensional, it cannot directly display 3D objects. To view 3D objects on a 2D screen, you have to project your objects from 3D to 2D, which involves a series of coordinate transformations.

From here you can see that we define the 3D object in the object coordinate system. We then perform various transformations on the object, including scaling, translation, and rotation; we call this the model transform. After this transformation, the object is converted from object coordinates into world space. Next, the view transform locates the viewer in world space and transforms our 3D object into camera space, also called eye coordinates.

The purpose of the projection transform is to define a view volume, called the view frustum, shown right here. It is used in two ways: it determines how an object is projected onto the screen, and it defines which portions of the object are clipped out of the final image. That is, only the portion inside the frustum is kept, and anything outside of it is clipped out.

Finally, we use the viewport transform to convert the clip coordinates into normalized device coordinates. In WebGPU, the viewport transform is performed automatically.

Please note that, like WebGL, the WebGPU API does not provide any functions for working with transformations. In this project, we will use a JavaScript package called gl-matrix to perform 3D matrix operations and transformations. Since 3D matrices and transformations are common to all computer graphics programming, I assume you already have this math background, and I will not spend any more time on it. Instead, I will concentrate on how to create 3D objects in WebGPU applications. From this project, you will learn some important concepts in WebGPU, namely the uniform buffer and the bind group. We will use a uniform buffer to represent the transformation and projection matrices, and then use the bind group to pass the uniform buffer to the vertex shader.

Now we start Visual Studio Code and open project three, which we built in the last section. Here is the code we used in project three.

First we need to install a new npm package called gl-matrix: npm install gl-matrix. We will use this package to perform matrix operations and 3D transformations. Now let's make some changes to the index.html file. From the dist folder, open the index.html file. Here we need to change the h1 title to "Cube with Distinct Face Colors".

So we don't need to change other parts of the code.

Save this file.

Here we are going to create our 3D cube; these are the coordinates of our cube. From this diagram you can see there are eight vertices and six faces. If each face has a different color, each vertex will then have three different colors. For example, the vertex c belongs to the top face, the front face, and the right face; because each face has a different color, the vertex c can have three different colors, depending on which face we are talking about. Here is the front face, a-b-c-d.

You can see A B C D here.

As we did before, we can divide this front face into two triangles: a-b-c and c-d-a. In this way, you can perform the triangulation for the other faces.

Next, add a new TypeScript file called vertex_data.ts to the src folder and add the new code to this file. Here is the position data for our cube: this is the front face, the right face, and so on for all six faces. This is the color data for the different faces. You can see the front face uses blue, the right face is red, the back face is yellow, and so on; each face has a different color. All the position coordinates and colors are consistent with the diagram shown here. Now we can save this file and close it.

Next we will make some changes to the shader code. From the src folder, open the shader.wgsl file. Now we need to replace its content with the new code. You can see this shader is different from the one we used in project three, because we need to incorporate the uniform buffer that stores the transformation and projection matrices. Here we define a uniform structure and then use the bind group to pass the model-view-projection matrix to the shader. Here the mvpMatrix means the model-view-projection matrix; you can see the variable type is uniform.

Next we define the output structure as we did in the last project.

We define position and vColor.

This is the same as before.

For the vs_main function, we also define two inputs, the position and the color, the same as in the last project. Inside this function, the only difference is that we do not use pos directly for the position; we also multiply it by the model-view-projection matrix, performing the transformation on the position.

For the fragment shader, we still use vColor as input and return it as fragment color.

This is the same as in project three. Now we can save this file and close it. Next, we open the helper.ts file from the src folder.

Here we first need to import vec3 and mat4 from gl-matrix, and then add a new method called createViewProjection to this file. This createViewProjection method takes four input arguments: the aspect ratio, the camera position, the camera look-at direction, and the camera up direction. Here we first create the view, projection, and view-projection matrices.

We then use mat4.perspective and mat4.lookAt to create the projection and view matrices. Next, we use the mat4.multiply method to combine the projection matrix with the view matrix to form our view-projection matrix. This function returns the view matrix, the projection matrix, the view-projection matrix, and the camera options; the camera options will be used in the next project when we discuss camera control.
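A hedged sketch of this function using gl-matrix; the argument names, default camera values, and camera options are illustrative, not the book's exact code:

```ts
// helper.ts — a sketch of createViewProjection.
import { vec3, mat4 } from 'gl-matrix';

export const createViewProjection = (
  aspectRatio = 1.0,
  cameraPosition = vec3.fromValues(2, 2, 4),
  lookDirection = vec3.fromValues(0, 0, 0),
  upDirection = vec3.fromValues(0, 1, 0)
) => {
  const viewMatrix = mat4.create();
  const projectionMatrix = mat4.create();
  const viewProjectionMatrix = mat4.create();

  // Perspective projection: 60-degree field of view, near 0.1, far 100.
  mat4.perspective(projectionMatrix, Math.PI / 3, aspectRatio, 0.1, 100.0);
  // View matrix from camera position, look-at target, and up direction.
  mat4.lookAt(viewMatrix, cameraPosition, lookDirection, upDirection);
  // Combine: viewProjection = projection * view.
  mat4.multiply(viewProjectionMatrix, projectionMatrix, viewMatrix);

  // Camera options for the 3d-view-controls package used in project 5.
  const cameraOption = { eye: cameraPosition, center: lookDirection };
  return { viewMatrix, projectionMatrix, viewProjectionMatrix, cameraOption };
};
```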

Now we need to add another function, called createTransforms, which is used to construct the model matrix. Here we first create three rotation matrices for rotation about the x, y, and z axes; we then create the translation and scaling matrices. Next, we perform the individual transformations specified by the input arguments. Finally, we combine all the transformations to form our final transform matrix, which is our model matrix.
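A hedged sketch of this function with gl-matrix (the combination order shown here, translate * rotateX * rotateY * rotateZ * scale, is a common choice and may differ from the book's):

```ts
// helper.ts — a sketch of createTransforms.
import { vec3, mat4 } from 'gl-matrix';

export const createTransforms = (
  modelMat: mat4,
  translation = vec3.fromValues(0, 0, 0),
  rotation = vec3.fromValues(0, 0, 0),
  scaling = vec3.fromValues(1, 1, 1)
) => {
  const rotateXMat = mat4.create();
  const rotateYMat = mat4.create();
  const rotateZMat = mat4.create();
  const translateMat = mat4.create();
  const scaleMat = mat4.create();

  // Individual transformations.
  mat4.fromTranslation(translateMat, translation);
  mat4.fromXRotation(rotateXMat, rotation[0]);
  mat4.fromYRotation(rotateYMat, rotation[1]);
  mat4.fromZRotation(rotateZMat, rotation[2]);
  mat4.fromScaling(scaleMat, scaling);

  // Combine them into the model matrix.
  mat4.multiply(modelMat, rotateXMat, scaleMat);
  mat4.multiply(modelMat, rotateYMat, modelMat);
  mat4.multiply(modelMat, rotateZMat, modelMat);
  mat4.multiply(modelMat, translateMat, modelMat);
};
```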

Now save this file and close it. Next, we need to make some changes to the main.ts file.

From the src folder open main.ts file.

Here we need to replace its content with the new code. We first import some methods from the helper.ts file, including the newly created createTransforms and createViewProjection; we also import the cube data from the vertex_data.ts file, as well as mat4 from the gl-matrix library.

Inside this create3DObject method, the code is very similar to what we used in project three. Here, you can see we create the vertex buffer and the color buffer using the cube data's positions and colors. Here is the pipeline code, also similar to project three. The difference here is that the arrayStride for the position is 12 instead of 8, because for our cube each vertex now has x, y, and z coordinates. For the color it is still 12, because we have RGB elements. Here, we also set the shader location to 0 for the position and 1 for the color.

The only other difference in the pipeline is that we add the depthStencil attribute. Here we set depthWriteEnabled to true to enable depth-stencil testing, which determines whether or not a given pixel should be drawn. For 3D graphics, enabling depth-stencil testing is very important; otherwise, you may get unexpected results.

The following code is new and specific to our 3D cube. Here we create the view and view-projection matrices by calling the createViewProjection method. We then create a uniform buffer for our MVP matrix; you can see we set the usage flag to UNIFORM for our transformation. The size is 64 bytes, since our matrix is 4x4, so we have 16 elements, and each element is a float32 number of four bytes.

We then define the bind group by calling createBindGroup for this uniform buffer. Here we set the layout to pipeline.getBindGroupLayout(0), which means we use the layout of bind group zero. In the entries attribute, we set the binding to 0 and set the buffer to the uniform buffer we just defined. If we had more uniform buffers, we could add more elements to this entries array.
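A sketch of the uniform buffer and bind group just described; device and pipeline come from the surrounding code, and mvpMatrix is the 16-element Float32Array computed below:

```ts
// Uniform buffer and bind group — a hedged sketch.
const uniformBuffer = device.createBuffer({
  size: 64,  // one 4x4 float32 matrix = 16 * 4 bytes
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});

const uniformBindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),  // bind group 0's layout
  entries: [{
    binding: 0,
    resource: { buffer: uniformBuffer, offset: 0, size: 64 },
  }],
});

// Upload the model-view-projection matrix to the GPU.
device.queue.writeBuffer(uniformBuffer, 0, mvpMatrix as Float32Array);
```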

In addition to the texture view from the context, we also introduce a depth texture. Here we set its size to our canvas width and height, and the format we use is depth24plus (you can also use other formats); we set the usage to RENDER_ATTACHMENT. Now our render pass descriptor includes two parts: one is the colorAttachments and the other is the depthStencilAttachment. The colorAttachments part is the same as what we used in project three; the depthStencilAttachment is specific to our 3D cube.
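A sketch of the depth texture and the extended render pass descriptor, written against the current WebGPU API (the clear color is illustrative):

```ts
// Depth texture and render pass descriptor — a hedged sketch.
const depthTexture = device.createTexture({
  size: [canvas.width, canvas.height, 1],
  format: 'depth24plus',
  usage: GPUTextureUsage.RENDER_ATTACHMENT,
});

const renderPassDescriptor: GPURenderPassDescriptor = {
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),
    clearValue: { r: 0.2, g: 0.25, b: 0.3, a: 1.0 },
    loadOp: 'clear',
    storeOp: 'store',
  }],
  depthStencilAttachment: {
    view: depthTexture.createView(),
    depthClearValue: 1.0,   // clear to the far plane
    depthLoadOp: 'clear',
    depthStoreOp: 'store',
  },
};
```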

Next, we construct the model matrix by calling the createTransforms method defined in the helper.ts file. We then obtain the MVP matrix by multiplying the model matrix with the view-projection matrix. With the final model-view-projection matrix in hand, we write it to our uniform buffer by calling the writeBuffer method. You should already be familiar with the following code: we just define the render pass and set the vertex and color buffers. But we also need to set the bind group here, using the uniform bind group, which is needed to pass the uniform buffer to the vertex shader.

Now, we finished the modification to the main.ts file.

We can then save this file.

Up to now we have finished all programming for this project.

Then we can run the following command in the terminal window to bundle our code in production mode: npm run prod.

Okay, the bundle file is created successfully.

Now we can click the Go Live link to open Chrome Canary to view our 3d Cube.

Click this link.

Here is our cube with distinct face colors displayed on this page.

Okay we have finished this project.

This is project 5, in which we will discuss animation and camera control.

In the last project, I explained how to use the uniform buffer to create a 3d cube.

In this project, I will illustrate how to animate that cube and how to use the mouse to interact with it. You can download the source code used in this project from this GitHub repository link; this specific commit version is used for this project.

For animation we will use the JavaScript requestAnimationFrame function.

This function can make changes to your screen in an efficient and optimized manner.

Here is an example of using request animation frame.

You can see here that we loop over and over again using recursion. We first get the DOM element in the UI and define a starting number of zero. Next, we create a count function that increases the number by one and then sets it as the text content of the counter element.

Inside this count function, we call requestAnimationFrame and pass the count function itself as the callback; this causes it to run again just before the next frame. Finally, we call requestAnimationFrame once to start the animation. This is the basic usage of requestAnimationFrame.
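The counter example just described looks roughly like this (the element id is assumed):

```ts
// requestAnimationFrame counter — a minimal sketch.
const counter = document.getElementById('counter') as HTMLElement;
let num = 0;

const count = () => {
  num += 1;
  counter.textContent = String(num);
  // Schedule this function to run again just before the next frame.
  requestAnimationFrame(count);
};

// Start the animation loop.
requestAnimationFrame(count);
```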

To use the mouse to interact with the cube, we will use the npm package called 3d-view-controls. This package can be used to control the camera with your mouse; you can then interact with the 3D object by controlling the camera.

Now let's start Visual Studio Code and open project four that we built in the last section.

Here is the code used in project four.

First we need to install the JavaScript package called 3d-view-controls. Open a terminal window and run npm install 3d-view-controls. We will use this package for camera control.

The package is already installed.

Now let's make some changes to the index.html file.

From the dist folder, open this file.

First we need to change the h1 title to "Animation and Camera Control". We also need to add two radio buttons that let you select whether you want animation or camera control; you can see the radio buttons here.

Okay now we can save this file.

The shader code in this example is the same as that used in project 4.

So we don't need to make any change to the shader code.

Now from the src folder, open helper.ts file.

Here we need to add a new method called createAnimation; this is the code for the createAnimation method. Here the draw argument is a callback function. We want to animate the rotation around the x, y, and z axes, advancing it by a step on every frame. The createAnimation function also has a boolean argument, is_animation, that controls whether you want to animate the object or not.
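A hedged sketch of what createAnimation might look like; the step values and structure are illustrative:

```ts
// helper.ts — a sketch of createAnimation.
import { vec3 } from 'gl-matrix';

export const createAnimation = (
  draw: () => void,                       // callback that renders one frame
  rotation = vec3.fromValues(0, 0, 0),
  isAnimation = true
) => {
  const step = () => {
    if (isAnimation) {
      // Advance the rotation angles a little every frame.
      rotation[0] += 0.01;
      rotation[1] += 0.01;
      rotation[2] += 0.01;
    }
    draw();
    requestAnimationFrame(step);
  };
  step();
};
```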

Now we can save this file and close it.

Next we need to make some changes to the main.ts file.

Open main.ts file.

We need to replace its content with the new code.

This code actually is very similar to the code used in the last project.

Here, in addition to the other functions imported from the helper.ts file, we also import the createAnimation function we just created. Note that we use require to load the 3d-view-controls module instead of import. This is because the 3d-view-controls package was created using the old CommonJS module approach, so we have to use require instead of import here. Here we get createCamera from this package.

In the create3DObject method, we introduce an input argument, is_animation, and we set its default value to true. The initialization code and pipeline code are the same as those used in the last project; we don't need to make any changes to them. In the section creating the uniform data, we define vMatrix, the view matrix, which is new. Next, we need to add the rotation and the camera: we define the rotation using vec3, and we create the camera with createCamera, passing the canvas and the camera options as arguments. We don't need to make any changes to the code for creating the uniform buffer and the uniform bind group. The render pass descriptor is also the same as in the last project, but here we put the render-pass-related code inside a new function called draw. This draw function is defined as the callback function for our animation.

You can see that if is_animation is not true, we use the camera to generate our view matrix, and then multiply the projection matrix by this view matrix to form our view-projection matrix. The rest of the code in the draw method is the same as what we used in the last project.

Here we start the animation by calling the createAnimation function; you can see we pass in the draw function defined here as the callback. Finally, we handle the radio button selection: if the checked radio value equals animation, we run the animation; otherwise, we use camera control.

Now we finished the modification to the main.ts file.

So we can save this file now.

Up to now, we have finished all the programming.

Now we can run the following command in the terminal window: npm run prod to bundle our TypeScript code.

Okay, the bundle file is created successfully.

Now we can click the Go Live link to open Chrome Canary to view our 3d Cube.

Click this go live link.

You can see the animated cube on this page.

It rotates continuously because we are in animation mode.

Now you can click on the camera control radio button; then you can use the left mouse button to rotate the cube and the right button to move it around. You can also use the mouse wheel to zoom the cube in and out.

So you can use the mouse to interact with the cube.

So now we have completed this project.

In the previous several projects, I explained how to create some 2d and 3d objects.

However, when rendering these graphics objects with solid colors, you may find that the image looks flat and fails to illustrate the 3D nature of the objects. This is because we neglected the interaction between light and the surfaces of our objects.

Lighting helps provide visual effects that make a scene look more realistic.

Lighting is one of the most important factors for creating real world shaded 3d graphics objects.

However, WebGPU does not provide many built-in features for lighting; it just runs two shader functions, the vertex and fragment shaders. If you want lighting effects in your 3D scene, you have to create the lighting model yourself.

In this project, I will build a simple lighting model in WebGPU and use it to simulate the light source.

Here you can download the source code used in this project from this GitHub repository link.

This specific commit version is used for this project.

Here I will discuss three types of light sources: ambient light, diffuse light, and specular light.

The ambient light is the light that illuminates all objects uniformly regardless of their location or orientation.

It is a global illumination in an environment.

The diffuse and specular reflections depend on the angle of the light to the surface. When the light hits the surface, the diffuse reflection occurs in all directions, as shown here, while the specular reflection occurs in a single direction, as shown here.

This shows the light reflection on a torus surface. On the left, the light comes only from ambient light; you can see this torus looks very flat. The center shows light from both ambient and diffuse light; you can see the 3D features already show here. On the right, we have all three light sources, ambient plus diffuse plus specular; you can see this torus looks much more realistic.

So, this is the light model we want to build in this project.

The light reflection depends on the angle at which the light hits the surface.

The angle is essential to the diffuse light and the specular light, and it is always measured against the surface normal, a vector perpendicular to the surface. To build the light model, we need to calculate the surface normal based on the direction in which the surface is facing. Since the surface of a 3D object can be curved, it can face different directions at different vertices. So the normal vector is usually different at different vertices; that is, a normal vector is always associated with a particular point on the surface.

This diagram shows the normal vectors for a 3D cube. Here, the front face points towards the screen, so its normal vector is (0, 0, 1). The right face points to the right, so its normal is (1, 0, 0). And the top face points towards the top, so its normal vector is (0, 1, 0). Similarly, you can specify the normal vectors for the back, left, and bottom faces. Note that the length of a normal vector should always be one; that is, we need to normalize the normal vectors.

For a general quadrilateral a-b-c-d, its four vertices may not lie in the same plane, so we can divide it into two triangles, a-b-c and c-d-a.

We can calculate its surface normal in two steps.

First, we calculate the weighted normal for each triangle, which is weighted by the area of that triangle; the triangle with the larger area gets more weight. As we know, the area of a triangle is proportional to the magnitude of the cross product of two of its sides. For example, for the triangle a-b-c we take the cross product of two of its sides, and similarly for the triangle c-d-a.

The next step is to calculate the surface normal from the weighted triangle normals, that is, the surface normal equals the sum of two weighted normals, and we can then normalize it.

So the quadrilateral normal equals the normalized sum of the two triangle cross products. We can combine these two terms into a single term: the cross product of the two diagonal vectors, ac and bd. This means the surface normal for this general quadrilateral is simply the normalized cross product of its two diagonals. We will use this formula to calculate the light model for our 3D surfaces.
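In code, this diagonal-cross-product rule might look like the following sketch using gl-matrix (the function name is illustrative):

```ts
// The surface normal of quad a-b-c-d is the normalized cross product
// of its diagonals — a sketch of the rule derived above.
import { vec3 } from 'gl-matrix';

export const quadNormal = (a: vec3, b: vec3, c: vec3, d: vec3): vec3 => {
  const ac = vec3.create();
  const bd = vec3.create();
  vec3.subtract(ac, c, a);   // diagonal from a to c
  vec3.subtract(bd, d, b);   // diagonal from b to d

  const n = vec3.create();
  vec3.cross(n, ac, bd);     // proportional to the summed triangle normals
  vec3.normalize(n, n);      // unit-length surface normal
  return n;
};
```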

Once we have calculated the normal vector, we can use it to compute the light intensity.

For diffuse light, the intensity is proportional to the cosine of the angle between the light vector L and the normal vector N: cos α = normalize(L) · N. If we define Id as the diffuse intensity and Kd as the diffuse material property, we have this formula: Id = Kd · max(cos α, 0). Here, the max is used to avoid negative values of cos α; we only take the positive value. We can also add the ambient light to the diffuse intensity like this: I = Ia + Kd · max(cos α, 0), where Ia is the ambient light intensity.

For specular light, we have two models. One is the Phong model, which is very simple: Is = Ks · max(V · R, 0)^s, where Is is the intensity of the specular light, Ks is the specular material property, s is the shininess (or roughness) of the surface, V is the view direction, and R is the reflection direction. One issue with the Phong model is that the angle between the view direction and the reflection direction has to be less than 90 degrees in order for the specular term to contribute.

We can use the Blinn-Phong model to address this issue.

The Blinn-Phong model uses a different set of vectors for its computation, based on the half-angle vector. Here, L is the light direction, N is the normal vector, V is the view direction, and H is the half-angle vector, defined as the normalized sum of the light direction and the view direction: H = normalize(L + V). The formula in the Blinn-Phong model becomes Is = Ks · max(N · H, 0)^s, using N · H instead of V · R. The angle between N and H is always less than 90 degrees, so N · H always gives a positive value. We will therefore use the Blinn-Phong model in our calculation of the specular light.

Now let's go to the programming part.

We start Visual Studio Code and open project five, which we built in the last section.

Here is the code for our project file.

Now let's implement the light model in the shader code.

From the src folder, open shader.wgsl. We need to replace its content with the new code.

Here we first define the uniform structure, which contains the view-projection matrix, the model matrix, and the normal matrix. The normal matrix is the transformation matrix for the normal vector data.

When applying a transform to a surface, we need to derive the normal vectors for the resulting surface from the original normal vectors. In order to transform the normal vectors correctly, we don't simply multiply them by the same matrix used to transform our object; we need to multiply them by the transpose of the inverse of that matrix. Since the current WGSL version does not implement a matrix inverse function yet, we have to compute this transpose of the inverse of the transform matrix in TypeScript code and then pass it in as a uniform matrix, shown here as the normal matrix.

We then define the output structure, which contains three variables: the built-in position; vPosition, the position after the model transformation; and vNormal, the normal vector after transformation. Our vs_main function takes two input arguments: the vertex position, called pos, and the normal vector data.

Inside this function, we perform only the model transform on the position; this gives us vPosition, which is used in the lighting calculation. vNormal is the result of multiplying the normal vector data by the normal transform matrix. And the built-in position is obtained by performing the model, view, and projection transforms on the vertex data.

For the fragment shader, we first introduce a uniform structure called FragUniform that contains two variables: the light position (the light vector) and the eye position (the view vector). It also has another structure, called LightUniform, that contains the parameters for our light model: the light color and the specular color (the specular light can have its own color), plus the ambient intensity, diffuse intensity, specular intensity, and specular shininess.

Here the fs_main function takes v_position and v_normal as inputs.

Inside this function, we calculate the light model.

Here is the diffuse term; it just uses N · L, the cosine of α. For the specular light, we use the Blinn-Phong model; this is N · H. The output color from the fragment shader is weighted by the ambient, diffuse, and specular light, and their combination gives us the final color.

So, this is the shader we are going to use to calculate the light model.
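A condensed sketch of this Blinn-Phong computation in current WGSL syntax; the uniform layout and field names here are illustrative, not the book's exact code:

```wgsl
// Blinn-Phong lighting in the fragment shader — a hedged sketch.
struct FragUniforms {
    lightPosition : vec4<f32>,
    eyePosition : vec4<f32>,
}
@binding(1) @group(0) var<uniform> fragUniforms : FragUniforms;

struct LightUniforms {
    color : vec4<f32>,          // object color
    specularColor : vec4<f32>,
    params : vec4<f32>,         // ambient, diffuse, specular, shininess
}
@binding(2) @group(0) var<uniform> light : LightUniforms;

@fragment
fn fs_main(@location(0) vPosition : vec4<f32>,
           @location(1) vNormal : vec4<f32>) -> @location(0) vec4<f32> {
    let N = normalize(vNormal.xyz);
    let L = normalize(fragUniforms.lightPosition.xyz - vPosition.xyz);
    let V = normalize(fragUniforms.eyePosition.xyz - vPosition.xyz);
    let H = normalize(L + V);   // half-angle vector

    let diffuse = light.params.y * max(dot(N, L), 0.0);   // Kd * cos(alpha)
    let specular = light.params.z *
        pow(max(dot(N, H), 0.0), light.params.w);         // Blinn-Phong term
    let ambient = light.params.x;

    let finalColor = light.color.rgb * (ambient + diffuse) +
                     light.specularColor.rgb * specular;
    return vec4<f32>(finalColor, 1.0);
}
```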

Now we can save this file and close it.

Next, in the src folder, we will add a new TypeScript file called light.ts.

Here is the code for this file.

Here we first create an interface named LightInputs that contains the parameters used for calculating the light model. This code also contains a new function, createShapeWithLight, which has four input arguments: the vertex data, the normal data, the light inputs, and the is_animation parameter. Here we set default values for the light parameters in case none are passed in.

Next we create a vertex buffer and a normal buffer here.

You can see that the pipeline contains a buffers attribute here; it is an array that contains two elements. The first element is for the vertex data, with shader location 0; the second element is for the normal vector data, with shader location 1. The other parts of the pipeline are the same as what we used in project five.

Next, we create the uniform data. Here we define the normal matrix, the model matrix, and the view and view-projection matrices. For simplicity, here we set the light position equal to the eye position we get from the camera; this means the light vector is equal to the view vector.

Here we then create three uniform buffers: one is vertex uniform buffer, another one is a fragment uniform buffer, and finally it is light uniform buffer.

The vertex uniform buffer is used to store the model matrix, vew-projection matrix, and a normal matrix.

The fragment uniform buffer is used to store the eye position and light position.

And the light uniform buffer is used to store the light parameters, which will be used when calculating the light model in the fragment shader. Please note the callback draw function here: we write the model matrix and the normal matrix inside this draw function, because we want to use the real-time model matrix to perform various transformations.

Note that we construct the normal matrix here by first inverting the model matrix and then transposing the result. That is, we apply the inverse and then the transpose to the original model matrix to get the final normal matrix.

When we write the data to the buffer, please note the offsets of 64 and 128 here: each 4×4 float matrix occupies 64 bytes, so we need to use the correct byte offset to write each matrix to the uniform buffer.
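For example, assuming the matrices are laid out in the order model, view-projection, normal (as described above), the writes might look like this:

```ts
// Each 4x4 float32 matrix occupies 16 * 4 = 64 bytes, so the three
// matrices live at byte offsets 0, 64, and 128 in the uniform buffer.
device.queue.writeBuffer(vertexUniformBuffer, 0, modelMatrix as Float32Array);
device.queue.writeBuffer(vertexUniformBuffer, 64, viewProjectionMatrix as Float32Array);
device.queue.writeBuffer(vertexUniformBuffer, 128, normalMatrix as Float32Array);
```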

Finally, we set the vertex buffer, normal buffer, and a uniform binding group to the render pass.

Now we have finished the coding for this file.

We can save this file and close it. Okay, we have completed the programming for calculating the light model.

In the next project, we will use the light model implemented here to create a 3d cube with lighting effects.

In Project 6, I explained how to build a simple light model in WebGPU.

In this project, I will show you how to use this light model to add lighting effects to a 3d cube, as shown here. You can download the source code used in this project from this GitHub repository link.

This specific commit version is used for this project.

Now start Visual Studio Code and open our project 6 that we built in the last section.

Here is the code used in Project 6, as we discussed in the last section.

In order to calculate the light model, we have to know the normal vector at each vertex on the surface of the 3d object.

Now we need to add normal vector data to the vertex_data of our cube.

From the src folder, open the vertex_data.ts file.

Here, for the CubeData method, we need to add the normal vector data. You can see that vertices on the same face have the same normal vector.

For example, the vertices on the front face all have the normal (0, 0, 1).

Similarly, the vertices on the right face all have (1, 0, 0), and so on.

Once we add the normal vector data to this CubeData function, we also need to return the normals.
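For instance, the normal data for the front and right faces might look like this (six vertices per face, two triangles; the remaining faces follow the same pattern):

```ts
// Per-vertex normals: every vertex of a face shares that face's normal.
const normals = new Float32Array([
    // front face (+z): normal (0, 0, 1) repeated six times
    0, 0, 1,  0, 0, 1,  0, 0, 1,  0, 0, 1,  0, 0, 1,  0, 0, 1,
    // right face (+x): normal (1, 0, 0) repeated six times
    1, 0, 0,  1, 0, 0,  1, 0, 0,  1, 0, 0,  1, 0, 0,  1, 0, 0,
    // ...the other four faces are analogous
]);
```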

Okay now we can save this file and close it.

Next we need to make some changes to the index.html file.

From the dist folder, open the index.html file.

Now we need to replace its content with the new code.

Here we add some input parameters that let you test our light model.

These parameters also allow you to specify the object color and the ambient, diffuse, and specular light coefficients.

You can also specify surface shininess and specular color for the specular light computation.

Now we can save this file.

Next we need to make some modifications to the main.ts file.

From the src folder, open the main.ts file.

Now replace its content with new code.

Since most of the code for the render pipeline and render pass has already been included in the light.ts file, the main.ts file becomes very simple.

First, we import the createShapeWithLight function and the LightInputs interface from the light.ts file, and then import the cube data from the vertex_data.ts file.

Next, we call the CubeData method to get the vertex data and define the default parameters; for the light inputs, we just use the default values.

We also set is_animation to true.

Then we call the createShapeWithLight function to create a 3d cube with lighting effects.
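A condensed sketch of what main.ts might look like at this point (import paths and property names follow the project layout described here; exact names may differ):

```ts
import { CubeData } from './vertex_data';
import { createShapeWithLight, LightInputs } from './light';

// Use the default light parameters and enable animation.
const data = CubeData();      // returns positions and normals
const li: LightInputs = {};   // empty object = all defaults
createShapeWithLight(data.positions, data.normals, li, true);
```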

This part of the code allows the user to recreate the cube with different lighting effects by changing the input parameters.

Now we have finished the modifications to the main.ts file.

We can now save this file.

Now we can run the following command in the terminal window to bundle our TypeScript code in production mode: npm run prod. Okay, the bundle file is created successfully.

Now we can click the Go Live link to open Chrome Canary to view our 3d cube with lighting effects.

Click this link.

Here is a red cube with the lighting effect displayed on this page.

We can make some changes to the input parameters and click the redraw button to recreate the cube.

For example, we can set the diffuse and specular coefficients to zero and the ambient light to one.

Now click redraw, and we get a cube with ambient light only.

You can see the cube looks flat, with no 3d appearance.

Now let's add the diffuse and specular light back, setting both coefficients to 0.2.

We can then change the color of the object.

For example, we change this red to green, (0, 1, 0).

And also change the specular color from white to yellow, and then click the redraw button.

You can see the yellow highlight here. You can increase the shininess, for example, to 100.

You can see the specular spot is a little bit yellow; this is the specular light.

Now, go back to animation.

So by changing these parameters, you can get new lighting effects on this cube.

You can use this light model to easily create your own 3d objects by following the procedure presented here.

Okay, now we have completed project 7.

The projects we discussed so far applied a fixed color by assigning a desired color value to the fragment color variable in the fragment shader.

In this project, I will explain how to use the colormap to render 3d surfaces.

Surfaces play an important role in various applications, including computer graphics, games, and 3d data visualization.

In some graphics and chart applications, we need a custom colormap to achieve special visual effects.

In fact, the colormap is just a table or list of colors that are organized in some desired fashion.

We can create a custom colormap with an mx3 color matrix with each row representing RGB values.

The row index can represent the y data of a 2d chart or height of a 3d surface plot.

For a given colormap matrix, the color data values can be linearly scaled to colormap indices.
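For instance, a data value v in the range [v_min, v_max] can be mapped to a row index of an m-row colormap with a standard linear scaling (one reasonable formulation, not necessarily the exact one used later):

$$
i = \operatorname{round}\!\left((m - 1)\,\frac{v - v_{min}}{v_{max} - v_{min}}\right)
$$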

Here are the color strips for the colormaps that we will create in this project.

You can see that each colormap has a familiar colormap name, such as hsv, hot, cool, jet, and so on.

You can download the source code used in this project from this GitHub repository link.

This specific commit version is used for this project.

Now start Visual Studio Code and open our project 7 that we just built in the last section. Here is the source code used in the last project.

We can easily create a custom colormap using simple mathematical formulas.

Here, I will provide several commonly used colormaps using mx3 colormap arrays.

Now add a new TypeScript file called colormap_data.ts to the src folder.

And then add a new method to this file.

Here, the colormap data method contains the data for 11 different colormaps.

You can see hsv, hot, and jet, which is the default.

Inside this function, you can see that each colormap contains 11 color arrays, and each array holds RGB values.
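Schematically, the colormap data might be organized like this (the RGB values below are illustrative, not the book's exact numbers):

```ts
// Sketch: each colormap is a list of 11 [r, g, b] arrays with
// components in the range 0..1.
export const ColormapData = (name: string = 'jet'): number[][] => {
    switch (name) {
        // ...cases for 'hsv', 'hot', 'cool', etc. have the same shape
        case 'jet':
        default:
            return [
                [0, 0, 0.5], [0, 0, 1], [0, 0.5, 1], [0, 1, 1],
                [0.5, 1, 0.5], [1, 1, 0.5], [1, 1, 0], [1, 0.5, 0],
                [1, 0, 0], [0.75, 0, 0], [0.5, 0, 0],
            ];
    }
};
```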

Here are the color strips generated using this colormap data.

This is hsv, corresponding to this color strip.

And here is jet, corresponding to this color strip.

So, these color strips are generated using colormap data.

In the colormap_data function, we assume these 11 color arrays are uniformly distributed in the range of zero to one.

For example, color zero, the first element, represents the color at location zero, while color 5 represents the color at 0.5, and so on.

However, we have to use an interpolation method to get a color at, for example, 0.55 or any other arbitrary location.

So here, we will use an npm package called interpolate-arrays to do the color interpolation.

Now, in the terminal window, we run the command npm install interpolate-arrays to install this package.

Now, at the top of this colormap file, we need to introduce this package.

Here, you can see that we use require to introduce this package, because it was originally created using the old CommonJS module format, so we have to use require instead of import to get it.

Next, we add a new function called addColors to this file.

This function accepts an x argument whose value is in the range between the minimum and maximum.

This function allows you to interpolate color for any arbitrary x value.

Inside this function, we first get the color array by calling the colormap data method with a colormap name as the input.

Then we make sure the input x parameter is within the minimum and maximum range.

Next, we normalize x to the range of zero and one.

Finally, we interpolate the color for the x value by calling interp with the color array and the normalized x.

This interp function comes from the interpolate-arrays package.

This function returns an RGB array with each component in the range of zero to one.
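Under these assumptions, a minimal sketch of addColors could look like this (the interpolate-arrays package exposes a single function that interpolates between a set of arrays at a position in [0, 1]; the signature here is one plausible arrangement):

```ts
import { ColormapData } from './colormap_data';
// interpolate-arrays is a CommonJS package, so we pull it in with require.
const interp = require('interpolate-arrays');

export const addColors = (
    colormapName: string, min: number, max: number, x: number
): number[] => {
    const colors = ColormapData(colormapName);
    // Clamp x into [min, max], then normalize to [0, 1].
    const xc = Math.min(Math.max(x, min), max);
    const t = (xc - min) / (max - min);
    // Interpolate an [r, g, b] color at position t along the colormap.
    return interp(colors, t);
};
```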

So now we can save this file and close it.

Next, in the src folder, rename the light.ts file to surface.ts.

This surface.ts file will be used to create different surface plots.

First, we need to make some small changes to the LightInputs interface.

Here we can remove the color field, because we no longer specify a solid color for our object; instead, we will use the colormap for the object color.

Then we add another new field called is_two_side_lighting.

Since 3d surfaces are usually open surfaces, we need to implement the light model for both the front and back sides in order to see the surface clearly. This parameter controls whether you want one-sided or two-sided lighting.

Next, we change the function name to createSurfaceWithColormap and add the color data as an input argument.

Here, after the normal data, we add the color data; this color data means the colormap data.

Here we also remove the color and add a default value for the is_two_side_lighting parameter.

We set the default value to 1, which means two-sided lighting; if it is 0, lighting is one-sided.

Where we create the vertex buffer and normal buffer, we also need to create a color buffer using the color data.

Next, here is the buffers array: the first element is the vertex buffer and the second is the normal buffer. We also need to add another buffer for the colormap data, with its shader location set to 2.

We don't need to change the other parts of the code for the pipeline; they are the same as those used in the last project.

The code for the uniform data, camera, and uniform buffer layout is the same as that used in the last project.

So we don't need to change anything here.

But we need to make some changes to the light parameters here, because we have already removed the light color. So we remove the light color and instead add the is_two_side_lighting parameter, pushing it followed by 0, 0, 0 to keep the parameter consistent with a vec4 array.

The other change we need to make is to add the color buffer to the render pass.

Alongside the vertex buffer and normal buffer, we also need to set the color buffer at slot 2.
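In the render pass, that amounts to something like this:

```ts
// Bind all three vertex buffers to the slots matching the pipeline layout.
renderPass.setVertexBuffer(0, vertexBuffer); // positions
renderPass.setVertexBuffer(1, normalBuffer); // normals
renderPass.setVertexBuffer(2, colorBuffer);  // colormap colors
```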

Okay, we have finished the modifications to this file, so we can save it.

Next we need to make some modifications to the shader code, because we want to incorporate lighting and the colormap into the shader.

So now, from the src folder, open the shader.wgsl file and replace its content with the new code.

This shader is very similar to that used in the last project.

Inside the vertex shader, in addition to the vertex position and normal vector, we also add the color at location 2 to the input.

Within the vs_main function, we process the colormap and assign it to v_color.

In the fragment shader, we use the processed colormap data as our object's primary color. You can see v_color, which comes from the vertex shader; we use it as the primary color to get the final color.

Inside the fragment shader's main function, you can see that the is_two_side_lighting parameter, here called parameter zero, controls whether the lighting is applied to one side or two sides of our surface.

We set it to one for two-sided lighting, which is also the default setting.

For the front side, the diffuse term uses N dot L and the specular term uses N dot H. For two-sided lighting, we add another pair of terms: the diffuse uses minus N dot L and the specular uses minus N dot H. This means that for the backside light, we simply reverse the normal vector.
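In schematic form, the two-sided version simply adds the reversed-normal terms:

$$
\begin{aligned}
diffuse &= \max(N \cdot L,\, 0) + \max(-N \cdot L,\, 0)\\
specular &= \max(N \cdot H,\, 0)^{s} + \max(-N \cdot H,\, 0)^{s}
\end{aligned}
$$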

Now we can save this file.

Okay now we've finished the programming for colormap in this project.

In the last project we discussed the colormap model.

In this project I will explain how to use this model to build simple 3d surfaces.

Here is an example of a 3d simple surface created using a peaks function.

Mathematically, a simple surface plots a y value as a function of the x and z coordinates over a region of interest.

For each x and z value pair, a simple 3d surface can have at most one y value.

We can define a simple surface by the y coordinates of points above a rectangular grid in the x-z plane. The surface is formed by joining adjacent points with straight lines.

Typically, the surface is formed using rectangular mesh grids.

However, WebGPU only provides triangles as the basic unit to represent any surface in 3d.

So, in order to represent a surface using traditional quadrilaterals, we need to write our own functions.

You can download the source code used in this project from this GitHub repository link.

This specific commit version is used for this project.

Now start Visual Studio Code and open project 8 that we built in the last section.

Here is the code used in Project 8.

First, we need to add a new TypeScript file called surface-data.ts to the src folder.

We import vec3 from the gl-matrix library, and also import the addColors function from the colormap_data.ts file.

First, let's add a utility function called normalizePoint that will be used to normalize a point to the range of minus one to one.

You can see that this function takes a vec3 point; the data ranges (x minimum, x maximum, y minimum, y maximum, z minimum, z maximum); and a global scale parameter that allows you to change the value range for the x, y, and z coordinates.

This will be convenient for setting the default size of our surface.

This function returns the normalized 3d point.
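A sketch of this helper, assuming a linear map of each coordinate into [-scale, scale]:

```ts
import { vec3 } from 'gl-matrix';

// Sketch: linearly map each coordinate of pt into [-scale, scale].
export const normalizePoint = (
    pt: vec3, xmin: number, xmax: number, ymin: number, ymax: number,
    zmin: number, zmax: number, scale: number = 1
): vec3 => {
    const result = vec3.create();
    result[0] = scale * (-1 + 2 * (pt[0] - xmin) / (xmax - xmin));
    result[1] = scale * (-1 + 2 * (pt[1] - ymin) / (ymax - ymin));
    result[2] = scale * (-1 + 2 * (pt[2] - zmin) / (zmax - zmin));
    return result;
};
```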

Our simple 3d surface, as shown here, can be formed by a quadrilateral mesh grid. Here is one of the quadrilaterals, a unit grid cell, as shown here.

This unit grid is a quadrilateral with four vertices: p0, p1, p2, and p3. We can divide this quadrilateral into two triangles, p0-p1-p2 and p2-p3-p0, as shown here.

We will create a vertex data, normal vector data and colormap data for this quadrilateral.

Now, we can add a new function called createQuad.

You can see here that this function takes the four points shown here, points 0, 1, 2, and 3, as its input arguments.

It also takes the vertex data range, y minimum and y maximum, as well as the colormap name as its input parameters, because we want to map the colormap to the y values of our surface.

Of course, you could map the colormap to the data values in another direction, such as the x or z direction.

But we usually add colormap to the y values.

Here we first create the vertex position data for the six vertices: point 0, point 1, and point 2 form the first triangle, and point 2, point 3, and point 0 form the second triangle. These are the vertices of the two triangles for this unit cell.

Next, we define the normal vector for this quadrilateral.

We already discussed how to obtain the normal vector for this quadrilateral in the previous project.

It is simply equal to the cross product of the two diagonals of this quadrilateral.

Here we introduce these two diagonals: the first is p2 minus p0, which we call ca, and the other is p3 minus p1, which we call db.

Then we take the cross product of these two lines, which we call cp, and normalize it to get the normal vector for this quadrilateral.

So, all the vertices on this quadrilateral have the same normal vector.
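A sketch of that normal computation with gl-matrix (p0 through p3 are the quad's corners as vec3 values from the surrounding code):

```ts
import { vec3 } from 'gl-matrix';

// The quad's normal is the normalized cross product of its diagonals.
const ca = vec3.create(), db = vec3.create(), cp = vec3.create();
vec3.subtract(ca, p2, p0); // first diagonal: p2 - p0
vec3.subtract(db, p3, p1); // second diagonal: p3 - p1
vec3.cross(cp, ca, db);    // cross product of the diagonals
vec3.normalize(cp, cp);    // cp is now the unit normal for all six vertices
```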

Next, we add the colormap data to the vertices of this quadrilateral. You can see p0, p1, p2, and p3: this is the colormap data for these four vertices.

For p0, you can see we add the colormap data by calling the addColors function with its y component, p0[1], and the y data range, y minimum and y maximum.

The addColors function was implemented in the last project.

If you want to map the colormap along another direction, for example the x direction, you should use the x data range and the x component, p0[0].

Similarly, we add the colormap data to the other three vertices.

And finally, we assign this colormap data to the two triangles. They have six vertices, and different vertices can have different colors from the colormap.

Finally, the createQuad function returns the vertex, normal, and color data for this unit grid.

This createQuad function only creates the data for a single unit grid; now we need to create the data for the entire surface.

We need to add a new function called simpleSurfaceData; this function creates the data for the entire surface.

This function takes f as an input argument, where f is a math function that describes the 3d simple surface.

We then define the data range in the x-z plane using minimum and maximum values for x and z.

The next two input parameters, nx and nz, represent the grid divisions along the x and z directions.

Here the scale is a global scaling parameter used in the normalize point function.

The scale-y parameter is used to control the y value height relative to the x and z values; that is, the scale-y parameter controls the aspect ratio of our surface plot.

Inside this method, we first define the size of a unit grid: you can see dx and dz.

We then calculate the vertex position on our surface by calling the function f.

You can see we use this for loop, inside which we call the f(x, z) function that describes our surface.

Here we also calculate the y value range: y minimum and y maximum.

Next, we rescale the y value range using the scale-y parameter. Then we normalize the vertex positions by calling the normalizePoint function with the value ranges for x, y, and z, and also the scale.

Inside this double for loop, we first define the unit cell with the points p0, p1, p2, and p3.

Then we call the createQuad function to get the vertex position, normal vector, and colormap data for this unit grid; these form the vertex, normal, and color data for our surface.

Finally, this simpleSurfaceData function returns the vertex data, normal data, and colormap data.
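Putting the pieces together, the heart of simpleSurfaceData is a double loop over the grid cells; schematically (pts is the precomputed grid of normalized points, and createQuad is assumed to return its three arrays in an object):

```ts
// Walk the grid and collect vertex, normal, and color data from each quad.
const vertices: number[] = [], normals: number[] = [], colors: number[] = [];
for (let i = 0; i < nx - 1; i++) {
    for (let j = 0; j < nz - 1; j++) {
        // Four corners of one unit grid cell:
        const p0 = pts[i][j], p1 = pts[i + 1][j];
        const p2 = pts[i + 1][j + 1], p3 = pts[i][j + 1];
        const quad = createQuad(p0, p1, p2, p3, ymin, ymax, colormapName);
        vertices.push(...quad.vertices);
        normals.push(...quad.normals);
        colors.push(...quad.colors);
    }
}
```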

Now we have finished the programming for 3d simple surfaces in this project.

In the next project, we will use this framework to create a 3d sinc surface with both lighting effects and a colormap.

In the last two projects, we discussed the colormap model and simple 3d surface construction.

In this project, I will explain how to use the colormap model and the simple surface data function to create a 3d sinc surface as shown here.

This is the pretty sinc surface we want to create in this project.

You can download the source code used in this project from this GitHub repository link.

This specific commit version is used for this project.

Now start Visual Studio Code and open our project 9 that we built in the last section.

Here is the code used in the last project.

First, we need to make some changes to the index.html file. From the dist folder, open the index.html file.

Here we need to change the h1 title to Sinc Surface, and then we need to change the parameters here.

You can see here that we have a two-side lighting parameter that controls whether we want to apply the lighting effect to one side or both sides of the surface.

Here is a dropdown menu for the colormap that contains the 11 colormap names we defined in the colormap_data.ts file, so you can select a different colormap for our surface from this dropdown.

Here is the scale parameter that lets you set the default size and aspect ratio for the sinc surface.

Now we can save this file. Next, let's define our sinc function.

Add a new TypeScript file called math-func.ts to the src folder.

Here is a definition of the sinc function.

Here, r is defined as the square root of x squared plus z squared. If r is not zero, the function equals sin(r) over r; otherwise, when r equals zero, the function equals one.

In fact, this function is the Fourier transform of a rectangular function.

Now we add some code here to define this function.

You can see this function is very simple: it takes x, z, and a center parameter as its input arguments. The center parameter lets you set the location of our sinc surface.

Inside this function, we define r using the formula shown here. If r equals zero, the function returns one; otherwise, it returns sin(r) over r.

This function returns a vec3 point on the sinc surface.
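A sketch of this function (the center parameter shifts where the peak sits):

```ts
import { vec3 } from 'gl-matrix';

// Sinc surface: y = sin(r)/r, where r is the distance from the center
// point in the x-z plane, with y = 1 at r = 0.
export const sinc = (x: number, z: number, center: number[] = [0, 0]): vec3 => {
    const r = Math.sqrt((x - center[0]) ** 2 + (z - center[1]) ** 2);
    const y = r === 0 ? 1 : Math.sin(r) / r;
    return vec3.fromValues(x, y, z);
};
```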

Now we can save this file and close it.

Next, we need to make some changes to the main.ts file.

From the src folder, open the main.ts file and replace its content with the new code.

Since most of the code for the render pipeline and render pass has already been included in the surface.ts file, the main.ts file becomes very simple. We first import the simpleSurfaceData function from the surface-data.ts file, and then import the sinc function from the math-func.ts file.

Next, from the surface.ts file, we import the createSurfaceWithColormap function and the LightInputs interface.

Next, we create a new function called createSurface.

This function takes the light inputs, is_animation, the colormap name, scale, and scale-y as its input arguments.

Inside this function, we call simpleSurfaceData with the sinc function as the input argument.

From this simpleSurfaceData method we get the vertex positions, normal vectors, and colormap data; we then call the createSurfaceWithColormap function to create our sinc surface with lighting and colormap effects.

Here we define the default input parameters and then call the createSurface function to create a 3d sinc surface with the default lighting and colormap effects.

This part of the code allows the user to recreate the sinc surface with different input parameters.

Here, this code allows the user to select the different colormap from the dropdown menu.

Now we've finished the modification to the main.ts file.

Okay, save this file.

Now open a terminal window and run the command npm run prod to bundle our TypeScript code in production mode. Okay, the bundle file is created successfully.

Now we can click this go live link to open Chrome Canary to view our sinc surface.

Click this link.

Okay, here is our sinc surface with the default two-sided lighting and jet colormap displayed on this page.

Now let's check what happens if we use one-sided lighting.

Using the camera control, we rotate to the back side, and we set the two-side lighting parameter to zero.

You can see that there is no diffuse or specular light on the back; it only has very weak ambient light. If we set the parameter back to one, you can see the difference between one-sided and two-sided lighting: now the back is lit.

Here, the scale parameter sets the default size of the surface.

For example, if we change it to 1, we get a smaller surface; if we change it to 3, we get a bigger surface.

So this parameter controls the default size of the surface.

We go back to 2.

And here, the scale-y parameter lets you control the aspect ratio.

For example, if we set it to zero, you get a taller surface.

And if we change it to 0.5, you get a shorter and fatter surface.

We change it to 0.3. So you can use this parameter to control the aspect ratio.

Next we can change the colormap from this dropdown menu.

For example: autumn, bone, cool, copper, gray, hot, hsv, spring, summer, and winter.

So you can get different colormaps for our surface.

You can see that in WebGPU, we can easily create a beautiful 3d surface with lighting and colormap effects.

You can create your own surfaces by simply providing your own math functions.

So now we've completed our project 10.

You can use WebGPU to create advanced graphics in your web applications.

Here are some more examples from my recently published book "Practical WebGPU Graphics". First, here are some parametric 3d surface examples.

They all look beautiful because they have both colormap and lighting effects as we did for the simple 3d sinc surface in our projects.

Here are texture map examples on 3d objects.

Texture plays an important role in 3d graphics.

Modern GPUs have built-in hardware support for image textures.

Texture mapping on a 3d object provides a more interesting and realistic look.

These examples demonstrate that you can do texture mapping on a sphere, a cylinder, and a 3d surface.

You can also use multiple textures on a 3d object.

You can see here that each face of our 3d cube has a different image texture.

Here are examples of domain coloring for functions with complex variables.

These beautiful pictures are drawn pixel by pixel based on bitmap rendering, which is a computation-intensive process.

Here we perform all calculations and rendering directly on the GPU, which makes the real-time animation possible; this would be impossible on a CPU.

These are 3d fractal examples.

They provide more structures than 2d fractals.

Again, these nice pictures can only be created on the GPU.

On a CPU, it would be too slow to achieve this real-time animation.

Here are some examples of large particle systems.

This is a compute-boids example, in which we simulate the flocking behavior of birds.

This picture shows particle kinematics with collisions against the walls.

This shows three mass centers that attract particles.

Here we use a compute shader to perform the physics-related computations and update the positions and velocities on each frame.

Only the GPU makes it possible to simulate such a large particle system in real time.

If you want to learn how to create these pictures and particle systems, please visit my website at drxudotnet.com and my YouTube channel, Practical Programming with Dr. Xu.

Thank you for watching.