OpenGL can be used to create complex 2D and 3D graphics effects.

We just released an advanced OpenGL course on the freeCodeCamp.org YouTube channel.

Victor Gordan created this course. Before this course he created one of the most popular OpenGL courses on YouTube. Now, he will help you take your skills to the next level.

Here are the topics covered in this course:

  • The Depth Buffer
  • The Stencil Buffer
  • Face Culling
  • Transparency & Blending
  • The Framebuffer
  • Cubemaps & Skyboxes
  • The Geometry Shader
  • Instancing
  • Anti-Aliasing

Watch the course below or on the freeCodeCamp.org YouTube channel (1-hour watch).

Transcript

OpenGL can be used to create complex graphics effects. This advanced OpenGL course from Victor Gordan will take your skills to the next level.

Hi everyone! In this course I'll teach you about different advanced topics in OpenGL that will help you achieve more complex effects, better looking renders, and more optimized scenes. If you don't know what OpenGL is or are not familiar with it, then you should first watch the beginners course for OpenGL which you can find on this channel. And that should be all you need to know about this course, enjoy!

In this tutorial we'll take a look at Depth Buffers in OpenGL and also see how we can make use of them for a little graphical effect.

You might remember that we've already made use of the Depth Buffer in the "Going 3D" tutorial in order to fix a weird issue we had. Since the buffer is turned off by default, we want to make sure we have it enabled, and that we clear it each frame just like the Color Buffer. What this buffer does is store "depth" values that represent how far away from the near plane of the projection matrix a certain fragment is: a depth value of 0 means the fragment is right on the near plane, and a value of 1 means it's on the far plane.
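In code, that setup might look like this (a minimal sketch; the clear color is just a placeholder):

```cpp
// Enable depth testing once during setup (it is off by default)
glEnable(GL_DEPTH_TEST);

// At the start of every frame, clear the depth buffer together with the color buffer
glClearColor(0.07f, 0.13f, 0.17f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
```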

Using this depth information, we can assess which object should be in front of which other object. We do this with glDepthFunc, passing in one of the following inputs. By default OpenGL chooses GL_LESS, which means that if the incoming depth value is less than the value currently stored, the new value replaces the stored one. In most circumstances you should use GL_LESS, but I suppose you could also choose one of the others if you want your game or application to be mind bending.

Now the cool part. Let's visualize the depth buffer. We can easily do this by going to the fragment shader and outputting gl_FragCoord.z as the FragColor. The problem is that as soon as we press run, you'll notice the screen is mostly pure white. The only way to see a bit of darkness is to get very close to an object. This is because depth in OpenGL is not linear. If depth were linear, we would have the same amount of precision at a close distance as at a far away distance. Since we almost always focus on things that are close to us, we want the precision to be very high near us, and low far away from us. This is achieved by using this formula. Don't worry, we don't have to implement it since OpenGL does it automatically.

Sometimes though, you want to use another formula, so in order to do that we must first get the z-value by linearizing the depth, which can be done using this function. The z-value we get from it, keep in mind, is not normalized; it is simply the distance from the near plane. So let's declare the near and far constants of our frustum, and divide the linear depth by the far length to quickly normalize it, just to see what the results look like. Now let's look at a quick problem you can run into, and then I'll show you the cool effect we can achieve with depth buffers.
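The non-linear mapping OpenGL applies is roughly depth = (1/z − 1/near) / (1/far − 1/near); inverting it gives a linearization function along these lines (a sketch of the visualization shader; the near and far constants are placeholders and must match your projection matrix):

```glsl
#version 330 core
out vec4 FragColor;

// Near and far planes of the projection matrix (placeholder values)
float near = 0.1;
float far  = 100.0;

// Undo the non-linear mapping: returns the distance from the near plane
float linearizeDepth(float depth)
{
    return (2.0 * near * far) / (far + near - (depth * 2.0 - 1.0) * (far - near));
}

void main()
{
    // Divide by 'far' just to bring the value into a visible 0..1 range
    float depth = linearizeDepth(gl_FragCoord.z) / far;
    FragColor = vec4(vec3(depth), 1.0);
}
```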

So the main issue that arises from depth buffers is called Z-fighting, and it occurs when two or more triangles have the same depth value, so the depth function can't decide which one is closer and keeps flickering between them. An easy fix is usually to make sure you don't have triangles that are too close to one another and parallel. If Z-fighting appears at a far away distance, you might also consider tweaking the depth function so that you have more precision at that distance. The final trick is to use a bigger integer for the depth buffer: by default it usually uses 24 bits, but you could change it to 32 bits if your card supports it. This will increase the precision and thus decrease the chance of Z-fighting.

Now for the cool effect. We can take the z value, and plug it into a logistic function for which we have a steepness parameter, and an offset parameter. The steepness will control how fast the depth value changes from close to 0 to close to 1, and the offset will determine at what z-value this change is half way done. The cool thing about this, is that this can give you smooth edges for the end of your frustum if you want that, but the even nicer thing, is that this can somewhat simulate a fog effect. So let's change our background color to a grey, and calculate the depth value using our function. Then multiply our usual direcLight() by the reverse of our depth value, and also add the depth value times the grey color from before. Now we should have a nice simple fog effect. Keep in mind that you might need to adjust the steepness and offset a bit to get it just right.
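A sketch of that fog effect in the fragment shader, assuming the linearizeDepth function from above and the direcLight() lighting function from the earlier course; the steepness, offset, and grey color are placeholder values you'll want to tune:

```glsl
// Logistic curve: maps linear depth to a smooth 0..1 transition
float logisticDepth(float depth, float steepness, float offset)
{
    float zVal = linearizeDepth(depth);
    return 1.0 / (1.0 + exp(-steepness * (zVal - offset)));
}

void main()
{
    float depth = logisticDepth(gl_FragCoord.z, 0.5, 5.0);
    // Fade the lit color into the grey background color as depth approaches 1
    FragColor = direcLight() * (1.0 - depth) + vec4(depth * vec3(0.85, 0.85, 0.90), 1.0);
}
```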

Have fun playing around with different effects!

In this tutorial we'll take a look at the Stencil Buffer and how we can use it to create useful effects such as outlining a model.

So the Stencil Buffer, just like the Depth Buffer, holds a value for each pixel you can see, and these values are generally used for image masking. Unlike the Depth Buffer, where each pixel holds between 2 and 4 bytes of data, in the Stencil Buffer each pixel only holds 1 byte of data, so values from 0 to 255. But you'll mainly only use the values 0 and 1.

So let's look at how we can work with this new buffer. First of all we have the function glStencilMask that allows us to choose which parts of the Stencil Buffer we want to be able to modify. It simply takes a pixel from the mask, and a corresponding pixel from the Stencil Buffer, and applies a bitwise "AND" comparison on them. Keep in mind each pixel has a byte of data, so 8 bits. A bitwise "AND" operation compares each bit with its corresponding counterpart and only outputs 1 if they are both 1. Therefore if we input 0x00 into glStencilMask, which means that we have 8 bits equal to 0, then all the comparisons will fail and the Stencil Buffer won't change at all. But if we input 0xFF into glStencilMask, then all the bits of the mask will be 1 since 0xFF is equal to 8 ones, and so we'll be able to modify any part of the Stencil Buffer.

Now let's look at two more functions we can make use of: glStencilFunc, and glStencilOp. glStencilFunc allows us to control how the Stencil Buffer passes a test or fails a test, while glStencilOp allows us to dictate what happens when the stencil test fails, when the stencil test passes but the depth test fails, and when both pass.

Let's take a deeper look at glStencilFunc. It takes in three arguments: a function, a reference value, and a mask. The function can be one of these, by default being set to GL_ALWAYS so the test always passes. The reference value is simply the value we use to compare in the function. Notice how before comparing the stencil value with the reference value we apply a bitwise AND operation to both using the mask. This means that if you want to compare the numerical value of the two accurately you will want your mask to be 0xFF so that nothing changes.

Now for glStencilOp, it has three arguments: sfail, dpfail, and dppass. These stand for stencil fail, depth fail, and depth pass. For all of these you can choose between the following options. By default they are all GL_KEEP, which basically means that nothing changes. There's not really much more to say about these functions, so if you want to know more about them, look them up in the documentation.
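For reference, the two calls together might look like this (a small sketch matching the configuration we'll use for outlining later on):

```cpp
// Always pass the stencil test, compare against reference value 1, full comparison mask
glStencilFunc(GL_ALWAYS, 1, 0xFF);
// sfail, dpfail, dppass: keep the stencil value unless both tests pass, then replace it
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
```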

The Stencil Buffer can be used for many things such as portals, mirrors, and more, but an easy feature to implement is outlining of models, so let's take a look at that. We first want to render our object like we normally do and update the stencil buffer with 1s everywhere we have a fragment from our object, and 0 everywhere we don't have a fragment from our object. This will essentially create a figure like this. Then we want to disable writing to the stencil buffer so we don't accidentally modify it, and also disable depth testing so that we can make sure the next object we draw will be completely in front of the previous one. Now we want to render a scaled up version of the object we had before, but this time in a flat color, and with the following condition: we only draw its fragments where the stencil value is not 1, so basically not where the silhouette of the previous object was. Then we just restore writing to the stencil and enable the depth buffer again. That's all we have to do.

Now let's implement it in code. We start off by enabling our stencil buffer using glEnable(GL_STENCIL_TEST), and making sure we also have our depth buffer enabled. Then we use glStencilOp plugging in GL_KEEP, GL_KEEP, and GL_REPLACE. This will make it so that when both the depth and stencil tests pass, we'll use the reference value specified by glStencilFunc. Now let's make sure we clear all the buffers before each frame, and go on to the part where the magic happens.

First we specify our stencil test always passes, and set the reference value to 1. Then we enable writing to all of our stencil buffer with a stencil mask of all 1s. And now we simply draw our object. By this point in time, our stencil buffer looks like this, and our color buffer like this. Remember that all these values are 1 because we specified in the glStencilOp function that if both the stencil and depth tests pass, then make the pixel of the stencil buffer equal to 1. Now for the outline we want our stencil test to only pass when it's not equal to 1 and the depth test passes. Then we disable our depth buffer for the previously mentioned reasons, and disable writing to the stencil buffer with a mask of all 0s so we can keep our original silhouette.

Now since we want to make the outline a flat color, we'll have to create two new shaders, and a shader program. So let's first create the shader program like so. Then for "outlining.frag" we simply want to return a color, while for "outlining.vert" we want to get the position, and all the uniforms related to transformations, plus a new float uniform called outlining, which we'll multiply with the scale matrix. Now back in the main function we'll want to send that outlining uniform to the shader with a value of something like 1.08. Then we simply draw the same object again, but this time using the other shaders. The last step is to enable writing to the whole stencil buffer, clear it by always passing the test and replacing the values with 0, and re-enable the depth buffer. Now if you run the program, you should see an outline around your object. If your object has its origin at its geometrical center and doesn't have very complex shapes, then the outline probably looks fine. In my case though, you can see that the origin is not at the center since the outline is skewed upwards, and that the shapes are pretty complex, with many curves, so this method won't work that well.
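Putting the whole sequence together, the drawing part of the loop might look roughly like this (a sketch; the Shader/Model class names and the Draw signature are placeholders for however your classes are set up):

```cpp
// 1. Draw the model normally, writing 1s into the stencil buffer wherever it covers a pixel
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilMask(0xFF);
model.Draw(shaderProgram, camera);

// 2. Draw a slightly scaled-up copy in a flat color, but only where the stencil is NOT 1
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
glStencilMask(0x00);            // don't overwrite the silhouette
glDisable(GL_DEPTH_TEST);       // make sure the outline is drawn on top
outliningProgram.Activate();
glUniform1f(glGetUniformLocation(outliningProgram.ID, "outlining"), 1.08f);
model.Draw(outliningProgram, camera);

// 3. Restore the state for the rest of the scene
glStencilMask(0xFF);
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glEnable(GL_DEPTH_TEST);
```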

A better method that doesn't take much extra work would be to import the normals in the vertex shader and add them to the position vector, multiplying them by the outlining uniform, which we'll remove from the scale matrix and lower from 1.08 to 0.08. Now when scaling the vertices, instead of scaling them from their origin, we are sort of scaling them outwardly from the model, using the normal as a reference for what "outward" means. This will give you much better results, with one exception: if you have hard edges, your normals will be close to perpendicular to one another and will create a little gap when expanding the model.
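A sketch of that normal-based outlining vertex shader (the layout locations, the camMatrix uniform, and the attribute names are assumptions here):

```glsl
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;

uniform mat4 camMatrix;     // combined projection * view matrix (assumed name)
uniform mat4 model;
uniform float outlining;    // now a small offset such as 0.08, not a scale factor

void main()
{
    // Push each vertex outward along its normal instead of scaling it from the origin
    vec3 crntPos = vec3(model * vec4(aPos + aNormal * outlining, 1.0));
    gl_Position = camMatrix * vec4(crntPos, 1.0);
}
```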

Therefore a third solution would be to simply have another model that's bigger than the first one. To be more specific, it has to be thicker, not necessarily bigger. You can achieve something like this using Blender's Solidify modifier. Here you can see I have the initial model, and the bigger version of it. Now I've already exported both as different models, and I only have to import them into my program like so. Then instead of drawing the object a second time, I simply draw the bigger version of it, this time not even needing a uniform to scale it. As you can see, this gives the best results, though at the cost of doubling the storage cost... You could of course write a function similar to Blender's Solidify modifier, but that would be slightly too complicated for this tutorial.

Today I'll show you what face culling is, and how it affects performance. We are also gonna measure this performance change by making an FPS counter.

So face culling is a step in the graphics pipeline that decides if a triangle will move on to the fragment shader (aka, if the triangle will be drawn or not). OpenGL decides this by seeing which side of the triangle is currently facing the camera. Generally speaking, in most 3D graphics programs, it is the front side of a triangle that is sent to the fragment shader, and the back side of a triangle that is discarded.

The way OpenGL figures out which side is which is by an index convention, which can either be clockwise or counter-clockwise. In a counter-clockwise framework, if the order of the indices of a triangle is counter-clockwise when facing us, then the side we see is the front side. Likewise, if the order of the indices of a triangle is clockwise when facing us, then the side we see is the back side. For a clockwise framework it's the exact opposite. Most graphics programs use a counter-clockwise standard, but don't expect all of them to use it.

Now in order to put all of this into code, we just have to enable face culling using glEnable with GL_CULL_FACE, specify which faces we want culled with glCullFace (99% of the time we want to keep the front faces), and then specify the winding standard we want to use with glFrontFace. Again I suggest using the counter-clockwise one since from what I've seen it's more common than the clockwise one.
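A minimal sketch of that setup; note that glCullFace names the faces to discard, so keeping the front faces means culling GL_BACK. If your model's indices are wound the other way, either cull GL_FRONT instead or switch the winding to GL_CW:

```cpp
glEnable(GL_CULL_FACE);   // enable the culling step
glCullFace(GL_BACK);      // discard back faces, i.e. keep the front faces
glFrontFace(GL_CCW);      // triangles wound counter-clockwise count as front-facing
```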

Now if we run the program, you'll notice that when we get inside an object, we won't be able to see its insides, since it contains the backs of the triangles, which get discarded. Therefore we only see the background.

So let's see if this makes any difference in performance. For that we'll need an FPS counter, which I'll display in the title of the window. Let's start by creating three doubles for the previous time, the current time, and the difference between the two. Then we also want an unsigned integer that will act as a counter for how many frames we get in a certain amount of time. Now, FPS is simply the number of frames you get in a second. So in order to get the FPS, we can count the number of frames we get in a second, a frame being one loop of our main while loop. But that would also mean our FPS only updates once a second. Instead, let's update it every 30th of a second, for example.

To do that we just need to get the current time in seconds using glfwGetTime, compute the time difference, and increment the counter. Then, if the difference is greater than or equal to a 30th of a second, we go ahead with the measurement of the FPS. The FPS is 1 divided by the time difference (the number of frames per second that interval would give for a single frame), multiplied by the counter, since the time difference actually spans multiple frames. Now we could stop here, but it's also useful to know how long a frame takes in milliseconds. To do that we simply divide the time difference by the counter, which gives us the number of seconds a frame takes, and then multiply that by 1000 to turn it into milliseconds. Then we simply put together the new title and apply it to the window using glfwSetWindowTitle. Finally we set the previous time to the current time in order to get the time difference back to 0, and also reset the counter to 0.
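Here's what that might look like in code (a sketch; the window title text is a placeholder and you'll need <string> for std::to_string):

```cpp
// FPS counter state (names are illustrative)
double prevTime = 0.0;
double crntTime = 0.0;
double timeDiff;
unsigned int counter = 0;

while (!glfwWindowShouldClose(window))
{
    crntTime = glfwGetTime();
    timeDiff = crntTime - prevTime;
    counter++;

    // Update the title roughly 30 times per second
    if (timeDiff >= 1.0 / 30.0)
    {
        // Frames per second and milliseconds per frame over the last interval
        std::string FPS = std::to_string((1.0 / timeDiff) * counter);
        std::string ms  = std::to_string((timeDiff / counter) * 1000.0);
        std::string newTitle = "OpenGL - " + FPS + " FPS / " + ms + " ms";
        glfwSetWindowTitle(window, newTitle.c_str());

        prevTime = crntTime;
        counter = 0;
    }

    // ... rest of the render loop ...
}
```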

Now if you start your program, you'll be able to see the amount of frames you have. If they are stuck on 60, then that means that you have VSync on, which tries to keep your FPS constant to 60 frames per second. If you wish to disable this, then write glfwSwapInterval(0) in your main function. Keep in mind that this will only be able to deactivate VSync if VSync is not forced by your graphics driver. In any case, I recommend keeping it at 60 frames per second. But if you don't want to do that, at least make sure that the functions that handle user inputs are put into an if statement that works periodically like this one, otherwise the responsiveness of your inputs will vary with your FPS, which you do not want.

And just to show you that face culling improves your FPS, here is a very high resolution model and the difference in FPS. The difference is not big here, but it's noticeable in the numbers. When you have multiple models and a lot of stuff happening, the difference will become even more noticeable.

In this tutorial I'll show you how to quickly get transparency turned on, and also how to make use of the blending feature in OpenGL.

So as you might have noticed, all the textures we've been using so far had 4 components: RED, GREEN, BLUE, and ALPHA (RGBA). The first three of those give color to our scene, while the last one controls the level of transparency different objects have. Now if you look at this grass model I have imported you'll notice there is no transparency to it. To enable that we simply have to create a new fragment shader which will be identical to our normal fragment shader, but we'll check if the alpha value is smaller than a certain threshold, and if it is, we'll discard that fragment. Don't forget to also make a new shader program for this new fragment shader. So if you run the program, you should see the grass is now as it should be.
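For reference, the discard check in the new fragment shader might look like this (a sketch; diffuse0 and texCoord are assumed sampler and varying names, and in practice the color would go through the usual lighting calculation):

```glsl
#version 330 core
out vec4 FragColor;
in vec2 texCoord;                 // UV coordinates from the vertex shader (assumed name)
uniform sampler2D diffuse0;       // the grass texture (assumed sampler name)

void main()
{
    vec4 texColor = texture(diffuse0, texCoord);
    if (texColor.a < 0.1)         // threshold: anything nearly transparent is dropped
        discard;
    FragColor = texColor;         // or the usual lighting calculation using texColor
}
```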

Now I'll just add a bunch of randomly placed transparent windows. Since the code here is not relevant to OpenGL I won't be explaining it. The shader I use for these is a very basic shader that just displays the textures without any lighting. So now, as you can see, there are a bunch of windows, but even though in my texture they are see-through, here they are not. In order to achieve a see-through effect we need blending.

Now for a bit of theory. This is the formula OpenGL uses for blending different colors together. All the C terms stand for color, while the T terms stand for transparency. In case you didn't know or forgot, an alpha level of 0 is fully transparent and an alpha level of 1 is fully opaque. The source color is the color in the fragment shader, while the destination color is the color in the color buffer. These transparency factors can have different formulas. The most common configuration is the one in which the source gets its factor from the alpha component of the source color, while the destination's factor is 1 minus the source's alpha. So now let's tell OpenGL we want to use this configuration using glBlendFunc, specifying the source and destination functions. Here is a list of some functions you might want to use for one reason or another. Then, if you want to, you can also use glBlendEquation with one of these arguments in order to specify how you want the previous colors to interact, basically changing the default equation. And lastly you can use the glBlendFuncSeparate function to choose how to treat the RGB channels and the alpha channel separately for both the source and destination. Keep in mind you can't specify a function for each RGB value, only for all of them together.
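In other words, the blended result is C_result = C_source · T_source + C_destination · T_destination. A minimal sketch of the common configuration described above:

```cpp
// Source factor = source alpha, destination factor = 1 - source alpha
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// GL_FUNC_ADD is already the default equation, shown here only for completeness
glBlendEquation(GL_FUNC_ADD);
```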

At this point we just have to enable blending using glEnable(GL_BLEND) right before our windows, and then disable it right after we are done drawing them so that we don't accidentally affect anything else. You should always do this with transparent objects. After that just compile and you should see that the windows are transparent. But there is one problem... the blending is all messed up. It just doesn't look right. And for that we have our friend the Depth Buffer to blame. Since the windows are drawn in a random order, windows that are drawn behind already existing ones don't get drawn at all since the depth test fails. So in order to draw all the windows we should draw the furthest one first, and the closest one last. Or you could of course be lazy about it and simply disable the depth buffer when drawing the windows, but that will not work in most circumstances, and I would not recommend it usually. Another option would be to sort the windows by their distance from the camera.

Now sorting has nothing to do with OpenGL, so I won't be explaining this part, and I actually encourage you to come up with your own solution for sorting the windows. The important thing is that I calculate all the distances from one window to the camera by subtracting the window position from the camera position and getting the length of that vector. Then based on that length I sort all my windows from the furthest to the closest and draw them. It's that simple! You just have to be a bit creative with the sorting part and how you store your objects. Just keep in mind that this method where you just draw the objects in order is not guaranteed to work 100% of the time. If different transparent objects intersect or do some weird stuff, you will get weird results. But other methods are pretty complicated so for now this will have to do.
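One possible sketch of such a sort, assuming the window positions are stored as glm::vec3s and drawWindow() is a hypothetical helper that draws one window (you'll need <algorithm> and <vector>):

```cpp
// Sort the windows back-to-front relative to the camera, then draw them in that order
std::vector<glm::vec3> sorted = windowPositions;
std::sort(sorted.begin(), sorted.end(),
    [&](const glm::vec3& a, const glm::vec3& b)
    {
        // The window furthest from the camera comes first
        return glm::length(camera.Position - a) > glm::length(camera.Position - b);
    });

for (const glm::vec3& pos : sorted)
    drawWindow(pos);   // hypothetical helper that draws one window at 'pos'
```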

In this tutorial I'll show you how to implement a custom framebuffer into your OpenGL application and how you can use the framebuffer to achieve post processing effects.

So first of all, what is a framebuffer? Well you can think of a framebuffer as a collection of multiple buffers that result in the final image you see on your screen. So it contains a color buffer, a depth buffer, and a stencil buffer. Now why would you want to use one? Well if we create our own framebuffer, then we can display it on a rectangle that covers the whole screen and then using shaders modify the pixels displayed on the rectangle to achieve different effects. This is called post-processing because you process the pixels after all the rendering has already been done.

Ok, now let's implement the framebuffer. Just like any OpenGL object, we create an unsigned int, we generate the framebuffer using glGenFramebuffers, and we bind it. That was it for the framebuffer. Now we need to add a color texture for it to be of any use. So we'll create a texture just like in the textures tutorial, making sure we clamp the texture to the edges since otherwise certain effects will bleed from one side of the screen to the other due to the default repetition of the texture. Then we simply attach the texture to the framebuffer using glFramebufferTexture2D.

Keep in mind we store our color in a texture, therefore we can access it from a shader, which we want to do. But in the case of the depth buffer, we don't really care about reading it in a shader for this tutorial, so instead of using a texture, we can use a Render Buffer Object which is much faster but has the disadvantage that you can't read it directly in a shader. We create it using glGenRenderbuffers, and then we configure its storage using glRenderbufferStorage, plugging in GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, and the width and height. We use GL_DEPTH24_STENCIL8 so that we can store both the stencil buffer and the depth buffer in it. Also be very careful that all the framebuffer's components have the same width and height, otherwise you might get an error. Then we just attach it to the framebuffer using glFramebufferRenderbuffer.
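In code, the whole framebuffer setup might look roughly like this (a sketch; width and height are your window dimensions):

```cpp
// Framebuffer with a color texture attachment and a depth+stencil renderbuffer
unsigned int FBO;
glGenFramebuffers(1, &FBO);
glBindFramebuffer(GL_FRAMEBUFFER, FBO);

// Color attachment: a texture we can later sample in the post-processing shader
unsigned int framebufferTexture;
glGenTextures(1, &framebufferTexture);
glBindTexture(GL_TEXTURE_2D, framebufferTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);   // prevent effects bleeding across edges
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, framebufferTexture, 0);

// Depth + stencil attachment: a renderbuffer, since we don't need to sample it
unsigned int RBO;
glGenRenderbuffers(1, &RBO);
glBindRenderbuffer(GL_RENDERBUFFER, RBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, RBO);
```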

And for a bit of error checking, just write this. Sadly the errors are not very specific, they just give you a number. Here are the meanings of the errors you can get.
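A minimal version of that check might be:

```cpp
// The returned value is an error enum such as GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT
GLenum fboStatus = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (fboStatus != GL_FRAMEBUFFER_COMPLETE)
    std::cout << "Framebuffer error: " << fboStatus << std::endl;
```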

Now that we have the framebuffer object, we can make our rectangle which will always cover the whole screen, and so we don't want to apply any sort of transformations to it. Now let's create two very basic shaders for the framebuffer, make them into a shader program, and then send the unit of our texture, 0 since it's the only texture in this shader.

Now let's handle the drawing part. First we make sure to bind the framebuffer before we draw anything, including the background. Make sure your buffers are cleared and that you have depth testing enabled after that. Then after we are done drawing everything in the scene, we want to switch back to our default framebuffer by binding 0, and draw the rectangle which displays the framebuffer we've just unbound. Just make sure to disable depth testing so that the rectangle doesn't fail the depth test.
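A sketch of the per-frame flow (rectVAO, framebufferProgram, and framebufferTexture are assumed names from the setup above):

```cpp
// Render the scene into our own framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, FBO);
glClearColor(0.07f, 0.13f, 0.17f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);

// ... draw the whole scene here ...

// Switch back to the default framebuffer and draw the fullscreen rectangle
glBindFramebuffer(GL_FRAMEBUFFER, 0);
framebufferProgram.Activate();
glBindVertexArray(rectVAO);
glDisable(GL_DEPTH_TEST);                       // the rectangle must not fail the depth test
glBindTexture(GL_TEXTURE_2D, framebufferTexture);
glDrawArrays(GL_TRIANGLES, 0, 6);               // two triangles covering the screen
```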

Now if you run the program you should be seeing whatever you were seeing before without any difference. If your screen is a flat color, then first check that you don't have any errors in your cmd-like window, then make sure your face culling isn't getting rid of the rectangle (so either change the order of the vertices, or disable culling when drawing the rectangle). For any other errors check the source code in the description and look through your code very carefully, there are a lot of things that can go wrong in this case.

Ok, now let's make this interesting. In the framebuffer fragment shader we can do all kinds of interesting effects. Some easy ones would be inverting the colors like so, or making the image black and white. But these are not that interesting. To get more interesting results, we want to sample multiple pixels when we choose the color of a single pixel. For that we first want to declare an array of vec2s which will represent the offset from the pixel we are on to its 8 neighbours. Notice how I divide 1 by 800 to get the width and height of one pixel for my 800 by 800 window. After that we want to create a float array which will represent something called a kernel. It's basically a matrix that helps us achieve cool effects by sort of defining how important each pixel is in relation to the pixel we are currently on, that being the one in the middle. Generally speaking you probably want them to be somewhat symmetrical and always add up to 1. If they add up to more than 1 the final color will be brighter, and if they are under 1, the final color will be darker. Here though they add up to 0, but that is intentional since this is an edge detection kernel and I want everything to be really dark, except the edges of things.

Now all we have to do is to multiply each part of the kernel with each pixel at a certain offset and add up all of them in this vec3 named color. Just keep in mind that even though these look like matrices, we don't actually multiply them the same way we do matrices. Then the last step is to output color as the FragColor, and we're done! I think this kernel gives a pretty cool effect. Here are some other kernels you might want to try out. I suggest you just play around with this a bit and make your own kernels so that you get a feeling as to how they work.
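Putting the pieces together, the post-processing fragment shader might look roughly like this (a sketch; screenTexture and texCoords are assumed names, and the window is assumed to be 800 by 800):

```glsl
#version 330 core
out vec4 FragColor;
in vec2 texCoords;
uniform sampler2D screenTexture;

void main()
{
    // Offsets from the current pixel to its 8 neighbours (one pixel = 1/800 of the screen)
    float offset_x = 1.0 / 800.0;
    float offset_y = 1.0 / 800.0;
    vec2 offsets[9] = vec2[]
    (
        vec2(-offset_x,  offset_y), vec2(0.0,  offset_y), vec2(offset_x,  offset_y),
        vec2(-offset_x,  0.0),      vec2(0.0,  0.0),      vec2(offset_x,  0.0),
        vec2(-offset_x, -offset_y), vec2(0.0, -offset_y), vec2(offset_x, -offset_y)
    );

    // Edge-detection kernel: the weights sum to 0, so flat areas go dark and edges stay bright
    float kernel[9] = float[]
    (
        1.0,  1.0, 1.0,
        1.0, -8.0, 1.0,
        1.0,  1.0, 1.0
    );

    // Weighted sum of the neighbouring pixels
    vec3 color = vec3(0.0);
    for (int i = 0; i < 9; i++)
        color += vec3(texture(screenTexture, texCoords + offsets[i])) * kernel[i];

    FragColor = vec4(color, 1.0);
}
```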

In this tutorial I'll show you what cubemaps are in OpenGL, and how you can use them to create skyboxes.

So what are cubemaps? Well they are simply another type of texture, which holds six 2D textures, one for each side of a cube. When sampling a cubemap, you specify a 3D vector instead of a 2D one. This allows you to easily sample between all six sides of the cube. And since the coordinates of the cube correspond to the sampling vectors, there is no need for UVs. The most common uses for cubemaps are quadsphere texturing and skyboxes.

So now let's code it in. The first thing you need to do is to write out the cube vertices and indices. Then we'll want to create a VAO, VBO, and EBO just like in the first tutorials.

Now let's create an array with 6 strings that will hold the paths to the 6 images we'll be using for the skybox. Then we want to create the cubemap texture itself just like any other texture, except that where we put GL_TEXTURE_2D before, we now put GL_TEXTURE_CUBE_MAP. Make sure to clamp the texture in all three directions since the texture is a cube, therefore 3D. This clamping should prevent any seams from showing up.

And now we'll go over all six textures and read them using the stb library, putting them in the cube texture once we read them. Notice how I disable the vertical flipping. This is because unlike most textures in OpenGL, cubemaps are expected to start in the top left corner, not the bottom left corner. Also notice how I add i to GL_TEXTURE_CUBE_MAP_POSITIVE_X. This represents the side of the cube I am currently assigning a texture to, and I am adding i to it in order to cycle through all the sides. Here is the order of the sides which you can find in the OpenGL docs, and here is the order we wrote the paths in. Notice something weird? Well, normally in OpenGL, the front is in the negative Z direction, but for cubemaps, the front is in the positive Z direction. That means that cubemaps work in a left-handed system, while most of OpenGL works in a right-handed system. This can be very confusing, and I honestly have no idea why they chose to do this, but oh well. Keep in mind you will likely get small bugs because of this if you are not careful. In my case, my right texture kept being displayed upside down for some reason. To fix that I just flipped the texture in an image editor. So be prepared for this sort of stuff since from what I've heard it can happen pretty often with skyboxes.
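As a recap, the creation and loading steps in code might look roughly like this (a sketch; facesCubemap is the array of six paths mentioned earlier):

```cpp
// Cubemap texture creation: clamp in all three directions to avoid visible seams
unsigned int cubemapTexture;
glGenTextures(1, &cubemapTexture);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapTexture);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

for (unsigned int i = 0; i < 6; i++)
{
    int width, height, nrChannels;
    stbi_set_flip_vertically_on_load(false);   // cubemaps start in the top-left corner
    unsigned char* data = stbi_load(facesCubemap[i].c_str(), &width, &height, &nrChannels, 0);
    if (data)
    {
        // GL_TEXTURE_CUBE_MAP_POSITIVE_X + i cycles through +X, -X, +Y, -Y, +Z, -Z
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB, width, height,
                     0, GL_RGB, GL_UNSIGNED_BYTE, data);
    }
    stbi_image_free(data);
}
```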

Ok, back to the tutorial. We'll now need to create two shaders for the skybox. The vertex shader will take in the coordinates, output texture coordinates, and also take in uniforms for matrix transformations. In the main function create a vec4 which holds the final transformed coordinates. Since these coordinates are now in clip space, we'll do something a bit weird. Instead of feeding gl_Position the coordinates as they are, we'll give it the x, y, w, and again w components. This will result in the z component always being 1 after the perspective division. And since the depth buffer takes the z component as its depth value, the skybox will always have a depth value of 1, so the furthest depth value possible, thus being behind all objects, as it should be. And finally we want to export the texture coordinates as the positions, except we'll flip the z axis to combat the coordinate system change.
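A sketch of that skybox vertex shader:

```glsl
#version 330 core
layout (location = 0) in vec3 aPos;
out vec3 texCoords;
uniform mat4 projection;
uniform mat4 view;

void main()
{
    vec4 pos = projection * view * vec4(aPos, 1.0);
    // Using w for z forces the depth value to 1.0 after the perspective division
    gl_Position = vec4(pos.x, pos.y, pos.w, pos.w);
    // Flip the z axis because cubemaps use a left-handed coordinate system
    texCoords = vec3(aPos.x, aPos.y, -aPos.z);
}
```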

For the fragment shader we just want to import the texture coordinates, the cubemap, and then set the fragColor to equal the texture. That's it.

Back in the main function I'll create a shader program and export the skybox texture unit. All that's left now is the drawing part.

We'll start by setting the depth function to GL_LEQUAL instead of the default GL_LESS since our skybox sits exactly at the maximum depth of 1, so we need that equal sign. Then we'll activate the shader and create the view and projection matrices, which are identical to the ones we created in the camera class, except for one small detail. For the view matrix, we downgrade it to a mat3, and then we scale it back up to a mat4. This throws away the translation part of the matrix (the last row and column become zeros, with a 1 in the corner), so translations have no effect: we only want the skybox to rotate with the camera, not move around. Then simply export these matrices to the vertex shader.
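In code, that might look roughly like this (a sketch; the camera members and skyboxShader are assumed names, and glm::value_ptr needs <glm/gtc/type_ptr.hpp>):

```cpp
glDepthFunc(GL_LEQUAL);
skyboxShader.Activate();

// Strip the translation from the view matrix so the skybox only rotates with the camera
glm::mat4 view = glm::mat4(glm::mat3(glm::lookAt(camera.Position,
                                                 camera.Position + camera.Orientation,
                                                 camera.Up)));
glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                        (float)width / height, 0.1f, 100.0f);

glUniformMatrix4fv(glGetUniformLocation(skyboxShader.ID, "view"), 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(glGetUniformLocation(skyboxShader.ID, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
```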

Now the final part is to draw the cubemap itself just like any other object, except that when we bind the texture, we use GL_TEXTURE_CUBE_MAP instead of the usual GL_TEXTURE_2D. Don't forget to switch the depth testing back to its default GL_LESS after you are done with the skybox.

If you boot up your program, you should now have a nice skybox all around you. If any of the faces are inverted or something like that, play around with the orientation of your images in an image editor. Trust me, it's a lot easier to just do that rather than trying to find a little logic bug caused by the two different coordinate systems, and the different texture reading origins. If you have seams that are clearly lines, then you can try adding glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS) somewhere in your code. If your seams are not lines, but simply clear differences in color between one face and another, then you probably just have a shitty skybox. If you can't see anything or only parts of the skybox, then you probably have back face culling enabled and wrote the indices in the wrong winding order. Simply disable face culling when drawing the skybox, or write the indices in the correct winding order.

In this tutorial I will show you what the geometry shader is and how you can use it to create things such as visible normals.

So far we've only used the vertex shader, and fragment shader, which suffice in most situations. But sometimes between the vertex and fragment shaders you want to have an extra step to modify the geometry of your meshes. Even though it might seem like you can do that in the vertex shader, you can only do things to individual vertices. If you wanted to modify a whole triangle, so a group of vertices, then you would need to use the geometry shader. The second advantage of the geometry shader is that it can switch between different types of primitives, and so can create or delete vertices.

So let's begin by adding the geometry shader to our shader class. Notice how I am doing the exact same thing as for the other two shaders, except that I use GL_GEOMETRY_SHADER. Now that we support custom geometry shaders, let's create a geometry shader that does nothing.

Just like any other shader we'll begin with the version. Then we need two layouts written like so. The first layout signifies what type of primitive we receive, which can be one of the following, while the second layout shows what type of primitive we are outputting, which can be one of the following. In this case we want to receive a triangle and export a triangle. Then we have our outputs to the fragment shader. Keep in mind you should pass data from the vertex shader to the geometry shader and then to the fragment shader.

Now for importing data into the geometry shader, we need to do something a bit different. Instead of simply having an "in", we'll have a sort of C structure written out like so. Note that we don't have to include the position in this because it is already built into a default structure just like this one called gl_in. Now we need to go back to the vertex shader and replace all the outgoing data with the exact same structure except for this last part and "out" instead of "in". Make sure that everything else from the structure is identical to its counterpart in the geometry shader. Notice how I also included the projection matrix. That is because we only want to apply the projection matrix AFTER we modify our geometry. Now to assign data to these outgoing values we simply write the name we gave them plus a dot and the name of the variable we want to assign data to. Very similar to a C or C++ structure.

Now in the geometry shader we have all the data we need, so all that's left to do is to assemble this data together. To do that we simply assign the position, normal, color, and texture coordinates their data. Notice how here I also have an index besides the name of the part of the structure I want to access. That's because we are in the geometry shader and thus we essentially have an array of such structures, each with different values for a specific vertex. Once we are done assigning the values of a vertex, we must use EmitVertex() to declare that we are done with this vertex. Now we can do the same thing for the other two vertices, and once we are done with all three vertices we need for a triangle, we declare that our primitive is complete using EndPrimitive(). And that was it for the default geometry shader. If you run your program you should have exactly what you had before, amazing!
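For reference, a do-nothing geometry shader along these lines might look like this (a sketch; the DATA interface block must match the output block declared in your vertex shader, and the projection matrix is passed through it as described above):

```glsl
#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

// Outputs forwarded unchanged to the fragment shader
out vec3 Normal;
out vec3 color;
out vec2 texCoord;

// Must match the "out DATA { ... }" block in the vertex shader
in DATA
{
    vec3 Normal;
    vec3 color;
    vec2 texCoord;
    mat4 projection;
} data_in[];

void main()
{
    for (int i = 0; i < 3; i++)
    {
        // Apply the projection only now, after any geometry modification would happen
        gl_Position = data_in[i].projection * gl_in[i].gl_Position;
        Normal   = data_in[i].Normal;
        color    = data_in[i].color;
        texCoord = data_in[i].texCoord;
        EmitVertex();
    }
    EndPrimitive();
}
```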

But that's really boring, so let's change things up a bit and make the geometry shader more fun. Something that is extremely easy to do is to calculate the surface normal using a cross product and then add that normal to the three positions of the primitive. This will make your meshes look as if all the primitives exploded or glitched out.

Now let's do something a bit more serious. Let's create a new shader which will take our default vertex shader, a new fragment shader, and a new geometry shader. And let's draw our model using this shader after we draw it using the default shader. Now for the fragment shader we'll simply output a flat color, nothing fancy.

But for the geometry shader, we'll get a triangle, and output not a triangle, but multiple lines! So now we are actually destroying and creating new geometry since as you can see we'll output 6 vertices even though we only got 3. Then we'll use the exact same structure, and in the main function we'll simply retrieve the first vertex position and declare it. But then instead of moving to the next vertex, we'll remain on this one but add to it the normal of the vertex and then emit it. This essentially creates a line across the normal of the vertex. Now we want to do the same for the other vertices, but in order to not link all the lines together we must first end this line by using EndPrimitive and then move on. After doing this for all the vertices we can press play and see that we are now able to visualize all the normals of our model which can be pretty useful for debugging purposes.
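A sketch of that normal-visualizing geometry shader (the 0.01 scale on the normal is just a placeholder length):

```glsl
#version 330 core
layout (triangles) in;
layout (line_strip, max_vertices = 6) out;   // 3 vertices in, 6 out: one line per vertex

in DATA
{
    vec3 Normal;
    vec3 color;
    vec2 texCoord;
    mat4 projection;
} data_in[];

void main()
{
    for (int i = 0; i < 3; i++)
    {
        // Line start: the vertex itself
        gl_Position = data_in[i].projection * gl_in[i].gl_Position;
        EmitVertex();
        // Line end: the vertex pushed a little way along its normal
        gl_Position = data_in[i].projection *
                      (gl_in[i].gl_Position + 0.01 * vec4(data_in[i].Normal, 0.0));
        EmitVertex();
        EndPrimitive();   // finish this line before moving on, so the lines aren't linked
    }
}
```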

Keep in mind that geometry shaders can also be very useful for things such as grass or subdividing models. So they are actually pretty interesting.

In this tutorial I'll show you what Instancing is and how you can use it to massively improve the performance and looks of your OpenGL project.

So instancing is simply a feature that allows you to draw a mesh multiple times in a single draw call. Now why would you want this? Well consider these scenarios where I have a belt of asteroids all made out of a single asteroid mesh that is deformed in the vertex shader to give some variety. In the scenario on the left I simply have a loop that draws each asteroid individually. So that means that each asteroid has a draw call. Now in the scenario on the right I draw all the asteroids together. So that means that I only have a single draw call. If you look at the performance difference, you'll see it is massive.

Now let's code it in. I'll start with the code already written for the first scenario since it's nothing new for this series. So in order to enable instancing, all we have to do is use glDrawElementsInstanced instead of glDrawElements, and add at the end of it how many instances of the mesh we want. The only problem is that this will draw all the meshes in the exact same positions, so it is useless.
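For reference, the call is just glDrawElements with an instance count appended (sketch):

```cpp
// Same arguments as glDrawElements, plus the number of instances to render at once
glDrawElementsInstanced(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, 0, instances);
```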

There are multiple ways you could move each mesh to a unique position though. You could have code that does that in the vertex shader for example. Using gl_InstanceID you'll get the index of the instance you are currently drawing, and so you can use that for controlled random number generation. Alternatively you could have a uniform with all the transformations and retrieve the correct transformation for a specific instance using gl_InstanceID. But the problem with this is that uniforms can't store that much data. So the best way to have a lot of transformations and not have the generation inside the vertex shader is to store the transformations inside a vertex buffer that is attached to the VAO of the mesh.

So let's start by creating a VBO constructor which takes a vector of mat4s. Then we want to add an unsigned integer public variable to the Mesh class that will signify the amount of instances we desire. And of course we also need to add it to the constructor to easily change it. Also in the constructor we should add the vector of matrix transformations for the instances so that we can plug it into the vertex buffer. Now in the Mesh.cpp file we want to create a VBO for the instances and then link its attributes to the VAO only if we are drawing more than one mesh. Make sure to link the matrix as 4 different vec4s since otherwise your program won't work. And at the end use glVertexAttribDivisor plugging in the layout number of each vec4 and 1. This 1 means that the vec4 will be used for a whole instance. If it were 0, then the vec4 would be used for a vertex, and then the next vec4 would be used for the next vertex, which we definitely don't want. And just to make it clear, if it were 2 it would be used for two instances before switching to the next vec4. Now let's also limit the use of glDrawElementsInstanced only for multiple instances.
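A sketch of that linking step inside the Mesh constructor (the layout numbers 4 through 7 and the VBO constructor taking a std::vector<glm::mat4> are assumptions based on the description above; the mesh's VAO is assumed to already be bound):

```cpp
if (instancing != 1)
{
    VBO instanceVBO(instanceMatrix);   // hypothetical VBO constructor for std::vector<glm::mat4>
    instanceVBO.Bind();

    // A mat4 occupies four consecutive vec4 attribute slots (layouts 4-7 here)
    glEnableVertexAttribArray(4);
    glVertexAttribPointer(4, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4), (void*)0);
    glEnableVertexAttribArray(5);
    glVertexAttribPointer(5, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4), (void*)(1 * sizeof(glm::vec4)));
    glEnableVertexAttribArray(6);
    glVertexAttribPointer(6, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4), (void*)(2 * sizeof(glm::vec4)));
    glEnableVertexAttribArray(7);
    glVertexAttribPointer(7, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4), (void*)(3 * sizeof(glm::vec4)));

    // Advance these attributes once per instance instead of once per vertex
    glVertexAttribDivisor(4, 1);
    glVertexAttribDivisor(5, 1);
    glVertexAttribDivisor(6, 1);
    glVertexAttribDivisor(7, 1);
}
```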

For the Model class we need to do the exact same thing as for the Mesh class where we add instanceMatrix to the constructor and as a variable, and instancing as a variable yet again. Now in our loadMesh function we want to include the instancing and instanceMatrix in the Mesh creation process.

Now we need a slightly modified shader for instance drawing to accommodate for the new layout. So create a new shader program specifically for the asteroids using a special vertex shader and the default fragment shader. The vertex shader will be identical to the default one except we'll have a 5th layout for the instanceMatrix, we'll delete the uniforms we don't need, and replace all transformations with just the instanceMatrix one. Don't forget to export the lighting uniforms in the main function as well.

Then all that's left to do is to merge the translation, rotation, and scale matrices together and add all of them to the instanceMatrix vector. Then simply add that and the instancing number to the Model constructor of the asteroids, and draw the asteroids once. Remember that we only need to call the draw function once. Now compile and witness thousands upon thousands of asteroids with a minimal performance impact. Of course keep in mind that the performance will differ from one GPU to another, so it might not work as well on yours as it would on someone else's.

In this tutorial I'll show you what Anti-Aliasing is and how you can implement it in your OpenGL project.

So you might have noticed that while horizontal and vertical edges look extremely crisp, diagonal edges tend to look a bit choppy, like a flight of stairs. This is due to how we display images on our screens. Since our displays are made out of a bunch of tiny squares, aka pixels, it is impossible to have a smooth line on the diagonal. But thankfully, we can fake smoothness by bleeding the color of an edge into the adjacent pixels like so.

Now, these jagged edges are called Aliasing, and an Anti-Aliasing technique is what helps us get better edges. There are multiple techniques for Anti-Aliasing, each with their advantages and disadvantages, but today I'll focus on MSAA, which stands for Multi Sampling Anti-Aliasing. So what does this Multi Sampling refer to?

Well in the rasterization part of the pipeline primitives are filled in. The way it is decided which pixels should be given a color and which should not, is by checking if the sample point of a pixel, which is normally in the center of it, is inside the shape of the primitive. This means that if the sample point is even just slightly outside the triangle, it won't get sampled, even though you would think it should at least do so partially. Well, that is where MSAA comes in. As you might have guessed, this technique simply adds multiple sampling points so that a more accurate result can be reached.

Here for example, two out of the 4 sampling points are inside the triangle, and so the color of that pixel will be somewhere between the color of the background and the color of the primitive.

Now let's actually implement this. Let's start off by creating a variable where we specify how many samples we desire. Now, if you don't have a framebuffer, then you can just give a window hint to GLFW saying you want GLFW_SAMPLES and then the number of samples you want, and then activate GL_MULTISAMPLE. That was it for this tutorial, as... nah I'm joking. But for real though, if you don't have a framebuffer that's all you have to do, you are done. If you do have a framebuffer, then you'll want to delete the GLFW part. Instead you need to go to your framebuffer and replace all GL_TEXTURE_2D with GL_TEXTURE_2D_MULTISAMPLE. Then replace glTexImage2D with glTexImage2DMultisample, plugging in the type of texture, the number of samples, the color format, the width, the height, and whether or not you want all samples to be in the exact same position in the pixels.

Then for the renderbuffer object we need to change from glRenderbufferStorage to glRenderbufferStorageMultisample and add the number of samples we want.
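Together, the changed lines might look roughly like this (a sketch; framebufferTexture, RBO, width, and height are the names from the framebuffer tutorial):

```cpp
unsigned int samples = 8;   // e.g. 8x MSAA

// Multisampled color attachment
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, framebufferTexture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGB, width, height, GL_TRUE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, framebufferTexture, 0);

// Multisampled depth + stencil attachment
glBindRenderbuffer(GL_RENDERBUFFER, RBO);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH24_STENCIL8, width, height);
```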

Now the problem is that we can't do any sort of post-processing on this framebuffer anymore since it has multisampling enabled. So to get around that we'll need a normal framebuffer which we can post-process. This is just like the one I made in the framebuffer tutorial.

Now in the main function we want to make sure we first bind the multisampling FBO, clear the screen, clear the buffers, and enable depth testing. Then we draw everything we want to draw. After that we bind the multisampling FBO as read only, and the postprocessing FBO as draw only. Now using glBlitFramebuffer we'll resolve all the multisampling and copy the result onto the post-processing FBO.
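A sketch of that resolve step (multisampleFBO and postProcessingFBO are assumed names for the two framebuffers):

```cpp
// Resolve the multisampled image into the regular post-processing framebuffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, multisampleFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, postProcessingFBO);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
```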

Now make sure you bind the default framebuffer and draw the framebuffer rectangle using the post-processing texture. Run the program and you'll see that the edges of primitives are a lot smoother and nicer. Just be aware that applying kernels in the post-processing will essentially overwrite the anti-aliasing, so you may end up with aliasing again. As for the number of samples you should use, I suggest either 2, 4, or 8. You can go up to 16 and 32 on some GPUs, but the improvement is not worth the performance cost.

That was it for this tutorial, as always the source code and all resources used are in the description. Bye!