A Beginner's Guide to Coding Graphics Shaders: Part 3

Having mastered the basics of shaders, we take a hands-on approach to harnessing the power of the GPU to create realistic, dynamic lighting.

The first part of this series covered the fundamentals of graphics shaders. The second part explained the general procedure of setting up shaders to serve as a reference for whatever platform you choose. From here on out, we'll be tackling general concepts on graphics shaders without assuming a specific platform. (For convenience's sake, all code examples will still be using JavaScript/WebGL.)

Before going any further, make sure that you have a way to run shaders that you're comfortable with. (JavaScript/WebGL might be easiest, but I encourage you to try following along on your favorite platform!) 

Goals

By the end of this tutorial, you will not only be able to boast a solid understanding of lighting systems, but you'll have built one yourself from scratch. 

Here's what the final result looks like (click to toggle the lights):

You can fork and edit this on CodePen.

While many game engines do offer ready-made lighting systems, understanding how they're made and how to create your own gives you a lot more flexibility in creating a unique look that fits your game. Shader effects don't have to be purely cosmetic, either; they can open doors to fascinating new game mechanics!

Chroma is a great example of this; the player character can run along the dynamic shadows created in real-time:

Getting Started: Our Initial Scene

We're going to skip a lot of the initial setup, since this is what the previous tutorial was exclusively about. We'll start with a simple fragment shader rendering our texture:

You can fork and edit this on CodePen.

Nothing too fancy is happening here. Our JavaScript code is setting up our scene and sending the texture to render, along with our screen dimensions, to the shader.

In our GLSL code, we declare and use these uniforms:
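
(Something along these lines; the texture uniform's name here is illustrative and may differ in your setup.)

    uniform vec2 res;          // the screen's width and height, sent from JavaScript
    uniform sampler2D texture; // our render texture

    void main() {
        // normalize the pixel coordinates to the 0..1 range before sampling
        vec2 pixel = gl_FragCoord.xy / res.xy;
        gl_FragColor = texture2D(texture, pixel);
    }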

We make sure to normalize our pixel coordinates before we use them to draw the texture. 

Just to make sure you understand everything that's going on here, here's a warm-up challenge:

Challenge: Can you render the texture while keeping its aspect ratio intact? (Have a go at this yourself; we'll walk through the solution below.)

It should be fairly obvious why it's being stretched, but here are some hints: Look at the line where we normalize our coordinates:
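
    vec2 pixel = gl_FragCoord.xy / res.xy;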

We're dividing a vec2 by a vec2, which is the same as dividing each component individually. In other words, the above is equivalent to:
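
    vec2 pixel;
    pixel.x = gl_FragCoord.x / res.x;
    pixel.y = gl_FragCoord.y / res.y;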

We're dividing our x and y by different numbers (the width and height of the screen), so it will naturally be stretched out. 

What would happen if we divided both the x and y of gl_FragCoord by just res.x? What about just res.y instead?

For simplicity's sake, we're going to keep our normalizing code as-is for the rest of the tutorial, but it's good to understand what's going on here!

Step 1: Adding a Light Source

Before we can do anything fancy, we need to have a light source. A "light source" is nothing more than a point we send to our shader. We'll construct a new uniform for this point:
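
    uniform vec3 light;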

We created a vector with three dimensions because we want to use the x and y as the position of the light on screen, and the z as the radius.

Let's set some values for our light source in JavaScript:
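
(A sketch, assuming the uniforms object from the setup in the previous tutorial.)

    uniforms.light = {type: 'v3', value: new THREE.Vector3()};
    uniforms.light.value.z = 0.2; // the radius, as a fraction of the screen width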

We intend to use the radius as a percentage of the screen dimensions, so 0.2 would be 20% of our screen. (There's nothing special about this choice. We could have set this to a size in pixels. This number doesn't mean anything until we do something with it in our GLSL code.)

To get the mouse position in JavaScript, we just add an event listener:
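
    document.onmousemove = function(event) {
        // update the light's position with the mouse's screen coordinates
        uniforms.light.value.x = event.clientX;
        uniforms.light.value.y = event.clientY;
    };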

Now let's write some shader code to make use of this light point. We'll start with a simple task: We want every pixel within our light range to be visible, and everything else should be black.

Translating this into GLSL might look something like this:
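
(A sketch, using the same uniform names as before.)

    uniform vec2 res;          // screen dimensions
    uniform sampler2D texture; // our render texture (illustrative name)
    uniform vec3 light;        // x, y: position in pixels; z: radius as a fraction of screen width

    void main() {
        vec2 pixel = gl_FragCoord.xy / res.xy;
        vec4 color = texture2D(texture, pixel);

        // distance, in pixels, from this pixel to the light source
        float dist = distance(light.xy, gl_FragCoord.xy);

        if (dist < light.z * res.x) {
            gl_FragColor = color; // within range: show the texture
        } else {
            gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // outside: black
        }
    }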

All we've done here is:

  • Declared our light uniform variable.
  • Used the built-in distance function to calculate the distance between the light position and the current pixel's position.
  • Checked whether this distance (in pixels) is less than 20% of the screen width; if so, we return the color of that pixel, otherwise we return black.

You can fork and edit this on CodePen.

Uh oh! Something seems off with how the light is following the mouse.

Challenge: Can you fix that? (Again, have a go yourself before we walk through it below.)

Fixing the Light's Movement

You might remember from the first tutorial in this series that the y-axis here is flipped. You might be tempted to just do:
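
    light.y = res.y - light.y; // flip the y-axis... but this won't compile!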

Which is mathematically sound, but if you did that, your shader wouldn't compile! The problem is that uniform variables cannot be changed. To see why, remember that this code runs for every single pixel in parallel. Imagine all those processor cores trying to change a single variable at the same time. Not good! 

We can fix this by creating a new variable instead of trying to edit our uniform. Or better yet, we can simply do this step before passing it to the shader:
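
(A sketch; this assumes the canvas fills the window, as in our setup.)

    document.onmousemove = function(event) {
        uniforms.light.value.x = event.clientX;
        // flip the y-coordinate here, in JavaScript, before it reaches the shader
        uniforms.light.value.y = window.innerHeight - event.clientY;
    };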

You can fork and edit this on CodePen.

We've now successfully defined the visible range of our scene. It looks very sharp, though....

Adding a Gradient

Instead of simply cutting to black when we're outside the range, we can try to create a smooth gradient towards the edges. We can do this by using the distance that we're already calculating. 

Instead of setting all pixels inside the visible range to the texture's color, like so:
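
    gl_FragColor = color;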

We can multiply that by a factor of the distance:
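
    gl_FragColor = color * (1.0 - dist / (light.z * res.x));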

You can fork and edit this on CodePen.

This works because dist is the distance in pixels between the current pixel and the light source. The term  (light.z * res.x) is the radius length. So when we're looking at the pixel exactly at the light source, dist is 0, so we end up multiplying color by 1, which is the full color.

In this diagram, dist is calculated for some arbitrary pixel. dist is different depending on which pixel we're at, while light.z * res.x is constant.

When we look at a pixel at the edge of the circle, dist is equal to the radius length, so we end up multiplying color by 0, which is black. 

Step 2: Adding Depth

So far we haven't done much more than make a gradient mask for our texture. Everything still looks flat. To understand how to fix this, let's see what our lighting system is doing right now, as opposed to what it's supposed to do.

In the above scenario, you would expect A to be the most lit, since our light source is directly overhead, with B and C being dark, since almost no light rays are actually hitting the sides. 

However, this is what our current light system sees:

They're all treated equally, because the only factor we're taking into account is distance on the xy plane. Now, you might think that all we need is the height of each of those points, but that's not quite right. To see why, consider this scenario:

A is the top of our block, and B and C are the sides of it. D is another patch of ground nearby. We can see that A and D should be the brightest, with D a little darker than A because the light reaches it at an angle. B and C, on the other hand, should be very dark, because almost no light reaches them; they're facing away from the light source. 

It's not the height so much as the direction the surface is facing that we need. This is called the surface normal.

But how do we pass this information to the shader?  We can't possibly send a giant array of thousands of numbers for every single pixel, can we?  Actually, we're already doing that! Except we don't call it an array, we call it a texture. 

This is exactly what a normal map is; it's just an image where the r, g and b values of each pixel represent a direction instead of a color. 

Example normal map

Above is a simple normal map. If we use a color picker, we can see that the default, "flat" direction is represented by the color (0.5, 0.5, 1) (the blue color that takes up the majority of the image). This is the direction that's pointing straight up. The x, y and z values are mapped to the r, g and b values.

The slanted side on the right is pointing to the right, so its x value is higher; the x value is also its red value, which is why it looks more reddish/pinkish. The same applies for all the other sides. 

It looks funny because it's not meant to be rendered; it's made purely to encode the values of these surface normals. 

So let's load this simple normal map to test with:
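
(A sketch using Three.js; the path is a placeholder for wherever you've put the test normal map, and the variable name is illustrative.)

    var normalTexture = new THREE.TextureLoader().load("normal_test.jpg");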

And add it as one of our uniform variables:
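
In JavaScript (again assuming the uniforms object from our setup; "norm" is just an illustrative name):

    uniforms.norm = {type: 't', value: normalTexture};

And the matching declaration in our GLSL:

    uniform sampler2D norm;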

To test that we've loaded it correctly, let's try rendering it instead of our texture by editing our GLSL code (remember, we're just using it as a background texture, rather than a normal map, at this point):
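
    void main() {
        vec2 pixel = gl_FragCoord.xy / res.xy;
        gl_FragColor = texture2D(norm, pixel); // draw the normal map instead of our texture
    }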

You can fork and edit this on CodePen.

Step 3: Applying a Lighting Model

Now that we have our surface normal data, we need to implement a lighting model. In other words, we need to tell our shader how to take all the factors we have into account when calculating each pixel's final brightness. 

The Phong model is the simplest one we can implement. Here's how it works: Given a surface with normal data like this:

We simply calculate the angle between the light source and the surface normal:

The smaller this angle, the brighter the pixel. 

This means that pixels directly underneath the light source, where the angle difference is 0, will be the brightest. The darkest pixels will be those pointing in the same direction as the light ray (that would be like the underside of the object).

Now let's implement this. 

Since we're using a simple normal map to test with, let's set our texture to a solid color so that we can easily tell whether it's working. 

So, instead of:
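
    vec4 color = texture2D(texture, pixel);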

Let's make it a solid white (or any color you like really):
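
    vec4 color = vec4(1.0); // solid white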

This is GLSL shorthand for creating a vec4 with all components equal to 1.0.

Here's what our algorithm looks like:

  1. Get the normal vector at this pixel.
  2. Get the light direction vector.
  3. Normalize our vectors.
  4. Calculate the angle between them.
  5. Multiply the final color by this factor.

1. Get the Normal Vector at This Pixel

We need to know what direction the surface is facing so we can calculate how much light should reach this pixel. This direction is stored in our normal map, so getting our normal vector just means getting the current pixel color of the normal texture:
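
    vec3 normal = texture2D(norm, pixel).xyz;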

Since the alpha value doesn't represent anything in the normal map, we only need the first three components. 

2. Get the Light Direction Vector

Now we need to know in which direction our light is pointing. We can imagine our light source is a flashlight held in front of the screen, at our mouse's location, so we can calculate the light direction vector by just taking the difference between the light source's position and the pixel's position:
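
(The variable name is illustrative; we'll talk about that 60.0 in a moment.)

    vec3 lightDir = vec3(light.xy - gl_FragCoord.xy, 60.0);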

It needs to have a z-coordinate as well (in order to be able to calculate the angle against the 3-dimensional surface normal vector). You can play around with this value. You'll find that the smaller it is, the sharper the contrast is between the bright and dark areas. You can think of this as the height you're holding your flashlight above the scene; the further away it is, the more evenly light is distributed.

3. Normalize Our Vectors

Now to normalize:
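
    normal = normalize(normal);
    lightDir = normalize(lightDir);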

We use the built-in function normalize to make sure both of our vectors have a length of 1.0. We need to do this because we're about to calculate the angle using the dot product. If you're a little fuzzy on how this works, you might want to brush up on some of your linear algebra. For our purposes, you only need to know that the dot product of two unit vectors (vectors with a length of 1) returns the cosine of the angle between them.

4. Calculate the Angle Between Our Vectors

Let's go ahead and do that with the built-in dot function:
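
    float diffuse = dot(normal, lightDir);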

I call it diffuse because that's what this term is called in the Phong lighting model: it dictates how much light reaches the surface of our scene.

5. Multiply the Final Color by This Factor

That's it! Now go ahead and multiply your color by this term. I went ahead and created a variable called distanceFactor so that our equation looks more readable:
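
(A sketch; distanceFactor is just the gradient term we built earlier.)

    float distanceFactor = 1.0 - dist / (light.z * res.x);
    gl_FragColor = color * diffuse * distanceFactor;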

And we've got a working lighting model! (You might want to expand the radius of your light to see the effect more clearly.)

You can fork and edit this on CodePen.

Hmm, something seems a bit off. It feels like our light is tilted somehow. 

Let's revise our maths for a second here. We've got this light vector:
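
    vec3 lightDir = vec3(light.xy - gl_FragCoord.xy, 60.0);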

Which we know will give us (0, 0, 60) when the light is directly on top of this pixel. After we normalize it, it will be (0, 0, 1).

Remember that we want a normal that's pointing directly up towards the light to have the maximum brightness. Our default surface normal, pointing upwards, is (0.5, 0.5, 1).

Challenge: Can you see the solution now? Can you implement it?

The problem is that you can't store negative numbers as color values in a texture. You can't denote a vector pointing to the left as (-0.5, 0, 0). So, people who create normal maps need to add 0.5 to everything. (Or, in more general terms, they need to shift their coordinate system). You need to be aware of this to know that you should subtract 0.5 from each pixel before using the map. 
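
In our shader, that looks something like this:

    vec3 normal = texture2D(norm, pixel).xyz;
    // shift x and y back so they can point in negative directions
    normal.x -= 0.5;
    normal.y -= 0.5;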

Here's what the demo looks like after subtracting 0.5 from the x and y of our normal vector:

You can fork and edit this on CodePen.

There's one last fix we need to make. Remember that the dot product returns the cosine of the angle. This means that our output is clamped between -1 and 1. We don't want negative values in our colors, and while WebGL seems to automatically discard these negative values, you might get weird behavior elsewhere. We can use the built-in max function to fix this issue, by turning this:
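
    float diffuse = dot(normal, lightDir);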

Into this:
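
    float diffuse = max(dot(normal, lightDir), 0.0);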

Now you've got a working lighting model! 

You can put back the stones texture; its real normal map is in the GitHub repo for this series.

We only need to change one JavaScript line (the paths below are placeholders; use the actual normal map URL from the repo), from:
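
    var normalTexture = new THREE.TextureLoader().load("normal_test.jpg"); // the test normal map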

to:
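
    var normalTexture = new THREE.TextureLoader().load("blocks_normal.jpg"); // placeholder path; the stones normal map from the repo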

And one GLSL line, from:
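
    vec4 color = vec4(1.0); // the solid white we used for testing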

No longer needing the solid white, we pull the real texture, like so:
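
    vec4 color = texture2D(texture, pixel);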

And here's the final result:

You can fork and edit this on CodePen.

Optimization Tips

The GPU is very efficient in what it does, but knowing what can slow it down is valuable. Here are some tips regarding that:

Branching

One thing about shaders is that it's generally preferable to avoid branching whenever possible. While you rarely have to worry about a bunch of if statements on any code you write for the CPU, they can be a major bottleneck for the GPU. 

To see why, remember again that your GLSL code runs on every pixel on the screen in parallel. The graphics card can make a lot of optimizations based on the fact that all pixels need to run the same operations. If there's a bunch of if statements, however, then some of those optimizations might start to fail, because different pixels will run different code now. Whether or not if statements actually slow things down seems to depend on the specific hardware and graphics card implementation, but it's a good thing to keep in mind when trying to speed up your shader.

Deferred Rendering

This is a very useful concept when dealing with lighting. Imagine if we wanted to have two light sources, or three, or a dozen; we'd need to calculate the angle between every surface normal and every point of light. This will quickly slow our shader to a crawl. Deferred rendering is a way to optimize that by splitting the work of our shader into multiple passes. Here's an article that goes into the details of what it means. I'll quote the relevant part for our purposes here:

Lighting is the main reason for going one route versus the other. In a standard forward rendering pipeline, the lighting calculations have to be performed on every vertex and on every fragment in the visible scene, for every light in the scene.

For example, instead of sending an array of light points, you might instead draw them all onto a texture, as circles, with the color at each pixel representing the intensity of the light. This way, you'll be able to calculate the combined effect of all the lights in your scene, and just send that final texture (or buffer as it's sometimes called) to calculate the lighting from. 

Learning to split the work into multiple passes for the shader is a very useful technique. Blur effects make use of this idea to speed up the shader, for example, as well as effects like a fluid/smoke shader. It's out of the scope of this tutorial, but we might revisit the technique in a future tutorial!

Next Steps

Now that you've got a working lighting shader, here are some things to try and play around with:

  • Try varying the height (z value) of the light vector to see its effect.
  • Try varying the intensity of the light. (You can do this by multiplying your diffuse term by a factor.)
  • Add an ambient term to your light equation; see the sketch after this list. (This basically means giving it a minimum value, so that even dark areas won't be pitch black. This helps make it feel more realistic, because things in real life are still lit even when there's no direct light hitting them.)
  • Try implementing some of the shaders in this WebGL tutorial. It's done with Babylon.js instead of Three.js, but you can skip to the GLSL parts. In particular, the cell shading and Phong shading might interest you.
  • Get some inspiration from the demos on GLSL Sandbox and ShaderToy 
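
For instance, an ambient term could be as simple as adding a small constant to the lighting factor (a sketch; 0.1 is an arbitrary value):

    float ambient = 0.1; // minimum brightness, so nothing is ever pitch black
    gl_FragColor = color * (diffuse * distanceFactor + ambient);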

References

The stones texture and normal map used in this tutorial are taken from OpenGameArt:

http://opengameart.org/content/50-free-textures-4-normalmaps

There are lots of programs that can help you create normal maps. If you're interested in learning more about how to create your own normal maps, this article can help.

