How to Write a Smoke Shader

There's always been a certain air of mystery around smoke. It's aesthetically pleasing to watch and elusive to model. Like many physical phenomena, it's a chaotic system, which makes it very difficult to predict. The state of the simulation depends heavily on the interactions between its individual particles. 

This is exactly what makes it such a great problem to tackle with the GPU: it can be broken down to the behavior of a single particle, repeated simultaneously millions of times in different locations. 

In this tutorial, I'll walk you through writing a smoke shader from scratch, and teach you some useful shader techniques so you can expand your arsenal and develop your own effects!

What You'll Learn

This is the end result we will be working towards:

Click to generate more smoke. You can fork and edit this on CodePen.

We'll be implementing the algorithm presented in Jos Stam's paper on Real-Time Fluid Dynamics in Games. You'll also learn how to render to a texture, also known as using frame buffers, which is a very useful technique in shader programming for achieving many effects. 

Before You Get Started

The examples and specific implementation details in this tutorial use JavaScript and ThreeJS, but you should be able to follow along on any platform that supports shaders. (If you're not familiar with the basics of shader programming, make sure you go through at least the first two tutorials in this series.)

All the code examples are hosted on CodePen, but you can also find them in the GitHub repository associated with this article (which might be more readable). 

Theory and Background

The algorithm in Jos Stam's paper favors speed and visual quality over physical accuracy, which is exactly what we want in a game setting. 

This paper can look a lot more complicated than it really is, especially if you're not well versed in differential equations. However, the whole gist of this technique is summed up in this figure:

Figure taken from Jos Stam's paper cited above

This is all we need to implement to get a realistic-looking smoke effect: the value in each cell dissipates to all its neighboring cells on each iteration. If it's not immediately clear how this works, or if you just want to see how this would look, you can tinker with this interactive demo:

Smoke shader algorithm interactive demo
View the interactive demo on CodePen.

Clicking on any cell sets its value to 100. You can see how each cell slowly loses its value to its neighbors over time. It might be easiest to see by clicking Next to see the individual frames. Switch the Display Mode to see how it would look if we made a color value correspond to these numbers.

The above demo runs entirely on the CPU, with a loop going through every cell. Here's a sketch of what that loop looks like in JavaScript (the names grid, newGrid, and diffuse are illustrative; the demo source on CodePen has the exact version):
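    var W = 64, H = 64;  // grid dimensions
    var f = 0.1;         // diffusion factor; must be less than 1

    // Two W-by-H grids: read from `grid`, write into `newGrid`, so that
    // updates within one timestep don't affect each other
    var grid = [], newGrid = [];
    for (var i = 0; i < W; i++) {
      grid.push(new Array(H).fill(0));
      newGrid.push(new Array(H).fill(0));
    }

    function diffuse() {
      for (var x = 0; x < W; x++) {
        for (var y = 0; y < H; y++) {
          // Cells outside the grid count as 0
          var up    = y > 0     ? grid[x][y - 1] : 0;
          var down  = y < H - 1 ? grid[x][y + 1] : 0;
          var left  = x > 0     ? grid[x - 1][y] : 0;
          var right = x < W - 1 ? grid[x + 1][y] : 0;
          // Gain a fraction of the neighbors' sum, lose 4x our own value
          newGrid[x][y] = grid[x][y] + f * (up + down + left + right - 4 * grid[x][y]);
        }
      }
      // The freshly computed state becomes the current one
      var temp = grid;
      grid = newGrid;
      newGrid = temp;
    }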

This snippet is really the core of the algorithm. Every cell gains a small fraction f of the sum of its four neighboring cells, minus four times its own value, where f is a factor less than 1. Subtracting the current cell's value multiplied by 4 is what makes values diffuse from higher cells to lower ones.

To clarify this point, consider this scenario: 

Grid of values to diffuse:

      0   100     0
    100   100   100
      0   100     0

Take the cell in the middle (at position [1,1] in the grid) and apply the diffusion equation above. Let's assume f is 0.1:
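    new value = 100 + 0.1 * (100 + 100 + 100 + 100 - 4 * 100)
              = 100 + 0.1 * 0
              = 100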

No diffusion happens because all the cells have equal values! 

If we consider the cell at the top left instead (assume the cells outside of the pictured grid are all 0):
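    new value = 0 + 0.1 * (100 + 100 + 0 + 0 - 4 * 0)
              = 0 + 0.1 * 200
              = 20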

So we get a net increase of 20! Let's consider a final case. After one timestep (applying this formula to all cells), our grid will look like this:

Grid of diffused values:

     20    70    20
     70   100    70
     20    70    20

Let's look at the diffuse on the cell in the middle again:
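    new value = 100 + 0.1 * (70 + 70 + 70 + 70 - 4 * 100)
              = 100 + 0.1 * (280 - 400)
              = 100 - 12
              = 88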

We get a net decrease of 12! So it always flows from the higher values to the lower ones.

Now, if we wanted this to look more realistic, we could decrease the size of the cells (which you can do in the demo), but at some point, things are going to get really slow, as we're forced to sequentially run through every cell. Our goal is to be able to write this in a shader, where we can use the power of the GPU to process all the cells (as pixels) simultaneously in parallel.

So, to summarize, our general technique is to have each pixel give away some of its color value, every frame, to its neighboring pixels. Sounds pretty simple, doesn't it? Let's implement that and see what we get! 

Implementation

We'll start with a basic shader that draws over the whole screen. To make sure it's working, try setting the screen to a solid black (or any arbitrary color). Here's a condensed sketch of the setup I'm using in JavaScript (it assumes the fragment shader source sits in a script tag with the id fragShader; the CodePen below has the complete version):
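    // A camera and scene with a single plane covering the whole screen
    var scene = new THREE.Scene();
    var width = window.innerWidth, height = window.innerHeight;
    var camera = new THREE.OrthographicCamera(
        width / -2, width / 2, height / 2, height / -2, 0.1, 1000);
    camera.position.z = 2;

    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(width, height);
    document.body.appendChild(renderer.domElement);

    // Pass the screen's dimensions to the shader as a uniform
    var uniforms = {
      res: { type: 'v2', value: new THREE.Vector2(width, height) }
    };

    // The fragment shader source lives in a <script> tag with id "fragShader"
    var material = new THREE.ShaderMaterial({
      uniforms: uniforms,
      fragmentShader: document.getElementById('fragShader').innerHTML
    });

    var plane = new THREE.PlaneBufferGeometry(width, height);
    scene.add(new THREE.Mesh(plane, material));

    function render() {
      requestAnimationFrame(render);
      renderer.render(scene, camera);
    }
    render();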

You can fork and edit this on CodePen. Click the buttons at the top to see the HTML, CSS, and JS.

Our shader is simply:
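    uniform vec2 res; // the width and height of our screen, in pixels
    void main() {
      // The current pixel's coordinates, normalized to the 0..1 range
      vec2 pixel = gl_FragCoord.xy / res.xy;
      gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // solid black
    }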

res and pixel are there to give us the coordinate of the current pixel. We're passing the screen's dimensions in res as a uniform variable. (We're not using them right now, but we will soon.)

Step 1: Moving Values Across Pixels

Here's what we want to implement again:

Our general technique is to have each pixel give away some of its color value every frame to its neighboring pixels.

Stated in its current form, this is impossible to do with a shader. Can you see why? Remember that all a shader can do is return a color value for the current pixel it's processing—so we need to restate this in a way that only affects the current pixel. We can say:

Each pixel should gain some color from its neighbors, while losing some of its own.

Now this is something we can implement. If you actually try to do this, however, you might run into a fundamental problem... 

Consider a much simpler case. Let's say you just want to make a shader that turns an image red slowly over time. You might write a shader like this:
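    uniform vec2 res;
    uniform sampler2D bufferTexture; // the input image we're processing
    void main() {
      vec2 pixel = gl_FragCoord.xy / res.xy;
      // Start from the image's current color...
      gl_FragColor = texture2D(bufferTexture, pixel);
      // ...and nudge its red component up a little
      gl_FragColor.r += 0.01;
    }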

And expect that, every frame, the red component of each pixel would increase by 0.01. Instead, all you'll get is a static image where all the pixels are just a tiny bit redder than they started. The red component of every pixel will only ever increase once, despite the fact that the shader runs every frame.

Can you see why?

The Problem

The problem is that any operation we do in our shader is sent to the screen and then lost forever. Our process right now looks like this:

Shader process

We pass our uniform variables and texture to the shader, it makes the pixels slightly redder, draws that to the screen, and then starts over from scratch again. Anything we draw within the shader gets cleared by the next time we draw. 

What we want is something like this:

Repeatedly applying the shader to the texture

Instead of drawing to the screen directly, we can draw to some texture instead, and then draw that texture onto the screen. You get the same image on screen as you would have otherwise, except now you can pass your output back as input. So you can have shaders that build up or propagate something, rather than get cleared every time. That is what I call the "frame buffer trick". 

The Frame Buffer Trick

The general technique is the same on any platform. Searching for "render to texture" in whatever language or tools you're using should bring up the necessary implementation details. You can also look up how to use frame buffer objects, which is just another name for being able to render to some buffer instead of rendering to the screen. 

In ThreeJS, the equivalent of this is the WebGLRenderTarget. This is what we'll use as our intermediary texture to render to. There's one small caveat left: you can't read from and render to the same texture simultaneously. The easiest way to get around that is to simply use two textures. 

Let A and B be two textures you've created. Your method would then be:

  1. Pass A through your shader, render onto B.
  2. Render B to the screen.
  3. Pass B through shader, render onto A.
  4. Render A to your screen.
  5. Repeat 1.

Or, a more concise way to code this would be:

  1. Pass A through your shader, render onto B.
  2. Render B to the screen.
  3. Swap A and B (so the variable A now holds the texture that was in B and vice versa).
  4. Repeat 1.

That's all it takes. Here's a sketch of that implementation in ThreeJS, reusing the scene, camera, and renderer from our setup (note that this uses the older renderer.render(scene, camera, target) call; recent versions of ThreeJS use renderer.setRenderTarget instead):
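    // The scene we render off-screen, into a texture
    var bufferScene = new THREE.Scene();

    // Two render targets: read from one while writing to the other
    var textureA = new THREE.WebGLRenderTarget(width, height);
    var textureB = new THREE.WebGLRenderTarget(width, height);

    // This material runs our shader, reading from textureA
    var bufferMaterial = new THREE.ShaderMaterial({
      uniforms: {
        bufferTexture: { type: 't', value: textureA.texture },
        res: { type: 'v2', value: new THREE.Vector2(width, height) }
      },
      fragmentShader: document.getElementById('fragShader').innerHTML
    });
    bufferScene.add(new THREE.Mesh(plane, bufferMaterial));

    // On screen, we just draw whatever ended up in textureB
    var quad = new THREE.Mesh(plane, new THREE.MeshBasicMaterial({ map: textureB.texture }));
    scene.add(quad);

    function render() {
      requestAnimationFrame(render);

      // 1. Pass textureA through our shader, rendering onto textureB
      renderer.render(bufferScene, camera, textureB);

      // 2. Draw textureB to the screen
      quad.material.map = textureB.texture;
      renderer.render(scene, camera);

      // 3. Swap A and B, so this frame's output becomes next frame's input
      var temp = textureA;
      textureA = textureB;
      textureB = temp;
      bufferMaterial.uniforms.bufferTexture.value = textureA.texture;
    }
    render();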

You can fork and edit this on CodePen. The new shader code is in the HTML tab.

This is still a black screen, which is what we started with. Our shader isn't too different either:
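    uniform vec2 res;
    uniform sampler2D bufferTexture; // what we rendered last frame
    void main() {
      vec2 pixel = gl_FragCoord.xy / res.xy;
      // Just output whatever color was here before
      gl_FragColor = texture2D(bufferTexture, pixel);
    }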

Except now if you add this line (try it!):
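    gl_FragColor.r += 0.01;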

You'll see the screen slowly turning red, as opposed to just increasing by 0.01 once. This is a pretty significant step, so you should take a moment to play around and compare it to how our initial setup worked. 

Challenge: What happens if you put gl_FragColor.r += pixel.x; in the frame buffer example, compared to the initial setup example? Take a moment to think about why the results are different and why they make sense.

Step 2: Getting a Smoke Source

Before we can make anything move, we need a way to create smoke in the first place. The easiest way is to manually set some arbitrary area to white in your shader. 
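In the shader's main function, after reading from the buffer texture, that can look something like this (the source position and the radius here are arbitrary choices):

    // Paint a white circle of "smoke" around a fixed point
    vec2 smokeSource = vec2(100.0, 100.0); // an arbitrary point, in pixels
    float dist = distance(smokeSource, gl_FragCoord.xy);
    if (dist < 15.0) {
      gl_FragColor.rgb = vec3(1.0);
    }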

If we want to test whether our frame buffer is working correctly, we can try to add to the color value instead of just setting it. You should see the circle slowly getting whiter and whiter.

Another way is to replace that fixed point with the position of the mouse. You can pass a third value for whether the mouse is pressed or not, so that you can click to create smoke. Here's an implementation of that.

Click to add "smoke". You can fork and edit this on CodePen.

Here's roughly what our shader looks like now, with smokeSource as the uniform carrying the mouse state:
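    uniform vec2 res;
    uniform sampler2D bufferTexture;
    // xy = the mouse position in pixels; z is 1.0 while the mouse is pressed, 0.0 otherwise
    uniform vec3 smokeSource;

    void main() {
      vec2 pixel = gl_FragCoord.xy / res.xy;
      gl_FragColor = texture2D(bufferTexture, pixel);

      float dist = distance(smokeSource.xy, gl_FragCoord.xy);
      if (smokeSource.z > 0.0 && dist < 15.0) {
        gl_FragColor.rgb += 0.01; // add a little smoke while clicking
      }
    }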

Challenge: Remember that branching (conditionals) is usually expensive in shaders. Can you rewrite this without using an if statement? (The solution is in the CodePen.)

If this doesn't make sense, there's a more detailed explanation of using the mouse in a shader in the previous lighting tutorial.

Step 3: Diffuse the Smoke

Now this is the easy part—and the most rewarding! We've got all the pieces now; we just need to tell the shader: each pixel should gain some color from its neighbors, while losing some of its own.

Which looks something like this:
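    // Inside main, after the smoke source code.
    // The size of one pixel, in texture coordinates:
    float xPixel = 1.0 / res.x;
    float yPixel = 1.0 / res.y;

    // Sample our four neighbors
    vec4 rightColor = texture2D(bufferTexture, vec2(pixel.x + xPixel, pixel.y));
    vec4 leftColor  = texture2D(bufferTexture, vec2(pixel.x - xPixel, pixel.y));
    vec4 upColor    = texture2D(bufferTexture, vec2(pixel.x, pixel.y + yPixel));
    vec4 downColor  = texture2D(bufferTexture, vec2(pixel.x, pixel.y - yPixel));

    // Gain from our neighbors, lose 4x our own value
    gl_FragColor.rgb += 14.0 * 0.016 *
        (leftColor.rgb + rightColor.rgb + downColor.rgb + upColor.rgb
         - 4.0 * gl_FragColor.rgb);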

We've got our f factor as before; in this case it's the timestep multiplied by a coefficient. The timestep is 0.016 (that's 1/60, because we're running at 60 fps), and I kept trying coefficients until I arrived at 14, which seems to look good. Here's the result:

Click to add smoke. You can fork and edit this on CodePen.

Uh Oh, It's Stuck!

This is the same diffuse equation we used in the CPU demo, and yet our simulation gets stuck! What gives? 

It turns out that textures (like all numbers on a computer) have limited precision. At some point, the factor we're subtracting gets so small that it's rounded down to 0, so the simulation gets stuck. To fix this, we need to check that the factor doesn't fall below some minimum value:
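    // Replacing the diffuse line from before:
    // compute the diffusion amount as a single number first
    float factor = 14.0 * 0.016 *
        (leftColor.r + rightColor.r + downColor.r + upColor.r
         - 4.0 * gl_FragColor.r);

    // When the factor is negative but too small for the texture's
    // precision, clamp it so the value can keep decreasing
    float minimum = 0.003;
    if (factor >= -minimum && factor < 0.0) {
      factor = -minimum;
    }

    gl_FragColor.rgb += factor;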

I'm using the r component instead of the rgb to get the factor, because it's easier to work with single numbers, and because all of the components are the same number anyway (since our smoke is white). 

By trial and error, I found 0.003 to be a good threshold where it doesn't get stuck. I only worry about the factor when it's negative, to ensure it can always decrease. Once we apply this fix, here's what we get:

Click to add smoke. You can fork and edit this on CodePen.

Step 4: Diffuse the Smoke Upwards

This doesn't look very much like smoke, though. If we want it to flow upwards instead of in every direction, we need to add some weights. If the bottom pixels always have a bigger influence than the other directions, then our pixels will seem to move up. 

By playing around with the coefficients, we can arrive at something that looks pretty decent with this equation:
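    // Weight the pixel below us (downColor) more heavily, so the smoke
    // drifts upward. The weights sum to 6, hence the 6.0 times our own
    // value. (The overall coefficient of 8.0 is another trial-and-error
    // choice; tweak it to taste.)
    float factor = 8.0 * 0.016 *
        (leftColor.r + rightColor.r + downColor.r * 3.0 + upColor.r
         - 6.0 * gl_FragColor.r);

    // The same minimum clamp from the previous step still applies
    float minimum = 0.003;
    if (factor >= -minimum && factor < 0.0) {
      factor = -minimum;
    }

    gl_FragColor.rgb += factor;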

And here's what that looks like:

Click to add smoke. You can fork and edit this on CodePen.

A Note on the Diffuse Equation

I basically fiddled around with the coefficients there to make it look good flowing upwards. You can just as well make it flow in any other direction. 

It's important to note that it's very easy to make this simulation "blow up". (Try changing the 6.0 in there to 5.0 and see what happens). This is obviously because the cells are gaining more than they're losing. 

This equation is actually what the paper I cited refers to as the "bad diffuse" model. They present an alternative equation that is more stable, but is not very convenient for us, mainly because it needs to write to the grid it's reading from. In other words, we'd need to be able to read and write to the same texture at the same time. 

What we've got is sufficient for our purposes, but you can take a look at the explanation in the paper if you're curious. You will also find the alternative equation implemented in the interactive CPU demo in the function diffuse_advanced().

A Quick Fix

One thing you might notice, if you play around with your smoke, is that it gets stuck at the bottom of the screen if you generate some there. This is because the pixels on that bottom row are trying to get values from the pixels below them, which do not exist.

To fix this, we simply make sure that the pixels in the bottom row find 0 beneath them:
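    // Placed right after sampling downColor, before the diffuse line:
    // the bottom row has no pixels below it, so treat what's "beneath" as empty
    if (pixel.y <= yPixel) {
      downColor.rgb = vec3(0.0);
    }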

In the CPU demo, I dealt with that by simply not making the cells in the boundary diffuse. You could alternatively just manually set any out-of-bounds cell to have a value of 0. (The grid in the CPU demo extends by one row and column of cells in each direction, so you never actually see the boundaries.)

A Velocity Grid

Congratulations! You now have a working smoke shader! The last thing I wanted to briefly discuss is the velocity field that the paper mentions.

Velocity field figure from Jos Stam's paper

Your smoke doesn't have to uniformly diffuse upwards or in any specific direction; it could follow a general pattern like the one pictured. You can do this by sending in another texture where the color values represent the direction the smoke should flow in at that location, in the same way that we used a normal map to specify a direction at each pixel in our lighting tutorial.

In fact, your velocity texture doesn't have to be static either! You could use the frame buffer trick to also have the velocities change in real time. I won't cover that in this tutorial, but there's a lot of potential to explore.

Conclusion

If there's anything to take away from this tutorial, it's that being able to render to a texture instead of just to the screen is a very useful technique.

What Are Frame Buffers Good For?

One common use for this is post-processing in games. If you want to apply some sort of color filter, instead of applying it to every single object, you can render all your objects to a texture the size of the screen, then apply your shader to that final texture and draw it to the screen. 

Another example is when implementing shaders that require multiple passes, such as a blur. You'd usually run your image through the shader to blur it in the x-direction, then run the result through again to blur it in the y-direction. 

A final example is deferred rendering, as discussed in the previous lighting tutorial, which is an easy way to efficiently add many light sources to your scene. The cool thing about this is that calculating the lighting no longer depends on the number of light sources you have.

Don't Be Afraid of Technical Papers

There's definitely more detail covered in the paper I cited, and it assumes you have some familiarity with linear algebra, but don't let that deter you from dissecting it and trying to implement it. The gist of it ended up pretty simple to implement (after some tinkering with the coefficients). 

Hopefully you've learned a little more about shaders here, and if you have any questions, suggestions, or improvements, please share them below! 

