
Creating Toon Water for the Web: Part 2


Welcome back to this three-part series on creating stylized toon water in PlayCanvas using vertex shaders. In Part 1, we covered setting up our environment and water surface. This part will cover applying buoyancy to objects, adding water lines to the surface, and using the depth buffer to create foam lines around the edges of objects that intersect the surface.

I made some small changes to my scene to make it look a little nicer. You can customize your scene however you like, but what I did was:

  • Added the lighthouse and the octopus models.
  • Added a ground plane with color #FFA457.
  • Added a clear color for the camera of #6CC8FF.
  • Added an ambient color to the scene of #FFC480 (you can find this in the scene settings).

Below is what my starting point now looks like.

The scene now includes an octopus and a lighthouse

Buoyancy 

The most straightforward way to create buoyancy is just to create a script that will push objects up and down. Create a new script called Buoyancy.js and set its initialize to:
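
A minimal sketch, storing the starting transform and a time accumulator (the exact fields are up to you):

    var Buoyancy = pc.createScript('buoyancy');

    Buoyancy.prototype.initialize = function() {
        // Remember the starting transform so we can oscillate around it
        this.initialPosition = this.entity.getPosition().clone();
        this.initialRotation = this.entity.getEulerAngles().clone();
        this.time = 0;
    };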

Now, in the update, we increment time and rotate the object:
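
Something along these lines works; the amplitudes and frequencies here are just example values to tweak:

    Buoyancy.prototype.update = function(dt) {
        this.time += dt;

        // Bob up and down around the initial height
        var pos = this.initialPosition;
        this.entity.setPosition(pos.x, pos.y + Math.cos(this.time) * 0.07, pos.z);

        // Gently rock the object back and forth
        var rot = this.initialRotation;
        this.entity.setEulerAngles(
            rot.x + Math.cos(this.time * 0.25) * 1,
            rot.y,
            rot.z + Math.sin(this.time * 0.5) * 2
        );
    };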

Apply this script to your boat and watch it bobbing up and down in the water! You can apply this script to several objects (including the camera—try it)!

Texturing the Surface

Right now, the only way you can see the waves is by looking at the edges of the water surface. Adding a texture helps make motion on the surface more visible and is a cheap way to simulate reflections and caustics.

You can look for a caustics texture online or make your own. Here's one I drew in Gimp that you can freely use. Any texture will work as long as it can be tiled seamlessly.

Once you've found a texture you like, drag it into your project's asset window. We need to reference this texture in our Water.js script, so create an attribute for it:
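
A standard PlayCanvas texture attribute does the job (the name surfaceTexture is an arbitrary choice, as long as you use it consistently):

    Water.attributes.add('surfaceTexture', {
        type: 'asset',
        assetType: 'texture',
        title: 'Surface Texture'
    });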

And then assign it in the editor:

The water texture is added to the water script

Now we need to pass it to our shader. Go to Water.js and set a new parameter in the CreateWaterMaterial function:
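
Assuming the attribute above is named surfaceTexture, it's one call:

    // Make the texture available to the shader as a sampler2D uniform
    material.setParameter('uSurfaceTexture', this.surfaceTexture.resource);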

Now go into Water.frag and declare our new uniform:
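
The name just needs to match the parameter we set from Water.js:

    uniform sampler2D uSurfaceTexture;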

We're almost there. To render the texture onto the plane, we need to know where each pixel is along the mesh, which means we need to pass some data from the vertex shader to the fragment shader.

Varying Variables

A varying variable allows you to pass data from the vertex shader to the fragment shader. This is the third type of special variable you can have in a shader (the other two being uniform and attribute). It is defined for each vertex and is accessible by each pixel. Since there are a lot more pixels than vertices, the value is interpolated between vertices (this is where the name "varying" comes from: it varies between the values you give it).

To try this out, declare a new variable in Water.vert as a varying:
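
Any name works as long as the vertex and fragment shaders agree on it; I'll use ScreenPosition:

    varying vec4 ScreenPosition;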

And then set it to gl_Position after it's been computed:
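
At the end of main, right after gl_Position is assigned:

    // pos here is the displaced vertex position from Part 1
    gl_Position = matrix_viewProjection * matrix_model * vec4(pos, 1.0);
    ScreenPosition = gl_Position;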

Now go back to Water.frag and declare the same variable. There's no way to get some debug output from within a shader, but we can use color to visually debug. Here's one way to do this:
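
A sketch of the debug fragment shader; it splits the output on the sign of ScreenPosition.x:

    varying vec4 ScreenPosition;

    void main(void)
    {
        vec4 color = vec4(0.0, 0.0, 0.0, 1.0); // black by default
        if (ScreenPosition.x > 0.0) {
            color = vec4(1.0, 1.0, 1.0, 1.0); // white where x is positive
        }
        gl_FragColor = color;
    }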

The plane should now look black and white, with the dividing line where ScreenPosition.x is 0. Color values only go from 0 to 1, but the values in ScreenPosition can fall outside this range. They get automatically clamped, so anything you see as black could be 0 or any negative value.

What we've just done is pass the screen position of every vertex to every pixel. You can see that the line separating the black and white sides always sits in the center of the screen, regardless of where the surface actually is in the world.

Challenge #1: Create a new varying variable to pass the world position instead of the screen position. Visualize it in the same way as we did above. If the color doesn't change with the camera, then you've done this correctly.

Using UVs 

The UVs are the 2D coordinates for each vertex along the mesh, normalized from 0 to 1. This is exactly what we need to sample the texture onto the plane correctly, and it should already be set up from the previous part.

Declare a new attribute in Water.vert (this name comes from the shader definition in Water.js):
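
Assuming the shader definition in Water.js maps pc.SEMANTIC_TEXCOORD0 to aUv0, the declaration is:

    attribute vec2 aUv0;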

And all we need to do is pass it to the fragment shader, so just create a varying and set it to the attribute:
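
For example:

    varying vec2 vUv0;

    // ...and inside main():
    vUv0 = aUv0;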

Now we declare the same varying in the fragment shader. To verify it works, we can visualize it as before, so that Water.frag now looks like:
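
One way to visualize it is to map the UVs to the red and green channels:

    varying vec2 vUv0;

    void main(void)
    {
        gl_FragColor = vec4(vUv0.x, vUv0.y, 0.0, 1.0);
    }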

And you should see a gradient, confirming that we have a value of 0 at one end and 1 at the other. Now, to actually sample our texture, all we have to do is:
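
Assuming the uniform and varying declared earlier:

    uniform sampler2D uSurfaceTexture;
    varying vec2 vUv0;

    void main(void)
    {
        gl_FragColor = texture2D(uSurfaceTexture, vUv0);
    }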

And you should see the texture on the surface:

Caustics texture is applied to the water surface

Stylizing the Texture

Instead of just setting the texture as our new color, let's combine it with the blue we had:
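
For example, by adding the texture's red channel on top of the base color (the blue here is the value from Part 1; yours may differ):

    vec4 color = vec4(0.0, 0.7, 1.0, 0.5); // the base blue from Part 1
    vec4 WaterLines = texture2D(uSurfaceTexture, vUv0);
    color.rgba += WaterLines.r; // brightens only where the texture has lines
    gl_FragColor = color;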

This works because the color of the texture is black (0) everywhere except for the water lines. By adding it, we don't change the original blue color except for the places where there are lines, where it becomes brighter. 

This isn't the only way to combine the colors, though.

Challenge #2: Can you combine the colors in a way to get the subtler effect shown below?
Water lines applied to the surface with a more subtle color

Moving the Texture

As a final effect, we want the lines to move along the surface so it doesn't look so static. To do this, we use the fact that any value given to the texture2D function outside the 0 to 1 range wraps around (so that 1.5 and 2.5 both become 0.5). We can therefore offset our sample position by the time uniform we already set up, and scale the position to increase or decrease the density of the lines on the surface, making our final frag shader look like this:
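
A sketch with illustrative scale and speed constants (the texture's addressing mode needs to be set to Repeat for the wraparound to work):

    uniform sampler2D uSurfaceTexture;
    uniform float uTime;
    varying vec2 vUv0;

    void main(void)
    {
        vec4 color = vec4(0.0, 0.7, 1.0, 0.5);

        vec2 pos = vUv0 * 2.0;  // scale the UVs to control line density
        pos.y += uTime * 0.02;  // scroll the lines over time
        vec4 WaterLines = texture2D(uSurfaceTexture, pos);
        color.rgba += WaterLines.r;

        gl_FragColor = color;
    }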

Foam Lines & the Depth Buffer

Rendering foam lines around objects in water makes it far easier to see how objects are immersed and where they cut the surface. It also makes our water look a lot more believable. To do this, we need to somehow figure out where the edges are on each object, and do this efficiently.

The Trick

What we want is to be able to tell, given a pixel on the surface of the water, whether it's close to an object. If so, we can color it as foam. There's no straightforward way to do this (that I know of). So to figure this out, we're going to use a helpful problem-solving technique: come up with an example we know the answer to, and see if we can generalize it. 

Consider the view below.

Lighthouse in water

Which pixels should be part of the foam? We know it should look something like this:

Lighthouse in water with foam

So let's think about two specific pixels. I've marked two with stars below. The black one is in the foam. The red one is not. How can we tell them apart inside a shader?

Lighthouse in water with two marked pixels

What we know is that even though those two pixels are close together in screen space (both are rendered right on top of the lighthouse body), they're actually far apart in world space. We can verify this by looking at the same scene from a different angle, as shown below.

Viewing the lighthouse from above

Notice that the red star isn't on top of the lighthouse body as it appeared, but the black star actually is. We can tell them apart using the distance to the camera, commonly referred to as "depth", where a depth of 1 means it's very close to the camera and a depth of 0 means it's very far. But it's not just a matter of the absolute world distance, or depth, to the camera; it's the depth compared to the pixel behind it.

Look back to the first view. Let's say the lighthouse body has a depth value of 0.5. The black star's depth would be very close to 0.5. So it and the pixel behind it have similar depth values. The red star, on the other hand, would have a much larger depth, because it would be closer to the camera, say 0.7. And yet the pixel behind it, still on the lighthouse, has a depth value of 0.5, so there's a bigger difference there.

This is the trick. When the depth of the pixel on the water surface is close enough to the depth of the pixel it's drawn on top of, we're pretty close to the edge of something, and we can render it as foam. 

So we need more information than is available in any given pixel. We somehow need to know the depth of the pixel that it's about to be drawn on top of. This is where the depth buffer comes in.

The Depth Buffer

You can think of a buffer, or a framebuffer, as just an off-screen render target, or a texture. You would want to render off-screen when you're trying to read data back, a technique that this smoke effect employs.

The depth buffer is a special render target that holds information about the depth values at each pixel. Remember that the value in gl_Position computed in the vertex shader was a screen space value, but it also had a third coordinate, a Z value. This Z value is used to compute the depth which is written to the depth buffer. 

The purpose of the depth buffer is to draw our scene correctly, without the need to sort objects back to front. Every pixel that is about to be drawn first consults the depth buffer. If its depth value is greater than the value in the buffer, it is drawn, and its own value overwrites the one in the buffer. Otherwise, it is discarded (because it means another object is in front of it).

You can actually turn off the water's depth test to see how things would look without this mechanism. You can try this in Water.js:
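
One line on the material, somewhere in CreateWaterMaterial:

    material.depthTest = false; // draw the water without consulting the depth buffer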

You'll see how the water will always be rendered on top, even if it is behind opaque objects.

Visualizing the Depth Buffer

Let's add a way to visualize the depth buffer for debugging purposes. Create a new script called DepthVisualize.js. Attach this to your camera. 

All we have to do to get access to the depth buffer in PlayCanvas is to say:
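
The exact API has moved around between engine versions; at the time of writing it was a single call in the script's initialize (treat this as version-dependent):

    // Ask the engine to render scene depth to a texture every frame
    this.entity.camera.camera.requestDepthMap();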

This will then automatically inject a uniform into all of our shaders that we can use by declaring it as:
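
The injected sampler is named uDepthMap in the engine version this series targets:

    uniform sampler2D uDepthMap;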

Below is a sample script that requests the depth map and renders it on top of our scene. It's set up for hot-reloading. 
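
A reconstruction along these lines should work, though it leans on engine internals (pc.Command, this.app.scene.drawCalls, pc.shaderChunks) that may differ in newer engine versions:

    var DepthVisualize = pc.createScript('depthVisualize');

    DepthVisualize.prototype.initialize = function() {
        this.entity.camera.camera.requestDepthMap();
        this.antiCacheCount = 0; // changes the shader name so hot-reloads aren't cached

        this.SetupDepthViz();

        // Re-create the draw command whenever this script is hot-swapped
        this.on('swap', function() {
            this.SetupDepthViz();
        });
    };

    DepthVisualize.prototype.SetupDepthViz = function() {
        var device = this.app.graphicsDevice;
        var chunks = pc.shaderChunks;
        this.antiCacheCount++;

        var fs = "";
        fs += "varying vec2 vUv0;\n";
        fs += "uniform sampler2D uDepthMap;\n";
        fs += "// The depth value is packed into the texture's RGBA channels\n";
        fs += "float unpackFloat(vec4 rgbaDepth) {\n";
        fs += "    const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);\n";
        fs += "    return dot(rgbaDepth, bitShift);\n";
        fs += "}\n";
        fs += "void main(void) {\n";
        fs += "    float depth = unpackFloat(texture2D(uDepthMap, vUv0)) * 30.0; // x30 so it's visible\n";
        fs += "    gl_FragColor = vec4(vec3(depth), 1.0);\n";
        fs += "}\n";

        var shader = chunks.createShaderFromCode(device, chunks.fullscreenQuadVS, fs, "depthVisualize" + this.antiCacheCount);

        // A command that draws a fullscreen quad on top of everything
        this.command = new pc.Command(pc.LAYER_FX, pc.BLEND_NONE, function() {
            pc.drawQuadWithShader(device, null, shader);
        });
        this.command.isDepthVisualize = true;

        // Remove any command added by a previous version of this script
        var drawCalls = this.app.scene.drawCalls;
        for (var i = drawCalls.length - 1; i >= 0; i--) {
            if (drawCalls[i].isDepthVisualize) {
                drawCalls.splice(i, 1);
            }
        }

        this.app.scene.drawCalls.push(this.command);
    };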

Try copying that in, and comment/uncomment the line this.app.scene.drawCalls.push(this.command); to toggle the depth rendering. It should look something like the image below.

Boat and lighthouse scene rendered as a depth map
Challenge #3: The water surface is not drawn into the depth buffer. The PlayCanvas engine does this intentionally. Can you figure out why? What's special about the water material? To put it another way, based on our depth checking rules, what would happen if the water pixels did write to the depth buffer?

Hint: There is one line you can change in Water.js that will cause the water to be written to the depth buffer.

Another thing to notice is that I multiply the depth value by 30 in the embedded shader in the initialize function. This is just to be able to see it clearly, because otherwise the range of values is too small to see as shades of color.

Implementing the Trick

The PlayCanvas engine includes a bunch of helper functions for working with depth values, but at the time of writing they haven't been released to production, so we'll just set these up ourselves.

Add the following uniforms to Water.frag:
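
These cover the depth map itself plus what we need to reconstruct depths (uDepthMap and uScreenSize are injected by the engine; the camera values we pass in ourselves below):

    uniform sampler2D uDepthMap; // injected once the depth map is requested
    uniform vec4 uScreenSize;    // engine-provided: (width, height, 1/width, 1/height)
    uniform mat4 matrix_view;
    uniform float camera_near;
    uniform float camera_far;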

Define these helper functions above the main function:
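
These are adapted from the engine's own depth shader chunks; the unpacking has to match how the engine packed the depth map:

    // Unpack a depth value that was packed into the RGBA channels
    float unpackFloat(vec4 rgbaDepth) {
        const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
        return dot(rgbaDepth, bitShift);
    }

    // Depth of whatever is already in the depth buffer behind this pixel
    float getLinearScreenDepth() {
        vec2 uv = gl_FragCoord.xy * uScreenSize.zw;
        return unpackFloat(texture2D(uDepthMap, uv)) * camera_far;
    }

    // Linear depth of a world space position: its distance along the view axis
    float getLinearDepth(vec3 pos) {
        return -(matrix_view * vec4(pos, 1.0)).z;
    }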

Pass some information about the camera to the shader in Water.js. Put this where you pass other uniforms like uTime:
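
A sketch that assumes your camera entity is named "Camera" in the hierarchy:

    var camera = this.app.root.findByName('Camera').camera;
    material.setParameter('camera_near', camera.nearClip);
    material.setParameter('camera_far', camera.farClip);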

Finally, we need the world position for each pixel in our frag shader. We need to get this from the vertex shader. So define a varying in Water.frag:
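
Any vec3 works; I'll call it WorldPosition:

    varying vec3 WorldPosition;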

Define the same varying in Water.vert. Then set it to the distorted position in the vertex shader, so the full code would look like:
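
Putting it together (the wave displacement is the one from Part 1, so your constants may differ):

    attribute vec3 aPosition;
    attribute vec2 aUv0;

    varying vec2 vUv0;
    varying vec3 WorldPosition;

    uniform mat4 matrix_model;
    uniform mat4 matrix_viewProjection;
    uniform float uTime;

    void main(void)
    {
        vUv0 = aUv0;

        // Displace the vertex to create the waves (from Part 1)
        vec3 pos = aPosition;
        pos.y += cos(pos.z * 5.0 + uTime) * 0.1 * sin(pos.x * 5.0 + uTime);

        // The distorted position, in world space, for the frag shader
        WorldPosition = (matrix_model * vec4(pos, 1.0)).xyz;

        gl_Position = matrix_viewProjection * matrix_model * vec4(pos, 1.0);
    }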

Actually Implementing the Trick

Now we're finally ready to implement the technique described at the beginning of this section. We want to compare the depth of the pixel we're at to the depth of the pixel behind it. The pixel we're at comes from the world position, and the pixel behind comes from the screen position. So grab these two depths:
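
Using the helpers we just defined:

    float worldDepth = getLinearDepth(WorldPosition); // depth of this water pixel
    float screenDepth = getLinearScreenDepth();       // depth of the pixel behind it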

Challenge #4: One of these values will never be greater than the other (assuming depthTest = true). Can you deduce which?

We know the foam is going to be where the distance between these two values is small. So let's render that difference at each pixel. Put this at the bottom of your shader (and make sure the depth visualization script from the previous section is turned off):
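
For example:

    float depthDiff = screenDepth - worldDepth;
    gl_FragColor = vec4(vec3(depthDiff), 1.0);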

Which should look something like this:

A rendering of the depth difference at each pixel

Which correctly picks out the edges of any object immersed in water in real time! You can of course scale this difference we're rendering to make the foam look thicker/thinner.

There are now a lot of ways in which you can combine this output with the water surface color to get nice-looking foam lines. You could keep it as a gradient, use it to sample from another texture, or set it to a specific color if the difference is less than or equal to some threshold.

My favorite look is setting it to a color similar to that of the static water lines, so my final main function looks like this:
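
A sketch of that final main; the 0.2 threshold and the brightening amount are values to tune by eye:

    void main(void)
    {
        vec4 color = vec4(0.0, 0.7, 1.0, 0.5);

        // Moving water lines from earlier
        vec2 pos = vUv0 * 2.0;
        pos.y += uTime * 0.02;
        vec4 WaterLines = texture2D(uSurfaceTexture, pos);
        color.rgba += WaterLines.r;

        // Foam: a small depth difference means we're near an object's edge
        float worldDepth = getLinearDepth(WorldPosition);
        float screenDepth = getLinearScreenDepth();
        float depthDiff = screenDepth - worldDepth;
        if (depthDiff < 0.2) {
            color.rgba += 0.4; // same brightening trick as the water lines
        }

        gl_FragColor = color;
    }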

Summary

We added buoyancy to objects floating in the water, gave our surface a moving texture to simulate caustics, and saw how we can use the depth buffer to create dynamic foam lines.

To finish this up, the next and final part will introduce post-process effects and how to use them to create the underwater distortion effect.

Source Code

You can find the finished hosted PlayCanvas project here. A Three.js port is also available in this repository.

