
Creating a Unity Game With PlayMaker

What You'll Be Creating

In this article, I will show you how to make a game object move in five minutes or less using the PlayMaker add-on for Unity, with no code. 

Download Unity

If you haven't already, head over to the Unity website and download the latest version of Unity. Unity is a free, user-friendly game engine that allows developers and studios to create 3D games for Android, iOS, Windows, and over 20 other platforms.

About PlayMaker

playMaker

What exactly is PlayMaker? PlayMaker is a paid add-on for Unity that allows you to create games without having to code. Created by Hutong Games, PlayMaker uses finite state machines (FSMs) to add physics, animation, interactive objects, and scene changes easily. Its developers have already created the scripts for you, which can cut your game development time significantly. As of this writing, the price is $65.00, and the asset has over 370 five-star reviews.

Downloading PlayMaker

Once you have Unity, create a new 3D project. I made a simple scene with a terrain and a sphere. In the main viewport, under the menu toolbar, you will find the Asset Store tab. Type PlayMaker into the search box and follow the instructions to place your purchase. You may need to use the link in your confirmation email or import it directly through Unity. You may find that it takes you back to the Asset Store, but don't worry: click Add to Cart and it should give you the download option.

Download the file and unzip it into a folder you can easily find. Now let's import a custom package by going to Assets > Import Package > Custom Package and opening your unzipped PlayMaker file.

Import Custom Package
Import playMaker

After you've imported the package, choose Install PlayMaker. This will add the PlayMaker option to your menu bar.

Add the playMaker option to the menu bar

Game Objects

Every FSM needs to be attached to an object. In this case, our sphere will be the object we are looking to control. Later, we will need to specify our game object once an action is created. For now, let's confirm our sphere has a rigid body. In addition, let's freeze the X and Y position of our sphere in the Inspector window.

The PlayMaker Editor

Go to the PlayMaker tab on your top menu bar and click PlayMaker Editor. This will open the window below. Click and drag the tab to place it next to your Game tab at the bottom.

Install playMaker Editor

On the right of the editor window, you will see four tabs:

  1. FSM
  2. State
  3. Events
  4. Variables
FSM tabs

This is where you will choose the options and parameters for your state. Along with changing the name, you can add in your own description, which comes in handy when you have large projects. For example, this state will move the ball left. 

To add a state machine to an object, choose your object in the Hierarchy. Head over to the Editor window and right-click to add an FSM. This begins the state machine, and this will be state one.

Next, we need to create a System Event (what happens after user interaction). This is what will set off the state: for example, button clicked or mouse over. For this project, we will choose mouse over.

Let's create our next state and name it Ball Moves. You will immediately see a red exclamation point; this is because once you create a state, you will have to define a transition. To define a transition from one state to the other, right-click and choose Transition Target. There you will see the name of the second state you created. You will now see an arrow connecting your two states.

Transition Target

It's time to add physics to our object. The way we do this is by activating the Action Browser. There are numerous actions you can choose for your object, which is pretty cool. 

Action Browser

Feel free to look around at all of the commands you can use in your project. For every action, we will need to set up parameters.  

Adding Movement

Adding Force

In order to make our sphere move, we will need to give it force and velocity through the Action Browser. First, make sure the proper state is chosen. To search for an action, type it in the search box. Let's search for add force. Under the State tab to the right, change the Y variable to 100.

Adding Velocity

Now let's add velocity by repeating the steps above. Change the Y variable to 50.

Hit play and voila! Your sphere should move towards the camera. There is an almost unlimited number of actions you can perform with PlayMaker once you understand the interface.

Conclusion

In my opinion, PlayMaker is a great addition to Unity. Even if you do have coding experience, it can make it that much easier to create interactive, fully animated games in half the time. 

Unity has an active asset ecosystem, with many other products that help you build out your project. The nature of the platform also makes it a great place to improve your skills. Whatever the case, you can see what we have available in the Envato Market.

If you are a beginner, once you learn the interface you can progress very quickly. PlayMaker also has a very active community, where you can find questions and answers from other users.


Interactive Storytelling: Why and How We Tell Stories


Telling Stories

One of the main activities of humankind is telling stories. We all tell stories. 

When we describe the day just passed to our partner over dinner, we’re telling a story. When at the pub with our friends we list the problems we’re having at work, we’re telling a story.

Every day, we are also passive listeners of other people’s stories.

When we read a book, the author is telling us a story. When we watch a commercial on television, the advertisers are telling us a story. When we listen to a song, the singer is telling us a story.

If you think about it, every day we create and listen to stories. The whole world is some kind of cacophony of stories being mixed together and bouncing from one person to another, in an endless spiral of stories, tales, anecdotes.

We are stories telling other stories.

All these stories are being told and listened to because telling stories is a need.

It's always been.

From the beginning of humankind, man has told stories. 

Telling stories is an activity that humankind has been doing since before language and writing were even invented.

Rock paintings

But what is our purpose when we tell stories? And how can all of this be applied to the creation of a videogame?

The series of articles Interactive Storytelling will deal with this topic, looking into the mechanisms of classic and modern narrative, and will provide you with tools to tell an interactive story.

Why We Tell Stories

Telling stories can have different meanings and goals. Sometimes a story is a way to tell a fact; sometimes it is a way to exorcise our fears; in other cases someone tells a story just for fun.

Telling stories is a way to express our personality and creativity, and to show aspects of our thoughts that we wouldn’t be able to express otherwise.

Indeed, storytelling and communicating facts are two very different matters.

When we are communicating and presenting facts, we simply expose the pertinent events. That’s what journalists do, for example: they present the facts as they happened, without personal details, opinions, or feelings. This way, the listeners can make up their own minds.

But when we’re telling a story, the narrative becomes personal: we add our opinions (even just indirectly, through a look or tone of voice—or, if you’re Italian like me, using gestures), we emphasize details that are important to us, and generally we filter the facts through our way of looking at things and our cultural background.

Somehow, we customize the story we're telling so that our audience will get the message we want to convey, and not simply a list of the events. We don't want our listeners to come up with their own opinion; we want, first and foremost, to give them ours.

And we do so because we want our storytelling to cause a response.

Why We Tell Stories

Source: http://www.onespot.com/blog/infographic-the-science-of-storytelling/

We want people listening to our story to have fun, or feel moved. Or to sympathize with us, or get angry or comfort us. Even when we’re not the subject of the story, we tell it to create an emotional response in the other person—maybe to share a feeling.

Let Me Give You an Example

Case 1: you’re at the pub sitting by the bar, watching TV. The journalist is telling the news about the corruption of an important entrepreneur and explains that, as a consequence, many workers are going to lose their jobs.

Response: the information is presented in a neutral way, and you process it according to your personal situation and decide how to respond. If you’re a worker, you’re going to feel empathy and then rage. If you’re a writer, you might get curious, and maybe will want to write an article about the story. If you’re a rich entrepreneur, you may be wondering about the possibility of easily taking over the business of the corrupted man. Whatever the response is, it’s up to you.

An example of a story

Case 2: a few minutes later, a man arrives and sits next to you by the bar. He starts to talk and tells you how desperate he is because he’s one of the workers that are losing their jobs, and he has a daughter in high school and a mortgage to pay.

Response: the information is now presented to you in the way your interlocutor intended. His story is filled with rage and disappointment. You process it according to your personal situation, but you’re influenced by the way the story was told. Your response may be similar to one described in the first example or, instead, could be different, affected by the man who told this story.

The better the narrator is at telling stories, the more he will be able to affect those who are listening.

An example told by a narrator

So, in summary, we can say that we tell a story to convey a message, with the goal of provoking a response in whoever is listening.

How We Tell Stories

The Tools of the Trade

We use tools to tell our stories: these tools are not just pencil and paper or a microphone. As we’re going to see in greater detail, there are tools we often use without even realizing it.

For example, voice modulation while telling a story is a tool. Emphasizing a word and speeding up or slowing down the narrative are all means of pushing our listeners to respond the way we want. As narrators, we have the power to direct our interlocutors’ reactions towards our desired goal.

As professional narrators (because these articles are precisely meant to give you tips so that you will be able to apply the concepts of storytelling to products like videogames), we have a duty to do so.

We cannot write a story without knowing where we want to take the players or what kind of response we want to provoke in them: otherwise, our narrative is going to be very weak.

And in order to make the most of it, we have to correctly master the tools at our disposal.

Interdisciplinary Experiences

The first group of tools is interdisciplinary experiences. As narrators of a videogame, we can exploit a great many things to tell stories. Background music, for example. Or the lights in a scene; or the chromatic correction applied to the game camera.

But, in order to use these tools, we have to know them. The first piece of advice I can give you, then, is to get a smattering of each of these disciplines. You're a Game Designer, it's true, not an Artist or a Sound Designer. Nonetheless, you can't be totally unaware of how their tools work. Ignoring the basic rules of directing will cause you to write a cutscene lacking in rhythm, or to use angles not suitable for conveying the intended message to the player.

Clearly, if you work in a very large team, there are going to be people who specialize in those areas—and they will get to decide, probably, the final results regarding those aspects. However, having some knowledge of their fields will allow you to create a common ground and engage constructively with them, and to suggest what your vision is like as author of the story.

For example, if you want to convey fear or anxiety or discomfort during a game sequence, you can use certain kinds of background sounds or give the camera a particular angle. But in order to do so, you have to know how adjusting the angle of the camera can affect the player's response. All of this is part of the storytelling: camera, music, sounds, and colors are all tools to exploit.

A screenshot from Journey

Journey has an art direction that tells the story with images and colors.

So, the more you know about those, the better.

Life Experiences

The second group of tools is life experiences. When I teach Game Design, one of the first things I notice is that students think of themselves as potentially good game designers because they play a lot. It's clear that playing frequently and knowing the relevant market is essential. But just playing will make you a poor narrator, as you will tend to repeat what you saw in someone else's game.

Instead, doing things that are different from playing will widen your knowledge and culture.

One of the secrets of telling a good story is knowing who you are addressing, and, as a result, adjusting the way you tell the story so that it will affect your target audience as much as possible.

Now, if your target audience consists of people passionate about opera, one of the first things that you’ll have to do is attend ten performances at the opera house. And not just that. You’ll have to go hiking, paddle a canoe, paint a picture.

Why?

Because you'll learn new and different tools and ways of handling the narrative that will widen your possibilities in storytelling. The more you know, and the more you've studied or tried, the better you're going to be at telling a story.

Man and Animals

Making Up Stories

One of the main differences between man and animal is the ability to tell a story. Many animals are able to communicate among themselves and transmit to each other information about the location of food or a source of danger.

However, exactly as we do, animals are able to tell themselves stories. An animal can tell itself a story through dreams, while sleeping. I recently observed my cat. When he’s asleep, he often makes the same noises and mouth movements as he does while hunting for bugs around the house. I did a little research and found papers that show that cats dream things that have never happened.

But the big difference from our dreams is that animals cannot invent using their imagination. Their dream, even if it’s never happened, is based on actual facts. We, instead, are free to dream and make up stories based on things that not only have never happened, but that can’t happen (raise your hand if you’ve never dreamed of having some kind of superpower!).

Imagination, therefore, is the third tool at our disposal to tell stories: we can make them up and make them believable—sometimes until our listener is convinced that what we’re telling is the truth. I think that politics is based on this principle… but I’m not a great expert in politics.

Cooperation

The other great difference between us and the animals is the ability to build upon the ideas of others, in a collaborative way.

For example: an ant is able to say to another ant, "There's food further in this direction." But a third ant, observing the conversation, cannot add, "And if you go even further, you'll find water." The third ant, in order to express its message, will have to start a new conversation from scratch. This may look like a small limitation, but it's actually the reason why humankind has been able to achieve such great scientific progress. When we study something, we start from what others have studied and demonstrated and told us, for example through a book. We don't have to start from scratch and can focus on the next step.

This is a way of telling stories cooperatively. Cooperation, which is our fourth tool, is a form of interaction.

And we can define interaction like this: a reciprocal exchange of information between two parties, through a medium.

Storytelling in Video Games

Finally, after this long premise, we can deal with the heart of the subject: why and how to tell stories in a videogame. We've made clear what storytelling means, what the purpose of storytelling is, and which tools we can use, and we also gave a definition of interaction. This is really important because, with the birth of videogames, a form of narrative previously considered "experimental" has actually become one of the most popular and relevant kinds of contemporary narrative: interactive storytelling.

And this is the reason why storytelling in videogames is so important: the stories you can tell with a videogame cannot be told through any other medium.

In fact, videogames allow you to organize messages and to provoke reactions so complex and so powerful they make narrative a unique and, in many ways, new experience.

Interactive Storytelling on Netflix

Yeah, of course, Interactive Storytelling will be one of the new "big things" in the future.

Conclusion

Man has been telling stories since forever, but only with videogames did he start telling interactive stories.

But what is a classic narrative like? And how is a videogame narrative different from a classic one? Finally, how many kinds of interactive storytelling exist?

The next article will answer these and other questions.

Creating Toon Water for the Web: Part 1


In my Beginner's Guide to Shaders I focused exclusively on fragment shaders, which is enough for any 2D effect and every ShaderToy example. But there's a whole category of techniques that require vertex shaders. This tutorial will walk you through creating stylized toon water while introducing vertex shaders. I will also introduce the depth buffer and how to use that to get more information about your scene and create foam lines.

Here's what the final effect should look like. You can try a live demo here (left mouse to orbit, right mouse to pan, scroll wheel to zoom).

Kayak and lighthouse in water

Specifically, this effect is composed of:

  1. A subdivided translucent water mesh with displaced vertices to make waves.
  2. Static water lines on the surface.
  3. Fake buoyancy on the boats.
  4. Dynamic foam lines around the edge of objects in the water.
  5. A post-process distortion of everything underwater.

What I like about this effect is that it touches on a lot of different concepts in computer graphics, so it will allow us to draw on ideas from past tutorials, as well as to develop techniques we can use for a variety of future effects.

I'll be using PlayCanvas for this just because it has a convenient free web-based IDE, but everything should be applicable to any environment running WebGL. You can find a Three.js version of the source code at the end. I'll be assuming you're comfortable using fragment shaders and navigating the PlayCanvas interface. You can brush up on shaders here and skim an intro to PlayCanvas here.

Environment Setup

The goal of this section is to set up our PlayCanvas project and place some environment objects to test the water against. 

If you don't already have an account with PlayCanvas, sign up for one and create a new blank project. By default, you should have a couple of objects, a camera and a light in your scene.

A blank PlayCanvas project showing the objects the scene contains

Inserting Models

Google's Poly project is a really great resource for 3D models for the web. Here is the boat model I used. Once you download and unzip that, you should find a .obj and a .png file.

  1. Drag both files into the asset window in your PlayCanvas project.
  2. Select the material that was automatically created, and set its diffuse map to the .png file.
Click on the diffuse tab and select the boat image

Now you can drag the Tugboat.json into your scene and delete the Box and Plane objects. You can scale the boat up if it looks too small (I set mine to 50).

You can scale the model up using the properties panel on the right once it's selected

You can add any other models to your scene in the same way.

Orbit Camera

To set up an orbit camera, we'll copy a script from this PlayCanvas example. Go to that link, and click on Editor to enter the project.

  1. Copy the contents of mouse-input.js and orbit-camera.js from that tutorial project into the files of the same name in your own project.
  2. Add a Script component to your camera.
  3. Attach the two scripts to the camera.

Tip: You can create folders in the asset window to keep things organized. I put these two camera scripts under Scripts/Camera/, my model under Models/, and my material under Materials/.

Now, when you launch the game (play button on the top right of the scene view), you should be able to see your boat and orbit around it with the mouse. 

Subdivided Water Surface

The goal of this section is to generate a subdivided mesh to use as our water surface.

To generate the water surface, we're going to adapt some code from this terrain generation tutorial. Create a new script file called Water.js. Edit this script and create a new function called GeneratePlaneMesh that looks like this:

Water.prototype.GeneratePlaneMesh = function (options) {
    // 1 - Set default options if none are provided
    if (options === undefined)
        options = { subdivisions: 100, width: 10, height: 10 };

    // 2 - Generate points, UVs, and indices
    var positions = [];
    var uvs = [];
    var indices = [];
    var row, col;
    var normals;

    for (row = 0; row <= options.subdivisions; row++) {
        for (col = 0; col <= options.subdivisions; col++) {
            var position = new pc.Vec3((col * options.width) / options.subdivisions - (options.width / 2.0), 0, ((options.subdivisions - row) * options.height) / options.subdivisions - (options.height / 2.0));

            positions.push(position.x, position.y, position.z);

            uvs.push(col / options.subdivisions, 1.0 - row / options.subdivisions);
        }
    }

    for (row = 0; row < options.subdivisions; row++) {
        for (col = 0; col < options.subdivisions; col++) {
            indices.push(col + row * (options.subdivisions + 1));
            indices.push(col + 1 + row * (options.subdivisions + 1));
            indices.push(col + 1 + (row + 1) * (options.subdivisions + 1));

            indices.push(col + row * (options.subdivisions + 1));
            indices.push(col + 1 + (row + 1) * (options.subdivisions + 1));
            indices.push(col + (row + 1) * (options.subdivisions + 1));
        }
    }

    // Compute the normals
    normals = pc.calculateNormals(positions, indices);

    // Make the actual model
    var node = new pc.GraphNode();
    var material = new pc.StandardMaterial();

    // Create the mesh
    var mesh = pc.createMesh(this.app.graphicsDevice, positions, {
        normals: normals,
        uvs: uvs,
        indices: indices
    });

    var meshInstance = new pc.MeshInstance(node, mesh, material);

    // Add it to this entity
    var model = new pc.Model();
    model.graph = node;
    model.meshInstances.push(meshInstance);

    this.entity.addComponent('model');
    this.entity.model.model = model;
    this.entity.model.castShadows = false; // We don't want the water surface itself to cast a shadow
};

Now you can call this in the initialize function:

Water.prototype.initialize = function () {
    this.GeneratePlaneMesh({ subdivisions: 100, width: 10, height: 10 });
};

You should see just a flat plane when you launch the game now. But this is not just a flat plane: it's a mesh composed of thousands of vertices. As a challenge, try to verify this (it's a good excuse to read through the code you just copied).

Challenge #1: Displace the Y coordinate of each vertex by a random amount to get the plane to look something like the image below.
A subdivided plane with displaced vertices
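If you get stuck on Challenge #1, here's a minimal sketch of one possible solution. It perturbs each vertex inside the first loop of GeneratePlaneMesh, and the 0.2 scale is an arbitrary choice:

var position = new pc.Vec3(/* ... same calculation as before ... */);
// Displace each vertex vertically by a small random amount
position.y = Math.random() * 0.2;
positions.push(position.x, position.y, position.z);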

Waves

The goal of this section is to give the water surface a custom material and create animated waves.

To get the effects we want, we need to set up a custom material. Most 3D engines will have some pre-defined shaders for rendering objects and a way to override them. Here's a good reference for doing this in PlayCanvas.

Attaching a Shader

Let's create a new function called CreateWaterMaterial that defines a new material with a custom shader and returns it:

Water.prototype.CreateWaterMaterial = function () {
    // Create a new blank material
    var material = new pc.Material();
    // A name just makes it easier to identify when debugging
    material.name = "DynamicWater_Material";

    // Create the shader definition
    // Dynamically set the precision depending on device.
    var gd = this.app.graphicsDevice;
    var fragmentShader = "precision " + gd.precision + " float;\n";
    fragmentShader = fragmentShader + this.fs.resource;

    var vertexShader = this.vs.resource;

    // A shader definition used to create a new shader.
    var shaderDefinition = {
        attributes: {
            aPosition: pc.gfx.SEMANTIC_POSITION,
            aUv0: pc.SEMANTIC_TEXCOORD0,
        },
        vshader: vertexShader,
        fshader: fragmentShader
    };

    // Create the shader from the definition
    this.shader = new pc.Shader(gd, shaderDefinition);

    // Apply the shader to this material
    material.setShader(this.shader);

    return material;
};

This function grabs the vertex and fragment shader code from the script attributes. So let's define those at the top of the file (after the pc.createScript line):

Water.attributes.add('vs', {
    type: 'asset',
    assetType: 'shader',
    title: 'Vertex Shader'
});

Water.attributes.add('fs', {
    type: 'asset',
    assetType: 'shader',
    title: 'Fragment Shader'
});

Now we can create these shader files and attach them to our script. Go back to the editor, and create two new shader files: Water.frag and Water.vert. Attach these shaders to your script as shown below.

Water.vert and Water.frag are attached to WaterInit

If the new attributes don't show up in the editor, click the Parse button to refresh the script.

Now put this basic shader in Water.frag:

void main(void)
{
    vec4 color = vec4(0.0, 0.0, 1.0, 0.5);
    gl_FragColor = color;
}

And this in Water.vert:

attribute vec3 aPosition;

uniform mat4 matrix_model;
uniform mat4 matrix_viewProjection;

void main(void)
{
    gl_Position = matrix_viewProjection * matrix_model * vec4(aPosition, 1.0);
}

Finally, go back to Water.js and make it use our new custom material instead of the standard material. So instead of:

var material = new pc.StandardMaterial();

Do:

var material = this.CreateWaterMaterial();

If you launch the game now, the plane should be blue.

The shader we wrote renders the plane as blue

Hot Reloading

So far, we've just set up some dummy shaders on our new material. Before we get to writing the real effects, one last thing I want to set up is automatic code reloading.

Uncommenting the swap function in any script file (such as Water.js) enables hot-reloading. We'll see how to use this later to maintain the state even as we update the code in real time. But for now we just want to re-apply the shaders once we've detected a change. Shaders get compiled before they are run in WebGL, so we'll need to recreate the custom material to trigger this.
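For reference, the stub that PlayCanvas generates in new script files looks roughly like this once uncommented (the engine calls it on hot-reload, passing the old script instance):

// swap method called for script hot-reloading
// inherit your script state here
Water.prototype.swap = function (old) {
};

We'll fill this in later to carry state across reloads.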

We're going to check if the contents of our shader code have been updated and, if so, recreate the material. First, save the current shaders in the initialize:

// initialize code called once per entity
Water.prototype.initialize = function () {
    this.GeneratePlaneMesh();

    // Save the current shaders
    this.savedVS = this.vs.resource;
    this.savedFS = this.fs.resource;
};

And in the update, check if there have been any changes:

// update code called every frame
Water.prototype.update = function (dt) {
    if (this.savedFS != this.fs.resource || this.savedVS != this.vs.resource) {
        // Re-create the material so the shaders can be recompiled
        var newMaterial = this.CreateWaterMaterial();
        // Apply it to the model
        var model = this.entity.model.model;
        model.meshInstances[0].material = newMaterial;

        // Save the new shaders
        this.savedVS = this.vs.resource;
        this.savedFS = this.fs.resource;
    }
};

Now, to confirm this works, launch the game and change the color of the plane in Water.frag to a more tasteful blue. Once you save the file, it should update without having to refresh or relaunch! This was the color I chose:

vec4 color = vec4(0.0, 0.7, 1.0, 0.5);

Vertex Shaders

To create waves, we need to move every vertex in our mesh every frame. This sounds as if it's going to be very inefficient, but every vertex of every model already gets transformed on each frame we render. This is what the vertex shader does. 

If you think of a fragment shader as a function that runs on every pixel, takes a position, and returns a color, then a vertex shader is a function that runs on every vertex, takes a position, and returns a position.

The default vertex shader will take the world position of a given model, and return the screen position. Our 3D scene is defined in terms of x, y, and z, but your monitor is a flat two-dimensional plane, so we project our 3D world onto our 2D screen. This projection is what the view, projection, and model matrices take care of and is outside of the scope of this tutorial, but if you want to learn exactly what happens at this step, here's a very nice guide.

So this line:

gl_Position = matrix_viewProjection * matrix_model * vec4(aPosition, 1.0);

Takes aPosition as the 3D world position of a particular vertex and transforms it into gl_Position, which is the final 2D screen position. The 'a' prefix on aPosition signifies that this value is an attribute. Remember that a uniform variable is a value we can define on the CPU to pass to a shader, and that it retains the same value across all pixels/vertices. An attribute's value, on the other hand, comes from an array defined on the CPU. The vertex shader is called once for each value in that attribute array.
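As a quick side-by-side illustration (uTime is a uniform we'll add shortly; aPosition is already declared in our vertex shader):

uniform float uTime;      // one value, set from the CPU, shared by all vertices
attribute vec3 aPosition; // a per-vertex value, read from an array set up on the CPU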

You can see these attributes are set up in the shader definition we set up in Water.js:

var shaderDefinition = {
    attributes: {
        aPosition: pc.gfx.SEMANTIC_POSITION,
        aUv0: pc.SEMANTIC_TEXCOORD0,
    },
    vshader: vertexShader,
    fshader: fragmentShader
};

PlayCanvas takes care of setting up and passing an array of vertex positions for aPosition when we pass this enum, but in general you could pass any array of data to the vertex shader.

Moving the Vertices

Let's say you want to squish the plane by multiplying all x values by half. Should you change aPosition or gl_Position?

Let's try aPosition first. We can't modify an attribute directly, but we can make a copy:

attribute vec3 aPosition;

uniform mat4 matrix_model;
uniform mat4 matrix_viewProjection;

void main(void)
{
    vec3 pos = aPosition;
    pos.x *= 0.5;

    gl_Position = matrix_viewProjection * matrix_model * vec4(pos, 1.0);
}

The plane should now look more rectangular. Nothing strange there. Now what happens if we instead try modifying gl_Position?

attribute vec3 aPosition;

uniform mat4 matrix_model;
uniform mat4 matrix_viewProjection;

void main(void)
{
    vec3 pos = aPosition;
    //pos.x *= 0.5;

    gl_Position = matrix_viewProjection * matrix_model * vec4(pos, 1.0);
    gl_Position.x *= 0.5;
}

It might look the same until you start to rotate the camera. We're modifying screen space coordinates, which means it's going to look different depending on how you're looking at it.

So that's how you can move the vertices, and it's important to make this distinction between whether you're in world or screen space.

Challenge #2: Can you move the whole plane surface a few units up (along the Y axis) in the vertex shader without distorting its shape?
Challenge #3: I said gl_Position is 2D, but gl_Position.z does exist. Can you run some tests to determine if this value affects anything, and if so, what it's used for?
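For Challenge #2, here's a minimal sketch of one approach (I'll leave Challenge #3 for you to experiment with). Translating the position before the matrices are applied moves the surface in world space without distorting it; the offset of 3.0 is an arbitrary choice:

vec3 pos = aPosition;
pos.y += 3.0; // move the whole surface up, before projection
gl_Position = matrix_viewProjection * matrix_model * vec4(pos, 1.0);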

Adding Time

One last thing we need before we can create moving waves is a uniform variable to use as time. Declare a uniform in your vertex shader:

uniform float uTime;

Then, to pass this to our shader, go back to Water.js and define a time variable in the initialize:

Water.prototype.initialize = function () {
    this.time = 0; ///// First define the time here

    this.GeneratePlaneMesh();

    // Save the current shaders
    this.savedVS = this.vs.resource;
    this.savedFS = this.fs.resource;
};

Now, to pass this to our shader, we use material.setParameter. First we set an initial value at the end of the CreateWaterMaterial function:

// Create the shader from the definition
this.shader = new pc.Shader(gd, shaderDefinition);

////////////// The new part
material.setParameter('uTime', this.time);
this.material = material; // Save a reference to this material
////////////////

// Apply the shader to this material
material.setShader(this.shader);

return material;

Now, in the update function, we can increment time and access the material using the reference we created for it:

this.time += 0.1;
this.material.setParameter('uTime', this.time);

As a final step, in the swap function, copy over the old value of time, so that even if you change the code it'll continue incrementing without resetting to 0.

Water.prototype.swap = function (old) {
    this.time = old.time;
};

Now everything is ready. Launch the game to make sure there are no errors. Now let's move our plane by a function of time in Water.vert:

pos.y += cos(uTime);

And your plane should be moving up and down now! Because we have a swap function now, you can also update Water.js without having to relaunch. Try making time increment faster or slower to confirm this works.

Moving the plane up and down with a vertex shader
Challenge #4: Can you move the vertices so it looks like the wave below? 

As a hint, I talked in depth about different ways to create waves here. That was in 2D, but the same math applies here. If you'd rather just peek at the solution, here's the gist.
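If you'd like a starting point before peeking at the gist, here's a minimal sketch of one possible wave function for Water.vert; the frequency and amplitude constants are arbitrary choices:

vec3 pos = aPosition;
// Offset each vertex by a cosine of its position and time so the
// displacement travels across the surface as a wave
pos.y += cos(pos.z * 4.0 + uTime) * 0.1;
gl_Position = matrix_viewProjection * matrix_model * vec4(pos, 1.0);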

Translucency

The goal of this section is to make the water surface translucent.

You might have noticed that the color we're returning in Water.frag does have an alpha value of 0.5, but the surface is still completely opaque. Transparency in many ways is still an open problem in computer graphics. One cheap way to achieve it is to use blending.

Normally, when a pixel is about to be drawn, it checks the value in the depth buffer against its own depth value (its position along the Z axis) to determine whether to overwrite the current pixel on the screen or discard itself. This is what allows you to render a scene correctly without having to sort objects back to front. 

With blending, instead of simply discarding or overwriting, we can combine the color of the pixel that's already drawn (the destination) with the pixel that's about to be drawn (the source). You can see all the available blending functions in WebGL here.

To make the alpha work the way we expect it, we want the combined color of the result to be the source multiplied by the alpha plus the destination multiplied by one minus the alpha. In other words, if the alpha is 0.4, the final color should be:

finalColor = source * 0.4 + destination * 0.6;

In PlayCanvas, the pc.BLEND_NORMAL option does exactly this.
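For context, this is the classic alpha-blending setup; in raw WebGL it corresponds roughly to:

gl.enable(gl.BLEND);
// source factor = source alpha, destination factor = 1 - source alpha
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);

PlayCanvas configures this for you when you pick pc.BLEND_NORMAL.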

To enable this, just set the property on the material inside CreateWaterMaterial:

material.blendType = pc.BLEND_NORMAL;

If you launch the game now, the water will be translucent! This isn't perfect, though. A problem arises if the translucent surface overlaps with itself, as shown below.

Artifacts arise when a translucent surface overlaps with itself

We can fix this by using alpha to coverage, which is a multi-sampling technique for achieving transparency, instead of blending:

//material.blendType = pc.BLEND_NORMAL;
material.alphaToCoverage = true;

But this is only available in WebGL 2. For the remainder of this tutorial, I'll be using blending to keep it simple.

Summary

So far we've set up our environment and created our translucent water surface with animated waves from our vertex shader. The second part will cover applying buoyancy on objects, adding water lines to the surface, and creating the foam lines around the edges of objects that intersect the surface. 

The final part will cover applying the underwater post-process distortion effect and some ideas for where to go next.

Source Code

You can find the finished hosted PlayCanvas project here. A Three.js port is also available in this repository.

Interactive Storytelling: Linear Storytelling


In the last article we saw where the need for storytelling comes from, which is something intrinsic to humankind, and we said that telling a story basically means to convey a message in order to obtain a response from our listener.

We also started to examine the tools that we, as game designers, have available to learn how to tell stories. Finally, we mentioned the birth of interactive stories, typical of videogames. 

However, in order to thoroughly address this issue, we have to take a step back and start analysing the classic narrative (or passive narrative).

Passive Narrative

In the past, storytelling has traditionally been considered as a one-way relationship: the author of a story chooses a medium (book, theatrical play, movie, etc.) and uses it to tell a story that will be passively received by the audience.

But is it really like that?

Leaving aside the fact that in ancient times attempts were made to directly engage the public during theatrical performances (such as in experimental Greek theatre), passive narrative must actually be considered, more correctly, a two-stage narrative.

Because, if it is true that the author tells us a story to convey a message and to generate a response in us, then two different stages must be taken into account: reception and elaboration.

The Greek theatre was often experimental.

Whenever we watch or listen to a story, we are passive, it's true. For example, while watching a movie in the theater, we're usually sitting in the dark, in silence, ready to just "live" the experience that the director and the authors have prepared for us. This first stage, reception, is one-way: the author tells, we listen. We become receivers of the message from the author.

However, it’s not unusual to go out of the theater and talk about what we just watched, maybe with our friends or our partner. We comment on the movie, discuss our personal opinions (“I liked it”, “I got bored”, etc.), and often elaborate on the scenes, underlining the details we were most impressed by.

Therefore, we analyse the parts of the author's message that were etched in our brains, the ones that generated the strongest response in us.

It doesn't matter what kind of movie we just watched; this kind of post-reception interaction happens anyway: whether it's a comedy, drama, documentary, or action movie, the second stage, elaboration, always happens. Even if we went to the movie by ourselves, we would think about particular scenes and elaborate on them.

The length and intensity of this stage, clearly, can vary depending on how much we liked the movie (that is to say, depending on how much the message from the author managed to create a response in us).

The most famous franchises in the world are the ones that push their fans to wonder and speculate, for example, in between movies, about a character's origins that haven't been revealed yet. Thousands of Twitter messages, Facebook groups, YouTube videos, and Reddit threads, for example, were created by fans after watching Star Wars Episode VII, proposing theories about the mystery of Rey's parents.

For two years, Star Wars fans talked every day about Rey's character for Episode VII

When we develop a passion for a story that excites us, it usually happens that we dedicate ten or a hundred times as much time to the second stage compared to the first one.

Let's ask ourselves: why does this second stage even exist? Why does reading a book, closing it, putting it on our nightstand, and forgetting about it not seem to be enough? Why do we want, instead, to be directly involved, letting the suspension of disbelief make us live the responses that the author wants to create in us? And then why do we keep trying to interact with that story, reliving and analysing specific parts?

The keyword here is precisely interaction: it is one of the needs of humankind. Without going into too much detail about a complex field such as the human psyche (a field that, however, is increasingly being studied by game designers and authors of movies and books because it is obviously extremely useful in order to calibrate our messages and get exactly the desired responses), one of the fundamental parts of human personality is the ego. And it’s precisely our ego that makes us want to be in the center of the story, or pushes us to discover some contact points between the characters in a story and ourselves. It’s our ego that lets us relate to the characters and make our reactions to the story we’re being told so powerful that they become able to actually affect our reality.

Without the ego, we wouldn’t be moved by reading a dramatic book.

At the same time, the ego leads us to not want to play just a minor role in the story—that is to say, to be just a passive audience.

Without the ego, we wouldn’t be moved by reading a dramatic book

We want, by instinct, to be at the center of the scene (and let’s say that we are also living in a time in which society and technology push us in this direction). Thus, if we can’t edit the story while it’s being told to us, we wish to interact with it anyway, at a later stage.

One of the first authors who understood this mechanism was David Lynch. Perhaps one of the most important authors of the modern era, he is certainly the father of TV series as we know them today. In 1990, when David Lynch began telling the story of an unknown (and fictional) town in the northern United States, Twin Peaks, he was following a hunch: he created a mystery that engaged viewers all over the world and led them to look for a solution. 

The dreamlike puzzle created by Lynch and Frost (the other author of Twin Peaks) kept viewers glued to that story for two and a half years (and then for more than 25 years, because the fans never gave up on that unsolved mystery until the release of a very long-awaited third season just last year). The story brought viewers to interact among themselves: they shared theories and possible scenarios. For the first time in the history of television, the second stage became truly important and clearly contributed to the success of Lynch's work.

Then how can we call this experience passive if, sometimes, the second stage lasts longer and is more intense than the first one?

Twin Peaks changed forever the way to tell stories on TV

You'll agree with me that the definition is inadequate, to say the least. However, it's true that during the narration the audience is passive: throughout the transmission of the author's message, whoever is receiving it can only passively listen. The audience is not able to intervene in the events or shift their focus to minor details that look interesting to them. Furthermore, in the case of media such as cinema and theater, the audience doesn't even have the chance to choose the narrative rhythm: the author's message is delivered in an unstoppable way, like a river in flood that overwhelms the viewers.

From this point of view, videogames are deeply different, and their interactive narration opens up countless possibilities that, before videogames became an established medium, used to be unthinkable.

The Evolution of Storytelling

It’s interesting to note how, looking at the videogame world, older media have always felt a little bit of envy. The authors of a movie or a TV series are clearly aware of how charming interaction can be for the audience, and they know that, generation after generation, classic storytelling is getting less and less appealing.

In the last 30 years, many attempts have been made to hybridize the classic nature of certain media, and some have been more successful than others.

One of the most famous attempts of this kind is the book series Choose Your Own Adventure: books where the story is made of forks in the road in which the reader/player can make choices and often fight against enemies or use a style of interaction very similar to that of tabletop role-playing games.

In the eighties all nerds (like me) read dozens of books like that

Another example is the eighties TV series Captain Power and the Soldiers of the Future, which allowed viewers to use infra-red devices to fight the enemies on the screen and score points, with the player's action figure reacting based on the results.

A legendary tagline

A recent example is the interactive episode of Puss in Boots, published on Netflix and designed for tablets: it's a cartoon for children with choices to make and forks in the road in the story.

The diagram of the Puss in Boots' branches on Netflix

I’m really curious about what will happen in the future in this respect.

What about you?

Interactive Storytelling

Now that we have looked at traditional (to a certain extent, passive) storytelling, it’s time to go into the very subject of these articles: interactive storytelling.

First of all, let's try to set things straight: are all games narrative?

To answer, let’s look at a few examples.

Chess is one of the oldest and most popular games in the world. It represents a conflict on a battlefield between two armies and, as many of you will know, chess and go are considered the most strategic games in the world.

However, is "let's simulate a battle" enough to define chess as a narrative game?

No.

Because all the elements we highlighted as fundamental to narration are missing: the narrator is missing, and so is the message.

The same goes for videogames.

There are completely abstract games (like Tetris) and games in which storytelling is a simple expedient for the setting of the game. Consider Super Mario Bros, in its first version. There was a basic story (Bowser has kidnapped Princess Peach and Mario must save her). But there’s no actual storytelling, no narrator, no message.

The reason for the success of Super Mario Bros was certainly not its narrative structure

There are responses, but they are directly provoked by gameplay. In fact, taking away the story from Super Mario Bros doesn’t affect the user experience at all.

The lack of any actual storytelling, however, doesn’t invalidate the quality of the game. On the other hand, adding narration to the structure of the game as it is would probably burden the experience and ruin the perfect balance of the design.

It's no accident that, even though texts and cut-scenes have appeared in more modern Super Mario games, the story keeps working as a mere expedient, as a corollary to the gameplay.

When we as designers, therefore, start approaching the design of a new game, we have to ask ourselves a couple of questions:

  1. Does my story (my message) need interactive storytelling?
  2. How can interactive storytelling improve my story?

Answering these questions first will let us understand whether and how to include interactive storytelling in our game.

We may realise that a simple story used as an expedient is enough, or that the game doesn’t need a story at all! The assumption that any modern game should have interactive storytelling is a mistake we have to avoid.

If, instead, the answers are positive, then it’s time to learn how to master the art of interactive storytelling.

Linear Interactive Storytelling

The first kind of interactive storytelling that we are going to consider is the linear one. This definition might, at first sight, appear counterintuitive, but it's actually the most common kind of interactive storytelling.

Videogames using this kind of storytelling allow the player to interact with the events, choosing the narrative rhythm (in the case, for example, of a quest that won’t proceed without the player’s intervention), choosing the order in which to go through the events (for example, when there are two parallel quests active at the same time and the player can decide which one to complete first), or setting the desired level of accuracy (for example, when reading documents and clues in a game is not mandatory but increases the player’s knowledge about the story or the game’s setting).

However, as free as the player feels, the story eventually goes precisely the way the author meant.

It’s as if the game designer had taken his message and split it into many different pieces to be put together by the player.

Developing this kind of interaction is clearly more complicated than classic storytelling: certain tricks of the trade commonly used in book-writing, for example, cannot be used here. 

Consider a game with linear interactive storytelling, maybe one of the most famous in the world: The Secret of Monkey Island. It allows players, on a number of occasions, to explore the story and interact with it in the order and at the rhythm they prefer. There are at least two large open sections where players have multiple tasks to do, following their own hunches and preferences.

Probably the first game thanks to which I approached interactive storytelling

A more recent example is The Legend of Zelda: Breath of the Wild, in which the story is told through flashbacks but it is up to the player to decide which parts of the game will be handled first and thus which pieces of the puzzle will be put together first.

Each part of the story, however, has been written to coexist with the others without contradicting or hindering them.

There’s no need to deal with this kind of problem when writing a book.

In order to be sure to create a correct interaction, therefore, a game designer has to use certain tools.

When writing a book, one often takes notes and sketches diagrams. Not all authors, I know, take this approach. Some of them are way more spontaneous: they sit in front of the keyboard and start writing.

But when you’re dealing with interactive storytelling, the spontaneous approach is simply not feasible: outlining the story, using flow charts, creating tables and summaries about every character of the story is the necessary starting point.

All these documents, in fact, will be part of the Game Design Document (GDD), which contains all the elements of the game.

Writing this kind of story, without losing track or making mistakes, is definitely complicated. The more diagrams and notes you’ve got, the more you’ll limit the risk of mistakes.

But it won’t be enough.

When writers finish their work, they usually hand it to a proofreader, who will thoroughly read it and point out mistakes and inconsistencies in the text. Likewise, designers will have to entrust their work to a QA department, made up of different people who will check the story and systematically test all cases of interaction, looking for every possible loophole.

Conclusion

And yet… what if we want more? What if we want to give the players the freedom to affect the events and make their experience even more intimate and personal, providing each player with a different response?

In this case we would have to resort to non-linear interactive storytelling that, along with the direct method and the indirect method, will be the subject of the third and last article of this series.

Creating Toon Water for the Web: Part 2


Welcome back to this three-part series on creating stylized toon water in PlayCanvas using vertex shaders. In Part 1, we covered setting up our environment and water surface. This part will cover applying buoyancy to objects, adding water lines to the surface, and creating the foam lines with the depth buffer around the edges of objects that intersect the surface. 

I made some small changes to my scene to make it look a little nicer. You can customize your scene however you like, but what I did was:

  • Added the lighthouse and the octopus models.
  • Added a ground plane with color #FFA457.
  • Added a clear color for the camera of #6CC8FF.
  • Added an ambient color to the scene of #FFC480 (you can find this in the scene settings).

Below is what my starting point now looks like.

The scene now includes an octopus and a lighthouse

Buoyancy 

The most straightforward way to create buoyancy is just to create a script that will push objects up and down. Create a new script called Buoyancy.js and set its initialize to:

Buoyancy.prototype.initialize = function () {
    this.initialPosition = this.entity.getPosition().clone();
    this.initialRotation = this.entity.getEulerAngles().clone();
    // The initial time is set to a random value so that if
    // this script is attached to multiple objects they won't
    // all move the same way
    this.time = Math.random() * 2 * Math.PI;
};

Now, in the update, we increment time and rotate the object:

Buoyancy.prototype.update = function (dt) {
    this.time += 0.1;

    // Move the object up and down
    var pos = this.entity.getPosition().clone();
    pos.y = this.initialPosition.y + Math.cos(this.time) * 0.07;
    this.entity.setPosition(pos.x, pos.y, pos.z);

    // Rotate the object slightly
    var rot = this.entity.getEulerAngles().clone();
    rot.x = this.initialRotation.x + Math.cos(this.time * 0.25) * 1;
    rot.z = this.initialRotation.z + Math.sin(this.time * 0.5) * 2;
    this.entity.setLocalEulerAngles(rot.x, rot.y, rot.z);
};

Apply this script to your boat and watch it bobbing up and down in the water! You can apply this script to several objects (including the camera—try it)!
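
If you'd like each object to bob with its own amplitude and speed, one option is to expose those values as script attributes. Here's a minimal sketch on top of the Buoyancy.js above (the attribute names are my own, not from the original tutorial):

// Hypothetical tuning attributes for Buoyancy.js, editable per object in the editor.
Buoyancy.attributes.add('amplitude', { type: 'number', default: 0.07 });
Buoyancy.attributes.add('speed', { type: 'number', default: 0.1 });

// Then, in update, use them in place of the hard-coded numbers:
//   this.time += this.speed;
//   pos.y = this.initialPosition.y + Math.cos(this.time) * this.amplitude;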

Texturing the Surface

Right now, the only way you can see the waves is by looking at the edges of the water surface. Adding a texture helps make motion on the surface more visible and is a cheap way to simulate reflections and caustics.

You can try to find some caustics texture or make your own. Here's one I drew in Gimp that you can freely use. Any texture will work as long as it can be tiled seamlessly.

Once you've found a texture you like, drag it into your project's asset window. We need to reference this texture in our Water.js script, so create an attribute for it:

Water.attributes.add('surfaceTexture', {
    type: 'asset',
    assetType: 'texture',
    title: 'Surface Texture'
});

And then assign it in the editor:

The water texture is added to the water script

Now we need to pass it to our shader. Go to Water.js and set a new parameter in the CreateWaterMaterial function:

material.setParameter('uSurfaceTexture', this.surfaceTexture.resource);

Now go into Water.frag and declare our new uniform:

uniform sampler2D uSurfaceTexture;

We're almost there. To render the texture onto the plane, we need to know where each pixel is along the mesh, which means we need to pass some data from the vertex shader to the fragment shader.

Varying Variables

A varying variable allows you to pass data from the vertex shader to the fragment shader. This is the third type of special variable you can have in a shader (the other two being uniform and attribute). It is defined for each vertex and is accessible by each pixel. Since there are a lot more pixels than vertices, the value is interpolated between vertices (this is where the name "varying" comes from—it varies between the values you give it).

To try this out, declare a new variable in Water.vert as a varying:

varying vec3 ScreenPosition;

And then set it to gl_Position after it's been computed:

ScreenPosition = gl_Position.xyz;

Now go back to Water.frag and declare the same variable. There's no way to get some debug output from within a shader, but we can use color to visually debug. Here's one way to do this:

uniform sampler2D uSurfaceTexture;
varying vec3 ScreenPosition;

void main(void)
{
    vec4 color = vec4(0.0, 0.7, 1.0, 0.5);

    // Testing out our new varying variable
    color = vec4(vec3(ScreenPosition.x), 1.0);

    gl_FragColor = color;
}

The plane should now look black and white, where the line separating them is where ScreenPosition.x is 0. Color values only go from 0 to 1, but the values in ScreenPosition can be outside this range. They get automatically clamped, so if you're seeing black, that could be 0, or negative.

What we've just done is pass the screen position of every vertex to every pixel. You can see that the line separating the black and white sides will always be in the center of the screen, regardless of where the surface actually is in the world.

Challenge #1: Create a new varying variable to pass the world position instead of the screen position. Visualize it in the same way as we did above. If the color doesn't change with the camera, then you've done this correctly.

Using UVs 

The UVs are the 2D coordinates for each vertex along the mesh, normalized from 0 to 1. This is exactly what we need to sample the texture onto the plane correctly, and it should already be set up from the previous part.

Declare a new attribute in Water.vert (this name comes from the shader definition in Water.js):

attribute vec2 aUv0;

And all we need to do is pass it to the fragment shader, so just create a varying and set it to the attribute:

// In Water.vert
// We declare this along with our other variables at the top
varying vec2 vUv0;

// ..
// Down in the main function, we store the value of the attribute
// in the varying so that the frag shader can access it
vUv0 = aUv0;

Now we declare the same varying in the fragment shader. To verify it works, we can visualize it as before, so that Water.frag now looks like:

uniform sampler2D uSurfaceTexture;
varying vec2 vUv0;

void main(void)
{
    vec4 color = vec4(0.0, 0.7, 1.0, 0.5);

    // Confirming UVs
    color = vec4(vec3(vUv0.x), 1.0);

    gl_FragColor = color;
}

And you should see a gradient, confirming that we have a value of 0 at one end and 1 at the other. Now, to actually sample our texture, all we have to do is:

color = texture2D(uSurfaceTexture, vUv0);

And you should see the texture on the surface:

Caustics texture is applied to the water surface

Stylizing the Texture

Instead of just setting the texture as our new color, let's combine it with the blue we had:

uniform sampler2D uSurfaceTexture;
varying vec2 vUv0;

void main(void)
{
    vec4 color = vec4(0.0, 0.7, 1.0, 0.5);

    vec4 WaterLines = texture2D(uSurfaceTexture, vUv0);
    color.rgba += WaterLines.r;

    gl_FragColor = color;
}

This works because the color of the texture is black (0) everywhere except for the water lines. By adding it, we don't change the original blue color except for the places where there are lines, where it becomes brighter. 

This isn't the only way to combine the colors, though.

Challenge #2: Can you combine the colors in a way to get the subtler effect shown below?
Water lines applied to the surface with a more subtle color

Moving the Texture

As a final effect, we want the lines to move along the surface so it doesn't look so static. To do this, we use the fact that any value given to the texture2D function outside the 0 to 1 range will wrap around (such that 1.5 and 2.5 both become 0.5). So we can increment our position by the time uniform variable we already set up and multiply the position to either increase or decrease the density of the lines in our surface, making our final frag shader look like this:

uniform sampler2D uSurfaceTexture;
uniform float uTime;
varying vec2 vUv0;

void main(void)
{
    vec4 color = vec4(0.0, 0.7, 1.0, 0.5);

    vec2 pos = vUv0;
    // Multiplying by a number greater than 1 causes the
    // texture to repeat more often
    pos *= 2.0;
    // Displacing the whole texture so it moves along the surface
    pos.y += uTime * 0.02;

    vec4 WaterLines = texture2D(uSurfaceTexture, pos);
    color.rgba += WaterLines.r;

    gl_FragColor = color;
}

Foam Lines & the Depth Buffer

Rendering foam lines around objects in water makes it far easier to see how objects are immersed and where they cut the surface. It also makes our water look a lot more believable. To do this, we need to somehow figure out where the edges are on each object, and do this efficiently.

The Trick

What we want is to be able to tell, given a pixel on the surface of the water, whether it's close to an object. If so, we can color it as foam. There's no straightforward way to do this (that I know of). So to figure this out, we're going to use a helpful problem-solving technique: come up with an example we know the answer to, and see if we can generalize it. 

Consider the view below.

Lighthouse in water

Which pixels should be part of the foam? We know it should look something like this:

Lighthouse in water with foam

So let's think about two specific pixels. I've marked two with stars below. The black one is in the foam. The red one is not. How can we tell them apart inside a shader?

Lighthouse in water with two marked pixels

What we know is that even though those two pixels are close together in screen space (both are rendered right on top of the lighthouse body), they're actually far apart in world space. We can verify this by looking at the same scene from a different angle, as shown below.

Viewing the lighthouse from above

Notice that the red star isn't on top of the lighthouse body as it appeared, but the black star actually is. We can tell them apart using the distance to the camera, commonly referred to as "depth", where a depth of 1 means it's very close to the camera and a depth of 0 means it's very far.  But it's not just a matter of the absolute world distance, or depth, to the camera. It's the depth compared to the pixel behind.

Look back to the first view. Let's say the lighthouse body has a depth value of 0.5. The black star's depth would be very close to 0.5. So it and the pixel behind it have similar depth values. The red star, on the other hand, would have a much larger depth, because it would be closer to the camera, say 0.7. And yet the pixel behind it, still on the lighthouse, has a depth value of 0.5, so there's a bigger difference there.

This is the trick. When the depth of the pixel on the water surface is close enough to the depth of the pixel it's drawn on top of, we're pretty close to the edge of something, and we can render it as foam. 
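
To make the rule concrete, here's the decision sketched as plain JavaScript (the threshold is a made-up tuning value; the real version lives in the fragment shader later in this section, and the rule holds however the depths are measured):

// Sketch of the foam rule. worldDepth: depth of the water pixel itself;
// screenDepth: depth of the pixel already drawn behind it.
function isFoam(worldDepth, screenDepth, threshold) {
    // Close depths mean the water is hugging a surface: draw foam.
    return Math.abs(screenDepth - worldDepth) < threshold;
}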

So we need more information than is available in any given pixel. We somehow need to know the depth of the pixel that it's about to be drawn on top of. This is where the depth buffer comes in.

The Depth Buffer

You can think of a buffer, or a framebuffer, as just an off-screen render target, or a texture. You would want to render off-screen when you're trying to read data back, a technique that this smoke effect employs.

The depth buffer is a special render target that holds information about the depth values at each pixel. Remember that the value in gl_Position computed in the vertex shader was a screen space value, but it also had a third coordinate, a Z value. This Z value is used to compute the depth which is written to the depth buffer. 

The purpose of the depth buffer is to draw our scene correctly, without the need to sort objects back to front. Every pixel that is about to be drawn first consults the depth buffer. If its depth value is greater than the value in the buffer, it is drawn, and its own value overwrites the one in the buffer. Otherwise, it is discarded (because it means another object is in front of it).
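
In pseudocode, and using this article's convention that a larger depth means closer to the camera, each candidate pixel goes through something like this (a sketch of the idea, not engine code):

// Per-pixel depth test, sketched in JavaScript.
function tryDrawPixel(x, y, depth, color, depthBuffer, colorBuffer) {
    if (depth > depthBuffer[y][x]) {
        colorBuffer[y][x] = color; // in front of what's there: draw it
        depthBuffer[y][x] = depth; // and record the new closest depth
    }
    // otherwise discard: something closer has already been drawn
}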

You can actually turn off depth testing for the water material to see how things would look without it. You can try this in Water.js:

material.depthTest = false;

You'll see how the water will always be rendered on top, even if it is behind opaque objects.

Visualizing the Depth Buffer

Let's add a way to visualize the depth buffer for debugging purposes. Create a new script called DepthVisualize.js. Attach this to your camera. 

All we have to do to get access to the depth buffer in PlayCanvas is to say:

this.entity.camera.camera.requestDepthMap();

This will then automatically inject a uniform into all of our shaders that we can use by declaring it as:

uniform sampler2D uDepthMap;

Below is a sample script that requests the depth map and renders it on top of our scene. It's set up for hot-reloading. 

var DepthVisualize = pc.createScript('depthVisualize');

// initialize code called once per entity
DepthVisualize.prototype.initialize = function () {
    this.entity.camera.camera.requestDepthMap();
    this.antiCacheCount = 0; // To prevent the engine from caching our shader so we can live-update it

    this.SetupDepthViz();
};

DepthVisualize.prototype.SetupDepthViz = function () {
    var device = this.app.graphicsDevice;
    var chunks = pc.shaderChunks;

    this.fs = '';
    this.fs += 'varying vec2 vUv0;';
    this.fs += 'uniform sampler2D uDepthMap;';
    this.fs += '';
    this.fs += 'float unpackFloat(vec4 rgbaDepth) {';
    this.fs += '    const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);';
    this.fs += '    float depth = dot(rgbaDepth, bitShift);';
    this.fs += '    return depth;';
    this.fs += '}';
    this.fs += '';
    this.fs += 'void main(void) {';
    this.fs += '    float depth = unpackFloat(texture2D(uDepthMap, vUv0)) * 30.0;';
    this.fs += '    gl_FragColor = vec4(vec3(depth), 1.0);';
    this.fs += '}';

    this.shader = chunks.createShaderFromCode(device, chunks.fullscreenQuadVS, this.fs, "renderDepth" + this.antiCacheCount);
    this.antiCacheCount++;

    // We manually create a draw call to render the depth map on top of everything
    this.command = new pc.Command(pc.LAYER_FX, pc.BLEND_NONE, function () {
        pc.drawQuadWithShader(device, null, this.shader);
    }.bind(this));
    this.command.isDepthViz = true; // Just mark it so we can remove it later

    this.app.scene.drawCalls.push(this.command);
};

// update code called every frame
DepthVisualize.prototype.update = function (dt) {

};

// swap method called for script hot-reloading
// inherit your script state here
DepthVisualize.prototype.swap = function (old) {
    this.antiCacheCount = old.antiCacheCount;

    // Remove the depth viz draw call
    for (var i = 0; i < this.app.scene.drawCalls.length; i++) {
        if (this.app.scene.drawCalls[i].isDepthViz) {
            this.app.scene.drawCalls.splice(i, 1);
            break;
        }
    }
    // Recreate it
    this.SetupDepthViz();
};

// to learn more about script anatomy, please read:
// https://developer.playcanvas.com/en/user-manual/scripting/

Try copying that in, and comment/uncomment the line this.app.scene.drawCalls.push(this.command); to toggle the depth rendering. It should look something like the image below.

Boat and lighthouse scene rendered as a depth map
Challenge #3: The water surface is not drawn into the depth buffer. The PlayCanvas engine does this intentionally. Can you figure out why? What's special about the water material? To put it another way, based on our depth checking rules, what would happen if the water pixels did write to the depth buffer?

Hint: There is one line you can change in Water.js that will cause the water to be written to the depth buffer.

Another thing to notice is that I multiply the depth value by 30 in the embedded shader. This is just to be able to see it clearly, because otherwise the range of values is too small to see as shades of color.

Implementing the Trick

The PlayCanvas engine includes a bunch of helper functions to work with depth values, but at the time of writing they aren't released into production, so we're just going to set these up ourselves.

Add the following uniforms in Water.frag:

// These uniforms are all injected automatically by PlayCanvas
uniform sampler2D uDepthMap;
uniform vec4 uScreenSize;
uniform mat4 matrix_view;
// We have to set this one up ourselves
uniform vec4 camera_params;

Define these helper functions above the main function:

#ifdef GL2
float linearizeDepth(float z) {
    z = z * 2.0 - 1.0;
    return 1.0 / (camera_params.z * z + camera_params.w);
}
#else
#ifndef UNPACKFLOAT
#define UNPACKFLOAT
float unpackFloat(vec4 rgbaDepth) {
    const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
    return dot(rgbaDepth, bitShift);
}
#endif
#endif

float getLinearScreenDepth(vec2 uv) {
    #ifdef GL2
    return linearizeDepth(texture2D(uDepthMap, uv).r) * camera_params.y;
    #else
    return unpackFloat(texture2D(uDepthMap, uv)) * camera_params.y;
    #endif
}

float getLinearDepth(vec3 pos) {
    return -(matrix_view * vec4(pos, 1.0)).z;
}

float getLinearScreenDepth() {
    vec2 uv = gl_FragCoord.xy * uScreenSize.zw;
    return getLinearScreenDepth(uv);
}

Pass some information about the camera to the shader in Water.js. Put this where you pass other uniforms like uTime:

if (!this.camera) {
    this.camera = this.app.root.findByName("Camera").camera;
}
var camera = this.camera;
var n = camera.nearClip;
var f = camera.farClip;
var camera_params = [
    1 / f,
    f,
    (1 - f / n) / 2,
    (1 + f / n) / 2
];

material.setParameter('camera_params', camera_params);

Finally, we need the world position for each pixel in our frag shader. We need to get this from the vertex shader. So define a varying in Water.frag:

varying vec3 WorldPosition;

Define the same varying in Water.vert. Then set it to the distorted position in the vertex shader, so the full code would look like:

attribute vec3 aPosition;
attribute vec2 aUv0;

varying vec2 vUv0;
varying vec3 WorldPosition;

uniform mat4 matrix_model;
uniform mat4 matrix_viewProjection;

uniform float uTime;

void main(void)
{
    vUv0 = aUv0;
    vec3 pos = aPosition;

    pos.y += cos(pos.z * 5.0 + uTime) * 0.1 * sin(pos.x * 5.0 + uTime);

    gl_Position = matrix_viewProjection * matrix_model * vec4(pos, 1.0);

    WorldPosition = pos;
}

Actually Implementing the Trick

Now we're finally ready to implement the technique described at the beginning of this section. We want to compare the depth of the pixel we're at to the depth of the pixel behind it. The pixel we're at comes from the world position, and the pixel behind comes from the screen position. So grab these two depths:

float worldDepth = getLinearDepth(WorldPosition);
float screenDepth = getLinearScreenDepth();
Challenge #4: One of these values will never be greater than the other (assuming depthTest = true). Can you deduce which?

We know the foam is going to be where the distance between these two values is small. So let's render that difference at each pixel. Put this at the bottom of your shader (and make sure the depth visualization script from the previous section is turned off):

color = vec4(vec3(screenDepth - worldDepth), 1.0);
gl_FragColor = color;

The result should look something like this:

A rendering of the depth difference at each pixel

This correctly picks out the edges of any object immersed in water, in real time! You can of course scale the difference we're rendering to make the foam look thicker or thinner.

There are now a lot of ways in which you can combine this output with the water surface color to get nice-looking foam lines. You could keep it as a gradient, use it to sample from another texture, or set it to a specific color if the difference is less than or equal to some threshold.

My favorite look is setting it to a color similar to that of the static water lines, so my final main function looks like this:

void main(void)
{
    vec4 color = vec4(0.0, 0.7, 1.0, 0.5);

    vec2 pos = vUv0 * 2.0;
    pos.y += uTime * 0.02;

    vec4 WaterLines = texture2D(uSurfaceTexture, pos);
    color.rgba += WaterLines.r * 0.1;

    float worldDepth = getLinearDepth(WorldPosition);
    float screenDepth = getLinearScreenDepth();
    float foamLine = clamp((screenDepth - worldDepth), 0.0, 1.0);

    if (foamLine < 0.7) {
        color.rgba += 0.2;
    }

    gl_FragColor = color;
}

Summary

We created buoyancy on objects floating in the water, we gave our surface a moving texture to simulate caustics, and we saw how we could use the depth buffer to create dynamic foam lines.

To finish this up, the next and final part will introduce post-process effects and how to use them to create the underwater distortion effect.

Source Code

You can find the finished hosted PlayCanvas project here. A Three.js port is also available in this repository.

Creating Toon Water for the Web: Part 3


Welcome back to this three-part series on creating stylized toon water in PlayCanvas using vertex shaders. In Part 2 we covered buoyancy & foam lines. In this final part, we're going to apply the underwater distortion as a post-process effect.

Refraction & Post-Process Effects

Our goal is to visually communicate the refraction of light through water. We've already covered how to create this sort of distortion in a fragment shader in a previous tutorial for a 2D scene. The only difference here is that we'll need to figure out which area of the screen is underwater and only apply the distortion there. 

Post-Processing

In general, a post-process effect is anything applied to the whole scene after it is rendered, such as a colored tint or an old CRT screen effect. Instead of rendering your scene directly to the screen, you first render it to a buffer or texture, and then render that to the screen, passing through a custom shader.
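
Conceptually, the frame loop becomes a two-step pipeline. Here's a rough sketch in JavaScript pseudocode (renderToTexture and drawFullscreenQuad are hypothetical helpers standing in for what the engine does for us, not the PlayCanvas API):

// Conceptual flow of a post effect, not actual engine code.
function renderFrame(scene, camera, postShader) {
    // 1. Render the scene into an off-screen texture
    var sceneTexture = renderToTexture(scene, camera);
    // 2. Draw a fullscreen quad that reads that texture through our shader
    drawFullscreenQuad(postShader, { uColorBuffer: sceneTexture });
}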

In PlayCanvas, you can set up a post-process effect by creating a new script. Call it Refraction.js, and copy this template to start with:

//--------------- POST EFFECT DEFINITION ------------------------//
pc.extend(pc, function () {
    // Constructor - Creates an instance of our post effect
    var RefractionPostEffect = function (graphicsDevice, vs, fs, buffer) {
        var fragmentShader = "precision " + graphicsDevice.precision + " float;\n";
        fragmentShader = fragmentShader + fs;

        // this is the shader definition for our effect
        this.shader = new pc.Shader(graphicsDevice, {
            attributes: {
                aPosition: pc.SEMANTIC_POSITION
            },
            vshader: vs,
            fshader: fs
        });

        this.buffer = buffer;
    };

    // Our effect must derive from pc.PostEffect
    RefractionPostEffect = pc.inherits(RefractionPostEffect, pc.PostEffect);

    RefractionPostEffect.prototype = pc.extend(RefractionPostEffect.prototype, {
        // Every post effect must implement the render method which
        // sets any parameters that the shader might require and
        // also renders the effect on the screen
        render: function (inputTarget, outputTarget, rect) {
            var device = this.device;
            var scope = device.scope;

            // Set the input render target to the shader. This is the image rendered from our camera
            scope.resolve("uColorBuffer").setValue(inputTarget.colorBuffer);

            // Draw a full screen quad on the output target. In this case the output target is the screen.
            // Drawing a full screen quad will run the shader that we defined above
            pc.drawFullscreenQuad(device, outputTarget, this.vertexBuffer, this.shader, rect);
        }
    });

    return {
        RefractionPostEffect: RefractionPostEffect
    };
}());

//--------------- SCRIPT DEFINITION ------------------------//
var Refraction = pc.createScript('refraction');

Refraction.attributes.add('vs', {
    type: 'asset',
    assetType: 'shader',
    title: 'Vertex Shader'
});

Refraction.attributes.add('fs', {
    type: 'asset',
    assetType: 'shader',
    title: 'Fragment Shader'
});

// initialize code called once per entity
Refraction.prototype.initialize = function () {
    var effect = new pc.RefractionPostEffect(this.app.graphicsDevice, this.vs.resource, this.fs.resource);

    // add the effect to the camera's postEffects queue
    var queue = this.entity.camera.postEffects;
    queue.addEffect(effect);

    this.effect = effect;

    // Save the current shaders for hot reload
    this.savedVS = this.vs.resource;
    this.savedFS = this.fs.resource;
};

Refraction.prototype.update = function () {
    if (this.savedFS != this.fs.resource || this.savedVS != this.vs.resource) {
        this.swap(this);
    }
};

Refraction.prototype.swap = function (old) {
    this.entity.camera.postEffects.removeEffect(old.effect);
    this.initialize();
};

This is just like a normal script, but we define a RefractionPostEffect class that can be applied to the camera. This needs a vertex and a fragment shader to render. The attributes are already set up, so let's create Refraction.frag with this content:

precision highp float;

uniform sampler2D uColorBuffer;
varying vec2 vUv0;

void main() {
    vec4 color = texture2D(uColorBuffer, vUv0);

    gl_FragColor = color;
}

And Refraction.vert with a basic vertex shader:

attribute vec2 aPosition;
varying vec2 vUv0;

void main(void)
{
    gl_Position = vec4(aPosition, 0.0, 1.0);
    vUv0 = (aPosition.xy + 1.0) * 0.5;
}

Now attach the Refraction.js script to the camera, and assign the shaders to the appropriate attributes. When you launch the game, you should see the scene exactly as it was before. This is a blank post effect that simply re-renders the scene. To verify that this is working, try giving the scene a red tint.

In Refraction.frag, instead of simply returning the color, try setting the red component to 1.0, which should look like the image below.

Scene rendered with a red tint

Distortion Shader

We need to add a time uniform for the animated distortion, so go ahead and create one in Refraction.js, inside this constructor for the post effect:

var RefractionPostEffect = function (graphicsDevice, vs, fs) {
    var fragmentShader = "precision " + graphicsDevice.precision + " float;\n";
    fragmentShader = fragmentShader + fs;

    // this is the shader definition for our effect
    this.shader = new pc.Shader(graphicsDevice, {
        attributes: {
            aPosition: pc.SEMANTIC_POSITION
        },
        vshader: vs,
        fshader: fs
    });

    // >>>>>>>>>>>>> Initialize the time here
    this.time = 0;

};

Now, inside this render function, we pass it to our shader and increment it:

RefractionPostEffect.prototype = pc.extend(RefractionPostEffect.prototype, {
    // Every post effect must implement the render method which
    // sets any parameters that the shader might require and
    // also renders the effect on the screen
    render: function (inputTarget, outputTarget, rect) {
        var device = this.device;
        var scope = device.scope;

        // Set the input render target to the shader. This is the image rendered from our camera
        scope.resolve("uColorBuffer").setValue(inputTarget.colorBuffer);
        /// >>>>>>>>>>>>>>>>>> Pass the time uniform here
        scope.resolve("uTime").setValue(this.time);
        this.time += 0.1;

        // Draw a full screen quad on the output target. In this case the output target is the screen.
        // Drawing a full screen quad will run the shader that we defined above
        pc.drawFullscreenQuad(device, outputTarget, this.vertexBuffer, this.shader, rect);
    }
});

Now we can use the same shader code from the water distortion tutorial, making our full fragment shader look like this:

precision highp float;

uniform sampler2D uColorBuffer;
uniform float uTime;

varying vec2 vUv0;

void main() {
    vec2 pos = vUv0;

    float X = pos.x * 15. + uTime * 0.5;
    float Y = pos.y * 15. + uTime * 0.5;
    pos.y += cos(X + Y) * 0.01 * cos(Y);
    pos.x += sin(X - Y) * 0.01 * sin(Y);

    vec4 color = texture2D(uColorBuffer, pos);

    gl_FragColor = color;
}

If it all worked out, everything should now look as if it's underwater, as below.

Underwater distortion applied to the whole scene
Challenge #1: Make the distortion only apply to the bottom half of the screen.

Camera Masks

We're almost there. All we need to do now is to apply this distortion effect just on the underwater part of the screen. The most straightforward way I've come up with to do this is to re-render the scene with the water surface rendered as a solid white, as shown below.

Water surface rendered as a solid white to act as a mask

This would be rendered to a texture that would act as a mask. We would then pass this texture to our refraction shader, which would only distort a pixel in the final image if the corresponding pixel in the mask is white.

Let's add a boolean attribute on the water surface to know if it's being used as a mask. Add this to Water.js:

Water.attributes.add('isMask', { type: 'boolean', title: "Is Mask?" });

We can then pass it to the shader with material.setParameter('isMask', this.isMask); as usual. Then declare it in Water.frag and set the color to white if it's true.

// Declare the new uniform at the top
uniform bool isMask;

// At the end of the main function, override the color to be white
// if the mask is true
if (isMask) {
    color = vec4(1.0);
}

Confirm that this works by toggling the "Is Mask?" property in the editor and relaunching the game. It should look white, as in the earlier image.

Now, to re-render the scene, we need a second camera. Create a new camera in the editor and call it CameraMask. Duplicate the Water entity in the editor as well, and call it WaterMask. Make sure "Is Mask?" is false for the Water entity but true for the WaterMask.

To tell the new camera to render to a texture instead of the screen, create a new script called CameraMask.js and attach it to the new camera. We create a RenderTarget to capture this camera's output like this:

// initialize code called once per entity
CameraMask.prototype.initialize = function () {
    // Create a 512x512x24-bit render target with a depth buffer
    var colorBuffer = new pc.Texture(this.app.graphicsDevice, {
        width: 512,
        height: 512,
        format: pc.PIXELFORMAT_R8_G8_B8,
        autoMipmap: true
    });
    colorBuffer.minFilter = pc.FILTER_LINEAR;
    colorBuffer.magFilter = pc.FILTER_LINEAR;
    var renderTarget = new pc.RenderTarget(this.app.graphicsDevice, colorBuffer, {
        depth: true
    });

    this.entity.camera.renderTarget = renderTarget;
};

Now, if you launch, you'll see this camera is no longer rendering to the screen. We can grab the output of its render target in Refraction.js like this:

Refraction.prototype.initialize = function () {
    var cameraMask = this.app.root.findByName('CameraMask');
    var maskBuffer = cameraMask.camera.renderTarget.colorBuffer;

    var effect = new pc.RefractionPostEffect(this.app.graphicsDevice, this.vs.resource, this.fs.resource, maskBuffer);

    // ...
    // The rest of this function is the same as before

};

Notice that I pass this mask texture as an argument to the post effect constructor. We need to create a reference to it in our constructor, so it looks like:

//// Added an extra argument on the line below
var RefractionPostEffect = function (graphicsDevice, vs, fs, buffer) {
    var fragmentShader = "precision " + graphicsDevice.precision + " float;\n";
    fragmentShader = fragmentShader + fs;

    // this is the shader definition for our effect
    this.shader = new pc.Shader(graphicsDevice, {
        attributes: {
            aPosition: pc.SEMANTIC_POSITION
        },
        vshader: vs,
        fshader: fs
    });

    this.time = 0;
    //// <<<<<<<<<<<<< Saving the buffer here
    this.buffer = buffer;
};

Finally, in the render function, pass the buffer to our shader with:

scope.resolve("uMaskBuffer").setValue(this.buffer);

Now to verify that this is all working, I'll leave that as a challenge.

Challenge #2: Render the uMaskBuffer to the screen to confirm it is the output of the second camera.

One thing to be aware of is that the render target is set up in the initialize of CameraMask.js, and that needs to be ready by the time Refraction.js is called. If the scripts run the other way around, you'll get an error. To make sure they run in the right order, drag the CameraMask to the top of the entity list in the editor, as shown below.

PlayCanvas editor with CameraMask at top of entity list

The second camera should always be looking at the same view as the original one, so let's make it always follow its position and rotation in the update of CameraMask.js:

CameraMask.prototype.update = function (dt) {
    var pos = this.CameraToFollow.getPosition();
    var rot = this.CameraToFollow.getRotation();
    this.entity.setPosition(pos.x, pos.y, pos.z);
    this.entity.setRotation(rot);
};

And define CameraToFollow in the initialize:

this.CameraToFollow = this.app.root.findByName('Camera');

Culling Masks

Both cameras are currently rendering the same thing. We want the mask camera to render everything except the real water, and we want the real camera to render everything except the mask water.

To do this, we can use the camera's culling bit mask. This works similarly to collision masks if you've ever used those. An object will be culled (not rendered) if the result of a bitwise AND between its mask and the camera's mask is 0.

Let's say the Water will have bit 2 set, and WaterMask will have bit 3. Then the real camera needs to have all bits set except for 3, and the mask camera needs to have all bits set except for 2. An easy way to say "all bits except N" is to do:

~(1 << N) >>> 0

You can read more about bitwise operators here.
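
If you want to sanity-check the arithmetic, here's a worked example you can paste into a browser console (the bit assignments mirror the setup below):

// Worked example of the culling decision in plain JavaScript.
var realCameraMask = ~(1 << 3) >>> 0; // real camera: all bits except 3
var waterMesh = (1 << 2) >>> 0;       // Water mesh uses bit 2
var waterMaskMesh = (1 << 3) >>> 0;   // WaterMask mesh uses bit 3

console.log((realCameraMask & waterMesh) !== 0);     // true: Water is rendered
console.log((realCameraMask & waterMaskMesh) !== 0); // false: WaterMask is culled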

To set up the camera culling masks, we can put this inside CameraMask.js's initialize at the bottom:

// Set all bits except for 2
this.entity.camera.camera.cullingMask &= ~(1 << 2) >>> 0;
// Set all bits except for 3
this.CameraToFollow.camera.camera.cullingMask &= ~(1 << 3) >>> 0;
// If you want to print out this bit mask, try:
// console.log((this.CameraToFollow.camera.camera.cullingMask >>> 0).toString(2));

Now, in Water.js, set the Water mesh's mask on bit 2, and the mask version of it on bit 3:

// Put this at the bottom of the initialize of Water.js

// Set the culling masks
var bit = this.isMask ? 3 : 2;
meshInstance.mask = 0;
meshInstance.mask |= (1 << bit);

Now, one view will have the normal water, and the other will have the solid white water. The left half of the image below is the view from the original camera, and the right half is from the mask camera.

Split view of mask camera and original camera

Applying the Mask

One final step now! We know the areas underwater are marked with white pixels. We just need to check if we're not at a white pixel, and if so, turn off the distortion in Refraction.frag:

// Check original position as well as new distorted position
vec4 maskColor = texture2D(uMaskBuffer, pos);
vec4 maskColor2 = texture2D(uMaskBuffer, vUv0);
// We're not at a white pixel?
if (maskColor != vec4(1.0) || maskColor2 != vec4(1.0)) {
    // Return it back to the original position
    pos = vUv0;
}

And that should do it!

One thing to note is that since the texture for the mask is initialized on launch, if you resize the window at runtime, it will no longer match the size of the screen.
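
One way to handle that is to rebuild the render target whenever the canvas size changes. This is only a sketch: it assumes the graphics device fires a 'resizecanvas' event in your engine version, and createRenderTarget() is a hypothetical helper wrapping the texture/render-target setup code from CameraMask.js above.

// Sketch for CameraMask.js: recreate the mask target on resize.
// 'resizecanvas' and createRenderTarget() are assumptions; verify
// against your engine version before relying on this.
this.app.graphicsDevice.on('resizecanvas', function (width, height) {
    this.entity.camera.renderTarget.destroy();
    this.createRenderTarget();
}, this);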

Anti-Aliasing

As an optional clean-up step, you might have noticed that edges in the scene now look a little sharp. This is because when we applied our post effect, we lost anti-aliasing. 

We can apply an additional anti-alias on top of our effect as another post effect. Luckily, there's one available in the PlayCanvas store we can just use. Go to the script asset page, click the big green download button, and choose your project from the list that appears. The script will appear in the root of your asset window as posteffect-fxaa.js. Just attach this to the Camera entity, and your scene should look a little nicer! 

Final Thoughts

If you've made it this far, give yourself a pat on the back! We covered a lot of techniques in this series. You should now be comfortable with vertex shaders, rendering to textures, applying post-processing effects, selectively culling objects, using the depth buffer, and working with blending and transparency. Even though we were implementing this in PlayCanvas, these are all general graphics concepts you'll find in some form on whatever platform you end up in.

All these techniques are also applicable to a variety of other effects. One particularly interesting application I've found of vertex shaders is in this talk on the art of Abzu, where they explain how they used vertex shaders to efficiently animate tens of thousands of fish on screen.

You should now also have a nice water effect you can apply to your games! You could easily customize it now that you've put together every detail yourself. There's still a lot more you can do with water (I haven't even mentioned any sort of reflection at all). Below are a couple of ideas.

Noise-Based Waves

Instead of simply animating the waves with a combination of sine and cosines, you can sample a noise texture to make the waves look a bit more natural and unpredictable.

Dynamic Foam Trails

Instead of completely static water lines on the surface, you could draw onto that texture when objects move, to create a dynamic foam trail. There are a lot of ways to go about doing this, so this could be its own project.

Source Code

You can find the finished hosted PlayCanvas project here. A Three.js port is also available in this repository.

Interactive Storytelling: Non-Linear


In this final part of our series about interactive storytelling, we'll talk about the future of storytelling in videogames.

Non-Linear Interactive Storytelling

Or the Philosopher's Stone

Non-linear interactive storytelling is similar to the philosopher's stone: everybody talks about it, everybody wants it, but no one has found it yet.

Let's start with the definition: what is non-linear interactive storytelling? 

It's simple: this is storytelling that changes based on the player's choices. In the previous article, we discussed linear interactive storytelling and how it gives the player only the illusion of choice. Of course, there are some really sophisticated games that give a better illusion of freedom and choice, and even the chance to really change the course of the story. But still, it is an illusion.

Bioshock gameplay
Bioshock gives players some interesting choices, but they are gameplay and narrative related, not story related.

So the best definition of non-linear interactive storytelling is a way to break this illusion and give the player real choices. However, this requires some advanced technology: Artificial Intelligence.

That's because true non-linear interactive storytelling requires an AI capable of reacting to the player's actions. As in real life. The theory is quite simple, on paper. The player does something in the game's world, and the world and everybody inside it will react.

But, of course, creating a system like that is nearly impossible with the current technology, because of the complex calculations needed. We're talking about totally removing the scripted part from a game! And right now at least 90% of a game is scripted.

In the second part, we talked about Zelda Breath of the Wild. That, I think, is a starting point: a game where the developers set out rules about the world, and the player can play with them freely.

Extend this idea to all elements, and you'll have the illusion broken.

Again: this has never been done, but I'm sure somebody will do it in the future. Maybe with the next console generation, when the calculation power increases.

Okay, that's the future. But what about today?

Today, there are some games that are trying to create a non-linear experience. I'll talk about two of them, as examples.

The first is an AAA game everybody knows: Detroit: Become Human. In his most recent game, David Cage tries really hard to give the player a lot of forks and choices. Yes, it's still an illusion, but it's the game I know of with the highest number of narrative forks. While playing this game, you have the feeling that every choice matters. Even the small ones.

Detroit Become Human narrative forks
This flowchart shows all the choices in one scene of Detroit Become Human

Don't get me wrong: it's all scripted. And I think that's the wrong way to achieve real non-linear storytelling. But it's one of the games that comes closest. It's great to play without trying to discover the hidden script, just to "live" the experience.

Of course, the game itself, chapter after chapter, will show you a flowchart about the forks. And you will know where the narrative joints are. But, really: if you can, try it without thinking about that, and you'll have a better illusion of choice in the game.

The second game is an indie experiment: Avery. It's an experimental game based on an AI. It's free, it's for both iOS and Android, and you must try it. It's a game where the AI will respond to the player dynamically. And that's the right way, I'm sure, to achieve true non-linear interactive storytelling.

Avery gameplay
Avery in all its splendor

Of course, keep in mind that it's an indie game and it's free. But it's amazing. It's an "Artificial Intelligence Conversation". Those among you who are a little older (like me) will surely remember Eliza, the first chatterbot. Avery is an evolution of that. You'll talk with an AI that has lost its memory and is scared because something is wrong. Again: try it because, playing Avery, you can see one of the first steps towards our philosopher's stone.

Direct and Indirect Mode

As I said at the start of the article, we have the theory. More theory than we really need, probably—that's because we can't work on the real part, so we're talking, writing and thinking too much about it.

But a good part of this theory is found in some interesting papers and books. In those you will find two main definitions: direct mode and indirect mode. These are the ways in which the storytelling should react to the player's actions.

Direct mode is quite simple: the player does something, and the story will change in response. Action -> reaction. This is the way, for example, most tabletop role-playing games work.

The Game Master explains a situation -> the player makes a choice -> the Game Master tells how the story reacts to that choice.

The two games that I gave as examples before also work in this way. And when we have a full non-linear interactive storytelling game, I guess this mode will be the most common.

Also note that the majority of linear storytelling works this way: there is a setting, with a conflict, the character does something, and the story (the ambient environment, the villain, or some other character) reacts.

But there is a more sophisticated way to tell a non-linear story: indirect mode.

This is more like how the real world works. You do something, which causes a small direct reaction, which engages a chain reaction that can go on to have effects in distant places.

This is the so-called "butterfly effect". You will discover that this type of story-telling works only if there is not a real "main character". Because, in the real world, everyone is the main character of his or her own story. But there are billions of stories told every second around the world. And each story, somehow, is influenced by all the other stories.

Back to gaming, there are already games that use this concept: MMOs. Think about World of Warcraft: there is no main character, and the "total story" (the sum of all stories about all characters) is a complex web that links all individual stories. So actually, in the first part of this article, I lied: there is already a way to create non-linear interactive storytelling, and that's to put the domain of the story in the players' hands!

World of Warcraft gameplay
World of Warcraft is a place where the stories are told between players.

Of course, in World of Warcraft, there are still scripted parts (the enemies, the quests, the NPCs, etc.), and that's why WoW is not an example of true non-linear interactive storytelling. But when the players have the ability to create their own story, there is not only non-linear storytelling, but also it's told in indirect mode.

So think about this: some day, in the near future, we'll have a game where the AI will be so advanced that we'll play with it in the same way we play with the other humans in WoW.

That's the goal. That's true non-linear interactive storytelling.

Conclusion

I started writing this series of articles almost six months ago. It's been a labour of love, and I'm thankful to Envato Tuts+ for encouraging me to pursue it. This is a topic I really care about, and there are a lot of things that I had to cut to keep the series to only three parts. 

If you are interested, though, there are lots of articles and videos on this topic. For example, I could have talked about ludonarrative dissonance (look it up!). I also had to cut a big part about Florence (a really great linear interactive storytelling game—again, try it if you can). And so on.

However, I'm happy to have this series wrapped up, and I hope you've enjoyed the articles and will find them useful.

Interactive storytelling is, in my opinion, one of the big challenges that the industry will face in its next step. In the last two console generations, we saw incredible advances in graphics and gameplay. Now it's time to think about the story. Because, you know, stories matter.

How to Create an Abstract Icon Set in Adobe Illustrator

Final product image
What You'll Be Creating

Welcome to this tutorial on creating an abstract icon set in Adobe Illustrator! The theme we're working on is chess. In this tutorial, we will learn the step-by-step process of creating an abstract set of icons using basic shapes and tools.

For more examples of abstract icon sets, check out GraphicRiver where you can find a wide variety of different abstract icons.

1. How to Set Up a New Project File

Step 1

Let's get started by setting up a New Document in Adobe Illustrator (File > New or Control-N). For this tutorial, we will use the settings below:

  • Number of Artboards: 1
  • Width: 850 px
  • Height: 850 px
  • Units: Pixels

In the Advanced tab, use the following settings:

  • Color Mode: RGB
  • Raster Effects: Screen (72ppi)
  • Preview Mode: Default
Create a new project settings

Step 2

Go to Edit > Preferences > General and set the Keyboard Increment to 1 px.

Edit project preferences

Step 3

Go to Units and use the settings shown below. 

  • General: Pixel
  • Stroke: Points
  • Type: Points
Edit units in project preferences

2. How to Set Up the Layers

Step 1

Next, you will need to structure the project by creating layers. Select the Layers panel and create two layers. Name them as follows:

  • Layer 1: Background
  • Layer 2: Icons
Set up the project layers in Adobe Illustrator

Step 2

Make sure that you select the background layer to begin creating the background.

Select the Background layer

3. How to Create the Background Color

Step 1

With the background layer selected, click on the Rectangle Tool (M) and create an 850 x 850 px rectangle to place your icons in. This should fit the entire area of the Artboard.

Use the Rectangle Tool to create an 850 x 850 box

Step 2

Make sure that the rectangle is still selected, and click on the Gradient Tool. In the angle section, select 45 Degrees from the drop-down menu.

Edit the gradient angle to 45 degrees

Step 3

Select two colors for the gradient. For this tutorial, we will use the following:

  • R: 70
  • G: 82
  • B: 162
Adjust the left gradient color

Step 4

For the second color, we will use the following:

  • R: 138
  • G: 105
  • B: 173
Adjust the right gradient color

Step 5

The final background should look like the image below. Lock the Background layer by clicking on the lock icon, and click on the Icons layer to start creating the icons.

The final gradient background

4. How to Create a Pawn Icon

Step 1

Choose the Pen Tool (P) and adjust the stroke settings to the following:

  • Stroke Weight: 6
  • Cap: Round Cap
  • Corner: Miter Join
  • Align Stroke: Align Stroke to Center
Edit the Stroke Settings

Step 2

Start by selecting the Ellipse Tool (L) and creating a small circle. To create a perfect circle, press and hold the Shift key on the keyboard whilst clicking and dragging with the mouse.

Use the Ellipse Tool to create a small circle

Step 3

Select the Pen Tool (P) and create two lines underneath the circle.

Use the Pen Tool to create two lines

Step 4

Select the Rounded Rectangle Tool and create a shape underneath the two lines. You can adjust the curvature of the corners by pressing the Up Arrow or the Down Arrow on your keyboard whilst creating the shape (clicking and dragging with the mouse).

With the shape still selected, change it to a fill shape.

Use the Rounded Rectangle Tool to create a shape

Step 5

Duplicate the shape by copying it (Control-C) and then pasting it in place (Control-Shift-V).

Add a stroke to the second shape with a width of 20 pt. For the best results, align the stroke to the outside. 

Duplicate the round rectangle shape and add a stroke to it

Step 6

With the second shape still selected, go to the top menu and select Object > Expand to open the Expand window. Make sure both Fill and Stroke are selected, and then click OK.

Expand the stroke and the shape

Step 7

This will create a shape from the stroke. Make sure that the shape is still selected and, using the Pathfinder, click on Unite. This will merge the shape.

Combine the shapes using the unite button in pathfinder

Step 8

Change the shape to a stroke by using the Eyedropper Tool (I) and align the stroke to the outside.

Create a new stroke using the eyedropper tool

Step 9

Use the Ellipse Tool (L) to create a small circle at the bottom to complete the pawn icon.

Use the Ellipse Tool to create a small circle

5. How to Create a Rook Icon

Step 1

To start creating the rook icon, use the Rectangle Tool (M) to create a small rectangle shape.

Use the Rectangle Tool to create a small shape

Step 2

Create a slightly larger rectangle shape and set it to Stroke by using the Eyedropper Tool and selecting one of the strokes we used previously on the pawn icon. 

Create a larger rectangle and set it to stroke

Step 3

Use the Rectangle Tool (M) to create a small rectangle shape at the top. Then use the Ellipse Tool (L) to create a small circle at the bottom (holding the Shift key on the keyboard to create a perfect circle). 

Add a rectangle and a circle shape

Step 4

Use the Rectangle Tool (M) and follow the steps below to create the top of the rook icon.

Use the Rectangle Tool to create the Rook Icon

Step 5

Select all the shapes in Step 4 and click on Pathfinder > Unite.

Click on the Unite button in Pathfinder

Step 6

This will merge all the shapes together.

The Unite button will merge the shapes together

Step 7

With the new shape selected, change it into a stroke by using the Eyedropper Tool (I) to ensure that the weight of the stroke is the same.

Use the Eyedropper Tool to copy the stroke settings

Step 8

To create the semicircles, you need to start by creating a circle using the Ellipse Tool (L). Place a rectangle over the circle using the Rectangle Tool (M).

Create a rectangle on top of a circle

Step 9

Select both the shapes (making sure that the rectangle is arranged on top of the circle) and then select Pathfinder > Minus Front.

Click on the Minus Front button

Step 10

This will remove the rectangle shape and the part of the circle which is overlapping.

Minus front removes any overlapping shapes

Step 11

Duplicate the semicircle by copying it (Control-C) and then pasting it (Control-V).

Duplicate the new semi circle

Step 12

Select the duplicate semicircle and then Right-Click on it. From the menu, select Transform > Reflect. Choose the Vertical axis at 90 Degrees and then click OK.

Reflect the duplicate semi circle vertically

Step 13

Make sure that both semicircles are aligned horizontally opposite each other.

Make sure both shapes are opposite each other

Step 14

Place the semicircles on either side of the shapes to complete the rook icon.

Combine the shapes to create the Rook Icon

6. How to Create a Knight Icon

Step 1

To start creating the knight icon, create a duplicate of the rook icon. With the duplicate icon, use the Selection Tool (V) or the Direct Selection Tool (A) to remove the middle shapes of the icon.

Duplicate Rook icon and remove some shapes

Step 2

Use the Polygon Tool to create a triangle. Whilst using the Polygon Tool, press the Up Arrow key or Down Arrow key on the keyboard to increase or decrease the number of sides. 

To create an equilateral triangle, press and hold the Shift key on the keyboard whilst creating the shape.

Create a smaller equilateral triangle and place it in the middle.

Create two triangles using the Polygon Tool

Step 3

Rotate the semicircles on either side of the triangle so that the strokes align diagonally (Right Click > Transform > Rotate).

Rotate the semi circles to align with the triangle

Step 4

To create the head of the knight icon, use the Rounded Rectangle Tool and the Ellipse Tool (L) to create the shapes below.

Use the Rounded Rectangle Tool and Ellipse Tool

Step 5

Use the Rectangle Tool (M) and place two rectangles above the shapes as shown below.

Place two rectangles above the new shapes

Step 6

Use the Minus Front Tool on the overlapping shapes. 

Click on the Minus Front button

Step 7

This will create a straight edge where the shapes used to overlap. Move the two shapes together so that they resemble the image below.

Move the semi circle inside the larger shape

Step 8

Use the Minus Front Tool again to remove the left side of the shape and change them both into a stroke.

Delete half the shape and change it to a stroke

Step 9

Select both shapes and use Pathfinder > Unite to combine them into a single shape.

Combine the strokes using the Unite button

Step 10

Select the Add Anchor Point Tool (+) and add a point near the left corner of the shape. Then use the Direct Selection Tool (A) to drag the top left corner to the left to create the ear.

Add an anchor point and edit it

Step 11

Use the Direct Selection Tool (A) and remove the point shown in the image below.

Delete an anchor point

Step 12

Use the Pen Tool (P) to add three extra points along the right side of the shape.

Use the Pen Tool to draw the neck of the horse

Step 13

Select the curve points and drag them to the middle using the mouse to create the neck of the horse. Hold the Shift key on the keyboard to select multiple points.

Use the curve points to round out the corners

Step 14

Once you are happy with the way the horse head looks, use the Ellipse Tool (L) to create the eye (remember to press and hold the Shift key on the keyboard to create a perfect circle).

Use the Ellipse Tool to create the eye

Step 15

Add a line underneath the head, and then position the head above the previous shapes to complete the final knight icon.

Add the horse head to create the final Knight Icon

7. How to Create a Bishop Icon

Step 1

To start the bishop icon, create a duplicate of the knight icon. With the duplicate icon, use the Selection Tool (V) or the Direct Selection Tool (A) to remove the top shapes (the horse head) of the icon.

Delete the horse head from the duplicate icon

Step 2

Use the Ellipse Tool (L) and hold the Shift key on the keyboard to create a perfect circle on top of the icon.

Add a circle using the Ellipse Tool

Step 3

Use the Direct Selection Tool (A) to move the top point of the circle slightly upwards. This will create an egg-like shape.

Use the Direct Selection Tool to edit the anchor points

Step 4

Select the direction handles with the Direct Selection Tool (A) and move them towards the middle point to create a teardrop shape.

Edit the direction handles to create a teardrop shape
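
Pulling the handles onto the vertical axis is what sharpens the rounded top into a point: a Bezier curve always leaves an anchor in the direction of its handle, so when both top handles sit on the centre line, the two halves meet in a tip rather than a curve. Below is a minimal Python sketch of one cubic Bezier segment, with hypothetical coordinates standing in for the right half of the teardrop.

```
# Evaluate one cubic Bezier segment; all coordinates are hypothetical.
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on the curve at parameter t in [0, 1]."""
    u = 1 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

side, tip = (1.0, 0.0), (0.0, 2.0)  # right anchor up to the top anchor
handles = ((1.0, 1.2), (0.0, 1.0))  # tip handle on the axis -> pointed top
for t in (0.0, 0.5, 1.0):
    print(cubic_bezier(side, handles[0], handles[1], tip, t))
```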

Step 5

Use the Ellipse Tool (L) to create a circle and place it on top of the teardrop shape. Then use the Pen Tool (P) to create a cross, and place it inside the teardrop shape to create the final bishop icon.

Add a cross and circle to complete the final Bishop Icon

8. How to Create a Queen Icon

Step 1

To start the queen icon, create a duplicate of the bishop icon. With the duplicate icon, use the Selection Tool (V) or the Direct Selection Tool (A) to remove the top and middle shapes of the icon.

This will leave just the two circles at the bottom and the semicircles on the sides.

Duplicate the Bishop icon and delete the middle shapes

Step 2

Rotate the semicircles slightly. To do this, Right-Click on the shapes, and then go to Transform > Rotate, following the image below.

Rotate the semicircles

Step 3

Use the Ellipse Tool (L) to create two circles, with the outer circle using a stroke and the inner circle using a fill. Remember to hold the Shift key on the keyboard to create a perfect circle.

Create two circles using the Ellipse Tool

Step 4

Use the Ellipse Tool (L) to create two overlapping circles side by side on top of the other shapes.

Create two overlapping circles using the Ellipse Tool

Step 5

Use the Ellipse Tool (L) to create a small circle on top.

Create a small circle using the Ellipse Tool

Step 6

To create the base of the crown, use the Pen Tool (P) or the Rectangle Tool (M) to draw the shape as seen in the image below. When using the Pen Tool, you can hold the Shift key on the keyboard in order to draw a straight line vertically or horizontally. This will make it easier to align.

From there, use the Direct Selection Tool (A) to select the curve points (hold Shift to select multiple points) and move the points towards the centre with the mouse to transform the corners into curves.

Use the Pen Tool and edit the curve points to create the crown

Step 7

Complete the queen icon by placing the crown on top of the shapes, as shown in the image below.

Combine all the shapes to create the final Queen Icon

9. How to Create a King Icon

Step 1

To create the king icon, duplicate the queen icon. With the duplicate icon, use the Direct Selection Tool (A) to delete the middle circle shapes.

Duplicate the Queen Icon and delete the middle shapes

Step 2

Use the Direct Selection Tool (A) to delete the small circle shape at the top of the crown.

Delete the top circle

Step 3

Use the Rectangle Tool (M) to create two squares (one larger outline and one smaller fill shape). Place the smaller square inside and in the middle of the larger square.

From there, select both squares, rotate them both by 45 degrees (Right-Click > Transform > Rotate), and place both squares in the centre space of the icon.

Add two squares in the middle and rotate by 45 degrees
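
Numerically, a 45-degree rotation moves every corner of each square around their shared centre, which is what turns the squares into diamonds. A small Python sketch of the transform follows; the unit-square coordinates are made up for illustration.

```
import math

def rotate_about(points, cx, cy, angle_deg):
    """Rotate (x, y) points by angle_deg around the centre (cx, cy)."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(cx + (x - cx) * cos_a - (y - cy) * sin_a,
             cy + (x - cx) * sin_a + (y - cy) * cos_a)
            for (x, y) in points]

# A unit square centred on the origin becomes a diamond at 45 degrees:
square = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
for x, y in rotate_about(square, 0, 0, 45):
    print(round(x, 2), round(y, 2))
# -> corners land on the axes: approx (0, -1.41), (1.41, 0), (0, 1.41), (-1.41, 0)
```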

Step 4

Use the Pen Tool (P) to create a small cross on top of the crown.

Add a cross shape using the Pen Tool

Awesome! You're Finished!

Congratulations! You have successfully completed this tutorial. Feel free to share your own creations below! I hope you found this tutorial helpful and that you've learned many new tips and tricks that you can use for your future illustrations. See you next time!

The final Chess Icons Set in Adobe Illustrator

Learn More Icon Skills!

If you liked this and are looking to learn some new icon skills, there are plenty more in-depth guides to explore. Happy designing!

Check out my video course on Animating Icons in Adobe Illustrator and After Effects.


