
Unity 2D Joints: Distance, Hinge, Target, and Fixed Joints


Introduction

Unity is a well-known, well-documented game engine. It's a multi-platform solution that lets you create games or applications for several platforms (iOS, Android, Web, and PC, among others). Originally, Unity focused on 3D development, but recent releases provide tools for 2D development as well.

Unity is a great choice for aspiring game developers, since it works for most mobile, desktop and console platforms, and even better, it is free to use for lower-revenue developers and studios.

One of Unity's key features is physics joints. These joints have physical properties (as the name suggests) and let you create various connections between objects within your scene.

Using joints, you can describe a connection between two objects, which means you can simulate the physics of almost any multi-part object you can think of, including doors, sliding platforms, chains, or even wrecking balls!

This tutorial will focus on explaining how 2D physics joints work and how to use them to achieve great effects (without sacrificing the game performance).

Prerequisites

First, ensure you have the latest version of Unity, which you can download here. In this tutorial, we're using version 5.4.1f1. Make sure that you are using a recent Unity version; otherwise you may have problems following the tutorial and using the physics joints.

Before we start, I would also like to give special thanks to Ryukin Studio for letting us use their package 2D Platformer Art Pack - Demo for this tutorial.

Next, download the Unity2DJointsStarter file. Unzip and open the project in Unity. The Demo scene should open automatically, but if not, you can open it from the project’s Scene folder.

The scene should look as follows:

Starting Demo scene

As you can see, we have a character and several static platforms placed around the scene. The idea behind this tutorial is to create a small 2D platform game while using the 2D joints to create obstacles for the player to overcome. Throughout this tutorial, you’ll get to try out each of Unity’s 2D joint types and see how they work, with the exception of the Wheel Joint 2D.

If you press play, you’ll see that all the basic gameplay functionalities for the game are already implemented. You can use the keyboard arrows to move the character and the spacebar key to jump.

First game experience

Distance Joint 2D

The distance joint allows a sprite controlled by 2D physics to rotate around a specific point, while maintaining a certain distance from that point. You'll use the distance joint 2D to create the first obstacle, a swinging blade. Create a new GameObject at Position (4.75, -1.25, 0) and call it Blade.

Next, expand the GameAssets folder and open the Platforms folder. There you will find several prefabs used for creating the level platforms. Drag the Platform6 prefab from the folder and drop it over the Blade GameObject. This will make it a child of that object.

Now you need to edit some of the properties of this new game object. First, change the Platform6 name to Platform and then its position to (-0.07042211, 3.60032, 0). Now set the Order in Layer to 3 in the Sprite Renderer component.

Distance Joint 2D - First gameobject configuration

Inside the Box Collider 2D, disable the Used by Effector parameter and remove the attached Platform Effector 2D component. Finally, add a Rigidbody 2D, set Is Kinematic to true, and turn Freeze Rotation on in Z.

Distance Joint 2D - Platform final configuration

The platform that will hold the blade is ready. It is time to add the blade itself. Inside the Free Assets\Sprite folder, select the AxeBlade sprite and drag it over the Blade GameObject, creating a new child.

As you've probably noticed, the sprite is too big, and it’s not in the right place. Change the Position to (-0.11, 3.56, 0) and the Scale to (0.5, 0.5, 1). Also, set the Order in Layer to 2.

Before you can make the blade work, you will need to add several components to it. Start by adding a Polygon Collider 2D and a Rigidbody 2D. Make sure that the Mass is set to 10.

Distance Joint 2D - AxeBlade GameObject

You are now ready to add the Distance Joint 2D component. As you may notice, this new component has several parameters that you can adjust to produce the best result for your game.

Fear not, you’ll soon learn what all these parameters do and how you can adjust them. You’ll also notice that once you have the component attached to the AxeBlade, a green line extends from the AxeBlade to the center of the screen. This is the origin point, the Connected Anchor property.

Run the scene, keeping your eye on the blade. Right away, you’ll notice that the blade flies across the screen until it stops near the origin point, then begins to roll back and forth on the joint. You will need to fix this issue.

Turn your attention to the Distance Joint 2D component in the Inspector. The first parameter is Enable Collision. This determines whether or not the two objects connected by the joint can collide with each other. In this case, you don’t want this to happen, so leave it unchecked.

The second parameter is Connected Rigid Body. While one end of the distance joint always remains connected to the object that contains the component, you can use this field to pass a reference to an object for the other end of the joint’s connection. If you leave this field empty, Unity will simply connect the other end of the joint to a fixed point in the scene.

You want to connect AxeBlade to Platform in this case, so drag Platform to the Connected Rigid Body field.

With the connection set up and AxeBlade still selected, you should now see the two objects connected by the joint in the Scene view. You can check the connection by changing the Y value of the AxeBlade. Don't forget to set the AxeBlade Position back to (-0.11, 3.56, 0) once you have finished playing with it.

The next parameter of the component is Auto Configure Connected Anchor. You should check this box if you want Unity to automatically set the anchor location for the other object this joint connects to. What’s the anchor, you may ask? Well, we will talk about it in a moment, but for now, you need to learn that the anchor point can be set manually or automatically (by Unity).

In this case you want to leave this field unchecked, since we are going to set the anchor manually.

The following parameter is Anchor. This one indicates where the endpoint of the joint attaches to the GameObject as specified in the object’s local coordinate space. In the Scene view, with AxeBlade selected, you can see the anchor point as an unfilled blue circle centered over AxeBlade. Leave the default value unmodified (0, 0).

Note that the unfilled blue circle representing the joint’s first anchor point may appear filled if you currently have the Transform tool selected in the Scene view. Likewise, you will see a white square if you have the Scale tool selected. In either case, moving the anchor away from (0, 0) will show that it is indeed an unfilled circle.

The Connected Anchor parameter specifies the anchor point of the other end of the joint. If the Connected Rigid Body field is empty, this value is in the scene’s coordinate system. However, when Connected Rigid Body is set, as it is now, the Connected Anchor coordinates refer to the connected rigid body’s local coordinate space. This anchor point appears in the above image as a solid blue circle centered over the Platform game object. Once again, you can leave this value at (0, 0).

As with the Anchor parameter, you can let Unity automatically set the distance of the joint by enabling the Auto Configure Distance field. When this option is on, Unity will automatically detect the distance between the two objects and set it as a distance that the joint keeps between them. In our specific case, we want this feature on.

The Distance parameter, as the name suggests, indicates the distance to maintain between both objects. Back in the Scene view, you can see a small green line intersecting the line connecting the two objects. This can be easily seen if you move the AxeBlade object from the Platform, as in the image below.

Distance Joint 2D - Distance property

This line indicates the distance enforced by the joint. When you run the scene, the joint will move AxeBlade so that the Anchor you defined sits at the point where the small line is. You can increase or decrease this value by turning Auto Configure Distance off. Remember to set the AxeBlade position back to (-0.11, 3.56, 0) and re-enable Auto Configure Distance once you have finished experimenting.

The next parameter is Max Distance Only. If enabled, the joint will only enforce a maximum distance, meaning that the objects can get closer to each other, but never further away than the value specified in the Distance field. In this example, you want the distance to be fixed, so leave this parameter unchecked.

Finally, the last parameter is Break Force. Here you can specify the force level you want to use to break the joint. Basically, if the force level is reached, the joint will be deleted and the connection between the two objects will disappear. When the value is set to Infinity, it means that the joint is unbreakable. For this scenario, you want it to be unbreakable, so leave the field as it is.

Since you want the blade to swing like a pendulum, change its Rotation to (0, 0, 310).

Now, press Play, and you will see the AxeBlade hanging on the Platform.

Ok, so the blade is swinging, but when it hits the player, nothing happens. You want to kill the player and make it restart on the latest checkpoint. To do this, go to the GameAssets folder, and open the Scripts folder. Grab the Blade script and add it to the AxeBlade. If you press Play again, the player should now die when the blade hits.

The final configuration is the following:

Distance Joint 2D - Final Configuration
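If you prefer to set things up from code, the same joint can also be configured in a script. The sketch below is a hypothetical helper (the BladeJointSetup class name and the platformBody field are not part of the project); it simply mirrors the Inspector values described above.

```csharp
using UnityEngine;

// Hypothetical helper: configures the swinging blade's joint from code
// instead of the Inspector, mirroring the values described above.
[RequireComponent(typeof(Rigidbody2D))]
public class BladeJointSetup : MonoBehaviour
{
    public Rigidbody2D platformBody; // drag the Platform's Rigidbody 2D here

    void Awake()
    {
        var joint = gameObject.AddComponent<DistanceJoint2D>();
        joint.enableCollision = false;              // blade and platform should not collide
        joint.connectedBody = platformBody;         // Connected Rigid Body
        joint.autoConfigureConnectedAnchor = false;
        joint.anchor = Vector2.zero;                // Anchor
        joint.connectedAnchor = Vector2.zero;       // Connected Anchor
        joint.autoConfigureDistance = true;         // let Unity measure the distance
        joint.maxDistanceOnly = false;              // keep the distance fixed
        joint.breakForce = Mathf.Infinity;          // unbreakable
    }
}
```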

To finish, grab the Blade GameObject from the Hierarchy and drop it in the Obstacles folder. This will create a prefab of the swinging blade.

Hinge Joint 2D

The Hinge Joint 2D component allows a GameObject controlled by rigid-body physics to be attached to a point in space around which it can rotate. The rotation can happen in response to a collision or be started by a motor torque provided by the joint itself. You can and should set limits to prevent the hinge from making a full rotation.

In order to learn the basics of the hinge joint, you will create a dual trapdoor mechanism where the player has to drop to continue the level. Let’s do it!

Add a new empty GameObject, name it Hinge, and place it at Position (18.92, 0, 0). Then, open the Assets\Free Assets\Sprites folder and drag two MetalPlatform sprites into the Hinge GameObject. Name the first HingePlat1 and the second HingePlat2.

Place them side by side; the HingePlat1 should be placed at Position (0, 0, 0) and the HingePlat2 at (2.5, 0, 0). Change the Order in Layer of both to 2.

In order to add the Hinge Joint 2D component, you need to add two additional components to the sprites. Select the HingePlat1 and add a Box Collider 2D and a Rigidbody 2D. Under the Rigidbody 2D parameter, check the Is Kinematic property. Now, repeat the same steps for the HingePlat2 (without checking the Is Kinematic property).

You may also notice that the Size of the Box Collider 2D is bigger than the HingePlat1 sprite, thus leading to incorrect collision detection between the player and the platform. To fix the issue, change the property Size (inside Box Collider 2D) of both HingePlat1 and HingePlat2 to (2.45, 0.5).

HingePlat2 will be the first trapdoor. Therefore, you need to add a new component to it, the Hinge Joint 2D. This component has several properties, some of which you've already seen; we will only cover the new ones.

Enable Collision determines whether the two connected objects can collide with each other. Connected Rigid Body specifies the other object this joint connects to; add HingePlat1 to this field. Use Motor specifies whether the hinge motor should be enabled; check this property. Motor Speed specifies the target motor speed in degrees per second; set it to -100 (so the trapdoor swings back to its original position from the upward position).

Maximum Motor Force represents the maximum torque the motor can apply while attempting to reach the target speed; set it to 25. Use Limits, as the name suggests, indicates whether the rotation angle should be limited. Check it!

Lower Angle is the lower end of the rotation arc allowed by the limit, while Upper Angle is the upper end. Change the Upper Angle to 60. Break Torque is similar to Break Force and specifies the torque level needed to break and delete the joint. Again, Infinity means it is unbreakable.

Now uncheck Auto Configure Connected Anchor. You must set the Anchor and Connected Anchor manually. The former should be set to (-1.12, 0), and the latter to (1.09, 0). Your Scene should now look like the following image:

Hinge Joint 2D - Anchor and Connected Anchor

The final HingePlat2 configuration is:

Hinge Joint 2D - First platform final configuration
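For reference, here is a hypothetical script that sets up the same trapdoor joint from code (the class name and the hingePlat1Body field are illustrative, not part of the starter project):

```csharp
using UnityEngine;

// Hypothetical sketch: the first trapdoor's hinge configured from code.
public class TrapdoorJointSetup : MonoBehaviour
{
    public Rigidbody2D hingePlat1Body; // Connected Rigid Body (HingePlat1)

    void Awake()
    {
        var hinge = gameObject.AddComponent<HingeJoint2D>();
        hinge.connectedBody = hingePlat1Body;
        hinge.autoConfigureConnectedAnchor = false;
        hinge.anchor = new Vector2(-1.12f, 0f);
        hinge.connectedAnchor = new Vector2(1.09f, 0f);

        hinge.useMotor = true;
        var motor = hinge.motor;
        motor.motorSpeed = -100f;     // degrees per second
        motor.maxMotorTorque = 25f;   // Maximum Motor Force
        hinge.motor = motor;

        hinge.useLimits = true;
        var limits = hinge.limits;
        limits.max = 60f;             // Upper Angle
        hinge.limits = limits;

        hinge.breakTorque = Mathf.Infinity;
    }
}
```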

Run your game, and let's test the first trapdoor.

If everything is OK, it is now time to move on and create the second trapdoor. The procedure is exactly the same as described above; the only difference lies in the configuration details of the Hinge Joint 2D. The main steps are:

  1. Add two additional MetalPlatform sprites to the Hinge GameObject.
  2. Name them HingePlat3 and HingePlat4, and set the Order in Layer to 2 and the Position to (4.87, 0, 0) and (7.25, 0, 0) respectively.
  3. Add a RigidBody 2D and a Box Collider 2D to both platforms.
  4. Change the Size of both Box Collider 2D to (2.45,0.5).
  5. HingePlat4 should have Is Kinematic enabled.
  6. Add the Hinge Joint 2D property to the HingePlat3.
  7. Add the HingePlat4 into the Connected Rigid Body of the HingePlat3.
  8. Uncheck Auto Configure Connected Anchor.
  9. Change the Anchor to (1.5, 0) and the Connected Anchor to (-1, 0).
  10. Check the Use Motor property and set the Motor Speed to 100 and the Maximum Motor Force to 25.
  11. Check the Use Limits, and the Upper Angle should be -60.

Run your game, and let's test the dual trapdoor.

Since we want to force the player to pass through the dual trapdoor, we must create some walls around it. From the Assets\Free Assets\Sprites folder, add four MetalPlatform sprites into the Hinge GameObject. Name them Wall1 to Wall4 and change their Order in Layer to 2.

Place Wall1 at Position (-0.9, -1.43, 0) and change the Z Rotation to 90. The remaining walls should be configured as follows:

  • Wall2 Position: (-0.9, -3.8, 0), Z Rotation: 90 

  • Wall3 Position: (8.2, 1.55, 0), Z Rotation: 90

  • Wall4 Position: (8.2, 3.95, 0), Z Rotation: 90

Now, for each wall you should add one additional component, a Box Collider 2D. For each Box Collider 2D change the Size to (2.21, 0.63). Next add a Platform Effector 2D to each wall and set the Surface Arc to 360 and the Side Arc to 180. Go back to the Box Collider 2D and turn Used By Effector on.

The final result should be similar to:

Hinge Joint 2D - Final Configuration and layout

Finally, drag the Hinge GameObject into the GameAssets\Obstacles folder to create your second prefab.

Target Joint 2D

The Target Joint 2D connects to a specific target, rather than another rigid body object, as other joints normally do. We will apply this joint to create an obstacle platform that moves upwards and downwards.

The first thing you need to do is create the platform itself. In the Free Assets/Sprites folder, you will find the MetalPlatform sprite. Drag it into the scene.

Change its name to RisingPlatform and its Position to (35, -6, 0), and set the Order in Layer to 2. You need some components to make this work. Add a Rigidbody 2D and a Box Collider 2D. Next, set the Mass to 20 and check Freeze Position X and Freeze Rotation Z. On the Box Collider 2D, make sure the Size is set to (2.35, 0.55). Finally, set the Gravity Scale to 0 so that the object is not affected by gravity.

The basic components are now ready. It is time for you to add the Target Joint 2D component. As mentioned above, the Target Joint 2D does not connect to another object with a rigid body; instead, it connects to a specific target. This is a spring-type joint that can be used for picking up and moving objects acting under gravity, like our moving platform.

Let’s take a closer look at the component itself. The Anchor works exactly like in the previous joints, so there is no need to explain it. The Target sets the world coordinates towards which the joint attempts to move. This is a very important field, since this joint will try to keep zero linear distance between the Anchor and the Target points. For this example, you want the Target to be (35, -6).

Next, you have Auto Configure Target. If this property is enabled, the joint will calculate the target by itself. If you leave this option on, the target will change as the platform moves. Since you don't want that to happen, uncheck Auto Configure Target.

The following is Max Force; here you set the force that the joint can apply while trying to reach the target position. Set this value to 80.

The Damping Ratio is the degree to which you want to suppress spring oscillation. This value can go from 0 to 1, where 1 means no movement. You want the platform to move freely, so set this field to 0. Next, you have Frequency which, as the name suggests, sets the frequency at which the spring oscillates while trying to reach the target. Set this parameter to 5.

Finally, the last one is Break Force, which is identical to the previous joints, so let the value stay at its default. 

Since you want your platform to keep moving over time, you need to change the Target position from time to time. Fortunately, you already have the script to handle that. Go to the Scripts folder, find the TracktorPlatform script, and add it to your game object. Set Target A to (35, -6) and Target B to (35, 0). The Time should be 2.
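The actual TracktorPlatform script ships with the starter project, but if you're curious how such a script might work, here is a minimal hypothetical sketch that simply swaps the joint's Target between two points every few seconds (the class and field names are assumptions, not the original script's):

```csharp
using UnityEngine;

// Hypothetical sketch of an oscillating-target script.
[RequireComponent(typeof(TargetJoint2D))]
public class OscillatingTargetSketch : MonoBehaviour
{
    public Vector2 targetA = new Vector2(35f, -6f);
    public Vector2 targetB = new Vector2(35f, 0f);
    public float time = 2f;           // seconds between swaps

    TargetJoint2D joint;
    float timer;
    bool movingToA = true;

    void Awake()
    {
        joint = GetComponent<TargetJoint2D>();
        joint.target = targetA;
    }

    void Update()
    {
        timer += Time.deltaTime;
        if (timer >= time)
        {
            timer = 0f;
            movingToA = !movingToA;
            joint.target = movingToA ? targetA : targetB;
        }
    }
}
```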

By the end, your component should look like this:

Target Joint 2D - Final Configuration

If you press Play, you should be able to see the resulting target joint working as intended. Don’t forget to create a prefab, as you did with the previous obstacles, and place it in the Obstacles folder.

Fixed Joint 2D

The Fixed Joint 2D is used to keep two objects controlled by rigid-body physics in a fixed position relative to each other: the two objects always maintain a given position and angle offset.

To use this joint, you will create a metal platform that can only be moved when the player moves a magnet attached to it. The player will have to push the magnet in order to remove the platform from its way and then progress on the level.

Start by creating an empty GameObject, name it MagnetPlatform, and set its Position to (41.81, 5.22, 0). Next, grab the Magnet sprite from the Sprites folder and drop it over the MagnetPlatform. Set the Magnet Position to (0.55, 3.15, 0) and the Order in Layer to 3.

Next, add a Rigidbody 2D component and a Circle Collider 2D. Set the Mass to 200 and turn Freeze Rotation Z on. Make sure the Radius on the Circle Collider is set to 0.4. By the end, your Magnet should look something like this:

Fixed Joint - Magnet configuration

Now, from the Sprites folder, drag the MetalPlatform sprite into the MagnetPlatform GameObject. Set the Position of the MetalPlatform to (0.57, -1.73, 0), the Rotation to (0, 0, 90), and the Scale to (2, 1, 1). Set the Order in Layer to 3.

Next, add the following components: Rigidbody 2D, and a Box Collider 2D. On Rigidbody 2D, set the Mass to 1000 and enable Freeze Rotation Z. Finally, on BoxCollider 2D, set the Size to (2.31, 0.52).

Now that everything is ready, it’s time for you to configure the Fixed Joint 2D and establish the connection between the magnet and the metal platform. With MetalPlatform selected, add the FixedJoint 2D component.

This new joint allows two game objects with a rigid body to maintain a relative linear and angular offset to each other. In this case, you want the metal platform to keep a certain distance from the magnet.

These linear and angular offsets are set on the relative positions and orientations of the two connected game objects. This means that when you change the position of one of the objects in the Scene View, you will also update the offsets.

This joint uses a simulated spring that is configured to be as stiff as possible. However, it’s possible to change the spring’s strength and make it weaker. This simulated spring applies its force between the objects, and it’s responsible for keeping the offsets you desire.

Taking a look at the component itself, you will find several fields that you are already familiar with.

With the exception of two of those fields, all parameters work exactly like on the other joints, so there is no need to explain them again. Regarding the Damping Ratio, this parameter represents the degree to which you want to suppress spring oscillation. You can set values from 0 to 1, where 1 means no movement.

The Frequency is the frequency at which the spring oscillates while the two connected objects are approaching the separation distance. This parameter can take values from 1 to 1000000. The higher the value, the stiffer the spring will be.

Note that, although higher values mean a stiffer spring, if you set the Frequency to 0, the spring will be completely stiff.

Now, regarding the obstacle you are trying to build, all you need to do is to drag the Magnet game object to the Connected Rigid Body field. All the other parameters can be left with their default values.

Fixed Joint 2D - Configuration
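If you'd rather wire this up from code, a hypothetical setup script might look like this (the class name and magnetBody field are illustrative):

```csharp
using UnityEngine;

// Hypothetical sketch: connect the metal platform to the magnet via a Fixed Joint 2D.
public class MagnetPlatformSetup : MonoBehaviour
{
    public Rigidbody2D magnetBody; // the Magnet's Rigidbody 2D

    void Awake()
    {
        var joint = gameObject.AddComponent<FixedJoint2D>();
        joint.connectedBody = magnetBody;  // Connected Rigid Body
        // Damping Ratio, Frequency, Break Force, and Break Torque keep their defaults.
    }
}
```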

Press Play and test the new obstacle.

Now that the obstacle is ready, create a prefab of it by dragging the GameObject to the Obstacles folder.

Conclusion

This concludes the first tutorial about Unity Joints 2D. You learned about four joints, namely the distance, hinge, target, and fixed joints. With this knowledge you can already start creating your 2D worlds filled with dynamic and complex objects. In the second tutorial, you'll learn about the slider, relative, spring, and friction joints.

If you have any questions or comments, as always, feel free to drop a line in the comments.


Unity 2D Joints: Slider, Relative, Spring, and Friction Joints


In the previous tutorial, we started looking at how 2D physics joints work in Unity and how to use them to achieve great effects (without sacrificing the game performance). In that tutorial, we covered the distance, hinge, target, and fixed joints.

Today, we'll continue by looking at the slider, relative, spring, and friction joints.

Slider Joint

This joint allows a game object controlled by rigid-body physics to slide along a line in space. The object can move anywhere along the line in response to collisions, forces, or the joint's motor force. Your next obstacle will be a moving platform that continuously moves between two fixed points.

Create a new empty GameObject, call it MovingPlatform, and set the Position of the object to (72.36, 1.57, 0). Next, you need to create three different platforms. Go to the Platforms folder and drag the three sprites Platform1, Platform6, and Platform12 into the MovingPlatform.

The Position of Platform1 should be (0, -1, 0), while for Platform6 it's (14, -1, 0), and for Platform12 (20.3, -1, 0). After placing the sprites, your scene should look like this:

Slider Joint 2D - Initial positioning

Now that you have all the assets in place, it's time to create and configure the Slider Joint 2D. Select Platform6 and add a Slider Joint 2D component to it. Unity will automatically add a Rigidbody 2D as well; enable Freeze Rotation Z on it. Now, select Platform1 and add a Rigidbody 2D. Enable the options Is Kinematic and Freeze Rotation Z.

Now, select Platform6, remove the Platform Effector 2D component, and disable the option Used By Effector on the Box Collider 2D component.

The fields available in the Slider Joint 2D component are similar to those you saw in the Hinge joint, but with a few semantic changes. Because slider joints deal in linear rather than angular motion, Motor Speed is measured in units per second. Likewise, instead of angle limits, you can specify translation limits. Translation Limits work the same way as angle limits, except they specify the distance the rigid body can be from the Connected Anchor point.

Turn Enable Collision on, since you want the platform to collide with the connected platforms. Drag Platform1 to the Connected Rigid Body. Set the Connected Anchor to (-0.082, -0.0646). Enable Use Motor and set the Motor Speed to -2.
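For reference, the same configuration expressed in a hypothetical setup script (the names are illustrative; the values match the Inspector settings above):

```csharp
using UnityEngine;

// Hypothetical sketch: Platform6's slider joint configured from code.
public class SlidingPlatformSetup : MonoBehaviour
{
    public Rigidbody2D platform1Body; // Connected Rigid Body (Platform1)

    void Awake()
    {
        var slider = gameObject.AddComponent<SliderJoint2D>();
        slider.enableCollision = true;                         // collide with the connected platform
        slider.connectedBody = platform1Body;
        slider.autoConfigureConnectedAnchor = false;
        slider.connectedAnchor = new Vector2(-0.082f, -0.0646f);

        slider.useMotor = true;
        var motor = slider.motor;
        motor.motorSpeed = -2f;                                // units per second along the line
        slider.motor = motor;
    }
}
```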

If you press Play, you will see that the middle platform is now moving. However, once it has reached the nearest platform, it stops. To fix this, you will need to add the MovingPlatform script to Platform6. Drag Platform12 to Platform A and Platform1 to the field Platform B.

Slider Joint 2D - Script configuration

Press Play, and verify that everything is working as expected. Now that the obstacle is ready, create a prefab of it and place it in the Obstacles folder.

Relative Joint

Relative Joint 2D allows two game objects controlled by rigid-body physics to maintain a position based on each other’s location. To implement this joint, you will create two platforms. One is fixed in space, while the other will move around it, creating a circular obstacle.

Create a new empty GameObject at the Position (102.7, 4.2, 0) and name it Relative. Inside the Assets\GameAssets\Platforms folder, drag Platform11 into the Relative GameObject and change its name to Platform2. Change its Position to (-.45, -5.97, 0) and the Scale to (.5, .5, 1).  Add a Rigidbody 2D component and check the Is Kinematic property. The first part of the obstacle is created, so let's now proceed and add the second platform.

Add the MetalPlatform sprite (inside the Free Assets\Sprites folder) into the Relative GameObject, name it Platform, change the Order in Layer to 2, and set the Position to (-.37, 1.23, 0). Add two components: a Rigidbody 2D and a Box Collider 2D. Change the Size of the Box collider 2D to (2.32, 0.55).

Now add a Relative Joint 2D component to the Platform. Inside the Connected Rigid Body you should place Platform2.

The Max Force property sets the maximum force the joint can use to maintain the linear (straight-line) offset between the joined objects; a high value (up to 1,000) applies a strong force to maintain the offset. Max Torque does the same for the angular (rotation) offset between the joined objects. Set both values to 200. Auto Configure Offset should be checked.
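In script form, a hypothetical setup for the joint might look like this (the class name and platform2Body field are illustrative):

```csharp
using UnityEngine;

// Hypothetical sketch: the orbiting platform's relative joint configured from code.
public class RelativePlatformSetup : MonoBehaviour
{
    public Rigidbody2D platform2Body; // Connected Rigid Body (Platform2)

    void Awake()
    {
        var joint = gameObject.AddComponent<RelativeJoint2D>();
        joint.connectedBody = platform2Body;
        joint.maxForce = 200f;             // force used to maintain the linear offset
        joint.maxTorque = 200f;            // torque used to maintain the angular offset
        joint.autoConfigureOffset = true;  // let Unity compute the current offsets
    }
}
```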

If you run your game, you will see that nothing moves. You need to do a final step that involves adding the RelativeJoint (inside the Scripts folder) script into the Platform GameObject. Play the game and test if everything is working as intended.

The final joint configuration is:

Relative Joint 2D - Final configuration

Now that the obstacle is ready, create a prefab of it by placing the GameObject in the Obstacles folder.

Spring Joint

The Spring Joint 2D component allows two game objects controlled by rigid-body physics to be attached together as if by a spring. The spring will apply a specific force along its axis between the two objects, attempting to keep them a certain distance apart.

To use this joint, you will create a floating bridge composed of clouds; the clouds will be attached to another big cloud using the Spring Joint.

Start by creating an empty GameObject. Name it Spring and set its Position to (120, 10, 0). Add the Walkaway1 sprite (inside the Platform folder) into the Spring GameObject. Change its name to CloudSpring and the Order in Layer to 1. Add a Rigidbody 2D and check the Is Kinematic property.

Now, let's add four new sprites. Select the Platform9 sprite from the same folder and add four of them into the Spring GameObject. Name each object PlatSpring01 to PlatSpring04.

Each PlatSpring must be placed in the correct Position as follows:

  • PlatSpring01: -7.63, -12.35, 0

  • PlatSpring02: -2.67, -11.16, 0

  • PlatSpring03: 1.88, -12.44, 0

  • PlatSpring04: 6.77, -12.58, 0

Spring Joint 2D - Initial Positioning

Change the Order in Layer to 4 for all PlatSpring objects. Now, for each PlatSpring you need to add two components: a Rigidbody 2D and one Spring Joint 2D. For the Rigidbody 2D components, you only need to check the Freeze Rotation on the Z axis.

When you add the Spring Joint 2D, by default the joint is connected to the scene's center, since no Connected Rigid Body has been set.

Spring Joint 2D - Bad Connection Rigid Body illustration

That is not the intended effect. Therefore, you should attach the CloudSpring GameObject to the Connected Rigid Body of each PlatSpring.

Spring Joint 2D - Connection Rigid Body illustration

When you do that, the green line connection is automatically updated and placed inside the CloudSpring.

You can change the Connected Anchor and the Distance if you want. However, in this tutorial you will only change the Connected Anchor.

For each PlatSpring, change the Connected Anchor accordingly:

  • PlatSpring01: -4.8, 0

  • PlatSpring02: -1.7, 0

  • PlatSpring03: 1.07, -0.11

  • PlatSpring04: 4.13, -0.11

The complete configuration of PlatSpring01 is the following:

Spring Joint 2D - PlatSpring01 Configuration
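If you want to script the setup instead of clicking through the Inspector, a hypothetical sketch for PlatSpring01 could look like this (the class and field names are illustrative):

```csharp
using UnityEngine;

// Hypothetical sketch: one cloud platform's spring joint configured from code.
[RequireComponent(typeof(Rigidbody2D))]
public class CloudSpringSetup : MonoBehaviour
{
    public Rigidbody2D cloudSpringBody;   // the CloudSpring's Rigidbody 2D

    void Awake()
    {
        var spring = gameObject.AddComponent<SpringJoint2D>();
        spring.connectedBody = cloudSpringBody;            // Connected Rigid Body
        spring.autoConfigureConnectedAnchor = false;
        spring.connectedAnchor = new Vector2(-4.8f, 0f);   // PlatSpring01's value from the list above
        spring.autoConfigureDistance = true;               // keep the measured rest distance
    }
}
```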

Play your game and test if everything is well placed and configured. Now that the obstacle is ready, create a prefab of it by placing the GameObject in the Obstacles folder.

Friction Joint

The Friction Joint 2D connects objects controlled by rigid-body physics and reduces both the linear and angular velocities between the objects until they reach zero (i.e. it slows them down). To implement the Friction Joint 2D, you will create a moving platform that is connected to a cloud with physical properties.

Start by creating an empty GameObject. Name it FrictionPlatform and set its Position to (133, 5, 0). Add the Platform11 sprite (inside the Assets\Sprites folder) into the FrictionPlatform. Change its name to Support and set the Order in Layer to 1, its Position to (0, 0, 0), and its Scale to (.25, .25, 1). You should also add a Rigidbody 2D and check the Is Kinematic property. The remaining properties should be left at the defaults.

Now, add the MetalPlatform sprite into the FrictionPlatform. Change its Position to (.31, -8.33, 0), and the Order in Layer to 3. With the MetalPlatform selected, add a Rigidbody 2D, a Box Collider 2D, and a Platform Effector 2D. The Box Collider 2D's Size should be adjusted to (2.35, 0.5) with Used By Effector on, while on the Rigidbody 2D you must check Freeze Position Y and Freeze Rotation Z. You should now have something like the following:

Friction Joint 2D - Positioning

If you play the game and jump over the MetalPlatform, you will notice that nothing moves. Let's change that by adding a Friction Joint 2D to the MetalPlatform. The Connected Rigid Body should be filled with the Support. For this joint, we will let Unity set the best anchor coordinate by enabling Auto Configure Connected Anchor. The Max Force and Max Torque should be 1, while the Break Force and Break Torque should be Infinity.
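As with the other joints, the same setup can be expressed in a short hypothetical script (the names are illustrative):

```csharp
using UnityEngine;

// Hypothetical sketch: the friction joint between the MetalPlatform and the Support.
public class FrictionPlatformSetup : MonoBehaviour
{
    public Rigidbody2D supportBody; // the Support's Rigidbody 2D

    void Awake()
    {
        var friction = gameObject.AddComponent<FrictionJoint2D>();
        friction.connectedBody = supportBody;
        friction.autoConfigureConnectedAnchor = true; // let Unity pick the anchor
        friction.maxForce = 1f;                       // how strongly linear motion is damped
        friction.maxTorque = 1f;                      // how strongly rotation is damped
        friction.breakForce = Mathf.Infinity;
        friction.breakTorque = Mathf.Infinity;
    }
}
```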

Run your game and let's test the friction joint. Now that the final obstacle is ready, create a prefab of it and place it in the Obstacles folder.

Conclusions

This concludes the second tutorial about Unity 2D joints. You learned about four more joints, namely the slider, relative, spring, and friction joints. Combined with everything you learned in the first part, this knowledge lets you create complex 2D games with rich 2D physics.

If you have any questions or comments, as always, feel free to drop a line in the comments.

Basic 2D Platformer Physics, Part 5: Object vs. Object Collision Detection


In this part of the series, we'll start working towards making it possible for objects to not only interact physically with the tilemap alone, but also with any other object, by implementing a collision detection mechanism between the game objects.

Demo

The demo shows the end result of this tutorial. Use WASD to move the character. The middle mouse button spawns a one-way platform, the right mouse button spawns a solid tile, and the spacebar spawns a character clone. The sliders change the size of the player's character. The objects that detect a collision are made semi-transparent. 

The demo has been published under Unity 5.4.0f3, and the source code is also compatible with this version of Unity.

Collision Detection

Before we speak about any kind of collision response, such as making it impossible for the objects to go through each other, we first need to know whether those particular objects are overlapping. 

This might be a very expensive operation if we simply checked each object against every other object in the game, depending on how many active objects the game currently needs to handle. So to alleviate our players' poor processor a little bit, we'll use...

Spatial Partitioning!

This basically means splitting the game's space into smaller areas, which allows us to check for collisions only between objects belonging to the same area. This optimization is sorely needed in games like Terraria, where the world and the number of possible colliding objects are huge and the objects are sparsely placed. In single-screen games, where the number of objects is heavily restricted by the size of the screen, it is often not required, but it can still be useful.

The Method

The most popular spatial partitioning method for 2D space is quad tree; you can find its description in this tutorial. For my games, I'm using a flat structure, which basically means that the game space is split into rectangles of a certain size, and I'm checking for collisions with objects residing in the same rectangular space.

The Rectangular space for our objects

There's one nuance to this: an object can reside in more than one sub-space at a time. That's totally fine; it just means that we need to check for collisions against objects belonging to any of the partitions our object overlaps with.

An object residing in more than one sub-space

Data for the Partitioning

The basic data is simple: we need to know how big each cell should be, and we need a two-dimensional array in which each element is a list of the objects residing in a particular area. We'll place this data in the Map class.

In our case, I decided to express the size of the partition in tiles, and so each partition is 16 by 16 tiles big. 

For our objects, we'll want a list of areas that the object is currently overlapping with, as well as its index in each partition. Let's add these to the MovingObject class.

Instead of two lists, we could use a single dictionary, but unfortunately the performance overhead of using complex containers in the current iteration of Unity leaves much to be desired, so we'll stick with the lists for the demo.
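As a rough sketch, the partitioning data in the Map class and the bookkeeping lists in the MovingObject class might look like this (the field names, and the Vector2i integer-pair type from the earlier parts of the series, are assumptions based on the description above):

```csharp
// In the Map class: the partition size (in tiles) and the grid of object lists.
public const int cGridAreaWidth = 16;
public const int cGridAreaHeight = 16;
public List<MovingObject>[,] mObjectsInArea;
public int mHorizontalAreasCount;
public int mVerticalAreasCount;

// In the MovingObject class: which areas the object overlaps and its index
// within each of those areas' object lists.
public List<Vector2i> mAreas = new List<Vector2i>();
public List<int> mIdsInAreas = new List<int>();
```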

Initialize Partitions

Let's move on to calculate how many partitions we need to cover the entire area of the map. The assumption here is that no object can float outside the map bounds. 

Of course, depending on the map size, the partitions need not exactly match the map bounds. That's why we're using a ceiling of the calculated value, to ensure we have at least enough partitions to cover the whole map.

Let's initialize the partitions now.

Nothing fancy happening here—we just make sure that each cell has a list of objects ready for us to operate on.
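A sketch of that initialization, assuming the field names from above and a map size expressed in tiles (mWidth, mHeight):

```csharp
// Called once the map size is known. Uses a ceiling so the grid always covers
// the whole map, even when the map size isn't a multiple of the area size.
public void InitCollisionPartitions()
{
    mHorizontalAreasCount = Mathf.CeilToInt((float)mWidth / (float)cGridAreaWidth);
    mVerticalAreasCount = Mathf.CeilToInt((float)mHeight / (float)cGridAreaHeight);

    mObjectsInArea = new List<MovingObject>[mHorizontalAreasCount, mVerticalAreasCount];

    // Give every cell an empty list so we can add objects without null checks.
    for (var y = 0; y < mVerticalAreasCount; ++y)
        for (var x = 0; x < mHorizontalAreasCount; ++x)
            mObjectsInArea[x, y] = new List<MovingObject>();
}
```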

Assign Object's Partitions

Now it's time to make a function which will update the areas a particular object overlaps.

First off, we need to know which map tiles the object overlaps with. Since we're only using AABBs, all we need to check is what tile each corner of the AABB lands on.

Now to get the coordinate in the partitioned space, all we need to do is divide the tile position by the size of the partition. We don't need to calculate the bottom right corner partition right now, because its x coordinate will be equal to the top right corner's, and its y coordinate will be equal to the bottom left's.

This all should work based on the assumption that no object will be moved outside of the map bounds. Otherwise we'd need to have an additional check here to ignore the objects which are out of bounds. 

Now, an object may reside entirely in a single partition, it may straddle two, or it may occupy the space right where four partitions meet. All of this assumes that no object is bigger than the partition size; otherwise, a sufficiently big object could span any number of partitions, even the whole map. I've been operating under this assumption, so that's how we're going to handle things in this tutorial. The modifications for allowing bigger objects are quite trivial, though, so I'll explain them as well.

Let's start by checking which areas the character overlaps with. If all the corners' partition coordinates are the same, then the object occupies just a single area.

The object occupying a single area

If that's not the case and the coordinates are the same on the x-axis, then the object overlaps with two different partitions vertically.

An object occupying two of the same partitions along the x-axis

If we were supporting objects that are bigger than partitions, it'd be enough if we simply added all partitions from the top left corner to the bottom left one using a loop.

The same logic applies if only the vertical coordinates are the same.

An object occupying two of the same partitions along the y-axis

Finally, if all the coordinates are different, we need to add all four areas.

An object occupying four quadrants

Before we move on with this function, we need to be able to add and remove the object from a particular partition. Let's create these functions, starting with the adding.

As you can see, the procedure is very simple—we add the index of the area to the object's overlapping areas list, we add the corresponding index to the object's ids list, and finally add the object to the partition.
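A minimal sketch of that function, continuing with the assumed names from above:

```csharp
public void AddObjectToArea(Vector2i areaIndex, MovingObject obj)
{
    var area = mObjectsInArea[areaIndex.x, areaIndex.y];

    // Remember which area the object was added to and its index within that area's list.
    obj.mAreas.Add(areaIndex);
    obj.mIdsInAreas.Add(area.Count);

    // Finally, add the object to the partition itself.
    area.Add(obj);
}
```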

Now let's create the removal function.

As you can see, we'll use the coordinates of the area that the character is no longer overlapping with, its index in the objects list within that area, and the reference to the object we need to remove.

To remove the object, we'll be swapping it with the last object in the list. This will require us to also make sure that the object's index for this particular area gets updated to the one our removed object had. If we did not swap the object, we would need to update the indexes of all the objects that go after the one we need to remove. Instead, we need to update only the one we swapped with. 

Having a dictionary here would save a lot of hassle, but removing the object from an area is an operation that is needed far less frequently than iterating through the dictionary, which needs to be done every frame for every object when we are updating the object's overlapping areas.

Now we need to find the area we are concerned with in the areas list of the swapped object, and change the index in the ids list to the index of the removed object.

Finally, we can remove the last object from the partition, which now is a reference to the object we needed to remove.

The whole function should look like this:
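Here is a sketch of what it could look like, under the same naming assumptions (the Vector2i components are compared field by field):

```csharp
public void RemoveObjectFromArea(Vector2i areaIndex, int objIndexInArea, MovingObject obj)
{
    var area = mObjectsInArea[areaIndex.x, areaIndex.y];

    // Swap the object we want to remove with the last object in the list.
    var tmp = area[area.Count - 1];
    area[area.Count - 1] = obj;
    area[objIndexInArea] = tmp;

    // The swapped object now lives at objIndexInArea, so update its stored id for this area.
    for (int i = 0; i < tmp.mAreas.Count; ++i)
    {
        if (tmp.mAreas[i].x == areaIndex.x && tmp.mAreas[i].y == areaIndex.y)
        {
            tmp.mIdsInAreas[i] = objIndexInArea;
            break;
        }
    }

    // The last element is now the object we wanted to remove.
    area.RemoveAt(area.Count - 1);
}
```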

Let's move back to the UpdateAreas function.

We know which areas the character overlaps this frame, but last frame the object already could have been assigned to the same or different areas. First, let's loop through the old areas, and if the object is no longer overlapping with them then let's remove the object from these.

Now let's loop through the new areas, and if the object hasn't been previously assigned to them, let's add them now.

Finally, clear the overlapping areas list so it's ready to process the next object.

That's it! The final function should look like this:
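A sketch of the complete function, assuming the names above plus the AABB (center/halfSize) and GetMapTileAtPoint helpers from the earlier parts of the series:

```csharp
// Reusable scratch list so we don't allocate every frame.
static List<Vector2i> sOverlappingAreas = new List<Vector2i>(4);

public void UpdateAreas(MovingObject obj)
{
    // Which tiles do the AABB's corners land on? (The bottom right isn't needed.)
    var topLeft = GetMapTileAtPoint(obj.mAABB.center + new Vector2(-obj.mAABB.halfSize.x, obj.mAABB.halfSize.y));
    var topRight = GetMapTileAtPoint(obj.mAABB.center + obj.mAABB.halfSize);
    var bottomLeft = GetMapTileAtPoint(obj.mAABB.center - obj.mAABB.halfSize);

    // Tile coordinates -> partition coordinates.
    topLeft.x /= cGridAreaWidth;    topLeft.y /= cGridAreaHeight;
    topRight.x /= cGridAreaWidth;   topRight.y /= cGridAreaHeight;
    bottomLeft.x /= cGridAreaWidth; bottomLeft.y /= cGridAreaHeight;

    // Gather the (up to four) areas the object currently overlaps.
    if (topLeft.x == topRight.x && topLeft.y == bottomLeft.y)
        sOverlappingAreas.Add(topLeft);                       // a single area
    else if (topLeft.x == topRight.x)
    {
        sOverlappingAreas.Add(topLeft);                       // two areas, stacked vertically
        sOverlappingAreas.Add(bottomLeft);
    }
    else if (topLeft.y == bottomLeft.y)
    {
        sOverlappingAreas.Add(topLeft);                       // two areas, side by side
        sOverlappingAreas.Add(topRight);
    }
    else
    {
        sOverlappingAreas.Add(topLeft);                       // all four areas
        sOverlappingAreas.Add(topRight);
        sOverlappingAreas.Add(bottomLeft);
        sOverlappingAreas.Add(new Vector2i(topRight.x, bottomLeft.y));
    }

    var areas = obj.mAreas;
    var ids = obj.mIdsInAreas;

    // Remove the object from areas it no longer overlaps.
    // (Contains relies on Vector2i value equality, which default struct equality provides.)
    for (int i = 0; i < areas.Count; ++i)
    {
        if (!sOverlappingAreas.Contains(areas[i]))
        {
            RemoveObjectFromArea(areas[i], ids[i], obj);
            areas.RemoveAt(i);
            ids.RemoveAt(i);
            --i;
        }
    }

    // Add the object to any areas it newly overlaps.
    for (int i = 0; i < sOverlappingAreas.Count; ++i)
    {
        if (!areas.Contains(sOverlappingAreas[i]))
            AddObjectToArea(sOverlappingAreas[i], obj);
    }

    // Clear the scratch list so it's ready for the next object.
    sOverlappingAreas.Clear();
}
```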

Detect Collision Between Objects

First of all, we need to make sure to call UpdateAreas on all the game objects. We can do that in the main update loop, after each individual object's update call.

Before we create a function in which we check all collisions, let's create a struct which will hold the data of the collision. 

This will be very useful, because we'll be able to preserve the data as it is at the time of collision, whereas if we saved only the reference to an object we collided with, not only would we have too little to work with, but also the position and other variables could have changed for that object before the time we actually get to process the collision in the object's update loop.

The data which we save is the reference to the object we collided with, the overlap, the speed of both objects at the time of collision, their positions, and also their positions just before the time of collision.

Let's move to the MovingObject class and create a container for the freshly created collision data which we need to detect.
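A sketch of such a struct and the container (names like mAllCollidingObjects are assumptions carried through these sketches):

```csharp
public struct CollisionData
{
    public MovingObject other;         // the object we collided with
    public Vector2 overlap;            // signed overlap, from the owner's perspective
    public Vector2 speed1, speed2;     // speeds at the time of collision (1 = owner, 2 = other)
    public Vector2 oldPos1, oldPos2;   // positions the frame before the collision
    public Vector2 pos1, pos2;         // positions at the time of collision

    public CollisionData(MovingObject other, Vector2 overlap,
                         Vector2 speed1, Vector2 speed2,
                         Vector2 oldPos1, Vector2 oldPos2,
                         Vector2 pos1, Vector2 pos2)
    {
        this.other = other;
        this.overlap = overlap;
        this.speed1 = speed1;
        this.speed2 = speed2;
        this.oldPos1 = oldPos1;
        this.oldPos2 = oldPos2;
        this.pos1 = pos1;
        this.pos2 = pos2;
    }
}

// In the MovingObject class: all collisions recorded for this object this frame.
public List<CollisionData> mAllCollidingObjects = new List<CollisionData>();
```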

Now let's go back to the Map class and create a CheckCollisions function. This will be our heavy duty function where we detect the collisions between all the game objects.

To detect the collisions, we'll be iterating through all the partitions.

For each partition, we'll be iterating through every object within it.

For each object, we check every other object that is further down the list in the partition. This way we'll check each collision only once.

Now we can check whether the AABBs of the objects are overlapping each other.

Here's what happens in the AABB's OverlapsSigned function.

As you can see, if an AABB's size on any axis is zero, it cannot be collided with. The other thing you could notice is that even if the overlap is equal to zero, the function will return true, as it will reject the cases in which the gap between the AABBs is larger than zero. That's mainly because if the objects are touching each other and not overlapping, we still want to have the information that this is the case, so we need this to go through. 

Finally, once the collision is detected, we calculate how much the AABB overlaps the other AABB. The overlap is signed: if the overlapping AABB is on this AABB's right side, the overlap on the x-axis will be negative, and if the other AABB is on this AABB's left side, the overlap on the x-axis will be positive. This will make it easy later on to move out of the overlapping position, as we know in which direction we want the object to move.
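A sketch of that function, assuming the AABB's center and halfSize fields from the earlier parts (the sign convention follows the description above):

```csharp
// In the AABB struct. Returns true when the boxes overlap or merely touch.
public bool OverlapsSigned(AABB other, out Vector2 overlap)
{
    overlap = Vector2.zero;

    // Degenerate AABBs (zero size on an axis) can't collide, and a positive gap
    // on either axis means there is no contact; touching (gap == 0) still counts.
    if (halfSize.x == 0.0f || halfSize.y == 0.0f
        || other.halfSize.x == 0.0f || other.halfSize.y == 0.0f
        || Mathf.Abs(center.x - other.center.x) > halfSize.x + other.halfSize.x
        || Mathf.Abs(center.y - other.center.y) > halfSize.y + other.halfSize.y)
        return false;

    // Signed overlap: negative when the other AABB is to our right/above us,
    // positive when it's to our left/below us, so we know which way to move out.
    overlap = new Vector2(
        Mathf.Sign(center.x - other.center.x) * ((other.halfSize.x + halfSize.x) - Mathf.Abs(center.x - other.center.x)),
        Mathf.Sign(center.y - other.center.y) * ((other.halfSize.y + halfSize.y) - Mathf.Abs(center.y - other.center.y)));

    return true;
}
```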

Moving back to our CheckCollisions function, if there was no overlap, that's it, we can move to the next object, but if an overlap occurred then we need to add the collision data to both objects.

To make things easy for us, we'll assume that the 1's (speed1, pos1, oldPos1) in the CollisionData structure always refer to the owner of the collision data, and the 2's are the data concerning the other object. 

The other thing is, the overlap is calculated from the obj1's perspective. The obj2's overlap needs to be negated, so if obj1 needs to move left to move out of the collision, obj2 will need to move right to move out of the same collision.

There's still one small thing to take care of: because we're iterating through the map's partitions, and one object can be in multiple partitions at the same time (up to four in our case), it's possible that we'll detect an overlap for the same two objects up to four times.

To remove this possibility, we simply check whether we've already detected a collision between two objects. If that's the case, we skip the iteration.

The HasCollisionDataFor function is implemented as follows.
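A minimal sketch, assuming the mAllCollidingObjects container from the earlier sketch:

```csharp
// In the MovingObject class: have we already recorded a collision with this object?
public bool HasCollisionDataFor(MovingObject other)
{
    for (int i = 0; i < mAllCollidingObjects.Count; ++i)
    {
        if (mAllCollidingObjects[i].other == other)
            return true;
    }
    return false;
}
```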

It simply iterates through all the collision data structures and looks up whether any already belong to the object we are about to check collision for. 

This should be fine in the general use case, as we're not expecting an object to collide with many other objects, so looking through the list is going to be quick. However, in a different scenario it might be better to replace the list of CollisionData with a dictionary, so instead of iterating we could tell right away whether an element is already in or not.

The other thing is that this check saves us from adding multiple copies of the same collision to the same list, but if the objects are not colliding, we'll still check for overlap multiple times if both objects belong to the same partitions.

This shouldn't be a big concern, as the collision check is cheap and the situation is not that common, but if it were a problem, the solution might be to simply have a matrix of checked collisions or a two-way dictionary, fill it up as the collisions get checked, and reset it right before we call the CheckCollisions function.
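Putting those steps together, a sketch of the whole function might look like this (mSpeed, mPosition, mOldPosition, and mAABB are the object's fields assumed from the earlier parts of the series):

```csharp
public void CheckCollisions()
{
    Vector2 overlap;

    // Iterate through every partition cell.
    for (int y = 0; y < mVerticalAreasCount; ++y)
    {
        for (int x = 0; x < mHorizontalAreasCount; ++x)
        {
            var objectsInArea = mObjectsInArea[x, y];

            // Check every pair in this cell exactly once.
            for (int i = 0; i < objectsInArea.Count - 1; ++i)
            {
                var obj1 = objectsInArea[i];

                for (int j = i + 1; j < objectsInArea.Count; ++j)
                {
                    var obj2 = objectsInArea[j];

                    // The same pair can share up to four cells; skip if already recorded.
                    if (obj1.HasCollisionDataFor(obj2))
                        continue;

                    if (obj1.mAABB.OverlapsSigned(obj2.mAABB, out overlap))
                    {
                        // 1's are always the owner of the data, 2's the other object;
                        // obj2 gets the negated overlap so it moves out the opposite way.
                        obj1.mAllCollidingObjects.Add(new CollisionData(obj2, overlap,
                            obj1.mSpeed, obj2.mSpeed, obj1.mOldPosition, obj2.mOldPosition,
                            obj1.mPosition, obj2.mPosition));
                        obj2.mAllCollidingObjects.Add(new CollisionData(obj1, -overlap,
                            obj2.mSpeed, obj1.mSpeed, obj2.mOldPosition, obj1.mOldPosition,
                            obj2.mPosition, obj1.mPosition));
                    }
                }
            }
        }
    }
}
```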

Now let's call the function we just finished in the main game loop.

That's it! Now all our objects should have the data about the collisions.

To test if everything works properly, let's make it so that if a character collides with an object, the character's sprite will turn semi-transparent.

Reviewing Collisions via Animation

As you can see, the detection seems to be working well!

Summary

That's it for another part of the simple 2D platformer physics series. We managed to implement a very simple spatial partitioning mechanism and detect the collisions between each object. 

If you have a question, a tip on how to do something better, or just have an opinion on the tutorial, feel free to use the comment section to let me know!

How to Create an Animated Waterfall


Are you ready to fight another dangerous enemy of your game? An enemy? Yes: it's performance!

As we discussed in my last article, poor performance will kill a great game.

On mobile and web platforms especially, performance is a really big issue. But this is not just a modern problem: in the past, PCs and consoles were far less powerful than they are today, and by looking to the past we can find great, smart solutions.

Today you'll learn how to put a great waterfall inside your game without degrading performance.

We'll use the following items:

  • Unity 5 (it's free: download it now!)
  • A texture like this:
The Waterfall Texture

And... stop! It's kind of magic!

The Waterfall Effect

Check the video below:

This is a great waterfall, isn't it? I love it. I would really like to put a waterfall like that in my game. But how?

We can start with a deconstruction technique. Look at the scene carefully: what parts is it made of?

  • the river (on the top)
  • the waterfall (vertical)
  • the river (on the bottom)
  • some steam where the waterfall ends
  • some spray where the waterfall starts and ends

We can do it with a couple of items, without killing the performance.

But first, I'll explain to you...

Why Can't We Use "Real Water" on Mobile?

In a 3D game, water is made with specific shaders and components (like particles) that cost a lot of time during the rendering phase. For this reason, high-level solutions are not recommended for a mobile game.

This tutorial is written in order to suggest a simple way of putting a waterfall inside a game without losing FPS.

1. Create the First Scene

  • Open Unity
  • Create three planes as in the example below
The waterfalls structure
  • Create three materials with the Mobile/Particles/Alpha Blended shader (call them Waterfall_bottom, Waterfall_main, and Waterfall_top).
  • Add the "waterfall_texture" texture to all three materials.

2. The Code

  • The "core" of this idea is to use the Animated UV Map.
  • Create a new script in C# (name it "ScrollUV").
  • Inside, put a really short piece of code that scrolls the texture's UV offset (see the sketch after this list).
  • Save the file and add this script to all planes.
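A minimal version of the ScrollUV idea might look like the following; it simply scrolls the material's main texture offset every frame so the water appears to flow (the speed value is an assumption you'll tune per plane, with the vertical plane faster):

```csharp
using UnityEngine;

// Scrolls the material's UV offset over time to fake flowing water.
public class ScrollUV : MonoBehaviour
{
    public float speed = 0.5f;   // scroll speed; make the vertical plane faster

    Material material;

    void Start()
    {
        material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        // Move the UV offset along Y; the texture wraps as it repeats.
        Vector2 offset = material.mainTextureOffset;
        offset.y += speed * Time.deltaTime;
        material.mainTextureOffset = offset;
    }
}
```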

3. Adjust the Numbers

You must adjust the tiling of the textures in the three materials to match the size of your planes.

Adjusting the tiling of the texture is an important step

Also, you must adjust the speed: the vertical plane should be faster than the others.

The final effect should be like this:

The final effect

And... it's finished. Press play and see the result.

Bonus: A Particle System

If you want to add some particles in order to add more effects to your waterfall, here are a couple of ideas.

Note: the textures are made with Paint or similar. They are very, very simple, so anybody can create textures like that.

Spray

Create a PS and named it "PS_spray". Move it in your scene until it is at the bottom of the waterfall.

Use these parameters:

Particle System for spray effect

Duplicate it and move it until it's at the top of the waterfall.

Steam

Create a PS and named it "PS_steam". Move it in your scene until it is at the bottom of the waterfall.

Use these parameters:

Particle System for steam effect

Bonus 2: Lava

This is a simple trick to have a lava waterfall: change the color of the texture like this:

The Lava Texture

You may want to increase the max particles of the steam, and remember to remove the spray particles.

Conclusion

Sometimes the quick solution is also the best solution.

This technique, for example, was born in the '90s, when PCs were less powerful than today and developers had to find creative solutions to work within their limitations.

The "trick" of an animated UV map is perfect for a lot of situations.

For example, you can use it to animate a background. Or, if the texture has more tiles (like a "frame from a cartoon"), you can create a short cinematic sequence. The only limit is your imagination.

Why is it important to understand techniques like this nowadays? Because in game dev you will always run into limits that you'll have to find a way around. And studying the past is, in my opinion, the best way to learn for the future.

Basic 2D Platformer Physics, Part 6: Object vs. Object Collision Response


In the previous installment of the series, we implemented a collision detection mechanism between the game objects. In this part, we'll use the collision detection mechanism to build a simple but robust physical response between the objects.

The demo shows the end result of this tutorial. Use WASD to move the character. The middle mouse button spawns a one-way platform, the right mouse button spawns a solid tile, and the spacebar spawns a character clone. The sliders change the size of the player's character. 

The demo has been published under Unity 5.4.0f3, and the source code is also compatible with this version of Unity.

Collision Response

Now that we have all the collision data from the work we've done in the previous part, we can add a simple response to colliding objects. Our goal here is to make it possible for the objects to not go through each other as if they were on a different plane—we want them to be solid and to act as an obstacle or a platform to other objects. For that, we need to do just one thing: move the object out of an overlap if one occurs.

Cover the Additional Data

We'll need some additional data for the MovingObject class to handle the object vs. object response. First of all, it's nice to have a boolean to mark an object as kinematic—that is, this object will not be pushed around by any other object. 

These objects will work well as platforms, and they can be moving platforms as well. They are supposed to be the heaviest things around, so their position will not be corrected in any way—other objects will need to move away to make room for them.

The other data that I like to have is information on whether we're standing on top of an object or to its left or right side, etc. So far we could only interact with tiles, but now we can also interact with other objects. 

To bring some harmony into this, we'll need a new set of variables which describe whether the character is pushing something on the left, right, top, or bottom.

Now that's a lot of variables. In a production setting, it would be worth turning these into flags and having just one integer instead of all these booleans, but for the sake of simplicity we will leave them as they are.

As you may notice, we have quite fine-grained data here. We know whether the character pushes or pushed an obstacle in a particular direction in general, but we can also easily inquire whether we're next to a tile or an object.
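As a sketch, the new state in the MovingObject class might look like this (the names, and the tile-side flags such as mPushesRightTile carried over from the earlier parts, are assumptions):

```csharp
// In the MovingObject class.
public bool mIsKinematic = false;   // kinematic objects are never pushed around

// Are we pressing against another object this frame / were we last frame?
public bool mPushesRightObject, mPushesLeftObject, mPushesBottomObject, mPushesTopObject;
public bool mPushedRightObject, mPushedLeftObject, mPushedBottomObject, mPushedTopObject;

// Convenience accessors that combine the tile-side and object-side flags.
public bool PushesRight  { get { return mPushesRightTile  || mPushesRightObject; } }
public bool PushesLeft   { get { return mPushesLeftTile   || mPushesLeftObject; } }
public bool PushesBottom { get { return mPushesBottomTile || mPushesBottomObject; } }
public bool PushesTop    { get { return mPushesTopTile    || mPushesTopObject; } }
```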

Move Out of Overlap

Let's create the UpdatePhysicsResponse function, in which we'll handle the object vs. object response.

First of all, if the object is marked as kinematic, we simply return. We don't handle the response because the kinematic object need not respond to any other object—the other objects need to respond to it.

Now, this is assuming that we won't need a kinematic object to have the correct data concerning whether it's pushing an object on the left side, etc. If that's not the case, then this would need to be modified a bit, which I'll touch on later down the line.

Now let's start to handle the variables which we've just recently declared.

We save the results of the previous frame to the appropriate variables and for now assume that we're not touching any other object.

Let's start iterating through all our collision data now.

First off, let's handle the cases in which the objects are barely touching each other, not really overlapping. In this case, we know we don't really have to move anything, just set the variables. 

As mentioned before, the indicator that the objects are touching is that the overlap on one of the axes is equal to 0. Let's start by checking the x-axis.

If the condition is true, we need to see whether the other object is on the left or right side of our AABB.

Finally, if it's on the right then set the mPushesRightObject to true and set the speed so it's no greater than 0, because our object can no longer move to the right as the path is blocked.

Let's handle the left side the same way.

Finally, we know that we won't need to do anything else here, so let's continue to the next loop iteration.

Let's handle the y-axis the same way.

This is also a good place to set the variables for a kinematic body, if we need to do so. We wouldn't care if the overlap is equal to zero or not, because we are not going to move a kinematic object anyway. We'd also need to skip the speed adjustment as we wouldn't want to stop a kinematic object. We'll skip doing all this for the demo, though, as we're not going to use the helper variables for kinematic objects.

Now that this is covered, we can handle the objects which have properly overlapped with our AABB. Before we do that, though, let me explain the approach I took to collision response in the demo.

First of all, if the object we bump into does not move, it should remain unmoved: we treat it as a kinematic body. I decided to go this way because I feel it's more generic, and pushing behaviour can always be handled further down the line in the custom update of a particular object.

When we bump into another object the other object should not move

If both objects were moving during the collision, we split the overlap between them based on their speed. The faster they were going, the larger part of the overlap value they will be moved back.

The last point is, similarly to the tilemap response approach, if an object is falling and while going down it scratches another object even by one pixel horizontally, the object will not slide off and continue going down, but will stand on that one pixel.

When we bump into another object, the other object should not move, even if it's coming vertically

I think this is the most malleable approach, and modifying it should not be very hard if you want to handle some response differently.

Let's continue the implementation by calculating the absolute speed vector for both objects during the collision. We'll also need the sum of the speeds, so we know what percentage of the overlap our object should be moved.

Note that instead of using the speed saved in the collision data, we're using the offset between the position at the time of collision and the position in the frame prior to that. This is more accurate here, because the saved speed represents the movement vector before the physical correction. The positions themselves are corrected if the object has hit a solid tile, for example, so if we want a corrected speed vector we should calculate it like this.

Now let's start calculating the speed ratio for our object. If the other object is kinematic, we'll set the speed ratio to one, to make sure that we move the whole overlap vector, honoring the rule that the kinematic object should not be moved.

Now let's start with an odd case in which both objects overlap each other but both have no speed whatsoever. This shouldn't really happen, but if an object is spawned overlapping another object, we'd like them to move apart naturally. In that case, we'd like both of them to move by 50% of the overlap vector.

Another case is when the speedSum on the x axis is equal to zero. In that case we calculate the proper ratio for the y axis, and set that we should move 50% of the overlap for the x axis.

Similarly we handle the case where the speedSum is zero only on the y-axis, and for the last case we calculate both ratios properly.

Now that the ratios are calculated, we can see how much we need to offset our object.
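A sketch of the whole ratio calculation, covering the edge cases above (mIsKinematic is an assumed flag on the other object):

```csharp
float speedRatioX, speedRatioY;

if (other.mIsKinematic)
{
    // Kinematic objects never move, so we take the full overlap ourselves.
    speedRatioX = speedRatioY = 1.0f;
}
else if (speedSum.x == 0.0f && speedSum.y == 0.0f)
{
    // Neither object moved (for example, spawned inside each other): split evenly.
    speedRatioX = speedRatioY = 0.5f;
}
else if (speedSum.x == 0.0f)
{
    speedRatioX = 0.5f;
    speedRatioY = absSpeed1.y / speedSum.y;
}
else if (speedSum.y == 0.0f)
{
    speedRatioX = absSpeed1.x / speedSum.x;
    speedRatioY = 0.5f;
}
else
{
    speedRatioX = absSpeed1.x / speedSum.x;
    speedRatioY = absSpeed1.y / speedSum.y;
}

// How far our object needs to be pushed back on each axis.
float offsetX = overlap.x * speedRatioX;
float offsetY = overlap.y * speedRatioY;
```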

Now, before we decide whether we should move the object out of collision on the x axis or the y axis, let's calculate the direction from which the overlap has happened. There are three possibilities: either we bumped into another object horizontally, vertically, or diagonally. 

In the first case, we want to move out of overlap on the x-axis, in the second case we want to move out of the overlap on the y axis, and in the last case, we want to move out of overlap on whichever axis had the least overlap.

Looking at our paths coming from each direction

Remember that to overlap with another object we need the AABBs to overlap each other on both x and y axes. To check whether we bumped into an object horizontally, we'll see if the previous frame we were already overlapping the object on the y-axis. If that's the case, and we haven't been overlapping on the x-axis, then the overlap must have happened because in the current frame the AABBs started to overlap on the x axis, and therefore we deduct that we bumped into another object horizontally.

First off, let's calculate if we overlapped with the other AABB in the previous frame.

Now let's set up the condition for moving out of the overlap horizontally. As explained before, we needed to have overlapped on the y axis and not overlapped on the x axis in the previous frame.

If that's not the case, then we will move out of overlap on the y-axis.

As mentioned above, we also need to cover the scenario of bumping into the object diagonally. We bumped into the object diagonally if our AABBs weren't overlapping in the previous frame on any of the axes, because we know that in the current frame they are overlapping on both, so the collision must have happened on both axes simultaneously.

In the case of a diagonal bump, however, we want to move out on the x-axis only if the overlap on the x-axis is smaller than the overlap on the y-axis.
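Expressed in code, assuming the stored old positions are the AABB centers from the previous frame:

```csharp
// Were the two AABBs already overlapping on each axis in the previous frame?
bool overlappedLastFrameX = Mathf.Abs(data.oldPos1.x - data.oldPos2.x)
                            < mAABB.halfSize.x + other.mAABB.halfSize.x;
bool overlappedLastFrameY = Mathf.Abs(data.oldPos1.y - data.oldPos2.y)
                            < mAABB.halfSize.y + other.mAABB.halfSize.y;

// Resolve on the x-axis if we bumped into the object horizontally, or if we bumped
// into it diagonally and the x overlap is the smaller one.
bool resolveHorizontally =
    (!overlappedLastFrameX && overlappedLastFrameY)
    || (!overlappedLastFrameX && !overlappedLastFrameY
        && Mathf.Abs(overlap.x) < Mathf.Abs(overlap.y));
```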

That's all the cases solved. Now we actually need to move out the object from the overlap.

As you can see, we handle it very similarly to the case where we just barely touch another AABB, but additionally we move our object by the calculated offset.

The vertical correction is done the same way.
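A sketch of the resolution itself. The sign convention assumed here is that the overlap vector points away from the other object, so adding the scaled overlap to our position separates the two AABBs:

```csharp
if (resolveHorizontally)
{
    mPosition.x += offsetX;

    if (overlap.x < 0.0f)
    {
        // We were pushed to the left, so the other object is on our right.
        mPushesRightObject = true;
        mSpeed.x = Mathf.Min(mSpeed.x, 0.0f);
    }
    else
    {
        mPushesLeftObject = true;
        mSpeed.x = Mathf.Max(mSpeed.x, 0.0f);
    }
}
else
{
    // The vertical correction mirrors the horizontal one.
    mPosition.y += offsetY;

    if (overlap.y < 0.0f)
    {
        mPushesTopObject = true;
        mSpeed.y = Mathf.Min(mSpeed.y, 0.0f);
    }
    else
    {
        mPushesBottomObject = true;
        mSpeed.y = Mathf.Max(mSpeed.y, 0.0f);
    }
}
```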

That's almost it; there's just one more caveat to cover. Imagine the scenario in which we land on two objects simultaneously. We have two nearly identical collision data instances. As we iterate through all the collisions, we correct the position of the collision with the first object, moving us up a bit. 

Then, we handle the collision for the second object. The saved overlap at the time of collision is no longer up to date, because we have already moved from the original position, and if we handled the second collision the same way we handled the first, we would again move up a little, correcting our object twice the distance it was supposed to move.

Vertically landing on a group of objects

To fix this issue, we'll keep track of how much we've already corrected the object. Let's declare the vector offsetSum right before we start iterating through all the collisions.

Now, let's make sure to add up all the offsets we applied to our object in this vector.

And finally, let's offset each consecutive collision's overlap by the cumulative vector of corrections we've done so far.
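Sketched out, the bookkeeping looks like this, with the details of the touch and overlap handling elided:

```csharp
Vector2 offsetSum = Vector2.zero;

for (int i = 0; i < mAllCollidingObjects.Count; ++i)
{
    // Shift the stored overlap by whatever corrections we've already applied.
    Vector2 overlap = mAllCollidingObjects[i].overlap - offsetSum;

    // ...touch checks and overlap resolution as shown above...

    if (resolveHorizontally)
    {
        mPosition.x += offsetX;
        offsetSum.x += offsetX;
    }
    else
    {
        mPosition.y += offsetY;
        offsetSum.y += offsetY;
    }
}
```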

Now if we land on two objects of the same height at the same time, the first collision would get processed properly, and the second collision's overlap would be offset to zero, which would no longer move our object.

Vertically landing on a group of objects

Now that our function is ready, let's make sure we use it. A good place to call this function would be after the CheckCollisions call. This will require us to split our UpdatePhysics function into two parts, so let's create the second part right now, in the MovingObject class.

In the second part we call our freshly finished UpdatePhysicsResponse function and update the general pushes left, right, bottom, and top variables. After this, we only need to apply the position.

Now, in the main game update loop, let's call the second part of the physics update after the CheckCollisions call.
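A sketch of the second half of the update and of the call site in the main loop. The flag and list names are assumptions carried over from the earlier parts of the series, so match them to your own project:

```csharp
// MovingObject: second half of the physics update.
public void UpdatePhysicsP2()
{
    UpdatePhysicsResponse();

    // Combine the per-object flags with the flags gathered from the tilemap pass.
    mPushesLeftWall = mPushesLeftTile || mPushesLeftObject;
    mPushesRightWall = mPushesRightTile || mPushesRightObject;
    mOnGround = mOnGroundTile || mPushesBottomObject;
    mAtCeiling = mAtCeilingTile || mPushesTopObject;

    // Apply the (possibly corrected) position to the transform.
    transform.position = new Vector3(Mathf.Round(mPosition.x), Mathf.Round(mPosition.y),
                                     transform.position.z);
}

// Main game update loop:
CheckCollisions();

for (int i = 0; i < mObjects.Count; ++i)
    mObjects[i].UpdatePhysicsP2();
```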

Done! Now our objects can no longer overlap each other. Of course, in a real game we'd need to add a few things, such as collision groups, so that not every object has to detect or respond to collisions with every other object, but those details depend on how you want things set up in your game, so we won't delve into them here.

Summary

That's it for another part of the simple 2D platformer physics series. We made use of the collision detection mechanism implemented in the previous part to create a simple physical response between objects. 

With these tools it's possible to create standard objects such as moving platforms, pushing blocks, and custom obstacles: objects which can't really be part of the tilemap, but which still need to act as level terrain in some way. There's one more popular feature that our physics implementation is still lacking, and that's slopes.

Hopefully in the next part we'll start extending our tilemap with support for slopes, which would complete the basic set of features a simple 2D platformer physics implementation should have, and bring the series to a close.

Of course, there's always room for improvement, so if you have a question or a tip on how to do something better, or just have an opinion on the tutorial, feel free to use the comment section to let me know!

Show Us What You've Made With Envato for a Chance to Win $250!


Have you created something using a tutorial here on Envato Tuts+? Whether it's an illustration, an audio project or pretty much anything else, you can submit it to this month's Made With Envato contest for a chance to win $250 in Envato Market credits!

There are also prizes up for grabs for items you've created using a purchase from Envato Market, Envato Elements, or Envato Studio. So if you've used any Envato products this month, read on to find out how you can enter and win.

 Made With Envato contest banner

The Prizes You Can Win

Envato is giving away $1,000 in Envato Market credits every month to the people who've created the best projects using Envato products. 

And this month, for the first time, there are prizes on offer in four different categories, which means more chances to win!

Here's how that prize fund is split:

  • $250: Best Audio & Video submission
  • $250: Best Web Design & Development submission
  • $250: Best Graphic Design, Illustration, Photo, or 3D submission
  • $250: Best Elements, Studio, or Tuts+ submission

How You Can Enter

To enter the Made With Envato contest, simply share the projects you've created using Envato products. It could be something you've created using a purchase on Market, a service from Studio, a tutorial you've used from Tuts+, or something you've used from your Elements subscription.

To enter, simply upload an image of your project to the Made With Envato forum thread. You must include the name and link of the item(s), services, or tutorials you used from Envato Market, Envato Elements, Envato Studio, and/or Envato Tuts+. You can also share your work on Twitter or Instagram using the hashtag #madewithenvato.

That's it! Pretty simple, and completely free. You can enter up to three projects per month, and even if you don't win this month, your entry will still be considered for contests in future months. So keep on entering, and you never know when you'll win!

The deadline is 31 January, so there's still time to create something new if you don't already have a project to enter.

Get More Info

For full details of the contest, along with those all-important terms & conditions, see the Made With Envato contest announcement page. And if you need some inspiration, check the December and November prize winners!

Made With Envato December winner

Lighting in Unity 5

Final product image
What You'll Be Creating

Unity has become more and more popular among aspiring game programmers. This is due to the fact that Unity directly supports a multitude of platforms such as mobile, desktop, and console environments. Moreover, it is free to use for lower revenue developers or studios.

Unity supports several technologies and components. Some of the key components are the lighting and illumination techniques. In Unity you can illuminate a scene by simulating the complex behaviour of lights or by applying a simple lighting model.

This tutorial will focus on explaining how lighting works in Unity 5, the lighting types and properties, and how to use them to create rich illumination effects.

Prerequisites

First, ensure you have the latest version of Unity. In this tutorial we're using version 5.5.0f3. Make sure that you are using the latest Unity version; otherwise you may have problems following the tutorial and using the lighting features.

Next, download the LightingInUnity5-Starter file. Unzip and open the project in Unity. The Demo scene should open automatically, but if not, you can open it from the Assets folder.

Direct and Indirect Lighting

In a real-world scenario, you have two types of lighting effects: direct and indirect lighting. Direct lighting, as the name suggests, is light that comes directly from a light source (a lamp, the sun, or other). On the other hand, indirect lighting is light that comes from another object.

Let us test and implement both of them.

Direct lighting can already be seen in action in your Scene. As you probably noticed, you have a directional light in the scene called Directional Light.

Directional lights are mostly used in outdoor scenes for sun and moonlight. They affect all the surfaces of the objects in the scene. They are also the least expensive to the graphics processor.

You can select the Directional Light and view its properties in the Inspector pane. You will notice several interesting properties such as Type, Color, and Intensity. You can play with the Color and the Rotation properties and see the result in real time. By changing those values, you are directly playing with directional light.

Directional Light Inspector Rotation

At this moment, your 3D scene does not have any indirect light. You can check that by moving the Sphere closer to the green or red wall. The Sphere color will not change.

Global Illumination - Sphere surface

One way to use indirect lighting is to use static GameObjects. Keep in mind that the use of static game objects can improve the performance of your game rendering, but will also reduce the quality, so you must find the right balance for your game.

Add a box mesh to your scene (GameObject > 3D Object > Cube), and place it next to the sphere. Name it Cube.

Global Illumination - Box mesh

Select the Cube, and inside the Inspector enable the Static property.

Global Illumination - Cube Inspector Properties

When you enable the Static property, Unity automatically creates a light map for that object and applies the correct illumination model. You can now move the camera towards the Cube and see the indirect lighting working. The green (in the image) or the red color is now spreading into the Cube.

Global Illumination - Light map with shadows

If you change the direction of the Directional Light, you will notice that Unity automatically updates the light maps as well.

The indirect lighting that is being applied to the Cube is not applied to the Sphere because the Sphere is not static. You can solve this by making the Sphere static (Inspector > Static).

At this moment, you can add and configure direct and indirect lighting to static objects. However, 3D scenes are composed of dynamic objects. Therefore, how can we apply such effects to those objects? That question leads us to the next section.

Light Probes

When your scene contains non-static objects, you need to use specific lighting techniques to correctly illuminate them so that they don't look disconnected from the scene.

Through the use of light probes and their positions, you can sample strategic points in the scene. Each light probe can sample a specific region and then calculate the lighting in that specific region. These calculations are fast enough to be used during gameplay. The use of light probes avoids any disconnect between the lighting of moving objects and static light-mapped objects in the scene.

The Sphere is a dynamic object, so, contrary to how it may look, it is not correctly illuminated. If you turn the Directional Light off, you will notice that the only lighting affecting the Sphere is the ambient light. To correctly illuminate the Sphere, you need to use light probes.

To add light probes to an area of your scene, you go to Component > Rendering > Light Probe Group. This will create a light probe group in your scene.

Light Probes - Light probe group

The next step is to put the probes of the group in the correct position. You now want to place them near each corner of the Box.

The best way to do this is to change to an orthographic view (by clicking on the cube on the upper right corner of the scene). Then, select each light probe node and drag it into each Box corner (similar to the next image).

Light Probes - Side selection

Repeat the process until you have positioned all nodes.

Light Probes - Procedure

There are scenes that require additional probes to correctly illuminate objects. In order to add more probes, you can select one probe and then under the Inspector, click on the Duplicate Selected button.

Light Probes - Light Probe Group

After the duplication, you need to place the new node in the right place (the duplicated one will spawn in the same position as the selected one).

Light Probes - Probes positioning

If you take a closer look at the interface in the Inspector, you will notice that you can also add individual probes (Add Probe), delete probes (Delete Selected), or select all the probes from the group (Select All).

To see the light probes in action, select the Sphere and add a Rigidbody to it (AddComponent > Rigidbody). Then, assign a material to the collider. You can use the rubber material.

Light Probes - RigidBody

Now, place the Sphere near the top of the Box and press Play. You can now see the proper illumination of the Sphere. To see the differences, disable the light probes and Play the scene again.

Light Probes - Testing the light probes

Point Light

Point lights are the most common light in games. They're usually used for explosions and light bulbs. Since they emit light in all directions, they have an average cost on the graphics processor. However, calculating shadows when using point lights is more expensive.

Add a point light by selecting GameObject > Light > Point Light. Next, position the Point light inside the Box near the top.

Point light - light positioning

Select the Point light and take a look at the parameters in the Inspector.

Point light - Inspector parameters

The first parameter is Type. Here you can set the type of light you want to use. You can either select Spot, Directional, Point, or Area. Each provides a specific light effect. You can select any of the options and see the results in real time. However, for this tutorial you will use the Point option. This will create a light bulb effect (light shines in all directions equally).

The second parameter is Baking. You can set it to Realtime, Baked, or Mixed. Leave the value as default. The Range parameter defines how far the light is emitted from the center of the Point light. The Color parameter defines the emitted light color.

Intensity defines the brightness of the light, and Bounce Intensity defines the indirect light intensity multiplier. The Shadow Type defines the shadow properties and shadow types. You can either set it to No Shadows, Hard Shadows, or Soft Shadows. Remember that point light shadows are the most expensive for the engine, so be careful when selecting this option.

If you select Hard Shadows or Soft Shadows, you will see the shadow produced by the Sphere and the Point light. When you change the Strength value, the shadow will also attenuate or accentuate. The Resolution lets you define the detail level of the shadows. Finally, Bias and Normal Bias let you configure the offset used when comparing the pixel position in light space with the value from the shadow map.

The Cookie is an optional parameter that represents an alpha channel of a texture used as a mask that determines how bright the light is in different positions. Since this is a point light, the texture used must be a Cubemap.

The Draw Halo option only renders a halo around the light source. Flare defines a reference to a flare to be rendered at the light's position. Flare and Draw Halo can be useful when debugging the 3D scene and possible bottlenecks.

Render Mode defines how important lighting is when the renderer is rendering the scene. The more important, the more intensive the rendering will be. The Render Mode can be set to Auto, Important, or Not Important. Finally, Culling Mask is used to select or exclude groups of objects that are affected by the point light.

Now that you've seen all the point light properties, let's talk about direct and indirect lighting when using point lights. When you create a game, one of the main characteristics is the use of lighting and illumination effects. However, a heavy lighting scenario will lead to heavier loading times, FPS bottlenecks, and more work for the CPU. Therefore, you are advised to play with Shadows Type and Baking properties to balance your scene performance.

For example, if you set the Baking to Realtime and the Shadow Type to Hard Shadows or Soft Shadows, you will have a very nice, realistic shadow effect between the Sphere and the Box. However, this setup can be heavy for the CPU if your scene is composed of many lights.

How to solve that issue? You can set the Baking parameter to Baked. Once you do this, Unity will automatically create lightmaps for this light, giving you a performance boost. However, this performance boost has its cost: you no longer have shadows being cast by the point light onto dynamic objects, like the Sphere.

The best practice is to analyze each light individually, and then proceed accordingly. For example, in this scene it would make more sense to use Realtime baking and Hard Shadows, because only one light source is present inside the box.

However, if your light source is a torch on a wall, it would be better to set Baking to Baked and let Unity produce the lightmaps.

Recall that it is up to the developer to manage the quality and performance of the game. Therefore, keep in mind that you have to be careful in keeping that balance, especially if you are developing for mobile devices.

Spot Light

Spot lights emit light from a light source targeting a specific region. They only illuminate objects within a specific region, a region delimited by a 3D cone. Basically, they work like the headlights of a car. As you may imagine, they are perfect for flashlights, car headlights, or lamp posts. They are also the most expensive light type for the graphics processor.

Let us now change the Point light into a spot light. Select the Point light you created earlier and change its Type to Spot. Next, Rotate the light so that it can illuminate the floor of the Box. Rename it to Spot light.

Spot light configuration

As you may have noticed, inside the Inspector, the Spot parameters are similar to the Point ones. However, you have a new one called Spot Angle.

Spot light Inspector properties

The Spot Angle determines the angle of the light cone in degrees.

Regarding direct and indirect lighting, spot lights work exactly like point lights. You have the same limitations and advantages. Therefore, you have to be careful when setting up the balance of lights in your game.

Note that you can always play with shadows and baking in order to balance looks and performance.

Area Light

An area light emits light in all directions from one side of a rectangular area of a plane. The rectangle is defined by the Width and Height properties. Area lights are only available during lightmap baking, which means that they have no effect on objects at runtime.

Select the Spot light and change its Type to Area and its name to Area light. Next, position the light inside your box, and in the Inspector, change the Width and the Height, in order to cover the whole area inside the Box.

Area light positioning

By looking at the Area light parameters, you'll notice that most of them are similar to the previous ones, for example Type, Draw Halo, Flare, Render Mode, and Culling Mask.

The real new parameters are the Width and Height. Both are used to set the size of the rectangular light area.

Area light Inspector properties

If you press Play, you will see that the area light is casting its light on all objects within its range. The size of the rectangle is determined by the Width and Height properties. The side on which light is being cast is the plane’s normal, which is the same as the light’s positive Z direction. The light is emitted from the whole surface of the rectangle. Because of this, shading and shadows from the affected object tend to be much softer than with point or directional light sources.

The lighting calculations for an area light are quite processor-intensive, so they are not available at runtime and can only be Baked into light maps.

Conclusions

This concludes the tutorial about lighting in Unity 5. You learned about several lighting effects and configurations. With this knowledge, you can now apply several lighting effects to your game or application.

Unity has an active ecosystem around it. There are many products that can help you build out your project, and the nature of the platform also makes it a great way to improve your skills. Whatever the case, you can see everything we have available in the Envato Marketplace.

If you have further questions or comments, as always, feel free to drop a line in the comments section.

The Four Elements of Game Design: Part 1


What is a game? There are a lot of theories, and while most game designers will agree on certain aspects, there has never really been a solid answer. 

Game design is only really in its infancy: while board games like chess have been around for thousands of years, it's only really in the past few decades that people started taking game design seriously. With the rise in popularity of both board and computer gaming, people now expect more and more from their games, meaning games which entertained us in the 1980s very rarely hold up to today's standards.

Fat worm blows a sparky image
"Fat Worm Blows a Sparky". Although critically acclaimed at the time, the creator later admitted that it suffered from questionable design: "it seemed logical that the players ought to suffer [as much as the developer]".

While game design is a complex task, the process of designing a game does not have to be hard. There are some simple rules we should follow, and we can view these as the absolute fundamentals—the elements of game design. As creators and artists, we do not always have to follow these rules, but understanding them will allow us to break them on our own terms.

So, what is a game? Well, that's a complex question, so we need to break it down. Let's examine the first layer of game design: what is the most absolutely fundamental aspect of a game?

1. Challenge

A game is, at its core, a challenge. The simplest games—throwing rocks at things, or “tag, you're it” running games—were, once upon a time, important survival techniques. Fast runners were able to outrun predators, and good rock throwers could hunt more reliably.

The path from rock-throwing to online deathmatch isn't exactly clear, but it seems that games satisfy some desire deep within us. We feel elated when we win, and get upset when we lose. Gaming is a fairly primal desire.

So in order for a game to feel satisfying, we need some sort of challenge: a goal or objective. Traditionally, we set win-states and lose-states (save the princess, don't die), but challenges are not just about winning the game. Every obstacle, every puzzle, every adversary defeated is a challenge. We tend to break these challenges up: micro-challenges (such as jump a pit, kill a bad guy), main challenges (complete the level), and the overall challenge (complete the game). 

Of course, not all challenges have to be set by the designer: there are communities based around concepts like speedrunning, and certain players enjoy “Ironman” mode in games (where if you die once, you start over at the very beginning) or “pacifist runs”, where the player isn't allowed to kill anyone. One particularly interesting and complex challenge is the A-press challenge, where the player completes a Mario level by pressing the A button as few times as possible.

What Is a Game Without a Challenge?

Essentially, a game without challenge is not a game—it's a toy. This does not necessarily have to be a bad thing, however, as Minecraft and The Sims are both fantastically popular and tend to fall more under the "toy" category (although players can set their own challenges).

There are also several rather fantastic online toys, such as Daniel Benmergui's "Storyteller" or Ben Pitt's "You are the road". They do not have any real win or lose conditions, are great fun to play with, and are excellent examples of what is possible if you want to go down this path.

Games can also contain toy elements within them. Spore is a game wherein you design a species to go from microscopic life to space-faring race, but for some people simply creating wonderful monsters within the creature creator was enough. If you've ever spent more than five minutes in a character creator for an RPG or Sims-style game, then you can probably appreciate the importance of creating something exactly how you envision.

Spore creature creator image
The spore creature creator, showing off a suitably strange individual

That said, toys are not games. If you want to make a toy, go ahead. If you want to make a game, you need a challenge.

What Happens When We Do Challenge Incorrectly?

It's tempting to think of “bad challenge” as making the game too easy or too hard. It's an important part of game design, and something we've talked about in a previous article, but there's more to it than that. A challenge has to be fair to the user, and that means not only setting the difficulty at a reasonable level, but ensuring that the player can be reasonably expected to complete it.

One obvious example of “bad challenge” is the card game Solitaire (or Klondike, to use its proper name). Depending on the variant, an estimated 79% of games of Solitaire are winnable. This means that before you even make your first move, there is a 21% chance that you cannot possibly win.

This 79% win rate also assumes that the player moves perfectly—that they have complete knowledge of the deck and of all future moves. Of course, in reality, there are times when you'll be given what seem like two identical options (do I move the 4 of spades, or the 4 of clubs?) and one of them will put you into an unwinnable gamestate. With no way to determine the correct move other than guessing, the “challenge” of Solitaire can often feel more like blind luck than skill. 

Despite this, Solitaire is arguably the most popular computer game of all time—partly due to being bundled with Windows, but also partly due to its simplicity and quick playtime.

Freecell game image
Game 11982, aka the impossible freecell level.

If making our challenge unreasonably difficult is bad, then so is the reverse—making the challenge non-existent. For most of us, games are abstract learning experiences. When we have mastered a game, it no longer provides entertainment. This is why we rarely play children's games such as snakes and ladders or tic-tac-toe—if we are able to “solve” the game, then there is no challenge, and therefore no enjoyment to be had.

Of course, solvability depends on the player's skill. Children enjoy tic-tac-toe because it provides that all-important challenge, whereas a super-gamer might be able to solve Connect 4. A player needs to constantly be challenged to maintain interest: this is why many people dream of being chess grandmasters, but very few people care about tic-tac-toe championships.

But, of course, a game is more than just a challenge. If I asked you to name 100 different animals, then you might be challenged, but you probably wouldn't have fun. Listing animals isn't really a game—it's simply a test of your knowledge. So what makes a challenge fun? What separates a test from a game?

2. Choice

Choice makes a challenge interesting—or specifically, meaningful choice does. When we go into a game, we expect to make choices, and for these choices to affect the game. These choices can be academic (do you want to be a warrior or a mage) or be split-second decisions in the heat of combat (do we attack, or anticipate a counterattack and dodge?).

The choices we make are reflections on our skill in the game. As we play, we get better at the game, and we make the “correct” choices more often. If games were once important for human survival, then it is because we were able to train ourselves for dangerous situations. The ability to make choices is central to that, and our choices show us who lives, and who becomes tiger food.

Player choices should be reflected in the gameplay. If the gameplay does not change as the result of choices the player makes, then we have to ask, why are we playing? A massive RPG might have some dialogue decisions which do nothing, but we accept them because overall we are making choices that do matter. 

Giving “fake choice” can make a player feel more involved, but too much of it is likely to make the game feel cheap and meaningless. The free indie game Emily Is Away is quite engaging (and certainly worth an hour of your time), but the game's ending is arguably cheapened by being inevitable no matter what you do.

Enter the Gungeon image
A typical fake choice from "Enter the Gungeon". Selecting "resist" starts the boss fight, whereas selecting "give up" brings you to the second screen, which then returns you to make your choice again. 

What Is a Game Without Choice?

Games are, in many regards, tests. If you remove the ability for players to make choices, you instead make them passive observers, and turn your game into an “interactive movie”. One criticism we often hear about “low choice games” (including certain RPGs) is that the player doesn't feel like anything they do matters, and the story continues regardless.

So can a game without choice even be considered a game? Well, to some extent. Snakes and ladders is considered a game by many people, although there is literally no way to influence the outcome short of flipping the table. It is important to note, though, that snakes and ladders is played almost exclusively by children. 

The challenges facing children are different to those facing most adults: learning to play with your friends, counting how many squares you move, seeing where the ladders go. Because snakes and ladders does not rely on skill, it also means adults and children can play together quite happily. For most of us, snakes and ladders isn't a “proper” game—but for children, it's good enough.

What Happens When We Provide Incorrect Choices?

The concept of choice within gaming is quite massive, so rather than attempt to cover it all here, we'll take a look at some of the most common mistakes people make designing choice mechanics:

“No choice”— removing the ability of the player to make choices. There are several ways to do this, but one of the most common is player vs. player abilities. When players use abilities like stunlocks or invincibility, then in-game interaction disappears and the “victim” will find themselves having little to no fun. 

This is also something we've discussed before, but the key is to maintain interaction between players. If you're planning on implementing any form of PvP combat, consider designing abilities that promote choice rather than denying it—a spell which prevents enemies from attacking or using spells still allows them to run away or drink potions, and is infinitely more interesting than simply stunning them.

“Choice doesn't matter”—a situation where no matter what the player chooses, all outcomes are the same. Doing this removes agency from the player, and often makes them feel as if they're not the ones in control.

One notable example is from COD:BLOPs, specifically the Cuba mission. Rather than trying to describe it in detail, I highly recommend you watch the YouTube video. The player is able to complete the level with minimal player input, calling into question the "game" element of it. Noted video game critic Totalbiscuit also addresses the issue in a video below.

“One correct choice”—when one option is so strong that it makes the decision-making process a formality: games where certain characters or cards are so powerful that they dominate the meta, so players always pick the best character in a fighting game, or always choose to be an elf when making a wizard. Although it can be very hard to create a perfect balance, no one option should be automatically better than everything else. This even pops up in Monopoly, where the correct response to landing on a property is always to buy it.

“Uninformed choice”—when choices presented to a player aren't explained. Forge an alliance with Clan Douglas or Clan Fraser? Who are they, and why do I care? A player should be able to understand why they are making choices, and what long-term effects those choices are likely to have.

A common way this happens is by overwhelming a new player with choices. When a player is learning the game, they are unlikely to understand the expected outcome of every option available to them. In fact, an abundance of choices can lead to "analysis paralysis", where the player simply fails to make a choice because they are so overwhelmed.

This is one of the reasons why a good tutorial is so useful, and why it's often useful to lock game aspects until a player has demonstrated mastery of basic skills. For complex strategy games such as Crusader Kings or Europa Universalis, many new players are simply scared off because they have no idea what they should be doing.

The problem of uninformed choice is further compounded if one of the choices later turns out to be “wrong”—as an example, if an RPG player levels up skills that later turn out to be useless. A player might fancy themselves as an ice wizard, only to find out that ice spells are vastly inferior to fire spells in the late game. These are sometimes referred to as “noob traps”, as they initially seem appealing, only to reveal their uselessness as the game progresses.

The whole concept of choice is what makes a challenge fun. When a player is able to interact with the game, then they become active participants. When a player is removed from the game, then they become passive observers. When you engage the player, you ensure that the challenge maintains their interest.

Conclusion

Of course, the concepts of challenge and choice are still only part of game design. We can present a challenge to the player, but we need to ensure that the skill of the player is fully tested. If a player manages to jump over a pit, then we can't automatically assume the player has mastered jumping. If they kill one enemy, then they haven't mastered combat. So how do we make sure that the player can test themselves to the fullest?

We'll examine how to do that (and look at the final two elements) in our next article.


The Four Elements of Game Design: Part 2


In our previous article, we examined some of the aspects of game design, in particular challenge and choice. Challenge and choice can create something which is entertaining, but by themselves do not make a game—otherwise a quiz could be considered a game. 

To fulfil the game criteria, the player needs to be tested not just once, but continually. We need a third element.

3. Change

A game is more than a singular challenge. It is, essentially, a series of micro challenges—we might face hundreds or thousands during the course of the game. Things like “collect a coin” or “kill an enemy” or “jump the pit”. 

Sometimes, the player might deliberately “fail” a micro-challenge in order to benefit overall (intentionally taking damage to reach a powerup) but doing counter-intuitive moves is often a result of complex gameplay and a clear overall goal. A game needs to contain challenges which are not static, and revolve around choices the player has previously made.

Change is what provides content. When we play Mario, we don't repeat the first level over and over; we want to progress to the later levels. It's not just the desire to complete the game, but the satisfaction of overcoming more difficult challenges which drives us. 

Mario 1-1 level
By now, most gamers are familiar with level 1-1 from Super Mario Bros. (Image taken from Mario Wiki)

A player wins a game when all challenges have been completed. A player gets bored of a game when all challenges become trivial. If changing things around provides us with more content (more challenges), then the player will take longer to get bored of the game. This is partly why some "hardcore" gamers choose to deride casual games such as Farmville: they see the game as trivial, and therefore boring. Of course, the definition of trivial (and boring) varies for every individual.

There are many ways to provide the stimulus of change within a game. The most obvious is simply having multiple levels, but achievements can also provide an aspect of change. Completing a game shows you're good at it, but a pacifist run where the player doesn't kill shows mastery of a different sort. It provides more content to the player and in some cases can double the lifespan of a game.

What Is a Game Without Change?

Without change, players become bored very quickly. The use of change not only provides content, but allows the players to feel as if they're making progress. Mario (again) is a great example of this. Although in the first Mario, worlds were uniform, in Super Mario Bros 2, the worlds changed. World 1 was a grass world, World 2 was a desert, and so forth. Not only did the environment change, but some monsters were also world-specific, so the worlds had specific feels to them.

Theming levels is one way to provide a certain feel, and that slight graphical swap can sometimes be enough to reaffirm a feeling of progress. Other games have used similar tricks, including the old "palette swap", aka changing a few colours. Old JRPGs would take standard enemies and paint them red or black as "upgraded" versions, and Mortal Kombat famously includes nine different characters who utilise the same basic character model.

Mortal Kombat ninjas
Ermac, Tremor, Scorpion, Reptile, Chameleon, Sub-Zero, Rain, Smoke and Noob Saibot. image by Chakham on deviantart.

It's useful to think of the micro-challenges within a game as puzzles. A puzzle is something that can be solved; solving it again doesn't provide you with any greater accomplishment. Minor variations on that puzzle can keep things interesting indefinitely, however. 

Crosswords are probably the greatest example of this—the basic principle behind the crossword is simple, but by simply varying the words, enough content can be produced for newspapers across the world every day.

What Happens When We Do Change in the Wrong Way?

As we've seen, no change or not enough change simply means the player will get bored quickly. The otherwise excellent mobile game Flow Free comes with over 1,000 levels, with more purchasable. While some people will love this, it seems likely that most players will never get near to completing them all. Of course, it's hard to argue that “too much content” is a bad thing, but most of the levels end up looking very samey—and although the levels change, the puzzle behind them remains essentially static. It doesn't take long for a decent player to “solve” the game, and at that point solving the levels themselves becomes trivial.

This is the risk that procedurally generated content faces. Although, in theory, infinite content sounds amazing, the fact is that without some very clever design behind those algorithms, most of the content generated will become stale very quickly as players solve the "core puzzles". 

Imagine creating a game which asks the player to add two random numbers. Once the player has solved 11+8, then solving 12+7 or 18+1 doesn't actually provide a new experience. And if all your game consists of is adding numbers together, then players will get bored pretty quickly.

On the flip side, it's also possible to implement change too quickly. The gradual implementation within a game is often referred to as a difficulty curve. Players want that challenge, and ideally the difficulty should increase in line with their skill. There's nothing wrong with the odd difficulty spike, but the player shouldn't find themselves hitting an impossible brick wall.

Undertale undyine battle
Undertales Undyne fight. Here, the game goes from being a reasonable challenge to "hope you like dying repeatedly". 

One other possible problem is changing too much about the game itself. That is, changing things outside our expectations. Imagine playing a racing game that suddenly switches to a first-person shooter, or a puzzle game that suddenly switches to a platformer.

These are extreme examples, but they do happen. 

Bioshock utilised pipemania-style puzzles for its hacking section, although for the most part, this mostly optional content was well received. 

Less well received was the mandatory boss combat in Deus Ex: Human Revolution. While most players had no issues, players who had decided to put character points into stealth suddenly found themselves against a brick wall. Not only did it break what the game (for these players) was about, but it meant that the choices the player had made before (investing points into stealth) were “wrong”: a trap choice for the player.

So, change is important, but it's essentially just one half of the "content creation" coin. If players only need challenge, choice, and change, then what stops crossword puzzles from being games? What about jigsaws, or sliding block puzzles? These things are "game-y", but they are not games. We need one last thing—something which ensures the player is constantly engaged, and cannot just "solve" our game immediately.

4. Chance

Chance is the final element of game design. As we play our game, we make plans on how to acquire gold, defeat our foes, and otherwise accomplish our goals. However, we won't always be successful: our gaming skill is fully tested when we are able to adapt to conditions which weren't initially planned. You're not expected to play perfectly, as long as your overall strategy is sound.

Chance can exist in games in many forms. Most common are dice, cards, and random number generators, but humans can provide their own chance as well. Humans misjudge, make gambits, and otherwise play unpredictably. Without that human element, games like chess would be remarkably dull—two players facing each other would have the same outcome every time.

What Is a Game Without Chance?

A game without chance is simply a puzzle. It can be solved, and when a puzzle is solved, it no longer provides entertainment. This is why newspapers publish a new crossword every day, rather than simply republishing the same one over and over.

So how does chance affect game design? Imagine that I give you a map of the world and four coloured pens, and ask you to colour in the countries in such a way that no two colours touch. This is something you should be able to accomplish with a bit of time and brainpower. It might briefly keep you entertained, but it's not really a game. It's a puzzle, and it has a solution.

World map in 4 colours
The four-colour theorem applied to the world map. (Although a fifth colour is used for the sea for clarity, it's not necessary.)

If, however, I randomly colour a country in every time you make a move, things change. Your end goal is essentially the same, but now you have an additional parameter to worry about. You can't simply colour countries in one at a time; you need to start thinking about how to avoid my countries. 

The game no longer has a singular solution—it has a selection of “best fit” possible solutions. The game cannot be solved on the first move: every move requires “resolving” the game. Indeed, the game might unfold in such a way that winning isn't even possible.

This is what really defines a game, and separates games from puzzles. You are trying to make the best decisions in an ever-changing world, and that change is unpredictable. You can make informed decisions and calculated choices, but there is very rarely a correct choice—simply a best choice.

What Happens When We Do Chance in the Wrong Way?

The balancing of chance within a game is a delicate act. Because a game is a direct test of players' skill, we tend to find that players of low skill levels (such as children or non-gamers) favour games with a high chance aspect, and high-skill players (hardcore gamers or competitive players) dislike it. This is, unsurprisingly, because players like winning, and luck tends to neutralise the impact skill has.

The main danger when implementing chance is that it can override choice. Choices a player makes might be rendered unimportant by the roll of the die. When a player feels as if the choices they are making are pointless, they will lose interest. Imagine a Risk player who only rolled 1's for the whole game. Of course, some players will like high-variance games, but that's part of understanding your player base.

We've talked before, both briefly and in depth, about the implementation of randomness within games, but the basic principle is this: a player needs to feel both that they are capable of winning and that the choices they make are relevant. Tipping the scales of chance one way or the other can affect that and cause frustration.

Putting It All Together

When we make our game, we use these four elements, layered on top of each other, to bring our vision to life. We can think of it this way:

A game is something where a player is set a challenge. The player may make choices to overcome the challenge, although chance dictates that the player cannot guarantee the choices they make are correct. The choices the player makes change the state of the game, so that the player must constantly be re-evaluating the best choice for any situation. Once a player has made a sufficient number of choices, the game is won or lost.

This is the essence of game design. There are further concepts that can be built upon this—ideas like narrative, exploration, and music—but these are all dressing. They enhance the experience of a game, but they are not necessarily relevant to the game itself.

Why Care?

The most important question, of course, is why is any of this important? We all know what a game is, why spend so long breaking it down?

Simply, because so many of us get it wrong. Many fledgling designers make games that show off how clever the designer is, but fail to appreciate that a game is supposed to be played. Even experienced game designers working for large companies can get it wrong on occasion, individual egos refusing to take on criticism. Look at any game that failed critically, and you're guaranteed to be able to identify where they broke these rules.

Of course, sometimes you're able to break the rules and get away with it. No-one is really criticising the developers of Minecraft for not making creative mode hard enough. But Minecraft's creative mode isn't really a game—it's a toy. It's a canvas where you can create beautiful worlds without the risk of being eaten by monsters. A world that has inspired amazing feats of ingenuity, from replicas of Star Trek's Enterprise to a working, in-game mobile phone.

So before you make your game, think about what you're aiming for. Game design is a complex field, and you need to juggle storytelling, music direction, art design, and a million other things. But if you fail at the very core—if your foundation is unstable—then you're building on sand, and it'll come crumbling down around you.

New Camera Features in Phaser


Introduced in the 2.4.7 version of Phaser, the new camera features look really interesting and it's worth checking them out. In this tutorial you'll see how easy it is to apply camera effects in your HTML5 games built with Phaser.

Note: if you need an introduction to the Phaser framework, you can check Getting Started With Phaser: Building "Monster Wants Candy", where I break down the source code and explain it in detail.

There are three interesting new features you can use: camera flash, fade, and shake. They do exactly what you can expect from them. Let's see why those are very useful and should be considered in your next game development project with Phaser.

Until now I was using the Juicy plugin to achieve such functionalities, but the source code was abandoned a long time ago and I had to manage it myself. Now with the features being built-in and part of the Phaser source code I don't have to worry about any compatibility issues or framework updates. They are also a lot easier to implement.

Enclave Phaser Template

I'll use Enclave Phaser Template as a case study—it's a set of basic functionalities, from states through audio and highscore management to tweens and animations. The template is open sourced and available on GitHub as part of the open.enclavegames.com initiative, so you can easily see how it all got implemented, including the new camera effects.

Enclave Phaser Template

Ok, let's move on to the actual implementation part.

Camera Flash

Flashing the camera can be used for hit or impact effects—for example, when the player is hit by the enemy's bullet, the screen can turn red for a short moment. Here's the flash camera effect with the parameters explained:
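A sketch of the call, assuming the camera is reached through game.camera inside a state:

```javascript
// flash(color, duration, force)
// color:    the fill colour as a hex value (default 0xffffff, i.e. white)
// duration: how long the flash takes to fade out, in milliseconds (default 500)
// force:    if true, replaces any flash effect that is already running (default false)
game.camera.flash(0xff0000, 250, false);
```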

It fills the screen with a solid color and fades away to alpha 0 over the given duration. You can use the force parameter to overwrite any other flash effect and have this one as the only one running at the moment. The default color is white, and the flash lasts for half a second (500 milliseconds):
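So the two calls below are equivalent:

```javascript
// Explicit defaults and the short form do the same thing:
game.camera.flash(0xffffff, 500, false);
game.camera.flash();
```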

Flashing the camera can be used for various effects. In the Enclave Phaser Template, it makes a nice seamless transition when bringing up a new state, to show the main menu after all the resources have finished loading. Instead of showing everything instantly, we can use the flash with black color as a base:
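In the menu state that might look like the following sketch; only the relevant part of create is shown:

```javascript
// MainMenu.js (sketch): flash from black once the menu has been built.
create: function () {
    // ...set up the menu sprites, buttons, and audio first...

    // Start from a solid black fill and fade it out over half a second.
    this.game.camera.flash(0x000000, 500);
},
```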

It is executed at the very end of the create function in the MainMenu.js file representing the menu state. You can see the effect in action on a gif:

Enclave Phaser Template - Flash

As you can see, this achieves a nice, smooth appear effect. Now let's move on to camera fade.

Camera Fade

To make the feeling of moving between states complete, we can use fade to achieve a reversed flash and make the state fade out smoothly. Done properly, this produces a fade-out, fade-in transition, which looks really nice. Here's the theory:
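The call mirrors flash; only the direction of the effect is reversed:

```javascript
// fade(color, duration, force): fills the screen towards the given colour,
// starting from alpha 0 and ending with a solid fill.
game.camera.fade(0xff0000, 250, false);
```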

The parameters are exactly the same as in a camera's flash, except the default color is not white, but black:
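So the default call is simply:

```javascript
game.camera.fade(0x000000, 500, false);  // identical to game.camera.fade();
```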

It starts filling the screen from alpha 0 with the given color and ends with a solid fill. The actual source code from the clickStart action on the Start button in the MainMenu.js file looks like this:
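Reproduced here as a sketch rather than a verbatim copy of the template; the target state key ('Story') and the timer helper are assumptions:

```javascript
// MainMenu.js (sketch): fade to black, then switch states after the same delay.
clickStart: function () {
    this.game.camera.fade(0x000000, 200);

    this.game.time.events.add(200, function () {
        this.game.state.start('Story');
    }, this);
},
```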

It fades the screen over a period of 200 milliseconds and then launches a new state after the same amount of time to synchronize both actions. This is what it looks like in action:

Enclave Phaser Template - Fade

Combining flash and fade makes a nice transition between states. Now, let's move on to the shake effect.

Camera Shake

Another useful Phaser camera method is shake—it can be used for situations where a player hits the obstacles when flying through the asteroid field, or utilizing a powerful bomb from the inventory. It can be executed when colliding with the game objects floating on the screen.

The first parameter controls how much the camera shakes, and the second one how long the shake will last. The third one, if set to true, replaces any shake effect that is already running. The fourth controls whether the shake is horizontal, vertical, or both, and the last parameter decides whether the camera can shake outside of the world bounds, showing whatever is there. Here's the example with the default values:
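With every parameter written out, the defaults look like this:

```javascript
game.camera.shake(0.05, 500, true, Phaser.Camera.SHAKE_BOTH, true);
```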

It will shake the camera with 0.05 intensity, for half a second (500 milliseconds), the force parameter is set to true, the camera will shake in both directions, and also outside of the world bounds. If you don't need to customize the shake and leave the default parameters, then you can just omit them in the call and it will work the same as above:
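In other words:

```javascript
game.camera.shake();  // same as the fully spelled-out call above
```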

And this is how the actual shake looks in the Enclave Phaser Template source code when the points are added in the Game.js file:
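Sketched here with illustrative values; the exact numbers in the template may differ, but as noted below, the last three arguments match the defaults:

```javascript
// Game.js (sketch): a gentler, shorter shake whenever points are added.
this.game.camera.shake(0.02, 250, true, Phaser.Camera.SHAKE_BOTH, true);
```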

The last three parameters are exactly the same as the default ones, so could have been omitted, but were left for educational purposes. See it in action:

Enclave Phaser Template - Shake

In this case the intensity is lower than the default value, and the duration is shorter to make it feel a little bit weaker, so it won't distract the player too much.

ResetFX

There's also a handy little method along with those three explained above. You can reset any active effects, and from the programming point of view you don't even have to know if there are any running at the given time—there's a special resetFX method you can use.
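It takes no arguments:

```javascript
game.camera.resetFX();
```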

It immediately clears any ongoing camera effects and removes them from the screen.

Events

If you want to know if any specific effect is active or already ended, you can use the events provided by the framework: onFlashComplete, onFadeComplete, and onShakeComplete.

Remember the fade example on button click in the main menu? It was done by waiting a fixed amount of time, and then the state was switched to a new one. We can do it better using the onFadeComplete event:
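Something along these lines, with the Game state key taken from the description below:

```javascript
// Story.js (sketch): switch states exactly when the fade finishes.
this.game.camera.onFadeComplete.addOnce(function () {
    this.game.state.start('Game');
}, this);

this.game.camera.fade(0x000000, 200);
```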

This approach is implemented in the next step, in the Story.js file, when switching from the Story state to the Game one. You have to admit it looks better, and the new state is launched exactly when the effect completes, no matter how long it lasts.

Summary

As you can see, those are quite powerful features when it comes to adding this extra "juice" or polish to your games. They are at the same time also very easy to use—it's great to see them natively implemented in Phaser.

Feel free to grab the source code of Enclave Phaser Template, implement the effects, and share links to your newly upgraded games with us in the comments!

Getting Started With Crafty: Introduction


If you have ever developed HTML5 games before, you might be familiar with a few game engines that can make game development a lot easier. They have all the code you need to detect collisions, spawn different entities, or add sound effects to your game. In this tutorial, you will learn about another such game engine called Crafty. It is very easy to use and supports all major browsers, including IE9. 

Once minified, the library is only 127kb in size, so it won't result in any major delay in loading your game. It supports sprite maps, so you can easily draw game entities on the screen. You can also create custom events that can be triggered whenever you want and on whatever object you want. 

There are also components for sounds, animation, and other effects. Overall, this game engine is a great choice if you want to develop some basic HTML5 games.

Initial Setup

The first thing that you need to do is include Crafty in your project. You can either download the file and load it in your projects or you can directly link to the minified version hosted on a CDN. Once the library has been loaded, you can use the following line to initialize Crafty:
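For example, with a 500×350 stage hosted in a div with the id game (the size and element are just examples):

```javascript
Crafty.init(500, 350, document.getElementById('game'));
```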

The first two arguments determine the width and height of our stage. The third argument is used to specify the element that we are going to use as our stage. If the third argument is not provided, Crafty will use a div with id cr-stage as its stage. Similarly, if the width and height arguments are missing, Crafty will set the width and height of our stage equal to the window width and height.

At this point, you should have the following code:

Creating Entities

Entities are building blocks of a Crafty game. Everything from the player to the enemies and obstacles is represented using entities. You can pass a list of components to an entity. Each of these components will add extra functionality to our entity by assigning properties and methods from that component to the entity. You can also chain other methods to an entity to set various properties like its width, height, location, and color. Here is a very basic example of adding components to an entity:
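
A minimal sketch—an empty entity that can be positioned in 2D and rendered as a colored rectangle on a canvas layer:

    var playerBox = Crafty.e('2D, Canvas, Color');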

Every entity that you want to display to the user will need both the 2D component and a rendering layer. The rendering layer can be DOM, Canvas, or WebGL. Please note that WebGL support was added in version 0.7.1. Currently, only the Sprite, Image, SpriteAnimation, and Color components support WebGL rendering. Text entities still need to use DOM or Canvas for now.

Now, you can use the attr() method to set the value of various properties including the width, height, and position of your entity. Most methods in Crafty will return the entity they are called on, and attr() is no exception. This means that you will be able to chain more methods to set other properties of your elements. Here is an example:
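
A sketch with made-up dimensions and position:

    var playerBox = Crafty.e('2D, Canvas, Color')
      .attr({ x: 50, y: 50, w: 100, h: 60 })
      .color('orange');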

This will create an orange rectangular entity on the stage.

Moving Entities Around

Now that you have created the entity, let's write some code to move it around using the keyboard. You can move an entity in four different directions, i.e. up, down, left, and right, by adding the Fourway component to it. 

The entity can then be moved by either using the arrow keys or W, A, S, and D. You can pass a number as an argument to the fourway constructor to set the speed of the entity. Here is what the code of the entity should look like now:
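
Something along these lines, with an illustrative speed of 200 pixels per second:

    var playerBox = Crafty.e('2D, Canvas, Color, Fourway')
      .attr({ x: 50, y: 50, w: 100, h: 60 })
      .color('orange')
      .fourway(200);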

You can restrict the movement of an entity to just two directions using the Twoway component. Replacing the Fourway component in the code above with Twoway will confine the movement of the box to just the horizontal direction. This is evident from the following demo:

You can also add your own components to different entities for identification or to group similar entities together. In this case, I am adding the Floor component to our orange box. You can use some other descriptive names to help you identify different entities.
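
For instance, tagging the orange box created above:

    playerBox.addComponent('Floor');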

There is another very useful component that you can use to move elements around, and it is called the Gravity component. When added to an entity, it will make that entity fall down. You might want to stop the given entity from falling further, once it encounters some other entities. This can be achieved by passing an identifier component as an argument to the gravity function. Here is the code that makes the small black square fall on the floor or platform:
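
A sketch of such an entity (the size and starting position are placeholders):

    var blackBox = Crafty.e('2D, Canvas, Color, Gravity')
      .attr({ x: 80, y: 0, w: 30, h: 30 })
      .color('black')
      .gravity('Floor');   // stop falling when an entity with the Floor component is hit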

Final Thoughts

As you can see from this tutorial, we were able to create the basic structure of a simple game using very little code. All we had to do was add components to our entities and specify the values of different properties like width, height, or movement speed.

This tutorial was meant to give you a basic idea of entities and other concepts related to Crafty. In the next part, you will learn about entities in much more detail. If you have any questions about this tutorial, let me know in the comments.

Getting Started With Crafty: Entities


In the last tutorial, you learned about the basics of entities and how they are the building blocks in your game. In this tutorial, you will go beyond the basics and learn about entities in more detail.

Entities and Their Components

Every entity is made up of different components. Each of these components adds its own functionality to the entity. Crafty offers a lot of built-in components, but you can also create your own custom components using Crafty.c().

You learned about a few basic components like 2D, Canvas, Color, and Fourway in the first tutorial. Let's begin by creating another entity using these components:
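
For example, a sketch similar to the box from the previous part:

    var playerBox = Crafty.e('2D, Canvas, Color, Fourway')
      .attr({ x: 50, y: 50, w: 100, h: 60 })
      .color('orange')
      .fourway(200);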

When you have a lot of entities with different components, it might become necessary to know if a given entity has a specific component attached to it. This can be accomplished by using the .has() method.

Similarly, you can add components to an entity by using .addComponent(). To add multiple components at once, you can pass them as a single string with the components separated by commas, or pass each component as a separate argument. Nothing will happen if the entity already has the component that you are trying to add. 

You can also remove components from an entity using .removeComponent(String Component[, soft]). This method takes two arguments. The first one is the component that you want to remove, and the second argument determines whether the element should be soft removed or hard removed. Soft removal will only cause .has() to return false when queried for that specific component. A hard removal will remove all the associated properties and methods of that component. 

By default, all components are soft removed. You can set the second argument to false to hard remove a component. 
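
A short sketch of the three methods discussed above, using the playerBox entity:

    playerBox.has('Color');                          // true

    playerBox.addComponent('Keyboard, Collision');   // or .addComponent('Keyboard', 'Collision')

    playerBox.removeComponent('Collision');          // soft removal: .has('Collision') now returns false
    playerBox.removeComponent('Keyboard', false);    // hard removal: its properties and methods are gone too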

Setting Values for Different Attributes

You will probably need to set different values for specific attributes of all the entities in your game. For example, an entity that represents the food of the main player in the game should look different than the entity that represents the player itself. Similarly, a power up is going to look different than food entities. Crafty allows you to set the values of different entities in a game either separately or all at once using .attr().

The main entity is currently stored in the playerBox variable, so you can set the value of different properties directly using the following code:
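
For example (the values are illustrative):

    playerBox.attr({ x: 0, y: 0, w: 150, h: 150, z: 2 });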

The z property sets the z-index of different entities. An entity with higher z value will be drawn over another one with a lower value. Keep in mind that only integers are allowed as valid z-index values.

Let's create a food entity with smaller size, a different position, and rotation applied to it. The rotation is specified in degrees, and using a negative value rotates the entity in counterclockwise direction.
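
A sketch with placeholder values; Food is a custom identifier component, like Floor in the previous part:

    var foodBox = Crafty.e('2D, Canvas, Color, Food')
      .attr({ x: 300, y: 100, w: 40, h: 40, z: 1, rotation: -30 })
      .color('red');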

As you can see in the demo below, both the food and main player are easily distinguishable now. If you try to move the main player over the food, you will see that the food is now drawn below the main player because of a lower z-index.

Binding Events to Entities

There are a lot of events that you may need to respond to while developing a game. For example, you will have to bind the player entity to a KeyDown event if you want it to grow in size when a specific key is pressed. Crafty allows you to bind different entities to specific events using the .bind() method. Here is an example:
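
A hedged sketch matching the demo described below:

    playerBox.bind('KeyDown', function (e) {
      if (e.key === Crafty.keys.T) {
        this.attr({ alpha: 0.5 });
      } else if (e.key === Crafty.keys.O) {
        this.attr({ alpha: 1 });
      }
    });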

In the following demo, try moving the player over the food and then press the 'T' and 'O' keys. Pressing 'T' will set the opacity of the player to 0.5, and pressing 'O' will restore the opacity back to 1.

Now, let's bind a collision event to our player so that it grows in size whenever it hits food. We will have to add a collision component to the player and use the .checkHits() method. This method will perform collision checks against all the entities that have at least one of the components that were specified when .checkHits() was called. 

When a collision occurs, a HitOn event will be fired. It will also have some relevant information about the hit event. A HitOff event is also fired once the collision ends.
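
A sketch of that setup, with an arbitrary growth step:

    playerBox.addComponent('Collision')
      .checkHits('Food')
      .bind('HitOn', function (hitData) {
        // grow a little on every collision with a Food entity
        this.w += 10;
        this.h += 10;
      });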

The width and height of the player increases each time a hit occurs. We can use this event for a lot of things including changing the position of food or increasing the score in the game. You could also destroy the food entity by using the destroy() method once the hit was detected.

Making a Selection

In the previous section, we had to just change the properties of a single entity. This could be done easily by using the variable assigned to each entity. This is not practical when we are dealing with about 20 different entities.

If you have used jQuery before, you might be familiar with the way it selects elements. For example, you can use $("p") to select all the paragraphs. Similarly, you can select all entities that share a common component by using Crafty("component").

Here are a few examples:
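
For instance:

    var foods  = Crafty('Food');     // every entity with the Food component
    var floors = Crafty('Floor');    // every entity with the Floor component
    foods.length;                    // how many food entities exist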

Once you have the selection, you can get the number of elements selected using the length property. You can also iterate through each of these entities or bind events to all of them at once. The following code will change all food entities whose x value is greater than 300 to purple.
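
A sketch of that loop:

    Crafty('Food').each(function () {
      if (this.x > 300) {
        this.color('purple');
      }
    });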

Once you have a selection, you can use get() to obtain an array of all entities in the selection. You can also access the entity at a specific index using get(index).
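
For example:

    var allFood   = Crafty('Food').get();    // plain array of matched entities
    var firstFood = Crafty('Food').get(0);   // the entity at index 0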

Final Thoughts

In this tutorial, you learned how to add or remove components from an entity. You also learned how to select entities with a given component or components and manipulate their attributes and properties one by one. All these things will be very useful when we want to manipulate different entities on our screen based on a variety of events.

If you have any questions about the tutorial, let me know in the comments.

Getting Started With Crafty: Controls, Events, and Text


In the last tutorial, you learned about entities in Crafty and how you can manipulate them using different methods. In this tutorial, you will learn about different components that will allow you to move different entities around using the keyboard.

Crafty has three different components to move elements around. These are Twoway, Fourway, and Multiway. This tutorial will introduce you to all these components. In the end, you will learn about the Keyboard component and various methods associated with it. 

Twoway and Fourway

The Twoway component allows an entity to move left or right using arrow keys or A and D. It also allows the entity to jump using the up arrow or the W key. You will have to add a Gravity component to your entities to make them jump. 

The .twoway() method accepts two arguments. The first one determines the speed of the entity in the horizontal direction, while the second argument determines the jump speed of the entity. Leaving out the second argument will set the value of the jump speed equal to twice the speed in the horizontal direction.
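
A sketch with illustrative speeds:

    var blackBox = Crafty.e('2D, Canvas, Color, Twoway, Gravity')
      .attr({ x: 50, y: 50, w: 30, h: 30 })
      .color('black')
      .gravity('Floor')
      .twoway(150, 300);   // horizontal speed 150, jump speed 300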

The Fourway component will allow an entity to move in four different directions by either using the arrow keys or W, A, S, D. The .fourway() method only accepts one argument, which will determine the speed of the given entity in all directions.

Multiway

One major drawback of the Fourway component is that it does not allow you to set different speeds for the horizontal and vertical directions. 

On the other hand, the Multiway component allows you to set the speed in each axis individually. It also allows you to assign different keys to move the entity in different directions. The first argument in the .multiway() method is the speed of our entity. The second argument is an object to determine which key will move the entity in which direction.

The directions are specified in degrees. 180 is left, 0 is right, -90 is up, and 90 is down. Here are a few examples:
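
A sketch of three such calls; the key bindings are made up, and the per-axis speed is passed as an {x, y} object, which the component accepts as described above:

    blackBox.multiway({ x: 150, y: 75 },
                      { UP_ARROW: -90, DOWN_ARROW: 90, RIGHT_ARROW: 0, LEFT_ARROW: 180 });

    orangeBox.multiway(150, { W: -90, S: 90, D: 0, A: 180 });

    purpleBox.multiway(150, { Q: -45, E: 135 });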

The code above sets the speed of the black box equal to 150 in the horizontal direction and 75 in the vertical direction. The orange box moves at a speed of 150 in all directions but has been assigned different keys for movement. The purple box does not move strictly horizontally or vertically but at a 45-degree angle. The speed here is in pixels per second.

Basically, you can assign any key to any direction using the Multiway component. This can be very helpful when you want to move multiple entities independently.

This component also has a .speed() method, which can be used to change the speed of an entity at a later time. You can also disable the key controls at any time using the .disableControl() method.

The Keyboard Component

The three components in the previous sections allow you to move an entity around using different keys. However, you might want more control over what happens when a key is pressed. For example, you might want to make the player bigger once a specific key is pressed or make the player more powerful once another key is pressed. This can be achieved using the Keyboard component.

This component also gives you access to the .isDown(String keyName/KeyCode) method, which will return true or false based on whether the key pressed has the given KeyName or KeyCode.

Crafty also has two different keyboard events, KeyDown and KeyUp. You can bind these events to any entity in your game without using the Keyboard component. The KeyDown event is triggered whenever the DOM keydown event occurs. Similarly, the KeyUp event is triggered whenever the DOM keyup event occurs.
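
A hedged sketch of the kind of code the demo below uses (assuming blackBox already includes the Keyboard component):

    blackBox.bind('KeyDown', function () {
      if (this.isDown('L')) {
        this.w += 5;
      }
      if (this.isDown('K')) {
        this.h += 5;
      }
    });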

In the above code, the blackBox already had the Keyboard component. This allowed us to use the .isDown() method to determine the key pressed.

Try pressing L and K in the following demo to increase the width and height of the two boxes respectively.

The Text Component

It is very easy to add text to your game using Crafty. First, you need to create an entity with the Text component. Then, you can add text to your entity using the .text() method, which accepts a string as its parameter. 

The location of the text can be controlled by using the .attr() method to set the value of x and y co-ordinates. Similarly, the color of the text can be specified using the .textColor() method. A few other properties like the size, weight, and family of the font can be set using the .textFont() method. Here is an example:
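
A sketch with placeholder text, position, and font settings:

    var message = Crafty.e('2D, DOM, Text')
      .attr({ x: 200, y: 20 })
      .text('Hello Crafty!')
      .textColor('purple')
      .textFont({ size: '24px', weight: 'bold', family: 'Arial' });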

As I mentioned earlier, the .text() method requires you to supply a string as parameter. This means that if the game score is a number, you will have to convert it to a string for the .text() method to render it.

Most of the 2D properties and methods will work without any issue with the Text component. For example, you can rotate and move it around with ease. However, there are two things that you need to keep in mind. Styling of the text works best when rendered using the DOM. When rendered on Canvas, the width and height of the text entity will be set automatically.

Final Thoughts

Using the knowledge from this and the last tutorial, you should now be able to move different entities around using the keyboard. You can also change the appearance of the main player and other entities based on the different keys pressed.

If you have any questions about this tutorial, let me know in the comments.

Getting Started With Crafty: The Game Loop


Up to this point in this series, you have learned how to manipulate different entities and use the keyboard to move them around. In this part, you will learn how to use the game loop in Crafty to continuously check for various events and animate different entities.

The game loop in Crafty is implemented in Crafty.timer.step, which uses global events to communicate with the rest of the engine. The loop is driven by requestAnimationFrame when available. Each loop consists of one or more calls to the EnterFrame event and a single call to RenderScene, which results in the redrawing of each layer. 

The final value of all the properties and variables is resolved before a scene is rendered. For example, if you move your player 5 px to the right ten times inside a single EnterFrame event, it will be directly drawn 50 px to the right by skipping all the intermediate drawings.

EnterFrame and RenderScene

Everything in your game that needs to change over time is ultimately linked to the EnterFrame event. You can use the .bind() method to bind different entities to this event. Functions bound to this event are also passed an object with properties like dt which determines the number of milliseconds that have passed since the last EnterFrame event. 

You can use the dt property to provide a smooth gaming experience by determining how far the game state should advance.
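
For example, a frame-rate-independent movement sketch, assuming the playerBox entity from the earlier parts:

    playerBox.bind('EnterFrame', function (data) {
      // advance 100 pixels per second, whatever the actual frame rate is
      this.x += 100 * data.dt / 1000;
    });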

The RenderScene event is used to make sure that everything visible on the screen matches the current state of the game at the last EnterFrame event. Normally, you will not need to bind this event yourself unless you decide to implement your own custom rendering layer.

Using Tween to Animate 2D Properties

You can use the Tween component when you just want to animate the 2D properties of an entity over a specific period of time. You can animate the x, y, w, h, rotation, and alpha properties using this component. Let's animate the x value and height of the orange and black boxes that you've been creating in the last two tutorials. 

Here is the code you need:
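
The exact values aren't shown here; a sketch of the idea, assuming the orangeBox and blackBox entities from before, looks like this:

    orangeBox.addComponent('Tween')
      .tween({ x: 350, rotation: 360 }, 2000);   // animate x and rotation over 2 seconds

    blackBox.addComponent('Tween')
      .tween({ h: 100 }, 2000);                  // animate the height over 2 seconds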

You've probably noticed that the orange box is not rotating around its center but its top left corner. You can change the rotation center by using the .origin() method. It can accept two integer arguments, which determine the pixel offset of origin in the x and y axes. 

It also accepts a string value as its argument. The string value can be a combination of center, top, bottom, middle, left, and right.  For example, .origin("center") will rotate the entity around its center, and .origin("bottom right") will rotate the entity around the bottom right corner. 
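
For example:

    orangeBox.origin('center');   // rotate around the center
    // or, equivalently for a 100x60 box:
    orangeBox.origin(50, 30);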

You can pause or resume all tweens associated with a given entity using the .pauseTweens() and .resumeTweens() methods. Similarly, you can also use .cancelTween() to cancel a specific tween.

Understanding the Crafty Timer

The Crafty.timer object handles all the game ticks in Crafty. You can use the .FPS() method with this object to get the target frame rate. Keep in mind that it is not the actual frame rate.

You can also use the .simulateFrames(Number frames[, Number timestep]) method to advance the game state by a given number of frames. The timestep is the duration to pass each frame. If it is not specified, a default value of 20ms is used.

Another useful method is .step(), which will advance the game by performing a step. A single step can consist of one or more frames followed by a render. The number of frames will depend on the timer's steptype. This method triggers a variety of events like EnterFrame and ExitFrame  for each frame and PreRender, RenderScene, and PostRender events for each render.

There are three different modes of steptype: fixed, variable, and semifixed. In fixed mode, each frame in Crafty is sent the same value of dt. However, this steptype can trigger multiple frames before each render to achieve the target game speed. 

You can also trigger just one frame before each render by using the variable mode. In this case, the value of dt is equal to the actual time elapsed since the last frame. 

Finally, the semifixed mode triggers multiple frames per render, and the time since last frame is equally divided among them.

Creating a Very Basic Game

If you have read all the tutorials in this series, you should have gained enough knowledge by now to create a very basic game. In this section, you will learn how to put everything you learned to use and create a game where the main player has to eat a piece of food. 

The food will be a rotating red square. As soon as the food comes in contact with the player, it disappears from the old location and spawns in a new random location. The player can be moved using A, W, S, D, or the arrow keys.

One more thing that you need to take care of is the position of the player. It is supposed to stay within the bounds of the game stage.

Let's write the code for the food first:
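
A hedged sketch (the size, position, and spin speed are placeholders):

    var food = Crafty.e('2D, Canvas, Color, Food')
      .attr({ x: 300, y: 150, w: 20, h: 20 })
      .origin('center')
      .color('red')
      .bind('EnterFrame', function () {
        this.rotation += 5;   // keep spinning
      });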

By default, Crafty would have used the top left corner of the food entity to rotate it. Setting origin to center makes sure that the food entity rotates around its center.

The player entity checks the current location of the player in each frame and resets the location if the player tries to go outside the game stage.
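
A sketch of that check, clamping the position against the viewport size:

    var player = Crafty.e('2D, Canvas, Color, Fourway, Collision')
      .attr({ x: 50, y: 50, w: 30, h: 30 })
      .color('black')
      .fourway(200)
      .bind('EnterFrame', function () {
        // keep the player inside the stage
        if (this.x < 0) { this.x = 0; }
        if (this.y < 0) { this.y = 0; }
        if (this.x > Crafty.viewport.width - this.w) { this.x = Crafty.viewport.width - this.w; }
        if (this.y > Crafty.viewport.height - this.h) { this.y = Crafty.viewport.height - this.h; }
      });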

You can use a Text entity to keep track of the score. The score is shown in the top left corner. The gameScore variable stores the number of times the player hits the food entity.
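
For example:

    var gameScore = 0;

    var scoreText = Crafty.e('2D, DOM, Text')
      .attr({ x: 10, y: 10 })
      .text('Score: 0')
      .textColor('black');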

Now, you just have to write the code to move the food to a different location when a hit is detected. The following code will do exactly that.
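
A sketch of that handler, reusing the entities defined above:

    player.checkHits('Food')
      .bind('HitOn', function () {
        gameScore += 1;
        scoreText.text('Score: ' + gameScore);

        // respawn the food somewhere that is fully inside the stage
        food.x = Math.random() * (Crafty.viewport.width - food.w);
        food.y = Math.random() * (Crafty.viewport.height - food.h);
      });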

You need to keep in mind that you subtract the width and height of your food entity from the stage width and height respectively. This makes sure that the food is always completely inside the stage. Here is a demo of the game:

Final Thoughts

With the help of Crafty, you have created a very basic game by writing a few lines of code. Right now, the game lacks a few features which could make it more interesting. First, there are no sounds. Second, there is no way for the player to get out, and the difficulty level also remains the same throughout the game. You will learn about sound, sprites, mouse events, and other features of the library in the next series.

If you had any problems or doubts while going through all the examples in the series, let me know in the comments.

Getting Started in WebGL, Part 3: WebGL Context and Clear


In the previous articles, we learned how to write simple vertex and fragment shaders, make a simple webpage, and prepare a canvas for drawing. In this article, we'll start working on our WebGL boilerplate code. 

We'll obtain a WebGL context and use it to clear the canvas with the color of our choice. Woohoo! This can be as little as three lines of code, but I promise you I won't make it that easy! As usual, I'll try to explain the tricky JavaScript concepts as we meet them, and provide you with all the details you need to understand and predict the corresponding WebGL behavior.

This article is part of the "Getting Started in WebGL" series. If you haven't read the previous parts, I recommend that you read them first:

  1. Introduction to Shaders
  2. The Canvas Element

Recap

In the first article of this series, we wrote a simple shader that draws a colorful gradient and fades it in and out slightly. Here's the shader that we wrote:

In the second article of this series, we started working towards using this shader in a webpage. Taking small steps, we explained the necessary background of the canvas element. We:

  • made a simple page
  • added a canvas element
  • acquired a 2D-context to render to the canvas
  • used the 2D-context to draw a line
  • handled page resizing issues
  • handled pixel density issues

Here's what we made so far:

In this article, we borrow some pieces of code from the previous article and tailor our experience to WebGL instead of 2D drawing. In the next article—if Allah wills—I'll cover viewport handling and primitives clipping. It's taking a while, but I hope you'll find the entire series very useful!

Initial Setup

Let's build our WebGL-powered page. We'll be using the same HTML we used for the 2D drawing example:

... with a very small modification. Here we call the canvas glCanvas instead of just canvas (meh!).

We'll also use the same CSS:

Except for the background color, which is now black.

We won't use the same JavaScript code. We'll start with no JavaScript code at all, and add functionality bit by bit to avoid confusion. Here's our setup so far:

Now let's write some code!

WebGL Context

The first thing we should do is obtain a WebGL context for the canvas. Just as we did when we obtained a 2D drawing context, we use the member function getContext:
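
Something like the following (glCanvas is assumed to be a reference to our canvas element):

    var glContext = glCanvas.getContext("webgl") ||
                    glCanvas.getContext("experimental-webgl");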

This line contains two getContext calls. Normally, we shouldn't need the second call. But just in case the user is using an old browser in which the WebGL implementation is still experimental (or Microsoft Edge), we added the second one. 

The cool thing about the || operator (or operator) is that it doesn't have to evaluate the entire expression if the first operand was found to be true. In other words, in an expression a || b, if a evaluates to true, then whether b is true or false doesn't affect the outcome at all. Thus, we don't need to evaluate b and it's skipped entirely. This is called Short-Circuit Evaluation.

In our case, getContext("experimental-webgl") will be executed only if getContext("webgl") fails (returns null, which evaluates to false in a logical expression). 

We've also used another feature of the or operator. The result of or-ing is neither true nor false. Instead, it's the first object that evaluates to true. If none of the objects evaluates to true, or-ing returns the rightmost object in the expression. This means, after running the above line, glContext will either contain a context object or null, but not true or false.

Note: if the browser supports both modes (webgl and experimental-webgl) then they are treated as aliases. There would be absolutely no difference between them.

Putting the above line where it belongs:
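
A minimal sketch of that function, assuming the canvas id used in our HTML:

    function initialize() {
        var glCanvas = document.getElementById("glCanvas");

        var glContext = glCanvas.getContext("webgl") ||
                        glCanvas.getContext("experimental-webgl");
    }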

Voila! We have our initialize function (yeah, keep dreaming!).

Handling getContext Errors

Notice that we didn't use try and catch to detect getContext issues as we did in the previous article. It's because WebGL has its own error reporting mechanisms. It doesn't throw an exception when context creation fails. Instead, it fires a webglcontextcreationerror event. If we are interested in the error message then we should probably do this:
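
A hedged sketch of that error-handling code, placed inside initialize (the default message matches the one referenced later in this article):

    var errorMessage = "Couldn't create a WebGL context";

    function onContextCreationError(event) {
        // keep a copy of the error message, if the browser provides one
        if (event.statusMessage) {
            errorMessage = event.statusMessage;
        }
    }

    glCanvas.addEventListener(
        "webglcontextcreationerror", onContextCreationError, false);

    var glContext = glCanvas.getContext("webgl") ||
                    glCanvas.getContext("experimental-webgl");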

Taking these lines apart:

Just like when we added a listener to the window load event in the previous article, we added a listener to the canvas webglcontextcreationerror event. The false argument is optional; I'm just including it for completeness (since the WebGL specification example has it). It's usually included for backwards compatibility. It stands for useCapture. When true, it means that the listener is going to be called in the capturing phase of the event propagation. If false, it's going to be called in the bubbling phase instead. Check this article for more details about events propagation.

Now to the listener itself:

In this listener, we keep a copy of the error message, if any. Yep, having an error message is totally optional:

What we've done here is pretty interesting. errorMessage was declared outside the function, yet we used it inside. This is possible in JavaScript and is called closures. What is interesting about closures is their lifetime. While errorMessage is local to the initialize function, since it was used inside onContextCreationError, it won't be destroyed unless onContextCreationError itself is no longer referenced.

In other words, as long as an identifier is still accessible, it can't be garbage collected. In our situation:

  • errorMessage lives because onContextCreationError references it.
  • onContextCreationError lives because it's referenced somewhere among the canvas event listeners. 

So, even if initialize terminates, onContextCreationError is still referenced somewhere in the canvas object. Only when it's released can errorMessage be garbage-collected. Moreover, subsequent calls of initialize won't affect the previous errorMessage. Every initialize function call will have its own errorMessage and onContextCreationError.

But we don't really want onContextCreationError to live beyond initialize termination. We don't want to listen to other attempts at getting WebGL contexts anywhere else in the code. So:
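
Right after the getContext calls, the listener is removed again:

    glCanvas.removeEventListener(
        "webglcontextcreationerror", onContextCreationError, false);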

Putting it all together:

To verify that we've successfully created the context, I've added a simple alert:
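
A sketch of such a check:

    if (glContext) {
        alert("Great, we have a WebGL context!");
    } else {
        alert(errorMessage);
    }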

Now switch to the Result tab to run the code.

And it doesn't work! Obviously, because initialize was never called. We need to call it right after the page is loaded. For this, we'll add these lines above it:
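
Presumably something along these lines:

    // run initialize once the page has finished loading
    window.addEventListener("load", initialize, false);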

Let's try again:

It works! I mean, it should unless a context couldn't be created! If it doesn't, please make sure you are viewing this article from a WebGL-capable device/browser.

Notice that we did another interesting thing here. We used initialize in our load listener before it was even declared. This is possible in JavaScript due to hoisting. Hoisting means that all declarations are moved to the top of their scope, while their initializations remain in their places.

Now, wouldn't it be nice to test if our error reporting mechanism actually works? We need getContext to fail. One easy way to do so is to obtain a different type of context for the canvas first before attempting to create the WebGL context (remember when we said that the first successful getContext changes the canvas mode permanently?). We'll add this line just before getting the WebGL context:
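
For example:

    // claim the canvas for 2D first, so the WebGL getContext call below will fail
    glCanvas.getContext("2d");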

And:

Great! Now whether the message you saw was "Couldn't create a WebGL context" or something like "Canvas has an existing context of a different type" depends on whether your browser supports webglcontextcreationerror or not. At the time of writing this article, Edge and Firefox don't support it (it was planned for Firefox 49, but still doesn't work on Firefox 50.1). In such case, the event listener won't be called and errorMessage will remain set to "Couldn't create a WebGL context". Fortunately, getContext still returns null, so we know that we couldn't create the context. We just don't have the detailed error message.

The thing about WebGL error messages is that... there are no WebGL error messages! WebGL returns numbers indicating error states, not error messages. And when it happens to allow error messages, they are driver dependent. The exact wording of the error messages is not provided in the specification—it's up to the driver developers to decide how they should put it. So expect to see the same error worded differently on different devices.

Ok then. Since we made sure that our error reporting mechanism works, the "successfully created" alert and getContext("2d") are no longer needed. We'll omit them.

Context Attributes

Back to our revered getContext function:

There is more to it than meets the eye. getContext can optionally take one more argument: a dictionary that contains a set of context attributes and their values. If none is provided, the defaults are used:
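
Per the WebGL 1.0 specification, the default attributes are:

    {
        alpha: true,
        depth: true,
        stencil: false,
        antialias: true,
        premultipliedAlpha: true,
        preserveDrawingBuffer: false,
        failIfMajorPerformanceCaveat: false
    }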

I will explain some of these attributes as we use them. You can find more about them in the WebGL Context Attributes section of the WebGL specification. For now, we don't need a depth buffer for our simple shader (more about it later). And to avoid having to explain it, we'll also disable premultiplied-alpha! It takes an article of its own to properly explain the rationale behind it. Thus, our getContext line becomes:
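
A sketch of the call with those two attributes overridden:

    var contextAttributes = { depth: false, premultipliedAlpha: false };

    var glContext = glCanvas.getContext("webgl", contextAttributes) ||
                    glCanvas.getContext("experimental-webgl", contextAttributes);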

Note: the depth, stencil and antialias attributes, when set to true, are requests, not requirements. The browser should try its best to satisfy them, but it's not guaranteed. However, when they are set to false, the browser must abide.

On the other hand, the alpha, premultipliedAlpha and preserveDrawingBuffer attributes are requirements that must be fulfilled by the browser.

clearColor

Now that we have our WebGL context, it's time to actually use it! One of the basic operations in WebGL drawing is clearing the color buffer (or simply the canvas in our situation). Clearing the canvas is done in two steps:

  1. Setting the clear-color (can be done only once).
  2. Actually clearing the canvas.

OpenGL/WebGL calls are expensive, and the device drivers are not guaranteed to be awfully smart and avoid unnecessary work. Therefore, as a rule of thumb, if we can avoid using the API then we should avoid using it. 

So, unless we need to change the clear-color every frame or mid-drawing, we should write the code setting it in an initialization function instead of a drawing one. This way, it's called only once at the beginning and not with every frame. Since the clear-color is not the only state variable that we'll be initializing, we'll make a separate function for state initialization:

And we'll call this function from within the initialize function:

Beautiful! Modularity will keep our not so short code cleaner and more readable. Now to populate the initializeState function:
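
Given that we end up with a red clear-color later in the article, the function presumably looks something like this:

    function initializeState() {
        glContext.clearColor(1.0, 0.0, 0.0, 1.0);   // opaque red
    }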

clearColor takes four parameters: red, green, blue, and alpha. Four floats, whose values are clamped to the range [0, 1]. In other words, any value less than 0 becomes 0, any value larger than 1 becomes 1, and any value in-between remains unchanged. Initially, the clear-color is set to all zeroes. So, if transparent black was ok with us, we could have omitted this altogether.

Clearing the Drawing Buffer

Having set the clear-color, what's left is to actually clear the canvas. But one can't help but ask a question, do we have to clear the canvas at all?

Back in the old days, games doing full-screen rendering didn't need to clear the screen every frame (try typing idclip in DOOM 2 and go somewhere you are not supposed to be!). The new contents would just overwrite the old ones, and we would save the non-trivial clear operation. 

On modern hardware, clearing the buffers is extremely fast. Moreover, clearing the buffers can actually improve performance! To put it simply, if the buffer contents were not cleared, the GPU may have to fetch the previous contents before overwriting them. If they were cleared, then there's no need to retrieve them from the relatively slower memory.

But what if you don't want to overwrite the whole screen, but incrementally add to it? Like when making a painting program. You want to draw the new strokes only, while keeping the previous ones. Doesn't the act of leaving the canvas without clearing make sense now?

The answer is still no. On most platforms you'd be using double-buffering. This means that all the drawing we perform is done on a back buffer, while the monitor retrieves its contents from a front buffer. During the vertical retrace, these buffers are swapped. The back becomes the front and the front becomes the back. This way we avoid writing to the same memory that's currently being read by the monitor and displayed, thus avoiding artifacts due to incomplete drawing or drawing too fast (several frames drawn while the monitor is still tracing a single one).

Thus, the next frame doesn't overwrite the current frame, because it's not written to the same buffer. Instead, it overwrites the one that was in the front buffer before swapping. That's the last frame. And whatever we've drawn in this frame won't appear in the next one. It'll appear in the next-next one. This inconsistency between buffers causes flickering that is normally undesired.

But this would have worked if we were using a single buffered setup. In OpenGL on most platforms, we have explicit control over buffering and swapping the buffers, but not in WebGL. It's up to the browser to handle it on its own. 

Umm... Maybe it's not the best time, but there's one thing about clearing the drawing buffer that I didn't mention before. If we don't explicitly clear it, it would be implicitly cleared for us! 

There are only three drawing functions in WebGL 1.0: clear, drawArrays, and drawElements. Only when we call one of these on the active drawing buffer (or when we've just created the context or resized the canvas) is it presented to the HTML page compositor at the beginning of the next compositing operation. 

After compositing, the drawing buffers are automatically cleared. The browser is allowed to be smart and avoid clearing the buffers automatically if we cleared them ourselves. But the end result is the same; the buffers are going to be cleared anyway.

The good news is, there's still a way to make your paint program work. If you insist on doing incremental drawing, we can set the preserveDrawingBuffer context attribute when acquiring the context:
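
For example:

    var contextAttributes = { preserveDrawingBuffer: true };

    var glContext = glCanvas.getContext("webgl", contextAttributes) ||
                    glCanvas.getContext("experimental-webgl", contextAttributes);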

This prevents the canvas from being automatically cleared after compositing, and simulates a single buffered setup. One way it's done is by copying the contents of the front buffer to the back buffer after swapping. Drawing to a back buffer is still necessary to avoid drawing artifacts, so it can't be helped. This, of course, comes with a price. It may affect performance. So, if possible, use other approaches to preserve the contents of the drawing buffer, like drawing to a frame buffer object (which is beyond the scope of this tutorial).

clear

Brace yourselves, we'll be clearing the canvas any moment now! Again, for modularity, let's write the code that draws the scene every frame in a separate function:
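
A sketch of that function:

    function drawScene() {
        glContext.clear(glContext.COLOR_BUFFER_BIT);
    }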

Now we've done it! clear takes one parameter, a bit-field that indicates what buffers are to be cleared. It turns out that we usually need more than just a color buffer to draw 3D stuff. For example, a depth buffer is used to keep track of the depths of every drawn pixel. Using this buffer, when the GPU is about to draw a new pixel, it can easily decide whether this pixel occludes or is occluded by the previous pixel that resides in its place. 

It goes like this:

  1. Compute the depth of the new pixel.
  2. Read the depth of the old pixel from the depth buffer.
  3. If the new pixel's depth is closer than the old pixel's depth, overwrite the pixel's color (or blend with it) and set its depth to the new depth. Otherwise, discard the new pixel.

I used "closer" instead of "smaller" because we have explicit control over the depth function (which operator to use in comparison). We get to decide whether a larger value means a closer pixel (right-handed coordinates system) or the other way around (left-handed). 

The notion of right- or left-handedness refers to the direction of your thumb (z-axis) as you curl your fingers from the x-axis to the y-axis. I'm bad at drawing, so, look at this article in the Windows Dev Center. WebGL is left-handed by default, but you can make it right-handed by changing the depth function, as long as you are taking the depth range and the necessary transformations into account.

Since we chose not to have a depth buffer when we created our context, the only buffer that needs to be cleared is the color buffer. Thus, we set the COLOR_BUFFER_BIT. If we had a depth buffer, we would have done this instead:
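
That is, or-ing the two bits together:

    glContext.clear(glContext.COLOR_BUFFER_BIT | glContext.DEPTH_BUFFER_BIT);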

The only thing left is to call drawScene. Let's do it right after initialization:
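
One way to do it, assuming the load listener from earlier:

    window.addEventListener("load", function () {
        initialize();
        drawScene();
    }, false);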

Switch to the Result tab to see our beautiful red clear-color!

Canvas Alpha-Compositing

One of the important facts about clear is that it doesn't apply any alpha-compositing. Even if we explicitly use a value for alpha that makes it transparent, the clear-color would just be written to the buffer without any compositing, replacing anything that was drawn before. Thus, if you have a scene drawn on the canvas and then you clear with a transparent color, the scene will be completely erased. 

However, the browser still does alpha-compositing for the entire canvas, and it uses the alpha value present in the color buffer, which could have been set while clearing. Let's add some text below the canvas and then clear with a half-transparent red color to see it in action. Our HTML would be:

and the clear line becomes:
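
Presumably the clear-color is now set with an alpha of 0.5:

    glContext.clearColor(1.0, 0.0, 0.0, 0.5);   // half-transparent red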

And now, the reveal:

Look closely. Up there in the top-left corner... there's absolutely nothing! Of course you can't see the text! It's because in our CSS, we've specified #000 as the canvas background. The background acts as an additional layer below the canvas, so the browser alpha-composites the color buffer against it while it completely hides the text. To make this clearer, we'll change the background to green and see what happens:

And the result:

Looks reasonable. That color appears to be rgb(128, 127, 0), which can be considered as the result of blending red and green with alpha equals 0.5 (except if you are using Microsoft Edge, in which the color should be rgb(255, 127, 0) because it doesn't support premultiplied-alpha for the time being). We still can't see the text, but at least we know how the background color affects our drawing.

Alpha Blending

The result invites curiosity, though. Why was red halved to 128, while green was halved to 127? Shouldn't they both be either 128 or 127, depending on the floating point rounding? The only difference between them is that the red color was set as the clear-color in WebGL code, while the green color was set in the CSS. I honestly don't know why this happens, but I have a theory. It's probably because of the blending function used to merge the two colors.

When you draw something transparent on top of something else, the blending function kicks in. It defines how the pixel's final color (OUT) is to be computed from the layer on top (source layer, SRC) and the layer below (destination layer, DST). When drawing using WebGL, we have many blending functions to choose from. But when the browser alpha-composites the canvas with the other layers, we only have two modes (for now): premultiplied-alpha and not premultiplied-alpha (let's call it normal mode). 

The normal alpha mode goes like:
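
In the usual formulation (src is the canvas color buffer, dst is the layer below it):

    outRGB   = srcRGB * srcAlpha + dstRGB * dstAlpha * (1 - srcAlpha)
    outAlpha = srcAlpha + dstAlpha * (1 - srcAlpha)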

In the premultiplied-alpha mode, the RGB values are assumed to be already multiplied with their corresponding alpha values (hence the name pre-multiplied). In such case, the equations are reduced to:
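
With premultiplied colors, the RGB terms no longer need the explicit alpha factors:

    outRGB   = srcRGB + dstRGB * (1 - srcAlpha)
    outAlpha = srcAlpha + dstAlpha * (1 - srcAlpha)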

Since we are not using premultiplied-alpha, we are relying on the first set of equations. These equations assume that the color components are floating-point values that range from 0 to 1. But this is not how they are actually stored in memory. Instead, they are integer values ranging from 0 to 255. So srcAlpha (0.5) becomes 127 (or 128, based on how you round it), and 1 - srcAlpha (1 - 0.5) becomes 128 (or 127). It's because half 255 (which is 127.5) is not an integer, so we end up with one of the layers losing a 0.5 and the other one gaining a 0.5 in their alpha values. Case closed!

Note: alpha-compositing is not to be confused with the CSS blend-modes. Alpha-compositing is performed first, and then the computed color is blended with the destination layer using the blend-modes.   

Back to our hidden text. Let's try making the background into transparent-green:

Finally:

You should be able to see the text now! It's because of how these layers are painted on top of each other:

  1. The text is drawn first on a white background.
  2. The background color (which is transparent) is drawn on top of it, resulting in a whitish-green background and a greenish text.
  3. The color buffer is blended with the result, resulting in the above... thing. 

Painful, right? Luckily, we don't have to deal with all of this if we don't want our canvas to be transparent! 

Disabling Alpha
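
A hedged sketch of requesting such a context (keeping the depth attribute off, as before):

    var contextAttributes = { alpha: false, depth: false };

    var glContext = glCanvas.getContext("webgl", contextAttributes) ||
                    glCanvas.getContext("experimental-webgl", contextAttributes);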

Now our color buffer won't have an alpha channel to start with! But wouldn't that prevent us from drawing transparent stuff? 

The answer is no. Earlier, I mentioned something about WebGL having flexible blending functions that are independent from how the browser blends the canvas with other page elements. If we use a blending function that results in a premultiplied-alpha blending, then we have absolutely no need for the drawing-buffer alpha channel:

If we just disregard outAlpha altogether, we don't really lose anything. However, whatever we draw still needs an alpha channel to be transparent. It's only the drawing buffer that lacks one.

Premultiplied-alpha plays well with texture filtering and other stuff, but not with most image manipulation tools (we haven't discussed textures yet—assume they are images we need to draw). Editing an image that is stored in premultiplied-alpha mode is not convenient because it accumulates rounding errors. This means that we want to keep our textures not premultiplied as long as we are still working on them. When it's time to test or release, we have to either:

  • Convert all textures to premultiplied-alpha before bundling them with the application.
  • Leave the textures be and convert them on-the-fly while loading them.
  • Leave the textures be and get WebGL to premultiply them for us using: 
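
That last option is a single call on the WebGL context:

    glContext.pixelStorei(glContext.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true);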

Note: pixelStorei has no effect on compressed textures (Umm... later!).

All of these options may be a bit inconvenient. Fortunately, we can still achieve transparency without having an alpha channel and without using premultiplied-alpha:
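
That is, blend with:

    outRGB = srcRGB * srcAlpha + dstRGB * (1 - srcAlpha)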

Just ignore outAlpha completely and remove the dstAlpha from the outRGB equation. It works! If you get used to using it, you may start questioning the reason why dstAlpha was ever included in the original equation to begin with! 

Since we didn't draw primitives in this tutorial (we only used clear, which doesn't use alpha-blending) we don't really need to write any alpha-blending concerned WebGL code. But just for reference, here are the steps needed to enable the above alpha-blending in WebGL:
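
A sketch of those steps (this blendFunc pair produces exactly the outRGB formula above):

    glContext.enable(glContext.BLEND);
    glContext.blendFunc(glContext.SRC_ALPHA, glContext.ONE_MINUS_SRC_ALPHA);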

If you still insist on having an alpha channel in the color buffer, you can use blendFuncSeparate to specify separate blending functions for RGB and alpha.

Clearing Alpha

If for some blending effect you need an alpha channel in your color buffer but you just don't want it to blend with the background, you can clear the alpha channel after you are done rendering:
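
One common approach, sketched here, is to mask off the color channels and clear only alpha:

    glContext.colorMask(false, false, false, true);   // write to the alpha channel only
    glContext.clearColor(0.0, 0.0, 0.0, 1.0);         // force alpha to fully opaque
    glContext.clear(glContext.COLOR_BUFFER_BIT);
    glContext.colorMask(true, true, true, true);      // restore the mask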

This concludes our tutorial. Phew! So much for the simple act of clearing the screen! I hope you've found this article useful. Next time, I'll explain WebGL viewports and primitives clipping. Thanks a lot for reading!



Getting Started in WebGL, Part 4: WebGL Viewport and Clipping


In the previous parts of this series, we learned much about shaders, the canvas element, WebGL contexts, and how the browser alpha-composites our color buffer over the rest of the page elements. 

In this article, we continue writing our WebGL boilerplate code. We are still preparing our canvas for WebGL drawing, this time taking viewports and primitives clipping into account. 

This article is part of the "Getting Started in WebGL" series. If you haven't read the previous parts, I recommend reading them first:

  1. Introduction to Shaders
  2. The Canvas Element for Our First Shader
  3. WebGL Context and Clear

Recap

  • In the first article of this series, we wrote a simple shader that draws a colorful gradient and fades it in and out slightly.
  • In the second article of this series, we started working towards using this shader in a webpage. Taking small steps, we explained the necessary background of the canvas element.
  • In the third article, we acquired our WebGL context and used it to clear the color buffer. We also explained how the canvas blends with the other elements of the page.

In this article, we continue from where we left, this time learning about WebGL viewports and how they affect primitives clipping.

Next in this series—if Allah wills—we'll compile our shader program, learn about WebGL buffers, draw primitives, and actually run the shader program we wrote in the first article. Almost there!

Canvas Size

This is our code so far:

Note that I've restored the CSS background color to black and the clear-color to opaque red.

Thanks to our CSS, we have a canvas that stretches to fill our webpage, but the underlying 1x1 drawing buffer is hardly useful. We need to set a proper size for our drawing buffer. If the buffer is smaller than the canvas, then we are not making full use of the device's resolution and are subject to scaling artifacts (as discussed in a previous article). If the buffer is larger than the canvas, well, the quality actually benefits a lot! It's because of the super-sampling anti-aliasing the browser applies to downscale the buffer before it's handed over to the compositor. 

However, the performance takes a good hit. If anti-aliasing is desired, it's better achieved through MSAA (multi-sampling anti-aliasing) and texture filtering. For now, we should aim at a drawing buffer of the same size of our canvas to make full use of the device's resolution and avoid scaling altogether.

To do this, we'll borrow the adjustCanvasBitmapSize from part 2 (with some modifications):

Changes:

  • We used clientWidth and clientHeight instead of offsetWidth and offsetHeight. The latter ones include the canvas borders, so they may not be exactly what we are looking for. clientWidth and clientHeight are more suited for this purpose. My bad!
  • adjustDrawingBufferSize is now scheduled to run only if changes took place. Therefore, we needn't explicitly check and abort if nothing changed.
  • We no longer need to call drawScene every time the size changes. We'll make sure it's called on a regular basis somewhere else.
  • A glContext.viewport appeared! It gets its own section, so let it pass for now!

We'll also borrow the resize events throttling function, onWindowResize (with some modifications too):

Changes:

  • It's now onCanvasResize instead of onWindowResize. It's ok in our example to assume that the canvas size changes only when the window size is changed, but in the real world, our canvas can be a part of a page where other elements exist, elements that are resizable and affect our canvas size.
  • Instead of listening to the events related to changes in canvas size, we'll just check for changes every time we are about to redraw the canvas contents. In other words, onCanvasResize gets called whether changes occurred or not, so aborting when nothing has changed is necessary.

Now, let's call onCanvasResize from drawScene:
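
That is, roughly:

    function drawScene() {
        onCanvasResize();

        glContext.clear(glContext.COLOR_BUFFER_BIT);
    }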

I mentioned that we'll be calling drawScene regularly. This means that we are rendering continuously, not only when changes occur (aka when dirty). Drawing continuously consumes more power than drawing only when dirty, but it saves us the trouble of having to track when the contents have to be updated. 

But drawing only when dirty is worth considering if you are planning to make an application that runs for extended periods of time, like wallpapers and launchers (not that you would build those in WebGL to begin with, would you?). For this tutorial, we'll render continuously. The easiest way to do that is to schedule the next drawScene call from within drawScene itself:
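
A sketch of drawScene with that scheduling added:

    function drawScene() {
        onCanvasResize();

        glContext.clear(glContext.COLOR_BUFFER_BIT);

        // ask the browser to call us again before the next repaint
        window.requestAnimationFrame(drawScene);
    }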

No, we didn't use setInterval or setTimeout for this. requestAnimationFrame tells the browser that you wish to perform an animation and requests calling drawScene before the next repaint. It's the most suitable for animations among the three, because:

  • The timings of setInterval and setTimeout are often not honored precisely—they are best-effort based. With requestAnimationFrame, the timing will generally match the display refresh rate.
  • If the scheduled code contains changes in page contents layout, setInterval and setTimeout could cause layout-thrashing (but that's not our case). requestAnimationFrame takes care of that and doesn't trigger unnecessary reflow and repaint cycles.
  • Using requestAnimationFrame allows the browser to decide how often to call our animation/drawing function. This means it can throttle it down if the page/iframe becomes hidden or inactive, which means more battery life for mobile devices. This also happens with setInterval and setTimeout in several browsers (Firefox, Chrome)—just pretend you don't know!

Back to our page. Now, our resizing mechanism is complete:

  • drawScene is being called regularly, and it calls onCanvasResize every time.
  • onCanvasResize checks the canvas size, and if changes took place, schedules an adjustDrawingBufferSize call, or postpones it if it was already scheduled.
  • adjustDrawingBufferSize actually changes the drawing buffer size, and sets the new viewport dimensions while at it.

Putting everything together:

I've added an alert that pops up every time the drawing buffer is resized. You may want to open the above sample in a new tab and resize the window or change the device orientation to test it. Notice that it only resizes when you've stopped resizing for 0.6 seconds (as if you'll measure that!).

One last remark before we end this buffer resizing thing. There are limits to how large a drawing buffer can be. These depend on the hardware and browser in use. If you happen to be:

  • using a smartphone, or
  • a ridiculously high-resolution screen, or
  • have multiple monitors/work-spaces/virtual desktops set, or
  • are using a smartphone, or
  • are viewing your page from within a very large iframe (which is the easiest way to test this), or
  • are using a smartphone

there's a chance that the canvas gets resized to more than the possible limits. In such a case, the canvas width and height will show no objections, but the actual buffer size will be clamped to the maximum possible. You can get the actual buffer size using the read-only members glContext.drawingBufferWidth and glContext.drawingBufferHeight, which I used to construct the alert. 

Other than that, everything should work fine... except that on some browsers, parts of what you draw (or all of it) may actually never end up on the screen! In this case, adding these two lines to adjustDrawingBufferSize after resizing might be worthwhile:
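
Based on the clamping described below, those two lines are presumably:

    glCanvas.width  = glContext.drawingBufferWidth;
    glCanvas.height = glContext.drawingBufferHeight;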

Now we are back to where stuff makes sense. But note that clamping to drawingBufferWidth and drawingBufferHeight may not be the best action. You might want to consider maintaining a certain aspect ratio.

Now let's do some drawing!

Viewport and Scissoring 

Remember in the first article of this series when I mentioned that inside the shader, WebGL uses the coordinates (-1, -1) to represent the lower left corner of your viewport, and (1, 1) to represent the upper right corner? That's it. viewport tells WebGL which rectangle in our drawing buffer should be mapped to (-1, -1) and (1, 1). It's just a transformation, nothing more. It doesn't affect buffers or anything.

I also said that anything outside the viewport dimensions is skipped and is not drawn altogether. That's almost entirely true, but has a twist to it. The trick lies in the words "drawn" and "outside". What really counts as drawing or as outside?
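
The line in question restricts the viewport to half the drawing buffer's width, presumably like this:

    glContext.viewport(0, 0, glContext.drawingBufferWidth / 2, glContext.drawingBufferHeight);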

This line limits our viewport rectangle to the left half of the canvas. I've added it to the drawScene function. We usually don't need to call viewport except when the canvas size changes, and we actually did it there. You can delete the one in the resize function, but I'll just leave it be. In practice, try to minimize your WebGL calls as much as you can. Let's see what this line does:

Oh, clear(glContext.COLOR_BUFFER_BIT) totally ignored our viewport settings! That's what it does, duh! viewport has no effect on clear calls at all. What the viewport dimensions affect is the clipping of primitives. Remember in the first article, I said that we can only draw points, lines and triangles in WebGL. These will be clipped against the viewport dimensions the way you think they are ... except points. 

Points

A point is drawn entirely if its center lies within the viewport dimensions, and will be omitted entirely if its center lies outside them. If a point is fat enough, its center can still be inside the viewport while a part of it extends outside. This extending part should be drawn. That's how it should be, but that's not necessarily the case in practice:

You should see something that resembles this if your browser, device and drivers stick to the standard (in this regard):

The points' size depends on your device's actual resolution, so don't mind the difference in size. Just pay attention to how much of the points appear. In the above sample, I've set the viewport area to the middle section of the canvas (the area with the gradient), but since the points' centers are still inside the viewport, they should be entirely drawn (the green things). If this is the case in your browser, then great! But not all users are that lucky. Some users will see the outside parts trimmed, something like this:

Most of the time, it really makes no difference. If the viewport is going to cover the entire canvas, then we don't care whether the outsides will be trimmed or not. But it would matter if these points were moving smoothly heading outside the canvas, and then they suddenly disappeared because their centers went outside:

(Press Result to restart the animation.)

Again, this behavior is not necessarily what you see. According to history, Nvidia devices won't clip the points when their centers go outside, but will trim the parts that go outside. On my machine (using an AMD device), Chrome, Firefox and Edge behave the same way when run on Windows. However, on the same machine, Chrome and Firefox will clip the points and won't trim them when run on Linux. On my Android phone, Chrome and Firefox will both clip and trim the points!

Scissoring

It seems that drawing points is bothersome. Why even care? Because points needn't be circular. They are axis-aligned rectangular regions. It's the fragment shader that decides how to draw them. They can be textured, in which case they are known as point-sprites. These can be used to make plenty of stuff, like tile-maps and particle effects, in which they are really handy since you only need to pass one vertex per sprite (the center), instead of four in the case of a triangle-strip. Reducing the amount of data transferred from the CPU to the GPU can really pay off in complex scenes. In WebGL 2, we can use geometry instancing (which has its own catches), but we are not there yet.

So, how do we deal with points clipping? To get the outer parts trimmed, we use scissoring:
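
Enabling the test is one call (a sketch, using the same glContext handle):

    // The scissor test is off by default, so turn it on first.
    glContext.enable(glContext.SCISSOR_TEST);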

Scissoring is now enabled, so here's how to set the scissored region:
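
The call mirrors viewport's signature. For example, to scissor the left half of the drawing buffer (a sketch):

    // x, y of the lower-left corner, then width and height, all in
    // drawing-buffer pixels (not relative to the viewport).
    glContext.scissor(
        0, 0,
        glContext.drawingBufferWidth / 2,
        glContext.drawingBufferHeight
    );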

While primitives' positions are relative to the viewport dimensions, the scissor box dimensions are not. They specify a raw rectangle in the drawing buffer, not minding how much it overlaps the viewport (or not). In the following sample, I've set the viewport and scissor box to the middle section of the canvas:

(Press Result to restart the animation.)

Note that the scissor test is a per-sample operation that discards any fragment falling outside the test box. It doesn't care what's being drawn; it just throws away the fragments that land outside. Even clear respects the scissor test! That's why the blue color (the clear color) is confined to the scissor box. All that remains is to prevent the points from disappearing when their centers go outside. To do this, I'll make the viewport larger than the scissor box, with a margin that allows the points to still be drawn until they are completely outside the scissor box:
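
Here's a sketch of that setup. The exact region and margin are placeholders; the margin just has to cover at least half of the largest point size you draw:

    // The region we actually want to see: the middle of the canvas.
    var x = glContext.drawingBufferWidth / 4;
    var y = glContext.drawingBufferHeight / 4;
    var width = glContext.drawingBufferWidth / 2;
    var height = glContext.drawingBufferHeight / 2;
    var margin = 32; // at least half the largest point size, in pixels

    glContext.enable(glContext.SCISSOR_TEST);
    glContext.scissor(x, y, width, height);

    // Inflate the viewport past the scissor box so that fat points whose
    // centers drift just outside the visible region are still rasterized,
    // and only their outside parts get discarded by the scissor test.
    glContext.viewport(x - margin, y - margin,
                       width + 2 * margin, height + 2 * margin);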

(Press Result to restart the animation.)

Yay! This should work nicely everywhere. But in the above code, we only used a part of the canvas to do the drawing. What if we wanted to occupy the whole canvas? It really makes no difference. The viewport can be larger than the drawing buffer without problems (just ignore Firefox's ranting about it in the console output):
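
A sketch of the full-canvas variant, again with a placeholder margin:

    var margin = 32;

    // Scissor to the whole drawing buffer, but let the viewport hang over
    // its edges so points are clipped gradually instead of popping out.
    glContext.scissor(0, 0,
                      glContext.drawingBufferWidth,
                      glContext.drawingBufferHeight);
    glContext.viewport(-margin, -margin,
                       glContext.drawingBufferWidth + 2 * margin,
                       glContext.drawingBufferHeight + 2 * margin);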

See:

Be mindful of the viewport size, though. Even though the viewport itself is just a transformation that costs you nothing, you don't want to rely on per-sample clipping alone. Consider changing the viewport only when needed, and restoring it for the rest of the drawing. And remember that the viewport affects the position of the primitives on the screen, so account for that as well.

That's it for now! Next time, let's put the whole size, viewport and clipping things behind us. On to drawing some triangles! Thanks for reading so far, and I hope it was helpful.

Create a Space Shooter With PlayCanvas: Part 2


This is the second part of our quest to create a 3D space shooter. In part one we looked at how to set up a basic PlayCanvas game, with physics and collision, our own models, and a camera.

For reference, here's a live demo of our final result again.

In this part, we're going to focus on dynamically creating entities with scripts (to spawn bullets and asteroids) as well as how to add things like an FPS counter and in-game text. If you've already followed the previous part and are happy with what you have, you can start building from that and skip the following minimum setup section. Otherwise, if you need to restart from scratch:

Minimum Setup

  1. Start a new project.
  2. Delete all objects in the scene except for Camera, Box, and Light.
  3. Put both the light and the camera inside the box object in the hierarchy panel.
  4. Place the camera at position (0,1.5,2) and rotation (-20,0,0).
  5. Make sure the light object is placed in a position that looks good (I put it on top of the box).
  6. Attach a rigid body component to the box. Set its type to dynamic, and set its damping (both linear and angular) to 0.95.
  7. Attach a collision component to the box.
  8. Set the gravity to 0 (from the scene settings).
  9. Place a sphere at (0,0,0) just to mark this position in space.
  10. Create and attach this script to the box and call it Fly.js:
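
A minimal sketch of what Fly.js can contain (the force and torque values are arbitrary placeholders to tune):

    var Fly = pc.createScript('fly');

    Fly.prototype.update = function (dt) {
        var body = this.entity.rigidbody;

        // Thrust along the ship's own forward vector while Z is held.
        if (this.app.keyboard.isPressed(pc.KEY_Z)) {
            body.applyForce(this.entity.forward.clone().scale(10));
        }

        // Rotate around the ship's local up/right axes with the arrow keys.
        if (this.app.keyboard.isPressed(pc.KEY_LEFT)) {
            body.applyTorque(this.entity.up.clone().scale(1));
        }
        if (this.app.keyboard.isPressed(pc.KEY_RIGHT)) {
            body.applyTorque(this.entity.up.clone().scale(-1));
        }
        if (this.app.keyboard.isPressed(pc.KEY_UP)) {
            body.applyTorque(this.entity.right.clone().scale(1));
        }
        if (this.app.keyboard.isPressed(pc.KEY_DOWN)) {
            body.applyTorque(this.entity.right.clone().scale(-1));
        }
    };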

Test that everything worked. You should be able to fly with Z to thrust and arrow keys to rotate!

8. Spawning Asteroids

Dynamically creating objects is crucial for almost any type of game. In the demo I created, I'm spawning two kinds of asteroids. The first kind just float around and act as passive obstacles. They respawn when they get too far away, which keeps a consistently dense asteroid field around the player. The second kind spawn from further away and move towards the player (to create a sense of danger even if the player is not moving).

We need three things to spawn our asteroids:

  1. An AsteroidModel entity from which to clone all other asteroids.
  2. An AsteroidSpawner script attached to the root object that will act as our factory/cloner.
  3. An Asteroid script to define the behavior of each asteroid. 

Creating an Asteroid Model 

Create a new entity out of a model of your choosing. This could be something out of the PlayCanvas store, or something from BlendSwap, or just a basic shape. (If you're using your own models, it's good practice to open it up in Blender first to check the number of faces used and optimize it if necessary.)

Give it an appropriate collision shape and a rigid body component (make sure it's dynamic). Once you're happy with it, uncheck the Enabled box:

Where to toggle the Enabled property

When you disable an object like this, it's equivalent to removing it from the world as far as the player is concerned. This is useful for temporarily removing objects, or in our case, for keeping an object with all its properties but not having it appear in-game.

Creating the Asteroid Spawner Script 

Create a new script called AsteroidSpawner.js and attach it to the Root object in the hierarchy. (Note that the Root is just a normal object that can have any components attached to it, just like the Camera.)

Now open up the script you just created. 

The general way of cloning an entity and adding it to the world via script looks like this:
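
A sketch, assuming you already hold a reference called oldEntity:

    // Clone the source entity, re-enable the copy, and add it to the scene.
    var newEntity = oldEntity.clone();
    newEntity.enabled = true;
    this.app.root.addChild(newEntity);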

This is how you would clone an object if you already had an "oldEntity" object. This leaves one question unanswered: How do we access the AsteroidModel we created? 

There are two ways to do this. The more flexible way is to create a script attribute that holds which entity to clone, so you could easily swap models without touching the script. (This is exactly how we did the camera lookAt script back in step 7.)

The other way is to use the findByName function. You can call this method on any entity to find any of its children. So we can call it on the root object:
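
Assuming the template entity is named AsteroidModel in the hierarchy, that's a one-liner:

    var asteroidModel = this.app.root.findByName('AsteroidModel');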

And so this will complete our code from above. The full AsteroidSpawner script now looks like this:
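
Here's a sketch of the whole thing (the spawn position and the 'Asteroid' name are placeholders; naming the clones will matter later when bullets check what they hit):

    var AsteroidSpawner = pc.createScript('asteroidSpawner');

    AsteroidSpawner.prototype.initialize = function () {
        // Find the disabled template entity we set up in the editor.
        var asteroidModel = this.app.root.findByName('AsteroidModel');

        // Clone it, enable the copy, name it, and add it to the world.
        var newEntity = asteroidModel.clone();
        newEntity.enabled = true;
        newEntity.name = 'Asteroid';
        this.app.root.addChild(newEntity);

        // Position it through the rigidbody, not the entity itself.
        newEntity.rigidbody.teleport(5, 0, 0);
    };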

Test that this worked by launching and looking to see if your asteroid model exists.

Note: I used newEntity.rigidbody.teleport instead of newEntity.setPosition. If an entity has a rigidbody, then the rigidbody will be overriding the entity's position and rotation, so remember to set these properties on the rigidbody and not on the entity itself.

Before you move on, try making it spawn ten or more asteroids around the player, either randomly or in some systematic way (maybe even in a circle?). It would help to put all your spawning code into a function so it would look something like this:
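
One possible sketch of that refactor, scattering ten asteroids at random positions:

    AsteroidSpawner.prototype.initialize = function () {
        this.asteroidModel = this.app.root.findByName('AsteroidModel');

        for (var i = 0; i < 10; i++) {
            this.spawnAsteroid(
                Math.random() * 40 - 20,
                Math.random() * 40 - 20,
                Math.random() * 40 - 20
            );
        }
    };

    AsteroidSpawner.prototype.spawnAsteroid = function (x, y, z) {
        var newEntity = this.asteroidModel.clone();
        newEntity.enabled = true;
        newEntity.name = 'Asteroid';
        this.app.root.addChild(newEntity);
        newEntity.rigidbody.teleport(x, y, z);
    };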

Creating the Asteroid Script

You should be comfortable adding new scripts by now. Create a new script (called Asteroid.js) and attach it to the AsteroidModel. Since all of our spawned asteroids are clones, they will all have the same script attached to them. 

If we're creating a lot of asteroids, it would be a good idea to make sure they are destroyed when we no longer need them or when they're far enough away. Here's one way you could do this:
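
For instance, here's a sketch that destroys the asteroid once it drifts an arbitrary distance from the player (assuming the player entity is still the Box from the minimum setup):

    var Asteroid = pc.createScript('asteroid');

    Asteroid.prototype.initialize = function () {
        this.player = this.app.root.findByName('Box'); // the ship entity
    };

    Asteroid.prototype.update = function (dt) {
        // Vector from the player to this asteroid.
        var distance = new pc.Vec3();
        distance.sub2(this.entity.getPosition(), this.player.getPosition());

        // Far enough away that nobody will miss it? Remove it.
        if (distance.length() > 100) {
            this.entity.destroy();
        }
    };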

Debugging Tip: If you want to print anything out, you can always use the browser's console as if this were any normal JavaScript app. So you could do something like console.log(distance.toString()); to print out the distance vector, and it will show up in the console.

Before moving on, check that the asteroid does disappear when you move away from it.

9. Spawning Bullets

Spawning bullets will be roughly the same idea as spawning asteroids, with one new concept: We want to detect when the bullet hits something and remove it. To create our bullet system, we need:

  1. A bullet model to clone. 
  2. A Shoot.js script for spawning bullets when you press X. 
  3. A Bullet.js script for defining each bullet's behavior. 

Creating a Bullet Model

You can use any shape for your bullet. I used a capsule just to have an idea of which direction the bullet was facing. Just like before, create your entity, scale it down, and give it a dynamic rigid body and an appropriate collision box. Give it the name "Bullet" so it will be easy to find.

Once you're done, make sure to disable it (with the Enabled checkbox).

Creating a Shoot Script

Create a new script and attach it to your player ship. This time we're going to use an attribute to get a reference to our bullet entity:
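
The attribute declaration goes at the top of the script; I'll call it bulletModel here:

    var Shoot = pc.createScript('shoot');

    // Entity reference to the disabled bullet we'll clone for each shot.
    Shoot.attributes.add('bulletModel', { type: 'entity' });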

Go back to the editor and hit "parse" for the new attribute to show up, and select the bullet entity you created. 

Now in the update function, we want to:

  1. Clone it and add it to the world.
  2. Apply a force in the direction the player is facing. 
  3. Place it in front of the player.

You've already been introduced to all these concepts. You've seen how to clone asteroids, how to apply a force in a direction to make the ship move, and how to position things. I'll leave the implementation of this part as a challenge. (But if you get stuck, you could always go look at how I implemented my own Shoot.js script in my project).

Here are some tips that might save you some headache:

  1. Use keyboard.wasPressed instead of keyboard.isPressed. When detecting the X key press to fire a shot, the former fires only once per press, as opposed to firing continuously for as long as the button is held.

  2. Use rotateLocal instead of setting an absolute rotation. To make sure the bullet always spawns parallel to the ship, it was a pain to calculate the angles correctly. A much easier way is to simply set the bullet's rotation to the ship's rotation, and then rotate the bullet in its local space by 90 degrees on the X axis.

Creating the Bullet Behavior Script

At this point, your bullets should be spawning, hitting the asteroids, and just ricocheting into empty space. The number of bullets can quickly get overwhelming, and knowing how to detect collision is useful for all sorts of things. (For instance, you might have noticed that you can create objects that only have a collision component but no rigid body. These would act as triggers but would not react physically.) 

Create a new script called Bullet and attach it to your Bullet model that gets cloned. PlayCanvas has three types of contact events. We're going to listen for collisionend, which fires when the objects separate (otherwise the bullet would get destroyed before the asteroid has a chance to react).  

To listen in on a contact event, type this into your init function:
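
Something along these lines (assuming the script constructor is called Bullet, as created above):

    var Bullet = pc.createScript('bullet');

    Bullet.prototype.initialize = function () {
        // Call our handler whenever this entity stops touching something.
        this.entity.collision.on('collisionend', this.onCollisionEnd, this);
    };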

And then create the listener itself:
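
A sketch of the handler, destroying the bullet only when the other entity is one of our asteroids:

    Bullet.prototype.onCollisionEnd = function (result) {
        // 'result' is the entity the bullet just separated from.
        if (result.name === 'Asteroid') {
            this.entity.destroy();
        }
    };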

This is where the name you gave to your asteroids when you spawned them becomes relevant. We want the bullet to only be destroyed when it collides with an asteroid. result is the entity it finished colliding with.

Alternatively, you could remove that check and just have it get destroyed on collision with anything.

It's true that there are no other objects in the world to collide with, but I did have some issues early on with the player triggering the bullet's collision for one frame and it disappearing before it could launch. If you have more complicated collision needs, PlayCanvas does support collision groups and masks, but it isn't very well documented at the time of writing.

10. Adding an FPS Meter

We're essentially done with the game itself at this point. Of course, there are a lot of small polish things I added to the final demo, but there's nothing there you can't do with what you've learned so far. 

I wanted to show you how to create an FPS meter (even though PlayCanvas already has a profiler; you can hover over the play button and check the profiler box) because it's a good example of adding a DOM element that's outside the PlayCanvas engine. 

We're going to use this slick FPSMeter library. The first thing to do is to head over to the library's website and download the minified production version.

Head back to your PlayCanvas editor, create a new script, and copy over the fpsMeter.min.js code. Attach this script to the root object.

Now that the library has been loaded, create a new script that will initialize and use the library. Call it meter.js, and from the usage example on the library's website, we have:
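
Adapted into a PlayCanvas script, a sketch might look like this (FPSMeter is the global constructor exposed by the library file we just loaded):

    var Meter = pc.createScript('meter');

    Meter.prototype.initialize = function () {
        this.meter = new FPSMeter();
    };

    Meter.prototype.update = function (dt) {
        // Tick once per rendered frame so the meter can compute the FPS.
        this.meter.tick();
    };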

Add the meter script to the root object as well, and launch. You should see the FPS counter in the top left corner of the screen!

11. Adding Text

Finally, let's add some text in our world. This one is a little involved since there are various ways to do it. If you just want to add static UI, one way to do it is to work with the DOM directly, overlaying your UI elements on top of PlayCanvas's canvas element. Another method is to use SVGs. This post discusses some of these different ways.

Since these are all standard ways of handling text on the web, I've opted instead to look at how to create text that exists within the space of the game world. So think of it as text that would go on a sign in the environment or an object in the game.

The way we do this is by creating a material for each piece of text we want to render. We then create an invisible canvas that we render the text to using the familiar canvas fillText method. Finally, we render the canvas onto the material to have it appear in game.

Note that this method can be used for more than just text. You could dynamically draw textures or do anything a canvas can do.

Create the Text Material

Create a new material and call it something like "TextMaterial". Set its diffuse color to black since our text will be white.

Create a plane entity and attach this material to it.

Create the Text Script

You can find the full text.js script in this gist:

https://gist.github.com/OmarShehata/e016dc219da36726e65cedb4ab9084bd

You can see how it sets up the texture to use the canvas as a source, specifically at the line: this.texture.setSource(this.canvas);

Create this script and attach it to your plane. Note how it creates two attributes: text and font size. This way you could use the same script for any text object in your game.

Launch the simulation and you should see the big "Hello World" text somewhere in the scene. If you don't see it, make sure that a) it has a light source nearby and b) you're looking at the correct side of it. The text won't render if you're looking at it from behind. (It also helps to place a physical object near the plane just to locate it at first.)

12. Publishing

Once you've put together your awesome prototype, you can click on the PlayCanvas icon in the top left corner of the screen and select "Publishing". This is where you can publish new builds to be hosted on PlayCanvas and share them with the world!

Dialog box for publishing on PlayCanvas

Conclusion

That's it for this demo. There's a lot more to explore in PlayCanvas, but hopefully this overview gets you comfortable enough with the basics to start building your own games. It's a really nice engine that I think more people should use. A lot of what has been created with it has been tech demos rather than full games, but there's no reason you can't build and publish something awesome with it.

One feature I didn't really talk about but might have been apparent is that PlayCanvas's editor allows you to update your game in real time. This is true for design, in that you can move things in the editor and they'll update in the launch window if you have it open, as well as for code, with its hot-reloading.

Finally, while the editor is really convenient, anything that you can do with it can be done with pure code. So if you need to use PlayCanvas on a budget, a good reference to use is the examples folder on GitHub. (A good place to start would be this simple example of a spinning cube.)

If anything is confusing at all, please let me know in the comments! Or just if you've built something cool and want to share, or figured out an easier way to do something, I'd love to see!

Procedural Generation for Simple Puzzles

Final product image
What You'll Be Creating

Puzzles are an integral part of gameplay for many genres. Whether simple or complex, developing puzzles manually can quickly become cumbersome. This tutorial aims to ease that burden and pave the way for other, more fun, aspects of design.

Together we are going to create a generator for composing simple procedural “nested” puzzles. The type of puzzle that we will focus on is the traditional “lock and key”, most often stated as: get item x to unlock area y. These types of puzzles can become tedious for teams working on certain types of games, especially dungeon crawlers, sandboxes, and role-playing games, where puzzles are more often relied upon for content and exploration.

By using procedural generation, our goal is to create a function that takes a few parameters and returns a more complex asset for our game. Applying this method will provide an exponential return on developer time without sacrificing gameplay quality. Developer consternation may also decline as a happy side effect.

What Do I Need to Know?

To follow along, you will need to be familiar with a programming language of your choice. Since most of what we are discussing is data only and generalized into pseudocode, any object-oriented programming language will suffice. 

In fact, some drag-and-drop editors will also work. If you would like to create a playable demo of the generator mentioned here, you will also need some familiarity with your preferred gaming library.

Creating the Generator

Let’s begin with a look at some pseudocode. The most basic building blocks of our system are going to be keys and rooms. In this system, a player is barred from entering a room’s door unless they possess its key. Here's what those two objects would look like as classes:
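
Here's a sketch of those two classes in JavaScript (the demo's language); the field and method names are illustrative:

    function Key(location) {
        this.location = location;   // the room the key is placed in
        this.inInventory = false;   // does the player hold it yet?
    }

    Key.prototype.pickup = function () {
        // Called when the player interacts with the key.
        this.inInventory = true;
    };

    function Room(assocKey) {
        this.isLocked = true;       // current state of the room's lock
        this.assocKey = assocKey;   // the Key object that unlocks this room
    }

    Room.prototype.unlock = function () {
        this.isLocked = false;
    };

    Room.prototype.canOpen = function () {
        // The door opens only if it's already unlocked or its key is held.
        return !this.isLocked || (this.assocKey && this.assocKey.inInventory);
    };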

Our key class only holds two pieces of information right now: the location of the key, and if the player has that key in his or her inventory. Its two functions are initialization and pickup. Initialization determines the basics of a new key, while pickup is for when a player interacts with the key.

In turn, our room class also contains two variables: isLocked, which holds the current state of the room’s lock, and assocKey, which holds the Key object that unlocks this specific room. It contains a function for initialization as well—one to call to unlock the door, and another to check whether the door can currently be opened.

A single door and key are fun, but we can always spice it up with nesting. Implementing this function will allow us to create doors within doors while serving as our primary generator. To maintain nesting, we will need to add some additional variables to our door as well:
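
Here's a sketch of that generator, extending the Room above with parent and depth fields (names are illustrative):

    function Room(parent) {
        this.isLocked = true;
        this.assocKey = null;
        this.parent = parent;                        // the room this one sits inside
        this.depth = parent ? parent.depth + 1 : 0;  // how deeply nested it is
    }

    function generatePuzzle(maxDepth) {
        var roomsToCheck = [];   // rooms that may still receive a child
        var finishedRooms = [];  // rooms that are fully processed

        // The outermost room contains the entire scene.
        roomsToCheck.push(new Room(null));

        while (roomsToCheck.length > 0) {
            var current = roomsToCheck.shift();   // take the room at the front

            if (current.depth < maxDepth) {
                // Nest a new room inside the current one.
                roomsToCheck.push(new Room(current));
            }

            finishedRooms.push(current);
        }

        return finishedRooms;
    }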

This generator code is doing the following:

  1. Taking in the parameter for our generated puzzle (specifically how many layers deep a nested room should go).

  2. Creating two arrays: one for rooms that are being checked for potential nesting, and another for registering rooms that are already nested.

  3. Creating an initial room to contain the entire scene and then add it to the array for us to check later.

  4. Taking the room at the front of the array to put through the loop.

  5. Checking the depth of the current room against the maximum depth provided (this decides if we create a further child room or if we complete the process).

  6. Establishing a new room and populating it with the necessary information from the parent room.

  7. Adding the new room to the roomsToCheck array and moving the previous room to the finished array.

  8. Repeating this process until each room in the array is complete.

Now we can have as many rooms as our machine can handle, but we still need keys. Key placement has one major challenge: solvability. Wherever we place the key, we need to make sure that a player can access it! No matter how excellent the hidden key cache seems, if the player cannot reach it, he or she is effectively trapped. In order for the player to continue through the puzzle, the keys must be obtainable.

The simplest method to ensure solvability in our puzzle is to use the hierarchical system of parent-child object relationships. Since each room resides within another, we expect that a player must have access to the parent of each room to reach it. So, as long as the key is above the room on the hierarchical chain, we guarantee our player is able to gain access.

To add key generation to our procedural generation, we will put the following code into our main function:
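
A sketch of that addition, run once for each generated room; it walks up the parent chain, picks a random ancestor, and places the key there:

    finishedRooms.forEach(function (room) {
        // Collect every room above this one in the hierarchy.
        var ancestors = [];
        for (var current = room.parent; current !== null; current = current.parent) {
            ancestors.push(current);
        }

        if (ancestors.length > 0) {
            // Place the key in a random ancestor, so the player can always
            // reach it before reaching the door it unlocks.
            var keyRoom = ancestors[Math.floor(Math.random() * ancestors.length)];
            room.assocKey = new Key(keyRoom);
        } else {
            // The outermost room has no ancestors, so leave it unlocked.
            room.isLocked = false;
        }
    });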

This extra code will now produce a list of all the rooms that are above your current room in the map's hierarchy. Then we choose one of those randomly, and set the key's location to that room. After that, we assign the key to the room it unlocks.

When called, our generator function will now create and return a given number of rooms with keys, potentially saving hours of development time!

That wraps up the pseudocode part of our simple puzzle generator, so now let’s put it into action.

Procedural Puzzle Generation Demo

We built our demo using JavaScript and the Crafty.js library to keep it as light as possible, allowing us to keep our program under 150 lines of code. There are three main components of our demo as laid out below:

  1. The player can move throughout each level, pick up keys, and unlock doors.

  2. The generator which we will use to create a new map automatically every time the demo is run.

  3. An extension for our generator to integrate with Crafty.js, which allows us to store object, collision, and entity information.

The pseudocode above acts as a tool for explanation, so implementing the system in your own programming language will require some modification.

For our demo, a portion of the classes are simplified for more efficient use in JavaScript. This includes dropping certain functions related to the classes, as JavaScript allows for easier access to variables within classes.

To create the game part of our demo, we initialize Crafty.js and then create a player entity. Next, we give the player entity basic four-direction controls and some minor collision detection to prevent entering locked rooms.
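
A minimal sketch of that setup (sizes, speed, and colors are placeholders, and the locked-door collision response is omitted here):

    // Boot Crafty and create a simple player entity.
    Crafty.init(800, 600);

    var player = Crafty.e('2D, Canvas, Color, Fourway, Collision')
        .attr({ x: 400, y: 300, w: 16, h: 16 })
        .color('#ffffff')
        .fourway(200);   // basic four-direction arrow-key controls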

Rooms are now given a Crafty entity, storing information on their size, location, and color for visual representation. We will also add a draw function to allow us to create a room and draw it to the screen.

We will provide keys with similar additions, including storage of its Crafty entity, size, location, and color. Keys will also be color-coded to match the rooms they unlock. Finally, we can now place the keys and create their entities using a new draw function.

Last but not least, we’ll develop a small helper function that creates and returns a random hexadecimal color value to remove the burden of choosing colors. Unless you like swatching colors, of course.
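
A small sketch of such a helper:

    // Returns a random color string like '#a3f20c'.
    function randomColor() {
        var hex = Math.floor(Math.random() * 0xffffff).toString(16);
        while (hex.length < 6) {
            hex = '0' + hex;
        }
        return '#' + hex;
    }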

What Do I Do Next?

Now that you have your own simple generator, here are a few ideas for extending our examples:

  1. Port the generator to enable use in your programming language of choice.

  2. Extend the generator to include creating branching rooms for further customization.

  3. Add the ability to handle multiple room entrances to our generator to allow for more complex puzzles.

  4. Extend the generator to allow for key placement in more complicated locations to enhance player problem solving. This is especially interesting when paired with multiple paths for players.

Wrapping Up

Now that we have created this puzzle generator together, use the concepts shown to simplify your own development cycle. What repetitive tasks do you find yourself doing? What bothers you most about creating your game? 

Chances are, with a little planning and procedural generation, you can make the process significantly simpler. Hopefully, our generator will allow you to focus on the more appealing parts of game-making while cutting out the mundane.

Good luck, and I’ll see you in the comments!

How to Create Artistic Backgrounds From Photos With These 3 Photoshop Actions

Final product image
What You'll Be Creating

Backgrounds are an important—if sometimes underappreciated—element for many designers. Whether you're creating a game, visual novel, or website, the backgrounds you choose play a huge part in setting the mood and scene for your work.

Part of the reason backgrounds don't always get the love they deserve is that, despite their importance, they can be slow and tedious work. Luckily, there's always a faster way to get the job done.

In this tutorial, I'll be showing you how to create three dramatically different backgrounds from photos using resources from Envato Elements. These resources are a combination of Photoshop Actions and Photoshop Brushes that, when combined, save you countless hours you would otherwise use to create these backgrounds from scratch. So let's get started!

What You'll Need

Here are the assets you'll need for this tutorial:

1. How to Create a Charcoal-Inspired Background

Step 1

First up is the charcoal background. I'll be using this amazing Charcoal Photoshop Action to help achieve the final result. Start by opening this Eiffel Tower reference into Photoshop, making sure that you select the 1920x999 pixels size.

Charcoal Pixabay Reference
Eiffel Tower Stock

Step 2

Now let's set up the action. Go to Window > Actions and load your charcoal action into the Actions palette.

Charcoal Photoshop Action

Because there's a brush set included with this action, you'll need to load it as well. So hit B on your keyboard to bring up the Brush Tool. Select Replace Brushes from the drop-down menu, then select the .abr file to replace your current brushes with the new charcoal ones.

Here is a look at the charcoal brushes you should now have in the Preset Manager.

Charcoal Photoshop Brushes

Step 3

Many Photoshop actions come with a set of instructions you'll need to follow exactly in order to avoid any program errors. So make sure you pay attention to the attached "Readme" file before you begin.

Now create a New Layer and name it "brush." With the Brush Tool (B) selected, paint black all over the Eiffel Tower since it's where I would like to apply the charcoal effect. It's important to keep the brush part as clean as possible, otherwise it will affect how this action plays out.

Applying the Charcoal Action

When you're ready, select Style 1 under the Charcoal Action, then hit the Play button on the Actions palette.

Playing the Charcoal Action

Step 4

Here is the initial result after you play the action.

Charcoal Action Initial Result

Since it looks quite messy, let's tweak some of the Adjustment Layers found within the new Group created by the action.

First, Hide the Visibility of the Sketch Lines 2 Group. Then select all three photo layers and set their Layer Blend Modes to Pin Light and their Opacity to 100%. Follow up by selecting the Contrast layer and changing its Opacity to 100%. This will allow your subject to show through the smudges more.

Adjust the Photo Layers

To add more texture to the background, create a New Layer above the second photo layer. Select the Brush Tool (B) and use one of the charcoal brushes from the set to paint a large smudge of gray on the layer. Right-click to go to Blending Options and apply a Gradient Overlay with the following settings:

Apply a Gradient Overlay

Step 5

For a tint of color, add a New Adjustment Layer for Color Lookup at the top of your layers. Go to Layer > New Adjustment Layer > Color Lookup. Then set the 3DLUTFile to FoggyNight.

Add a Color Lookup Adjustment Layer

Here is the final charcoal background below. See how simple that was?

Charcoal Background Photoshop Tutorial

2. How to Create a Watercolor-Inspired Background

Step 1

Next up, we'll create a lovely watercolor background from a photo. This time, I'll be using this Artist Photoshop Action and this set of Handcrafted Watercolor Brushes to make the process much easier.

Open your photo into Photoshop. Here I'll be using this Woman Portrait at 1920x1079 pixels.

Woman Portrait Pixabay Stock
Woman Portrait

Step 2

Now go to Window > Actions and load the artist action into the Actions palette.

Artist Photoshop Action

Just like before, we'll also need to load all of the brushes attached to this action. So select the Brush Tool (B), Right-click to bring up your brushes, then select Replace Brushes from the drop-down options. Choose the artist brush file located in the same folder as the action.

Here is the new set you should have.

Artist Action Brushes

Step 3

To run the action, create a New Layer and name it "brush." Select the Brush Tool (B) at 100% Hardness, and use the brush to Fill in the areas with black where you'd like the effect to take place. Here I painted the woman and the bars behind her.

Paint the Woman to Apply the Action

With the brush layer still selected, hit the Play button on the Actions palette to run your artist action. Here is the initial result.

Initial Watercolor Action Result

Step 4

Even though we're going for a watercolor effect, let's clean up the result a bit. First, Hide the Visibility of the Watercolor Edge Splatter Group. Then go into the Watercolor and Watercolor Texture Groups and Hide some of the layers to allow the portrait to breathe. Feel free to experiment with this for the result you prefer.

Hide Some Texture Layers

Step 5

To create a better color palette, add a Gradient Map Adjustment Layer. Go to Layer > New Adjustment Layer > Gradient Map. Create a gradient that transitions from a deep red color #491802 to white.

Add a Gradient Map

Step 6

Finish this background with a little more texture. Load the Handcrafted Watercolor Brushes we mentioned earlier into the Brush Tool (B).

Handcrafted Watercolor Brushes

Create a New Layer and use the Brush Tool (B) to paint white watercolor splashes all around the canvas. I used brushes 1, 15, and 17 for the right side, and brushes 15, 26, and 72 for the left side.

Here is the final watercolor background when you're done.

Final Watercolor Background Photoshop Tutorial

3. How to Create a Paint Splatter-Inspired Background

Step 1

For this last background, we'll be creating another traditional art effect using this Paint Splatter Action.

Here I'll be using this Tiger Stock, downloaded at 1920x1280 pixels. Open this photo into Photoshop.

Tiger Stock Via Pixabay
Tiger Stock

Step 2

Load your paint splatter action. Go to Window > Actions, and load the action from the drop down menu.

Paint Splatter Photoshop Action

Just like the previous actions, you'll need to load the brush set that comes along with it. So select the Brush Tool (B), then Right-click to bring up your brushes. Select Replace Brushes from the options and replace them with the paint splatter brushes found within the same folder.

Paint Splatter Photoshop Brushes

Step 3

Now create a New Layer and name it "mask." Select the Brush Tool (B) and use a Soft Round Brush with 0% Hardness to fill your subject with black.

Mask the Tiger with Black

When you're ready, select the mask layer then hit the Play button on the Actions palette. Here is the initial result after you play the action.

Paint Splatter Initial Background

Step 4

To adjust this background even further, Hide the Group for CC Option 1, and Unhide the Group for CC Option 2 instead.

Change the Group Visibility

Now we have a color scheme that suits this photo better. Scroll down to the bottom of the action list to find the layers for the background elements. Unhide the Visibility of the background splash layers as well as the contour fill layers in order to achieve a more dynamic result.

Result with Background Splashes

Finish with a New Photo Filter Adjustment Layer. Set the filter to Cooling Filter (80) and raise the Density to 91%. Change the Layer Blend Mode to Soft Light and you're done!

Add a Cooling Photo Filter

Here is the final paint splatter background.

Final Paint Splatter Background

Conclusion

Let Photoshop actions do all the heavy lifting for your designs. You'll be impressed with the incredible diversity in the effects you can achieve, all while saving countless work hours.

I hope you enjoyed this tutorial; feel free to leave any questions in the comments below. For more tutorials on photo effects, check out the following links:

Create a Space Shooter With PlayCanvas: Part 1


PlayCanvas makes it really easy to build WebGL-powered, 3D interactive content for the web. It's all JavaScript, so it runs natively in the browser without any plugins. It's a pretty young engine that's only been around since 2014, but it's been quickly gaining traction with names like Disney, King and Miniclip using it to develop games. 

It's a great tool for two main reasons: First, it's a fully featured game engine, so it handles everything from graphics and collision to audio and even integration with gamepads/VR. (So you won't have to hunt down external libraries or worry about browser compatibility issues for most things.) The second, and what I think makes it really stand out, is their browser-based editor.

Screenshot of PlayCanvas's editor
This is what a sample project looks like in PlayCanvas's in-browser editor. It's a really powerful and convenient way to organize your work or even collaborate with others in real time.

If you're used to working with the Unity engine, PlayCanvas's editor should look familiar (it even uses a similar component-based system for stringing together functionality). Unlike Unity, PlayCanvas is not cross-platform and can only publish for the web. However, if the web is all you care about, then this ends up being a big plus, since the engine's focus on the web makes it really fast and lightweight compared to the competition.

One final note: While the engine itself is free and open source, the online editor and tools are only free for public projects. It's certainly worth paying for if you're developing commercial work with it, but you can always just use it as purely a code framework as well for free. 

The Final Result

This is what we'll be creating:

In-game screenshot of space shooter

You can try out a live demo.

The project itself is public, so you can poke around and/or fork it on its project page.

You don't need to have any experience with 3D games to follow along, but I will assume some basic familiarity with JavaScript.

Creating Our Own Project From Scratch

The final result is a relatively simple demo where you just fly around and push asteroids, but it covers enough basic functionality that will be useful in making any kind of 3D game. Part 1 will cover the basic setup, working with models, the physics system, and camera controls. Part 2 will cover the bullet system, spawning asteroids, and working with text.

1. Project Setup 

Head over to playcanvas.com and create an account.

Once you're logged in, click on the projects tab in the dashboard and press the big orange new button to create a new project. This should bring up the "new project" box. Select "blank project" and give it a name:

Dialog box for creating a new project

Once you're done, hit the create button at the bottom right. This will send you to the project overview page. Here, you can access your settings and add collaborators. For now we'll just dive into the project, so click on the big orange editor button.

When you enter your first project, PlayCanvas will show you a lot of hints about its editor. You can dismiss those for now. The main things to note are:

  • The left (hierarchy) panel is a list of all your world objects. This is also where you can add, duplicate, or delete entities from your scene.
  • The right (inspector) panel is where you edit the properties of the selected object. After selecting an object (by clicking on it), you'll be able to set its position and orientation or attach scripts and components. 
  • The bottom (assets) panel contains all your assets. This is where you can upload textures or 3D models as well as create scripts.
  • The center scene is where you can edit and build your game world. 

2. Creating an Object

To create a new object in your scene, click on the little plus button at the top of the hierarchy panel:

How to create a new entity

Note: You might accidentally create a new object inside an already existing one. This is useful for building objects composed of multiple parts or that are tied together in some way. You can move objects around the hierarchy panel to control nesting. Drag it onto the Root object to place it back at the top of the hierarchy. 

As an example, I'm going to create a new box and color it red. To give it a custom color, we have to create a new material. You can do this in the assets panel, either by right clicking anywhere inside the panel, or clicking on the little plus icon:

How to add a new material

Once created, select your material and give it a descriptive name such as "RedMaterial" (you can see the name field in the inspector panel).

Now scroll down to the diffuse section and change the color:

How to change the diffuse color to red

Once that's done, go back and select the new box you created (either by clicking on it in the scene or in the hierarchy panel). Then set its material to the custom material we just created:

Where to set an object's material

And the box should now be red! Note that the material you created can be attached to as many objects as you like.

3. Adding Physics

To enable physics on an object, we have to add two components: Rigid Body and Collision.

Add the Rigid Body by clicking on "Add Component" in the inspector panel on your object:

How to add a rigid body component to an entity

Make sure its type is set to dynamic:

Where to set rigid body type

And then add a collision component in the same way. 

Now launch your game by clicking on the little play button at the top right of your scene. You should see your box falling down through the floor! To fix that, you'll have to add a rigid body and collision to the plane as well, making sure that its rigid body type is static (so that it doesn't fall as well). 

Challenge: Just for fun, try adding a sphere and tilting the plane slightly (on either the X or Z axis) to watch it roll off.

A Note on the Component System

It's worth talking briefly about the component system since it's a fundamental part of the architecture of PlayCanvas. Conceptually, the idea is to separate functionality from objects. The biggest advantage of that is the ability to compose complex behavior out of smaller modular components.

For example, if you look at the camera in your scene, you'll notice it's not a special object. It's just a generic entity with a camera component attached. You could attach a camera component to anything to turn it into a camera, or attach a rigid body and collision to the camera to turn it into a solid object (try it!). 

If you're curious, you can read more about the advantages and drawbacks of component systems on the Wikipedia page.

4. Adding a Model

Now that you're comfortable with the basics, we can start putting together our space game. We need at least a ship and an asteroid to work with. There are two ways to add models:

Grab a Model From PlayCanvas's Library 

PlayCanvas has a store (similar to the Unity Asset Store in some ways) where you can find and download assets directly into your project. To access it, just click on library in the assets panel.

The store is very new, so it's rather sparse, but it's a good place to find placeholders or assets to experiment with. 

I used the hovership asset from the store as my player ship.

Upload Your Own Model

PlayCanvas does support uploading FBX, OBJ, 3DS and COLLADA (DAE) files, but it prefers FBX. You can easily convert any 3D model into FBX by opening it with Blender and exporting it in your desired format.

You can find the asteroid model I used on Blendswap.com. Note that you might want to optimize your 3D models before using them in-game. For example, that asteroid model contains over 200,000 triangles! That might be fine for a special object in the game, but once I added over a hundred asteroids in the scene, things really slowed to a crawl. Blender's Decimate modifier is an easy way to optimize your models. I used that to cut down the asteroid model to around 7,000 triangles without losing too much detail. 

Once the models are in your project (you might need to refresh if you don't see it right away in your assets panel), you can add them to your scene. The easiest way to do that is to just drag the model into the scene:

How to add the hovership model into the scene
This is the actual model itself that you can add to the scene. The other assets around it are the texture/material, etc.

Just like before, add a rigid body and a collision component to the ship. One trick you could do with collision is to add the actual object's mesh as its own collision shape. This would result in a pixel-perfect collision mesh, but wouldn't be very efficient. For this demo, I opted for a simple box as my collision shape (and a sphere for the asteroids) and edited the half-extents to roughly match the shape of the model.

How to Offset the Collision Shape

One problem you might run into when adjusting collision shapes is the inability to offset it from the center. An easy way to get around this (aside from having to offset the model itself in something like Blender before exporting it) is to create a parent object that has the collision and rigid body, and a child object that has the model itself. So you could offset the model as a child relative to the parent that contains the collision. 

This is how I have it set up for the demo project, so you can take a look at that to see how that's done.

5. Changing Gravity & Scene Settings

Since our game is set in space, we need to override the default gravity. You can do this in the scene settings. At the very bottom left of your screen, click on the gear icon. This will open up the settings in the inspector panel. Find the physics section and change the gravity value:

Where to disable gravity

To make sure it worked, try launching again and see if the ship is just floating in space.

It's not quite space without a starry background, so while we're in the scene settings, let's add a skybox. You can grab one from the store or just find one online you like. Once you've got it, add it in the rendering section:

How to add a skybox

That should give the game a more nebulous feel. This would also be a good time to clean up your scene and delete any test objects we created before.

6. Scripting the Ship

This is where we finally get to write some code. PlayCanvas's script system is another thing that should be familiar if you've used Unity. You create scripts that can be attached to any object, and these scripts can have attributes that are configured on a per-object basis. Script attributes are very useful and accomplish two main things:

  1. Modularity. You can create a script that defines how an enemy moves with a speed attribute, and reuse that for different kinds of enemies with different speeds. 
  2. Collaboration. Script attributes can be tweaked directly in the editor without having to touch any code. This allows designers to go in and tweak values themselves without having to bother the programmer or dig through code. 

Create a Script

Go to the assets tab and create a new asset of type Script. This will be the code for the ship's behavior, so name it something like "Fly". Double click on it to open up the script editor.

The PlayCanvas user manual is a very useful reference when writing scripts, as is the API reference. The auto-completion also makes it really easy to figure out what methods are available. We'll start by just making our ship rotate. Type this out in the update function:
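
Something like this will do (the torque value is arbitrary; the createScript boilerplate is generated for you when the script asset is created):

    var Fly = pc.createScript('fly');

    Fly.prototype.update = function (dt) {
        // Apply a constant torque around the world y axis every frame.
        this.entity.rigidbody.applyTorque(0, 1, 0);
    };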

Inside of any script, this refers to the script component itself, whereas this.entity refers to the object the script is attached to. You can access any of the components attached to the entity this way. Here we're accessing the rigid body and applying an angular force to it. 

Make sure you save your script now.

Attach a Script

Before our script gets too involved, let's attach it to our ship to see if it works. To do that, just add a script component to your ship, and then add your "fly" script to that. Note that you can only add one script component per object, but you can add multiple scripts inside that component.

Once you launch, you should see your ship rotating!

Add an Attribute

As discussed above, script attributes make our code much more flexible. You can add one by typing this out at the top of your code, right after the first line where the script is created:
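
For a rotation speed, that might look like:

    var Fly = pc.createScript('fly');

    // Exposed in the editor once you hit the parse button.
    Fly.attributes.add('speed', { type: 'number', default: 10 });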

In this case, the name of my script is Fly. The only required option is type.

To see the attribute in the editor, go back to your script component, and click on the icon with two arrows on the fly script. This is the parse button which will look for any attributes and update the editor. Your component should now look like this:

Where to find script attributes

Finally, to use the value of the attribute in your script, just do this.[attribute_name]. So if we wanted this to be the speed of rotation, we could change our line of code to:
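
With a speed attribute in place, the rotation line becomes:

    this.entity.rigidbody.applyTorque(0, this.speed, 0);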

Note: Since there's no angular damping, the ship will keep rotating faster the longer the force is applied. If you remove the force, it will retain its inertia and keep rotating at the same speed. To change this, set the angular damping in the rigid body component to something above zero. 

Movement With Arrow Keys

Now we want to script it so that we can orient the ship with the arrow keys. A naive approach might look like this:
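
Here's a sketch of that naive version, applying torque around fixed world axes:

    Fly.prototype.update = function (dt) {
        var body = this.entity.rigidbody;

        // Rotate around fixed world axes (this is the flawed part).
        if (this.app.keyboard.isPressed(pc.KEY_LEFT)) {
            body.applyTorque(0, this.speed, 0);
        }
        if (this.app.keyboard.isPressed(pc.KEY_RIGHT)) {
            body.applyTorque(0, -this.speed, 0);
        }
        if (this.app.keyboard.isPressed(pc.KEY_UP)) {
            body.applyTorque(this.speed, 0, 0);
        }
        if (this.app.keyboard.isPressed(pc.KEY_DOWN)) {
            body.applyTorque(-this.speed, 0, 0);
        }
    };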

Can you tell what the problem is with this script? Try it out. Can you easily point the ship where you want? 

Give it some thought before you read on. How would you fix this?

The problem is that we're applying a force in global coordinates without taking into account where the ship is facing. If the ship is horizontal relative to the camera, and we rotate it on the y axis when we press left/right, then it rotates correctly. But if the ship is vertical, a rotation on the y axis is now a barrel roll.

The same problem would happen if we were trying to move the ship forward too. The direction that is "forward" depends on where the ship is facing and cannot be absolute. 

Now conveniently, each entity has three direction vectors we can use: up, right, and forward. To turn left/right, we rotate along the up axis, and up and down we rotate along the right axis. These are up and right relative to the entity. A fixed version would look like this:
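
A sketch of the fixed version, using the entity's own up and right vectors:

    Fly.prototype.update = function (dt) {
        var body = this.entity.rigidbody;

        // Rotate around the ship's local axes instead of the world's.
        if (this.app.keyboard.isPressed(pc.KEY_LEFT)) {
            body.applyTorque(this.entity.up.clone().scale(this.speed));
        }
        if (this.app.keyboard.isPressed(pc.KEY_RIGHT)) {
            body.applyTorque(this.entity.up.clone().scale(-this.speed));
        }
        if (this.app.keyboard.isPressed(pc.KEY_UP)) {
            body.applyTorque(this.entity.right.clone().scale(this.speed));
        }
        if (this.app.keyboard.isPressed(pc.KEY_DOWN)) {
            body.applyTorque(this.entity.right.clone().scale(-this.speed));
        }
    };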

Adding forward movement is the same idea:
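
For example, assuming a second thrust attribute added the same way as speed:

    // Thrust along the ship's forward vector while Z is held.
    if (this.app.keyboard.isPressed(pc.KEY_Z)) {
        this.entity.rigidbody.applyForce(this.entity.forward.clone().scale(this.thrust));
    }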

If the movement feels off or too slippery, spend some time tweaking the speeds and damping factors to get it to where it feels right.

7. Camera Controls 

It's hard to keep track of a moving ship with a static camera. The easiest way to make the camera follow an object is just by putting the camera as a child of that object.

Try dragging the camera in the hierarchy panel onto your ship. A convenient way to adjust the camera's view is by switching the scene view to that camera's view. Click on the button at the top of the screen where it says Perspective. This will give you a drop-down with all the different scene views you can select. Select Camera, which should be the furthest one down. This is a special view because whatever you see in the editor is what the camera will see in the game.

Once you've adjusted the camera's view, make sure to switch back to perspective or any other view to avoid accidentally messing up the camera angles.

Tip: If you have an object selected in the hierarchy, but you can't find it in your scene, press F. This will focus the view on that object and zoom in on it. You can see more keyboard shortcuts by clicking on the keyboard button on the far left of your screen.

At this point, you should have a camera following your ship (as rigid as it may be). (You won't be able to tell if you're moving if the camera is moving along and there are no other objects in the world, so try adding some.)

Camera Scripts

A camera that's just stuck on the player isn't very interesting. This post on the PlayCanvas blog explores various different types of camera motion. The simplest one we can implement is the look at camera.

To do that, first move the camera back onto the root object. 

Next, create a new script called lookAt.

The update function of that script should look like:
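
A one-line body is enough (a sketch):

    var LookAt = pc.createScript('lookAt');

    LookAt.prototype.update = function (dt) {
        // Point the camera at the target entity every frame.
        this.entity.lookAt(this.target.getPosition());
    };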

And it should have an attribute:
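
Namely, an entity reference for the target:

    LookAt.attributes.add('target', { type: 'entity' });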

Now attach that script to the camera object. Press the parse button and set the target to be the ship entity.

Try launching! If all went well, your camera will be staying in place but will just orient itself towards the ship.

You can implement the other types of cameras in the same way. The trailing follow camera mentioned in the blog post arguably looks the nicest, but I've found it to be too jittery when the framerate dips a little, so for the final demo, I ended up going with a camera that was attached as a child to the ship but scripted to move and rotate as the ship did.

Conclusion

Don't worry if some of this feels a little overwhelming. PlayCanvas is a complex engine with a lot of bells and whistles. There's a lot to explore, and keeping the manual close by is a good way to find your bearings. Another good way to learn is just by finding public projects and looking at how things are made.

Part 2 will start with creating a bullet system, and then adding some asteroids to shoot at, and we'll top it off by adding an FPS counter and in-game text. If you have any requests or suggestions, or if anything is unclear, please let me know in the comments!
