
How to Implement and Use a Message Queue in Your Game


A game is usually made of several different entities that interact with each other. Those interactions tend to be very dynamic and deeply connected with gameplay. This tutorial covers the concept and implementation of a message queue system that can unify the interactions of entities, making your code manageable and easy to maintain as it grows in complexity. 

Introduction

A bomb can interact with a character by exploding and causing damage, a medic kit can heal an entity, a key can open a door, and so on. Interactions in a game are endless, but how can we keep the game code manageable while still able to handle all those interactions? How do we ensure the code can change and continue to work when new and unexpected interactions arise?

Interactions in a game tend to grow in complexity very quickly.

As interactions are added (especially the unexpected ones), your code will look more and more cluttered. A naive implementation will quickly lead to you asking questions like:

"This is entity A, so I should call method damage() on it, right? Or is it damageByItem()? Maybe this damageByWeapon() method is the right one?"

Imagine that cluttering chaos spreading to all your game entities, because they all interact with each other in different and peculiar ways. Luckily, there is a better, simpler, and more manageable way of doing it.

Message Queue

Enter the message queue. The basic idea behind this concept is to implement all game interactions using a communication system that is still in use today: message exchange. People have communicated via messages (letters) for centuries because it is an effective and simple system.

In our real-world post services, the content of each message can differ, but the way they are physically sent and received remains the same. A sender puts the information in an envelope and addresses it to a destination. The destination can reply (or not) by following the very same mechanism, just changing the "from/to" fields on the envelope. 

Interactions made using a message queue system.

By applying that idea to your game, all interactions among entities can be seen as messages. If a game entity wants to interact with another (or a group of them), all it has to do is send a message. The destination will deal with or react to the message based on its content and on who the sender is.

In this approach, communication among game entities becomes unified. All entities can send and receive messages. No matter how complex or peculiar the interaction or message is, the communication channel always remains the same.

Throughout the next sections, I'll describe how you can actually implement this message queue approach in your game. 

Designing an Envelope (Message)

Let's start by designing the envelope, which is the most basic element in the message queue system. 

An envelope can be described as in the figure below:

Structure of a message.

The first two fields (sender and destination) are references to the entity that created and the entity that will receive this message, respectively. Using those fields, both the sender and the receiver can tell where the message is going to and where it came from.

The other two fields (type and data) work together to ensure the message is properly handled. The type field describes what this message is about; for instance, if the type is "damage", the receiver will handle this message as an order to decrease its health points; if the type is "pursue", the receiver will take it as an instruction to pursue something—and so on.

The data field is directly connected to the type field. Using the previous examples, if the message type is "damage", then the data field will contain a number—say, 10—which describes the amount of damage the receiver should apply to its health points. If the message type is "pursue", data will contain an object describing the target that must be pursued.

The data field can contain any information, which makes the envelope a versatile means of communication. Anything can be placed in that field: integers, floats, strings, and even other objects. The rule of thumb is that the receiver must know what is in the data field based on what is in the type field.

All that theory can be translated into a very simple class named Message. It contains four properties, one for each field:
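A minimal sketch of what that class might look like in JavaScript (the property names simply mirror the four fields described above):

    // A sketch of the Message class: one property per field.
    function Message(to, from, type, data) {
        this.to = to;     // reference to the entity receiving this message
        this.from = from; // reference to the entity that created this message
        this.type = type; // string describing the interaction, e.g. "damage"
        this.data = data; // payload; its meaning depends on the type field
    }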

As an example of this in use, if an entity A wants to send a "damage" message to entity B, all it has to do is instantiate an object of the class Message, set the to property to B, set the from property to itself (entity A), set type to "damage" and, finally, set data to some number (10, for instance):
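    // A sketch, reusing the Message constructor sketched above:
    // entity A sends 10 points of damage to entity B.
    var msg = new Message(entityB, entityA, "damage", 10);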

Now that we have a way to create messages, it's time to think about the class that will store and deliver them.

Implementing a Queue

The class responsible for storing and delivering the messages will be called MessageQueue. It will work as a post office: all messages are handed to this class, which ensures they will be dispatched to their destination.

For now, the MessageQueue class will have a very simple structure:
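A minimal sketch of that structure:

    // A sketch of the MessageQueue class.
    function MessageQueue() {
        this.messages = []; // messages waiting to be delivered
    }

    // Adds a message to the list of messages to be dispatched.
    MessageQueue.prototype.add = function (message) {
        this.messages.push(message);
    };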

The property messages is an array. It will store all the messages that are about to be delivered by the MessageQueue. The method add() receives an object of the class Message as a parameter, and adds that object to the list of messages. 

Here's how our previous example of entity A messaging entity B about damage would work using the MessageQueue class:
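Something along these lines, reusing the sketches above:

    var queue = new MessageQueue();

    // Entity A posts a damage message addressed to entity B.
    queue.add(new Message(entityB, entityA, "damage", 10));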

We now have a way to create and store messages in a queue. It's time to make them reach their destination.

Delivering Messages

In order to make the MessageQueue class actually dispatch the posted messages, first we need to define how entities will handle and receive messages. The easiest way is by adding a method named onMessage() to each entity able to receive messages:
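In sketch form (Entity stands in for any class able to receive messages):

    // Each receiving entity exposes an onMessage() method.
    Entity.prototype.onMessage = function (message) {
        // React here based on message.type and message.data.
    };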

The MessageQueue class will invoke the onMessage() method of each entity that must receive a message. The parameter passed to that method is the message being delivered by the queue system (and being received by the destination). 

The MessageQueue class will dispatch the messages in its queue all at once, in the dispatch() method:
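A sketch of that method, consistent with the description that follows:

    // Delivers every queued message to its destination, emptying the queue.
    MessageQueue.prototype.dispatch = function () {
        while (this.messages.length > 0) {
            var message = this.messages.shift(); // remove from the queue
            message.to.onMessage(message);       // invoke the receiver's handler
        }
    };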

This method iterates over all messages in the queue and, for each message, the to field is used to fetch a reference to the receiver. The onMessage() method of the receiver is then invoked, with the current message as a parameter, and the delivered message is then removed from the MessageQueue list. This process is repeated until all messages are dispatched.

Using a Message Queue

It's time to see all of the details of this implementation working together. Let's use our message queue system in a very simple demo composed of a few moving entities that interact with each other. For the sake of simplicity, we'll work with three entities: Healer, Runner and Hunter.

The Runner has a health bar and moves around randomly. The Healer will heal any Runner that passes close by; on the other hand, the Hunter will inflict damage on any nearby Runner. All interactions will be handled using the message queue system.

Adding the Message Queue

Let's start by creating the PlayState which contains a list of entities (healers, runners and hunters) and an instance of the MessageQueue class:
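A sketch of that state (how entities get added comes a little later):

    // The PlayState owns the entities and the message queue.
    function PlayState() {
        this.entities = [];                     // healers, runners and hunters
        this.messageQueue = new MessageQueue();
    }

    PlayState.prototype.update = function () {
        for (var i = 0; i < this.entities.length; i++) {
            this.entities[i].update();
        }
        this.messageQueue.dispatch(); // deliver all pending messages
    };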

In the game loop, represented by the update() method, the message queue's dispatch() method is invoked, so all messages are delivered at the end of each game frame.

Adding the Runners

The Runner class has the following structure:
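In sketch form (the movement code is omitted, and the starting health of 100 is an arbitrary choice):

    // A sketch of the Runner: it moves randomly and reacts to messages.
    function Runner(playState) {
        this.playState = playState;
        this.health = 100;
    }

    Runner.prototype.update = function () {
        // Move around randomly (omitted).
    };

    Runner.prototype.onMessage = function (message) {
        switch (message.type) {
            case "damage":
                this.health -= message.data; // amount chosen by the sender
                break;
            case "heal":
                this.health += message.data;
                break;
        }
    };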

The most important part is the onMessage() method, invoked by the message queue every time there is a new message for this instance. As previously explained, the field type in the message is used to decide what this communication is all about.

Based on the type of the message, the correct action is performed: if the message type is "damage", health points are decreased; if the message type is "heal", health points are increased. The number of health points to increase or decrease by is defined by the sender in the data field of the message.

In the PlayState, we add a few runners to the list of entities:
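Something along these lines, inside the PlayState:

    // Add four runners to the play state's list of entities.
    for (var i = 0; i < 4; i++) {
        this.entities.push(new Runner(this));
    }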

The result is four runners randomly moving around:

Adding the Hunter

The Hunter class has the following structure:

The hunters will also move around, but they will cause damage to all runners that are close. This behavior is implemented in the update() method, where all entities of the game are inspected and runners are messaged about damage.

The damage message is created as follows:
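A sketch of the relevant lines inside the hunter's update() method:

    // For each runner `entity` within attack reach:
    var msg = new Message(entity, this, "damage", 2);
    this.getMessageQueue().add(msg);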

The message contains the information about the destination (entity, in this case, which is the entity being analyzed in the current iteration), the sender (this, which represents the hunter that is performing the attack), the type of the message ("damage") and the amount of damage (2, in this case, assigned to the data field of the message).

The message is then posted to the destination via the command this.getMessageQueue().add(msg), which adds the newly created message to the message queue.

Finally, we add the Hunter to the list of entities in the PlayState:

The result is a few runners moving around, receiving messages from the hunter as they get close to each other:

I added the flying envelopes as a visual aid to help show what is going on.

Adding the Healer

The Healer class has the following structure:

The code and structure are very similar to the Hunter class, except for a few differences. Similarly to the hunter's implementation, the healer's update() method iterates over the list of entities in the game, messaging any entity within its healing reach:

The message also has a destination (entity), a sender (this, which is the healer performing the action), a message type ("heal") and the number of healing points (2, assigned in the data field of the message).

We add the Healer to the list of entities in the PlayState the same way we did with the Hunter and the result is a scene with runners, a hunter, and a healer:

And that's it! We have three different entities interacting with each other by exchanging messages.

Discussion About Flexibility

This message queue system is a versatile way to manage interactions in a game. The interactions are performed via a communication channel that is unified and has a single interface that is easy to use and implement.

As your game grows in complexity, new interactions might be needed. Some of them might be completely unexpected, so you must adapt your code to deal with them. If you are using a message queue system, this is just a matter of adding a new message in one place and handling it in another.

For example, imagine you want to make the Hunter interact with the Healer; you just have to make the Hunter send a message with the new interaction—for instance, "flee"—and ensure that the Healer can handle it in the onMessage method:
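A sketch of that handler (fleeFrom() is a hypothetical helper standing in for whatever escape behaviour you implement):

    Healer.prototype.onMessage = function (message) {
        switch (message.type) {
            case "flee":
                // React to the hunter by moving away from the sender.
                this.fleeFrom(message.from);
                break;
        }
    };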

Why Not Just Send Messages Directly?

Although exchanging messages among entities can be useful, you might be wondering why the MessageQueue is needed at all. Can't you just invoke the receiver's onMessage() method yourself instead of relying on the MessageQueue, as in the code below?
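    // A sketch: messaging an entity directly, bypassing the queue.
    entity.onMessage(new Message(entity, this, "damage", 2));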

You could definitely implement a message system like that, but the use of a MessageQueue has a few advantages.

For instance, by centralizing the dispatching of messages, you can implement some cool features like delayed messages, the ability to message a group of entities, and visual debug information (such as the flying envelopes used in this tutorial).

There is room for creativity in the MessageQueue class; it's up to you and your game's requirements.

Conclusion

Handling interactions between game entities using a message queue system is a way to keep your code organized and ready for the future. New interactions, even your most complex ideas, can be added easily and quickly, as long as they are encapsulated as messages.

As discussed in the tutorial, you can skip the central message queue and just send messages directly to the entities. You can also centralize the communication using a dispatcher (the MessageQueue class in our case) to make room for new features in the future, such as delayed messages.

I hope you find this approach useful and add it to your game developer utility belt. The method might seem like overkill for small projects, but it will certainly save you some headaches in the long run for bigger games.


A* Pathfinding for 2D Grid-Based Platformers: Ledge Grabbing


In this part of our series on adapting the A* pathfinding algorithm to platformers, we'll introduce a new mechanic to the character: ledge grabbing. We'll also make appropriate changes to both the pathfinding algorithm and the bot AI, so they can make use of the improved mobility.

Demo

You can play the Unity demo, or the WebGL version (16MB), to see the final result in action. Use WASD to move the character, left-click on a spot to find a path you can follow to get there, right-click a cell to toggle the ground at that point, middle-click to place a one-way platform, and click-and-drag the sliders to change their values.

Ledge Grabbing Mechanics

Controls Overview

Let's first take a look at how the ledge grabbing mechanic works in the demo to get some insight into how we should change our pathfinding algorithm to take this new mechanic into account.

The controls for ledge grabbing are quite simple: if the character is right next to a ledge while falling, and the player presses the left or right directional key to move them towards that ledge, then when the character is at the right position, it will grab the ledge.

Once the character is grabbing a ledge, the player has two options: they can either jump up or drop down. Jumping works as normal; the player presses the jump key, and the jump's force is identical to the force applied when jumping from the ground. Dropping down is done by pressing the down button (S), or the directional key that points away from the ledge.

Implementing the Controls

Let's go over how the ledge grab controls work in the code. The first thing here to do is to detect whether the ledge is to the left or to the right of the character:

We can use that information to determine whether the character is supposed to drop off the ledge. As you can see, to drop down, the player needs to either:

  • press the down button,
  • press the left button when we're grabbing a ledge on the right, or
  • press the right button when we're grabbing a ledge on the left.

There's a small caveat here. Consider a situation where we're holding the down button and the right button while the character is holding onto a ledge to the right. It'll result in the following situation:

The problem here is that the character grabs the ledge immediately after it lets go of it. 

A simple solution to this is to lock movement towards the ledge for a couple of frames after dropping off it. That's what the following snippet does:
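A sketch of the idea in JavaScript-style pseudocode (the demo itself is written in C# for Unity, and the frame count and field names here are arbitrary choices):

    // After dropping off, ignore input towards the ledge for a few frames.
    if (ledgeOnRight) {
        this.framesRightLocked = 3; // decremented once per frame elsewhere
    } else {
        this.framesLeftLocked = 3;
    }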

After this, we change the state of the character to Jump, which will handle the jump physics:

Finally, if the character didn't drop from the ledge, we check whether the jump key has been pressed; if so, we set the jump's vertical speed and change the state:

Detecting a Ledge Grab Point

Let's look at how we determine whether a ledge can be grabbed. We use a few hotspots around the edge of the character:

The yellow contour represents the character's bounds. The red segments represent the wall sensors; these are used to handle the character physics. The blue segments represent where our character can grab a ledge.

To determine whether the character can grab a ledge, our code constantly checks the side it is moving towards. It's looking for an empty tile at the top of the blue segment, and then a solid tile below it which the character can grab onto. 

Note: ledge grabbing is locked off if the character is jumping up. This can be easily noticed in the demo and in the animation in the Controls Overview section.

The main problem with this method is that if our character falls at a high speed, it's easy to miss a window in which it can grab a ledge. We can solve this by looking up all the tiles starting from the previous frame's position to the current frame's in search of any empty tile above a solid one. If one such tile is found, then it can be grabbed.
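In sketch form (JavaScript-style pseudocode; map.isObstacle() and grabLedge() are hypothetical helpers, and y is assumed to decrease as the character falls):

    // Sweep every tile passed since the last frame, looking for an empty
    // tile with a solid tile directly below it on the side we're moving to.
    for (var y = prevTileY; y >= currentTileY; y--) {
        if (!map.isObstacle(sideTileX, y) && map.isObstacle(sideTileX, y - 1)) {
            grabLedge(sideTileX, y); // found a grabbable ledge
            break;
        }
    }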

Now that we've cleared up how the ledge grabbing mechanic works, let's see how to incorporate it into our pathfinding algorithm.

Pathfinder Changes

Make It Possible to Turn Ledge Grabbing On and Off

First of all, let's add a new parameter to our FindPath function that indicates whether the pathfinder should consider grabbing ledges. We'll name it useLedges:

Detect Ledge Grab Nodes

Conditions

Now we need to modify the function to detect whether a particular node can be used for ledge grabbing. We can do that after checking whether the node is an "on ground" node or an "at ceiling" node, because in either case it cannot be used for ledge grabbing.

All right: now we need to figure out when a node should be considered a ledge grabbing node. For clarity, here's a diagram that shows some example ledge grabbing positions:

...and here's how these might look in-game:

The top character sprites are stretched to show how this looks with characters of different sizes.

The red cells represent the checked nodes; together with the green cells, they represent the character in our algorithm. The top two situations show a 2x2 character grabbing ledges on the left and right respectively. The bottom two show the same thing, but the character's size here is 1x3 instead of 2x2.

As you can see, it should be fairly easy to detect these cases in the algorithm. The conditions for the ledge grab node will be as follows:

  1. There is a solid tile next to the top-right/top-left character tile.
  2. There is an empty tile above the found solid tile.
  3. There is no solid tile below the character (no need to grab ledges if on the ground).

Note that the third condition is already taken care of, since we check for the ledge grab node only if the character is not on ground.

First of all, let's check whether we actually want to detect ledge grabs:

Now let's check whether there is a tile to the right of the top-right character node:

And then, if above that tile there is an empty space:

Now we need to do the same thing for the left side:
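Combining the right- and left-hand checks into one sketch (JavaScript-style pseudocode; the real pathfinder is written in C#, and map.isObstacle(), nodeX, topY and characterWidth are stand-in names; condition 3 is already handled, as noted above):

    // Conditions 1 and 2: a solid tile beside the top character tile,
    // with an empty tile directly above that solid tile.
    var ledgeOnRight = map.isObstacle(nodeX + characterWidth, topY)
                    && !map.isObstacle(nodeX + characterWidth, topY + 1);

    var ledgeOnLeft  = map.isObstacle(nodeX - 1, topY)
                    && !map.isObstacle(nodeX - 1, topY + 1);

    var isLedgeGrabNode = useLedges && (ledgeOnRight || ledgeOnLeft);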

There's one more thing we can optionally do, which is disable finding the ledge grab nodes if the falling speed is too high, so the path doesn't return some extreme ledge grabbing positions which would be hard to follow by the bot:

After all this, we can be sure that the found node is a ledge grab node.

Adding a Special Node

What do we when we find a ledge grab node? We need to set its jump value. 

Remember, the jump value is the number which represents which phase of the jump the character would be in if it reached this cell. If you need a recap on how the algorithm works, take another look at the theory article.

It seems that all we'd need to do is to set the jump value of the node to 0, because from the ledge grabbing point the character can effectively reset a jump, as if it were on the ground—but there are a couple points to consider here. 

  • First, it would be nice if we could tell at a glance whether the node is a ledge grab node or not: this will be immensely helpful when creating a bot behaviour and also when filtering the nodes. 
  • Second, usually jumping from the ground can be executed from whichever point would be most suitable on a particular tile, but when jumping from a ledge grab, the character is stuck to a particular position and unable to do anything but start falling or jump upwards.

Considering those caveats, we'll add a special jump value for the ledge grab nodes. It doesn't really matter what this value is, but it's a good idea to make it negative, since that will lower our chances of misinterpreting the node.

Now let's assign this value when we detect a ledge grab node:

Making cLedgeGrabJumpValue negative will have an effect on the node cost calculation—it will make the algorithm prefer to use ledges rather than skip them. There are two things to note here:

  1. Ledge grab points offer a greater possibility of movement than any other in-air nodes, because the character can jump again by using them; from this point of view, it is a good thing that these nodes will be cheaper than others. 
  2. Grabbing too many ledges often leads to unnatural movement, because usually players don't use ledge grabs unless they are necessary to reach somewhere.

In the animation above, you can see the difference between moving up when ledges are preferred and when they are not.

For now we'll leave the cost calculation as it is, but it is fairly easy to modify it to make ledge nodes more expensive.

Modify the Jump Value When Jumping or Dropping From a Ledge

Now we need to adjust the jump values for the nodes that start from the ledge grab point. We need to do this because jumping from a ledge grab position is quite different from jumping from the ground. There's very little freedom when jumping from a ledge, because the character is fixed to a particular point.

When on the ground, the character can move freely left or right and jump at the most suitable moment.

First, let's set the case when the character drops down from a ledge grab:

As you can see, the new jump length is a bit bigger if the character dropped from a ledge: this way we compensate for the lack of manoeuvrability while grabbing a ledge, which will result in a higher vertical speed before the player can reach other nodes.

Next is the case where the character drops to one side from grabbing a ledge:

All we need to do is to set the jump value to the falling value.

Ignore More Nodes

We need to add a couple of additional conditions for when we need to ignore nodes. 

First of all, when we're jumping from a ledge grab position, we need to go up, not to the side. This works similarly to simply jumping from the ground. The vertical speed is much higher than the possible horizontal speed at this point, and we need to model this fact in the algorithm:

If we want to allow dropping from the ledge to the opposite side like this:

Then we need to edit the condition that doesn't allow horizontal movement when the jump value is odd. That's because, currently, our special ledge grab value is equal to -9, so it's only appropriate to exclude all negative numbers from this condition.

Update the Node Filter

Finally, let's move on to node filtering. All we need to do here is to add a condition for ledge grabbing nodes, so that we don't filter them out. We simply need to check if the node's jump value is equal to cLedgeGrabJumpValue:
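In sketch form (JavaScript-style pseudocode; keepNode is a stand-in for however your filter marks nodes to keep):

    // Never filter out ledge grab nodes.
    if (node.jumpValue === cLedgeGrabJumpValue) {
        keepNode = true;
    }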

The whole filtering looks like this now:

That's it—these are all the changes that we needed to make to update the pathfinding algorithm.

Bot Changes

Now that our path shows the spots at which a character can grab a ledge, let's modify the bot's behaviour so that it makes use of this data.

Stop Recalculating reachedX and reachedY

First of all, to make things clearer in the bot, let's update the GetContext() function. The current problem with it is that the reachedX and reachedY values are constantly recalculated, which removes some information about the context. These values are used to see whether the bot has already reached the target node on its x- and y-axes, respectively. (If you need a refresher on how this works, check out my tutorial about coding the bot.)

Let's simply change this so that if a character reaches the node on the x- or y-axis, then these values stay true as long as we don't move on to the next node.

To make this possible, we need to declare reachedX and reachedY as class members:

This means we no longer need to pass them to the GetContext() function:

With these changes, we also need to reset the variables manually whenever we start moving towards the next node. The first occurrence is when we've just found the path and are going to move towards the first node:

The second is when we've reached the current target node and want to move towards the next:

To stop recalculating the variables, we need to replace the following lines:

...with these, which will detect whether we've reached a node on an axis only if we haven't already reached it:

Of course, we also need to replace every other occurrence of reachedX and reachedY with the newly declared versions mReachedNodeX and mReachedNodeY.

See If the Character Needs to Grab a Ledge

Let's declare a couple of variables which we will use to determine whether the bot needs to grab a ledge, and, if so, which one:

mGrabsLedges is a flag that we pass to the algorithm to let it know whether it should find a path including the ledge grabs. mMustGrabLeftLedge and mMustGrabRightLedge will be used to determine whether the next node is a grab ledge, and whether the bot should grab the ledge to the left or to the right.

What we want to do now is create a function that, given a node, detects whether the character at that node will be able to grab a ledge.

We'll need two functions for this: one will check if the character can grab a ledge on the left, and the other will check whether the character can grab a ledge on the right. These functions will work the same way as our pathfinding code for detecting ledges:

As you can see, we check whether there's a solid tile next to our character with an empty tile above it.

Now let's go to the GetContext() function, and assign the appropriate values to mMustGrabRightLedge and mMustGrabLeftLedge. We need to set them to true if the character is supposed to grab ledges at all (that is, if mGrabsLedges is true) and if there is a ledge to grab onto.

Note that we also don't want to grab ledges if the destination node is on the ground.

Update the Jump Values

As you may notice, the character's position when grabbing a ledge is slightly different to its position when standing just below it:

The ledge grabbing position is a bit higher than the standing position, even though these characters occupy the same node. This means that grabbing a ledge will require a slightly higher jump than just jumping on a platform, and we need to take this into account.

Let's look at the function which determines how long the jump button should be pressed:

First of all, we'll change the initial condition. The bot should be able to jump, not just from the ground, but also when it is grabbing a ledge:

Now we need to add a few more frames if it's jumping to grab a ledge. First of all, we need to know if it can actually do that, so let's create a function which will tell us whether the character can grab a ledge either to the left or right:

Now let's add a couple of frames to the jump when the bot needs to grab a ledge:

As you can see, we prolong the jump by 4 frames, which should do the job fine in our case.

But there's one more thing we need to change here, which doesn't really have much to do with ledge grabbing. It fixes a case where the next node is at the same height as the current one but is not on the ground, and the node after that is higher up, meaning a jump is necessary:

Implement the Movement Logic for Grabbing Onto and Dropping Off Ledges

We'll want to split the ledge grabbing logic into two phases: one for when the bot is still not near enough to the ledge to start grabbing, so we simply want to continue movement as usual, and one for when the bot can safely start moving towards the ledge to grab it.

Let's start by declaring a Boolean which will indicate whether we have already moved to the second phase. We'll name it mCanGrabLedge:

Now we need to define conditions that will let the character move to the second phase. These are pretty simple:

  • The bot has already reached the goal node on the X axis.
  • The bot needs to grab either the left or right ledge.
  • If the bot moves towards the ledge, it will bump into a wall instead of going further.

All right, the first two conditions are very simple to check now because we've done all the work necessary already:

Now, the third condition we can separate into two parts. The first one will take care of the situation where the character moves towards the ledge from the bottom, and the second from the top. The conditions we want to set for the first case are:

  • The bot's current position is lower than the target position (it's approaching from the bottom).
  • The top of the character's bounding box is higher than the ledge tile height.

If the bot is approaching from the top, the conditions are as follows:

  • The bot's current position is higher than the target position (it's approaching from the top).
  • The difference between the character's position and the target position is less than the character's height.

Now let's combine all these and set the flag which indicates that we can safely move towards a ledge:

There's one more thing we want to do here, and that is to immediately start moving towards the ledge:

OK, now before this huge condition let's create a smaller one. This will basically be a simplified version for the movement when the bot is about to grab a ledge:

That's the main logic behind the ledge grabbing, but there's still a couple of things to do. 

We need to edit the condition in which we check whether it is OK to move to the next node. Currently, the condition looks like this:

Now we need to also move to the next node if the bot was ready to grab the ledge and then actually did so:

Handle Jumping and Dropping From the Ledge

Once the bot is on the ledge, it should be able to jump as normal, so let's add an additional condition to the jumping routine:

The next thing the bot needs to be able to do is gracefully drop off the ledge. With the current implementation it is very simple: if we're grabbing a ledge and we are not jumping, then obviously we need to drop from it!

That's it! Now the character is able to very smoothly leave the ledge grab position, no matter whether it needs to jump up or simply drop down.

Stop grabbing ledges all the time!

At the moment, the bot grabs every ledge it can, regardless of whether it makes sense to do so. 

One solution to this is to assign a large heuristic cost to the ledge grabs, so the algorithm avoids using them when it doesn't have to—but this would require our bot to have a bit more information about the nodes. Since all we pass to the bot is a list of points, we don't know whether the algorithm meant a particular node to be ledge grabbed or not; the bot assumes that if a ledge can be grabbed, then it surely should be!

We can implement a quick workaround for this behaviour: we will call the pathfinding function twice. The first time we'll call it with the useLedges parameter set to false, and the second time with it set to true.

Let's assign the first path as the path found without using any ledge grabs:

Now, if this path is not null, we need to copy the results to our path1 list, because when we call the pathfinder the second time, the result in the path will get overwritten.

Now let's call the pathfinder again, this time enabling the ledge grabs:

We'll assume that our final path is going to be the path with ledge grabs:

And right after this, let's verify our assumption. If we've found a path without ledge grabs, and that path is not much longer than the path using them, then we'll make the bot disable the ledge grabs.

Note that we measure the "length" of the path in node count, which can be quite inaccurate because of the node filtering process. It'd be much more accurate to calculate, for example, the Manhattan length of the path (|x1 - x2| + |y1 - y2| of each node), but since this whole method is more of a hack than a real solution, it's OK to use this kind of heuristic here.

The rest of the function follows as it was; the path is copied to the bot instance's buffer and it starts following it.

Summary

That's all for the tutorial! As you can see, it's not so difficult to extend the algorithm to add additional movement possibilities, but doing so definitely increases complexity and adds a few troublesome issues. 

Again, the lack of accuracy can bite us here more than once, especially when it comes to the falling movement—this is the area that needs the most improvement, but I've tried to make the algorithm match the physics as well as I can with the current set of values. 

All in all, the bot can traverse a level in a manner that would rival a lot of players, and I'm very pleased with that result!

The Best Envato Tuts+ Game Development Posts From 2015


As 2015 draws to a close, let's take a look back at the great game development how-to tutorials and articles our wonderful instructor team wrote this year! There were high-level posts on game design and level design, beginners' guides to shaders and more advanced guides to pathfinding, inspirational posts to dip in to the next time you're coming up with ideas for a new game, and plenty more... 

Level Design

I'm really excited to have Mike Stout and Patrick Holleman writing for us. Mike's a video game designer whose major credits include games in the Ratchet & Clank, Resistance, and Skylanders series; Patrick runs The Game Design Forum and has spent hundreds of hours reverse-engineering the design of hit games to see how they tick. Each is using their considerable knowledge and experience to teach valuable level design concepts.

Shaders

Shaders are useful in both 2D and 3D games, and in certain projects you'd have a hard time avoiding coding them at all. But they can often feel like a dark art, since coding them effectively requires a different mindset. Omar's excellent three-part Beginner's Guide demystifies them, making it easy to grasp the basics right away, and David and Daniel's tutorials break down more advanced uses of them in action.

Pathfinding for 2D Platformers

Most game developers who have needed to calculate a path from A to B will be familiar with the A* pathfinding algorithm. (If you aren't, check out Patrick Lester's great introduction.) In this series, Daniel Branicki builds on A* to create a pathfinder that works for grid-based 2D platformers, and then adds a bunch of extra features.   

Inspiration for Your Next Game's Theme and Genre

Next time you're scratching your head trying to come up with an idea for a new game (whether it's a big project or a weekend game jam), get inspiration from one of Matthias's round-ups of underused game genres and themes. 

Creating a Game Using Steering Behaviours

Fernando has been writing tutorials about steering behaviours since 2012. Last year, he showed how we can use them to power the AI for a hockey game; this year, he broke down the game mechanics themselves, and gave us a revealing post-mortem about the whole project.

Learning a New Language or Platform

We have, admittedly, slowed down our efforts to collate the best resources for learning the many game development engines out there—perhaps this deserves renewed focus in 2016—but returning instructors Lee and Aditya did cover JavaFX and the very popular Pygame.

Game Design

While we've focused more on longer series of posts when it comes to coding games, for our game design posts we've mainly gone for one-off articles.

Miscellaneous Tips

These tutorials don't fit in any other category, but they're full of valuable advice nonetheless!

See You In 2016!

In 2016 you can count on more level design tutorials from Mike and Patrick, more maths tutorials from Fernando, more shader tutorials from Omar, and more platformer tutorials from Daniel—and that's just what our regulars have lined up already! What would you like to see on the site? Have a great new year!


How to Write a Smoke Shader


There's always been a certain air of mystery around smoke. It's aesthetically pleasing to watch and elusive to model. Like many physical phenomena, it's a chaotic system, which makes it very difficult to predict. The state of the simulation depends heavily on the interactions between its individual particles. 

This is exactly what makes it such a great problem to tackle with the GPU: it can be broken down to the behavior of a single particle, repeated simultaneously millions of times in different locations. 

In this tutorial, I'll walk you through writing a smoke shader from scratch, and teach you some useful shader techniques so you can expand your arsenal and develop your own effects!

What You'll Learn

This is the end result we will be working towards:

Click to generate more smoke. You can fork and edit this on CodePen.

We'll be implementing the algorithm presented in Jos Stam's paper on Real-Time Fluid Dynamics in Games. You'll also learn how to render to a texture, also known as using frame buffers, which is a very useful technique in shader programming for achieving many effects.

Before You Get Started

The examples and specific implementation details in this tutorial use JavaScript and ThreeJS, but you should be able to follow along on any platform that supports shaders. (If you're not familiar with the basics of shader programming, make sure you go through at least the first two tutorials in this series.)

All the code examples are hosted on CodePen, but you can also find them in the GitHub repository associated with this article (which might be more readable). 

Theory and Background

The algorithm in Jos Stam's paper favors speed and visual quality over physical accuracy, which is exactly what we want in a game setting. 

This paper can look a lot more complicated than it really is, especially if you're not well versed in differential equations. However, the whole gist of this technique is summed up in this figure:

Figure taken from Jos Stam's paper, cited above.

This is all we need to implement to get a realistic-looking smoke effect: the value in each cell dissipates to all its neighboring cells on each iteration. If it's not immediately clear how this works, or if you just want to see how this would look, you can tinker with this interactive demo:

Smoke shader algorithm interactive demo
View the interactive demo on CodePen.

Clicking on any cell sets its value to 100. You can see how each cell slowly loses its value to its neighbors over time. It might be easiest to see by clicking Next to see the individual frames. Switch the Display Mode to see how it would look if we made a color value correspond to these numbers.

The above demo is all run on the CPU with a loop going through every cell. Here's what that loop looks like:
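A sketch of its core (grid holds the current values, next holds the values for the following frame, and f is the diffusion factor):

    // A sketch of the CPU diffusion loop.
    for (var x = 1; x < gridWidth - 1; x++) {
        for (var y = 1; y < gridHeight - 1; y++) {
            next[x][y] = grid[x][y] + f * (
                grid[x - 1][y] + grid[x + 1][y] +
                grid[x][y - 1] + grid[x][y + 1] -
                4 * grid[x][y]
            );
        }
    }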

This snippet is really the core of the algorithm. Every cell gains a fraction f (less than 1) of the sum of its four neighboring cells minus four times its own value. We multiply the current cell's value by 4 to make sure it diffuses from higher values to lower ones.

To clarify this point, consider this scenario: 

Grid of values to diffuse

Take the cell in the middle (at position [1,1] in the grid) and apply the diffusion equation above. Let's assume f is 0.1:
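Plugging in the numbers, with every cell holding the same value (call it v), the change works out to zero:

    change = 0.1 * (v + v + v + v - 4 * v) = 0.1 * 0 = 0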

No diffusion happens because all the cells have equal values! 

If we consider the cell at the top left instead (assume the cells outside of the pictured grid are all 0):

So we get a net increase of 20! Let's consider a final case. After one timestep (applying this formula to all cells), our grid will look like this:

Grid of diffused values

Let's look at the diffuse on the cell in the middle again:

We get a net decrease of 12! So it always flows from the higher values to the lower ones.

Now, if we wanted this to look more realistic, we could decrease the size of the cells (which you can do in the demo), but at some point, things are going to get really slow, as we're forced to sequentially run through every cell. Our goal is to be able to write this in a shader, where we can use the power of the GPU to process all the cells (as pixels) simultaneously in parallel.

So, to summarize, our general technique is to have each pixel give away some of its color value, every frame, to its neighboring pixels. Sounds pretty simple, doesn't it? Let's implement that and see what we get! 

Implementation

We'll start with a basic shader that draws over the whole screen. To make sure it's working, try setting the screen to a solid black (or any arbitrary color). Here's how the setup I'm using looks in JavaScript.
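A minimal ThreeJS setup along these lines should work (a sketch: it assumes ThreeJS is loaded and that the GLSL lives in a script tag with the id fragShader, both my own choices):

    // Full-screen quad setup in ThreeJS.
    var scene = new THREE.Scene();
    var camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    var material = new THREE.ShaderMaterial({
        uniforms: {
            // The screen's dimensions, passed to the shader as `res`.
            res: { value: new THREE.Vector2(window.innerWidth, window.innerHeight) }
        },
        fragmentShader: document.getElementById('fragShader').textContent
    });
    scene.add(new THREE.Mesh(new THREE.PlaneBufferGeometry(2, 2), material));

    function render() {
        requestAnimationFrame(render);
        renderer.render(scene, camera);
    }
    render();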

You can fork and edit this on CodePen. Click the buttons at the top to see the HTML, CSS, and JS.

Our shader is simply:
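In GLSL, something like this (a sketch consistent with the description below):

    // Paint every pixel solid black.
    uniform vec2 res; // the screen's dimensions, passed in from JavaScript

    void main() {
        vec2 pixel = gl_FragCoord.xy / res.xy; // current pixel in 0..1 range
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    }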

res and pixel are there to give us the coordinate of the current pixel. We're passing the screen's dimensions in res as a uniform variable. (We're not using them right now, but we will soon.)

Step 1: Moving Values Across Pixels

Here's what we want to implement again:

Our general technique is to have each pixel give away some of its color value every frame to its neighboring pixels.

Stated in its current form, this is impossible to do with a shader. Can you see why? Remember that all a shader can do is return a color value for the current pixel it's processing—so we need to restate this in a way that only affects the current pixel. We can say:

Each pixel should gain some color from its neighbors, while losing some of its own.

Now this is something we can implement. If you actually try to do this, however, you might run into a fundamental problem... 

Consider a much simpler case. Let's say you just want to make a shader that turns an image red slowly over time. You might write a shader like this:
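For example (a sketch; texture is the input image, passed in as a uniform):

    uniform sampler2D texture;
    uniform vec2 res;

    void main() {
        vec2 pixel = gl_FragCoord.xy / res.xy;
        gl_FragColor = texture2D(texture, pixel); // sample the input image
        gl_FragColor.r += 0.01;                   // nudge the red channel up
    }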

And expect that, every frame, the red component of each pixel would increase by 0.01. Instead, all you'll get is a static image where all the pixels are just a tiny bit redder than they started. The red component of every pixel will only ever increase once, despite the fact that the shader runs every frame.

Can you see why?

The Problem

The problem is that any operation we do in our shader is sent to the screen and then lost forever. Our process right now looks like this:

Shader process

We pass our uniform variables and texture to the shader, it makes the pixels slightly redder, draws that to the screen, and then starts over from scratch again. Anything we draw within the shader gets cleared by the next time we draw. 

What we want is something like this:

Repeatedly applying the shader to the texture

Instead of drawing to the screen directly, we can draw to some texture instead, and then draw that texture onto the screen. You get the same image on screen as you would have otherwise, except now you can pass your output back as input. So you can have shaders that build up or propagate something, rather than get cleared every time. That is what I call the "frame buffer trick". 

The Frame Buffer Trick

The general technique is the same on any platform. Searching for "render to texture" in whatever language or tools you're using should bring up the necessary implementation details. You can also look up how to use frame buffer objects, which is just another name for being able to render to some buffer instead of rendering to the screen. 

In ThreeJS, the equivalent of this is the WebGLRenderTarget. This is what we'll use as our intermediary texture to render to. There's one small caveat left: you can't read from and render to the same texture simultaneously. The easiest way to get around that is to simply use two textures. 

Let A and B be two textures you've created. Your method would then be:

  1. Pass A through your shader, render onto B.
  2. Render B to the screen.
  3. Pass B through shader, render onto A.
  4. Render A to your screen.
  5. Repeat 1.

Or, a more concise way to code this would be:

  1. Pass A through your shader, render onto B.
  2. Render B to the screen.
  3. Swap A and B (so the variable A now holds the texture that was in B and vice versa).
  4. Repeat 1.

That's all it takes. Here's an implementation of that in ThreeJS:
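A sketch of the render loop (bufferA and bufferB are THREE.WebGLRenderTarget objects created beforehand, screenMaterial is the material on the quad drawn to the screen, and the exact render-to-target call varies slightly between ThreeJS versions):

    function render() {
        requestAnimationFrame(render);

        // 1. Pass A through the shader, render onto B.
        material.uniforms.texture.value = bufferA.texture;
        renderer.render(bufferScene, camera, bufferB);

        // 2. Render B to the screen.
        screenMaterial.map = bufferB.texture;
        renderer.render(scene, camera);

        // 3. Swap A and B.
        var temp = bufferA;
        bufferA = bufferB;
        bufferB = temp;
    }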

You can fork and edit this on CodePen. The new shader code is in the HTML tab.

This is still a black screen, which is what we started with. Our shader isn't too different either:

Except now if you add this line (try it!):
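    // Inside main(), after sampling the texture:
    gl_FragColor.r += 0.01;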

You'll see the screen slowly turning red, as opposed to just increasing by 0.01 once. This is a pretty significant step, so you should take a moment to play around and compare it to how our initial setup worked. 

Challenge: What happens if you put gl_FragColor.r += pixel.x; when using a frame buffer example, compared to when using the setup example? Take a moment to think about why the results are different and why they make sense.

Step 2: Getting a Smoke Source

Before we can make anything move, we need a way to create smoke in the first place. The easiest way is to manually set some arbitrary area to white in your shader. 

If we want to test whether our frame buffer is working correctly, we can try to add to the color value instead of just setting it. You should see the circle slowly getting whiter and whiter.

Another way is to replace that fixed point with the position of the mouse. You can pass a third value for whether the mouse is pressed or not; that way, you can click to create smoke. Here's an implementation of that.

Click to add "smoke". You can fork and edit this on CodePen.

Here's what our shader looks like now:
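A sketch (mouse is assumed to be a vec3 uniform: xy holds the position in pixels, and z is 1.0 while the button is pressed):

    uniform sampler2D texture;
    uniform vec2 res;
    uniform vec3 mouse;

    void main() {
        vec2 pixel = gl_FragCoord.xy / res.xy;
        gl_FragColor = texture2D(texture, pixel);

        // Add smoke in a small circle around the mouse while it's pressed
        // (the radius of 10 pixels is an arbitrary choice).
        if (mouse.z > 0.0 && distance(mouse.xy, gl_FragCoord.xy) < 10.0) {
            gl_FragColor.rgb += 0.01;
        }
    }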

Challenge: Remember that branching (conditionals) is usually expensive in shaders. Can you rewrite this without using an if statement? (The solution is in the CodePen.)

If this doesn't make sense, there's a more detailed explanation of using the mouse in a shader in the previous lighting tutorial.

Step 3: Diffuse the Smoke

Now this is the easy part—and the most rewarding! We've got all the pieces now; we just need to finally tell the shader: each pixel should gain some color from its neighbors, while losing some of its own.

Which looks something like this:
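A sketch of the diffuse step inside main() (xPixel and yPixel are assumed uniforms holding the size of one pixel in texture coordinates):

    vec4 up    = texture2D(texture, vec2(pixel.x, pixel.y + yPixel));
    vec4 down  = texture2D(texture, vec2(pixel.x, pixel.y - yPixel));
    vec4 left  = texture2D(texture, vec2(pixel.x - xPixel, pixel.y));
    vec4 right = texture2D(texture, vec2(pixel.x + xPixel, pixel.y));

    // Each pixel gains from its neighbors and loses some of its own.
    gl_FragColor.rgb += 14.0 * 0.016 *
        (up.rgb + down.rgb + left.rgb + right.rgb - 4.0 * gl_FragColor.rgb);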

We've got our f factor as before. In this case we have the timestep (0.016 is 1/60, because we're running at 60 fps) and I kept trying numbers until I arrived at 14, which seems to look good. Here's the result:

Click to add smoke. You can fork and edit this on CodePen.

Uh Oh, It's Stuck!

This is the same diffuse equation we used in the CPU demo, and yet our simulation gets stuck! What gives? 

It turns out that textures (like all numbers on a computer) have a limited precision. At some point, the factor we're subtracting gets so small that it gets rounded down to 0, so the simulation gets stuck. To fix this, we need to check that it doesn't fall below some minimum value:
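Continuing the sketch above, now computed on a single channel as described below:

    float factor = 14.0 * 0.016 *
        (up.r + down.r + left.r + right.r - 4.0 * gl_FragColor.r);

    // If the decrease is too small to survive the texture's precision,
    // force it down to a minimum so the value keeps draining.
    float minimum = 0.003;
    if (factor >= -minimum && factor < 0.0) {
        factor = -minimum;
    }

    gl_FragColor.rgb += factor;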

I'm using the r component instead of the rgb to get the factor, because it's easier to work with single numbers, and because all of the components are the same number anyway (since our smoke is white). 

By trial and error, I found 0.003 to be a good threshold where it doesn't get stuck. I only worry about the factor when it's negative, to ensure it can always decrease. Once we apply this fix, here's what we get:

Click to add smoke. You can fork and edit this on CodePen.

Step 4: Diffuse the Smoke Upwards

This doesn't look very much like smoke, though. If we want it to flow upwards instead of in every direction, we need to add some weights. If the bottom pixels always have a bigger influence than the other directions, then our pixels will seem to move up. 

By playing around with the coefficients, we can arrive at something that looks pretty decent with this equation:
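One possible weighting (a sketch; the leading coefficient of 8.0 is a number I picked, so tune it to taste). The pixel below (down) gets the largest weight, and the weights sum to 6.0 to match the amount subtracted, so nothing is gained or lost overall:

    float factor = 8.0 * 0.016 *
        (left.r + right.r + down.r * 3.0 + up.r - 6.0 * gl_FragColor.r);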

And here's what that looks like:

Click to add smoke. You can fork and edit this on CodePen.

A Note on the Diffuse Equation

I basically fiddled around with the coefficients there to make it look good flowing upwards. You can just as well make it flow in any other direction. 

It's important to note that it's very easy to make this simulation "blow up". (Try changing the 6.0 in there to 5.0 and see what happens). This is obviously because the cells are gaining more than they're losing. 

This equation is actually what the paper I cited refers to as the "bad diffuse" model. They present an alternative equation that is more stable, but is not very convenient for us, mainly because it needs to write to the grid it's reading from. In other words, we'd need to be able to read and write to the same texture at the same time. 

What we've got is sufficient for our purposes, but you can take a look at the explanation in the paper if you're curious. You will also find the alternative equation implemented in the interactive CPU demo in the function diffuse_advanced().

A Quick Fix

One thing you might notice, if you play around with your smoke, is that it gets stuck at the bottom of the screen if you generate some there. This is because the pixels on that bottom row are trying to get the values from the pixels below them, which do not exist.

To fix this, we simply make sure that the pixels in the bottom row find 0 beneath them:
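Continuing the diffuse sketch from above, before the factor is computed (pixel.y <= yPixel is only true on the bottom row of pixels):

    // The bottom row should see nothing but empty space below it.
    if (pixel.y <= yPixel) {
        down.rgb = vec3(0.0);
    }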

In the CPU demo, I dealt with that by simply not letting the cells on the boundary diffuse. You could alternatively just manually set any out-of-bounds cell to have a value of 0. (The grid in the CPU demo extends by one row and column of cells in each direction, so you never actually see the boundaries.)

A Velocity Grid

Congratulations! You now have a working smoke shader! The last thing I wanted to briefly discuss is the velocity field that the paper mentions.

Velocity field from Jos Stam's paper.

Your smoke doesn't have to uniformly diffuse upwards or in any specific direction; it could follow a general pattern like the one pictured. You can do this by sending in another texture where the color values represent the direction the smoke should flow in at that location, in the same way that we used a normal map to specify a direction at each pixel in our lighting tutorial.

In fact, your velocity texture doesn't have to be static either! You could use the frame buffer trick to also have the velocities change in real time. I won't cover that in this tutorial, but there's a lot of potential to explore.

Conclusion

If there's anything to take away from this tutorial, it's that being able to render to a texture instead of just to the screen is a very useful technique.

What Are Frame Buffers Good For?

One common use for this is post-processing in games. If you want to apply some sort of color filter, instead of applying it to every single object, you can render all your objects to a texture the size of the screen, then apply your shader to that final texture and draw it to the screen.

Another example is when implementing shaders that require multiple passes, such as blur. You'd usually run your image through the shader to blur in the x-direction, then run it through again to blur in the y-direction.

A final example is deferred rendering, as discussed in the previous lighting tutorial, which is an easy way to efficiently add many light sources to your scene. The cool thing about this is that calculating the lighting no longer depends on the number of light sources you have.

Don't Be Afraid of Technical Papers

There's definitely more detail covered in the paper I cited, and it assumes you have some familiarity with linear algebra, but don't let that deter you from dissecting it and trying to implement it. The gist of it ended up pretty simple to implement (after some tinkering with the coefficients). 

Hopefully you've learned a little more about shaders here, and if you have any questions, suggestions, or improvements, please share them below! 

What is Pixel Art?


From Space Invaders to Super Mario, pixel art is well known within the game industry of yore. It's quite likely that you grew up seeing a great deal of the art form through gaming consoles or PCs without a great deal of investigation into the process of creating it. If, however, you were anything like me as a child, simply guiding Link through Hyrule was not enough: you wanted to create the artwork he swung his sword in, too.

As pixel art in game design, illustration, and other media has made quite a comeback in recent years (likely due to nostalgia and an appreciation of a beautiful, if sometimes tedious, style of artwork), it's a great time to ask the question: “What's the deal with pixel art?”

What Qualifies as Pixel Art?

Video game style pixel sprites.

Considering that everything you are viewing on your monitor, tablet, or phone is comprised of many, many pixels, the often asked question is “how is this not pixel art?” It's art, it's made of pixels, so surely all digital art is pixel art. While technically correct, when talking about “pixel art”, we're focused on a specific style of artwork most often employed within the gaming industry. Pixel art is a raster-based digital work that is created on a pixel-by-pixel level. Typically very small, the art form is similar to mosaics or cross-stitch in that it focuses on small pieces placed individually to create a larger piece of art.

Many image editing programs can be used to generate pixel art, so long as the program allows artwork to be drawn on a one pixel by one pixel scale. As such, the popularity of artists using MS Paint arose due to its being readily available to Windows users. In the case of other image editing programs, tools outside of hard-edged pencils and erasers are typically discouraged. A hallmark of pixel art tends to be the artist's ability to render complex designs and scenes without the use of tools like Dodge, Burn, or shape tools.

What Techniques are Used?

Pixel art house showing dithering techniques in the cloud, window, and use of tool-assisted gradient backgrounds.

Often, the color palette within pixel art is limited. In previous years (we're talking a couple of decades at this point), the limit in color count was due to the limits of game consoles or display on a computer monitor. As such, a technique known as dithering was employed. Dithering is the staggering of two colors in order to blend them together without having to involve extra colors. The pattern an artist uses, whether the style of staggering pixels or the density of the dithering, contributes to how well the colors blend. It's similar in style to the artistic technique of stippling.

Pixel art eggs from the Breakfast Icon Tutorial showcasing anti-aliasing around the egg.

Another technique used is anti-aliasing. This allows an object or game sprite to blend easily into the background or another object. Depending on the overall look an artist is striving to achieve, anti-aliasing may not be used at all. Often, anti-aliasing takes the form of pre-rendered backgrounds and leads to painterly work, which allows a game sprite to stand out from the background and be easily seen by the player.

Both techniques can either be done by hand or with the help of tools within a program like Adobe Photoshop. When saving pixel art, GIF and PNG are the best formats to use, since JPEG compression artifacts often ruin pixel art quality. Photoshop allows for color limiting options, dithering, and hard or anti-aliased edges. The same goes for how an image is re-sized within the program, allowing users to enlarge pixel art without losing its hard edges.

What Does 8-bit Mean Anyway?

An example of an 8-bit, or 256 color palette: the Mac default palette.

It's terribly trendy for pixel art inspired designs to be called 8-bit, whether they are truly 8-bit or not. Within pixel art, 8-bit is in reference to the color. An 8-bit console, like the Nintendo Entertainment System, was able to display up to 256 colors, because each color was stored in 8 bits: 3 bits of red, 3 bits of green, and 2 bits of blue, creating the 256 displayable colors. Additional limits were placed on video games depending on how much information was stored and accessed on a game cartridge. While a console could display a multitude of colors and animations, the limits that were set allowed the games to render quickly during game play and process faster.

In the early 1990s, consoles like the Super Nintendo and Sega Genesis were 16-bit, upping the color display count to a whopping 65,536. This allowed for smoother gradients and more complex artwork to be created and animated within video games. By the time consoles and computers displayed 32-bit graphics (think PlayStation One), 3-D rendered work was already taking hold, and artists rendering pixels were now rendering polygons. Additionally, game consoles were able to render those graphics at a higher speed than their predecessors thanks to advances in technology over the years.

What is Isometric Pixel Art?

Let's say you're playing a side-scrolling video game like Contra (well known as an arcade game in the late 1980's and on the Famicom/NES console). You'll notice that the artwork is in profile and no vanishing point exists. There's no perspective going on at all in games like this. The same goes for Super Mario games throughout the 80's and 90's. By contrast, games like The Legend of Zelda: A Link to the Past showed a top-down view (showing the top, or the top and one side), where the player was able to peer into buildings from above. This added a dimension to the graphics being displayed, as well as to the characters within the game, but the overall look was still very flat in comparison to 3-D rendered games produced later.

Example of isometric pixel art
A simple isometric block showing construction lines on the left and being colored-in on the right.

When someone refers to pixel art being “isometric”, they're talking about a type of parallel projection that takes on a 3/4-like view, more accurately referred to as “dimetric projection”. It's not quite 3-D, but no longer as flat as the aforementioned perspective styles seen in other pixel art. A well known example of isometric perspective in gaming is the 1982 classic “Q*Bert”. While the character of Q*Bert himself is flat, the levels on which he hops show the top and two sides of each box. Such a view made the player move Q*Bert in a mostly diagonal fashion.

Creating isometric or dimetric pixel art is far more complicated than a side or top-down view. Artists often work on a grid in order to keep their vertical, horizontal, and diagonal lines from straying into the wrong perspective, and to keep their angles at the correct degree for the scene. It's quite similar to working with perspective in technical drawing, and it takes a fair bit of planning, measuring, and understanding of shapes and space in order to form accurate objects, sprites, and environments. Once 3-D graphics became more prevalent, the isometric pixel art style gave way to perspective projection, which is easier for an artist working within a 3-D space to create: it's the type of space we exist within, as well as what's most often taught and used within multiple disciplines of art.

What About Non-gaming Uses of Pixel Art?

Animated pixel dolls
Animated pixel dolls.

While the most prevalent use of pixel art has been in video games, it's an art form unto its own all the same. Pixel artists known as “dollers” (as in, those who make dolls) use the style and techniques from 8-bit and 16-bit video games in order to create base bodies, hair, clothing, and environment for digital doll-like avatars.

Pixel art style website layout
Pixel art style website layout circa 2006.

Many websites from the late 1990's into the mid-2000s were filled with animated GIFs, avatars, and layouts rendered entirely in pixel art. This was most prevalent in South Korea, where the popularity of websites like iBravo and Sayclub had users purchasing components for their profiles or to interact with other users. Additionally, doll-makers were created from the artwork on websites like these (and those like them worldwide), whereupon users would dress up base bodies in pre-made clothing and accessories to display within their profiles on websites like Myspace.

Animated avatar showcasing emotion
Animated avatar showcasing emotion.

Emoticons and kaoani (a Japanese term derived from “kao” meaning “face”, and “ani” meaning “animated”) were all initially rendered in a pixel format. In the case of both, they were often animated allowing users on early social media, message boards, and within instant messengers to display qualities such as mood, activities, or various wordless communications. Animated buddy icons became extremely popular for users of AOL Instant Messenger some fifteen years ago.

Computer icons throughout the 90's were pixel art pieces. Your mouse cursor, unchanged for decades, is still a simple pixel art graphic. Interestingly, most of these uses of pixel art have been replaced by vector graphics (or at least overtaken by them in popularity) within the past decade. Doll-makers, website avatars, full website layouts, and more are all now vector graphics (presented as raster images), likely due to the need to support multiple display sizes on each device (computer, tablet, phone, etc.).

Nostalgia as an Art Form

Leaving aside the practical uses of pixel art, artists nostalgic for the style of the video games of their younger years create illustrations and pieces of design for art's sake. Some pieces are enlarged, retaining the fidelity of each pixel edge and rendering the piece mosaic-like, whereas other artwork is created on the small scale over a large picture plane, rendering the work something akin to “Where's Wally?” (Waldo for my fellow North Americans). In either case (or any other creation based on the style), it's part of a growing movement to capture the past in the form of art. By creating work reminiscent of the media of the past, interacting with it becomes a way of sharing the memories we have of similar styles from video games, internet browsers, and early social media.

Alternatively, artists may just really enjoy the look and feel of pixel art, rather than having some higher agenda for engaging with the art form. In any case, its popularity has been on the rise, appearing in art galleries, on clothing and other accessories, and right back on various gaming platforms.

Care to dive into pixel art yourself? Why not check out these wonderfully relevant tutorials and take some pixels for a spin:

Quick Tip: How to Render to a Texture in Three.js


By default, everything you render in Three.js gets sent to the screen. After all, what's the point of rendering something if you can't see it? It turns out there's a very important point: capturing the data before it gets sent to the screen (and thereby lost). 

This makes it much easier to apply post-processing effects, like color correction, color shifting, or blurring, and it's useful for shader effects, too.

This technique is known as rendering to a texture or rendering to a frame buffer; your final result is stored in a texture, which you can then render to the screen. In this Quick Tip, I'll show you how to do it, and then walk you through a practical example of rendering a moving cube onto the surfaces of another moving cube.

Note: This tutorial assumes you have some basic familiarity with Three.js. If not, check out How to Learn Three.js for Game Development.

Basic Implementation

There are a lot of examples out there on how to do this that tend to be embedded in more complicated effects. Here is the bare minimum you need to render something onto a texture in Three.js:
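
Something along these lines will do (a minimal sketch assuming the Three.js API of the era, in which render() accepts a render target as an extra argument; the variable names follow the walkthrough below):

    // Basic scene setup: renderer, camera, and the scene that goes to the screen.
    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    var scene = new THREE.Scene();
    var camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 1000);

    // Off-screen scene: objects added here are drawn to the render target, not the screen.
    var bufferScene = new THREE.Scene();

    // The render target that receives the off-screen render.
    var bufferTexture = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, {
      minFilter: THREE.LinearFilter,
      magFilter: THREE.NearestFilter
    });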

We first have the basic scene setup. Then, we create another scene, bufferScene; any object we add to this scene will be drawn to our off-screen target instead of to the screen.

We then create bufferTexture, which is a WebGLRenderTarget. This is what Three.js uses to let us render onto something other than the screen. 

Finally, we tell Three.js to render bufferScene:
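
With the API of the era, that's a single call (newer Three.js versions drop the third argument in favor of renderer.setRenderTarget()):

    // Draw bufferScene into bufferTexture instead of onto the screen.
    renderer.render(bufferScene, camera, bufferTexture);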

This is just like rendering a normal scene, except we specify a third argument: the render target. 

So the steps are:

  1. Create a scene to hold your objects.
  2. Create a texture to store what you render.
  3. Render your scene onto your texture.

This is essentially what we need to do. It's not very exciting, though, since we can't see anything. Even if you add things to the bufferScene, you still won't see anything; this is because you need to somehow render the texture you created onto your main scene. The following is an example of how to do that.

Example Usage

We're going to create a cube in a scene, draw it onto a texture, and then use that as a texture for a new cube!

1. Start With a Basic Scene

Here is our basic scene with a red rotating cube and a blue plane behind it. There's nothing special going on here, but you can check out the code by switching to the CSS or JS tabs in the demo.

You can fork and edit this on CodePen.

2. Render This Scene Onto a Texture

Now we're going to take that and render it onto a texture. All we need to do is create a bufferScene just like in the above basic implementation, and add our objects to it.
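
A sketch of what that might look like for the red cube (the geometry, material, and position values are illustrative rather than the demo's exact numbers):

    // The rotating red cube now lives in the off-screen scene.
    var boxGeometry = new THREE.BoxGeometry(5, 5, 5);
    var redMaterial = new THREE.MeshBasicMaterial({ color: 0xf06565 });
    var boxObject = new THREE.Mesh(boxGeometry, redMaterial);
    boxObject.position.z = -10;
    bufferScene.add(boxObject); // added to bufferScene, not scene!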

You can fork and edit this on CodePen.

If done right, we won't see anything, since now nothing is being rendered onto the screen. Instead, our scene is rendered and saved in bufferTexture.

3. Render a Textured Cube

bufferTexture is no different from any other texture. We can simply create a new object and use it as our texture:
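
For example (a sketch; builds from around r72 onward expose the result as bufferTexture.texture, while older builds used the render target itself as the map):

    // A new cube in the main scene, textured with the off-screen render.
    var mainBoxGeometry = new THREE.BoxGeometry(5, 5, 5);
    var mainBoxMaterial = new THREE.MeshBasicMaterial({ map: bufferTexture.texture });
    var mainBoxObject = new THREE.Mesh(mainBoxGeometry, mainBoxMaterial);
    mainBoxObject.position.z = -10;
    scene.add(mainBoxObject);

    function render() {
      requestAnimationFrame(render);
      boxObject.rotation.y += 0.01;     // boxObject comes from the buffer-scene sketch above
      mainBoxObject.rotation.y -= 0.01; // spin the on-screen cube the other way

      renderer.render(bufferScene, camera, bufferTexture); // update the texture...
      renderer.render(scene, camera);                      // ...then draw the main scene
    }
    render();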

You can fork and edit this on CodePen.

You could potentially draw anything in the first texture, and then render it on whatever you like! 

Potential Uses

The most straightforward use is any sort of post-processing effect. If you wanted to apply some sort of color correction or shifting to your scene, instead of applying to every single object, you could just render your entire scene onto one texture, and then apply whatever effect you want to that final texture before rendering it to the screen. 

Any sort of shader that requires multiple passes (such as blur) will make use of this technique. I explain how to use frame buffers to create a smoke effect in this tutorial.

Hopefully you've found this little tip useful! If you spot any errors or have any questions, please let me know in the comments! 

The Super Mario World Method: Understanding Skill Themes


This is the third tutorial in a series I have been writing about how to apply the design lessons of Super Mario World to your own Super Mario Maker levels. In the previous two parts, I focused on how to organize the gameplay elements of your Mario Maker courses at the challenge (small) and cadence (medium) levels of content. If you haven't read those articles, you really should, because this article leans heavily upon terms defined and explained there. 

In this tutorial I'm going to talk about the highest meaningful level of content, the skill theme. Skill themes are collections of levels which require the same player skills to beat. Understanding skill themes as they existed in Super Mario World can help you to create reliable plans for your own levels, in Super Mario Maker or your own games. 

My Example

I've created (and annotated) an example Super Mario Maker level that moves back and forth between two skill themes: "moving targets" and "periodic enemies". Take a look:

The level code is F7E5 0000 0125 6EF3 if you want to try it yourself.

In order to fully understand skill themes, we first need to dive briefly into the history of video game design...

A Brief History of Video Game Design

The history of video game design breaks down into three eras: the arcade era, the composite era, and the set piece era. I've written about this extensively, but only the first two eras are relevant to our understanding of Mario-style game design. 

In the arcade era, video games like Space Invaders, Pac-Man or Galaga derived all of their gameplay from just a couple of variables. These games grew more difficult when enemies got faster, became more numerous, or fired more often. These are all quantitative expansions, which I spoke about in the previous two articles. 

Qualitative evolutions didn’t take on their modern form until Shigeru Miyamoto invented the composite game. A composite game combines two existing genres, allowing the player to use the mechanics of one genre to solve the problems of another genre. 

The first real composite game, Super Mario Bros, is a composite of the action and platformer genres. It illustrates the composite design principle perfectly: the player can solve action game problems (defeating enemies) by using platformer mechanics (jumping).

Mario is a composite of the platformer and action genres
The Super Mario Bros games are composites of the action and platformer genres.

That’s not all a composite game does, though. A true composite game will bounce back and forth between its two composited genres to keep the gameplay feeling fresh. This back-and-forth motion produces a type of ongoing player focus that I call composite flow.

So How Does This Translate to Level Design?

Okay, that was a lot of theory, but I’m going to illustrate composite design and what it means in Mario Maker. The good news is that the theory of composite games actually makes it easier to figure out what to put into your Mario levels, if you understand the theory correctly. 

Mario games are made up of the action and platformer genres. Every level in a Mario game is going to focus either on platformer content or action content. Accordingly, one of the first steps a designer takes when making a level is to decide on the level’s genre focus. That’s a decision you can make before you even start your level! 

If you were designing platformers in 1986, you'd be done preparing already. Of course, games have advanced a lot since then, so you also have to take into account the next big development that Mario games introduced: skill set isolation.

Starting with Super Mario World, the designers deliberately targeted different skill sets when making levels. That is, they wanted to make it so levels in the same genre could feel different. Thus, Super Mario World isolates two different skill sets: timing and speed. Either skill set will match up with either genre of gameplay. 

We can see what this looks like on a grid:

Grid of genre vs skill set in Super Mario World

Each combination of genre and skill set results in a very different type of level. These level types belong to the highest level of organization in the "challenge, cadence, skill-theme" (CCST) framework: skill themes. A skill theme is just a collection of levels which are based on the same genre and skill set. 

In this tutorial, I'll demonstrate two skill themes: the "moving targets" skill theme, and the "periodic enemies" skill theme. Don’t worry about the definition of those themes; I'll show you exactly what I mean.

Applying History to Mario Maker

The first skill theme I’d like to show you is the "moving targets" skill theme. One might call this the "moving platforms" theme, because it focuses on platformer mechanics and making precisely-timed jumps to them. I use the term “targets” instead, because in actual Mario games, the platforms don’t always look like platforms. Sometimes the moving targets are walls, barrels or enemies; the term “targets” catches all of those things. 


An example of the moving targets skill theme
An example of the "moving targets" skill theme.

The "periodic enemies" skill theme focuses on moving enemies instead of moving platforms. Instead of having to guide Mario across platforms which move in regular loops, the player has to guide Mario past enemies that move in regular loops. 

A loop doesn't necessarily mean a circle; any enemy that moves in a set space at a set speed and with a set pattern is periodic. The key is that it's really easy to predict where the enemy is going to be, and when, the same way the player would predict the location of a moving platform. 

This is definitely more action-oriented and less platform-oriented, however. In the periodic enemies theme, the danger isn't in falling from a bad jump; the danger is in colliding with an enemy. Indeed, many of Super Mario World's "periodic enemies" levels feature very little jumping at all.

An example of the periodic enemies skill theme
An example of the "periodic enemies" skill theme.

In my level, as in most of the castles and fortresses of Super Mario World, the player is actually doing more running than jumping. Sure, there are jumps, but the periodic enemies theme is all about running at the right time. This does involve some waiting. 

It's possible to blow past the Thwomps and Grinders you see above at full speed, but it's quite hard, and that's the point. If the player simply waits for the right moment, getting past all those obstacles is easy. It's all about timing and finesse rather than speed and reflexes.

The key to working in the periodic enemies theme is mixing and matching the heterogeneous periods of different enemies. Essentially, you want to get several different types of bad guys moving around in one space. Here's an example:

A more complex example of the periodic enemies theme
A more complex example of the "periodic enemies" theme.

The period of the flame jet and the period of the tracked Grinder are totally different (that is, they’re dangerous at different times), and so the player has to internalize the timing for both of those enemies to figure out the window for a safe jump. It's not a demanding jump in terms of thumb-skills, but it does require precise timing.

Next, the crossover ends and the level returns to the moving targets theme. If you, as a level designer, want to stick to Nintendo-style orthodoxy, this is an important step. There are numerous levels with extensive crossover material in Super Mario World (and other Mario games), but only a few of them end in a skill theme other than the one they started in. Returning to the moving targets theme, the level continues with a further evolution upon the idea of separation from the platform.

Separation from the platform

The player is only really moving between platforms here, not unlike the way he or she would in the Super Mario World levels Cheese Bridge or Way Cool. This section is a little shorter, and Super Mario Maker doesn’t allow me to build all the crazy shapes and involutions you see in tracked platforms in Super Mario World. Still, this section makes the player concentrate on the timing of two moving platforms instead of just one, which is the kind of evolution you would expect.

After this, I combine the two skill themes together for the final challenge. Here we have strong elements of both moving targets and periodic enemies:

Moving targets and periodic enemies

The player has to detach from the moving platform and pass through two periodic-enemy traps to get back to it. The layout more or less explains itself. I configured the rotation of the flame traps so that the player can get through them if he or she waits just a second at each one. All it takes is two properly timed runs and the player should be in position to drop onto the platform as it passes under the lowest row of blocks.

A Brief Recap

That should give you a good idea about the use of skill themes. The skill themes discussed above are not the only ones in platformers—they’re not even the only skill themes in Super Mario World! The next article will explore two more skill themes, and there are many more in games beyond the Nintendo library. 

Here are some of the lessons we can learn from Super Mario World about how to use skill themes:

  • Many classic video games were designed as composites of two or more genres.
  • Moving back and forth between genres from one level to the next can keep the levels in your game feeling fresh—but the game should never abandon either of its genres totally.
  • In addition to moving between genres, a game can move between skill sets like timing vs. speed, or puzzle-solving vs. combat.
  • Most levels in Super Mario World exist as a part of a skill theme: they stick with one genre focus and one skill set.
  • When a level in the style of Super Mario World does shift between skill themes, it usually shifts in genre but not in skill sets. Platform levels detour into action segments, but timing levels don’t usually switch over to speed.

Good luck making your levels!

An Introduction to Intel RealSense Technology for Game Developers


Intel RealSense technology pairs a 3D camera and microphone array with an SDK that allows you to implement gesture tracking, 3D scanning, facial expression analysis, voice recognition, and more. In this article, I'll look at what this means for games, and explain how you can get started using it as a game developer.

What is Intel RealSense?

RealSense is designed around three different peripherals, each containing a 3D camera. Two are intended for use in tablets and other mobile devices; the third—the front-facing F200—is intended for use in notebooks and desktops. I'll focus on the latter in this article.

The F200 is already included in a number of different notebooks, as well as a couple of other devices, and will soon be available as a stand-alone USB peripheral. (You can already order or reserve a dev kit version for around $100.)

It consists of:

  • A conventional color camera (1080p, 30fps)
  • An infrared laser projector and camera (640x480, 60fps)
  • A microphone array (with the ability to locate sound sources in space and do background noise cancellation)

The infrared projector and camera can retrieve depth information to create an internal 3D model of whatever the camera is pointed at; the color information from the conventional camera can then be used to color this model in.

The SDK then makes it simpler to use the capabilities of the camera in games and other projects. It includes libraries for:

  • Hand, finger, head, and face tracking
  • Facial expression and gesture analysis
  • Voice recognition and speech synthesis
  • Augmented reality
  • 3D object and head scanning
  • Automated background removal

Note that, as well as allowing you to track, say, the position of someone's nose or the tip of their right index finger in 3D space, RealSense can also detect several built-in gestures and expressions, like these:

So, instead of writing code to check if the corners of the player's mouth are curved upwards and deducing whether or not they are smiling, you can just quiz the RealSense library for the "smile" gesture.

What RealSense Brings to Games

Here are a few examples of how RealSense can be (and is being) used in games:

Nevermind, a psychological horror game, uses RealSense for biofeedback: it measures the player's heart rate using the 3D camera, and then reacts to the player's level of fear. If you lose your cool, the game gets harder!

MineScan, by voidALPHA, is a proof-of-concept that lets you scan real-world objects (like stuffed animals) into Minecraft. Any 3D PC game with an emphasis on mods or personalization could use the RealSense camera's scanning capabilities to let players insert their own objects (or even themselves!) into the game.

Faceshift uses RealSense for motion capturing faces in detail. This technology could be used in real-time, within a game, whenever players talk to one another, or during production time to record an actor's expressions as well as their voice for more realistic characters.

There Came an Echo is a tactical RTS that uses RealSense's voice recognition capabilities to let the player command their squad. It's easy to see how this could be adapted to, for instance, a team-based FPS.

Years ago, Johnny Lee explained how to (mis-)use a Wii controller and sensor bar to track the player's head position and adjust the in-game view accordingly. Few games, if any, actually made use of this (no doubt because of the unorthodox setup it required)—but RealSense's head and face tracking capabilities make this possible, and much simpler. 

There are also several games already using RealSense to power their gesture-based controls:

Laserlife, a sci-fi exploration game from the studio behind the BIT.TRIP series.

Head of the Order, a tournament-style fighting game set in a fantasy world, where players use hand gestures to cast spells at each other.

Space Between, in which you use swimming hand motions to guide turtles, fish, and other sea creatures through a series of tasks in an underwater setting.

Madagascar Move It!, a kids' game similar to the Let's Dance series.

Gesture controls aren't exactly new to gaming, but previously they've been almost exclusive to Kinect. Now, they can be used in PC games—that means Steam, and even the web platform.

How to Use RealSense as a Game Developer

First step: download the SDK. (Well, OK, the first step is probably to get a device with a RealSense camera or reserve a dev kit.) 

The SDK contains:

  • Libraries and interfaces for Java, Processing, C++, C#, and JavaScript
  • A Unity Toolkit with scripts and prefabs
  • Code samples and demos
  • Documentation

Next, take a look at the Intel RealSense SDK Training site. Here, you'll find guides to get you started, tutorials to guide you through using certain features (including the Unity Toolkit), and videos of previous webinars. We'll also be publishing RealSense tutorials on Tuts+ over the next few weeks.

Intel's YouTube channel has a great playlist of videos about developing for RealSense. These have a much greater focus on UX and UI than the tutorials above; watch this video for an example:

These UX Guidelines (PDF) are a great accompaniment to the above videos.

Once you've got a good overview of what the SDK can do and how the various libraries work, dive in to the documentation for detail.

Finally, check out the official forums to chat with other developers, see what they're working on, and get advice.

Conclusion

We've covered what RealSense is, what game developers are using it for, and how you can get started using it in your own games. Keep an eye on the Tuts+ Game Development section over the next few weeks for some tutorials on head scanning, keyboard-free typing, and expression recognition.

The Intel® Software Innovator program supports innovative independent developers who display an ability to create and demonstrate forward-looking projects. Innovators take advantage of speakership and demo opportunities at industry events and developer gatherings.

Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for the Internet of Things, Android*, Intel RealSense Technology, Modern Code, Game Dev and Windows* to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.


A Beginner's Guide to Designing Video Game Levels


In this tutorial, I'll explain how to design levels for video games, based on my experience as a designer for the Ratchet & Clank, Resistance, and Skylanders franchises. I'm not going to dive deep into individual concepts, but rather give an outline of the high-level process I use when designing a level.

I'll walk you through an example level I'm creating from scratch, so you can see typical results from each stage of the process.

  • In Step 1: Understanding Constraints, I'll walk you through common limitations I always look out for while designing levels.
  • In Step 2: Brainstorming and Structure, I'll show you how I decide what goes into a level.
  • In Step 3: Bubble Diagrams, I'll introduce you to a visual method for outlining what goes into each area of your level.
  • In Step 4: Rough Maps, I'll talk about how I flesh out each bubble from a Bubble Diagram to figure out what goes into each area. I could write an entire series of tutorials about how to do this, so we'll only go over the basic outline here.
  • In Step 5: Finishing the Design, I'll talk about moving on from your basic design to create final spaces. This is also a huge topic that could be further explored in a series of tutorials, so for scope I'll keep this very basic.

1. Understanding Constraints

At the beginning of a design, the hardest part is figuring out what is going to be in a level. As a designer, you get to decide a lot, but you don't always get to decide everything—especially if you're working in a large team. 

On a large team, most of your constraints are going to come from other people. There will be business constraints, franchise constraints, audience constraints, legal constraints, engine constraints, and so forth. Most of the time, these restrictions come from far away up the chain.

Closer to you will be the constraints applied by the vision of the creative director, art director, and anyone else involved making decisions at that level.

If you're working on your own as an indie, you're the one who will be making these decisions, so you still need to understand your constraints very well.

General Constraints

There are a few general constraints I try to keep in mind whenever I design a level; I find that these apply to most games I've ever worked on. I've provided example answers to the questions about these constraints below to show you the level of detail you need to get started, and I'll use these example constraints to construct an example level design in this tutorial.

How Long Should This Level Be?  

This is a short level, about 30 minutes long at most. 

Are We Trying to Show off Any New Tech, Art, Audio, or Similar? 

Our imaginary game engine has cool indoor lighting effects, so I want to have lots of cool indoor spaces.

How Much Time Do I Have to Design It? 

This article was written over the course of several months, but the level design aspect itself took about two or three days to complete. 

Note: I'd expect this process to take about 5 weeks for a full-sized level on a real game.

If Someone is Paying For This Game, What Are Their Requirements?

For a game not made as a tutorial example, these requirements usually come from the publisher, the investors, the marketing department, and so on.

What Platform is It On?

The platform you make the game for imposes constraints. A game for a phone can't use as much processing power as, say, a game for PS4 or PC. A virtual reality game imposes restraints on camera movement to avoid causing motion sickness. Mobile games have length restrictions because people play in short bursts. Know your limitations.

For the sake of this example, let's say the game is targeted at last-gen consoles (PlayStation 3, Xbox 360) and PC.

Where Does This Level Fit Into the Level Progression?

This is the third level of the game, and, as such, the challenges won't be too hard.

Who is the Audience?

This game is a sci-fi game, fairly violent. It will likely get an M (or 18+) rating. We're aiming this at hardcore gamers over the age of 18.

The Most Critical Constraints

If you find yourself in the fortunate situation that someone is paying you to design a level, remember that they want this level/game for a reason. If the stuff you make doesn't satisfy that reason, they will not (and should not) pay you, or the development studio you work for, for it. Satisfying clients is the best way to make sure they hire you or your studio again, so make sure to ask questions about what that reason is.

Those critical questions vary from project to project, but regardless of whether I'm designing a level for myself or for others, I find that there are four questions that are almost always the most important to ask first:

  • What is required by the level's story, theme, and plot?
  • What are my set-pieces?
  • What metrics am I constrained by?
  • What does the game's "Macro design" require from this level?

Let's look at each of these, in turn:

What is Required by the Story, Theme, or Plot?

The goal of the example level is to rescue a VIP who is trapped in a military facility, then leave the area in a helicopter.

What Are My Set-Pieces?

For the sake of this example:

  • Dark hallways and stairwells show our lighting off to good effect. Employ surprise to prompt weapon firing, which will cast cool shadows.
  • Fight a huge monster in a destroyed barracks near the middle.
  • A Control Tower where the VIP is located.

What Metrics Am I Bound By?

Each area that you design needs to take into account things like the player's movement speed, the size of the player, the size of the monsters, jump heights, and so on.

Each of these informs how large your corridors and spaces need to be, and what heights and lengths are available to be used as jumps.

What Does the Game's Macro Design Require From This Level?

Early in the development of a game, a short document is usually developed that decides what goes on in each level in very vague terms. (Watch this D.I.C.E. 2002 speech by Mark Cerny for more information on Macro designs.) 

A Macro document specifies which puzzles and enemies go in each level, how many usages of each are expected per-level, what rewards you get, and things of that nature. This puts further constraints on your design. 

For the sake of our example, here are our Macro constraints:

First: this is a simple first-person combat game. No puzzles, and simple combat with four enemy types:

  • Ranged: An enemy that stands still and shoots at the player.
  • Melee: An enemy that runs up close and attacks the player with a weapon.
  • Swarmer: A small, close-range enemy with a single hit point. Good in swarms.
  • Heavy: A large enemy that stands still, takes lots of hits to kill, does lots of damage, and has both a ranged attack and a melee attack.

Second: once the player has rescued the VIP, there needs to be a shortcut back to the start of the level, so the player doesn't have to re-traverse the whole thing.

Third: the VIP is located in the final combat room. She is being held prisoner by elite soldiers.

2. Brainstorming and Structure

Coming Up With Ideas

Once I'm clear on my restrictions, I start brainstorming. For example:

  • We want a lot of interiors, so I decide this will be an underground base.
  • Helicopters get into the base via a long vertical shaft, so I'll start the level at the bottom of one of these.
  • The bad guys wrecked the place coming in, so the place is torn up. Several of the areas should be wrecked.
  • I want to do combats with enemies at differing heights, so I want to have at least one really long stairwell fight sequence. 
  • This is not a real level design, so I'm going to make it absolutely linear so my examples in the article are as clear as possible.
  • And so on...

Narrowing It Down to Areas

When I'm designing a level, I like to think in terms of different "areas" within the level. That makes it easier to break my work down into manageable chunks. "Areas" is a loose term for any chunk of the level, of any arbitrary size, shape, or location. The only real criterion for whether something is an area or not is that it must help you work faster to think of it that way. If thinking that way makes things more difficult, don't worry about it.

For our example, I want players to learn about new enemies in isolation and then combine the enemies together over the course of the level, so everything gets more complex. This is good intensity ramping.

To do that well, I usually like to have at least seven areas to work with. (It's vastly beyond the scope of this article to explain why, but you can read more about the "Rule of Seven" here to see some of the pacing benefits it brings). When I need a final area for something, like a room for a cutscene where you rescue the VIP, I usually add an extra area. For this example, that means 8 areas. 

For each area, I then assign some basic ideas or requirements, so that I have a short list that tells me the structure of my level.

For the example level, this is what I came up with for areas:

  1. Helicopter Landing Pad: Start of level; safe—no enemies.
  2. Computer Room: One combat encounter with two Ranged enemies; the path behind you closes off somehow (one way).
  3. Tight Corridors: Four combat encounters; introduce the Melee and Swarmer enemies.
  4. Destroyed Barracks: One combat encounter; introduce the Heavy Enemy; tight quarters.
  5. Barracks 2: The path behind you closes off somehow; one encounter with Melee, Ranged, and Heavy enemies.
  6. Corridors 2: One encounter with Melee and Ranged enemies.
  7. Large Stairwell: Vertical fight against enemies; three encounters using all four enemy types.
  8. Damaged Control Tower Room: One encounter with two Heavies and some Swarmers as a final fight; we need a one-way exit back to the start; the VIP herself is located between this room and the shortcut to the start.

3. Bubble Diagrams

Before I commit a bunch of time and effort towards making a final design, building something in-engine, or even starting to think about individual areas, I always want to have a sense of the overall level and how it flows. This keeps me from making mistakes and having to rework my designs as much.

To visualize the whole level and how its areas are connected, I make a Bubble Diagram.

Bubble Diagram

A Bubble Diagram is a very simple map of the whole level, with circles indicating areas in the level and arrows indicating the flow and connections between the areas. 

In the brainstorming phase from section 2, we came up with all the pieces of our level. The idea of a Bubble Diagram is to help you visualize where all of these pieces will be, relative to each other. It also helps you think about the paths through your level, and what kind of path structure is best suited for your objectives.

This is the Bubble Diagram I've made for our linear example level, with two types of arrows to show whether the connection is two-way or one-way:

Bubble diagram for level design
A bubble diagram from our example level. Note: the numbers in the diagram refer to the list of eight areas from the previous step. 

Almost every designer I know makes these a little differently, and that’s okay! The only requirements are that you must be consistent and that the final product must be readable. Part of the point of making bubble diagrams is that they can be used to communicate your ideas to others, so keep that in mind when you create them.

Note: See my article on Views and Vistas for information about setting up set-pieces and deciding on views. This is a good stage in the process to decide where those go. 

4. Rough Maps

Flesh out Each Bubble

Once I've got the Bubble Diagram finished, we know what's going into this level, and we know how each area connects to the others.

The next step is to run down the list and create a rough design for each bubble. I almost always do this on paper or in Illustrator, because that's how I learned, but I know a number of great designers who do this kind of thing in-engine to get a better sense of the space. Whatever makes you work fastest is best here.

Below, see an example of what one of the bubbles (specifically Bubble 3: Tight Corridors) looks like after I've designed it out on paper (top-down):


The player starts at the top of this area and proceeds to the bottom. This area makes use of right angles to introduce enemies as a surprise to the player.

I'll break this down:

  1. Player comes south and fights 3 Swarmers. After player rounds the corner, four more Swarmers run out from an alcove.
  2. After rounding the second corner, the player is face-to-face with a Melee enemy. This enemy will need to close distance before attacking, so having it around the corner isn't cheap.
  3. Rounding the third corner, the player fights a horde of Swarmers, along with a single Melee enemy that runs from behind cover to attack. The Swarmers come from inside the alcove close to the player, and from around the next corner.
  4. The player passes the fourth corner and turns the fifth corner to be confronted by three Ranged enemies, each using the wall as cover, while five Swarmers run at the player.
  5. Rounding the last corner, the player proceeds to the area in Bubble 4.

Note how this area is designed in isolation from the others, and scale is considered, but not called out. Note how the distances and heights are still not well defined. 

In this rough stage, it's really helpful to be able to change things quickly, so I don't finalize those details until I'm ready to finish the design. I do try to keep the scale relatively consistent between all the areas, though, as this will make my job easier in the next step when we connect the areas together. 

Don't get too hung up on accuracy or small details. Things about this design will change constantly from now until the game ships (even after we "finalize" the design). Nothing is being set in stone.

Connect the Areas Together

After taking each bubble and designing them in rough, on paper, I link them together (roughly). For readability, I've done it here in Adobe Illustrator, but this can be done on paper as well.

Rough unconnected 2D level map

Note how the areas are all laid out end to end, so I know how they'll connect, but I haven't finalized anything yet. 

Try to ramp the intensity up, area by area. Make sure you're combining your enemy types well, and that in general the difficulty, complexity, and intensity of your enemy encounters or puzzles increase over the course of the level.

Make sure to add plenty of rest spots between combats or challenges to lower intensity from time to time. If you keep the intensity at 10 all the time, 10 will become the new 5.

The end product (as appears in the image above) is what I call a rough map.

5. Finishing the Design

This step is when I finalize how all the areas connect to each other in physical space. All of the transitions are completed, and I've finalized the heights and distances of everything.

Different designers do this step in different ways. A lot of designers like to dive straight into the engine and build this stuff out, which is great. My preference is usually to finish the 2D map, since I tend to be a bit slower than most when constructing levels in-engine and this speeds me up. The best way will be whatever makes you work faster and makes your end product better.

The final level map
The final level map. (Full-sized PDF version available here.)

Notes: 

See the PDF attached to this tutorial if you'd like to zoom in and see details of the final map. You can also see how it's organized (different parts in different layers) to get an idea of how I construct these. 

The orange boxes are triggers. Enemies in a room won't attack players until players cross the trigger.

Each box on the grid is 2x2 game units. By doing this map top-down on a grid like this, and by marking heights with numbers (e.g. +70 in the map above), I can give indications in all three dimensions about where things should go.

Conclusion

Keep in mind that everything we've done so far is only a design. The minute you get it into the engine and start playing with it, you'll find a ton of things you'll want to improve—but having a solid foundation before going into the tools has helped me a lot over the years.

It's my hope that this description of my method has been useful to you. Most people won't want to work exactly this way, and that's fine: just pick out the parts that make you faster, or that make your resulting work more cohesive, and use those.

Review

I begin the process by understanding all the constraints and restrictions that surround the level. Having a solid handle on my requirements up front prevents the need for rework later.

Next, I brainstorm ideas and get together a rough structure for what the level will be like: how many areas I'll need, and what will basically be in them. This usually ends up being a simple numbered list, especially for linear levels like the one we've been working on in this article.

Then, I create a Bubble Diagram so that I can understand how all my areas fit together. It gives me a foundation for understanding the basic flow of my new level at a glance.

After that, I create a rough map. I usually design each area separately, on paper, and then later figure out how to string them together. Once I've got them where I want them, I can see if any changes need to be made to anything I've designed to accommodate the areas fitting together.

Once I've got a rough map, I either start working in-engine or finish the map. When I'm working on my own projects, I go in-engine. When I'm working for others, I usually make a map. A map is a very effective communication tool, and if you keep it relatively up to date it can be useful for people to look at during meetings. 

It's Back... Student Subscriptions for Just $45!


For a limited time only, if you're currently a student, you can grab a full year of learning on Envato Tuts+ for just $45! Save a massive 50% off our normal student offering.

That's 12 months of access to all 21,000+ written tutorials, 700+ video courses, and over 180 eBooks to download. Plus you'll also be able to claim special offers from our subscriber benefits program.

How Do I Claim This Offer?

This offer is limited to the first 500 students paying via PayPal, and can be purchased here. Hurry—the offer ends 5 February 2016, unless sold out prior.

Want to Learn Something New, in Just 60 Seconds?


In most of our tutorials and courses at Envato Tuts+, we aim to cover a topic in depth, giving you a comprehensive understanding of the concept or skill we're teaching.

But we also know that people don't always have time to read a long tutorial or watch a 15-part video series. 

So we've been trying out something different: a series of quick video tutorials, in which we introduce you to a new subject in just 60 seconds. It's been quite a challenge for our instructors to tackle complex subjects like building a WordPress theme or filming a documentary video, and cram all that information into just a minute. But the results have been impressive.

So far we've created more than 35 video tutorials across a wide range of subjects. You can browse a selection of them below, and if you've got a minute, why not play one of the videos and see what you can learn?

Web Design

Build a WordPress Theme in 60 Seconds

Creating a basic WordPress theme can be easier than you might think. Here’s how, in 60 seconds!

CSS Preprocessors in 60 Seconds

CSS Preprocessors do a number of things and can massively improve your workflow. Here's a quick-fire explanation.

Your First HTML Document in 60 Seconds

Creating your first HTML document is one of the most satisfying moments for any new web designer. Here’s how to do it in 60 seconds!

If you want to learn more, you can watch the full series of 60-second Web Design video tutorials.

Photo & Video

How to Film a Documentary Video in 60 Seconds

Filming a documentary is all about telling a true story. It is your job to make that interesting and engaging. Find out how in this short video.

Macro Photography in 60 Seconds

If you'd like to photograph small subjects up close and personal, then you can learn how in this video.

Cinematic Drone Video in 60 Seconds

Light unmanned vehicles are opening the skies to brand new kinds of affordable aerial filmmaking. With drones you can achieve cinematic shots that were once limited to big-budget productions.

If you want to learn more, you can watch the full series of 60-second Photo & Video tutorials.

Design & Illustration

Photoshop in 60 Seconds: Smart Objects

In this 60-second video, Kirk Nelson explores some of the amazing benefits of using Smart Objects in Adobe Photoshop.

Design in 60 Seconds: RGB and CMYK Color Modes Explained

If RGB and CMYK color modes have ever seemed confusing to you, this quick 60-second video will help out. 

Illustrator in 60 Seconds: How to Use the Pathfinder Panel

Having trouble figuring out how to use the different Shape Modes found under the Pathfinder panel in Adobe Illustrator? Well, worry no more, since in this short video you’ll learn exactly how to use them!

If you want to learn more, you can watch the full series of 60-second Design & Illustration video tutorials.

Code

Creating a Simple WordPress Plugin in 60 Seconds

The process of creating a WordPress plugin can be daunting, especially as you're just getting started. But before trying to create a large, multi-featured plugin, why not start off with the basics?

Gradle in 60 Seconds

Gradle is the de facto build system for Android Studio. It takes your project's source code, resources, and other dependencies, and packages them up into an APK file. But there's much more that Gradle can do. In this video, you'll learn what Gradle is and what it can do for you.

Create a React Class in 60 Seconds

React is a JavaScript library built and maintained by Facebook that aims to make it easy to build user interfaces using, you guessed it, JavaScript.

In this video you'll learn about React as a library, and you'll see how you can begin implementing it in your projects.

Want to Learn More?

If you want to see more 60-second tutorials, here are those links to the overall series again:

  • 60-second Web Design video tutorials
  • 60-second Photo & Video tutorials
  • 60-second Design & Illustration video tutorials

If there's another subject that you'd like us to explain in 60 seconds, leave a comment below. Our instructors love a challenge!

How to Use Tile Bitmasking to Auto-Tile Your Level Layouts


Crafting a visually appealing and varied tileset is a time consuming process, but the results are often worth it. However, even after creating the art, you still have to piece it all together within your level! 

You can place each tile, one by one, by hand—or, you can automate the process by using bitmasking, so you only need to draw the shape of the terrain.

What is Tile Bitmasking?

What is tile bitmasking

Tile bitmasking is a method for automatically selecting the appropriate sprite from a defined tileset. This allows you to place a generic placeholder tile everywhere you want a particular type of terrain to appear instead of hand placing a potentially enormous selection of various tiles. 

See this video for a demonstration:

(You can download the demos and source files from the GitHub repo.)

When dealing with multiple types of terrain, the number of different variations can reach 300 or more tiles. Drawing this many different sprites is definitely a time-consuming process, but tile bitmasking ensures that the act of placing these tiles is quick and efficient.

With a static implementation of bitmasking, maps are generated at runtime. With a few small tweaks, you can expand bitmasking to allow for dynamic tiles that change during gameplay. In this tutorial, we will cover the basics of tile bitmasking while working our way towards more complicated implementations that use corner tiles and multiple terrain types.

How Tile Bitmasking Works

Overview

Tile bitmasking is all about calculating a numerical value and assigning a specific sprite based on that value. Each tile looks at its neighboring tiles to determine which sprite from the set to assign to itself. 

Every sprite in a tileset is numbered, and the bitmasking process returns a number corresponding to the position of a sprite in the tileset. At runtime, the bitmasking procedure is performed, and every tile is updated with the appropriate sprite.

Example terrain sprite sheet for tile bitmasking

The sprite sheet above consists of terrain tiles with all of the possible border configurations. The numbers on each tile represent the bitmasking value, which we will learn how to calculate in the next section. For now, it's important to understand how the bitmasking value relates to the terrain tileset. The sprites are ordered sequentially so that a bitmasking value of 0 returns the first sprite, all the way to a value of 15 which returns the 16th sprite. 

Calculating the Bitmasking Value

Calculating this value is relatively simple. In this example, we are assuming a single terrain type with no corner pieces. 

Each tile checks for the existence of tiles to the North, West, East, and South, and each check returns a Boolean, where 0 represents an empty space and 1 signifies the presence of another terrain tile. 

This Boolean result is then multiplied by the binary directional value and added to the running total of the bitmasking value—it's easier to understand with some examples:

4-bit Directional Values

  • North = 2^0 = 1
  • West = 2^1 = 2
  • East = 2^2 = 4
  • South = 2^3 = 8
Tile bitmasking example

The green square in the figure above represents the terrain tile we are calculating. We start by checking for a tile to the North. There is no tile to the North, so the Boolean check returns a value of 0. We multiply 0 by the directional value for North, 2^0 = 1, giving us 1*0 = 0.

For a terrain tile surrounded entirely by empty space, every Boolean check returns 0, resulting in the 4-bit binary number 0000 or 1*0 + 2*0 + 4*0 + 8*0 = 0. There are 16 total possible combinations, from 0 to 15, so the 1st sprite in the tileset will be used to represent this type of terrain tile with a value of 0.

Tile bitmasking example
A terrain tile bordered by a single tile to the North returns a binary value of 0001, or 1*1 + 2*0 + 4*0 + 8*0 = 1. The 2nd sprite in the tileset will be used to represent this type of terrain with a value of 1.
Tile bitmasking example

A terrain tile bordered by a tile to the North and a tile to the East returns a binary value of 0101, or 1*1 + 2*0 + 4*1 + 8*0 = 5. The 6th sprite in the tileset will be used to represent this type of terrain with a value of 5.

Tile bitmasking example

A terrain tile bordered by a tile to the East and a tile to the West returns a binary value of 0110, or 1*0 + 2*1 + 4*1 + 8*0 = 6. The 7th sprite in the tileset will be used to represent this type of terrain with a value of 6.
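
Putting those four checks together, here's a minimal JavaScript sketch of the calculation, assuming a 2D array called grid in which a truthy cell marks a terrain tile (all names here are illustrative):

    // Returns true if the cell at (x, y) exists and holds a terrain tile.
    function isTerrain(grid, x, y) {
      return y >= 0 && y < grid.length &&
             x >= 0 && x < grid[y].length &&
             Boolean(grid[y][x]);
    }

    // 4-bit bitmask: North = 1, West = 2, East = 4, South = 8.
    function bitmask4(grid, x, y) {
      var mask = 0;
      if (isTerrain(grid, x, y - 1)) mask += 1; // North
      if (isTerrain(grid, x - 1, y)) mask += 2; // West
      if (isTerrain(grid, x + 1, y)) mask += 4; // East
      if (isTerrain(grid, x, y + 1)) mask += 8; // South
      return mask; // 0..15, indexing directly into the 16-sprite sheet
    }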

Assigning Sprites to Tiles

After calculating a tile's bitmasking value, we assign the appropriate sprite from the tileset. This final step can be performed in real time as the map loads, or the result can be saved and loaded into your tile editor of choice for further editing.

Tile bitmasking how tiles are assigned based on the terrains shape

The figure on the left represents a 4-bit, single-terrain tileset as it would appear sequentially on a tile sheet. The figure on the right depicts how the tiles look in-game after they are placed using the bitmasking procedure. Each tile is marked with its bitmasking value to show the relationship between a tile’s order on the tile sheet and its position in the game. 

As an example, let’s examine the tile in the upper-right corner of the figure on the right. This tile is bordered by tiles to the West and South. The Boolean check returns a binary value of 1010, or 1*0 + 2*1 + 4*0 + 8*1 = 10. This value corresponds to the 11th sprite in the tile sheet.

Tileset Complexity

The number of required directional Boolean checks depends on the intended complexity of your tileset. By ignoring corner pieces, you can use this simplified 4-bit solution that only requires four directional binary checks. 

But what happens when you want to create more visually appealing terrain? You will need to deal with the existence of corner tiles, which increases the amount of sprites from 16 to 48. The following 8-bit bitmasking example requires eight Boolean directional checks per tile.

8-Bit Bitmasking with Corner Tiles

For this example, we are creating a top-down tileset that depicts grassy terrain near the ocean. In this case, our ocean exists on a layer underneath the terrain tiles. This allows us to use a single-terrain solution, while still maintaining the illusion that two terrain types are colliding. 

Once the game is running and the bitmasking procedure is complete, the sprites will never change. This is a seamless, static implementation of bitmasking where everything takes place before the player ever sees the tiles.

Introducing Corner Tiles

We want the terrain to be more visually interesting than the previous 4-bit solution, so corner pieces are required. This extra bit of visual complexity requires an exponential amount of additional work for the artist, programmer, and the game itself. By expanding on what we learned from the 4-bit solution, we can quickly understand how to approach the 8-bit solution.

A sprite sheet of an example tileset

Here is the complete sprite sheet for our ocean-side terrain tiles. Do you notice anything peculiar about the number of tiles? The 4-bit example from earlier resulted in 2^4 = 16 tiles, so this 8-bit example should surely result in 2^8 = 256 tiles, yet there are clearly fewer than that here. 

While it’s true that this 8-bit bitmasking procedure results in 256 possible binary values, not every combination requires an entirely unique tile. The following example will help explain how 256 combinations can be represented by only 48 tiles.

8-bit Directional Values

  • North West = 2^0 = 1
  • North = 2^1 = 2
  • North East = 2^2 = 4
  • West = 2^3 = 8
  • East = 2^4 = 16
  • South West = 2^5 = 32
  • South = 2^6 = 64
  • South East = 2^7 = 128
Tile bitmasking 8-bit example

Now we're making eight Boolean directional checks. The center tile above is bordered by tiles to the North, North-East, and East, so this Boolean check returns a binary value of 00010110 or 1*0 + 2*1 + 4*1 + 8*0 + 16*1 + 32*0 + 64*0 + 128*0 = 22.

Tile bitmasking 8-bit example

The tile on the left above is similar to the previous tile, but now it is also bordered by tiles to the South-West and South-East. This Boolean directional check should return a binary value of 10110110, or 1*0 + 2*1 + 4*1 + 8*0 + 16*1 + 32*1 + 64*0 + 128*1 = 182.

This value is different from the previous tile's, but the two tiles would be visually identical, so the second value is redundant. 

To eliminate the redundancies, we add an extra condition to our Boolean directional check: when checking for the presence of bordering corner tiles, we also have to check for neighboring tiles in the four cardinal directions (directly North, East, South, or West). 

For example, the tile to the North-East is neighbored by existing tiles to the North and East, whereas the tiles to the South-West and South-East are not (the tile to the South is missing). This means that the South-West and South-East tiles are not included in the bitmasking calculation. 

With this new condition, the Boolean check returns a binary value of 00010110, or 1*0 + 2*1 + 4*1 + 8*0 + 16*1 + 32*0 + 64*0 + 128*0 = 22, just like before. Now you can see how the 256 combinations can be represented by only 48 tiles.

Tile Order

Another problem you may notice is that the values calculated by the 8-bit bitmasking procedure no longer correlate to the sequential order of the tiles in the sprite sheet. There are only 48 tiles, but our possible calculated values range from 0 to 255, so we can no longer use the calculated value as a direct reference when grabbing the appropriate sprite. 

What we need, therefore, is a data structure to contain the list of calculated values and their corresponding tile values. How you want to implement this is up to you, but remember that the order in which you check for surrounding tiles dictates the order in which your tiles should be placed in the sprite sheet. 

For this example, we check for bordering tiles in the following order: North-West, North, North-East, West, East, South-West, South, South-East. 

Below is the complete set of bitmasking values as they relate to the positions of tiles in our sprite sheet (feel free to use these values in your project to save time):

Multiple Terrain Types

All of our previous examples assume a single terrain type, but what if we introduce a second terrain to the equation? We need a 5-bit bitmasking solution, and we need to define our two terrain types. We also need to assign a value to the center tile that is only counted under specific conditions. Remember that we are no longer accounting for "empty space" as in the previous examples; tiles must now be surrounded by another tile on all sides.

Tile bitmasking 5-bit example with multiple terrain types

The above figure shows an example with two terrain types and no corner tiles. Type 1 always returns a value of 0 whenever it is detected during the directional check; the center tile value is calculated and used only if it is terrain type 2. 

The center tile in the above example is surrounded by terrain type 2 to the North, West, and East, and by terrain type 1 to the South. The center tile is terrain type 1, so it is not counted. This Boolean check returns a binary value of 00111, or 1*1 + 2*1 + 4*1 + 8*0 + 16*0 = 7.

Tile bitmasking 5-bit example with multiple terrain types

In this example, our center tile is terrain type 2, so it will be counted in the calculation. The center tile is surrounded by terrain type 2 to the North and West. It is also surrounded by terrain type 1 to the East and South. This Boolean check returns a binary value of 10011, or 1*1 + 2*1 + 4*0 + 8*0 + 16*1 = 19.

Dynamic Implementation

The bitmasking calculation can also be performed during gameplay, allowing for real-time changes in tile placement and appearance. This is useful for destructible terrain as well as games that allow for crafting and building. The initial bitmasking procedure is mandatory for all tiles, but any additional dynamic calculations should only be performed when absolutely necessary. For example, a destroyed terrain tile would trigger the bitmasking calculation only for surrounding tiles.

Conclusion

Tile bitmasking is the perfect example of building a working system to aid you in game development. It is not something that directly impacts the player's experience; instead, this method of automating a time-consuming portion of level design provides a valuable benefit to the developer. To put it simply: tile bitmasking is a quick way to make the game do your dirty work, allowing you to focus on more important tasks.

Game Development for Kids


Recently, my four-year-old son asked me, “Daddy, can we make a game together? Can it be a kitty game?”

I’m a gamedev dad. I’ve made 35 games on my own: mostly little freeware game jam entries, but also lots of client projects. The answer was yes, of course, since every parent loves to share their passion in life with their family.

I’m teaching him concepts of design and coding, but it’s a very collaborative process where he draws something in crayon and I turn it into a 3D game as he sits on my lap.

The project continues to be a huge success. I’d like to share my experiences in the hopes that you too will be able to enjoy making a game with someone young in your life.

Kitty Game

Why Games, Specifically?

The advantages of early exposure to STEM (science, technology, engineering, and math) have become well established in recent years. No parents need convincing that this knowledge is important, not just for future jobs but because technology surrounds us. Understanding how the world around us works is what growing up is all about, and teaching our children about the world is a primary goal of parenthood.

Unfortunately, many think it too daunting a task to sit down and teach their kids algebra at a young age, and question how younger kids (who don’t yet understand math or know how to read) could possibly understand how a game is made. Luckily, motivating kids to learn how games are created is made immeasurably easier by one key fact.

They’re Cool

“Here was something that could turn a bunch of rowdy, crazed, mini-monsters into avid creatives and creators, genuinely excited to learn.”—Gabriel Williams, ProCore3D

Kids like to play and like to build. There’s a tremendous satisfaction in the creation of something. It’s empowering; it gives them a sense of accomplishment and self-worth; it’s a thing they can call their own.

Don’t forget that making a game is something that kids can boast about to their peers: gamedev is officially cool. This is something teachers can and should take big advantage of.

Kids Are Smarter Than We Think

Consider a typical four-year-old’s worldview. They might not understand the principles of physics, but they do know that objects fall to the ground when dropped, they know balls bounce off walls, and they know cars speed along with the help of wheels.

There’s plenty of room to experiment within these simplistic parameters; you don’t need to know how to calculate a dot product to explain collision response and restitution, and kids know that some things are more slippery than others. Friction and drag and other concepts related to rigid-body Newtonian physics are the perfect subject matter for curious young minds.

Kids Love Physics

Sound crazy? Well, what child doesn’t get a thrill out of smashing a tower of blocks, or sending a train off a cliff, or throwing snowballs? In game terms, it’s easy to imagine a younger child wanting to explore game mechanics inspired by Angry Birds (a ballistics simulation).

Physics allows youngsters to do things they can’t in the real world. They can smash cars or push buildings around. They can jump around and cause general mayhem. This instant feedback in a physics simulation is why I warmly recommend making a game with real-time physics.

With a physics simulation implemented within your game’s basecode, kids can experiment with cause and effect to their heart’s content by spawning cubes, spheres, and prefabs aplenty and simply throwing them around and pushing them off cliffs with a simple character controller.

Benefits for Adults

“For my daughter, it was about getting some one-on-one time with an otherwise very busy and preoccupied daddy. It could have been any activity, really. The focus was on togetherness.”—Ryan Henson Creighton, Untold Entertainment

By undertaking a creative project together, you also help yourself. There are obvious benefits for us mentors to reap, from personal interactions leading to deeper bonding, to the simple fact that teaching something often helps you learn it better.

“There is no better way to feel re-invigorated or inspired than by kids, purely because their imagination knows no limitations or boundaries.”—Stacey Mulcahy, YoungGameMakers

Working with children refreshes your own imagination and motivation. It lights a fire under your inner critic, and reawakens your inner child, ready to play in earnest, unironically, unselfconsciously.

How I Did It

Here is my personal story. It is an ongoing project, currently taking place in my home office. Not all this advice will apply to your personal situation, and my student may be quite unlike yours, but with luck some of the wisdom I’ve learned in making a game with my preschooler will help you on your own quest.

Keep Things Simple

It probably goes without saying that one key to retaining a child’s attention is to keep it short. Nothing ruins the fun of learning more than boredom. Twenty minutes is all you’ve got.

If you’re working with the very young, don’t try to teach them how to write source code at all; in truth this is more like a “field trip” and less like actually teaching them to program. We’re not trying to make a polished commercial game that’s ready to sell; that’s something for older kids working on their tenth jam game. The goal here is to build something fun together, much like a fort out of couch cushions, or a treehouse. You’ll do most of the work, but you’ll both reap huge benefits.

Prepare Yourself First

Although prep work and asset gathering are the typical first steps in any creative endeavour, a child who wants to paint a picture would only be disappointed by a shopping trip to the art store.

If you’ve never made a game before, this first prep stage will prove to be a significant challenge. A fun one, but not something you should expect a child to sit around watching. Join a few game jams solo, first, and then share your newfound skills. You don’t have to be a master game developer, just enthusiastic and with tools in hand.

If you’ve made several games, you’ll know that at the onset of every project is a ramping up of productivity as you assemble tools, create new folders, add empty scripts to your project, and collect art assets.

Skip Square One

“Start at the finish line! Begin with a simple game… and modify it until interest wanes.”—Farbs, creator of Captain Forever

Preparing a “basecode” or “skeleton project” is essential if you want to hit the ground running. Before you even sit a child down in front of a game project, you should prepare an MVP (minimum viable product) that contains the bare necessities for a game: some inits, and the ability to move around, trigger a sound, and detect collisions.

One quick way to do that is to download one of the hundreds of “starter kits” on your engine’s asset store, or rip out the core of an old game you made, such as a game jam entry. These starter kits, available both for free and for a few dollars, encapsulate all the mechanics a simple game needs, whether it is a platformer, shooter, or puzzle game.

Before you create a basic MVP project ready for tinkering, you’ll want to talk with your new apprentice and get a general sense of the kinds of things they imagine doing. You have a lot of leeway here to suggest features or gameplay mechanics that you feel are within the realm of “technically possible” and “easy to implement”. The younger the child, the more likely that what they request will sound suspiciously similar to their current favorite game.

In my personal case, our only shared game experiences have been LEGO games, which are some of the most accessible AAA titles around. I started by searching the Unity Asset Store and buying a cheap platformer starter kit. If your student is passionate about Minecraft instead, you should work in voxels, for which plenty of starter kits are available for the price of a pizza.

Kids love Minecraft

Skipping square one also ensures that you will always have a blank canvas that “just works” as a testbed for experiments and to get right into the action within a few seconds. Don’t bother trying to set up a fresh new project while your child watches; by the time you’re ready for their input, the moment will have passed.

I didn’t sit my son in front of a keyboard and expect him to start typing. I engaged him in a conversation about what we were trying to accomplish, and did all the work myself under his enthusiastic direction.

A Typical Lesson

My son sits on my lap for ten minutes at a time and tells me what he’d like me to implement. He loves to playtest each build and offer suggestions for improvement. He designs characters in crayon, and asks me to try to make 3D versions of his imaginings.

It’s actually worked quite well, and without ever so much as mentioning it to him I find he keeps begging to work on it some more, a couple times a week, just for fun.

It’s like building LEGO spaceships with him, an activity I’ve put quite a large number of happy hours into over the last few years, in that it’s a collaborative experience where we build something together.

Once I had kitties running around, I showed it to him. He was enthralled. Overjoyed. It “just worked” for him and his clumsy hands. He couldn’t die. The game never told him he failed. It was finally more a “toy” than a game.

The Ongoing Process

We sat together, and I built whatever he wanted.

“It needs things to smash. Can you make it so you can smash everything? Can I push the cars off the cliff?”

Once implemented, he would beg to play the game, and when allowed he would meticulously push cars off the cliff at the edge of the map for as long as I’d let him!

My son's game

We would play together and smash holes in the buildings and eat sushi for what felt like idyllic hours of pure summer holiday fun—free from any rules, without a “you lose game over” condition. He’d laugh, I’d laugh, and it felt like building sandcastles at the beach or climbing playground equipment.

It sounds corny, but I know these moments will be some of my happiest memories, ones I hope to savour decades from now. My son and I, playing a game he’d helped me make.

The End Product

Sure, it’s broken, buggy, cheap, unpolished, and held together with duct tape. It’s like a spaceship made out of a cardboard box with crayon buttons and wires drawn all over it. A child’s half-finished messy painting, a non-game, a mess.

But it’s so much more than that for me. For me, it’s the culmination of a lifelong dream, a few happy afternoons tucked away, free from stress.

For you, Kitty Game is a physics sandbox with up to eight silly cats running around destroying cars and houses, being chased by evil dogs, and eating delicious sushi.

There’s no real point to the game—but the funny thing is, many people go on to play for quite a long time.

Tips and Tricks

“There’s no better way to ingrain a hatred for something in a kid than to make it mandatory.”—Tom Farro

Avoid the Keyboard

Keyboard controls are confusing. Faced with over a hundred buttons, each with a complex glyph that may or may not be understood, children will hesitate. They’re worried about pressing Delete or Escape, and the keyboard is likely in use by the teacher.

Most kids have been taught by nervous adults that some keys on the keyboard are booby-trapped and could cause terrible things to happen. Who hasn’t admonished a child that they might delete our work if they messed with our keyboards?

For this reason, I recommend implementing gamepad controls immediately. Even kids as young as two years old can use a joystick and push a button on a gamepad.

Alternatively, if the game runs on a touch-screen device, you’ll find kids are more than happy to mash away at the screen.

The reason I love having an alternative control device in the child’s hand is that they retain the feeling of control, while being sequestered away from the actual code.

An ideal format is to have two chairs in front of your computer. The child is holding a gamepad, and sits beside the adult, who types into the IDE and runs the builds for testing. You have a gamepad as well, so the two of you can play together with each iteration.

Do this often: after every line of code or tweak to a few variables in the editor. Doing so keeps boring work sessions short, and rewards the child’s patience with a gaming session.

More Play Than Work

“With younger kids I do the programming but they tell me what to do every step of the way. I get them to draw the art and pick colors.”—Sarah Northway, creator of Rebuild and half of Northway Games

One way I’ve made our sessions so enjoyable is by allowing generous timespans for each playtest session, while keeping work sprints ultra short—in the two- to five-minute range.

After each session, I ask my student what changes need to be made. Examples of requests include “Can we give him a lightning sword?” and “We should be able to jump higher”.

These iterative development loops teach some central lessons required of all developers: discipline, patience, and an experimental, iterative mindset. The value of small improvements over time, and an acknowledgement of the limits we all face.

Expect and Accept Failure

You’ll invariably be asked to implement something that would take you too long, or that would be technically not feasible given the engine you’re working with.

Kids are okay with this, in the same way that they appreciate that not all LEGO designs are destined to stay standing, and not all finger paintings look the way they initially envisaged.

This acceptance of imperfect results, outcomes that are typically less awesome than those in our heads, is the key to being a software developer. You try some things. They blow up in your face. Finally you get it to work, and you move on, even if the feature turned out a little less spectacular than you’d imagined before work had begun.

The more ambitious the imagination, the greater some creative disappointments are going to be. I feel that this is a very powerful life lesson. Not a negative or cynical worldview but instead a healthy way to live. Grownup life will be filled with slower progress than planned, frustrations along the way to great destinations, and the occasional roadblock.

Patience and Iteration

For very young children like my son, they’re not yet actually learning to code—they’re just learning that doing anything takes much patience. He now knows what a “bug” is, and how grownups fix them: trial and error, over and over, until what was broken starts working, like building a very tall block tower.

This “try, try again” work ethic is central to all engineering and science, and goes beyond the foundations of software development through to personal development. Back to the drawing board! Time for a redesign! The first version wasn’t stable! These are all phrases we often exclaim while playing with LEGO, train sets, sandcastles, or building blocks. It doesn’t always work on the first try, but through a process of iterative refinement we can build amazing things.

Realistic Goal Setting

Like an overly optimistic, top-heavy tower of blocks, the dreams that children and adults alike begin with are too big to be realistic. The patience and positive reaction to small failures along the way in big projects will serve children well for their entire life.

Nothing will work perfectly on the first try, whether now, sitting on Dad’s lap making a videogame, in a decade, when he is building an RC car, or as an adult, working on NASA’s next amazing rocket engine.

Fundamentals of Software Development

By explaining concepts and roadblocks along the way, we’ve covered some pretty hefty topics. My eager student now vaguely understands concepts such as collision detection, sound effects, gravity, cameras, and what source code is. He gets the difference between the editor and a compiled executable.

He’s wrapping his head around prefabs and particle systems, and understands projectiles and pickups. He groks main menus and HUD overlay score counters, timers and events, what “respawning” is, and how we can intercept triggers like onCollision to run some custom code (like cause something to explode).

Kid making game

You’d be amazed at the level of understanding of game systems and simulation even a pre-school kid can reach, within reason. I think these concepts (simulation, gravity, and event-driven coding principles) will send him rather well prepared into kindergarten when he goes next September, and they don’t require true math ability—even little kids can estimate based on their real-world experiences, and even pre-schoolers think about gravity.

Even at this most basic level, my pre-schooler has learned lessons not just about specifics like what a level is, gravity, collision, simulation, and so on, but also about design and project management: dealing with frustration and repetition, trial and error, discipline to work on something that takes more than one session to finish. These are valuable tools that kids already deal with when building LEGO structures that are initially unstable. The “back to the drawing board” mentality that game development requires is also what growing up requires.

“Kids learn how to embrace both art and technology with game making… it’s a great way to open the door to the potential of a technology career as it does encompass so many different disciplines.”—Stacey Mulcahy, YoungGameMakers

PWC video game challenge

Make a Game Together!

Perhaps you have a little person in your life who loves games. Have they asked about how they’re made? You’re the kind of person who reads gamedev tutorials. You may have already made your own games. Have you shared this knowledge and enthusiasm with them? It’s never too early.

Why not dive in and share in the bonding experience, the learning opportunity, and the pure joy of collaborative creation with someone you love. Make a game with a young child. It’s feasible and will benefit you both. You might be surprised just how smart kids are, if they’re given a chance. Good luck!

50% Off Hosting and Free Envato Tuts+


Until 18 March, you'll receive a free year subscription to Envato Tuts+ when you purchase a hosting plan from SiteGround.

With up to 50% off, SiteGround offers three plans to get you on the way to hosting bliss. Whether you're just starting or already have a few existing sites, you'll receive the following support:

Easy Start

One-click setup tools for WordPress, Joomla!, Drupal and most other popular web apps. Free migration assistance is also available for existing projects.

Cutting-Edge Infrastructure

All servers come with SSD drives and run the hottest new virtualization method: Linux containers. The latest technology, like HTTP/2 and PHP7, is always integrated as soon as it’s ready.

Fully Managed Service

Server software and hardware are optimized for speed and security. A 24/7 expert support team is available, so that server management is no longer a hassle.

Dev Tools

Easily create git repos, make staging copies with one click, and control your site with SSH or app-specific command line interfaces. Those are just some of the tools available to make the development workflow a breeze. 

Your Free Year on Envato Tuts+

Your free year on Envato Tuts+ gives you access to all 21,000+ written tutorials, over 740 video courses, and 180 downloadable ebooks on a range of topics including web design, code, design & illustration and photography. 

Hurry, don't miss out—offer ends 18 March.

Terms and conditions apply: 

Redemption of this offer is valid for new SiteGround customers and new Envato Tuts+ subscribers only. This coupon code is valid for one redemption only of a Tuts+ yearly subscription worth $180. By taking part in this promotion, you understand that any fraudulent activity associated with your account may result in your account being suspended or other action deemed appropriate by Envato Tuts+.

Basic 2D Platformer Physics, Part 1


Character Collisions

Alright, so the premise looks like this: we want to make a 2D platformer with simple, robust, responsive, accurate and predictable physics. We don't want to use a big 2D physics engine in this case, and there are a few reasons for this:

  • unpredictable collision responses
  • hard to set up accurate and robust character movement
  • much more complicated to work with
  • takes a lot more processing power than simple physics

Of course, there are also many pros to using an off-the-shelf physics engine, such as being able to set up complex physics interactions quite easily, but that's not what we need for our game.

A custom physics engine helps the game to have a custom feel to it, and that's really important! Even if you're going to start with a relatively basic setup, the way in which things will move and interact with each other will always be influenced by your own rules only, rather than someone else's. Let's get to it!

Character Bounds

Let's start by defining what kind of shapes we'll be using in our physics. One of the most basic shapes we can use to represent a physical object in a game is an Axis Aligned Bounding Box (AABB). AABB is basically an unrotated rectangle.

Example of an AABB

In a lot of platformer games, AABBs are enough to approximate the body of every object in the game. They are extremely effective, because it is very easy to calculate an overlap between two AABBs, and they require very little data—to describe an AABB, it's enough to know its center and size.

Without further ado, let's create a struct for our AABB.

As mentioned earlier, all we need here as far as data is concerned are two vectors: the first one will be the AABB's center, and the second one the half size. Why half size? Most of the time, the calculations need the half size anyway, so instead of computing it every time, we'll simply store the half size rather than the full size.

Let's start by adding a constructor, so it's possible to create the struct with custom parameters.

With this, we can create the collision-checking functions. First, let's write a simple check of whether two AABBs collide with each other. This is very simple—we just need to see whether the distance between the centers on each axis is less than the sum of the half sizes.

Here's a picture demonstrating this check on the x axis; the y axis is checked in the same manner.

Demonstrating a check on the X-Axis

As you can see, if the sum of half sizes were to be smaller than the distance between the centers, no overlap would be possible. Notice that in the code above, we can escape the collision check early if we find that the objects do not overlap on the first axis. The overlap must exist on both axes, if the AABBs are to collide in 2D space.

Moving Object

Let's start by creating a class for an object that is influenced by the game's physics. Later on, we'll use this as a base for an actual player object. Let's call this class MovingObject.

Now let's fill this class with the data. We'll need quite a lot of information for this object:

  • position and the previous frame's position
  • speed and the previous frame's speed
  • scale
  • AABB and an offset for it (so we can align it with a sprite)
  • is object on the ground and whether it was on the ground last frame
  • is object next to the wall on the left and whether it was next to it last frame
  • is object next to the wall on the right and whether it was next to it last frame
  • is object at the ceiling and whether it was at the ceiling last frame

Position, speed and scale are 2D vectors.

Now let's add the AABB and the offset. The offset is needed so we can freely match the AABB to the object's sprite.

And finally, let's declare the variables which indicate the position state of the object, whether it is on the ground, next to a wall or at the ceiling. These are very important because they will let us know whether we can jump or, for example, need to play a sound after bumping into a wall.

These are the basics. Now, let's create a function that will update the object. For now we won't be setting everything up, but just enough so we can start creating basic character controls.

The first thing we'll want to do here is to save the previous frame's data to the appropriate variables.

Now let's update the position using the current speed.

And just for now, let's make it so that if the vertical position is less than zero, we assume the character's on the ground. This is just for now, so we can set up the character's controls. Later on, we'll do a collision with a tilemap.

After this, we also need to update the AABB's center, so it actually matches the new position.

For the demo project, I'm using Unity, and to update the position of the object it needs to be applied to the transform component, so let's do that as well. The same needs to be done for the scale.

As you can see, the rendered position is rounded. This is to make sure the rendered character is always snapped to a full pixel.

Character Controls

Data

Now that we have our basic MovingObject class done, we can start playing with the character movement. It's a very important part of the game, after all, and can be done pretty much right away—no need to delve too deep into the game systems just yet, and it'll be ready when we need to test our character-map collisions.

First, let's create a Character class and derive it from the MovingObject class.

We'll need to handle a few things here. First of all, the inputs—let's make an enum which will cover all of the controls for the character. Let's create it in another file and call it KeyInput. 

As you can see, our character can move left, right, down and jump up. Moving down will work only on one-way platforms, when we want to fall through them.

Now let's declare two arrays in the Character class, one for the current frame's inputs and another for the previous frame's. Depending on a game, this setup may make more or less sense. Usually, instead of saving the key state to an array, it is checked on demand using an engine's or framework's specific functions. However, having an array which is not strictly bound to real input may be beneficial, if for example we want to simulate key presses.

These arrays will be indexed by the KeyInput enum. To easily use those arrays, let's create a few functions that will help us check for a specific key.

Nothing special here—we want to be able to see whether a key was just pressed, just released, or if it's on or off.

Now let's create another enum which will hold all of the character's possible states.

As you can see, our character can either stand still, walk, jump, or grab a ledge. Now that this is done, we need to add variables such as jump speed, walk speed, and current state.

Of course there's some more data needed here such as character sprite, but how this is going to look depends a lot on what kind of engine you're going to use. Since I'm using Unity, I'll be using a reference to an Animator to make sure the sprite plays animation for an appropriate state.

Update Loop

Alright, now we can start the work on the update loop. What we'll be doing here will depend on the current state of the character.

Stand State

Let's start by filling up what should be done when the character is not moving—in the stand state. First of all, the speed should be set to zero.

We also want to show the appropriate sprite for the state.

Now, if the character is not on the ground, it can no longer stand, so we need to change the state to jump.

If the GoLeft or GoRight key is pressed, then we'll need to change our state to walk.

In case the Jump key is pressed, we want to set the vertical speed to the jump speed and change the state to jump.

That's going to be it for this state, at least for now. 

Walk State

Now let's create the logic for moving on the ground, and right away start playing the walking animation.

Here, if neither the left nor the right button is pressed, or if both are pressed, we want to go back to the standing state.

If the GoRight key is pressed, we need to set the horizontal speed to mWalkSpeed and make sure that the sprite is scaled appropriately—the horizontal scale needs to be changed if we want to flip the sprite horizontally. 

We also should move only if there is actually no obstacle ahead, so if mPushesRightWall is set to true, then the horizontal speed should be set to zero if we're moving right.

We also need to handle the left side in the same way.

As we did for the standing state, we need to see if a jump button is pressed, and set the vertical speed if that is so.

Otherwise, if the character is not on the ground then it needs to change the state to jump as well, but without an addition of vertical speed, so it simply falls down.

That's it for the walking. Let's move to the jump state.

Jump State

Let's start by setting an appropriate animation for the sprite.

In the Jump state, we need to add gravity to the character's speed, so it goes faster and faster towards the ground.

But it would be sensible to add a limit, so the character cannot fall too fast.

In many games, when the character is in the air, the maneuverability decreases, but we'll go for some very simple and accurate controls which allow for full flexibility when in the air. So if we press the GoLeft or GoRight key, the character moves in that direction while in the air, just as fast as it would on the ground. In this case we can simply copy the movement logic from the walking state.

Finally, we're going to make the jump higher if the jump button is pressed longer. To do this, what we'll actually do is make the jump lower if the jump button is not pressed. 

As you can see, if the jump key is not pressed and the vertical speed is positive, then we clamp the speed to the max value of cMinJumpSpeed (200 pixels per second). This means that if we were to just tap the jump button, the speed of the jump, instead of being equal to mJumpSpeed (410 by default), will get lowered to 200, and therefore the jump will be shorter.

Since we don't have any level geometry yet, we should skip the GrabLedge implementation for now.

Update the Previous Inputs

Once the frame is all finished, we can update the previous inputs. Let's create a new function for this. All we'll need to do here is move the key state values from the mInputs array to the mPrevInputs array.

At the very end of the CharacterUpdate function, we still need to do a couple of things. The first is to update the physics.

Now that the physics is updated, we can see if we should play any sound. We want to play a sound when the character bumps into any surface, but right now it can only hit the ground, because collision with the tilemap is not implemented yet. 

Let's check if the character has just fallen onto the ground. It's very easy to do so with the current setup—we just need to look up if the character is on the ground right now, but wasn't in the previous frame.

Finally, let's update the previous inputs.

All in all, this is how the CharacterUpdate function should look now, with minor differences depending on the kind of engine or framework you're using.

Init the Character

Let's write an Init function for the character. This function will take the input arrays as the parameters. We will supply these from the manager class later on. Other than this, we need to do things like:

  • assign the scale
  • assign the jump speed
  • assign the walk speed
  • set the initial position
  • set the AABB

We'll be using a few of the defined constants here.

In the case of the demo, we can set the initial position to the position in the editor.

For the AABB, we need to set the offset and the half size. The offset in the case of the demo's sprite needs to be just the half size.

Now we can take care of the rest of the variables.

We need to call this function from the game manager. The manager can be set up in many ways, all depending on the tools you're using, but in general the idea is the same. In the manager's init, we need to create the input arrays, create a player, and init it.

Additionally, in the manager's update, we need to update the player and player's inputs.

Note that we update the character's physics in the fixed update. This will make sure that the jumps will always be the same height, no matter what frame rate our game works with. There's an excellent article by Glenn Fiedler on how to fix the timestep, in case you're not using Unity.

Test the Character Controller

At this point we can test the character's movement and see how it feels. If we don't like it, we can always change the parameters or the way the speed is changed upon key presses.

An animation of the character controller

Summary

The character controls may seem very weightless and not as pleasant as a momentum-based movement for some, but this is all a matter of what kind of controls would suit your game best. Fortunately, just changing the way the character moves is fairly easy; it's enough to modify how the speed value changes in the walk and jump states.

That's it for the first part of the series. We've ended up with a simple character movement scheme, but not much more. The most important thing is that we laid out the way for the next part, in which we'll make the character interact with a tilemap.


How to Make a 3D Model of Your Head Using Blender and a RealSense Camera


I'm very excited about sensification of computing: an idea where smart things sense the environment, are connected, and then become an extension of the user. As an Intel Software evangelist (and a self-confessed hobbyist coder), I'm fortunate enough to get to experiment with these possibilities using Intel's cutting-edge technology. Previously, I had written about 3D-scanning the world around you in real time and putting a virtual object in it; in this tutorial, however, I'd like to explain how to do the converse, by 3D-scanning yourself and then adding that scan to the virtual world.

More specifically, we will:

  1. Scan your head using a RealSense camera and the SDK.
  2. Convert the 3D file from OBJ format to PLY format (for editing in Blender).
  3. Convert the vertex colors to a UV texture map.
  4. Edit the 3D mesh to give it a lower vertex and poly count.

We'll end up with a 3D, colored mesh of your head, ready to use in Unity. (In the next tutorial, we'll see how to actually get this into Unity, and what you could use this for in a game.)

You will need:

Scanning Your Head

After installing the SDK, go to the Charm Bar (in Windows 8) or the Start Menu (in Windows 7 or 10) and search for RealSense SDK Sample Browser. Find it in the list, right-click it, and select Run as Administrator.

Finding the RealSense SDK Sample Browser in Windows 8

In the Sample Browser you will find a section called Common Samples. These are samples that will work on both the front-facing and rear-facing RealSense cameras. Find the sample called 3D Scan (C++).

The RealSense SDK Common Samples

How to Scan Your Head

  • Select Run from the 3D Scan (C++) section of the sample browser.
  • Put your head in front of the camera and you will see it in the view of the scanner. Hold it still until the second window pops up.
  • Once the second window pops up, start to rotate in one direction (slowly), then reverse direction and try to go all the way around. 
  • Once you've made a full loop around, tip your head back and then forward, then hold still. For the last few seconds, keep your eyes looking forward just over the camera, so that your eyes get set in a static position.
  • When the scan is complete, a browser will open, allowing you to drag and rotate the model around. The model will be saved to your Documents folder as 3Dscan.obj.
  • You can rerun the sample as many times as you like to get a scan that you're satisfied with.

Tips for Scanning

  • The sample is designed to track a rigid 3D object in space as it rotates in front of the camera. This means you can scan it from multiple sides.
  • If part of the object is rotating and another part is still, it will confuse the tracking algorithm and you'll end up with a bad scan. So, when scanning your head, do not just turn your neck to scan; instead, rotate your entire torso.
  • You can tip forward, tip back, rotate left, rotate right, and turn around. With practice, it is possible to scan an object from all angles.
  • You should scan in an area with even lighting. Avoid shadows and specular reflections, as they will get baked into your 3D texture.

Converting to PLY Format

The OBJ format is the default format you get from the scan. To use it in Blender (and later Unity), we'll convert it to PLY format using a free tool called MeshLab. So, download, install, and run MeshLab, if you haven't already.

From the File menu, select Import Mesh. Find the 3Dscan.obj file you created in the previous section.

MeshLab's Import Mesh dialog

After you've imported your object, it may be on its side. You can left-click at the bottom of the object and drag up to rotate the object and examine it. 

You should see the mesh textured with color. The color information is stored as vertex colors, meaning that each vertex is colored with the pixels obtained from the RGB camera when it scanned your head. 

My head in MeshLab

Click File > Export Mesh As. In the dialog box that appears, select Stanford Polygon File Format (*.ply) from the Files of type drop-down:

Saving an OBJ as a PLY in MeshLab

Save the resulting PLY somewhere you'll be able to find it later.

Converting Vertex Colors to a UV Texture Map

The idea here is to import the 3D file to Blender, make a copy of it, clean up the copy, and map the vertex colors as a UV map for the new duplicate version.

Download, install, and run Blender, if you haven't already.

Import the Object to Blender

Create a new project in Blender, select Import > Stanford (.ply), then select the PLY file you just exported from MeshLab:

Importing a PLY file in Blender

After importing it, you will see that the textures are not visible:

Textureless head in Blender

However, if you switch to Texture view, you will see that the RGB colors from the scan are now mapped to the Texture mode:

Switching to Texture view in Blender
Use this menu to switch to Texture view.

To see the colors properly, switch view mode again—this time, to Material. On the right-hand panel, select the Materials icon, and then create a new Material. 

In the panel editor, set Shading to Shadeless, and in the Options, turn on Vertex Color Paint. This successfully sets the vertex colors to the material of the object.

Material options in Blender

Now, this isn't yet a workable mesh for us. It's full of holes, it's rough, and the poly count is much higher than we'd like it to be, if we're going to use it in a game.

We need to create a version of this mesh that we can clean up, and map this vertex-colored texture onto it as a UV map. That version is the one we will bring into Unity.

Create a Duplicate of the Mesh

Select Object > Duplicate Objects and hit Enter. Do not move your mouse.

Duplicating objects in Blender

Next, select Object > Move to Layer, and then select the square to the right of the current layer (the second square from the left in the top row).

Moving the mesh to another layer in Blender

Go to the Asset Browser and rename your meshes. Name one Original Mesh and the other Second Mesh. This will help you tell them apart.

Reduce the Poly Count of the Duplicate

Select the Second Mesh, select the relevant layer, and switch view mode from Material to Solid.

Then, from the right-hand panel, click the icon that looks like a wrench to add a modifier. We are going to reduce the poly count of this mesh and clean it up.

Select Tools > Remesh. In this dialog, set the Oct Tree to between 7 and 8. Set Mode to Smooth, and turn on Smooth Shading. This will cause you to lose the material mapping:

Smooth shading no material mapping in Blender

Click Apply. You now have a cleaner mesh to which you can apply the vertex and UV map.

A clean mesh in Blender

Apply the Vertex and UV Map

Go into Edit mode on this Second Mesh. Click Select > Select All.

A selected mesh

Create a new window and set it to UV/Image Editor mode to see the UV map.

UVImage Editor in Blender

In the 3D window, select Mesh > UV Unwrap > Smart UV Project.

Smart UV Project

In the UV Map window, select New to create a texture. Turn off Alpha. Name the texture the name you would like to give your final model.

Switch to Object mode. You will see a black model and a black UV map. That's okay!

Now select both layers by Shift-clicking them to make them both visible.

Turning on multiple layers in Blender
Select both layers...
Two visible layers
...and your view pane will look like this.

In the Asset Browser, select the Original Mesh. Click the camera icon.

In the Bake section, turn on Selected to Active. Make sure that Bake to Vertex Color is turned off.

Bake panel in Blender

In the Scene section, set Color Management to None.

Scene panel in Blender

In the Asset menu, select the Original Mesh. Then, hold down Shift and select the Second Mesh. (You must do it in that order.) Be sure that both layers are visible with the eye and camera icons.

Click the camera icon. In the Bake section, click Bake.

Baked mesh in Blender

You should now have a texture mapped. To see this, turn off the first layer, so you are only seeing the layer with the second mesh. Select the Texture view and you will see your mesh textured with your UV Map. 

You will notice some funkiness where the image filled in missing areas of your mesh:

Auto-filled in areas in a mesh in Blender

You can switch to Texture Paint mode to clean those up. You can also now switch to Sculpt mode to fill in missing areas, as well as to clean up or add features to your mesh. 

Export the Mesh

Once you've cleaned up your mesh, you can get ready to export it:

  1. Make sure the UV map is named the same as the mesh, and is in the same directory as the mesh.
  2. Go to the UV Map window, select Image > Save As, and navigate to the directory where you will save the mesh.
  3. Select File > Export to .FBX, and enter the same name you gave the UV map.

Your model is complete!

Conclusion

You've scanned in your head, reduced the poly count, cleaned up the model, and exported it as a 3D model ready to use in other applications. In a future tutorial, we'll import this 3D mesh into Unity and look at how it could be used in games.

The Intel® Software Innovator program supports innovative independent developers who display an ability to create and demonstrate forward looking projects. Innovators take advantage of speakership and demo opportunities at industry events and developer gatherings.

Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for the Internet of Things, Android*, Intel® RealSense™ Technology, Modern Code, Game Dev and Windows* to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.

Get Your Head in the Game With Intel RealSense


Since 1999, I've worked through my company, TheGameCreators, to make it easier for everyone to make games. Last year, I started developing with the Intel RealSense SDK and camera, which lets you capture 3D images in real time. (I actually won Intel's RealSense App Challenge with my Virtual 3D Video Maker!) It should come as no surprise that I've been looking at how RealSense can be used for game development, and in this tutorial I'll share one example: how to scan your (or your player's) head into your game. 

Why Use RealSense?

Many 3D game characters will include a 2D texture image that is used as a "skin" for the 3D model, as with Kila, below:

Left: Kila's texture. Right: The 2D texture applied to the 3D model. From Game Character Creation Series: Kila.

You can see a distinct face in the texture. If you want to change the character's face to match your own, it's tempting to try pasting a flat 2D photo onto the texture. However, this tends not to give great results, as demonstrated below:

Result of pasting a flat 2D photo onto an existing texture
The result of pasting a flat photo of a face onto an existing 3D model's texture.

That's fairly horrifying. The problem is, the face texture does not map onto the underlying "bone structure" of the model.

RealSense devices include an infra-red projector and camera, so they can capture a 3D projection map as well as a 2D photo of whatever they're pointed at. This means that you can adjust the 2D texture and the 3D model of a character to fit the new face.

In-Game Character Creator Workflow

Let's suppose we want to allow our players to take a few selfies with a RealSense camera, and then use them to create a 3D avatar for them.

Rather than attempting to scan in their entire body—and then creating a 3D model, texture, and set of animations from that—we'd be better off buying a 3D full-body model from a site like 3DOcean...

3DOcean full human body models

...then cutting off the head:

This makes it much easier to develop and test the game; the only thing that will vary between player avatars is the head.

Capturing the Player's Likeness

To capture the player's likeness, the in-game character creator must make several scans of their head, all precisely controlled so that the software can stitch it all back together again. (The Intel RealSense SDK has provided a sample that does just that; it can be run immediately from the pre-compiled binaries folder.)

Naturally you will want to create your own 3D scanner to suit your project, so what follows is just a breakdown of how the technique could work for you.

Method 1

You could take the mesh shots separately and then attempt to manually connect them in a makeshift art tool or small program. However, the problem with live 3D scanning is that there is a human at the other end of it, which means they fidget, shift in their seats, lean back, and move forward in subtle ways—and that’s not accounting for the subtle pitch, yaw and roll of the head itself. Given this tendency for the head to shift position and distance from the camera during the scanning session, this approach will only lead to frustration when trying to reassemble the shots.

Method 2

If your scans are accurate, you could blend the meshes against the first mesh you scanned and make adjustments if a vertex exists in the world space model data, or create a new world space vertex point if none previously existed. This will allow your head mesh to get more refined with the more samples the player provides. The downside to this technique is that getting your player's head to remain in a central position during scanning, and converting that 3D data to world coordinates, creates some challenges.

Method 3

The perfect technique is to detect signature markers within the scanned mesh, in order to get a "vertex fix". Think of this as charting your position on the ocean by looking at the stars; by using constellations, you can work out both your relative orientation and position. This is particularly useful when you produce a second mesh: you can apply the same marker detection algorithm, find the same pattern and return the relative orientation, position and scale shift from the first mesh. Once you have this offset, adding the second mesh to the first is a simple world transform calculation and then adding the extra vertex data to the original mesh. This process is repeated until there are no more meshes you wish to submit, and then you proceed to the final step.

Key Points

  • I would not recommend implementing the techniques here until you have a good practical grasp of 3D geometry coding, as manipulating variable oriented vertex data can be tricky.
  • If the technique you chose involved layering vertex data onto a single object, you will likely have a lot of surplus and overlapping vertices. These need to be removed and a new triangle list produced to seal the object.
  • Storing the color data of your face scan in the diffuse component of the vertex data is a handy way to avoid the need for a texture, and offers other advantages when it comes to optimizing the final mesh.
  • Don’t forget to process the raw depth data to remove interference such as stray depth values that are not consistent with the surrounding depth pixel data. Fail to do this and all your work to create a good 3D head mesh will be in vain.

This is definitely The Hard Part, but it is not impossible; remember, there is a working demo in the SDK files you can use as a basis to get this running in your game's engine.

Using the Likeness in the Game

Let's suppose you have a decent 3D model (and corresponding texture) of the player's face:

A single 3D scan of the front of someones head taken with RealSense
A futuristic mugshot.

We can now stick this on the character's body model:

A face without a head

Granted, the result is still fairly horrifying—but we're not done yet. We can take a solution from one of the many pages of the game developers' bible, one we can convince ourselves is a rather nice feature as opposed to a rather sinister hack.

All we do is cover up the gaps with a wig and a hat:

Gap in head covered up by wig and hat

OK, granted, we could just stick the 3D face on the front of an already-modeled 3D head, but this is more fun. If Team Fortress 2 is anything to go by, players will enjoy having the option to customize their avatar's head and face with various accessories; we're killing two birds with one stone!

If you really want to avoid adding hats and wigs to the game character, you'll need to get the player to tie or gel back their hair before making the scans. The flatter their hair is, the better the scan will be. Loose hair interferes with the depth camera, and produces scatter reflections which do not scan well.

It's Easier to Do This at Dev Time

Although getting a player's likeness into the game at runtime is possible, it's a difficult place to start. If you've not attempted this before, I recommend first trying something technically simpler: getting your own likeness (or that of an actor) into the game.

This is easier for three reasons:

  1. There's much more margin for error and room for correction; if you control the whole process yourself, you can manually adjust any colors or meshes that don't come out right, and tweak the end result until it's just right. You aren't reliant on your code working perfectly, first time, for every face.
  2. You can use existing software (like Blender), rather than having to incorporate the whole process into your game's UI and code.
  3. Since it's your game, you're likely to have more patience than your players, so you can take (and retake) more pictures and wait longer for results than it's reasonable to ask a player to.

And based on this latter point, you can take the time to do a scan of your entire head, not just your face. In fact, we've already done this in a previous tutorial. Let's pick up where that left off and walk through how to get this model into Unity 5.

Getting Your Head Into Unity

After following the previous tutorial, you should have an FBX model of your head and a corresponding UV texture map.

Open Unity, then create a new project and scene.

A new Unity project and scene

Click Assets > Import new asset. Select the FBX model you exported from Blender.

When Unity imports the object, it is possible (and even likely) that it won't import its texture, in which case you will see a gray-white version of your mesh:

Untextured mesh in Unity

That's OK—we will add the texture map next.

Select the Materials folder, then right-click and select Import Asset. Select the PNG file you saved in Blender, then import it.

You will now see the texture map alongside a gray-white material with the same name as your mesh. Right-click on the gray-white material (shaped like a ball) and delete it.

Two materials in Unity

Go back to your Assets folder and right-click on the mesh you imported. Select Reimport.

Reimporting a mesh with a different material in Unity

After it reimports, it will create the material properly and texture your mesh:

3D scanned head in Unity

If the model looks pink in the object preview, simply select another object (like the camera), then select your mesh again to make it refresh.

3D scanned and textured head model in Unity

Now you are ready to get your head in the game. Drag your object to the scene. (You can make it bigger and mess around with the camera perspective to get it in view.)

When you hit Play, you should see your object rendered properly.

If you want to adjust the texture to make it more or less shiny or metallic, first go to the Materials folder and select the round-shaped version of your texture. In the Inspector panel, you can adjust the Metallic and Smoothness sliders to get the right look:

Unity Inspector panel - material properties

Conclusion

It might seem an inordinate amount of work just to get the front parts of a head into a virtual scene, but the techniques you have learned in this article can be applied to scanning and stitching anything you want. 

The ability to scan your very essence into the computer and continue your journey in a virtual landscape begins right here with the scanning of a single face. How will games and attitudes change as the in-game protagonists start to resemble real people you might meet in your own life? When you put a face to someone, they become real, and it will be interesting to see how games and players change as a result.

The Intel® Software Innovator program supports innovative independent developers who display an ability to create and demonstrate forward looking projects. Innovators take advantage of speakership and demo opportunities at industry events and developer gatherings.

Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for the Internet of Things, Android*, Intel® RealSense™ Technology, Modern Code, Game Dev and Windows* to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.

Basic 2D Platformer Physics, Part 2


Level Geometry

There are two basic approaches to building platformer levels. One of them is to use a grid and place the appropriate tiles in cells, and the other is a more free-form one, in which you can loosely place level geometry however and wherever you want. 

There are pros and cons to both approaches. We'll be using the grid, so let's see what kind of pros it has over the other method:

  • Better performance—collision detection against the grid is cheaper than against loosely placed objects in most cases.
  • Makes it much easier to handle pathfinding.
  • Tiles are more accurate and predictable than loosely placed objects, especially when considering things like destructible terrain.

Building a Map Class

Let's start by creating a Map class. It will hold all of the map specific data.

Now we need to define all the tiles that the map contains, but before we do that, we need to know what tile types exist in our game. For now, we're planning on only three: an empty tile, a solid tile, and a one-way platform.

In the demo, tile types correspond directly to the type of collision we'd like to have with a tile, but in a real game that's not necessarily so. As you add more visually distinct tiles, it's better to add new types such as GrassBlock, GrassOneWay and so on, letting the TileType enum define not only the collision type but also the appearance of the tile.
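As a sketch, the enum for the demo's three tile types might look like the following (the exact names are an assumption based on the series' conventions):

```csharp
// Tile types used by the demo; the collision behavior is derived
// directly from the type.
public enum TileType
{
    Empty,  // no collision
    Block,  // solid from every direction
    OneWay  // solid only when landing on it from above
}
```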

Now in the map class we can add an array of tiles.

Of course, a tilemap that we can't see is not of much use to us, so we also need sprites to back up the tile data. Normally in Unity it is extremely inefficient to have each tile be a separate object, but since we're just using this to test our physics, it's OK to make it this way in the demo.

The map also needs a position in the world space, so that if we need to have more than just a single one, we can move them apart.

It also needs a width and height, in tiles.

And the tile size: in the demo we'll be working with a fairly small tile size, which is 16 by 16 pixels.
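Putting the pieces described above together, the Map class's data might look roughly like this sketch (the field names and default map size are assumptions):

```csharp
using UnityEngine;

public class Map : MonoBehaviour
{
    // The tile data, indexed as [x, y].
    private TileType[,] mTiles;

    // Sprites backing the tile data; one object per tile is inefficient,
    // but acceptable for a physics demo.
    private SpriteRenderer[,] mTilesSprites;

    // The map's position in world space.
    public Vector3 mPosition;

    // The size of the map, in tiles.
    public int mWidth = 80;
    public int mHeight = 60;

    // The size of a tile, in pixels.
    public const int cTileSize = 16;
}
```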

That would be it. Now we need a couple of helper functions to let us access the map's data easily. Let's start by making a function which will convert world coordinates to the map's tile coordinates.

As you can see, this function takes a Vector2 as a parameter and returns a Vector2i, which is basically a 2D vector operating on integers instead of floats.

Converting the world position to the map position is very straightforward—we simply need to shift the point by mPosition so we return the tile relative to the map's position and then divide the result by the tile size.

Note that we had to shift the point additionally by cTileSize / 2.0f, because the tile's pivot is in its center. Let's also make two additional functions which return only the X and the Y component of the position in map space. They'll be useful later.

We should also create a complementary function which, given a tile, returns its position in world space.
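Here's a sketch of those conversion helpers, following the math described above (Vector2i is the integer vector type mentioned earlier; the method names are assumptions):

```csharp
// Converts a world position to tile coordinates in map space.
public Vector2i GetMapTileAtPoint(Vector2 point)
{
    return new Vector2i(
        (int)((point.x - mPosition.x + cTileSize / 2.0f) / (float)cTileSize),
        (int)((point.y - mPosition.y + cTileSize / 2.0f) / (float)cTileSize));
}

// Per-axis versions of the same conversion.
public int GetMapTileXAtPoint(float x)
{
    return (int)((x - mPosition.x + cTileSize / 2.0f) / (float)cTileSize);
}

public int GetMapTileYAtPoint(float y)
{
    return (int)((y - mPosition.y + cTileSize / 2.0f) / (float)cTileSize);
}

// Converts tile coordinates back to the tile's center in world space.
public Vector2 GetMapTilePosition(int tileIndexX, int tileIndexY)
{
    return new Vector2(
        (float)(tileIndexX * cTileSize) + mPosition.x,
        (float)(tileIndexY * cTileSize) + mPosition.y);
}
```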

Aside from translating positions, we also need to have a couple of functions to see whether a tile at a certain position is empty, is a solid tile or is a one-way platform. Let's start with a very generic GetTile function, which will return a type of a specific tile.

As you can see, before we return the tile type, we check whether the given position is out of bounds. If it is, we treat it as a solid block; otherwise, we return the tile's actual type.
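A sketch of GetTile under those assumptions:

```csharp
public TileType GetTile(int x, int y)
{
    // Anything outside the map is treated as a solid block.
    if (x < 0 || x >= mWidth || y < 0 || y >= mHeight)
        return TileType.Block;

    return mTiles[x, y];
}
```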

Next in line is a function to check whether a tile is an obstacle.

In the same way as before, we check if the tile is out of bounds, and if it is then we return true, so any tile out of bounds is treated as an obstacle.

Now let's check whether the tile is a ground tile. We can stand on both a block and a one-way platform, so we need to return true if the tile is any of these two.

Finally, let's add IsOneWayPlatform and IsEmpty functions in the same manner.
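Here are sketches of the type-specific helpers described above; note how the out-of-bounds rule differs between IsObstacle and the rest:

```csharp
public bool IsObstacle(int x, int y)
{
    // Out-of-bounds tiles count as obstacles.
    if (x < 0 || x >= mWidth || y < 0 || y >= mHeight)
        return true;

    return mTiles[x, y] == TileType.Block;
}

public bool IsGround(int x, int y)
{
    if (x < 0 || x >= mWidth || y < 0 || y >= mHeight)
        return false;

    // We can stand on both solid blocks and one-way platforms.
    return mTiles[x, y] == TileType.OneWay || mTiles[x, y] == TileType.Block;
}

public bool IsOneWayPlatform(int x, int y)
{
    if (x < 0 || x >= mWidth || y < 0 || y >= mHeight)
        return false;

    return mTiles[x, y] == TileType.OneWay;
}

public bool IsEmpty(int x, int y)
{
    if (x < 0 || x >= mWidth || y < 0 || y >= mHeight)
        return false;

    return mTiles[x, y] == TileType.Empty;
}
```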

That's all that we need our map class to do. Now we can move on and implement the character collision against it.

Character-Map Collision

Let's go back to the MovingObject class. We need to create a couple of functions which will detect whether the character is colliding with the tilemap.

The method by which we'll know whether the character collides with a tile or not is very simple. We'll be checking all the tiles that exist right outside the moving object's AABB.


The yellow box represents the character's AABB, and we'll be checking the tiles along the red lines. If any of those overlap with a tile, we set a corresponding collision variable to true (such as mOnGround, mPushesLeftWall, mAtCeiling or mPushesRightWall).

Let's start by creating a function HasGround, which will check if the character collides with a ground tile. 

This function returns true if the character overlaps any of the tiles below it. It takes the old position, the current position, and the current speed as parameters, and it also returns the Y position of the top of the tile we're colliding with, as well as whether that tile is a one-way platform.

The first thing we want to do is to calculate the center of AABB.

Now that we've got that, for the bottom collision check we'll need to calculate the beginning and end of the bottom sensor line. The sensor line is just one pixel below the AABB's bottom contour.

The bottomLeft and bottomRight represent the two ends of the sensor. Now that we've got these, we can calculate which tiles we need to check. Let's start by creating a loop in which we'll be going through the tiles from the left to the right.

Note that there's no condition to exit the loop here—we'll do that at the end of the loop. 

The first thing we should do in the loop is make sure that checkedTile.x is not greater than the right end of the sensor. It might overshoot because we move the checked point in multiples of the tile size; for example, if the character is 1.5 tiles wide, we need to check the tile at the left edge of the sensor, then the tile one tile-width to the right, and then the tile at 1.5 tile-widths (the right edge) rather than at 2.

Now we need to get the tile coordinate in the map space to be able to check the tile's type.

First, let's calculate the tile's top position.

Now, if the currently checked tile is an obstacle, we can easily return true.

Finally, let's check whether we already looked through all the tiles that intersect with the sensor. If that's the case, then we can safely exit the loop. After we exit the loop not finding a tile we collided with, we need to return false to let the caller know that there is no ground below the object.
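Assembled, the basic version of HasGround might look like this sketch (mMap, mAABB and mAABBOffset are assumed members of MovingObject; the one-way platform flag from the function's description is left for the next part):

```csharp
public bool HasGround(Vector2 oldPosition, Vector2 position, Vector2 speed,
                      out float groundY)
{
    groundY = 0.0f;

    // Center of the AABB at the current position.
    Vector2 center = position + mAABBOffset;

    // The sensor line runs one pixel below the AABB's bottom edge.
    var bottomLeft = new Vector2(center.x - mAABB.halfSize.x,
                                 center.y - mAABB.halfSize.y - 1.0f);
    var bottomRight = new Vector2(center.x + mAABB.halfSize.x, bottomLeft.y);

    // Walk the tiles under the sensor, from left to right.
    for (var checkedTile = bottomLeft; ; checkedTile.x += Map.cTileSize)
    {
        // Never check past the right end of the sensor.
        checkedTile.x = Mathf.Min(checkedTile.x, bottomRight.x);

        int tileIndexX = mMap.GetMapTileXAtPoint(checkedTile.x);
        int tileIndexY = mMap.GetMapTileYAtPoint(checkedTile.y);

        // World-space Y of the top edge of the checked tile.
        groundY = (float)tileIndexY * Map.cTileSize
                  + Map.cTileSize / 2.0f + mMap.mPosition.y;

        if (mMap.IsObstacle(tileIndexX, tileIndexY))
            return true;

        // Stop once the whole sensor line has been covered.
        if (checkedTile.x >= bottomRight.x)
            break;
    }

    return false;
}
```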

That's the most basic version of the check. Let's try to get it to work now. Back in the UpdatePhysics function, our old ground check looks like this.

Let's replace it using the newly created method. If the character is falling down and we have found an obstacle in our way, then we need to move it out of the collision and also set mOnGround to true. Let's start with the condition.

If the condition is fulfilled then we need to move the character on the top of the tile we collided with.

As you can see, it's very simple because the function returns the ground level to which we should align the object. After this, we only need to set the vertical speed to zero and set mOnGround to true.

If our vertical speed is greater than zero, or we're not touching any ground, we need to set mOnGround to false.
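Taken together, the new ground handling in UpdatePhysics might read as follows (a sketch; mOldPosition is assumed to hold the previous frame's position):

```csharp
float groundY;

if (mSpeed.y <= 0.0f
    && HasGround(mOldPosition, mPosition, mSpeed, out groundY))
{
    // Snap the object on top of the tile we collided with.
    mPosition.y = groundY + mAABB.halfSize.y - mAABBOffset.y;
    mSpeed.y = 0.0f;
    mOnGround = true;
}
else
    mOnGround = false;
```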

Now let's see how this works.

As you can see, it works well! The collision detection for the walls on both sides and at the top of the character are still not there, but the character stops each time it meets the ground. We still need to put a little more work into the collision-checking function to make it robust.

One of the issues we need to solve is visible if the character's offset from one frame to the other is too big to detect the collision properly. This is illustrated in the following picture.

This situation does not happen now because we locked the maximum falling speed to a reasonable value and update the physics with 60 FPS frequency, so the differences in positions between the frames are rather small. Let's see what happens if we update the physics only 30 times per second. 

As you can see, in this scenario our ground collision check fails us. To fix this, we can't simply check whether the character has ground beneath it at its current position; rather, we need to see whether there were any obstacles along the way from the previous frame's position.

Let's go back to our HasGround function. Here, besides calculating the center, we'll also want to calculate the previous frame's center.

We'll also need to get the previous frame's sensor position.

Now we need to calculate at which tile vertically we are going to start checking whether there is a collision or not, and at which we will stop.

We start the search from the tile at the previous frame's sensor position, and end it at the current frame's sensor position. That's of course because when we check for a ground collision we assume we are falling down, and that means we're moving from the higher position to the lower one.

Finally, we need to have another iteration loop. Now, before we fill the code for this outer loop, let's consider the following scenario.

Here you can see an arrow moving fast. This example shows that we need not only to iterate through all the tiles we would need to pass vertically, but also to interpolate the object's position for each tile we go through to approximate the path from the previous frame's position to the current one. If we simply kept using the current object's position, then in the above case a collision would be detected, even though it shouldn't be.

Let's rename the bottomLeft and bottomRight as newBottomLeft and newBottomRight, so we know that these are the new frame's sensor positions.

Now, within this new loop, let's interpolate the sensor positions, so that at the beginning of the loop we're assuming the sensor to be at the previous frame's position, and at its end it's going to be in the current frame's position.

Note that we interpolate the vectors based on the difference in tiles on the Y axis. When old and new positions are within the same tile, the vertical distance will be zero, so in that case we wouldn't be able to divide by the distance. So to solve this issue, we want the distance to have a minimum value of 1, so that if such a scenario were to happen (and it's going to happen very often), we'll simply be using the new position for collision detection. 

Finally, for each iteration, we need to execute the same code we did already for checking the ground collision along the width of the object. 
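Here's a sketch of how the outer, row-by-row loop and the interpolation fit together inside HasGround (the variable names are assumptions):

```csharp
// Previous frame's sensor position.
Vector2 oldCenter = oldPosition + mAABBOffset;
var oldBottomLeft = new Vector2(oldCenter.x - mAABB.halfSize.x,
                                oldCenter.y - mAABB.halfSize.y - 1.0f);

// Current frame's sensor position (renamed from bottomLeft/bottomRight).
var newBottomLeft = new Vector2(center.x - mAABB.halfSize.x,
                                center.y - mAABB.halfSize.y - 1.0f);
var newBottomRight = new Vector2(center.x + mAABB.halfSize.x, newBottomLeft.y);

// We assume we're falling, so we sweep from the old (higher) tile row
// down to the new (lower) one.
int endY = mMap.GetMapTileYAtPoint(newBottomLeft.y);
int begY = Mathf.Max(mMap.GetMapTileYAtPoint(oldBottomLeft.y), endY);

// Minimum distance of 1 so we can always divide; if both frames share a
// tile row, the interpolation simply yields the new position.
int dist = Mathf.Max(Mathf.Abs(endY - begY), 1);

for (int tileIndexY = begY; tileIndexY >= endY; --tileIndexY)
{
    // Interpolate the sensor between the old (t = 1) and new (t = 0) frames.
    var bottomLeft = Vector2.Lerp(newBottomLeft, oldBottomLeft,
                                  (float)Mathf.Abs(endY - tileIndexY) / dist);
    var bottomRight = new Vector2(bottomLeft.x + mAABB.halfSize.x * 2.0f,
                                  bottomLeft.y);

    // ...then run the same left-to-right tile scan as before for this row.
}
```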

That's pretty much it. As you may imagine, if the game's objects move really fast, this way of checking collisions can be quite a bit more expensive, but it also ensures that there will be no weird glitches with objects moving through solid walls.

Summary

Phew, that was more code than we thought we'd need, wasn't it? If you spot any errors or possible shortcuts to take, let me and everyone know in the comments! The collision check should be robust enough so that we don't have to worry about any unfortunate events of objects slipping through the tilemap's blocks. 

A lot of the code was written to make sure that there are no objects passing through the tiles at big speeds, but if that's not a problem for a particular game, we could safely remove the additional code to increase the performance. It might even be a good idea to have a flag for specific fast-moving objects, so that only those use the more expensive versions of the checks.

We still have a lot of things to cover, but we managed to make a reliable collision check for the ground, which can be mirrored pretty straightforwardly to the other three directions. We'll do that in the next part.

Basic 2D Platformer Physics, Part 3


One-Way Platforms

Since we've just finished working on the ground collision check, we might as well add one-way platforms while we're at it. They are going to concern only the ground collision check anyway. One-way platforms differ from solid blocks in that they will stop an object only if it's falling down. Additionally, we'll also allow a character to drop down from such a platform.

First of all, when we want to drop off a one-way platform, we basically want to ignore the collision with the ground. An easy way out here is to set up an offset, after passing which the character or object will no longer collide with a platform. 

For example, if the character is already two pixels below the top of the platform, it shouldn't detect a collision anymore. In that case, when we want to drop off the platform, all we have to do is move the character two pixels down. Let's create this offset constant.

Now let's add a variable which will let us know if an object is currently on a one-way platform.
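A sketch of both, following the two-pixel example above:

```csharp
// If the sensor is more than this many pixels below the platform's top,
// we no longer collide with the platform.
public const float cOneWayPlatformThreshold = 2.0f;

// True while the object stands on a one-way platform.
public bool mOnOneWayPlatform = false;
```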

Let's modify the definition of the HasGround function to also take a reference to a boolean which will be set if the object has landed on a one-way platform.

Now, after we've checked whether the tile we're currently at is an obstacle and found that it isn't, we should check whether it's a one-way platform.

As explained before, we also need to make sure that this collision is ignored if we have fallen beyond the cOneWayPlatformThreshold below the platform. 

Of course, we cannot simply compare the difference between the top of the tile and the sensor, because it's easy to imagine that even if we're falling, we might go well below two pixels from the platform's top. For a one-way platform to stop an object, we want the distance between the top of the tile and the sensor to be less than or equal to cOneWayPlatformThreshold plus the offset between this frame's position and the previous one.

Finally, there's one more thing to consider. When we find a one-way platform, we cannot really exit the loop, because there are situations when the character is partially on a platform and partially on a solid block.

We shouldn't really consider such a position as "on a one-way platform", because we can't really drop down from there—the solid block is stopping us. That's why we first need to continue looking for a solid block, and if one is found before we return the result, we also need to set onOneWayPlatform to false.

Now, if we went through all the tiles we needed to check horizontally and we found a one-way platform but no solid blocks, then we can be sure that we are on a one-way platform from which we can drop down.
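In code, the additions inside HasGround's tile loop might look like this sketch (onOneWayPlatform being the new ref bool parameter):

```csharp
// A solid block always wins, and it also means we cannot drop down here.
if (mMap.IsObstacle(tileIndexX, tileIndexY))
{
    onOneWayPlatform = false;
    return true;
}

// Otherwise, see if this tile is a one-way platform close enough to the
// sensor, accounting for the distance fallen since the last frame.
if (mMap.IsOneWayPlatform(tileIndexX, tileIndexY)
    && Mathf.Abs(checkedTile.y - groundY)
         <= cOneWayPlatformThreshold + oldPosition.y - position.y)
    onOneWayPlatform = true;

// Once the whole sensor line has been scanned without hitting a solid
// block, a found platform counts as ground.
if (checkedTile.x >= bottomRight.x)
{
    if (onOneWayPlatform)
        return true;
    break;
}
```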

That's it, so now let's add to the character class an option to drop down through a platform. In both the stand and run states, we need to add the following code.
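A sketch of that drop-down code (the KeyInput input array is an assumption carried over from the series' conventions):

```csharp
// Drop through the platform by nudging the object just past the
// one-way collision threshold.
if (mOnOneWayPlatform && mInputs[(int)KeyInput.GoDown])
    mPosition.y -= cOneWayPlatformThreshold;
```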

Let's see how it works.

Everything is working correctly.

Handle Collisions for the Ceiling

We need to create an analogous function to the HasGround for each side of the AABB, so let's start with the ceiling. The differences are as follows:

  • The sensor line is above the AABB instead of being below it
  • We check for the ceiling tile from bottom to top, as we're moving up
  • No need to handle one-way platforms

Here's the modified function.
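A sketch of HasCeiling under those assumptions:

```csharp
public bool HasCeiling(Vector2 oldPosition, Vector2 position, out float ceilingY)
{
    ceilingY = 0.0f;

    Vector2 center = position + mAABBOffset;
    Vector2 oldCenter = oldPosition + mAABBOffset;

    // The sensor line runs one pixel above the AABB's top edge.
    var newTopRight = new Vector2(center.x + mAABB.halfSize.x,
                                  center.y + mAABB.halfSize.y + 1.0f);
    var oldTopRight = new Vector2(oldCenter.x + mAABB.halfSize.x,
                                  oldCenter.y + mAABB.halfSize.y + 1.0f);

    // We're moving up, so sweep the tile rows from bottom to top.
    int endY = mMap.GetMapTileYAtPoint(newTopRight.y);
    int begY = Mathf.Min(mMap.GetMapTileYAtPoint(oldTopRight.y), endY);
    int dist = Mathf.Max(Mathf.Abs(endY - begY), 1);

    for (int tileIndexY = begY; tileIndexY <= endY; ++tileIndexY)
    {
        var topRight = Vector2.Lerp(newTopRight, oldTopRight,
                                    (float)Mathf.Abs(endY - tileIndexY) / dist);
        var topLeft = new Vector2(topRight.x - mAABB.halfSize.x * 2.0f,
                                  topRight.y);

        for (var checkedTile = topLeft; ; checkedTile.x += Map.cTileSize)
        {
            checkedTile.x = Mathf.Min(checkedTile.x, topRight.x);

            int tileIndexX = mMap.GetMapTileXAtPoint(checkedTile.x);

            if (mMap.IsObstacle(tileIndexX, tileIndexY))
            {
                // World-space Y of the bottom edge of the tile we hit.
                ceilingY = (float)tileIndexY * Map.cTileSize
                           - Map.cTileSize / 2.0f + mMap.mPosition.y;
                return true;
            }

            if (checkedTile.x >= topRight.x)
                break;
        }
    }

    return false;
}
```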

Handle Collisions for the Left Wall

Similarly to how we handled the collision check for the ceiling and ground, we also need to check if the object is colliding with the wall on the left or the wall on the right. Let's start from the left wall. The idea here is pretty much the same, but there are a few differences:

  • The sensor line is on the left side of the AABB.
  • The inner for loop needs to iterate through the tiles vertically, because the sensor is now a vertical line.
  • The outer loop needs to iterate through tiles horizontally to see if we haven't skipped a wall when moving with a big horizontal speed.
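Here's a sketch of the left-wall check, with the inner loop now vertical and the outer loop sweeping columns from the old position to the new one:

```csharp
public bool CollidesWithLeftWall(Vector2 oldPosition, Vector2 position, out float wallX)
{
    wallX = 0.0f;

    Vector2 center = position + mAABBOffset;
    Vector2 oldCenter = oldPosition + mAABBOffset;

    // The sensor line runs one pixel to the left of the AABB.
    var newBottomLeft = new Vector2(center.x - mAABB.halfSize.x - 1.0f,
                                    center.y - mAABB.halfSize.y);
    var oldBottomLeft = new Vector2(oldCenter.x - mAABB.halfSize.x - 1.0f,
                                    oldCenter.y - mAABB.halfSize.y);

    // We're moving left, so sweep the tile columns from right to left.
    int endX = mMap.GetMapTileXAtPoint(newBottomLeft.x);
    int begX = Mathf.Max(mMap.GetMapTileXAtPoint(oldBottomLeft.x), endX);
    int dist = Mathf.Max(Mathf.Abs(endX - begX), 1);

    for (int tileIndexX = begX; tileIndexX >= endX; --tileIndexX)
    {
        var bottomLeft = Vector2.Lerp(newBottomLeft, oldBottomLeft,
                                      (float)Mathf.Abs(endX - tileIndexX) / dist);
        var topLeft = new Vector2(bottomLeft.x,
                                  bottomLeft.y + mAABB.halfSize.y * 2.0f);

        // Inner loop: scan the vertical sensor line from bottom to top.
        for (var checkedTile = bottomLeft; ; checkedTile.y += Map.cTileSize)
        {
            checkedTile.y = Mathf.Min(checkedTile.y, topLeft.y);

            int tileIndexY = mMap.GetMapTileYAtPoint(checkedTile.y);

            if (mMap.IsObstacle(tileIndexX, tileIndexY))
            {
                // World-space X of the right edge of the tile we hit.
                wallX = (float)tileIndexX * Map.cTileSize
                        + Map.cTileSize / 2.0f + mMap.mPosition.x;
                return true;
            }

            if (checkedTile.y >= topLeft.y)
                break;
        }
    }

    return false;
}
```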

Handle Collisions for the Right Wall

Finally, let's create the CollidesWithRightWall function, which, as you can imagine, does much the same thing as CollidesWithLeftWall, except that the sensor sits on the right side of the character instead of the left.

The other difference here is that instead of checking the tiles from right to left, we'll be checking them from left to right, since that's the assumed moving direction.

Move the Object Out of the Collision

All of our collision detection functions are done, so let's use them to complete the collision response against the tilemap. Before we do that, though, we need to figure out the order in which we'll be checking the collisions. Let's consider the following situations.

In both of these situations, we can see the character ended up overlapping with a tile, but we need to figure out how we should resolve the overlap.

The situation on the left is pretty simple—we can see that we're falling straight down, and because of that we definitely should land on top of the block. 

The situation on the right is a bit more tricky, since in truth we could land on the very corner of the tile, and pushing the character to the top is as reasonable as pushing it to the right. Let's choose to prioritize the horizontal movement. It doesn't really matter much which alignment we wish to do first; both choices look correct in action.

Let's go to our UpdatePhysics function and add the variables which will hold the results of our collision queries.

Now let's start by looking if we should move the object to the right. The conditions here are that:

  • the horizontal speed is less than or equal to zero
  • we collide with the left wall 
  • in the previous frame we didn't overlap with the tile on the horizontal axis—a situation akin to the one on the right in the above picture

The last one is a necessary condition, because if it wasn't fulfilled then we would be dealing with a situation similar to the one on the left in the above picture, in which we surely shouldn't move the character to the right.

If the conditions are true, we need to align the left side of our AABB to the right side of the tile, make sure that we stop moving to the left, and mark that we are next to the wall on the left.

If any of the conditions besides the last one is false, we need to set mPushesLeftWall to false. That's because the last condition being false does not necessarily tell us that the character is not pushing the wall, but conversely, it tells us that it was colliding with it already in the previous frame. Because of this, it's best to change mPushesLeftWall to false only if any of the first two conditions is false as well.
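A sketch of the response, following the three conditions above (mOldPosition and the other member names are assumptions):

```csharp
float leftWallX = 0.0f;

if (mSpeed.x <= 0.0f
    && CollidesWithLeftWall(mOldPosition, mPosition, out leftWallX))
{
    // Only snap out if we weren't already overlapping horizontally
    // in the previous frame.
    if (mOldPosition.x - mAABB.halfSize.x + mAABBOffset.x >= leftWallX)
    {
        // Align the AABB's left side with the tile's right side.
        mPosition.x = leftWallX + mAABB.halfSize.x - mAABBOffset.x;
        mPushesLeftWall = true;
    }

    // Either way, stop any further movement to the left.
    mSpeed.x = Mathf.Max(mSpeed.x, 0.0f);
}
else
    mPushesLeftWall = false;
```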

Now let's check for the collision with the right wall.

As you can see, it's the same formula we used for checking the collision with the left wall, but mirrored.

We already have the code for checking the collision with the ground, so after that we need to check for a collision with the ceiling. There's nothing new here either; the only conditions are that the vertical speed is greater than or equal to zero and that we actually collide with a tile above us.

Round Up the Corners

Before we test whether the collision responses work, there's one more important thing to do: round the values of the corners we calculate for the collision checks. We need to do this so that our checks aren't thrown off by floating-point errors, which might come about from an odd map position, character scale, or AABB size.

First, for our ease, let's create a function that transforms a vector of floats into a vector of rounded floats.
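A minimal sketch:

```csharp
// Rounds each component to the nearest whole number, so the sensor
// positions aren't thrown off by tiny floating-point errors.
public Vector2 RoundVector(Vector2 v)
{
    return new Vector2(Mathf.Round(v.x), Mathf.Round(v.y));
}
```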

Now let's use this function in every collision check. First, let's fix the HasCeiling function.

Next is HasGround.

Then CollidesWithRightWall.

And finally, CollidesWithLeftWall.

That should solve our issues!

Check the Results

That's going to be it. Let's test how our collisions are working now.

Summary

That's it for this part! We've got a fully working set of tilemap collisions, which should be very reliable. We always know the object's contact state: whether it's on the ground, touching a tile on the left or the right, or bumping into a ceiling. We've also implemented one-way platforms, which are an important tool in many platformer games. 

In the next part, we'll add ledge-grabbing mechanics, which will increase the possible movement of the character even further, so stay tuned!

Basic 2D Platformer Physics, Part 4


Ledge Grabbing

Now that we can jump, drop down from one-way platforms and run around, we can also implement ledge grabbing. Ledge-grabbing mechanics are definitely not a must-have in every game, but it's a very popular method of extending a player's possible movement range while still not doing something extreme like a double jump.

Let's look at how we determine whether a ledge can be grabbed. We'll constantly check the side the character is moving towards: if we find an empty tile at the top of the AABB, and a solid tile below that empty one, then the top of the solid tile is the ledge our character can grab onto.

Setting Up Variables

Let's go to our Character class, where we'll implement ledge grabbing. There's no point doing this in the MovingObject class, since most of the objects won't have an option to grab a ledge, so it would be a waste to do any processing in that direction there.

First, we need to add a couple of constants. Let's start by creating the sensor offset constants.

The cGrabLedgeStartY and cGrabLedgeEndY are offsets from the top of the AABB; the first one is the first sensor point, and the second one is the ending sensor point. As you can see, the character will need to find a ledge within 2 pixels.

We also need an additional constant to align the character to the tile it just grabbed. For our character, this will be set to -4.

Aside from that, we'll want to remember the coordinates of the tile that we grabbed. Let's save those as a character's member variable.
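As a sketch, the constants and the member might look like this (the start and end values follow the two-pixel window mentioned above):

```csharp
// Sensor offsets from the top of the AABB between which a ledge
// can be grabbed: a two-pixel window.
public const float cGrabLedgeStartY = 0.0f;
public const float cGrabLedgeEndY = 2.0f;

// Offset used to align the character with the grabbed tile.
public const float cGrabLedgeTileOffsetY = -4.0f;

// Map coordinates of the tile we are currently grabbing.
public Vector2i mLedgeTile;
```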

Implementation

We'll need to see if we can grab the ledge from the jump state, so let's head over there. Right after we check whether the character has landed on the ground, let's see if the conditions to grab a ledge are fulfilled. The primary conditions are as follows:

  • The vertical speed is less than or equal to zero (the character is falling).
  • The character is not at the ceiling—no use grabbing a ledge if you can't jump off it.
  • The character collides with the wall and moves towards it.

If those three conditions are met, then we need to look for the ledge to grab. Let's start by calculating the top position of the sensor, which is going to be either the top left or top right corner of the AABB. 

Now, as you may imagine, here we'll encounter a similar problem to the one we found when implementing the collision checks—if the character is falling very fast, it is actually very likely to miss the hotspot at which it can grab the ledge. That's why we'll need to check for the tile we need to grab not starting from the current frame's corner, but the previous one's—as illustrated here:


The top image of a character is its position in the previous frame. In this situation, we need to start looking for opportunities to grab a ledge from the top-right corner of the previous frame's AABB and stop at the current frame's position.

Let's get the coordinates of the tiles we need to check, starting by declaring the variables. We'll be checking tiles in a single column, so all we need is the X coordinate of the column as well as its top and bottom Y coordinates.

Let's get the X coordinate of the AABB's corner.

We want to start looking for a ledge from the previous frame's position only if we actually were already moving towards the pushed wall in that time—so our character's X position didn't change.

As you can see, in that case we're calculating the topY using the previous frame's position, and the bottom one using that of the current frame. If we weren't next to any wall, then we're simply going to see if we can grab a ledge using only the object's position in the current frame.

Alright, now that we know which tiles to check, we can start iterating through them. We'll be going from the top to the bottom, because this order makes the most sense as we allow for ledge grabbing only when the character is falling.

Now let's check whether the tile we are iterating fulfills the conditions which allow the character to grab a ledge. The conditions, as explained before, are as follows:

  • The tile is empty.
  • The tile below it is a solid tile (this is the tile we want to grab).

The next step is to calculate the position of the corner of the tile we want to grab. This is pretty simple—we just need to get the tile's position and then offset it by the tile's size.

Now that we know this, we should check whether the corner is between our sensor points. Of course we want to do that only if we're checking the tile concerning the current frame's position, which is the tile with Y coordinate equal to the bottomY. If that's not the case, then we can safely assume that we passed the ledge between the previous and the current frame—so we want to grab the ledge anyway.

Now we're home: we've found the ledge we want to grab. First, let's save the grabbed ledge's tile position.

We also need to align the character with the ledge. What we want to do is align the top of the character's ledge sensor with the top of the tile, and then offset that position by cGrabLedgeTileOffsetY.

Aside from this, we need to do things like set the speed to zero and change the state to CharacterState.GrabLedge. After this, we can break from the loop because there's no point iterating through the rest of the tiles.
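Condensed into one sketch, the whole detection might look like the following (helper members such as mAtCeiling, mPushesRightWall, mPushedRightWall and the KeyInput array are assumptions based on the series' conventions):

```csharp
// In the Jump state, right after the landing check.
if (mSpeed.y <= 0.0f && !mAtCeiling
    && ((mPushesRightWall && mInputs[(int)KeyInput.GoRight])
        || (mPushesLeftWall && mInputs[(int)KeyInput.GoLeft])))
{
    // Corner of the AABB on the side we're pushing against.
    Vector2 aabbCornerOffset = mPushesRightWall
        ? mAABB.halfSize
        : new Vector2(-mAABB.halfSize.x - 1.0f, mAABB.halfSize.y);

    // The column of tiles to inspect.
    int tileX = mMap.GetMapTileXAtPoint(mAABB.center.x + aabbCornerOffset.x);

    // Start from the previous frame's corner if we were already hugging
    // the wall, so a fast fall can't skip past the grab window.
    float cornerY = mAABB.center.y + aabbCornerOffset.y;
    float oldCornerY = mOldPosition.y + mAABBOffset.y + aabbCornerOffset.y;
    bool wasHuggingWall = (mPushedLeftWall && mPushesLeftWall)
                          || (mPushedRightWall && mPushesRightWall);

    int topY = mMap.GetMapTileYAtPoint(
        (wasHuggingWall ? oldCornerY : cornerY) - cGrabLedgeStartY);
    int bottomY = mMap.GetMapTileYAtPoint(cornerY - cGrabLedgeEndY);

    // Scan the column top to bottom for an empty tile with a solid
    // tile directly below it.
    for (int y = topY; y >= bottomY; --y)
    {
        if (mMap.IsObstacle(tileX, y) || !mMap.IsObstacle(tileX, y - 1))
            continue;

        // World-space position of the grabbable corner of the solid tile.
        Vector2 tileCorner = mMap.GetMapTilePosition(tileX, y - 1);
        tileCorner.x -= Mathf.Sign(aabbCornerOffset.x) * Map.cTileSize / 2.0f;
        tileCorner.y += Map.cTileSize / 2.0f;

        // Grab if the corner lies between the sensor points, or if we've
        // already passed it between the frames.
        if (y > bottomY
            || (cornerY - tileCorner.y <= cGrabLedgeEndY
                && cornerY - tileCorner.y >= cGrabLedgeStartY))
        {
            mLedgeTile = new Vector2i(tileX, y - 1);

            // Align the ledge sensor with the tile's top, then apply
            // the grab offset.
            mPosition.y = tileCorner.y - aabbCornerOffset.y - mAABBOffset.y
                          - cGrabLedgeStartY + cGrabLedgeTileOffsetY;

            mSpeed = Vector2.zero;
            mCurrentState = CharacterState.GrabLedge;
            break;
        }
    }
}
```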

That's going to be it! The ledges can now be detected and grabbed, so now we just need to implement the GrabLedge state, which we skipped earlier.

Ledge Grab Controls

Once the character is grabbing a ledge, the player has two options: they can either jump up or drop down. Jumping works as normal; the player presses the jump key and the jump's force is identical to the force applied when jumping from the ground. Dropping down is done by pressing the down button, or the directional key that points away from the ledge.

Controls Implementation

The first thing here to do is to detect whether the ledge is to the left or to the right of the character. We can do this because we saved the coordinates of the ledge the character is grabbing.

We can use that information to determine whether the character is supposed to drop off the ledge. To drop down, the player needs to either:

  • press the down button
  • press the left button when we're grabbing a ledge on the right, or
  • press the right button when we're grabbing a ledge on the left

There's a small caveat here. Consider what happens if we hold the down and right buttons while the character is grabbing a ledge on its right. It'll result in the following situation:

The problem here is that the character grabs the ledge immediately after it lets go of it. 

A simple solution to this is to lock movement towards the ledge for a couple of frames after we've dropped off it. For that we need to add two new variables; let's call them mCannotGoLeftFrames and mCannotGoRightFrames.

When the character drops off the ledge, we need to set those variables and change the state to jump.
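A sketch of the drop-off logic in the GrabLedge state (the three-frame lock follows the value the text settles on below):

```csharp
// Which side is the grabbed ledge on?
bool ledgeOnLeft = mMap.GetMapTilePosition(mLedgeTile.x, mLedgeTile.y).x
                   < mPosition.x;

// Drop off when pressing down, or pressing away from the ledge.
if (mInputs[(int)KeyInput.GoDown]
    || (mInputs[(int)KeyInput.GoLeft] && !ledgeOnLeft)
    || (mInputs[(int)KeyInput.GoRight] && ledgeOnLeft))
{
    // Block movement towards the ledge for a few frames, so we don't
    // instantly re-grab it.
    if (ledgeOnLeft)
        mCannotGoLeftFrames = 3;
    else
        mCannotGoRightFrames = 3;

    mCurrentState = CharacterState.Jump;
}
```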

Now let's go back for a bit to the Jump state, and let's make sure that it respects our ban on moving either left or right after dropping off the ledge. Let's reset the inputs right before we check if we should look for a ledge to grab.

As you can see, this way we won't fulfill the conditions needed to grab a ledge as long as the blocked direction is the same as the direction of the ledge the character might try to grab. Each time we deny a particular input, we decrement the remaining blocking frames, so eventually we'll be able to move again—in our case, after 3 frames.
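A sketch of that input lock:

```csharp
// In the Jump state, just before the ledge-grab check: suppress the
// blocked direction and count the lock down.
if (mCannotGoLeftFrames > 0)
{
    --mCannotGoLeftFrames;
    mInputs[(int)KeyInput.GoLeft] = false;
}

if (mCannotGoRightFrames > 0)
{
    --mCannotGoRightFrames;
    mInputs[(int)KeyInput.GoRight] = false;
}
```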

Now let's continue working on the GrabLedge state. Since we handled dropping off the ledge, we now need to make it possible to jump from the grabbing position.

If the character didn't drop from the ledge, we need to check whether the jump key has been pressed; if so, we need to set the jump's vertical speed and change the state:
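A sketch (cJumpSpeed standing in for whatever constant the demo uses for the jump's initial vertical speed):

```csharp
// In GrabLedge, after the drop-off check: jump off the ledge.
if (mInputs[(int)KeyInput.Jump])
{
    // The same vertical speed as a jump from the ground.
    mSpeed.y = cJumpSpeed;
    mCurrentState = CharacterState.Jump;
}
```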

That's pretty much it! Now the ledge grabbing should work properly in all kinds of situations.

Allow the Character to Jump Shortly After Leaving a Platform

Often, to make jumps easier in platformer games, the character is allowed to jump if it has just stepped off the edge of a platform and is no longer on the ground. This is a popular way to soften the impression that the game swallowed a jump press, which can arise from input lag or from the player pressing the jump button right after the character has moved off the platform.

Let's implement such a mechanic now. First of all, we need to add a constant of how many frames after the character steps off the platform it can still perform a jump.

We'll also need a frame counter in the Character class, so we know how many frames the character is in the air already.

Now let's set mFramesFromJumpStart to 0 every time we've just left the ground. Let's do that right after we call UpdatePhysics.

And let's increment it every frame we're in the jump state.

If we're in the jump state, we cannot allow an in-air jump if we're either at the ceiling or have a positive vertical speed. Positive vertical speed would mean that the character hasn't missed a jump.

If that's not the case and the jump key is pressed, all we need to do is set the vertical speed to the jump value, as if we jumped normally, even though the character is in the jump state already.
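Put together, the pieces might look like this sketch (mWasOnGround holding the previous frame's ground flag is an assumption):

```csharp
// How many frames after stepping off a platform a jump is still allowed.
public const int cJumpFramesThreshold = 4;

// Counts the frames since the character left the ground.
protected int mFramesFromJumpStart = 0;

// Right after calling UpdatePhysics: reset the counter when we've
// just left the ground.
if (mWasOnGround && !mOnGround)
    mFramesFromJumpStart = 0;

// In the Jump state: count the frames, and allow a late jump only if
// we haven't bumped a ceiling and aren't already moving upwards.
++mFramesFromJumpStart;

if (mFramesFromJumpStart <= cJumpFramesThreshold
    && !mAtCeiling && mSpeed.y <= 0.0f
    && mInputs[(int)KeyInput.Jump])
    mSpeed.y = cJumpSpeed;
```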

And that's it! We can set the cJumpFramesThreshold to a big value like 10 frames to make sure that it works.

The effect here is quite exaggerated. It's not very noticeable if we're allowing the character to jump just 1-4 frames after it is in fact no longer on the ground, but overall this allows us to modify how lenient we want our jumps to be.

Scaling the Objects

Let's make it possible to scale the objects. We already have the mScale in the MovingObject class, so all we actually need to do is make sure it affects the AABB and the AABB offset properly.

First of all, let's edit our AABB class so it has a scale component.

Now let's edit the halfSize, so that when we access it, we actually get a scaled size instead of the unscaled one.

We'll also want to be able to get or set only an X or Y value of the half size, so we need to make separate getters and setters for those as well.
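A sketch of the scaled AABB (here the setters store the raw, unscaled value, which is an assumption about the demo's conventions):

```csharp
using UnityEngine;

public class AABB
{
    public Vector2 center;
    private Vector2 halfSize;

    // Scale applied to the half size; the owner keeps it positive.
    public Vector2 scale = Vector2.one;

    public AABB(Vector2 center, Vector2 halfSize)
    {
        this.center = center;
        this.halfSize = halfSize;
    }

    // Reading the half size always returns the scaled value.
    public Vector2 HalfSize
    {
        get { return new Vector2(halfSize.x * scale.x, halfSize.y * scale.y); }
        set { halfSize = value; }
    }

    public float HalfSizeX
    {
        get { return halfSize.x * scale.x; }
        set { halfSize.x = value; }
    }

    public float HalfSizeY
    {
        get { return halfSize.y * scale.y; }
        set { halfSize.y = value; }
    }
}
```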

Besides scaling the AABB itself, we'll also need to scale the mAABBOffset, so that after we scale the object, its sprite will still match the AABB the same way it did when the object was unscaled. Let's head back over to the MovingObject class to edit it.

The same as previously, we'll want to have access to X and Y components separately too.
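A matching sketch for the offset (here the object's possibly negative mScale is used directly, so a flipped sprite gets a flipped offset):

```csharp
// In MovingObject: the offset is scaled so the sprite keeps matching
// the AABB after the object is scaled.
public Vector2 AABBOffset
{
    get { return new Vector2(mAABBOffset.x * mScale.x, mAABBOffset.y * mScale.y); }
}

public float AABBOffsetX
{
    get { return mAABBOffset.x * mScale.x; }
    set { mAABBOffset.x = value; }
}

public float AABBOffsetY
{
    get { return mAABBOffset.y * mScale.y; }
    set { mAABBOffset.y = value; }
}
```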

Finally, we also need to make sure that when the scale is modified in the MovingObject, it also is modified in the AABB. The object's scale can be negative, but the AABB itself shouldn't have a negative scale because we rely on half size to be always positive. That's why instead of simply passing the scale to the AABB, we're going to pass a scale that has all components positive.
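A sketch of passing the scale through with all components made positive:

```csharp
// In MovingObject: the object's scale may be negative (to flip the
// sprite), but the AABB relies on its half size staying positive.
public Vector2 Scale
{
    get { return mScale; }
    set
    {
        mScale = value;
        mAABB.scale = new Vector2(Mathf.Abs(value.x), Mathf.Abs(value.y));
    }
}
```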

All that's left to do now is to make sure that wherever we used the variables directly, we use them through the getters and setters now. Wherever we used halfSize.x, we'll want to use HalfSizeX, wherever we used halfSize.y, we'll want to use HalfSizeY, and so on. A few uses of a find and replace function should deal with this well.

Check the Results

The scaling should work well now, and because of the way we built our collision detection functions, it doesn't matter if the character is giant or tiny—it should interact with the map well.

Summary

This part concludes our work with the tilemap. In the next parts, we'll be setting things up to detect collisions between objects. 

It took some time and effort, but the system in general should be very robust. One thing that's missing right now is support for slopes. Many games don't rely on them, but a lot of them do, so adding slopes is the biggest possible improvement to this system. Thanks for reading so far, and see you in the next part!
