Recently we've been running a series of sponsored tutorials by Microsoft technical evangelists, designed to help you solve the problem of building things that work well across all platforms.
In case you missed it, here are some of the highlights.
Microsoft recently launched a new tool to make it easier to test sites in IE regardless of which platform you’re on. It’s called RemoteIE, and it provides a virtualized instance of the latest version of IE, so you can test against the newest browser without having to set up a virtual machine yourself.
In this tutorial, developer advocate Rey Bango shows you how to set up RemoteIE, and demonstrates what it looks like in Chrome on his MacBook.
Have you ever wished that more APIs were fluent? In other words, do you think the community should be able to more easily read, understand, and build upon the work while spending less time in the tech docs?
In this tutorial, David Catuhe walks through fluent APIs: what to consider, how to write them, and cross-browser performance implications.
Many software companies create developer relations or community outreach programs to win the hearts and minds of developers. But the problem with developer relations is that it’s hard to measure how successful you or your team are and if you’re hitting the mark with your community. In this tutorial, Rey Bango explains what developer advocates do, and examines different ways of measuring the success of outreach efforts.
Ever want to create your own Twitch.tv-like app for live-streaming your work? How about your own YouTube-esque program for playing back your previously recorded video? You might have used Flash, Java, or Silverlight for rich media in the past, but with Chrome 42 announcing that those plug-ins are no longer supported, now is as good a time as ever to go HTML5.
In this tutorial, Microsoft technical evangelist David Voyles explains how to use Azure Media Services to get set up and start experimenting with delivering live or on-demand HTML video.
In this short tutorial, we'll extend our platformer pathfinder so that it can deal with one-way platforms: blocks that the character can jump through and also step on. (Technically, these are two-way platforms, since you can jump through them from either direction, but let's not split hairs!)
Demo
You can play the Unity demo, or the WebGL version (100MB+), to see the final result in action. Use WASD to move the character, left-click on a spot to find a path you can follow to get there, right-click a cell to toggle the ground at that point, and middle-click to place a one-way platform.
Changing the Map to Accommodate One-Way Platforms
To handle the one-way platforms, we need to add a new tile type to the map:
public enum TileType
{
Empty,
Block,
OneWay
}
One-way platforms have the same pathfinder weight as empty tiles—that is, 1. That's because the player can always go through them when jumping up; they only stop him when he's falling, and that in no way impairs the character's movement.
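For reference, here's a minimal sketch of how the grid weights might be assigned when the pathfinder grid is built from the tile map. It assumes mGrid is the byte[,] array the pathfinder reads, with 0 meaning blocked, which matches the checks used later in this series:
for (int x = 0; x < mWidth; ++x)
{
    for (int y = 0; y < mHeight; ++y)
    {
        // Only solid blocks are impassable; empty tiles and one-way platforms both get a weight of 1.
        mGrid[x, y] = (tiles[x, y] == TileType.Block) ? (byte)0 : (byte)1;
    }
}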
We also need a function that lets us know if the tile at a given position is specifically a one-way platform:
public bool IsOneWayPlatform(int x, int y)
{
if (x < 0 || x >= mWidth
|| y < 0 || y >= mHeight)
return false;
return (tiles[x, y] == TileType.OneWay);
}
Finally, we need to change Map.IsGround to return true if a tile is either a solid block or a one-way platform:
public bool IsGround(int x, int y)
{
if (x < 0 || x >= mWidth
|| y < 0 || y >= mHeight)
return false;
return (tiles[x, y] == TileType.OneWay || tiles[x, y] == TileType.Block);
}
That's the map part of the code sorted; now we can work on the pathfinder itself.
Adding New Node Filtering Conditions
We also need to add two new node filtering conditions to our list. Remember, our list currently looks like this:
It is the start node.
It is the end node.
It is a jump node.
It is the first in-air node in a side jump (a node with a jump value equal to 3).
It is the landing node (a node at which a previously non-zero jump value becomes 0).
It is the high point of the jump (the node between moving upwards and falling downwards).
It is a node that goes around an obstacle.
We want to add these two conditions:
The node is on a one-way platform.
The node is on the ground and the previous node was on a one-way platform (or vice-versa).
Including Nodes That Are One-Way Platforms
The first point: we always want to include a node if it's on a one-way platform:
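Here's a minimal sketch of what the two new conditions might look like inside the filtering pass. The names node, prevNode, and filteredPath are placeholders for whatever your filter loop already uses, and "on a one-way platform" is assumed to mean that the tile directly below the node is a OneWay tile:
bool nodeOnOneWay = mMap.IsOneWayPlatform(node.x, node.y - 1);
bool prevOnOneWay = mMap.IsOneWayPlatform(prevNode.x, prevNode.y - 1);

// Condition 1: always keep a node that stands on a one-way platform.
// Condition 2: keep a node when the path switches between solid ground and a one-way platform, in either direction.
if (nodeOnOneWay
    || (mMap.IsGround(node.x, node.y - 1) && prevOnOneWay)
    || (mMap.IsGround(prevNode.x, prevNode.y - 1) && nodeOnOneWay))
{
    filteredPath.Add(node);
    continue;
}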
Finally, here's an example of filtering with one-way platforms.
Conclusion
That's all there is to it! It's a simple addition, really. In the next tutorial in this series, we'll add a slightly more complicated (but still fairly straightforward) extension, allowing the pathfinding algorithm to deal with characters that are larger than 1x1 blocks.
In this tutorial, we'll extend our grid-based platformer pathfinder so that it can cope with characters that take up more than one cell of the grid.
If you haven't added one-way platform support to your code yet, I recommend you do so, but it's not necessary in order to follow this tutorial.
Demo
You can play the Unity demo, or the WebGL version (100MB+), to see the final result in action. Use WASD to move the character, left-click on a spot to find a path you can follow to get there, right-click a cell to toggle the ground at that point, and click-and-drag the sliders to change their values.
Character Position
The pathfinder accepts the position, width, and height of the character as input. While width and height are easy to interpret, we need to clarify which block the position coordinates refer to.
The position that we pass needs to be in terms of map coordinates, which means that, yet again, we need to embrace some inaccuracy. I decided that it would be sensible to make the position refer to the bottom-left character tile, since this matches the map coordinate system.
With that cleared up, we can update the pathfinder.
Checking That the Goal is Realistic
First, we must make sure that our custom-sized character can fit in the destination location. Until this point, we've only checked one block to do this, as that was the maximum (and only) size of the character:
if (mGrid[end.x, end.y] == 0)
return null;
Now, however, we need to iterate through every cell that the character would occupy if it were standing in the end position, and check whether any of them are a solid block. If they are, then of course the character cannot stand there, so the goal cannot be reached.
To do this, let's first declare a Boolean which we will set to false if the character is in a solid tile and true otherwise:
var inSolidTile = false;
Next, we'll iterate through every block of the character:
for (var w = 0; w < characterWidth; ++w)
{
for (int h = 0; h < characterHeight; ++h)
{
}
}
Inside this loop, we need to check whether a particular block is solid; if so, we set inSolidTile to true, and exit the loop:
for (var w = 0; w < characterWidth; ++w)
{
for (int h = 0; h < characterHeight; ++h)
{
if (mGrid[end.x + w, end.y + h] == 0)
{
inSolidTile = true;
break;
}
}
if (inSolidTile)
break;
}
But this is not enough. Consider the following situation:
If we were to move the character so that its bottom-left block occupied the goal, then the bottom-right block would be stuck in a solid block—so the algorithm would think that, since the character doesn't fit the goal position, it is impossible to reach the end point. Of course, that's not true; we don't care which part of the character reaches the goal.
To solve this problem, we will move the end point to the left, step by step, up to the point where the original goal location would match the bottom-right character block:
for (var i = 0; i < characterWidth; ++i)
{
inSolidTile = false;
for (var w = 0; w < characterWidth; ++w)
{
for (var h = 0; h < characterHeight; ++h)
{
if (mGrid[end.x + w, end.y + h] == 0)
{
inSolidTile = true;
break;
}
}
if (inSolidTile)
break;
}
if (inSolidTile)
end.x -= 1;
else
break;
}
Note that we shouldn't simply check the bottom-left and bottom-right corners, because the following case may occur:
Here, you can see that if either of the bottom corners occupies the goal location, then the character would still be stuck in solid ground on the other side. In this case, we need to match the bottom-center block with the goal.
Finally, if we can't find any place where the character would fit, we might as well exit the algorithm early:
if (inSolidTile == true)
return null;
Determining the Starting Position
To see whether our character is on the ground, we need to check whether any of the character's bottom-most cells are directly above a solid tile.
Let's look at the code we used for a 1x1 character:
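That check was just a single call (reconstructed here as a sketch from the generalised version below):
// 1x1 character: only the tile directly below the start cell matters.
bool startsOnGround = mMap.IsGround(start.x, start.y - 1);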
We determine whether the starting point is on the ground by checking whether the tile immediately below the starting point is a ground tile. To update the code, we'll simply make it check below all of the bottom-most blocks of the character.
First, let's declare a Boolean that will tell us whether the character starts on the ground. Initially, we assume that it doesn't:
bool startsOnGround = false;
Next, we'll iterate through all the bottom-most character blocks and check whether any of them are directly above a ground tile. If so, then we set startsOnGround to true and exit the loop:
for (int x = start.x; x < start.x + characterWidth; ++x)
{
if (mMap.IsGround(x, start.y - 1))
{
startsOnGround = true;
break;
}
}
Finally, we set the jump value depending on whether the character started on the ground:
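The exact assignment depends on how your nodes store the jump value. Assuming the start node keeps it in a JumpLength field, and that a value of maxCharacterJumpHeight * 2 marks the "already falling" state (as in the earlier parts of this series), it might look like this:
// Starting on the ground: jump value 0. Starting in the air: treat the character as already falling.
firstNode.JumpLength = startsOnGround ? (short)0 : (short)(maxCharacterJumpHeight * 2);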
We need to change our successor's bounds check as well, but here we don't need to check every tile. It's good enough to check the contour of the character—the blocks around the edge—because we know that the parent's position was fine.
Let's look at how we checked the successor's bounds previously:
if (mGrid[mNewLocationX, mNewLocationY] == 0)
continue;
if (mMap.IsGround(mNewLocationX, mNewLocationY - 1))
onGround = true;
else if (mGrid[mNewLocationX, mNewLocationY + characterHeight] == 0)
atCeiling = true;
We'll update this by checking whether any of the contour blocks are within a solid block. If any of them are, then the character cannot fit in that position and the successor should be skipped.
Checking the Top and Bottom Blocks
First, let's iterate over all the top-most and bottom-most blocks of the character, and check whether they overlap a solid tile on our grid:
for (var w = 0; w < characterWidth; ++w)
{
if (mGrid[mNewLocationX + w, mNewLocationY] == 0
|| mGrid[mNewLocationX + w, mNewLocationY + characterHeight - 1] == 0)
goto CHILDREN_LOOP_END;
}
The CHILDREN_LOOP_END label leads to the end of the successor loop; by using it, we skip the need to first break out of the loop and then continue to the next successor in the successor loop.
When a Tile in the Air Can Be Considered "OnGround"
If any of the bottom blocks are right above a solid tile, then the successor must be on the ground. This means that, even if there's no solid tile directly under the successor cell itself, the successor will still be considered an OnGround node, if the character is wide enough.
for (var w = 0; w < characterWidth; ++w)
{
if (mGrid[mNewLocationX + w, mNewLocationY] == 0
|| mGrid[mNewLocationX + w, mNewLocationY + characterHeight - 1] == 0)
goto CHILDREN_LOOP_END;
if (mMap.IsGround(mNewLocationX + w, mNewLocationY - 1))
onGround = true;
}
Checking Whether the Character is at the Ceiling
If any of the tiles above the character are solid, then the character is at the ceiling.
for (var w = 0; w < characterWidth; ++w)
{
if (mGrid[mNewLocationX + w, mNewLocationY] == 0
|| mGrid[mNewLocationX + w, mNewLocationY + characterHeight - 1] == 0)
goto CHILDREN_LOOP_END;
if (mMap.IsGround(mNewLocationX + w, mNewLocationY - 1))
onGround = true;
if (mGrid[mNewLocationX + w, mNewLocationY + characterHeight] == 0)
atCeiling = true;
}
Checking the Blocks at the Sides of the Character
Now we just need to check that there aren't any solid blocks in the left and right cells of the character. If there are, then we can safely skip the successor, because our character won't fit that particular position:
for (var h = 1; h < characterHeight - 1; ++h)
{
if (mGrid[mNewLocationX, mNewLocationY + h] == 0
|| mGrid[mNewLocationX + characterWidth - 1, mNewLocationY + h] == 0)
goto CHILDREN_LOOP_END;
}
Conclusion
We've removed a fairly significant restriction from the algorithm; now, you have much more freedom in terms of the size of your game's characters.
In the next tutorial in the series, we'll use our pathfinding algorithm to power a bot that can follow the path itself; just click on a location and it'll run and jump to get there. This is very useful for NPCs!
In this tutorial, we'll use the platformer pathfinding algorithm we've been building to power a bot that can follow the path by itself; just click on a location and it'll run and jump to get there. This is very useful for NPCs!
Demo
You can play the Unity demo, or the WebGL version (100MB+), to see the final result in action. Use WASD to move the character, left-click on a spot to find a path you can follow to get there, right-click a cell to toggle the ground at that point, middle-click to place a one-way platform, and click-and-drag the sliders to change their values.
Updating the Engine
Handling the Bot State
The bot has two states defined: the first is for doing nothing, and the second is for handling the movement. In your game, though, you'll probably need many more to change the bot's behaviour according to the situation.
public enum BotState
{
None = 0,
MoveTo,
}
The bot's update loop will do different things depending on which state is currently assigned to mCurrentBotState:
void BotUpdate()
{
switch (mCurrentBotState)
{
case BotState.None:
/* no need to do anything */
break;
case BotState.MoveTo:
/* bot movement update logic */
break;
}
CharacterUpdate();
}
The CharacterUpdate function handles all the inputs and updates physics for the bot.
To change the state, we'll use a ChangeState function which simply assigns the new value to mCurrentBotState:
public void ChangeState(BotState newState)
{
mCurrentBotState = newState;
}
Controlling the Bot
We'll control the bot by simulating inputs, which we'll assign to an array of Booleans:
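The input buffer itself can be as simple as the sketch below. GoLeft, GoRight, and Jump appear later in this tutorial; GoDown (for dropping through one-way platforms) and the Count sentinel are assumptions, so match them to whatever your character controller already reads:
public enum KeyInput
{
    GoLeft = 0,
    GoRight,
    GoDown,
    Jump,
    Count
}

// One flag per simulated button; the character update reads these instead of real input.
bool[] mInputs = new bool[(int)KeyInput.Count];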
For example, if we want to simulate a press of the left button, we'll do it like this:
mInputs[(int)KeyInput.GoLeft] = true;
The character logic will then handle this artificial input in the same way that it would handle real input.
We'll also need an additional helper function or a lookup table to get the number of frames we need to press the jump button for in order to jump a given number of blocks:
int GetJumpFrameCount(int deltaY)
{
if (deltaY <= 0)
return 0;
else
{
switch (deltaY)
{
case 1:
return 1;
case 2:
return 2;
case 3:
return 5;
case 4:
return 8;
case 5:
return 14;
case 6:
return 21;
default:
return 30;
}
}
}
Note that this will only work consistently if our game updates with a fixed frequency and the character's starting jump speed is the same. Ideally, we'd calculate these values separately for each character depending on that character's jump speed, but the above will work fine in our case.
Preparing and Obtaining the Path to Follow
Constraining the Goal Location
Before we actually use the pathfinder, it'd be a good idea to force the goal destination to be on the ground. This is because the player is quite likely to click a spot that is slightly above the ground, in which case the bot's path would end with an awkward jump into the air. By lowering the end point to be right on the surface of the ground, we can easily avoid this.
First, let's look at the TappedOnTile function. This function gets called when the player clicks anywhere in the game; the parameter mapPos is the position of the tile that the player clicked on:
public void TappedOnTile(Vector2i mapPos)
{
}
We need to lower the position of the clicked tile until it is on the ground:
public void TappedOnTile(Vector2i mapPos)
{
while (!(mMap.IsGround(mapPos.x, mapPos.y)))
--mapPos.y;
}
Finally, once we arrive at a ground tile, we know where we want to move the character to:
public void TappedOnTile(Vector2i mapPos)
{
while (!(mMap.IsGround(mapPos.x, mapPos.y)))
--mapPos.y;
MoveTo(new Vector2i(mapPos.x, mapPos.y + 1));
}
Determining the Starting Location
Before we actually call the FindPath function, we need to make sure that we pass the correct starting cell.
First, let's assume that the starting tile is the bottom-left cell of a character:
This tile might not be the one we want to pass to the algorithm as the first node, because if our character is standing on the edge of the platform, the startTile calculated this way may have no ground, as in the following situation:
In this case, we'd like to set the starting node to the tile that's on the left side of the character, not on its center.
Let's start by creating a function that will tell us whether the character fits at a given position and, if it does, whether it's on the ground at that spot:
bool IsOnGroundAndFitsPos(Vector2i pos)
{
}
First, let's see if the character fits the spot. If it doesn't, we can immediately return false:
bool IsOnGroundAndFitsPos(Vector2i pos)
{
for (int y = pos.y; y < pos.y + mHeight; ++y)
{
for (int x = pos.x; x < pos.x + mWidth; ++x)
{
if (mMap.IsObstacle(x, y))
return false;
}
}
}
Now we can see whether any of the tiles below the character are ground tiles:
bool IsOnGroundAndFitsPos(Vector2i pos)
{
for (int y = pos.y; y < pos.y + mHeight; ++y)
{
for (int x = pos.x; x < pos.x + mWidth; ++x)
{
if (mMap.IsObstacle(x, y))
return false;
}
}
for (int x = pos.x; x < pos.x + mWidth; ++x)
{
if (mMap.IsGround(x, pos.y - 1))
return true;
}
return false;
}
Let's go back to the MoveTo function, and see if we have to change the start tile. We need to do that if the character is on the ground but the start tile isn't:
We know that, in this case, the character stands on either the left edge or the right edge of the platform.
Let's first check the right edge; if the character fits there and the tile is on the ground, then we need to move the start tile one space to the right. If it doesn't, then we need to move it to the left.
if (mOnGround && !IsOnGroundAndFitsPos(startTile))
{
if (IsOnGroundAndFitsPos(new Vector2i(startTile.x + 1, startTile.y)))
startTile.x += 1;
else
startTile.x -= 1;
}
Now we should have all the data we need to call the pathfinder:
The first argument is the start tile we've just determined. The second is the destination; we can pass this as-is.
The third and fourth arguments are the width and the height which need to be approximated by the tile size. Note that here we want to use the ceiling of the height in tiles—so, for example, if the real height of the character is 2.3 tiles, we want the algorithm to think the character is actually 3 tiles high. (It's better if the real height of the character is actually a bit less than its size in tiles, to allow a bit more room for mistakes from the path following AI.)
Finally, the fifth argument is the maximum jump height of the character.
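Put together, the call might look like the following sketch. The names mMap.mPathFinder, Map.cTileSize, mAABB (the character's bounding box), destination (MoveTo's parameter), and mMaxJumpHeight are assumptions; substitute your own:
var path = mMap.mPathFinder.FindPath(
    startTile,                                                // 1: the start tile we've just determined
    destination,                                              // 2: the goal tile, passed as-is
    Mathf.CeilToInt(mAABB.HalfSizeX * 2.0f / Map.cTileSize),  // 3: width in tiles, rounded up
    Mathf.CeilToInt(mAABB.HalfSizeY * 2.0f / Map.cTileSize),  // 4: height in tiles, rounded up
    (short)mMaxJumpHeight);                                   // 5: maximum jump height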
Backing Up the Node List
After running the algorithm we should check whether the result is fine—that is, if any path has been found:
if (path != null && path.Count > 1)
{
}
If so, we need to copy the nodes to a separate buffer, because if some other object were to call the pathfinder's FindPath function right now, the old result would be overwritten. Copying the result to a separate list will prevent this.
if (path != null && path.Count > 1)
{
for (var i = path.Count - 1; i >= 0; --i)
mPath.Add(path[i]);
}
As you can see, we're copying the result in reverse order; this is because the result itself is reversed. Doing this means the nodes in the mPath list will be in first-to-last order.
Now let's set the current goal node. Because the first node in the list is the starting point, we can actually skip it and proceed from the second node onwards:
if (path != null && path.Count > 1)
{
for (var i = path.Count - 1; i >= 0; --i)
mPath.Add(path[i]);
mCurrentNodeId = 1;
ChangeState(BotState.MoveTo);
}
After setting the current goal node, we set the bot state to MoveTo, so an appropriate state will be enabled.
Getting the Context
Before we start writing the rules for the AI movement, we need to be able to find what situation the character is in at any given point.
We need to know:
the positions of the previous, current and next destinations
whether the current destination is on the ground or in the air
whether the character has reached the current destination on the x-axis
whether the character has reached the current destination on the y-axis
Note: the destinations here are not necessarily the final goal destination; they're the nodes in the list from the previous section.
This information will let us accurately determine what the bot should do in any situation.
Let's start by declaring a function to get this context:
public void GetContext(out Vector2 prevDest, out Vector2 currentDest, out Vector2 nextDest, out bool destOnGround, out bool reachedX, out bool reachedY)
{
}
Calculating World Positions of Destination Nodes
The first thing we should do in the function is calculate the world position of the destination nodes.
Let's start by calculating this for the previous destination. This operation depends on how your game world is set up; in my case, the map coordinates do not match the world coordinates, so we need to translate them.
Translating them is really simple: we just need to multiply the position of the node by the size of a tile, and then offset the calculated vector by the map position:
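As a sketch, with Map.cTileSize standing in for the tile size in world units and mMap.position for the world-space position of the map's origin:
prevDest = new Vector2(mPath[mCurrentNodeId - 1].x * Map.cTileSize + mMap.position.x,
                       mPath[mCurrentNodeId - 1].y * Map.cTileSize + mMap.position.y);
// currentDest is computed in exactly the same way from mPath[mCurrentNodeId].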
And now for the next destination's position. Here we need to check if there are any nodes left to follow after we reach our current goal, so first let's assume that the next destination is the same as the current one:
nextDest = currentDest;
Now, if there are any nodes left, we'll calculate the next destination in the same way that we did the previous two:
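Using the same assumed names as in the sketch above:
if (mPath.Count > mCurrentNodeId + 1)
{
    nextDest = new Vector2(mPath[mCurrentNodeId + 1].x * Map.cTileSize + mMap.position.x,
                           mPath[mCurrentNodeId + 1].y * Map.cTileSize + mMap.position.y);
}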
The next step is to determine whether the current destination is on the ground.
Remember that it is not enough to only check the tile directly underneath the goal; we need to consider the cases where the character is more than one block wide:
Let's start by assuming that the destination's position is not on the ground:
destOnGround = false;
Now we'll look through the tiles beneath the destination to see if there are any solid blocks there. If there are, we can set destOnGround to true:
for (int x = mPath[mCurrentNodeId].x; x < mPath[mCurrentNodeId].x + mWidth; ++x)
{
if (mMap.IsGround(x, mPath[mCurrentNodeId].y - 1))
{
destOnGround = true;
break;
}
}
Checking Whether the Node Has Been Reached on the X-Axis
Before we can see if the character has reached the goal, we need to know its position on the path. This position is basically the center of the bottom-left cell of our character. Since our character is not actually built from cells, we are simply going to use the bottom left position of the character's bounding box plus half a cell:
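As a sketch, assuming mAABB is the character's axis-aligned bounding box (with Center and HalfSize as world-space vectors) and Map.cTileSize is the tile size in world units:
// Bottom-left corner of the bounding box, shifted by half a cell to get the centre of the bottom-left tile.
Vector2 pathPosition = mAABB.Center - mAABB.HalfSize + Vector2.one * Map.cTileSize * 0.5f;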
This is the position that we need to match to the goal nodes.
How can we determine whether the character has reached the goal on the x-axis? It'd be safe to assume that, if the character is moving right and has an x-position greater than or equal to that of the destination, then the goal has been reached.
To see if the character was moving right we'll use the previous destination, which in this case must have been to the left of the current one:
The same applies to the opposite side; if the previous destination was to the right of the current one and the character's x-position is less than or equal to that of the goal position, then we can be sure that the character has reached the goal on the x-axis:
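Both directions can be folded into a single check; here's a sketch using the values computed above:
reachedX = (prevDest.x <= currentDest.x && pathPosition.x >= currentDest.x)
        || (prevDest.x >= currentDest.x && pathPosition.x <= currentDest.x);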
Sometimes, because of the character's speed, it overshoots the destination, which may result in it not landing on the target node. See the following example:
To fix this, we'll snap the character's position so that it lands on the goal node.
The conditions for us to snap the character are:
The goal has been reached on the x-axis.
The distance between the bot's position and current destination is greater than cBotMaxPositionError.
The distance between the bot's position and the current destination is not very far, so we don't snap the character from far away.
The character did not move left or right last turn, so we snap the character only if it's falling straight down.
cBotMaxPositionError in this tutorial is equal to 1 pixel; this is how far off we let the character be from the destination while still allowing it to go to the next goal.
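A sketch of the snap itself, following the four conditions above. Using Map.cTileSize as the "not very far" threshold and mInputs to detect sideways movement are assumptions; mPosition is the character's world position:
float error = pathPosition.x - currentDest.x;

if (reachedX
    && Mathf.Abs(error) > Constants.cBotMaxPositionError   // off by more than the allowed error...
    && Mathf.Abs(error) < Map.cTileSize                     // ...but still reasonably close
    && !mInputs[(int)KeyInput.GoRight]                      // and not moving sideways,
    && !mInputs[(int)KeyInput.GoLeft])                      // so it's falling straight down
{
    mPosition.x -= error;          // shift the character so it lands on the goal node
    pathPosition.x = currentDest.x;
}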
Checking Whether the Node Has Been Reached on the Y-Axis
Let's figure out when we can be sure that the character has reached its target's Y position. First of all, if the previous destination is below the current one, and our character jumps to the height of the current goal, then we can assume that the goal has been reached.
Similarly, if the current destination is below the previous one and the character has reached the y-position of the current node, we can set reachedY to true as well.
Regardless of whether the character needs to be jumping or falling to reach the destination node's y-position, if it's really close, then we should set reachedY to true also:
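Here's a sketch of the combined y-axis check:
reachedY = (prevDest.y <= currentDest.y && pathPosition.y >= currentDest.y)
        || (prevDest.y >= currentDest.y && pathPosition.y <= currentDest.y)
        || Mathf.Abs(pathPosition.y - currentDest.y) <= Constants.cBotMaxPositionError;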
If the destination is on the ground but the character isn't, then we can assume that the current goal's Y position has not been reached:
if (destOnGround && !mOnGround)
reachedY = false;
That's it—that's all the basic data we need to know to consider what kind of movement the AI needs to do.
Handling the Bot's Movement
The first thing to do in our update function is get the context that we've just implemented:
Vector2 prevDest, currentDest, nextDest;
bool destOnGround, reachedY, reachedX;
GetContext(out prevDest, out currentDest, out nextDest, out destOnGround, out reachedX, out reachedY);
Now let's get the character's current position along the path. We calculate this in the same way we did in the GetContext function:
At the beginning of the frame we need to reset the fake inputs, and assign them only when a condition to do so arises. We'll be using only four inputs: two for moving left and right, one for jumping, and one for dropping off a one-way platform.
The very first movement condition is this: if the current destination is lower than the character's position and the character is standing on a one-way platform, then press the down button, which should make the character drop down through the platform:
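As a sketch, assuming the physics code exposes an mOnOneWayPlatform flag and the drop input lives in the GoDown slot:
// The goal is below us and we're standing on a one-way platform, so drop through it.
if (pathPosition.y - currentDest.y > Constants.cBotMaxPositionError && mOnOneWayPlatform)
    mInputs[(int)KeyInput.GoDown] = true;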
Let's lay out how our jumps should work. First off, we don't want to keep the jump button pressed if mFramesOfJumping is 0.
if (mFramesOfJumping > 0)
{
}
The second condition to check is that the character is not on the ground.
In this implementation of platformer physics, the character is allowed to jump if it has just stepped off the edge of a platform and is no longer on the ground. This is a popular way to mitigate the feeling that the game ignored a jump press, which can otherwise happen because of input lag or because the player pressed the jump button just after the character moved off the platform.
if (mFramesOfJumping > 0 && !mOnGround)
{
}
This condition will work if the character needs to jump off a ledge, because the frames of jumping will be set to an appropriate amount, the character will naturally walk off the ledge, and at that point it will also start the jump.
This will not work if the jump needs to be performed from the ground; to handle these we need to check these conditions:
The character has reached the destination node's x-position, where it's going to start jumping.
The destination node is not on the ground; if we are to jump up, we need to go through a node that's in the air first.
The character should also jump if it is on ground and the destination is on the ground as well. This will generally happen if the character needs to jump one tile up and to the side to reach a platform that's just one block higher.
Note that we decrement the mFramesOfJumping only if the character is not on the ground. This is to avoid accidentally decreasing the jump length before starting the jump.
Proceeding to the Next Destination Node
Let's think about what needs to happen when we reach the node—that is, when both reachedX and reachedY are true.
if (reachedX && reachedY)
{
}
First, we'll increment the current node ID:
mCurrentNodeId++;
Now we need to check whether this ID is equal to or greater than the number of nodes in our path. If it is, that means the character has reached the goal:
The next thing we must do is calculate the jump for the next node. Since we'll need to use this in more than one place, let's make a function for it:
public int GetJumpFramesForNode(int prevNodeId)
{
}
We only want to jump if the new node is higher than the previous one and the character is on the ground:
public int GetJumpFramesForNode(int prevNodeId)
{
if (mPath[currentNodeId].y - mPath[prevNodeId].y > 0 && mOnGround)
{
}
}
To find out how many tiles we'll need to jump, we're going to iterate through nodes for as long as they go higher and higher. When we get to a node that is at a lower height, or a node that has ground under it, we can stop, since we know that there will be no need to go higher than that.
First, let's declare and set the variable that will hold the value of the jump:
public int GetJumpFramesForNode(int prevNodeId)
{
if (mPath[currentNodeId].y - mPath[prevNodeId].y > 0 && mOnGround)
{
int jumpHeight = 1;
}
}
Now let's iterate through the nodes, starting at the current node:
public int GetJumpFramesForNode(int prevNodeId)
{
if (mPath[currentNodeId].y - mPath[prevNodeId].y > 0 && mOnGround)
{
int jumpHeight = 1;
for (int i = currentNodeId; i < mPath.Count; ++i)
{
}
}
}
If the next node is higher than the jumpHeight, and it's not on the ground, then let's set the new jump height:
public int GetJumpFramesForNode(int prevNodeId)
{
if (mPath[currentNodeId].y - mPath[prevNodeId].y > 0 && mOnGround)
{
int jumpHeight = 1;
for (int i = currentNodeId; i < mPath.Count; ++i)
{
if (mPath[i].y - mPath[prevNodeId].y >= jumpHeight && !mMap.IsGround(mPath[i].x, mPath[i].y - 1))
jumpHeight = mPath[i].y - mPath[prevNodeId].y;
}
}
}
If the new node height is lower than the previous, or it's on the ground, then we return the number of frames of jump needed for the found height. (And if there's no need to jump, let's just return 0.)
public int GetJumpFramesForNode(int prevNodeId)
{
int currentNodeId = prevNodeId + 1;
if (mPath[currentNodeId].y - mPath[prevNodeId].y > 0 && mOnGround)
{
int jumpHeight = 1;
for (int i = currentNodeId; i < mPath.Count; ++i)
{
if (mPath[i].y - mPath[prevNodeId].y >= jumpHeight)
jumpHeight = mPath[i].y - mPath[prevNodeId].y;
if (mPath[i].y - mPath[prevNodeId].y < jumpHeight || mMap.IsGround(mPath[i].x, mPath[i].y - 1))
return GetJumpFrameCount(jumpHeight);
}
}
return 0;
}
We need to call this function in two places.
The first one is in the case where the character has reached the node's x- and y-positions:
if (reachedX && reachedY)
{
int prevNodeId = mCurrentNodeId;
mCurrentNodeId++;
if (mCurrentNodeId >= mPath.Count)
{
mCurrentNodeId = -1;
ChangeState(BotState.None);
break;
}
if (mOnGround)
mFramesOfJumping = GetJumpFramesForNode(prevNodeId);
}
Note that we set the jump frames for the whole jump, so when we reach an in-air node we don't want to change the number of jump frames that was determined before the jump took place.
After we update the goal, we need to process everything again, so the next movement frame gets calculated immediately. For this, we'll use a goto command:
goto case BotState.MoveTo;
The second place we need to calculate the jump for is the MoveTo function, because it might be the case that the first node of the path is a jump node:
if (path != null && path.Count > 1)
{
for (var i = path.Count - 1; i >= 0; --i)
mPath.Add(path[i]);
mCurrentNodeId = 1;
ChangeState(BotState.MoveTo);
mFramesOfJumping = GetJumpFramesForNode(0);
}
Handling Movement to Reach the Node's X-Position
Now let's handle the movement for the case where the character has not yet reached the target node's x-position.
Nothing complicated here; if the destination is to the right, we need to simulate the right button press. If the destination is to the left, then we need to simulate the left button press. We only need to move the character if the difference in position is more than the cBotMaxPositionError constant:
else if (!reachedX)
{
if (currentDest.x - pathPosition.x > Constants.cBotMaxPositionError)
mInputs[(int)KeyInput.GoRight] = true;
else if (pathPosition.x - currentDest.x > Constants.cBotMaxPositionError)
mInputs[(int)KeyInput.GoLeft] = true;
}
Handling Movement to Reach the Node's Y-Position
If the character has reached the target node's x-position but still needs to jump or fall to reach its y-position, we can still move it left or right depending on where the next goal is. This just means that the character doesn't stick so rigidly to the found path. Thanks to that, it'll be much easier to get to the next destination: instead of simply waiting to reach the target y-position, the character will naturally be moving towards the next node's x-position in the meantime.
We'll only move the character towards the next destination if it exists at all and it's not on the ground. (If it's on the ground, then we can't skip it because it's an important checkpoint—it resets the character's vertical speed and allows it to use the jump again.)
But before we actually move towards the next goal, we need to check that we won't break the path by doing so.
Avoiding Breaking a Fall Prematurely
Consider the following scenario:
Here, as soon as the character walked off the ledge where it started, it reached the x-position of the second node, and was falling to reach the y-position. Since the third node was to the right of the character, it moved right—and ended up in a tunnel above the one we wanted it to go into.
To fix this, we need to check whether there are any obstacles between the character and the next destination; if there aren't, then we are free to move the character towards it; if there are, then we need to wait.
First, let's see which tiles we'll need to check. If the next goal is to the right of the current one, then we'll need to check the tiles on the right; if it's to the left then we'll need to check the tiles to the left. If they are at the same x-position, there's no reason to make any pre-emptive movements.
int checkedX = 0;
int tileX, tileY;
mMap.GetMapTileAtPoint(pathPosition, out tileX, out tileY);
if (mPath[mCurrentNodeId + 1].x != mPath[mCurrentNodeId].x)
{
if (mPath[mCurrentNodeId + 1].x > mPath[mCurrentNodeId].x)
checkedX = tileX + mWidth;
else
checkedX = tileX - 1;
}
As you can see, the x-coordinate of the node to the right depends on the width of the character.
Now we can check whether there are any tiles between the character and the next node's position on the y-axis:
The AnySolidBlockInStripe function checks whether there are any solid tiles between two given points on the map. The points need to have the same x-coordinate. The x-coordinate we are checking is the tile we'd like the character to move into, but we're not sure if we can, as explained above.
Here's the implementation of the function.
public bool AnySolidBlockInStripe(int x, int y0, int y1)
{
int startY, endY;
if (y0 <= y1)
{
startY = y0;
endY = y1;
}
else
{
startY = y1;
endY = y0;
}
for (int y = startY; y <= endY; ++y)
{
if (GetTile(x, y) == TileType.Block)
return true;
}
return false;
}
As you can see, the function is really simple; it just iterates through the tiles in a column, starting from the lower one.
Now that we know we can move towards the next destination, let's do so:
That's almost it—but there's still one case to solve. Here's an example:
As you can see, before the character reached the second node's y-position, it bumped its head on the floating tile, because we made it move towards the next destination to the right. As a result, it never reached the second node's y-position; instead, it moved straight on to the third node. Since reachedY was false at that point, the bot couldn't proceed with the path.
To avoid such cases, we'll simply check whether the character reached the next goal before it reached the current one.
The first move towards this will be separating our previous calculations of reachedX and reachedY into their own functions:
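As a sketch, the extracted helpers just wrap the checks from GetContext; the signatures match the calls used below:
public bool ReachedNodeOnXAxis(Vector2 pathPosition, Vector2 prevDest, Vector2 currentDest)
{
    return (prevDest.x <= currentDest.x && pathPosition.x >= currentDest.x)
        || (prevDest.x >= currentDest.x && pathPosition.x <= currentDest.x)
        || Mathf.Abs(pathPosition.x - currentDest.x) <= Constants.cBotMaxPositionError;
}

public bool ReachedNodeOnYAxis(Vector2 pathPosition, Vector2 prevDest, Vector2 currentDest)
{
    return (prevDest.y <= currentDest.y && pathPosition.y >= currentDest.y)
        || (prevDest.y >= currentDest.y && pathPosition.y <= currentDest.y)
        || Mathf.Abs(pathPosition.y - currentDest.y) <= Constants.cBotMaxPositionError;
}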
Now we can check whether the next destination has been reached. If it has, we can simply increment mCurrentNode and immediately re-do the state update. This will make the next destination become the current one, and since the character has reached it already, we will be able to move on:
if (checkedX != 0 && !mMap.AnySolidBlockInStripe(checkedX, tileY, mPath[mCurrentNodeId + 1].y))
{
if (nextDest.x - pathPosition.x > Constants.cBotMaxPositionError)
mInputs[(int)KeyInput.GoRight] = true;
else if (pathPosition.x - nextDest.x > Constants.cBotMaxPositionError)
mInputs[(int)KeyInput.GoLeft] = true;
if (ReachedNodeOnXAxis(pathPosition, currentDest, nextDest) && ReachedNodeOnYAxis(pathPosition, currentDest, nextDest))
{
mCurrentNodeId += 1;
goto case BotState.MoveTo;
}
}
That's all for character movement!
Handling Restart Conditions
It's good to have a backup plan for a situation in which the bot is not moving through the path like it should. This can happen if, for example, the map gets changed—adding an obstacle to an already calculated path may cause the path to become invalid. What we'll do is reset the path if the character is stuck for longer than a particular number of frames.
So, let's declare variables that will count how many frames the character has been stuck and how many frames it may be stuck at most:
public int mStuckFrames = 0;
public const int cMaxStuckFrames = 20;
We need to reset this counter whenever we call the MoveTo function:
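In MoveTo itself, before we request a new path, that's simply:
mStuckFrames = 0;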
And finally, at the end of the BotState.MoveTo case, let's check whether the character is stuck. We simply check whether its current position is equal to the old one; if it is, we increment mStuckFrames, and once the character has been stuck for more than cMaxStuckFrames frames, we call the MoveTo function again with the last node of the current path as the parameter. If the position has changed, we reset mStuckFrames to 0:
if (mFramesOfJumping > 0 &&
(!mOnGround || (reachedX && !destOnGround) || (mOnGround && destOnGround)))
{
mInputs[(int)KeyInput.Jump] = true;
if (!mOnGround)
--mFramesOfJumping;
}
if (mPosition == mOldPosition)
{
++mStuckFrames;
if (mStuckFrames > cMaxStuckFrames)
MoveTo(mPath[mPath.Count - 1]);
}
else
mStuckFrames = 0;
Now the character should find an alternative path if it wasn't able to finish the initial one.
Conclusion
That's the whole of the tutorial! It's been a lot of work, but I hope you'll find this method useful. It's by no means a perfect solution for platformer pathfinding: the approximation of the character's jump curve that the algorithm needs to make is often tricky and can lead to incorrect behaviour. The algorithm can still be extended (it's not very hard to add ledge-grabs and other kinds of extra movement flexibility), but we've covered the basic platformer mechanics. It's also possible to optimize the code to run faster and use less memory; this iteration of the algorithm is far from perfect in those respects, and it also approximates the curve quite poorly when falling at high speeds.
The algorithm can be used in many ways, most notably to enhance the enemy AI or AI companions. It can also be used as a control scheme for touch devices—this would work basically the same way it does in the tutorial demo, with the player tapping wherever they want the character to move. This removes the execution challenge upon which many platformers have been built, so the game would have to be designed differently, to be much more about positioning your character in the right spot rather than learning to control the character accurately.
Thanks for reading! Be sure to leave some feedback on the method and also let me know if you've made any improvements to it!
There's a basic problem in the world of mobile app development.
Developers invest a lot of time and money in creating a great app, and they deserve to be rewarded for that. But consumers are unwilling to pay. As Ryan Chang noted in a tutorial on pricing apps, paid apps only account for a small minority of overall app revenue.
One option, of course, is to offer a free app that includes in-app purchases. But many people hate those too, especially when the demands for money are too intrusive. So what's an honest app developer to do?
Amazon's newly-launched Amazon Underground offers an intriguing solution. Customers get to download and use your Android app completely free, and Amazon pays you for every minute that they spend using it.
Here's a one-minute explanation of how it works:
Customers do get shown some interstitial advertisements, so in a sense this is similar to other mobile advertising models. But what's different about Amazon Underground's model is that developers don't have to worry about setting up ads, or about whether users will click on them or not.
All that matters is how many people use your app and how long they use it for, so you can focus purely on creating a powerful user experience that keeps people coming back for more.
You're then paid a flat rate per user per minute: in the US, that's $0.0020, and the rates are similar in other countries, allowing for currency conversion.
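To put that rate in perspective: a user who spends 20 minutes a day in your app over a 30-day month earns you roughly $0.0020 × 20 × 30 = $1.20.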
Are You Eligible?
Not every app is a good fit for Amazon Underground. Because payment depends on active user engagement, a utility app that runs in the background or only gets used occasionally would not be suitable. Also, there are some eligibility requirements:
Your mobile app must be available for download from at least one other app store, such as Google Play, and be monetized in at least one of the following ways:
The app is available for purchase for a fee in all other app stores where it is sold.
The app contains in-app items that are available for purchase for a fee.
Your mobile app must not contain any subscription in-app items.
The features and gameplay of the Amazon Underground version of your app must be substantially similar to or better than the non-Underground version.
When you submit your app to the Amazon Appstore, you must make your app available for distribution on at least one non-Amazon mobile device.
September 30th is International Translation Day! No, really: it's all thanks to St. Jerome, the patron saint of translators. To celebrate, here's a quick rundown of the most popular translations on Tuts+ Game Development.
Note: the Tuts+ Translation Project is a community initiative, helping our content reach as many people as possible, all over the world. Find out how to get involved at the bottom of this post!
Do you have a solid grasp of English and written fluency in another language? If so, and you're willing to get involved, we'd love to have you translate some of our tutorials! All languages are welcome: find the link in the sidebar of a tutorial you'd like to translate, click it to visit the Native platform, and enter your details to volunteer.
Perhaps you have a favourite tutorial which you’d like to see translated into another language? Let us know in the comments. Lastly, stay up to date with Translation Project news by subscribing to the newsletter or following @tutsplus_es and @tutsplus_pt on Twitter!
Intel RealSense pairs a 3D camera and microphone array with an SDK that allows you to implement gesture tracking, 3D scanning, facial expression analysis, voice recognition, and more. In this article, I'll look at what this means for games, and explain how you can get started using it as a game developer.
What is RealSense?
RealSense is designed around three different peripherals, each containing a 3D camera. Two are intended for use in tablets and other mobile devices; the third—the front-facing F200—is intended for use in notebooks and desktops. I'll focus on the latter in this article.
The F200 itself contains:
A conventional colour camera
An infra-red laser projector and camera (640x480, 60fps)
A microphone array (with the ability to locate sound sources in space and do background noise cancellation)
The infra-red projector and camera can retrieve depth information to create an internal 3D model of whatever the camera is pointed at; the colour information from the conventional camera can then be used to colour this model in.
The SDK then makes it simpler to use the capabilities of the camera in games and other projects. It includes libraries for:
Hand, finger, head, and face tracking
Facial expression and gesture analysis
Voice recognition and speech synthesis
Augmented reality
3D object and head scanning
Automated background removal
Note that, as well as allowing you to track, say, the position of someone's nose or the tip of their right index finger in 3D space, RealSense can also detect several built-in gestures and expressions, like these:
What RealSense Brings to Games
Here are a few examples of how RealSense can be (and is being) used in games:
Nevermind, a psychological horror game, uses RealSense for biofeedback: it measures the player's heart rate using the 3D camera, and then reacts to the player's level of fear. If you lose your cool, the game gets harder!
MineScan, by voidALPHA, is a proof-of-concept that lets you scan real-world objects (like stuffed animals) into Minecraft. Any 3D PC game with an emphasis on mods or personalisation could use RealSense's scanning capabilities to let players insert their own objects (or even themselves!) into the game.
Faceshift uses RealSense for motion capturing faces in detail. This technology could be used in real-time, within a game, whenever players talk to one another, or during production time, to record an actor's expressions as well as their voice for more realistic characters.
There Came an Echo is a tactical RTS that uses RealSense's voice recognition capabilities to let the player command their squad. It's easy to see how this could be adapted to, for instance, a team-based FPS.
Years ago, Johnny Lee explained how to (mis-)use a Wii controller and sensor bar to track the player's head position and adjust the in-game view accordingly. Few games, if any, actually made use of this (no doubt because of the unorthodox setup it required)—but RealSense's head and face tracking makes this possible, and much simpler.
There are also several games already using RealSense to power their gesture-based controls:
Laserlife, a sci-fi exploration game from the studio behind the BIT.TRIP series.
Head of the Order, a tournament-style fighting game set in a fantasy world, where players use hand gestures to cast spells at each other.
Space Between, in which you use swimming hand motions to guide turtles, fish, and other sea creatures through a series of tasks in an underwater setting.
Gesture controls aren't exactly new to gaming, but previously they've been almost exclusive to Kinect. Now, they can be used in PC games—that means Steam, and even the web platform.
To get started, download the RealSense SDK, which includes:
libraries and interfaces for Java, Processing, C++, C#, and JavaScript
a Unity Toolkit with scripts and prefabs
code samples and demos
documentation
Next, take a look at the RealSense SDK Training site. Here, you'll find guides to get you started, tutorials to guide you through using certain features (including the Unity Toolkit), and videos of previous webinars. We'll also be publishing RealSense tutorials on Tuts+ over the next few weeks.
We've covered what RealSense is, what game developers are using it for, and how you can get started using it in your own games. Keep an eye on the Tuts+ Game Development section over the next few weeks for some tutorials on head scanning, keyboard-free typing, and expression recognition.
The Intel® Software Innovator program supports innovative independent developers who display an ability to create and demonstrate forward looking projects. Innovators take advantage of speakership and demo opportunities at industry events and developer gatherings.
Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for the Internet of Things, Android*, Intel® RealSense™ Technology, Modern Code, Game Dev and Windows* to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.
In 2010, a string of high-profile bullying-related suicides of gay school students prompted teenager Brittany McMillan and US organisation GLAAD to create Spirit Day: an annual show of support for LGBT (lesbian, gay, bisexual, and transgender) youth who are victims of bullying.
Sadly, the 2010 suicides that prompted the founding of Spirit Day were not isolated events. In 2014, 17-year-old Leelah Alcorn—a transgender girl with an interest in both gaming and game development—took her own life after her parents refused to accept her identity and forced her to go to "conversion camp". In her suicide letter, she stated, "My death needs to mean something."
The news of her death spread and, as in 2010, people around the world rallied to show their support for LGBT youth. One pair of indie game developers organised JamForLeelah, which they described as a trans-positive game jam, saying that "an indie game jam felt like a possible way to raise awareness for Leelah's plea for social change, in a method she may not have only approved of, but also taken part in."
In the guidelines for the jam, they note:
Even with exposure to trans issues, it can be difficult to navigate what is and isn't offensive, and how best to illustrate certain concepts without being within the community itself [...] We highly recommend that you reach out to the community and collaborate with the many transgender voices which exist.
In the spirit of the above, I'd like to share some games that expose LGBT issues via their mechanics, recommend a few notable games that include LGBT characters and content without them being the main focus, and highlight talks and articles that can help all game developers better address LGBT issues and include LGBT characters.
i've been saying for a while now that supposedly lgbt-friendly game developers like bioware only see queer people (lgb people, at least) as consumers. nothing could make the point more clearly than bioware’s decision to add gay romances to star wars: the old republic – on a single planet in the galaxy, which players have to pay for the privilege of visiting. pay to enter a dystopian future where queers are exiled to a far-off, backwater planet!
Supposedly home to hungering demons and a beast that would destroy the village, the forest is forbidden and nobody knows what's on the other side. However, our hero's beloved—tired of the oppressive attitude of the villagers—decided to go there, as anywhere would be better than home. Now it's your turn to follow after.
LGBT Game Jams
Many of the above games were made during game jams whose theme or purpose revolved around LGBT issues. The home pages for these jams therefore include even more examples of such games, as well as tips, help, and support for those wishing to make some of their own:
QUILTBAG Jam: "A game jam inspired by queerness, politics, theory, and themes related to gender and sexual identity. What is queerness? What is a game? This game jam is an opportunity to answer these questions - to explore, celebrate, deconstruct, and subvert these ideas. You could make games about: rainbows, protests, riots, butches, queens, a non-binary worldview, that person you fantasized about in 10th grade!"
Games [4Diversity] Jam: The 2014 jam's aims were to "[explore] ways to incorporate feminine and LGBT aspects to games in a constructive way: creating games. Hereby the 24-hour development session will show that female, gay, lesbian, bisexual and transgender issues can enrich games in an innovative and positive manner. So let’s jam! And diversify the game industry content now!"
Queer Game Jam: This 2014 game jam had the theme, "Queers and Allies: Making games together".
JamForLeelah: As mentioned above, "a month long trans positive game jam to raise awareness on LGBTIQ issues, specifically trans youth issues and Leelah's Law."
GXDev: "A 24 hour game jam focused on the themes of LGBT/Queerness. [The] goal is to foster the growth of LGBTQ devs and encourage games with topics that relate to or come from a queer perspective. GXdev is focused on helping queer and diverse voices begin to make games."
Notable Games With LGBT Content
There are many games that could be included here; I will cut this list down to just four.
The main character, Dominique, is frequently misgendered throughout the game. Dierdra Kiai, the creator of the game, has said of this:
In a regular game the detective would be coded male in a lot of ways, the hard-boiled detective, and the femme fatale, to contrast. But it doesn’t have to be that way. There could also be a woman detective. The categories of ‘man’ and ‘woman’ are so limiting too.
The Last of Us
The Last of Us was one of 2013's most popular Triple-A PS3 games, with high critical acclaim across the board—including from GLAAD, who praised the depiction of Bill:
One of the characters the player encounters over the course of the game is Bill, an unstable loner in the town of Lincoln with a talent for fixing things. Through dialogue and backstory, the player learns that Bill once had a partner named Frank who he loved, but the plague drove them apart and led Frank to a bitter end. Both helpful and contentious, Bill is as deeply flawed but wholly unique a gay character as found in any storytelling medium this year.
GLAAD gave Dragon Age: Inquisition a Special Recognition Award for "the many complex and unique LGBT characters prominently integrated throughout the game". This article from The Daily Dot explains why Dragon Age deserves this:
What really sets DA:I apart though, are the handful of major LGBT characters it features that actually hit the full spectrum of L, G, B, and T. Three of these—Sera, Dorian, and the "Iron Bull"—make up a third of the playable in-game companions the player can take out on adventures, and each has a strong personality and rich backstory waiting to be uncovered.
I also recommend reading On "The Gay Thing", by David Gaider, who wrote for previous Dragon Age games:
We make roleplaying games, which means that the character you play doesn’t have to be yourself, but I believe there’s an element where having a game acknowledge that you exist can be validating in a way most people never consider—no doubt because they have no need for validation, and thus no knowledge as to what the lack of it can do to someone.
Talks and Articles About LGBT in Game Design and Development
QGCon (the Queerness and Games Conference) is another annual event, but unlike GDC, it is specifically focused on "exploring the intersection of LGBTQ issues and video games".
All previous talks are available on their website; a good place to start is 2014's Queerness and Video Games: A Crash Course:
If you've ever wanted to learn the basics of video games, the games industry, game studies, or queer studies, we've got you covered.
GaymerX and GXTalks
GaymerX is an annual convention; unlike GDC and QGCon, it's focused less on game developers, and more on gamers of all stripes. As they put it:
We focus on creating a fun and safe space for gamers and gaymers of all identities to have fun and hang out with like minded folks. GaymerX is a “queer space”, in that many of the panels revolve around queer issues or queer devs, but GX3 is made for everyone and everyone is welcome!
Although the convention is aimed at gamers, many of the panels are relevant to game developers. For example, here's BioWare's 2014 talk on Building a Better Romance:
There are a lot of lessons to be learnt from other media here, of course. Writing Gay Characters (linked from the last article in that list) is a good example to start with.
The LGBT Gaming Scene
Gaming in Color is "a feature documentary exploring the queer side of gaming: the queer gaming community, gaymer culture and events, and the rise of LGBTQ themes in video games."
Diverse queer themes in game storylines and characters are an anomaly in the mainstream video game industry, and LGBTQ gamers have a higher chance of being mistreated in social games. Gaming In Color explores how the community culture is shifting and the industry is diversifying, helping with queer visibility and acceptance of an LGBTQ presence.
Finally, it's worth taking a look at The Daily Dot's Complete History of LGBT Game Characters. It highlights the fact that LGBT characters have been around almost as long as video games themselves—hidden in plain sight—and they've become some of the most beloved, enduring characters around.
Conclusion
It is important to support the LGBT community. The above games do that, and the above talks and articles show how to do that better. Please help by sharing your own examples of games, talks, and articles in the comments.
Unreal Tournament, Blood Bowl, and Rocket League belong to three different genres (FPS, round-based tactics, and driving), but all share the same theme: Weird Sport.
Previously, we've focused more on underused gameplay genres, and what makes them tick; in this article, we'll look at six broad themes that have never become as prevalent as WWII or Modern Military, and that could suit many different gameplay genres.
Although the majority of Western-themed games are first-person shooters, the theme can support many different genres: RTS, real-time tactics, or point-and-click, for example.
The theme is defined by more than the appropriate time and locale; it's essential to use the right tropes and motifs if you want to get the "feel" right. In fact, if these rules are used well, the setting can be something other than the American frontier and still feel like a Western—see Space Westerns like Firefly and Outland, or Neo-Westerns like Last Man Standing.
Using common Western tropes as a basis for gameplay is a fun way to thread the theme throughout the game. The various Call of Juarez games do this nicely by including duelling mechanics: in the first two games, the player can pull two revolvers at the same time and slow down time, so they can fire both accurately. Call of Juarez: Gunslinger also allows the player to slow down time, and additionally introduced a completely new duelling mechanic, where the player must focus on multiple opponents and make sure they draw their weapon in time.
The Desperados games are similar to the real-time tactics Commandos series, which allow the player to control a group of characters that correspond to Western archetypes: the Doctor, the Gunslinger, the Femme Fatale, the Bandit. The places visited are also common "Old West" locations: saloons, Alamo-style forts, wagon trains and frontier farms.
The city builder combines many elements into a single experience—not just placing buildings and paths, but also managing an economy, trading with other cities, and controlling a military force.
There are many ways to approach a city builder. They can be set in pretty much any scenario: the Stronghold series is set in the medieval era; Anno 1602 is set in the Colonial era; Anno 2070 and Startopia are set in a sci-fi future.
Gameplay focuses on the construction and care of the "city"—which could be a settlement, village, island, or even space-station, instead of a literal city metropolis. Then, the player expands the city, with each step gaining them more resources to continue building and caring for it.
Everything outside of that gives you a chance to make a game unique. Anno 1701 introduced light Adventure/RPG mechanics: the player is issued quests which they must complete using their player avatar (their ship), while others compete with them for rewards. In Anno 1404 these became unique items that could buff the player's islands or ships.
Stronghold and Age Of Empires offer a pretty even split between city-building and military actions, while SimCity and Cities Skylines lack these entirely.
Banished focuses on survival: how can these people work together to get everything necessary to make it through the winter?
So while the basis of the city builder remains the same, their interpretations differ wildly, allowing a lot of room for you, the designer, to add unique aspects.
In a traditional tower defence game, the player must defend against an onslaught of enemies by placing defensive towers. As the enemy waves move through the player's base, the towers shoot them down—hopefully before they reach their goal!
Anomaly Warzone Earth reverses this: here, the invading aliens build stationary towers, with fixed fields of fire, while a convoy of the player's tanks moves through the streets. The player can only control the tanks indirectly, by choosing the route they will follow, but not how fast they move or which towers they attack. As such, the player must plan carefully, so they can pick the route that approaches alien towers from the side, and that allows them to gather the most resources, which are scattered around the level.
In addition to this, the player controls a solitary "hero unit" directly. With it, they can pick up and activate special powers, which include healing friendly units and deploying smoke-screens which cause enemy shots to miss. They can also call in new vehicles and upgrade them.
The tower defence theme has been explored and experimented with thoroughly over the years, but the reverse tower defence theme feels like it is in its infancy. Strip it back to its bare essentials—your waves of troops want to get past sets of stationary towers—and there's a lot of potential to work with.
Games like Team Fortress 2, with its 60s spy fiction setting, Tropico, where the player takes the role of a dictator, or even the Grand Theft Auto series, with all of its less-than-angelic protagonists, come close to this theme, but none quite hit the mark. The "evil overlord" theme is about the setting: being a straight-up bad guy, and leading a crew of underlings.
Evil Genius casts the player as a Bond villain with their own evil island hideout. They can extend and improve their base, and hire a private army of minions. These minions and henchmen play a vital part: the player can send them around the world on missions for income, and later they are the ones who go up against the spies that attack the base.
All the while, the base needs taking care of and evil science has to be done. Like Tropico, the game does not take itself seriously and is very campy, which contrasts well with the idea of being evil.
In Overlord the player takes the role of the titular Overlord, a Sauron-type being. They command a group of imps to do their bidding—that is, terrorizing and subjugating the populace, and building up their evil fortress. Between missions, they can return to this evil lair, which grows more opulent over time.
These games generally do not allow the player to be directly evil; there is always some counterbalance. Everything in Evil Genius is over-the-top and funny. In Overlord, on the other hand, the player mostly fights other, more evil entities.
Games about real-life sports are immensely popular, with most franchises having yearly instalments: just by tweaking the setup a little, you can create a new game. Tweak the setup a lot, however, and you can create a "weird sport".
Rocket League is the latest game to show that having a "sport" with different rules can be a huge success. In this case: "What if you played football with cars? And what if it isn't cars, but awesome badass rocket-powered cars? Which you can put hats on?" Suddenly you have an entirely new system with entirely new rules and implications.
Blood Bowl asks the question "What if the teams in this sport are orcs, elves, dwarves and ratmen who can fight each other?" The resulting game is a round-based tactics game where you have to get a ball to the other side of the field, but can also send your orcs out to gang up on other players.
The Bombing Run Mode of Unreal Tournament 2004 asks you to get a ball-shaped device into the heart of the base of your opponents. Suddenly, even though the game is an FPS, shooting people becomes less important, and passing the ball along offers a lot of room for new strategies.
Here, one player is the mutant. The mutant has special and unique powers, and fights everyone else. In Evolve, for instance, one player plays as a giant monster, and four other players fight together in a team to hunt it down.
In The Hidden, a wonderful mod for Half-Life 2, the titular Hidden is a near-invisible player with a knife, who can jump very high, very far, and stick to walls. They are not completely invisible, though: a Predator-style rippling effect can be seen. Opposing the Hidden is a team of scared, fragile marines with assault rifles and shotguns.
No matter what side you are on, it is tense. Whoever plays as the Hidden must stay out of sight, try to isolate players, and attack them from behind. While doing this they are constantly afraid of being spotted or driven into a corner. Anyone playing as a soldier is also constantly afraid, turning around to double- and triple-check whether they saw something, and getting twitchy when seeing the air distort, even if it is just steam.
A simple version of mutant multiplayer can be found in Unreal Tournament 2004. In Mutant Deathmatch, the mutant gets all weapons and maximum health and armour. Anyone who manages to kill the mutant gets extra points, and then becomes the mutant. This is especially fun in a free-for-all deathmatch, as it completely switches around the balance of the game: players stop fighting each other and combine forces temporarily to bring down the mutant.
Conclusion
Next time you brainstorm ideas for a new game, or enter a game jam, instead of setting a game in an over-saturated theme, try using one of the above! You could even try a combination of theme and gameplay genre that has never been done before, like RTS + Mutant Multiplayer, or Hacker Game + Evil Overlord.
As level designers, it is often our job to convey a wealth of information to players for a lot of reasons. Sometimes we want to show off the cool art in our level. Sometimes we want to direct players to their goals, explore the game's story or theme, or build and release tension. Before we can do any of this, though, we must first know how to direct players' attention towards where we want it to go.
There are many methods designers use to get players' attention. Some games don't mind taking control away from the player, and so use cut-scenes (or camera fly-throughs) to direct players' attention to all the cool things the designer wants them to see. Other games, to give the players more control, implement a "look at the cool stuff" button to show important things to players.
This article, though, is about my favorite way to direct the players' attention: by deliberately designing your level to take advantage of what I call Views, and their close cousins, Vistas.
I like to use all the other methods too (usually layered together), but any design goal I can accomplish during the level design phase is a problem I won't have to solve later in development, when it's more likely to be solved with something more complex and requiring more work (like a cut-scene).
What Are Views and Vistas?
View: An arrangement of the camera and level geometry to create a well-constructed view of something important in a level.
I'm referring to any place in a level that is set up to ensure players are looking at something important. More often than not, this takes the form of a more specific kind of View, a Vista.
Vista: A view, especially one through a long narrow avenue of trees or buildings. (via Dictionary.com)
Views and Vistas require thoughtful placement of landmarks and focal points and skilful framing of the game's camera, so games most often use these whenever they have an opportunity to control those variables. For example, at the beginning of a level (or any other time after a load) the designer has the opportunity to set these things up on frame 1, before the player takes control.
That's not the only way they're useful, however. Many games take advantage of pinch points, corridors, or other parts of a level that encourage the player to point the camera in a certain direction. For example, if the player is forced to walk through a door, the camera will likely be pointing through the door, so a Vista could be constructed on the other side of the door (since you'd know the most likely direction the camera would be pointed in).
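To make the "frame 1" case concrete, here's a minimal Unity C# sketch of the idea, written as my own illustration under stated assumptions (the class and field names are hypothetical, not taken from any particular game): on the first frame after a load, the camera is aimed at a chosen focal point before the player takes control.

using UnityEngine;

// A sketch, not a production camera system: aim the level camera at a focal
// point on the first frame after a load, before player input is processed.
public class OpeningView : MonoBehaviour
{
    public Camera levelCamera;    // the camera the player will start with
    public Transform focalPoint;  // the landmark the opening View should frame

    void Start()
    {
        if (levelCamera == null || focalPoint == null)
            return;

        // Point the camera straight at the focal point on frame 1.
        levelCamera.transform.LookAt(focalPoint.position);
    }
}

From there, a real game would usually blend from this opening framing into the player-controlled camera, rather than snapping away from it.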
Views and Vistas are really powerful tools, and especially so when used in conjunction with the other methods I mentioned earlier for getting players' attention (like cut-scenes).
1. To Show Off Your Game's Art
The most obvious benefit of a View or Vista is to show players the awesome art in your game, like in this screenshot from Middle-earth: Shadow of Mordor.
The above shot is taken at one of the game's checkpoints. The checkpoints are tall square towers with columns at each of the four corners. Through each pair of columns, the player can get good views of the game's very nice environments.
2. To Demonstrate Your Game's Hook
If your game has an interesting hook, either in your technology, art, or gameplay, Views and Vistas offer an opportunity to set those features up as a focal point.
In Super Mario Galaxy, this opening view of the first planet helps demonstrate non-flat gravity, which serves as the game's main hook. The camera is set up such that the planet's spherical horizon, and the objects that appear as the horizon changes, is a focal point.
3. To Direct Players to Content
This screenshot from Elder Scrolls: Oblivion is taken right outside the tutorial dungeon. This View is given to you; the camera is placed specifically at this angle. This kind of View is very common in Elder Scrolls games, since designers always know where the camera will be when you walk out a door.
Note how the dock in the water draws the eye to the ruins, which are not a mandatory part of the game—this is because Oblivion is trying to teach the player about their open world. They want to encourage exploration without forcing the player to do anything, and this kind of View is one tool they use to do this.
4. To Explore the Game's Theme or Story
This shot from Super Mario Galaxy has two purposes:
The Vista created by the two hills draws the player along the pathway, which goes to the next story point.
The player sees that the sky is falling through the Vista—these falling stars are the main point of the game's story.
The first level of Half-Life 2 frequently gives you Vistas like this that frame video screens and NPC conversations. This gives the game an opportunity to tell you what's going on while it's pretty sure you're watching, without taking control away from the player.
Note on Player-Controlled Cameras
The designer doesn't need complete control over the camera to present the player with good views. The View in the above Half-Life 2 screenshot is not forced—the player comes out of the train on the left and then turns 90 degrees to face this way—but since the player will have to walk down this narrow corridor between trains, the View is all but assured.
5. To Manage Tension and Intensity
As I mentioned before, the first level of Half-Life 2 is set up to convey a lot of information to players by using Views or Vistas to show video screens or NPC interactions. The video screens serve mainly to give the player an idea of what's going on; the NPC interactions, though, are mainly there to creep players out.
Take this switchback before a security checkpoint as an example:
As players walk down the first "corridor" made by the fences, the game can be reasonably sure they are looking straight ahead. When they get about halfway, a sequence begins where the player can watch one of the two civilians on the right proceed through the checkpoint with no fuss.
When players come back down the third corridor, the second civilian goes through the checkpoint, likewise with no fuss. The player sees both civilians exit through the door at the far end of this corridor.
When the player enters the checkpoint, the game breaks the pattern. The soldiers don't attack, but one instead steps in front of the doorway you saw the civilians enter. The soldiers motion you through another door to the left—one you couldn't see from the switchback. Breaking the pattern jars the players, and makes them wonder why they were shunted to the other door. Will there be a threat?
A Good Opening View is Crucial
The opening view in your level gives players their first impression of the level. As you can see from the advantages I listed above, this gives designers an amazing opportunity to make a good first impression.
The games I can most speak to on this count are the PlayStation 2 Ratchet & Clank games, which I worked on as a designer.
In the video below, you'll see the opening view of Metropolis, from the first Ratchet & Clank. This view is what you see after walking from your ship through a long tunnel that creates a Vista through which you can see bits of the View. Upon emerging into the area depicted in the video, the player gets access to the whole enchilada.
One of our Level Design mandates in these games was the creation of a good opening view. Over time, we broke that down into three really big requirements for the opening view:
1. Include Motion to Create Interest
The layers and layers of flying cars serve as the movement in the view above. We found that motion makes a view much more arresting than just a static screen.
Note: For a game that wants a slower pace or a heavier tone, reducing motion may help those goals, but for Ratchet & Clank we wanted a fast pace and a light tone.
2. Show the Player a Goal
The building centered in the video above is the train station, which is your ultimate destination in this level. Getting there starts a train-ride sequence through the city, which ends with the path leading back to the start point.
3. Show Off Art and Tech
This game was first made on the PS2 (the version above is the HD edition), so one of our big technical selling points was how far into the distance we could see, compared to other PS2 games at the time. Our artists used the technology to create beautiful views like the one in the video above.
Focal Points
To achieve these goals, we used a number of tricks to position the camera for maximum effect. The first thing we had to do, though, was pick out the focal points in the level that we wanted to show. Focal points are whatever you decide are the most important parts of your view, and how you pick them depends on your goals.
Are you showing off your art?
Are you trying to indicate the way forward?
Are you trying to tell a story or explore a theme?
Whatever your goals are, find the things in your View that you want the player to see and then design the level so that it meets those goals.
In the above image, there are three major things to focus on:
A. A goal: The player will go through here in a few minutes.
B. Art: The giant towers in this level are the major feature.
C. The path forward: The little "diving board" area indicated draws the player forward (since the lines point towards A).
Once you know your focal points, you're going to want to draw attention to them somehow. There are many ways to do this; here are a couple:
Draw Attention
Draw the player's attention to the focal points of your composition, and put the most emphasis on those points you rank high in importance.
Vistas and Motion
Half-Life 2 uses a combination of a Vista and some motion to teach the barnacle enemy's behavior to the player.
The player walks up to the gap in the train.
The slope makes sure the player sees the pigeon and the gap makes sure the player is looking in the right direction.
The motion of the pigeon ideally makes the player follow it with the camera.
Once the pigeon gets eaten, the player has been taught how the Barnacle attacks (the tongue dangles, and if you touch it, the thing tries to eat you—just like it ate the pigeon).
Lines
We talked about this earlier, but it's worth mentioning again. In the screenshot below, note how there are several lines formed by the environment that point towards the ruins:
This draws the player's attention to the ruins, without using a big neon arrow and a sign saying "Check out these ruins!".
Align the Camera
One great way to direct players' attention is by lining up the camera so that important parts of the View fall into 1/3rd "slices" of the screen. This is an old concept, dating back at least to the Renaissance.
A full explanation of this is outside the scope of this tutorial, but for more information on the Rule of Thirds, see this photography tutorial.
I'll explain it with this image you saw earlier from Super Mario Galaxy:
The Tic-Tac-Toe symbol drawn over the screen above divides it into thirds. Notice how the important parts of the view are all more or less either fully contained inside one of the boxes, or intersected by one or more of the lines. This helps bring the players' attention where you want it.
Depending on your goals, you can do a lot with this little tic-tac-toe diagram.
For example, let's say you want to focus the player on a single goal or destination. One way is to put the thing you're going for in the center box:
The above picture not only centers the view on the ruins, but also has the player looking down a Vista made by the vertical posts on the dock.
In Oblivion, the designers likely wanted you to think about exploring the ruins, but they also didn't want you to think it's mandatory. Centering it like I did in the image above makes it hard for the eye to see anything besides the ruins, and makes it seem more important than it is in the overall scheme of things. It's pushy, basically.
The above shot is the one actually used in Oblivion (the same one I used in the previous section to talk about lines). The focus is still on the ruins, but the view around the ruins is much more accentuated. It's not all about where you're going, in this shot; it's also about how big the world is.
The bridge is located where the bottom and right lines cross, and it spans between the lower right and the center rectangles. As I mentioned before, this helps lead your eye to the ruins.
But what if you want the players to notice more than one thing?
The ruins in the picture above fall more or less on the crossing point between two lines, which sets them apart in the composition. The bridge is centered, which also draws attention to the ruins, since it's pointing at them.
But what the above screen has that the others don't is the trail leading off-screen to the lower-right which is aligned with the bottom third line of the screen, and which is pointed to by the lines of the bridge. That trail leads towards the Critical Path and quests, so if my goal was to split the player's focus, that's how I'd do it.
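If you're composing Views in an engine like Unity, you can even check this alignment programmatically. The sketch below is my own illustration (the class and method names are hypothetical, not from any shipped tool): it converts a focal point's world position into viewport coordinates and reports how close it sits to the nearest third line, which is handy while tuning a camera placement.

using UnityEngine;

// A sketch for tuning camera placement: how close is a focal point to the
// rule-of-thirds lines? Viewport coordinates run from (0,0) to (1,1), so the
// third lines sit at 1/3 and 2/3 on each axis.
public class ThirdsChecker : MonoBehaviour
{
    public Camera viewCamera;     // the camera used for the View
    public Transform focalPoint;  // e.g. the ruins, the bridge, the trail

    // Distance (in viewport units) from the focal point to the nearest
    // vertical and horizontal third line.
    public Vector2 DistanceToThirds()
    {
        Vector3 vp = viewCamera.WorldToViewportPoint(focalPoint.position);
        float dx = Mathf.Min(Mathf.Abs(vp.x - 1f / 3f), Mathf.Abs(vp.x - 2f / 3f));
        float dy = Mathf.Min(Mathf.Abs(vp.y - 1f / 3f), Mathf.Abs(vp.y - 2f / 3f));
        return new Vector2(dx, dy);
    }

    void Update()
    {
        if (viewCamera == null || focalPoint == null)
            return;

        Vector2 d = DistanceToThirds();
        // Log while composing the shot; anything under roughly 0.05 is "on a third".
        Debug.Log("Focal point offset from thirds: " + d.x + ", " + d.y);
    }
}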
Views in Non-Linear Games
Speaking of Oblivion, I want to talk briefly about how these principles are applied in non-linear games. We've talked about how Views and Vistas are used in Oblivion when the designer controls the camera at certain points (like after a load), but they are also frequently used in many character-controlled areas by funneling the player towards the View.
The screenshot above, from Middle-earth: Shadow of Mordor, shows how this is done. The game is vastly open, but frequently funnels the player through narrow areas that serve as Vistas. The Views on the other side of the tunnels are carefully composed to show you interesting places to go.
Conclusion
Views are important. Opening views are super important. Using the tricks above, you can build these into your level designs so that later you can add onto them with other methods, if needed.
A couple of years ago, I wrote a book analyzing every level in Super Mario World (1990), where I identified the systematic way in which most of the game’s levels were created. Super Mario World is a classic and worth studying in its own right, but the lessons it teaches are broadly applicable to video game design. The "Super Mario World method", or "Nintendo method" of level design actually appears in all manner of games—even games that were not made by Nintendo.
In this tutorial, we'll look at a natural form of organization that evolved in level designs from the 1990s and beyond. As video game designers (especially Nintendo designers) got more and more practiced at their craft, they began to organize their content intuitively. Here, I'll explain how you can use all of these intuitive tricks deliberately in your own games.
My Example
I've created (and annotated) a Super Mario Maker level using this method; you can see it here:
Level ID: 0740-0000-00CD-4D5B
Challenge, Cadence, Skill-Theme
The actual name for the framework we’re looking at is “challenge, cadence, skill-theme” or CCST for short. I’m going to use examples from Super Mario Maker to explain the fundamental parts of the CCST structure because it’s a very convenient example. However, I have seen use of the CCST structure in Mega Man, Metroid, Portal, Half-Life, Super Meat Boy, and many other games. With practice, you will be able to identify these things in some of your favorite games, so that you can bring a greater level of organization and depth to your game’s content.
Challenges
The first step lies in understanding and identifying challenges. A challenge is a short task, surrounded by periods of relative safety, which the player must complete all at once. Take a look at the example below, where Mario must jump on and off a moving platform above a bottomless pit.
On either side of the moving platform, the player can basically wait around without being in any danger. Once on the platform, however, the player has to be careful not to fall into the bottomless pit and lose a life. Thus, the challenge begins and ends on the safe, stationary ground to the left and right of the moving platform.
Let’s take a look at something more complex. In this next challenge I have added a second moving platform of a different type. For this challenge, the boundaries are still the solid ground on either side, but now the player has to jump across two kinds of platforms in order to make it safely through.
Even though this challenge is larger, it’s still one challenge, because the player has to get through the two moving platforms in one action, before arriving at the safe challenge boundary on the right.
Challenges can be quite large, but you must always remember that they are bounded by periods of safety where the player can (in a sense) “rest” for a moment before beginning the next challenge. Challenges and their boundaries are different in different kinds of games. For example, some challenges in a Sonic the Hedgehog game are much wider than those in a Mario game, because Sonic is usually moving much faster and the player needs the extra space to slow down, stop, or change direction. The principle is still the same, however.
Filling a Level With Challenges
It’s not enough to know what challenges are; a good designer also has to know how to fill a level with them. The level has to get more difficult, or else it will be boring. But at the same time, the designer can’t simply throw random challenges into the level; the level needs to feel coherent from challenge to challenge. This is where we get into the different types of challenges, and their relation to one another.
Evolutions
Let’s take a look at how the two challenges I have shown you are related to one another:
What is different between the first challenge and the second challenge? They’re both based on moving platforms, but the second challenge has added something new: there are two different kinds of platform moving in different shapes at different intervals. All of the complexity comes from changes in type or kind.
When the difference between two challenges is the addition of a qualitative element like this, the later challenge is called an evolution of the first challenge. Challenge two is a qualitatively more complex iteration of challenge one.
Here’s another evolution to help clarify what I mean:
You can see here that all I’ve done is add an enemy that moves up and down in between the platforms. This is another qualitative increase in complexity, making it more difficult for the player to find the correct window of time in which to jump.
It’s not too big a leap in complexity, though; Nintendo games tend to get more difficult one small step at a time. By the end of a difficult level, the gradual pile-up of evolutions creates the most difficult challenges, but the process is always accomplished one evolution at a time.
Expansions
There is another way of increasing the difficulty of challenges while still staying true to the level’s original design idea. In the challenge pictured below, I have reiterated the first challenge but with one critical difference.
I have increased the distance between the stationary platform and the moving platform; because Mario has to jump a greater distance, the jump is slightly more difficult. This is what I call an expansion challenge.
An expansion is an increase in some quantitatively measurable aspect of a previous challenge. In this case, the space between platforms has tripled. Let’s take a look at another expansion:
This challenge is just like the second evolution from earlier, except that I’ve doubled the number of enemies in between the two platforms. The jump is harder because the window for optimal jumping is shorter, but all of this owes to a simple quantitative change, from one Grinder to two.
Expansion challenges are very easy to implement, because all that the designer has to do is add more of something that’s already there. For this reason, it’s easy for fledgling designers to overdo expansion challenges (although this is something we’ll go over in another tutorial).
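One way to keep these ideas straight while designing is to think of a challenge as data: a set of qualitative elements plus a handful of quantitative parameters. The following C# sketch is my own illustration of that framing (the types and names are hypothetical, not from the book or from Super Mario Maker): an evolution adds a new element, while an expansion scales an existing number.

using System.Collections.Generic;

// A sketch of the evolution/expansion distinction expressed as data.
public class Challenge
{
    // Qualitative ingredients, e.g. "LoopingPlatform", "LinearPlatform", "Grinder".
    public HashSet<string> Elements = new HashSet<string>();

    // Quantitative knobs, e.g. "GapWidth" = 1, "EnemyCount" = 1.
    public Dictionary<string, float> Parameters = new Dictionary<string, float>();

    // Evolution: add a new kind of thing (a qualitative change).
    public Challenge Evolve(string newElement)
    {
        Challenge next = Clone();
        next.Elements.Add(newElement);
        return next;
    }

    // Expansion: scale something that is already there (a quantitative change).
    public Challenge Expand(string parameter, float factor)
    {
        Challenge next = Clone();
        if (next.Parameters.ContainsKey(parameter))
            next.Parameters[parameter] *= factor;
        return next;
    }

    Challenge Clone()
    {
        Challenge copy = new Challenge();
        copy.Elements = new HashSet<string>(Elements);
        copy.Parameters = new Dictionary<string, float>(Parameters);
        return copy;
    }
}

Under this framing, adding the Grinder from earlier is something like baseChallenge.Evolve("Grinder"), while doubling the enemies is an Expand("EnemyCount", 2f) call; a challenge that does both is both an evolution and an expansion.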
Evolving, Expanding, and Resetting
Before you get started trying evolutions and expansions in your own levels, there are a few more guidelines to take note of.
First, Mario levels (and their successors) don’t keep evolving in a linear fashion. Instead, levels often “reset” back to the beginning and then evolve in a different direction.
Here’s an example of this in my level:
Instead of sticking with the looping platforms and getting more complex forever, my level takes a different route. Here, the basic looping platform evolves into a linear moving platform. In place of jumping to get to a new platform, Mario has to jump to avoid enemies.
The skill is the same—it’s all about jumping at the right time—but there is a new qualitative direction for the level. For the next couple of challenges, we’re going to use this new direction as the basis for our challenges.
The next challenge carries on with the linear motion idea:
You don’t have to look too closely to see that this challenge adds both qualitative and quantitative changes to the previous challenge. The dip in the track path is a qualitative change, while the addition of extra enemies in the jump-over section is a quantitative change. This is both an evolution and an expansion.
In Super Mario World, challenges that both evolve and expand their ancestor are about as common as challenges that do just one of those things.
Finally, my level ends with this challenge, which brings back the earlier looping platforms:
This is a somewhat common strategy, in which the two evolutionary branches of a level are brought together at the end. Here we have the looping and linear tracked platforms together for a final, climactic challenge.
Cadences
At this point, I want to come back to the definition of a cadence. A cadence is the way that all of the challenges in a level relate to one another.
To illustrate this, I have created a map graphic that explains how my example level works:
As you can see, this level breaks down into two evolving branches, or what I call the “fork” cadence, with a rejoining motion at the end.
Even within Nintendo games, and within Super Mario World itself, there are a variety of cadences:
Other games from other developers feature more cadence styles than the ones you see here. This means that the cadence structure doesn't restrict your creativity as a level designer—it actually enhances it.
The CCST framework is merely a tool for helping designers to keep their creative work organized and coherent. Evolutions and expansions can be applied to virtually any game or game mechanic. Cadences are a way of understanding naturally occurring patterns in the design of levels.
Conclusion
Although I named the above concepts as part of my research, everything that I discuss in this article (and its follow-ups!) is something that I observed occurring naturally in the design of classic video games. Theory is not more important than practice, and there is no perfect recipe for level design. That said, the CCST structure can and will help you to improve and organize your best work.
Today we're giving you a chance to look behind the scenes here at Envato Tuts+ and meet the editors who bring you all these tutorials and courses.
We're a very 21st-century workforce: spread across the world, working from home, doing completely different hours, and communicating via web apps like Trello, Basecamp and Slack.
In this article, you'll get to see photos of our workspaces—from attics in Canada to basements in Spain, and from a cafe in the Montenegrin mountains to a soft play area in Leamington Spa. You'll hear how we work, how we balance work and family life, and how we stay productive and organised.
Ian Yates
Web Design Editor—Mallorca, Spain
My office space is a room at the bottom of the house (it’s tall and skinny) which looks out onto the garden. I wouldn’t say I’m a minimalist, but I definitely like things to be tidy—my head’s enough of a mess without my workspace adding to the confusion!
The working hours I keep are fairly regimented too, possibly due to having two daughters. At 8:00 my wife, on her way to work, takes them to school, so I usually start at that point. They’re back by 17:00, so I like to be done and dusted by then. That said, working for Envato is a round the clock thing, and hugely flexible, so it’s not unusual for me to disappear on my bike for a couple of hours during the day—staring at a screen for eight or nine hours in a row isn’t great for productivity or creativity.
In terms of equipment, my setup is pretty standard. I used to be a Mac Pro fan, but have since realised that a 13” MBP can be just as powerful, and infinitely more portable (though you’ll never find me working in a coffee shop, or on the steps of a museum). Hooked up to a 24” Dell monitor, it’s a great working setup and really all I need. You’ll notice a pile of Ikea bits and pieces, and earlier this year I bought a refurbished Aeron chair; well worth the bucks because I’m pretty tall and can quickly get back problems by sitting down all day.
Tiffany Brown Olsen
Course Producer—Vancouver, Canada
I love working from home and I also love routines. Living in Vancouver, Canada puts me in the latest timezone so I try to start work at 7am to give me the most overlap with my coworkers. My work schedule is broken up into 2 hour blocks, with a short break in between. These shorter concentrated slots help me stay focused and productive while not feeling tied to my computer.
I have a few different possible places to work. I can work at my desk in the front room, in the office, or in the summer I'll occasionally work on the back deck. I love sunshine so I'll often move to whichever space is the sunniest!
Joel Bankhead
Courses & Content Manager—Manchester, UK
I work remotely from my home office in Manchester, UK. On days without early meetings (darn those Australians…) I tend to start my working day around 9am, take a healthy lunch break away from my computer, and finish up at 6pm. My wife also works from home, so we’ve put a lot of thought into the office environment to make sure that it’s peaceful and conducive to work.
Staying healthy and maintaining a good level of energy and focus is vital for successful and satisfying remote work. To that end I tend to go for a run a few times a week, drink only water during the day, and try to work somewhere other than my office one day a week.
I enjoy experimenting with improving my productivity and rely heavily on Omnifocus to keep me on track. Recently I’ve been trying out the “Pomodoro Technique”, where you break down work into 25 minute intervals, separated by short breaks, and I’ve found it useful for maintaining better focus throughout the day.
I’m also extremely aggressive with my email, hovering around “Inbox Zero” by the time I clock off. I try to prioritise well and split my focus effectively between important and urgent tasks, but I almost always overestimate how much I can get done in a given day.
Michael Williams
Game Development Editor—Leamington Spa, UK
I used to work very unusual hours, as my then-editor Ian will attest—he was often puzzled by the 2am emails he'd receive. But these days I've settled on 10 to 6, Monday to Friday, give or take an hour (and the odd late-night meeting). I do value being able to pop to the shops or meet a friend or see a movie on a weekday afternoon, and then make up the time in the evening or on the weekend.
I generally work from home on my desktop computer, but when I'm not doing data analysis and don't need the extra power, it's a nice change of scenery to take my laptop to a coffee shop, a park, or a friend's house and work through my inbox. In this picture, I'm working from a local soft play area, with Leon (3) "assisting" me.
We use Trello at Envato Tuts+ to keep track of what needs to be done and who needs to do it by when, and I'm a big fan of this approach, but when it comes to managing my personal workday, I find the best tool is a simple notebook: tasks I need to do today go in the top half, tasks I need to do soon go in the bottom half, and I rewrite it every morning. I've tried many different apps and techniques, and this is what sticks.
For actually getting things done, I find the Pomodoro technique works best. Since I work from home (mostly!), I'm surrounded by distractions (including, of course, the entire Internet), and there's no-one to stop me slacking off except myself. I've been doing this long enough to know that I can't fool myself into believing that I can resist all these distractions all day long, but I do at least have enough willpower to resist for 25 minutes at a time!
Tom McFarlin
CMS & Web Development Editor—Atlanta, USA
My family and I just moved, and so my workspace is more or less a makeshift office at the moment. The final setup is something I'm still planning, but right now I have everything I need in order to get my work done.
In this photo, you see my desk, a large jar of water, my Jambox for listening to tunes during the day, and my dogs that usually spend the day asleep right under my desk.
This next photo shows my setup a bit more. It includes all of the things I need to sort through, off on the left-hand side of the desk. I hate clutter, but I usually leave that stack on my desk as a reminder that I need to take some sort of action on it.
The rest is my keyboard, trackpad, and monitor. In the background, you see my wife's computer.
David Appleyard
Envato Tuts+ Manager—Shrewsbury, UK
I’ve worked from home for years, and love having my own commute-free space to work. I tend to wake up around 5am, make a cup of tea, then catch up on emails and admin for the day. I’ll have breakfast with my wife before she goes to work, then work on bigger and more in-depth projects for the rest of the day. I’ll usually stop work around 2-3pm, and read, cook dinner, go for a run, work on something creative, or do jobs around the house.
We have a baby on the way around the end of the year, so I’ll get some first-hand experience of the challenges of home-working with kids (something many of the rest of our team already know all too well!).
Sean Hodge
Business Editor—Florida, USA
My day is broken up into work sessions—each a half hour to two hours. The length of each session is based on the task at hand. I take breaks as needed.
My mornings are really important. I try to shut out all distractions and get my most important projects done before lunch. My main focus in the first two hours is to do any work that requires a lot of creative thought, so I do my heavy lifting first. This could be planning business content, doing research, or writing. Then I take a break, head over to Starbucks, grab an Americano, then continue to work on similar material.
After lunch, my energy dips a bit, so I work on more repetitive tasks, or work that needs to get done but isn’t as complicated. Here’s a helpful tutorial on managing your creative energy levels. After an hour or so of banging out simple tasks, I get back to working on more complicated projects. I then wrap the day with the last half hour pushing out any last minute emails, summarizing ideas, and planning for tomorrow.
Bart Jacobs
Mobile Development Editor—Antwerp, Belgium
The morning is when I get most of my work done, so I usually get up pretty early. I love it when the world is still asleep and I can focus on writing or editing. To stop my inbox from messing with my planning, I tend to read email only after I've put in a few hours of work.
For Envato, I usually work from my home office, but sometimes I pull out my computer on the train. The train is actually a great place to get work done if it isn't too crowded. I avoid working late at night to make sure my brain can get some time off. At the end of the day, I plan the next morning, take a look at the publication schedule for the next day, and end with a quick peek at my inbox to avoid surprises the next morning.
Adam Brown
Code Course Editor—Ottawa, Canada
Immediately before starting work at Envato Tuts+, I had a brief stint as a handyman and woodworker. To ease the transition to office life, I spent the weekend before starting my new role at the woodshop, building myself a new desk and filing cabinet. I use the desk every day and it's the nicest I've ever had—just the right size!
The best part of my job is working with my talented colleagues in the Envato Tuts+ editorial team and the fantastic instructors. As of today, I'm working with 18 instructors across 35 active course projects. The information flow is intense! To manage it all, I have a Rube Goldbergian spreadsheet I call "mission control", with lots of colours and lots of pivot tables. Some day I'll have to write an app.
Every day is different. I Skype, Tweet, post, edit, comment and send many many emails. Also, naturally, I watch a lot of video courses about coding. In the last eight months I've learned an almost dizzying amount about cutting-edge web and mobile development. It's a real treat to be able to learn from and grow this amazing resource.
Johnny Winter
Computer Skills, 3D & Audio Editor—Brighton, UK
Essentially, my workspace is in my loft room on the second floor of the house. This is a quieter working environment than the rest of the house. I work quite flexible hours, due to childcare commitments, and I tend to be more of a night-owl, preferring to work in the evenings.
The Mac is a 27" Core 2 Duo, late-2009 model that has given exemplary service. It's due to be replaced, this month, with a brand new 27" Core i7 model. The speakers are from AudioEngine USA and are connected to an Apple Airport Express.
To get the infinite-loop effect of a Mac within a Mac within a Mac, I took a photo with a Canon EOS 650D and put the SD card in the Mac, then repeated the process a number of times.
Jackson Couse
Photo & Video Editor—Ottawa, Canada
I drink many, many tiny cups of tea every day, made with a gaiwan and sipped from a teacup of unknown origin and rather so-so design. I used to have another one, but I have no idea where it's gone. This one has a crack. I have the habit of getting wrapped up in things and forgetting to stop working, but my little one-at-a-time tea ritual pulls me away from the desk long enough to stop and think and catch my breath every once in a while. Always drink good tea!
Andrew Blackman
Copy Editor—Travelling Around Europe
I'm from London originally, but when I started working for Envato Tuts+ last year I was living in Crete, and now my wife and I are doing some long-term travelling around Europe for a couple of years.
One of the great things about working for Envato Tuts+ is the flexibility. There's no office to commute to, and no regular hours to keep. As long as I have good WiFi, I can work from anywhere. And as long as I get all my work done, I can work the craziest hours I want.
This photo was taken in a small cafe in the mountains of Montenegro. We had left the coastal town of Kotor that morning and were driving to the capital, Podgorica. On the way, we stopped off and I caught up with work for a couple of hours. Then we drove on, and that evening I stayed up late into the night doing more work.
When I'm in one place for a while, I generally settle into a pattern of working roughly 12pm to 8pm, five days a week. But while I'm travelling, those same 40 hours a week can get scattered around all over the place.
When communicating a message, brevity can be more viable than verbosity.
It depends on the message you want to convey, your delivery method, and your intended audience. As a game developer, your game is your message. Minimalism is the brevity of game design: a way to engage your audience more efficiently by favoring simplicity over complexity.
But don't confuse a lack of complexity with the lack of challenge or depth, because many successful games have proven that minimalism and traditional game design philosophies can coexist beautifully. Examples of this range from the rhythmically challenging, geometric simplicity of Terry Cavanagh's Super Hexagon to the epic scale of Fumito Ueda's emotionally deep Shadow of the Colossus.
Whether it's a straight-forward aesthetic implementation or a metaphorical, narrative-focused approach, this article will help you to use minimalism in your projects and prove the old adage that 'less is more'.
What is Minimalism?
A Brief History
Minimalism, as an art movement, arose in the 1960s as a by-product of philosophical modernism. Modernism was a rejection of outdated ways of thinking, and this directly affected artistic expression.
Previous art movements consisted of abstraction and subjective metaphors while minimalism focused on objective literalism. Artwork from this period featured limited colors, geometric shapes, and a general lack of detail. The aesthetic properties of minimalist art still exist today, but minimalism in modern game design is ironically much more complicated.
The Basics
The general purpose of minimalism in game design is to accentuate a game's specific elements by limiting the scope or detail of the other surrounding elements. For a simplified 'real world' example, imagine wearing a blindfold to place a greater emphasis on your sense of touch.
An entire game can be designed with minimalism as a core concept, but minimalism can also be invoked only when needed. Art, sound, gameplay, and narrative can all be subject to minimalist interpretations.
Examples of Minimalism in Popular Games
The best way to understand minimalism as a game developer is to experience great examples of minimalism as a game player. In fact, an entire book could be written about the history of minimalism in video games. Whether you play on consoles, PC, handhelds, or mobile devices, you've undoubtedly encountered a game that was at least partially designed around these concepts.
Let's take a look at a small selection of games that feature a wide variety of minimalistic design elements. Some of these may challenge your idea of what minimalism is and help you think differently about game design.
Terry Cavanagh, the developer of Super Hexagon, describes his work as a “minimal action game”. The player object is represented by a small triangle, and the game world consists of a central hexagon that attracts rhythmically moving shapes. All the player has to do is press left or right to avoid the incoming shapes and survive for as long as possible.
The art style and gameplay mechanics utilize minimalism to put extreme focus on the objective, while the pulsing soundtrack and rhythm-fuelled animations put the player into an almost hypnotic state. Super Hexagon is the perfect example of a game that features minimalism as a core design philosophy while maintaining an intense level of difficulty.
Dwarf Fortress, by Tarn and Zach Adams, is a notoriously challenging simulation game with roguelike elements. The only thing about Dwarf Fortress that can be considered minimalistic is the aesthetic design: game objects are represented by ASCII characters rather than hand-drawn sprites.
In this case, minimalism is used to trigger a nostalgic response that reminds players of a time when game graphics required a powerful imagination.
Mirror's Edge, a first-person parkour action game by EA Dice, combines futuristic architecture and minimalism in a unique way to assist players with navigation. The city that the player runs around in is full of bright, sharp edges and the buildings stand like white monoliths against a blue sky.
Rather than following screen-cluttering HUD elements, players are instead led through checkpoints by running towards certain actionable environmental objects that have been painted red. This type of minimalism is both visually striking and highly intuitive.
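Here's a small Unity C# sketch of that idea in the abstract (my own illustration under stated assumptions, not how Mirror's Edge actually implements it): instead of drawing HUD markers, tint the route-critical objects a single signpost colour and let the environment do the guiding.

using UnityEngine;

// A sketch of environment-based signposting: paint actionable objects one colour.
public class RouteHighlighter : MonoBehaviour
{
    public Color signpostColor = Color.red;

    // Pipes, ramps, and doors along the intended route, assigned in the editor.
    public Renderer[] actionableObjects;

    void Start()
    {
        foreach (Renderer r in actionableObjects)
        {
            // Assumes the assigned material exposes the standard colour property.
            r.material.color = signpostColor;
        }
    }
}

In practice you'd want the signpost colour to be rare elsewhere in the palette, so the tinted objects read instantly against the rest of the scene.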
In 2007, Jason Rohrer released a minimalist art game entitled Passage. This game takes place entirely within a 100x16 window and can be completed in five minutes. With themes of loss, mortality, and the human condition, Passage manages to pack a powerful punch despite its limited scope—although with no traditional win condition, it can be argued that Passage isn't even a game at all.
This type of minimalism is used to provoke thought and convey a very specific message through metaphor, forcing the player to reflect on the experience and ask, “why?”
I bet you didn't expect to see a Metal Gear Solid game in a collection of minimalist games! Hideo Kojima, the creator of this iconic franchise, is known for drawing from numerous artistic, historical, and pop-culture influences. A particular scene in Metal Gear Solid 3 features the game's protagonist climbing an incredibly tall ladder. After walking down a long, windy tunnel, the player reaches a dead end and is forced to climb a ladder for two whole minutes. During this climb, an a cappella version of the game's theme song begins to play. The words “I give my life. Not for honor, but for you” echo through the tunnel during the song, creating a haunting moment that sticks with players long after the credits have rolled.
This is an example of minimalism being invoked in an otherwise complex and verbose experience to create a meaningful and poignant moment.
Thomas Was Alone, by Mike Bithell, is a game that you will never understand just by looking at screenshots. On the surface, Thomas Was Alone appears to be a typical puzzle platformer with simple, polygonal characters devoid of personality. What lies beneath that minimalistic coat of paint is a deceptively deep and captivating experience full of character.
This combination of aesthetic minimalism and compelling narrative design is useful for creators that may lack advanced visual art skills.
The Nintendo Wii was a console that was built entirely around the concept of minimalism. In an effort to attract a wider audience, Nintendo created a controller with pared down inputs and a focus on motion control. And with Wii Sports, Nintendo arguably created the most successful minimalist game of all time. By removing excesses and placing a larger emphasis on simplistic, yet intuitive gameplay, Nintendo proved that minimalism could be incredibly popular.
Journey, the award-winning adventure game by Thatgamecompany, uses minimalism in several unique ways. Much like Flower, by the same developer, there is a distinct lack of exposition, and no explanations to help the player understand exactly what is transpiring. Players learn through unguided exploration instead of long-winded tutorials and gratuitous on-screen prompts.
The simplified gameplay mechanics help to facilitate this, but the true power of Journey's minimalism is found in the game's multiplayer mode. Players seamlessly drop in to a co-op session without a notification and with no way to communicate aside from a few simple emotes. The bonding that occurs between two complete strangers in Journey is a testament to how powerful human interaction can be, even after removing the excesses of traditional multiplayer gaming conventions.
Shadow of the Colossus, by Fumito Ueda and Team Ico, is an adventure game on an epic scale. Players wield a sword, ride a horse, shoot arrows, and fight enormous bosses in an attempt to save a loved one. But unlike most adventure games, Shadow of the Colossus does not feature any smaller enemies—"grunts" that typically exist in large quantities to challenge the player through dungeons and overworlds.
There are only 16 enemies in Shadow of the Colossus, and they're all giant puzzle-centric boss encounters. Time spent between boss battles is somber, quiet, and full of relaxed exploration. The mood set by this unique combination of elements is unmatched, and is a trademark style of Ueda's particular brand of minimalist design.
If you want even more inspiration, check out these 2,346 individual interpretations of minimalism in game design.
Limitation vs. Intention
When talking about the history of minimalism in games, it's very important to understand the difference between limitation and intention. By today's standards, Pong is a minimalist exercise in art, input, sound, and mechanics. But this was not entirely by design; it was mainly due to hardware limitations.
Early Game Boy games used a monochrome, four-shade color palette, and the Game Boy Color saw an increase to 32,768 colors. Now there are entire game jams devoted to replicating the limited palette of the original Game Boy. Are these self-imposed limitations an example of minimalism, or are they merely exhibitions in retro gaming nostalgia?
The rise in popularity of iOS and Android devices also saw a rise in gaming minimalism. Touch-screen inputs and the nature of mobile gaming create an environment where quick, simple games get the most attention. The line gets blurry when you begin to differentiate between casual gaming and minimalist design, and that topic requires an entirely different discussion.
Minimalism in Game Development
As game developers, we're always looking for solutions to problems. A straight line is the shortest distance between two points, and minimalism is often the straightest path to follow when designing a game. If you find yourself struggling with a problem, start with a simple solution and work your way up from there. By starting with minimalism, you are forcing yourself to focus on the more important aspects of your gameplay experience.
When your game works using minimal art, sound, and gameplay mechanics, you can slowly begin to integrate more elements, while maintaining that important balance.
Aesthetic Design
The visual style of your game is incredibly important. A single screenshot is often a potential player's first and only impression, so it's vital that your game is readable at a glance. Minimalism can both help and hurt you in this regard due to the possibility of abstraction, so be careful.
Below are a few ideas to get you thinking about ways to introduce minimalism into your aesthetic design:
Use a limited and deliberate color palette. Colors can represent emotions, moods, locations, temperatures, and personalities. Being consistent and tasteful in your choice of color is far more important than using a wide variety of colors.
Contrast is your friend. When using limited visual assets, the contrast between those assets becomes just as important as the assets themselves. Blank space between items should be used to your advantage when dealing with a limited number of on-screen elements.
Use simple and recognizable shapes. If you're not familiar with the intricacies of art and design, you should at least learn about the importance of silhouettes. Take the most detailed element in your scene and reduce it to a single color. Is it still readable? By working with limited colors, you can build recognizable scenes that will remain readable as you increase the fidelity of those elements over time.
Lighting is more important than poly count. A low-poly scene with superb lighting is a beautiful thing.
Use colors or light to direct players towards a destination rather than HUD elements, maps, and markers.
Integrate potential HUD elements into the game's environment where possible. Think of a digital readout on the side of a gun to represent ammo, or an inventory system that exists in physical space within a player's backpack rather than in a menu screen.
High-quality animation on a less detailed character is far more valuable than bad animation on a photo-realistic one.
Use real-time lighting to represent time, rather than an on-screen clock.
Use damage models or other unique environmental solutions to represent health instead of on-screen meters. Think of an enemy moving more slowly or walking with a limp to represent low health.
Use animated GIFs to advertise your game if your minimalist aesthetic design results in unreadable screenshots.
System Design
The way that players interact with your game should be as minimal as possible. Overly complicated control schemes can scare players away from your game and create frustration. A steep learning curve is sometimes unavoidable for certain games, but you should always strive for a streamlined control scheme. Along with input, your individual game mechanics will also benefit from being intuitive. Think about the following points when developing your core gameplay experience:
Always side with familiarity over uniqueness. This sounds incredibly counter-productive for anyone trying to create something new, but it's true. If you're making something that is in any way similar to something that already exists, then your players are going to have pre-existing expectations. Don't reinvent the wheel when your players are already familiar with driving.
Use context-sensitive inputs rather than expansive control schemes. If your player can open chests and doors, the same button should perform both actions. It is up to you, as a designer, to make sure that doors and chests never overlap.
To expand on the context-sensitive inputs: a single button can be pressed in more than one way. Single tap, double tap, extended hold, rhythmic tap; these are all different ways to press the same button. It may be tempting to take advantage of an entire controller, but try to find ways to limit the number of buttons your player has to press when it makes sense.
Use timing and rhythm-based solutions to puzzles and conflicts. Players can pick up on these gameplay patterns without having to rely on intrusive instructions or explanations.
Force players to learn through experimentation directly after discovering a new ability. Without relying on text boxes and tutorials, you can immediately require the use of the ability to solve a problem before the player can continue. Then, as the designer, it's your job to make sure that this ability remains relevant throughout the rest of the game so that the player doesn't forget about it.
Give your players a reason to not press a button. If shooting endlessly down a hallway is a solution, then not shooting endlessly down a different hallway can be another solution. Instead of adding new gameplay mechanics, think of ways to temporarily remove existing mechanics to add new gameplay experiences.
Avoid redundancies. Giving players a variety of choices is usually a good thing, but make sure that a highly preferred choice doesn't create obsolete solutions. For example, a jet pack power-up will make your grappling hook and double jump power-ups useless. When one solution is obviously superior, the other solutions immediately become pointless and excessive. If you have multiple solutions, make sure that they each provide a unique and valuable reward.
Narrative Design
As a narrative designer, it is your duty to direct and control the flow of a game's story. Depending on the type of story being told, minimalism may or may not be a useful solution. Minimalistic narrative design requires that significant portions of a game's story be told through gameplay mechanics, art design, level design, and other methods that may be out of the writer's hands. Narrative designers have to ensure that every element of a game comes together to tell the right story, requiring them to understand multiple aspects of the game development process.
Here are some tips for approaching narrative design from a minimalist standpoint:
Avoid extensive exposition. Instead of starting the game with a voice-over, cut scene, or text crawl, put the player in an interactive role. Set the scene with mood and atmosphere rather than words. Introduce characters with actions rather than biographies.
Value exploration over explanation. Reward players for exploring your world rather than burdening them with excessive pages of lore and back story.
Avoid optional collectibles that explain crucial details of your world's history. Find ways to work this information into gameplay rather than hiding it in audio files.
Silence can be just as valuable as conversation. Use body language and facial expressions to express feelings when possible.
Make the player ask questions and don't be afraid to never answer them. If every detail of a game is laid out for the player to discover, then the game has a finite depth. Mysteries and the unknown represent areas that have never been explored and will exist in the minds of players long after they put the controller down.
Complex character progression can be the focus within an incredibly simple plot. Remember that minimalism can be used as a contrast between two things to create a deceptively deep experience.
Conclusion
The topic of minimalism in game design is deep, complex, and always evolving. The next time you play a game, think about how it could be more or less complicated. Study the game's art and try to think of ways to make it simpler while still maintaining its original form. Pay attention to the "empty" areas. Listen to the silence between important moments. Instead of focusing on the obvious, shift your attention to the things that have been minimized.
Games are infinitely complicated, and it can be tempting to dive into that infinite depth in search of solutions. But as with many of life's problems, the right solution is sometimes the simplest.
In this tutorial, we'll create a simple color swapping shader that can recolor sprites on the fly. The shader makes it much easier to add variety to a game, allows the player to customise their character, and can be used to add special effects to the sprites, such as making them flash when the character takes damage.
Although we're using Unity for the demo and source code here, the basic principle will work in many game engines and programming languages.
Demo
You can check out the Unity demo, or the WebGL version (25MB+), to see the final result in action. Use the color pickers to recolor the top character. (The other characters all use the same sprite, but have been similarly recolored.) Click Hit Effect to make the characters all flash white briefly.
Understanding the Theory
Here's the example texture we're going to use to demonstrate the shader:
There are quite a few colors on this texture. Here's what the palette looks like:
Now, let's think about how we could swap these colors inside a shader.
Each color has a unique RGB value associated with it, so it's tempting to write shader code that says, "if the texture color is equal to this RGB value, replace it with that RGB value". However, this approach doesn't scale well to many colors, and branching is a relatively expensive operation in a shader. In fact, we'd like to avoid conditional statements entirely.
Instead, we will use an additional texture, which will contain the replacement colors. Let's call this texture a swap texture.
The big question is, how do we link the color from the sprite texture to the color from the swap texture? The answer is, we'll use the red (R) component from the RGB color to index the swap texture. This means that the swap texture will need to be 256 pixels wide, because that's how many different values the red component can take.
Let's go over all this in an example. Here are the red color values of the sprite palette's colors:
Let's say we want to replace the outline/eye color (black) on the sprite with the color blue. The outline color is the last one on the palette—the one with a red value of 25. If we want to swap this color, then in the swap texture we need to set the pixel at index 25 to the color we want the outline to be: blue.
Now, when the shader encounters a color with a red value of 25, it will replace it with the blue color from the swap texture:
Note that this may not work as expected if two or more colors on the sprite texture share the same red value! When using this method, it's important to keep the red values of the colors in the sprite texture different.
Also note that, as you can see in the demo, putting a transparent pixel at any index in the swap texture will result in no color swapping for the colors corresponding to that index.
Implementing the Shader
We'll implement this idea by modifying an existing sprite shader. Since the demo project is made in Unity, I'll use the default Unity sprite shader.
All the default shader does (that is relevant to this tutorial) is sample the color from the main texture atlas and multiply that color by a vertex color to change the tint. The resulting color is then multiplied by the alpha, to make the sprite darker at lower opacities.
The first thing we need to do is to add an additional texture to the shader:
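Unity's default sprite shader already declares the main texture and tint in its Properties block; the sketch below shows how that block might look with the swap texture added (the exact built-in properties can vary between Unity versions, so treat this as an approximation rather than the demo's verbatim code):
Properties
{
    [PerRendererData] _MainTex ("Sprite Texture", 2D) = "white" {}
    _SwapTex ("Color Swap Texture", 2D) = "black" {} // our new 256x1 swap texture
    _Color ("Tint", Color) = (1,1,1,1)
    [MaterialToggle] PixelSnap ("Pixel snap", Float) = 0
}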
As you can see, we've got two textures here now. The first one, _MainTex, is the sprite texture; the second one, _SwapTex, is the swap texture.
We also need to define a sampler for the second texture, so we can actually access it. We'll use a 2D texture sampler, since Unity doesn't support 1D samplers:
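Inside the CGPROGRAM block, the declarations might look like this (a sketch):
sampler2D _MainTex;
sampler2D _SwapTex; // 256x1 texture, indexed by the red channel of the main texture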
Here's the relevant code for the default fragment shader. As you can see, c is the color sampled from the main texture; it is multiplied by the vertex color to give it a tint. Also, the shader darkens the sprites with lower opacities.
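For reference, the default fragment function looks roughly like the following (based on Unity's built-in sprite shader, where SampleSpriteTexture is a small helper that samples _MainTex; your Unity version may differ slightly):
fixed4 frag(v2f IN) : SV_Target
{
    fixed4 c = SampleSpriteTexture(IN.texcoord) * IN.color; // sample and tint
    c.rgb *= c.a; // darken at lower opacities
    return c;
}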
After sampling the main color, let's sample the swap color as well—but before we do so, let's remove the part that multiplies it by the tint color, so that we're sampling using the texture's real red value, not its tinted one.
To do this, we need to interpolate between the main color and the swapped color using the alpha of the swapped color as the step. This way, if the swapped color is transparent, the final color will be equal to the main color; but if the swapped color is fully opaque, then the final color will be equal to the swapped color.
Let's not forget that the final color needs to be multiplied by the tint:
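Putting those three steps together, the start of the modified fragment function might look like this sketch:
fixed4 c = SampleSpriteTexture(IN.texcoord); // sample without the tint
fixed4 swapCol = tex2D(_SwapTex, float2(c.r, 0)); // use the red value as the index
fixed4 final = lerp(c, swapCol, swapCol.a); // transparent swap pixel = keep the original color
final *= IN.color; // apply the tint afterwards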
Now we need to consider what should happen if we want to swap a color on the main texture that isn't fully opaque. For example, if we have a blue, semi-transparent ghost sprite, and want to swap its color to purple, we don't want the ghost with the swapped colors to be opaque, we want to preserve the original transparency. So let's do that:
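A one-line sketch of that:
final.a = c.a; // preserve the main texture's original transparency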
The final color transparency should be equal to the transparency of the main texture color.
Finally, since the original shader was multiplying the color's RGB value by the color's alpha, we should do that too, in order to keep the shader the same:
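Sketched out, the fragment function then ends with:
final.rgb *= final.a; // darken at lower opacities, as the original shader did
return final;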
The shader is complete now; we can create a swap color texture, fill it with different color pixels, and see if the sprite changes colors correctly.
Of course, this method wouldn't be very useful if we had to create swap textures by hand all the time! We'll want to generate and modify them procedurally...
Setting Up an Example Demo
We know that we need a swap texture to be able to make use of our shader. Furthermore, if we want to let multiple characters use different palettes for the same sprite at the same time, each of these characters will need its own swap texture.
It will be best, then, if we simply create these swap textures dynamically, as we create the objects.
First off, let's define a swap texture and an array in which we'll keep track of all the swapped colors:
Texture2D mColorSwapTex;
Color[] mSpriteColors;
Next, let's create a function in which we'll initialize the texture. We'll use RGBA32 format and set the filter mode to Point:
public void InitColorSwapTex()
{
Texture2D colorSwapTex = new Texture2D(256, 1, TextureFormat.RGBA32, false, false);
colorSwapTex.filterMode = FilterMode.Point;
}
Now let's make sure that all the texture's pixels are transparent, by clearing all the pixels and applying the changes:
for (int i = 0; i < colorSwapTex.width; ++i)
colorSwapTex.SetPixel(i, 0, new Color(0.0f, 0.0f, 0.0f, 0.0f));
colorSwapTex.Apply();
We also need to set the material's swap texture to the newly created one:
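That's a single call on the sprite renderer's material (this line also appears in the complete function below):
mSpriteRenderer.material.SetTexture("_SwapTex", colorSwapTex);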
Finally, we save the reference to the texture and create the array for the colors:
mSpriteColors = new Color[colorSwapTex.width];
mColorSwapTex = colorSwapTex;
The complete function is as follows:
public void InitColorSwapTex()
{
Texture2D colorSwapTex = new Texture2D(256, 1, TextureFormat.RGBA32, false, false);
colorSwapTex.filterMode = FilterMode.Point;
for (int i = 0; i < colorSwapTex.width; ++i)
colorSwapTex.SetPixel(i, 0, new Color(0.0f, 0.0f, 0.0f, 0.0f));
colorSwapTex.Apply();
mSpriteRenderer.material.SetTexture("_SwapTex", colorSwapTex);
mSpriteColors = new Color[colorSwapTex.width];
mColorSwapTex = colorSwapTex;
}
Note that it is not necessary for each object to use a separate 256x1px texture; we could make a bigger texture that covers all the objects. If we need 32 characters, we could make a texture of size 256x32px, and make sure that each character uses only a specific row in that texture. However, each time we needed to make a change to this bigger texture, we'd have to pass more data to the GPU, which would likely make this less efficient.
It is also not necessary to use a separate swap texture for each sprite. For example, if the character has a weapon equipped, and that weapon is a separate sprite, then it can easily share the swap texture with the character (as long as the weapon's sprite texture doesn't use colors that have red values identical to those of the character sprite).
It's very useful to know what the red values of particular sprite parts are, so let's create an enum that will hold this data:
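The demo's exact enum isn't reproduced here, so the following is only an illustrative sketch: the member names and values are placeholders (apart from the outline's red value of 25, mentioned earlier), and each value must match the red value of the corresponding color in your own sprite texture:
public enum SwapIndex
{
    Outline   = 25,  // matches the red value from the earlier example
    SkinDark  = 80,  // placeholder values from here on
    Skin      = 130,
    ShirtDark = 160,
    Shirt     = 190,
    Pants     = 220
}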
These are all the colors used by the example character.
Now we have all the things we need to create a function to actually swap the color:
public void SwapColor(SwapIndex index, Color color)
{
mSpriteColors[(int)index] = color;
mColorSwapTex.SetPixel((int)index, 0, color);
}
As you can see, there's nothing fancy here; we just set the color in our object's color array and also set the texture's pixel at an appropriate index.
Note that we don't really want to apply the changes to the texture each time we actually call this function; we'd rather apply them once we swap all the pixels we want to.
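The demo's actual calls aren't shown here, but a batch of swaps might look like this sketch (the SwapIndex members and the colors are placeholders), with a single Apply at the end:
SwapColor(SwapIndex.SkinDark, new Color32(225, 180, 130, 255));
SwapColor(SwapIndex.Skin, new Color32(255, 220, 180, 255));
SwapColor(SwapIndex.ShirtDark, new Color32(30, 80, 160, 255));
SwapColor(SwapIndex.Shirt, new Color32(60, 120, 220, 255));
SwapColor(SwapIndex.Pants, new Color32(60, 60, 80, 255));
mColorSwapTex.Apply(); // apply once, after all the swaps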
As you can see, it's pretty easy to understand what these function calls are doing just from reading them: in this case, they're changing both skin colors, both shirt colors, and the pants color.
Adding a Hit Effect to the Demo
Let's next see how we can use the shader to create a hit effect for our sprite. This effect will swap all the sprite's colors to white, keep it that way for a brief period of time, and then go back to the original color. The overall effect will be that the sprite flashes white.
First of all, let's create a function which swaps all the colors, but doesn't actually overwrite the colors in the object's array. We will need those colors when we want to turn off the hit effect, after all.
public void SwapAllSpritesColorsTemporarily(Color color)
{
for (int i = 0; i < mColorSwapTex.width; ++i)
mColorSwapTex.SetPixel(i, 0, color);
mColorSwapTex.Apply();
}
We could iterate over just the SwapIndex values, but iterating through the whole texture ensures the color is swapped even if a particular color is not defined in the enum.
Now that the colors are swapped, we need to wait some time and then return to the previous colors.
First, let's create a function which will reset the colors:
public void ResetAllSpritesColors()
{
for (int i = 0; i < mColorSwapTex.width; ++i)
mColorSwapTex.SetPixel(i, 0, mSpriteColors[i]);
mColorSwapTex.Apply();
}
Let's create a function which will start the hit effect:
public void StartHitEffect()
{
mHitEffectTimer = cHitEffectTime;
SwapAllSpritesColorsTemporarily(Color.white);
}
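StartHitEffect relies on a timer field and a duration constant that aren't shown above; a sketch of those declarations (the duration is an arbitrary placeholder) might be:
const float cHitEffectTime = 0.1f; // how long the white flash lasts, in seconds
float mHitEffectTimer = 0.0f;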
And in the update function, let's check how much time is left on the timer, decrease it every tick, and call for a reset when the time is up:
public void Update()
{
if (mHitEffectTimer > 0.0f)
{
mHitEffectTimer -= Time.deltaTime;
if (mHitEffectTimer <= 0.0f)
ResetAllSpritesColors();
}
}
That's it—now, when StartHitEffect is called, the sprite will flash white for a moment and then go back to its previous colors.
Summary
This marks the end of the tutorial! I hope you find the method acceptable and the shader useful. It is a really simple one, but it works just fine for pixel art sprites that don't use many colors.
The method would need to be changed a bit if we wanted to swap whole groups of colors at once, which definitely would require a more complicated and expensive shader. In my own game, though, I'm using very few colors, so this technique fits perfectly.
Games with funny elements have always existed, but at what point does an interactive experience become a comedy game?
There are games that are obviously flat-out comedies, like the Borderlands series. Other titles, like Left 4 Dead, feature constant quipping by characters, and by movie standards could easily be described as comedic.
But look closely at games and you'll notice that, by their very nature, they are comedic. Even ultra-serious dramas can become comedic by virtue of allowing you, the player, to do things that go against their style...
Finding Humor Where the Developers Didn't Intend It
Any effect the player can have on the world can let them do or create something humorous. In Half-Life, shooting a wall leaves a bullet hole—suddenly, all kinds of options for shenanigans open up.
Long before the release, when this feature was first implemented in empty test rooms, developers were surprised by how testers started "writing" words on the wall. (Half-Life 2: Raising The Bar)
I find this telling: Given an empty room and only a single (inherently violent) ability to affect the world, people chose to do something that made them laugh.
This becomes more pronounced the more serious the setting is. Deus Ex, for example, is set in a dark and serious world ravaged by a plague. Yet any conversation is instantly seen in another light when you've first spent ten minutes collecting random detritus and piling it onto your boss's desk.
Want a more modern example? The recent Metal Gear Solid 5 styles itself as an ultra-serious espionage thriller, yet at the same time allows you to drop crates of equipment on unsuspecting people, hoist animals into the sky, and jump from a pink helicopter as Billy Idol's Rebel Yell plays over its loudspeakers.
Related:
Old Man Murray's Deus Ex Walkthrough, in which you are advised to carry a flag to every conversation and decorate your office.
Besides this unintentional, inherent comedy, there are a few ways to intentionally create comedy and humor in games, which boil down to story, chaos, and gameplay. They are not mutually exclusive, and often complement each other well. Let's take a look at them...
Humor Through Story and Narrative
One way to include humor is via the story. This essentially boils down to "a character says something funny", but is a bit more complicated than that to get right.
One of the best examples of this can be found in Portal, where throughout the entire game you hear the narration of GLaDOS, the AI.
It starts out simple, but gets progressively more outrageous and ludicrous throughout the game. Portal 2 repeats this with its multiple narrator characters.
The Monkey Island series (and the majority of point-and-click adventure games) also does this nicely. These games' worlds are filled with characters that offer funny dialogue, and players are encouraged to seek out all the funny and interesting responses.
Battlefield: Bad Company 2 falls into this camp, too. The gameplay is identical to other Battlefield titles, but it features most characters spouting funny and witty dialogue.
A straightforward way to implement these elements is with simple statements or quips, triggered on certain conditions: "If situation X occurs, character Y says Z." "When you walk into the room, the bartender greets you with one of three humorous statements." "When you begin a new test chamber, GLaDOS gets a few lines explaining the setup." "Every once in a while, Francis laments that he doesn't really like bridges, especially with zombies on them." "After you have clicked a unit in StarCraft II 15 times, its regular barks turn into comedic, fourth-wall-breaking lines."
But make sure to not overdo it. Because no matter how brilliant a line or gag is, people will grow to hate it if they're continually bombarded with it.
Another important element of this is to not make everything into a joke. Comedy is based on contrast, and when everything is zany, nothing stands out. Borderlands 2 does this nicely by having its main story and a lot of other elements remain relatively serious; against this backdrop, unique and fun elements tend to stick out better. Even in Portal, the light-hearted narration only occurs when entering a new test chamber, with the rest of the game offering serious and not-particularly-funny surroundings.
Referencing pop culture or other jokes also needs to be approached carefully. A simple reference à la "here is the same Monty Python gag that has been repeated for decades" will only point out how lazy the writing is. If you need to make a reference-based joke, always use it as a springboard for your own idea. In the process, your gag might even surpass the one that inspired it.
Humor Through Chaos
Comedy from chaos is about the unforeseen. It can appear unplanned, when you careen off the track in Trackmania before doing some amazing stunts (yet still failing), or when you so gravely miscalculate a maneuver in an action game that you fail abruptly within the first few seconds.
One game actually engineered around this is Magicka. In it, you can combine elements to create spells, of which there are thousands of combinations. These tend to go wrong an alarmingly high proportion of times, which creates a lot of unforeseen comedy.
On your team's first attempt, you might say "I wonder what this does..." before setting everybody on fire.
The next attempt, everybody vows to do better. And then, oops, your healing spell targets an enemy, restoring them to maximum health, the ice mines laid by your buddy go off on your own team, and the energy blast cast by one of your comrades blows up in their face to catapult you all into a lava pit.
And then you all hit "Again!", because it was just too much fun.
Engineering these moments can be tricky, as they walk the line between frustration and fun. When looking at games that center on this (Magicka, Goat Simulator, Surgeon Simulator, I am Bread, QWOP), you'll notice that the entire experience is often built around the conceit of chaos.
They all also offer the ever-present possibility of abrupt, explosive failure. When you fail in Magicka, you mostly fail in a spectacular manner. When you fail in Trackmania, you do so by driving off the road at mach speed before doing something ludicrous. When you fail in Kerbal Space Program, you often do so by realizing that your massive three-stage rocket might have some rather profound design flaws, as you watch it tip over on the launchpad and explode on the spot.
An important thing to note is that the failure must be fair—all elements must be known to the player. A sudden, un-telegraphed death out of nowhere is just frustrating and no fun.
Humor Through Gameplay
Gameplay humor is a bit trickier to pin down. It relies on unexpected and unique interactive gameplay elements that serve to illustrate the joke (compared to spoken lines, which are non-interactive). A vital point is that you have to actively perform something and get a reaction.
Borderlands 2 does this nicely. A typical quest in Borderlands consists of accepting it, going to a place, shooting a bunch of enemies, and then possibly finding one or several macguffins, before returning to collect your reward.
This "scheme" of gameplay has been established all throughout the first Borderlands and the early parts of Borderlands 2. At that point these quests are what players expect—so it is a perfect time to subvert their expectations.
Now, creating a quest that's completely different can be humorous just through the sudden and complete disregard of this established formula. For example, the "Shoot me in the face" quest wants you to shoot a character in the face. Additional objectives consist of not shooting them in other body-parts, which are already "checked off", but can be unchecked by shooting the respective parts. And just as soon as it's started, the entire thing is over.
And there are more! One quest can be completed by jumping off a cliff and killing yourself in a certain spot. Another has you listen to Grandma Torgue talk droningly for 15 minutes. You don't fight anything, you just listen. You can't even leave (which you aren't told), because she realizes and bursts out in tears. Then, when you've stopped listening and resigned yourself to waiting it out, she asks a question about the myriad of random details she mentioned. Fail, and the entire thing starts over. All the while I was laughing, admiring the developers for putting this in, and describing it as the hardest boss fight in the game.
Use this by noticing the system that the game employs (and that the player expects), and then subverting it. A mission that consists of nothing but listening is by its nature funnier when every other mission is about shooting things.
Also, re-purpose regular game actions for other actions. In Portal 2, Wheatley asks you to talk by pressing Space. You can't actually talk, and pressing the key makes you jump, which he remarks upon as the wrong action. At multiple points in Borderlands you can high-five characters, which you do by performing a melee-attack on them. The same system is used for slapping another character. At one point you have to "teabag" the corpse of an enemy, which you have to perform by repeatedly ducking over them.
There are a lot of straightforward ways to make a game more comedic. Use these to engage the player with comedic elements, instead of just having funny things appear.
And remember, even when you try to avoid this, your players will probably find a way to subvert a serious game, so accepting this and going with it might be more efficient.
Intel RealSense 3D cameras bring hand and finger tracking to home PCs, and an easy-to-use SDK for developers, which makes them a great new input method for both VR games and screen-based games.
Ideally, we'd like to be able to make games that don't require the player to touch any kind of peripheral at any point. But, as with the Kinect and the EyeToy, we run into problems when we face one common task: entering text. Even inputting a character's name without using a keyboard can be tedious.
In this post, I'll share what I've learned about the best (and worst!) ways to let players enter text via gesture alone, and show you how to set up the Intel RealSense SDK in Unity, so you can try it in your own games.
(Note that I'm focusing on ASCII input here, and specifically the English alphabet. Alternative alphabets and modes of input, like stenography, shorthand, and kanji, may be served better in other ways.)
Input Methods We Can Improve On
There are other approaches for peripheral-free text input out there, but they have flaws. We'd like to come up with an approach that improves on each of them.
Virtual Keyboard
Keyboards are the gold standard for text entry, so what about just mimicking typing on a keyboard in mid-air or on a flat surface?
Sadly, the lack of tactile feedback is more important than it might seem at first glance. Touch-typing is impossible in this situation, because kinesthesia is too inaccurate to act on inertial motion alone. The physical and responsive touch of a keyboard gives the typist second-sense awareness of finger position and acts as an ongoing error-correction mechanism. Without that, one’s fingers tend to drift off target, and slight positioning errors compound quickly, requiring a "reset" to home keys.
Gestural Alphabet
Our first experiment with RealSense for text input was an effort to recognize American Sign Language finger-spelling. We discovered that there are several difficulties that make this an impractical choice.
One problem is speed. Proficient finger-spellers can flash about two letters per second or 120 letters per minute. At an average of five letters per word, that’s 24 WPM, which is considerably below even the average typist's speed of 40 WPM—a good finger-speller is about half as fast as a so-so keyboarder.
Another problem is the need for the user to learn a new character set. One of the less-than-obvious values of a standard keyboard is that it comports to all of the other tools we use to write. The printed T we learn in kindergarten is the same T seen on the keyboard. Asking users to learn a new character set just to enter text in a game is a no-go.
Joystick and Spatial Input
Game consoles already regularly require text input for credit card numbers, passwords, character names, and other customizable values. The typical input method is to display a virtual keyboard on the screen and allow a spatially sensitive input to "tap" a given key.
There are many iterations of this concept. Many use a joystick to move a cursor. Others may use a hand-tracking technology like Intel RealSense or Kinect to do essentially the same thing (with a wave of the hand acting as a tap of the key). Stephen Hawking uses a conceptually similar input that tracks eye movements to move a cursor. But all of these systems create a worst-of-both-worlds scenario where a single-point spatial input, essentially a mouse pointer, is used to work a multi-touch device; it's like using a pencil eraser to type one letter at a time.
Some interesting work has been done to make joystick text input faster and more flexible by people like Doug Naimo at Triggerfinger, but the input speed still falls short of regular typing by a large margin, and is really only valuable when better or faster input methods are unavailable.
My Chosen Input Method
All this talk about the weaknesses of alternate text-input methods implies that the humble keyboard has several strengths that are not easily replaced or improved upon. How can these demonstrated strengths be conserved in a text entry system that requires no hands-on peripherals? I believe the answer lies in two critical observations:
The ability to use as many as 10 fingers is impossible to meet or beat with any single-point system.
The tight, layered, and customizable layout of the keyboard is remarkably efficient—but it is a 2D design, and could be expanded by incorporating a third dimension.
With all of this in mind, I came up with a simple, two-handed gestural system I call "facekeys". Here's how it works.
Starting Simple: A Calculator
Before getting to a full keyboard, let's start with a numpad—well, a simple calculator. We need ten digits (0 to 9) and five operators (plus, minus, divide, multiply, and equals). Aiming to use all ten fingers, we can break these into two five-digit groups, and represent them on screen as two square-based pyramids, with the operators on a third pyramid:
Each finger corresponds to one face of each pyramid. Each face can be thought of as a "key" on a keyboard, so I call them facekeys. The left hand enters digits 1 to 5 by flexing fingers individually, while the right enters digits 6 to 0. Flexing the same finger on both hands simultaneously—both ring fingers, say—actuates a facekey on the operator pyramid.
Non-digit (but essential) functions include a left-fist to write the displayed value to memory, a right-fist to read (and clear) memory, and two closed fists to clear the decks and start a new calculation.
When I first tested this out, I assumed that users holding their hands palm-downward (as if typing on a keyboard) would be most comfortable. However, it turns out that a position with palms facing inward is more comfortable, and allows for both longer use and more speed:
It also turns out that visual feedback from the screen is very important, especially when learning. We can provide this via a familiar calculator-style digit readout, but it's also good to make the pyramids themselves rotate and animate with each stroke, to establish and reinforce the connection between a finger and its corresponding facekey.
This system is comfortable and easily learned, and is also easily extensible. For instance, the lack of a decimal point and a backspace key gets frustrating quickly, but these inputs are easily accommodated with minor modifications. First, a right-handed wave can act as a backspace. Second, the equals facekey can be replaced with a decimal point for entry, with a "clap" gesture becoming the equals operator, which has the delightful result of making calculations rhythmic and modestly fun.
Extending This to a Full Keyboard
A peripheral-free calculator is one thing, but replacing a typical keyboard of 80+ keys is quite another. There are, however, some very simple and practical ways to continue development around this keyfacing concept.
The standard keyboard is arranged in four rows of keys plus a spacebar: numbers and punctuation on top with three rows of letters beneath. Each row is defined by its position in space, and we can use that concept here.
Instead of moving your hands toward or away from a fixed point like the camera, a more flexible method is to make the system self-referential. We can let the player define a comfortable distance between their palms; the game will then set this distance internally as 1-Delta. The equivalent of reaching for different rows on the keyboard is then just moving the hands closer together or farther apart: a 2-Delta distance accesses "second row" keys, and 3-Delta reaches third-row keys.
The "home keys" are set to this 1-Delta distance, and keyfacing proceeds by mapping letters and other characters to a series of pyramids that sequentially cover the entire alphabet. Experimentation suggests 3-4 comfortable and easily reproducible Deltas exist between hands that are touching and shoulder-width. Skilled users may find many more, but the inherent inaccuracy of normal kinesthesia is likely to be a ceiling to this factor.
Simple gestures provide another axis of expansion. The keyboard's Shift key, for instance, transforms each key into two, and the Ctrl and Alt keys extend that even more. Simple, single-handed gestures would create exactly the same access to key layers while maintaining speed and flexibility. For instance, a fist could be the Shift key. A "gun" hand may access editing commands or any number of combinations. By using single-handed gestures to modify the keyfaces, the user can access different characters.
Ready to try it yourself? First, you’ll need to install the Intel RealSense SDK and set up the plugin for Unity.
Crash Course in Unity and RealSense
Here's a quick walkthrough explaining how to install and set up the RealSense SDK and Unity. We'll make a simple test demo that changes an object's size based on the user's hand movement.
1. What You'll Need
You will need:
An Intel RealSense 3D camera (either embedded in a device or an external camera)
I'm going to use the spaceship from Unity's free Space Shooter project, but you can just use a simple cube or any other object if you prefer.
2. Importing the RealSense Unity Toolkit
The package containing the Unity Toolkit for Intel RealSense technology contains everything you need for manipulating game objects. The Unity package is located in the \RSSDK\Framework\Unity\ folder. If you installed the RealSense SDK in the default location, the RSSDK folder will be in C:\Program Files (x86)\ (on Windows).
You can import the Unity Toolkit as you would any package. When doing so, you have the option to pick and choose what you want. For the purposes of this tutorial, we will use the defaults and import everything.
As you can see in the following image, there are now several new folders under the Assets folder.
Plugins and Plugins.Managed contain DLLs required for using the Intel RealSense SDK.
RSUnityToolkit is the folder that contains all the scripts and assets for running the toolkit.
We won’t go into what all the folders are here; I’ll leave that for you to investigate!
3. Setting Up the Scene
Add the ship to the scene.
Next, add a directional light to give the ship some light so you can see it better.
It should look like this:
4. Adding the Scale Action
To add scaling capabilities to the game object, we have to add the ScaleAction script. The ScaleActionScript is under the RSUnityToolkit folder and in the Actions subfolder.
Simply grab the script and drag and drop it directly onto the ship in the Scene view. You will now be able to see the ScaleAction script parameters in the Inspector.
5. Setting the Parameters
Starting with the Start Event, expand the arrow to show the default trigger. In this case, we don't want to use Gesture Detected, we want to use Hand Detected.
Right-click on Gesture Detected and select Remove. Then click the Start Event’s Add button and select Hand Detected. Under Which Hand, choose ACCESS_ORDER_RIGHT_HANDS.
Now we'll set the Stop Event. Expand the Stop Event and remove the Gesture Lost row. Next, click the Stop Event’s Add button and choose Hand Lost. Next to Which Hand, select ACCESS_ORDER_RIGHT_HANDS.
We won’t have to change the Scale Trigger because there is only one option for this anyway. We'll just use the default.
6. Trying It Out
That's it! Save your scene, save your project, and run it; you'll be able to resize the ship on screen with a gesture.
Now It's Your Turn!
We've discussed the ideas behind inputting text without touching a peripheral, and you've seen how to get started with the RealSense SDK in Unity. Next, it's up to you. Take what you've learned and experiment!
First, get your demo interpreting different characters based on which fingers you move. Next, reflect this on the screen in an appropriate way (you don't have to use my pyramid method!). Then, take it further—what about trying a different hand position, like with palms facing the camera, or a different input motion, like twisting a Rubik's cube?
The Intel® Software Innovator program supports innovative independent developers who display an ability to create and demonstrate forward looking projects. Innovators take advantage of speakership and demo opportunities at industry events and developer gatherings.
Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for the Internet of Things, Android, Intel® RealSense™ Technology, Modern Code, Game Dev and Windows to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.
This is a follow-up to my first tutorial on using the Nintendo method of level design. During the creation of Super Mario World, the design team at Nintendo (perhaps serendipitously) created a method for building and organizing level content. I call this method the “challenge, cadence, skill theme” system, or CCST for short.
In the previous tutorial, I defined the terms challenge and cadence; if you haven’t read that article, this one isn’t going to make much sense. In this article, I’m going to talk about how to use the primary types of challenges: evolutions and expansions.
Although the CCST structure is a means of organizing level design content, it does not restrict the creativity of the designer by any means. Indeed, I'll show you how the CCST structure helps designers to make the most of a limited set of tools for creating level designs.
My Example
I have created a level here which shows how to produce a number of different (and increasingly difficult) challenges with only six different enemies/obstacles. I'll refer to this level throughout this tutorial, so take a look at the playthrough below:
Level code: 1288 0000 00EC F5A2
The Use of Evolutions
For the first seven challenges in the level, I’m going to focus on using evolutions. If you recall from the first article, a challenge evolves when it becomes qualitatively more difficult than the previous challenge, while keeping the same essential idea.
Let’s see how this works in this level. Here’s the first challenge the player comes across:
It’s a simple instance of a Thwomp which will drop from the ceiling when Mario comes close enough to it. It’s a very simple design idea, and the evolution of this is also fairly simple.
All I did here was to add the rotating fire trap. Instead of running straight through, the player now has to make a small jump to get over it.
Adding a new element to an old element is probably the most basic form of evolution. When there are more obstacles to think about, the challenge is more difficult—it’s a pretty simple dynamic, but it’s also a building block for more difficult challenges.
For the third challenge, I add a new element: a pitfall just to the left side of the Thwomp.
This pit forces the player to jump, meaning that the player has to make a jump into the Thwomp’s line of fire (so to speak). You’ll notice that this isn’t much more difficult than the previous evolution, and it’s not supposed to be. Both of these evolutions are simply branching off from the first challenge, but in different directions.
Next, I introduce an evolution of the pitfall: a fireball shoots up at a regular interval, meaning that the player has to wait for the right moment to jump over the platform and underneath the Thwomp. This is clearly building on the previous challenge, and it is a little harder—but it’s still not that hard yet.
For the next challenge, I add together everything that the level has done up to this point:
This is the most difficult challenge so far, because it has three obstacles working in it at once. Most of the time, three obstacles is the greatest number you’ll see in a Nintendo challenge, and this one is about as hard as most Nintendo challenges get. You might aim for something harder in your own games (or your own Mario Maker levels), but the technique I use here is still a means to getting there.
For the next two evolutions, I actually drop the difficulty a little bit, and instead focus on qualitative variety. One game design principle that I cannot stress enough is that levels should not always get harder in a linear fashion. The difficulty of most levels goes both up and down. Sometimes, it’s enough for challenges to simply be different, rather than more and more difficult.
With that in mind, for this evolution, I remove the pit and replace it with a Chain Chomp.
And then, for the final challenge, I bring back the pits and fireball, but I drop the rotating flame trap.
These are still fairly difficult challenges, but they don’t keep piling design elements up. There’s a limit on how much stuff you can stack in your challenges before the challenge stops being fun and starts being annoying, although that limit depends a lot on context.
The Use of Expansions
Before I get to the overarching lesson that we can learn from evolutions, I want to look at the proper development of expansion challenges. In the previous article, I defined expansions as being like evolutions, except that they change quantitative elements of some challenge before them. Let’s take a look at a very simple expansion, now.
This is the first expansion. We're going all the way back to the first challenge in the level, and simply doubling the number of Thwomps.
The key question to ask is “what is the relationship between numerical increases and perceived difficulty?” This challenge is numerically more dangerous than its ancestor, but despite doubling the number of Thwomps, there's not a huge change in the player's effort level.
This is an important principle to understand about using expansions: the difficulty of a challenge does not scale linearly with expansions. Expansions are just as dependent on context as evolutions are, despite being different in construction.
Let's take a look at another expansion.
Doubling the number of flame traps creates a challenge that is significantly more difficult than the doubling of the Thwomps (especially since their rotations do not sync up). In fact, this is probably the most difficult challenge in the whole level. The same kind of expansion as before is going on, but the player's experience of it is significantly different.
But what about an expansion with a totally different element? Here's an expansion of the jump-distance.
This is a slightly less obvious way of implementing expansions. I've increased the size of the pit by two block-lengths (doubling its size, just like I doubled everything else), meaning that the player will definitely feel the change in the size of the jump, even if the change isn't obvious at a glance.
The interesting thing about this challenge is that the expanded jump-distance actually doesn't make the challenge that much harder. Part of the challenge of the fireball jumps is that the player has to angle Mario's descent so as not to collide with the Thwomp.
The jump over the larger pit actually changes Mario's jump path so that he'll be farther away from the falling Thwomp, and safer! The initial jump is still more difficult than its ancestor, but the overall challenge might not be much harder at all. These are the things to be on the lookout for during play-testing, if you're using the CCST framework.
Next, I return to the more obvious form of doubled elements, by inserting two fireballs.
This challenge is harder than the two Thwomps, but easier than the two flame traps. Doubling the number of fireballs creates a slightly smaller window of time in which the player can jump through the challenge, but it’s just not as hard as the challenge immediately prior.
The takeaway message here is that, although expansions are very easy to implement at first, they can be very difficult to balance. Doubling one element is not the same as doubling another element, no matter how similar those two elements are.
Expansion by Contraction
There are other ways of using expansions to make challenges more difficult. One method I like a lot is what I call the expansion by contraction. It's an oxymoronic term, but it captures the idea well. Let's take a look at one:
What I've done here is to contract the amount of safe space between Mario and the Thwomp. It's clearly a numerical increase in difficulty based on the first challenge in the level, so we still call it an expansion, even though it's reducing something. Almost always, expansions by contraction are executed by reducing the space between Mario and some danger. Sometimes it's done by moving closer to the danger, and sometimes it's done by reducing the size of a platform; it all depends on context.
The biggest danger in the expansion by contraction is contracting too much, so that the player simply cannot get through the challenge without taking damage. My last challenge does this: the final Thwomp is so close to the ground that Mario must end up being hit by it if he tries to run past them all.
This brings up two very important points about the use of expansions.
The first point is that there is a limit on how much a design element can expand numerically. Eventually, the pits will be too wide, the enemies too many, or the safe spaces too narrow. If you want to design your level up to the maximum threshold of expandability, it’s probably a good idea to design backwards, starting from the maximum and working back towards a smaller beginning.
That said, the other important thing to know about expansions is that sometimes they’re just psychological. In the last challenge I showed you, the first three Thwomps will not actually reach Mario if he remains at full speed, so the danger isn’t that great. If the player feels like they’ve just done something flashy and complicated, then the level has done its job well enough.
Conclusion and Tips
This was a long and somewhat winding article, so I’d like to reiterate some of the main points here. The overall point is that although there are clear, repeatable strategies for developing level content, those strategies do not produce perfectly predictable content. Really, this is a good thing, because it means that the CCST framework still offers plenty of room for creativity. The framework isn’t about making paint-by-numbers levels; it’s about organizing your content so levels can be both coherent and challenging.
A handful of mechanics are enough to fill an entire level if you mix and match your design elements. My level had only six different types of obstacles in it, but I actually ran out of room before I was even close to exhausting the possibilities.
Introduce and use several different evolutions of your main design idea.
You can stack the different evolutionary branches of your level up, but be careful about stacking too many at once. Usually, any two evolutions can be made to work together, but once you hit three, you should make sure it’s not too difficult or time-consuming to play through.
Expansions are numerical in nature, but the player’s experience of an expansion will not usually match the expansion perfectly. That is, if you double something, it’s not going to make the challenge twice as hard. It could be 10% harder or 400% harder; it depends on what you’re expanding.
There is an absolute limit on how much any design element can be expanded. If you want to push this limit in your level, consider designing backwards.
Remember that a player’s subjective experience of your game doesn’t have to match the nuts-and-bolts numbers of it. If something feels more challenging than it really was, you may have achieved your design goals.
Nintendo created several games using this method, or a slightly modified version of it. What’s more, the CCST framework (evolutions and expansions) appears in lots of other kinds of games, not just Nintendo titles or platformers. At the end of the day, this method is just a means of organizing your content, not of changing the voice of your game.