Channel: Envato Tuts+ Game Development

Numbers Getting Bigger: What Are Incremental Games, and Why Are They Fun?


Incremental games are fascinating and perplexing. Marked by minimal player agency and periods of inactivity, they seem to defy conventional logic about good game design, and yet nonetheless have attracted a substantial player base. Let's examine them in more detail, and see if we can explore why that is.

What is an Incremental Game?

You click a button, a number goes up. You click it again, the number goes up again. You keep clicking, and eventually unlock something that makes the number go up for you. Now the number keeps going up, even when you're not playing. Next, you repeat this process, forever.

That, in essence, is the framework of an "incremental" game. While they may seem simple, even brutally so, there is a depth of play and surprising addictiveness to them. They appeal to a variety of playstyles as well, and there have been successful commercial and casual incremental games like Clicker Heroes and AdVenture Capitalist, as well as more experimental or hardcore examples like Candy Box, Cookie Clicker and Sandcastle Builder.

CookieClicker by Orteil
In Cookie Clicker, it's all uphill from here.

So what defines an incremental game? Though there is substantial variation and experimentation in the genre, the fundamental aspects of the design are the following:

  1. the presence of at least one currency or number,
  2. which increases at a set rate, with no or minimal effort,
  3. and which can be expended to increase the rate or speed at which it increases.

It's that loop of accumulation, reinvestment, and acceleration that defines the genre and distinguishes it from games that merely have an increasing score. As an example, in the influential Cookie Clicker, the player seeks to amass 'cookies', which they initially increase by clicking a giant cookie, and then spend to buy upgrades which produce more cookies.
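That loop is small enough to sketch in a few lines of JavaScript. Everything here (the currency name, the starting numbers, the 1.15 cost multiplier) is illustrative rather than taken from any particular game:

```javascript
// A minimal incremental game: one currency, a passive production rate,
// and an upgrade that reinvests the currency to raise that rate.
function createGame() {
  return { cookies: 0, perTick: 1, upgradeCost: 10 };
}

function tick(state) {
  // The number goes up on its own, even if the player does nothing.
  state.cookies += state.perTick;
}

function buyUpgrade(state) {
  if (state.cookies < state.upgradeCost) return false;
  state.cookies -= state.upgradeCost;                      // reinvest...
  state.perTick += 1;                                      // ...to accelerate
  state.upgradeCost = Math.ceil(state.upgradeCost * 1.15); // next one costs more
  return true;
}
```

Run `tick` on a timer and the currency climbs by itself; each purchase makes every later tick bigger, which is exactly the accumulation, reinvestment, and acceleration loop described above.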

One of the distinguishing features of these games is that the number can increase without the player's direct involvement or even presence. This has led some to call incremental games "idle games," since they can be left to run and then returned to. While this is an important feature, I don't think it's central to what defines these games or why players enjoy them. The unending upward growth of numbers is the most prominent feature, and so "incremental game" is a more useful title.
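The "left to run and then returned to" behavior is usually just arithmetic on a saved timestamp: on return, the game grants production rate times elapsed time. A minimal sketch, with assumed field names (real games typically cap or discount offline gains):

```javascript
// Credit the progress "earned" while the player was away.
// lastSeenMs would be persisted (e.g. in a save file or localStorage)
// when the player leaves, and compared with the clock on return.
function creditOfflineProgress(state, lastSeenMs, nowMs) {
  const elapsedSeconds = Math.max(0, (nowMs - lastSeenMs) / 1000);
  state.cookies += state.perSecond * elapsedSeconds;
  return elapsedSeconds;
}
```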

The Psychology of Increasing Numbers

What is it about these games that can inspire such dedicated play? There are a number of reasons, including two important ways that incremental games leverage unique facets of human psychology.

The first is a term that is commonly brought up in the discussion of incremental games: the "Skinner Box." Named (to his chagrin) after the behaviorist B. F. Skinner, these were experimental chambers he built to study behavioral conditioning of animal subjects. The "operant conditioning chamber" would typically house an animal participant who could produce a reward (like food) in response to performing an action (like pushing a button). Notably, once the response mechanism has been learned, animals have been observed to repeat the food-producing action even if it only produces a reward after long intervals or even at random.

By analogy, systems that periodically reward users or players for repetitive tasks are often derisively called Skinner Boxes, because the neurological feedback loop this creates can be incredibly addicting. This structure is certainly quite apparent in incremental games: The player performs an action like clicking or waiting, and is periodically rewarded for their efforts by a number going up. This isn't necessarily bad in and of itself, and is actually a quite commonly used mechanic. Many games need to train their players to perform certain actions in the game system, and use positive rewards (points, experience) and negative outcomes (death) to teach the player the correct way to proceed. In incremental games the usage is simply more obvious.

The second psychological underpinning of incremental games is our desire for accumulation and our aversion to loss. Our brains are wired to dislike losing things we have and, conversely, to strongly desire accumulating more.

Incremental games work with both sides of this. Because the primary currency always goes up, even when you're not playing, it reduces the anxiety caused by loss aversion: you can safely do something else for a while without the stress of the currency going away. Additionally, thanks to our brains' poor numeracy skills, we can enjoy numbers that go up even when those numbers lack external meaning. So, although it can seem ridiculous, a number that simply goes up can actually make us feel good.

Again, this is laid bare in an incremental game, but most games take advantage of this to some degree: it's why a "score" is such a widespread mechanism for player reward.

Clash of Clans's storage cap uses loss aversion: you lose potential gains unless you check in frequently
The storage cap in Clash of Clans uses loss aversion: you "lose" potential gains unless you check in frequently.

Origins of the Genre

Some of the earliest uses of the incremental mechanic were amongst the first generation of MMOs in the late 1990s. Because MMOs used a subscription model, they needed to entice players to play for as long as possible. Part of how they accomplished this was with vast persistent worlds and complex social systems, but another technique was setting players up on a sort of "treadmill" of power: you kill a few rats, you gain a level! You kill more rats, and gain another. Now rats don't give enough experience, so you move on to killing slimes, and so on. The player is always running for the next goal, but they never actually get anywhere. 

There's a strong work/reward loop in that mechanism, and, critically, the time invested to reach each additional reward grows longer and longer. EverQuest was particularly famous for its level-grinding, with a leveling curve so steep that players actually became relatively less powerful the more they played. Despite that, the reward loop the game reinforced could be extremely seductive, and it was one of the first games to propel game addiction into popular discussion.
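That ever-lengthening treadmill usually comes from a geometric experience curve: each level requires a fixed multiple of the experience the previous one did, so the time to the next reward keeps growing while the reward itself stays the same. A sketch with made-up constants:

```javascript
// Experience needed to go from a given level to the next.
// base and growth are illustrative, not from any real MMO.
function xpForNextLevel(level, base = 100, growth = 1.5) {
  return Math.floor(base * Math.pow(growth, level - 1));
}
```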

Lampooning this endless grind was 2002's Progress Quest by Eric Fredricksen, which might be the first example of a true incremental game (as opposed to one that merely uses the incremental mechanic as a hook, as EverQuest does). It's also among the most minimalist, as it eliminates any semblance of player interaction. It is a zero-player game: once you roll your character, the game will have them kill mobs, complete quests, and level up, without any further input from you. Despite being an almost literal reductio ad absurdum, critics have noted that it was "something much more frightening: it was enjoyable."

Progress Quest by Eric Fredricksen
Some of the increasing progress bars in Progress Quest.

Some years later, social games would be the next inheritor of the incremental mechanic. Played primarily on Facebook, social games similarly needed players to play for as long as possible (though they didn't use subscriptions, but rather the emerging pricing tools of "free"-to-play). Notably, the incremental mechanic became not only the reward system of these games, but often the entire game itself. One of the most successful of these was 2009's FarmVille. It was ostensibly a farming simulator, but contained very little resource management or strategic decision making. Instead, the player buys plots of land on which to grow crops (and later, breed livestock), which can then be harvested and sold, the proceeds of which can be used to buy additional plots, and so on ad infinitum. At its height, it was played by an astonishing 80 million people.

Farmville by Zynga

The apparent simplicity of games like FarmVille led the games critic and professor Ian Bogost to produce a satirical deconstruction of the concept with Cow Clicker in 2010. He designed it to reveal the starkness of the core underlying play mechanism (clicking a cow, having a number go up), but was instead surprised and dismayed to find that people actually played his game, without irony. Though it was an attempt to show how empty and meaningless these games actually are, Cow Clicker inadvertently showed the opposite.

Incremental Games Today

Many of today's most popular mobile games (the modern successors of social games) make use of incremental mechanics. The immensely successful Clash of Clans (2013) is framed as a strategy war game, but the battling mechanic is fairly simplified and only forms a small part of the experience. The main aspect of play is upgrading the village base by spending gold and elixir, both of which accumulate on their own, and can be made to accumulate faster with incremental upgrades. 

Hay Day, made by the same company, is even clearer in its use. The core gameplay loop is exclusively incremental growth and reinvestment:

The game loop in Hay Day
Hay Day's game loop. Based on a diagram from Game monetization design: Analysis of Hay Day by Pete Koistila.

One of the most successful modern incremental games is 2013's Cookie Clicker, which has brought more mainstream attention to games focused solely on incremental mechanics. Cookie Clicker uses a single currency (cookies), which the player can slowly accumulate by clicking a large cookie, and then spend on upgrades which produce cookies autonomously. The simple premise, charming art style, and gradual descent into absurdity of its "plot" all helped propel the game and its genre to widespread popularity. It inspired a wave of similar games, with new takes on the concept continuing to be released today.

Are These Even Games?

Because of the simplicity of their core mechanic and the limited interaction with the system, incremental games can strain our definitions of what a "game" even is. Many critics and commentators have called these games mindless, dumb, and pointless, while at the same time conceding that they can be addictive or hypnotic. Cow Clicker creator Ian Bogost, speaking of Cookie Clicker's monotony and repetitiveness, went so far as to call it among the first games for computers, not people, to play. That seems at odds with the large number of human beings who do play it, though.

Number by Tyler Glaiel
Number by Tyler Glaiel, a particularly stark example.

However they may appear, incremental games are, in fact, games. We can set aside their immense addictive pull for a moment, because while that speaks to why we might find them compelling on some level, it's not the same as analyzing them as games. Games of most genres appeal to some visceral or subconscious area of their players, but that is secondary to what makes a game a game.

First, incremental games do have some non-obvious mechanics to unpack, chief among them discovery. In most incremental games, the player doesn't know the full extent of the upgrades they can buy, the upper bounds of the game's main number, or the speed at which it can increase. Exploring the limits of an interactive system is one of the hallmark qualities of how players experience a game, and incremental games are no exception. Even if they appear to be simple, they often permit vast exploration. Candy Box in particular is arguably more about exploration and discovery than it is about incremental growth, despite that being its most obvious feature.

Secondly, while incremental games lay bare the vapidity of their premise ("make a number go up a lot"), it's the means to that end that can actually be engaging. Cookie Clicker, for instance, does permit the use of strategy because there are multiple ways the player can increase their "Cookies per second" metric. So the "game" is about optimizing the system in pursuit of that goal. Most games actually have meaningless goals ("make this score go up") but it's the pursuit of them that's the fun part. Incremental games are just startlingly upfront about this convention.
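That optimization is easy to make concrete. One common player heuristic, sketched below with invented numbers, is to buy whichever upgrade has the shortest payback time, i.e. the lowest cost per unit of production it adds:

```javascript
// Pick the upgrade with the best cost per added "cookies per second".
// A lower ratio means the purchase pays for itself sooner.
function bestUpgrade(upgrades) {
  return upgrades.reduce((best, u) =>
    u.cost / u.addedCps < best.cost / best.addedCps ? u : best);
}
```

Real optimal play is more subtle (prices scale as you buy, and some upgrades multiply others), but even this simple ratio captures why choosing between purchases feels like a strategy problem.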

In his seminal 1938 work Homo Ludens, the historian Johan Huizinga observed of play that it was "not serious, but at the same time absorbing the player intensely and utterly. It is an activity connected with no material interest, and no profit can be gained by it." Those are qualities that we can observe quite readily in incremental games.

Beauty in Simplicity

Incremental games have seen a surge of interest in recent years, and we'll undoubtedly continue to see new examples, further exploration of the mechanic, and innovations on the premise. It would be a mistake to dismiss these games as merely inexplicably addicting. 

I hope that by giving them a critical examination and by looking through their history, that we can come to appreciate their minimalist beauty and elegant execution. So keep an open mind and explore the form a little, and don't be surprised if you lose a few hours in the process.


New Course: Character Design & Animation for Games


Ever wondered how to create and animate characters for video games? In Character Design & Animation for Games, concept artist and animator Jonathan Lam will take you through a step-by-step process, from learning how to design your character to managing your assets in animation programs such as Spine. Topics covered will include sketching, asset creation, posing and basic movement.

video game character design course final product

You can take our new course straight away by subscribing to Tuts+. For just $15 a month, you get access to this course and hundreds of others, with new ones added every week.

Hockeynamite - Development Post Mortem


Hockeynamite was an unusual game for me. It was born out of a technical demo created for a series of tutorials, so it didn't quite follow the usual workflow of evolving an idea into a fully playable game mechanic. In my case, the starting point was an existing technical demo that was not a game whatsoever.

This post-mortem is an overview of the process I had to come up with to insert a "soul" into a technical demo to turn it into a game. I made several mistakes and (I hope!) right moves along the way, so I am really glad to share the lessons I learned.

Play the Finished Game

How It Plays

Hockeynamite is a simplified hockey game featuring a fraction of the sport's original rules. The player controls a single athlete on their team at a time, and the primary objective is to score as many times as possible within the time limit. In order to score, the player must carry the puck and make it touch the opponent's goal. There is a catch, though: if an opponent touches the athlete carrying the puck, that athlete freezes and shatters into a million pieces!

Before coming up with that crazy game mechanic, my starting point was this demo:

The demo had two teams of AI-controlled athletes all trying to carry the puck to their opponents' goal. There was no score, but if the puck eventually touched any goal, both teams would re-organize for a new match. 

Since I had to use this demo as a starting point, my first thought was to add some interaction by allowing the player to control any athlete of his team. I also thought that a score count would help with the fun.

To bring a bit more action to that idea, I also added a time countdown, so the team that scored most during a period would win. There was just one problem: it was boring as heck!

First version of Hockeynamite
First version of Hockeynamite.

For the tech demo, I used a simplified version of hockey's rules, because it would have been too complicated to implement them all within our development budget. That became a glaring problem, since the game was too simple without the sport's original rules. Beating the AI-controlled athletes was a challenge, but the fun was over after a few seconds. I needed something to keep the player focused on the opponents and the puck while trying to score (or prevent the opponent from scoring).

I could not change my development budget to accommodate all the official hockey rules, so I decided to add an unusual one: if you are carrying the puck and are touched by the opponent, you explode. As I expected, the result was more engaging and interesting gameplay:

Addition of shattering when athletes touch each other
The addition of shattering when athletes touch each other. Explosions are always awesome in games, no?

I think that was a good addition. Even though hockey athletes don't usually shatter into thin air, this gameplay twist saved me time and gave the game an objective. It might not be the best idea ever, but at least the game didn't feel boring anymore.

I had postponed my deadlines far too many times by this point. I decided to add a title and an instructions screen (with text only, no images) and ship the first playable version of the game. I knew I was rushing things and that this was a mistake, but I didn't want to miss another deadline. I thought the instructions were clear, but I knew players would not read them. I ignored a very important lesson I've learned from past projects: instructions need less text and more images:

First instructions screen There was too much text
Hockeynamite's first instructions screen. There was too much text.

Validating Everything With First Impressions

To check how the game would be received, I was recommended a service that FGL offers, called First Impressions. The idea of this service is best described by FGL itself:

First Impressions are an innovative way to get feedback about your game's initial play experience. We ask gamers with wildly different backgrounds to play your game for at least five minutes and then give you feedback about their experience.

A gamer (a real one, not an FGL employee) who has never seen your game before plays it for a while (usually 5 to 20 minutes) and reports on the experience using a questionnaire. The responses are organized into categories such as graphics, gameplay, sounds, and controls, with each one given a score from 0 to 10. You pay USD $1 for each gamer session (called a "first impression"). I bought 10 first impressions and ran them all at once. The results started to arrive within 24 hours.

I was skeptical of this service at first, but the results were overwhelmingly satisfying. Usually I ask friends and family for feedback, but they are always polite and soft when evaluating the game, even when I ask them to be as honest as possible. FGL's first impressions were straight to the point and without euphemisms: users were testing the game with no fear of hurting my feelings. 

Here are a few comments (and the ratings I received) regarding the "Fun" category:

"The game is quite fun to play but the controls sucks." 7/10
"I am not really a hockey fan, but I did enjoy the game. It is easy to control and it is a challenge to beat the opposing team." 8/10
"The game does not have enough elements to it. You need to be able to earn power ups." 6/10
"The game was a bit frustrating at first then I figured out all you had to do was click on the goal right from the start and it will go in 9 times out of ten. Making it a bit more challenging by giving more control of the players would be helpful. Also making it compatible with a touch screen would be cool." 5/10
"Fun but would be more fun if you could better aim shooting and passing the puck." 6/10
"The game can be more fun if you could do special moves or shoots." 8/10

I was expecting far worse comments, but overall I was happy with the feedback. The game was not that fun, but at least I had a start that was rated above zero. The comment that struck me the most was this one, regarding "Polish":

"It would be an instant oxymoron to talk about polish and quality in the context of this current monstrosity. I get the idea that there is fun to be had because the guys explode upon contact and whatnot, but in reality, the main idea, I feel, is simply is too brute to operate on such a small playing field. The game, on the other hand, is not sophisticated enough to legitimally call itself a manic shooter. As said, and, as I'm sure you aware of, this genre has been delivered and designed according to far superior qualifications back in the '90s by the Speedball franchise, and it even had a revamp in the not too distant past. Your product should be able to offer something to rival the conventions that are reflected by similar games, but this product, I feel, needs a lot more effort being put into it before reaching a state that is presentable. It, I feel, is under AND beyond criticism at its current state. You need to give this game a lot of love AND a PACING to call it one. Please do not let yourself be let down by my words. You just need to realize that this simply won't be enough when grannies are running around with handhelds. Peace!" 1/10

After carefully reading all the feedback, several problems in the game became evident. The controls were not okay, the playing field was too small, there were too few elements (like power-ups), and it was hard to identify who had the puck and where it was.

I assumed some players were complaining about the controls because they didn't understand how to move and shoot properly, probably because they never read the instructions or read them too quickly. I could fix that by tweaking the controls and improving the instructions.

Regarding the playing field being too small, I had thought about that early in the development, but had decided to ignore the problem. A serious mistake.

The top-down camera captured the whole rink There was little room left for movement strategy
The top-down camera captured the whole rink. There was little room left for movement strategy.

A top-down camera covering the whole rink was fine for the tech demo, because it allowed the user to see the whole arena and understand what was going on. As a downside, however, it drastically reduced the room for movement strategy, since moving from one side of the rink to the other was really quick. As a consequence, the rink also felt crowded. Luckily, I could fix that by making the athletes smaller and changing the camera's field of view.

Finally, there was the complaint about the game having very few elements. I knew the game was too simple and the first impressions just made that more obvious. I had to come up with some more additions to make the game a bit more complex, without blowing the development budget.

Fixing the Wrong Things

Based on the feedback, I discussed a few ideas with my "producer" (the tutorial editor), and we decided to make a few changes.

New Camera

The first change was to improve the camera system to allow players to use more movement strategy. I made the rink 4x bigger and locked the camera on the puck. As a result, the player could see just a fraction of the arena now, instead of the whole place at once:

The new camera system removed the claustrophobic feeling and gave the player more freedom to move around
The new camera system removed the claustrophobic feeling and gave the player more freedom to move around.

If I had to create this game again, I would choose this camera system from the very beginning. It reduced the claustrophobic feeling that the tech demo had, and definitely improved the gameplay.

A Better HUD for Active Elements

Next, I added a few visual marks to the HUD to make it more explicit who is carrying the puck and where it is. The athlete being controlled by the player received an animated green halo. The puck received a permanent animated purple circle, too, to make it easier for the player to spot:

The currently selected athlete and the puck received animated HUD marks It helped the player locate the important things on the screen quickly
The currently selected athlete and the puck received animated HUD marks. It helped the player quickly locate the important things on the screen.

Honestly, I didn't think the puck circle was necessary, but the first impressions clearly stated the opposite. Maybe I was too good at finding the puck because I had spent hours looking at it during development, but if a first-time player is not able to find it, the results are catastrophic. Without the first impressions, I would never have added that purple circle, which would have been a significant mistake.

Better Instructions

This time I decided to follow my experience (and the feedback from the first impressions) and use more images and animations and less text on the instructions screen. I spent a significant (but necessary) amount of my precious development time re-working this screen:

New instructions screen featured more animations and fewer texts
New instructions screen featured more animations and less text.

If the player completely ignored the instructions' text, the images and animations would still make it clear how to play the game. While working on the instructions, I decided to make the player able to speed up the athlete being controlled by holding the Shift key. It was a small addition, but it made the gameplay feel much better. 

After I finished the instructions screen, I was satisfied with the results. I should have used those animated instructions from the very beginning. Reading through all the instructions, though, I noticed there was too much information to remember. Players would probably forget the keys as soon as they left the instructions screen, so I should have added some sort of in-game reminder about them. I didn't, because of time constraints, but I feel that was another mistake.

Power-Ups

Finally, I implemented some power-ups. They were a decent addition, and shook things up, because they allowed the player to improve his team with "super-powers". Giving power-ups to both teams when anyone scored helped to balance the game and ensure that gameplay remained competitive.

Because of time constraints, again, I only implemented two power-ups, but several others could have been added. I think it was a mistake to add just two of them, because they really made the game better, especially if two human players could eventually play together. 

On the other hand, I am glad we just stuck with two power-ups; otherwise, we definitely would have exceeded the deadline by several days, maybe weeks.

A Second Batch of First Impressions

I bought a second batch of 10 first impressions (another USD $10 spent). That was a small investment for receiving fresh feedback within 48 hours. This round of first impressions confirmed a few thoughts I had, and gave a mix of opinions regarding gameplay and fun.

I knew players would at least notice the instructions this time, and the impressions confirmed that. As a side effect, however, players had problems remembering them all (as I predicted—but I decided to ignore the problem):

"It was hard to remember all the instructions for how to use the game. I suggest in-game help or a small list of the instructions to appear on screen. The character was also hard to control as in directing it to go a certain way." Ease of Use: 6/10
"Game would be easier to play if more of the options were on screen instead of having some many buttons its just too many to remember." Ease of Use: 5/10

If I had added that in-game help I'd previously thought about, players would not be complaining about the lack of it now. It is really hard to distinguish the essential features from the cosmetic ones, but that in-game help was an easy one to identify as important. I made a mistake by ignoring my instincts here; those instructions would have been essential for understanding (and enjoying) the game.

"I think it would be more fun just if the game had its movements a little more under control as it it is harder to enjoy when the puck or character seems to drift away from where it is directed." Fun: 8/10

I also received complaints about the athletes' movement. I had tried to mimic movement on ice, where things keep sliding in one direction even after being pushed in another. I thought about removing the sliding effect, but sharp, precise movements would completely defeat the feeling of moving on ice. I decided to keep the sliding movement untouched, and I think that was not a mistake.
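A sliding feel like this is typically implemented as velocity plus friction: input accelerates the athlete, the velocity persists between frames, and friction bleeds it off gradually, so changing direction takes time. A rough sketch (the constants and field names are illustrative, not Hockeynamite's actual values):

```javascript
// One movement step for an athlete on ice. Input nudges the velocity,
// but the old velocity carries over, producing the drifting feel.
function stepSkater(athlete, inputX, inputY, accel = 0.5, friction = 0.98) {
  athlete.vx = (athlete.vx + inputX * accel) * friction;
  athlete.vy = (athlete.vy + inputY * accel) * friction;
  athlete.x += athlete.vx;
  athlete.y += athlete.vy;
}
```

Raising the friction factor toward 1 makes the ice more slippery; lowering it toward 0 gives the sharp, precise movement that would have killed the hockey feel.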

I received divergent opinions regarding the "Fun" category:

"More realistic. Every time I clicked to hit the puck it was more beneficial to just run with it." Fun: 5/10
"Game was fun eventually got a little boring. Maybe let player versus each other or have little mini challenges." Fun: 7/10
"I am not a sports game fan, but it is pretty fun." Fun: 7/10
"Would be better if graphics were better." Fun: 5/10
"'Retro' feel games like this can be fun, but only if they're modern with a retro look. This game kept starting and stopping, blinking images would interfere with the game, players would vanish into thin air, etc. TOO retro." Fun: 2/10

At this point, I was pretty confused. Some players seemed to enjoy the game, while others were complaining. They were not complaining about shattering athletes, though, so I assumed my game mechanic was working—I just had to improve other aspects of the game (such as the graphics).

The Feature Creep Vortex

After reading the second batch of first impressions over and over again, I had a list of things I could improve. I could have added more power-ups, worked on the movement system a bit more, and so on... but this was turning into a feature creep vortex.

I was about to miss my deadline (again), so we decided to make two last adjustments: describe the power-ups in the instructions screen and improve the HUD to show when a power-up is active.

Available power-ups are displayed on the left in the HUD.

After completing those, I decided to stop working on the game and ship it. I think this was a wise decision, considering the time constraints I was under. Feature creep is a never-ending cycle, and I could have found more and more things to work on as I progressed. I am glad I just called it a day and stopped.

Conclusion

The development of Hockeynamite took a lot more work than I planned. Finding a simple mechanic to be inserted into an already existing tech demo was a tough task, but polishing and shaping the game built on top of that mechanic was even harder.

FGL's first impressions service was really helpful. I quickly spotted several problems (gameplay, HUD, and so on) using the service; I would probably have missed or ignored those details if I were evaluating the game by myself. I spent just USD $20 to get almost instant feedback. It was totally worth it.

I think the game would be better enjoyed with gamepads and two human players, but I had no development budget to pursue that. A significant number of the comments I received support that idea. I would definitely make it a desktop offline game played with gamepads if I were to iterate on it some more.

Takeaways for Other Games

Below is a "TL;DR" list summarizing the things I will do differently when I next make a game, based on my experience developing Hockeynamite:

  • In the instructions screen: less text, more images and animations, no matter how close or overdue the deadline is.
  • If I have a feeling that something is not correct (like the missing in-game help in Hockeynamite), I will address (not ignore) the problem—especially if it was spotted by a person other than myself.
  • I will not treat a tech demo as an "almost finished" game. It was really hard and painful to come up with an acceptable game using that demo as a starting point.
  • I will not assume players see the game the same way I do. The green and purple halos I added to the game HUD made a huge difference in helping the player quickly identify important things on the screen. I can't imagine the game without them now.
  • If I don't have platform constraints (like "web-only"), I will definitely explore different input methods (such as gamepads) when an idea or opportunity arises.
  • I will polish the game as much as I can after the main mechanic is solid.

WebGL Physics and Collision Detection Using Babylon.js and Oimo.js


Today, I’d like to share with you the basics of collisions, physics and bounding boxes by playing with the WebGL Babylon.js engine and a physics engine companion named Oimo.js.

Here’s the demo we’re going to build together: Babylon.js Espilit Physics demo with Oimo.js.

Babylon.js Espilit Physics demo with Oimo.js

You can launch it in a WebGL-compatible browser—like IE11, Firefox, Chrome, Opera, Safari 8, or Microsoft Edge in Windows 10 Technical Preview—and then move inside the scene like in an FPS game. Press the s key to launch some spheres/balls and the b key to launch some boxes. Using your mouse, you can also click on one of the spheres or boxes to apply some impulse force on it.

1. Understanding Collisions

Looking at the Wikipedia collision detection definition, we can read that: 

Collision detection typically refers to the computational problem of detecting the intersection of two or more objects. While the topic is most often associated with its use in video games and other physical simulations, it also has applications in robotics. In addition to determining whether two objects have collided, collision detection systems may also calculate time of impact (TOI), and report a contact manifold (the set of intersecting points). Collision response deals with simulating what happens when a collision is detected (see physics engine, ragdoll physics). Solving collision detection problems requires extensive use of concepts from linear algebra and computational geometry.

Let’s now unpack that definition into a cool 3D scene that will act as our starting base for this tutorial.

You can move in this great museum as you would in the real world. You won’t fall through the floor, walk through walls, or fly. We’re simulating gravity. All of that seems pretty obvious, but it requires a bunch of computation to simulate that in a 3D virtual world. 

The first question we need to resolve when we think about collision detection is how complex it should be. Indeed, testing whether two complex meshes are colliding could cost a lot of CPU, even more with a JavaScript engine where it’s complex to offload that on something other than the UI thread.

To better understand how we’re managing this complexity, navigate into the Espilit museum near this desk:

the Espilit museum near the desk

You’re blocked by the table even if there seems to be some space available on the right. Is it a bug in our collision algorithm? No, it’s not (Babylon.js is free of bugs!). It’s because Michel Rousseau, the 3D artist who built this scene, has done this by choice. To simplify the collision detection, he has used a specific collider.

What’s a Collider?

Rather than testing the collisions against the complete detailed meshes, you can put them into simple invisible geometries. Those colliders will act as the mesh representation and will be used by the collision engine instead. Most of the time, you won’t see the differences but it will allow us to use much less CPU, as the math behind that is much simpler to compute.

Every engine supports at least two types of colliders: the bounding box and the bounding sphere. You’ll better understand by looking at this picture:

Illustration of bounding box and bounding sphere
Extracted from: Computer Visualization, Ray Tracing, Video Games, Replacement of Bounding Boxes

This beautiful yellow duck is the mesh to be displayed. Rather than testing the collisions against each of its faces, we can try to insert it into the best bounding geometry. In this case, a box seems a better choice than a sphere to act as the mesh impostor. But the choice really depends on the mesh itself.

Let’s go back to the Espilit scene and display the invisible bounding element in a semitransparent red color:

invisible bounding element in a semitransparent red color

You can now understand why you can't move past the right side of the desk: you're colliding (well, the Babylon.js camera is colliding) with this box. If you'd like to fix that, simply change the collider's size by lowering its width to perfectly fit the width of the desk.

Note: if you’d like to start learning Babylon.js, you can follow the free training course at Microsoft Virtual Academy (MVA). For instance, you can jump directly to Introduction to WebGL 3D with HTML5 and Babylon.js: Using Babylon.js for Beginners where we cover this collision part of Babylon.js. You can also have a look at the code inside our interactive playground tool, Babylon.js playground: Collisions sample.

Based on the complexity of the collision or physics engine, there are other types of colliders available: the capsule and the mesh, for instance.

Box sphere capsule and mesh colliders
Extracted from: Getting Started with Unity - Colliders & UnityScript

Capsule is useful for humans or humanoids as it better fits our body than a box or a sphere. Mesh is almost never the complete mesh itself—rather, it’s a simplified version of the original mesh you’re targeting—but it is still much more precise than a box, a sphere, or a capsule.

2. Loading the Starting Scene

To load our Espilit scene, you have various options:

Option 1: Download it from our GitHub repository, and then follow the Introduction to WebGL 3D with HTML5 and Babylon.js: Loading Assets module of our MVA course to learn how to load a .babylon scene. Basically, you need to host the assets and the Babylon.js engine into a web server and set the proper MIME types for the .babylon extension.

Option 2: Download this premade Visual Studio solution (.zip file).  

Note: If you are unfamiliar with Visual Studio, take a look at this article: Web developers, Visual Studio could be a great free tool to develop with… Please note also that the Pro version is now free for a lot of different scenarios. It’s named Visual Studio 2013 Community Edition.

Of course, you can still follow this tutorial if you don't want to use Visual Studio. Here is the code to load our scene. Most browsers support WebGL now, but remember to test Internet Explorer, even on your Mac.
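The loading code looks roughly like the following sketch (the `Espilit/` folder and the `.babylon` file name are assumptions; use whatever paths you host the assets under):

```javascript
// Create the engine on the canvas, load the exported Espilit scene,
// then attach camera controls and start the render loop.
var canvas = document.getElementById("renderCanvas");
var engine = new BABYLON.Engine(canvas, true);

BABYLON.SceneLoader.Load("Espilit/", "Espilit.babylon", engine, function (scene) {
    scene.executeWhenReady(function () {
        // Attach the FPS-like camera to mouse/keyboard input.
        scene.activeCamera.attachControl(canvas);
        engine.runRenderLoop(function () {
            scene.render();
        });
    });
});
```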

Using this material, you will only benefit from the embedded collision engine of Babylon.js. Indeed, we’re making a difference between our collision engine and a physics engine. 

The collision engine is mostly dedicated to the camera interacting with the scene. You can enable gravity or not on the camera, and you can enable the checkCollision option on the camera and on the various meshes. 

The collision engine can also help you to know if two meshes are colliding. But that’s all (this is already a lot, in fact!). The collision engine won’t generate actions, force or impulse after two Babylon.js objects are colliding. You need a physics engine for that to bring life to the objects.

The way we’ve been integrating physics in Babylon.js is via a plugin mechanism. You can read more about that here: Adding your own physics engine plugin to Babylon.js. We’re supporting two open-source physics engines: Cannon.js and Oimo.js. Oimo is now the preferred default physics engine.

If you’ve chosen Option 1 to load the scene, you then need to download Oimo.js from our GitHub. It’s a slightly updated version we’ve made to better support Babylon.js. If you’ve chosen Option 2, it’s already referenced and available in the VS solution under the scripts folder.

3. Enabling Physics Support in the Scene and Transforming Colliders Into Physics Impostors

The first thing to do is to enable physics on the scene. For that, please add these lines of code:
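A sketch of those lines: gravity of -10 on the Y axis, with Oimo.js as the plugin (the commented line is the Cannon.js variant):

```javascript
// Turn physics on for the whole scene, using Oimo.js as the engine.
scene.enablePhysics(new BABYLON.Vector3(0, -10, 0), new BABYLON.OimoJSPlugin());
// scene.enablePhysics(new BABYLON.Vector3(0, -10, 0), new BABYLON.CannonJSPlugin());
```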

You’re setting up the gravity level (-10 on the Y axis in this sample code, which is more or less like what we have on Earth) and the physics engine you’d like to use. We’ll use Oimo.js, but the commented line shows how to use Cannon.js.

Now, we need to iterate through all non-visible colliders used by the collision engine and activate physics properties on it. For that, you simply need to find all meshes where checkCollisions is set to true but not visible in the scene:
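A sketch of that loop, using the Babylon.js 2.x `setPhysicsState` API (the friction and restitution values are illustrative):

```javascript
// Walk the scene and turn every invisible collider into a static
// (mass 0) box impostor so the physics engine can collide against it.
for (var i = 1; i < scene.meshes.length; i++) {
    var mesh = scene.meshes[i];
    if (mesh.checkCollisions && mesh.isVisible === false) {
        mesh.setPhysicsState(BABYLON.PhysicsEngine.BoxImpostor,
            { mass: 0, friction: 0.5, restitution: 0.7 });
        meshesColliderList.push(mesh);
    }
}
```

A mass of 0 marks the collider as static, so the walls and furniture won't be pushed around by the spheres and boxes we'll launch later.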

Please declare the meshesColliderList also:
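The declaration is just an empty array that the loop above fills in:

```javascript
// Holds every invisible collider mesh that got a physics impostor.
var meshesColliderList = [];
```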

And we’re done! We’re ready to throw some objects in our scene and put a lot of mess in this beautiful but way-too-calm museum.

4. Creating Spheres & Boxes With Physics States

We’re now going to add some spheres (with an Amiga texture) and some boxes (with a wood texture) to the scene. 

These meshes will have physics state set. For instance, this means that they will bounce on the floor if you launch them in the air, bounce between them after a collision has been detected, and so on. The physics engine needs to know which kind of impostor you’d like to use for the mesh (plane, sphere, or box today), as well as the mass and friction properties.

If you’ve chosen Option 1, you can download the two textures here.

Add this code to your project:
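The original listing isn't preserved here; a sketch of what it could look like, assuming two helper functions (the names, texture file names, and spawn positions are illustrative):

```javascript
// Spawn a textured sphere or box near the camera, with a physics
// state so it falls, bounces, and collides with the scene.
var createSphere = function (scene) {
    var sphere = BABYLON.Mesh.CreateSphere("Sphere" + Date.now(), 16, 2, scene);
    var material = new BABYLON.StandardMaterial("sphereMat", scene);
    material.diffuseTexture = new BABYLON.Texture("amiga.jpg", scene);
    sphere.material = material;
    sphere.position = scene.activeCamera.position.clone();
    sphere.setPhysicsState(BABYLON.PhysicsEngine.SphereImpostor,
        { mass: 1, friction: 0.5, restitution: 0.5 });
    return sphere;
};

var createBox = function (scene) {
    var box = BABYLON.Mesh.CreateBox("Box" + Date.now(), 2, scene);
    var material = new BABYLON.StandardMaterial("boxMat", scene);
    material.diffuseTexture = new BABYLON.Texture("wood.jpg", scene);
    box.material = material;
    box.position = scene.activeCamera.position.clone();
    box.setPhysicsState(BABYLON.PhysicsEngine.BoxImpostor,
        { mass: 4, friction: 0.5, restitution: 0.1 });
    return box;
};
```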

You can see that boxes are heavier than the spheres by a factor of 4.

Note: If you need to understand how material works in Babylon.js, watch the module Introduction to WebGL 3D with HTML5 and Babylon.js: Understanding Materials and Inputs, or play with our dedicated Playground sample, Babylon.js Playground: Materials sample.

Add these two lines of code after the scene.enablePhysics line:
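The original two lines aren't preserved here; presumably they wired up the input handling, along these lines (createSphere/createBox are the illustrative helpers from the previous step):

```javascript
// Launch a sphere on 's' and a box on 'b'; register the mouse
// picking handler from the addListeners() function as well.
window.addEventListener("keydown", function (evt) {
    if (evt.key === "s") { createSphere(scene); }
    if (evt.key === "b") { createBox(scene); }
});
```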

And launch the web project. Navigate to the center of the museum and press the s or b keys. You’ll obtain this fun result:

Demo scene with spheres and boxes floating in the air

5. Adding Picking Support to Click on Meshes

Let’s add another cool feature: the ability to click on one of the objects to throw it away. For that, you need to send a ray from the 2D coordinates of the mouse inside the 3D scene, check whether this ray touches one of the interesting meshes, and if so, apply an impulse force on it to try to move it.

Note: to understand how picking works, please view the MVA module Introduction to WebGL 3D with HTML5 and Babylon.js: Advanced Features. Or play with our online sample, Babylon.js Playground: Picking sample.

Add this code into the addListeners() function:
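A sketch of that handler, using `scene.pick` with a predicate and `applyImpulse` (the name prefixes used to filter meshes are illustrative):

```javascript
// On mouse down, pick the mesh under the cursor (only the launched
// spheres/boxes) and push it away from the camera with an impulse.
canvas.addEventListener("mousedown", function (evt) {
    var pickResult = scene.pick(evt.clientX, evt.clientY, function (mesh) {
        return mesh.name.indexOf("Sphere") === 0 || mesh.name.indexOf("Box") === 0;
    });
    if (pickResult.hit) {
        var direction = pickResult.pickedPoint.subtract(scene.activeCamera.position);
        direction.normalize();
        // Apply the impulse at the exact point that was clicked.
        pickResult.pickedMesh.applyImpulse(direction.scale(10), pickResult.pickedPoint);
    }
});
```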

Launch your code in your favorite browser. You can now click on your physics meshes to play with them.

6. Displaying the Bounding Boxes to Better Understand the Whole Story

Finally, we’re going to create a debug scene to let you display/hide the colliders and activate/deactivate the physics properties on them.

We’re going to inject the UI into this div:
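Something like the following empty container (the id is illustrative):

```html
<!-- Empty container; the debug checkboxes are generated into it -->
<div id="PanelOptions"></div>
```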

And we’ll use this function to handle the UI:
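A sketch of that function, assuming the `meshesColliderList` array and the `PanelOptions` container from the previous steps:

```javascript
// Build one line of UI per collider: a checkbox to show/hide it,
// and a checkbox to enable/disable its physics impostor.
var createDebugUI = function () {
    var panel = document.getElementById("PanelOptions");
    meshesColliderList.forEach(function (mesh, index) {
        var line = document.createElement("div");
        line.appendChild(document.createTextNode("Collider #" + index + " "));

        var displayBox = document.createElement("input");
        displayBox.type = "checkbox";
        displayBox.addEventListener("change", function () {
            mesh.isVisible = displayBox.checked;
            mesh.visibility = 0.5; // semi-transparent so the scene stays readable
        });
        line.appendChild(displayBox);

        var physicsBox = document.createElement("input");
        physicsBox.type = "checkbox";
        physicsBox.checked = true;
        physicsBox.addEventListener("change", function () {
            if (physicsBox.checked) {
                mesh.setPhysicsState(BABYLON.PhysicsEngine.BoxImpostor,
                    { mass: 0, friction: 0.5, restitution: 0.7 });
            } else {
                // NoImpostor removes the mesh from the physics world
                mesh.setPhysicsState(BABYLON.PhysicsEngine.NoImpostor, { mass: 0 });
            }
        });
        line.appendChild(physicsBox);

        panel.appendChild(line);
    });
};
```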

I know, it generates a very ugly UI, but I was too lazy to spend more time on it. Feel free to improve it!

Call this new function and launch the web project. Now, for instance, display the colliders 12 and 17:

colliders 12 and 17

You can also, with the second checkbox, enable/disable the physics properties. For instance, if you disable the physics properties on collider 12 and launch the spheres, they will now go through this wall! This is shown in the following screenshot as the sphere surrounded by the red square:

sphere going through wall

You can play with this debugging sample directly in your browser here: Babylon.js Espilit Physics debug demo.

Please also have a look at this awesome demo built by Samuel Girardin that also uses Oimo.js on some funny characters:

demo built by Samuel Girardin

I hope you’ve enjoyed this tutorial! Feel free to ping me on Twitter to comment about it, or use the comments field below.

This article is part of the web dev tech series from Microsoft. We’re excited to share Microsoft Edge and the new EdgeHTML rendering engine with you. Get free virtual machines or test remotely on your Mac, iOS, Android, or Windows device @ http://dev.modern.ie/.

A Beginner's Guide to Coding Graphics Shaders: Part 2


ShaderToy, which we used in the previous tutorial in this series, is great for quick tests and experiments, but it's rather limited. You can't control what data gets sent to the shader for example, among other things. Having your own environment where you can run shaders means you can do all sorts of fancy effects, and you can apply them to your own projects!

We're going to be using Three.js as our framework to run shaders in the browser. WebGL is the JavaScript API that will allow us to render shaders; Three.js just makes this job easier. 

If you're not interested in JavaScript or the web platform, don't worry: we won't be focusing on the specifics of web rendering (though if you'd like to learn more about the framework, check out this tutorial). Setting up shaders in the browser is the quickest way to get started, but becoming comfortable with this process will allow you to easily set up and use shaders on whatever platform you like. 

The Setup

This section will guide you through setting up shaders locally. You can follow along without needing to download anything with this pre-built CodePen:

You can fork and edit this on CodePen.

Hello Three.js!

Three.js is a JavaScript framework that takes care of a lot of the boilerplate WebGL code we'd otherwise need to render our shaders. The easiest way to get started is to use a version hosted on a CDN.

Here's an HTML file you can download which has just a basic Three.js scene. 

Try saving that file to disk, then opening it in your web browser. You should see a black screen. That isn't very exciting, so let's try adding a cube, just to make sure everything is working. 

To create a cube, we need to define its geometry and its material, and then add it to the scene. Add this code snippet under where it says Add your code here:
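The snippet is the standard Three.js cube setup:

```javascript
// A green cube: box geometry plus a basic (unlit) material,
// wrapped in a mesh and added to the scene.
var geometry = new THREE.BoxGeometry(1, 1, 1);
var material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
var cube = new THREE.Mesh(geometry, material);
scene.add(cube);
```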

We won't go into too much detail in all this code, since we're more interested in the shader part. But if all went right, you should see a green cube in the center of the screen:

While we're at it, let's make it rotate. The render function runs every frame. We can access the cube's rotation through cube.rotation.x (or .y or .z). Try incrementing that, so that your render function looks like this:
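After the change, the render function looks something like this (assuming the `renderer` and `camera` variables from the downloaded scene file):

```javascript
function render() {
    cube.rotation.x += 0.02; // a small increment every frame
    requestAnimationFrame(render);
    renderer.render(scene, camera);
}
```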

Challenge: Can you make it rotate along a different axis? What about along two axes at the same time?

Now you've got everything set up, let's add some shaders!

Adding Shaders

At this point, we can start thinking about the process of implementing shaders. You're likely to find yourself in a similar situation regardless of the platform you plan to use shaders on: you've got everything set up, and you have things being drawn on screen, now how do you access the GPU?

Step 1: Loading in GLSL Code

We're using JavaScript to build this scene. In other situations you might be using C++, or Lua, or any other language. Shaders, regardless, are written in a special shading language. OpenGL's shading language is GLSL (OpenGL Shading Language), and since WebGL is based on OpenGL, GLSL is what we'll use.

So how and where do we write our GLSL code? The general rule is that you want to load your GLSL code in as a string. You can then send it off to be parsed and executed by the GPU. 

In JavaScript, you can do this by simply throwing all your code inline inside a variable like so:
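For example (this tutorial predates template literals, so multi-line GLSL has to be concatenated line by line):

```javascript
// The whole fragment shader stored as one JavaScript string.
var shaderCode = "void main() {" +
    " gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);" +
    "}";
```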

This works, but since JavaScript doesn't have a way to easily make multiline strings, this isn't very convenient for us. Most people tend to write the shader code in a text file and give it an extension of .glsl or .frag (short for fragment shader), then just load that file in.

This is valid, but we're going to write our shader code inside a new <script> tag and load it into the JavaScript from there, so that we can keep everything in one file for the purpose of this tutorial. 

Create a new <script> tag inside the HTML that looks like this:
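The tag itself, matching the id and type described below:

```html
<script id="fragShader" type="shader-code">
</script>
```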

We give it the ID fragShader so that we can access it later. The type shader-code is actually a bogus script type that does not exist (you could put any name there and it would work). The reason we do this is so that the code doesn't get executed, and doesn't get displayed in the HTML.

Now let's throw in a very basic shader that just returns white.
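The shader goes inside the script tag:

```glsl
void main() {
    // vec4(r, g, b, a): all channels at 1.0 gives opaque white
    gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
```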

(The components of vec4 in this case correspond to the rgba value, as explained in the previous tutorial.)

Finally, we have to load in this code. We can do this with a simple JavaScript line that finds the HTML element and pulls the inner text:
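That line is:

```javascript
// Grab the GLSL source out of the bogus <script> tag as plain text.
var shaderCode = document.getElementById("fragShader").innerHTML;
```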

This should go under your cube code.

Remember: only what's loaded as a string will be parsed as valid GLSL code (that is, the void main() {...} block); the rest is just HTML boilerplate. 

You can fork and edit this on CodePen.

Step 2: Applying the Shader

The method for applying the shader might be different depending on what platform you're using and how it interfaces with the GPU. It's never a complicated step, though, and a cursory Google search shows us how to create an object and apply shaders to it with Three.js.

We need to create a special material, and give it our shader code. We'll create a plane as our shader object (but we could just as well use the cube). This is all we need to do:
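A sketch of those lines, assuming the `shaderCode` string from the loading step (the plane size and position are illustrative):

```javascript
// A ShaderMaterial runs our GLSL; apply it to a plane facing the camera.
var material = new THREE.ShaderMaterial({ fragmentShader: shaderCode });
var geometry = new THREE.PlaneGeometry(10, 10);
var sprite = new THREE.Mesh(geometry, material);
sprite.position.z = -1; // just in front of the camera
scene.add(sprite);
```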

By now, you should be seeing a white screen:

You can fork and edit this on CodePen.


If you change the code in the shader to any other color and refresh, you should see the new color!

Challenge: Can you set a portion of the screen to red, and another portion to blue? (If you're stuck, the next step should give you a hint!)

Step 3: Sending Data

At this point, we can do whatever we want with our shader, but there's not much we can do. We only have the built-in pixel position gl_FragCoord to work with, and if you recall, that's not normalized. We need to have at least the screen dimensions. 

To send data to our shader, we need to send it as what's called a uniform variable. To do this, we create an object called uniforms and add our variables to it. Here's the syntax for sending the resolution:
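The syntax looks like this:

```javascript
// Uniforms are declared CPU-side as { type, value } pairs.
var uniforms = {
    resolution: {
        type: "v2", // a vec2 on the GLSL side
        value: new THREE.Vector2(window.innerWidth, window.innerHeight)
    }
};
```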

Every uniform variable must have a type and a value. In this case, it's a 2 dimensional vector with the window's width and height as its coordinates. The table below (taken from the Three.js docs) shows you all the data types you can send and their identifiers:
Uniform type string    GLSL type     JavaScript type
'i', '1i'              int           Number
'f', '1f'              float         Number
'v2'                   vec2          THREE.Vector2
'v3'                   vec3          THREE.Vector3
'c'                    vec3          THREE.Color
'v4'                   vec4          THREE.Vector4
'm3'                   mat3          THREE.Matrix3
'm4'                   mat4          THREE.Matrix4
't'                    sampler2D     THREE.Texture
't'                    samplerCube   THREE.CubeTexture
To actually send it to the shader, modify the ShaderMaterial instantiator to include it, like this:
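The change amounts to passing the uniforms object alongside the shader code:

```javascript
var material = new THREE.ShaderMaterial({
    uniforms: uniforms,        // pass our uniforms object in
    fragmentShader: shaderCode
});
```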

We're not done yet! Now our shader is receiving this variable, we need to do something with it. Let's create a gradient in the same way we did in the previous tutorial: by normalizing our co-ordinate and using it to create our color value.

Modify your shader code so that it looks like this:
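The gradient shader, using the resolution uniform we just sent:

```glsl
uniform vec2 resolution; // passed in from JavaScript

void main() {
    // Normalize the pixel coordinate to [0, 1], then use it as the color.
    vec2 pos = gl_FragCoord.xy / resolution.xy;
    gl_FragColor = vec4(pos.x, pos.y, 0.0, 1.0);
}
```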

And you should see a nice looking gradient!

You can fork and edit this on CodePen.

If you're a little fuzzy on how we managed to create such a nice gradient with only two lines of shader code, check out the first part of this tutorial series for an in-depth run down of the logic behind this.

Challenge: Can you split the screen into 4 equal sections with different colors? Something like this:

Step 4: Updating Data

It's nice to be able to send data to our shader, but what if we need to update it? For example, if you open the previous example in a new tab, then resize the window, the gradient does not update, because it's still using the initial screen dimensions.

To update your variables, usually you'd just resend the uniform variable and it will update. With Three.js, however, we just need to update the uniforms object in our render function—no need to resend it to the shader. 

So here's what our render function looks like after making that change:
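A sketch, assuming the `renderer` and `camera` variables from the scene setup:

```javascript
function render() {
    // Refresh the resolution uniform every frame; Three.js sends
    // the updated value to the GPU for us.
    uniforms.resolution.value.x = window.innerWidth;
    uniforms.resolution.value.y = window.innerHeight;
    requestAnimationFrame(render);
    renderer.render(scene, camera);
}
```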

If you open the new CodePen and resize the window, you will see the colors changing (although the initial viewport size stays the same). It's easiest to see this by looking at the colors in each corner to verify that they don't change.

Note: Sending data to the GPU like this is generally costly. Sending a handful of variables per frame is okay, but your framerate can really slow down if you're sending hundreds per frame. It might not sound like a realistic scenario, but if you have a few hundred objects on screen, and all need to have lighting applied to them, for example, all with different properties, then things can quickly get out of control. We'll learn more about optimizing our shaders in future articles!

Challenge: Can you make the colors change over time? (If you're stuck, look at how we did it in the first part of this tutorial series.)

Step 5: Dealing With Textures

Regardless of how you load in your textures or in what format, you'll send them to your shader in the same way across platforms, as uniform variables.

A quick note about loading files in JavaScript: you can load images from an external URL without much trouble (which is what we'll be doing here) but if you want to load an image locally, you'll run into permission issues, because JavaScript can't, and shouldn't, normally access files on your system. The easiest way to get around this is to start a local Python server, which is simpler than it perhaps sounds.

Three.js provides us with a handy little function for loading an image as a texture:
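In the Three.js of this era, that function is `THREE.ImageUtils.loadTexture` (the URL below is a placeholder; substitute any image URL):

```javascript
// Allow cross-origin image loads (set once), then load the texture.
THREE.ImageUtils.crossOrigin = "";
var texture = THREE.ImageUtils.loadTexture("https://example.com/beans.jpg");
```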

The first line just needs to be set once. You can put in any URL to an image there. 

Next, we want to add our texture to the uniforms object.
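Per the uniform type table above, textures use the 't' identifier:

```javascript
uniforms.texture = { type: "t", value: texture }; // 't' marks a sampler2D
```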

Finally, we want to declare our uniform variable in our shader code, and draw it in the same way we did in the previous tutorial, with the texture2D function:
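The shader now samples the image instead of computing a color:

```glsl
uniform vec2 resolution;
uniform sampler2D texture; // the image we sent from JavaScript

void main() {
    vec2 pos = gl_FragCoord.xy / resolution.xy;
    // Sample the texture at the normalized screen position.
    gl_FragColor = texture2D(texture, pos);
}
```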

And you should see some tasty jelly beans, stretched across our screen:

You can fork and edit this on CodePen.

(This picture is a standard test image in the field of computer graphics, taken from the University of Southern California's Signal and Image Processing Institute (hence the IPI initials). It seems fitting to use it as our test image while learning about graphics shaders!)

Challenge: Can you make the texture go from full color to grayscale over time? (Again, if you're stuck, we did this in the first part of this series.)

Bonus Step: Applying Shaders to Other Objects

There's nothing special about the plane we've created. We could have applied all of this onto our cube. In fact, we can just change the plane geometry line:

to:
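That is, swap the plane geometry for a box:

```javascript
var geometry = new THREE.BoxGeometry(1, 1, 1);
```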

Voila, jelly beans on a cube:

You can fork and edit this on CodePen.

Now you might be thinking, "Hold on, that doesn't look like proper projection of a texture onto a cube!". And you'd be right; if we look back to our shader, we'll see that all we really did was say "map all the pixels of this image onto the screen". The fact that it's on a cube just means the pixels outside are being discarded. 

If you wanted to apply it so that it looks like it's drawn physically onto the cube, that would involve reinventing a large part of a 3D engine (which sounds a bit silly considering we're already using a 3D engine, and could just ask it to draw the texture onto each side individually). This tutorial series is more about using shaders to do things we couldn't achieve otherwise, so we won't be delving into details like that. (Udacity has a great course on the fundamentals of 3D graphics, if you're eager to learn more!)

Next Steps

At this point, you should be able to do everything we've done in ShaderToy, except now you have the freedom to use whatever textures you want on whatever shapes you like, and hopefully on whatever platform you choose. 

With this freedom, we can now do something like set up a lighting system, with realistic looking shadows and highlights. This is what the next part will focus on, as well as tips and techniques for optimizing shaders!

Building Shaders With Babylon.js and WebGL: Theory and Examples


In the keynote for Day 2 of //Build 2014 (see 2:24-2:28), Microsoft evangelists Steven Guggenheimer and John Shewchuk demoed how Oculus Rift support was added to Babylon.js. And one of the key things for this demo was the work we did on a specific shader to simulate lenses, as you can see in this picture:

Lens simulation image

I also presented a session with Frank Olivier and Ben Constable about graphics on IE and Babylon.js.

This leads me to one of the questions people often ask me about Babylon.js: "What do you mean by shaders?" So in this post, I am going to explain to you how shaders work, and give some examples of common types of shaders.

The Theory

Before starting experimenting, we must first see how things work internally.

When dealing with hardware-accelerated 3D, we are discussing two CPUs: the main CPU and the GPU. The GPU is a kind of extremely specialized CPU.

The GPU is a state machine that you set up using the CPU. For instance the CPU will configure the GPU to render lines instead of triangles. Or it will define that transparency is on, and so on.

Once all the states are set, the CPU will define what to render—the geometry, which is composed of a list of points (called the vertices and stored into an array called vertex buffer), and a list of indexes (the faces, or triangles, stored into an array called index buffer).

The final step for the CPU is to define how to render the geometry, and for this specific task, the CPU will define shaders for the GPU. Shaders are a piece of code that the GPU will execute for each of the vertices and pixels it has to render.

First, some vocabulary: think of a vertex (vertices when there are several of them) as a “point” in a 3D environment (as opposed to a point in a 2D environment).

There are two kinds of shaders: vertex shaders, and pixel (or fragment) shaders.

Graphics Pipeline

Before digging into shaders, let’s take a step back. To render pixels, the GPU will take the geometry defined by the CPU and will do the following:

Using the index buffer, three vertices are gathered to define a triangle: the index buffer contains a list of vertex indexes. This means that each entry in the index buffer is the number of a vertex in the vertex buffer. This is really useful to avoid duplicating vertices. 

For instance, the following index buffer is a list of two faces: [1 2 3 1 3 4]. The first face contains vertex 1, vertex 2 and vertex 3. The second face contains vertex 1, vertex 3 and vertex 4. So there are four vertices in this geometry: 

Chart showing four vertices

The vertex shader is applied on each vertex of the triangle. The primary goal of the vertex shader is to produce a pixel for each vertex (the projection on the 2D screen of the 3D vertex): 

vertex shader is applied on each vertex of the triangle

Using these three pixels (which define a 2D triangle on the screen), the GPU will interpolate all values attached to the pixel (at least its position), and the pixel shader will be applied on every pixel included into the 2D triangle in order to generate a color for every pixel: 

pixel shader will be applied on every pixel included into the 2D triangle

This process is done for every face defined by the index buffer. 

Obviously, due to its parallel nature, the GPU is able to process this step for a lot of faces simultaneously, and thereby achieve really good performance.

GLSL

We have just seen that to render triangles, the GPU needs two shaders: the vertex shader and the pixel shader. These shaders are written using a language called GLSL (Graphics Library Shader Language). It looks like C.

For Internet Explorer 11, we have developed a compiler to transform GLSL to HLSL (High Level Shader Language), which is the shader language of DirectX 11. This allows IE11 to ensure that the shader code is safe (you don't want a web page to use WebGL to reset your computer!):

Flow chart of transforming GLSL to HLSL

Here is a sample of a common vertex shader:
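The listing matches the attribute/uniform/varying structure described below:

```glsl
precision highp float;

// attributes: per-vertex data coming from the vertex buffer
attribute vec3 position;
attribute vec2 uv;

// uniform: set by the CPU, identical for every vertex
uniform mat4 worldViewProjection;

// varying: computed here, interpolated by the GPU,
// then read by the pixel shader
varying vec2 vUV;

void main(void) {
    gl_Position = worldViewProjection * vec4(position, 1.0);
    vUV = uv;
}
```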

Vertex Shader Structure

A vertex shader contains the following:

  • Attributes: An attribute defines a portion of a vertex. By default, a vertex should at least contain a position (a vector3: x, y, z). But as a developer, you can decide to add more information. For instance, in the shader above, there is a vector2 named uv (texture coordinates that allow us to apply a 2D texture to a 3D object).
  • Uniforms: A uniform is a variable used by the shader and defined by the CPU. The only uniform we have here is a matrix used to project the position of the vertex (x, y, z) to the screen (x, y).
  • Varying: Varying variables are values created by the vertex shader and transmitted to the pixel shader. Here, the vertex shader will transmit a vUV (a simple copy of uv) value to the pixel shader. This means that a pixel is defined here with a position and texture coordinates. These values will be interpolated by the GPU and used by the pixel shader. 
  • main: The function named main() is the code executed by the GPU for each vertex, and must at least produce a value for gl_Position (the position on the screen of the current vertex). 

We can see in our sample that the vertex shader is pretty simple. It generates a system variable (starting with gl_) named gl_Position to define the position of the associated pixel, and it sets a varying variable called vUV.

The Voodoo Behind Matrices

In our shader we have a matrix named worldViewProjection. We use this matrix to project the vertex position to the gl_Position variable. That is cool, but how do we get the value of this matrix? It is a uniform, so we have to define it on the CPU side (using JavaScript).

This is one of the complex parts of doing 3D. You must understand complex math (or you will have to use a 3D engine, like Babylon.js, which we are going to see later).

The worldViewProjection matrix is the combination of three different matrices:

The worldViewProjection matrix is the combination of three different matrices

Using the resulting matrix allows us to be able to transform 3D vertices into 2D pixels while taking in account the point of view and everything related to the position/scale/rotation of the current object.

This is your responsibility as a 3D developer: to create and keep this matrix up to date.

Back to the Shaders

Once the vertex shader is executed on every vertex (three times, then), we have three pixels with a correct gl_Position and a vUV value. The GPU will then interpolate these values on every pixel contained in the triangle produced by these pixels.

Then, for each pixel, it will execute the pixel shader:
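The pixel shader, matching the varying/uniform structure described below:

```glsl
precision highp float;

// varying: the interpolated vUV produced by the vertex shader
varying vec2 vUV;

// uniform: the sampler used to read colors from the texture
uniform sampler2D textureSampler;

void main(void) {
    gl_FragColor = texture2D(textureSampler, vUV);
}
```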

Pixel (or Fragment) Shader Structure

The structure of a pixel shader is similar to a vertex shader:

  • Varying: Varying variables are values created by the vertex shader and transmitted to the pixel shader. Here the pixel shader will receive a vUV value from the vertex shader. 
  • Uniforms: A uniform is a variable used by the shader and defined by the CPU. The only uniform we have here is a sampler, which is a tool used to read texture colors.
  • main: The function named main is the code executed by the GPU for each pixel and must at least produce a value for gl_FragColor (the color of the current pixel). 

This pixel shader is fairly simple: It reads the color from the texture using texture coordinates from the vertex shader (which in turn got it from the vertex).

Do you want to see the result of such a shader? Here it is:

This is being rendered in real time; you can drag the sphere with your mouse.

To achieve this result, you will have to deal with a lot of WebGL code. Indeed, WebGL is a really powerful but really low-level API, and you have to do everything by yourself, from creating the buffers to defining vertex structures. You also have to do all the math and set all the states and handle texture loading and so on…

Too Hard? BABYLON.ShaderMaterial to the Rescue

I know what you are thinking: shaders are really cool, but I do not want to bother with WebGL internal plumbing or even with math.

And that's fine! This is a perfectly legitimate ask, and that is exactly why I created Babylon.js.

Let me present to you the code used by the previous rolling sphere demo. First of all, you will need a simple webpage:
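A sketch of that page (the script ids and file names are illustrative; the shader code from the previous sections goes inside the two non-executable script tags):

```html
<!DOCTYPE html>
<html>
<head>
    <title>Babylon.js shader sample</title>
    <script src="babylon.js"></script>
    <!-- Shaders embedded in script tags with bogus types,
         so the browser neither runs nor displays them -->
    <script type="application/vertexShader" id="vertexShaderCode">
        /* vertex shader GLSL goes here */
    </script>
    <script type="application/fragmentShader" id="fragmentShaderCode">
        /* pixel shader GLSL goes here */
    </script>
</head>
<body>
    <canvas id="renderCanvas"></canvas>
    <script src="index.js"></script>
</body>
</html>
```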

You will notice that the shaders are defined by <script> tags. With Babylon.js you can also define them in separate files (.fx files).

You can get Babylon.js here or on our GitHub repo. You must use version 1.11 or higher to get access to BABYLON.ShaderMaterial.

And finally the main JavaScript code is the following:

You can see that I use a BABYLON.ShaderMaterial to get rid of all the burden of compiling, linking and handling shaders.

When you create a BABYLON.ShaderMaterial, you have to specify the DOM element used to store the shaders or the base name of the files where the shaders are. If you choose to use files, you must create a file for each shader and use the following filename pattern: basename.vertex.fx and basename.fragment.fx. Then you will have to create the material like this:

You must also specify the names of any attributes and uniforms that you use. Then, you can set directly the value of your uniforms and samplers using the setTexture, setFloat, setFloats, setColor3, setColor4, setVector2, setVector3, setVector4, and setMatrix functions.

Pretty simple, right?

Do you remember the previous worldViewProjection matrix? Using Babylon.js and BABYLON.ShaderMaterial, you have nothing to worry about! The BABYLON.ShaderMaterial will automatically compute it for you because you declare it in the list of uniforms.

BABYLON.ShaderMaterial can also handle the following matrices for you:

  • world 
  • view 
  • projection 
  • worldView 
  • worldViewProjection 

No need for math any longer. For instance, each time you execute sphere.rotation.y += 0.05, the world matrix of the sphere is generated for you and transmitted to the GPU.
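To give an idea of what is generated under the hood, here is a minimal Python sketch of a Y-axis rotation matrix (matrix conventions vary between engines; this uses a standard column-vector layout and is not Babylon.js's actual code):

```python
import math

def rotation_y(angle):
    """4x4 world matrix for a rotation of `angle` radians around the Y axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [
        [c,   0.0, s,   0.0],
        [0.0, 1.0, 0.0, 0.0],
        [-s,  0.0, c,   0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(matrix, point):
    """Apply the matrix to a 3D point (homogeneous w assumed to be 1)."""
    x, y, z = point
    return tuple(
        matrix[row][0] * x + matrix[row][1] * y + matrix[row][2] * z + matrix[row][3]
        for row in range(3)
    )

# After many `rotation.y += 0.05` steps, the accumulated angle might be a quarter turn.
world = rotation_y(math.pi / 2)
```

The engine rebuilds this matrix for you each frame and uploads it as the world uniform.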

CYOS: Create Your Own Shader

So let’s go bigger and create a page where you can dynamically create your own shaders and see the result immediately. This page is going to use the same code that we previously discussed, and is going to use a BABYLON.ShaderMaterial object to compile and execute shaders that you will create.

I used the ACE code editor for CYOS. This is an incredible code editor with syntax highlighting. Feel free to have a look at it here. You can find CYOS here.

Using the first combo box, you will be able to select pre-defined shaders. We will see each of them right after.

You can also change the mesh (the 3D object) used to preview your shaders using the second combo box.

The Compile button is used to create a new BABYLON.ShaderMaterial from your shaders. The code used by this button is the following: 

Brutally simple, right? The material is ready to send you three pre-computed matrices (world, worldView and worldViewProjection). Vertices will come with position, normal and texture coordinates. Two textures are also already loaded for you:

amiga texture
amiga.jpg
ref texture
ref.jpg

And finally, here is the renderLoop where I update two convenient uniforms:

  • one called time in order to get some funny animations 
  • one called cameraPosition to get the position of the camera into your shaders (which will be useful for lighting equations) 

Thanks to the work we did on Windows Phone 8.1, you can also use CYOS on your Windows Phone (it is always a good time to create a shader):

CYOS on Windows Phone

Basic Shader

So let’s start with the very first shader defined on CYOS: the Basic shader.

We already know this shader. It computes gl_Position and uses texture coordinates to fetch a color for every pixel.

To compute the pixel position, we just need the worldViewProjection matrix and the vertex’s position:

Texture coordinates (uv) are transmitted unmodified to the pixel shader.

Please note that we need to add precision mediump float; on the first line of both the vertex and pixel shaders because Chrome requires it. It specifies that, for better performance, we do not use full-precision floating-point values.

The pixel shader is even simpler, because we just need to use texture coordinates and fetch a texture color:

We saw previously that the textureSampler uniform is filled with the “amiga” texture, so the result is the following:

Basic Shader result

Black and White Shader

Now let’s continue with a new shader: the black and white shader.

The goal of this shader is to use the previous one but with a "black and white only" rendering mode. To do so, we can keep the same vertex shader, but the pixel shader must be slightly modified.

The first option we have is to take only one component, such as the green one:

As you can see, instead of using .rgb (this operation is called a swizzle), we used .ggg.

But if we want a really accurate black and white effect, it would be a better idea to compute the luminance (which takes into account all color components):

The dot operation (or dot product) is computed like this:

result = v0.x * v1.x + v0.y * v1.y + v0.z * v1.z

So in our case:

luminance = r * 0.3 + g * 0.59 + b * 0.11 (these values are based on the fact that the human eye is more sensitive to green)
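A quick CPU-side Python sketch of that luminance computation, for illustration only:

```python
def luminance(rgb):
    """Dot product of a color with the luma weights from the formula above."""
    weights = (0.3, 0.59, 0.11)
    return sum(c * w for c, w in zip(rgb, weights))

red_luma = luminance((1.0, 0.0, 0.0))    # pure red contributes only its 0.3 weight
white_luma = luminance((1.0, 1.0, 1.0))  # white keeps full brightness
```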

Sounds cool, doesn’t it?

Black and white shader result

Cell Shading Shader

Now let’s move to a more complex shader: the cell shading shader.

This one will require us to get the vertex’s normal and the vertex’s position in the pixel shader. So the vertex shader will look like this:

Please note that we also use the world matrix, because position and normal are stored without any transformation and we must apply the world matrix to take into account the object’s rotation.

The pixel shader is the following:

The goal of this shader is to simulate a light, and instead of computing a smooth shading we will consider that light will apply according to specific brightness thresholds. For instance, if light intensity is between 1 (maximum) and 0.95, the color of the object (fetched from the texture) will be applied directly. If intensity is between 0.95 and 0.5, the color will be attenuated by a factor of 0.8, and so on.

So, there are mainly four steps in this shader:

  • First, we declare thresholds and levels constants.
  • Then, we need to compute the lighting using the Phong equation (we assume that the light is not moving): 

The intensity of light per pixel is dependent on the angle between the normal and the light's direction.

  • Then we get the texture color for the pixel.
  • And finally we check the threshold and apply the level to the color.
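The threshold step can be sketched like this in Python (the text above only specifies the first two thresholds; the lower bands here are illustrative assumptions):

```python
def cell_shade(intensity):
    """Quantize a 0..1 light intensity into discrete attenuation levels."""
    if intensity > 0.95:
        return 1.0   # full texture color
    elif intensity > 0.5:
        return 0.8   # attenuated by a factor of 0.8
    elif intensity > 0.25:
        return 0.3   # illustrative lower band
    else:
        return 0.1   # illustrative darkest band
```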

The result looks like a cartoon object: 

Cell shading shader result

Phong Shader

We used a portion of the Phong equation in the previous shader. So let’s try to use the whole thing now.

The vertex shader is clearly simple here, because everything will be done in the pixel shader:

According to the equation, you must compute the diffuse and specular part by using the light direction and the vertex’s normal:
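Here is a CPU-side Python sketch of those two terms (not the shader itself; the reflection formula is standard Phong, and the shininess exponent is an illustrative value):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, light_dir, view_dir, shininess=16.0):
    """Diffuse + specular Phong terms (ambient omitted for brevity)."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = max(dot(n, l), 0.0)
    # Reflect the light direction about the normal: r = 2(n.l)n - l
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess
    return diffuse + specular
```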

We already used the diffuse part in the previous shader, so here we just need to add the specular part. This picture from a Wikipedia article explains how the shader works:

Diffuse plus Specular equals Phong Reflection
By Brad Smith aka Rainwarrior.

The result on our sphere:

Phong shader result

Discard Shader

For the discard shader, I would like to introduce a new concept: the discard keyword. This shader will discard every non-red pixel and will create the illusion of a "dug" object.

The vertex shader is the same as that used by the basic shader:

The pixel shader will have to test the color and use discard when, for instance, the green component is too high:

The result is funny:

Discard shader result

Wave Shader

We’ve played a lot with pixel shaders, but I also wanted to show you that we can do a lot of things with vertex shaders.

For the wave shader, we will reuse the Phong pixel shader.

The vertex shader will use the uniform called time to get some animated values. Using this uniform, the shader will generate a wave with the vertices’ positions:
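The displacement can be sketched in Python like this (the amplitude and frequency here are illustrative; the real shader does this per vertex on the GPU):

```python
import math

def wave(position, time, amplitude=0.1, frequency=5.0):
    """Offset a vertex's y coordinate by a sine of its x position and the time uniform."""
    x, y, z = position
    return (x, y + amplitude * math.sin(x * frequency + time), z)
```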

A sine is applied to position.y, and the result is the following:

Wave shader result

Spherical Environment Mapping

This one was largely inspired by this tutorial. I’ll let you read that excellent article and play with the associated shader. 

Spherical environment mapping shader

Fresnel Shader

I would like to finish this article with my favorite: the Fresnel shader.

This shader is used to apply a different intensity according to the angle between the view direction and the vertex’s normal.

The vertex shader is the same one used by the cell shading shader, and we can easily compute the Fresnel term in our pixel shader (because we have the normal and the camera’s position, which can be used to evaluate the view direction):
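As a Python sketch of the Fresnel term (the bias and power parameters are common tuning knobs, not values taken from the demo):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def fresnel_term(view_dir, normal, bias=0.0, power=2.0):
    """0 when the surface faces the camera, 1 at grazing angles (vectors normalized)."""
    facing = max(dot(view_dir, normal), 0.0)
    return bias + (1.0 - bias) * (1.0 - facing) ** power
```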

Fresnel Shader result

Your Shader?

You are now more prepared to create your own shader. Feel free to use the comments here or at the Babylon.js forum to share your experiments!

If you want to go further, here are some useful links:

And some more learning that I’ve created on the subject:

Or, stepping back, our team’s learning series on JavaScript: 

And of course, you are always welcome to use some of our free tools in building your next web experience: Visual Studio Community, Azure Trial, and cross-browser testing tools for Mac, Linux, or Windows.

This article is part of the web dev tech series from Microsoft. We’re excited to share Microsoft Edge and the new EdgeHTML rendering engine with you. Get free virtual machines or test remotely on your Mac, iOS, Android, or Windows device @ http://dev.modern.ie/.

How to Learn Pygame


Pygame is a cross-platform set of Python modules designed for creating games. The modules are designed to be simple, easy to use, and fun—a key part of Pygame's ideology. In this post, I'll show you how to use Pygame, and share tips and resources for learning it.

This tutorial assumes you already know how to code in Python. If not, check out The Best Way to Learn Python, Learn Python, The Python Tutorial, Codecademy, or Learn Python the Hard Way to get started.

Why Pygame?

It's Simple

Python is often regarded as the best "first programming language" to learn, and has been praised for its easy-to-learn syntax and gradual learning curve. For this reason, many new programmers start by learning Python. 

Pygame extends Python, adopts Python’s philosophy, and aims to be easy to use. Plus, aspiring new game developers with no programming experience can use Pygame, as they can quickly pick up Python first.

It Has a Large Community

Pygame has been available since 2000 and, since then, a large community has built up around it. The community claims that Pygame has been downloaded millions of times, and has had millions of visits to its website. As a result of the large community, reported bugs are fixed quickly, help is readily available, and a large number of additional features have been added. Furthermore, the community ensures that Pygame development is fun; for example, a bi-annual Python game competition is run to promote the platform. The size of the community is what separates Pygame from the other Python game frameworks.

It's Open Source

The fact that Pygame is open source means that bugs are generally fixed very quickly by the community. It also means that you can extend Pygame to suit your needs and maybe even give back to the community. 

Pygame is written in C, and taking a peek at the code is a great way to better understand how Python and Pygame work.

It's Highly Portable

Pygame is highly portable, as it supports Windows, Linux, Mac OS X, BeOS, FreeBSD, NetBSD, OpenBSD, BSD/OS, Solaris, IRIX, and QNX. In addition to these computing platforms, a Pygame subset for Android exists. Furthermore, Pygame does not require OpenGL and can use DirectX, WinDIB, X11, Linux framebuffer and many other APIs to render graphics. This ensures that a larger number of users can play your game.

Getting Started

Installation

Before installing Pygame, ensure that you have installed Python. You will notice that there are two versions of Python: Python 2 and Python 3. Personally, I recommend that you use Python 2 alongside Pygame, as Python 2 is more prevalent and there are Pygame modules that do not support Python 3. However, you would still be able to make great games using Python 3.

After ensuring that a version of Python is running on your machine, you can pick up Pygame from its download website.

IDEs

In addition to installing the Pygame modules, I recommend that you install a Python IDE, which will make it easier for you to create and compile games.

Note that an integrated development environment, known as IDLE, already comes bundled with Python. However, IDLE has frequently been criticized for usability issues and I personally cannot recommend it.

My personal preference is PyScripter, but PyCharm, Wing IDE, Python Tools for Visual Studio and Netbeans are all viable options. As always, the choice of an IDE comes down to personal preference, so I suggest you try all of them and see which one you like best. 

Resources

Here are a few resources that will help you get started with Pygame.

University of Colorado

The University of Colorado provides a great introduction to Pygame. The attached slides will ensure that you know about the key components of Pygame and how to use them. After you have read this document, you should understand the core concepts of Pygame.

Tutorials

There are a multitude of tutorials available that will teach you Pygame in a step-by-step manner. I’ve hand-picked three of my favorites:

Video Series

Game Development in Python 3 With Pygame is a video tutorial series available as a YouTube playlist. The series contains 29 videos, beginning with displaying images and rendering text, and moves on to teach animation and user input. If you prefer video tutorials to text-based tutorials, don’t miss this series. 

Books

If you are a beginner to both Python and Pygame, I recommend taking a look at Invent Your Own Computer Games with Python. This book starts off by teaching you the basics of Python and supports this instruction by showing you how to make simple command-line games. After ensuring you understand the basics of Python, the book moves on to teach Pygame.

On the other hand, if you are itching to jump straight into Pygame, Making Games with Python & Pygame is the book for you. By the end of this book, you will have created 11 games and will be ready to go on your own. 

All of these books are free and are online, so you can get started right away!

Next Steps

Read the Documentation and Wiki

Taking a look at the documentation is an important step on your road to Pygame mastery. In here, you will find documents, tutorials and references that cover all the features of Pygame. The documentation will allow you to gain knowledge of all the aspects of Pygame, going beyond what you would have learned through the earlier resources.

The Pygame Wiki will also aid in answering any questions you may have about the platform, so don’t hesitate to take a look.

Look at Code

Looking at code will help you understand how different parts of Pygame interact to make a complete game. Moreover, examples will show you best practices, and you may pick up some tricks along the way.

Check out the official Pygame examples for details on how to use individual modules, or check out public Pygame projects on GitHub to see a full set of code. 

Participate in the Community

Pygame has an active IRC channel and mailing list. (The mailing list is also available in a forum-like interface.) Members of the community are eager to help beginners, so don’t be afraid to ask questions. 

Once you become a Pygame expert, contribute back to the community and help others. Teaching others will help you learn, and will solidify the concepts you already know. 

Make Your Own Game

The resources provided above will be more than enough to gain a solid understanding of Pygame. Furthermore, if you work your way through the resources, you will have made multiple games. However, I urge you to make your own creation; do more than build Tic-Tac-Toe or Concentration. Anyone can follow a tutorial to make a game. Let your creativity flow and put your skills to the test by building an original game.  

Development Tips

Use Groups

Pygame comes with a really powerful feature known as groups, through the pygame.sprite.Group class. A group is essentially a container for sprite objects.

 For example, if I am creating multiple Bullet sprites, I can add them to a group called bullets:

Working with groups is much faster than working with individual sprites. You can simply call Group.update() to update all the sprites within a group, or Group.draw() to blit the contents of the Group to your Surface. Furthermore, groups make collision detection easier, because Pygame has a method to check for collisions between two groups: pygame.sprite.groupcollide()
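Here's a minimal sketch of that pattern (the Bullet class is hypothetical):

```python
import pygame

# A hypothetical Bullet sprite; only `image` and `rect` are required by Pygame.
class Bullet(pygame.sprite.Sprite):
    def __init__(self, x, y):
        super().__init__()
        self.image = pygame.Surface((4, 10))
        self.rect = self.image.get_rect(topleft=(x, y))

    def update(self):
        self.rect.y -= 5  # move the bullet up the screen

bullets = pygame.sprite.Group()
for i in range(3):
    bullets.add(Bullet(i * 20, 100))

# One call updates every sprite in the group.
bullets.update()
```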

Convert Your Surfaces

Ensure that you are converting a Surface whenever you are creating one by calling the convert() method. This method changes the pixel format of a Surface to match that of the display Surface. If this method is not called, a pixel conversion will have to be done every time you blit something to your Surface—a very slow operation.

 For example, instead of:

 call:

or if you need to set the alpha:

Note that convert_alpha() will not result in the same performance increase as convert(), since the pixel format will not be an exact match for the display Surface. Nevertheless, it results in an improved blit speed.
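A small sketch of the difference (using a dummy video driver so it runs headless; plain Surfaces stand in for loaded image files, which are hypothetical here):

```python
import os
os.environ.setdefault("SDL_VIDEODRIVER", "dummy")

import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))

# Without convert(), blits from this surface may need a per-blit pixel conversion.
raw = pygame.Surface((64, 64))

# convert() matches the display's pixel format once, up front.
fast = raw.convert()

# For surfaces with per-pixel transparency, use convert_alpha() instead.
translucent = pygame.Surface((64, 64), pygame.SRCALPHA).convert_alpha()
```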

Update vs. Flip

Both pygame.display.update() and pygame.display.flip() update the contents of the game window. However, simply calling these methods updates the entire screen, heavily slowing down the frame rate. Instead, consider only updating a specific part of the screen (wherever changes are to be made) by calling pygame.display.update() and passing into the method a rect representing the area to be updated. In doing this, you will see a vast improvement in performance.

Note that, at times, the entire screen will need to be updated, and in these instances both methods will work fine. 

Additional Modules

Recall that I mentioned Pygame's large and helpful community earlier in the post. I recommend you take advantage of this community by downloading additional modules made by its members. These modules will help facilitate the rapid development of your game and simplify the game creation process.

 My personal favorites include:

  • Pyganim: Makes it easy to add sprite animations to your games
  • input.py: Simplifies control handling in Pygame when using multiple controller methods

You can find more libraries on the Pygame website.

Executable Games

If you want your game to be truly cross-platform, you must consider the scenario in which your users do not have Python and Pygame installed on their machines.

Consider the py2exe extension to make Windows executables, and the py2app extension to create applications for Mac OS X. 

Showcase

Pygame has not been used very often commercially, as Python is not known for its game development capabilities. Nevertheless, Pygame has been used to create many successful independent games:

Galcon

Galcon is a multi-player galactic action-strategy game, and may be the most prominent game developed with Pygame. It received an award as a Top 10 Game of the Year in 2007 from GameTunnel, one of the first indie game review sites.

Frets on Fire

Frets on Fire, an open-source Guitar Hero clone, was created in Pygame as an entry to the Assembly 2006 Game Development Competition by Unreal Voodoo.

Dangerous High School Girls in Trouble!

The multi-award-winning role-playing game Dangerous High School Girls in Trouble!, by independent developer Mousechief, was a finalist in IndieCade 2008.

PyWeek

PyWeek is a bi-annual game programming challenge that only accepts games created in Python. Thus, many submissions are created using Pygame. A list of previous winners is available on the PyWeek website.

Conclusion

In this post, I’ve given you an introduction to game development with Python and Pygame. I hope that I've been successful in getting you interested in Pygame and in teaching you more about it. Don’t hesitate to ask your questions here or in various Pygame forums. You are now ready to begin your journey into the world of Pygame. Best of luck!

Numbers Getting Bigger: The Design and Math of Incremental Games


In our introductory piece on incremental games, we took a look at the history of the genre and examined what makes these games unique, but we didn't delve too deeply into their actual design. While incremental games can appear simple, their design reveals complex and thoughtful intent from their creators. By looking at some successful examples of the genre, we can better appreciate the features of these games, and better understand how we might design our own.

Before diving into the mathematical framework, there are three easily-overlooked but important areas of design that I want to highlight: the quality of exploration and discovery, the difference between 'idle' and 'clicker' expressions of the genre, and the importance of coherent theme and art.

The Joy of Discovery

One of the most important vectors of "fun" in an incremental game is that of discovery. Many of these games begin with a very simple initial setup, but the complexity spirals as the player advances. The process of uncovering this complexity taps into the innate appeal of discovering new and hidden features. Candy Box, for example, can be understood as a game primarily about exploring its system, and the 'incrementing' candy score is simply the mechanism of unlocking further content.

Thus, most incremental games do not make the entirety of the system available from the start, but instead "gate" additional features against tiers of the primary currency. This content may be a "known unknown", as in Idling to Rule the Gods, where certain sections of the game are explicitly empty and specify how and when they can be unlocked, or an "unknown unknown", where the player doesn't even know the features exist until a certain level is reached, like almost all of the content in Cookie Clicker. Some games may contain elements of both: AdVenture Capitalist informs the player about much of its content that they can unlock, but contains numerous hidden features that emerge over the course of play.

Cookie Clicker Unknown Achievements
I guess we have a long way to go.

Discovery is an important feature to consider in the design of an incremental game because it provides an exploratory reward system to the player while they learn about the game's core mechanics. Presenting everything up front would not only raise the barrier to entry in learning the game, but also remove the joy that comes from gradual familiarization with a system.

Idling or Clicking?

Incremental games tend to focus on two overlapping but distinct primary mechanics: 

  • Autonomous growth that the player gradually increases the rate of.
  • Active player engagement whose productivity gradually increases.

Games focusing on the latter will generally have either a literal "clicking" mechanic to produce growth, or some other means of requiring active player participation, like storage caps that necessitate frequent player intervention. In CivClicker, for example, the player must mostly actively manage their town, with only short periods of idle growth. Conversely, games focusing on autonomous growth might include a clicking mechanic, but if they do, its importance gradually wanes in favor of something automated. In AdVenture Capitalist, the player must actively click at first, but quickly unlocks the ability to automate the process, and is then largely free from manual incrementing.

This choice is largely a matter of preference and emphasis of the game's goals. A game requiring active management may be more engaging for the player over a short timeframe, but, if implemented in a way that requires too much player involvement too often, it can come to violate principles of ethical and humane game design. Conversely, a more autonomous or idle approach may require less engagement from the player in any given play session, but can engender a more long-term commitment to the game, which helps explain why "idle" games on Kongregate have such a high retention rate. AdVenture Capitalist even helpfully informs the player what has happened in their absence, emphasizing that it doesn't require your constant attention:

AdVenture Capitalist Idle Screen

Art Direction and Theme

Incremental games still often benefit from a narrative theme on which the mechanics sit (although this can be easy to overlook because the mechanics are so minimal).

 A sensible theme can help give context to the otherwise abstract exercise of increasing numbers. Likewise, all games benefit from good art direction and design, and incrementals are no exception. Consistent aesthetics help the game feel like a unified experience, and a clean interface lowers the mental cost of navigating the game, so the player can concentrate on the game itself, rather than interpreting bad user interface elements.

AdVenture Capitalist Tutorial Example

The example above from AdVenture Capitalist is a good illustration of this. Its theme is business management and capitalistic expansion (which fits with the gameplay of ever increasing numbers), and it uses a 1950s Googie aesthetic for its art direction. This is used consistently (and with humor), such that even the menus and tutorials are "in character" and reinforce the visual and narrative theme. 

Incremental games' need for graphics and writing might be somewhat sparse compared to games of other genres, but it's important not to mistake low need for no need.

Numbers Going Up

The most defining mechanic of incremental games is the number increasing. We defined this last time as:

  1. The presence of at least one currency or number,
  2. which increases at a set rate, with no or minimal effort,
  3. and which can be expended to increase the rate or speed at which it increases.

It's that third item that largely affects the feel of the game, and that is the most difficult to design well. Since it's a particularly straightforward example, let's take a look at Number by Tyler Glaiel. It has the three core definitional items and almost nothing else: a number goes up, and you can spend that number to make it go up faster.

Number Main Screen

When the game begins, the "income rate" of the number increasing is 0.1 per second. The amount of "number" saved up can be spent to make it go up faster. Here are the first five purchases, with their cost in the first column and the new "number per second" rate in the second:

Cost    Income Rate
1.0     0.2
1.2     0.4
1.4     0.7
1.7     1.2
2.2     1.8

Even with a handful of observations, we can identify some of the hallmarks of incremental design here. One is the nonlinear increases to both cost and benefit: it takes more and more number to get relatively less incremental improvement. 

This makes sense from a practicality perspective: if the cost/benefit ratio stayed the same (for instance, if it always cost 1 number to purchase a 0.2 increase in the income rate), there would be no variability at all in the outcome, and the income rate would climb at a steady and predictable pace. This would get boring very quickly!

Instead, this is what the cost (in blue) and income rate (in orange) for the first twenty purchases look like:

Number: Cost (blue) and Rate (orange)
Blue: Cost of the xth upgrade. Orange: Income rate per second generated from x purchases. For example, the 10th purchase costs 7 numbers, and results in an income rate of 7.6 numbers per second.

(You can download an XLSX of the data used to generate these graphs from this GitHub repo, or view a Google Sheets equivalent.)

We can see here very obviously that these functions are non-linear (even ignoring the cost formula jump at the 12th iteration), and that cost increases quickly outpace the income rate increases. That's an important aspect of the design, because it means that the time spent waiting to afford the next upgrade grows exponentially the longer the game lasts. So the game progresses quite quickly at first, with the player only needing to wait periodically to save up enough for the next purchase, but gradually slows down.

Most incremental games have multiple sources of income rate increase to upgrade, instead of just one as Number does. This is a major source of the discovery and strategy of incremental games, because having multiple vectors of improvement whose costs increase non-linearly introduces interesting avenues of optimization for the player. If the player chooses to invest heavily in a single building or upgrade, the exponentially increasing cost means at some point other options will become relatively cheaper, even if they were initially priced very high. This means the player has an array of options at their disposal, but they must be constantly reevaluated because relative worth to the player is constantly shifting.
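A tiny simulation makes this dynamic concrete. Assume two hypothetical generators that share the exponential cost scaling common to the genre, one cheap and one expensive (all numbers here are made up):

```python
def next_cost(base, multiplier, owned):
    """Cost of the next purchase under exponential scaling."""
    return base * multiplier ** owned

CHEAP_BASE, PRICEY_BASE, MULTIPLIER = 10, 1000, 1.15

# Buy only the cheap generator; count purchases until its next level
# costs more than the first level of the untouched expensive one.
cheap_owned = 0
while next_cost(CHEAP_BASE, MULTIPLIER, cheap_owned) < PRICEY_BASE:
    cheap_owned += 1
```

After a few dozen purchases, the "cheap" option is no longer cheap, and the expensive generator becomes the better buy.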

Linear Improvements With Exponential Costs

Exponential cost scaling is useful because of the ever-growing resource and time investment it requires, but most games don't employ exponential income rate increases. Why not?

In the graph from the last section, it's the gap between the two lines that gives us the ever-growing cost-versus-benefit ratio. To achieve that, we actually only require cost (in blue) to increase exponentially (or polynomially); the income rate could increase linearly, and the gap between the lines would still ever widen.

For example, in Clicker Heroes, one of the first automated sources of number increasing is from a "hero" called Treebeard. Initially, it costs 50, and gives you an income rate of 5 per second. The second level costs 53.5, but still just gives an additional rate increase of 5. The first fifty purchases look like this, again with cost in blue and the income rate in orange:

Clicker Heroes: Cost (blue) and Rate (orange) of Treebeard leveling
Please note that for simplicity we're ignoring a number of other cost/rate mechanics in Clicker Heroes.

The "income rate" function here is just a straight line, since each purchase increases it by the set amount of 5, so the formula for it is very simple: the total rate per second is just the number owned multiplied by 5 (so, \(y=5x\)). 

The cost, however, is going up at an ever-increasing rate. The incremental cost of each additional level is minimal at first; on the graph we can see for the first twenty the gap between the two is nearly constant. But then it breaks away dramatically, requiring more and more for each subsequent upgrade. 

The formula for the cost function here is actually one that's widely used across many incremental games:

\[ Price = BaseCost \times Multiplier ^{(\#\:Owned)} \]

For our Treebeard example, the base cost is 50, and the "Multiplier" variable is 1.07, so the second level costs \(50 \times 1.07^1 = 53.5\), the third costs \(50 \times 1.07^2 = 57.245\), and so on. The Multiplier's value determines the curvature of the line, with higher values meaning steeper cost curves. (A value of 1 would give a linear cost line.) 
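In code, the formula is a one-liner; the Treebeard numbers from above serve as a check:

```python
def price(base_cost, multiplier, owned):
    """Price of the next level: BaseCost * Multiplier ** (number owned)."""
    return base_cost * multiplier ** owned

first = price(50, 1.07, 0)   # 50: nothing owned yet
second = price(50, 1.07, 1)  # ~53.5, as in the Treebeard example
third = price(50, 1.07, 2)   # ~57.245
```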

Clicker Heroes uses 1.07 as the increase multiplier for all 35 of its upgradable heroes, and all the various buildings of Cookie Clicker use a value of 1.15. Interestingly, the 10 businesses of AdVenture Capitalist all use a different multiplier, but each is between 1.07 and 1.15. The common appearance of the same Multipliers across different games suggests that the curves produced between those bounds are balanced and satisfying. 

Some games do diverge from there though. Steam's multiplayer incremental game Monster, part of their 2015 summer sale event, uses Multipliers as high as 2.5, which increase very steeply.

Monster Main Menu

As mentioned before, exponential cost scaling has the benefit of balancing multiple upgrade paths by ensuring each follows a path of diminishing returns. This makes some of the tactical balancing innate to the cost formula itself, rather than something the designer needs to explicitly frame: even if a given resource is sometimes, or even mostly, "better", its exponentially rising cost means it can't be exploited exclusively.

Let's take a look at the list of upgradeable buildings in Cookie Clicker as an example:

| Building | Base Cost | Base Income Rate |
| --- | --- | --- |
| Cursor | 15 | 0.1 |
| Grandma | 100 | 0.5 |
| Farm | 500 | 4 |
| Factory | 3,000 | 10 |
| Mine | 10,000 | 40 |
| Shipment | 40,000 | 100 |
| Alchemy Lab | 200,000 | 400 |
| Portal | 1,666,666 | 6,666 |
| Time Machine | 123,456,789 | 98,765 |
| Antimatter Condenser | 3,999,999,999 | 999,999 |
| Prism | 75,000,000,000 | 10,000,000 |

We can see several apparent patterns just from this table. 

The first is that the base cost of each subsequent building is roughly five times higher than the previous one (except for the last few). These half-order-of-magnitude increases ensure the player has sufficient time to enjoy each newly unlocked resource; smaller increases would mean the unlocks come too quickly, while larger ones would risk the player becoming bored before reaching the next unlock. 

The income rate (cookies per second, for this game), meanwhile, increases more slowly than the cost with each additional tier, which means that while the buildings contribute ever larger numerical amounts, they're actually less and less efficient relative to their cost.

However, because each building follows the same cost increase formula \( Price = BaseCost \times 1.15^{(\#\:Owned)} \), every building actually follows a very similar pattern. The chart below shows a line for each of the 11 buildings, graphing their first two hundred upgrades, with log cost along the y-axis and log income rate on the x-axis. (Since these are exponential functions, a logarithmic scale reveals their similarity better than a linear one.)

Cookie Clicker Each Buildings log Cost y-axis versus log Rate x-axis
Each line represents a different building, with cost on the y-axis and income rate on the x-axis (both logarithmic scales). This is a visualization of the cost-to-benefit curvature we discussed earlier.

So even though these buildings appear very different, since each one nominally produces and costs much more than the prior, their exponential cost formulas produce curves that are inherently similar, while still creating a system that the player can optimize.
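We can check this similarity numerically: because every building shares the 1.15 multiplier, the log-cost gap between any two buildings is the same no matter how many have been purchased. A rough sketch (helper names are ours):

```javascript
// With a shared multiplier, log(price) = log(base) + n * log(multiplier),
// so on a log scale every building's cost curve is the same line, just
// shifted vertically by its base cost. (Factory: 3,000; Grandma: 100.)
const MULT = 1.15;
const price = (base, n) => base * Math.pow(MULT, n);

// The log-cost gap between two buildings is constant, regardless of n:
const gapAt = n => Math.log(price(3000, n)) - Math.log(price(100, n));
console.log(gapAt(0), gapAt(50), gapAt(199)); // all ≈ log(30) ≈ 3.4
```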

Accounting for Efficiency

While incremental games are superficially about making numbers go up, it's how to make them go up as fast as possible that provides depth of play for passionate players. The player always has multiple avenues of improvement available among the various upgradeable resources (usually along with some additional features we discuss later), which challenges them to evaluate these choices. Should you buy the cheaper upgrade you can afford right now, or save until you can afford the next tier?

Since we eventually want to buy all upgrades, the most efficient approach is just to evaluate the optimal order. Imagine a scenario where we're currently producing 5 of our number per second (\(nps=5\)), and we have a choice between two upgrades. The first costs 20 (\(cost_a = 20\)), and will increase our income rate by 1 (\(rate_a = 1\)). The other has \(cost_b = 100\), but also has \(rate_b = 10\). The former is cheaper, but it's also less cost efficient. 

Well, let's try buying A then B:

  • We wait and save for \(20/5 = 4.0\) seconds, then purchase A.
  • Now we wait \(100/(5+1) = 16.67\) seconds, then purchase B.
  • We now have \(nps = 16\), and it took us \(20.67\) seconds to achieve.

What if we did the opposite?

  • We wait and save for \(100/5 = 20.0\) seconds, then purchase B.
  • Now we wait \(20/(5+10) = 1.33\) seconds, then purchase A.
  • We now have \(nps=16\), and it took us \(21.33\) seconds to achieve.
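We can verify both timings with a small simulation (hypothetical helper names):

```javascript
// Simulate saving for and buying upgrades in a given order, starting from a
// base income rate, and return the total time taken (values from the example).
function timeToBuy(order, nps) {
  let time = 0;
  for (const u of order) {
    time += u.cost / nps; // wait until we can afford it
    nps += u.rate;        // then buy it, raising our income
  }
  return time;
}

const A = { cost: 20, rate: 1 };
const B = { cost: 100, rate: 10 };
console.log(timeToBuy([A, B], 5)); // ≈ 20.67 seconds
console.log(timeToBuy([B, A], 5)); // ≈ 21.33 seconds
```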

So, it looks like buying A first and then B is more efficient, because \( 20/5+100/(5+1) < 100/5+20/(5+10)\). We could generalize this example to get a formula like this:

\[ \frac{cost_a}{nps} + \frac{cost_b}{(nps + rate_a)} < \frac{ cost_b}{nps} + \frac{cost_a}{(nps + rate_b)} \]

But this is only useful for comparisons between two possible upgrades, and so isn't as useful when we have many choices. We need to simplify the formula to isolate the variables for just a single upgrade (the derivation of which is explained in detail in this fantastic article by Adam Babcock), which yields this:

\[ \frac{cost_a}{nps} + \frac{cost_a}{(nps + rate_a)}  \]

Now, we can apply this formula to each possible upgrade, and, because of the transitivity of the inequalities, the lowest result will tell us what we should purchase next (with some exceptions not worth getting into for this level of analysis). This greatly simplifies the process of finding the most efficient route to optimization.
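Applied in code, the metric might look like this (illustrative names; values from the worked example):

```javascript
// Per-upgrade efficiency metric: cost/nps + cost/(nps + rate).
// The upgrade with the lowest value is the best next purchase.
function efficiency(upgrade, nps) {
  return upgrade.cost / nps + upgrade.cost / (nps + upgrade.rate);
}

const upgrades = [
  { name: "A", cost: 20, rate: 1 },
  { name: "B", cost: 100, rate: 10 },
];
const best = upgrades.reduce((a, b) =>
  efficiency(a, 5) < efficiency(b, 5) ? a : b);
console.log(best.name); // "A", matching the worked example above
```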

This is obviously relevant for the player, but it's also useful for the designer. Knowing the most efficient usage of various game elements can reveal unintended spikes in time requirements, and ensure that even optimal play progresses at a rate intended by the creator.

Deriving optimal play scenarios also lets us compare different incremental games, as we can reduce the disparate variables to just the time it takes to reach a given level of number per second. The chart below shows the time required to achieve a given number per second income rate in AdVenture Capitalist (in green) and Cookie Clicker (in brown), if buying buildings in the most efficient manner (ignoring other game aspects for simplicity):

Time vs NPS in AdVenture Capitalist and Cookie Clicker
X-axis: time, y-axis: income rate (log scale); green: AdVenture Capitalist, brown: Cookie Clicker.

Remarkably, the two games look very similar here, returning higher rates of nps against an ever-increasing amount of time. Both grow incredibly quickly in the first 8–10 hours (around 500 minutes), but the rate of increase is much more marginal thereafter. They eventually flatten out as the number of new buildings is exhausted. As a result, most incremental games include other purchasable resources alongside the main upgradeable buildings, one of the most important being the ability to reset the game, which allows the player to climb this curve all over again.

The spiralling complexity of upgrades in incremental games can make their design a daunting prospect. But the designer doesn't need to precisely calibrate each and every element. The beauty of complex non-linear systems means that engrossing ladders of upgrades can be produced with only high level balancing from the designer. For the player, navigating the system to find the optimal sequence is difficult and fun, whereas the task for the designer is simply to make sure there is such a complex system to navigate.

"New Game +" and Other Features

A "New Game +" feature lets the player reset their progress in exchange for some lasting bonus. So, all purchased buildings and other resources might be returned to zero, but when starting over a flat multiplicative increase is applied to all number per second calculations thereafter. 

This doesn't change any of the fundamental formulas of the game; it just means the player will reach the eventual asymptotic plateau more quickly each time. In essence, this feature extends the core gameplay by allowing it to be replayed faster. This can't be kept up indefinitely, however, so eventually long-time players will reach an "endgame" of sorts.
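As a sketch of the idea (the state shape and names here are illustrative, not taken from any particular game):

```javascript
// A "New Game +" reset: buildings go back to zero, but a persistent
// multiplier is applied to all future income.
const state = { buildings: 7, prestigeMultiplier: 1.0 };

function incomePerSecond(s, baseRate) {
  return s.buildings * baseRate * s.prestigeMultiplier;
}

function newGamePlus(s, bonus) {
  return { buildings: 0, prestigeMultiplier: s.prestigeMultiplier * bonus };
}

const reset = newGamePlus(state, 2.0);
// Rebuying the same 7 buildings now yields double the income:
console.log(incomePerSecond({ ...reset, buildings: 7 }, 5)); // 70
```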

Clicker Heroes Ascension New Game  Mechanic
In Clicker Heroes, ascending rewards the player with a new currency to buy upgrades with.

Another common feature for extending play is simply increasing the complexity of the upgradeable buildings. So far, we've only examined the most common method of incremental upgrade, which follows the exponential cost function. Alongside those are typically upgrades which either yield one-time flat improvements to the overall number per second, or that alter the underlying cost and income rate variables in some way. 

In Clicker Heroes, for example, there are upgrades that increase the base number per second for a "hero", as well as ones that increase the base number per second for all "heroes". While these and similar features don't change the underlying mechanics of an incremental game, they can widen the possibility space for the player to explore, and further challenge their ability to optimize the game. Additionally, like the 'New Game +' mechanic, the volume of upgrades can also act to prolong play before hitting the eventual plateau in progress.

Conspiracy Clicker Upgrades
Conspiracy Clicker has three distinct trees of interrelated upgrades.

Go Forth and Multiply (Exponentially)

Although this hasn't been an exhaustive investigation of incremental game design, we've given a thorough look to their fundamental aspects. As a brief summary for prospective designers and developers:

  • Enable and promote a feeling of discovery.
  • Consider active and inactive play (and ideally reward both).
  • Don't neglect a unifying theme and art style.
  • Make use of exponential cost scaling, with the most common form being \( Price = BaseCost \times Multiplier ^{(\#\:Owned)} \) with a Multiplier between \(1.07\) and \(1.15\).
  • Provide the player with multiple avenues of optimization.
  • Extend play through the strategic use of resetting and mechanics that increase complexity.

If you're interested in learning more, the incremental games subreddit is a great community of designers and developers to turn to for advice and ideas. If you'd like to dive right into implementing some of your ideas, the developer of Cookie Clicker created an online tool that can easily create similar games, and it's a great way to experiment without having to lay the entire foundation yourself. If you're looking for something a bit more advanced, the creator of CivClicker has an excellent piece on the logic for a HTML and JavaScript implementation as well.

I hope the framework we've examined here serves as an inspiration for exploring your own expression of an incremental game. After all, there is a lot of untapped design space still left to explore.

List of Games Mentioned

While not a comprehensive list (for that, the incremental games subreddit has a great list), here's a list of the games mentioned in our first article or in this one:

Also note: you can download an XLSX of the data used to generate the graphs in this article from this GitHub repo, or view a Google Sheets equivalent.



Tuts+ Celebrates 20,000 Free Tutorials!


The whole team here at Tuts+ is excited to announce that we’ve just published our 20,000th free tutorial. Since launching in 2007, Tuts+ has helped more than a quarter of a billion people learn new skills and improve their lives. We’re very proud to have helped so many people from all over the world expand their knowledge across a huge range of topics—all for free!

To celebrate the occasion, we asked our editorial team to share with you some of their favorite tutorials from the last few years, so make sure to check them out. And as a small thanks to our community of readers for helping us get here, we’ve teamed up with some of the great authors on Envato Market to offer you a selection of free assets from their library for the next seven days only. Enjoy!

Tuts+ Design & Illustration

How to Create a Geometric, WPAP Vector Portrait in Adobe Illustrator

How to Create a Geometric WPAP Vector Portrait in Adobe Illustrator

This tutorial from September 2013 teaches you how to create a WPAP (Wedha's Pop Art Portrait) portrait in Adobe Illustrator. Produced by the WPAP master himself, Abdul Rasyid!

Download the free file Double Exposure Photoshop Action by Eugene-design

Create your own double exposure images with this free file from Eugene-design. Download here.

Tuts+ Code

Building Your Startup With PHP

Building Your Startup With PHP

This series of tutorials will teach you how to turn your entrepreneurial business concept into an actual startup by following the instructor Jeff Reifman’s own product development process.

Tuts+ Code WordPress

Hosting Your Website After Death

Hosting Your Website After Death

The first of a series on exploring the complexity of managing your digital legacy. In this tutorial, Jeff Reifman shares how he learned to host his WordPress site for a hundred years, ensuring continuity after he’s gone.

Tuts+ Web Design

Creating a Future-Proof Responsive Email Without Media Queries

Creating a Future-Proof Responsive Email Without Media Queries

Learn how to code a future-proofed responsive email template for email clients with no media query support whatsoever. From instructor Nicole Merlin.

Download the free file MailChimp Subscribe Form by RafaelFerreira

Let your community subscribe to your newsletter via social media just by typing their email with this free file. Download here.

Tuts+ Photo & Video

Mental Haze: How to Use Bokeh to Create Visual Tension and Emotional Impact

How to Use Bokeh to Create Visual Tension and Emotional Impact

Learn how to create a beautifully effective bokeh effect to provoke an emotional response in your photos. With instructor Marie Gardiner.

Download the free stock image "light and flare" by memorialphoto

Envato Market Free File Giveaway

Download the full set of assets from Envato Market. Available until 22 July 2015 only.

Eight Years & 20,000 Free Tutorials Later

Here are a few stats from our tutorial vault:

  • Over 2,600 different instructors have written for Tuts+, spread over 20,000 tutorials.
  • Over 300 million people from around 99% of all the different countries in the world have visited the Tuts+ network since the initial launch of PSDTUTS (now Tuts+ Design & Illustration) in 2007. That’s close to the entire population of the United States of America, the third most populated country in the world.
  • Top instructors on Tuts+ by posts are:
    1) Mo Volans: 230 posts in Tuts+ Music & Audio
    2) Martin Perhiniak: 218 posts in Tuts+ Design & Illustration & Tuts+ Photo & Video
    3) Andrei Marius: 214 posts in Tuts+ Design & Illustration
  • Over 40 Tuts+ tutorials have had more than one million views.
  • 41,702,992 minutes of video from the Tuts+ YouTube channels have been watched. That's over 79 years—a lifetime of learning.
  • Our 20,000 tutorials consist of over 30 million words. That's more than 27 times the length of the entire series of Harry Potter books!

The Tuts+ Melbourne HQ team celebrating the milestone.

From everyone here on the Tuts+ team, thank you for your support over the years. We’d love to hear your story, so please comment below and let us know what you’ve learned from Tuts+. We’ll choose our three favorite comments to win a free month of access to Tuts+ subscription content! 
Entries close 11:59pm 22 July 2015, AEST. Winners will be notified via Disqus.

A Beginner's Guide to Coding Graphics Shaders: Part 3


Having mastered the basics of shaders, we take a hands-on approach to harnessing the power of the GPU to create realistic, dynamic lighting.

The first part of this series covered the fundamentals of graphics shaders. The second part explained the general procedure of setting up shaders to serve as a reference for whatever platform you choose. From here on out, we'll be tackling general concepts on graphics shaders without assuming a specific platform. (For convenience's sake, all code examples will still be using JavaScript/WebGL.)

Before going any further, make sure that you have a way to run shaders that you're comfortable with. (JavaScript/WebGL might be easiest, but I encourage you to try following along on your favorite platform!) 

Goals

By the end of this tutorial, you will not only be able to boast a solid understanding of lighting systems, but you'll have built one yourself from scratch. 

Here's what the final result looks like (click to toggle the lights):

You can fork and edit this on CodePen.

While many game engines do offer ready-made lighting systems, understanding how they're made and how to create your own gives you a lot more flexibility in creating a unique look that fits your game. Shader effects don't have to be purely cosmetic, either; they can open doors to fascinating new game mechanics! 

Chroma is a great example of this; the player character can run along the dynamic shadows created in real-time:

Getting Started: Our Initial Scene

We're going to skip a lot of the initial setup, since this is what the previous tutorial was exclusively about. We'll start with a simple fragment shader rendering our texture:

You can fork and edit this on CodePen.

Nothing too fancy is happening here. Our JavaScript code is setting up our scene and sending the texture to render, along with our screen dimensions, to the shader.

In our GLSL code, we declare and use these uniforms:

We make sure to normalize our pixel coordinates before we use them to draw the texture. 

Just to make sure you understand everything that's going on here, here's a warm up challenge:

Challenge: Can you render the texture while keeping its aspect ratio intact? (Have a go at this yourself; we'll walk through the solution below.)

It should be fairly obvious why it's being stretched, but here are some hints: Look at the line where we normalize our coordinates:

We're dividing a vec2 by a vec2, which is the same as dividing each component individually. In other words, the above is equivalent to:

We're dividing our x and y by different numbers (the width and height of the screen), so it will naturally be stretched out. 

What would happen if we divided both the x and y of gl_FragCoord by just the x resolution? Or by just the y instead?

For simplicity's sake, we're going to keep our normalizing code as-is for the rest of the tutorial, but it's good to understand what's going on here!
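To make the arithmetic concrete, here's the same normalization written out in plain JavaScript (a stand-in for the GLSL, with hypothetical helper names):

```javascript
// Component-wise division, as GLSL does for vec2 / vec2 (plain JS stand-in).
function normalize2(coord, res) {
  return { x: coord.x / res.x, y: coord.y / res.y };
}

// Dividing both components by the same number (res.x here) instead
// preserves the texture's aspect ratio:
function normalizeKeepAspect(coord, res) {
  return { x: coord.x / res.x, y: coord.y / res.x };
}

const res = { x: 800, y: 600 };
console.log(normalize2({ x: 400, y: 300 }, res));          // { x: 0.5, y: 0.5 }
console.log(normalizeKeepAspect({ x: 400, y: 300 }, res)); // { x: 0.5, y: 0.375 }
```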

Step 1: Adding a Light Source

Before we can do anything fancy, we need to have a light source. A "light source" is nothing more than a point we send to our shader. We'll construct a new uniform for this point:

We created a vector with three dimensions because we want to use the x and y as the position of the light on screen, and the z as the radius. 

Let's set some values for our light source in JavaScript:

We intend to use the radius as a percentage of the screen dimensions, so 0.2 would be 20% of our screen. (There's nothing special about this choice. We could have set this to a size in pixels. This number doesn't mean anything until we do something with it in our GLSL code.)

To get the mouse position in JavaScript, we just add an event listener:

Now let's write some shader code to make use of this light point. We'll start with a simple task: We want every pixel within our light range to be visible, and everything else should be black.

Translating this into GLSL might look something like this:

All we've done here is:

  • Declared our light uniform variable.
  • Used the built-in distance function to calculate the distance between the light position and the current pixel's position.
  • Checked whether this distance (in pixels) is less than 20% of the screen width; if so, we return the color of that pixel, otherwise we return black.
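Here's the same logic as a plain-JavaScript sketch (a stand-in for the GLSL; names are illustrative):

```javascript
// A pixel is lit only if it lies within the light's radius
// (light.z is a fraction of the screen width).
function litColor(pixel, light, res, texColor) {
  const dx = pixel.x - light.x;
  const dy = pixel.y - light.y;
  const dist = Math.sqrt(dx * dx + dy * dy);
  return dist < light.z * res.x ? texColor : [0, 0, 0];
}

const light = { x: 400, y: 300, z: 0.2 }; // radius = 20% of screen width
const res = { x: 800, y: 600 };
console.log(litColor({ x: 410, y: 310 }, light, res, [1, 1, 1])); // inside: lit
console.log(litColor({ x: 700, y: 300 }, light, res, [1, 1, 1])); // outside: black
```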
You can fork and edit this on CodePen.

Uh oh! Something seems off with how the light is following the mouse.

Challenge: Can you fix that? (Again, have a go yourself before we walk through it below.)

Fixing the Light's Movement

You might remember from the first tutorial in this series that the y-axis here is flipped. You might be tempted to just do:

Which is mathematically sound, but if you did that, your shader wouldn't compile! The problem is that uniform variables cannot be changed. To see why, remember that this code runs for every single pixel in parallel. Imagine all those processor cores trying to change a single variable at the same time. Not good! 

We can fix this by creating a new variable instead of trying to edit our uniform. Or better yet, we can simply do this step before passing it to the shader:

You can fork and edit this on CodePen.

We've now successfully defined the visible range of our scene. It looks very sharp, though....

Adding a Gradient

Instead of simply cutting to black when we're outside the range, we can try to create a smooth gradient towards the edges. We can do this by using the distance that we're already calculating. 

Instead of setting all pixels inside the visible range to the texture's color, like so:

We can multiply that by a factor of the distance:

You can fork and edit this on CodePen.

This works because dist is the distance in pixels between the current pixel and the light source. The term (light.z * res.x) is the radius length. So when we're looking at the pixel exactly at the light source, dist is 0, and we end up multiplying color by 1, which is the full color.

In this diagram, dist is calculated for some arbitrary pixel. dist is different depending on which pixel we're at, while light.z * res.x is constant.

When we look at a pixel at the edge of the circle, dist is equal to the radius length, so we end up multiplying color by 0, which is black. 
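In plain JavaScript, the falloff factor looks like this (a stand-in for the GLSL expression):

```javascript
// Gradient falloff: scale the color by (1 - dist / radius), so brightness
// fades linearly from 1 at the light's centre to 0 at its edge.
function falloff(dist, radius) {
  return 1.0 - dist / radius;
}

const radius = 0.2 * 800; // light.z * res.x from the shader
console.log(falloff(0, radius));      // 1   (directly under the light)
console.log(falloff(radius, radius)); // 0   (at the edge)
console.log(falloff(80, radius));     // 0.5 (halfway out)
```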

Step 2: Adding Depth

So far we haven't done much more than make a gradient mask for our texture. Everything still looks flat. To understand how to fix this, let's see what our lighting system is doing right now, as opposed to what it's supposed to do.

In the above scenario, you would expect A to be the most lit, since our light source is directly overhead, with B and C being dark, since almost no light rays are actually hitting the sides. 

However, this is what our current light system sees:

They're all treated equally, because the only factor we're taking into account is distance on the xy plane. Now, you might think that all we need is the height of each of those points, but that's not quite right. To see why, consider this scenario:

A is the top of our block, and B and C are the sides of it. D is another patch of ground nearby. We can see that A and D should be the brightest, with D being a little darker because the light is reaching it at an angle. B and C, on the other hand, should be very dark, because almost no light is reaching them, since they're facing away from the light source. 

It's not the height so much as the direction the surface is facing that we need. This is called the surface normal.

But how do we pass this information to the shader? We can't possibly send a giant array of thousands of numbers for every single pixel, can we? Actually, we're already doing that! Except we don't call it an array; we call it a texture. 

This is exactly what a normal map is; it's just an image where the r, g and b values of each pixel represent a direction instead of a color. 

Example normal map

Above is a simple normal map. If we use a color picker, we can see that the default, "flat" direction is represented by the color (0.5, 0.5, 1) (the blue color that takes up the majority of the image). This is the direction that's pointing straight up. The x, y and z values are mapped to the r, g and b values.

The slanted side on the right is pointing to the right, so its x value is higher; the x value is also its red value, which is why it looks more reddish/pinkish. The same applies for all the other sides. 

It looks funny because it's not meant to be rendered; it's made purely to encode the values of these surface normals. 

So let's load this simple normal map to test with:

And add it as one of our uniform variables:

To test that we've loaded it correctly, let's try rendering it instead of our texture by editing our GLSL code (remember, we're just using it as a background texture, rather than a normal map, at this point):

You can fork and edit this on CodePen.

Step 3: Applying a Lighting Model

Now that we have our surface normal data, we need to implement a lighting model. In other words, we need to tell our surface how to take into account all the factors we have to calculate the final brightness. 

The Phong model is the simplest one we can implement. Here's how it works: Given a surface with normal data like this:

We simply calculate the angle between the light source and the surface normal:

The smaller this angle, the brighter the pixel. 

This means that pixels directly underneath the light source, where the angle difference is 0, will be the brightest. The darkest pixels will be those pointing in the same direction as the light ray (that would be like the underside of the object).

Now let's implement this. 

Since we're using a simple normal map to test with, let's set our texture to a solid color so that we can easily tell whether it's working. 

So, instead of:

Let's make it a solid white (or any color you like really):

This is GLSL shorthand for creating a vec4 with all components equal to 1.0.

Here's what our algorithm looks like:

  1. Get the normal vector at this pixel.
  2. Get the light direction vector.
  3. Normalize our vectors.
  4. Calculate the angle between them.
  5. Multiply the final color by this factor.
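The five steps above can be sketched in plain JavaScript before we write the GLSL (illustrative helpers, not the shader code itself):

```javascript
// Plain-JS sketch of the diffuse term; the shader proper does this in GLSL.
function normalize3(v) {
  const len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}
function dot3(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function diffuse(surfaceNormal, lightDir) {
  const n = normalize3(surfaceNormal);
  const l = normalize3(lightDir);
  // Clamp to zero so surfaces facing away from the light are simply dark:
  return Math.max(0.0, dot3(n, l));
}

console.log(diffuse([0, 0, 1], [0, 0, 60]));  // 1: facing the light directly
console.log(diffuse([0, 0, 1], [0, 0, -60])); // 0: facing away from it
```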

1. Get the Normal Vector at This Pixel

We need to know what direction the surface is facing so we can calculate how much light should reach this pixel. This direction is stored in our normal map, so getting our normal vector just means getting the current pixel color of the normal texture:

Since the alpha value doesn't represent anything in the normal map, we only need the first three components. 

2. Get the Light Direction Vector

Now we need to know in which direction our light is pointing. We can imagine our light surface is a flashlight held in front of the screen, at our mouse's location, so we can calculate the light direction vector by just using the distance between the light source and the pixel:

It needs to have a z-coordinate as well (in order to be able to calculate the angle against the 3-dimensional surface normal vector). You can play around with this value. You'll find that the smaller it is, the sharper the contrast is between the bright and dark areas. You can think of this as the height you're holding your flashlight above the scene; the further away it is, the more evenly light is distributed.

3. Normalize Our Vectors

Now to normalize:

We use the built-in function normalize to make sure both of our vectors have a length of 1.0. We need to do this because we're about to calculate the angle using the dot product. If you're a little fuzzy on how this works, you might want to brush up on some of your linear algebra. For our purposes, you only need to know that the dot product of two unit-length vectors returns the cosine of the angle between them. 

4. Calculate the Angle Between Our Vectors

Let's go ahead and do that with the built-in dot function:

I call it diffuse because that's what this term is called in the Phong lighting model, as it dictates how much light reaches the surface of our scene.

5. Multiply the Final Color by This Factor

That's it! Now go ahead and multiply your color by this term. I went ahead and created a variable called distanceFactor so that our equation looks more readable:

And we've got a working lighting model! (You might want to expand the radius of your light to see the effect more clearly.)

You can fork and edit this on CodePen.

Hmm, something seems a bit off. It feels like our light is tilted somehow. 

Let's revise our maths for a second here. We've got this light vector:

Which we know will give us (0, 0, 60) when the light is directly on top of this pixel. After we normalize it, it will be (0, 0, 1).

Remember that we want a normal that's pointing directly up towards the light to have the maximum brightness. Our default surface normal, pointing upwards, is (0.5, 0.5, 1).

Challenge: Can you see the solution now? Can you implement it?

The problem is that you can't store negative numbers as color values in a texture. You can't denote a vector pointing to the left as (-0.5, 0, 0). So, people who create normal maps need to add 0.5 to everything. (Or, in more general terms, they need to shift their coordinate system). You need to be aware of this to know that you should subtract 0.5 from each pixel before using the map. 
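As a plain-JavaScript illustration of that decoding step (subtracting 0.5 from the x and y channels, then normalizing; the helper name is ours):

```javascript
// Decode a normal-map texel: shift x and y back so the "flat" colour
// (0.5, 0.5, 1) becomes a vector pointing straight up, then normalize.
function decodeNormal(texel) {
  const v = [texel[0] - 0.5, texel[1] - 0.5, texel[2]];
  const len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

console.log(decodeNormal([0.5, 0.5, 1.0])); // [0, 0, 1]: straight up
```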

Here's what the demo looks like after subtracting 0.5 from the x and y of our normal vector:

You can fork and edit this on CodePen.

There's one last fix we need to make. Remember that the dot product returns the cosine of the angle. This means that our output is clamped between -1 and 1. We don't want negative values in our colors, and while WebGL seems to automatically discard these negative values, you might get weird behavior elsewhere. We can use the built-in max function to fix this issue, by turning this:

Into this:

Now you've got a working lighting model! 

You can put back the stones texture, and you can find its real normal map in the GitHub repo for this series (or, directly, here):

We only need to change one JavaScript line, from:

to:

And one GLSL line, from:

No longer needing the solid white, we pull the real texture, like so:

And here's the final result:

You can fork and edit this on CodePen.

Optimization Tips

The GPU is very efficient in what it does, but knowing what can slow it down is valuable. Here are some tips regarding that:

Branching

One thing about shaders is that it's generally preferable to avoid branching whenever possible. While you rarely have to worry about a bunch of if statements on any code you write for the CPU, they can be a major bottleneck for the GPU. 

To see why, remember again that your GLSL code runs on every pixel on the screen in parallel. The graphics card can make a lot of optimizations based on the fact that all pixels need to run the same operations. If there's a bunch of if statements, however, then some of those optimizations might start to fail, because different pixels will run different code now. Whether or not if statements actually slow things down seems to depend on the specific hardware and graphics card implementation, but it's a good thing to keep in mind when trying to speed up your shader.

Deferred Rendering

This is a very useful concept when dealing with lighting. Imagine if we wanted to have two light sources, or three, or a dozen; we'd need to calculate the angle between every surface normal and every point of light. This will quickly slow our shader to a crawl. Deferred rendering is a way to optimize that by splitting the work of our shader into multiple passes. Here's an article that goes into the details of what it means. I'll quote the relevant part for our purposes here:

Lighting is the main reason for going one route versus the other. In a standard forward rendering pipeline, the lighting calculations have to be performed on every vertex and on every fragment in the visible scene, for every light in the scene.

For example, instead of sending an array of light points, you might instead draw them all onto a texture, as circles, with the color at each pixel representing the intensity of the light. This way, you'll be able to calculate the combined effect of all the lights in your scene, and just send that final texture (or buffer as it's sometimes called) to calculate the lighting from. 

Learning to split the work into multiple passes for the shader is a very useful technique. Blur effects make use of this idea to speed up the shader, for example, as well as effects like a fluid/smoke shader. It's out of the scope of this tutorial, but we might revisit the technique in a future tutorial!

Next Steps

Now that you've got a working lighting shader, here are some things to try and play around with:

  • Try varying the height (z value) of the light vector to see its effect
  • Try varying the intensity of the light. (You can do this by multiplying your diffuse term by a factor.)
  • Add an ambient term to your light equation. (This basically means giving it a minimum value, so that even dark areas won't be pitch black. This helps make it feel more realistic, because things in real life are still lit even if there's no direct light hitting them.)
  • Try implementing some of the shaders in this WebGL tutorial. It's done with Babylon.js instead of Three.js, but you can skip to the GLSL parts. In particular, the cell shading and Phong shading might interest you.
  • Get some inspiration from the demos on GLSL Sandbox and ShaderToy 
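
For the intensity and ambient suggestions, the change to the lighting equation is small. Here it is mocked up in plain JavaScript (the GLSL version would use dot, max, and clamp the same way; the 1.5 intensity and 0.2 ambient values are just example numbers, and the vectors are assumed to be normalized):

```javascript
// Sketch of a diffuse term with an intensity factor and an ambient floor.
// In GLSL this would be roughly:
//   float diffuse = max(dot(N, L), 0.0) * intensity;
//   float brightness = clamp(diffuse + ambient, 0.0, 1.0);
function dot3(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function brightness(normal, lightDir, intensity, ambient) {
  const diffuse = Math.max(dot3(normal, lightDir), 0) * intensity;
  // The ambient term acts as a minimum brightness, so surfaces facing
  // away from the light are dim rather than pitch black.
  return Math.min(diffuse + ambient, 1);
}

// A surface facing the light directly, boosted past full brightness:
const lit = brightness([0, 0, 1], [0, 0, 1], 1.5, 0.2);
// A surface facing away from the light: only the ambient floor remains.
const dark = brightness([0, 0, 1], [0, 0, -1], 1.5, 0.2);
```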

References

The stones texture and normal map used in this tutorial are taken from OpenGameArt:

http://opengameart.org/content/50-free-textures-4-normalmaps

There are lots of programs that can help you create normal maps. If you're interested in learning more about how to create your own normal maps, this article can help.

How to Fund Your Games By Creating and Selling Game Assets


I've been selling assets on the Unity Asset Store for two years, and I use a portion of the earnings to fund my current game's marketing budget. In this tutorial, I'll show you how you can do the same!

What Are Game Assets?

Game assets include everything that can go into a game, including 3D models, sprites, sound effects, music, code snippets and modules, and even complete projects that can be used by a game engine. Here's a list of examples:

2D/3D Design:

  • Characters
  • Objects
  • Environments
  • Vehicles

GUI:

  • HUD
  • Icons

Scripting:

  • AI
  • Special effects
  • Networking
  • Physics

Audio:

  • Background music
  • Sound effects

This means that, if you make games, you can sell some of your work as assets—whether you are a coder, artist, game designer, or music composer. Some people make a little extra money doing so, some are able to fund their games entirely, and others make a full living just selling assets.

Where Can You Sell Assets?

These are some of the most popular sites that allow you to sell your assets:

  • GameDev Market: A marketplace that includes 2D sprites, 3D models, GUI elements, music, and sound effects, along with community forums that connect asset publishers with game developers, so you can find out what most buyers are looking for.
  • TurboSquid: A marketplace specifically for 3D models, with high-quality, professional work.
  • Game Art 2D: A marketplace that includes 2D sprites, tilesets, and GUI elements.
  • Unity Asset Store: The official Unity marketplace including all types of assets: 2D/3D models, GUI elements, sound effects, music, and everything related to Unity, such as scripting assets, shaders, animations, particle effects, and even complete Unity projects.
  • Unreal Engine Marketplace: The official Unreal Engine marketplace, including all types of assets along with ones related to Unreal Engine only, much like the Unity Asset Store.

How to Make an Asset Pack That Sells Well

First of all, if you are new to the game development world and you're still in the process of learning, don't jump straight into selling assets. Keep learning and take your time, because whatever you create while still learning is likely already available, and usually for free.

Second, you have to be original and look for something that hasn't been created yet. Maybe you have created something new and unique while developing one of your games; why not sell it as an asset? Alternatively, look for something that has been done before but isn't that good, is outdated, or has been abandoned by its developer, and offer something better to attract buyers.

If You Are a Designer or Artist

You might have already created a lot of 2D or 3D designs, but make no mistake: that doesn't mean you should just package whatever you have created and try to sell it. First, you have to take a look at the market, which is already crowded with sprites and models of all types—characters, environments, vehicles—many of them available for free. Don't create some trees or zombie models and expect them to sell.

For example, suppose you came up with an idea for a cool 2D character. Okay, let's see how you can sell it. The key here is variation. Customers are always looking for different takes on the same thing; to attract buyers, you need to include a few animations for this character: walking, running, jumping, giving damage, taking damage, crouching, and so on. At a minimum, include the basic animations that are commonly used; adding extra ones will increase your chances of selling the asset to more people.

But that's not really enough. Why not create a pack of these characters, with the same style and the same animations for each one? Maybe three or four characters, or one character with variations: different clothes and accessories. If you were a customer looking to add characters to your game, you'd definitely want them all in the same style, with all the animations you need, right?

Another example would be an urban environment pack. If you just created a few building models and are ready to sell them, then why not include a few textures for each building which will allow the customer to create a rich environment? You can also include some extra models, like street signs and roads. Make a full, high-quality city construction pack; it will take some time, but in the end you'll be surprised by the number of customers you'll get. Customers are always looking for assets that include different types of the same thing.

Below is a screenshot from a popular Unity 3D art asset called Village Interior Kit, developed by 3DForge. The package contains 2288 meshes, 2626 prefabs, and 81 particle effect prefabs, and only costs $60!

If You Are a Composer

Game developers can already get high-quality sound effects and music loops for low prices, or even for free. Therefore, you need to work on full audio packs that include a variety of sound effects for every action.

For example, if you're creating a set of audio assets for a sci-fi game, make sure that you include all the sound effects that are likely to be needed, like explosions, weapons, collisions, and even opening and closing doors.

As for music loops, an asset including 10 to 15 loops of the same genre is ideal for your customers, because it will allow them to use your audio assets for their entire game.

If You Are a Coder

As a coder, you are tied to the tools and platform you are working on, like Unity or Unreal, and these already have a lot of assets in their respective stores. So, unless you have some incredibly awesome ideas for a new scripting asset, don't create it and expect a lot of sales. 

Many scripts that are available on the Unity Asset Store aren't available on the Unreal Engine marketplace, and vice-versa, so it might be a good idea to look at what's popular on one of the stores and create something similar for the other platform, if it doesn't already exist. 

You should also look for something that solves a problem that many new users face in the game engine you're working with. Below is a screenshot of S-Quest, one of my assets on the Unity Asset Store. This asset allows you to create and customize quests for your games through code, which saves time for many developers and lets them easily add quests, quest logs, an objectives bar, and a player experience manager to their games.

You should also make your assets work dynamically. For example, if you're working on an enemy AI asset, make sure that you include most of the actions an enemy can perform, like wander, patrol, defend, hide, and attack. And don't tie your assets to one game type; in this case, your enemy AI asset should work for both RTS and FPS games, for example.

Finally, make sure that your code is always clean and fully commented, so that your customers can understand what each line does and customize your asset to their needs.

How to Set the Right Price for Your Packs

Now that you've finished making your asset—or even if you're still working on it but have a clear idea on what it will include exactly—it's time to give it a price.

Creating assets requires a lot of time and hard work, but they always cost less than their real value. For example, suppose you've just finished a city construction kit with high quality 3D models for roads, buildings, and street signs, and some additional textures. I'd say it shouldn't go above $80, maybe $100. (I came to this number by checking similar and popular assets in the market.) Yes, I know that's not the real value of such work and you're probably thinking that you can get its real value by doing freelance. But in freelance, you're doing all that work for one customer, who'll pay you once, and you can never sell that work again. If you're releasing it as an asset, however, it can generate more revenue in the long term because that asset will be available to unlimited potential buyers who will be attracted by its low price relative to its high quality content.

The price you pick also depends on other similar assets available in the market; I recommend always pricing your assets a little lower than other assets that offer similar content.

The most successful asset I released is S-Inventory, which includes an inventory system, an equipment system, a vendor system, a crafting system, a containers system, and a skill bar system for Unity, all in one. It only costs $15. If the price were higher, I don't think it would sell well and compete with similar assets in the store.

How to Provide Good Customer Support

As an asset developer, you must be active most of the time to provide support for your customers. (This is especially true if you're selling scripting assets or complete projects for a specific game engine, because then you'll be dealing with a lot of users who are new to these engines and will ask not only how to customize your assets to their needs, but also often questions that are not really related to your assets. If you can answer, they will really appreciate it.) 

I'd say you should always answer your customers in less than 24 hours. Maybe assign one hour a day to handling these support requests. Use e-mails and even Skype calls or Teamviewer sessions to help your customers, and inform your customers if you are going inactive for some time so that they don't assume you've abandoned your assets.

If your asset has a bug or a problem, a customer will quickly reach out to you and report the issue. Don't start by fixing the problem, as this might take some time (particularly if it's a serious problem), and don't leave the customer waiting for an answer. Instead, reply immediately: thank them for finding the bug, let them know that you're looking for a solution, and tell them how much time you expect the fix to take (hours, days, or even weeks). Replying quickly to your customers marks you as a responsible developer and will earn you a good reputation—assuming you follow up with the fixes!

Some customers will be loaded with suggestions requesting changes and improvements to your assets. Always consider their suggestions carefully. If you believe a request would improve your asset for the majority of its users, it's worth adding. If it only serves that one customer, however, you could simply deliver the new feature to them (if it's not too much work) without publishing it in your asset.

Pushing regular updates is another sign that you're an active developer, which helps make customers comfortable buying your assets, since they know you'll be there to help if they run into problems.

What to Do About Reviews

Most people rely on reviews when deciding whether or not to buy an asset. Even if your asset is great and satisfies all of its buyers, don't expect them all to write a good review, or even to rate it; only a few will likely take the time to do that.

There is a way to ask your customers for reviews. When you finish answering their questions, fixing a problem, or handling a request regarding your asset, they will generally be very thankful. Just ask them to consider rating and reviewing your asset at that moment. Aside from complimenting your asset and its features, they will also likely point out the great customer support that you provided. That way, you make your customers' lives easier and, in return, they help you get more sales.

Note that, if your asset doesn't offer what you promised in its description or screenshots, every single customer will play a part in taking it down by writing long, negative reviews warning other users not to buy it. At that point, it might be too late to even update the asset and fix it.

Conclusion

To sum up: creating successful game assets which end up being a decent source of revenue requires you to be original, offer variation, price for the market, and take responsibility when dealing with customers.

How to Incorporate Satisfying Death Mechanics Into Your Game


Games that do not allow you to die (or fail, for that matter) lack heft. When failure is impossible, what purpose is there in defying it? Success loses its meaning when there's no dread.

But player deaths don't have to end in frustration or the player having to replay long stretches of the game. Death mechanics can be integrated into the story and the gameplay, where they become part of the experience.

In this article we'll take a look at different ways to deal with player death and failure, both good and bad. Some games manage to do both at the same time!

The best ways to deal with player death are fully integrated into the narrative and gameplay; these are the ones we should use as inspiration in our titles.

(Note that we'll focus on non-competitive games. In multiplayer games, like Team Fortress 2, other rules apply. Also note that games can get away with no death if death or failure isn't expected. Point-and-click adventures like Monkey Island do not need to have death mechanics as there is no expectation of them.)

Narrative Deaths

Good: Keep Narrative Death and Gameplay Death Separate

Spoiler: In Final Fantasy VII a main character dies halfway through the game. Which is weird, because you and your teammates die all the time, but use resurrection spells to bring people back to life again.

Final Fantasy VII HD
Final Fantasy VII HD

The difference between the two is that one death exists in the narrative, while the other deaths exist in the gameplay. The game deals with this by keeping the two separate and never letting one reference the other. Nobody says, "Hey, couldn't you use Raise [the resurrection spell] on [Character] and save all that heartache?", as that would break the barrier between gameplay and narrative.

Bad: Mix Narrative Death and Gameplay Death

Borderlands 2 fails at this and mixes the two.

The game has a system of New U Stations. These work both as checkpoints from which to restart the game and as resurrection points. When you die, a New U is created: basically a clone of your previous self. You lose some money, and the voice of the machine quips that you should avoid jumping into lava and states how much money you have made for the company.

Borderlands 2 resurrection
A character from Borderlands 2, being reconstituted at a New U station.

This could work like in Final Fantasy VII, a system separate from the narrative, but it is not. The New U stations belong to Hyperion, one of the companies in the game. After a while you may start to wonder why you can't just use it to bring major dead story characters back to life, or why the enemies don't just use them as well.

Borderlands 2 reconsitution
The view in Borderlands 2 during reconstitution, re-entering the game, and travelling, which might imply you "die" and get cloned all the time, even for minor conveniences like travel.

This is especially egregious when Handsome Jack, the main antagonist who is trying to kill you, gives you a mission to kill yourself. You can do it, and you get insta-resurrected... Then why is he trying to kill you?!

Lead writer Anthony Burch laments this as one of the major faults in the story. Player death should be kept strictly in the gameplay and out of the narrative to prevent these weird overlaps.

Giving the Player Another Chance

Bad: Magically Save the Player at the Last Second

In the 2008 reboot of Prince of Persia, you can't actually die. Ever.

When you're about to fall to your death, you are saved at the last second and deposited back where you started. When your health is about to run out in battle, you are saved, you get your health back, and the enemies regain their health too, which is another break in the logic of the universe.

Prince of Persia being saved by Elika
Prince of Persia (2008). Before you die, your companion Elika saves you. The teamwork aspect itself is fun, but not when used to remove stakes from the game.

You end back where you started, with no progress made.

Good: Let the Player Rewind Time

One of the best features in Prince of Persia: The Sands of Time is the use of the titular sands to affect gameplay. The sand shows up in several story segments, but you can also use it during gameplay!

Prince of Persia The Forgotten Sands
Prince of Persia: The Forgotten Sands. All games in the Sands of Time series feature time reversal.

When you die, you just rewind time to a point where you are not dead. Instead of quick-loading, you stay in the game, and the gameplay fully supports this.

In a time-travel game, this feature practically comes with the gameplay. Braid allows you to rewind after death back to when you were alive, and even further to the point where you began the level.

Braid reversing time
Braid

The racing game GRID also allows this, and is maybe the only game in its genre to do so (apart from its sequels). Races can be long, and failing one due to a slip-up or a freak crash can be very frustrating, especially since racing games usually do not have in-race save systems. In GRID, however, you get a few rewinds per race, which you can use to save yourself from a major crash. The limited nature of this feature also keeps the player from abusing it for minor slip-ups.

GRID replay controls
GRID. After a fatal crash you see these replay controls. Press the button on the right to flashback and get back into the race at that point.

Good: Use an Unreliable Narrator

Prince of Persia: The Sands of Time has another fun system for dealing with player death. When you actually do run out of both health and sand, the hero says, "Hang on a second. This isn't how it happened!" before you need to reload.

What's happening is that the entire story is actually told by the character after it happened. The game starts right at the end, and everything is told in flashbacks.

Prince of Persia The Sands of Time starting scene
Prince of Persia: The Sands of Time. This is the very start of the game, and everything after that is told in flashbacks.

This makes it feel as if the prince might actually have made an honest mistake. Realizing you have an unreliable narrator is fun, and it softens the impact of being pulled out of the game to reload.

Call of Juarez: Gunslinger centers its entire game around the conceit that the story is actually a tall tale told in a saloon. The narrator changes details and facts, and the game world reshapes itself in front of your eyes to fit the new story. Sadly it doesn't involve the player dying, which would fit perfectly.

Good: Allow the Player to Escape

In Batman: Arkham Asylum (and its sequels) the grapple-hook is a central element of the game. It lets you quickly move around the world and climb objects.

Batman Arkham Origins pit
Batman: Arkham Origins. When you fail a jumping segment, Batman pulls himself up where you began.

If you should fall into a pit in the game, you do not die. Instead, Batman pulls himself out where you began the jump. This nicely integrates the grapple mechanic to prevent some player deaths. As there are plenty of other ways to die in the game, this does not feel like a cop-out.

EVE Online also has a unique way of escaping before actually having to die.

When your spaceship in EVE is destroyed, the "pilot capsule" remains. This is a ship that can do nothing other than move. Usually you use it to get back to the nearest station, where you can get a proper ship.

The capsule can be destroyed, however, killing you and taking all your costly implants with it. You can get cloned at the nearest port, but without the implants that were part of your character.

This creates a unique mechanic in MMOs. Capsule killing is frowned upon, but is used in assassinations and other plots.

Good: Keep the Player in a "Downed" State With a Way to Get Back Into the Action

When you run out of health in the Borderlands games, you don't immediately die. Instead, you go into "last chance" mode.

The screen color fades until everything is grey, and you can only crawl. But if you manage to kill an enemy, you get a "second wind", stand up, get a portion of your health back, and can fight and walk again. You can also be helped up by another player, if you are playing co-op.

Borderlands 2 second wind
Borderlands 2. The Second Wind is also an empowerment moment. Instead of dying you just keep on going.

This is a great system. It keeps you in the game and engaged, and doesn't immediately throw you out.

Long before Borderlands, though, Prey already had a second wind mechanic.

In Prey you get the ability to enter the "spirit world" during gameplay. Inside, you are invisible, you can move through certain barriers, and you can use paths that only exist there. Using it is necessary to progress through the main game.

Prey spirit world
The spirit world of Prey. The entire game has Native American culture woven into it.

When you die you go into this spirit world, a ghostly version of the level you are in. If you manage to kill a certain number of spirits with your bow, you get resurrected right on the spot where you died and can continue playing.

Left 4 Dead has a similar system, where upon losing all your health you are "downed". You fall to the ground and cannot move, but you can still fire your pistol. All the while the screen slowly turns grey, until you actually die. You can continue playing if another player helps you up, which keeps groups of players tight.

Left 4 Dead incapacitated
Being downed in Left 4 Dead lets you still fire a pistol, if you have one. There are other types of being downed or being incapacitated where you cannot fire.

But the most integrated version of a downed state is in Republic Commando.

This was the first game to include a resuscitation option, way before Battlefield 2, Left 4 Dead, or Mass Effect 3. When one of your teammates dies, you can bring them back to life with an injection of the magical healing substance Bacta and a defibrillator blast.

What it does (still!) uniquely is offer you the choice of commanding your squad to either come revive you or keep fighting.

Republic Commando
Republic Commando's downed screen, which sadly has never been copied.

When you die, you see three options displayed on your (now blurry) in-game helmet (pictured above). Maintain Current Orders has your squad continue to engage the enemy, while Recall and Revive calls one of them over to bring you back up. In the heat of battle it might be more useful to have them reduce the danger by eliminating a few enemies first, so calling for a revive isn't always the right move. Only the last option has you reload the game, and it becomes necessary when your entire team dies.

This creates tense fights and moments where you might get downed and have to weigh the chances of your squadmates being able to revive you. The AI is also good enough to support this—with horrible AI it wouldn't work as well as it does.

Above All...

Best: Integrate Death Into the Story

Bioshock Infinite handles the theme of multiple universes. You jump between them in the story.

And when you do die, you suddenly wake up in your office, which looks like it did in flashback sequences. But opening the door puts you close to where you were when you died.

Bioshock Infinite office
Bioshock Infinite. You also visit your office in regular flashback sequences.

This implies that you did actually die. Then the universe-hopping characters that engaged you in the first place went to another universe to get another you, and fast-forwarded through the story to leave you at the point of your previous death.

This is alluded to at the start of the game, where you see a list of decisions you have made before, and it implies there were several dozen yous.

Bioshock Infinite heads and tails
Bioshock Infinite. That's the number of yous that have been through here before.

Once again player death is woven beautifully into the gameplay, and actually extends the mythology.

Bastion uses a similar approach in its New Game Plus playthrough. At the end of the game you get the choice of leaving the broken world or engaging a machine with unknown properties, which could possibly turn back time to before the game started. If you do the latter, characters refer to the repetition after the game has been started again, integrating the out-of-game action into the game itself.

Good: Keep Downtime to a Minimum

If there is no way to have a fun death mechanic, at least make sure it's as painless as possible. This means mainly two things:

Allow reloading or restarting as quickly as possible. When you fail in Trials you can press the restart button, which puts you immediately at the last checkpoint. There is no lengthy animation or load time. Players aren't frustrated by failure, yet it still retains its heft.

Trials 2 reset
Trials 2: Second Edition. Pressing backspace at any time resets you to the latest checkpoint. If you crash, a timer starts, which resets you there after a few seconds.

Also, keep automatic and fair restart/save points, so the player doesn't feel punished by having to replay segments.

Gunpoint uses a fun system of staggered reload points. When you die, you get to choose from several reload points at different moments in the past, which essentially turns reloading into a time-travel mechanic.

Gunpoint death screen
Gunpoint's death screen. The multiple points offer various options of how to approach the situation again.

Conclusion

Having the ability to fail in a game is important, as it gives the gameplay and story meaning and substance. Using an unreliable narrator is a relatively cost-effective way of having player death integrated into the story. Avoiding having in-game characters acknowledge resurrection systems as part of the narrative also helps maintain the suspension of disbelief.

The Creative App Bundle


The Creative App Bundle, by Envato Bundles, features six beautiful Mac apps to help inspire your designs and streamline your workflow. Plus it also includes two unlockable games and a stack of super discounts worth $1,000, all for just $39!

What Apps Will I Get?

Montage of apps included in bundle

Macaw

Built for today’s web designer. It provides the same flexibility as your favorite image editor but also writes semantic HTML and remarkably succinct CSS. RRP $79.

Arq Backup

Automatically back up all of your files and store them securely where they’re readable only by you. RRP $40.

Ember

Your visual scrapbook; take screenshots, snap webpages, draw on images and much more! Capture and organize the images that inspire you. RRP $64.99.

RapidWeaver

The all-in-one web design software for Mac that lets you build the website you’ve always wanted. Design, Build and Publish. RRP $114.99.

ColorSnapper

The Mac OS X color picker app for designers and developers which makes it easy to collect, adjust, organize and export colors of any pixel on the screen. RRP $9.99.

Amberlight

A unique art tool that creates beautiful computer-generated images. Explore your artistic side and create amazing artworks. RRP $29.99.

Plus you can share to unlock two fun games, and also give the bundle as a gift with our new checkout option!

Over $1,000 in value, all for just $39.

Hurry, this is a limited time offer. Bundle ends 20 August at 11:59PM PST.

Creating Realistic Terrain for HTML5 Games With WebGL


The first version of Flight Simulator shipped in 1980 for the Apple II and, amazingly, it was in 3D! That was a remarkable achievement. It’s even more amazing when you consider that all of the 3D was done by hand, the result of meticulous calculations and low-level pixel commands. When Bruce Atwick tackled the early versions of Flight Simulator, not only were there no 3D frameworks, but there were no frameworks at all! Those versions of the game were mostly written in assembly, just a single step away from ones and zeroes that flow through a CPU.

When we set out to reimagine Flight Simulator (or Flight Arcade as we call it) for the web and to demonstrate what’s possible in the new Microsoft Edge browser and EdgeHTML rendering engine, we couldn’t help but think about the contrast of creating 3D then and now—old Flight Sim, new Flight Sim, old Internet Explorer, new Microsoft Edge. Modern coding seems almost luxurious as we sculpt 3D worlds in WebGL with great frameworks like Babylon.js. It lets us focus on very high-level problems. 

In this article, I’ll share our approach to one of these fun challenges: a simple way to create realistic-looking large-scale terrain.

Note: Interactive code and examples for this article are also located at Flight Arcade / Learn.

Modeling and 3D Terrain

Most 3D objects are created with modeling tools, and for good reason. Creating complex objects (like an airplane or even a building) is hard to do in code. Modeling tools almost always make sense, but there are exceptions! One of those might be cases like the rolling hills of the Flight Arcade island. We ended up using a technique that we found to be simpler and possibly even more intuitive: a heightmap.

A heightmap is a way to use a regular two-dimensional image to describe the elevation relief of a surface like an island or other terrain. It’s a pretty common way to work with elevation data, not only in games but also in geographic information systems (GIS) used by cartographers and geologists.

To help you get an idea for how this works, check out the heightmap in this interactive demo. Try drawing in the image editor, and then check out the resulting terrain.

Screenshot of the heightmap demo

The concept behind a heightmap is pretty straightforward. In an image like the one above, pure black is the “floor” and pure white is the tallest peak. The grayscale colors in-between represent corresponding elevations. This gives us 256 levels of elevation, which is plenty of detail for our game. Real-life applications might use the full color spectrum to store significantly more levels of detail (256^4 = 4,294,967,296 levels if you include an alpha channel).

A heightmap has a few advantages over a traditional polygonal mesh:

First, heightmaps are a lot more compact. Only the most significant data (the elevation) gets stored. It will need to be turned into a 3D object programmatically, but this is the classic trade-off: you save space now and pay later with computation. By storing the data as an image, you get another space advantage: you can leverage standard image compression techniques and make the data tiny (by comparison)!

Second, heightmaps are a convenient way to generate, visualize and edit terrain. It's pretty intuitive when you see one. It feels a little like looking at a map. This proved to be particularly useful for Flight Arcade. We designed and edited our island right in Photoshop! This made it very simple to make small adjustments as needed. When, for example, we wanted to make sure that the runway was completely flat, we just made sure to paint over that area in a single color.

You can see the heightmap for Flight Arcade below. See if you can spot the “flat” areas we created for the runway and the village.

heightmap for Flight Arcade
The heightmap for the Flight Arcade island. It was created in Photoshop and it's based on the "big island" in a famous Pacific Ocean island chain. Any guesses?
A texture that gets mapped onto the resulting 3D mesh after the heightmap is decoded. More on that below.

Decoding the Heightmap

We built Flight Arcade with Babylon.js, and Babylon gave us a pretty straightforward path from heightmap to 3D. Babylon provides an API to generate a mesh geometry from a heightmap image:
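A minimal sketch of that call is below. The file name, dimensions, and height range here are placeholder values for illustration, not Flight Arcade's actual settings:

```javascript
// Sketch: build a ground mesh from a heightmap image with Babylon.js.
// All numeric values are example placeholders.
function createTerrain(scene) {
  return BABYLON.Mesh.CreateGroundFromHeightMap(
    "ground",          // mesh name
    "heightmap.png",   // URL of the heightmap image
    1000,              // width of the resulting ground
    1000,              // depth of the resulting ground
    250,               // subdivisions per side
    0,                 // elevation for pure black
    50,                // elevation for pure white
    scene,
    false              // not updatable
  );
}
```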

The amount of detail is determined by the subdivisions parameter. It's important to note that this parameter refers to the number of subdivisions on each side of the heightmap image, not the total number of cells. So increasing this number slightly can have a big effect on the total number of vertices in your mesh.

  • 20 subdivisions = 400 cells
  • 50 subdivisions = 2,500 cells
  • 100 subdivisions = 10,000 cells
  • 500 subdivisions = 250,000 cells
  • 1,000 subdivisions = 1,000,000 cells

In the next section we'll learn how to texture the ground, but when experimenting with heightmap creation, it's useful to see the wireframe. Here is the code to apply a simple wireframe texture so it’s easy to see how the heightmap data is converted into the vertices of our mesh:
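A sketch of that, assuming the ground mesh has already been created:

```javascript
// Sketch: apply a wireframe material so the mesh structure is visible
// while tuning the heightmap.
function applyWireframe(ground, scene) {
  const material = new BABYLON.StandardMaterial("wireframe", scene);
  material.wireframe = true;
  ground.material = material;
}
```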

Creating Texture Detail

Once we had a model, mapping a texture was relatively straightforward. For Flight Arcade, we simply created a very large image that matched the island in our heightmap. The image gets stretched over the contours of the terrain, so the texture and the height map remain correlated. This was really easy to visualize and, once again, all of the production work was done in Photoshop.

The original texture image was created at 4096x4096. That's pretty big! (We eventually halved it to 2048x2048 in order to keep the download reasonable, but all of the development was done with the full-size image.) Here's a full-pixel sample from the original texture.

A full-pixel sample of the original island texture
A full-pixel sample of the original island texture. The entire town is only around 300 px square.

Those rectangles represent the buildings in the town on the island. We quickly noticed a discrepancy in the level of texturing detail that we could achieve between the terrain and the other 3D models. Even with our giant island texture, the difference was distractingly apparent!

To fix this, we “blended” additional detail into the terrain texture in the form of random noise. You can see the before and after below. Notice how the additional noise enhances the appearance of detail in the terrain.

Before and after comparison of airport texture

We created a custom shader to add the noise. Shaders give you an incredible amount of control over the rendering of a WebGL 3D scene, and this is a great example of how a shader can be useful.

A WebGL shader consists of two major pieces: the vertex and fragment shaders. The principal goal of the vertex shader is to map vertices to a position in the rendered frame. The fragment (or pixel) shader controls the resulting color of the pixels.

Shaders are written in a high-level language called GLSL (Graphics Library Shader Language), which resembles C. This code is executed on the GPU. For an in-depth look at how shaders work, see this tutorial on how to create your own custom shader for Babylon.js, or see this beginner's guide to coding graphics shaders.

The Vertex Shader

We're not changing how our texture is mapped on to the ground mesh, so our vertex shader is quite simple. It just computes the standard mapping and assigns the target location.

The Fragment Shader

Our fragment shader is a little more complicated. It combines two different images: the base and blend images. The base image is mapped across the entire ground mesh. In Flight Arcade, this is the color image of the island. The blend image is the small noise image used to give the ground some texture and detail at close distances. The shader combines the values from each image to create a combined texture across the island.

The final lesson in Flight Arcade takes place on a foggy day, so the other task our pixel shader has is to adjust the color to simulate fog. The adjustment is based on how far the vertex is from the camera, with distant pixels being more heavily "obscured" by the fog. You'll see this distance calculation in the calcFogFactor function above the main shader code.
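To make this concrete, here is a rough sketch of what such a vertex/fragment pair might look like. This is not the exact Flight Arcade source: the sampler names, the noise tiling factor, and the exponential fog model are all assumptions for illustration. The GLSL is kept in strings so it can be registered with Babylon (for example via BABYLON.Effect.ShadersStore):

```javascript
// Hypothetical vertex shader: standard projection plus UV pass-through.
const blendVertexShader = `
precision highp float;
attribute vec3 position;
attribute vec2 uv;
uniform mat4 worldViewProjection;
varying vec2 vUV;
varying vec4 vPosition;
void main(void) {
  vUV = uv;
  vPosition = worldViewProjection * vec4(position, 1.0);
  gl_Position = vPosition;
}`;

// Hypothetical fragment shader: combine the base and blend textures,
// then fade distant pixels toward a fog color.
const blendFragmentShader = `
precision highp float;
varying vec2 vUV;
varying vec4 vPosition;
uniform sampler2D baseSampler;
uniform sampler2D blendSampler;
uniform float fogDensity;

float calcFogFactor(float distance) {
  // exponential fog: 1.0 = no fog, 0.0 = fully obscured
  return clamp(exp(-fogDensity * distance), 0.0, 1.0);
}

void main(void) {
  vec4 baseColor = texture2D(baseSampler, vUV);
  // tile the small noise texture at a much higher frequency
  vec4 blendColor = texture2D(blendSampler, vUV * 64.0);
  vec4 combined = baseColor * blendColor * 2.0;
  float fog = calcFogFactor(length(vPosition.xyz));
  vec3 fogColor = vec3(0.8, 0.85, 0.9);
  gl_FragColor = vec4(mix(fogColor, combined.rgb, fog), 1.0);
}`;
```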

The final piece for our custom Blend shader is the JavaScript code used by Babylon. The primary purpose of this code is to prepare the parameters passed to our vertex and pixel shaders.
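A rough sketch of that setup with Babylon's ShaderMaterial follows; the shader name, texture file names, and uniform list are illustrative assumptions rather than the actual Flight Arcade asset names:

```javascript
// Sketch: create a material backed by custom "blend" shaders and hand
// it the two textures plus the fog uniform. Names and file paths are
// placeholders.
function createBlendMaterial(scene) {
  const material = new BABYLON.ShaderMaterial("blendMaterial", scene, "blend", {
    attributes: ["position", "uv"],
    uniforms: ["worldViewProjection", "fogDensity"],
    samplers: ["baseSampler", "blendSampler"]
  });
  material.setTexture("baseSampler", new BABYLON.Texture("island.png", scene));
  material.setTexture("blendSampler", new BABYLON.Texture("noise.png", scene));
  material.setFloat("fogDensity", 0.002);
  return material;
}
```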

Babylon.js makes it easy to create a custom shader-based material. Our Blend material is relatively simple, but it really made a big difference in the appearance of the island when the plane flew low to the ground. Shaders bring the power of the GPU to the browser, expanding the types of creative effects you can apply to your 3D scenes. In our case, that was the finishing touch!

More Hands-On With JavaScript

Microsoft has a bunch of free learning on many open-source JavaScript topics, and we’re on a mission to create a lot more with Microsoft Edge. Here are some to check out:

And some free tools to get started: Visual Studio Code, Azure Trial, and cross-browser testing tools—all available for Mac, Linux, or Windows.

This article is part of the web dev tech series from Microsoft. We’re excited to share Microsoft Edge and the new EdgeHTML rendering engine with you. Get free virtual machines or test remotely on your Mac, iOS, Android, or Windows device @ http://dev.modern.ie/.

Creating Dynamic Sound With the Web Audio API


Before the Web Audio API, HTML5 gave us the <audio> element. It might seem hard to remember now, but prior to the <audio> element, our best option for sound in a browser was a plugin! 

The <audio> element was indeed exciting, but it had a pretty singular focus. It was essentially a video player without the video, good for long audio like music or a podcast, but ill-suited for the demands of gaming. We put up with (or found workarounds for) looping issues, concurrent sound limits, glitches and total lack of access to the sound data itself.

Fortunately, our patience has paid off. Where the <audio> element may have been lacking, the Web Audio API delivers. It gives us unprecedented control over sound, and it's perfect for everything from gaming to sophisticated sound editing. All this with a tidy API that's really fun to use and well supported.

Let's be a little more specific: Web Audio gives you access to the raw waveform data of a sound and lets you manipulate, analyze, distort or otherwise modify it. It is to audio what the canvas API is to pixels. You have deep and mostly unfettered access to the sound data. It's really powerful!

This tutorial is the second in a series on Flight Arcade—built to demonstrate what’s possible on the web platform and in the new Microsoft Edge browser and EdgeHTML rendering engine. Interactive code and examples for this article are also located at Flight Arcade / Learn.

Flight Sounds

Even the earliest versions of Flight Simulator made efforts to recreate the feeling of flight using sound. One of the most important sounds is the engine, whose pitch changes with the throttle. We knew that, as we reimagined the game for the web, a static engine noise would really seem flat, so the dynamic pitch of the engine was an obvious candidate for Web Audio.

Less obvious (but possibly more interesting) was the voice of our flight instructor. In early iterations of Flight Arcade, we played the instructor's voice just as it had been recorded, and it sounded as if it was coming out of a sound booth! We noticed that we started referring to the voice as the "narrator" instead of the "instructor". 

Somehow that pristine sound broke the illusion of the game. It didn't seem right to have such perfect audio coming over the noisy sounds of the cockpit. So, in this case, we used Web Audio to apply some simple distortions to the voice instruction and enhance the realism of learning to fly!

In the sections below, I'll give you a pretty detailed view of how we used the Web Audio API to create these sounds.

Using the API: AudioContext and Audio Sources

The first step in any Web Audio project is to create an AudioContext object. Some browsers (including Chrome) still require this API to be prefixed, so the code looks like this:
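Something like this sketch (wrapped in a function here so it runs only in a browser context):

```javascript
// Create an AudioContext, falling back to the prefixed constructor
// that some browsers still require.
function createAudioContext() {
  const AudioContextCtor = window.AudioContext || window.webkitAudioContext;
  return new AudioContextCtor();
}
```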

Then you need a sound. You can actually generate sounds from scratch with the Web Audio API, but for our purposes we wanted to load a prerecorded audio source. If you already had an HTML <audio> element, you could use that, but a lot of times you won't. After all, who needs an <audio> element if you've got Web Audio? Most commonly, you'll just download the audio directly into a buffer with an HTTP request:
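A sketch of that download-and-decode step; the URL and callback names are placeholders:

```javascript
// Download an audio file and decode it into an AudioBuffer.
function loadSound(audioContext, url, onDecoded) {
  const request = new XMLHttpRequest();
  request.open("GET", url, true);
  request.responseType = "arraybuffer";
  request.onload = function () {
    audioContext.decodeAudioData(request.response, onDecoded, function (error) {
      console.error("Failed to decode " + url, error);
    });
  };
  request.send();
}
```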

Now we have an AudioContext and some audio data. Next step is to get these things working together. For that, we need...

AudioNodes

Just about everything you do with Web Audio happens via some kind of AudioNode, and they come in many different flavors: some nodes are used as audio sources, some as audio outputs, and some as audio processors or analyzers. You can chain them together to do interesting things.

You might think of the AudioContext as a sort of sound stage. The various instruments, amplifiers and speakers that it contains would all be different types of AudioNodes. Working with the Web Audio API is a lot like plugging all these things together (instruments into, say, effects pedals, the pedals into an amplifier, and the amplifier into speakers, etc.).

Well, in order to do anything interesting with our newly acquired AudioContext audio sources, we need to first encapsulate the audio data as a source AudioNode.
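A minimal sketch of that step:

```javascript
// Wrap a decoded AudioBuffer in a source node so it can be played.
// Source nodes are cheap, single-use objects; the heavy data stays
// in the buffer.
function createSource(audioContext, audioBuffer) {
  const source = audioContext.createBufferSource();
  source.buffer = audioBuffer;
  return source;
}
```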

Playback

That's it. We have a source. But before we can play it, we need to connect it to a destination node. For convenience, the AudioContext exposes a default destination node (usually your headphones or speakers). Once connected, it's just a matter of calling start and stop.

It's worth noting that you can only call start() once on each source node. That means "pause" isn't directly supported. Once a source is stopped, it's expired. Fortunately, source nodes are inexpensive objects, designed to be created easily (the audio data itself, remember, is in a separate buffer). So, if you want to resume a paused sound you can simply create a new source node and call start() with a timestamp parameter. AudioContext has an internal clock that you can use to manage timestamps. 
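A sketch of that resume pattern; keeping track of the saved offset is left to the caller:

```javascript
// "Resume" a paused sound by creating a fresh source node and starting
// it with an offset (in seconds) into the buffer.
function playFrom(audioContext, audioBuffer, offsetSeconds) {
  const source = audioContext.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioContext.destination);
  // start(when, offset): begin immediately, offsetSeconds into the buffer
  source.start(0, offsetSeconds);
  return source;
}
```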

The Engine Sound

That's it for the basics, but everything we've done so far (simple audio playback) could have been done with the old <audio> element. For Flight Arcade, we needed to do something more dynamic. We wanted the pitch to change with the speed of the engine.

That's actually pretty simple with Web Audio (and would have been nearly impossible without it)! The source node has a rate property which affects the speed of playback. To increase the pitch we just increase the playback rate:
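For example, something like the following; the throttle-to-rate mapping here is a made-up illustration, not Flight Arcade's actual tuning:

```javascript
// Map the throttle (0..1) to a playback rate. Higher rate = higher pitch.
function setEnginePitch(engineSource, throttle) {
  engineSource.playbackRate.value = 0.5 + throttle;
}
```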

The engine sound also needs to loop. That's also very easy (there's a property for it too):

But there's a catch. Many audio formats (especially compressed audio) store the audio data in fixed size frames and, more often than not, the audio data itself won't "fill" the final frame. This can leave a tiny gap at the end of the audio file and result in clicks or glitches when those audio files get looped. Standard HTML audio elements don't offer any kind of control over this gap, and it can be a big challenge for web games that rely on looping audio.

Fortunately, gapless audio playback with the Web Audio API is really straightforward. It's just a matter of setting a timestamp for the beginning and end of the looping portion of the audio (note that these values are relative to the audio source itself and not the AudioContext clock).
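A sketch of looping only the clean portion of the buffer:

```javascript
// Loop a sub-section of the buffer to avoid the end-of-file gap.
// Loop points are in seconds, relative to the audio buffer itself.
function loopGapless(engineSource, loopStartSeconds, loopEndSeconds) {
  engineSource.loop = true;
  engineSource.loopStart = loopStartSeconds;
  engineSource.loopEnd = loopEndSeconds;
}
```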

The Instructor's Voice

So far, everything we've done has been with a source node (our audio file) and an output node (the sound destination we set earlier, probably your speakers), but AudioNodes can be used for a lot more, including sound manipulation or analysis. In Flight Arcade, we used two node types (a ConvolverNode and a WaveShaperNode) to make the instructor's voice sound as if it's coming through a speaker.

Convolution

From the W3C spec:

Convolution is a mathematical process which can be applied to an audio signal to achieve many interesting high-quality linear effects. Very often, the effect is used to simulate an acoustic space such as a concert hall, cathedral, or outdoor amphitheater. It can also be used for complex filter effects, like a muffled sound coming from inside a closet, sound underwater, sound coming through a telephone, or playing through a vintage speaker cabinet. This technique is very commonly used in major motion picture and music production and is considered to be extremely versatile and of high quality.

Convolution basically combines two sounds: a sound to be processed (the instructor's voice) and a sound called an impulse response. The impulse response is, indeed, a sound file, but it's really only useful for this kind of convolution process. You can think of it as an audio filter of sorts, designed to produce a specific effect when convolved with another sound. The result is typically far more realistic than simple mathematical manipulation of the audio.

To use it, we create a convolver node, load the audio containing the impulse response, and then connect the nodes.
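A minimal sketch of that wiring; impulseBuffer is assumed to be an already-decoded AudioBuffer containing the impulse response:

```javascript
// Route the voice through a ConvolverNode loaded with an impulse response.
function convolveVoice(audioContext, voiceSource, impulseBuffer) {
  const convolver = audioContext.createConvolver();
  convolver.buffer = impulseBuffer;
  voiceSource.connect(convolver);
  convolver.connect(audioContext.destination);
  return convolver;
}
```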

Wave Shaping

To increase the distortion, we also used a WaveShaper node. This type of node lets you apply mathematical distortion to the audio signal to achieve some really dramatic effects. The distortion is defined as a curve function. Those functions can require some complex math. For the example below, we borrowed a good one from our friends at MDN.
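The curve function we used looks roughly like this (adapted from MDN's well-known distortion-curve example; the sample count and constants are from that example, not anything Flight Arcade-specific):

```javascript
// Build a distortion curve for a WaveShaperNode.
// "amount" controls the intensity of the distortion.
function makeDistortionCurve(amount) {
  const k = amount;
  const samples = 44100;
  const curve = new Float32Array(samples);
  const deg = Math.PI / 180;
  for (let i = 0; i < samples; i++) {
    const x = (i * 2) / samples - 1; // map index to [-1, 1)
    curve[i] = ((3 + k) * x * 20 * deg) / (Math.PI + k * Math.abs(x));
  }
  return curve;
}

// Usage sketch: shaper.curve = makeDistortionCurve(400);
```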

Notice the stark difference between the original waveform and the waveform with the WaveShaper applied to it.

The example above is a dramatic representation of just how much you can do with the Web Audio API. Not only are we making some pretty dramatic changes to the sound right from the browser, but we're also analyzing the waveform and rendering it into a canvas element! The Web Audio API is incredibly powerful and versatile and, frankly, a lot of fun!



How to Adapt A* Pathfinding to a 2D Grid-Based Platformer: Theory


A* grid-based pathfinding works well for games in which the characters can move freely along both the x- and y-axes. The problem with applying this to platformers is that movement on the y-axis is heavily restricted, due to simulated gravitational forces. 

In this tutorial, I'll give a broad overview of how to modify a standard A* algorithm to work for platformers by simulating these gravitational restrictions. (In the next part, I'll walk through actually coding the algorithm.) The adapted algorithm could be used to create an AI character that follows the player, or to show the player a route to their goal, for example.

I assume here that you're already familiar with A* pathfinding, but if you need an introduction, Patrick Lester's A* Pathfinding for Beginners is excellent.

Demo

You can play the Unity demo, or the WebGL version (64MB), to see the final result in action. Use WASD to move the character, left-click on a spot to find a path you can follow to get there, and right-click a cell to toggle the ground at that point.

Defining the Rules

Before we can adapt the pathfinding algorithm, we need to clearly define what forms the paths themselves can take.

What Can the Character Do?

Let's say that our character takes up one cell, and can jump three cells high. 

We won't allow our character to move diagonally through cells, because we don't want it to go through solid terrain. (That is, we won't allow it to move through the corner of a cell, only through the top, bottom, left, or right side.) To move to a diagonally adjacent cell, the character must move one cell up or down, and one cell left or right. 

Since the character's jump height is three cells, then whenever it jumps, after it has moved up three times, it should not be able to move up any more cells, but it should still be able to move to the side.

Based on these rules, here is an example of the path the character would take during its maximum distance jump:

Of course the character can jump straight up, or with any combination of left and right movements, but this example shows the kind of approximation we'll need to embrace when calculating the path using the grid.

Representing the Jump Paths With Jump Values

Each of the cells in the path will need to keep data about the jump height, so that our algorithm can detect that the player can jump no higher and must start falling down. 

Let's start by assigning each cell's jump value by increasing it by one, each cell, for as long as the jump continues. Of course, when the character is on the ground, the jump value should be 0.

Here's that rule applied to the same maximum-distance jump path from before:

The cell that contains a 6 marks the highest point in the path. This makes sense, since that's twice the value of the character's maximum jump height, and the character alternates moving one cell up and one cell to the side, in this example.

Note that, if the character jumps straight up, and we continue increasing the jump value by one each cell, then this no longer works, because in that case the highest cell would have a jump value of 3.

Let's modify our rule to increase the jump value to the next even number whenever the character moves upwards. Then, if the jump value is even, the character can move either left, right, or down (or up, if they haven't reached a jump value of 6 yet), and if the jump value is odd, the character can only move up or down (depending on whether they have reached the peak of the jump yet).

Here's what that looks like for a jump straight up:

And here's a more complicated case:

Here's how the jump values are calculated:

  1. Start on the ground: jump value is 0.
  2. Jump horizontally: increase the jump value by 1, so the jump value is 1.
  3. Jump vertically: increase the jump value to the next even number: 2.
  4. Jump vertically: increase the jump value to the next even number: 4.
  5. Jump horizontally: increase the jump value by 1; jump value is now 5.
  6. Jump vertically: increase the value to the next even number: 6. (The character can no longer move upwards, so only left, right and bottom nodes are available.)
  7. Fall horizontally: increase the jump value by 1; jump value is now 7.
  8. Fall vertically: increase the jump value to the next even number: 8.
  9. Fall vertically: increase the value to the next even number: 10.
  10. Fall vertically: since the character is now on the ground, set the jump value to 0.

As you can see, with this kind of numbering we know exactly when the character reaches its maximum jump height: it's the cell with the jump value equal to twice the maximum character jump height. If the jump value is less than this, the character can still move upwards; otherwise, we need to ignore the node directly above.
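The numbering rules above can be sketched as a small helper function (the move labels here are my own shorthand, not part of the algorithm's vocabulary):

```javascript
// Compute the next jump value from the current one, per the rules above.
// move is "side" (one cell left/right), "vertical" (one cell up/down),
// or "land" (the character reached the ground).
function nextJumpValue(current, move) {
  if (move === "land") return 0;
  if (move === "side") return current + 1;
  // vertical moves bump the value to the next even number
  return current % 2 === 0 ? current + 2 : current + 1;
}
```

Running this through the ten steps above reproduces the sequence 0, 1, 2, 4, 5, 6, 7, 8, 10, 0.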

Adding Coordinates

Now that we're aware of the kind of movements the character can make on the grid, let's consider the following setup:

The green cell at position (3, 1) is the character; the blue cell at position (2, 5) is the goal. Let's draw a route that the A* algorithm may choose first to attempt to reach the goal:

The yellow number in the top right corner of a cell is the cell's jump value. As you can see, with a straight upwards jump, the character can jump three tiles up, but no further. This is no good. 

We may have more luck with another route, so let's rewind our search and start again from node (3, 2).

As you can see, jumping on the block to the right of the character allowed it to jump high enough to get to the goal! However, there is a big problem here... 

In all likelihood, the first route the algorithm will take is the first one we examined. After taking it, the algorithm will not get very far, and will end up back at node (3, 2). It can then search through nodes (4, 2), (4, 3), (3, 3) (again), (3, 4) (again), (3, 5), and finally the goal cell, (2, 5).

In a basic version of the A* algorithm, if a node has been visited once already, then we don't need to process it ever again. In this version, however, we do. This is because the nodes are not distinguished solely by x- and y-coordinates, but also by jump values. 

In our first attempt to find a path, the jump value at node (3, 3) was 4; in our second attempt, it was 3. Since in the second attempt the jump value at that cell was smaller, it meant that we could potentially get higher from there than we could during the first attempt. 

This basically means that node (3, 3) with a jump value of 4 is a different node than the node at (3, 3) with a jump value of 3. The grid essentially needs to become three-dimensional at some coordinates to accommodate these differences, like so:

We can't simply change the jump value of the node at (3, 3) from 4 to 3, because some paths use the same node multiple times; if we did that, we would basically override the previous node and that would of course corrupt the end result. 

We'd have the same issue if the first path had reached the goal despite the higher jump values; if we had overridden one of the nodes with a more promising one, then we would have no way to recover the original path.

The Strengths and Weaknesses of Using Grid-Based Pathfinding

Remember, it's the algorithm that uses a grid-based approach; in theory, your game and its levels don't have to.

Strengths

  • Works really well with tile-based games.
  • Extends the basic A* algorithm, so you don't have to have two completely different algorithms for flying and land characters.
  • Works really well with dynamic terrain; doesn't require a complicated node-rebuilding process.

Weaknesses

  • Inaccuracy: minimum distance is the size of a single cell.
  • Requires a grid representation of each map or level.

Engine and Physics Requirements

Character Freedom vs Algorithm Freedom

It is important for the character to have at least as much freedom of movement as the algorithm expects—and preferably a bit more than that. 

Having the character movement match exactly the algorithm's constraints is not a viable approach, due to the discrete nature of the grid and the grid's cell size. It is possible to code the physics in such a way that the algorithm will always find a way if there is one, but that requires you to build the physics for that purpose. 

The approach I take in this tutorial is to fit the algorithm to the game's physics, not the other way around.

The main problems occur in edge cases, when the algorithm's expected freedom of movement does not match the character's true, in-game freedom of movement.

When the Character Has More Freedom Than the Algorithm Expects

Let's say the AI allows for jumps six cells long, but the game's physics allow for a seven-cell jump. If there is a path that requires the longer jump to reach the goal in the quickest time, then the bot will ignore that path and choose the more conservative one, thinking that the longer jump is impossible. 

If there is a path that requires the longer jump and there is no other way to reach the goal, then the pathfinder will conclude that the goal is unreachable.

When the Algorithm Expects More Freedom Than the Character Has

If, conversely, the algorithm thinks that it is possible to jump seven cells away, but the game's physics actually allow only for a six-cell jump, then the bot may either follow the wrong path and fall into a place from which it cannot get out, or try to find a path again and receive the same incorrect result, causing a loop.

(Out of these two problems, it's preferable to let the game's physics allow for more freedom of movement than the algorithm expects.)

Solving These Problems

The first way to ensure that the algorithm is always correct is to have levels which players cannot modify. In this case, you just need to make sure that whatever terrain you design or generate works well with the pathfinding AI.

The second solution to these problems is to tweak the algorithm, the physics, or both, to make sure that they match. As I mentioned earlier, this doesn't mean they need to match exactly; for example, if the algorithm thinks the character can jump five cells upwards, it is fine to set the real maximum jump at 5.5 cells high. (Unlike the algorithm, the game's physics can use fractional values.)

Depending on the game, it could also be true that the AI bot not finding an existing path is not a huge deal; it will simply give up and go back to its post, or just sit and wait for the player.

Conclusion

At this point, you should have a decent conceptual understanding of how A* pathfinding can be adapted to a platformer. In my next tutorial, we'll make this concrete, by actually adapting an existing A* pathfinding algorithm in order to implement this in a game!

How to Adapt A* Pathfinding to a 2D Grid-Based Platformer: Implementation


Now that we have a good idea of how our A* platforming algorithm will work, it's time to actually code it. Rather than build it from scratch, we'll adapt an existing A* pathfinding system to add the new platformer compatibility.

The implementation we'll adapt in this tutorial is Gustavo Franco's grid-based A* pathfinding system, which is written in C#; if you're not familiar with it, read his explanation of all the separate parts before continuing. If you haven't read the previous tutorial in this series, which gives a general overview of how this adaptation will work, read that first.

Note: the complete source code for this tutorial can be found in this GitHub repo, in the Implementation folder. 

Demo

You can play the Unity demo, or the WebGL version (64MB), to see the final result in action. Use WASD to move the character, left-click on a spot to find a path you can follow to get there, and right-click a cell to toggle the ground at that point.

Setting Up the Game Project

We need to set some rules for the example game project used in this tutorial. You can of course change these rules when implementing this algorithm in your own game!

Setting Up the Physics

The physics rules used in the example project are very simple. 

When it comes to horizontal speed, there is no momentum at all. The character can change directions immediately, and immediately moves in that direction at full speed. This makes it much easier for the algorithm to find a correct path, because we don't have to care about the character's current horizontal speed. It also makes it easier to create a path-following AI, because we don't have to make the character gain any momentum before performing a jump.

We use four constants to define and restrict character movement:

  • Gravity
  • Maximum falling speed
  • Walking speed
  • Jumping speed

Here's how they're defined for this example project:

The base unit used in the game is a pixel, so the character will move 160 pixels per second horizontally (when walking or jumping); when jumping, the character's vertical speed will be set to 410 pixels per second. In the test project, the character's falling speed is limited to 900 pixels per second, so there is no possibility of it falling through the tiles. Gravity is set to -1030 pixels per second².

The character's jump height is not fixed: the longer the jump key is pressed, the higher the character will jump. This is achieved by setting the character's speed to be no more than 200 pixels per second once the jump key is no longer pressed:

Setting Up the Grid

The grid is a simple array of bytes which represent the cost of movement to a specific cell (where 0 is reserved for blocks which the character cannot move through). 

We will not go deep into the weights in this tutorial; we'll actually be using just two values: 0 for solid blocks, and 1 for empty cells. 

The algorithm used in this tutorial requires the grid's width and height to be a power of two, so keep that in mind.

Here's an example of a grid array and an in-game representation of it.

A Note on Threading

Normally we would set up the pathfinder process in a separate thread, so there are no freezes in the game while it is working, but in this tutorial we'll use a single threaded version, due to the limitations of the WebGL platform which this demo runs on. 

The algorithm itself can be run in a separate thread, so you should have no problems with merging it into your code in that way if you need to.

Adding the Jump Values to the Nodes

Remember from the theory overview that nodes are distinguished not just by x- and y-coordinates, but also by jump values. In the standard implementation of A*, x- and y-coordinates are sufficient to define a node, so we need to modify it to use jump values as well.

From this point on, we'll be modifying the core PathFinderFast.cs source file from Gustavo Franco's implementation.

Re-Structuring the List of Nodes

First, we'll add a new list of nodes for each grid cell; this list will replace mCalcGrid from the original algorithm:

Note that PathFinderFast uses one-dimensional arrays, rather than a 2D array as we might expect to use when representing a grid. 

This is not a problem, because we know the grid's width and height, so instead of accessing the data by X and Y indices, we'll access it by a single int which is calculated using location = (y << gridWidthLog2) + x. (This is a slightly faster version of the classic location = (y * gridWidth) + x.)
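The original implementation is in C#, but the equivalence holds in any language; here's a quick sanity check of the shift-and-add form, sketched in JavaScript:

```javascript
// For a power-of-two grid width (width = 2^gridWidthLog2), shifting y
// left by gridWidthLog2 is the same as multiplying y by the width.
function locationIndex(x, y, gridWidthLog2) {
  return (y << gridWidthLog2) + x;
}
```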

Because we need a grid that is three-dimensional (to incorporate the jump values as a third "dimension"), we'll need to add another integer, which will be a node's index in a list at a particular X and Y position. 

Note that we cannot merge all three coordinates into one integer, because the third dimension of the grid is not a constant size. We could consider using simply a three-dimensional grid, which would restrict the number of nodes possible at a particular (x, y) position—but if the array size on the "z-axis" were too small, then the algorithm could return an incorrect result, so we'll play it safe.

Here, then, is the struct that we will use to index a particular node:

The next thing we need to modify is the PathFinderNodeFast structure. There are two things we need to add here:

  • The first is the index of a node's parent, which is basically the previous node from which we arrived to the current node. We need to have that index since we cannot identify the parent solely by its x- and y-coordinates. The x- and y-coordinates will point us to a list of nodes that are at that specific position, so we also need to know the index of our parent in that list. We'll name that index PZ.
  • The other thing we need to add to the structure is the jump value.

Here's the old struct:

And here's what we'll modify it to:

There's still one problem, though. When we use the algorithm once, it will populate the cells' lists of nodes. We need to clear those lists after each time the pathfinder is run, because if we don't, then those lists will grow all the time with each use of the algorithm, and the memory use will rise uncontrollably. 

The thing is, we don't really want to clear every list every time the pathfinder is run, because the grid can be huge and the path will likely never touch most of the grid's nodes. It would cause a big overhead, so it's better to only clear the lists that the algorithm went through. 

For that, we need an additional container which we'll use to remember which cells were touched:

A stack will work fine; we'll just need to clear all the lists contained in it one by one.
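In outline, the bookkeeping might look like this (an illustrative JavaScript sketch, not the article's C#; the container names are assumptions):

```javascript
// mGrid maps "x,y" keys to arrays of nodes; mTouchedLocations remembers
// which cells the last search populated, so only those get cleared.
const mGrid = new Map();
const mTouchedLocations = [];

function addNode(x, y, node) {
  const key = x + ',' + y;
  if (!mGrid.has(key)) mGrid.set(key, []);
  if (mGrid.get(key).length === 0) {
    mTouchedLocations.push(key);   // first node here: clear this cell later
  }
  mGrid.get(key).push(node);
}

function clearTouched() {
  while (mTouchedLocations.length > 0) {
    const key = mTouchedLocations.pop();
    mGrid.get(key).length = 0;     // empty the list without reallocating
  }
}
```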

Updating the Priority Queue

Now let's get our priority queue mOpen to work with the new index. 

The first thing we need to do is change the declaration to use Locations rather than integers—so, from:

to:

Next, we need to change the queue's comparer to make it use the new structure. Right now it uses just an array of nodes; we need to change it so it uses an array of lists of nodes instead. We also need to make sure it compares the nodes using a Location instead of just an integer.

Here's the old code:

And here's the new:

Now, let's initialize the lists of nodes and the touched locations stack when the pathfinder is created. Again, here's the old code:

And here's the new:

Finally, let's create our priority queue using the new constructor:

Initializing the Algorithm

When we start the algorithm, we want to tell it how big our character is (in cells) and also how high the character can jump. 

(Note that, for this tutorial, we will not actually use characterWidth or characterHeight; we will assume that the size of the character is a 1x1 block.)

Change this line:

To this:

The first thing we need to do is clear the lists at the previously touched locations:

Next, we must make sure that the character can fit in the end location. (If it can't, there's no point in running the algorithm, because it will be impossible to find a valid path.)

Now we can create a start node. Instead of setting the values in mCalcGrid, we need to add a node to the nodes list at a particular position. 

First, let's calculate the location of the node. Of course, to be able to do this, we also need to change the type of mLocation to Location.

Change this line:

To this:

The mEndLocation can be left as-is; we'll use this to check if we have already reached our goal, so we only need to check the X and Y positions in that case:

For the start node initialization, we need to reset the parent PZ to 0 and set the appropriate jump value. 

When the starting point is on the ground, the jump value should be equal to 0. But what if we're starting in the air? The simplest solution is to set it to the falling value and not worry about it too much; finding a path when starting in mid-air can be quite troublesome, so we'll take the easy way out.

Here's the old code:

And the new:

We must also add the node to the list at the start position:

And we also need to remember that the start node list is to be cleared on the next run:

Finally, the location is queued and we can start with the core algorithm. To sum up, this is what the initialization of the pathfinder run should look like:

Calculating a Successor

Checking the Position

We don't need to modify much in the node processing part; we just need to change mLocation to mLocation.xy so that mLocationX and mLocationY can be calculated.

Change this:

To this:

Note that, when we change the status of a node inside the nodes list, we use the UpdateStatus(byte newStatus) function. We cannot really change any of the members of the struct inside the list, since the list returns a copy of the node; we need to replace the whole node. The function simply returns a copied node with the Status changed to newStatus:
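A minimal sketch of the idea, in JavaScript for illustration (in the article's C# this matters because a List of structs hands back copies of its elements):

```javascript
// Return a copy of the node with only Status changed, leaving the
// original untouched; the caller replaces the whole node in the list.
function updateStatus(node, newStatus) {
  return Object.assign({}, node, { Status: newStatus });
}
```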

We also need to alter the way the successors are calculated:

Here we just calculate the location of one of the successors; we need this so that we can find the relative position of the successor node.

Determining the Type of a Position

The next thing we need to know is what type of position a successor represents. 

We are interested in four variants here:

  1. The character does not fit in the position. We assume that the cell position corresponds to the bottom-left "cell" of a character. If this is the case, then we can discard the successor, since there is no way to move the character through it.
  2. The character fits in the position, and is on the ground. This means that the successor's jump value needs to be changed to 0.
  3. The character fits in the position, and is just below the ceiling. This means that even if the character has enough speed to jump higher, it cannot, so we need to change the jump value appropriately.
  4. The character simply fits in the spot and is neither on the ground nor at the ceiling.

First, let's assume that the character is neither on the ground nor at the ceiling:

Let's check if the character fits the new spot. If not, we can safely skip the successor and check the next one.

To check if the character would be on the ground when in the new location, we just need to see if there's a solid block below the successor:

Similarly, we check whether the character would be at the ceiling:

Calculating the Jump Value

The next thing we need to do is see whether this successor is valid and, if it is, calculate an appropriate JumpLength for it.

First, let's get the JumpLength of the parent node:

We'll also declare the newJumpLength for the currently processed successor:

Now we can calculate the newJumpLength. (How we do this is explained at the very beginning of the theory overview.)

If the successor node is on the ground, then the newJumpValue is equal to 0:

Nothing to worry about here. It's important to check whether the character is on the ground first, because if the character is both on the ground and at the ceiling then we want to set the jump value to 0.

If the position is at the ceiling, then we need to consider two cases:

  1. the character needs to drop straight down, or 
  2. the character can still move one cell to either side.

In the first case, we need to set the newJumpLength to be at least maxCharacterJumpHeight * 2 + 1, because this value means that we are falling and our next move needs to be done vertically. 

In the second case, the value needs to be at least maxCharacterJumpHeight * 2. Since the value is even, the successor node will still be able to move either left or right.

The "on ground" and "at ceiling" cases are solved; now we can get to calculating the jump value while in air.

Calculating the Jump Value in Mid-Air

First, let's handle the case in which the successor node is above the parent node.

If the jump length is even, then we increment it by two; otherwise, we increment it by one. This will result in an even value for newJumpLength:

Since, in an average jump, the character speed has its highest value at the jump start and end, we should represent this fact in the algorithm. 

We'll fix the jump start by forcing the algorithm to move two cells up if we just got off the ground. This can easily be achieved by swapping the jump value of 2 to 3 at the moment of the jump, because at the jump value of 3, the algorithm knows the character cannot go to the side (since 3 is an odd number). 

The jump curve will be changed to the following.

Let's also accommodate this change in the code:

(We'll fix the curve when the speed of the character is too high to go sideways when we validate the node later.)

Now let's handle the case in which the new node is below the previous one. 

If the new y-coordinate is lower than the parent's, that means we are falling. We calculate the jump value the same way we do when jumping up, but the minimum must be set to maxCharacterJumpHeight * 2. (That's because the character does not need to do a full jump to start falling—for example, it can simply walk off a ledge.) In that case the jump value should be changed from 2 to 6 immediately (in the case where the character's maximum jump height is 3):


This way, the character can't step off a ledge and then jump three cells in the air!
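The jump-value rules above can be consolidated into a single function. The sketch below is illustrative JavaScript (the article's actual code is C#); the parameter names, and the use of dy === 0 to mean "must drop straight down" at the ceiling, are my own encoding:

```javascript
// parentJump: the parent node's jump value; dy: vertical offset of the
// successor (positive = up); maxJump: the character's max jump height.
function nextJumpValue(parentJump, dy, onGround, atCeiling, maxJump) {
  if (onGround) return 0;                  // the ground always resets the value
  if (atCeiling) {
    // dropping straight down needs an odd value; a sideways step an even one
    const floor = dy === 0 ? maxJump * 2 + 1 : maxJump * 2;
    return Math.max(parentJump, floor);
  }
  // make the value even: even parent + 2, odd parent + 1
  let j = parentJump % 2 === 0 ? parentJump + 2 : parentJump + 1;
  if (dy > 0) {
    if (j === 2) j = 3;                    // force two cells up at jump start
    return j;
  }
  return Math.max(j, maxJump * 2);         // falling: at least maxJump * 2
}
```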

Validating the Successor

Now we have all the data we need to validate a successor, so let's get to it.

First, let's dismiss a node if its jump value is odd and the parent is either to the left or to the right. That's because if the jump value is odd, then that means the character went to the side once already and now it needs to move either one block up or down:

If the character is falling, and the child node is above the parent, then we should skip it. This is how we prevent jumping ad infinitum; once the jump value hits the threshold we can only go down.

If the node's jump value is larger than (six plus the fall threshold value), then we should stop allowing the direction change on every even jump value. This will prevent the algorithm giving incorrect values when the character is falling really fast, because in that case instead of 1 block to the side and 1 block down it would need to move 1 block to the side and 2 or more blocks down. (Right now, the character can move 3 blocks to the side after it starts falling, and then we allow it to move sideways every 4 blocks traversed vertically.)

If there's a need for a more accurate jump check, then instead of dismissing the node in the way shown above, we could create a lookup table with data determining at which jump lengths the character would be able to move to the side.
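The three validation rules can be sketched as a single predicate (illustrative JavaScript, not the article's C#; the constant 6 stands for maxCharacterJumpHeight * 2 with a maximum jump height of 3):

```javascript
// jump: the successor's jump value; dx/dy: its offsets from the parent
// (positive dy = up); fallThreshold: the fall threshold from the text.
function isValidSuccessor(jump, dx, dy, fallThreshold) {
  // odd jump value: the character already moved sideways on this step
  if (jump % 2 !== 0 && dx !== 0) return false;
  // falling: once the value hits the threshold, we can no longer go up
  if (jump >= 6 && dy > 0) return false;
  // falling very fast: stop allowing a direction change on every even value
  if (jump >= 6 + fallThreshold && jump % 2 === 0 && dx !== 0) return false;
  return true;
}
```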

Calculating the Cost

When calculating the node cost, we need to take into account the jump value.

Making the Character Move Sensibly

It's good to make the character stick to the ground as much as possible, because it will make its movement less jumpy when moving through flat terrain, and will also encourage it to use "safer" paths, which do not require long falls. 

We can easily make the character do this by increasing the cost of the node according to its jump value. Here's the old code:

And here's the new:

The newJumpLength / 4 works well for most cases; we don't want the character to stick to the ground too much, after all.
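As a sketch of the cost adjustment (JavaScript for illustration; the division is integer division, as it would be in the article's C#):

```javascript
// Fold the jump value into the node cost so grounded paths come out cheaper.
function nodeCost(baseCost, newJumpLength) {
  return baseCost + Math.floor(newJumpLength / 4);
}
```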

Revisiting Nodes With Different Jump Values

Normally, when we've processed the node once, we set its status to closed and never bother with it again; however, as we've already discussed, we may need to visit a particular position in the grid more than once.

First, before we decide to skip the currently checked node, we need to see if there is any node at the current (x, y) position. If there are no nodes in there yet, then we surely cannot skip the current one:

The only condition which allows us to skip the node is this: the node does not allow for any new movement compared to the other nodes at the same position.

The new movement can happen if:

  • The currently processed node's jump value is lower than any of the other nodes at the same (x, y) position—in this case, the current node promises to let the character jump higher using this path than any other.
  • The currently processed node's jump value is even, and all other nodes' jump values at the position are not. This basically means that this particular node allows for sideways movement at this position, while others force us to move either up or down.

The first case is simple: we want to look through the nodes with lower jump values since these let us jump higher. The second case comes out in more peculiar situations, such as this one:

Here, we cannot move sideways when jumping up because we wanted to force the algorithm to go up twice after starting a jump. The problem is that, even when falling down, the algorithm would simply ignore the node with the jump value of 8, because we have already visited that position and the previous node had a lower jump value of 3. That's why in this case it's important to not skip the node with an even (and reasonably low) jump value.

First, let's declare our variables that will let us know what the lowest jump value at the current (x, y) position is, and whether any of the nodes there allow sideways movement:

Next, we need to iterate over all the nodes and set the declared variables to the appropriate values:

As you can see, we not only check whether the node's jump value is even, but also whether the jump value is not too high to move sideways.

Finally, let's get to the condition which will decide whether we can skip the node or not:

As you can see, the node is skipped if lowestJump is less than or equal to the processed node's jump value and any of the other nodes in the list allow for sideways movement.
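Putting the skip condition together (an illustrative JavaScript sketch, not the article's C#; the sidewaysLimit parameter is my simplification of the "not too high to move sideways" check):

```javascript
// nodes: the list already stored at this (x, y); newJump: the candidate's
// jump value. Returns true if the candidate offers no new movement.
function canSkip(nodes, newJump, sidewaysLimit) {
  if (nodes.length === 0) return false;    // nothing here yet: must process
  let lowestJump = Infinity;
  let couldMoveSideways = false;
  for (const n of nodes) {
    if (n.JumpLength < lowestJump) lowestJump = n.JumpLength;
    if (n.JumpLength % 2 === 0 && n.JumpLength < sidewaysLimit) {
      couldMoveSideways = true;            // even and not falling too fast
    }
  }
  return lowestJump <= newJump && couldMoveSideways;
}
```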

We can leave the heuristic formula as-is; we don't need to change anything here:

Tidying Up

Now, finally, since the node has passed all the checks, we can create an appropriate PathFinderNodeFast instance for it.

And we can also finally add the node to the node list at mNewLocation.

Before we do that, though, let's add the location to the touched locations stack if the list is empty. We'll know that we need to clear this location's list when we run the pathfinder again:

After all the children have been processed, we can change the status of the parent to closed and increment the mCloseNodeCounter:

In the end, the children's loop should look like this.

Filtering the Nodes

We don't really need all the nodes that we'll get from the algorithm. Indeed, it will be much easier for us to write a path-following AI if we filter the nodes to a smaller set which we can work with more easily.

The node filtering process isn't actually part of the algorithm, but is rather an operation to prepare the output for further processing. It doesn't need to be executed in the PathFinderFast class itself, but that will be the most convenient place to do it for the purposes of this tutorial.

The node filtering can be done alongside the path following code; it is fairly unlikely that we'll filter the node set perfectly to suit our needs with our initial assumptions, so, often, a lot of tweaks will be needed. In this tutorial we'll go ahead and reduce the set to its final form right now, so later we can focus on the AI without having to modify the pathfinder class again.

We want our filter to let through any node that fulfills any of the following requirements:

  1. It is the start node.
  2. It is the end node.
  3. It is a jump node.
  4. It is a first in-air node in a side jump (a node with jump value equal to 3).
  5. It is the landing node (a node at which a previously non-zero jump value becomes 0).
  6. It is the high point of the jump (the node between moving upwards and falling downwards).
  7. It is a node that goes around an obstacle.

Here are a couple of illustrations that show which nodes we want to keep. The red numbers show which of the above rules caused the filter to leave the node in the path:

Setting Up the Values

We filter the nodes as they get pushed to mClose list, so that means we'll go from the end node to the start node.

Before we start the filtering process, we need to set up a few variables to keep track of the context of the filtered node:

fNode and fPrevNode are simple Vector2s, while fNodeTmp and fPrevNodeTmp are the PathFinderNodeFast nodes. We need both; we'll be using Vector2s to get the position of the nodes and PathFinderNodeFast objects to get the parent location, jump value, and everything else we'll need.

loc points to the XY position in the grid of the node that will be processed next iteration.

Defining the Loop

Now we can start our loop. We'll keep looping as long as we don't get to the start node (at which point the parent's position is equal to the node's position):

We will need access to the next node as well as the previous one, so let's get it:

Adding the End Node

Now let's start the filtering process. The start node will get added to the list at the very end, after all other items have been dealt with. Since we're going from the end node, let's be sure to include that one in our final path:

If mClose is empty, that means we haven't pushed any nodes into it yet, which means the currently processed node is the end node, and since we want to include it in the final list, we add it to mClose.

Adding Jump Nodes

For the jump nodes, we'll want to use two conditions. 

The first condition is that the currently processed node's jump value is 0, and the previous node's jump value is not 0:

The second condition is that the jump value is equal to 3. This is basically the first jump-up or first in-air direction-change point in a particular jump:

Adding Landing Nodes

Now for the landing nodes:

We detect the landing node by seeing that the next node is on the ground and the current node isn't. Remember that we are processing the nodes in reversed order, so in fact the landing is detected when the previous node is on the ground and the current isn't.

Adding Highest Point Nodes

Now let's add the jump high points. We detect these by seeing if both the previous and the next nodes are lower than the current node:

Note that, in the last case, we don't compare the current node's y-coordinate to fPrevNode.y, but rather to the previous pushed node's y-coordinate. That's because it may be the case that the previous node is on the same height with the current one, if the character moved to the side to reach it.

Adding Nodes that Go Around Obstacles

Finally, let's take care of the nodes that let us maneuver around the obstacles. If we're next to an obstacle and the previous pushed node isn't aligned with the current one either horizontally or vertically, then we assume that this node will align us with the obstacle and let us move cleanly over it if need be:

Preparing for the Next Loop

After adding a node to the mClose list or disregarding it, we need to prepare the variables for the next iteration:

As you can see, we calculate everything in the same way we prepare the loop for the first iteration.

Adding the Start Node

After all the nodes have been processed (and the loop is finished), we can add the start point to the list and finish the job:

All Together

The whole path filtering procedure should look like this.

Conclusion

The final product of the algorithm is a path found for a character which is one block wide and one block high with a defined maximum jump height. 

We can improve on this: we could allow the character's size to be varied, we could add support for one-way platforms, and we could code an AI bot to follow the path. We will address all of these things in the next part of the tutorial!

Simple Xbox Controller Input in HTML5 With PxGamepad


Gaming on the web has come a long way with HTML5 technologies like Canvas, WebGL, and WebAudio. It's now possible to produce high-fidelity graphics and sound within the browser. However, to provide a true gaming experience, you need input devices designed for gaming. The Gamepad API is a proposed standard of the W3C, and is designed to provide a consistent API across browsers.

The Gamepad API allows users to connect devices like an Xbox Controller to a computer and use them for browser-based experiences! Our helper class, PxGamepad, maps the button and axis indices to the more familiar names as labeled on the Xbox controller.

If you have a gamepad, try plugging it into your computer, click the picture of the Xbox controller below, and press a button. You’ll see the controller light up to mirror each movement you make!

This tutorial is the third in a series on Flight Arcade—built to demonstrate what’s possible on the web platform and in the new Microsoft Edge browser and EdgeHTML rendering engine. You can find the first two articles on WebGL and Web API, plus interactive code and examples for this article, at flightarcade.com and here on Tuts+.

Flexible API

The Gamepad API is intelligently designed with flexibility in mind. At a basic level, it provides access to buttons and axes. Button values range from 0 to 1 inclusive, while axes range from -1 to 1 inclusive. All values are normalized to these ranges so developers can expect consistent behavior between devices.

The Gamepad object provides detailed information about the manufacturer and model of the connected gamepad. More useful is a “mapping” property which describes the general type of gamepad. Currently the only supported mapping is “standard”, which corresponds to the controller layout used by many popular game consoles like the Xbox.

The standard controller mapping has two sticks, each of which is represented by two axes (x and y). It also includes a D-pad, four game buttons, top buttons, and triggers: all represented as buttons in the Gamepad API.

Current Xbox controllers report button state as either 0 (normal state) or 1 (pressed). However, you could imagine that future controllers could report the amount of force applied to each button press.

The Xbox D-pad also reports discrete values (0 or 1), but the sticks provide continuous values across the entire axis range (-1 to 1). This additional precision makes it much easier to fly the airplane in our Flight Arcade missions.

PxGamepad

The array of buttons and axes provided by the Gamepad API is forward thinking and perfect as a low-level API. However, when writing a game, it’s nice to have a higher-level representation of a standard gamepad like the Xbox One controller. We created a helper class named PxGamepad that maps the button and axis indices to the more familiar names as labeled on the Xbox controller.

I'll walk through a few interesting pieces of the library, but the full source code (MIT License) is available on GitHub.

The standard Gamepad API provides button state as an array of buttons. Again, this API is designed for flexibility, allowing controllers with various button counts. However, when writing a game, it's much easier to write and read code that uses the standard mapped button names.

For example, with the HTML5 Gamepad API, here is the code to check whether the left trigger is currently pressed:

The PxGamepad class contains an update method that will gather the state for all the standard mapped buttons and axes. So determining whether the leftTrigger is pressed is as simple as accessing a Boolean property:
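For illustration (a sketch, not PxGamepad's source): in the standard mapping the left trigger is button index 6, so the raw check reduces to one array lookup, which PxGamepad wraps behind a named Boolean:

```javascript
// Raw Gamepad API check: in the standard mapping, the left trigger is
// buttons[6]. PxGamepad exposes the same state as gamepad.leftTrigger.
function isLeftTriggerPressed(pad) {
  return !!(pad && pad.buttons[6] && pad.buttons[6].pressed);
}
```

In the browser, pad would come from navigator.getGamepads().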

Axes in the standard Gamepad API are also provided as an array of numerical values. For example, here is the code to get the normalized x and y values for the left stick:

The D-pad is a special case, because it is considered to be a set of four buttons by the HTML5 Gamepad API (indices 12, 13, 14, and 15). However, it's common for developers to allow the D-pad to be used in the same way as one of the sticks. PxGamepad provides button information for the D-pad, but also synthesizes axis information as though the D-pad were a stick:
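A sketch of how such synthesis could work (illustrative, not PxGamepad's actual code; in the standard mapping the D-pad occupies button indices 12-15):

```javascript
// Map the four D-pad buttons onto stick-style x/y axes in the -1..1 range.
function dpadToAxes(pad) {
  const b = i => (pad.buttons[i] && pad.buttons[i].pressed) ? 1 : 0;
  return {
    x: b(15) - b(14),  // right minus left
    y: b(13) - b(12)   // down minus up
  };
}
```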

Another limitation of the HTML5 Gamepad API is that it doesn't provide button-level events. It's common for a game developer to want to fire a single event for a button press. In Flight Arcade, the ignition and brake buttons are good examples. PxGamepad watches button state and allows callers to register for notifications on button release.

 Here is the full list of named buttons supported by PxGamepad:

  • a
  • b
  • x
  • y
  • leftTop
  • rightTop
  • leftTrigger
  • rightTrigger
  • select
  • start
  • leftStick
  • rightStick
  • dpadUp
  • dpadDown
  • dpadLeft
  • dpadRight

Obtaining the Current Gamepad

There are two methods for retrieving the gamepad object. The Gamepad API adds a method to the navigator object named getGamepads(), which returns an array of all connected gamepads. There are also new gamepadconnected and gamepaddisconnected events that are fired whenever a new gamepad has been connected or disconnected. For example, here is how the PxGamepad helper stores the last connected gamepad:

And here is the helper to retrieve the first standard gamepad using the navigator.getGamepads() API:

The PxGamepad helper class is designed for the simple scenario where a single user is playing a game with a standard mapped gamepad. The latest browsers, like Microsoft Edge, fully support the W3C Gamepad API. However, older versions of some other browsers only supported pieces of the emerging specification. PxGamepad listens for the gamepadconnected events and falls back to querying for the list of all gamepads if needed.
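Putting the two approaches together, a tracker might look like the following sketch (illustrative JavaScript, not PxGamepad's source; the window and navigator objects are passed in only to keep the sketch self-contained):

```javascript
// Track the "current" gamepad via connect/disconnect events, falling back
// to polling getGamepads() for the first standard-mapped pad.
function createGamepadTracker(win, nav) {
  const tracker = { gamepad: null };
  win.addEventListener('gamepadconnected', e => { tracker.gamepad = e.gamepad; });
  win.addEventListener('gamepaddisconnected', e => {
    if (tracker.gamepad === e.gamepad) tracker.gamepad = null;
  });
  tracker.poll = function () {
    if (tracker.gamepad) return tracker.gamepad;
    const pads = (nav.getGamepads && nav.getGamepads()) || [];
    for (const p of pads) {
      if (p && p.mapping === 'standard') return p; // first standard pad
    }
    return null;
  };
  return tracker;
}
```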

The Future of Gamepad

While PxGamepad is focused on the simple, most common scenario, the Gamepad API is fully capable of supporting multiple players, each with their own gamepad. One possible improvement for PxGamepad might be to provide a manager-style class which tracks connection of multiple gamepads and maps them to multiple players in a game. Another might be to allow users to remap or customize the button functions on their gamepads.

We're also excited about the potential of the Gamepad API for non-game scenarios. With the rise of WebGL, we're seeing a variety of innovative uses for 3D on the web. That might mean exploring the Mt. Everest region in 3D with GlacierWorks, or viewing the Assyrian Collection of the British Museum thanks to CyArk's efforts to digitally preserve important world sites and artefacts.

During the development of Flight Arcade, we frequently used Blender and other 3D tools to process models for Babylon.js. Some developers and artists use a device called a 3D mouse to help manipulate and navigate 3D models. These devices track movement of a single knob through six axes! They make it really easy and quick to manipulate models. Beyond gaming, they're used in a variety of interesting applications from engineering to medical imaging. While adding gamepad support to Flight Arcade, we were surprised to learn that the Gamepad API detected our 3D SpaceMouse and provided movement data for all six axes!

It's exciting to imagine all the possibilities that the new Gamepad API offers. Now is a great time to experiment with the new Gamepad API and add precision control and a lot of fun to your next game or application!

More Hands-On With JavaScript

Microsoft has a bunch of free learning on many open source JavaScript topics, and we’re on a mission to create a lot more with Microsoft Edge. Here are some to check out:

And some free tools to get started: Visual Studio Code, Azure Trial, and cross-browser testing tools—all available for Mac, Linux, or Windows.

This article is part of the web dev tech series from Microsoft. We’re excited to share Microsoft Edge and the new EdgeHTML rendering engine with you. Get free virtual machines or test remotely on your Mac, iOS, Android, or Windows device @ http://dev.modern.ie/.

4 Ways to Teach Your Players How to Play Your Game


We all hate in-game tutorials. When we buy a game, we want to jump straight into the action, not spend ages reading through menus and flowcharts of moves. But we need to know how to play. We need to understand the new rules of each game—after all, if every game were the same, why would we need other games?

And because computer games are so complex, there's much more to a tutorial than just "use arrow keys and space to shoot". There's how we interact, our objectives, how the world reacts to us: all this needs to be imparted to the player, and preferably without sitting them down and specifically having to say "spikes are bad".

So we want to get our tutorial out of the way as quickly as possible, right? The thing is, the first few minutes of a game can make or break a player's experience. Triple-A games have a little bit more leeway, but if you're making a mobile or web game then you need to get the player into the meat of the game as soon as possible to ensure they're having fun; otherwise, they'll just find something else to play.

OK, so we need to make a tutorial, but we don't want the player to sit through a boring lesson on how the game works... this is a conundrum. The solution lies in how we construct our tutorial: can we make it fun? In fact, can we make it part of the game?

Tutorials can be largely split into three types: non-interactive, interactive, and passive. We'll look at each in turn.

Non-Interactive In-Game Tutorials

Non-interactive in-game tutorials are, in many ways, a leftover of old game design. The image below is from Infected, a game my team and I made several years back. 

Infected tutorial image
The "tutorial" from our game, Infected.

The entire game tutorial is essentially this one image; in retrospect, it was a massive design flaw. When we watched people actually play the game, they would hit that screen, their eyes would briefly glaze over, and they would hit Start. Most of our players were none the wiser about how the game actually worked.

George Fan, the creator of the fantastic Plants vs Zombies, goes by the rule that "there should be no more than eight words on the screen at any time". Players generally have short attention spans, and are not looking to digest large quantities of information. 

While non-interactive tutorials are not necessarily bad, they nearly always break this eight-word limit. If we wanted to sum up the lesson of this design flaw, we could state it neatly as:

Don't overwhelm the player.

This is really the first rule of in-game tutorial design. Players need to understand what's going on at all times: if you give the player a list of 200 combo moves and special attacks, then chances are they'll remember two or three and use those for the entire game. However, if you “trickle teach” the player—introduce one concept at a time—then they will have plenty of opportunity to get to grips with each ability.

Controller image
We've used this image before, but it's still relevant.

We've actually talked briefly about this idea before, under the concept of using achievements as tutorial aids. Forcing the player to complete a level with only a basic weapon might damage the overall “fun level”, but giving players an achievement for doing it makes it optional, and encourages players (especially those who are already competent at the game) to try new strategies and tactics. 

Any sort of rank or reward system can be used in this way, such as a star rating or an A+ to F ranking. “Bad” players can complete the level easily, whereas players who adhere to more difficult challenges are rewarded with higher scores. This also allows more hardcore gamers to aim for 100% completion, while casual gamers can still enjoy the game without getting stuck on a difficult level.

Super Meat Boy spreads its "tutorial" over the first half dozen levels. The first level teaches you the fundamentals: moving left and right and jumping. Level 2 teaches wall jumping. Level 3, sprinting. Once the player has understood these basic concepts, the game starts introducing concepts like spinning blades, disintegrating platforms, and scrolling levels.

super meat boy gameplay image
The first level of Super Meat Boy. Can you handle walking and jumping? Then you're probably good.

The first level of Super Meat Boy, in fact, is incredibly difficult to fail at. The game uses a technique often referred to as a “noob cave”. Essentially, the player starts in a position from which it is impossible to fail—they need to make progress in order to get to a point where they can die. This gives the player a chance to get to grips with the game mechanics, without feeling under threat of enemies attacking or timers running out. 

The “noob cave” technique is something we implemented in Infected to some degree: although it is possible to lose the first level, it requires some effort. The player is given a significant starting advantage (twice as many pieces as the enemy), and the AI is almost non-existent. At high levels, the AI will make calculated moves, but on the first level, the AI will move 100% randomly (within the rules of making a legal move). This means that even if the player has zero idea of how to play, they are still highly likely to win. (Of course some players still managed to lose, but the overall experience was significantly better for our players than our first version, where we simply threw them against an advanced AI that would crush them.)

There are a few ways to implement a "noob cave" in a game, but one effective way is to create an interactive tutorial: a section of the game with locked down mechanics where the player can only perform the actions required to win. This allows the player to "play" the game, without running the risk of losing and getting bored.

Interactive In-Game Tutorials

Allowing player interaction within an in-game tutorial is a good way to teach mechanics. Rather than simply telling the player how the game works, making them perform the required actions results in better retention. You can tell a player the instructions a hundred times, but actually getting them to perform game actions makes it far more likely that they remember and understand.

Highrise heroes gameplay image
Forcing the player through the motions in Highrise Heroes

The above image, from Highrise Heroes, is an excellent demonstration of how to involve a player in a tutorial. Although it would be easier to simply display an image of how making a word works, forcing the player to go through the actions of actually completing a word ensures they understand the concept before they proceed. Because the player is locked out of “full” gameplay until they have performed this action, you can be sure they have mastered this basic gameplay element before they progress.

The only drawback with this style is that, if the player is already familiar with how the game works, they can find playing through obligatory tutorial levels tedious. This can be avoided by letting players skip through the tutorial levels—but be aware that some players will then skip the tutorial regardless of whether they've played the game before. Andy Moore, creator of Steambirds, has talked about how he went from a 60% to a 95% player retention rate after making the tutorial unskippable.

Background In-Game Tutorials

Background in-game tutorials allow the player direct access to gameplay. While the player can still progress through a “noob cave” area, they are able to (hopefully) do it at a faster pace, and thus get into the "proper game" faster.

Here's an example:

VVVVVV gameplay image
In VVVVVV, the player starts here, in a safe area where they can work out the controls.

This is the first screen in VVVVVV, which uses a small pop-up to tell the player how to move left and right. As the player is completely boxed off, the only action they can perform is moving onto the second screen, where they are shown how to “jump” over obstacles. From there, they have essentially mastered gameplay—and although moving and jumping could fit into a single room, spacing them out ensures players have mastered each skill and aren't overwhelmed by masses of text.

The difference between a background and an interactive tutorial can be subtle, as they can both use a similar style. The primary difference is that a background tutorial can be skipped by the player with no effort: everything merely merges into the background. 

I saw her standing there gameplay image
I saw her standing there, a fantastic Flash game. Try it out, and notice that the game gets you playing even before the main menu has come up.

An interactive or background tutorial is something we should have used in Infected. The entire tutorial could have been taught in three moves (how to move, how to jump, and how to capture), so there wouldn’t have been a significant loss to gameplay if we'd implemented it. Although we were aware of the issue, the tutorial was one of the last things we developed, so good design took second place to just getting the game finished.

No In-Game Tutorial

How do you get cold water from a faucet? Generally, you turn the tap handle to the right. How do you tighten a screw? You turn it clockwise. These things don't come with instruction manuals—we instinctively know how they work. Or rather, we learn how they work at an early age, and since they (generally) all work in the same way, that knowledge is reinforced throughout our lives.

Games operate on this principle as well. As veteran gamers, we automatically know that we want to collect coins and gain upgrades. Non-gamers might not understand these things automatically, so it's important to make these things as obvious as possible. (Even if you haven't played a platform game before, it's fairly obvious that jumping into fire is not a good idea.) 

A player should generally be able to identify the difference between “good” and “bad” objects at a glance, without having to use death as a trial and error discovery method. Shigeru Miyamoto once talked about how he decided why coins specifically were used in Mario:

"Thus, when we were thinking about something that anybody would look at and go 'I definitely want that!', we thought, 'Yep, it's gotta be money.'" 

Plants vs Zombies' use of "plant" and "zombie" themes helps teach the players without any explanation at all: players know that plants don't move around, and that zombies are slow moving. Rather than going for a boring "turrets vs soldiers" theme, the Plants vs Zombies theme allows the game to be interesting and cutesy, and still impart vital knowledge.

Plants vs Zombies image
Honestly, it's really not that hard to work out the basics of what's going on here.

Since players will often “know” how a game plays, we don't always have to explain everything. When you throw a player into a game, consider telling them nothing—let them figure things out for themselves. If they stand around motionless, then give them a few hints ("try moving the control stick to walk!"), but remember that players will generally try to perform basic commands themselves. Because the “basic rules” of gameplay tend to be universal, this means we can assume the player has some familiarity with them; however, it also means that it's incredibly dangerous to make changes to those basic rules.

Some fledgling games designers decide that doing things “the normal way” is the wrong way, and ignore established rules. The real-time strategy (RTS) genre has suffered from this in the past: while today's games tend to use a fairly standardised control system (left click to select, right click to move or attack), older games had very little consistency, and would often switch the left/right mouse button controls, or try to bind multiple commands to a single button. If you play one of these old games today, then the bizarre controls can be very jarring, as we've since learned a different control set.

Implementing "obvious" controls was the saving grace of Infected: while the player may not have read the tutorial, clicking on a piece immediately highlighted moves the player could make. By giving the player a visual response that clicking on a piece was the "right" move, it let them continue exploring the control system. Our initial versions of the game did not have this highlighting, and we found that adding this minor graphical addition made gameplay much more accessible and obvious, without taking anything anything away.

Infected gameplay image
In Infected, after you select a piece, highlighted squares show you where you can move.
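The highlighting itself boils down to computing the legal destinations for the selected piece and drawing those squares differently. As a rough sketch, assuming Ataxx-style rules where a piece may move to any empty square within two steps (Infected's real rules and code may differ):

```python
def legal_destinations(board, x, y, max_dist=2):
    """Return the squares to highlight for the piece at (x, y).

    `board` is a grid of rows, with None marking an empty square.
    """
    height, width = len(board), len(board[0])
    dests = []
    for ny in range(max(0, y - max_dist), min(height, y + max_dist + 1)):
        for nx in range(max(0, x - max_dist), min(width, x + max_dist + 1)):
            if (nx, ny) != (x, y) and board[ny][nx] is None:
                dests.append((nx, ny))
    return dests
```

The renderer then simply tints every square in the returned list, giving the player instant visual feedback that their click did something meaningful.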

If you want to change traditional gameplay elements around, have a good reason. Katamari Damacy uses both thumbsticks to move, rather than the more traditional setup of using one thumbstick to move and the other to control the camera. While this may cause some initial confusion, the simplicity of the game means that this control system works exceptionally well.

In a similar vein, the web game Karoshi Suicide Salaryman actually demands that the player kill themselves, often by jumping on spikes. While this is "non-standard" game behaviour due to the game's reversed theme (die to win), the player's objectives are always clear.

Games will always change and evolve, but it's important to understand that when you change things—be it controls or game objectives—the player should not have to relearn everything.

Continual Learning and Experimenting

It's also interesting to note that not providing the player with explicit instructions can actually encourage gameplay through experimentation. In the Zelda series, the player is constantly finding new items and equipment, and must learn how they work. Rather than sitting through a lengthy explanation of “the hookshot: learning how to use your new weapon”, the game just dumps the player in a closed room and says “figure it out”—although the "puzzle" is generally so obvious that the player should be able to work things out instantly.

These rooms are basically noob caves: despite being found halfway through the game, they allow the player to explore how their new toy works within a safe environment. Once the player has worked out the intricacies of their new toy, they are thrown back into the game world, where they can continue puzzling and fighting monsters. By the end of the game, the player is so used to using their new weapons that switching between them to solve multi-tiered puzzles is second nature.

The way Zelda games handle in-game tutorials is also worth noting: for Zelda, the game is the tutorial. With perhaps the exception of the final dungeon, the player never really stops learning; this “trickle teaching” is one of the greatest strengths of the Zelda series. Rather than being given everything at the beginning, the player slowly unlocks the game, so they are never overwhelmed and are always unlocking cool new toys to use.

Plants vs Zombies, again, also uses trickle teaching effectively: in every level, the player unlocks a new plant (and sometimes new stages), and must learn how to use these to defeat the zombie army. The game never overwhelms the player, but always gives them something new to play with. Because the player gets to spend time with each weapon, it encourages the player to select the plants that are most effective, rather than finding a few plants they like at the start and sticking with them throughout the whole game.

Don't Scare the Player Away

All of this really just says: don't scare the player away, and don't bore them away either. It seems like such obvious advice, but it's remarkable how many games (including triple-A games) seem unable to grasp it.

One common example of this, a mistake made time and time again, is requiring registration to play online. If you're trying to get a player hooked, don't make them wade through forms filling out their date of birth, their email address, and so on—just give them a guest account and let them play. If they enjoy the game, then they're more likely to sign up for a “full” account. (Tagpro is an online, multiplayer game which does this fairly well: select a server, hit Play as Guest, and you're in.)

It's fine to make complex games, but realise that humans have poor memories and short attention spans, and do not learn well by being presented with masses of text. If you've not played them before, try playing a game like Crusader Kings, Europa Universalis, Dwarf Fortress, or even Civilisation. If you haven't been shown how to play by someone else, it can be quite daunting to learn how the game actually works. And while these are all fantastic games, they will always have a certain “niche” quality—not because they have learning curves, but because they have learning cliffs, impassable to all but the most determined.

Conclusion

Remember: try to make your tutorials fun, rather than a tedious slog. Every game is different, and it might be difficult to implement all of these ideas within a particular genre—but if you can make the first five minutes fun, you can probably hook the player to the end credits.

9 More Inexplicably Underused Game Genres for Your Next Project

In the last two articles in this series, we looked at a multitude of underused genres, from both classic and modern eras. It turns out there are even more!

In each of these genres, there are one or two huge games, but barely any others (even counting knock-offs). Let's revisit these genres and change that!

Round-Based Tactics

These games have a unique combination of several mechanics that make them interesting. In addition to the round-based combat and tactics, you'll also often have a base to which you return between missions.

XCOM: Enemy Within

In this base, equipment is assigned to your units, which are uniquely generated. The units also gain experience and skills over time, and tend to become very dear to you—which makes it all the more engaging when one of them dies, permanently.

XCOM: Enemy Within

As there are only a few games in this genre, it is interesting to see how they are spread across settings, with little repetition within each one. XCOM is set in a near-future sci-fi world. Invisible Inc deals with spycraft. The upcoming Hard West is a Western, a setting with few games of its own, which makes it doubly unique.

Guns and Conversation

Both the Mass Effect and Deus Ex series are huge successes. The games offer a unique combination of talking and shooting elements. (They often have other elements, such as RPG-like stats and skills, but the combination of guns and conversation is unique and interesting enough to work on its own.)

Alpha Protocol

In addition to shooting segments (or sneaking segments, as in Alpha Protocol or Escape From Butcher Bay), you get to talk to characters, often with quest lines, multiple paths, and options in the dialogue.

Deus Ex

The dialogue sections don't have to influence the action sections that much, but serve to embed the action in the larger game world. A game only about shooting and violence can feel detached after a while, but interacting with people in a regular, non-violent manner serves to connect you to the universe and creates a lot of variety. It underscores the feeling that these are actual people you are dealing with, not mindless space-zombies.

A shooting-free segment also allows you to add other gameplay elements, such as tasks that can be solved via discussion, and allow the player to explore the world safely.

Quiz Games

Apart from the occasional tie-in, like the Who Wants to be a Millionaire games, You Don't Know Jack is the only major quiz game out there.

You Don't Know Jack 2015

You Don't Know Jack earns this position by being ridiculously funny from beginning to end. The announcer (Cookie Masterson is the most well known, but there are others) is the heart of the game, bringing life and humour to each question. The questions themselves are also uniquely crafted.

A typical question is something like this:

Imagine a bunch of Merrill Lynch bankers snacking on bailout money while bilking their customers. Because it does NOT have ridges, which coin might break while scooping up some creamy French Onion dip?

These are much more engaging than the basic "Which one of these is the biggest?" questions, and much more fun.

You Don't Know Jack 2015

Other mechanics make this an ideal party-game. Four players can play simultaneously on a single computer or console. Players can also "screw" somebody with a question, meaning the other person has to answer. This can be used to avoid difficult questions, as well as to give other players a hard time.

There are also mechanics other than simply "answer this": one challenge, for instance, has you say whether 30 rapidly named characters are from Star Trek or Star Wars, which is great fun.

Meditative Games

Phyta is one of my favourite games. It was created a long time ago by a dev who never made another game, which only adds to its uniqueness.

Phyta

In Phyta, you control a plant. It grows slowly towards the mouse cursor, while relaxing guitar music plays in the background. From time to time, you encounter flying sprites, which you can trap in the plant. There are no scores or other "game-y" artifacts outside what goes on in the game. You just are, and it is incredibly relaxing.

Mountain

Another recent meditative game is Mountain. It, too, presents itself without any game-y elements. There is only a mountain—your mountain, which is unique—and its continuing presence and evolution. Over time, the seasons change and the mountain collects various amounts of detritus in a display that simply is.

Element Combination Games

You could argue that these games are the inventory management mechanics of old point-and-click games boiled down to their essence—because in these games, you combine everything with everything.

Little Inferno

The Doodle God games start out with the four basic elements, which you can combine. Combine Fire and Water and you get Steam. Combine Earth and Fire and you get Lava. Continue this, and you end up with more than 100 new objects.

At the beginning, when only a few options are available, wanton experimentation is encouraged, and fun. Combine everything with everything, and you get quick results. Over time, though, the number of possible combinations becomes too large, and you must put more thought into it.
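Under the hood, a combination system can be little more than a lookup table. Here's a minimal sketch (not Doodle God's actual code) using the recipes mentioned above; frozensets make Fire + Water and Water + Fire the same key:

```python
RECIPES = {
    frozenset({"Fire", "Water"}): "Steam",
    frozenset({"Earth", "Fire"}): "Lava",
    # ...a full game would have hundreds of entries
}

def combine(a, b, discovered):
    """Try combining two elements; record and return any new result."""
    result = RECIPES.get(frozenset({a, b}))
    if result is not None:
        discovered.add(result)  # new elements unlock new combinations
    return result
```

The `discovered` set is what drives the genre's progression: each successful combination widens the pool of elements available for further experiments.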

Little Inferno

Little Inferno works similarly. Your game consists of a fireplace, where you can insert toys and set them on fire. Certain toys produce special effects, such as green fire. The game gives you a list of combos, which you have to solve. For instance, you are given the clue "Springtime", and have to figure out that it requires a Seed Packet and an Alarm Clock. Each combo is essentially a riddle, solved only with the correct combination of burnt items.

Dyson-Like

I struggled to name this type of game, as there are so few entries. Another good name would be "mass strategy". Dyson (now Eufloria) was one of the first to do this type of unit-management.

Galcon 2

These games centre around circular hotspots. In Eufloria and Galcon Fusion, these represent planets; in Oil Rush, they are oil derricks. You command units, like in a strategy game, but you order them en masse, from hotspot to hotspot. In this way, it streamlines the classic Real-Time Strategy genre.

Eufloria HD

In Eufloria you select a planet, and can decide how many of your seedlings to send to another planet, where they will fight other players or seed the planet. Once captured, the planet produces more units.

In terms of strategy, these are often numbers games. How many units will it take to capture that planet? I can pull several dozen from a more distant planet, but it will take them time to arrive. Once they have arrived, I can launch all my units at once.
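That mental arithmetic can be sketched directly: the target keeps producing defenders while your fleet is in transit, so you need enough units to beat the garrison as it will be on arrival. The names and the linear production model here are illustrative assumptions, not taken from any particular game:

```python
import math

def units_needed(defenders, production_rate, distance, fleet_speed):
    """Smallest fleet that outnumbers the garrison at arrival time."""
    travel_time = distance / fleet_speed
    garrison_on_arrival = defenders + production_rate * travel_time
    return math.floor(garrison_on_arrival) + 1

# 10 defenders producing 2 per second, 100 units of distance at speed 20:
# 5 seconds of travel adds 10 more defenders, so you must send 21.
```

Launching from a nearer staging planet shrinks `travel_time`, which is exactly why massing your forces close to the target first is usually worth the delay.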

Surprise enemy attacks create moments of panic, where you have to abandon your plans, figure out where the enemies are headed, and then try to get a big enough defence ready before they take away one of your (possibly highly important) planets.

Serious Military Strategy Game

These games take a utilitarian view. In Naval War, you command from afar, sending single ships and planes. The interface is purposefully minimalistic, mimicking the view you would conceivably get in the combat information centre of a battleship. Your funds and units during each mission are severely limited, and you can only operate within the rules of engagement, which are set for every mission.

Naval War: Arctic Circle

In Combat Mission Shock Force, each mission is asymmetric, with each faction having different objectives. When playing as NATO forces, you have to occupy certain points; when playing the same mission as the other side, you have to try to survive, and not lose more than a certain percentage of your units.

Naval War: Arctic Circle

A wonderful feature is "unit visibility": just because one of your foot soldiers can see where an enemy is located does not mean that every other unit automatically knows where they are. The location has to be communicated between them, adding another layer of organization to the battle.
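Modelling this is mostly a matter of giving each unit its own knowledge instead of a shared global map. A hypothetical sketch, not based on Combat Mission's actual implementation:

```python
class Unit:
    """A unit that only knows about enemies it has seen or been told about."""

    def __init__(self, name):
        self.name = name
        self.known_enemies = set()  # this unit's knowledge, not the army's

    def spot(self, enemy_pos):
        self.known_enemies.add(enemy_pos)

    def report_to(self, other):
        # Sharing a sighting is an explicit act (which, in the real games,
        # also costs time), rather than an automatic global map update.
        other.known_enemies |= self.known_enemies
```

Until `report_to` is called, the tank simply doesn't know about the enemy the scout spotted, and its AI or targeting code can't act on that information.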

Dystopian Bloodsport

Setting a game in a bloodsport is fun, as it allows you to go nuts. Tired of all-brown shooters? You can make everything insanely colorful! Just look at Monday Night Combat, which is predominantly orange.

Monday Night Combat

You can also have other sports-like tropes, like announcers, cheerleaders, or sponsorships. Managing teams and players adds more fun elements.

Monday Night Combat

Unreal Tournament predated this by having arenas set in all different kinds of environments. While there were sci-fi levels, there were also grungy streets, Egyptian environments, medieval scenery, and feudal Japanese architecture.

Firefighting Games

Firefighting games are so few and far between that each one differs a lot from all the others.

Jones on Fire

Jones On Fire has you run through a burning forest, endless runner style, while trying to save kittens.

Firefighters 2014

Firefighters 2014 is a more serious simulation. You have to direct firefighters and trucks to the optimal positions. Each fire needs a fitting response, as there are many kinds.

Conclusion

Making a game in a rare genre can give you a unique niche to explore. If the genre's only other games are successes, there's a clear demand without saturation. Take this chance!
















