Filtering damage

A more sophisticated damage model is something you should implement early in a game’s development. You may not plan to need it, but when your design changes it’s a good thing to have in place.

Why would we want this?
It may be that you want certain characters to respond only to particular damage types, like fire or high-velocity munitions. Or perhaps you want equipment to shield against a whole family of these types, like a flak vest that stops all explosive damage.

The idea is relatively simple – you set up a system that enables a receiver to take damage only from the types it allows. Whenever you create a character, you can specify what it is vulnerable or invulnerable to. When damage is sent, it carries with it the information that allows this filtering.

The fundamental idea behind its implementation is that the designer is able to create damage types, such as explosive, blunt and fire, and sub-types such as grenade, bullet and so on. These are then processed by the engine and assigned unique bitmasks for fast processing. You can then create weapons that deal this damage, triggers that watch for it (for example, a destructible wall vulnerable only to explosive damage), and characters who can listen for these types and process them how you choose.
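
As a rough illustration of the engine-side processing, here is a minimal C++ sketch of bitmask-based damage filtering (the names and structure are my own assumptions, not any particular engine’s API):

```cpp
// Minimal sketch of bitmask-based damage filtering (illustrative only).
#include <cstdint>

enum DamageType : uint32_t {
    DAMAGE_NONE      = 0,
    DAMAGE_BLUNT     = 1 << 0,
    DAMAGE_FIRE      = 1 << 1,
    DAMAGE_EXPLOSIVE = 1 << 2,
    DAMAGE_BULLET    = 1 << 3,
};

struct DamageEvent {
    uint32_t typeMask;  // which damage types this event carries
    float    amount;
};

struct DamageReceiver {
    uint32_t vulnerableMask;  // which types this receiver responds to
    float    health = 100.0f;

    void receive(const DamageEvent& e) {
        // Fast filtering: a single bitwise AND decides whether the damage applies.
        if (e.typeMask & vulnerableMask) {
            health -= e.amount;
        }
    }
};

// Example: a destructible wall that only explosives can damage.
// DamageReceiver wall{DAMAGE_EXPLOSIVE};
// wall.receive({DAMAGE_EXPLOSIVE, 50.0f});  // applies
// wall.receive({DAMAGE_BULLET, 10.0f});     // filtered out
```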

In an upcoming post I will outline the basic implementation of this system. Stay tuned!

How would we go about coding the memory marker?

In my experience, the most important thing when coding this system in its various forms is that your memory markers, both in code and in their references in script, must be nullable.

This gives us a very quick and easy way of wiping these markers when they are no longer needed, or of querying the null state to see whether the agent has no memory of something – and therefore whether we need to create it.

The first pass implementation of these markers simply has two rules:

  1. You update an AI’s marker for a character to that character’s location whenever the AI sees them.
  2. You make the AI’s logic (search routines and so on) act on this marker instead of the character.

It’s worth mentioning that each AI will need one of these markers for every character on an opposing team, and every object they must keep track of.

Because of this, it is useful to populate some kind of array with these markers.

Think, too, about how you can sort this list by priority. When the AI loses track of a target, they can grab the next marker in the list, which may be an objective or a pickup they passed.

When the list is empty, they fall back to their patrol state.
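
To make that concrete, here is a minimal sketch of what those nullable, prioritised markers might look like (the names and structure are assumptions for illustration, not any engine’s API):

```cpp
// Sketch: a per-AI list of nullable memory markers, picked by priority.
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };

struct MemoryMarker {
    int   targetId;   // the character or object this marker tracks
    Vec3  lastKnown;  // where the AI believes the target is
    float priority;   // used to pick the most important marker first
};

struct AIMemory {
    // One (optional) marker per opposing character or tracked object.
    std::vector<std::optional<MemoryMarker>> markers;

    // Returns the highest-priority marker, or nothing if memory is empty,
    // in which case the AI falls back to its patrol state.
    std::optional<MemoryMarker> best() const {
        std::optional<MemoryMarker> result;
        for (const auto& m : markers) {
            if (m && (!result || m->priority > result->priority)) {
                result = m;
            }
        }
        return result;
    }

    // Nullable markers make forgetting trivial: just clear the optional.
    void forget(int targetId) {
        for (auto& m : markers) {
            if (m && m->targetId == targetId) {
                m.reset();
            }
        }
    }
};
```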

Memory markers

Memory is something that is often overlooked in combat games. More often than not, when a character becomes aware of you in a combative action game, they remain aware until dead. Sometimes they run a countdown when they lose sight of the player, and lapse back into their patrol state if it ends before they find them.

Neither of these techniques looks particularly intelligent. The AI either looks unreasonably aware of you, or unrealistically gullible, in that they go about their business after they’ve lost track of you for a few seconds.

A memory marker is a simple trick (the UE4 implementation of which can be seen here) that allows you to update and play with the enemy’s perception. It is a physical representation of where the enemy ‘thinks’ the player is.

In its simplest form, it has two simple rules:

  • The AI uses this marker for searches and targeting instead of the character itself
  • The marker only updates to the player’s position when the player is in view of the AI

This gives you a number of behaviours for free. For example, the AI will look as if you have eluded them when you duck behind cover and they come to look for you there. Just from this minor change you now have cat-and-mouse behaviour that can lead to some very interesting results.
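
Reusing the hypothetical MemoryMarker struct sketched above, the update rule is only a few lines (aiCanSeePlayer stands in for whatever perception test your game already has):

```cpp
// Sketch: the marker only follows the player while the AI can actually see them.
void UpdateMemoryMarker(MemoryMarker& marker, const Vec3& playerPos, bool aiCanSeePlayer) {
    if (aiCanSeePlayer) {
        marker.lastKnown = playerPos;  // rule 2: update only while in view
    }
    // Otherwise marker.lastKnown keeps the last seen position, and searches and
    // targeting (rule 1) act on the marker rather than on the player's true position.
}
```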

I was pleased to see that Naughty Dog also use this technique. In this Last of Us editor screen-grab, you can see their enemy marker (white) has been disconnected from the hiding character.

It is also very extensible – in more complicated implementations (covered in future video tutorials) a list of these markers is maintained and acted upon. This lets us do things like have the AI notice a pickup when running after the player, and return to get it if they ever lose their target.

Using a ‘Razor’ to make a lean, focussed game.

Edit: Article now up on Gamasutra.

I’m not usually a fan of short, snappy phrases that sum up design concepts, because more often than not they end up being reductive and misleading. A Razor, however, while not entirely guilt-free in this regard, is a simple concept that I’ve seen used consistently enough for it to be worth covering.

The name comes from the fact that it helps you identify and cut unnecessary features. It is, however, a slight misnomer, because it also helps you design them, and generally keeps the creative team on track.

In short, it’s a phrase that serves as a small reminder about the core of your game.

Using this Razor for your game design

Often in design you find yourself paused, not by creative block or because you’re unable to see a solution to a problem, but because you have a variety of seemingly equal and interesting solutions in front of you that you have to choose between.

The idea is simple – you think of a way to define the core of your game, a summary of the experience, and then you encapsulate that in an easy-to-remember phrase. For example, “You are a swashbuckling pirate”. From that point onwards you make sure every part of your design validates this phrase. If you think of a feature you like, but it really doesn’t help achieve this feeling, then you cut it.

Of course, it works both ways – the realisation of your game’s true core helps you to think of gameplay that will further help you evoke this feeling in the player.

It’s a tool designed to keep your game lean, focussed and consistent. It’s meant to produce something where everything works together to push towards one overall feeling. Game mechanics, music, art, story – all should be working towards one goal.

To use a more relatable example, let’s imagine that the Razor for Uncharted 1 was “you are Indiana Jones”. Not only can the design team use this as a reminder when designing mechanics, but the level designers can use it to remember the heart of the game and how they should be thinking about movement through their spaces. Animators, too, can refer to it, and remember that in combat Drake is not supposed to be a master of martial arts, but rather someone who comes off as a scrappy street fighter – untrained but experienced.

Now it’s important also to point out that it’s not some magical phrase – really it’s an understanding the whole team has to come to. A true understanding of what the game is about at its core. The phrase is merely there to remind you of it, so don’t be too beholden to the words you’ve chosen – it’s the idea behind them that’s important.

Things to watch out for

It’s important that this phrase encapsulate the feeling the player should have when playing the game. It should not describe something the game should do – this is a common mistake. You may be tempted to use a phrase such as ‘everything is a weapon’ because it reminds you of one of the core tenets of your game, and in some ways it can help other aspects of the game’s creation, but it doesn’t help you make decisions, and more importantly, it certainly won’t help an animator decide what style to choose when blocking out a new animation.

It’s the overall feeling that you’re trying to convey – “You’re a ragtag group of mercenaries.” “You are inside a Grimm's fairytale.” “You are a scared civilian in a war with absolutely no training.”

A small, simple thing that should not be overstated, but a useful trick nonetheless.

The link between space and mechanics

You can also find this article on Gamasutra here.

Dependence between different game elements is an often-overlooked but fundamental component in any great design.

This article will deal specifically with the link between level-design and core gameplay.

This is important for three reasons:

  1. It stops your mechanics from becoming boring, as every new environment offers new opportunities for how you use them. It creates variety.
  2. This variety allows you to create a narrative of pressure throughout your encounters.
  3. It makes the player feel excited to look at a new area and wonder how they can take advantage of it to have fun.

Creating variety

The first point is fundamental. We've all played a game that eventually got boring, even though we at first enjoyed it. Often this is due to the game mechanics becoming repetitive. As designers it is our job to give players many interesting, challenging and varied situations to use them in.

Linking level design heavily to core mechanics is one way of doing this. You want space and verticality to play a part in how it feels to play your game. To give a more specific example, in a combat-heavy game you want fighting in a corridor to be a completely different experience from fighting in an open space.

This allows you to create variety without having to teach the player anything new, without having to complicate your core mechanics.

Gears of War is a good example of this. Enemies attempt to flank you, and so the combat experience changes drastically depending on how restricted the space is and how the cover is distributed. It is a good example, too, because while space may subtly vary the combat in other games, Gears of War pushes it to the forefront of the player’s mind. It’s clear and readable. This aspect of the design is made as noticeable as possible so that players can make meaningful decisions based on where they are – should they run, flank, look for a different weapon, or push on?

Varying pressure

When you have successfully created core mechanics that heavily rely on the surrounding environment, you can then use this knowledge to design your encounter flow.

Perhaps you want the player to enter the level in a calm moment, then have the pressure build to a crescendo before rewarding them with something fun and easy at the end.

Taking a simple combat example you might want to…

  1. (Calm moment) Start with some nice open environments
  2. (Build the pressure) Slowly have them close in
  3. (Crescendo) Then, at the hardest part of the encounter, have the player flanked by enemies on high ground, giving them the height advantage.
  4. (Reward) Have the player emerge onto a balcony, this time with the height advantage in their favour, where the low cover offers the enemies no protection against their potshots.

This allows you to build your levels from specific narrative requirements – what is the player supposed to be going through at this point in the story? This will help level designers work towards a goal, rather than scratching their heads over the dreaded blank piece of paper.

Creating excitement

The other important effect this has may sound trivial, but personally I think it matters a great deal. It is the excitement the player feels when they truly understand that different levels can mean whole new kinds of fun – the feeling you have when you walk into a new area of the game and start thinking about how you might use it to your advantage.

I often liken this to a child walking up to a playground they’ve never been to before. They know what a slide is, what a climbing frame is, and on seeing this new play-area their minds start working – they get excited simply by the prospect of what they will be able to do there.

So with this in mind it becomes more important to think about how you present each new area of the game. Think about the reveal, and about the readability of your interactive level components.

Also take this particular point with a pinch of salt – this readability is something I like personally as a designer. I think it adds a great deal, but it is definitely a preference and not a hard-and-fast rule.

So to wrap up, some simple questions to ask yourself:

  • Do my game’s core mechanics change significantly based on level layout?
  • Are we calling attention to how the environment affects core mechanics?
  • Are we presenting this to the player in a way that makes them feel excited about a new environment?

In the next few articles I will continue to talk about the dependence between game elements and how this can be used to create variety with your core mechanics, before wrapping up with a working model of design aimed at story-driven character-action games.

How would we script… The grenade-shooting moment from MGS5

In one of the recent MGS5 gameplay videos, there is a moment where the protagonist throws a grenade, only to have their companion AI shoot it into a helicopter that was otherwise almost impossible to take down. It’s the kind of over-the-top action we’ve come to expect from Hideo Kojima, but how would we implement something like this in one of our own games?

There are, of course, always an enormous number of ways this could be done. Here, we’ll look at just two:

  1. As a script attached to the helicopter
  2. As a more generic action available to the companion AI

The key here is to really start thinking about how we would structure the underlying code to give us the building blocks we would need to not only script this event, but many like it. Let’s think about the steps involved.

As a script attached to the helicopter

We might want to attach this script to the helicopter if it’s a non-generic action that can only occur when the helicopter is present.

  1. First we would want a generic way of testing for certain objects based on their proximity to other objects – in our case, the helicopter
  2. We would then want a system that lets us query by object type – again, for us, this would be the grenade
  3. We would probably want to extend this to check to see who threw the grenade, and check for the player as the owner
  4. Next we would want to check that the grenade was on the same side of the helicopter as our companion AI
  5. The companion AI would then be sent a signal to shoot the grenade
  6. Finally we would add the appropriate velocity to the grenade and destroy the helicopter
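
Putting those steps together, here is a self-contained sketch of how the helicopter-attached logic might read (every type and helper here is an illustrative stand-in, not a real engine API):

```cpp
// Sketch of the helicopter-attached script logic described in steps 1-6 above.
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3  operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    float length() const { return std::sqrt(dot(*this)); }
};

enum class ObjectType { Grenade, Other };

struct GameObject {
    ObjectType        type;
    Vec3              position;
    const GameObject* owner = nullptr;  // who threw or spawned it
};

struct CompanionAI {
    Vec3 position;
    void ShootAt(GameObject& target) { /* aim, fire, apply velocity to the target */ }
};

// 4. "Same side" test: grenade and companion lie on the same side of the helicopter.
bool OnSameSide(const Vec3& helicopter, const Vec3& grenade, const Vec3& companion) {
    return (companion - helicopter).dot(grenade - helicopter) > 0.0f;
}

// Run each frame while the helicopter is alive.
void UpdateHelicopterScript(const GameObject& helicopter, const GameObject& player,
                            CompanionAI& companion, std::vector<GameObject>& worldObjects,
                            bool& helicopterDestroyed) {
    for (GameObject& obj : worldObjects) {
        if (obj.type != ObjectType::Grenade) continue;                        // 2. query by type
        if ((obj.position - helicopter.position).length() > 20.0f) continue;  // 1. proximity test
        if (obj.owner != &player) continue;                                   // 3. thrown by the player
        if (!OnSameSide(helicopter.position, obj.position, companion.position)) continue;  // 4.

        companion.ShootAt(obj);      // 5. signal the companion to shoot the grenade
        helicopterDestroyed = true;  // 6. the deflected grenade takes the helicopter down
    }
}
```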

With this piece of logic we could write an event that would happen naturally if the player ever ended up in this particular situation. More importantly, though, by writing generic functions that let us query for objects, owners, positions and so on, we give ourselves the ability to script more events like this – equally unique ones – very quickly.

Maybe we could make it even more generic though. Let’s take another approach.

As a more generic action available to the companion AI

In all honesty it might be a better idea to code this as a generic action that the companion AI possesses, so that they can use it on multiple types of enemies. This is then not really a scripting task, but it’s still good to think about the steps involved.

Here we would probably want the AI to be constantly positioning itself in a way that is helpful to the player. Again, this will be core to the way the AI is coded, so it is not really a scripting task. You can see in the demo that the AI jumps next to the player before jumping away, signposting her position so that the player can better choose to use the action. Then, when a grenade is thrown, the AI performs a cone check to see if the grenade is going to pass between her and an enemy, and if it does, she shoots it towards them. This gives the AI the ability to shoot multiple objects at multiple enemies given the right positioning, which may result in some interesting opportunities for emergent gameplay.
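
That cone check is usually just a dot-product test. A minimal sketch, reusing the Vec3 helper from the previous sketch (the 15-degree threshold is an arbitrary assumption):

```cpp
// Sketch: does the grenade lie inside a cone from the companion towards an enemy?
#include <cmath>

bool GrenadeInCone(const Vec3& companion, const Vec3& enemy, const Vec3& grenade,
                   float coneHalfAngleDeg = 15.0f) {
    Vec3 toEnemy   = enemy   - companion;
    Vec3 toGrenade = grenade - companion;

    float denom = toEnemy.length() * toGrenade.length();
    if (denom <= 0.0f) return false;  // degenerate positions

    float cosAngle  = toEnemy.dot(toGrenade) / denom;
    float threshold = std::cos(coneHalfAngleDeg * 3.14159265f / 180.0f);
    return cosAngle >= threshold;     // inside the cone: shooting it sends it their way
}
```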

In actual fact there is an extra step in the video – the player actually asks her to perform this action, but it’s not really relevant to the core of the explanation.

State machine messages

Imagine you have a situation we often encounter in a game – a character approaches a ladder and begins to climb it.

What is really going on behind the scenes to make this happen? First we position the character under the ladder, where a trigger-box may alert the code that we can now press a button to enter a ladder-climbing state.

When we press this button, the character places their hands and legs on the ladder – an authored animation that takes a few seconds to play out. Now we need to switch to a different movement scheme so the character can move up and down the ladder. But how do we make sure we switch to this state at the right time, so that we don’t try to start moving the character before they’ve finished playing this animation?

The answer is to catch a message sent from the animation state machine. This is a feature almost all commercial engines now offer. You can set up specific transitions or states to send a message when they have been entered or left, and you can then use these messages to trigger code at exactly the right time. You can also often set up specific markers in an animation to send out a custom message at a specified time.
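
As a rough, engine-agnostic sketch of the idea (the message names and callback mechanism are assumptions, not any particular engine’s API), gameplay code simply subscribes to messages that the animation state machine broadcasts:

```cpp
// Sketch: gameplay code listening for messages sent from an animation state machine.
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

class AnimMessageBus {
public:
    using Callback = std::function<void()>;

    // Gameplay code registers for a named message, e.g. "EnterLadderFinished".
    void Subscribe(const std::string& message, Callback cb) {
        listeners[message].push_back(std::move(cb));
    }

    // Called by the state machine when a state or transition is entered or left,
    // or when an animation reaches a custom marker.
    void Broadcast(const std::string& message) {
        for (auto& cb : listeners[message]) cb();
    }

private:
    std::unordered_map<std::string, std::vector<Callback>> listeners;
};

// Usage: only switch to the ladder movement scheme once the mount animation has finished.
// messages.Subscribe("EnterLadderFinished", [&] { character.SetMovementMode(Ladder); });
```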

This two-way communication is a key tool that we use in scripting complex gameplay, as we will see in upcoming blog posts.

Root-motion

Root motion is simply the ability to drive a character’s movement from the animation itself, via its root. This means we are not animating a mesh away from the point where the programmer has placed the character, but actually influencing where the character moves directly from the animation.

This becomes especially useful when an animation contains quite complicated movement. Imagine a character has to climb up a wall: at first jumping, then grabbing on and holding, before finally heaving themselves over. This is not a linear movement for the character. If we simply had this animation with no root motion, the programmer would have to try to code movement that matches up with it – a very hard task. Instead, we position the character at the starting position with code, hand control over to the animation that executes the move, and return control to the code on completion.
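
A very rough sketch of that hand-off might look like this (PlayAnimation and the other names are placeholders for illustration, not a real engine API):

```cpp
// Sketch: code places the character, root motion takes over, then control returns to code.
#include <functional>
#include <string>

struct Vec3 { float x, y, z; };

enum class MoveControl { Code, Animation };

struct Character {
    Vec3        position{};
    MoveControl control = MoveControl::Code;

    // Stand-in for an engine call that plays a root-motion clip and
    // invokes a callback when it finishes.
    void PlayAnimation(const std::string& name, std::function<void()> onFinished);

    void BeginWallClimb(const Vec3& startPos) {
        position = startPos;                // code snaps the character to the start point
        control  = MoveControl::Animation;  // root motion now drives the character's movement
        PlayAnimation("WallClimb", [this] {
            control = MoveControl::Code;    // hand control back to code on completion
        });
    }
};
```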

In most modern game engines you are able to mix root-motion and non-root-motion animations. For example, you may want your character’s movement to be driven by code for the sake of consistency, but ‘leaning-out-of-cover’ animations might be better off using root motion, so that the movement can be less linear.

Blend-spaces

A blend space is a concept used in animation scripting where several animations are assigned a position on a graph, and one or more input parameters determine what mix of those animations should be playing at any given moment.

Here we see an example of a one-dimensional blend space. Speed controls where on the graph the blend is calculated to be. The bottom node represents the idle animation, where speed is equal to zero. Above that are the Run and Walk nodes, and the current blend position (in orange).

The player moves the character forward, and this speed value is fed into the blend space to make sure that the animation syncs up with the movement.
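
To make the idea concrete, here is a rough sketch of how a one-dimensional blend might be evaluated from the speed parameter (the sample values and names are illustrative, not a specific engine’s implementation):

```cpp
// Sketch: evaluating a 1D blend space. Each sample pins an animation to a speed
// value; the current speed blends between the two nearest samples.
#include <string>
#include <vector>

struct BlendSample {
    std::string animation;
    float       speed;    // the parameter value this animation is anchored to
};

struct BlendResult {
    std::string animA, animB;
    float       weightB;  // 0 = fully animA, 1 = fully animB
};

BlendResult Evaluate1DBlend(const std::vector<BlendSample>& samples, float speed) {
    // Assumes samples are sorted by speed, e.g. Idle(0), Walk(150), Run(375).
    for (size_t i = 0; i + 1 < samples.size(); ++i) {
        if (speed <= samples[i + 1].speed) {
            float range = samples[i + 1].speed - samples[i].speed;
            float t = range > 0.0f ? (speed - samples[i].speed) / range : 0.0f;
            return {samples[i].animation, samples[i + 1].animation, t};
        }
    }
    // Past the last sample: play the final animation fully.
    return {samples.back().animation, samples.back().animation, 1.0f};
}

// Usage: Evaluate1DBlend({{"Idle", 0.f}, {"Walk", 150.f}, {"Run", 375.f}}, 220.f)
// returns a Walk/Run mix weighted towards Walk.
```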

A 2D blend space is exactly what it sounds like – many animations can be mapped onto a graph with two dimensions rather than just one. A common example of this would be a character’s base movement. Below we can see many different animations plotted on a graph representing direction along one axis and speed along the other. Using this, the scripter is able to create a state for an animation state machine that takes these two inputs into account to play a blend that will see the character turning and running in the correct way.

Animation State Machines

This is one of several small posts that aim to give an overview of animation technology in current games, as part of an ongoing scripting tutorial series.

Animation state machines are a core concept used in modern computer games, and are a necessary component for us to understand and work with characters and scripted moments.

There are three main components that make up an animation state machine:

States

These contain an animation. When a state is active, the character is playing this animation. You may be able to alter the way the animation plays, but nothing more.
The only caveat is that states can also contain blend-spaces, which we will cover in an upcoming blog post.

Transitions

These are the links between states. They define which animations are allowed to blend between each other. For example, you may add transitions to allow a character to blend from Idle to Walking, but not straight from Idle to Running.

Notice there are no links between Idle and Run

Rules

These are the sets of conditions that you write which, when met, allow one state to move through a transition to another state.
They are heavily parameter-driven – as an example, an increase in speed may trigger the transition from the idle state to the walking state.
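
Putting the three components together, a stripped-down state machine might look something like this (a sketch for illustration, not how any particular engine implements it):

```cpp
// Sketch: states, transitions and parameter-driven rules in a tiny state machine.
#include <functional>
#include <string>
#include <vector>

struct AnimParams { float speed = 0.0f; };

struct Transition {
    std::string from, to;
    std::function<bool(const AnimParams&)> rule;  // when true, the transition fires
};

class AnimStateMachine {
public:
    AnimStateMachine(std::string initial, std::vector<Transition> t)
        : current(std::move(initial)), transitions(std::move(t)) {}

    void Update(const AnimParams& params) {
        for (const auto& t : transitions) {
            if (t.from == current && t.rule(params)) {
                current = t.to;  // the active state (and so the playing animation) changes here
                break;
            }
        }
    }

    const std::string& CurrentState() const { return current; }

private:
    std::string current;
    std::vector<Transition> transitions;
};

// Usage: Idle <-> Walk <-> Run, with no direct Idle <-> Run links.
// AnimStateMachine sm("Idle", {
//     {"Idle", "Walk", [](const AnimParams& p) { return p.speed > 10.0f;  }},
//     {"Walk", "Run",  [](const AnimParams& p) { return p.speed > 200.0f; }},
//     {"Run",  "Walk", [](const AnimParams& p) { return p.speed < 200.0f; }},
//     {"Walk", "Idle", [](const AnimParams& p) { return p.speed < 10.0f;  }},
// });
```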

Here is an example rule from UE4

There are some more advanced elements to animation state machines, such as the way they handle additive animations, encapsulate other state machines, and send messages based on their state. These will all be covered in upcoming blog posts.