How would we script… the grenade-shooting moment from MGS5

In one of the recent MGS5 gameplay videos, there is a moment where the protagonist throws a grenade, only to have their companion AI shoot it into a helicopter that was otherwise almost impossible to take down. It’s the kind of over-the-top action we’ve come to expect from Hideo Kojima, but how would we implement something like this in one of our own games?

There are, of course, an enormous number of ways this could be done. Here, we’ll look at just two:

  1. As a script attached to the helicopter
  2. As a more generic action available to the companion AI

The key here is to really start thinking about how we would structure the underlying code to give us the building blocks we would need to not only script this event, but many like it. Let’s think about the steps involved.

As a script attached to the helicopter

We might want to attach this script to the helicopter if it’s a non-generic action that can only occur when the helicopter is present.

  1. First we would want a generic way of testing for certain objects based on proximity to other objects – in our case, the helicopter
  2. We would then want a system that lets us query by object type – for us, this would be the grenade
  3. We would probably want to extend this to check who threw the grenade, testing for the player as the owner
  4. Next we would want to check that the grenade was on the same side of the helicopter as our companion AI
  5. The companion AI would then be sent a signal to shoot the grenade
  6. Finally we would add the appropriate velocity to the grenade and destroy the helicopter
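To make the steps above concrete, here is a minimal sketch in Python. The dictionaries, field names and the 10-unit radius are all illustrative assumptions, not from any particular engine:

```python
import math

def find_nearby_objects(world, center, radius, object_type):
    """Steps 1 and 2: a generic proximity + object-type query."""
    found = []
    for obj in world:
        dx = obj["pos"][0] - center[0]
        dy = obj["pos"][1] - center[1]
        if obj["type"] == object_type and math.hypot(dx, dy) <= radius:
            found.append(obj)
    return found

def same_side(pivot, a, b):
    """Step 4: are points a and b on the same side of the pivot?
    (Simplified to comparing the sign of the offset along one axis.)"""
    return (a[0] - pivot[0]) * (b[0] - pivot[0]) > 0

def check_grenade_shot(world, helicopter, companion, player):
    """Returns the grenade the companion should shoot, or None."""
    for grenade in find_nearby_objects(world, helicopter["pos"], 10.0, "grenade"):
        if grenade["owner"] == player["id"]:  # step 3: thrown by the player
            if same_side(helicopter["pos"], grenade["pos"], companion["pos"]):
                return grenade  # steps 5 and 6: signal the AI, apply velocity
    return None
```

Steps 5 and 6 are left as a comment here – in a real engine they would be a message to the companion AI and a physics impulse on the grenade.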

With this piece of logic we could write an event that would happen naturally if the player ever ended up in this particular situation. More importantly, though, in writing generic functions to let us query for objects, owners, positions and so on, we give ourselves the ability to script more events like this – ones that are equally unique – very quickly.

Maybe we could make it even more generic though. Let’s take another approach.

As a more generic action available to the companion AI

In all honesty, it might be a better idea to code this as a generic action that the companion AI possesses, so that they can use it on multiple types of enemies. That makes it an AI programming task rather than a scripting one, but it’s still good to think through the steps involved.

Here we would probably want the AI to constantly position herself in a way that is helpful to the player – again, this will be core to how the AI is coded, so it is not really a scripting task. You can see in the demo that the AI jumps next to the player before jumping away, signposting her position so the player can better choose when to use the action. Then, when a grenade is thrown, the AI performs a cone check to see if the grenade will pass between her and an enemy, and if it does, shoots it towards them. This gives the AI the ability to shoot multiple objects at multiple enemies given the right positioning, which may open up some interesting opportunities for emergent gameplay.
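The cone check at the heart of this action could be sketched as a dot product between the AI-to-grenade and AI-to-enemy directions. The 15-degree half-angle is an invented tuning value:

```python
import math

def in_cone(ai_pos, grenade_pos, enemy_pos, half_angle_deg=15.0):
    """True if, from the AI's point of view, the grenade lies within a
    narrow cone towards the enemy, i.e. roughly in between them."""
    to_grenade = (grenade_pos[0] - ai_pos[0], grenade_pos[1] - ai_pos[1])
    to_enemy = (enemy_pos[0] - ai_pos[0], enemy_pos[1] - ai_pos[1])
    len_g = math.hypot(*to_grenade)
    len_e = math.hypot(*to_enemy)
    if len_g == 0 or len_e == 0:
        return False
    # cosine of the angle between the two directions
    dot = (to_grenade[0] * to_enemy[0] + to_grenade[1] * to_enemy[1]) / (len_g * len_e)
    return dot >= math.cos(math.radians(half_angle_deg))
```

If the check passes while the grenade is in flight, the AI would fire and the grenade would be given a new velocity towards the enemy.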

In actual fact there is an extra step in the video – the player actually asks her to perform this action – but it’s not really relevant to the core of the explanation.

State machine messages

Imagine you have a situation we often encounter in a game – a character approaches a ladder and begins to climb it.

What is really happening behind the scenes to make this happen? First we would position the character under the ladder, where a trigger-box may alert the code that we can now press a button to enter a ladder climbing state.

When we press this button, the character places their hands and legs on the ladder – an authored animation that takes a few seconds to play out. Now we need to switch to a different movement scheme so the character can move up and down the ladder. But how do we make sure we’re switching to this state at the right time so that we don’t try and start moving the character before they’ve finished playing this animation?

The answer is to catch a message sent from the animation state machine. This is a feature almost all commercial engines now offer. You can set up specific transitions or states to send a message when they have been entered or left, and you can then use these messages to trigger code at exactly the right time. You can also often set up specific markers in an animation to send out a custom message at a specified time.
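Here is a toy sketch of the idea – the class and message names are made up rather than any specific engine’s API, but the shape is the same: the state machine notifies a listener whenever a state is entered or left, and gameplay code reacts at exactly the right time:

```python
class AnimState:
    """A state in the animation state machine."""
    def __init__(self, name):
        self.name = name

class AnimStateMachine:
    """Toy state machine that sends a message whenever a state is
    entered or left, mimicking the callbacks engines expose."""
    def __init__(self, listener):
        self.listener = listener   # gameplay code receives (message, state_name)
        self.current = None

    def transition_to(self, state):
        if self.current is not None:
            self.listener("state_left", self.current.name)
        self.current = state
        self.listener("state_entered", state.name)

# Gameplay code listens for the mount animation finishing before it
# switches the character over to the ladder movement scheme.
messages = []
sm = AnimStateMachine(lambda msg, name: messages.append((msg, name)))
sm.transition_to(AnimState("MountLadder"))
sm.transition_to(AnimState("ClimbLoop"))   # only now enable ladder movement
```

When the "state_left" message for MountLadder arrives, we know the authored animation has finished and it is safe to hand movement control to the climbing scheme.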

This two-way communication is a key tool that we use in scripting complex gameplay, as we will see in upcoming blog posts.


Root motion

Root motion is simply the ability to animate a character from its origin. This means we are not animating a mesh away from the point that the programmer has placed it, but directly influencing where the character moves from the animation itself.

This becomes especially useful when an animation contains quite complicated movement. Imagine a character has to climb up a wall – at first jumping, then grabbing on and holding, before finally heaving themselves over. This is not a linear movement for the character. If we simply had this animation with no root motion, the programmer would have to try to code movement that matched up with it – a very hard task. Instead, we position the character in the starting position with code, hand over control to the animation that executes the move, and return control to the code on completion.
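A minimal sketch of that hand-over, assuming the animation stores a per-frame track of root positions (the track values and the climb itself are invented for illustration):

```python
def apply_root_motion(position, root_track, frame):
    """Move the character by the root displacement the animation
    recorded between this frame and the previous one."""
    dx = root_track[frame][0] - root_track[frame - 1][0]
    dy = root_track[frame][1] - root_track[frame - 1][1]
    return (position[0] + dx, position[1] + dy)

# A non-linear wall climb: jump up, hang, then heave over the top.
climb_track = [(0.0, 0.0), (0.0, 1.0), (0.5, 2.0), (1.0, 2.5)]
position = (10.0, 0.0)   # code places the character at the start point
for frame in range(1, len(climb_track)):
    position = apply_root_motion(position, climb_track, frame)
# control returns to code here, with the character on top of the wall
```

Note that the code never tries to describe the curve of the climb itself – it only applies whatever displacement the animator authored.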

In most modern game engines you are able to have a mix of root motion and non-root motion animations. For example, you may want your character’s movement to be driven by code for the sake of consistency, but ‘leaning-out-of-cover’ animations might be better off using root motion, so that the movement can be less linear.


Blend spaces

A blend space is a concept used in animation scripting where several animations are assigned a position on a graph, and one or more input parameters determine what mix of which animations should be playing at the current moment.

Here we see an example of a one-dimensional blend space. Speed controls where the blend is calculated to be on the graph. The bottom node represents the idle animation, where speed is equal to zero. Above that are the Walk and Run nodes, and the current blend position (in orange).

The player moves the character forward, and this speed value is fed into the blend space to make sure that the animation syncs up with the movement.
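A sketch of how those blend weights might be computed, with made-up speed thresholds for the Idle, Walk and Run nodes:

```python
def blend_weights_1d(nodes, value):
    """nodes: list of (position, animation_name), sorted by position.
    Returns {animation_name: weight} for a linear blend at `value`."""
    if value <= nodes[0][0]:
        return {nodes[0][1]: 1.0}       # below the first node: play it fully
    if value >= nodes[-1][0]:
        return {nodes[-1][1]: 1.0}      # past the last node: play it fully
    for (p0, a0), (p1, a1) in zip(nodes, nodes[1:]):
        if p0 <= value <= p1:
            t = (value - p0) / (p1 - p0)   # how far between the two nodes
            return {a0: 1.0 - t, a1: t}
```

Fed the character’s current speed each frame, this produces the mix that keeps the animation in sync with the movement.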


A 2D blend space is exactly what it sounds like – many animations mapped onto a graph with two dimensions rather than just one. A common example of this would be a character’s base movement. Below we can see many different animations plotted on a graph representing direction along one axis and speed along the other. Using this, the scripter is able to create a state for an animation state machine that takes these two inputs into account, playing a blend that will see the character turning and running in the correct way.
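Real engines typically triangulate the sample points and blend within a triangle; as a simpler stand-in, this sketch weights each sample by the inverse of its distance to the input. The node positions and names are invented:

```python
import math

def blend_weights_2d(nodes, direction, speed):
    """nodes: list of ((direction, speed), animation_name).
    Inverse-distance weighting: a simple stand-in for the
    triangulation that real engines use."""
    weights = {}
    for (x, y), name in nodes:
        d = math.hypot(direction - x, speed - y)
        if d < 1e-6:
            return {name: 1.0}   # exactly on a sample point
        weights[name] = 1.0 / d
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```

Samples near the current (direction, speed) input dominate the mix, so the character leans towards the closest authored animations.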

Animation State Machines

This is one of several small posts that aim to give an overview of animation technology in current games, as part of an ongoing scripting tutorial series.

Animation state machines are a core concept used in modern computer games, and are a necessary component for us to understand and work with characters and scripted moments.

There are three main components that compose animation state machines:


States

These contain an animation. When a state is active, the character is playing this animation. You may be able to alter the way the animation plays, but nothing more.
The only caveat is that states can also contain blend spaces, which we will cover in an upcoming blog post.


Transitions

These are the links between states. They define which animations are allowed to blend between each other. For example, you may add transitions to allow a character to blend from Idle to Walking, but not straight from Idle to Running.

Notice there are no links between Idle and Run



Rules

These are the sets of conditions that you write which, when met, allow one state to travel through a transition to another state. They are heavily parameter-driven – as an example, increasing speed may transition from the Idle to the Walking state.
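Putting the three components together, a toy version might look like this. The state names, parameter names and speed thresholds are all illustrative:

```python
class StateMachine:
    """States hold an animation, transitions define which blends are
    allowed, and rules decide when a transition may fire."""
    def __init__(self, initial_state):
        self.state = initial_state
        self.transitions = []          # (from_state, to_state, rule)

    def add_transition(self, src, dst, rule):
        self.transitions.append((src, dst, rule))

    def update(self, params):
        for src, dst, rule in self.transitions:
            if src == self.state and rule(params):
                self.state = dst       # travel through the transition
                return

sm = StateMachine("Idle")
sm.add_transition("Idle", "Walk", lambda p: p["speed"] > 10)
sm.add_transition("Walk", "Run", lambda p: p["speed"] > 200)
sm.add_transition("Walk", "Idle", lambda p: p["speed"] <= 10)

# No Idle -> Run link: even at a sprint, the machine passes through Walk.
states_seen = [sm.state]
sm.update({"speed": 250}); states_seen.append(sm.state)
sm.update({"speed": 250}); states_seen.append(sm.state)
```

Because there is no Idle-to-Run transition, a sudden sprint still blends through the Walk state, exactly as described above.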

Here is an example rule from UE4


There are some more advanced elements to animation state machines, such as the way they handle additive animations, encapsulate other state machines and send messages based on their state. These will all be covered in upcoming blog posts.

The importance of NOT optimising

I already talked briefly about the importance of not trying to optimise when prototyping, but as it turns out, not optimising is generally something I would recommend almost all of the time.

This seems like a very odd thing to be recommending, but here are some reasons why.

  • You may not need to. If your game is running at 60fps, there is simply no need to waste time optimising. Remember that your time is a precious resource – you only have so much of it to make a game. Wait until you actually need to optimise before you spend time doing it.
  • Always profile first. When your game does begin to run slower, don’t second-guess your code and spend weeks or months trying to make everything faster. It will amaze you what ends up being the problem. Use a profiler to actually show you what is taking the most time each frame, and focus in on that.
  • You probably aren’t helping anyway. Optimisation is a very in-depth subject; something that you think needs optimising may actually be optimised by the compiler anyway, and your efforts could even make things worse.
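As an example of "profile first", here is how you might do it with Python’s built-in cProfile module – the function being measured is just a stand-in for your game code:

```python
import cProfile
import io
import pstats

def expensive_update():
    """Stand-in for whatever your game does each frame."""
    total = 0
    for i in range(100000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
expensive_update()
profiler.disable()

# Print the five entries with the highest cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()   # the top entries show where the time really goes
```

Whatever engine or language you use, the workflow is the same: measure, find the genuinely hot code, and only then optimise.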

If in doubt, ALWAYS write readable code over optimised code. Even in AAA, code that someone else can jump into is almost always preferred over fast code (unless we’re talking about very low-level systems), because when people leave, someone else will have to try to understand it.

Tips for rapid prototyping

Rapid prototyping is an important part of developing any game, and is an especially necessary skill when working on big titles. The ability to quickly produce something that plays well and demonstrates your idea at the beginning of a project can mean the difference between the game adopting something new and unique, or just falling back on a standard design that has been lingering in the industry for decades.

The key is that in that brief prototyping phase, you have to be able to produce something that makes an impression, and you have to be able to do it fast.

So, here is a quick collection of tips to really focus on.

  • Your prototype should be throwaway. This means you don’t waste time optimising or writing good code you plan to use later. Also, scrapping something and writing the system again is a really great way of working out the best way to implement it. Just like walking away from a tough problem, your brain has an amazing way of solving problems for you when you’re not thinking about them.
  • Use third party tools. Sometimes you have to convince the studio to let you use a third party tool such as UE4 or Unity, but it’s almost always worth it as most custom game engines take far longer to get content into them. Again, push on the fact that these things are throwaway, so it’s ok if a quick mock-up is created in some other tool.
  • Don’t be afraid of messy code. The point is to make this fast, not well. A nice tip is to encapsulate things (if you happen to be coding) in blocks of empty scope instead of using up time creating new classes.
         // a simple block like this can encapsulate variables
         // without the overhead of writing a new class
         {
             int tempAmmo = 30; // locals here stay out of the wider scope
         }
  • Don’t use anything other than basic container types. In fact, don’t attempt to optimise in general. This applies to all kinds of scripting. Your time is the most precious resource here, so use it as sparingly as you can.
  • Look and feel is important. This is a little counter-intuitive, but despite the above, you have to remember that you can’t rely on people filling in the blanks when they play it the way you do. It has to communicate what you are trying to achieve, so this often means camera shake, hit-reactions, some basic effects, sounds and so on. These are linked with the basics of what makes something feel good, so you can’t afford to miss them out.
  • Sometimes polish will save your idea. If you’re lucky enough to work with leads that can see past cubes and debug text, then great, but recognise if this is not the case and get a modeller and animator to throw together a quick model and some basic animations before you show anyone. Showing a prototype too early can mean its death before you’ve had a chance to complete it.

Dijkstra and A* search algorithms

Since we covered navmeshes briefly, I also thought it would be good to provide a quick overview of the pathfinding algorithms that underpin them.

This knowledge will serve as a good jumping-off point if you want to delve deeper, but can also be very useful in recognising what might be happening with an AI when it is not behaving as expected.

Dijkstra’s Algorithm

This is the most basic search algorithm that we would reasonably use in a 3D video game, and it is very closely related to A* search, the industry-standard algorithm, so it’s a good place to start.

The concept is very simple.

  1. From your starting point you search every adjacent space and place them into a list representing the frontier of the search
  2. After all of the adjacents have been explored, you search the adjacents of the frontier nodes
  3. You repeat this over and over until you find the goal node
  4. You then trace back through the adjacencies until you have a path back to the start
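A minimal sketch of those steps on a grid. With equal step costs the frontier behaves breadth-first, which is Dijkstra’s algorithm in its simplest form; the grid and wall representation are assumptions for the example:

```python
from collections import deque

def dijkstra_grid(walls, start, goal, width, height):
    """Uniform-cost frontier search on a grid, tracing parents back
    for the path. Assumes the goal is reachable."""
    frontier = deque([start])
    came_from = {start: None}      # also marks nodes as explored
    while frontier:
        current = frontier.popleft()
        if current == goal:
            break
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls and nxt not in came_from):
                came_from[nxt] = current
                frontier.append(nxt)
    # step 4: walk back through the parents to build the path
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]
```

Every node in the frontier is expanded before moving outward, which is why the search floods evenly in all directions.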

A* Algorithm

This is simply an optimisation of Dijkstra. It uses a concept called a heuristic – an estimation of how close each node is to the goal node. For this example we will use the straight-line distance to the goal node (the most common heuristic).

  1. From the starting point you place each adjacent node into a priority queue representing the frontier of the search, ordered by its cost so far plus its heuristic value
  2. You then expand the frontier node with the lowest combined cost
  3. You continue to do this until you reach the goal node
  4. You then trace back through the nodes until you have a path back to the start

The difference here is that instead of searching all adjacencies before you expand the frontier, you are using the heuristic to guess which direction the search should head in to reach the goal faster.
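The same grid search with the heuristic added – this sketch keeps the frontier in a priority queue ordered by cost-so-far plus straight-line distance to the goal:

```python
import heapq
import math

def a_star_grid(walls, start, goal, width, height):
    """A*: expand the frontier node with the lowest g + h, where h is
    the straight-line distance heuristic to the goal."""
    def h(node):
        return math.hypot(goal[0] - node[0], goal[1] - node[1])

    frontier = [(h(start), start)]     # (priority, node) min-heap
    came_from = {start: None}
    cost = {start: 0}                  # g: steps taken so far
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            break
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height) or nxt in walls:
                continue
            new_cost = cost[current] + 1
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt] = new_cost
                came_from[nxt] = current
                heapq.heappush(frontier, (new_cost + h(nxt), nxt))
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]
```

Because the straight-line heuristic never overestimates the real distance, the path found is still optimal – A* just explores far fewer nodes on the way.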

Of course this isn’t always faster, as shown in this example. However, this is an edge case, and it is still slightly more efficient than Dijkstra searching the same space.

Here our heuristic takes us towards a wall before it finally finds its way around


Dijkstra is still less efficient, however.


Navigation meshes explained

As a gameplay scripter we work a lot with AI, and so it’s important to understand the basics of how these systems are put together so we have a better understanding of how to work with them.

A little history

Historically, AI navigation was handled with waypoint grids. These use a search algorithm to query the location of an AI and work out the order in which it needs to travel between nodes to reach the desired location, resulting in a path it can follow.

This was fine for a long time, until someone came up with the concept of a navigation mesh. These have an enormous number of advantages, which we’ll cover briefly.

For pathfinding they actually work the same way as waypoint grids, only each point is associated with a convex polygon. An initial path is still calculated using the central points of each polygon.

The difference is that the paths then go through two phases of optimisation.

The first is a culling phase, in which the grid works out if there are any unnecessary links.

The path then runs a smoothing algorithm using the bounds of the polygons, with a radius for the agent that sets how extremely or subtly the path is smoothed.

This gets rid of the strange zigzagging we used to see in games – the result of sticking to the links on a waypoint grid.
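The culling phase can be sketched as a line-of-sight walk along the path. Here `has_line_of_sight` is a stand-in for the engine’s actual navmesh raycast, and the path is just a list of waypoints:

```python
def cull_path(path, has_line_of_sight):
    """Culling phase: drop waypoints we can skip because a straight
    line to a later waypoint stays on the navmesh."""
    if len(path) < 3:
        return list(path)
    culled = [path[0]]
    anchor = 0                      # last waypoint we committed to
    for i in range(2, len(path)):
        if not has_line_of_sight(path[anchor], path[i]):
            anchor = i - 1          # keep the last visible waypoint
            culled.append(path[anchor])
    culled.append(path[-1])
    return culled
```

The smoothing phase would then round off the remaining corners using the polygon bounds and the agent’s radius.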

Further notes

On their own, these optimisations would be enough of a reason to use navmeshes over grids, but there are some added benefits that come with this implementation.

Path correction
AI can use their knowledge of polygon bounds to correct their paths around obstacles, as long as they remain within the bounds of the mesh.

Smarter actions
The AI can make sure that they have enough space to complete an action, such as a dodge-roll, so that they avoid doing stupid-looking things like rolling into a wall, or worse – rolling off an edge.

Multiple agents
Navmeshes allow for multiple agents with different navigation requirements, such as larger or smaller AIs, or AIs with restrictions based on turning circles (things like planes, boats or vehicles), all using one set of navigation data.

How to explore a new game engine

Whether you are working with well documented, widely used engines like Unreal, or using a custom set up on a big project, the ability to quickly find your way around a new toolset or codebase is an important skill. Often you need to get up and running fast, and get over that hump where you start to be fluent with what you’re using.

This is often a daunting or seemingly impossible process for people with less experience. Many people have recently asked me what to do in these situations, and the answer is almost too obvious, and too un-technical to even occur to most.

The key is to actually use basic search functionality to find something relevant and then to expand from that point. From there you then dig down into anything relevant, and also explore any functions that you don’t immediately understand.

When all else fails sometimes you have to fall back on these most basic techniques. It sounds like strange advice, but when there is no documentation, and no obvious place to start, this is how you begin to piece together your knowledge of a toolset.

And be aware that this can take years. I have worked on projects where large teams are still not 100% familiar with all of an engine’s facets after using it for multiple years.


Think of synonyms to search for. If you’re trying to find out how the AI works, search for character names and commonly used keywords. Drill into base classes, then search for those and see where they’re used. You’re essentially trying to find an anchor point to build up your knowledge base from.

With commercial engines like UE4 you have the benefit of being able to search their huge list of nodes, and documentation. Pull them out and see what they do. See what they link to and be aware that this knowledge will start to gel together the more you absorb.

With something like Unity, for example, searching in-engine is less straightforward. Here documentation is your friend. If you don’t know how to do something, don’t agonise for hours – just google it and get used to looking up documentation pages and learning how to read them. It is a skill you will need time and time again on new projects, so get used to the idea of jumping in at the deep end.

And as one final piece of advice: if you’re lucky enough to be using a commercially available engine, the associated AnswerHub or Stack Exchange websites are a brilliant resource.