What if I suck at game development?

So this is kind of a follow-up to the last post on switching to Unity. Basically I’m now making an actual game in Unity, instead of making an engine for a game in C++/OpenGL. It’s massively changed what I’m doing day to day; instead of implementing new engine features (collision detection, graphics pipeline, etc.) I’m adding new game features. I can see pieces falling together so much faster, but of course there’s a massive catch: what if I’m not as good a game developer as I was an engine developer? It’s not unlike moving to a completely new role at work. I was quantifiably good at writing game engines – my code worked, mostly followed best practices, very few bugs, appropriate optimisations… all of the desirable qualities. But I’m not an engine programmer any more, I’m a game developer or something!

Here’s a good example – I recently completely removed a collectible resource from my game. Water was identical to food in every way (turned into biowaste by people, replenished by food synthesisers), so there was literally no point in having it. Was that the right thing to do? Would it have been better if water acted differently – if it was turned into sewage that had to be treated separately? What if there was a slight drain so the player was always under some pressure to get more? I have theories about how these alternatives would play out, but I chose what I thought to be the best option. I will literally never know if that was the right choice. At least with engine design I could quantify my options – one method uses X more texture lookups but needs Y% fewer CPU cycles, for example.

The answer is probably in making smaller games, or even game jams, as an exercise in understanding game design and development. I find these super hard to work on though, particularly as I just can’t art so I feel that everything I make looks shitty, which evolves into thinking the game itself is shitty. I probably need to just get on with it and stop complaining!

I guess I’m just scared because whilst this is new territory, I’ve been on the sidelines looking in for the last decade and I think that I should know exactly how it all works. Or maybe I just know enough about game design now to also know how bad at it I am? The really scary part is that I’ve defined myself as a game programmer for a couple of years now, and I can’t imagine what I’d think of myself if I sucked at it.

 

Are you making a game or an engine?

When I started creating Impulse, I was writing C++ in Code::Blocks and using SDL as a graphics library. Of course, I was writing my own engine because that’s what real programmers do and graphics engine technology is really interesting. Culling, shaders, optimisations and most of the graphics pipeline all had a bunch of really cool, interesting problems to solve. But a few months back, whilst fixing the nth bug in a specific part of my optimisation algorithm that would crash once per 5-10 loads (memory bugs woo!), I found myself asking a pretty important question:
Am I making a game, or an engine?

I’d made something that looked a little like a game, sure – it had one level, no menu, no multiplayer, no save function, no level logic and was no fun. But look at the glow function I wrote! It does a full screen glow in real time on HD mobile screens without burning through texel lookups! And horrible memory bugs only crash the user’s device once per hundredish launches.

In retrospect I can see what I was doing, but it was very difficult to notice at the time – after all, I was making progress, right? Turns out that making progress on your game engine isn’t the same as making progress on your game – they are separate entities!

As you can guess, I switched to Unity and holy magicarp, game development is fun again! I can just throw in a couple dozen lines to do something I was working on for weeks before. It actually only took about 2 weeks before I was at the same point as I was at in my own engine, and I hadn’t used Unity before, nor had any training. My primary worry when I was switching was speed – I do a lot of ray casts for the physics (4 per car per frame) and my custom solution was lightning fast. It turns out that the developers behind Unity are actually paid to work on improving Unity! Unlike most indie developers, the Unity guys can just pump programmer hours into making features fast and reliable, and that’s exactly what they do. What I’m getting at is that a large, organised team paid to work on a feature is probably going to do a better job than 99% of indie developers have time to do. This is particularly true for complex or math-heavy portions like graphics and collision detection.

I don’t really care much for C# (Unity’s language of choice) but it gets the job done well enough, and as an added bonus there’s a wealth of features you don’t need to implement yourself. Want to encrypt your save file? Just drop in 10 lines from the crypto library and you’re using a decent enough implementation of modern crypto – easy and fast.

There are of course a bunch of downsides to Unity, but nothing that really outweighs the advantages; certainly no showstoppers. Probably the biggest downside was the hit to my ego from wasting months of time and some really pretty code. The other obvious downsides are giving up complete control (do you really need it?) and having to pay for pro features like the debugger.

I guess it all comes down to that big question – “Am I making a game or an engine?”. Obviously, I always intended to make a game, but before switching to Unity I was getting really bogged down in the details of the engine. Now that I can focus on actually finishing a game and making it fun, it feels fantastic. Sorry if this post feels like I’m a Unity salesman; realistically all decent engines are going to have the same benefits.

Creating the Illusion of Speed for Impulse

Before working on Impulse, I never really thought about what makes games feel ‘fast’; XG3 and F-Zero did it really well, but what’s actually happening to give that impression? Remember that when we’re playing games, we’re just looking at simulated 3D worlds projected onto 2D screens – when so much of our perception of the world is not visual, how do we give the illusion of speed entirely through the visual medium?

Initially I thought I could have some fancy blur effects going on, but it turns out that most mobile devices just can’t handle modern blur effects nicely – they have the nasty habit of having lots of pixels and relatively low performance, so I had to think outside the box. That last sentence underpins a lot of my issues here – if I want to make a good looking game, I can’t dedicate too much processing time to fancy effects, even if they do make it look fast. We need a table.

Free:
Camera closer to the ground
Road texture patterns
Increased engine glow
Shaking camera
Low cost:
FOV increase
Speed lines
More roadside/close objects
Costly:
Fancy graphical effects such as motion and radial blur

In our handy table, I jotted down everything I thought of that could give that fast feel. Surprisingly, a lot of the effects that cost the least turned out to be the most effective, especially since I can add them together with very little overhead.

Increasing the field of view was my first action; it stretches the edges of vision and gives a great effect, but turn it up too high and it distorts the view loads. One thing though – we actually need something on the screen to distort! Adding some visual noise around the edge of the track and near to the camera increased the visual speed hugely. Things added were buildings close to the track and some edge features, like an embankment and hazard stripes.

The next thing I changed was the road texture. Mario Kart has a vertically-blurred gravel texture for some tracks that I tried to emulate, which looked good but didn’t feel fast. By adding horizontal lines to the track, we’re creating loads of reference points for the player to see zoom past. Interestingly, if we slightly change how far apart these lines are, we also change how the player perceives their speed, despite the actual speed not changing – which shows how much of this is about perception. I’m not sure how to use this newfound superpower to impact the track designs yet, but I’m sure I’ll think of something!

The position of the camera was also pretty important, in two ways. First, I used it to maximise the above effects; placing the camera nearer to the ground makes the track’s horizontal lines go by really fast and draws the player into the screen. As an added bonus, the more of the screen I can cover with track the better, since it’s so fast to render! The second way I used the camera position was by moving it depending on the player’s speed. Increasing the field of view slightly and moving the camera away from the player’s car when boosting makes it seem really fast, far more so than the actual speed boost gained. I feel that having the temporary moments of boost speed look different helps separate “cruise speed” from “shit that’s fast” speed, in the same way pacing works in non-racing games.
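To make that a little more concrete, here’s a rough sketch of the sort of speed-based camera tweak I’m describing. The Camera struct, the constants and the blend between cruise and boost speed are all made-up illustrative values, not the actual numbers from Impulse.

#include <algorithm>

// Illustrative sketch of speed-based camera tweaks: widen the FOV and pull the
// camera back as speed rises above cruising speed. The camera itself stays low
// to the ground so the track's horizontal lines rush past. All values here are
// placeholders, not the real tuning from the game.
struct Camera {
    float fov;       // field of view in degrees
    float distance;  // how far behind the player's car the camera sits
};

void updateCameraForSpeed(Camera &cam, float speed, float cruiseSpeed, float maxSpeed)
{
    // 0 at cruising speed, 1 at full boost speed, clamped in between.
    float t = (speed - cruiseSpeed) / (maxSpeed - cruiseSpeed);
    t = std::min(1.0f, std::max(0.0f, t));

    cam.fov      = 60.0f + 15.0f * t;  // slight FOV increase when boosting
    cam.distance = 6.0f  + 2.0f  * t;  // move the camera away from the car
}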

My final method for creating the illusion of speed was to add a shadowing effect to everything that was glowing. Basically, instead of clearing the old frame completely, it leaves a little behind dependent on speed, as seen below, to make an almost-motion-blur effect. It does take a little extra memory, but it’s well worth it. Worth noting is that it only comes into effect when the player’s going really fast, so it really drives home the point that they’re going above cruising speed.
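If you’re curious how something like that can be done cheaply, here’s a minimal sketch of one way to get a similar look: at high speed, skip the full colour clear and instead blend a translucent quad in the clear colour over the previous frame, so bright pixels fade out over a few frames. drawFullScreenQuad is a hypothetical helper and the fade values are guesses; the real effect in the game only targets the glowing elements.

#include <GLES2/gl2.h>

// Hypothetical helper: draws a screen-covering quad in the given colour/alpha.
void drawFullScreenQuad(float r, float g, float b, float a);

// Sketch: fade the previous frame instead of clearing it when going fast,
// leaving short trails behind bright pixels. Values are illustrative only.
void beginFrame(float speed, float cruiseSpeed, float maxSpeed)
{
    glClear(GL_DEPTH_BUFFER_BIT);            // depth always starts fresh

    if (speed <= cruiseSpeed) {
        glClear(GL_COLOR_BUFFER_BIT);        // normal clear at cruising speed
        return;
    }

    // Faster speed -> lower alpha -> more of the old frame is left behind.
    float t = (speed - cruiseSpeed) / (maxSpeed - cruiseSpeed);
    if (t > 1.0f) t = 1.0f;
    float alpha = 1.0f - 0.4f * t;           // keep up to ~40% of the old frame

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawFullScreenQuad(0.0f, 0.0f, 0.0f, alpha);   // translucent "clear"
    glDisable(GL_BLEND);
}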

So that’s what worked for me! I’m also interested in hearing what you’ve tried, or if you have any suggestions, so send me an email or tweet at me!

Mipmapping with ETC1 textures on Android (with a C++ and NDK example)

This post also known as “trade-offs make for long sentences”

When I ported my game engine to Android (and thus GLES 2.0), I switched from using PNG images to the Ericsson Texture Compression format, ETC1. The advantages were pretty clear: a significant speed-up, slightly reduced file size, acceptable loss of quality, faster loading and simpler code. Of course, when do we ever get all that without a trade-off? ETC1 doesn’t support alpha channels and, more crucially, OpenGL can’t generate mipmaps for compressed textures. My solution was to write a Python script which resizes a PNG multiple times and packs the resultant ETC1 data for each size into a single file. You can find the script and code at the end of the article, but I’ll quickly explain what it’s doing; it’s always best to have a deeper understanding than just “oh, this tool did it for me”.

Here’s how my PNG loading pipeline worked, and chances are loading from another format or library is pretty similar:

And here’s my mipmapped ETC1 system:

Clearly because the diagram is more green it is much better! Maybe “runtime” is the wrong word, I mean “when the player runs the game on their device” but there’s not enough space for it! What we’re doing is performing the slow scaling before runtime (i.e. before we even create the APK), then the fast bit is done at runtime. Probably my favourite thing about using ETC1 on Android is that OpenGL just accepts raw compressed ETC1 data; we can literally just extract the data into a temporary buffer and upload it to OpenGL for super-fast loading!

Here’s the quick Python script I wrote; just install Python and the Python Imaging Library (PIL) and run it like so:

python makemipmaps.py image.png

Note that you’ll want it to have access to etc1tool (normally found in android-sdk/tools/). I run it by putting it in the same directory as etc1tool, adding that directory to my PATH environment variable, then running a batch script wrapper like the one below.

@ECHO OFF

python "C:\Program Files (x86)\Android\android-sdk\tools\makemipmaps.py" %1

Obviously you’ll need to change the file name as appropriate, but it should let you run the Python script from any directory, provided you put the wrapper in a directory that is in the PATH environment variable.

 

So now we have a file with the compressed and mipmapped ETC1 textures in – how do we load them? That’s actually pretty easy, but you might need to make a few adjustments to my code. Also of note is that I’m using C++ with the Android NDK, but it should be pretty simple to port it to Java. The code is pretty well commented so it’s probably best if you just check it out! loadETC1.h
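If you just want the gist before opening the file, here’s a cut-down sketch of the loading side. Be aware that the pack format shown here (a level count, then each level’s width, height and data size followed by the raw ETC1 blocks) is an assumption for illustration – the real layout is whatever the Python script writes, and loadETC1.h is the authoritative version.

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>   // defines GL_ETC1_RGB8_OES
#include <cstdint>
#include <cstdio>
#include <vector>

// Sketch of loading a file of pre-compressed ETC1 mip levels and uploading each
// level directly with glCompressedTexImage2D. The file layout used here is an
// assumed example, not necessarily what the packing script produces.
GLuint loadETC1Mipmaps(const char *path)
{
    FILE *file = fopen(path, "rb");
    if (!file) return 0;

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    uint32_t levelCount = 0;
    fread(&levelCount, sizeof(levelCount), 1, file);   // assumed header field

    for (uint32_t level = 0; level < levelCount; ++level) {
        uint32_t width = 0, height = 0, dataSize = 0;
        fread(&width,    sizeof(width),    1, file);
        fread(&height,   sizeof(height),   1, file);
        fread(&dataSize, sizeof(dataSize), 1, file);

        std::vector<uint8_t> data(dataSize);
        fread(data.data(), 1, dataSize, file);

        // The data is already ETC1-compressed, so no conversion is needed;
        // just hand it straight to OpenGL as this mip level.
        glCompressedTexImage2D(GL_TEXTURE_2D, level, GL_ETC1_RGB8_OES,
                               width, height, 0, dataSize, data.data());
    }
    fclose(file);

    // Use the uploaded mip chain when minifying.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}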

Let me know if any part of this didn’t work for you, or if I haven’t explained something very well!

As a follow-up, here’s my game with and without mipmapping:

Even with terrible JPG compression to get the image to ~100KB, the difference is massive! Since it’s doing less scaling at runtime, it’s also slightly faster. Whilst we do have a larger file size (we are storing more textures on disk), and we’re using more memory than having no mipmaps at all, we should be using less memory than automatically generated mipmaps since our ETC1 mipmaps are compressed!

Optimisations

This post also known as: “Why I should listen to advice future me gave past me in an alternate timeline”

So this week I finished a lot of little bits I’ve been working on lately, mostly to do with best practice and optimisation, so I figured I’d run through them. Hopefully there are some OpenGL and mobile developers out there this might help :)

Speeding up OpenGL

I just implemented a draw stack for OpenGL and holy crap, I should have done this months ago. Before, I had a model class with functions to load and draw, so I could call something like:

shader->useShader();

//Setup shader uniforms, about 5 lines

model->draw(position, texture);

Spot any issues? The two major problems are that I have no control over when the model gets drawn, and that shaders and their uniforms have to be set up for each draw (you changed a uniform name? Good luck changing every reference to it!). Also, if the shader needs anything extra (such as another texture for bump mapping), I’ve got to overload the draw function, which is just another pain to maintain. With my new fancy draw stack, I basically do this:

cDrawElement *myDE = new cDrawElement( SHADER_TYPE_LIT, &mCar, &tCar, &mRotationMatrix);

drawStack[activeDrawStack].push_back(myDE);

I’m now adding an instance of cDrawElement, with shader type, model, texture and transform matrix, to a list. If the shader needs any extra info, I can just give cDrawElement an extra variable and set it like myDE->specTex = x. The draw code itself is a little complex, as it sorts everything before drawing and does some checks to avoid changing state. When you select a new shader to use, most devices have to wait for all operations using the old shader to finish, so it can end up being a pretty lengthy operation; sorting all draw calls by shader was a great speedup. I did also sort by texture, since binding textures can take a little while, but the sorting code actually took longer than the gain. That’s pretty common in optimisation – some things just don’t have the impact you’re looking for, so try not to get attached to anything you write!
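To give a feel for what the flush does, here’s a stripped-down sketch: sort the stack by shader type, then only switch shaders when the type actually changes. The member names, the helper functions and the minimal cDrawElement stand-in below are assumptions for illustration – the real class carries the model, texture, transform and any extras.

#include <algorithm>
#include <vector>

// Minimal stand-in for the real cDrawElement; field names are assumptions.
struct cDrawElement {
    int shaderType;
    // model, texture, transform matrix, optional extras like specTex...
};

// Hypothetical helpers standing in for the real engine calls.
void useShader(int shaderType);              // binds the GL program for this type
void drawElement(const cDrawElement *de);    // binds textures, sets uniforms, draws

void flushDrawStack(std::vector<cDrawElement*> &stack)
{
    // Sort so every draw using the same shader is adjacent in the list.
    std::sort(stack.begin(), stack.end(),
              [](const cDrawElement *a, const cDrawElement *b) {
                  return a->shaderType < b->shaderType;
              });

    int currentShader = -1;
    for (cDrawElement *de : stack) {
        if (de->shaderType != currentShader) {
            useShader(de->shaderType);       // switching shaders is the expensive bit
            currentShader = de->shaderType;
        }
        drawElement(de);
        delete de;
    }
    stack.clear();
}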

Bake and Divide

The second huge speedup was a load more complicated. Most of the map stays the same throughout playing the game, so the level editor now combines all the static parts of the map into one big model. Then, it precomputes and bakes lighting for each vert (so we can have as many static lights as we want!) and, finally, it splits all the triangles between 64 sections of the track (an 8×8 grid). Having a large map split into square sections is great, because when the player’s not looking at a section it doesn’t need to be drawn. The downside here is the additional draw calls, since we need one call per texture per section. Initially I had 16×16 sections, which is 256 sections with 3 or 4 textures each; even when I was only drawing a third of them, that’s still too many draw calls, which really slow things down. 8×8 sections seemed like a sweet spot in this case, but it does depend on the game. Binary space partitioning, which is similar but splits the map cleverly based on the density and geometry of triangles instead of just location, would have been better, but I don’t think the improvement would justify the time taken to implement it. Optimisations are important, but it’s easy to get carried away!
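As a rough illustration of the splitting step, here’s a sketch that buckets triangles into an 8×8 grid by centroid. The types and the centroid-based assignment are assumptions for illustration; the real level editor has more to worry about (per-texture batches, baked vertex colours, and so on).

#include <algorithm>
#include <vector>

struct Vec3     { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

static const int GRID = 8;   // 8x8 = 64 sections

// Sketch: assign each static triangle to one grid cell based on its centroid.
// Triangles straddling a boundary could instead be duplicated into every cell
// they touch; this simplified version just picks one cell.
std::vector<std::vector<Triangle>> splitIntoSections(const std::vector<Triangle> &tris,
                                                     Vec3 mapMin, Vec3 mapMax)
{
    std::vector<std::vector<Triangle>> sections(GRID * GRID);
    float cellW = (mapMax.x - mapMin.x) / GRID;
    float cellD = (mapMax.z - mapMin.z) / GRID;

    for (const Triangle &t : tris) {
        float cx = (t.a.x + t.b.x + t.c.x) / 3.0f;
        float cz = (t.a.z + t.b.z + t.c.z) / 3.0f;
        int ix = std::max(0, std::min(GRID - 1, (int)((cx - mapMin.x) / cellW)));
        int iz = std::max(0, std::min(GRID - 1, (int)((cz - mapMin.z) / cellD)));
        sections[iz * GRID + ix].push_back(t);
    }
    return sections;   // at draw time, skip sections the player can't see
}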

The actual game doesn’t need to compute lighting for each vertex any more, so that shader is now really fast – it basically reads each vertex’s position and baked colour from the VBO and blends the colour with the texture. There’s also a little global illumination code, but that’s super fast too. This has a real impact when playing the game – just have a look at the crappy neon sign before-and-after pics; the glowing green effect really helps it fit in. Best of all, it’s free to have more lights, so I can use it everywhere to add some atmosphere :)

The second thing I baked was collisions. Being a racing game, it has a fairly linear collision mesh (i.e. just the track), so I set out to divide it into distinct sections. I quickly thought of loads of methods to do this, but the best trade-off was to use a number of nodes that are user-defined in the level editor. I’ll need to have nodes anyway so the game knows where the track is, particularly for keeping score and guiding AI. Adding the nodes is a pain, but we end up with something like this:

The large red balls are the nodes, and their size is their width. When baking the track, the level editor links the nodes together into lines, and it checks each collision vertex to see if it falls within any line, using the size of the nodes as the width at each end. This lets us build up the entire track out of small sections, so instead of having 10k triangles to collide with, we now have fewer than a hundred in each of 130 sections, and we can check collisions with a single section at a time. Actually we need to check the next and previous sections as well, just in case we’re on a boundary, but that’s still ~400 triangle checks instead of ~10k, a saving of 96%! Another awesome speed boost comes from the convex-ish nature of the sections; whereas before I had to find the closest collision since the track can loop over itself, I can now stop at the first collision. Unlike OpenGL, which does its own stuff in parallel with our code, timing the improvement here is easy and accurate. Collision checking for the player’s ship (4 collision checks per step) used to take ~0.0075s, but now takes ~0.0002s, or about 2% of the original time!
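Here’s a rough sketch of what the per-section check looks like at runtime: only test the triangles in the player’s current section and its two neighbours, and stop at the first hit. The types, the wrap-around indexing and the rayHitsTriangle helper are assumptions for illustration, not the actual game code.

#include <vector>

struct Vec3     { float x, y, z; };
struct Triangle { Vec3 a, b, c; };
struct Ray      { Vec3 origin, dir; };

// Hypothetical helper: returns true and fills hitDist if the ray hits the triangle.
bool rayHitsTriangle(const Ray &ray, const Triangle &tri, float &hitDist);

// Sketch: check only the previous, current and next track sections, since the
// player can only ever be near one boundary. Because sections are convex-ish,
// the first hit found is good enough.
bool checkTrackCollision(const Ray &ray, int currentSection,
                         const std::vector<std::vector<Triangle>> &sections,
                         float &hitDist)
{
    int count = (int)sections.size();   // ~130 sections for this track
    for (int offset = -1; offset <= 1; ++offset) {
        int idx = (currentSection + offset + count) % count;   // track loops around
        for (const Triangle &tri : sections[idx]) {
            if (rayHitsTriangle(ray, tri, hitDist))
                return true;
        }
    }
    return false;
}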

The speed-up from these changes has been drastic; each frame takes way less time to perform physics and render. My test device is the beastly Nexus 7, so it was already running at full speed, but for a mobile developer it’s really important to squeeze out every last drop of performance, both so it’ll run on weaker devices and to save battery. Hopefully any future users will appreciate this :)

Woo blog.

Hurrah, we have blog.

Shall be updating the theme and hopefully getting some content online in the next few days. I’m expecting no one to really read it for a few months while we get off the ground, but it’s a start :)