• Debugging Unity Android Apps

    This is pretty much a repost/summary of a StackOverflow post, as it took me a long while to find this information; hopefully Google will do its magic and surface this for future generations.

    I had a need to debug a Unity game on an Android device. The build running on the device was a Development build with debugging enabled and so on. For reasons I had neither time nor inclination to investigate, Visual Studio was not discovering the device and it was not showing in the Attach Unity Debugger list, despite the device being on the same Wi-Fi network and physically attached by USB cable. I knew the IP address of the device, but not the port the debugger process was listening on, which apparently changes with each launch.

    To summarise the StackOverflow post, you want to get hold of the ADB logcat output for your game. I did (something like):

    adb logcat -c
    adb logcat > out.txt
    (launch game on device)
    (wait some period of time)
    (CTRL + C)

    …which will dump the logcat output to a file called out.txt. If you now search for monoOptions in the file you should see a line like:

    Using monoOptions --debugger-agent=transport=dt_socket,embedding=1,defer=y,address=

    If you enter the device’s IP address along with the port from above into the Visual Studio Attach Unity Debugger window, it should now connect, obey breakpoints and the like!
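    If you don’t fancy scanning the file by eye, the port can be pulled out with a bit of shell (a sketch, assuming the line ends with an address=ip:port pair as in the StackOverflow post - the exact format may vary between Unity versions):

```shell
# Simulated monoOptions line for demonstration - in practice, use:
#   grep monoOptions out.txt
line='Using monoOptions --debugger-agent=transport=dt_socket,embedding=1,defer=y,address=0.0.0.0:56621'

# Strip everything up to and including the last colon, leaving the port
port="${line##*:}"
echo "$port"
```

    Here 56621 is a made-up example value; the real port changes on every launch, which is why it has to be fished out of the log each time.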

  • How and Why Is This Object Being Destroyed, Unity?!

    I recently ran up against a problem in a Unity project I’m working on: a GameObject was being Destroyed, but I didn’t know why or from where. The codebase, naturally, has many calls to Destroy() and contains its own methods with that name, which made both Find References and text-based searches impractical. I just wanted a breakpoint in UnityEngine.Object.Destroy().

    Round 1: OnDestroy()

    Spoilers: you may well know this method is a dead-end.

    Unity will automatically call OnDestroy() on all components on a destroyed GameObject. I thought this might allow me to set a breakpoint, but OnDestroy() is deferred to the end of the frame, so the callstack doesn’t go back to the original Destroy() call. Next!

    Round 2: Hacking Unity’s IL

    A discussion with a friend led to the idea of modifying the .NET IL in Unity’s DLLs to change the contents of UnityEngine.Object.Destroy(). I Googled upon Simple Assembly Explorer, which allows you to view and modify the IL of compiled .NET binaries. Without any prior knowledge of .NET IL I was quickly able to insert an ldarg.0, to push this onto the stack as the argument for the next function call, and a call instruction, to call out to UnityEngine.Debug.LogWarning, which would give me a stack trace. I booted up my project in Unity and sure enough, every call to Destroy produced the log I hacked in there. Amazing! While this worked, it felt very fragile: any future update to Unity would stamp over it, and I didn’t fancy learning IL to build this out further.

    Round 3: Harmony

    I was aware of the concept of hooking/detours from lower level, C++/assembly code, and was interested whether something similar existed for .NET/C#/Mono/Unity. Google led me to Harmony on GitHub:

    A library for patching, replacing and decorating .NET and Mono methods during runtime.

    Not only does it support Unity, it’s built with Unity in mind: the example code is a mod for a Unity game. I was able to copy and paste the example code into a fresh project referencing the Harmony DLL, change the target class to UnityEngine.Object, the target method to Destroy and the target parameter to a UnityEngine.Object, then change the hooked method implementation to a log call. After a build, all I needed to do was drop my new DLL, along with the provided Harmony DLL, into the Unity project, call the setup function to initialise the hooks and boom: the log was again produced on every Destroy. This has a bunch of benefits over the previous method, such as writing the patch in the language I was already using, not having to screw with binary files (so the patch’s code could happily live in source control), and being forwards compatible (unless Unity makes any breaking API changes). In theory any C# Unity method could be hooked in this fashion, which could be great for mocking functions or gaining a bit more control over what’s going on under the hood!

  • Sanitarium HD Patcher On GitHub

    Just a quick update: I’ve open sourced my Sanitarium HD Patcher. The code and latest release can now be found on GitHub! Enjoy!

  • Tiled2Unity Auto Import

    I have created a new project! It works alongside Tiled2Unity to ease importing Tiled maps into Unity. It is: Tiled2UnityAutoImport.

    In a nutshell, it hooks into Unity’s asset importing flow and automatically runs Tiled2Unity when new/modified tmx maps are found. More details can be found in the project’s README.

  • Sanitarium HD Patcher

    It is now available! Please find v0.1.0 here. This version will only work with executables with MD5 of 0A09F1956FCA274172B0BB90AA5F08A7. If people turn out to have other versions I’ll try to get hold of the exes and get them supported. Enjoy!

    UPDATE: Permanent page at sanitariumhd.

  • New Year's Resolution Patch

    I guess that title would have worked better a month ago.

    Anyhow. I’ve always been fascinated by software reverse engineering and general binary hackery, but had never really thought of a project to try it out on. Then I remembered the Fallout High Resolution Patch and the Infinity Engine Widescreen Mod, which apply cracking/patching techniques to allow old games designed at 1990s resolutions to run at glorious 1080p. I decided to do something similar.


    I wanted a target for which no fan patch already existed. I was browsing GOG, and saw that the game Sanitarium had recently been added. I remembered that I already had a copy installed on my PC - perfect! Target acquired.

    Hacking Commences

    The first step was to figure out what resolution the game runs at out of the box. To do this, I took a screenshot of the game running windowed (command line param -w, if you’re interested), and highlighted the rectangle excluding the standard Windows border. I’m sure there are more scientific ways to tell the size of a window, but this worked for me.

    Original window

    I deduced the game to be rendering at a 90s-classic 640x480. I opened the game up in the trusty debugger OllyDbg, and began investigating the heck out of it.

    I searched for the number constant 480, set a breakpoint on each reference, and ran the game. One of these was hit very early on in the initialization, and was followed by a reference to 640 - a strong candidate! (Ignore the fact that the offsets in the screenshot are from the patched version - I’d already backed up the original executable.)

    Reference to 640x480

    I used Olly to patch the 640/480 values to 1280/720 respectively, and ran the game. The window was now 720p, with the main menu occupying the upper-left corner, but once in game it was rendering a much larger visible area. See below for a comparison.

    Original playfield

    720p playfield

    If you’re familiar with the game you’ll notice that all the game objects outside of the 640x480 area the game’s camera is expecting aren’t drawn. I’ll address this later, but at this point I got ambitious(/distracted). The changes made in Olly can be saved out as a modified .exe, which can be used in the future. This would technically let me distribute the patched executable, allowing the wider internet to play the game at high-res. However, there are a couple of drawbacks:

    • It’s pretty illegal: the modified version would still contain all the original code generated by the copyright holder.
    • It’s pretty inflexible: everyone using my modified executable would be stuck with the resolution I chose. Also, if further changes were required, a new modified executable would have to be obtained.

    Solution: patch the executable in memory right before running it, just as Olly does.

    Rolling a debugger

    A quick bit of googling showed me that in order to modify executable code on the fly in Windows you basically have to write a debugger. This sounded very intimidating. I continued my research and it turned out to be conceptually very simple. All that’s required is a C++ project to do the following:

    • Make a call to CreateProcess, passing the DEBUG_PROCESS flag. This starts a child process owned by your executable, which sends debugger-relevant events to your code.
    • While you’re interested in these debugger events, call WaitForDebugEvent/ContinueDebugEvent. The only event I needed was CREATE_PROCESS_DEBUG_EVENT, so I handled that in a (very small) switch statement. When this event arrives I make a call to DebugSetProcessKillOnExit, passing in false, so after my patch is applied my program can close, leaving the game process to live on. I then…
    • …apply the patches. This is the part I assumed would be complex, but boils down to one Win32 API call.

    The target’s executable code is memory-mapped at an offset from a base address; for 32-bit Windows programs, this base is 0x00400000. I referred to the patches I made in Olly to get the addresses which needed to be modified. As can be seen in the screenshot of the debugger, we started with a PUSH 1E0, followed by a PUSH 280 (480 and 640 in hexadecimal). The compiled x86 machine code for PUSH [some 4 byte value] is 68 [some 4 byte value in little-endian] - 68 E0 01 00 00 in our example. In this case, and most cases we’ll need to deal with, we can leave the PUSH opcode (68) untouched and only change the operand (E0 01 00 00). The program I wrote takes the desired resolution (x and y) as command line arguments and parses each as an unsigned 16-bit integer. We can then take a pointer to one of these values, cast it to a pointer to a byte, and treat it as a little-endian 2-byte array, like so:

    uint16_t resY = (uint16_t)std::stoi(resYString); // parse e.g. "720"
    uint8_t* resYBytes = (uint8_t*)&resY;            // little-endian byte view

    The PUSH 1E0 happens at 0x0041A5FF. We can leave the first byte as 68 for PUSH, and just modify the 2 bytes at 0x0041A600/0x0041A601 to the 2 bytes of resYBytes. To do this we can use WriteProcessMemory, passing the offset we found with Olly as the lpBaseAddress param, the 2-byte array representing the dimension (e.g. resYBytes) as lpBuffer, and the size to write as 2. That’s basically all there is to it. Once the patches for the resolution width and height are applied, my program closes and lets the game carry on as normal.

    Culling me softly

    As I mentioned earlier, even with the resolution patches applied there are still some objects inside the newly-embiggened viewport which are not being drawn. Jumping back into Olly, I continued searching for 640/480. This led me to the area of code below:

    Some rect math

    To ease both rendering and logic load, games often skip (or cull) objects which aren’t visible. I could see some calls to functions operating on Rects (IntersectRect/OffsetRect), and figured this could be the logic for culling offscreen objects, still using the hardcoded 640x480. Applying a couple more patches to bring these up to 720p, I was presented with this:

    Less culling

    Note the extra dudes in the bottom right. Amazing! I then jumped over to my project and made the code a bit more generic, using a std::map<uint32_t, const uint8_t*> to store arrays of bytes to be patched in, indexed by their memory address. And that’s where I’m at. There is still one pretty glaring issue:

    Smudging with camera pan

    Previously the camera was restricted so it would never draw beyond the edge of the level. Now that we’re drawing a bigger area around the player, empty space is visible. It looks like the surface the game draws to isn’t cleared every frame, leaving remnants of the previous frame hanging around. I’ll need to figure out a way to clear it before the background is drawn, then we should be all set!

    I also still need to add some validation of command line arguments, and I’ll make a follow up post with it (and hopefully the full source code) attached once it’s ready.

  • Redesign!

    I’ve spent the last couple of days redesigning this page in preparation for a big plan I’ve been cooking. More as it develops!

  • Now a GitHub Page

    So I’ve joined the GitHub Pages/Jekyll revolution - I’ve ditched WordPress in favour of a git based workflow, writing simple Markdown documents to create posts. Hopefully the decreased friction in writing posts will actually encourage me to write things. I’m writing this in GitHub’s Atom editor and will use git to commit and push it to GitHub - it feels like git is pervading every element of my life!

    I threw together some HTML/CSS based on the default Jekyll page and this is what happened. It looks like a sedate Geocities page and I love it. Retro-functional-chic.

  • JSON Is Driving My New Game

    I’ve begun work on a game idea that’s been bouncing off the inside wall of my brain for a couple of years now. I’m (obviously) convinced it’s going to be the greatest thing that’s ever been achieved by any being, so I won’t go into any gameplay details until I actually have something to show of it. There’s going to be a lot of systemsy stuff in it, so I’ve implemented an entity/component system (or ECS) in Haxe. This enables me to write my code in terms of Entities, Components and Systems. My Entities are (more or less) a collection of Components. My Systems register interest in combinations of Components so that they can be notified when a relevant Entity becomes active and can then manipulate it as appropriate. For example, my TouchSystem is interested in Entities that have a TouchableComponent and a SpriteComponent (so it knows how big the touchable area for the entity is).
    One interesting outcome of this approach is it keeps the data very separate from the functionality - Components are just dumb data holders, while the Systems provide the functionality, based on Component composition. I had a plan about how I could leverage this…

    I’ve used Unity quite a bit, both professionally and for-funsies, and really appreciate its editor. It gives you the ability to build entities out of components with a drag and drop interface, the effects of which can be seen before your very eyes in the live-updating 3D views. While I have no intention of writing a full editor for my game, I wanted to achieve some degree of development, beyond just level editing, without having to touch the code. I’m using OpenFLTiled to allow me to import Tiled maps into my game. Tiled also lets you add arbitrary string-to-string key-value data to the map and its objects. My plan was this: store the types of components, and their initial data, right there in the map.
    I implemented a step in the loading of the level that iterates through the objects in the map and gets hold of all the data they store. For each object I spawn an entity. For each key-value pair I use the key as a string representation of a type of component (literally its class name). The value is interpreted as a JSON array, from which I get back a bunch of values. Haxe has extensive reflection support, so I instantiate the component based on its class name, and pass the deserialised JSON array along to the constructor to build the component just from data stored in the Tiled map. In the map’s own data (as opposed to the objects in it) I specify a list of Systems to instantiate for this map (so we could turn off the Physics system for specific maps, for example).

    So now we can build unique combinations of Components right in Tiled. This is the main advantage ECS has over OO: we don’t have to extend anything to add functionality - it’s all based on run time composition. We can then specify which Systems we want to exist, save the map and run the game. Once I have a good library of Components and Systems implemented I can see this leading to a very Unity-like(-a-bit), editor driven workflow for Super Secret Game X.

  • On Posting

    I often think about writing more on this here internet, but quickly dismiss the idea, thinking that what’s on my mind is trivial and obvious and of use to no one. Lately I’ve been considering the type of people who would actually read any of the content I’m ever likely to post and my opinion has changed: if you’re reading this you might find stuff I’m implementing in my projects new, interesting or at least vaguely applicable. It doesn’t have to be ground breaking and it might just inspire someone to try a different approach to their development. With that in mind I’m going to endeavour to post more, starting with the thing that will follow shortly!

  • Dirty Haxe

    I’m working on my first ever Haxe project and I am loving it. Haxe is a free, open source, object-oriented, cross platform, multi-target, ActionScript-like language and compiler. From one project you can generate:

    • a compiled SWF, with support for a bunch of Flash versions
    • JavaScript
    • PHP
    • C#, for targeting Windows, Windows Phone, Xbox or anything Mono supports
    • Java, for targeting Android or anything that runs the JVM
    • C++, for targeting basically anything else

    Apart from SWF these will all provide source code which can then be built in an environment of your choosing.

    What about making games?

    For me, the real magic comes with the OpenFL library - an open source, cross platform implementation of the Flash standard library. This gives you access to a complete, well documented, node-based view hierarchy that can target basically anything. From the same project, using a different build argument, OpenFL can build to run in the browser using HTML5 or Flash; natively on Windows, Mac and Linux desktop; and on mobile, with support for iOS, Android and BlackBerry. This means you can hedge your bets when targeting the browser: don’t want to worry about flaky HTML5 support? Provide it as an option, but also build a SWF for everybody else. There are always going to be discrepancies between the hardware and screen resolutions available on different mobile devices, but that doesn’t mean you need different code. Build support for different resolutions into your game, target standard hardware features, and your game can run on anything.

    The language

    So it can build a game to run on your toaster, but how is it to actually use? Well, for a start it has a compiler, so you know ahead of execution when a local variable is being used without being assigned. It’s strictly typed, so that same compiler can catch you assigning a String to an Int. It also makes good use of type inference so you still don’t have to be too verbose with assignment. It supports generics for some sweet strictly-typed containers. It supports anonymous functions and function objects for super-convenient callbacks. It’s got a thorough standard library with support for sockets, web requests, XML parsing, functional-style set manipulation and a bunch more. The standard library support varies depending on what’s actually available on different platforms, but that’s all covered in the API docs. I haven’t had to touch too much of it for the project I’ve been working on, but the hash map implementation has served me well. Compile times are an oft-quoted boon of Haxe, and from my limited experience they seem great: my MacBook Air generates the JS target for my simple puzzle games in 2-3 seconds.

    I’ve struggled a little bit with debugging but that’s mainly a result of targeting HTML5. The JavaScript debugger in Firefox is great but you’re debugging a single monolithic JS file which is a different representation of your Haxe code, and then figuring out what that actually translates to. I’ll have to explore further if there’s a way to debug something closer to the actual game code I wrote, ideally the Haxe code itself. Declarations are ECMAScript style, name:Type (e.g. var x:Int), which I find a little off-putting as I spend most of my time in C-like languages, but I can cope!


    I use Unity extensively, both at work and at home, and it’s perfect for making cross-platform 3D games, making heavy use of the concept of scenes. I’ve made a few 2D games in Unity and it always feels like I’m fighting against it, shoehorning 2D elements into a 3D workflow. Haxe with OpenFL seems like a great code-centric alternative to Unity - you get all the benefits of write-once, deploy-anywhere, plus the whole Flash standard library to drive your game’s sprites. I’ll definitely be using it for any small 2D games I make in the near future. Also, having the option of targeting Flash is a nice bonus that few other cross platform solutions still support!

  • Events Are Operational

    I have implemented my proposal outlined in the previous post and it’s all working great. There’s a base TriggerComponent class which holds a pointer to its action. PositionTriggerComponent extends TriggerComponent, and PositionTriggerSubSystem checks for objects occupying PositionTriggerComponent-Entities’ tiles and sets the triggered flag on their Actions. Finally, classes inheriting from ActionSubsystem decide how to handle their actions being triggered. This is working for a simple DebugActionComponent/SubSystem but the idea should apply to anything else. Getting closer to being able to make a game.

  • Hello, The Internets

    I’m currently developing a personal-project mobile game using Cocos2d-x, a cross platform port of the Cocos2d-iOS framework. I got as far as having a few graphical layers moving sprites on the screen when I got tired of 2d-x’s weird Objective-C-shoehorned-into-C++ style, which encourages the use of static factory functions to create objects when C++ already has constructors. I had also been reading a lot about Entity/Component (EC) systems for game development - a methodology that allows game objects to be built up from collections of reusable components and, relevantly, forces logic to be separated from display code. I thought this would be a great opportunity to implement my own EC system, as it would keep the nasty Objective-C-style code very localised, preventing it from spreading to any parts of the game logic.

    This went really well - after writing and refining the EC framework, then coding some fundamental components and subsystems (such as the SpriteRenderable and Position components and the SpriteRenderer subsystem), I had a character whose world position I could set, and the SpriteRenderer subsystem would set the Cocos sprite’s position accordingly. I went on to write the relevant components and subsystems to get the character moving around a tile-based map using the Cocos touch interface, and all was good.

    Then I hit a problem. The game I’m developing needs a trigger system. An entity (e.g. a button) needs to trigger an action (e.g. a door opening). This requires the subsystem to track entities with Button components (the trigger) and entities with Position components (the entity that will cause the button to trigger) and consider them independently:

    for each Entity with Button and Position: b
        for each Entity with Position: p
            if b.pos == p.pos

    …which my current system doesn’t support, so I’ll have to add the concept of or and and operators to the class that defines the types a subsystem is interested in. I’ll check back in when that’s implemented, and hopefully I’ll have a nice, flexible event triggering system.

subscribe via RSS