Extending Unity for Git (and maybe other source control?)

Note: The really interesting bit is at the end of the article, so if you don’t stick around for the whole journey, at least skip to the end!  That being said, this is all useful stuff, so you should, y'know, stick around…

Unity is a pretty solid game engine that at least a few people are using, but it doesn’t play very nicely with Git – my preferred source control mechanism – by default.  While I can’t speak for everyone, there are three major issues I’ve always wanted to address:

  • The default setup all but guarantees that having two different users push changes to the same scene, prefab, ScriptableObject, etc. will render it totally unusable and force you to roll back (quite painfully) to an old version.
  • Git, by design, doesn’t play very nicely with binary data.  Its diffing tool doesn’t support binary files and, as such, it stores each revision of a binary file in its entirety.  As a lot of game content is stored as binary data – FBX files, textures, etc. – this presents a bit of a problem for storage.
  • After playing with Unity Collaborate a bit, it became extremely evident that the UI/UX for interaction between traditional source control and the Unity editor could be drastically improved.

In the past we’ve sort of just worked around these issues, but I spent a fair amount of time recently figuring out how to address all of these issues more directly.  With some additional configuration steps and a little bit of editor scripting, all of these issues can be addressed quite nicely.

Unity’s SmartMerge

YamlSmartMerge is a feature included in Unity 5+ that allows you to safely merge changes to prefabs and scene files, addressing issue 1.  I believe it is what Unity Collaborate uses internally, but I have nothing other than my own observations to back up that claim.

When it works, it is fantastic.  Person A can tweak component values on objects X, Y, and Z while Person B is building out the level, then Git merges the changes with only a little bit of coaxing (you have to manually merge from the command line, or write an editor script to do it – more on that in a bit).

However, it isn’t always that easy.  Inevitably, you’ll run into a scenario where Person A and B modify the same object.  Unity seems to solve this by picking either Person A or B’s changes and silently clobbering the other person’s.  This is understandable behavior, but without telling the user exactly what changes were clobbered, this solution suddenly looks a whole lot worse.  That being said, it is definitely better than the alternative wherein assets are simply left unusable, and the manual intervention generally lets the user know when they should be careful.

The Unity documentation has partial instructions on how to get SmartMerge working with git, but I tend to refer people to this guide which is considerably more detailed and goes into a few things the docs miss.  That being said, the setup must be done for each collaborator and is a bit overcomplicated in my opinion, especially for collaborators who aren’t quite as technically inclined.
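For reference, the per-collaborator setup boils down to pointing Git's merge tooling at the UnityYAMLMerge binary that ships with the editor.  The entries in `.git/config` (or your global `.gitconfig`) look roughly like this – treat the path as a placeholder, since it differs per OS and Unity version:

```ini
[merge]
	tool = unityyamlmerge

[mergetool "unityyamlmerge"]
	trustExitCode = false
	cmd = '<path to UnityYAMLMerge>' merge -p "$BASE" "$REMOTE" "$LOCAL" "$MERGED"
```

After that, running git mergetool on a conflicted scene or prefab invokes the smart merger instead of leaving you with a mangled YAML file.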

While I believe all Unity projects using Git should really be using this, its drawbacks (which, for me, mostly come down to its perceived unreliability) mean I don’t consider it a complete solution by itself.  Furthermore, it does nothing to address issues 2 or 3.


Git LFS

LFS is an extension to Git primarily meant for storing large binary files outside of your main repository.  It is fairly easy to install and configure for GitHub, though I can’t speak for the other major Git providers (I believe GitLab’s support is only partial, and I’m not sure about Bitbucket).  This solves issue 2.

For configuration reference, this is the .gitattributes for my current project:

*.unity binary lockable
*.prefab binary lockable
*.asset binary lockable
*.fbx filter=lfs diff=lfs merge=lfs -text lockable
*.png filter=lfs diff=lfs merge=lfs -text lockable
*.tga filter=lfs diff=lfs merge=lfs -text lockable
*.psd filter=lfs diff=lfs merge=lfs -text lockable

LFS also implements an incredibly useful feature sorely missing from Git – file locking.  As the name implies, file locking prevents other users from pushing changes to files you have explicitly locked.  This prevents issue 1 from occurring altogether, though in my opinion it is best to combine it with YamlSmartMerge for cases where you know two users will be editing different parts of the same scene.

The major drawback of LFS locking is that it doesn’t actually prevent the local user from writing to a locked file, or even from committing it – it only blocks the push.  So by itself, this can be a bit confusing, as there’s no built-in way to visualize locks or physically lock the files.
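For the curious, the locking workflow itself is just a few commands, assuming your remote supports the LFS locking API (the scene path here is purely illustrative):

```
# Take a lock on a scene before editing it
git lfs lock Assets/Scenes/Main.unity

# See all current locks and who owns them
git lfs locks

# Release the lock when you're done
git lfs unlock Assets/Scenes/Main.unity
```

The `lockable` entries in the .gitattributes above are what tell LFS these file types participate in locking in the first place.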

LFS and SmartMerge can therefore handily solve problem 2 and partially solve problem 1.  However, neither touches issue 3.  If only Unity had an extensible editor…  Oh wait, it does!

Integrating with the Unity Editor

As mentioned earlier, I really liked parts of Collaborate’s integration into the Unity editor.  Specifically, I really liked the ability to see who was working on a file at any given time.  Unfortunately, we had some major issues with Collaborate that made it essentially unusable in the project I’m currently working on, so we had to switch to git mid-production.

Because some of the team members were very used to the ease of use of Collaborate (and because I liked the editor integration so much), I wrote an editor plugin that brings most of the Collaborate features I liked into the editor for Git, along with a host of additional features.  I could explain the major ideas behind it, but it’s easiest just to show them.

It shows file changes recursively, allowing you to see exactly how your project has changed.  Inside the context menu, you have the option to lock, unlock, and discard changes to modified files.  For folders, it warns you only about the modified files inside them.

By default, saving a scene will automatically lock it.  This can be toggled in the settings, but it’s very useful for preventing accidental merge conflicts.  Even though SmartMerge can theoretically save you if that happens, I don’t trust it enough to try.
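The lock-on-save hook hangs off Unity's AssetModificationProcessor.  Here's a minimal sketch of the idea – GitLocks is a hypothetical helper that shells out to git lfs lock, not the plugin's actual API:

```csharp
using UnityEditor;

// Runs whenever the editor is about to save assets
public class LockOnSave : UnityEditor.AssetModificationProcessor
{
    static string[] OnWillSaveAssets(string[] paths)
    {
        foreach (var path in paths)
        {
            // Only auto-lock scene files, and only if the setting is enabled
            if (path.EndsWith(".unity") && GitLocks.AutoLockOnSave)
                GitLocks.Lock(path); // hypothetical wrapper around `git lfs lock`
        }
        return paths; // returning all paths lets the save proceed normally
    }
}
```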

The plugin tells you (via a tooltip) who owns any active lock so you know exactly who to yell at if you need to change a locked scene.  By default, it also actually locks files for writing if someone else owns the lock.  This way there’s absolutely no chance that two people will be working in and saving a scene at the same time.
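The "physical" lock is most easily done with the OS read-only flag.  A sketch of that piece, using standard .NET file attribute APIs (not necessarily the plugin's exact code):

```csharp
using System.IO;

public static class FileLockUtil
{
    // Mark a file read-only so the editor can't silently write to it
    // while someone else holds the lock
    public static void SetReadOnly(string path, bool readOnly)
    {
        var attributes = File.GetAttributes(path);
        if (readOnly)
            attributes |= FileAttributes.ReadOnly;
        else
            attributes &= ~FileAttributes.ReadOnly;
        File.SetAttributes(path, attributes);
    }
}
```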

Furthermore, the plugin provides an abstract interface so that you can write a backend for whatever VCS you want.  It’s a pretty nifty little framework, if I do say so myself.

Check out the GitHub repository if you’d like and, by all means, please contribute.  It isn’t perfect and, as my current project progresses, it’ll undoubtedly mature and evolve, but it’s pretty neat for a first revision and definitely fills a gap that Unity has always had.  Hopefully you find it useful, or at least interesting.

Until next time…

Optimizing Networked Strings as a Prebuild Step

Lately, I’ve been working on a fast-paced VR multiplayer shooter game in Unity with some classmates.  Since we’ve had negative experiences with UNET in the past and knew we would need to optimize the netcode pretty carefully if the project were to play well, we decided to build a custom networking layer on top of the fantastic Lidgren UDP library.  Most of my time has gone into building the networking layer from the ground up (which has been a total blast).

Strings are big and wasteful, but so easy

We initially jammed a prototype out in a weekend (which you can see a mildly self-deprecating demo of here – that is undoubtedly the final title). The netcode was a bit rough, but it worked well enough and, with some minor revisions, served as a good basis to build upon.  As the name of this article implies, one of the first revisions we tackled had to do with networked strings.

I knew that we were wasting quite a bit of bandwidth by repeatedly sending the same strings for things like RPCs and object spawns to clients.  Initially, I was either going to have the server dynamically build a mapping of numerical identifiers to strings as the game progressed and synchronize that with clients – which sounded like a bit of a nightmare – or manually maintain a static mapping with some editor utility.  I was torn – the second option was way less prone to race conditions, but also considerably harder to maintain.

As I was profiling, however, it occurred to me that most of the important strings were available in some capacity either directly in the code or in the asset folder.  I realized that it would be fairly trivial to write an editor script which generated all of these mappings for us.  At runtime, we could load this precompiled set of values and dynamically send either the numerical index into that table or, if we happen to miss, the full string.

How it works

We already had a custom build script for quick iteration during the jam (something I believe is absolutely essential, especially for multiplayer VR development), so adding a prebuild step was fairly simple.  Before we start the build, we search for everything in any of the project’s Resources folders and store each relative path in a HashSet<string>.  Then, we iterate through all types in our project which might contain RPCs and search for any ObjectRPC or StaticRPC attributes, storing the names in the same HashSet.  Here, we also search for any RegisterNetworkString attributes so that developers can manually add strings to the cache.  After we’ve gathered all of this into the HashSet, we assign an integer ID to each string and write these pairs to a TextAsset, encoded as one id:string pair per line, in the root of the Resources folder.

At runtime, we load this into two dictionaries – one mapping int->string and another mapping string->int – and any time a call is made to NetOutgoingMessage.WriteCachedString, we check to see if the desired string exists in this cache.  If so, we write a 1 bit and the ID.  If not, we write a 0 bit and the actual string.  NetIncomingMessage.ReadCachedString then checks this bit and returns either the string at the sent index or the string written to the net message.
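The runtime half of that is only a few lines.  Here's a sketch of the idea as Lidgren extension methods – Write(bool), Write(string), and the WriteVariableInt32/ReadVariableInt32 pair are real Lidgren calls, but the registry and dictionary names are illustrative, not our actual code:

```csharp
using Lidgren.Network;

public static class CachedStringExtensions
{
    // stringToId / idToString are loaded from the generated TextAsset at startup
    public static void WriteCachedString(this NetOutgoingMessage msg, string value)
    {
        int id;
        if (NetworkStringRegistry.stringToId.TryGetValue(value, out id))
        {
            msg.Write(true);             // 1 bit: a cached ID follows
            msg.WriteVariableInt32(id);  // small IDs stay small on the wire
        }
        else
        {
            msg.Write(false);            // 0 bit: the full string follows
            msg.Write(value);
        }
    }

    public static string ReadCachedString(this NetIncomingMessage msg)
    {
        return msg.ReadBoolean()
            ? NetworkStringRegistry.idToString[msg.ReadVariableInt32()]
            : msg.ReadString();
    }
}
```

Since both sides build the same table at build time, no synchronization is needed at runtime – the one-bit flag handles the rare string that slipped past the generator.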

Closing and Code

While the actual optimization isn’t all that special, there is a valuable lesson to be learned here – pay careful attention to the things that you can automate! The prebuild script took almost no time to write and it makes a huge difference for our build pipeline.  Not only is it considerably less error prone than manually setting up the string tables before a build, but it is also way faster for our team for essentially no cost.

While the project is still fairly young, we’ve taken some pretty deliberate steps to ensure a usable, stable networking API which is fast enough for a relatively unpredictable VR shooter.  I’ll be talking more about that in the coming weeks.

Finally, here’s the relevant code to generate the string cache.  It doesn’t run as fast as it possibly could and it generates more garbage than it needs to, but it happens so infrequently that it doesn’t really matter.  I’ll leave the actual usage of this to you, the reader.


public static void RegenerateManifest()
{
    var data = "";
    foreach (var v in Directory.GetFiles("Assets", "*", SearchOption.AllDirectories))
    {
        var path = Path.Combine(Path.GetDirectoryName(v), Path.GetFileNameWithoutExtension(v)).Replace('\\', '/');
        if (path.Contains("/Resources/") && !v.Contains(".meta") && !string.IsNullOrEmpty(Path.GetFileNameWithoutExtension(v)) && !v.Contains("_resourceManifest.txt"))
        {
            var index = path.IndexOf("Resources/") + "Resources/".Length;
            if (index < path.Length)
            {
                // Trim the path down to what Resources.Load expects
                path = path.Substring(index);
                var obj = Resources.Load(path);
                if (obj)
                {
                    var encoded = path + "\n";
                    data += encoded;
                }
                else
                {
                    Debug.LogError("Non-resource in resources @ " + v);
                }
            }
        }
    }
    File.WriteAllText("Assets/Resources/_resourceManifest.txt", data);
}


public static void GenerateNetworkStrings()
{
    // Tells it which assemblies to search
    // We could search literally everything, but that takes a long time
    var assemblies = new List<Assembly> {
        // (project assemblies elided)
    };

    var set = new HashSet<string>();
    EditorUtility.DisplayProgressBar("Building Network Strings", "Preprocessing...", 0);
    try
    {
        EditorUtility.DisplayProgressBar("Building Network Strings", "Generating resource strings...", 0);
        foreach (var resource in ResourceManifest.Resources)
            set.Insert(resource);

        EditorUtility.DisplayProgressBar("Building Network Strings", "Searching code...", 0);
        //var assemblies = System.AppDomain.CurrentDomain.GetAssemblies();
        for (int i = 0; i < assemblies.Count; ++i)
        {
            float baseProgress = ((float)i) / ((float)assemblies.Count);
            float nextBaseProgress = ((float)(i + 1)) / ((float)assemblies.Count);
            var assembly = assemblies[i];
            var types = assembly.GetExportedTypes();
            for (int ti = 0; ti < types.Length; ++ti)
            {
                var type = types[ti];
                EditorUtility.DisplayProgressBar("Building Network Strings", "Checking type " + type.Name + " for values...", Mathf.Lerp(baseProgress, nextBaseProgress, ((float)ti) / ((float)types.Length)));
                foreach (var v in type.GetCustomAttributes(true))
                {
                    if (v is RegisterNetworkStringAttribute)
                    {
                        var attrib = v as RegisterNetworkStringAttribute;
                        set.Insert(attrib.Value); // (property name assumed)
                    }
                }
                foreach (var method in type.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Static))
                {
                    foreach (var a in method.GetCustomAttributes(true))
                    {
                        // Store the RPC's name, as described above
                        if (a is ObjectRPCAttribute)
                            set.Insert(method.Name);

                        if (a is StaticRPCAttribute)
                            set.Insert(method.Name);
                    }
                }
            }
        }

        var strbuild = new System.Text.StringBuilder();
        int current = 0;
        foreach (var v in set)
            strbuild.AppendLine(current++ + ":" + v);
        System.IO.File.WriteAllText("Assets/Resources/" + NetworkStringRegistry.fileLocation + ".txt", strbuild.ToString());
    }
    finally
    {
        EditorUtility.ClearProgressBar();
    }
}

Note that HashSet.Insert is just an extension method that checks to see whether or not it already contains the element before adding it.
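That extension is trivial; something along these lines:

```csharp
using System.Collections.Generic;

public static class HashSetExtensions
{
    public static void Insert<T>(this HashSet<T> set, T item)
    {
        // HashSet<T>.Add already ignores duplicates, but the explicit
        // check matches the description above
        if (!set.Contains(item))
            set.Add(item);
    }
}
```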

Reflection in standard C++

I’ve been playing with C++ a lot lately, trying to see what ways I can manipulate templates and macros.  I’m aware that it isn’t a wholly original concept, but today I decided it would be a good idea to try implementing a nice reflection interface for C++.  Right now it only supports members and parent/child relationships.  The next step is to implement method metadata, and eventually I’d like the system to be extensible such that you can attach arbitrary metadata to classes and their members.  Here’s an example class definition from the git repo:

// mammals.h

#include <reflection.hpp>
#include <string>
#include <vector>

class Mammal {
  int age;
};

class Human : public Mammal {
  // Declare that the reflection system can access
  // private members (the library's macro for this is elided here)
  std::string name;
  std::vector<Human> children;

public:
  int GetAge();
};

// mammals.cpp

// Define the Mammal class for reflection
  // Define its age member

  // Declare that we have a parent class
  // Note that this also declares Human a child of Mammal

  // Declare Human's members

  // Since the class definition allows private reflection, we can get Mammal::Age

As you can see, all it takes to reveal a class to the reflection library is to list the names of the members you want it to see, and it automagically does the rest.  Pretty nifty, huh?

All of the code is available here as a header-only library.  Feel free to use it and contribute if you want.

Wave simulation is hard, or why you should only compute what you have to

I almost called this “Why you should always remember that games are illusions”, but that sounded way too pretentious and, honestly, kind of sad.

This weekend I participated in Global Game Jam 2017, where the theme was “Waves”.  Our game, Bermuda Dieangle, involved controlling the ocean surface to make boats crash into each other.  Initially, we planned on running a complex wave simulation to make the boat and water physics extremely realistic, and we even had code in place to approximate it (one iteration on the CPU running on a background thread, which ended up being too slow, and another using the GPU to propagate the wave forces, which we never got to work quite right).  But in the end, given our relatively limited experience in physical simulation and extremely limited time, we decided to fake it.

Without going into too much detail, each wave was essentially represented by a sphere which grew until it had no more energy.  Each boat checked if it was within the sphere of influence of any of these waves and, if so, applied a force away from the wave’s origin.  Since there were so few waves per frame, this was really cheap to compute and easy to write.

For the visualization, we took this data and generated a heightmap of the ocean surface by rendering each wave to a render texture as the sine of (the distance from the center / the current radius of the wave) + time + a random value unique to each wave (specifically, we had an orthographic camera pointing down towards the fluid surface and drew a radial texture, with extra math done in the fragment shader for each wave).  This was also extremely cheap and ended up looking really, really good (especially for the low-poly aesthetic we were trying to achieve).  Not only that, we were able to implement it in a couple of hours, compared to the 10+ hours we spent trying to get a more physically accurate fluid simulation working, which never happened.  Furthermore, it should theoretically run on mobile or the web, which would have been totally impractical with either of the more complex solutions.

And since all of the computations are pretty simple, buoyancy becomes feasible without copying any data from the GPU to the CPU, because you only have to sample at a few points per ship, per wave (even though our buoyancy was broken at the end, it should be pretty simple to get in and fix).
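The per-boat force check boils down to a handful of lines.  A sketch of the idea in Unity terms – the WaveManager and wave fields here are illustrative names, not our actual jam code:

```csharp
using UnityEngine;

public class WaveReceiver : MonoBehaviour
{
    public float forceScale = 10f;

    void FixedUpdate()
    {
        var body = GetComponent<Rigidbody>();
        // Each active wave is a growing sphere: an origin, a current
        // radius, and whatever energy it has left
        foreach (var wave in WaveManager.ActiveWaves)
        {
            var offset = transform.position - wave.Origin;
            if (offset.sqrMagnitude < wave.Radius * wave.Radius)
            {
                // Push the boat directly away from the wave's center,
                // scaled by the wave's remaining energy
                body.AddForce(offset.normalized * wave.Energy * forceScale);
            }
        }
    }
}
```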

So in short, don’t bother computing anything that you don’t have to.  And for the love of all that is holy, if you’re a bunch of college kids with very little experience in wave dynamics and not enough time to do thorough research, don’t try to write a real-time wave simulation/propagation system in a hackathon where that’s only a small part of the project.  Just don’t.