Programming

Autofac Modules and Code Organization

Organizing Code With Autofac Modules

What are Autofac Modules?

I’ve been writing a little bit about Autofac and why it’s rad, but today I want to talk about Autofac modules. In my previous post on this, I talked about how one of the drawbacks to the constructor dependency pattern is that at some point in your application, generally in the entry point, you get allllll of this spaghetti code that is the setup for your code base.

Essentially, we’ve balanced having nice clean testable classes with having a really messy spot in the code. But it’s only ONE spot and the rest of your code is nice. So it’s a decent trade off. But we can do better than that, can’t we?

Autofac modules!

We can use Autofac modules to organize some of the code that we have in our entry point into logical groupings. An Autofac module is an implementation of a class that registers types to our dependency container to be resolved at a later time. You could do this all in one big module, but like many things in programming, having some giant monolithic thing that does ALLLL the work usually isn’t the best.

An Example of Converting to Autofac Modules

Let’s create a simple application as an example. I’ll describe it in words, and then I’ll toss up some code to show a simple representation of it. We’ll assume we’re using dependencies passed as interfaces via constructors as one of our best practices, which makes this conversion much easier!

So our app will have a main window with a main content area and a header area. These will be represented by three objects. Our application will also have a logger instance that we pass around so classes that need logging abilities can take an ILogger in their constructor. But our logger will have some simple configuration that we need to do before we use it.

Let’s assume to start our Program.cs file looks like this:

internal sealed class Program
{
    private static void Main(string[] args)
    {
        var logger = new FileLogger();
        logger.LogLevel = LogLevel.Debug;
        logger.FilePath = "log.txt";

        var header = new FancyHeader(logger);
        var content = new BoringMainContent();
        var window = new MainWindow(header, content);
        window.Show();
    }
}

Before getting comfortable with Autofac, my first step would be to logically group things in the main method. In this particular case, we have something simple and, surprise… it’s all grouped. But my next step would usually be to pull these things out into their own methods. I do this because it helps me identify whether my groupings make sense and where my dependencies are. Let’s try it!

internal sealed class Program
{
    private static void Main(string[] args)
    {
        var logger = InitializeLogging();
        var window = InitializeGui(logger);
        window.Show();
    }

    // no params passed in, so no dependencies
    // return value is an ILogger, so we have a
    // logical grouping that will provide us a logger
    private static ILogger InitializeLogging()
    {
        var logger = new FileLogger();
        logger.LogLevel = LogLevel.Debug;
        logger.FilePath = "log.txt";
        return logger;
    }

    // only parameter is a logger, so that's our dependency
    // return value is a window, so this grouping provides
    // a window for us
    private static IWindow InitializeGui(ILogger logger)
    {
        var header = new FancyHeader(logger);
        var content = new BoringMainContent();
        var window = new MainWindow(header, content);
        return window;
    }
}

Alright cool. So yes, this is a bit of extra code compared to the initial example, but I promise you grouping these things out into separate methods as a starting point when you have a LOT of initialization logic will help a ton. Once they are in methods, you can pull them out into their own classes. Refactoring 101 for single responsibility principle going on here 😉 BUT, we’re interested in Autofac. So what’s the next step?

We have two logical groupings going on here in our example. One is logging and the other is for the GUI. So we can actually go ahead and make two Autofac modules that do this work for us.

public sealed class LoggingModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<FileLogger>()
            .AsImplementedInterfaces() // FileLogger will be resolved as an ILogger
            .SingleInstance() // we only ever need to use one logger instance for our app
            .OnActivated(x =>
            {
                // this handles our extra setup we had for this object
                x.Instance.LogLevel = LogLevel.Debug;
                x.Instance.FilePath = "log.txt";
            });
    }
}

public sealed class GuiModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<FancyHeader>() // this has a dependency on ILogger, but autofac will figure it out for us
            .AsImplementedInterfaces() // FancyHeader will be resolved as IHeader
            .SingleInstance(); // we only ever need to use one instance for our app
        builder
            .RegisterType<BoringMainContent>()
            .AsImplementedInterfaces() // BoringMainContent will be resolved as IContent
            .SingleInstance(); // we only ever need to use one instance for our app
        builder
            .RegisterType<MainWindow>() // Autofac will resolve our IHeader and IContent dependencies for us
            .AsImplementedInterfaces() // MainWindow will be resolved as IWindow
            .SingleInstance(); // we only ever need to use one instance for our app
    }
}

And those are our two logical groupings for modules! So, how do we use this and what does our Main() method look like now? I’ll demonstrate with one way that works for a couple modules, but I want to follow up with another post that talks about dynamically loading modules. If you can imagine this scenario blown out across MANY modules, you’ll understand why it might be helpful.

The idea for our Main() method is that we just want to resolve the one main dependency manually and let Autofac do the rest. So in this case, it’s our MainWindow.

private static void Main(string[] args)
{
    // create an autofac container builder
    var containerBuilder = new ContainerBuilder();

    // manually register our two new modules we made
    containerBuilder.RegisterModule<LoggingModule>();
    containerBuilder.RegisterModule<GuiModule>();

    // create the dependency container
    var container = containerBuilder.Build();

    // resolve and use our main dependency by its interface
    // (because we shouldn't care what the implementation is...
    // that was up to the configuration via modules!)
    var window = container.Resolve<IWindow>();
    window.Show();
}

In Summary…

This example showed how to split your main initialization logic into groups that play nicely as Autofac modules. In a really simple example, having modules might look like bloated extra code, but it already illustrates that your entry point stays very simple and follows a pattern that’s easy to extend (just register another module for more dependencies… and I’ll add more on this later). There’s also an obvious way to group new dependency logic into your application! We discussed logging and GUI initialization, but you could extend this to:

  • User Settings
  • Analytics/Telemetry
  • Error Reporting
  • Database Configuration
  • Etc… Just add more modules!

Sometimes the pain of having a really hectic entry point isn’t realized until you’ve had to work on teams where people are modifying the same beast of an entry point all the time:

  • Simple merge conflicts in your “using” statements… because there are hundreds of lines of using statements at the top of the file
  • Visual Studio actually CANNOT use IntelliSense properly when the file gets too unwieldy
  • The debugger cannot resolve variables properly when the main entry point gets too big
  • Merging and auto-conflict resolution sometimes results in code just getting blown away in the entry point… And good luck finding what went wrong in your thousands of lines of initialization

So what’s next? Well, if you keep building out your app you might notice you have tons of modules now. Your single GUI module might have to get broken out into modules for certain parts of the GUI, for example, just to keep them more manageable. Maybe you want plugins to extend the application dynamically, which is really powerful! Our method for registering modules just isn’t really extensible at that point, but it’s very explicit. I’ll be sharing some information about automatic Autofac module discovery and registration next!
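As a little teaser of where that follow-up is headed: Autofac can scan assemblies for module implementations so you don’t have to register each one by hand. Here’s a minimal sketch of the idea (the choice of which assembly to scan is just an assumption for illustration):

using System.Reflection;
using Autofac;

internal static class ContainerFactory
{
    public static IContainer Create()
    {
        var containerBuilder = new ContainerBuilder();

        // scan an assembly for every class deriving from Autofac's Module
        // and register them all in one shot (here we just scan the assembly
        // we're running from as an example)
        containerBuilder.RegisterAssemblyModules(Assembly.GetExecutingAssembly());

        return containerBuilder.Build();
    }
}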


RPG Development Progress Pulse – Entry 1

Progress Pulse

Progress Pulse – Entry 1

For the first entry in the progress pulse series I’ll touch on some things from the past week or so. There’s been a lot of smaller things being churned in the code base, some of them interesting, and others less interesting so I want to highlight a few. As a side note, it’s really cool to see that the layout and architecture is allowing for features to be added pretty easily, so I’ll dive a bit deeper on that. Overall, I’m pretty happy with how things are going.

Unity3D – Don’t Fight It!

I heard from a colleague before that Unity3D does some things you might not like, but don’t try to fight it, just go with it. To me, that’s a challenge. If I’m going to be spending time coding in something I want it to be with an API that I enjoy. I don’t want to spend time fighting it. An example of this is how I played with the stitching pattern to make my Autofac life easier with Unity3D behaviours.

However, I met my match recently. At work, we were doing an internal hackathon where we could work on projects of our choosing over a 24 hour period, and they didn’t have to be related to work at all. It’s a great way to collaborate with your peers and learn new things. I worked on Macerus and ProjectXyz. I was reaching a point where I had enough small seemingly corner-case bugs switching scenes and resetting things that I decided it was dragging my productivity down. It wasn’t exciting work, but I had to do something about it.

After debugging some console logs (I still have to figure out how to get visual studio properly attached for debugging… Maybe I’ll write an article on that when I figure it out?) I noticed I had a scenario that could only happen if one of my objects was running some work at the same time… as itself? Which shouldn’t happen. Basically, I had caught a scenario where my asynchronous code was running two instances of worker threads and it was a scenario in my game that should never occur.

I tried putting task cancellation and waiting into my Unity game. I managed to hang the main thread on scene switching and application close. No dice. I spent a few hours trying to play around with a paradigm here where I could make my ProjectXyz game engine object run asynchronously within Unity and not be a huge headache.

I needed to stop fighting it though. There was an easier solution.

I could make both a synchronous and an asynchronous API for my game engine. If you have a game where you want the engine running on its own thread, call the asynchronous version. Unity3D already has its own game engine loop, so why re-invent it? In Unity3D, I can simply call the synchronous version of the game engine’s API. With this little switch, I suddenly fixed about 3 or 4 bugs. I had to stop fighting the synchronous pattern with my asynchronous code.
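I won’t paste the real engine code here, but the shape of the idea looks roughly like this (these names are made up for illustration and aren’t the actual ProjectXyz API):

using System.Threading;
using System.Threading.Tasks;

public interface IGameEngine
{
    // synchronous API: perform exactly one update of the game systems.
    // Unity3D can call this directly from its own Update() loop.
    void Update();
}

public static class GameEngineAsyncExtensions
{
    // asynchronous API: for hosts that don't have their own loop, spin the
    // engine on a background task until cancellation is requested.
    public static Task RunAsync(this IGameEngine engine, CancellationToken cancellationToken)
    {
        return Task.Run(() =>
        {
            while (!cancellationToken.IsCancellationRequested)
            {
                engine.Update();
            }
        },
        cancellationToken);
    }
}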

The lesson? Sometimes you can just come up with a simple solution that’s an alternative instead of hammering away trying to fix a problem you created yourself.

DevOps – Build & Copy Dependencies

This one for me has been one of my biggest nightmares so far.

The structure of my current game setup is as follows:

  • ProjectXyz.sln: The solution that contains all of my back-end shared game framework code. This is the really generic stuff I’m trying to build up so that I can build other games with generic pieces if I wanted to.
  • Macerus.sln: The game-specific business logic for my RPG built using ProjectXyz as a dependency. Strictly business logic though.
  • Macerus Unity: The project that Unity3D creates. This contains presentation layer code built on Macerus.sln outputs and ProjectXyz.sln outputs.

I currently don’t have my builds set up to create NuGet packages. This would probably be an awesome route to go, but I also think it might result in a ton of package churn right now while the different pieces are all still changing constantly. It’s probably something I’ll revisit as things harden, but for now it seems like too much effort given the trade-off.

So what have I been doing?

  • I build ProjectXyz.sln.
    • The outputs go into this solution’s bin folder
  • I build Macerus.sln
    • There’s a prebuild step that copies ProjectXyz dependencies over
    • The outputs go into this solution’s bin folder
  • I use a custom in-editor menu to copy dependencies into my Unity project
    • This resets my current “dependencies” asset folder
    • The build outputs from the other solutions are copied over
  • I can run the project with new code!

This is a little tedious, sure. But it hasn’t been awful. The problem? Visual Studio only seems to be able to clean what it has knowledge about.

I’ve been refactoring and renaming assemblies to better fit the structure I want. A side note worth mentioning is that MUCH of my code is pluggable… The framework is very light and most things are injected via Autofac from enumerating plugin modules. One of the side effects is that downstream dependencies of ProjectXyz.sln (i.e. Macerus.sln) have build outputs that include some of the old DLLs from before the rename. And now… Visual Studio doesn’t seem to want to clean them up on build. So what happens?

Unity3D starts automatically referencing these orphaned DLLs, and the auto-plugin loading exhibits some crazy behaviour. I’ve been seeing APIs show up that haven’t existed for weeks because some stale DLL is now showing up after an update to the dependencies. This kind of thing was chewing up HOURS of my debugging time. Not going to fly.

I decided to expand my menu a bit more. I now call MSBuild.exe on my dependency solutions prior to copying over dependencies. This removes two completely manual steps from the process. I also purged my local bin directories. Now when I encounter this problem of orphaned DLLs, my single click to update all my content lets me churn through iterations faster and shortens my debugging time. Unfortunately it’s still not an ultimate solution to the orphaned dependencies lingering around, but it’s better.
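For anyone curious what that in-editor menu looks like, here’s a trimmed-down sketch of the idea (the paths, menu name, and MSBuild invocation are placeholders and assume msbuild.exe is on your PATH… not my actual setup):

using System.Diagnostics;
using System.IO;
using UnityEditor;

public static class DependencyMenu
{
    // placeholder paths for illustration only
    private const string DependencySolutionPath = @"..\ProjectXyz\ProjectXyz.sln";
    private const string BuildOutputDirectory = @"..\ProjectXyz\bin\Debug";
    private const string UnityDependencyDirectory = @"Assets\Dependencies";

    [MenuItem("Tools/Update Dependencies")]
    public static void UpdateDependencies()
    {
        // build the dependency solution first so the outputs are fresh
        using (var msbuild = Process.Start("msbuild.exe", $"\"{DependencySolutionPath}\" /t:Rebuild"))
        {
            msbuild.WaitForExit();
        }

        // reset the "dependencies" asset folder so stale/orphaned DLLs don't linger
        if (Directory.Exists(UnityDependencyDirectory))
        {
            Directory.Delete(UnityDependencyDirectory, recursive: true);
        }

        Directory.CreateDirectory(UnityDependencyDirectory);

        // copy the new build outputs from the other solution into the Unity project
        foreach (var dllPath in Directory.GetFiles(BuildOutputDirectory, "*.dll"))
        {
            File.Copy(dllPath, Path.Combine(UnityDependencyDirectory, Path.GetFileName(dllPath)));
        }
    }
}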

The lesson learned here was that sometimes you don’t need THE solution to your problem, but if you can make temporarily fixing it or troubleshooting it easier then it might be good enough to move forward for now.


RPG Development Progress Pulse

Progress Pulse Series

I figured this would be a fun thing to start to do just to get small updates out and talk about what I’ve been working on for ProjectXyz and my RPG I’m building in Unity3D. This will hopefully be some small updates on the order of semi to bi-weekly about what kinds of things are going on when I’m programming for these projects. This could include:

  • How and why I decided to refactor something
  • A new design practice I’m trying
  • Reflecting on why a design decision has(n’t) been working out
  • A new feature that’s interesting
  • etc…

Some of these will be technical and others much less so. A progress pulse gives me an outlet to talk about interesting things I’m doing and maybe sheds some light on some areas (game development or just general programming) that you might be interested in.

Where Can I Find Entries In This Series?

I’ll try to organize these Progress Pulse entries into a specific category on my blog. Ideally that way you can navigate them pretty easily. You can click the link below and you should get all the entries in this series!

Click Here For Entire Progress Pulse Series


ProjectXyz: Why I Started A Team For My Hobby Project

ProjectXyz - Why I Started a Team

Who Needs A Team?!

I’ve been building RPG backends for as long as I’ve been able to code. I think my first one that I made for my grade 11 class is the only RPG that I “finished”… It was text-based and all you could do was fight AI via clicking attack, buy better weapons, level up, and repeat. It was also 10000 lines of VB6 code and so brutal that I couldn’t add anything to it without copying hundreds of lines of code.

Since then, I’ve had the itch. I keep rewriting this thing. I keep taking “Text RPG” (super cool and catchy, I know) and rewriting it. I had my first visual representation of this game called Macerus (here’s another rewrite for unity), which is actually how I landed my first co-op job. But every time I’d get so far, I’d decide I needed to rewrite it because I had messed up the architecture in some way and refactoring would be too much work.

My latest attempt is called ProjectXyz, because I can’t come up with names. And funny enough, I just Googled it while writing this article and there’s actually a company with the same name… So maybe I’ll have to get more creative. ProjectXyz is supposed to be a very generic RPG game framework that allows new systems, mechanics, and game content to be dropped in, in addition to being independent of a front end for rendering.

It’s also something I’ve been making on my own. Because I’ve been making RPG backends on my own for years now. So who needs to have a team, right?

Too Much Pride For A Team

I think initially I wanted to do this all on my own because of pride. I also don’t think it was something I was conscious about except for the fact I looked at this project as my baby and something I could control the development of. I wasn’t consciously telling myself “I have to do this on my own so that I’m better than other people” or anything silly like that.

But why would I go ask others for help? They don’t code like me. They don’t have the same investment into this idea as me. They aren’t as passionate. They might have their own ideas for how to do things too! How could I have someone like that working on MY project?

Those are all pretty naive reasons for choosing to work alone though. Sure, this is my pet project and I’m going to likely feel more attached to it than anyone else. That’s probably expected. It doesn’t mean that I can’t find people that are super interested in working on something like this. They could be totally passionate about learning different aspects of creating an RPG backend.

As for having their own ideas… That’s probably one of the BIGGEST reasons in FAVOUR of having a team! It’s easy to get scared about having other people put their ideas into something you feel like is “yours”. It might have taken a few years of working in the industry (currently just passed 6 years of working at Magnet Forensics), but it’s actually very common for other people to be contributing ideas into code bases you’re working on. It happens every day. Sometimes you have design meetings or code reviews or general architectural discussions and your idea ISN’T the one that’s picked. That’s cool! As long as everyone is striving for extensible and testable code, we can make changes if we need to going forward. You don’t need to make every decision and sometimes it’s much better that way. Other people are smart too 😉

Passion is Key for a Team

While the “team” I started isn’t an official team, it’s the first time I’ve been very open to having people directly contribute to my pet project. I think one of the most obvious reasons I became comfortable with this is because I found someone that was very passionate about exploring this space.

My colleague and I were talking about some of the concepts in ProjectXyz and where I wanted to go with it. Immediately he expressed interest in map generation and how that’s always been something he wanted to explore. How can maps be procedurally generated? Can we take this concept and generate maps on the fly? What are memory and runtime constraints? How do we represent this information in code? What about persistent storage?

I could immediately tell he was very curious about how a system like this might work. After several conversations with him about how he was starting to hack up some ideas and doing research on different algorithms, I knew he was passionate about it. We discussed working on some of these things together and contributing to the project code that I have, and we’ve been going back and forth for a few weeks now sharing ideas and his progress that he’s making for map generation. I’ve been hands off only really acting as a sounding board for him.

I think having someone passionate like this is critical for a small team. There are going to be many barriers when working on a challenging project, and it’s easy to get bogged down and lose motivation when you’re stuck. Having additional people that are passionate about seeing progress in your project means you have some support for pushing through those hard times when you might lose motivation. If my colleague comes to me and says “I’ve been stuck on this issue and maps won’t generate how I want…”, then I’m more than happy to sit down with him and talk through his algorithm and maybe where there’s an issue. I’m invested in seeing his piece come to fruition. Similarly, if I’m working on something like dynamic item generation for the game and I get stuck, I know he’s there to do the exact same thing. We both want to see this thing working how we intend.

So passion is important for a team. But is it sufficient? Is it the only requirement for adding a team member?

A Team is Built on Trust

Trust! Trust is a huge part of establishing a team because you need to be able to rely on each other. As mentioned, my colleague is passionate about working on this and has an interest in map generation. But what if I had never seen any of his code before? What if I didn’t know if he’s had practice with writing extensible code, testable code, following good design practices, etc… What if?

To be honest, I probably would be pretty nervous about him contributing code. It might be a huge barrier for me. I’d want to review his code and make sure it wasn’t “polluting” my pet project. I’ve re-written this code enough times that I really don’t want to have to think about rewriting it again! If I was nervous about someone contributing code I was going to need to re-write from scratch just to have an extensible design, it might not even be worth it having them contribute in the first place. It might actually create MORE work in the long run. It sounds selfish, but if the goal of adding someone to the team is to provide a net positive effect, then having to re-write code that isn’t up to par might be a deal breaker.

But that’s not the case here. I have multiple years of experience working with this colleague closely on various projects. We align to coding practices but still have our own twist on things. We value the same things in “good” code (extensible and testable). We use many of the same design patterns in similar situations. I’ve seen enough of his code to know that most of the time my comments about it are “oh, have you considered” and not “… you need to rewrite this”.

I can trust that what he wants to contribute will be aligned to my vision. I also can trust that new ideas he introduces are probably awesome new perspective that I hadn’t thought of. I also trust that if we disagree on something, we’re open to discussing it and coming to a resolution. So trust in this case certainly removes the barrier to entry to adding additional people to my hobby project.

Should You Form a Team?

While this was a pretty general article, I just wanted to get you thinking about opening up your hobby project(s) to other people to contribute. This is something I wish I would have considered more seriously early on. Maybe I wouldn’t be re-writing my project for the millionth time!

Some general points:

  • You’re not a “worse” programmer for getting other people contributing. Good programmers need to be able to work with others!
  • Other people can have good ideas too! Sometimes, they’re even better than your own ideas 😉
  • Other people may have more knowledge or interest in areas that need to get work done that you just don’t want to do! Perfect!
  • You’ll want to try and find people passionate about working in the area your project focuses on
  • You’ll want to find people that you feel like you can trust so that you’re comfortable with them working on “your baby”
  • Getting help doesn’t mean your code must be “open source”. You can still share private repositories together (e.g. consider Bitbucket!)

So what do you think? Is your hobby project kind of stale because you’ve hit enough roadblocks and it’s time to get some more firepower to tackle it?

Share your thoughts below about your experiences with forming teams for your hobby projects!


Stitching – Combining Unity3D And Autofac

Stitching - Combining Unity3D And Autofac

Before We Talk About Stitching…

In Unity3D, the scripts we write and attach to GameObjects inherit from a base class called MonoBehaviour (and yes, that says Behaviour with a U in it, not the American spelling like Behavior… Just a heads up). MonoBehaviour instances can be attached to GameObjects in code by calling the AddComponent method, which takes a generic type parameter (or a Type argument) and returns the new instance of the attached MonoBehaviour that it creates.

This API usage means that:

  • We cannot attach existing instances of a MonoBehaviour to a GameObject
  • Unity3D takes care of instantiating MonoBehaviours for us (thanks Unity!)
  • … We can’t pass parameters into the constructor of a MonoBehaviour because Unity3D only handles parameterless constructors (boo Unity!)

So what’s the problem with that? It kind of goes against some design patterns I’m a big fan of, where you pass your object’s dependencies in via the constructor. You can read my little primer about constructor parameter passing, dependency injection, and Autofac to learn more.

The challenge I’m trying to address is that my non-MonoBehaviour classes are all going to be setup to use constructor parameter passing as much as possible but the MonoBehaviour classes cannot. So I’d like to reduce the amount of disjoint coding styles as much as I can and make the MonoBehaviour classes feel like the rest of my stuff!

What Is “Stitching”?

Here’s where this little pattern I created called “Stitching” comes into play. Stitching involves using a class referred to as a Stitcher, whose single purpose is to take parameters in via a constructor and wire them up to either public properties or public fields (but I REALLY suggest using properties) on the MonoBehaviour that we instantiate through the GameObject.AddComponent() API.

The code ends up looking something like this:

public sealed class MyComponentStitcher
{
  private readonly IDependency _dependency;

  public MyComponentStitcher(IDependency dependency)
  {
    // take in our dependencies and save them as fields
    _dependency = dependency;
  }

  public MyComponent Stitch(GameObject gameObject)
  {
    // create the MonoBehaviour instance using the Unity3D API
    var componentInstance = gameObject.AddComponent<MyComponent>();

    // wire up our dependencies (assign our field to a property on the component)
    componentInstance.Dependency = _dependency;

    return componentInstance;
  }
}

Where you can see that:

  • We inject dependencies into the Stitcher’s constructor
  • We call AddComponent() with the component type we want on the object we want to “stitch” to
  • We mutate the component
  • We return the newly made component

How Do We Use Stitching In Practice?

Now that we see the pattern for a how a Stitcher works, how do we actually use Stitching in practice? Let’s start by using another example:

public sealed class SomeClass
{
  private readonly IMyComponentStitcher _stitcher;

  public SomeClass(IMyComponentStitcher stitcher)
  {
    _stitcher = stitcher;
  }

  public void MyMethod()
  {
    // create a new Unity3D game object
    var gameObject = new GameObject("My Game Object");

    // "stitch" our 
    var myComponent = _stitcher.Stitch(gameObject);

    // we can use some information that would have been injected into the constructor
    // this should print the injected value
    Debug.Log(myComponent.InjectedInfo);
  }
}

From this, you can see that:

  • We have a class called SomeClass following our constructor parameter passing paradigm
  • The method MyMethod()
    • Creates a new game object
    • Adds a MyComponent instance to our game object by calling the Stitch() method
    • Using our imagination and the example above, pretend our Stitcher implementation takes a parameter in its constructor to assign to the InjectedInfo property of our MonoBehaviour
  • Logs out the value of the InjectedInfo property found on our newly created instance

So What Makes Stitching Better?

You might feel like this is extra code right now, but this is where the power of Autofac comes into play. You can read my article about using Autofac with Unity3D for more information.

By creating a Stitcher, we can register it to our Autofac container. The Autofac container will then resolve any dependencies that our Stitcher requires for us. The net effect of this is that when we Stitch MonoBehaviours to GameObjects, we get what feels like Autofac resolving dependencies for our MonoBehaviours. We don’t need to mutate MonoBehaviour fields/properties all over our code to assign the dependencies the script needs to use. Instead, we treat the Stitcher class like a factory for our MonoBehaviour.
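To tie that together, here’s roughly what the registration side could look like, assuming we pull an IMyComponentStitcher interface out of the example stitcher above (that interface is my assumption… it wasn’t shown earlier):

using Autofac;
using UnityEngine;

public interface IMyComponentStitcher
{
    MyComponent Stitch(GameObject gameObject);
}

public sealed class StitcherModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Autofac resolves IDependency for the stitcher's constructor, so anything
        // that takes an IMyComponentStitcher gets a fully wired-up factory
        builder
            .RegisterType<MyComponentStitcher>()
            .AsImplementedInterfaces()
            .SingleInstance();
    }
}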

So in summary:

  • Stitching allows us to leverage Autofac for instantiating MonoBehaviours
  • Stitcher classes essentially become a factory class for our MonoBehaviours (with the side effect that they *must* mutate the GameObject that we need to attach the MonoBehaviour to)
  • Allows assignment of MonoBehaviour fields/properties for initialization to live in one spot, so the less-desirable object-mutating code stays contained and feels hidden

Using Autofac With Unity3D

Autofac With Unity

Why Consider Using Autofac With Unity3D?

I think using a dependency injection framework is really valuable when you’re building a complex application, and in my opinion, a game built in Unity is a great example of this. Using Autofac with Unity3D doesn’t need to be a special case. I wrote a primer for using Autofac, and in it I discuss reasons why it’s valuable and some of the reasons you’d consider switching to using a dependency container framework. Now it doesn’t need to be Autofac, but I love the API and the usability, so that’s my weapon of choice.

Building a game can result in many complex systems working together. Not only that, if you intend to build many games it’s a great opportunity to refactor code into different libraries for re-usability. If we’re practicing writing good code using constructor dependency passing with interfaces, then things really start to line up in favour of using a dependency injection framework.

Getting Set Up

At the end of my autofac primer article, I provided a link to the Nuget package for Autofac. You’ll notice that there’s a version dependency for .NET 4.5, so if you’re not sure how to get Unity3D working with .NET 4.5, you’ll want to check this other article of mine. It’s very simple, so don’t worry!

As of writing this (using Unity3D version 2018.1.1f1), there’s no native NuGet package support. I haven’t spent too much time investigating alternatives, but not to worry, I’ll explain a quick workaround. The TL;DR is that we need the binaries from the NuGet package to be loaded up by Unity3D, and we’ll miss out on the NuGet-y-ness for now. Not a huge deal since we’ll still have Autofac support!

  • Start a new Visual Studio C# project
    • Ensure that the .NET framework is at least 4.5 and more specifically, the version of .NET that you’d like to use in your Unity3D project
  • Open up the Nuget package manager in Visual Studio
  • Search for Autofac online in the package manager (it should be the same one I referred to above!)
  • Add this package to your visual studio project
  • Compile this visual studio project
  • Assuming you built in debug, go to the output folder (which is bin\Debug if you didn’t change anything from the defaults)
  • In the output folder, you’ll find “Autofac.dll”
  • You’ll want to add this into your Unity3D project’s “Assets” folder
    • I like nice folder hierarchies, so I’d suggest making a subfolder inside of “Assets” called “Third Party” or “Dependencies”… Something that’s obvious for what it means
    • Drop in the Autofac.dll file into there
  • Unity3D will add a corresponding *.meta file to go along with this

Great! We’re almost there. If you want to test it out, open up a script from Unity3D. This will launch a new Visual Studio instance if you haven’t opened up one for your Unity project yet. At the very top of your file you should be able to type:

using Autofac;

And the namespace should resolve! If not, sometimes this takes Unity3D a refresh operation to regenerate the project file on disc, so if you switch to Unity3D again and it starts doing some processing, switching back to Visual Studio might resolve this.

Using Autofac With Unity3D

Up until this point, we’ve proven we can reference Autofac. I’m not going to explain all the ins and outs for how you’ll want to organize your Autofac initialization in this post, but we can walk through a quick example!

  • Pick a game object on your scene
  • Add a new C# script to it
    • Call it whatever you’d like, but make sure you know how to open it
  • … now go open it in Visual Studio 🙂
  • We should have a method in there called Start()
    • If not, feel free to add it:
    • private void Start()
      {
        // TODO: we'll add stuff here
      }
  • Let’s use this code to make a new class that you can put inside the same script file for now:
    • public sealed class MyAutofacObject
      {
      
        public MyAutofacObject()
        {
          Debug.Log("Constructor for our object!");
        }
      
        public void DoThing()
        {
          Debug.Log("Test!");
        }
      }
  • Inside this start method, let’s try doing something VERY simple to prove Autofac works!
    • var containerBuilder = new Autofac.ContainerBuilder();
      containerBuilder.RegisterType<MyAutofacObject>().SingleInstance();
      
      var container = containerBuilder.Build();
      var instance = container.Resolve<MyAutofacObject>();
      
      instance.DoThing();

Now if we run our game, here’s what should happen:

  • The script attached to the game object should run
  • The Start() method on the script should be the first thing that goes
  • The code we added should:
    • Make a new ContainerBuilder
    • Register our MyAutofacObject type as a single instance
    • Build the container
    • Resolve an instance of our type
    • Log out a message saying it’s in the constructor
    • Log out a message that says Test!

And voila! It’s simple, but it should demonstrate that Autofac is working!

Next Steps

This is a very contrived example of using Autofac with Unity3D. It proves that the code can be run, but it doesn’t do too much that’s useful. There are going to be many considerations you’ll need to make for how you want to organize your dependencies, register your classes/interfaces, and so on.

I’ll continue to add into this Unity3D series of posts, but let me know what else you’d like to know about using Autofac with Unity3D! I’d be happy to try and answer, or even create an article to help explain.

Thanks!


Dependency Injection with Autofac – A Primer

Autofac Logo

Before Autofac…

I’ve written before about IoC and dependency injection, but these are older posts and my perspective and experience with these topics has fortunately been growing. I think they’re incredibly important when you’re building complex systems, but the concepts can offer some benefits in all of your programming! When you get in the habit of practicing this kind of thing, you can get some pretty flexible code… for free.

So a quick recap on what I mean by dependency injection here… I’m mostly focused on passing interfaces into constructors (and yes, I’m going to be using C# terminology as I do in most of my programming examples, but these concepts are generally the same in other languages). The benefits here:

  • You can write implementations that don’t depend on other implementations… Just an API.
  • Depending on an interface instead of a concrete implementation means you can write mockable code for your unit tests. (I’ll follow up with a post on this to help provide examples)
  • You can swap out functionality by providing a different implementation of an interface and NOT re-writing core code
    • This can be a very powerful refactoring tool
    • This can allow creation of new functionality in a system simply by adding one small class instead of re-writing code

So that’s all good and well… So what do we use Autofac for?

When you might want to take the leap to Autofac

So you’ve been writing code now using interfaces in your constructor parameters. You’ve got nice modular code using composition. You have unit tests. Things are great.

There comes a point where you decide you need to break open a class in the depths of your system and provide it a new interface as part of the constructor. This is in line with the constructor parameter passing paradigm (nice alliteration, woo!) you’ve been using, so it feels good. You modify your constructor to take the new interface parameter. You change up your method to call this new interface’s API. You update your tests. It works!

Now you need to make the rest of your application work though, and it turns out because this class is created so deep down in your system, you need to find a way to pass this new interface implementation allllllllll the way down. And suddenly, you find you need to break open 10 other classes to pass this interface into the constructor. It’s a simple change in that it’s the same change in 10 spots… But it’s 10 spots. And it’s tedious. And you got lucky because you own this code and you don’t need to worry about breaking the constructor API for other people.

But it might be time to look into something like Autofac at this point because it can make this problem disappear for you.

Enter Autofac!

Autofac is awesome. The end.

But seriously, Autofac is one example of a dependency container framework. The idea with a framework like this is that programmers can register things to the container and then at a later point these things can be resolved. So you could:

  • Decide to take a particular implementation and register it so that it can be resolved by its interface
  • Decide if you want a registration to act like a singleton (and remember, a singleton does NOT have to have global access… it just means literally a single instance)
  • Run callbacks when an instance is created
  • … and so much more

In my opinion, the two major benefits of Autofac as they relate to this example are:

  • You can better organize the top level of your application to wire up specific implementations to use in your code
  • … Autofac can magically resolve the dependencies for you so it solves that nasty problem of passing down dependencies via constructors to deep areas of your code

You’ll need to be careful that you don’t abuse the container though! It’s considered an anti-pattern to use the container to manually resolve dependencies across various areas of your application (generally this is referred to as the Service Locator (anti)Pattern, but people go back and forth on why it’s good or bad). The “proper” use case is to resolve your single entry point class in one spot, call the methods you need on your entry point class, and let Autofac do its magic to resolve all of your registered dependencies.
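To make that concrete, here’s a tiny sketch of that flow (the types are invented purely for illustration):

using Autofac;

public interface IGreeter
{
    void Greet();
}

public sealed class ConsoleGreeter : IGreeter
{
    public void Greet() => System.Console.WriteLine("Hello from Autofac!");
}

internal static class Program
{
    private static void Main()
    {
        var containerBuilder = new ContainerBuilder();

        // register the implementation so it can be resolved by its interface
        containerBuilder
            .RegisterType<ConsoleGreeter>()
            .AsImplementedInterfaces()
            .SingleInstance();

        using (var container = containerBuilder.Build())
        {
            // resolve the single entry point in one spot and let Autofac do the rest
            var greeter = container.Resolve<IGreeter>();
            greeter.Greet();
        }
    }
}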

Where Can I Get Autofac?

This is the easy part! You can use your Nuget package manager in Visual Studio to find the right package for your .NET framework dependency. Check it out at the Nuget Gallery!

What’s Next?

I have some examples I’d like to write about next for using Autofac including:

  • Using Modules for Organizing Code Dependencies
  • Patterns for Dynamically Resolving Modules Across Assemblies
  • How to use Autofac with Unity3D

But I’d love to hear what you want to know more about! Comment and let me know, and I’ll see what I can do.


Unity3D and .NET 4.x Framework

Unity

Unity3D Default .NET Framework

I recently wrote that I wanted to start writing more Unity3D articles because I’m starting to pick up more Unity3D hobby work. It felt like a good opportunity to share some of my learnings so that anyone searching across the web might stumble upon this and get answers to the same problems I had.

Unity3D, as of 2018.1.1f1 (which is the version I’m currently using), still defaults to using .NET 3.5 as the framework version. Nothing wrong with that either. I’m sure they have reasons for staying at that version, probably because of Mono and cross-platform concerns if I were to guess, so I’m not complaining. For reference, this setting in Unity3D is referred to as “Scripting Runtime Version”, so if you’re googling more about this later, that’s what Unity calls it. For the libraries I was building to use as a game framework, I was using .NET 4.6 and discovered I was going to have a challenge getting them working in Unity3D.

If you want to see what your setting is currently set at, you need to check out the “Player” settings. This was kind of buried in the UI for me so I didn’t know it was a thing that could be adjusted. In Unity3D 2018.1.1f1, click Edit->Project Settings->Player. Here’s what it looks like:

Unity3D - Player Settings

In Unity, click Edit->Project Settings->Player

From there, you’re going to get “PlayerSettings” in your Inspector tab. You’ll need to expand the “Other Settings” to see your scripting runtime version:

Unity3D - Other Settings

“Other Settings” accordion control in PlayerSettings Inspector tab

Once you expand that, here’s the setting you’re interested in:

Unity3D - Scripting Runtime Version

Scripting Runtime Version – The selected .NET version Unity will use

Switching Unity3D to .NET 4.x

Now that you know where the setting is… it’s pretty easy 🙂

Unity3D - Scripting Runtime Version

Use the dropdown to pick which .NET framework version you’d like to use.

You can read more about this setting over at the official Unity3D documentation pages:

https://docs.unity3d.com/Manual/ScriptingRuntimeUpgrade.html

This outlines what things are affected in different platforms and scenarios so YOU SHOULD READ IT to understand what will change.

Hope that makes things a bit easier for you to get up and running with .NET 4.x assemblies in Unity3D!


Delta State Algorithm Creation Series

Delta State Algorithm

Delta State Algorithm Motivation

This post will act as the table of contents for an algorithm I’m developing for calculating deltas between state for generic sets of data.

I figured this would be an interesting series to write about so I can document my thought process, trials, errors, and successes. At the end of this I plan to share working code that implements this algorithm so that you can use it in your own work.

Now that I’ve been diving more into Unity3D development for my hobby programming, I’m getting to a point in game development where I need to manage state for data in a way that allows patches of state to be applied in a layered fashion. A couple of examples of this include:

  • Applying save game state to a base game state
  • Applying a patch to a base game state
    • Optionally allow save game state to be applied on top of this and completely override as necessary
    • Optionally allow save game state to be applied on top of this but “retroactively patch” existing save game data

I believe I can design a relatively simple algorithm that should allow the layering to work, but I have a lot of potential corner cases to figure out (especially when deciding between retroactive and non-retroactive application of game patches). I’ll be exploring data structures to use and technologies to leverage for persisting this state.

It should be pretty exciting!

Delta State Algorithm Series

Just a reminder that if you’re interested in this series, you’ll want to use this page as the starting point. As I continue to add new content to this series, I’ll come back and update this page with links to the parts of the series. Ideally at the end I’ll have some working code I can share!

  • Part 1 – Exploring Graphs and Trees (TBD)
  • Part 2 – Undo/Redo Patterns (TBD)
  • Part 3 – Generic State Serialization to a Graph (TBD)
  • Part 4 –  … TBD

What Makes Good Code? – Should Every Class Have An Interface? Pt 2

Should Every Class Have an Interface?

This is part two in the sub-series of “Should Every Class Have an Interface?“, and part of the bigger “What Makes Good Code?” series.

Other Peoples’ Code

So in the last post, we made sure we could get an interface for every class we made. Okay, well that’s all fine and dandy (I say half sarcastically). But you and I are smart programmers, so we like to re-use other peoples’ code in our own projects. But wait just a second! It looks like Joe Shmoe didn’t use interfaces in his API that he created! We refuse to pollute our beautiful interface-rich code with his! What can we do about it?

Wrap it.

That’s right! If we add a little bit of code, we can get all the same benefits as the example we walked through originally. It’s not going to completely fix “the problem”, but I’ll touch on that after. So, we all remember our good friend encapsulation, right?

Let’s pretend that Joe Shmoe wrote some cool code that does string lookups from an Excel file. We want to use it in our code, but Joe didn’t use the IStringLookup interface (because… it’s in OUR code, not his) and he didn’t even use ANY interfaces. The constructor for his class looks like:


public ExcelParser(string pathToExcelFile);

On this class, there are two methods. One method allows us to find the column index for a certain heading, and the other method allows us to get a cell’s value given a column and row index. The method calls look like:


public int GetColumnIndex(string columnName);

public string GetCellValue(int columnIndex, int rowIndex);

We can wrap that class by creating a wrapper class that meets our interface, like so:


public sealed class ExcelStringLookup : IStringLookup
{
  // ugh... we have to reference the class directly!
  private readonly ExcelParser _excelParser;

  // ugh... we have to reference the class directly!
  public ExcelStringLookup(ExcelParser excelParser)
  {
    _excelParser = excelParser;
  }

  public string GetString(string name)
  {
    var columnIndex = _excelParser.GetColumnIndex(name);
    // assumes all of our strings will be under a column header
    var cellValue = _excelParser.GetCellValue(columnIndex, 1);
    return cellValue;
  }
}

And now this will plug right into the rest of our code that we defined originally.
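For reference, the IStringLookup interface from the earlier post isn’t shown here, but I’d imagine it being something as small as this (just my assumption of its shape):

public interface IStringLookup
{
  // look up a display string by its name/key
  string GetString(string name);
}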

This doesn’t totally eliminate “the problem” though (the problem being that some class doesn’t have an interface, which is what this post is trying to answer). There’s still a class we’re making use of that doesn’t have an interface, but it looks like we’ve reduced the exposure of that problem to JUST this class and the spot that would construct this class. Are we okay with that?

Thoughts So Far…

Let’s do a little recap on what we’ve seen so far:

  • Having interfaces for our classes is a nice way to introduce a layer of abstraction
  • Interfaces are just *one* tool to get layers of abstraction introduced
  • If you wanted to have interfaces for all of the classes in your code and some third party didn’t use interfaces, that code is likely not as common in your code base (especially if you wrap it like I mentioned above). This may not always be true in your code base, but it’s likely the case.
  • The amount of work to wrap things can vary greatly. Some things are straightforward to wrap, but you need to add many methods/properties. Sometimes it’s the inverse: you only have a few things to wrap, but they’re not straightforward.
  • The number of classes you’d need to wrap to get to this state can vary greatly… Since even built-in System classes aren’t all backed with interfaces!
  • There’s certainly a trade off between the original work + maintenance to wrap a class in an interface versus the benefits it provides.

Is that last point blasphemy?! So there may actually be times we DON’T want to have an interface for a class?

Watch this space for part 3 where we start to look at a counter-example!

 

