Tag: Programming

TL;DR: Unit vs Functional Tests

Here's a super quick peek at unit tests compared to functional tests. Full disclaimer: depending on your circle of influence, these might be given slightly different names. Try not to dwell on that; focus instead on the comparison and contrast presented!

Unit Tests

Coded tests that take a white-box visibility approach to exercising and asserting that written code, generally a specific function/method, works as it was designed.

Pros:

  • Generally very programmer-focused
  • Very granular coverage (breaks can identify exact lines where an issue occurs)
  • (Should) run extremely quickly
  • Have very little test setup in ideal cases
  • Provide full control (generally via ‘mocking’ dependencies) to exercise very specific logical paths

Cons:

  • Generally more challenging to convey coverage to other stakeholders
  • By nature these are brittle and break with refactoring
  • Require sets of design patterns to help ensure tests are easy to write/maintain
  • Sometimes present false sense of confidence in large codebases (i.e. systems working together) due to mocking requiring assumptions
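To make that concrete, here's a minimal sketch of a unit test (assuming xUnit and Moq; all of the type names are made up for illustration, not from a real codebase):

using Moq;
using Xunit;

public interface IPriceProvider
{
    decimal GetPrice(string itemId);
}

public sealed class DiscountCalculator
{
    private readonly IPriceProvider _prices;

    public DiscountCalculator(IPriceProvider prices)
    {
        _prices = prices;
    }

    // 10% off anything priced at 100 or more
    public decimal CalculateFinalPrice(string itemId)
    {
        var price = _prices.GetPrice(itemId);
        return price >= 100m ? price * 0.9m : price;
    }
}

public sealed class DiscountCalculatorTests
{
    [Fact]
    public void CalculateFinalPrice_PriceAboveThreshold_AppliesDiscount()
    {
        // mock the dependency so we control the exact logical path
        var priceProvider = new Mock<IPriceProvider>();
        priceProvider.Setup(x => x.GetPrice("widget")).Returns(100m);

        var calculator = new DiscountCalculator(priceProvider.Object);

        // assert on the exact result of one specific method
        Assert.Equal(90m, calculator.CalculateFinalPrice("widget"));
    }
}

Note how mocking IPriceProvider lets the test pin the exact input into the logic under test.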

Functional Tests

Coded tests that take a black-box visibility approach to exercising and asserting that software performs particular functionality as expected.

Pros:

  • Generally easier for non-programmer stakeholders to understand
  • Generally covers multiple classes/systems/methods working together to demonstrate behavior
  • More resilient to refactoring since this approach ignores the internals of what’s being covered
  • Exercises REAL parts of the codebase with no mocking

Cons:

  • More coarse than unit tests. Breakages might need more investigation.
  • Can potentially have more test setup given that multiple things will be interacting
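For contrast, here's a functional-style sketch against the same hypothetical types from the unit test example above: real implementations wired together, no mocks, asserting only on outward behavior:

using Xunit;

// a real (in-memory) implementation instead of a mock
public sealed class InMemoryPriceProvider : IPriceProvider
{
    public decimal GetPrice(string itemId) => itemId == "widget" ? 120m : 5m;
}

public sealed class PricingFunctionalTests
{
    [Fact]
    public void FinalPrice_ExpensiveItem_ComesBackDiscounted()
    {
        // no mocks: the real pieces collaborate, and we only assert
        // on the externally visible behavior
        var calculator = new DiscountCalculator(new InMemoryPriceProvider());

        Assert.Equal(108m, calculator.CalculateFinalPrice("widget"));
    }
}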

Summary

So which one of these should you and your team be writing? The answer is probably both! There are benefits and drawbacks to both of these testing strategies and they overlap really nicely together. If you know you’re going to be refactoring a complex piece of your system, functional tests would be great to put in place so you can check expected behavior after. Unit tests might be great for mocking out complex dependencies or validating a variety of inputs quickly into a function. If you have an understanding of how you can leverage these, you have more tools in your programming toolbox!

Don’t like reading? Listen to me instead!


RPG Game Dev Weekly #1

As I've been trying to put together YouTube content more steadily, one of the themes I'm interested in is doing some behind-the-scenes of the role playing game (RPG) I'm making with some friends in Unity3D. I've found that being able to work on an RPG outside of my regular day job is a really awesome way for me to keep up on my technical skills. I love coding, and the further along I move in my career as an engineering manager, the less time I actually spend writing code myself. I pride myself on being a technical engineering manager, so for me working on this RPG is a great outlet for creativity and practice. I mentioned this in my LinkedIn post here:

Persisting Game Objects Across Maps

In this video, I focus on one of the challenges the game was facing due to how objects are materialized onto the playable map. The map that we load from disk to be shown and interacted with in the playable RPG commonly has “templates” and “spawners”. Both of these are responsible for creating objects at runtime given some criteria. As a result, uniquely placed game objects appear on the playable map for the player to interact with.

Sounds good then, right? Well, the two challenges I focused on addressing were:

  1. If we leave the map and go to another one, there’s no way to persist the player across maps! That means you get a brand new character every time you transition maps. In an RPG, this is definitely not going to work out.
  2. If we return to an existing map, we expect it to be in the same state as when we were last on it. This means that the objects generated from templates or spawners must remain and NOT be respawned (which would effectively make completely new game objects).

Check out the video below:

RPG Dev Log 1 – Persist Game Objects Across Maps

Persisting Map Game Objects in a Cache

Next up was actually implementing some of the changes being discussed previously. In order to make this work in our RPG, my goal was to:

  1. Allow maps to save a lookup of the unique IDs for game objects that were generated
  2. Save the list of game objects generated into a “cache”
  3. Upon revisiting a map, tap into the “cache” to reload the existing game objects

One of my mental hurdles with this was acknowledging that we don't yet have a solid serialization foundation for our game. I was thinking I'd want to serialize the game data to something persistent to make this work, but I was also worried writing things to disk would be overkill (and how does this mix with save game concepts?). Instead, I opted for the lightweight approach of "get something working," and I can revisit this later to extend it to persist things to disk if necessary.
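As a rough illustration of that lightweight approach (the type names here are hypothetical, not the actual project code), the "cache" really can just be an in-memory dictionary keyed by map ID:

using System.Collections.Generic;

// hypothetical stand-in for whatever the game's object type is
public interface IGameObject
{
    string Id { get; }
}

// cache the objects a map generated so that revisiting the map reloads
// them instead of re-running templates/spawners (which would create
// completely new game objects)
public sealed class MapGameObjectCache
{
    private readonly Dictionary<string, List<IGameObject>> _objectsByMapId =
        new Dictionary<string, List<IGameObject>>();

    public void StoreMapObjects(string mapId, IEnumerable<IGameObject> gameObjects) =>
        _objectsByMapId[mapId] = new List<IGameObject>(gameObjects);

    public bool TryGetMapObjects(string mapId, out IReadOnlyList<IGameObject> gameObjects)
    {
        if (_objectsByMapId.TryGetValue(mapId, out var cached))
        {
            gameObjects = cached;
            return true;
        }

        gameObjects = null;
        return false;
    }
}

Extending this later to persist to disk would just mean swapping the dictionary for something backed by serialization.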

Check out the video below:

RPG Dev Log 2 – Persisting Map Game Objects in a Cache

Which Domain Does This Belong To?

We’ve been trying to practice Domain Driven Design (DDD) concepts in our game. One pattern I’ve taken to an extreme is over-separating domains into very granular pieces and the result is that we have HUNDREDS of C# projects. It’s overkill, for sure. However, I feel that it’s offered some flexibility in having boundaries we can collapse later. My experience so far has told me it’s easier to collapse boundaries than it is to split.

This is all well and good until we have situations where I need a class that has knowledge of two domains. That's what this next video was about. I talk through acknowledging that I don't know what to do and that I'll move ahead with something best effort. I think the answer to my dilemma is that I need to translate things into a new domain for usage, but it feels like overkill. If you have thoughts on this, leave a comment on this post or on the video!

Check out the video below:

RPG Dev Log 3 – Which Domain Does This Belong To?

Death Animations For All!

Finally, this last update was about something hopefully less boring… Death animations! I worked through how:

  1. I can extend our reusable sprite animation factory to create a death animation to be reused by ALL our actor sprites
  2. I can build a system that checks actor life and changes the animation as necessary to death

Unfortunately I had some hiccups the first time through recording this, but I wanted to code the whole thing live. After a Blue Screen of Death interrupted my first attempt, my second attempt I think worked pretty well! Without much code at all we could get this system fired up and it worked the first time! Well, close. I didn’t realize that our animation system already supports an animation that has an “infinite” duration final frame (in this case, the body laying on the tile). This is handled by a null value instead of a set length in seconds. I fixed it right after recording! Overall, I was incredibly happy with the result.
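To illustrate that last point (a sketch with made-up names, not the actual animation system), a nullable duration is a tidy way to model a final frame that never expires:

// hypothetical sketch: a null duration means "hold this frame indefinitely"
public sealed class SpriteAnimationFrame
{
    public SpriteAnimationFrame(string spriteId, float? durationSeconds)
    {
        SpriteId = spriteId;
        DurationSeconds = durationSeconds;
    }

    public string SpriteId { get; }

    // null => infinite final frame (e.g. the body laying on the tile)
    public float? DurationSeconds { get; }

    public bool IsInfinite => DurationSeconds == null;
}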

Check out the video below:

RPG Dev Log 4 – Death Animations For All!

NoesisGUI – The Unity UI Framework That You Probably Aren’t Using!

If you’re like me, trying to create user interfaces in general is a challenge. So when it comes to working in tools that you’re less familiar with, that challenge basically grows to a level where it’s a roadblock. For me, trying to create user interfaces in Unity3D is basically the perfect example of hitting this roadblock! That’s not to say the UI tools that are available in Unity3D are bad, but my skill level is essentially reset to zero when working with these tools. Fortunately I came across this little gem called NoesisGUI that enables WPF inside of Unity3D!

I plan to do a few updates on this either via YouTube or short blog posts, but NoesisGUI has essentially unlocked my ability to create user interfaces inside of Unity3D. You can find my intro video here, or watch it directly below:

Now I’m still not a UI expert by any means, but at least I get my familiar environment back. NoesisGUI has two primary benefits for me as a software developer:

  • I have access to all the WPF controls, XAML syntax, and styling capabilities that I know and love.
  • I can use tools/IDEs I’m familiar with, not sacrificing years of experience using these tools.

On the first point, I find it very tedious to create UI elements in Unity3D. For some good examples, consider two pretty common things: control layout and lists of items. On control layout, I find the anchoring system and transforms a total pain to work with in Unity3D. It's a personal preference sort of thing, but I find it difficult to navigate and get things to work as I expect. Thanks to NoesisGUI, however, I can leverage things like grids and flowing panels, so I can use what I know and not try to design a layout system from scratch.

Same thing goes with lists! I can use a ListView control thanks to NoesisGUI and then style/template all the controls inside of it leveraging XAML. Effectively, being able to leverage NoesisGUI to enable my experience with WPF means I can struggle with UI design instead of struggling with UI design AND how to make the UI framework work! It doesn't fix my poor UX abilities, but NoesisGUI does allow me to do my best work at least.

The second point is around tooling. The Unity3D editor is powerful and if you watch any amount of tutorials from YouTube you’ll know that there’s no shortage of people showing how to drag and drop objects into the scene to get the result you’re after. But this doesn’t work well for my design approach because I don’t want my game to be coupled to the Unity3D engine (which I’ll need to write more about later). In fact, the more things I place concretely in the scene, the more it couples my game to Unity3D and it’s just not something I want to commit to. As a result, to decouple my UI code I found myself trying to programmatically make Unity3D UI elements and started to dream up some templating language.

Nonsense. NoesisGUI puts me back in my comfort zone and allows me to use familiar tools like Blend, where I get my visual editor for my WPF controls as well as the split view with the XAML editor. Aside from a couple of minor quirks, I was able to get Blend to show things exactly as Unity3D shows them. That means I can rapidly develop my game UI inside of Blend. Along with a bunch of other design philosophies (i.e. decoupling from the Unity3D engine), this means we could literally write a couple-of-line game entry point with a WPF UI over top of it directly in Blend and have it map to expected behavior in Unity3D. Again, more on some of those design philosophies later, but NoesisGUI really took it to the next level by allowing us to decouple the UI completely from Unity3D restrictions.

I plan to create more writeups and videos on how we’re using NoesisGUI in our RPG project, so stay tuned!


Video Stream – RPG Systems with Loot Generation

I asked on LinkedIn about whether or not people would be interested in a video stream that focused on programming, and I had some positive feedback. In order to test the waters, I decided I'd start with some system-design stuff, given that I'm going through a bunch of practice with distributed systems. This is a bit of a change-up from distributed systems, though, in that it focuses on interactions between co-located systems in a game framework I'm creating.

Here’s the video!

In the video stream, what I'm trying to accomplish is finding a way to share information from particular domains to be used in other domains. I mean, that's the gist of it 🙂 The complicated parts are:

  • How do I keep domain information from leaking into other domains?
  • How do I control access to the information without globally exposing it? (i.e. avoiding something like a static global variable)
  • How do I make sure I have the right state when I want to go use it? (i.e. the systems run on a game loop, but some interactions that get triggered are from user events outside of the game loop)

My white-boarding skills in MS Paint are pretty rough, but I feel like it went okay! I’ll follow up with my findings, and hopefully get some programming videos put together to better explain some programming concepts.

Let me know what you think!


Xamarin Forms and Leveraging Autofac

I've loved dependency injection frameworks ever since I started using them. Specifically, I'm obsessed with using Autofac, and I have a hard time developing applications unless I can use a solid DI framework like it! I've recently been working with Xamarin and found that I wanted to use dependency injection, but some of the framework doesn't support this well out of the box. I was adamant to get something going though, so I wanted to show you my way to make this work.

Disclaimer: In its current state, this is certainly a bit of a hack. I’ll explain why I’ve taken this approach though!

In your Android projects for Xamarin, any class that inherits from Activity gets created by the framework, not by your own code. This means we lose the usual luxury of passing in dependencies via a constructor and having Autofac magically wire them up for us. Your constructors for these classes need to remain parameterless, and your OnCreate method is usually where your initialization for your activity will happen. We can work around that though.

My solution to this is to use a bit of a reflection hack coupled with Autofac to allow Autofac resolutions in the constructor, as close as possible to how they would normally work. A solution I wanted to avoid was a globally accessible reference to our application's lifetime scope. I wanted to make sure that I limited the "leakage" of this not-so-great pattern to as few places as possible. With that said, I wanted to introduce a lifetime scope as a reference only to the classes that were interested in using Autofac where they'd otherwise be unable to.

  1. Make a static readonly variable in your classes that care about doing Autofac with a particular name that we can lookup via reflection. (An alternative is using an attribute to mark a static variable)
  2. After building your Autofac container and getting your scope (but prior to using it for anything), use reflection to check all types that have this static scope variable.
  3. Assign your scope to these static variables on the types that support it.
  4. In the constructors of these classes (keeping them parameterless so the framework can still do its job!), access your static scope variable and resolve the services you need

Here’s what that looks like in code!

MainActivity.cs

public class MainActivity : global::Xamarin.Forms.Platform.Android.FormsAppCompatActivity
{
    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);

        var builder = new ContainerBuilder();

        // TODO: add your registrations in!

        var container = builder.Build();
        var scope = container.BeginLifetimeScope();

        // the static variable I decided to use is called "_autofacHack"
        // so that it deters people from using it unless they know
        // what it's for! you could use reflection to find similar
        // fields with something like an attribute if you wanted.
        foreach (var field in GetType()
            .Assembly
            .GetTypes()
            .Select(x => x.GetField("_autofacHack", BindingFlags.NonPublic | BindingFlags.Static))
            .Where(x => x != null))
        {
            field.SetValue(null, scope);
        }

        LoadApplication(scope.Resolve<App>());
    }
}

The class that can take advantage of this would look like the following:

public sealed class MyActivityThatNeedsDependencyInjection : Activity
{
    private static readonly ILifetimeScope _autofacHack;
    private readonly IMyService _theServiceWeWant;

    // NOTE: we keep the constructor PARAMETERLESS because the
    // framework needs to be able to instantiate this class itself
    public MyActivityThatNeedsDependencyInjection()
    {
        _theServiceWeWant = _autofacHack.Resolve<IMyService>();
    }
    }

    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);

        // now we can use this service that we "injected"
        _theServiceWeWant.DoTheCoolStuff();
    }
}

Summary

Reading this you might think “well I don’t want to pollute my Xamarin code with variables that say _autofacHack, that’s gross”. And I don’t blame you! So this is to serve as a starting point for a greater solution, which I think is something I’ll evolve out for myself and I encourage you to do the same.

Things I’m focused on:

  • Minimize where “ugly” code is. A globally accessible scope on a static class seems like it can spread the “ugly” code to too many spots. This approach is intended to help minimize that.

    What are some next steps to make that EVEN better? Maybe an attribute so we can call it something nicer?
  • Write code that feels as close as possible to the “real” thing. Autofac usually allows us to specify services in the constructor and then automatically allows us to get the instances we need. This code is structured to be very similar, but since we’re NOT allowed to change the parameterless constructors, we resolve our services off the container there instead. And because it’s in the constructor, we can assign things to readonly variables as well which is a nice bonus.

The more implementations of this I go to use, the more I plan to refine it! How have you leveraged Autofac in your Xamarin projects?


CircleCI + BitBucket => Free Continuous Integration!

CircleCI is a service I heard about from a friend; it lets you get continuous integration pipelines built up for your repositories quickly and easily. It's also free if you're someone like me without a large demand for builds! I wanted to write about my experience with getting CircleCI wired up with BitBucket, which I like to use for my project hosting, and hopefully it'll help you get started.

First thing, signing up is super easy if you have BitBucket because you can oauth right away with it. CircleCI will show you your projects & repositories that you have in BitBucket and you can decide which one you’d like to get started with. You can navigate to the projects in their new UI from the “Add Projects” menu.

CircleCI Left Navigation

When you click “Add Projects” you’ll be met with a list that looks like this but… With your own projects and not mine 🙂

Circle CI + BitBucket Project Listing

On this screen, you'll want to select "Set Up Project" for the project of your choice. For me, I was dealing with a .NET project (which I'd already set up), so I selected it and was presented with the following screen. It also allows you to pick out a template to get started:

CircleCI Template Dropdown

However, I needed to change the default template to get things to work properly since I had NuGet packages; the default was missing a restore step. With some help from my friend Graeme, we were able to transform the sample from this:

version: 2.1

orbs:
  win: circleci/windows@2.2.0

jobs:
  build:
    executor: win/default

    steps:
      - checkout
      - run: dotnet build

To now include the nuget restore step prior to building!

version: 2.1

orbs:
  win: circleci/windows@2.2.0

jobs:
  build:
    executor: win/default

    steps:
      - checkout
      - run:
          name: Restore
          command: dotnet restore
      - run:
          name: Build
          command: dotnet build -c Release

Once you save this, CircleCI will create a branch called "circleci-project-setup" on your remote. It then goes ahead and runs your build for you! When the build for this new remote branch succeeded, I pushed this configuration to my "master" branch so that all builds on master going forward would get continuous integration builds.

Checking the CircleCI dashboard now looks like the following:

CircleCI Successful Pipelines

You can see pipeline #1 is on the branch where the test circleci configuration was made (and passed). Pipeline #2 is once I added this commit onto my master branch and pushed up! Now I have continuous integration for pushing to my lib-nexus-collections-generic BitBucket project. When I check out my commit page, I can see the new commits after the configuration landed get a nice green check when the builds pass on CircleCI:

BitBucket Commit Listing With Builds

So with a few easy steps, you can not only have free source hosting in BitBucket but free continuous integration from CircleCI. Every time you push code to a remote branch, you kick off a build! This is only the starting point as you can configure CircleCI to do much more than just restore nuget packages and build .NET solutions 🙂


xUnit Tests Not Running With .NET Standard

Having worked with C# for quite some time now writing desktop applications, I’ve begun making the transition over to .NET standard. In my professional working experience, it was a much slower transition because of product requirements and time, but in my own personal development there’s no reason why I couldn’t get started with it. And call me crazy, but I enjoy writing coded tests for the things I make. My favourite testing framework for my C# development is xUnit, and naturally as I started writing some new code with .NET Standard I wanted to make sure I could get my tests to run.

Here’s an example of some C# code I wrote for my unit tests of a simple LRU cache class I was playing around with:

using System;
using System.Diagnostics.CodeAnalysis;
using Xunit;

[ExcludeFromCodeCoverage]
public sealed class LruCacheTests
{
    [Fact]
    public void Constructor_CapacityTooSmall_ThrowsArgumentException()
    {
        Assert.Throws<ArgumentException>(() => new LruCache<int, int>(0));
    }

    [Fact]
    public void ContainsKey_EntryExists_True()
    {
        var cache = new LruCache<int, int>(1);
        cache.Add(0, 1);
        var actual = cache.ContainsKey(0);
        Assert.True(
            actual,
            $"Unexpected result for '{nameof(LruCache<int, int>.ContainsKey)}'.");
    }
}

Pretty simple stuff. I know that for xUnit in Visual Studio, I need to get a nuget package for the test runner to work right in the IDE. Simple enough, I just need to add the “xunit.runner.visualstudio” package alongside the xunit package I had already included into my test project.

Nuget package management for project in visual studio showing required xUnit packages.
Required xUnit nuget packages

Ready to rock! So I go run all my tests in the solution but I’m met with this little surprise:

[3/24/2020 3:59:10.570 PM] ========== Discovery aborted: 0 tests found (0:00:00.0622045) ==========
[3/24/2020 3:59:20.510 PM] ---------- Discovery started ----------
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find C:\[redacted]\bin\Debug\netstandard2.0\testhost.dll. Please publish your test project and retry.
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
[3/24/2020 3:59:20.570 PM] ========== Discovery aborted: 0 tests found (0:00:00.0600179) ==========
Executing all tests in project: [redacted].Tests
[3/24/2020 3:59:20.635 PM] ---------- Run started ----------
[3/24/2020 3:59:20.639 PM] ========== Run finished: 0 tests run (0:00:00.0039314) ==========

Please publish your test project and retry? Huh?

As any software engineer does, I set out to Google for answers. I came across this Stack Overflow post: https://stackoverflow.com/q/54770830/2704424

And fortunately someone had responded with a link to the xUnit documentation: Why doesn’t xUnit.net support netstandard?

The answer was right at the top!

netstandard is an API, not a platform. Due to the way builds and dependency resolution work today, xUnit.net test projects must target a platform (desktop CLR, .NET Core, etc.) and run with a platform-specific runner application.

https://xunit.net/docs/why-no-netstandard

My solution was to change my test project to target one of the latest .NET Framework versions instead… and voila! I chose .NET Framework 4.8 as the latest available at the time of writing.
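For reference, the change amounts to swapping the target framework in the test project file (assuming an SDK-style project; your property names may differ):

<PropertyGroup>
  <!-- was: <TargetFramework>netstandard2.0</TargetFramework> -->
  <TargetFramework>net48</TargetFramework>
</PropertyGroup>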

My next attempt at running all of my tests looked like this:

Executing all tests in project: [Redacted].Tests
[3/24/2020 3:59:20.635 PM] ---------- Run started ----------
[3/24/2020 3:59:20.639 PM] ========== Run finished: 0 tests run (0:00:00.0039314) ==========
[3/24/2020 4:08:14.898 PM] ---------- Discovery started ----------
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Desktop .NET 4.0.30319.42000)
[xUnit.net 00:00:00.40]   Discovering: [Redacted].Tests
[xUnit.net 00:00:00.47]   Discovered:  [Redacted].Tests
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Universal Windows)
[3/24/2020 4:08:16.289 PM] ========== Discovery finished: 2 tests found (0:00:01.3819229) ==========
Executing all tests in project: [Redacted].Tests
[3/24/2020 4:08:17.833 PM] ---------- Run started ----------
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Desktop .NET 4.0.30319.42000)
[xUnit.net 00:00:00.41]   Starting:    [Redacted].Tests
[xUnit.net 00:00:00.66]   Finished:    [Redacted].Tests
[3/24/2020 4:08:19.337 PM] ========== Run finished: 2 tests run (0:00:01.4923808) ==========

And I was back on my path to success! Hopefully if you run into this same issue you can resolve it in the same fashion. Happy testing!


RPG Development Progress Pulse – Entry 2


Progress Pulse – Entry 2

Things have been pretty busy in real life the past couple of weeks, so I haven’t had too much time for working on this. However, for this entry in the progress pulse series I’ll talk about some of the challenges I had while looking at making a generic data (de)serialization API + implementation, and why I chose to make some of the decisions I did!

Which Tech To Pick?

I've felt burned in the past by trying to do data serialization for my game framework because it's always created a barrier for refactoring once it's in place (i.e. I change some data I need, and now I have to re-make or migrate allllll my SQL data).

So I was thinking about how I plan to store game state, which I have written about, and then considered the implementations I had considered for persistent storage. One of them was a graph database called Neo4j, which has a JSON representation of all of its node data. Except… I’m not ready to commit to Neo4j just yet because I don’t want to feel tied down (like I used to tie myself down to SQLite). But my objects I’m creating *are* well suited to hierarchies of entities+components, so maybe JSON is a happy medium?

Here was my breakdown for starting with JSON:

Pros:

  • Very easy to get started with
  • (De)Serialization libraries available via nuget for free
  • Human readable which is great for creating, editing, and debugging
  • Hierarchical, which lends itself well to my data structures in memory
    • Should make refactoring easy (did a component change? only change that component’s data representation)
  • Could be a stepping stone for working with Neo4j in the future

Cons:

  • Writing is probably slow, especially if I want to just modify one chunk of JSON data
    • Likely will need to write whole JSON blobs out… But who knows if it’s slow, I need to benchmark it.
  • I suspect lookups would be slow
    • But… Maybe important data is cached in memory on startup? Maybe it’s not even an issue. Benchmark it.

Basically, I was left with a bunch of pros and a couple of cons that were really just speculation. Seemed like a great way to get started!

Lesson learned was to start with something that won’t keep you locked in, but is also just enough to get you going!

Start With Something Specific

I’m a sucker for trying to make really generic things in software. It’s an extreme I find myself taking because I want to make things as extensible and re-usable as possible. The side effect of it though is that sometimes I miss corner cases (and they end up being not corner cases in the general sense) or that I make APIs that suck to use because they’re so general and maybe they shouldn’t be.

I decided I was going to switch up my approach. I wanted to figure out how I could serialize and deserialize my item definition data. That probably warrants a brief explanation:

I want items (i.e. loot) in the game to be part of a system that can control generation of them based on game state, randomness, and pre-defined organization of loot. Some drops might be totally random common items. Others might be based on quest state and need to be very specific. Maybe there are some that only drop at a specific time of day during specific weather after killing a certain enemy. This is what I'm shooting for. So the item definitions will contain information about how to generate a base item, and provide components that tell the game how to mutate that base item (i.e. set damage to a value between 5 and 10 and call it "Axe").

But there are also drop tables that have weights associated with them and can link to specific items or other drop tables. This allows the game's content creator to generate loot that's like "When the player is in the swamp lands, common enemies drop between 1-3 items, with a 60% chance of those items being junk, 20% chance of those items being normal equipment, 15% chance of those items being magic equipment, and 5% chance of those items being powerful legendary equipment". Drop tables are essentially nodes with weights on the edges that point to other tables or specific item definitions. Simple 🙂
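As a rough sketch of that drop table idea (hypothetical types here, not the actual Macerus code), each weighted entry points at either another table or a concrete item definition:

using System;
using System.Collections.Generic;
using System.Linq;

// hypothetical weighted drop table: entries link to either a nested
// table (another roll) or a specific item definition ID
public sealed class DropTable
{
    private readonly List<(double Weight, object Target)> _entries =
        new List<(double, object)>();

    public void AddItem(double weight, string itemDefinitionId) =>
        _entries.Add((weight, itemDefinitionId));

    public void AddTable(double weight, DropTable table) =>
        _entries.Add((weight, table));

    // walk the weighted entries until the roll lands on a concrete item
    public string Roll(Random random)
    {
        var roll = random.NextDouble() * _entries.Sum(x => x.Weight);
        foreach (var (weight, target) in _entries)
        {
            roll -= weight;
            if (roll <= 0)
            {
                return target is DropTable nested
                    ? nested.Roll(random)
                    : (string)target;
            }
        }

        throw new InvalidOperationException("Drop table has no entries.");
    }
}

The swamp lands example above would then just be a table with four weighted entries (60/20/15/5) pointing at a junk table, a normal equipment table, and so on.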

The reason I went with this approach is because I felt that even though some of the C# types I have might be specific to item definitions, the abstract structure of the types (i.e. entities with components on them) is shared across many different game systems. So if I can make it work for this one, it shouldn’t be too hard to do for the next.

Lesson learned was try not to repeat all of your history… Learn from it. Experiment with new approaches.

Hello Singletons, My Old Friend

My arch-nemesis, Dr. Singleton! Actually, I've written about singletons way back, so I'm not TOTALLY against them; I just think that 99% of the time they aren't actually what you need. Let's talk about my little run-in with them though.

I started custom writing some APIs for JSON serialization that would use Newtonsoft JSON behind the scenes. Based on the structure of my objects, I figured I was going to have some sort of recursive call system going on where children would have to tell their children to serialize, etc… Once I got this working for a simple case, I realized that Newtonsoft has custom converters you can set up. These use attributes to mark up interfaces/classes to tell the serialization engine to use particular converters when they encounter a type. (Edit: after writing this I realize that I don’t HAVE to use the attribute… which might make this whole point moot)

The problem with attributes is that I cannot control the instantiation of them. And because I can’t control the instantiation of them, I can’t control the parameters passed in via the constructor. In my particular case, I needed to create a singleton that this attribute class could access and use Autofac to configure the singleton instance. Essentially, I needed to register custom handlers into my singleton instance, and then the attribute class could pull the registrations from the singleton instance.
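The shape this took was roughly the following (hypothetical names below; JsonConverter and the [JsonConverter] attribute are Newtonsoft's real extension points, but the registry is my own workaround):

using System;
using System.Collections.Generic;
using Newtonsoft.Json;

// hypothetical singleton: an Autofac module populates this at startup,
// since we can't inject dependencies into attribute-created converters
public sealed class ConverterRegistry
{
    public static ConverterRegistry Instance { get; } = new ConverterRegistry();

    private readonly Dictionary<Type, Func<object, string>> _serializers =
        new Dictionary<Type, Func<object, string>>();

    public void Register(Type type, Func<object, string> serializer) =>
        _serializers[type] = serializer;

    public bool TryGetSerializer(Type type, out Func<object, string> serializer) =>
        _serializers.TryGetValue(type, out serializer);
}

// Newtonsoft instantiates converters referenced by [JsonConverter(...)]
// itself, so this class pulls its dependencies from the singleton instead
public sealed class RegistryBackedConverter : JsonConverter
{
    public override bool CanConvert(Type objectType) =>
        ConverterRegistry.Instance.TryGetSerializer(objectType, out _);

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        ConverterRegistry.Instance.TryGetSerializer(value.GetType(), out var serialize);
        writer.WriteRawValue(serialize(value));
    }

    public override bool CanRead => false;

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer) =>
        throw new NotSupportedException("Sketch only covers writing.");
}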

Ugly pattern? Yes. I’m not familiar with any other ways to pass information or access to objects when I can’t control the initialization of my object though. It’s buried deep down so it’s not like the API usage feels like garbage, but still wasn’t happy with it.

Lesson learned here was sometimes we end up using “bad patterns”, but if they’re limited in scope we can limit their “badness”.


Autofac Modules and Code Organization

Organizing Code With Autofac Modules

What are Autofac Modules?

I've been writing a little bit about Autofac and why it's rad, but today I want to talk about Autofac modules. In my previous post on this, I talked about how one of the drawbacks to the constructor dependency pattern is that at some point in your application, generally in the entry point, you get allllll of this spaghetti code that is the setup for your code base.

Essentially, we’ve balanced having nice clean testable classes with having a really messy spot in the code. But it’s only ONE spot and the rest of your code is nice. So it’s a decent trade off. But we can do better than that, can’t we?

Autofac modules!

We can use Autofac modules to organize some of the code that we have in our entry point into logical groupings. An Autofac module is a class that registers types to our dependency container to be resolved at a later time. You could do this all in one big module, but like many things in programming, having some giant monolithic thing that does ALLLL the work usually isn't the best.

An Example of Converting to Autofac Modules

Let's create a simple application as an example. I'll describe it in words, and then I'll toss up some code to show a simple representation of it. We'll assume we're using dependencies passed as interfaces via constructors as one of our best practices, which makes this conversion much easier!

So our app will have a main window with a main content area and a header area. These will be represented by three objects. Our application will also have a logger instance that we pass around so classes that need logging abilities can take an ILogger in their constructor. But our logger will have some simple configuration that we need to do before we use it.

Let’s assume to start our Program.cs file looks like this:

internal sealed class Program
{
    private static void Main(string[] args)
    {
        var logger = new FileLogger();
        logger.LogLevel = LogLevel.Debug;
        logger.FilePath = "log.txt";

        var header = new FancyHeader(logger);
        var content = new BoringMainContent();
        var window = new MainWindow(header, content);
        window.Show();
    }
}

Before getting comfortable with Autofac, my initial first step would be to logically group things in the main method. In this particular case, we have something simple and surprise… it’s all grouped. But my next step would usually be to pull these things out into their own methods. I do this because it helps me identify if my groupings make sense and where my dependencies are. Let’s try it!

internal sealed class Program
{
    private static void Main(string[] args)
    {
        var logger = InitializeLogging();
        var window = InitializeGui(logger);
        window.Show();
    }

    // no params passed in, so no dependencies
    // return value is an ILogger, so we have a
    // logical grouping that will provide us a logger
    private static ILogger InitializeLogging()
    {
        var logger = new FileLogger();
        logger.LogLevel = LogLevel.Debug;
        logger.FilePath = "log.txt";
        return logger;
    }

    // only parameter is a logger, so that's our dependency
    // return value is a window, so this grouping provides
    // a window for us
    private static IWindow InitializeGui(ILogger logger)
    {
        var header = new FancyHeader(logger);
        var content = new BoringMainContent();
        var window = new MainWindow(header, content);
        return window;
    }
}

Alright cool. So yes, this is a bit of extra code compared to the initial example, but I promise you grouping these things out into separate methods as a starting point when you have a LOT of initialization logic will help a ton. Once they are in methods, you can pull them out into their own classes. Refactoring 101 for single responsibility principle going on here 😉 BUT, we’re interested in Autofac. So what’s the next step?

We have two logical groupings going on here in our example. One is logging and the other is for the GUI. So we can actually go ahead and make two Autofac modules that do this work for us.

public sealed class LoggingModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<FileLogger>()
            .AsImplementedInterfaces() // FileLogger will be resolved as an ILogger
            .SingleInstance() // we only ever need to use one logger instance for our app
            .OnActivated(x =>
            {
                // this handles our extra setup we had for this object
                x.Instance.LogLevel = LogLevel.Debug;
                x.Instance.FilePath = "log.txt";
            });
    }
}

public sealed class GuiModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<FancyHeader>() // this has a dependency on ILogger, but autofac will figure it out for us
            .AsImplementedInterfaces() // FancyHeader will be resolved as IHeader
            .SingleInstance(); // we only ever need to use one instance for our app
        builder
            .RegisterType<BoringMainContent>()
            .AsImplementedInterfaces() // BoringMainContent will be resolved as IContent
            .SingleInstance(); // we only ever need to use one instance for our app
        builder
            .RegisterType<MainWindow>() // Autofac will resolve our IHeader and IContent dependencies for us
            .AsImplementedInterfaces() // MainWindow will be resolved as IWindow
            .SingleInstance(); // we only ever need to use one instance for our app
    }
}

And those are our two logical groupings for modules! So, how do we use this and what does our Main() method look like now? I’ll demonstrate with one way that works for a couple modules, but I want to follow up with another post that talks about dynamically loading modules. If you can imagine this scenario blown out across MANY modules, you’ll understand why it might be helpful.

The idea for our Main() method is that we just want to resolve the one main dependency manually and let Autofac do the rest. So in this case, it’s our MainWindow.

private static void Main(string[] args)
{
    // create an autofac container builder
    var containerBuilder = new ContainerBuilder();

    // manually register our two new modules we made
    containerBuilder.RegisterModule<LoggingModule>();
    containerBuilder.RegisterModule<GuiModule>();

    // create the dependency container
    var container = containerBuilder.Build();

    // resolve and use our main dependency by its interface
    // (because we shouldn't care what the implementation is...
    // that was up to the configuration via modules!)
    var window = container.Resolve<IWindow>();
    window.Show();
}

In Summary…

This example showed us how to group your main initialization logic into groups that play nicely as Autofac modules. In a really simple example, having modules might look like bloated extra code, but it already illustrates that your entry point is very simple and follows a pattern to extend (just register another module for more dependencies… and I'll add more on this later). There's also an obvious way to group more new logic into your application for dependencies! We discussed logging and GUI initialization, but you could extend this to:

  • User Settings
  • Analytics/Telemetry
  • Error Reporting
  • Database Configuration
  • Etc… Just add more modules!

Sometimes the pain of having a really hectic entry point isn’t realized until you’ve had to work on teams where people are modifying the same beast of an entry point all the time:

  • Simple merge conflicts in your “using” statements… Because there’s hundreds of lines of using statements at the top of the file
  • Visual Studio actually CANNOT use IntelliSense properly when the file gets too unwieldy
  • The debugger cannot resolve variables properly when the main entry point gets too big
  • Merging and auto-conflict resolution sometimes results in code just getting blown away in the entry point… And good luck finding what went wrong in your thousands of lines of initialization

So what’s next? Well, if you keep building out your app you might notice you have tons of modules now. Your single GUI module might have to get broken out into modules for certain parts of the GUI, for example, just to keep them more manageable. Maybe you want plugins to extend the application dynamically, which is really powerful! Our method for registering modules just isn’t really extensible at that point, but it’s very explicit. I’ll be sharing some information about automatic Autofac module discovery and registration next!


RPG Development Progress Pulse – Entry 1


Progress Pulse – Entry 1

For the first entry in the progress pulse series I’ll touch on some things from the past week or so. There’s been a lot of smaller things being churned in the code base, some of them interesting, and others less interesting so I want to highlight a few. As a side note, it’s really cool to see that the layout and architecture is allowing for features to be added pretty easily, so I’ll dive a bit deeper on that. Overall, I’m pretty happy with how things are going.

Unity3D – Don’t Fight It!

I heard from a colleague before that Unity3D does some things you might not like, but don’t try to fight it, just go with it. To me, that’s a challenge. If I’m going to be spending time coding in something I want it to be with an API that I enjoy. I don’t want to spend time fighting it. An example of this is how I played with the stitching pattern to make my Autofac life easier with Unity3D behaviours.

However, I met my match recently. At work, we were doing an internal hackathon where we could work on projects of our choosing over a 24 hour period, and they didn’t have to be related to work at all. It’s a great way to collaborate with your peers and learn new things. I worked on Macerus and ProjectXyz. I was reaching a point where I had enough small seemingly corner-case bugs switching scenes and resetting things that I decided it was dragging my productivity down. It wasn’t exciting work, but I had to do something about it.

After debugging some console logs (I still have to figure out how to get Visual Studio properly attached for debugging… maybe I'll write an article on that when I figure it out?) I noticed I had a scenario that could only happen if one of my objects was running some work at the same time… as itself? Which shouldn't happen. Basically, I had caught a scenario where my asynchronous code was running two instances of worker threads, and it was a scenario in my game that should never occur.

I tried adding task cancellation and waiting into my Unity game. I managed to hang the main thread on scene switching and application close. No dice. I spent a few hours trying to play around with a paradigm here where I could make my ProjectXyz game engine object run asynchronously within Unity and not be a huge headache.

I needed to stop fighting it though. There was an easier solution.

I could make a synchronous and an asynchronous API for my game engine. If you have a game where you want the engine on its own thread, call the Async() version. Unity3D already has its own game engine loop, so why re-invent it? In Unity3D, I can simply call the synchronous version of the game engine's API. With this little switch, I suddenly fixed about 3 or 4 bugs. I had to stop fighting the synchronous pattern with my asynchronous code.
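The shape of that dual API is something like this (a simplified sketch, not the actual ProjectXyz interface):

using System.Threading;
using System.Threading.Tasks;

public interface IGameEngine
{
    // synchronous single step: Unity3D calls this from its own Update() loop
    void Update();
}

public static class GameEngineExtensions
{
    // asynchronous wrapper for hosts WITHOUT their own loop: run the same
    // synchronous steps on a background task until cancelled
    public static Task RunAsync(this IGameEngine engine, CancellationToken cancellationToken) =>
        Task.Run(() =>
        {
            while (!cancellationToken.IsCancellationRequested)
            {
                engine.Update();
            }
        }, cancellationToken);
}

Inside Unity3D, a MonoBehaviour's Update() just calls engine.Update() directly, so there are no extra threads to hang on scene switches or application close.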

The lesson? Sometimes you can just come up with a simple solution that’s an alternative instead of hammering away trying to fix a problem you created yourself.

DevOps – Build & Copy Dependencies

This one for me has been one of my biggest nightmares so far.

The structure of my current game setup is as follows:

  • ProjectXyz.sln: The solution that contains all of my back-end shared game framework code. This is the really generic stuff I’m trying to build up so that I can build other games with generic pieces if I wanted to.
  • Macerus.sln: The game-specific business logic for my RPG built using ProjectXyz as a dependency. Strictly business logic though.
  • Macerus Unity: The project that Unity3D creates. This contains presentation layer code built on Macerus.sln outputs and ProjectXyz.sln outputs.

I currently don't have my builds set up to create NuGet packages. This would probably be an awesome route to go, but I also think it might result in a ton of package churn right now as the different pieces are constantly changing. It's probably something I'll revisit as things harden, but for now it seems like too much effort given the trade-off.

So what have I been doing?

  • I build ProjectXyz.sln.
    • The outputs go into this solution’s bin folder
  • I build Macerus.sln
    • There’s a prebuild step that copies ProjectXyz dependencies over
    • The outputs go into this solution's bin folder
  • I use a custom in-editor menu to copy dependencies into my Unity project
    • This resets my current “dependencies” asset folder
    • The build outputs from the other solutions are copied over
  • I can run the project with new code!

This is a little tedious, sure. But it hasn't been awful. The problem? Visual Studio can only seem to clean what it has knowledge about.

I've been refactoring and renaming assemblies to better fit the structure I want. A side note worth mentioning is that MUCH of my code is pluggable… The framework is very light and most things are injected via Autofac from enumerating plugin modules. One of the side effects is that downstream dependencies of ProjectXyz.sln (i.e. Macerus.sln) have build outputs that include some of the old DLLs from prior to the rename. And now… Visual Studio doesn't seem to want to clean them up on build. So what happens?

Unity3D starts automatically referencing these orphaned dlls and the auto-plugin loading is having some crazy behaviour. I’ve been seeing APIs show up that haven’t existed for weeks because some stale DLL is now showing up after an update to the dependencies. This kind of thing was chewing up HOURS of my debugging time. Not going to fly.

I decided to expand my menu a bit more. I now call MSBuild.exe on my dependency solutions prior to copying over dependencies. This removes two completely manual steps from the process, and I also purged my local bin directories. Now when I encounter this problem of orphaned DLLs, my single click to update all my content lets me churn iterations faster and shortens my debugging time. It's unfortunately still not an ultimate solution to the orphaned dependencies lingering around, but it's better.
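The menu hook is roughly this shape (a sketch living under an Editor folder in the Unity project; MenuItem is Unity's real editor extension point, but the paths here are placeholders, and the copy step is omitted):

using System.Diagnostics;
using UnityEditor;

public static class DependencyMenu
{
    // placeholder path: point this at your MSBuild install
    private const string MsBuildPath = @"C:\path\to\MSBuild.exe";

    [MenuItem("Tools/Build + Copy Dependencies")]
    public static void BuildAndCopyDependencies()
    {
        // build the upstream solutions before copying their outputs over
        Run(MsBuildPath, @"C:\dev\ProjectXyz\ProjectXyz.sln /t:Build");
        Run(MsBuildPath, @"C:\dev\Macerus\Macerus.sln /t:Build");

        // ...then reset the "dependencies" asset folder and copy the fresh
        // build outputs in
    }

    private static void Run(string exe, string arguments)
    {
        using (var process = Process.Start(exe, arguments))
        {
            process.WaitForExit();
        }
    }
}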

The lesson learned here was that sometimes you don’t need THE solution to your problem, but if you can make temporarily fixing it or troubleshooting it easier then it might be good enough to move forward for now.

