Programming

Video Stream – RPG Systems with Loot Generation

I asked on LinkedIn whether people would be interested in a video stream focused on programming, and I had some positive feedback. To test the waters, I decided I’d start with some system-design content, given that I’ve been getting a bunch of practice with distributed systems. This is a bit of a change-up from distributed systems in that it’s about interactions between co-located systems in a game framework I’m creating.

Here’s the video!

In the video stream, what I’m trying to accomplish is finding a way to share information from particular domains so it can be used in other domains. I mean, that’s the gist of it 🙂 The complicated parts are:

  • How do I keep domain information from leaking into other domains?
  • How do I control access to the information without globally exposing it? (i.e. avoiding something like a static global variable)
  • How do I make sure I have the right state when I want to go use it? (i.e. the systems run on a game loop, but some interactions that get triggered are from user events outside of the game loop)

My white-boarding skills in MS Paint are pretty rough, but I feel like it went okay! I’ll follow up with my findings, and hopefully get some programming videos put together to better explain some programming concepts.

Let me know what you think!


Xamarin Forms and Leveraging Autofac

I’ve loved dependency injection frameworks ever since I started using them. Specifically, I’m obsessed with using Autofac, and I have a hard time developing applications unless I can use a solid DI framework like it! I’ve recently been working with Xamarin and found that I wanted to use dependency injection, but parts of the framework don’t support this well out of the box. I was adamant about getting something going though, so I wanted to show you my way of making this work.

Disclaimer: In its current state, this is certainly a bit of a hack. I’ll explain why I’ve taken this approach though!

In your Android projects for Xamarin, any class that inherits from Activity is created by the framework. This means we lose the luxury we’d usually have of passing dependencies in via a constructor and having Autofac magically wire them up for us. The constructors for these classes need to remain parameterless, and your OnCreate method is usually where the initialization for your activity happens. We can work around that though.

My solution is to use a bit of a reflection hack coupled with Autofac to allow Autofac resolutions in the constructor, as close as possible to how they would normally work. A solution I wanted to avoid was a globally accessible reference to our application’s lifetime scope. I wanted to make sure that I limited the “leakage” of this not-so-great pattern to as few places as possible. With that said, I wanted to introduce a lifetime scope as a reference only to the classes that were interested in using Autofac where they’d otherwise be unable to.

  1. Make a static readonly variable, with a particular name that we can look up via reflection, in each class that wants to use Autofac. (An alternative is using an attribute to mark a static variable.)
  2. After building your Autofac container and getting your scope (but prior to using it for anything), use reflection to find all types that have this static scope variable.
  3. Assign your scope to these static variables on the types that support it.
  4. In the constructors of these classes (keeping them parameterless so the framework can still do its job!), access your static scope variable and resolve the services you need.

Here’s what that looks like in code!

MainActivity.cs

public class MainActivity : global::Xamarin.Forms.Platform.Android.FormsAppCompatActivity
{
    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);
        global::Xamarin.Forms.Forms.Init(this, savedInstanceState);

        var builder = new ContainerBuilder();

        // TODO: add your registrations in!

        var container = builder.Build();
        var scope = container.BeginLifetimeScope();

        // the static variable I decided to use is called "_autofacHack"
        // so that it deters people from using it unless they know
        // what it's for! you could use reflection to find similar
        // fields with something like an attribute if you wanted.
        foreach (var field in GetType()
            .Assembly
            .GetTypes()
            .Select(x => x.GetField("_autofacHack", BindingFlags.NonPublic | BindingFlags.Static))
            .Where(x => x != null))
        {
            field.SetValue(null, scope);
        }

        LoadApplication(scope.Resolve<App>());
    }
}

The class that can take advantage of this would look like the following:

public sealed class MyActivityThatNeedsDependencyInjection : Activity
{
    // assigned via reflection from MainActivity.OnCreate before this activity is created
    private static readonly ILifetimeScope _autofacHack;

    private readonly IMyService _theServiceWeWant;

    // NOTE: we kept the constructor PARAMETERLESS because we need to
    // let the framework create this activity for us!
    public MyActivityThatNeedsDependencyInjection()
    {
        _theServiceWeWant = _autofacHack.Resolve<IMyService>();
    }

    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);

        // now we can use this service that we "injected"
        _theServiceWeWant.DoTheCoolStuff();
    }
}

Summary

Reading this, you might think, “Well, I don’t want to pollute my Xamarin code with variables that say _autofacHack… that’s gross.” And I don’t blame you! So this serves as a starting point for a greater solution, which is something I think I’ll evolve for myself, and I encourage you to do the same.

Things I’m focused on:

  • Minimize where “ugly” code is. A globally accessible scope on a static class seems like it can spread the “ugly” code to too many spots. This approach is intended to help minimize that.

    What are some next steps to make that EVEN better? Maybe an attribute so we can call it something nicer? (There’s a rough sketch of that idea after this list.)
  • Write code that feels as close as possible to the “real” thing. Autofac usually allows us to specify services in the constructor and then automatically provides the instances we need. This code is structured to be very similar, but since we’re NOT allowed to change the parameterless constructors, we resolve our services off the scope there instead. And because it’s in the constructor, we can assign things to readonly variables as well, which is a nice bonus.
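
Here’s a rough sketch of what that attribute idea could look like. To be clear, this isn’t part of Autofac or Xamarin; InjectedLifetimeScopeAttribute is just a hypothetical marker attribute, and the reflection loop in MainActivity would search for fields decorated with it instead of fields named "_autofacHack":

// hypothetical marker attribute for fields that should receive the lifetime scope
[AttributeUsage(AttributeTargets.Field, AllowMultiple = false)]
public sealed class InjectedLifetimeScopeAttribute : Attribute
{
}

public sealed class AnotherActivityUsingInjection : Activity
{
    // a nicer name, discovered by attribute instead of by a magic string
    [InjectedLifetimeScope]
    private static ILifetimeScope _scope;

    private readonly IMyService _service;

    public AnotherActivityUsingInjection()
    {
        _service = _scope.Resolve<IMyService>();
    }
}

// and in MainActivity.OnCreate, the lookup becomes something like:
foreach (var field in GetType()
    .Assembly
    .GetTypes()
    .SelectMany(t => t.GetFields(BindingFlags.NonPublic | BindingFlags.Static))
    .Where(f => f.GetCustomAttribute<InjectedLifetimeScopeAttribute>() != null))
{
    field.SetValue(null, scope);
}

(GetCustomAttribute comes from System.Reflection, so the usual using System.Reflection; covers it.)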

The more implementations of this I go to use, the more I plan to refine it! How have you leveraged Autofac in your Xamarin projects?


Downtime? Time to Build!

The COVID-19 pandemic has caused many of us to stay isolated and at home, but that’s OK! I genuinely enjoy developing software and wanted to take this opportunity to focus on learning. Having some downtime has afforded me the chance to try putting together a system that I otherwise might not have explored building.

In this article, I’ll share different aspects about an application I’m building that purposefully put me outside of my comfort zone. In my opinion, having downtime is an opportunity to learn and grow! It’s time to take advantage of that.

When the app and system are ready to showcase, I’ll share more insight into what’s actually being built!

The Client Framework

The application being built is intended to run on multiple mobile platforms, so Xamarin was my choice here. I briefly used Xamarin several years ago, but my reasons for choosing it this time through were:

  • C#, .NET, and Visual Studio support. There were many things I wanted to learn about, but I wanted to limit myself to a familiar foundation so that I could still feel that I’m making progress.
  • Supports iOS and Android right from the start. I thought it would be an interesting software design challenge to be able to build base components in a shared library and then leverage dependency injection for platform-specific components.
  • Traction. Xamarin has been around for a number of years now, and it’s only continuing to gain support. I didn’t want to focus on a platform or SDK that wasn’t getting any love.

Xamarin was easy to get set up with because of the familiar pieces, but I was pushed out of my comfort zone to learn about how certain things are handled differently on iOS and Android.

I was able to learn about:

  • Android/iOS permission requests
  • The new UI controls in Xamarin (vs WPF which I’m used to)
  • More practice with async/await and UI experience
  • Different mobile API frameworks (Crashlytics, Google Analytics, user control libraries)

The Server Framework

I’ve joked with my colleagues in the past that “web isn’t my thing”, but what I really mean is that “I don’t have experience making web pages”. I’ve built many client-server systems in my professional experience and hobby programming, but serving nice-looking web pages hasn’t been my strength.

For the system that I’m designing in my downtime, I need an application server. I decided to go with ASP.NET Core because I haven’t set up many ASP.NET systems before, and I don’t have experience hosting them in the cloud. However, I do have experience with C# and Visual Studio, so again, this seemed like a good balance of trying new things with some familiar concepts to ensure I could make progress.

In short order, I was able to get a handful of application server routes set up and communicating with the client application properly. The most difficult part was truly just making sure the firewall and SSH settings were configured locally, plus a handful of times cursing at my phone for not being on WiFi (and thus not seeing the development server on my local network)!

I was able to learn about:

  • Authentication attributes (and JWT token handling)
  • Routes with query parameters (there’s a quick sketch of these first two items after this list)
  • Serving static content as well as application requests
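
To make those first two bullets concrete, here’s a minimal sketch of the kind of route I mean. The controller, route, and parameter names are made up for illustration; [Authorize], [FromQuery], and the rest are standard ASP.NET Core pieces:

using System.Linq;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public sealed class ItemsController : ControllerBase
{
    // GET api/items?category=weapons&limit=10
    // the [Authorize] attribute means a valid JWT is required to hit this route
    [Authorize]
    [HttpGet]
    public IActionResult GetItems([FromQuery] string category, [FromQuery] int limit = 25)
    {
        // hypothetical lookup; in the real system this would hit a data store
        var results = Enumerable.Repeat($"{category}-example", 1).Take(limit);
        return Ok(results);
    }
}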

The Authentication Framework

This one was fun. In my professional career in software development, one thing that scares everyone away is designing authentication and user management. Nobody wants to do it because it’s complex, has plenty of edge cases, and… it’s probably critical to your system working 🙂

Thankfully, Firebase saved the day. I wrote about this already, but Firebase truly made authentication and user management way more straightforward than I’m used to. The hardest parts of working with Firebase had nothing to do with Firebase and everything to do with implementing OAuth for the providers of my choosing.

Because I could use OAuth to authenticate users and have identifiable information provided via a JWT, having a simple registration and login system that mapped OAuth’d users to some sort of internal-system user identifier was trivial. All of the routes for my web application could also authenticate and control access via this same JWT! One of the scariest components about building a system became a relatively light lift.

I was able to learn about:

  • OAuth for popular providers (Google, Facebook, etc…)
  • OAuth scopes
  • JWT tokens
  • Firebase SDK from a server and client side
  • Route authentication in ASP.NET using JWTs

The Database Framework

As part of the journey for exploring unfamiliar technology in my downtime, I decided I’d like to pursue a database that wasn’t SQL-based. I was already using Firebase for authentication and Google offers an intriguing document store in Firebase that provides real-time update triggers.

Being unused to document databases (I’m much more familiar with relational databases), I spent some time trying to design the schemas I intended to use. One thing that caught me off guard pretty quickly was that in order to modify a list of things, I’d need to have a local copy of the list, manipulate the collection, and then push the entire structure back to replace the existing one. This seemed like overkill for what I was trying to do. The alternative was that I could modify objects in the data store to add/remove child items, but each child item would receive another identifier object as a linking object. So a list of X things actually meant 2X things (one for the identifier, one for the entry). Again, this was overkill.

I decided to go back to familiar technology but explore a not-so-familiar space! I have a good deal of experience working with SQLite and MySQL from my career. What I don’t have a lot of experience with is the management and provisioning of a MySQL instance with availability in the cloud! Enter Amazon RDS!

Switching to Amazon RDS meant a bit of a learning curve for hosting and configuring a MySQL instance in the cloud. I was able to learn about various AWS services and roles and how they play together. But once the instance was up and running, I was able to work with it effectively.

The Tooling

I had help with this one, fortunately, from a friend and old colleague of mine, Graeme Harvey. If you’ve worked with me professionally or on a hobby project, one of the things I admit pretty quickly is that I really dislike tinkering to get a configuration right. Generally that’s because there’s a lot of tweaking to get something set up properly, documentation to understand, and frustrating trial and error.

Source control was going to be git. I didn’t even want to consider anything else. Git is so widely used now that I didn’t see a real benefit to trying to learn a new source control system. Repository management though? BitBucket. I’m a huge fan of the Atlassian suite of products having used them professionally for many many years.

And with that said, Jira for issue management. Again, I’ve used Jira professionally for many years. What I’ve never done is run my own Jira instance! This was made very simple by Atlassian, and since I’m not a company generating big bucks, I was able to get the free tier, which handles all of my needs. Jira is straight up awesome for issue management and visibility into your ongoing work.

Another no-brainer for me was getting Slack set up. Slack is something I’ve used a lot professionally, but like Jira, I’ve never had my own Slack instance. It was simple to set up and works just like I’m used to from my career. This wasn’t really a huge requirement, but since I’m working with another person, it provides a nice workspace for chatting about the things we’re working on.

And finally… builds. I already wrote about using CircleCI to get our server builds up and running, and to reiterate, it was extremely simple. I even have the builds wired up to report back to Slack when we push code up to BitBucket! Where we’re still having some fun is figuring out how to deploy our application server builds to an EC2 instance automatically. That would allow us to “release” to a branch and have the production hosting of our application server get updated in the cloud!

But we’re building a mobile application in Xamarin, so we have three outputs:

  • The application server in ASP.NET
  • The Android client
  • The iOS client

Mobile app development gets interesting because what you’re actually building is intended to run on a device and not a desktop/server. That matters because a desktop or server application will generally be output as a binary, but a mobile application will be some sort of package you need to sign and distribute through an app store.

After some back and forth, we decided to explore App Center. If I’m being honest, this was just as easy to set up for our iOS and Android Xamarin apps as CircleCI was for our server builds. App Center provided a simple wizard for triggering off of our BitBucket repositories getting new commits to a branch, and the rest was done for us.

What I learned:

  • Git+BitBucket = Free git repository hosting (private if you want) and plenty of integrations
  • Jira = Free issue management and kanban board with plenty of integrations
  • Slack = Free chat workspace with plenty of integrations
  • CircleCI = Free continuous builds with integrations into ALL the other things I just mentioned 🙂
  • App Center = Free continuous builds for iOS and Android Xamarin apps with plenty of integrations!

In Summary

It’s been a couple of weeks of getting to try building out this project and setting up these different systems to go to work for me. I’ve been able to learn a lot about new or previously unexplored SDKs/technology and even learn some different facets of things I already have professional experience with.

I’m not one to sit idle… So using my downtime to learn cool things and build something has been an awesome time! I’d highly recommend that if you’re in quarantine, lockdown, or otherwise unable to really get out and do too much that you try your hand at something new! Get creative. Get learning.

Be safe and stay healthy!


Firebase and Low-Effort User Management

I’ve found myself with some additional time to be creative during the great COVID-19 lockdown/quarantine days. That’s why there have been more blog posts recently! Actually, I wanted to take the time to experiment with some unfamiliar technologies and build something. For a project, I wanted to leverage authentication, but I’m well aware that user management can become a really complex undertaking. I had heard about Firebase from Google and wanted to give it a shot.

For the purposes of this discussion, Firebase would allow me to create something like an OAuth proxy for the system I wanted to build and, by doing so, would end up managing all of the users for me. What I needed to do with Firebase to get that set up was actually quite straightforward.

First, you start off in typical fashion registering for Firebase. From there, you’re asked about adding a new project, which looks like the following:

Create Firebase project

You’re then required to add apps to your project within Firebase. But here’s where your journey might differ from mine. I’m working in Xamarin, so I wanted to be able to add an iOS app and an Android app. The reason you need to do this is so that you can get the proper service information for your app so that it can communicate with Firebase. Google does a great job with walking you through the process, and in the end you’re required to add a service configuration file to each of your projects.

The next part was probably the most time consuming, and that was integrating some sort of OAuth for a platform into my mobile app. There’s tons of documentation about that on the Internet, so I’m not getting into it here. There are different steps to take depending on which platform (i.e. Google, Facebook, Twitter, etc…) you want to authenticate with and whether you’re working on iOS, Android, web, or something else. Getting this all up and running took the most time of any step, but it wasn’t really anything to do with Firebase… it was picking and supporting OAuth for the platforms of my choosing.

I knew which platforms I wanted to work with, but Firebase actually has a set that it supports (including email + password)! You’ll want to check that out because you need to enable the platforms you want to support in the console:

Firebase OAuth Providers

Now you can find the Firebase SDK for the platform you’re working with! Once your application/service is able to OAuth with a platform that you support, ensure it’s enabled in the console. From there you can use a method from the SDK that allows you to sign into Firebase with OAuth. This is where you’d provide the access token from the platform of your choice after having logged into that platform successfully.
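
As a rough sketch of that sign-in call (assuming the Xamarin.Firebase.Auth Android binding, which mirrors the native Firebase API; the exact method names can vary between binding versions), exchanging a Google access token for a Firebase user looks something like this:

using System.Threading.Tasks;
using Firebase.Auth;

public static class FirebaseSignIn
{
    // assumption: the binding exposes the native signInWithCredential call as an async helper
    public static async Task<FirebaseUser> SignInWithGoogleAsync(string idToken, string accessToken)
    {
        // build a Firebase credential from the tokens returned by the Google OAuth flow
        var credential = GoogleAuthProvider.GetCredential(idToken, accessToken);

        // sign into Firebase; the user entry gets created on first sign-in
        var result = await FirebaseAuth.Instance.SignInWithCredentialAsync(credential);
        return result.User;
    }
}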

The result is that Firebase actually builds a user entry for you with data related back to the OAuth platform. These are based on the providers that you used to authenticate originally. By doing this, you can use these external authentication providers and with minimal effort connect them to your Firebase project! You can get all of the authentication options you’d like AND free user management as a result.

This is high-level, but I will follow up with how we’re leveraging Firebase with the components we’re putting together in our system. Spoiler: ASP.NET controller routes can get protected by Firebase authentication with almost no effort!
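
As a teaser for that follow-up, here’s a minimal sketch of the server side (assuming the standard Microsoft.AspNetCore.Authentication.JwtBearer package; "your-project-id" is a placeholder for your Firebase project ID):

using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.IdentityModel.Tokens;

public void ConfigureServices(IServiceCollection services)
{
    const string firebaseProjectId = "your-project-id"; // placeholder

    services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            // Firebase issues JWTs from securetoken.google.com for your project
            options.Authority = $"https://securetoken.google.com/{firebaseProjectId}";
            options.TokenValidationParameters = new TokenValidationParameters
            {
                ValidateIssuer = true,
                ValidIssuer = $"https://securetoken.google.com/{firebaseProjectId}",
                ValidateAudience = true,
                ValidAudience = firebaseProjectId,
                ValidateLifetime = true,
            };
        });

    services.AddControllers();
}

With that in place, decorating a controller route with [Authorize] is all it takes to require a valid Firebase-issued JWT.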


CircleCI + BitBucket => Free Continuous Integration!

CircleCI is a service I heard about from a friend that lets you get continuous integration pipelines built for your repositories… and it does it quickly and easily. It’s also free if you’re someone like me and don’t have a large demand for builds! I wanted to write about my experience getting CircleCI wired up with BitBucket, which I like to use for my project hosting, and hopefully it’ll help you get started.

First things first: signing up is super easy if you have BitBucket because you can OAuth right away with it. CircleCI will show you the projects and repositories that you have in BitBucket, and you can decide which one you’d like to get started with. You can navigate to the projects in their new UI from the “Add Projects” menu.

CircleCI Left Navigation

When you click “Add Projects” you’ll be met with a list that looks like this but… with your own projects and not mine 🙂

Circle CI + BitBucket Project Listing

On this screen, you’ll want to select “Set Up Project” for the project of your choice. For me, I was dealing with a .NET project (which I had already set up), so I selected it and was presented with the following screen. It also lets you pick a template to get started:

CircleCI Template Dropdown

However, I needed to change the default template to get things to work properly since I had NuGet packages! We’re missing a restore step. With some help from my friend Graeme, we were able to transform the sample from this:

version: 2.1

orbs:
  win: circleci/windows@2.2.0

jobs:
  build:
    executor: win/default

    steps:
      - checkout
      - run: dotnet build

To now include the NuGet restore step prior to building!

version: 2.1

orbs:
  win: circleci/windows@2.2.0

jobs:
  build:
    executor: win/default

    steps:
      - checkout
      - run:
          name: Restore
          command: dotnet restore
      - run:
          name: Build
          command: dotnet build -c Release

Once you save this, CircleCI will make a branch called “circleci-project-setup” on your remote. It then goes ahead and runs your build for you! When the build for this new remote branch succeeded, I pushed this configuration to my “master” branch so that all builds on master going forward would get continuous integration builds.

Checking the CircleCI dashboard now looks like the following:

CircleCI Successful Pipelines

You can see pipeline #1 is on the branch where the test circleci configuration was made (and passed). Pipeline #2 is once I added this commit onto my master branch and pushed up! Now I have continuous integration for pushing to my lib-nexus-collections-generic BitBucket project. When I check out my commit page, I can see the new commits after the configuration landed get a nice green check when the builds pass on CircleCI:

BitBucket Commit Listing With Builds

So with a few easy steps, you can have not only free source hosting in BitBucket but also free continuous integration from CircleCI. Every time you push code to a remote branch, you kick off a build! This is only the starting point, as you can configure CircleCI to do much more than just restore NuGet packages and build .NET solutions 🙂


xUnit Tests Not Running With .NET Standard

Having worked with C# for quite some time now writing desktop applications, I’ve begun making the transition over to .NET Standard. In my professional work, that transition has been much slower because of product requirements and time, but in my own personal development there’s no reason why I couldn’t get started with it. And call me crazy, but I enjoy writing coded tests for the things I make. My favourite testing framework for my C# development is xUnit, and naturally, as I started writing some new code with .NET Standard, I wanted to make sure I could get my tests to run.

Here’s an example of some C# code I wrote for my unit tests of a simple LRU cache class I was playing around with:

using System;
using System.Diagnostics.CodeAnalysis;
using Xunit;

[ExcludeFromCodeCoverage]
public sealed class LruCacheTests
{
    [Fact]
    public void Constructor_CapacityTooSmall_ThrowsArgumentException()
    {
        Assert.Throws<ArgumentException>(() => new LruCache<int, int>(0));
    }

    [Fact]
    public void ContainsKey_EntryExists_True()
    {
        var cache = new LruCache<int, int>(1);
        cache.Add(0, 1);
        var actual = cache.ContainsKey(0);
        Assert.True(
            actual,
            $"Unexpected result for '{nameof(LruCache<int, int>.ContainsKey)}'.");
    }
}

Pretty simple stuff. I know that for xUnit in Visual Studio, I need a NuGet package for the test runner to work right in the IDE. Simple enough: I just need to add the “xunit.runner.visualstudio” package alongside the xunit package I had already included in my test project.

Nuget package management for project in visual studio showing required xUnit packages.
Required xUnit nuget packages

Ready to rock! So I go run all my tests in the solution but I’m met with this little surprise:

[3/24/2020 3:59:10.570 PM] ========== Discovery aborted: 0 tests found (0:00:00.0622045) ==========
[3/24/2020 3:59:20.510 PM] ---------- Discovery started ----------
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find C:\[redacted]\bin\Debug\netstandard2.0\testhost.dll. Please publish your test project and retry.
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
[3/24/2020 3:59:20.570 PM] ========== Discovery aborted: 0 tests found (0:00:00.0600179) ==========
Executing all tests in project: [redacted].Tests
[3/24/2020 3:59:20.635 PM] ---------- Run started ----------
[3/24/2020 3:59:20.639 PM] ========== Run finished: 0 tests run (0:00:00.0039314) ==========

Please publish your test project and retry? Huh?

As any software engineer does, I set out to Google for answers. I came across this Stack Overflow post: https://stackoverflow.com/q/54770830/2704424

And fortunately someone had responded with a link to the xUnit documentation: Why doesn’t xUnit.net support netstandard?

The answer was right at the top!

netstandard is an API, not a platform. Due to the way builds and dependency resolution work today, xUnit.net test projects must target a platform (desktop CLR, .NET Core, etc.) and run with a platform-specific runner application.

https://xunit.net/docs/why-no-netstandard

My solution was to change my test project to build for one of the latest .NET Framework versions… and voilà! I chose .NET 4.8 as the latest available at the time of writing.
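
In an SDK-style test project, that change is just the TargetFramework property (the project name and package versions here are illustrative):

<!-- [Redacted].Tests.csproj -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- target a concrete platform (net48) instead of netstandard2.0 so a runner can execute the tests -->
    <TargetFramework>net48</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="xunit" Version="2.4.1" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.4.1" />
  </ItemGroup>
</Project>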

My next attempt at running all of my tests looked like this:

Executing all tests in project: [Redacted].Tests
[3/24/2020 3:59:20.635 PM] ---------- Run started ----------
[3/24/2020 3:59:20.639 PM] ========== Run finished: 0 tests run (0:00:00.0039314) ==========
[3/24/2020 4:08:14.898 PM] ---------- Discovery started ----------
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Desktop .NET 4.0.30319.42000)
[xUnit.net 00:00:00.40]   Discovering: [Redacted].Tests
[xUnit.net 00:00:00.47]   Discovered:  [Redacted].Tests
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Universal Windows)
[3/24/2020 4:08:16.289 PM] ========== Discovery finished: 2 tests found (0:00:01.3819229) ==========
Executing all tests in project: [Redacted].Tests
[3/24/2020 4:08:17.833 PM] ---------- Run started ----------
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Desktop .NET 4.0.30319.42000)
[xUnit.net 00:00:00.41]   Starting:    [Redacted].Tests
[xUnit.net 00:00:00.66]   Finished:    [Redacted].Tests
[3/24/2020 4:08:19.337 PM] ========== Run finished: 2 tests run (0:00:01.4923808) ==========

And I was back on my path to success! Hopefully if you run into this same issue you can resolve it in the same fashion. Happy testing!


Autofac Modules and Code Organization

Organizing Code With Autofac Modules

What are Autofac Modules?

I’ve been writing a little bit about Autofac and why it’s rad, but today I want to talk about Autofac modules. In my previous post on this, I talked about how one of the drawbacks to the constructor dependency pattern is that at some point in your application, generally in the entry point, you get allllll of this spaghetti code that is the setup for your code base.

Essentially, we’ve balanced having nice clean testable classes with having a really messy spot in the code. But it’s only ONE spot and the rest of your code is nice. So it’s a decent trade off. But we can do better than that, can’t we?

Autofac modules!

We can use Autofac modules to organize some of the code that we have in our entry point into logical groupings. An Autofac module is an implementation of a class that registers types to our dependency container to be resolved at a later time. You could do this all in one big module, but like many things in programming, having some giant monolithic thing that does ALLLL the work usually isn’t the best.

An Example of Converting to Autofac Modules

Let’s create a simple application as an example. I’ll describe it in words, and then I’ll toss up some code to show a simple representation of it. We’ll assume we’re passing dependencies as interfaces via constructors as one of our best practices, which makes this conversion much easier!

So our app will have a main window with a main content area and a header area. These will be represented by three objects. Our application will also have a logger instance that we pass around so classes that need logging abilities can take an ILogger in their constructor. But our logger will have some simple configuration that we need to do before we use it.

Let’s assume to start our Program.cs file looks like this:

internal sealed class Program
{
    private static void Main(string[] args)
    {
        var logger = new FileLogger();
        logger.LogLevel = LogLevel.Debug;
        logger.FilePath = "log.txt";

        var header = new FancyHeader(logger);
        var content = new BoringMainContent();
        var window = new MainWindow(header, content);
        window.Show();
    }
}

Before getting comfortable with Autofac, my first step would be to logically group things in the main method. In this particular case, we have something simple and, surprise… it’s all grouped. My next step would usually be to pull these things out into their own methods. I do this because it helps me identify whether my groupings make sense and where my dependencies are. Let’s try it!

internal sealed class Program
{
    private static void Main(string[] args)
    {
        var logger = InitializeLogging();
        var window = InitializeGui(logger);
        window.Show();
    }

    // no params passed in, so no dependencies
    // return value is an ILogger, so we have a
    // logical grouping that will provide us a logger
    private static ILogger InitializeLogging()
    {
        var logger = new FileLogger();
        logger.LogLevel = LogLevel.Debug;
        logger.FilePath = "log.txt";
        return logger;
    }

    // only parameter is a logger, so that's our dependency
    // return value is a window, so this grouping provides
    // a window for us
    private static IWindow InitializeGui(ILogger logger)
    {
        var header = new FancyHeader(logger);
        var content = new BoringMainContent();
        var window = new MainWindow(header, content);
        return window;
    }
}

Alright, cool. So yes, this is a bit of extra code compared to the initial example, but I promise you grouping these things out into separate methods as a starting point when you have a LOT of initialization logic will help a ton. Once they are in methods, you can pull them out into their own classes. Refactoring 101 for the single responsibility principle going on here 😉 BUT, we’re interested in Autofac. So what’s the next step?

We have two logical groupings going on here in our example. One is logging and the other is for the GUI. So we can actually go ahead and make two Autofac modules that do this work for us.

public sealed class LoggingModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<FileLogger>()
            .AsImplementedInterfaces() // FileLogger will be resolved as an ILogger
            .SingleInstance() // we only ever need to use one logger instance for our app
            .OnActivated(x =>
            {
                // this handles our extra setup we had for this object
                x.Instance.LogLevel = LogLevel.Debug;
                x.Instance.FilePath = "log.txt";
            });
    }
}

public sealed class GuiModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<FancyHeader>() // this has a dependency on ILogger, but autofac will figure it out for us
            .AsImplementedInterfaces() // FancyHeader will be resolved as IHeader
            .SingleInstance(); // we only ever need to use one instance for our app
        builder
            .RegisterType<BoringMainContent>()
            .AsImplementedInterfaces() // BoringMainContent will be resolved as IContent
            .SingleInstance(); // we only ever need to use one instance for our app
        builder
            .RegisterType<MainWindow>() // Autofac will resolve our IHeader and IContent dependencies for us
            .AsImplementedInterfaces() // MainWindow will be resolved as IWindow
            .SingleInstance(); // we only ever need to use one instance for our app
    }
}

And those are our two logical groupings for modules! So, how do we use this and what does our Main() method look like now? I’ll demonstrate with one way that works for a couple modules, but I want to follow up with another post that talks about dynamically loading modules. If you can imagine this scenario blown out across MANY modules, you’ll understand why it might be helpful.

The idea for our Main() method is that we just want to resolve the one main dependency manually and let Autofac do the rest. So in this case, it’s our MainWindow.

private static void Main(string[] args)
{
    // create an autofac container builder
    var containerBuilder = new ContainerBuilder();

    // manually register our two new modules we made
    containerBuilder.RegisterModule<LoggingModule>();
    containerBuilder.RegisterModule<GuiModule>();

    // create the dependency container
    var container = containerBuilder.Build();

    // resolve and use our main dependency by its interface
    // (because we shouldn't care what the implementation is...
    // that was up to the configuration via modules!)
    var window = container.Resolve<IWindow>();
    window.Show();
}

In Summary…

This example showed how to pull your main initialization logic out into groups that play nice as Autofac modules. In a really simple example, having modules might look like bloated extra code, but it already illustrates that your entry point is very simple and follows a pattern to extend (just register another module for more dependencies… and I’ll add more on this later). There’s also an obvious way to group new dependency logic into your application! We discussed logging and GUI initialization, but you could extend this to:

  • User Settings
  • Analytics/Telemetry
  • Error Reporting
  • Database Configuration
  • Etc… Just add more modules!

Sometimes the pain of having a really hectic entry point isn’t realized until you’ve had to work on teams where people are modifying the same beast of an entry point all the time:

  • Simple merge conflicts in your “using” statements… Because there’s hundreds of lines of using statements at the top of the file
  • Visual Studio actually CANNOT use IntelliSense properly when the file gets too unwieldy
  • The debugger cannot resolve variables properly when the main entry point gets too big
  • Merging and auto-conflict resolution sometimes results in code just getting blown away in the entry point… And good luck finding what went wrong in your thousands of lines of initialization

So what’s next? Well, if you keep building out your app you might notice you have tons of modules now. Your single GUI module might have to get broken out into modules for certain parts of the GUI, for example, just to keep them more manageable. Maybe you want plugins to extend the application dynamically, which is really powerful! Our method for registering modules just isn’t really extensible at that point, but it’s very explicit. I’ll be sharing some information about automatic Autofac module discovery and registration next!
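
As a small preview of that, Autofac ships with a RegisterAssemblyModules extension that picks up every module in an assembly, so the explicit registrations above could collapse into something like this (plugin discovery across multiple assemblies builds on the same idea):

using System.Reflection;

private static void Main(string[] args)
{
    var containerBuilder = new ContainerBuilder();

    // registers every Autofac Module implementation found in this assembly
    // (LoggingModule, GuiModule, and anything we add later)
    containerBuilder.RegisterAssemblyModules(Assembly.GetExecutingAssembly());

    var container = containerBuilder.Build();

    var window = container.Resolve<IWindow>();
    window.Show();
}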


RPG Development Progress Pulse – Entry 1

Progress Pulse

Progress Pulse – Entry 1

For the first entry in the progress pulse series, I’ll touch on some things from the past week or so. There’s been a lot of smaller things being churned in the code base, some of them interesting and others less so, so I want to highlight a few. As a side note, it’s really cool to see that the layout and architecture are allowing features to be added pretty easily, so I’ll dive a bit deeper on that. Overall, I’m pretty happy with how things are going.

Unity3D – Don’t Fight It!

I heard from a colleague before that Unity3D does some things you might not like, but don’t try to fight it, just go with it. To me, that’s a challenge. If I’m going to be spending time coding in something I want it to be with an API that I enjoy. I don’t want to spend time fighting it. An example of this is how I played with the stitching pattern to make my Autofac life easier with Unity3D behaviours.

However, I met my match recently. At work, we were doing an internal hackathon where we could work on projects of our choosing over a 24 hour period, and they didn’t have to be related to work at all. It’s a great way to collaborate with your peers and learn new things. I worked on Macerus and ProjectXyz. I was reaching a point where I had enough small seemingly corner-case bugs switching scenes and resetting things that I decided it was dragging my productivity down. It wasn’t exciting work, but I had to do something about it.

After debugging some console logs (I still have to figure out how to get visual studio properly attached for debugging… Maybe I’ll write an article on that when I figure it out?) I noticed I had a scenario that could only happen if one of my objects was running some work at the same time… as itself? Which shouldn’t happen. Basically, I had caught a scenario where my asynchronous code was running two instances of worker threads and it was a scenario in my game that should never occur.

I tried putting task cancellation and waiting into my Unity game. I managed to hang the main thread on scene switching and application close. No dice. I spent a few hours trying to play around with a paradigm where I could make my ProjectXyz game engine object run asynchronously within Unity and not be a huge headache.

I needed to stop fighting it though. There was an easier solution.

I could make both a synchronous and an asynchronous API for my game engine. If you have a game where you want the engine on its own thread, call the async version. Unity3D already has its own game engine loop, so why re-invent it? In Unity3D, I can simply call the synchronous version of the game engine’s API. With this little switch, I suddenly fixed about 3 or 4 bugs. I had to stop fighting the synchronous pattern with my asynchronous code.
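
The shape of that API ends up being roughly the following (the names here are illustrative rather than the actual ProjectXyz interfaces):

using System.Threading;
using System.Threading.Tasks;

public interface IGameEngine
{
    // synchronous API: run exactly one tick of the engine.
    // Unity3D calls this from a MonoBehaviour's Update() since Unity already owns the loop.
    void Update();
}

public static class GameEngineExtensions
{
    // asynchronous API: spin the engine on its own loop for hosts
    // that don't already have one.
    public static async Task RunAsync(
        this IGameEngine engine,
        CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            engine.Update();
            await Task.Yield();
        }
    }
}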

The lesson? Sometimes you can just come up with a simple solution that’s an alternative instead of hammering away trying to fix a problem you created yourself.

DevOps – Build & Copy Dependencies

This one for me has been one of my biggest nightmares so far.

The structure of my current game setup is as follows:

  • ProjectXyz.sln: The solution that contains all of my back-end shared game framework code. This is the really generic stuff I’m trying to build up so that I can build other games with generic pieces if I wanted to.
  • Macerus.sln: The game-specific business logic for my RPG built using ProjectXyz as a dependency. Strictly business logic though.
  • Macerus Unity: The project that Unity3D creates. This contains presentation layer code built on Macerus.sln outputs and ProjectXyz.sln outputs.

I currently don’t have my builds set up to create NuGet packages. That would probably be an awesome route to go, but I also think it might result in a ton of package churn right now, since the different pieces are constantly changing. It’s probably something I’ll revisit as things harden, but for now it seems like too much effort given the trade-off.

So what have I been doing?

  • I build ProjectXyz.sln.
    • The outputs go into this solution’s bin folder
  • I build Macerus.sln
    • There’s a prebuild step that copies ProjectXyz dependencies over
    • The outputs go into this solution’s bin folder
  • I use a custom in-editor menu to copy dependencies into my Unity project
    • This resets my current “dependencies” asset folder
    • The build outputs from the other solutions are copied over
  • I can run the project with new code!

This is a little tedious, sure. But it hasn’t been awful. The problem? Visual Studio can only seem to clean what it has knowledge about.

I’ve been refactoring and renaming assemblies to better fit the structure I want. A side note worth mentioning is that MUCH of my code is pluggable… The framework is very light, and most things are injected via Autofac by enumerating plugin modules. One of the side effects is that downstream dependencies of ProjectXyz.sln (i.e. Macerus.sln) have build outputs that include some of the old DLLs from prior to the rename. And now… Visual Studio doesn’t seem to want to clean them up on build. So what happens?

Unity3D starts automatically referencing these orphaned DLLs, and the auto-plugin loading starts having some crazy behaviour. I’ve been seeing APIs show up that haven’t existed for weeks because some stale DLL is now showing up after an update to the dependencies. This kind of thing was chewing up HOURS of my debugging time. Not going to fly.

I decided to expand my menu a bit more. I now call MSBuild.exe on my dependency solutions prior to copying over dependencies, which removes two completely manual steps from the process. I also purged my local bin directories. Now when I encounter this problem of orphaned DLLs, a single click to update all my content lets me churn through iterations faster and shortens my debugging time. It’s unfortunately still not an ultimate solution to the orphaned dependencies lingering around, but it’s better.
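
For context, the in-editor menu is just a static method behind Unity’s MenuItem attribute that shells out to MSBuild and then copies the outputs over; the paths and solution locations below are placeholders, not my actual layout:

using System.Diagnostics;
using System.IO;
using UnityEditor;

public static class DependencyMenu
{
    [MenuItem("Tools/Build And Copy Dependencies")]
    public static void BuildAndCopyDependencies()
    {
        // build the upstream solutions first (placeholder paths)
        RunMsBuild(@"..\ProjectXyz\ProjectXyz.sln");
        RunMsBuild(@"..\Macerus\Macerus.sln");

        // reset the dependencies asset folder and copy the fresh outputs in
        var dependenciesDirectory = Path.Combine("Assets", "Dependencies");
        if (Directory.Exists(dependenciesDirectory))
        {
            Directory.Delete(dependenciesDirectory, recursive: true);
        }

        Directory.CreateDirectory(dependenciesDirectory);
        foreach (var dll in Directory.GetFiles(@"..\Macerus\bin\Release", "*.dll"))
        {
            File.Copy(dll, Path.Combine(dependenciesDirectory, Path.GetFileName(dll)));
        }

        AssetDatabase.Refresh();
    }

    private static void RunMsBuild(string solutionPath)
    {
        // assumes MSBuild.exe is on the PATH; in practice you may need its full path
        using (var process = Process.Start("MSBuild.exe", $"\"{solutionPath}\" /p:Configuration=Release"))
        {
            process.WaitForExit();
        }
    }
}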

The lesson learned here was that sometimes you don’t need THE solution to your problem, but if you can make temporarily fixing it or troubleshooting it easier then it might be good enough to move forward for now.


RPG Development Progress Pulse

Progress Pulse Series

I figured this would be a fun thing to start doing just to get small updates out and talk about what I’ve been working on for ProjectXyz and the RPG I’m building in Unity3D. These will hopefully be small updates, on a semi- to bi-weekly basis, about the kinds of things that are going on when I’m programming for these projects. This could include:

  • How and why I decided to refactor something
  • A new design practice I’m trying
  • Reflecting on why a design decision has(n’t) been working out
  • A new feature that’s interesting
  • etc…

Some of these will be technical and others much less so. A bit of a progress pulse gives me an outlet to talk about interesting things I’m doing and maybe sheds some light on some areas (game development or just general programming) that you might be interested in.

Where Can I Find Entries In This Series?

I’ll try to organize these Progress Pulse entries into a specific category on my blog. Ideally that way you can navigate them pretty easily. You can click the link below and you should get all the entries in this series!

Click Here For Entire Progress Pulse Series


ProjectXyz: Why I Started A Team For My Hobby Project

ProjectXyz - Why I Started a Team

Who Needs A Team?!

I’ve been building RPG backends for as long as I’ve been able to code. I think my first one, which I made for my grade 11 class, is the only RPG that I “finished”… It was text-based, and all you could do was fight AI by clicking attack, buy better weapons, level up, and repeat. It was also 10,000 lines of VB6 code and so brutal that I couldn’t add anything to it without copying hundreds of lines of code.

Since then, I’ve had the itch. I keep rewriting this thing. I keep taking “Text RPG” (super cool and catchy, I know) and rewriting it. I had my first visual representation of this game called Macerus (here’s another rewrite for Unity), which is actually how I landed my first co-op job. But every time I’d get so far, I’d decide I needed to rewrite it because I had messed up the architecture in some way and refactoring would be too much work.

My latest attempt is called ProjectXyz, because I can’t come up with names. Funny enough, I just Googled it while writing this article and there’s actually a company with the same name… so maybe I’ll have to get more creative. ProjectXyz is supposed to be a very generic RPG game framework that allows new systems, mechanics, and game content to be dropped in, in addition to being independent of a front end for rendering.

It’s also something I’ve been making on my own. Because I’ve been making RPG backends on my own for years now. So who needs to have a team, right?

Too Much Pride For A Team

I think initially I wanted to do this all on my own because of pride. I also don’t think it was something I was conscious of, except for the fact that I looked at this project as my baby and something whose development I could control. I wasn’t consciously telling myself “I have to do this on my own so that I’m better than other people” or anything silly like that.

But why would I go ask others for help? They don’t code like me. They don’t have the same investment into this idea as me. They aren’t as passionate. They might have their own ideas for how to do things too! How could I have someone like that working on MY project?

Those are all pretty naive reasons for choosing to work alone, though. Sure, this is my pet project and I’m likely going to feel more attached to it than anyone else. That’s probably expected. It doesn’t mean that I can’t find people that are super interested in working on something like this. They could be totally passionate about learning different aspects of creating an RPG backend.

As for having their own ideas… That’s probably one of the BIGGEST reasons in FAVOUR of having a team! It’s easy to get scared about having other people put their ideas into something you feel is “yours”. It might have taken a few years of working in the industry (I’m currently just past 6 years of working at Magnet Forensics), but it’s actually very common for other people to contribute ideas to the code bases you’re working on. It happens every day. Sometimes you have design meetings or code reviews or general architectural discussions and your idea ISN’T the one that’s picked. That’s cool! As long as everyone is striving for extensible and testable code, we can make changes if we need to going forward. You don’t need to make every decision, and sometimes it’s much better that way. Other people are smart too 😉

Passion is Key for a Team

While the “team” I started isn’t an official team, it’s the first time I’ve been very open to having people directly contribute to my pet project. I think one of the most obvious reasons I became comfortable with this is because I found someone that was very passionate about exploring this space.

My colleague and I were talking about some of the concepts in ProjectXyz and where I wanted to go with it. Immediately he expressed interest in map generation and how that’s always been something he wanted to explore. How can maps be procedurally generated? Can we take this concept and generate maps on the fly? What are memory and runtime constraints? How do we represent this information in code? What about persistent storage?

I could immediately tell he was very curious about how a system like this might work. After several conversations with him about how he was starting to hack up some ideas and doing research on different algorithms, I knew he was passionate about it. We discussed working on some of these things together and contributing to the project code that I have, and we’ve been going back and forth for a few weeks now sharing ideas and his progress that he’s making for map generation. I’ve been hands off only really acting as a sounding board for him.

I think having someone passionate like this is critical for a small team. There are going to be many barriers when working on a challenging project, and it’s easy to get bogged down and lose motivation when you’re stuck. Having additional people that are passionate about seeing progress in your project means you have some support for pushing through those hard times when you might lose motivation. If my colleague comes to me and says “I’ve been stuck on this issue and maps won’t generate how I want…”, then I’m more than happy to sit down with him and talk through his algorithm and maybe where there’s an issue. I’m invested in seeing his piece come to fruition. Similarly, if I’m working on something like dynamic item generation for the game and I get stuck, I know he’s there to do the exact same thing. We both want to see this thing working how we intend.

So passion is important for a team. But is it sufficient? Is it the only requirement for adding a team member?

A Team is Built on Trust

Trust! Trust is a huge part of establishing a team because you need to be able to rely on each other. As mentioned, my colleague is passionate about working on this and has an interest in map generation. But what if I had never seen any of his code before? What if I didn’t know if he’s had practice with writing extensible code, testable code, following good design practices, etc… What if?

To be honest, I probably would be pretty nervous about him contributing code. It might be a huge barrier for me. I’d want to review his code and make sure it wasn’t “polluting” my pet project. I’ve re-written this code enough times that I really don’t want to have to think about rewriting it again! If I was nervous about someone contributing code I was going to need to re-write from scratch just to have an extensible design, it might not even be worth it having them contribute in the first place. It might actually create MORE work in the long run. It sounds selfish, but if the goal of adding someone to the team is to provide a net positive effect, then having to re-write code that isn’t up to par might be a deal breaker.

But that’s not the case here. I have multiple years of experience working with this colleague closely on various projects. We align to coding practices but still have our own twist on things. We value the same things in “good” code (extensible and testable). We use many of the same design patterns in similar situations. I’ve seen enough of his code to know that most of the time my comments about it are “oh, have you considered” and not “… you need to rewrite this”.

I can trust that what he wants to contribute will be aligned to my vision. I also can trust that new ideas he introduces are probably awesome new perspective that I hadn’t thought of. I also trust that if we disagree on something, we’re open to discussing it and coming to a resolution. So trust in this case certainly removes the barrier to entry to adding additional people to my hobby project.

Should You Form a Team?

While this was a pretty general article, I just wanted to get you thinking about opening up your hobby project(s) for other people to contribute to. This is something I wish I had considered more seriously early on. Maybe I wouldn’t be re-writing my project for the millionth time!

Some general points:

  • You’re not a “worse” programmer for getting other people contributing. Good programmers need to be able to work with others!
  • Other people can have good ideas too! Sometimes, they’re even better than your own ideas 😉
  • Other people may have more knowledge or interest in areas that need to get work done that you just don’t want to do! Perfect!
  • You’ll want to try and find people passionate about working in the area your project focuses
  • You’ll want to find people that you feel like you can trust so that you’re comfortable with them working on “your baby”
  • Getting help doesn’t mean your code must be “open source”. You can still share private repositories together (i.e. consider BitBucket!)

So what do you think? Is your hobby project kind of stale because you’ve hit enough roadblocks and it’s time to get some more firepower to tackle it?

Share your thoughts below about your experiences with forming teams for your hobby projects!

