Programming

TL;DR: Unit vs Functional Tests

Here’s a super quick peek at unit tests compared to functional tests. Full disclaimer: depending on your circle of influence, these might go by slightly different names. Try not to dwell on that, and instead focus on the comparison and contrast presented!

Unit Tests

Coded tests that take a white-box visibility approach to exercising and asserting that written code, generally a specific function/method, works as it was designed.

Pros:

  • Generally very programmer-focused
  • Very granular coverage (breaks can identify exact lines where an issue occurs)
  • (Should) run extremely quickly
  • Have very little test setup in ideal cases
  • Provide full control (generally via ‘mocking’ dependencies) to exercise very specific logical paths

Cons:

  • Generally more challenging to convey coverage to other stakeholders
  • By nature these are brittle and break with refactoring
  • Require sets of design patterns to help ensure tests are easy to write/maintain
  • Sometimes present false sense of confidence in large codebases (i.e. systems working together) due to mocking requiring assumptions
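To make this concrete, here's a minimal sketch of what a unit test with a mocked dependency could look like. The IPriceProvider and CheckoutCalculator types are hypothetical, and I'm assuming xUnit plus the Moq library for mocking; any similar stack works the same way:

using Moq;
using Xunit;

// hypothetical production types for illustration
public interface IPriceProvider
{
    decimal GetPrice(string itemId);
}

public sealed class CheckoutCalculator
{
    private readonly IPriceProvider _priceProvider;

    public CheckoutCalculator(IPriceProvider priceProvider)
    {
        _priceProvider = priceProvider;
    }

    public decimal GetTotal(string itemId, int quantity) =>
        _priceProvider.GetPrice(itemId) * quantity;
}

public sealed class CheckoutCalculatorTests
{
    [Fact]
    public void GetTotal_KnownItem_MultipliesPriceByQuantity()
    {
        // mock the dependency so we have full control of this logical path
        var priceProvider = new Mock<IPriceProvider>();
        priceProvider.Setup(x => x.GetPrice("apple")).Returns(2m);

        var calculator = new CheckoutCalculator(priceProvider.Object);

        Assert.Equal(6m, calculator.GetTotal("apple", 3));
    }
}

Notice the white-box flavor here: the test knows the calculator multiplies price by quantity, and the mock pins down exactly one path through the code.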

Functional Tests

Coded tests that take a black-box visibility approach to exercising and asserting that software performs particular functionality as expected.

Pros:

  • Generally easier for non-programmer stakeholders to understand
  • Generally covers multiple classes/systems/methods working together to demonstrate behavior
  • More resilient to refactoring since this approach ignores the internals of what’s being covered
  • Exercises REAL parts of the codebase with no mocking

Cons:

  • More coarse than unit tests. Breakages might need more investigation.
  • Can potentially have more test setup given that multiple things will be interacting
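And for contrast, here's the same hypothetical checkout exercised in a functional style, reusing the types from the unit test sketch above: real implementations wired together, no mocks, asserting only on externally visible behavior. The InMemoryPriceProvider is again a made-up type:

using System.Collections.Generic;
using Xunit;

// a real (non-mocked) implementation participating in the test
public sealed class InMemoryPriceProvider : IPriceProvider
{
    private readonly Dictionary<string, decimal> _prices =
        new Dictionary<string, decimal> { ["apple"] = 2m };

    public decimal GetPrice(string itemId) => _prices[itemId];
}

public sealed class CheckoutFunctionalTests
{
    [Fact]
    public void Checkout_RealComponents_ComputesExpectedTotal()
    {
        // no mocking: real pieces work together, and we only
        // assert on the result a user of the system would see
        var calculator = new CheckoutCalculator(new InMemoryPriceProvider());

        Assert.Equal(6m, calculator.GetTotal("apple", 3));
    }
}

If the internals of CheckoutCalculator get refactored, this test keeps passing as long as the behavior holds, which is exactly the resilience described above.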

Summary

So which one of these should you and your team be writing? The answer is probably both! There are benefits and drawbacks to both of these testing strategies, and they complement each other really nicely. If you know you’re going to be refactoring a complex piece of your system, functional tests would be great to put in place so you can check expected behavior afterward. Unit tests might be great for mocking out complex dependencies or quickly validating a variety of inputs into a function. If you understand how you can leverage these, you have more tools in your programming toolbox!

Don’t like reading? Listen to me instead!


RPG Game Dev Weekly #1

As I’ve been trying to put together YouTube content more steadily, one of the themes I’m interested in is behind-the-scenes looks at the role playing game (RPG) I’m making with some friends in Unity3D. I’ve found that working on an RPG outside of my regular day job is a really awesome way for me to keep up my technical skills. I love coding, and the further along I move in my career as an engineering manager, the less time I actually spend writing code myself. I pride myself on being a technical engineering manager, so for me, working on this RPG is a great outlet for creativity and practice. I mentioned this in my LinkedIn post as well.

Persisting Game Objects Across Maps

In this video, I focus on one of the challenges the game was facing due to how objects are materialized onto the playable map. The map that we load from disk to be shown and interacted with in the playable RPG commonly has “templates” and “spawners”. Both of these are responsible for creating objects at runtime given some criteria. As a result, uniquely placed game objects appear on the playable map for the player to interact with.

Sounds good then, right? Well, the two challenges I focused on addressing were:

  1. If we leave the map and go to another one, there’s no way to persist the player across maps! That means you get a brand new character every time you transition maps. In an RPG, this is definitely not going to work out.
  2. If we return to an existing map, we expect it to be in the same state as when we were last on it. This means that the objects generated from templates or spawners must remain and NOT be respawned (which would effectively make completely new game objects).

Check out the video below:

RPG Dev Log 1 – Persist Game Objects Across Maps

Persisting Map Game Objects in a Cache

Next up was actually implementing some of the changes being discussed previously. In order to make this work in our RPG, my goal was to:

  1. Allow maps to save a lookup of the unique IDs for game objects that were generated
  2. Save the list of game objects generated into a “cache”
  3. Upon revisiting a map, tap into the “cache” to reload the existing game objects

One of my mental hurdles with this was acknowledging that we don’t yet have a solid serialization foundation for our game. I was thinking I’d want to serialize the game data to something persistent to make this work, but I was also worried writing things to disk would be overkill (and how does this mix with save game concepts?). Instead, I opted for the lightweight approach: “get something working” now, and revisit this later to extend it to persist things to disk if necessary.
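To give a flavor of that lightweight approach, here's a rough sketch of an in-memory cache keyed by map ID. These are made-up types for illustration, not the actual game code:

using System.Collections.Generic;

// hypothetical stand-in for whatever the game objects actually are
public interface IGameObject
{
    string Id { get; }
}

// rough sketch: remembers generated objects per map so that
// revisiting a map reloads them instead of respawning new ones
public sealed class MapGameObjectCache
{
    private readonly Dictionary<string, List<IGameObject>> _objectsByMapId =
        new Dictionary<string, List<IGameObject>>();

    public bool TryGetMapObjects(
        string mapId,
        out IReadOnlyCollection<IGameObject> gameObjects)
    {
        var found = _objectsByMapId.TryGetValue(mapId, out var cached);
        gameObjects = cached;
        return found;
    }

    public void StoreMapObjects(string mapId, IEnumerable<IGameObject> gameObjects) =>
        _objectsByMapId[mapId] = new List<IGameObject>(gameObjects);
}

Persisting to disk later could just mean serializing this dictionary, so the “get something working” version doesn’t paint us into a corner.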

Check out the video below:

RPG Dev Log 2 – Persisting Map Game Objects in a Cache

Which Domain Does This Belong To?

We’ve been trying to practice Domain Driven Design (DDD) concepts in our game. One pattern I’ve taken to an extreme is over-separating domains into very granular pieces, and the result is that we have HUNDREDS of C# projects. It’s overkill, for sure. However, I feel that it’s offered some flexibility in having boundaries we can collapse later. My experience so far has told me it’s easier to collapse boundaries than it is to split them.

This is all well and good until we have situations where I need a class that has knowledge of two domains. That’s what this next video was about. I talk through acknowledging that I don’t know what to do and that I’ll move ahead with a best-effort approach. I think the answer to my dilemma is that I need to translate things into a new domain for usage, but it feels like overkill. If you have thoughts on this, leave a comment on this post or on the video!

Check out the video below:

RPG Dev Log 3 – Which Domain Does This Belong To?

Death Animations For All!

Finally, this last update was about something hopefully less boring… Death animations! I worked through how:

  1. I can extend our reusable sprite animation factory to create a death animation to be reused by ALL our actor sprites
  2. I can build a system that checks actor life and changes the animation as necessary to death

Unfortunately I had some hiccups the first time through recording this, but I wanted to code the whole thing live. After a Blue Screen of Death interrupted my first attempt, my second attempt I think went pretty well! Without much code at all we could get this system fired up, and it worked the first time! Well, close. I didn’t realize that our animation system already supports an animation that has an “infinite” duration final frame (in this case, the body lying on the tile). This is handled by a null value instead of a set length in seconds. I fixed it right after recording! Overall, I was incredibly happy with the result.
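That nullable-duration idea boils down to something like this sketch (hypothetical names, not the actual animation system):

// sketch: a null duration means "hold this frame forever", which is
// exactly what a death animation's final frame wants
public sealed class SpriteAnimationFrame
{
    public SpriteAnimationFrame(string spriteId, float? durationInSeconds)
    {
        SpriteId = spriteId;
        DurationInSeconds = durationInSeconds;
    }

    public string SpriteId { get; }

    // null = infinite; the animation never advances past this frame
    public float? DurationInSeconds { get; }
}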

Check out the video below:

RPG Dev Log 4 – Death Animations For All!

Video Stream – RPG Systems with Loot Generation

I asked on LinkedIn whether people would be interested in a video stream focused on programming, and I had some positive feedback. To test the waters, I decided I’d start with some system-design stuff, given that I’m going through a bunch of practice with distributed systems. This is a bit of a change-up in that it’s about interactions between co-located systems in a game framework I’m creating rather than distributed systems.

Here’s the video!

In the video stream, what I’m trying to accomplish is finding a way to share information from particular domains to be used in other domains. I mean, that’s the gist of it 🙂 The complicated parts are:

  • How do I keep domain information from leaking into other domains?
  • How do I control access to the information without globally exposing it (i.e. avoiding something like a static global variable)?
  • How do I make sure I have the right state when I want to go use it? (i.e. the systems run on a game loop, but some interactions that get triggered are from user events outside of the game loop)

My white-boarding skills in MS Paint are pretty rough, but I feel like it went okay! I’ll follow up with my findings, and hopefully get some programming videos put together to better explain some of these concepts.

Let me know what you think!


Xamarin Forms and Leveraging Autofac

I’ve loved dependency injection frameworks ever since I started using them. Specifically, I’m obsessed with using Autofac, and I have a hard time developing applications unless I can use a solid DI framework like it! I’ve recently been working with Xamarin and found that I wanted to use dependency injection, but parts of the framework don’t support this well out of the box. I was adamant about getting something going though, so I wanted to show you my way to make this work.

Disclaimer: In its current state, this is certainly a bit of a hack. I’ll explain why I’ve taken this approach though!

In your Android projects for Xamarin, any class that inherits from Activity is created by the framework. This means the luxury we’d usually have of passing dependencies in via a constructor, and then having Autofac magically wire them up for us, isn’t available. Your constructors for these classes need to remain parameterless, and your OnCreate method is usually where the initialization for your activity happens. We can work around that though.

My solution to this is to use a bit of a reflection hack coupled with Autofac to allow Autofac resolutions in the constructor, as close as possible to how they would normally work. A solution I wanted to avoid was a globally accessible reference to our application’s lifetime scope. I wanted to make sure that I limited the “leakage” of this not-so-great pattern to as few places as possible. With that said, I wanted to introduce a lifetime scope as a reference only in the classes that would otherwise be unable to use Autofac.

  1. Make a static readonly variable, with a particular name we can look up via reflection, in the classes that need Autofac. (An alternative is using an attribute to mark a static variable.)
  2. After building your Autofac container and getting your scope (but prior to using it for anything), use reflection to find all types that have this static scope variable.
  3. Assign your scope to these static variables on the types that support it.
  4. In the constructors of these classes (keeping them parameterless so the framework can still do its job!), access your static scope variable and resolve the services you need.

Here’s what that looks like in code!

MainActivity.cs

public class MainActivity : global::Xamarin.Forms.Platform.Android.FormsAppCompatActivity
{
    protected override void OnCreate(Bundle savedInstanceState)
    {
        // don't forget the framework initialization before loading the app
        base.OnCreate(savedInstanceState);
        global::Xamarin.Forms.Forms.Init(this, savedInstanceState);

        var builder = new ContainerBuilder();

        // TODO: add your registrations in!

        var container = builder.Build();
        var scope = container.BeginLifetimeScope();

        // the static variable i decided to use is called "_autofacHack"
        // so that it deters people from using it unless they know
        // what it's for! you could use reflection to find similar
        // fields with something like an attribute if you wanted.
        foreach (var field in GetType()
            .Assembly
            .GetTypes()
            .Select(x => x.GetField("_autofacHack", BindingFlags.NonPublic | BindingFlags.Static))
            .Where(x => x != null))
        {
            field.SetValue(null, scope);
        }

        LoadApplication(scope.Resolve<App>());
    }
}

The class that can take advantage of this would look like the following:

public sealed class MyActivityThatNeedsDependencyInjection : Activity
{
    private static readonly ILifetimeScope _autofacHack;
    private readonly IMyService _theServiceWeWant;

    // NOTE: we kept the constructor PARAMETERLESS because we need to
    // let the Android framework create this activity for us
    public MyActivityThatNeedsDependencyInjection()
    {
        _theServiceWeWant = _autofacHack.Resolve<IMyService>();
    }

    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);

        // now we can use this service that we "injected"
        _theServiceWeWant.DoTheCoolStuff();
    }
}

Summary

Reading this you might think, “well, I don’t want to pollute my Xamarin code with variables that say _autofacHack, that’s gross”. And I don’t blame you! So this is to serve as a starting point for a greater solution, which I think is something I’ll evolve for myself, and I encourage you to do the same.

Things I’m focused on:

  • Minimize where “ugly” code is. A globally accessible scope on a static class seems like it can spread the “ugly” code to too many spots. This approach is intended to help minimize that.

    What are some next steps to make that EVEN better? Maybe an attribute so we can call it something nicer? (See the sketch after this list.)
  • Write code that feels as close as possible to the “real” thing. Autofac usually allows us to specify services in the constructor and then automatically allows us to get the instances we need. This code is structured to be very similar, but since we’re NOT allowed to change the parameterless constructors, we resolve our services off the container there instead. And because it’s in the constructor, we can assign things to readonly variables as well which is a nice bonus.
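To sketch out that attribute idea (hypothetical attribute and helper names, not shipping code), the magic field name could be replaced by a marker attribute that the reflection scan looks for:

using System;
using System.Linq;
using System.Reflection;
using Autofac;

// hypothetical marker so fields no longer need a magic name
[AttributeUsage(AttributeTargets.Field)]
public sealed class InjectLifetimeScopeAttribute : Attribute
{
}

public static class ScopeInjector
{
    // find static fields marked with the attribute and assign the scope
    public static void AssignScope(Assembly assembly, ILifetimeScope scope)
    {
        var fields = assembly
            .GetTypes()
            .SelectMany(t => t.GetFields(BindingFlags.NonPublic | BindingFlags.Static))
            .Where(f => f.GetCustomAttribute<InjectLifetimeScopeAttribute>() != null);

        foreach (var field in fields)
        {
            field.SetValue(null, scope);
        }
    }
}

The OnCreate code above would then shrink to a single ScopeInjector.AssignScope(GetType().Assembly, scope); call, and the fields could be named whatever reads best.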

The more implementations of this I go to use, the more I plan to refine it! How have you leveraged Autofac in your Xamarin projects?


Downtime? Time to Build!

The COVID-19 pandemic has caused many of us to stay isolated at home, but that’s OK! I genuinely enjoy developing software and wanted to take this opportunity to focus on learning. Having some downtime has given me the chance to try putting together a system that I otherwise might not have explored building.

In this article, I’ll share different aspects about an application I’m building that purposefully put me outside of my comfort zone. In my opinion, having downtime is an opportunity to learn and grow! It’s time to take advantage of that.

When the app and system is ready to showcase I’ll share more insight into what’s actually being built!

The Client Framework

The application being built is intended to run on multiple mobile platforms, so Xamarin was my choice here. I briefly used Xamarin several years ago, but my reasons for choosing it this time through were:

  • C#, .NET, and Visual Studio support. There were many things I wanted to learn about, but I wanted to limit myself to a familiar foundation so that I could still feel that I’m making progress.
  • Supports iOS and Android right from the start. I thought it would be an interesting software design challenge to be able to build base components in a shared library and then leverage dependency injection for platform-specific components.
  • Traction. Xamarin has been around for a number of years now, and it’s only continuing to gain support. I didn’t want to focus on a platform or SDK that wasn’t getting any love.

Xamarin was easy to get set up with because of the familiar pieces, but I was pushed out of my comfort zone to learn about how certain things are handled differently on iOS and Android.

I was able to learn about:

  • Android/iOS permission requests
  • The new UI controls in Xamarin (vs WPF which I’m used to)
  • More practice with async/await and UI experience
  • Different mobile API frameworks (Crashlytics, Google Analytics, user control libraries)

The Server Framework

I’ve joked with my colleagues in the past that “web isn’t my thing”, but what I really mean is “I don’t have experience making web pages”. I’ve built many client-server systems in my professional experience and hobby programming, but serving nice-looking web pages hasn’t been my strength.

For the system that I’m designing in my downtime, I need an application server. I decided to go with ASP.NET Core because I haven’t set up many ASP.NET systems before, and I don’t have experience having them hosted in the cloud. However, I do have experience with C# and Visual Studio, so again, this seemed like a good balance of trying new things with some familiar concepts to ensure I could make progress.

In short order, I was able to get a handful of application server routes set up and communicating with the client application properly. The most difficult part was truly just making sure firewall and SSH settings were configured locally, plus a handful of times cursing at my phone for not being on WiFi (and thus not seeing the development server on my local network)!

I was able to learn about:

  • Authentication attributes (and JWT token handling)
  • Routes with query parameters
  • Serving static content as well as application requests

The Authentication Framework

This one was fun. Across a professional career in software development, one thing that scares everyone away is designing authentication and user management. Nobody wants to do it because it’s complex, has plenty of edge cases, and… it’s probably critical to your system working 🙂

Thankfully, Firebase saved the day. I wrote about this already, but Firebase truly made authentication and user management way more straightforward than I’m used to. The hardest parts of working with Firebase had nothing to do with Firebase and everything to do with implementing OAuth for the providers of my choosing.

Because I could use OAuth to authenticate users and have identifiable information provided via a JWT, building a simple registration and login system that mapped OAuth’d users to some sort of internal-system user identifier was trivial. All of the routes for my web application could also authenticate and control access via this same JWT! One of the scariest components about building a system became a relatively light lift.

I was able to learn about:

  • OAuth for popular providers (Google, Facebook, etc…)
  • OAuth scopes
  • JWT tokens
  • Firebase SDK from a server and client side
  • Route authentication in ASP.NET using JWTs

The Database Framework

As part of the journey of exploring unfamiliar technology in my downtime, I decided I’d like to pursue a database that wasn’t SQL-based. I was already using Firebase for authentication, and Google offers an intriguing document store in Firebase that provides real-time update triggers.

Being unused to document databases (I’m much more familiar with relational databases), I spent some time trying to design the schemas I intended to use. One thing that caught me off guard pretty quickly was that in order to modify a list of things, I’d need to have a local copy of the list, manipulate the collection, and then push the entire structure back to replace the existing one. This seemed like overkill for what I was trying to do. The alternative was that I could modify objects in the data store to add/remove child items, but each child item would receive another identifier object as a linking object. So a list of X things actually meant 2X things (one for the identifier, one for the entry). Again, this was overkill.

I decided to go back to familiar technology but explore a not-so-familiar space! I have a good deal of experience working with SQLite and MySQL from my career. What I don’t have a lot of experience with is the management and provisioning of a MySQL instance with availability in the cloud! Enter Amazon RDS!

Switching to Amazon RDS meant a bit of a learning curve for hosting and configuring an instance of MySQL in the cloud. I was able to learn about various Amazon AWS services and roles and how they play together. But once the instance was up and running, I was able to make progress quickly.

The Tooling

I had help with this one, fortunately, from a friend and old colleague of mine, Graeme Harvey. If you’ve worked with me professionally or on a hobby project, one of the things I admit pretty quickly is that I really dislike tinkering to get a configuration right. Generally that’s because there’s a lot of tweaking to get something set up properly, documentation to understand, and frustrating trial and error.

Source control was going to be git. I didn’t even want to consider anything else. Git is so widely used now that I didn’t see a real benefit to learning a new source control system. Repository management though? BitBucket. I’m a huge fan of the Atlassian suite of products, having used them professionally for many, many years.

And with that said, Jira for issue management. Again, I’ve used Jira professionally for many years. What I’ve never done is have my own Jira instance! This was made very simple by Atlassian, and since I’m not a company generating big bucks, I was able to get the free tier that handles all of my needs. Jira is straight up awesome for issue management and visibility into your ongoing work.

Another no-brainer for me was getting Slack set up. Slack is something I’ve used a lot professionally, but like Jira, I’ve never had my own Slack instance. It was simple to set up and works just like I’m used to in my career. This wasn’t really a huge requirement, but since I’m working with another person, it provided a nice workspace for chatting about the stuff we were working on.

And finally… builds. I already wrote about using CircleCI to get our server builds up and running, and to reiterate, it was extremely simple. I even have the builds wired up to report back to Slack when we push code to BitBucket! Where we’re still having some fun is figuring out how to deploy our application server builds to an EC2 instance automatically. This would allow us to “release” to a branch and have the production hosting of our application server get updated in the cloud!

But we’re building a mobile application in Xamarin, so we have three outputs:

  • The application server in ASP.NET
  • The Android client
  • The iOS client

Mobile app development gets interesting because what you’re actually building is intended to run on a device and not a desktop/server. Why that matters: a desktop or server application is generally output as a binary, but a mobile application is some sort of package you’ll need to sign and distribute on an app store.

After some back and forth, we decided to explore App Center. If I’m being honest, setting this up for our iOS and Android Xamarin apps was equally as easy as setting up our server builds with CircleCI. App Center provided a simple wizard for triggering off of our BitBucket repositories getting new commits to a branch, and the rest was done for us.

What I learned:

  • Git+BitBucket = Free git repository hosting (private if you want) and plenty of integrations
  • Jira = Free issue management and kanban board with plenty of integrations
  • Slack = Free chat workspace with plenty of integrations
  • CircleCI = Free continuous builds with integrations into ALL the other things I just mentioned 🙂
  • App Center = Free continuous builds for iOS and Android Xamarin apps with plenty of integrations!

In Summary

It’s been a couple of weeks of building out this project and setting up these different systems to work for me. I’ve been able to learn a lot about new or previously unexplored SDKs/technology, and even learn some different facets of things I already have professional experience with.

I’m not one to sit idle… So using my downtime to learn cool things and build something has been an awesome time! I’d highly recommend that if you’re in quarantine, lockdown, or otherwise unable to really get out and do too much that you try your hand at something new! Get creative. Get learning.

Be safe and stay healthy!


Firebase and Low-Effort User Management

I’ve found myself with some additional time to be creative during the great COVID-19 lockdown/quarantine days. That’s why there have been more blog posts recently! Actually, I wanted to take the time to experiment with some unfamiliar technologies and build something. For this project, I wanted to leverage authentication, but I’m well aware that user management can become a really complex undertaking. I had heard about Firebase from Google and wanted to give it a shot.

For the purposes of this discussion, Firebase would allow me to create something like an OAuth proxy for the system I wanted to build, and by doing so, it would end up managing all of the users for me. What I needed to do in Firebase to get that set up was actually quite straightforward.

First, you start off in typical fashion by registering for Firebase. From there, you’re asked about adding a new project, which looks like the following:

Create Firebase project

You’re then required to add apps to your project within Firebase. Here’s where your journey might differ from mine: I’m working in Xamarin, so I wanted to add an iOS app and an Android app. The reason you need to do this is so that you can get the proper service information for your app to communicate with Firebase. Google does a great job of walking you through the process, and in the end you’re required to add a service configuration file to each of your projects.

The next part was probably the most time consuming: integrating some sort of OAuth for a platform into my mobile app. There’s tons of documentation about that on the Internet, so I’m not getting into it here. There are different steps to take depending on which platform (i.e. Google, Facebook, Twitter, etc…) you want to authenticate with and whether you’re working on iOS, Android, web, or something else. Getting this all up and running required the most time, but it wasn’t really anything to do with Firebase… it was picking and supporting OAuth for the platforms of my choosing.

I knew which platforms I wanted to work with, but Firebase has a specific set that it supports (including email + password)! You’ll want to check that out, because you need to enable the platforms you want to support in the console:

Firebase OAuth Providers

Now you can find the Firebase SDK for the platform you’re working with! Once your application/service is able to OAuth with a platform that you support, ensure it’s enabled in the console. From there you can use a method from the SDK that allows you to sign into Firebase with OAuth. This is where you’d provide the access token from the platform of your choice after having logged into that platform successfully.
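On Android with the Xamarin.Firebase.Auth bindings, that exchange looks roughly like the sketch below. I'm hedging here: exact namespaces and signatures can vary by binding package version, so treat this as the shape of the call rather than copy-paste code:

using System.Threading.Tasks;
using Firebase.Auth;

public static class FirebaseSignIn
{
    // sketch: exchange a Google ID token for a Firebase sign-in
    public static async Task SignInWithGoogleAsync(string googleIdToken)
    {
        var credential = GoogleAuthProvider.GetCredential(googleIdToken, null);
        var authResult = await FirebaseAuth.Instance.SignInWithCredentialAsync(credential);
        // authResult.User is now the Firebase-managed user entry
    }
}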

The result is that Firebase actually builds a user entry for you, with data related back to the OAuth providers you used to authenticate. By doing this, you can use these external authentication providers and, with minimal effort, connect them to your Firebase project! You get all of the authentication options you’d like AND free user management as a result.

This is high-level, but I will follow up with how we’re leveraging Firebase with the components we’re putting together in our system. Spoiler: ASP.NET controller routes can get protected by Firebase authentication with almost no effort!
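As a teaser for that spoiler, here's roughly what the ASP.NET Core side amounts to: pointing the standard JWT bearer middleware at your Firebase project. This is a sketch where "your-project-id" is a placeholder for your own Firebase project ID:

using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.IdentityModel.Tokens;

public static class FirebaseAuthenticationSetup
{
    // sketch: validate Firebase-issued JWTs with the built-in bearer middleware
    public static void AddFirebaseJwtAuthentication(this IServiceCollection services)
    {
        const string projectId = "your-project-id"; // placeholder!
        services
            .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                options.Authority = $"https://securetoken.google.com/{projectId}";
                options.TokenValidationParameters = new TokenValidationParameters
                {
                    ValidateIssuer = true,
                    ValidIssuer = $"https://securetoken.google.com/{projectId}",
                    ValidateAudience = true,
                    ValidAudience = projectId,
                    ValidateLifetime = true,
                };
            });
    }
}

With that registered (plus app.UseAuthentication() in the pipeline), slapping [Authorize] on a controller route is all it takes to require a valid Firebase user.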


CircleCI + BitBucket => Free Continuous Integration!

CircleCI is a service I heard about from a friend that allows you to get continuous integration pipelines built for your repositories… and it does it quickly and easily. It’s also free if you’re someone like me without a large demand for builds! I wanted to write about my experience getting CircleCI wired up with BitBucket, which I like to use for my project hosting, and hopefully it’ll help you get started.

First things first: signing up is super easy if you have BitBucket, because you can OAuth right away with it. CircleCI will show you the projects and repositories you have in BitBucket, and you can decide which one you’d like to start with. You can navigate to the projects in their new UI from the “Add Projects” menu.

CircleCI Left Navigation

When you click “Add Projects” you’ll be met with a list that looks like this, but… with your own projects and not mine 🙂

Circle CI + BitBucket Project Listing

On this screen, you’ll want to select “Set Up Project” for the project of your choice. For me, it was a .NET project (which I had already set up), so I selected it and was presented with the following screen, which also lets you pick a template to get started:

CircleCI Template Dropdown

However, I needed to change the default template to get things to work properly, since I had NuGet packages! The default is missing a restore step. With some help from my friend Graeme, we were able to transform the sample from this:

version: 2.1

orbs:
  win: circleci/windows@2.2.0

jobs:
  build:
    executor: win/default

    steps:
      - checkout
      - run: dotnet build

To now include the NuGet restore step prior to building!

version: 2.1

orbs:
  win: circleci/windows@2.2.0

jobs:
  build:
    executor: win/default

    steps:
      - checkout
      - run:
          name: Restore
          command: dotnet restore
      - run:
          name: Build
          command: dotnet build -c Release

Once you save this, CircleCI will make a branch called “circleci-project-setup” on your remote. It then goes ahead and runs your build for you! When the build for this new remote branch succeeded, I pushed the configuration to my “master” branch so that all builds on master going forward would get continuous integration.

Checking the CircleCI dashboard now looks like the following:

CircleCI Successful Pipelines

You can see pipeline #1 is on the branch where the test CircleCI configuration was made (and passed). Pipeline #2 is after I added this commit onto my master branch and pushed it up! Now I have continuous integration for pushes to my lib-nexus-collections-generic BitBucket project. When I check my commit page, I can see that the new commits after the configuration landed get a nice green check when the builds pass on CircleCI:

BitBucket Commit Listing With Builds

So with a few easy steps, you can have not only free source hosting in BitBucket but also free continuous integration from CircleCI. Every time you push code to a remote branch, you kick off a build! This is only the starting point, as you can configure CircleCI to do much more than just restore NuGet packages and build .NET solutions 🙂


xUnit Tests Not Running With .NET Standard

Having worked with C# for quite some time now writing desktop applications, I’ve begun making the transition over to .NET Standard. In my professional work the transition has been much slower because of product requirements and time, but in my own personal development there’s no reason I couldn’t get started with it. And call me crazy, but I enjoy writing coded tests for the things I make. My favourite testing framework for my C# development is xUnit, and naturally, as I started writing some new code with .NET Standard, I wanted to make sure I could get my tests to run.

Here’s an example of some C# code I wrote for my unit tests of a simple LRU cache class I was playing around with:

using System;
using System.Diagnostics.CodeAnalysis;
using Xunit;

[ExcludeFromCodeCoverage]
public sealed class LruCacheTests
{
    [Fact]
    public void Constructor_CapacityTooSmall_ThrowsArgumentException()
    {
        Assert.Throws<ArgumentException>(() => new LruCache<int, int>(0));
    }

    [Fact]
    public void ContainsKey_EntryExists_True()
    {
        var cache = new LruCache<int, int>(1);
        cache.Add(0, 1);
        var actual = cache.ContainsKey(0);
        Assert.True(
            actual,
            $"Unexpected result for '{nameof(LruCache<int, int>.ContainsKey)}'.");
    }
}

Pretty simple stuff. I know that for xUnit in Visual Studio, I need a NuGet package for the test runner to work right in the IDE. Simple enough: I just need to add the “xunit.runner.visualstudio” package alongside the xunit package I had already included in my test project.

Required xUnit NuGet packages

Ready to rock! So I go to run all my tests in the solution, but I’m met with this little surprise:

[3/24/2020 3:59:10.570 PM] ========== Discovery aborted: 0 tests found (0:00:00.0622045) ==========
[3/24/2020 3:59:20.510 PM] ---------- Discovery started ----------
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find C:\[redacted]\bin\Debug\netstandard2.0\testhost.dll. Please publish your test project and retry.
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
[3/24/2020 3:59:20.570 PM] ========== Discovery aborted: 0 tests found (0:00:00.0600179) ==========
Executing all tests in project: [redacted].Tests
[3/24/2020 3:59:20.635 PM] ---------- Run started ----------
[3/24/2020 3:59:20.639 PM] ========== Run finished: 0 tests run (0:00:00.0039314) ==========

Please publish your test project and retry? Huh?

As any software engineer does, I set out to Google for answers. I came across this Stack Overflow post: https://stackoverflow.com/q/54770830/2704424

And fortunately someone had responded with a link to the xUnit documentation: Why doesn’t xUnit.net support netstandard?

The answer was right at the top!

netstandard is an API, not a platform. Due to the way builds and dependency resolution work today, xUnit.net test projects must target a platform (desktop CLR, .NET Core, etc.) and run with a platform-specific runner application.

https://xunit.net/docs/why-no-netstandard

My solution was to change my test project to build for one of the latest .NET Framework versions… and voila! I chose .NET 4.8 as the latest available at the time of writing.
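For reference, that's a one-line change in the test project file. Something like this (just the relevant property, assuming an SDK-style project):

<!-- the test project targets a concrete platform instead of netstandard2.0 -->
<PropertyGroup>
  <TargetFramework>net48</TargetFramework>
</PropertyGroup>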

My next attempt at running all of my tests looked like this:

[3/24/2020 4:08:14.898 PM] ---------- Discovery started ----------
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Desktop .NET 4.0.30319.42000)
[xUnit.net 00:00:00.40]   Discovering: [Redacted].Tests
[xUnit.net 00:00:00.47]   Discovered:  [Redacted].Tests
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Universal Windows)
[3/24/2020 4:08:16.289 PM] ========== Discovery finished: 2 tests found (0:00:01.3819229) ==========
Executing all tests in project: [Redacted].Tests
[3/24/2020 4:08:17.833 PM] ---------- Run started ----------
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Desktop .NET 4.0.30319.42000)
[xUnit.net 00:00:00.41]   Starting:    [Redacted].Tests
[xUnit.net 00:00:00.66]   Finished:    [Redacted].Tests
[3/24/2020 4:08:19.337 PM] ========== Run finished: 2 tests run (0:00:01.4923808) ==========

And I was back on my path to success! Hopefully if you run into this same issue you can resolve it in the same fashion. Happy testing!


Autofac Modules and Code Organization

Organizing Code With Autofac Modules

What are Autofac Modules?

I’ve been writing a little bit about Autofac and why it’s rad, but today I want to talk about Autofac modules. In my previous post on this, I talked about how one of the drawbacks to the constructor dependency pattern is that at some point in your application, generally in the entry point, you get allllll of this spaghetti code that is the setup for your code base.

Essentially, we’ve balanced having nice clean testable classes with having a really messy spot in the code. But it’s only ONE spot and the rest of your code is nice. So it’s a decent trade-off. But we can do better than that, can’t we?

Autofac modules!

We can use Autofac modules to organize some of the code that we have in our entry point into logical groupings. An Autofac module is a class that registers types to our dependency container to be resolved at a later time. You could do this all in one big module, but like many things in programming, having some giant monolithic thing that does ALLLL the work usually isn’t the best.

An Example of Converting to Autofac Modules

Let’s create a simple application as an example. I’ll describe it in words, and then I’ll toss up some code to show a simple representation of it. We’ll assume we’re using dependencies passed as interfaces via constructors as one of our best practices, which makes this conversion much easier!

So our app will have a main window with a main content area and a header area. These will be represented by three objects. Our application will also have a logger instance that we pass around so classes that need logging abilities can take an ILogger in their constructor. But our logger will have some simple configuration that we need to do before we use it.

Let’s assume to start our Program.cs file looks like this:

internal sealed class Program
{
    private static void Main(string[] args)
    {
        var logger = new FileLogger();
        logger.LogLevel = LogLevel.Debug;
        logger.FilePath = "log.txt";

        var header = new FancyHeader(logger);
        var content = new BoringMainContent();
        var window = new MainWindow(header, content);
        window.Show();
    }
}

Before getting comfortable with Autofac, my first step would be to logically group things in the main method. In this particular case, we have something simple and, surprise… it’s all grouped already. My next step would usually be to pull these things out into their own methods. I do this because it helps me identify whether my groupings make sense and where my dependencies are. Let’s try it!

internal sealed class Program
{
    private static void Main(string[] args)
    {
        var logger = InitializeLogging();
        var window = InitializeGui(logger);
        window.Show();
    }

    // no params passed in, so no dependencies
    // return value is an ILogger, so we have a
    // logical grouping that will provide us a logger
    private static ILogger InitializeLogging()
    {
        var logger = new FileLogger();
        logger.LogLevel = LogLevel.Debug;
        logger.FilePath = "log.txt";
        return logger;
    }

    // only parameter is a logger, so that's our dependency
    // return value is a window, so this grouping provides
    // a window for us
    private static IWindow InitializeGui(ILogger logger)
    {
        var header = new FancyHeader(logger);
        var content = new BoringMainContent();
        var window = new MainWindow(header, content);
        return window;
    }
}

Alright, cool. So yes, this is a bit of extra code compared to the initial example, but I promise that grouping these things into separate methods as a starting point will help a ton when you have a LOT of initialization logic. Once they are in methods, you can pull them out into their own classes. Refactoring 101 for the single responsibility principle going on here 😉 BUT, we’re interested in Autofac. So what’s the next step?

We have two logical groupings going on here in our example. One is logging and the other is for the GUI. So we can actually go ahead and make two Autofac modules that do this work for us.

public sealed class LoggingModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<FileLogger>()
            .AsImplementedInterfaces() // FileLogger will be resolved as an ILogger
            .SingleInstance() // we only ever need to use one logger instance for our app
            .OnActivated(x =>
            {
                // this handles our extra setup we had for this object
                x.Instance.LogLevel = LogLevel.Debug;
                x.Instance.FilePath = "log.txt";
            });
    }
}

public sealed class GuiModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<FancyHeader>() // this has a dependency on ILogger, but autofac will figure it out for us
            .AsImplementedInterfaces() // FancyHeader will be resolved as IHeader
            .SingleInstance(); // we only ever need to use one instance for our app
        builder
            .RegisterType<BoringMainContent>()
            .AsImplementedInterfaces() // BoringMainContent will be resolved as IContent
            .SingleInstance(); // we only ever need to use one instance for our app
        builder
            .RegisterType<MainWindow>() // Autofac will resolve our IHeader and IContent dependencies for us
            .AsImplementedInterfaces() // MainWindow will be resolved as IWindow
            .SingleInstance(); // we only ever need to use one instance for our app
    }
}

And those are our two logical groupings for modules! So, how do we use this and what does our Main() method look like now? I’ll demonstrate with one way that works for a couple modules, but I want to follow up with another post that talks about dynamically loading modules. If you can imagine this scenario blown out across MANY modules, you’ll understand why it might be helpful.

The idea for our Main() method is that we just want to resolve the one main dependency manually and let Autofac do the rest. So in this case, it’s our MainWindow.

private static void Main(string[] args)
{
    // create an autofac container builder
    var containerBuilder = new ContainerBuilder();

    // manually register our two new modules we made
    containerBuilder.RegisterModule<LoggingModule>();
    containerBuilder.RegisterModule<GuiModule>();

    // create the dependency container
    var container = containerBuilder.Build();

    // resolve and use our main dependency by its interface
    // (because we shouldn't care what the implementation is...
    // that was up to the configuration via modules!)
    var window = container.Resolve<IWindow>();
    window.Show();
}

In Summary…

This example showed us how to organize your main initialization logic into groups that play nice as Autofac modules. In a really simple example, having modules might look like bloated extra code, but it already illustrates that your entry point is very simple and follows a pattern that’s easy to extend (just register another module for more dependencies… and I’ll add more on this later). There’s also an obvious way to group new logic into your application! We discussed logging and GUI initialization, but you could extend this to:

  • User Settings
  • Analytics/Telemetry
  • Error Reporting
  • Database Configuration
  • Etc… Just add more modules!

Sometimes the pain of having a really hectic entry point isn’t realized until you’ve had to work on teams where people are modifying the same beast of an entry point all the time:

  • Simple merge conflicts in your “using” statements… because there are hundreds of lines of using statements at the top of the file
  • Visual Studio actually CANNOT use IntelliSense properly when the file gets too unwieldy
  • The debugger cannot resolve variables properly when the main entry point gets too big
  • Merging and auto-conflict resolution sometimes result in code just getting blown away in the entry point… and good luck finding what went wrong in your thousands of lines of initialization

So what’s next? Well, if you keep building out your app, you might notice you have tons of modules. Your single GUI module might have to get broken out into modules for certain parts of the GUI, for example, just to keep them more manageable. Maybe you want plugins to extend the application dynamically, which is really powerful! Our method for registering modules just isn’t extensible at that point, even though it’s very explicit. I’ll be sharing some information about automatic Autofac module discovery and registration next!
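As a little preview of where that's headed, Autofac ships with assembly scanning for modules, so the explicit per-module registration can collapse into something like this sketch:

using System.Reflection;
using Autofac;

var containerBuilder = new ContainerBuilder();

// register every Autofac module found in the given assembly;
// point this at plugin assemblies for dynamic extensibility
containerBuilder.RegisterAssemblyModules(Assembly.GetExecutingAssembly());

var container = containerBuilder.Build();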


RPG Development Progress Pulse – Entry 1

Progress Pulse

Progress Pulse – Entry 1

For the first entry in the progress pulse series, I’ll touch on some things from the past week or so. There’s been a lot of smaller stuff churning in the code base, some of it interesting and some of it less so, so I want to highlight a few things. As a side note, it’s really cool to see that the layout and architecture is allowing features to be added pretty easily, so I’ll dive a bit deeper on that. Overall, I’m pretty happy with how things are going.

Unity3D – Don’t Fight It!

I heard from a colleague before that Unity3D does some things you might not like, but don’t try to fight it, just go with it. To me, that’s a challenge. If I’m going to be spending time coding in something, I want it to be with an API that I enjoy. I don’t want to spend time fighting it. An example of this is how I played with the stitching pattern to make my Autofac life easier with Unity3D behaviours.

However, I met my match recently. At work, we were doing an internal hackathon where we could work on projects of our choosing over a 24 hour period, and they didn’t have to be related to work at all. It’s a great way to collaborate with your peers and learn new things. I worked on Macerus and ProjectXyz. I was reaching a point where I had enough small seemingly corner-case bugs switching scenes and resetting things that I decided it was dragging my productivity down. It wasn’t exciting work, but I had to do something about it.

After debugging some console logs (I still have to figure out how to get Visual Studio properly attached for debugging… maybe I’ll write an article on that when I figure it out?), I noticed a scenario that could only happen if one of my objects was running some work at the same time… as itself? Which shouldn’t happen. Basically, I had caught a scenario where my asynchronous code was running two instances of worker threads, which should never occur in my game.

I tried putting task cancellation and waiting into my Unity game. I managed to hang the main thread on scene switching and application close. No dice. I spent a few hours playing around with a paradigm where I could make my ProjectXyz game engine object run asynchronously within Unity without being a huge headache.

I needed to stop fighting it though. There was an easier solution.

I could make a synchronous and an asynchronous API for my game engine. If you have a game where you want the engine on its own thread, call the async version. Unity3D already has its own game engine loop, so why re-invent it? In Unity3D, I can simply call the synchronous version of the game engine’s API. With this little switch, I suddenly fixed about 3 or 4 bugs. I had to stop fighting the synchronous pattern with my asynchronous code.
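In code, the idea looks something like this sketch (hypothetical interface, not the actual ProjectXyz API): expose a single synchronous update, and let the asynchronous flavor be a thin loop over it:

using System.Threading;
using System.Threading.Tasks;

// hypothetical sketch of the engine surface
public interface IGameEngine
{
    // synchronous: Unity3D's own game loop can call this once per frame
    void Update();
}

public static class GameEngineExtensions
{
    // asynchronous: hosts without their own loop can spin one up
    public static Task RunAsync(this IGameEngine engine, CancellationToken cancellationToken) =>
        Task.Run(() =>
        {
            while (!cancellationToken.IsCancellationRequested)
            {
                engine.Update();
            }
        });
}

Unity’s MonoBehaviour.Update() just calls engine.Update() directly, and the threaded version stays available for hosts that want it.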

The lesson? Sometimes you can just come up with a simple alternative solution instead of hammering away trying to fix a problem you created yourself.

DevOps – Build & Copy Dependencies

This one for me has been one of my biggest nightmares so far.

The structure of my current game setup is as follows:

  • ProjectXyz.sln: The solution that contains all of my back-end shared game framework code. This is the really generic stuff I’m trying to build up so that I can build other games with generic pieces if I wanted to.
  • Macerus.sln: The game-specific business logic for my RPG built using ProjectXyz as a dependency. Strictly business logic though.
  • Macerus Unity: The project that Unity3D creates. This contains presentation layer code built on Macerus.sln outputs and ProjectXyz.sln outputs.

I currently don’t have my builds set up to create NuGet packages. That would probably be an awesome route to go, but I also think it might result in a ton of churn right now, as the different pieces are constantly changing. It’s probably something I’ll revisit as things harden, but for now it seems like too much effort given the trade-off.

So what have I been doing?

  • I build ProjectXyz.sln.
    • The outputs go into this solution’s bin folder.
  • I build Macerus.sln.
    • There’s a prebuild step that copies the ProjectXyz dependencies over.
    • The outputs go into this solution’s bin folder.
  • I use a custom in-editor menu to copy dependencies into my Unity project.
    • This resets my current “dependencies” asset folder.
    • The build outputs from the other solutions are copied over.
  • I can run the project with new code!

This is a little tedious, sure, but it hasn’t been awful. The problem? Visual Studio can only seem to clean what it has knowledge about.

I’ve been refactoring and renaming assemblies to better fit the structure I want. A side note worth mentioning is that MUCH of my code is pluggable… The framework is very light, and most things are injected via Autofac by enumerating plugin modules. One of the side effects is that downstream dependencies of ProjectXyz.sln (i.e. Macerus.sln) have build outputs that include some of the old DLLs from before the rename. And now… Visual Studio doesn’t seem to want to clean them up on build. So what happens?

Unity3D starts automatically referencing these orphaned DLLs, and the auto-plugin loading exhibits some crazy behaviour. I’ve been seeing APIs show up that haven’t existed for weeks because some stale DLL is now showing up after an update to the dependencies. This kind of thing was chewing up HOURS of my debugging time. Not going to fly.

I decided to expand my menu a bit more. I now call MSBuild.exe on my dependency solutions prior to copying over the dependencies, which removes two completely manual steps from the process. I also purged my local bin directories. Now when I encounter this problem of orphaned DLLs, a single click to update all my content lets me churn iterations faster and shortens my debugging time. It’s unfortunately still not an ultimate solution to the orphaned dependencies lingering around, but it’s better.
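The expanded menu boils down to shelling out to MSBuild before the copy step. Here's a rough sketch with hypothetical solution paths (Unity's MenuItem attribute is real; the paths and the copy step are illustrative):

using System.Diagnostics;
using UnityEditor;

public static class DependencyMenu
{
    [MenuItem("Tools/Build And Copy Dependencies")]
    public static void BuildAndCopyDependencies()
    {
        // hypothetical relative paths to the dependency solutions
        RunMsBuild(@"..\ProjectXyz\ProjectXyz.sln");
        RunMsBuild(@"..\Macerus\Macerus.sln");

        // ...then purge the cached dependency assets and copy new outputs
    }

    private static void RunMsBuild(string solutionPath)
    {
        // assumes MSBuild.exe is on the PATH
        using (var process = Process.Start("MSBuild.exe", $"\"{solutionPath}\" /t:Build"))
        {
            process.WaitForExit();
        }
    }
}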

The lesson learned here was that sometimes you don’t need THE solution to your problem; if you can make temporarily fixing it or troubleshooting it easier, that might be good enough to move forward for now.

