Using Autofac With Xamarin Forms

I’ve loved dependency injection frameworks ever since I started using them. Specifically, I’m obsessed with Autofac, and I have a hard time developing applications unless I can use a solid DI framework like it! I’ve recently been working with Xamarin and found that I wanted to use dependency injection, but parts of the framework don’t support this well out of the box. I was adamant to get something going though, so I wanted to show you the way I made this work.

Disclaimer: In its current state, this is certainly a bit of a hack. I’ll explain why I’ve taken this approach though!

In your Android projects for Xamarin, any class that inherits from Activity gets created by the framework. This means we don’t have our usual luxury of passing dependencies in via a constructor and letting Autofac magically wire them up for us. The constructors for these classes need to remain parameterless, and your OnCreate method is usually where the initialization for your activity happens. We can work around that, though.

My solution to this is to use a bit of a reflection hack coupled with Autofac to allow Autofac resolutions in the constructor, as close as possible to how they would normally work. A solution I wanted to avoid was a globally accessible reference to our application’s lifetime scope. I wanted to make sure that I limited the “leakage” of this not-so-great pattern to as few places as possible. With that said, I wanted to introduce a lifetime scope as a reference only in the classes that were interested in using Autofac where they’d otherwise be unable to.

  1. Make a static readonly variable with a well-known name in each class that needs Autofac, so that we can look it up via reflection. (An alternative is using an attribute to mark a static variable.)
  2. After building your Autofac container and getting your scope (but prior to using it for anything), use reflection to find all types that have this static scope variable.
  3. Assign your scope to these static variables on the types that support it.
  4. In the constructors of these classes (keeping them parameterless so the framework can still do its job!), access your static scope variable and resolve the services you need.

Here’s what that looks like in code!

MainActivity.cs

public class MainActivity : global::Xamarin.Forms.Platform.Android.FormsAppCompatActivity
{
    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);
        global::Xamarin.Forms.Forms.Init(this, savedInstanceState);

        var builder = new ContainerBuilder();

        // TODO: add your registrations in!

        var container = builder.Build();
        var scope = container.BeginLifetimeScope();

        // the static variable I decided to use is called "_autofacHack"
        // so that it deters people from using it unless they know
        // what it's for! you could use reflection to find similar
        // fields with something like an attribute if you wanted.
        foreach (var field in GetType()
            .Assembly
            .GetTypes()
            .Select(x => x.GetField("_autofacHack", BindingFlags.NonPublic | BindingFlags.Static))
            .Where(x => x != null))
        {
            field.SetValue(null, scope);
        }

        LoadApplication(scope.Resolve<App>());
    }
}

The class that can take advantage of this would look like the following:

public sealed class MyActivityThatNeedsDependencyInjection : Activity
{
    private static readonly ILifetimeScope _autofacHack;
    private readonly IMyService _theServiceWeWant;

    // NOTE: we kept the constructor PARAMETERLESS because the
    // framework needs to be able to create this activity for us!
    public MyActivityThatNeedsDependencyInjection()
    {
        _theServiceWeWant = _autofacHack.Resolve<IMyService>();
    }

    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);

        // now we can use this service that we "injected"
        _theServiceWeWant.DoTheCoolStuff();
    }
}

Summary

Reading this you might think “well I don’t want to pollute my code with variables that say _autofacHack, that’s gross”. And I don’t blame you! So this is to serve as a starting point for a greater solution, which I think is something I’ll evolve out for myself and I encourage you to do the same.

Things I’m focused on:

  • Minimize where “ugly” code is. A globally accessible scope on a static class seems like it can spread the “ugly” code to too many spots. This approach is intended to help minimize that.

    What are some next steps to make that EVEN better? Maybe an attribute so we can call it something nicer? (See the sketch after this list.)
  • Write code that feels as close as possible to the “real” thing. Autofac usually allows us to specify services in the constructor and then automatically allows us to get the instances we need. This code is structured to be very similar, but since we’re NOT allowed to change the parameterless constructors, we resolve our services off the container there instead. And because it’s in the constructor, we can assign things to readonly variables as well which is a nice bonus.
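
To expand on that attribute idea: here’s a rough sketch of the direction I have in mind. Treat this as an unproven sketch, and note that the attribute and helper names are ones I made up for illustration:

using System;
using System.Linq;
using System.Reflection;
using Autofac;

[AttributeUsage(AttributeTargets.Field)]
public sealed class InjectLifetimeScopeAttribute : Attribute
{
}

public static class LifetimeScopeInjector
{
    // replaces the magic "_autofacHack" name lookup: find every private
    // static field marked with the attribute and assign the scope to it
    public static void InjectInto(Assembly assembly, ILifetimeScope scope)
    {
        var fields = assembly
            .GetTypes()
            .SelectMany(t => t.GetFields(BindingFlags.NonPublic | BindingFlags.Static))
            .Where(f => f.GetCustomAttribute<InjectLifetimeScopeAttribute>() != null);
        foreach (var field in fields)
        {
            field.SetValue(null, scope);
        }
    }
}

With something like that in place, the activity could mark its static field with [InjectLifetimeScope] and call it something friendlier than _autofacHack.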

The more implementations of this I go to use, the more I plan to refine it! How have you leveraged Autofac in your Xamarin projects?


TODO Lists: Keeping focused when you feel lost

TODO List - Photo by Tirachard Kumtanom from Pexels

It’s probably more relevant for more people now than at any other time in their professional careers: COVID-19 means remote work for a lot of people. It also means no work for a lot of people. I’ve found that a simple tool to keep me focused is leveraging a TODO list. It’s so simple that I think people often overlook the power of a TODO list when they’re feeling a bit lost or like they’re not making progress.

With your TODO list, the first thing I’d suggest is thinking about a daily routine. Now that you’re working remotely, or in the unfortunate case out of work, I think it’s really important that you keep some sort of daily routine to help give you some guard rails. Think about what things you usually do in the morning. What does your lunch look like? How about the late afternoon and into the evening? What does before bed look like? Thinking through your routine will give you an idea of the things you want to focus on, how much time you’ll need for regular things, and how much time you’ll have remaining for stuff that pops up.

The next thing I’d suggest with your TODO list is to get granular. This is especially important, in my opinion, if you’re struggling to feel like you’re making progress on anything. You can even write things down like “brush your teeth”. It doesn’t have to be specific to work and it doesn’t have to be anything groundbreaking. You’ll find that as you start checking off small items, suddenly you’re accomplishing a lot, and a day might go from feeling like nothing got done to feeling fulfilling, or from insurmountable to progress being made! The small steps you take while working through your TODO list are a great way to remind yourself that you’re making progress.

It’s also a good opportunity to remind yourself that these are very strange times for many of us. If you’re working from home or at home and out of work, these could be very new circumstances for you and not at all like your normal routine. That’s okay! Remember the rest of the world is in this together with you.

You should be affording yourself the time to do little things here and there around the house even if you’re working remotely. For example, if you hear your washer/dryer go off in the middle of the day, instead of letting it nag you in the back of your mind, consciously make the time to take a few minutes and change loads of laundry! There are plenty of unique distractions to be had while working from home, but instead of letting them distract you, take control and consciously spend the time on the appropriate things.

A sample TODO list might look like the following:

  • AM
    • Get the dog outside!
    • Feed the dog
    • Eat breakfast
    • Hygiene routine
    • Coffee + read news
    • Answer emails
    • Work on Project A
    • Video meeting 1
    • Continue Project A
  • Afternoon
    • Eat lunch
    • Get the dog outside!
    • Do the dishes
    • Answer emails
    • Video meeting 2
    • Work on Project B
    • Get work schedule planned for tomorrow
  • Evening
    • Eat dinner
    • Get the dog outside!
    • Get some exercise!
    • Hygiene routine
    • Pick-a-chore around the house
    • Watch TV/Movie or play a game
  • Before Bed
    • Hygiene routine
    • Read your current book
    • Write out your TODO list for tomorrow

You might have read through that and thought that it feels silly to have a line item for something as simple as brushing your teeth or eating a meal. And that’s totally normal for some people! You might have adapted really well to your change of environment and have little to no issue adjusting. For others, that won’t be the case. If you’re struggling to feel like you’re making progress, staying on track, or if the day seems like you’ll never accomplish everything you need to… Take the little wins with very simple things. You’ll notice that you’ll build momentum for yourself.

A friendly reminder to make sure you take the time to get your TODO list together before the next day! I like doing it right before bed so I can run through what I think my following day will look like.


Downtime? Time to Build!

The COVID-19 pandemic has caused many of us to stay isolated and at home, but that’s OK! I genuinely enjoy developing software and wanted to take this opportunity to focus on learning. Having some downtime has given me the chance to try putting together a system that I otherwise might not have explored building.

In this article, I’ll share different aspects about an application I’m building that purposefully put me outside of my comfort zone. In my opinion, having downtime is an opportunity to learn and grow! It’s time to take advantage of that.

When the app and system is ready to showcase I’ll share more insight into what’s actually being built!

The Client Framework

The application being built was intended to run on multiple mobile platforms, so Xamarin was my choice here. I had briefly used Xamarin several years ago, but my reasons for picking it this time through were:

  • C#, .NET, and Visual Studio support. There were many things I wanted to learn about, but I wanted to limit myself to a familiar foundation so that I could still feel that I’m making progress.
  • Supports iOS and Android right from the start. I thought it would be an interesting software design challenge to be able to build base components in a shared library and then leverage dependency injection for platform-specific components.
  • Traction. Xamarin has been around for a number of years now, and it’s only continuing to gain support. I didn’t want to focus on a platform or SDK that wasn’t getting any love.

Xamarin was easy to get set up with because of the familiar pieces, but I was pushed out of my comfort zone to learn about how certain things are handled differently on iOS and Android.

I was able to learn about:

  • Android/iOS permission requests
  • The new UI controls in Xamarin (vs WPF which I’m used to)
  • More practice with async/await and UI experience
  • Different mobile API frameworks (Crashlytics, Google Analytics, user control libraries)

The Server Framework

I’ve joked with my colleagues in the past that “web isn’t my thing”, but what I really mean is that “I don’t have experience making web pages”. I’ve built many client-server systems in my professional experience and hobby programming, but serving nice-looking web pages hasn’t been my strength.

For the system that I’m designing in my downtime, I need an application server. I decided to go with ASP.NET Core because I haven’t set up many ASP.NET systems before, and I don’t have experience having them hosted in the cloud. However, I do have experience with C# and Visual Studio, so again, this seemed like a good balance of trying new things with some familiar concepts to ensure I could make progress.

In short order, I was able to get a handful of application server routes set up and communicating with the client application properly. The most difficult part was truly just making sure firewall and SSH settings were configured locally, plus a handful of rounds of cursing at my phone for not being on WiFi (and thus not seeing the development server on my local network)!

I was able to learn about:

  • Authentication attributes (and JWT token handling)
  • Routes with query parameters
  • Serving static content as well as application requests
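
To make those concrete, here’s a minimal sketch of the flavor of routes I’m talking about, in ASP.NET Core. The controller and route names here are made up for illustration:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public sealed class ItemsController : ControllerBase
{
    // a route with a query parameter, e.g. GET /api/items?count=25
    [HttpGet]
    public IActionResult GetItems([FromQuery] int count = 10)
    {
        return Ok(new { count });
    }

    // an authentication attribute: this route rejects requests
    // that don't carry a valid token
    [Authorize]
    [HttpGet("mine")]
    public IActionResult GetMyItems()
    {
        return Ok(User.Identity?.Name);
    }
}

Serving static content alongside routes like these is a one-liner in the pipeline setup (app.UseStaticFiles()).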

The Authentication Framework

This one was fun. Having a professional career in software development, one thing that scares everyone away is designing authentication and user management. Nobody wants to do it because it’s complex, has plenty of edge cases, and… it’s probably critical to your system working 🙂

Thankfully, Firebase saved the day. I wrote about this already, but Firebase truly made authentication and user management way more straightforward than I’m used to. The hardest parts of working with Firebase had nothing to do with Firebase and everything to do with implementing OAuth for the providers of my choosing.

Because I could use OAuth to authenticate users and have identifiable information provided via a JWT, having a simple registration and login system that mapped OAuth’d users to some sort of internal-system user identifier was trivial. All of the routes for my web application could also authenticate and control access via this same JWT! One of the scariest components about building a system became a relatively light lift.

I was able to learn about:

  • OAuth for popular providers (Google, Facebook, etc…)
  • OAuth scopes
  • JWT tokens
  • Firebase SDK from a server and client side
  • Route authentication in ASP.NET using JWTs

The Database Framework

As part of the journey for exploring unfamiliar technology in my downtime, I decided I’d like to pursue a database that wasn’t SQL-based. I was already using Firebase for authentication and Google offers an intriguing document store in Firebase that provides real-time update triggers.

Being unused to document databases (I’m much more familiar with relational databases), I spent some time trying to design the schemas I intended to use. One thing that caught me off guard pretty quickly was that in order to modify a list of things, I’d need to have a local copy of the list, manipulate the collection, and then push the entire structure back to replace the existing structure. This seemed like overkill for what I was trying to do. The alternative was that I could modify objects in the data store to add/remove child items, but each child item would receive another identifier object as a linking object. So a list of X things actually meant 2X things (one for the identifier, one for the entry). Again, overkill.

I decided to go back to familiar technology but explore a not-so-familiar space! I have a good deal of experience working with SQLite and MySQL from my career. What I don’t have a lot of experience with is the management and provisioning of a MySQL instance with availability in the cloud! Enter Amazon RDS!

Switching to Amazon RDS meant a bit of a learning curve for making sure I could host and configure an instance of MySQL in the cloud. I was able to learn about various Amazon AWS services and roles and how they play together. And once the instance was up and running, I was able to work with it effectively.

The Tooling

I had help with this one, fortunately, from a friend and old colleague of mine, Graeme Harvey. If you’ve worked with me professionally or on a hobby project, one of the things I admit pretty quickly is that I really dislike tinkering to get a configuration right. Generally this is because there’s a lot of tweaking to get something set up correctly, documentation to understand, and frustrating trial and error.

Source control was going to be git. I didn’t even want to consider anything else. Git is so widely used now that I didn’t see a real benefit to trying to learn a new source control system. Repository management though? BitBucket. I’m a huge fan of the Atlassian suite of products having used them professionally for many many years.

And with that said, Jira for issue management. Again, I’ve used Jira professionally for many years. What I’ve never done is had my own Jira instance! This was made very simple by Atlassian, and since I’m not a company generating big bucks, I was able to get the free tier, which handles all of my needs. Jira is straight up awesome for issue management and visibility into your ongoing work.

Another no-brainer for me was getting Slack set up. Slack is something I’ve used a lot professionally but, like Jira, I’ve never had my own Slack instance. It was simple to set up and works just like I’m used to in my career. This wasn’t really a huge requirement, but working with another person, it provided a nice workspace for chatting about stuff we were working on.

And finally… builds. I already wrote about using CircleCI to get our server builds up and running, and to reiterate, it was extremely simple. I even have the builds wired up to report back to Slack when we push code up to BitBucket! Where we’re still having some fun is figuring out how to deploy our application server builds to an EC2 instance automatically. This would allow us to “release” to a branch and have the production hosting of our application server get updated in the cloud!

But we’re building a mobile application in Xamarin, so we have three outputs:

  • The application server in ASP.NET
  • The Android client
  • The iOS client

Mobile app development gets interesting because what you’re actually building is intended to run on a device and not a desktop/server. Why that matters: generally your desktop or server application will be output as a binary, but your mobile application will be some sort of package that you’ll need to sign and distribute through an app store.

After some back and forth, we decided to explore App Center. If I’m being honest, this was equally as easy to set up for our iOS and Android Xamarin apps as CircleCI was for our server builds. App Center provided a simple wizard for triggering off of our BitBucket repositories getting new commits to a branch, and the rest was done for us.

What I learned:

  • Git+BitBucket = Free git repository hosting (private if you want) and plenty of integrations
  • Jira = Free issue management and kanban board with plenty of integrations
  • Slack = Free chat workspace with plenty of integrations
  • CircleCI = Free continuous builds with integrations into ALL the other things I just mentioned 🙂
  • App Center = Free continuous builds for iOS and Android Xamarin apps with plenty of integrations!

In Summary

It’s been a couple of weeks of getting to try building out this project and setting up these different systems to go to work for me. I’ve been able to learn a lot about new or previously unexplored SDKs/technology and even learn some different facets of things I already have professional experience with.

I’m not one to sit idle… So using my downtime to learn cool things and build something has been an awesome time! I’d highly recommend that if you’re in quarantine, lockdown, or otherwise unable to really get out and do too much that you try your hand at something new! Get creative. Get learning.

Be safe and stay healthy!


Firebase and Low-Effort User Management

I’ve found myself with some additional time to be creative during these great COVID-19 lockdown/quarantine days. That’s why there are more blog posts recently! Actually, I wanted to take the time to experiment with some unfamiliar technologies and build something. For a project, I wanted to leverage authentication, but I’m well aware that user management can become a really complex undertaking. I had heard about Firebase from Google and wanted to give it a shot.

For the purposes of this discussion, Firebase would allow me to create something like an OAuth proxy to the system I wanted to build and, by doing so, would end up managing all of the users for me. What I needed to do with Firebase to get that set up was actually quite straightforward.

First, you start off in typical fashion by registering for Firebase. From there, you’re asked about adding a new project, which looks like the following:

Create Firebase project

You’re then required to add apps to your project within Firebase. But here’s where your journey might differ from mine. I’m working in Xamarin, so I wanted to be able to add an iOS app and an Android app. The reason you need to do this is so that you can get the proper service information for your app so that it can communicate with Firebase. Google does a great job with walking you through the process, and in the end you’re required to add a service configuration file to each of your projects.

The next part was probably the most time consuming, and that was integrating some sort of OAuth for a platform into my mobile app. There’s tons of documentation about that on the Internet, so I’m not getting into it here. There are different steps to take depending on what platform (i.e. Google, Facebook, Twitter, etc…) you want to authenticate with and whether you’re working on iOS, Android, web, or something else. Getting this all up and running required the most time, but it wasn’t really anything to do with Firebase… it was picking + supporting OAuth for the platforms of my choosing.

I knew which platforms I wanted to work with, but Firebase actually has a set that it supports (including email + password)! You’ll want to check that out because you need to enable the platforms you want to support in the console:

Firebase OAuth Providers

Now you can find the Firebase SDK for the platform you’re working with! Once your application/service is able to OAuth with a platform that you support, ensure it’s enabled in the console. From there you can use a method from the SDK that allows you to sign into Firebase with OAuth. This is where you’d provide the access token from the platform of your choice after having logged into that platform successfully.
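
On Android with Xamarin, for example, that sign-in call looks roughly like the sketch below. I’m going from memory of the Xamarin.Firebase.Auth binding here, so treat the exact names as an assumption, and the tokens are placeholders you’d get back from your OAuth flow:

using Firebase.Auth;
using System.Threading.Tasks;

public static class FirebaseSignIn
{
    // idToken/accessToken are placeholders from a successful Google OAuth sign-in
    public static async Task SignInWithGoogleAsync(string googleIdToken, string googleAccessToken)
    {
        // exchange the provider's tokens for a Firebase credential...
        AuthCredential credential = GoogleAuthProvider.GetCredential(googleIdToken, googleAccessToken);

        // ...and sign into Firebase with it; Firebase creates/updates the user entry for you
        await FirebaseAuth.Instance.SignInWithCredentialAsync(credential);
    }
}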

The result is that Firebase actually builds a user entry for you with data related back to the OAuth platform. These are based on the providers that you used to authenticate originally. By doing this, you can use these external authentication providers and with minimal effort connect them to your Firebase project! You can get all of the authentication options you’d like AND free user management as a result.

This is high-level, but I will follow up with how we’re leveraging Firebase with the components we’re putting together in our system. Spoiler: ASP.NET controller routes can get protected by Firebase authentication with almost no effort!
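
As a teaser for that follow-up: here’s roughly what protecting ASP.NET Core routes looks like, based on the standard JWT bearer middleware pointed at Google’s secure token service. The project ID is a placeholder, and this is a sketch rather than my exact setup:

using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.IdentityModel.Tokens;

public void ConfigureServices(IServiceCollection services)
{
    const string firebaseProjectId = "your-project-id"; // placeholder

    services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            // Firebase-issued JWTs are signed by Google's secure token service
            options.Authority = $"https://securetoken.google.com/{firebaseProjectId}";
            options.TokenValidationParameters = new TokenValidationParameters
            {
                ValidateIssuer = true,
                ValidIssuer = $"https://securetoken.google.com/{firebaseProjectId}",
                ValidateAudience = true,
                ValidAudience = firebaseProjectId,
                ValidateLifetime = true,
            };
        });
}

With that in place, a plain [Authorize] attribute on a controller route is all it takes.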


CircleCI + BitBucket => Free Continuous Integration!

CircleCI is a service I heard about from a friend that allows you to get continuous integration pipelines built up for your repositories… and it does it quickly and easily. It’s also free if you’re someone like me without a large demand for builds! I wanted to write about my experience getting CircleCI wired up with BitBucket, which I like to use for my project hosting, and hopefully it’ll help you get started.

First things first: signing up is super easy if you have BitBucket, because you can OAuth right away with it. CircleCI will show you the projects and repositories you have in BitBucket, and you can decide which one you’d like to get started with. You can navigate to the projects in their new UI from the “Add Projects” menu.

CircleCI Left Navigation

When you click “Add Projects” you’ll be met with a list that looks like this but… With your own projects and not mine 🙂

Circle CI + BitBucket Project Listing

On this screen, you’ll want to select “Set Up Project” for the project of your choice. For me, it was a .NET project (which I had already set up), so I selected it and was presented with the following screen, which also lets you pick out a template to get started:

CircleCI Template Dropdown

However, I needed to change the default template to get things to work properly with NuGet packages, because it’s missing a restore step! With some help from my friend Graeme, we were able to transform the sample from this:

version: 2.1

orbs:
  win: circleci/windows@2.2.0

jobs:
  build:
    executor: win/default

    steps:
      - checkout
      - run: dotnet build

Into this, which includes the NuGet restore step prior to building:

version: 2.1

orbs:
  win: circleci/windows@2.2.0

jobs:
  build:
    executor: win/default

    steps:
      - checkout
      - run:
          name: Restore
          command: dotnet restore
      - run:
          name: Build
          command: dotnet build -c Release

Once you save this, CircleCI will make a branch called “circleci-project-setup” on your remote and then go ahead and run your build for you! When the build for this new remote branch succeeded, I pushed the configuration to my “master” branch so that all builds on master going forward would get continuous integration.

Checking the CircleCI dashboard now looks like the following:

CircleCI Successful Pipelines

You can see pipeline #1 is on the branch where the test circleci configuration was made (and passed). Pipeline #2 is once I added this commit onto my master branch and pushed up! Now I have continuous integration for pushing to my lib-nexus-collections-generic BitBucket project. When I check out my commit page, I can see the new commits after the configuration landed get a nice green check when the builds pass on CircleCI:

BitBucket Commit Listing With Builds

So with a few easy steps, you can have not only free source hosting in BitBucket but also free continuous integration from CircleCI. Every time you push code to a remote branch, you kick off a build! And this is only the starting point, since you can configure CircleCI to do much more than just restore NuGet packages and build .NET solutions 🙂


xUnit Tests Not Running With .NET Standard

Having worked with C# for quite some time now writing desktop applications, I’ve begun making the transition over to .NET Standard. In my professional work the transition has been much slower because of product requirements and time, but in my own personal development there’s no reason why I couldn’t get started with it. And call me crazy, but I enjoy writing coded tests for the things I make. My favourite testing framework for C# development is xUnit, and naturally, as I started writing some new code with .NET Standard, I wanted to make sure I could get my tests to run.

Here’s an example of some C# code I wrote for my unit tests of a simple LRU cache class I was playing around with:

using System;
using System.Diagnostics.CodeAnalysis;
using Xunit;

[ExcludeFromCodeCoverage]
public sealed class LruCacheTests
{
    [Fact]
    public void Constructor_CapacityTooSmall_ThrowsArgumentException()
    {
        Assert.Throws<ArgumentException>(() => new LruCache<int, int>(0));
    }

    [Fact]
    public void ContainsKey_EntryExists_True()
    {
        var cache = new LruCache<int, int>(1);
        cache.Add(0, 1);
        var actual = cache.ContainsKey(0);
        Assert.True(
            actual,
            $"Unexpected result for '{nameof(LruCache<int, int>.ContainsKey)}'.");
    }
}

Pretty simple stuff. I know that for xUnit in Visual Studio, I need a NuGet package for the test runner to work right in the IDE. Simple enough: I just need to add the “xunit.runner.visualstudio” package alongside the xunit package I had already included in my test project.

Required xUnit NuGet packages shown in Visual Studio’s NuGet package manager

Ready to rock! So I go run all my tests in the solution but I’m met with this little surprise:

[3/24/2020 3:59:10.570 PM] ========== Discovery aborted: 0 tests found (0:00:00.0622045) ==========
[3/24/2020 3:59:20.510 PM] ---------- Discovery started ----------
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find C:\[redacted]\bin\Debug\netstandard2.0\testhost.dll. Please publish your test project and retry.
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
[3/24/2020 3:59:20.570 PM] ========== Discovery aborted: 0 tests found (0:00:00.0600179) ==========
Executing all tests in project: [redacted].Tests
[3/24/2020 3:59:20.635 PM] ---------- Run started ----------
[3/24/2020 3:59:20.639 PM] ========== Run finished: 0 tests run (0:00:00.0039314) ==========

Please publish your test project and retry? Huh?

As any software engineer does, I set out to Google for answers. I came across this Stack Overflow post: https://stackoverflow.com/q/54770830/2704424

And fortunately someone had responded with a link to the xUnit documentation: Why doesn’t xUnit.net support netstandard?

The answer was right at the top!

netstandard is an API, not a platform. Due to the way builds and dependency resolution work today, xUnit.net test projects must target a platform (desktop CLR, .NET Core, etc.) and run with a platform-specific runner application.

https://xunit.net/docs/why-no-netstandard

My solution was to change my test project to build for one of the latest versions of the .NET Framework… and voila! I chose .NET 4.8 as the latest available at the time of writing.
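
For an SDK-style test project, that’s a one-line change to the project file (shown here as a sketch; net48 is the target framework moniker for .NET Framework 4.8):

<PropertyGroup>
  <!-- was: <TargetFramework>netstandard2.0</TargetFramework> -->
  <TargetFramework>net48</TargetFramework>
</PropertyGroup>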

My next attempt at running all of my tests looked like this:

[3/24/2020 4:08:14.898 PM] ---------- Discovery started ----------
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Desktop .NET 4.0.30319.42000)
[xUnit.net 00:00:00.40]   Discovering: [Redacted].Tests
[xUnit.net 00:00:00.47]   Discovered:  [Redacted].Tests
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Universal Windows)
[3/24/2020 4:08:16.289 PM] ========== Discovery finished: 2 tests found (0:00:01.3819229) ==========
Executing all tests in project: [Redacted].Tests
[3/24/2020 4:08:17.833 PM] ---------- Run started ----------
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Desktop .NET 4.0.30319.42000)
[xUnit.net 00:00:00.41]   Starting:    [Redacted].Tests
[xUnit.net 00:00:00.66]   Finished:    [Redacted].Tests
[3/24/2020 4:08:19.337 PM] ========== Run finished: 2 tests run (0:00:01.4923808) ==========

And I was back on my path to success! Hopefully if you run into this same issue you can resolve it in the same fashion. Happy testing!


RPG Development Progress Pulse – Entry 2

Progress Pulse

Progress Pulse – Entry 2

Things have been pretty busy in real life the past couple of weeks, so I haven’t had too much time for working on this. However, for this entry in the progress pulse series I’ll talk about some of the challenges I had while looking at making a generic data (de)serialization API + implementation, and why I chose to make some of the decisions I did!

Which Tech To Pick?

I’ve felt burned in the past by trying to do data serialization for my game framework because it has always created a barrier for refactoring once it’s in place (i.e. I change some data I need, and now I have to re-make or migrate allllll my SQL data).

So I was thinking about how I plan to store game state, which I have written about, and revisited the implementations I had considered for persistent storage. One of them was a graph database called Neo4j, which has a JSON representation of all of its node data. Except… I’m not ready to commit to Neo4j just yet because I don’t want to feel tied down (like I used to tie myself down to SQLite). But the objects I’m creating *are* well suited to hierarchies of entities + components, so maybe JSON is a happy medium?

Here was my breakdown for starting with JSON:

Pros:

  • Very easy to get started with
  • (De)Serialization libraries available via nuget for free
  • Human readable which is great for creating, editing, and debugging
  • Hierarchical, which lends itself well to my data structures in memory
    • Should make refactoring easy (did a component change? only change that component’s data representation)
  • Could be a stepping stone for working with Neo4j in the future

Cons:

  • Writing is probably slow, especially if I want to just modify one chunk of JSON data
    • Likely will need to write whole JSON blobs out… But who knows if it’s slow, I need to benchmark it.
  • I suspect lookups would be slow
    • But… Maybe important data is cached in memory on startup? Maybe it’s not even an issue. Benchmark it.

Basically, I was left with a bunch of pros and a couple of cons that were really just speculation. Seemed like a great way to get started!

Lesson learned was to start with something that won’t keep you locked in, but is also just enough to get you going!

Start With Something Specific

I’m a sucker for trying to make really generic things in software. It’s an extreme I find myself taking because I want to make things as extensible and re-usable as possible. The side effect of it though is that sometimes I miss corner cases (and they end up being not corner cases in the general sense) or that I make APIs that suck to use because they’re so general and maybe they shouldn’t be.

I decided I was going to switch up my approach. I wanted to figure out how I could serialize and deserialize my item definition data. That probably warrants a brief explanation:

I want items (i.e. loot) in the game to be part of a system that can control generation of them based on game state, randomness, and pre-defined organization of loot. Some drops might be totally random common items. Others might be based on quest state and need to be very specific. Maybe there are some that only drop at a specific time of day during specific weather after killing a certain enemy. This is what I’m shooting for. So the item definitions will contain information about how to generate a base item and provide components that tell the game how to mutate that base item (i.e. set damage to a value between 5 and 10 and call it “Axe”).

But there are also drop tables that have weights associated with them and can link to specific items or other drop tables. This allows the game’s content creator to generate loot that’s like “when the player is in the swamp lands, common enemies drop between 1-3 items, with a 60% chance of those items being junk, 20% chance of normal equipment, 15% chance of magic equipment, and 5% chance of powerful legendary equipment”. Drop tables are essentially nodes with weights on the vertices that point to other tables or specific item definitions. Simple 🙂
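
To illustrate, that swamp-lands drop table might serialize to JSON along these lines. The exact shape is still evolving, so this is just a sketch of the idea:

{
  "id": "drop-table-swamp-common",
  "minDrops": 1,
  "maxDrops": 3,
  "entries": [
    { "weight": 60, "dropTableId": "drop-table-junk" },
    { "weight": 20, "dropTableId": "drop-table-normal-equipment" },
    { "weight": 15, "dropTableId": "drop-table-magic-equipment" },
    { "weight": 5,  "dropTableId": "drop-table-legendary-equipment" }
  ]
}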

The reason I went with this approach is because I felt that even though some of the C# types I have might be specific to item definitions, the abstract structure of the types (i.e. entities with components on them) is shared across many different game systems. So if I can make it work for this one, it shouldn’t be too hard to do for the next.

Lesson learned was try not to repeat all of your history… Learn from it. Experiment with new approaches.

Hello Singletons, My Old Friend

My arch-nemesis, Dr. Singleton! Actually, way back I wrote about singletons, so I’m not TOTALLY against them; I just think that 99% of the time they aren’t actually what you need. Let’s talk about my little run-in with them though.

I started custom writing some APIs for JSON serialization that would use Newtonsoft JSON behind the scenes. Based on the structure of my objects, I figured I was going to have some sort of recursive call system going on where children would have to tell their children to serialize, etc… Once I got this working for a simple case, I realized that Newtonsoft has custom converters you can set up. These use attributes to mark up interfaces/classes to tell the serialization engine to use particular converters when they encounter a type. (Edit: after writing this I realize that I don’t HAVE to use the attribute… which might make this whole point moot)

The problem with attributes is that I cannot control the instantiation of them. And because I can’t control the instantiation of them, I can’t control the parameters passed in via the constructor. In my particular case, I needed to create a singleton that this attribute class could access and use Autofac to configure the singleton instance. Essentially, I needed to register custom handlers into my singleton instance, and then the attribute class could pull the registrations from the singleton instance.
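
Here’s a rough sketch of the shape this took. The class names are made up for illustration, and the real thing has more going on:

using System;
using System.Collections.Generic;
using Newtonsoft.Json;

// the singleton that Autofac configures at startup; converters for
// specific types get registered into it
public sealed class ConverterRegistry
{
    public static ConverterRegistry Instance { get; } = new ConverterRegistry();

    private readonly Dictionary<Type, JsonConverter> _converters =
        new Dictionary<Type, JsonConverter>();

    private ConverterRegistry()
    {
    }

    public void Register(Type type, JsonConverter converter) =>
        _converters[type] = converter;

    public JsonConverter Get(Type type) => _converters[type];
}

// the converter the attribute instantiates; it can't take constructor
// parameters, so it reaches into the singleton for the real handler
public sealed class RegistryBackedConverter : JsonConverter
{
    public override bool CanConvert(Type objectType) => true;

    public override object ReadJson(
        JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer) =>
        ConverterRegistry.Instance.Get(objectType).ReadJson(reader, objectType, existingValue, serializer);

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer) =>
        ConverterRegistry.Instance.Get(value.GetType()).WriteJson(writer, value, serializer);
}

Autofac configures and populates ConverterRegistry.Instance at startup, and a type marked with [JsonConverter(typeof(RegistryBackedConverter))] just delegates to it.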

Ugly pattern? Yes. But I’m not familiar with any other way to pass information or access to objects when I can’t control their initialization. It’s buried deep down, so it’s not like the API usage feels like garbage, but I still wasn’t happy with it.

Lesson learned here was sometimes we end up using “bad patterns”, but if they’re limited in scope we can limit their “badness”.


Autofac Modules and Code Organization

Organizing Code With Autofac Modules

What are Autofac Modules?

I’ve been writing a little bit about Autofac and why it’s rad, but today I want to talk about Autofac modules. In my previous post on this, I talked about how one of the drawbacks to the constructor dependency pattern is that at some point in your application, generally in the entry point, you get allllll of this spaghetti code that is the setup for your code base.

Essentially, we’ve balanced having nice clean testable classes with having a really messy spot in the code. But it’s only ONE spot and the rest of your code is nice. So it’s a decent trade off. But we can do better than that, can’t we?

Autofac modules!

We can use Autofac modules to organize some of the code that we have in our entry point into logical groupings. An Autofac module is an implementation of a class that registers types to our dependency container to be resolved at a later time. You could do this all in one big module, but like many things in programming, having some giant monolithic thing that does ALLLL the work usually isn’t the best.

An Example of Converting to Autofac Modules

Let’s create a simple application as an example. I’ll describe it in words, and then I’ll toss up some code to show a simple representation of it. We’ll assume we’re using dependencies passed as interfaces via constructors as one of our best practices, which makes this conversion much easier!

So our app will have a main window with a main content area and a header area. These will be represented by three objects. Our application will also have a logger instance that we pass around so classes that need logging abilities can take an ILogger in their constructor. But our logger will have some simple configuration that we need to do before we use it.

Let’s assume to start our Program.cs file looks like this:

internal sealed class Program
{
    private static void Main(string[] args)
    {
        var logger = new FileLogger();
        logger.LogLevel = LogLevel.Debug;
        logger.FilePath = "log.txt";

        var header = new FancyHeader(logger);
        var content = new BoringMainContent();
        var window = new MainWindow(header, content);
        window.Show();
    }
}

Before getting comfortable with Autofac, my first step would be to logically group things in the main method. In this particular case, we have something simple and, surprise… it’s all grouped. But my next step would usually be to pull these things out into their own methods. I do this because it helps me identify if my groupings make sense and where my dependencies are. Let’s try it!

internal sealed class Program
{
    private static void Main(string[] args)
    {
        var logger = InitializeLogging();
        var window = InitializeGui(logger);
        window.Show();
    }

    // no params passed in, so no dependencies
    // return value is an ILogger, so we have a
    // logical grouping that will provide us a logger
    private static ILogger InitializeLogging()
    {
        var logger = new FileLogger();
        logger.LogLevel = LogLevel.Debug;
        logger.FilePath = "log.txt";
        return logger;
    }

    // only parameter is a logger, so that's our dependency
    // return value is a window, so this grouping provides
    // a window for us
    private static IWindow InitializeGui(ILogger logger)
    {
        var header = new FancyHeader(logger);
        var content = new BoringMainContent();
        var window = new MainWindow(header, content);
        return window;
    }
}

Alright, cool. So yes, this is a bit of extra code compared to the initial example, but I promise you that grouping these things out into separate methods as a starting point will help a ton when you have a LOT of initialization logic. Once they are in methods, you can pull them out into their own classes. Refactoring 101 for the single responsibility principle going on here 😉 BUT, we’re interested in Autofac. So what’s the next step?

We have two logical groupings going on here in our example. One is logging and the other is for the GUI. So we can actually go ahead and make two Autofac modules that do this work for us.

public sealed class LoggingModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<FileLogger>()
            .AsImplementedInterfaces() // FileLogger will be resolved as an ILogger
            .SingleInstance() // we only ever need to use one logger instance for our app
            .OnActivated(x =>
            {
                // this handles our extra setup we had for this object
                x.Instance.LogLevel = LogLevel.Debug;
                x.Instance.FilePath = "log.txt";
            });
    }
}

public sealed class GuiModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<FancyHeader>() // this has a dependency on ILogger, but autofac will figure it out for us
            .AsImplementedInterfaces() // FancyHeader will be resolved as IHeader
            .SingleInstance(); // we only ever need to use one instance for our app
        builder
            .RegisterType<BoringMainContent>()
            .AsImplementedInterfaces() // BoringMainContent will be resolved as IContent
            .SingleInstance(); // we only ever need to use one instance for our app
        builder
            .RegisterType<MainWindow>() // Autofac will resolve our IHeader and IContent dependencies for us
            .AsImplementedInterfaces() // MainWindow will be resolved as IWindow
            .SingleInstance(); // we only ever need to use one instance for our app
    }
}

And those are our two logical groupings for modules! So, how do we use this and what does our Main() method look like now? I’ll demonstrate with one way that works for a couple modules, but I want to follow up with another post that talks about dynamically loading modules. If you can imagine this scenario blown out across MANY modules, you’ll understand why it might be helpful.

The idea for our Main() method is that we just want to resolve the one main dependency manually and let Autofac do the rest. So in this case, it’s our MainWindow.

private static void Main(string[] args)
{
    // create an autofac container builder
    var containerBuilder = new ContainerBuilder();

    // manually register our two new modules we made
    containerBuilder.RegisterModule<LoggingModule>();
    containerBuilder.RegisterModule<GuiModule>();

    // create the dependency container
    var container = containerBuilder.Build();

    // resolve and use our main dependency by its interface
    // (because we shouldn't care what the implementation is...
    // that was up to the configuration via modules!)
    var window = container.Resolve<IWindow>();
    window.Show();
}

In Summary…

This example showed us how to take your main initialization logic and split it into groups that play nice as Autofac modules. In a really simple example, having modules might look like bloated extra code, but it already illustrates that your entry point stays very simple and follows a pattern that’s easy to extend (just register another module for more dependencies… and I’ll add more on this later). There’s also an obvious way to group new dependency logic into your application! We discussed logging and GUI initialization, but you could extend this to:

  • User Settings
  • Analytics/Telemetry
  • Error Reporting
  • Database Configuration
  • Etc… Just add more modules!

Sometimes the pain of having a really hectic entry point isn’t realized until you’ve had to work on teams where people are modifying the same beast of an entry point all the time:

  • Simple merge conflicts in your “using” statements… Because there’s hundreds of lines of using statements at the top of the file
  • Visual Studio actually CANNOT use IntelliSense properly when the file gets too unwieldy
  • The debugger cannot resolve variables properly when the main entry point gets too big
  • Merging and auto-conflict resolution sometimes results in code just getting blown away in the entry point… And good luck finding what went wrong in your thousands of lines of initialization

So what’s next? Well, if you keep building out your app you might notice you have tons of modules now. Your single GUI module might have to get broken out into modules for certain parts of the GUI, for example, just to keep them more manageable. Maybe you want plugins to extend the application dynamically, which is really powerful! Our method for registering modules just isn’t really extensible at that point, but it’s very explicit. I’ll be sharing some information about automatic Autofac module discovery and registration next!


RPG Development Progress Pulse – Entry 1

Progress Pulse

Progress Pulse – Entry 1

For the first entry in the progress pulse series I’ll touch on some things from the past week or so. There’s been a lot of smaller things being churned in the code base, some of them interesting, and others less interesting so I want to highlight a few. As a side note, it’s really cool to see that the layout and architecture is allowing for features to be added pretty easily, so I’ll dive a bit deeper on that. Overall, I’m pretty happy with how things are going.

Unity3D – Don’t Fight It!

I heard from a colleague before that Unity3D does some things you might not like, but don’t try to fight it, just go with it. To me, that’s a challenge. If I’m going to be spending time coding in something, I want it to be with an API that I enjoy. I don’t want to spend time fighting it. An example of this is how I played with the stitching pattern to make my Autofac life easier with Unity3D behaviours.

However, I met my match recently. At work, we were doing an internal hackathon where we could work on projects of our choosing over a 24 hour period, and they didn’t have to be related to work at all. It’s a great way to collaborate with your peers and learn new things. I worked on Macerus and ProjectXyz. I was reaching a point where I had enough small seemingly corner-case bugs switching scenes and resetting things that I decided it was dragging my productivity down. It wasn’t exciting work, but I had to do something about it.

After debugging some console logs (I still have to figure out how to get Visual Studio properly attached for debugging… maybe I’ll write an article on that when I figure it out?), I noticed a scenario that could only happen if one of my objects was running some work at the same time… as itself? Which shouldn’t happen. Basically, I had caught a scenario where my asynchronous code was running two instances of worker threads, and it was a scenario in my game that should never occur.

I tried adding task cancellation and waiting to my Unity game. I managed to hang the main thread on scene switching and application close. No dice. I spent a few hours trying to play around with a paradigm where I could make my ProjectXyz game engine object run asynchronously within Unity and not be a huge headache.

I needed to stop fighting it though. There was an easier solution.

I could make both a synchronous and an asynchronous API for my game engine. If you have a game where you want the engine on its own thread, call the async version. Unity3D already has its own game engine loop; why re-invent it? So in Unity3D, I can simply call the synchronous version of the game engine’s API. With this little switch, I suddenly fixed about 3 or 4 bugs. I had to stop fighting the synchronous pattern with my asynchronous code.
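
As a rough sketch of what I mean by a dual API (the names here are illustrative, not my exact interfaces):

using System.Threading;
using System.Threading.Tasks;

public interface IGameEngine
{
    // synchronous: Unity3D calls this once per frame from a MonoBehaviour's Update()
    void Update();
}

public static class GameEngineAsyncExtensions
{
    // asynchronous: hosts without their own loop can spin one up on demand
    public static async Task RunAsync(this IGameEngine engine, CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            engine.Update();
            await Task.Yield(); // yield so other work can interleave
        }
    }
}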

The lesson? Sometimes you can just come up with a simple solution that’s an alternative instead of hammering away trying to fix a problem you created yourself.

DevOps – Build & Copy Dependencies

This one for me has been one of my biggest nightmares so far.

The structure of my current game setup is as follows:

  • ProjectXyz.sln: The solution that contains all of my back-end shared game framework code. This is the really generic stuff I’m trying to build up so that I can build other games with generic pieces if I wanted to.
  • Macerus.sln: The game-specific business logic for my RPG built using ProjectXyz as a dependency. Strictly business logic though.
  • Macerus Unity: The project that Unity3D creates. This contains presentation layer code built on Macerus.sln outputs and ProjectXyz.sln outputs.

I currently don’t have my builds set up to create NuGet packages. That would probably be an awesome route to go, but I also think it might result in a ton of churn right now as the different pieces are constantly changing. It’s probably something I’ll revisit as things harden, but for now it seems like too much effort given the trade-off.

So what have I been doing?

  • I build ProjectXyz.sln.
    • The outputs go into this solution’s bin folder
  • I build Macerus.sln
    • There’s a prebuild step that copies ProjectXyz dependencies over
    • The outputs go into this solution’s bin folder
  • I use a custom in-editor menu to copy dependencies into my Unity project
    • This resets my current “dependencies” asset folder
    • The build outputs from the other solutions are copied over
  • I can run the project with new code!

This is a little tedious, sure. But it hasn’t been awful. The problem? Visual Studio can only seem to clean what it has knowledge about.

I’ve been refactoring and renaming assemblies to better fit the structure I want. A side note worth mentioning is that MUCH of my code is pluggable… The framework is very light, and most things are injected via Autofac by enumerating plugin modules. One of the side effects is that downstream dependencies of ProjectXyz.sln (i.e. Macerus.sln) have build outputs that include some of the old DLLs from before the rename. And Visual Studio doesn’t seem to want to clean them up on build. So what happens?

Unity3D starts automatically referencing these orphaned DLLs, and the auto-plugin loading exhibits some crazy behaviour. I’ve been seeing APIs show up that haven’t existed for weeks because some stale DLL is now showing up after an update to the dependencies. This kind of thing was chewing up HOURS of my debugging time. Not going to fly.

I decided to expand my menu a bit more. I now call MSBuild.exe on my dependency solutions prior to copying over the dependencies, which removes two completely manual steps from the process. I also purged my local bin directories. Now when I encounter this problem of orphaned DLLs, a single click to update all my content lets me churn iterations faster and shortens my debugging time. It’s unfortunately still not an ultimate solution to the orphaned dependencies lingering around, but it’s better.
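
For the curious, a Unity editor menu like that can be as simple as shelling out to MSBuild. This is a trimmed-down sketch; the paths are placeholders rather than my real setup, and the actual menu item also does the dependency copying described above:

using System.Diagnostics;
using UnityEditor;

public static class DependencyMenu
{
    [MenuItem("Macerus/Build + Update Dependencies")]
    public static void BuildAndUpdateDependencies()
    {
        // placeholder paths; point these at your solutions and MSBuild install
        Build(@"C:\dev\ProjectXyz\ProjectXyz.sln");
        Build(@"C:\dev\Macerus\Macerus.sln");
        // ...then reset the dependencies asset folder and copy the new outputs in
    }

    private static void Build(string solutionPath)
    {
        using (var process = Process.Start(new ProcessStartInfo
        {
            FileName = @"C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\MSBuild.exe",
            Arguments = $"\"{solutionPath}\" /t:Clean;Build /p:Configuration=Debug",
            UseShellExecute = false,
        }))
        {
            process.WaitForExit();
        }
    }
}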

The lesson learned here was that sometimes you don’t need THE solution to your problem, but if you can make temporarily fixing it or troubleshooting it easier then it might be good enough to move forward for now.


RPG Development Progress Pulse

Progress Pulse Series

I figured this would be a fun thing to start doing just to get small updates out and talk about what I’ve been working on for ProjectXyz and the RPG I’m building in Unity3D. These will hopefully be small updates, on the order of semi-weekly to bi-weekly, about what’s going on when I’m programming for these projects. This could include:

  • How and why I decided to refactor something
  • A new design practice I’m trying
  • Reflecting on why a design decision has(n’t) been working out
  • A new feature that’s interesting
  • etc…

Some of these will be technical and others much less so. A progress pulse gives me an outlet to talk about interesting things I’m doing and maybe sheds some light on some areas (game development or just general programming) that you might be interested in.

Where Can I Find Entries In This Series?

I’ll try to organize these Progress Pulse entries into a specific category on my blog. Ideally that way you can navigate them pretty easily. You can click the link below and you should get all the entries in this series!

Click Here For Entire Progress Pulse Series

