
ProjectXyz: Why I Started A Team For My Hobby Project


Who Needs A Team?!

I’ve been building RPG backends for as long as I’ve been able to code. I think the first one, which I made for my grade 11 class, is the only RPG that I ever “finished”… It was text-based, and all you could do was fight AI opponents by clicking attack, buy better weapons, level up, and repeat. It was also 10,000 lines of VB6 code, and so brutal that I couldn’t add anything to it without copying hundreds of lines of code.

Since then, I’ve had the itch. I keep rewriting this thing. I keep taking “Text RPG” (super cool and catchy, I know) and rewriting it. I had my first visual representation of this game called Macerus (here’s another rewrite for Unity), which is actually how I landed my first co-op job. But every time I’d get so far, I’d decide I needed to rewrite it because I had messed up the architecture in some way and refactoring would be too much work.

My latest attempt is called ProjectXyz, because I can’t come up with names. And funny enough, I just Googled it while writing this article and there’s actually a company with the same name… So maybe I’ll have to get more creative. ProjectXyz is supposed to be a very generic RPG game framework that allows new systems, mechanics, and game content to be dropped in, in addition to being independent of a front end for rendering.

It’s also something I’ve been making on my own. Because I’ve been making RPG backends on my own for years now. So who needs to have a team, right?

Too Much Pride For A Team

I think initially I wanted to do this all on my own because of pride. I don’t think it was even something I was conscious of, aside from the fact that I looked at this project as my baby and something whose development I could control. I wasn’t consciously telling myself “I have to do this on my own so that I’m better than other people” or anything silly like that.

But why would I go ask others for help? They don’t code like me. They don’t have the same investment into this idea as me. They aren’t as passionate. They might have their own ideas for how to do things too! How could I have someone like that working on MY project?

Those are all pretty naive reasons for choosing to work alone though. Sure, this is my pet project and I’m going to likely feel more attached to it than anyone else. That’s probably expected. It doesn’t mean that I can’t find people that are super interested in working on something like this. They could be totally passionate about learning different aspects of creating an RPG backend.

As for having their own ideas… That’s probably one of the BIGGEST reasons in FAVOUR of having a team! It’s easy to get scared about having other people put their ideas into something you feel like is “yours”. It might have taken a few years of working in the industry (currently just passed 6 years of working at Magnet Forensics), but it’s actually very common for other people to be contributing ideas into code bases you’re working on. It happens every day. Sometimes you have design meetings or code reviews or general architectural discussions and your idea ISN’T the one that’s picked. That’s cool! As long as everyone is striving for extensible and testable code, we can make changes if we need to going forward. You don’t need to make every decision and sometimes it’s much better that way. Other people are smart too 😉

Passion is Key for a Team

While the “team” I started isn’t an official team, it’s the first time I’ve been very open to having people directly contribute to my pet project. I think one of the most obvious reasons I became comfortable with this is because I found someone that was very passionate about exploring this space.

My colleague and I were talking about some of the concepts in ProjectXyz and where I wanted to go with it. Immediately he expressed interest in map generation and how that’s always been something he wanted to explore. How can maps be procedurally generated? Can we take this concept and generate maps on the fly? What are memory and runtime constraints? How do we represent this information in code? What about persistent storage?

I could immediately tell he was very curious about how a system like this might work. After several conversations about how he was starting to hack up some ideas and doing research on different algorithms, I knew he was passionate about it. We discussed working on some of these things together and contributing to the project code that I have, and we’ve been going back and forth for a few weeks now sharing ideas and the progress he’s making on map generation. I’ve been hands off, only really acting as a sounding board for him.

I think having someone passionate like this is critical for a small team. There’s going to be many barriers when working on a challenging project, and it’s easy to get bogged down and lose motivation when you’re stuck. Having additional people that are passionate about seeing progress in your project means you have some support for pushing through those hard times when you might lose motivation. If my colleague comes to me and says “I’ve been stuck on this issue and maps won’t generate how I want…”, then I’m more than happy to sit down with him and talk through his algorithm and maybe where there’s an issue. I’m invested in seeing his piece come to fruition. Similarly, if I’m working on something like dynamic item generation for the game and I get stuck, I know he’s there to do the exact same thing. We both want to see this thing working how we intend.

So passion is important for a team. But is it sufficient? Is it the only requirement for adding a team member?

A Team is Built on Trust

Trust! Trust is a huge part of establishing a team because you need to be able to rely on each other. As mentioned, my colleague is passionate about working on this and has an interest in map generation. But what if I had never seen any of his code before? What if I didn’t know if he’s had practice with writing extensible code, testable code, following good design practices, etc… What if?

To be honest, I probably would be pretty nervous about him contributing code. It might be a huge barrier for me. I’d want to review his code and make sure it wasn’t “polluting” my pet project. I’ve re-written this code enough times that I really don’t want to have to think about rewriting it again! If I was nervous about someone contributing code I was going to need to re-write from scratch just to have an extensible design, it might not even be worth it having them contribute in the first place. It might actually create MORE work in the long run. It sounds selfish, but if the goal of adding someone to the team is to provide a net positive effect, then having to re-write code that isn’t up to par might be a deal breaker.

But that’s not the case here. I have multiple years of experience working closely with this colleague on various projects. We align on coding practices but still have our own twist on things. We value the same things in “good” code (extensible and testable). We use many of the same design patterns in similar situations. I’ve seen enough of his code to know that most of the time my comments about it are “oh, have you considered…” and not “… you need to rewrite this”.

I can trust that what he wants to contribute will be aligned to my vision. I can also trust that new ideas he introduces are probably awesome new perspectives that I hadn’t thought of. I also trust that if we disagree on something, we’re open to discussing it and coming to a resolution. So trust in this case certainly removes the barrier to adding additional people to my hobby project.

Should You Form a Team?

While this was a pretty general article, I just wanted to get you thinking about opening up your hobby project(s) to other people to contribute. This is something I wish I had considered more seriously early on. Maybe I wouldn’t be re-writing my project for the millionth time!

Some general points:

  • You’re not a “worse” programmer for getting other people contributing. Good programmers need to be able to work with others!
  • Other people can have good ideas too! Sometimes, they’re even better than your own ideas 😉
  • Other people may have more knowledge or interest in areas that need to get work done that you just don’t want to do! Perfect!
  • You’ll want to try and find people passionate about working in the area your project focuses on
  • You’ll want to find people that you feel like you can trust so that you’re comfortable with them working on “your baby”
  • Getting help doesn’t mean your code must be “open source”. You can still share private repositories together (e.g. consider Bitbucket!)

So what do you think? Is your hobby project kind of stale because you’ve hit enough roadblocks and it’s time to get some more firepower to tackle it?

Share your thoughts below about your experiences with forming teams for your hobby projects!


Stitching – Combining Unity3D And Autofac


Before We Talk About Stitching…

In Unity3D, the scripts we write and attach to GameObjects inherit from a base class called MonoBehaviour (and yes, that says Behaviour with a U in it, not the American spelling like Behavior… Just a heads up). MonoBehaviour instances can be attached to GameObjects in code by calling the AddComponent method, which takes a type argument and returns the new instance of the attached MonoBehaviour that it creates.

This API usage means that:

  • We cannot attach existing instances of a MonoBehaviour to a GameObject
  • Unity3D takes care of instantiating MonoBehaviours for us (thanks Unity!)
  • … We can’t pass parameters into the constructor of a MonoBehaviour because Unity3D only handles parameterless constructors (boo Unity!)
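
To make this concrete, here’s a minimal sketch of the constraint (the component and dependency types here are invented purely for illustration):

using UnityEngine;

public interface IDamageCalculator
{
  int Calculate(int baseDamage);
}

public sealed class EnemyHealth : MonoBehaviour
{
  // Unity3D instantiates MonoBehaviours for us with a parameterless
  // constructor, so dependencies must be settable fields/properties.
  public IDamageCalculator DamageCalculator { get; set; }
}

public static class Example
{
  public static void Demo(GameObject gameObject, IDamageCalculator damageCalculator)
  {
    // var health = new EnemyHealth(damageCalculator); // <-- not possible!
    var health = gameObject.AddComponent<EnemyHealth>();

    // ... so we mutate the instance after Unity3D creates it.
    health.DamageCalculator = damageCalculator;
  }
}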

So what’s the problem with that? It kind of goes against some design patterns I’m a big fan of, where you pass your object’s dependencies in via the constructor. You can read my little primer about constructor parameter passing, dependency injection, and Autofac to learn more.

The challenge I’m trying to address is that my non-MonoBehaviour classes are all going to be setup to use constructor parameter passing as much as possible but the MonoBehaviour classes cannot. So I’d like to reduce the amount of disjoint coding styles as much as I can and make the MonoBehaviour classes feel like the rest of my stuff!

What Is “Stitching”?

Here’s where this little pattern I created called “Stitching” comes into play. Stitching involves using a class referred to as a Stitcher, whose single purpose is to take parameters in via a constructor and wire them up to either public properties or public fields (but I REALLY suggest using properties) on the MonoBehaviour that we instantiate through the GameObject.AddComponent() API.

The code ends up looking something like this:

public sealed class MyComponentStitcher
{
  private readonly IDependency _dependency;

  public MyComponentStitcher(IDependency dependency)
  {
    // take in our dependencies and save them as fields
    _dependency = dependency;
  }

  public MyComponent Stitch(GameObject gameObject)
  {
    // create the MonoBehaviour instance using the Unity3D API
    var componentInstance = gameObject.AddComponent<MyComponent>();

    // wire up our dependencies (assign our field to a property on the component)
    componentInstance.Dependency = _dependency;

    return componentInstance;
  }
}

Where you can see that:

  • We inject dependencies into the Stitcher’s constructor
  • We call AddComponent() with the component type we want on the object we want to “stitch” to
  • We mutate the component
  • We return the newly made component

How Do We Use Stitching In Practice?

Now that we see the pattern for how a Stitcher works, how do we actually use Stitching in practice? Let’s start with another example:

public sealed class SomeClass
{
  private readonly IMyComponentStitcher _stitcher;

  public SomeClass(IMyComponentStitcher stitcher)
  {
    _stitcher = stitcher;
  }

  public void MyMethod()
  {
    // create a new Unity3D game object
    var gameObject = new GameObject("My Game Object");

    // "stitch" our 
    var myComponent = _stitcher.Stitch(gameObject);

    // we can use some information that would have been injected into the constructor
    // this should print the injected value
    Debug.Log(myComponent.InjectedInfo);
  }
}

From this, you can see that:

  • We have a class called SomeClass following our constructor parameter passing paradigm
  • The method MyMethod()
    • Creates a new game object
    • Adds a MyComponent instance to our game object by calling the Stitch() method
    • Using our imagination and the example above, pretend our Stitcher implementation takes a parameter in its constructor to assign to the InjectedInfo property of our MonoBehaviour
    • Logs out the value of the InjectedInfo property found on our newly created instance

So What Makes Stitching Better?

You might feel like this is extra code right now, but this is where the power of Autofac comes into play. You can read my article about using Autofac with Unity3D for more information.

By creating a Stitcher, we can register it to our Autofac container. The Autofac container will then resolve any dependencies that our Stitcher requires for us. The net effect of this is that when we Stitch MonoBehaviours to GameObjects, we get what feels like Autofac resolving dependencies for our MonoBehaviours. We don’t need to mutate MonoBehaviour fields/properties all over our code to assign the dependencies the script needs to use. Instead, we treat the Stitcher class like a factory for our MonoBehaviour.
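
As a rough sketch (assuming an IMyComponentStitcher interface matching the Stitcher above, and a hypothetical SomeDependency implementation of IDependency), the wiring might look like this:

using Autofac;

public static class ContainerSetup
{
  public static IContainer BuildContainer()
  {
    var containerBuilder = new ContainerBuilder();

    // Autofac resolves IDependency and hands it to the Stitcher's
    // constructor; we never assign it manually.
    containerBuilder.RegisterType<SomeDependency>().As<IDependency>();
    containerBuilder
      .RegisterType<MyComponentStitcher>()
      .As<IMyComponentStitcher>()
      .SingleInstance();

    // SomeClass takes IMyComponentStitcher in its constructor, so
    // resolving SomeClass gives us stitching support for free.
    containerBuilder.RegisterType<SomeClass>().AsSelf();

    return containerBuilder.Build();
  }
}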

So in summary:

  • Stitching allows us to leverage Autofac for instantiating MonoBehaviours
  • Stitcher classes essentially become a factory class for our MonoBehaviours (with the side effect that they *must* mutate the GameObject that we need to attach the MonoBehaviour to)
  • Assignment of MonoBehaviour fields/properties for initialization lives in one spot, so the bad object-mutating code stays contained and feels hidden

Using Autofac With Unity3D


Why Consider Using Autofac With Unity3D?

I think using a dependency injection framework is really valuable when you’re building a complex application, and in my opinion, a game built in Unity is a great example of this. Using Autofac with Unity3D doesn’t need to be a special case. I wrote a primer for using Autofac, and in it I discuss reasons why it’s valuable and some of the reasons you’d consider switching to using a dependency container framework. Now it doesn’t need to be Autofac, but I love the API and the usability, so that’s my weapon of choice.

Building a game can result in many complex systems working together. Not only that, if you intend to build many games it’s a great opportunity to refactor code into different libraries for re-usability. If we’re practicing writing good code using constructor dependency passing with interfaces, then things really start to line up in favour of using a dependency injection framework.

Getting Set Up

At the end of my Autofac primer article, I provided a link to the NuGet package for Autofac. You’ll notice that there’s a version dependency for .NET 4.5, so if you’re not sure how to get Unity3D working with .NET 4.5, you’ll want to check this other article of mine. It’s very simple, so don’t worry!

As of writing this (I’m using Unity3D version 2018.1.1f1), there’s no native NuGet package support. I haven’t spent too much time investigating alternatives, but not to worry: I’ll explain a quick workaround. The TL;DR is that we need the binaries from the NuGet package to be loaded up by Unity3D, and we’ll miss out on the NuGet-y-ness for now. Not a huge deal, since we’ll still have Autofac support!

  • Start a new Visual Studio C# project
    • Ensure the .NET Framework version is at least 4.5 and, more specifically, is the version of .NET that you’d like to use in your Unity3D project
  • Open up the NuGet package manager in Visual Studio
  • Search for Autofac online in the package manager (it should be the same one I referred to above!)
  • Add this package to your Visual Studio project
  • Compile this Visual Studio project
  • Assuming you built in debug, go to the output folder (which is bin\Debug if you didn’t change anything from the default)
  • In the output folder, you’ll find “Autofac.dll”
  • You’ll want to add this into your Unity3D project’s “Assets” folder
    • I like nice folder hierarchies, so I’d suggest making a subfolder inside of “Assets” called “Third Party” or “Dependencies”… Something that’s obvious for what it means
    • Drop in the Autofac.dll file into there
  • Unity3D will add a corresponding *.meta file to go along with this

Great! We’re almost there. If you want to test it out, open up a script from Unity3D. This will launch a new Visual Studio instance if you haven’t opened up one for your Unity project yet. At the very top of your file you should be able to type:

using Autofac;

And the namespace should resolve! If not, sometimes this takes Unity3D a refresh operation to regenerate the project file on disk, so if you switch to Unity3D again and it starts doing some processing, switching back to Visual Studio might resolve this.

Using Autofac With Unity3D

Up until this point, we’ve proven we can reference Autofac. I’m not going to explain all the ins and outs for how you’ll want to organize your Autofac initialization in this post, but we can walk through a quick example!

  • Pick a game object on your scene
  • Add a new C# script to it
    • Call it whatever you’d like, but make sure you know how to open it
  • … now go open it in Visual Studio 🙂
  • We should have a method in there called Start()
    • If not, feel free to add it:
    • private void Start()
      {
        // TODO: we'll add stuff here
      }
  • Let’s use this code to make a new class that you can put inside the same script file for now:
    • public sealed class MyAutofacObject
      {
      
        public MyAutofacObject()
        {
          Debug.Log("Constructor for our object!");
        }
      
        public void DoThing()
        {
          Debug.Log("Test!");
        }
      }
  • Inside this Start() method, let’s try doing something VERY simple to prove Autofac works!
    • var containerBuilder = new Autofac.ContainerBuilder();
      containerBuilder.RegisterType<MyAutofacObject>().SingleInstance();
      
      var container = containerBuilder.Build();
      var instance = container.Resolve<MyAutofacObject>();
      
      instance.DoThing();

Now if we run our game, here’s what should happen:

  • The script attached to the game object should run
  • The Start() method on the script should be the first thing that goes
  • The code we added should:
    • Make a new ContainerBuilder
    • Register our MyAutofacObject type as a single instance
    • Build the container
    • Resolve an instance of our type
    • Log out a message saying it’s in the constructor
    • Log out a message that says Test!

And voila! It’s simple, but it should demonstrate that Autofac is working!

Next Steps

This is a very contrived example of using Autofac with Unity3D. It proves that the code can be run, but it doesn’t do too much that’s useful. There are going to be many considerations you’ll need to make for how you want to organize your dependencies, register your classes/interfaces, and so on.

I’ll continue to add into this Unity3D series of posts, but let me know what else you’d like to know about using Autofac with Unity3D! I’d be happy to try and answer, or even create an article to help explain.

Thanks!


Part 1 – Exploring Graphs and Trees


Graphs and Trees to Start

I was chatting with my colleague about generating maps for a 2D role-playing game the other day after getting super excited explaining that I was picking ProjectXyz back up and looking into Unity3D more. He was expressing interest in algorithms for procedural generation and in storing data in trees or graphs as an optimal data structure for the scenario we were going over.

It stuck with me though. I’ve been putting a lot of thought into game state management and wanting to address it by using a generic layering/stacking approach. By that, I mean that I want to find a way to take base game state, allow mods or plugins to overlay their state, allow game patches to overlay their state, and then save game data to be overlaid on top of all of that. Conceptually, I believe I can create a generic system for doing this stacking of game data, which I’ll be referring to as my Delta State Algorithm, but I’ve been struggling with a starting point.

But the trees and graph concept stuck with me because the properties of trees and graphs that my colleague was referring to seemed to line up with how I envisioned this algorithm working. A graph or a tree just might be the right structure to represent this game state and the deltas between them, so I wanted to start mocking up some ideas.

A Map is a Game Object… And so is Everything Else?

Confusing subheader? You bet. But it’s because it was an odd realization.

I’ve been re-writing the same back end for an RPG for years now. Literally, for probably at least 10 years I’ve been starting, progressing on, and scrapping such a game. But one thing I borrowed from Unity3D development was the idea of components and component-based design. I was far too married to object hierarchies, and these would eventually result in me having to refactor a TON of code. If you couple that with the fact I wasn’t using dependency injection properly (see this little primer article), it meant my refactoring efforts got to the point where it just made sense to start over.

So I was working with loading maps from TMX or Tiled map formats, and after getting these to load I was putting some thought into how I wanted to save/load my game objects. My game objects are essentially containers of components, and the containers have no properties at all. What does that mean in practice? My Actor/NPC game object class is the exact same class as an equippable item! It’s just “GameObject”. The difference between them is the variation in the components that I attach to them. In my mind, I figured I’d need some sort of hierarchical serialization paradigm where my game object would serialize itself, then the components would serialize themselves within that process, and if I could have components with components… You get the idea. But this could sort of be represented by a tree structure.
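
As a rough sketch of that idea (the types here are heavily simplified and purely illustrative):

using System.Collections.Generic;

public interface IComponent
{
}

// Every game object is just a container of components; an NPC and an
// equippable item differ only in which components they carry.
public sealed class GameObject
{
  private readonly List<IComponent> _components;

  public GameObject(IEnumerable<IComponent> components)
  {
    _components = new List<IComponent>(components);
  }

  public IReadOnlyCollection<IComponent> Components => _components;
}

// Some components can contain more game objects (an inventory holding
// items, a map holding placed objects), which is what makes the whole
// world state naturally tree-shaped.
public sealed class InventoryComponent : IComponent
{
  public List<GameObject> Items { get; } = new List<GameObject>();
}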

After the discussion with my colleague about trees and graph structures for maps, I connected some dots. Really, the tiles and placed objects on a map were just… game objects as well. And that means that my thought process behind hierarchical serialization aligns well with some of the properties he was referring to for maps as graphs/trees. This suggests that “world state” for a game where everything is a “GameObject” with components attached (where some of those components can contain other GameObjects) could be represented by a graph or tree data structure.

What About Tabulated and Listed Data Sets?

So we’ve come up with a starting point for “World Data” being represented by a graph. However, if we go back to my initial goal, I want plugins/mods/patches to be able to get applied to base “Game State” data.

What’s the difference between world and game state data?

These are just my terms so I should try and clarify. The world data is the data the player sees in the playable world. It’s the status of the objects and the maps they are interacting with. Game state data is the superset of this data and all of the other lookup data for loading content into the game. So this OTHER data includes things like the possible stats players and items can have, the types of monsters that can spawn (not the instances of the spawned monsters, because that’s the world state data), the possible dialog entries, etc… This type of data is really well represented in a relational database where we can create lists of data entries.

But if we take a step back, can’t that relational database content also be represented as a graph?

Could I have:

  • A node called “Stats” as a parent to a “Life” node that is a parent to a “StatRange” node, and “LocalizedNameResource” node?
  • A node called “BaseItems” as a parent to a “Sword” node that is a parent to a “DamageRange” node, “RequiredStats” node, and “EquippableSlots” node?
  • A node called “MonsterSpawns” as a parent to a “Skeleton” node that is a parent to…

You get the idea. I generally think about this data in a relational database paradigm (and perhaps it’s much faster to edit the data this way or look it up this way). However, if I can transform this data into the same type of graph structure I want to use for world state, then it means my delta state algorithm would work on not only world state data but also ALL game state data.
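
Here’s a tiny sketch of what such a transformation might produce (the node type and values are hypothetical):

using System.Collections.Generic;

// A generic node that could hold either world state or the "relational"
// lookup data described above.
public sealed class StateNode
{
  public StateNode(string name, string value = null)
  {
    Name = name;
    Value = value;
  }

  public string Name { get; }
  public string Value { get; }
  public List<StateNode> Children { get; } = new List<StateNode>();

  // Stats -> Life -> { StatRange, LocalizedNameResource }
  public static StateNode BuildStatsExample()
  {
    var life = new StateNode("Life");
    life.Children.Add(new StateNode("StatRange", "0-100"));
    life.Children.Add(new StateNode("LocalizedNameResource", "stat_life_name"));

    var stats = new StateNode("Stats");
    stats.Children.Add(life);
    return stats;
  }
}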

So Everything in the Graph? Always?

This is an important question, and I think not explicitly thinking about it was causing me some headaches and keeping me from progressing further in exploring my delta state algorithm.

I mentioned that some of the game state data feels better represented in a relational database. It might be much faster to add/remove records or look them up when the data is basically just a list, contrasted with a graph. But why not have multiple storage mechanisms?

The delta state algorithm serves a very important goal: Layer deltas of data on top of a base set to get a new set.

That’s it though. So once I have that final transform of data that represents the new game state, who says I can’t transform it into a different format?

Maybe the world state is still better represented as a graph. So things like maps with tiles and game objects in the world still are super efficient to work with in graph form. Maybe things like dialog entries or definitions of player stats are better represented in a relational database… I could populate a database with the necessary data based on the graph data though. And the really cool part? A save game, which is probably the delta that changes the most, probably shouldn’t have the extra game state data in it (likely just has world state deltas represented). So… if a relational database is better suited for the remaining game state data, this could get generated and cached, and then only regenerated when different mods/plugins/patches are applied!

Concluding Thoughts

To summarize some of the major thoughts discussed in this article:

  • Game objects with components can be represented by nodes in a graph just like maps with tiles on them.
  • Other game state that feels better suited for relational databases can still be transformed into nodes in a graph for working with deltas.
  • Once a final game state is calculated (based on base state + deltas for mods/plugins/patches + delta for save game), data can be transformed back into the most suitable form to work with.
  • Data does NOT have to exist in only one form! We can transform it as we need to work with it more effectively.

While this article was a bit of a brain dump, this is the thought process that got the ball rolling for starting to design this delta state algorithm. In my next post in this series, I want to take a quick look at undo/redo approaches used in applications and how they might be similar (or different) to how the delta state algorithm might work.


Dependency Injection with Autofac – A Primer


Before Autofac…

I’ve written before about IoC and dependency injection, but those are older posts and my perspective and experience with these topics have fortunately been growing. I think they’re incredibly important when you’re building complex systems, but the concepts can offer some benefits in all of your programming! When you get in the habit of practicing this kind of thing, you can get some pretty flexible code… for free.

So a quick recap on what I mean by dependency injection here… I’m mostly focused on passing interfaces into constructors (and yes, I’m going to be using C# terminology as I do in most of my programming examples, but these concepts are generally the same in other languages). The benefits here:

  • You can write implementations that don’t depend on other implementations… Just an API.
  • Depending on an interface instead of an implementation means you can write mockable code for your unit tests. (I’ll follow up with a post on this to help provide examples)
  • You can swap out functionality by providing a different implementation of an interface and NOT re-writing core code
    • This can be a very powerful refactoring tool
    • This can allow creation of new functionality in a system simply by adding one small class instead of re-writing code

So that’s all good and well… So what do we use Autofac for?

When you might want to take the leap to Autofac

So you’ve been writing code now using interfaces in your constructor parameters. You’ve got nice modular code using composition. You have unit tests. Things are great.

There comes a point where you decide you need to break open a class in the depths of your system and provide it a new interface as part of the constructor. This is in line with the constructor parameter passing paradigm (nice alliteration, woo!) you’ve been using, so it feels good. You modify your constructor to take the new interface parameter. You change up your method to call this new interface’s API. You update your tests. It works!

Now you need to make the rest of your application work though, and it turns out because this class is created so deep down in your system, you need to find a way to pass this new interface implementation allllllllll the way down. And suddenly, you find you need to break open 10 other classes to pass this interface into the constructor. It’s a simple change in that it’s the same change in 10 spots… But it’s 10 spots. And it’s tedious. And you got lucky because you own this code and you don’t need to worry about breaking the constructor API for other people.

But it might be time to look into something like Autofac at this point because it can make this problem disappear for you.

Enter Autofac!

Autofac is awesome. The end.

But seriously, Autofac is one example of a dependency container framework. The idea with a framework like this is that programmers can register things to the container and then at a later point these things can be resolved. So you could:

  • Decide to take a particular implementation and register it so that it can be resolved by its interface
  • Decide if you want a registration to act like a singleton (and remember, a singleton does NOT have to have global access… it just means literally a single instance)
  • Run callbacks when an instance is created
  • … and so much more
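
Here’s a minimal sketch of that register-then-resolve flow (the logger types are invented just for illustration):

using System;
using Autofac;

public interface ILogger
{
  void Log(string message);
}

public sealed class ConsoleLogger : ILogger
{
  public void Log(string message) => Console.WriteLine(message);
}

public static class Program
{
  public static void Main()
  {
    var builder = new ContainerBuilder();

    // Resolvable by its interface, as a singleton (a single instance,
    // but no global access), with a callback when it gets created.
    builder.RegisterType<ConsoleLogger>()
      .As<ILogger>()
      .SingleInstance()
      .OnActivated(e => Console.WriteLine("Logger created!"));

    using (var container = builder.Build())
    {
      var logger = container.Resolve<ILogger>();
      logger.Log("Hello from Autofac!");
    }
  }
}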

In my opinion, the two major benefits of Autofac as they relate to this example are:

  • You can better organize the top level of your application to wire up specific implementations to use in your code
  • … Autofac can magically resolve the dependencies for you so it solves that nasty problem of passing down dependencies via constructors to deep areas of your code

You’ll need to be careful that you don’t abuse the container though! It’s considered an anti-pattern to use the container to manually resolve dependencies across various areas of your application (generally this is referred to as the Service Locator (anti)Pattern, but people go back and forth on why it’s good or bad). The “proper” use case is to resolve your single entry point class in one spot, call the methods you need on your entry point class, and let Autofac do its magic to resolve all of your registered dependencies.

Where Can I Get Autofac?

This is the easy part! You can use the NuGet package manager in Visual Studio to find the right package for your .NET framework dependency. Check it out at the NuGet Gallery!

What’s Next?

I have some examples I’d like to write about next for using Autofac including:

  • Using Modules for Organizing Code Dependencies
  • Patterns for Dynamically Resolving Modules Across Assemblies
  • How to use Autofac with Unity3D

But I’d love to hear what you want to know more about! Comment and let me know, and I’ll see what I can do.


Unity3D and .NET 4.x Framework


Unity3D Default .NET Framework

I recently wrote that I wanted to start writing more Unity3D articles because I’m starting to pick up more Unity3D hobby work. It felt like a good opportunity to share some of my learnings so that anyone searching across the web might stumble upon this and get answers to the same problems I had.

Unity3D, as of 2018.1.1f1 (which is the version I’m currently using), still defaults to using .NET 3.5 as the framework version. Nothing wrong with that either. I’m sure they have their reasons for staying at that version, probably because of Mono and cross-platform support if I were to guess, so I’m not complaining. For reference, this setting in Unity3D is referred to as “Scripting Runtime Version”, so if you’re googling more about this later, that’s what Unity calls it. For the libraries I was building to use as a game framework, I was using .NET 4.6 and discovered I was going to have a challenge getting them working in Unity3D.

If you want to see what your setting is currently set at, you need to check out the “Player” settings. This was kind of buried in the UI for me so I didn’t know it was a thing that could be adjusted. In Unity3D 2018.1.1f1, click Edit->Project Settings->Player. Here’s what it looks like:

Unity3D - Player Settings

In Unity, click Edit->Project Settings->Player

From there, you’re going to get “PlayerSettings” in your Inspector tab. You’ll need to expand the “Other Settings” to see your scripting runtime version:

Unity3D - Other Settings

“Other Settings” accordion control in PlayerSettings Inspector tab

Once you expand that, here’s the setting you’re interested in:

Unity3D - Scripting Runtime Version

Scripting Runtime Version – The selected .NET version Unity will use

Switching Unity3D to .NET 4.x

Now that you know where the setting is… it’s pretty easy 🙂

Unity3D - Scripting Runtime Version

Use the dropdown to pick which .NET framework version you’d like to use.

You can read more about this setting over at the official Unity3D documentation pages:

https://docs.unity3d.com/Manual/ScriptingRuntimeUpgrade.html

This outlines what things are affected in different platforms and scenarios so YOU SHOULD READ IT to understand what will change.

Hope that makes things a bit easier for you to get up and running with .NET 4.x assemblies in Unity3D!


Delta State Algorithm Creation Series


Delta State Algorithm Motivation

This post will act as the table of contents for an algorithm I’m developing for calculating deltas between state for generic sets of data.

I figured this would be an interesting series to write about so I can document my thought process, trials, errors, and successes. At the end of this I plan to share working code that implements this algorithm so that you can use it in your own work.

Now that I’ve been diving more into Unity3D development for my hobby programming, I’m getting to a point in game development where I need to manage state for data in a way that allows patches of state to be applied in a layered fashion. A couple of examples of this include:

  • Applying save game state to a base game state
  • Applying a patch to a base game state
    • Optionally allow save game state to be applied on top of this and completely override as necessary
    • Optionally allow save game state to be applied on top of this but “retroactively patch” existing save game data

I believe that I can design an algorithm that’s relatively simple that should allow the layering to work, but I have a lot of potential corner cases to figure out (especially when considering retroactively vs not for applying game patches). I’ll be exploring data structures to use and technologies to leverage for persisting this state.

It should be pretty exciting!

Delta State Algorithm Series

Just a reminder that if you’re interested in this series, you’ll want to use this page as the starting point. As I continue to add new content to this series, I’ll come back and update this page with links to the parts of the series. Ideally at the end I’ll have some working code I can share!

  • Part 1 – Exploring Graphs and Trees (TBD)
  • Part 2 – Undo/Redo Patterns (TBD)
  • Part 3 – Generic State Serialization to a Graph (TBD)
  • Part 4 – … TBD

API: Top-Down? Bottom-Up? Somewhere in the Middle?

A Quick Brain-Dump on API Design

I’ll keep this one pretty brief as I haven’t totally nailed down my thoughts on this. I still thought it was worth a quick little post:

When you’re creating a brand new API to expose some functionality of a system, should you design it with a strong focus on how the internals work? Should you ignore how internals work and make it as easy to consume as possible? Or is there an obvious balance?

I find myself trying to answer this question without ever explicitly asking it. Any time I’m looking to extend or connect systems, this is likely to come up.

Most Recently…

Most recently I started trying to look at creating an API over AMQP to connect my game back-end to a Unity3D front-end. I had been developing the back-end for a little while, so I had a pretty good idea of how things needed to work from that perspective. The front-end? Not so much, really. I knew some basic actions that I was considering, so I tried coding up an early API for them.

A lot of my focus was around how I was going to implement the code on the back-end to make this API work. It resulted in some of the API calls looking a little bit gross. But the idea was that if I settled on an API that would make the back-end easy to implement, I could get it up and running faster.

After a little while, I feel like my API isn’t getting any cleaner from a consumer perspective, and funny enough, it’s not actually as easy to implement as I was hoping on the back-end. Which had me reflect on a work example…

Once Upon a Time at Work…

Well, it was only a couple of months ago, really. I was working with a colleague on integrating a new system into an existing code base. We decided we wanted to approach the API problem from a consumer perspective. We said “let’s make this as easy to call and use as possible so that people ACTUALLY want to use it”.

We set out with this mission, and created a pretty simplistic API. The challenge? There was a lot of heavy lifting and a bit of voodoo going on behind the scenes. But you know what? We hid the magic in one spot of the code (instead of having ugly stuff scattered all over the code base) and it ended up being a very usable API.

So…

So does this consumer-first, top-down approach to API design always work? I’m not sure. Some similarities/differences in the scenarios:

  • In my current situation, I have a back-end implemented and very minimal code implemented on the caller side. In the work example, we had nothing implemented on the back-end, and a ton of code implemented where the caller side would be.
  • In both examples, at least one of the caller side or back-end side was reasonably well understood. For my game, the back-end was pretty well understood. For work, the caller side was pretty well understood and we had experience with what we’d call a “failed” back-end implementation (that we were actually setting out to redesign).
  • The work example was a relatively small subset for an API, but the game example was about to be a very specific implementation that I’d need to adapt into a pattern for all messaging in my AMQP system

So there’s a few things to consider there. I think I’m at the point in my game where I’d like to revisit how I’m forming this API and try it from a client-first perspective. Now that I know some of the catches, maybe I’ll shed some new light!

How do you approach API design?


What Makes Good Code? – Should Every Class Have An Interface? Pt 1


What’s An Interface?

I mentioned in the first post of this series that I’ll likely be referring to C# in most of these posts. I think the concept of an interface in C# extends to other languages–sometimes by a different name–so the discussion here may still be applicable. Some examples in C++, Java, and Python to get you going for comparisons.

From MSDN:

An interface contains definitions for a group of related functionalities that a class or a struct can implement.
By using interfaces, you can, for example, include behavior from multiple sources in a class. That capability is important in C# because the language doesn’t support multiple inheritance of classes. In addition, you must use an interface if you want to simulate inheritance for structs, because they can’t actually inherit from another struct or class.

It’s also important to note that an interface decouples the definition of something from its implementation. Decoupled code is, in general, something that programmers are always after. If we refer back to the points I defined for what makes good code (again, in my opinion), we can see how interfaces should help with that.

  • Extensibility: Referring to interfaces in code instead of concrete classes allows a developer to swap out implementations more easily (e.g. extending support for different data providers in your data layer). They provide a specification to be met should a developer want to extend the code base with new concrete implementations.
  • Maintainability: Interfaces make refactoring an easier job (when the interface signature doesn’t have to change). A developer gets the flexibility of modifying an implementation that already exists or creating a new one, provided that it meets the interface.
  • Testability: Referring to interfaces in code instead of concrete classes allows mocking frameworks to leverage mocked objects so that true unit tests are easier to write.
  • Readability: I’m neutral on this. I don’t think interfaces are overly helpful for making code more readable, but I don’t think they inherently make code harder to read.

I’m only trying to focus on some of the pros here, and we’ll use this sub-series to explore if these hold true across the board. So… should every class have a backing interface?

An Example

Let’s walk through a little example. In this example, we’ll look at an object that “does stuff”, but it requires something that can do a string lookup to “do stuff” with. We’ll look at how using an interface can make this type of code extensible!

First, here is our interface that we’ll use for looking up strings:

public interface IStringLookup
{
    string GetString(string name);
}

And here is our first implementation of something that can look up strings for us. It’ll just look up an XML node and pull a value from it. (How it actually does this stuff isn’t really important for the example, which is why I’m glossing over it):

public sealed class XmlStringLookup : IStringLookup
{
    private readonly XmlDocument _xmlDocument;

    public XmlStringLookup(XmlDocument xmlDocument)
    {
        _xmlDocument = xmlDocument;
    }

    public string GetString(string name)
    {
        return _xmlDocument
            .GetElementsByTagName(name)
            .Cast<XmlElement>()
            .First()
            .InnerText; // note: XmlNode.Value is null for elements, so use InnerText
    }
}

This will be used to plug into the rest of the code:

private static int Main(string[] args)
{
    var obj = CreateObj();
    var stringLookup = CreateStringLookup();
    
    obj.DoStuff(stringLookup);
 
    return 0;
}
 
private static IMyObject CreateObj()
{
    return new MyObject();
}
 
private static IStringLookup CreateStringLookup()
{
    return new XmlStringLookup(new XmlDocument());
}
 
public interface IMyObject
{
    void DoStuff(IStringLookup stringLookup);
}
 
public class MyObject : IMyObject
{
    public void DoStuff(IStringLookup stringLookup)
    {
        var theFancyString = stringLookup.GetString("FancyString");
        
        // TODO: do stuff with this string
    }
}

In the code snippet above, you’ll see our Main() method creating an instance of “MyObject” which is the thing that’s going to “DoStuff” with our XML string lookup. The important thing to note is that the DoStuff method takes in the interface IStringLookup that our XML class implements.

Now, XML string lookups are great, but let’s show why interfaces make this code extensible. Let’s swap out an XML lookup for an overly simplified CSV string lookup! Here’s the implementation:

public sealed class CsvStringLookup : IStringLookup
{
    private readonly StreamReader _reader;
 
    public CsvStringLookup(StreamReader reader)
    {
        _reader = reader;
    }
 
    public string GetString(string name)
    {
        string line;
        while ((line = _reader.ReadLine()) != null)
        {
            var split = line.Split(',');
            if (split[0] != name)
            {
                continue;
            }
 
            return split[1];
        }
 
        throw new InvalidOperationException("Not found.");
    }
}

Now to leverage this class, we only need to modify ONE line of code from the original posting! Just modify CreateStringLookup() to be:

private static IStringLookup CreateStringLookup()
{
    return new CsvStringLookup(new StreamReader(File.OpenRead(@"path\to\some\file.txt")));
}

And voila! We’ve been able to extend our code to use a COMPLETELY different implementation of a string lookup with almost no code change. You could make the argument that if you needed to modify the implementation for a buggy class, then as long as you were adhering to the interface, you wouldn’t need to modify much surrounding code (just like this example). This would be a point towards improved maintainability in code.

“But wait!” you shout, “I could have done the EXACT same thing with an abstract class instead of the IStringLookup interface you big dummy! Interfaces are garbage!”

And you wouldn’t be wrong about the abstract class part! It’s totally true that IStringLookup could instead have been an abstract class like StringLookupBase (or something…) and the benefits would still apply! That’s a really interesting point, so let’s keep that in mind as we continue on throughout this whole series. The little lesson here? It’s not the interface that gives us this bonus, it’s the API boundary and level of abstraction we introduced (something that does string lookups). Both an interface and an abstract class happen to help us a lot here.
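
For completeness, here’s roughly what that abstract class variant could look like (StringLookupBase is a hypothetical name, but the contract matches IStringLookup):

public abstract class StringLookupBase
{
    public abstract string GetString(string name);
}

public sealed class XmlStringLookup : StringLookupBase
{
    private readonly XmlDocument _xmlDocument;

    public XmlStringLookup(XmlDocument xmlDocument)
    {
        _xmlDocument = xmlDocument;
    }

    // Same implementation as before; only the abstraction changed.
    public override string GetString(string name)
    {
        return _xmlDocument
            .GetElementsByTagName(name)
            .Cast<XmlElement>()
            .First()
            .InnerText;
    }
}

The CreateStringLookup() factory method would just return StringLookupBase instead, and the swap-out trick works exactly the same way.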

Continue to Part 2


What Makes Good Code? – Patterns and Practices Series


It’s been a while since I’ve had a programming oriented post, and I figured this would be a great topic to write about. It’s been a topic I’ve been thinking about more and more over the last year and I’ve been experimenting with certain patterns and practices to see if certain things actually make code “better”. A lot of the information presented in this series will be completely based on my opinion, but I’ll try to back up my opinion with as many concrete examples as I can. If you have a differing opinion, I’d love to hear it in the comments.

I’d also like to call out that much of what I’ll be discussing is in the context of object oriented programming. To be specific, most of the examples will be in C#. If this isn’t something you’re actively doing, then don’t worry! It would be great to hear if you see parallels in the work you’re doing.

What Is “Good”?

So let’s start by defining what “good” or “better” means (and I’ll leave this high level and we can dive in afterwards)…

  • Extensible: Re-writing of code is minimized when adding more functionality. It feels straightforward to extend the code. Developers won’t do “the wrong thing” when trying to extend the code.
  • Maintainable: Many of the same qualities that go with extensible. Fixing bugs or making tweaks involves touching only a few places in the code base.
  • Testable: It’s straightforward to write coded tests that can exercise functionality of the code under test.
  • Readable**: Developers should be able to read code and understand what it’s doing. The flow of execution within and between different modules of code shouldn’t cause anyone a headache.

All of these describe qualities of the code. I could argue that you could write very maintainable, extensible, and testable code and it would be awful if it didn’t actually solve the customers’ needs. I’d like to try and leave this aspect out of the discussion, and focus on the actual patterns/practices that we can implement in code. I’d love to write something separate about writing code that actually solves a problem versus code that may potentially solve some possible problem at some potential point in the future (and yes, all of the uncertainty in that sentence was on purpose).

Why Should We Care?

It’s kind of a funny question, I guess, but I think it’s a fair one to ask. Why should we care what good code is? If we agree on the definition of good code, so what? Maybe those criteria for good code are obvious to some people. Maybe they weren’t so obvious to some people, but they still don’t really care. So why the fuss about what good code is?

If you’ve read other posts on DevLeader or you know me personally, you may know that I started work at a digital forensics startup a few years back in Waterloo, Ontario. What you may not know is that that startup has grown significantly, and based on the accolades we’ve received as an entire organization, we’re actually one of the fastest growing software companies in North America in terms of revenue. I’m not trying to do the horn tooting without a reason though… I think one of the reasons we’ve been able to have such great success on the development side of things is because we’re always trying to improve.

We didn’t always write “good” code, and we certainly don’t always write “good” code right now. However, we’re always trying to figure out how we can get better. So why might WE care as developers at our office? I think it comes down to trade-offs.

In the real world of software development, you’re often faced with trade-offs. You can get a product out faster if you don’t test it. Or you can get a product out and test it, but maybe you had to take a lot of shortcuts in the code. Or maybe you can get all the features in the product and tested, but you can’t hit the deadline. There’s countless more combinations of trade-offs that we make in real software development every day. I think that understanding what “good” code means allows a team to recognize just what kinds of corners they’re cutting sometimes. When teams talk about introducing “tech debt”, there’s a better grasp around what type of debt you’re introducing. If you need to get some extra features and bug fixes in but they’re getting added with some tech debt, what could that end up meaning?

Even if you don’t agree with what my criteria are for good code, I think it’s important that you establish this within your team. If everyone can agree on what good code is, it makes constructive conversations about different implementations much easier. Just because some new code is different doesn’t instantaneously make it scary and/or wrong… Maybe it’s a new way that emphasizes one of the criteria for good code a bit more than another implementation might. Perhaps it doesn’t… You can at least refer back to a reference point for what “good” is.

It’s also important to recognize that the criteria for “good” may change over time. Revisiting the definition periodically might allow you to recognize when your team is redefining what “good” means to them.

Enough Rambling! Where’s This Going?

Right. Okay. I want to write some follow up posts that will focus on a few of the following items:

  • Does every class need an interface?
  • Does every class need a factory that can create it?
  • Unit tests versus functional tests
  • Is there a benefit to only passing interfaces to objects around?
  • Is there a way to enforce that interfaces HAVE to be passed around?
  • Is the single responsibility principle even helpful?
  • Mutability and immutability
  • Is it always good to follow patterns and practices in all scenarios?

And after I feel that I’ve covered enough on these topics, I’d like to circle back and revisit what “good” code is. It’ll be cool to see if the definition changes at all!

**Readability was an afterthought, and I’m not sure how… I started writing the first example post for this series and QUICKLY realized I had omitted this.

