Using Autofac With Unity3D


Why Consider Using Autofac With Unity3D?

I think using a dependency injection framework is really valuable when you’re building a complex application, and in my opinion, a game built in Unity is a great example of this. Using Autofac with Unity3D doesn’t need to be a special case. I wrote a primer for using Autofac, and in it I discuss reasons why it’s valuable and some of the reasons you’d consider switching to using a dependency container framework. Now it doesn’t need to be Autofac, but I love the API and the usability, so that’s my weapon of choice.

Building a game can result in many complex systems working together. Not only that, if you intend to build many games it’s a great opportunity to refactor code into different libraries for re-usability. If we’re practicing writing good code by passing dependencies into constructors as interfaces, then things really start to line up in favour of using a dependency injection framework.

Getting Set Up

At the end of my Autofac primer article, I provided a link to the NuGet package for Autofac. You’ll notice that there’s a version dependency for .NET 4.5, so if you’re not sure how to get Unity3D working with .NET 4.5, you’ll want to check out this other article of mine. It’s very simple, so don’t worry!

At the time of writing, using Unity3D 2018.1.1f1, there’s no native NuGet package support. I haven’t spent too much time investigating alternatives, but not to worry. I’ll explain a quick workaround. The TL;DR is that we need the binaries from the NuGet package to be loaded up by Unity3D, and we’ll miss out on the NuGet-y-ness for now. Not a huge deal since we’ll still have Autofac support!

  • Start a new Visual Studio C# project
    • Ensure that the .NET framework is at least 4.5 and, more specifically, the version of .NET that you’d like to use in your Unity3D project
  • Open up the NuGet package manager in Visual Studio
  • Search for Autofac online in the package manager (it should be the same package I referred to above!)
  • Add this package to your Visual Studio project
  • Compile this Visual Studio project
  • Assuming you built in debug, go to the output folder (which is bin\Debug if you didn’t change anything from the defaults)
  • In the output folder, you’ll find “Autofac.dll”
  • You’ll want to add this into your Unity3D project’s “Assets” folder
    • I like nice folder hierarchies, so I’d suggest making a subfolder inside of “Assets” called “Third Party” or “Dependencies”… Something that’s obvious for what it means
    • Drop the Autofac.dll file in there
  • Unity3D will add a corresponding *.meta file to go along with this

Great! We’re almost there. If you want to test it out, open up a script from Unity3D. This will launch a new Visual Studio instance if you haven’t opened up one for your Unity project yet. At the very top of your file you should be able to type:

using Autofac;

And the namespace should resolve! If not, sometimes it takes Unity3D a refresh operation to regenerate the project file on disk, so if you switch back to Unity3D and it starts doing some processing, switching to Visual Studio again might resolve this.

Using Autofac With Unity3D

Up until this point, we’ve proven we can reference Autofac. I’m not going to explain all the ins and outs for how you’ll want to organize your Autofac initialization in this post, but we can walk through a quick example!

  • Pick a game object on your scene
  • Add a new C# script to it
    • Call it whatever you’d like, but make sure you know how to open it
  • … now go open it in Visual Studio 🙂
  • We should have a method in there called Start()
    • If not, feel free to add it:
    • private void Start()
      {
        // TODO: we'll add stuff here
      }
  • Let’s use this code to make a new class that you can put inside the same script file for now:
    • public sealed class MyAutofacObject
      {
      
        public MyAutofacObject()
        {
          Debug.Log("Constructor for our object!");
        }
      
        public void DoThing()
        {
          Debug.Log("Test!");
        }
      }
  • Inside this start method, let’s try doing something VERY simple to prove Autofac works!
    • var containerBuilder = new Autofac.ContainerBuilder();
      containerBuilder.RegisterType<MyAutofacObject>().SingleInstance();
      
      var container = containerBuilder.Build();
      var instance = container.Resolve<MyAutofacObject>();
      
      instance.DoThing();

Now if we run our game, here’s what should happen:

  • The script attached to the game object should run
  • The Start() method on the script should be the first thing that goes
  • The code we added should:
    • Make a new ContainerBuilder
    • Register our MyAutofacObject type as a single instance
    • Build the container
    • Resolve an instance of our type
    • Log out a message saying it’s in the constructor
    • Log out a message that says Test!

And voila! It’s simple, but it should demonstrate that Autofac is working!

Next Steps

This is a very contrived example of using Autofac with Unity3D. It proves that the code can be run, but it doesn’t do too much that’s useful. There are going to be many considerations you’ll need to make for how you want to organize your dependencies, register your classes/interfaces, and so on.
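As one possible direction, Autofac supports grouping registrations into modules, which can help keep things organized as a game grows. Here’s a minimal sketch of that idea; the combat-related types below are made up purely for illustration and aren’t from any real project:

using Autofac;

// Hypothetical subsystem types, purely for illustration.
public interface IDamageCalculator
{
    int Calculate(int baseDamage);
}

public sealed class DamageCalculator : IDamageCalculator
{
    public int Calculate(int baseDamage) => baseDamage * 2;
}

// Grouping related registrations into an Autofac module keeps the bootstrap
// code small and lets each subsystem own its own registrations.
public sealed class CombatModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterType<DamageCalculator>()
            .As<IDamageCalculator>()
            .SingleInstance();
    }
}

// In a bootstrap script's Start(), you could then register whole modules:
//   var containerBuilder = new ContainerBuilder();
//   containerBuilder.RegisterModule(new CombatModule());
//   var container = containerBuilder.Build();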

I’ll continue to add into this Unity3D series of posts, but let me know what else you’d like to know about using Autofac with Unity3D! I’d be happy to try and answer, or even create an article to help explain.

Thanks!


Part 1 – Exploring Graphs and Trees


Graphs and Trees to Start

I was chatting with my colleague the other day about generating maps for a 2D role playing game, after getting super excited explaining that I was picking ProjectXyz back up and looking into Unity3D more. He was expressing interest in algorithms for procedural generation and storing data in trees or graphs as an optimal data structure for the scenario we were going over.

It stuck with me though. I’ve been putting a lot of thought into game state management and wanting to address it by using a generic layering/stacking approach. By that, I mean that I want to find a way to take base game state, allow mods or plugins to overlay their state, allow game patches to overlay their state, and then save game data to be overlaid on top of all of that. Conceptually, I believe I can create a generic system for doing this stacking of game data, which I’ll be referring to as my Delta State Algorithm, but I’ve been struggling with a starting point.

But the trees and graph concept stuck with me because the properties of trees and graphs that my colleague was referring to seemed to line up with how I envisioned this algorithm working. A graph or a tree just might be the right structure to represent this game state and the deltas between them, so I wanted to start mocking up some ideas.

A Map is a Game Object… And so is Everything Else?

Confusing subheader? You bet. But it’s because it was an odd realization.

I’ve been re-writing the same back end for an RPG for years now. Literally, it’s probably been at least 10 years of starting, progressing, and scrapping such a game. But one thing I borrowed from Unity3D development was the idea of components and component-based design. I was far too married to object hierarchies, and these would eventually result in me having to refactor a TON of code. If you couple that with the fact I wasn’t using dependency injection properly (see this little primer article), it meant my refactoring efforts got to the point where it just made sense to start over.

So I was working with loading maps from TMX or Tiled map formats, and after getting these to load I was putting some thought into how I wanted to save/load my game objects. My game objects are essentially containers of components, and the containers have no properties at all. What does that mean in practice? My Actor/NPC game object class is the exact same class as an equippable item! It’s just “GameObject”. The difference between them is the variation in the components that I attach to them. In my mind, I figured I’d need some sort of hierarchical serialization paradigm where my game object would serialize itself, then the components would serialize themselves within that process, and if I could have components with components… You get the idea. But this could sort of be represented by a tree structure.

After the discussion with my colleague about trees and graph structures for maps, I connected some dots. Really, the tiles and placed objects on a map were just… game objects as well. And that means that my thought process behind hierarchical serialization aligns well with some of the properties he was referring to for maps as graphs/trees. This suggests that “world state” for a game where everything is a “GameObject” with components attached to it (where some of those components can contain other GameObjects) could be represented by a graph or tree data structure.
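To make that idea a bit more concrete, here’s a rough sketch of what such a node could look like. These type names are hypothetical and aren’t ProjectXyz’s actual code:

using System.Collections.Generic;

// The "everything is a GameObject" idea: a game object is only a container
// of components, and some components can themselves hold other game objects,
// which is what makes the whole world state form a tree/graph of nodes.
public interface IComponent
{
}

public sealed class GameObject
{
    private readonly List<IComponent> _components = new List<IComponent>();

    public IEnumerable<IComponent> Components => _components;

    public void AddComponent(IComponent component) => _components.Add(component);
}

// A hypothetical component that contains child game objects (e.g. the items
// inside a container, or the objects placed on a map tile).
public sealed class ChildObjectsComponent : IComponent
{
    public List<GameObject> Children { get; } = new List<GameObject>();
}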

What About Tabulated and Listed Data Sets?

So we’ve come up with a starting point for “World Data” being represented by a graph. However, if we go back to my initial goal, I want plugins/mods/patches to be able to get applied to base “Game State” data.

What’s the difference between world and game state data?

These are just my terms so I should try and clarify. The world data is the data the player sees in the playable world. It’s the status of the objects and the maps they are interacting with. Game state data is the superset of this data and all of the other lookup data for loading content into the game. So this OTHER data includes things like the possible stats players and items can have, the types of monsters that can spawn (not the instances of the spawned monsters, because that’s the world state data), the possible dialog entries, etc… This type of data is really well represented in a relational database where we can create lists of data entries.

But if we take a step back, can’t that relational database content also be represented as a graph?

Could I have:

  • A node called “Stats” as a parent to a “Life” node that is a parent to a “StatRange” node, and “LocalizedNameResource” node?
  • A node called “BaseItems” as a parent to a “Sword” node that is a parent to a “DamageRange” node, “RequiredStats” node, and “EquippableSlots” node?
  • A node called “MonsterSpawns” as a parent to a “Skeleton” node that is a parent to…

You get the idea. I generally think about this data in a relational database paradigm (and perhaps it’s much faster to edit the data this way or look it up this way). However, if I can transform this data into the same type of graph structure I want to use for world state, then it means my delta state algorithm would work on not only world state data but also ALL game state data.

So Everything in the Graph? Always?

This is an important question, and I think not explicitly thinking about it was causing me some headaches and keeping me from progressing further with exploring my delta state algorithm.

I mentioned that some of the game state data feels better represented in a relational database. When the data feels like it’s just a list of records, it might be much faster to add, remove, or look up entries there than in a graph. But why not have multiple storage mechanisms?

The delta state algorithm serves a very important goal: Layer deltas of data on top of a base set to get a new set.

That’s it though. So once I have that final transform of data that represents the new game state, who says I can’t transform it into a different format?

Maybe the world state is still better represented as a graph. So things like maps with tiles and game objects in the world still are super efficient to work with in graph form. Maybe things like dialog entries or definitions of player stats are better represented in a relational database… I could populate a database with the necessary data based on the graph data though. And the really cool part? A save game, which is probably the delta that changes the most, probably shouldn’t have the extra game state data in it (likely just has world state deltas represented). So… if a relational database is better suited for the remaining game state data, this could get generated and cached, and then only regenerated when different mods/plugins/patches are applied!
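As a very rough sketch of that layering idea, with the game state simplified down to string key/value pairs instead of graph nodes (all names here are hypothetical):

using System.Collections.Generic;

// A minimal sketch of "base state + layered deltas => new state".
public static class DeltaStateExample
{
    public static IReadOnlyDictionary<string, string> Apply(
        IReadOnlyDictionary<string, string> baseState,
        IEnumerable<IReadOnlyDictionary<string, string>> deltas)
    {
        // Start from the base state...
        var result = new Dictionary<string, string>();
        foreach (var entry in baseState)
        {
            result[entry.Key] = entry.Value;
        }

        // ...then let each layer (mods, patches, save game) overwrite entries.
        foreach (var delta in deltas)
        {
            foreach (var entry in delta)
            {
                result[entry.Key] = entry.Value;
            }
        }

        return result;
    }
}

So a base entry like “SkeletonLife” = “50” could be overwritten by a patch delta, and then again by a save game delta, with whichever layer was applied last winning.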

Concluding Thoughts

To summarize some of the major thoughts discussed in this article:

  • Game objects with components can be represented by nodes in a graph just like maps with tiles on them.
  • Other game state that feels better suited for relational databases can still be transformed into nodes in a graph for working with deltas.
  • Once a final game state is calculated (based on base state + deltas for mods/plugins/patches + delta for save game), data can be transformed back into the most suitable form to work with.
  • Data does NOT have to exist in only one form! We can transform it as we need to work with it more effectively.

While this article was a bit of a brain dump, this is the thought process that got the ball rolling for starting to design this delta state algorithm. In my next post in this series, I want to discuss a quick look at undo/redo approaches used in applications and how they might be similar (or different) to how the delta state algorithm might work.


Unity3D and .NET 4.x Framework


Unity3D Default .NET Framework

I recently wrote that I wanted to start writing more Unity3D articles because I’m starting to pick up more Unity3D hobby work. It felt like a good opportunity to share some of my learnings so that anyone searching across the web might stumble upon this and get answers to the same problems I had.

Unity3D, as of 2018.1.1f1 (which is the version I’m currently using), still defaults to using .NET 3.5 as the framework version. Nothing wrong with that either. I’m sure there are reasons that they have for staying at that version, probably because of Mono and cross platform reasons if I were to guess, so I’m not complaining. For reference, this setting in Unity3D is referred to as “Scripting Runtime Version”. So if you’re googling more about this later, that’s what Unity calls it. For the libraries I was building to use as a game framework, I was using .NET 4.6 and discovered I was going to have a challenge getting them working in Unity3D.

If you want to see what your setting is currently set at, you need to check out the “Player” settings. This was kind of buried in the UI for me so I didn’t know it was a thing that could be adjusted. In Unity3D 2018.1.1f1, click Edit->Project Settings->Player. Here’s what it looks like:

Unity3D - Player Settings

In Unity, click Edit->Project Settings->Player

From there, you’re going to get “PlayerSettings” in your Inspector tab. You’ll need to expand the “Other Settings” to see your scripting runtime version:

Unity3D - Other Settings

“Other Settings” accordion control in PlayerSettings Inspector tab

Once you expand that, here’s the setting you’re interested in:

Unity3D - Scripting Runtime Version

Scripting Runtime Version – The selected .NET version Unity will use

Switching Unity3D to .NET 4.x

Now that you know where the setting is… it’s pretty easy 🙂

Unity3D - Scripting Runtime Version

Use the dropdown to pick which .NET framework version you’d like to use.

You can read more about this setting over at the official Unity3D documentation pages:

https://docs.unity3d.com/Manual/ScriptingRuntimeUpgrade.html

This outlines what things are affected in different platforms and scenarios so YOU SHOULD READ IT to understand what will change.

Hope that makes things a bit easier for you to get up and running with .NET 4.x assemblies in Unity3D!


What Makes Good Code? – Patterns and Practices Series

What Makes Good Code?

It’s been a while since I’ve had a programming oriented post, and I figured this would be a great topic to write about. It’s been a topic I’ve been thinking about more and more over the last year and I’ve been experimenting with certain patterns and practices to see if certain things actually make code “better”. A lot of the information presented in this series will be completely based on my opinion, but I’ll try to back up my opinion with as many concrete examples as I can. If you have a differing opinion, I’d love to hear it in the comments.

I’d also like to call out that much of what I’ll be discussing is in the context of object oriented programming. To be specific, most of the examples will likely be in C#. If this isn’t something you’re actively doing, then don’t worry! It would be great to hear if you see parallels in the work you’re doing.

What Is “Good”?

So let’s start by defining what “good” or “better” means (and I’ll leave this high level and we can dive in afterwards)…

  • Extensible: Rewriting of code is minimized when adding more functionality. It feels straightforward to extend the code. Developers won’t do “the wrong thing” when trying to extend the code.
  • Maintainable: Shares many of the same qualities as extensible. Fixing bugs or making tweaks involves touching few places in the code base.
  • Testable: It’s straightforward to write coded tests that can exercise the functionality of the code under test.
  • Readable**: Developers should be able to read code and understand what it’s doing. The flow of execution within and between different modules of code shouldn’t cause anyone a headache.

All of these describe qualities of the code. I could argue that you could write very maintainable, extensible, and testable code and it would be awful if it didn’t actually solve the customers’ needs. I’d like to try and leave this aspect out of the discussion, and focus on the actual patterns/practices that we can implement in code. I’d love to write something separate about writing code that actually solves a problem versus code that may potentially solve some possible problem at some potential point in the future (and yes, all of the uncertainty in that sentence was on purpose).

Why Should We Care?

It’s kind of a funny question, I guess, but I think it’s a fair one to ask. Why should we care what good code is? If we agree on the definition of good code, so what? Maybe those criteria for good code are obvious to some people. Maybe they weren’t so obvious to some people, but they still don’t really care. So why the fuss about what good code is?

If you’ve read other posts on DevLeader or you know me personally, you may know that I started work at a digital forensics startup a few years back in Waterloo Ontario. What you may not know is that that startup has grown significantly, and based on the accolades we’ve received as an entire organization, we’re actually one of the fastest growing software companies in North America in terms of revenue. I’m not trying to do the horn tooting without a reason though… I think one of the reasons we’ve been able to have such great success on the development side of things is because we’re always trying to improve.

We didn’t always write “good” code, and we certainly don’t always write “good” code right now. However, we’re always trying to figure out how we can get better. So why might WE care as developers at our office? I think it comes down to trade-offs.

In the real world of software development, you’re often faced with trade-offs. You can get a product out faster if you don’t test it. Or you can get a product out and test it, but maybe you had to take a lot of shortcuts in the code. Or maybe you can get all the features in the product and tested, but you can’t hit the deadline. There are countless more combinations of trade-offs that we make in real software development every day. I think that understanding what “good” code means allows a team to recognize just what kinds of corners they’re cutting sometimes. When teams talk about introducing “tech debt”, there’s a better grasp around what type of debt you’re introducing. If you need to get some extra features and bug fixes in but they’re getting added with some tech debt, what could that end up meaning?

Even if you don’t agree with what my criteria are for good code, I think it’s important that you establish this within your team. If everyone can agree on what good code is, it makes constructive conversations about different implementations much easier. Just because some new code is different doesn’t instantaneously make it scary and/or wrong… Maybe it’s a new way that emphasizes one of the criteria for good code a bit more than another implementation might emphasize. Perhaps it doesn’t… You can at least refer back to a reference point for what “good” is.

It’s also important to recognize that the criteria for “good” may change over time. Revisiting the definition periodically might allow you to recognize when your team is redefining what “good” means to them.

Enough Rambling! Where’s This Going?

Right. Okay. I want to write some follow up posts that will focus on a few of the following items:

  • Does every class need an interface?
  • Does every class need a factory that can create it?
  • Unit tests versus functional tests
  • Is there a benefit to only passing interfaces to objects around?
  • Is there a way to enforce that interfaces HAVE to be passed around?
  • Is the single responsibility principle even helpful?
  • Mutability and immutability
  • Is it always good to follow patterns and practices in all scenarios?

And after I feel that I’ve covered enough on these topics, I’d like to circle back and revisit what “good” code is. It’ll be cool to see if the definition changes at all!

**Readability was an afterthought and I’m not sure how… I started writing the first example post for this series and QUICKLY realized I had omitted this.


Yeah, We’re an “Agile” Shop

Everybody Has Gone “Agile”

If you’re a software developer that’s done interviews in the past few years, then you already know that every software development shop has gone agile. Gone are the days of waterfall software development! Developers have learned that waterfall software development is the root of all evil, and the only way to be successful is to be agile. You need to be able to adapt quickly and do standups. You need to put story point estimates on your user stories. You need retrospectives… And agility! And… more buzz words! Yes! Synergy! In the cloud! You need it!

Okay, so why the sarcasm? Every single software development team is touting that they’re following the principles of agile software development, but almost no team truly is. Is it a problem if they aren’t actually following agile principles? Absolutely not, if they’re working effectively to deliver quality software. That’s not for me to say at all. I think the problem is that people are getting confused about “being agile”, but there’s nothing necessarily wrong with how they’re operating if it works for them.

Maybe We’re Not So “Agile”

I work at Magnet Forensics, and for a long time now, I’ve been saying “yeah, we’re an agile shop”. But you know what? I don’t think we are. I also don’t think that’s a problem. I think our software development process is best defined by “continuous improvement”. That’s right. I think we’re a “Continuous Improvement” shop. Our team has identified the things we think work well for us in how we develop software, and we experiment to improve on things that we think aren’t working well. I’m actually happy that we operate that way instead of operating by a set of guidelines that may or may not work for us.

So, why aren’t we agile? When I look at the Agile Manifesto, I feel like there are a few things we actually don’t do, and we don’t even worry about them. We don’t necessarily have business people working with developers daily through things, for example. Our delivery cycles are much longer than a couple of weeks most of the time. Sometimes people on our teams don’t communicate best face-to-face. I mean, just because we’re not focusing on those things isn’t necessarily proof that we aren’t agile, but I truly don’t think we’re trying to embody all of the components that make up agile.

As I stated previously, I do think that we try to focus on continuous improvement above all else, and I’m absolutely content with that. I think that if over time we continued our retrospectives and our team ended up operating closer to a traditional waterfall process then it would be the better thing for our team. Why? Because we make incremental changes for our team in an attempt to keep improving. Switching to waterfall is a bit of a contrived example, but I definitely stand by it. Another example might be that maybe working with business people daily isn’t actually effective for our team. I don’t know, to be honest, because we’re currently tweaking other parts of our development process to improve them. Maybe we’ll get around to worrying about that at some other point.

I do know that the way we operate, we’re always trying to improve. Whether or not we get better sprint to sprint is for the retrospective to surface for us, but if we took a step back, at least we can try a different path when we try to take our next step forward.

So, We Don’t Need To Be Agile?

I think my only point of writing this post was to get this across: If you’re not actually an agile software development shop, then don’t call yourself that. There’s absolutely nothing wrong with not living and breathing agile. Maybe you’re a software shop that’s transitioning into Agile. Maybe you’re moving away from Agile. Maybe you have no parts of your software development process that are agile. Who’s to say that because you’re not 100% agile your setup is bad?

I can’t advocate that agile software development is the absolute best thing for all software development teams. I personally like to think that it’s a great way to develop software, but… I don’t know your team. I don’t know your codebase. I don’t know your products, services, or clients. How the heck could I tell you the best way to go make your software?

Again, there’s nothing wrong with not being 100% agile, but we should try to be honest with ourselves. Find what works for your team. Find out how you can effectively deliver quality software.


Should My Method Do This? Should My Class?

Whose Job Is It?

I wanted to share my experience that I had working on a recent project. If you’ve been programming for a while, you’ve definitely heard of the single responsibility principle. If you’re new to programming, maybe this is news. The principle states:

That every class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class

You could extend this concept to apply to not only classes, but methods as well. Should you have that one method that is entirely responsible for creating a database connection, connecting to a web service, downloading data, updating the database, uploading some data, and then doing some user interface rendering? What would you even call that?!

The idea is really this: break down your code into separate pieces of functionality.
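As a small, hypothetical sketch of what that breakdown can look like in practice (none of these types come from a real project):

// Breaking the "do everything" method described above into separate
// responsibilities. These names exist only to illustrate asking
// "whose job is it?"
public interface IReportDownloader
{
    byte[] DownloadLatestReport();
}

public interface IReportRepository
{
    void Save(byte[] reportData);
}

public sealed class ReportSynchronizer
{
    private readonly IReportDownloader _downloader;
    private readonly IReportRepository _repository;

    public ReportSynchronizer(IReportDownloader downloader, IReportRepository repository)
    {
        _downloader = downloader;
        _repository = repository;
    }

    // This class only coordinates; downloading and persisting are each
    // someone else's job.
    public void Synchronize()
    {
        var report = _downloader.DownloadLatestReport();
        _repository.Save(report);
    }
}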

Easier Said Than Done… Or Is It?

The idea seems easy, right? Then why is it that people keep writing code that doesn’t follow this guideline? I’m guessing it’s because even though it’s an easy rule, it’s even easier to just… code what works.

The recent experience I wanted to share was my work on a project that had a pretty short time frame to prove it was feasible. It was starting something from scratch, so I had all the flexibility in the world to design the code however I wanted to. I really made an effort to keep asking myself this one question: Whose job is it?

Every time I asked that question and found that it was not my current method’s responsibility, I would ask “Is this class really responsible for that”? I’d either go make myself a new method in my class or I’d just go immediately make a new class with a single method on it. It seemed like a bit of extra overhead each time I had to do it, but was it worth it in the end?

Absolutely. After the project had proven itself and development continued on, I was easily able to refactor code (where necessary) and mock out functionality in my coded tests. Instead of trying to write test setup code that required a whack of classes I needed to initialize, I could mock out a couple of interfaces and test with ease. It was also really obvious which pieces were responsible for what functionality.

Final Thoughts

If you want to get better at following the single responsibility principle, I think it starts with one question: Whose job is it? Try it out!


Multiple C# Projects In Your Unity 3D Solution


Problem: Visual Studio and Unity Aren’t Playing Nice!

Disclaimer: I develop on Windows, so I have no idea if any of this even applies to other operating systems. I assume not. Sorry.

I just started poking around in Unity 4.6 and I’ve been having a blast. I’ve made it to the point where I want to actually start hammering out some code, but I came across a bit of a problem: I want to start leveraging other projects I’ve written in my Unity solution while I’m in Visual Studio, and things are blowing up. So, what gives?

Okay, so let me start by explaining why I want to do this. I understand that if I’m making a simple game, I should have no problem breaking out my unity scripts into sub folders and organizing them to be nice and pretty. The problem I’m encountering is that I have existing projects under source control and I don’t want to copy and paste all of the code as scripts into my Unity folder. I also want to be able to create re-usable code for my future games, so I’d like to start breaking things out into libraries as I see fit.

So, if you’ve been playing around in Unity for a bit, you might say “Oh, well you’re a dummy! Unity can totally leverage your C# DLLs once you drop them into your asset folder”! And you’d be 100% correct. But that’s not the workflow I want.

The underlying problem here is this: Unity will re-write your solution and project file when you flip between Unity and Visual Studio. But I’m sure they have it that way for a reason.

The Goal: Visual Studio and Unity Should… Play Nice!

My ideal state would be something like this:

  • Work in Visual Studio as much as I’d like, adding new projects to my solution and referencing them accordingly
  • Flip back and forth between Unity and Visual Studio without having to reset things to compile/run again
  • Build from Visual Studio and have things end up in the right spot… NOT copy DLLs
  • Not copy+paste my entire project(s) already under source control elsewhere

Is this something that can be achieved though? I was pretty determined that I should be able to do *something* to have this working. Could I get it perfect? I wasn’t sure… But I knew I could make it better.

The Solution: Give and Take with Unity

My *almost* perfect solution, which I’ll walk you through, is this: leverage Visual Studio Tools for Unity, modify the Unity solution as you see fit, and use directory junctions (symlinks) to pull in the build output directories of other projects.

  1. Let’s get Visual Studio Tools for Unity installed. Visit that link and download the version that you need for the version of Visual Studio that you use. After installing, I opened up my project within Unity and I had to import the Visual Studio Tools package.

    Import Package

    After selecting this menu item, I was presented with a dialog for picking the items to import. I left it as is.

    Import Package2

    After importing these items, I could see that Unity had successfully added these entries under my Assets folder. Okay, now we’re getting somewhere. Next up, I wanted to configure Unity to not modify my solution every time I go back and forth between Unity and Visual Studio. This is the part that makes or breaks keeping the extra projects I’ve added to my solution. For me, it’s critical to have the code I’m working on immediately accessible so that I can jump back and forth between projects. Lucky for us, this part is pretty easy. Go to the menu to access your new Visual Studio Tools menu item:

    VS Tools Configure
    Selecting “Configuration” opens up a really simple dialog. Let’s make sure “Generate solution file” is unchecked! It’s that easy.

    VS Tools Configure2
    Once we have all of this setup, we should be able to go into Visual Studio and add other projects to our solution.

  2. The one thing that I *could not* get this solution to do is have Unity leave my main game project alone in Visual Studio. As a result, the rest of this walkthrough is about playing by Unity’s rules. Unity is good at magically referencing all of the managed DLLs that you include within your Assets folder. If you drop DLLs somewhere within “Assets” and switch to Visual Studio, Unity will likely have modified your main project to reference those DLLs.

    My next step was creating a spot where I wanted to drop the build outputs of the extra projects I wanted to reference. In my Visual Studio solution, I have my original game project and some newly added projects I want to build from source. In Unity, I wanted these to end up in “Assets/Dependencies/bin”. No problem. Let’s make that folder structure (or your equivalent if you don’t like my naming):

    Bin Dependencies
    The next part is probably the “trickiest” part because it’s… well… unusual. You could technically stop here and manually copy DLLs back and forth, but I’m not about that life. I want things to happen automatically. For this, we’re going to use junction points. Browse to your newly created folder in an administrator command prompt. I say administrator because only certain users have permissions to create junction points. Your non-admin user might, but this is my “safe” way of instructing you. On the command prompt, we’re going to use “mklink” to create a junction. The command is “mklink /J <NAME_OF_YOUR_PROJECT> <RELATIVE_PATH_TO_YOUR_PROJECT>”. For example, if you had a C# project you wanted to reference that was “MyCoolLibrary.csproj” and was located in the directory above your Unity project, you might use the command “mklink /J MyCoolLibrary “..\..\..\..\MyCoolLibrary\bin\Debug””. Note that I used two dots to go back up a directory several times (since we’re inside of Assets\Dependencies\bin and want to get outside of our Unity project). You should get a success message when your junction is created.

    Repeat this step for as many extra projects as you want to include. You can always come back and add more projects this way too, or remove the junctions if you don’t want to include a project anymore.

    At this point, you’re technically done. If you build from Visual Studio, you should have your other projects’ DLLs end up in your Unity folder, and your main game project will be updated by Unity to reference these now!

  3. But… you’re not done if you use source control for your Unity project and have separate source control on your other projects. The scary thing here is that usually we don’t want our build outputs to be stored in source control… But if we do nothing else, your source control system will likely want to include the newly created “Assets\Dependencies\bin” folder and any of the contents you’re building into there. I just modified my git ignore file (I’m sure there’s an equivalent for SVN or other source control) to exclude the contents of “Assets\Dependencies\bin”.

    The reason I didn’t exclude the Dependencies folder altogether is that I can add other folders and DLL references there that I don’t want to build (like… the normal way). This gives me the flexibility of building the projects I want to control and still being able to just reference other pre-built DLLs!

Summary

In three easy steps, you should be able to use Unity, Visual Studio, and multiple projects in one solution in a what-feels-like-normal way. Because there’s still some dynamic stuff going on with Unity updating your main project, you might find the odd time you need to build twice to fix up compilation problems. I’ve seen this happen maybe once or twice so far, but otherwise it feels like normal. It’s also important to note that you can’t escape the Unity project updating… don’t add references to your main project manually. That’s what the “Assets\Dependencies” folder we made is for.

Here are a few shots of what my setup looks like (proof that it works):

Solution Explorer

Unity Dependencies

And of course… it’s not the perfect solution. There’s still these things:

  • Unity gets mad at you for using junctions within your project. It actually tells you not to do this because you can mess things up. It’s working awesome for me right now though… So I’m going to just ignore this warning.
  • Remember step 3 where we ignored the Assets\Dependencies\bin location in git? This actually ignored the junction points you created too. As a result, anyone else who clones your code will need to create the junctions too. I’m working solo, so I’m not too worried about this step… But it’s definitely something that should be fixed up (again, I’m sure it’s doable, but I’m in no rush).

Hope that helps you feel more at home in Unity and Visual Studio! It certainly made it nicer for me.

 


ProjectXyz: Enforcing Interfaces (Part 2)

Enforcing Interfaces

This is my second installment of the series related to my small side project that I started. I mentioned in the first post that one of the things I wanted to try out with this project is coding by interfaces. There’s an article over at CodeProject that I once read (I’m struggling to dig it up now, arrrrrghh) that really gave me a different perspective about using interfaces when I program. Ever since then I’ve been a changed man. Seriously.

The main message behind the article was along the lines of: Have your classes¬†implement your interface, and to be certain nobody is going to come by and muck around with your class’s API, make sure they can’t knowingly make an instance of the class. One of the easiest ways to do this (and bear with me here, I’m not claiming this is right or wrong) is to have a hidden (private or protected) constructor on your class and static methods that let you create new instances of your class. However, the trick here is your static method will return the interface your class implements.

An example of this might look like the following:


public interface IMyInterface
{
    void Xyz();
}

public sealed class MyImplementation : IMyInterface
{
    // we hid the constructor from the outside!
    private MyImplementation()
    {
    }

    public static IMyInterface Create()
    {
        return new MyImplementation();
    }

    public void Xyz()
    {
        // do some awesome things here
    }
}

Interesting Benefits

I was pretty intrigued by this article on enforcing interfaces for a few reasons and if you can stick around long enough to read this whole post, I’ll hit the cons/considerations I’ve encountered from actually implementing things this way. These are obviously my opinion, and you can feel free to agree or disagree with me as much as you like.

  • (In theory) it keeps people from coming along and tacking on random methods to my classes. If I have an object hierarchy that I’m creating, having different child classes magically have random public APIs changing independently seems kind of scary. People have a harder time finding ways to abuse this because they aren’t concerned with the concrete implementation, just the interface.
  • Along with the first point, enforcing interfaces¬†makes people think about what they’re doing when they need to change the public API. Now you need to go change the interface. Now you might be affecting X number of implementations. Are you sure?
  • Sets people up nicely to play with IoC and dependency injection. You’re already always working with interfaces because of this, now rolling out something like Moq or Autofac should be easier for you.
  • Methods can be leveraged to do parameter checks BEFORE entering a constructor. Creating IDisposable implementations can be fun when your constructor fails and your disposable cleanup code was expecting things to be initialized (not a terribly strong argument, but I’ve had cases where this makes life easier for me when working with streams).

Enforcing Interfaces in ProjectXyz

I’ve only implemented a small portion of the back-end of ProjectXyz (from what I imagine the scope to be), but it’s enough that I have a couple of layers, some different class/interface hierarchies that interact with each other, and some tests to exercise the API. The following should help explain the current major hierarchies a bit better:

  • Stats are simple structures representing an ID and a value
  • Enchantments are simple structures representing some information about modifying particular stats (slightly more complex than stats)
  • Items are more complex structures that can contain enchantments
  • Actors are complex structures that:
    • Have collections of stats
    • Have collections of enchantments
    • Have collections of items

Okay, so that’s the high level. There’s obviously a bit more going on with the multi-layered architecture I’m trying out here too (since the hierarchies are repeated in a similar fashion in both layers). However, this is a small but reasonable amount of code to be trying this pattern on.

I have a good handful of classes and associated interfaces that back them. I’ve designed my classes to take in references to interfaces (which are, of course, backed by my other classes) and my classes are largely decoupled from implementations of other classes.
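As a rough illustration (not the actual ProjectXyz code), here’s roughly how the pattern looks when applied to something like the item/enchantment hierarchy above:

using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: the class hides its constructor, exposes a static
// Create() that returns the interface, and only depends on other interfaces.
public interface IEnchantment
{
    string StatId { get; }
    double Value { get; }
}

public interface IItem
{
    IEnumerable<IEnchantment> Enchantments { get; }
}

public sealed class Item : IItem
{
    private readonly List<IEnchantment> _enchantments;

    private Item(IEnumerable<IEnchantment> enchantments)
    {
        _enchantments = enchantments.ToList();
    }

    public IEnumerable<IEnchantment> Enchantments => _enchantments;

    public static IItem Create(IEnumerable<IEnchantment> enchantments)
    {
        return new Item(enchantments);
    }
}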

Now that I’ve had some time to play with this pattern, what are my initial thoughts? Well, it’s not pure sunshine and rainbows (which I expected) but there are definitely some cool pros I hadn’t considered and definitely some negative side effects that I hadn’t considered either. Stay tuned

(The previous post in this series is here).


Continuous Improvement – One on One Tweaks


Continuous Improvement – Baby Steps!

Our development team at Magnet Forensics focuses a lot on continuous improvement. It’s one of the things baked into a retrospective often performed in agile software shops. It’s all about acknowledging that no system or process is going to be perfect and that as your landscape changes, a lot of other things will too.

The concept of continuous improvement isn’t limited to just the software we make or the processes we put in place for doing so. You can apply it to anything that’s repeated over time where you can measure positive and negative changes. I figured it was time to apply it to my leadership practices.

The One on One

I lead a team of software developers at Magnet, but I’m not the boss of any of them. They’re all equally my peers and we’re all working toward a common goal. One of my responsibilities is to meet with my team regularly to touch base with them. What are things they’ve been working on? What concerns do they have with the current state of things? What’s going well for them? What sort of goals are they setting?

The one on ones that we have setup are just another version of continuous improvement. It’s up to me to help empower the team to drive that continuous improvement, so I need to facilitate them wherever I can. Often this isn’t a case of “okay, I’ll do that for you” but a “yes, I encourage you to proceed with that” type of scenario. The next time we meet up, I check in to see if they were able to make headway with the goals they had set up and we try to change things up if they’ve hit roadblocks.

No Change, No Improvement

I had been taking the same approach to one-on-ones for a while. I decided it was time for a change. If it didn’t work, it’s okay… I could always try something else. I had a good baseline to measure from, so I felt comfortable trying something different.

One on ones often consisted of my team members handing me a sheet of past actions, concerns, and status of goals before we’d jump into a quick 20 minute meeting together. I’d go over the sheet with them and we’d add in any missing areas and solidify goals for next time. But I wanted a change here. How helpful can I be if I get this sheet as we go into the room together?

I started asking to get these sheets ahead of time and started paraphrasing the whole sheet into a few bullet points. A small and simple change. But what impact did this have?

Most one on ones went from maxing out 20 minutes to only taking around 10 minutes to cover the most important topics. Additionally, it felt like we could really deep dive on topics because I was prepared with some sort of background questions or information to help progress through roadblocks. My team member and I could blast through the important pieces of information, and then at the end I’d check to make sure there was nothing we’d missed going over. If I had accidentally omitted something, we’d have almost another 10 minutes to at least start discussing it.

Trade Off?

I have an engineering background, so for me it’s all about pros and cons. What was the trade-off for doing this?

The first thing is that initially it seemed like I was asking for the sheets super early. Maybe it still feels like I’m asking for them early. I try to get them by the weekend before the week where I start scheduling one on ones, so sometimes it feels like people had less than a month to fill them out. Is it a problem really? Maybe not. Maybe it just means there’s less stuff to try and cram into there. I think the benefit of being able to go into the meeting with more information on my end can make it more productive.

The second thing is that since I paraphrase the sheet, I might miss something that my team member wanted to go over. However, because the time is used so much more effectively, we’re often able to cover anything that was missed with time to spare. I think there’s enough trust in the team for them to know that if I miss something, it’s not because I wanted to dodge a question or topic.

I think the positive changes this brought about have certainly outweighed the drawbacks. I think I’ll make this a permanent part of my one on one setup… Until continuous improvement suggests I should try something new!


Refactoring For Interfaces: An Adventure From The Trenches


Refactoring: Some Background

If you’re a seasoned programmer you know all about refactoring. If you’re relatively new to programming, you probably have heard of refactoring but don’t have that much experience actually doing it. After all, it’s easier to just rewrite things from scratch instead of trying to make a huge design change part way through, right? In any mature software project, it’s often the case where you’ll get to a point where your code base in its current state cannot properly sustain large changes going forward. It’s not really anyone’s fault–it’s totally natural. It’s impossible to plan absolutely everything that comes up, so it’s probable that at some point at least part of your software project will face refactoring.

In my real life example, I was tasked with refactoring a software project that has a single owner. I’m close with the owner and they’re a very technical person, but they’re also not a programmer. Because I’m not physically near the owner (and I have a full-time job, among other things I’m doing) it’s often difficult to debug any problems that come up. The owner can’t simply open an editor and get down to the code to fix things up.

So there was an obvious solution which I had avoided in the first place… Unit tests. Duh. I need unit tests. So that’s an easy solution, right? I’ll bust out my favourite testing framework (I’m a fan of xUnit) and start getting some solid code coverage. Well… in an ideal world, like the one every programming article seems to be written for, that would have been the case. But it wasn’t the case. My software project does a lot of direct HTTP/FTP requests and interacts with particular hardware on the machine. How awesome is writing a unit test that contacts an HTTP server? Not very awesome.

What was I to do? I need to be able to write unit tests so that I can validate my software before putting it in my customer’s hands, but I can’t test it with unit tests because I don’t have the hardware!

Refactoring for Interfaces

Okay, so the first step in my master plan is to refactor for interfaces. What do I mean by that? Well, I have a lot of code that will call out and make HTTP requests and it has a specific dependency on System.Net.WebRequest. The same thing holds true for my FTP requests I want to make. Because I have that dependency within my classes, it means I have no choice but to call out to the network and go do these things.

I could design it a different way though. What if I abstracted the web requests away so that I didn’t actually have to call that class directly? What if I could have a reference to some instance that met some API that would just do that stuff for me? I mean, my class knows all about its own particular job, but it truthfully doesn’t know the first thing about calling out to the Internet to go post some HTTP requests. This means that if someone else is responsible for providing me with a mechanism for posting HTTP requests, this other entity could also fool me and not actually send out HTTP requests at all! That sounds like exactly what my test framework would want to do.

My first step was to look at the properties and methods I was using on the WebRequest class. What was shared between the HTTP and FTP requests that I was creating? The few things I had to consider were:

  • Some sort of Send() method to actually send the request
  • A URI to identify where the request is being sent
  • A timeout property

I then created an interface for a web request that had these properties/methods accessible and created some wrapper classes that implemented this interface but encapsulated the functionality of the underlying web requests. The next step was creating a class and interface for a “factory” that could create these requests. This is because my code that needs to make HTTP/FTP requests only knows it needs to make those requests–It doesn’t have any knowledge on how to actually create one.
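For a sense of the shape this took, here’s a hedged sketch; the actual interfaces and names differ in the real project, but the idea is the same: the class under test only ever sees these abstractions, never System.Net.WebRequest directly.

using System;

// Hypothetical abstraction over HTTP/FTP requests, plus a factory that
// knows how to create them. In tests, the factory can hand back mocks.
public interface ISimpleRequest
{
    Uri Uri { get; }
    TimeSpan Timeout { get; set; }
    void Send();
}

public interface IRequestFactory
{
    ISimpleRequest CreateHttpRequest(Uri uri);
    ISimpleRequest CreateFtpRequest(Uri uri);
}

// A hypothetical consumer: it only asks the factory for requests and
// sends them, so it has no dependency on the real network classes.
public sealed class FirmwareUploader
{
    private readonly IRequestFactory _requestFactory;

    public FirmwareUploader(IRequestFactory requestFactory)
    {
        _requestFactory = requestFactory;
    }

    public void Upload(Uri destination)
    {
        var request = _requestFactory.CreateFtpRequest(destination);
        request.Send();
    }
}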

With my interfaces for my requests and my factory that creates them, I was able to move onto the next step.

Create Mocks for the Interfaces

Now that I had my classes leveraging interfaces instead of concrete classes internally, I could mock the inner-workings of my classes. This would provide two major benefits:

  • I could create tests that wouldn’t have to actually go out to the Internet/network.
  • I could create instrumented mocks that would let me test whether certain web requests were being made.

I started off by writing up some unit tests. I tried to get as much code coverage as I could by doing simple tests (i.e. create an object, check default values, call a method and check a result, etc…). Once I had exhausted a lot of the simple stuff, I targeted the other areas that I wasn’t hitting. I mean, how was my coded test supposed to test my method that does an HTTP request and an FTP of a file under the hood? Mocks.

So this is where you probably draw the line between your integration tests or unit tests and get all pedantic about it. But I don’t care how you want to separate it: I need coded tests that cover a section of my class so that I can ensure it behaves as I expect. But if I’m mocking my dependencies, how do I know my class is actually doing what I expect?

Instrument your mocks! This was totally cool for me to play around with for the first time. I had to create dummy FTP/HTTP requests that met the interface my class under test expected. Pretty easy. But I could actually assert what requests my class under test was actually trying to send out! This meant that if my method was supposed to try and hit a certain URL, I could assert that easily by instrumenting my mocked instance to check just that. Was it supposed to FTP a certain set of bytes? No problem. Use my mocked instance to assert those bytes are actually the ones my class under test is trying to send.
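Building on the hypothetical sketch above, a hand-rolled instrumented mock and an xUnit test against it might look something like this:

using System;
using Xunit;

// A hypothetical instrumented mock: it records what the class under test
// asked for so the test can assert on it afterwards.
public sealed class RecordingRequestFactory : IRequestFactory
{
    public Uri LastFtpUri { get; private set; }

    public ISimpleRequest CreateHttpRequest(Uri uri) => new NoOpRequest(uri);

    public ISimpleRequest CreateFtpRequest(Uri uri)
    {
        LastFtpUri = uri;
        return new NoOpRequest(uri);
    }

    private sealed class NoOpRequest : ISimpleRequest
    {
        public NoOpRequest(Uri uri) { Uri = uri; }
        public Uri Uri { get; }
        public TimeSpan Timeout { get; set; }
        public void Send() { /* intentionally does nothing: no real network call */ }
    }
}

public sealed class FirmwareUploaderTests
{
    [Fact]
    public void Upload_SendsFtpRequestToExpectedUri()
    {
        var factory = new RecordingRequestFactory();
        var uploader = new FirmwareUploader(factory);

        uploader.Upload(new Uri("ftp://example.com/firmware.bin"));

        // Assert the class under test actually asked for the URI we expected.
        Assert.Equal(new Uri("ftp://example.com/firmware.bin"), factory.LastFtpUri);
    }
}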

Wrap Up

This was just a general post, and I didn’t put up any code to go along with it. Sorry. I really just wanted to cover my experience with refactoring, interfaces, mocking, and code coverage because it was a great learning for me.

To recap on what I said in this post:

  • Identify the parts you want to mock. These are the things your class or method probably isn’t responsible for creating directly. Going out to the network? Accessing the disk? Accessing the environment your test is running under? Creating complex concrete classes because they hook into some other system for you? Great candidates for this.
  • Create interfaces by looking at the API you’re accessing. You know what classes you want to mock, so look how you’re using the API. If you need to access a few properties and methods, then make that part of your interface. If you see commonality between a few similar things, you might be able to create a single interface to handle all of the scenarios.
  • Inject factories that can create instances for you. These factories know how to create the concrete classes that meet your interfaces. In a real situation, they can create the classes you expect. In a test environment, they can create your mocks.
  • Write coded tests with your mocks! The last part is the most fun. You can finally inject some mocked classes into your classes/methods under test and then instrument them to ensure your code under test is accessing them in the way you expect. Run some code coverage tools after to prove you’re doing a good job.

I hope my experiences down this path are able to help you out!

