Tag: Visual Studio

Downtime? Time to Build!

The COVID-19 pandemic has caused many of us to stay isolated and at home, but that’s OK! I genuinely enjoy developing software and wanted to take this opportunity to focus on learning. Having some downtime has afforded me the chance to put together a system that I otherwise might not have explored building.

In this article, I’ll share different aspects of an application I’m building that purposefully put me outside of my comfort zone. In my opinion, downtime is an opportunity to learn and grow, so it’s time to take advantage of that!

When the app and system are ready to showcase, I’ll share more insight into what’s actually being built!

The Client Framework

The application I’m building is intended to run on multiple mobile platforms, so Xamarin was my choice here. I briefly used Xamarin several years ago, and my reasons for picking it this time through were:

  • C#, .NET, and Visual Studio support. There were many things I wanted to learn about, but I wanted to limit myself to a familiar foundation so that I could still feel like I was making progress.
  • Supports iOS and Android right from the start. I thought it would be an interesting software design challenge to be able to build base components in a shared library and then leverage dependency injection for platform-specific components.
  • Traction. Xamarin has been around for a number of years now, and it’s only continuing to gain support. I didn’t want to focus on a platform or SDK that wasn’t getting any love.

Xamarin was easy to get set up with because of the familiar pieces, but I was pushed out of my comfort zone to learn how certain things are handled differently on iOS and Android.

I was able to learn about:

  • Android/iOS permission requests
  • The new UI controls in Xamarin (vs. WPF, which I’m used to)
  • More practice with async/await in the context of a UI
  • Different mobile API frameworks (Crashlytics, Google Analytics, user control libraries)

The Server Framework

I’ve joked with my colleagues in the past that “web isn’t my thing”, but what I really mean is that “I don’t have experience making web pages”. I’ve built many client-server systems in my professional experience and hobby programming, but serving nice-looking web pages hasn’t been my strength.

For the system that I’m designing in my downtime, I need an application server. I decided to go with ASP.NET Core because I haven’t set up many ASP.NET systems before, and I don’t have experience hosting them in the cloud. However, I do have experience with C# and Visual Studio, so again, this seemed like a good balance of trying new things with some familiar concepts to ensure I could make progress.

In short order, I was able to get a handful of application server routes set up and communicating with the client application properly. The most difficult part was truly just making sure firewall and SSH settings were configured locally, plus a handful of times cursing at my phone for not being on WiFi (and thus not seeing the development server on my local network)!

I was able to learn about:

  • Authentication attributes (and JWT token handling)
  • Routes with query parameters
  • Serving static content as well as application requests (both sketched below)
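To give a concrete flavour of those last two points, here’s a rough sketch of what a route with query parameters and static file serving can look like in ASP.NET Core. The controller, route, and parameter names are placeholders for illustration, not my actual routes:

// A hedged sketch only: names here are made up, not the real application server routes.
[ApiController]
[Route("api/items")]
public sealed class ItemsController : ControllerBase
{
    // GET api/items?search=swords&limit=5
    [HttpGet]
    public IActionResult Get([FromQuery] string search, [FromQuery] int limit = 10)
    {
        // ...query whatever backing store you have and shape a response...
        return Ok(new { search, limit });
    }
}

// And in Startup.Configure, static content can be served alongside the API:
// app.UseStaticFiles();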

The Authentication Framework

This one was fun. Throughout my professional career in software development, one thing that has scared everyone away is designing authentication and user management. Nobody wants to do it because it’s complex, has plenty of edge cases, and… it’s probably critical to your system working. :)

Thankfully, Firebase saved the day. I wrote about this already, but Firebase truly made authentication and user management far more straightforward than I’m used to. The hardest parts of working with Firebase had nothing to do with Firebase and everything to do with implementing OAuth for the providers of my choosing.

Because I could use OAuth to authenticate users and have identifiable information provided via a JWT, building a simple registration and login system that mapped OAuth’d users to some sort of internal-system user identifier was trivial. All of the routes for my web application could also authenticate and control access via this same JWT! One of the scariest components of building a system became a relatively light lift.
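For reference, the ASP.NET Core side of this is mostly configuration. A minimal sketch of validating Firebase-issued JWTs might look like the following; the project ID is a placeholder, and this is an illustration rather than my exact setup:

// In Startup.ConfigureServices -- "my-firebase-project" is a placeholder project ID.
services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // Firebase signs its tokens under this authority for a given project
        options.Authority = "https://securetoken.google.com/my-firebase-project";
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidIssuer = "https://securetoken.google.com/my-firebase-project",
            ValidateAudience = true,
            ValidAudience = "my-firebase-project",
            ValidateLifetime = true,
        };
    });

// Any route decorated with [Authorize] then requires a valid JWT,
// and the token's subject claim can be mapped to an internal user ID.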

I was able to learn about:

  • OAuth for popular providers (Google, Facebook, etc…)
  • OAuth scopes
  • JWT tokens
  • Firebase SDK from a server and client side
  • Route authentication in ASP.NET using JWTs

The Database Framework

As part of the journey for exploring unfamiliar technology in my downtime, I decided I’d like to pursue a database that wasn’t SQL-based. I was already using Firebase for authentication and Google offers an intriguing document store in Firebase that provides real-time update triggers.

Being unused to document databases (I’m much more familiar with relational databases), I spent some time trying to design the schemas I intended to use. One thing that caught me off guard pretty quickly was that in order to modify a list of things, I’d need to have a local copy of the list, manipulate the collection, and then push the entire structure back to replace the existing one. This seemed like overkill for what I was trying to do. The alternative was that I could modify objects in the data store to add/remove child items, but each child item would receive another identifier object as a linking object, so a list of X things actually meant 2X things (one for the identifier, one for the entry). Again, overkill.

I decided to go back to familiar technology but explore a not-so-familiar space! I have a good deal of experience working with SQLite and MySQL from my career. What I don’t have a lot of experience with is the management and provisioning of a MySQL instance with availability in the cloud. Enter Amazon RDS!

Switching to Amazon RDS meant a bit of a learning curve for making sure I could host and configure an instance of MySQL in the cloud. I was able to learn about various Amazon AWS services and roles and how they play together, and once the instance was up and running, I was able to work with it effectively.

The Tooling

I had help with this one, fortunately, from a friend and old colleague of mine, Graeme Harvey. If you’ve worked with me professionally or on a hobby project, one of the things I admit pretty quickly is that I really dislike tinkering to get a configuration right. Generally this is because there’s a lot of tweaking to get something set up properly, documentation to understand, and frustrating trial and error.

Source control was going to be git. I didn’t even want to consider anything else. Git is so widely used now that I didn’t see a real benefit to learning a new source control system. Repository management though? BitBucket. I’m a huge fan of the Atlassian suite of products, having used them professionally for many, many years.

And with that said, Jira for issue management. Again, I’ve used Jira professionally for many years. What I’ve never done, though, is run my own Jira instance! This was made very simple by Atlassian, and not being a company that’s generating big bucks, I was able to get the free tier, which handles all of my needs. Jira is straight up awesome for issue management and visibility into your ongoing work.

Another no-brainer for me was getting Slack set up. Slack is something I’ve used a lot professionally but, like Jira, I’ve never had my own Slack instance. It was simple to set up and works just like I’m used to in my career. This wasn’t really a huge requirement, but since I’m working with another person, it provided a nice workspace for chatting about the stuff we were working on.

And finally… builds. I wrote about using Circle CI to get our server builds up and running already, and to reiterate, it was extremely simple. I even have the builds wired up to report back to Slack when we push code up to BitBucket! Where we’re still having some fun is figuring out how to deploy our application server builds to an EC2 instance automatically. That would allow us to “release” to a branch and have the production hosting of our application server get updated in the cloud!

But we’re building a mobile application in Xamarin, so we have three outputs:

  • The application server in ASP.NET
  • The Android client
  • The iOS client

Mobile app development gets interesting because what you’re actually building is intended to run on a device and not a desktop/server. Why that matters is that your desktop or server application will generally be output as a binary, but your mobile application will be some sort of package that you’ll need to sign and distribute on an app store.

After some back and forth, we decided to explore App Center. If I’m being honest, this was just as easy to set up for our iOS and Android Xamarin apps as Circle CI was for our server builds. App Center provided a simple wizard for triggering off of new commits to a branch in our BitBucket repositories, and the rest was done for us.

What I learned:

  • Git+BitBucket = Free git repository hosting (private if you want) and plenty of integrations
  • Jira = Free issue management and kanban board with plenty of integrations
  • Slack = Free chat workspace with plenty of integrations
  • CircleCI = Free continuous builds with integrations into ALL the other things I just mentioned :)
  • App Center = Free continuous builds for iOS and Android Xamarin apps with plenty of integrations!

In Summary

It’s been a couple of weeks of getting to try building out this project and setting up these different systems to go to work for me. I’ve been able to learn a lot about new or previously unexplored SDKs/technology and even learn some different facets of things I already have professional experience with.

I’m not one to sit idle… So using my downtime to learn cool things and build something has been an awesome time! I’d highly recommend that if you’re in quarantine, lockdown, or otherwise unable to really get out and do too much that you try your hand at something new! Get creative. Get learning.

Be safe and stay healthy!


xUnit Tests Not Running With .NET Standard

Having worked with C# for quite some time now writing desktop applications, I’ve begun making the transition over to .NET Standard. In my professional working experience it was a much slower transition because of product requirements and time, but in my own personal development there’s no reason why I couldn’t get started with it. And call me crazy, but I enjoy writing coded tests for the things I make. My favourite testing framework for my C# development is xUnit, and naturally, as I started writing some new code with .NET Standard, I wanted to make sure I could get my tests to run.

Here’s an example of some C# code I wrote for my unit tests of a simple LRU cache class I was playing around with:

    [ExcludeFromCodeCoverage]
    public sealed class LruCacheTests
    {
        [Fact]
        public void Constructor_CapacityTooSmall_ThrowsArgumentException()
        {
            Assert.Throws<ArgumentException>(() => new LruCache<int, int>(0));
        }

        [Fact]
        public void ContainsKey_EntryExists_True()
        {
            var cache = new LruCache<int, int>(1);
            cache.Add(0, 1);
            var actual = cache.ContainsKey(0);
            Assert.True(
                actual,
                $"Unexpected result for '{nameof(LruCache<int, int>.ContainsKey)}'.");
        }
    }

Pretty simple stuff. I know that for xUnit in Visual Studio, I need a NuGet package for the test runner to work right in the IDE. Simple enough: I just need to add the “xunit.runner.visualstudio” package alongside the xunit package I had already included in my test project.

Required xUnit NuGet packages shown in the Visual Studio NuGet package manager for the test project.

Ready to rock! So I go run all my tests in the solution but I’m met with this little surprise:

[3/24/2020 3:59:10.570 PM] ========== Discovery aborted: 0 tests found (0:00:00.0622045) ==========
[3/24/2020 3:59:20.510 PM] ---------- Discovery started ----------
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find C:\[redacted]\bin\Debug\netstandard2.0\testhost.dll. Please publish your test project and retry.
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
   at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
[3/24/2020 3:59:20.570 PM] ========== Discovery aborted: 0 tests found (0:00:00.0600179) ==========
Executing all tests in project: [redacted].Tests
[3/24/2020 3:59:20.635 PM] ---------- Run started ----------
[3/24/2020 3:59:20.639 PM] ========== Run finished: 0 tests run (0:00:00.0039314) ==========

Please publish your test project and retry? Huh?

As any software engineer does, I set out to Google for answers. I came across this Stack Overflow post: https://stackoverflow.com/q/54770830/2704424

And fortunately someone had responded with a link to the xUnit documentation: Why doesn’t xUnit.net support netstandard?

The answer was right at the top!

netstandard is an API, not a platform. Due to the way builds and dependency resolution work today, xUnit.net test projects must target a platform (desktop CLR, .NET Core, etc.) and run with a platform-specific runner application.

https://xunit.net/docs/why-no-netstandard

My solution was to change my test project to build for one of the latest .NET Framework versions… and voila! I chose .NET Framework 4.8 as the latest available at the time of writing.
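If you’re editing the test project file by hand, the change boils down to something like this (a sketch assuming an SDK-style project; you could also multi-target if you want to keep a netstandard2.0 build around):

<PropertyGroup>
  <!-- Target a concrete platform so the xUnit test host can actually run -->
  <TargetFramework>net48</TargetFramework>
  <!-- Or: <TargetFrameworks>netstandard2.0;net48</TargetFrameworks> -->
</PropertyGroup>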

My next attempt at running all of my tests looked like this:

Executing all tests in project: [Redacted].Tests
[3/24/2020 3:59:20.635 PM] ---------- Run started ----------
[3/24/2020 3:59:20.639 PM] ========== Run finished: 0 tests run (0:00:00.0039314) ==========
[3/24/2020 4:08:14.898 PM] ---------- Discovery started ----------
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Desktop .NET 4.0.30319.42000)
[xUnit.net 00:00:00.40]   Discovering: [Redacted].Tests
[xUnit.net 00:00:00.47]   Discovered:  [Redacted].Tests
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Universal Windows)
[3/24/2020 4:08:16.289 PM] ========== Discovery finished: 2 tests found (0:00:01.3819229) ==========
Executing all tests in project: [Redacted].Tests
[3/24/2020 4:08:17.833 PM] ---------- Run started ----------
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (32-bit Desktop .NET 4.0.30319.42000)
[xUnit.net 00:00:00.41]   Starting:    [Redacted].Tests
[xUnit.net 00:00:00.66]   Finished:    [Redacted].Tests
[3/24/2020 4:08:19.337 PM] ========== Run finished: 2 tests run (0:00:01.4923808) ==========

And I was back on my path to success! Hopefully if you run into this same issue you can resolve it in the same fashion. Happy testing!


Autofac Modules and Code Organization

Organizing Code With Autofac Modules

What are Autofac Modules?

I’ve been writing a little bit about Autofac and why it’s rad, but today I want to talk about Autofac modules. In my previous post on this, I mentioned that one of the drawbacks of the constructor dependency pattern is that at some point in your application, generally in the entry point, you get allllll of this spaghetti code that is the setup for your code base.

Essentially, we’ve balanced having nice, clean, testable classes with having a really messy spot in the code. But it’s only ONE spot and the rest of your code is nice. So it’s a decent trade-off. But we can do better than that, can’t we?

Autofac modules!

We can use Autofac modules to organize some of the code that we have in our entry point into logical groupings. An Autofac module is an implementation of a class that registers types to our dependency container to be resolved at a later time. You could do this all in one big module, but like many things in programming, having some giant monolithic thing that does ALL the work usually isn’t the best.

An Example of Converting to Autofac Modules

Let’s create a simple application as an example. I’ll describe it in words, and then I’ll toss up some code to show a simple representation of it. We’ll assume we’re using dependencies passed as interfaces via constructors as one of our best practices, which makes this conversion much easier!

So our app will have a main window with a main content area and a header area. These will be represented by three objects. Our application will also have a logger instance that we pass around so classes that need logging abilities can take an ILogger in their constructor. But our logger will have some simple configuration that we need to do before we use it.

Let’s assume to start our Program.cs file looks like this:

internal sealed class Program
{
    private static void Main(string[] args)
    {
        var logger = new FileLogger();
        logger.LogLevel = LogLevel.Debug;
        logger.FilePath = "log.txt";

        var header = new FancyHeader(logger);
        var content = new BoringMainContent();
        var window = new MainWindow(header, content);
        window.Show();
    }
}

Before getting comfortable with Autofac, my initial first step would be to logically group things in the main method. In this particular case, we have something simple and surprise… it’s all grouped. But my next step would usually be to pull these things out into their own methods. I do this because it helps me identify if my groupings make sense and where my dependencies are. Let’s try it!

internal sealed class Program
{
    private static void Main(string[] args)
    {
        var logger = InitializeLogging();
        var window = InitializeGui(logger);
        window.Show();
    }

    // no params passed in, so no dependencies
    // return value is an ILogger, so we have a
    // logical grouping that will provide us a logger
    private static ILogger InitializeLogging()
    {
        var logger = new FileLogger();
        logger.LogLevel = LogLevel.Debug;
        logger.FilePath = "log.txt";
        return logger;
    }

    // only parameter is a logger, so that's our dependency
    // return value is a window, so this grouping provides
    // a window for us
    private static IWindow InitializeGui(ILogger logger)
    {
        var header = new FancyHeader(logger);
        var content = new BoringMainContent();
        var window = new MainWindow(header, content);
        return window;
    }
}

Alright, cool. So yes, this is a bit of extra code compared to the initial example, but I promise you that grouping these things out into separate methods as a starting point when you have a LOT of initialization logic will help a ton. Once they are in methods, you can pull them out into their own classes. There’s some Refactoring 101 for the single responsibility principle going on here ;) BUT we’re interested in Autofac. So what’s the next step?

We have two logical groupings going on here in our example. One is logging and the other is for the GUI. So we can actually go ahead and make two Autofac modules that do this work for us.

public sealed class LoggingModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<FileLogger>()
            .AsImplementedInterfaces() // FileLogger will be resolved as an ILogger
            .SingleInstance() // we only ever need to use one logger instance for our app
            .OnActivated(x =>
            {
                // this handles our extra setup we had for this object
                x.Instance.LogLevel = LogLevel.Debug;
                x.Instance.FilePath = "log.txt";
            });
    }
}

public sealed class GuiModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<FancyHeader>() // this has a dependency on ILogger, but autofac will figure it out for us
            .AsImplementedInterfaces() // FancyHeader will be resolved as IHeader
            .SingleInstance(); // we only ever need to use one instance for our app
        builder
            .RegisterType<BoringMainContent>()
            .AsImplementedInterfaces() // BoringMainContent will be resolved as IContent
            .SingleInstance(); // we only ever need to use one instance for our app
        builder
            .RegisterType<MainWindow>() // Autofac will resolve our IHeader and IContent dependencies for us
            .AsImplementedInterfaces() // MainWindow will be resolved as IWindow
            .SingleInstance(); // we only ever need to use one instance for our app
    }
}

And those are our two logical groupings for modules! So, how do we use this and what does our Main() method look like now? I’ll demonstrate with one way that works for a couple modules, but I want to follow up with another post that talks about dynamically loading modules. If you can imagine this scenario blown out across MANY modules, you’ll understand why it might be helpful.

The idea for our Main() method is that we just want to resolve the one main dependency manually and let Autofac do the rest. So in this case, it’s our MainWindow.

private static void Main(string[] args)
{
    // create an autofac container builder
    var containerBuilder = new ContainerBuilder();

    // manually register our two new modules we made
    containerBuilder.RegisterModule<LoggingModule>();
    containerBuilder.RegisterModule<GuiModule>();

    // create the dependency container
    var container = containerBuilder.Build();

    // resolve and use our main dependency by its interface
    // (because we shouldn't care what the implementation is...
    // that was up to the configuration via modules!)
    var window = container.Resolve<IWindow>();
    window.Show();
}

In Summary…

This example showed how to split your main initialization logic out into groups that play nicely as Autofac modules. In a really simple example, having modules might look like bloated extra code, but it already illustrates that your entry point is very simple and follows a pattern that’s easy to extend (just register another module for more dependencies… and I’ll add more on this later). There’s also an obvious way to group new logic into your application for dependencies! We discussed logging and GUI initialization, but you could extend this to:

  • User Settings
  • Analytics/Telemetry
  • Error Reporting
  • Database Configuration
  • Etc… Just add more modules!

Sometimes the pain of having a really hectic entry point isn’t realized until you’ve had to work on teams where people are modifying the same beast of an entry point all the time:

  • Simple merge conflicts in your “using” statements… because there are hundreds of lines of using statements at the top of the file
  • Visual Studio actually CANNOT use IntelliSense properly when the file gets too unwieldy
  • The debugger cannot resolve variables properly when the main entry point gets too big
  • Merging and auto-conflict resolution sometimes results in code just getting blown away in the entry point… and good luck finding what went wrong in your thousands of lines of initialization

So what’s next? Well, if you keep building out your app you might notice you have tons of modules now. Your single GUI module might have to get broken out into modules for certain parts of the GUI, for example, just to keep them more manageable. Maybe you want plugins to extend the application dynamically, which is really powerful! Our method for registering modules just isn’t really extensible at that point, but it’s very explicit. I’ll be sharing some information about automatic Autofac module discovery and registration next!
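As a small teaser of that follow-up, Autofac can already scan an assembly for module classes so you don’t have to list each one. A minimal sketch (assuming the modules live in the entry assembly) could look like:

private static void Main(string[] args)
{
    var containerBuilder = new ContainerBuilder();

    // register every Autofac module found in this assembly instead of
    // calling RegisterModule<T>() once per module
    containerBuilder.RegisterAssemblyModules(typeof(Program).Assembly);

    var container = containerBuilder.Build();
    var window = container.Resolve<IWindow>();
    window.Show();
}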


RPG Development Progress Pulse – Entry 1

Progress Pulse

Progress Pulse – Entry 1

For the first entry in the Progress Pulse series, I’ll touch on some things from the past week or so. There have been a lot of smaller things churning in the code base, some of them interesting and others less so, so I want to highlight a few. As a side note, it’s really cool to see that the layout and architecture are allowing features to be added pretty easily, so I’ll dive a bit deeper on that. Overall, I’m pretty happy with how things are going.

Unity3D – Don’t Fight It!

I heard from a colleague before that Unity3D does some things you might not like, but that you shouldn’t try to fight it; just go with it. To me, that’s a challenge. If I’m going to be spending time coding in something, I want it to be with an API that I enjoy. I don’t want to spend time fighting it. An example of this is how I played with the stitching pattern to make my Autofac life easier with Unity3D behaviours.

However, I met my match recently. At work, we were doing an internal hackathon where we could work on projects of our choosing over a 24-hour period, and they didn’t have to be related to work at all. It’s a great way to collaborate with your peers and learn new things. I worked on Macerus and ProjectXyz. I was reaching a point where I had enough small, seemingly corner-case bugs when switching scenes and resetting things that I decided it was dragging my productivity down. It wasn’t exciting work, but I had to do something about it.

After debugging some console logs (I still have to figure out how to get Visual Studio properly attached for debugging… maybe I’ll write an article on that when I figure it out?), I noticed I had a scenario that could only happen if one of my objects was running some work at the same time… as itself? Which shouldn’t happen. Basically, I had caught a scenario where my asynchronous code was running two instances of worker threads, and it was a scenario in my game that should never occur.

I tried putting task cancellation and waiting into my Unity game. I managed to hang the main thread on scene switching and application close. No dice. I spent a few hours trying to play around with a paradigm where I could make my ProjectXyz game engine object run asynchronously within Unity and not be a huge headache.

I needed to stop fighting it though. There was an easier solution.

I could make a synchronous and an asynchronous API for my game engine. If you have a game where you want the engine on its own thread, call the async version. Unity3D already has its own game engine loop, so why re-invent it? In Unity3D, I can simply call the synchronous version of the game engine’s API. With this little switch, I suddenly fixed about 3 or 4 bugs. I had to stop fighting the synchronous pattern with my asynchronous code.
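To illustrate the shape of that API change (these names are hypothetical, not my actual engine interface), the idea is roughly:

public interface IGameEngine
{
    // synchronous: the host drives the ticks (e.g. Unity3D's Update loop)
    void Update();

    // asynchronous: the engine drives its own loop on a worker thread
    Task RunAsync(CancellationToken cancellationToken);
}

// Inside a Unity3D MonoBehaviour we just lean on the loop Unity already has:
public sealed class GameEngineBehaviour : MonoBehaviour
{
    private IGameEngine _gameEngine;

    private void Update()
    {
        _gameEngine.Update();
    }
}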

The lesson? Sometimes you can just come up with a simple solution that’s an alternative instead of hammering away trying to fix a problem you created yourself.

DevOps – Build & Copy Dependencies

This one for me has been one of my biggest nightmares so far.

The structure of my current game setup is as follows:

  • ProjectXyz.sln: The solution that contains all of my back-end shared game framework code. This is the really generic stuff I’m trying to build up so that I can build other games with generic pieces if I wanted to.
  • Macerus.sln: The game-specific business logic for my RPG built using ProjectXyz as a dependency. Strictly business logic though.
  • Macerus Unity: The project that Unity3D creates. This contains presentation layer code built on Macerus.sln outputs and ProjectXyz.sln outputs.

I currently don’t have my builds set up to create NuGet packages. This would probably be an awesome route to go, but I also think it might result in a ton of overhead right now as the different pieces are constantly seeing churn. It’s probably something I’ll revisit as things harden, but for now it seems like too much effort given the trade-off.

So what have I been doing?

  • I build ProjectXyz.sln.
    • The outputs go into this solution’s bin folder
  • I build Macerus.sln
    • There’s a prebuild step that copies ProjectXyz dependencies over
    • The outputs go into this solution’s bin folder
  • I use a custom in-editor menu to copy dependencies into my Unity project
    • This resets my current “dependencies” asset folder
    • The build outputs from the other solutions are copied over
  • I can run the project with new code!

This is a little tedious, sure, but it hasn’t been awful. The problem? Visual Studio can only seem to clean what it has knowledge about.

I’ve been refactoring and renaming assemblies to better fit the structure I want. A side note worth mentioning is that MUCH of my code is pluggable… The framework is very light, and most things are injected via Autofac from enumerating plugin modules. One of the side effects is that downstream dependencies of ProjectXyz.sln (i.e. Macerus.sln) have build outputs that include some of the old DLLs from prior to the rename. And now… Visual Studio doesn’t seem to want to clean them up on build. So what happens?

Unity3D starts automatically referencing these orphaned DLLs, and the auto-plugin loading exhibits some crazy behaviour. I’ve been seeing APIs show up that haven’t existed for weeks because some stale DLL is now showing up after an update to the dependencies. This kind of thing was chewing up HOURS of my debugging time. Not going to fly.

I decided to expand my menu a bit more. I now call MSBuild.exe on my dependency solutions prior to copying over dependencies, which removes two completely manual steps from the process. I also purged my local bin directories. Now when I encounter this problem of orphaned DLLs, my single click to update all my content lets me churn iterations faster and shortens my debugging time. It’s unfortunately still not an ultimate solution to the orphaned dependencies lingering around, but it’s better.
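The in-editor menu itself is nothing fancy. A hedged sketch of the approach follows; the menu name, solution paths, and MSBuild path below are placeholders, not my actual setup:

public static class DependencyMenu
{
    [MenuItem("Tools/Update Dependencies")]
    private static void BuildAndCopyDependencies()
    {
        // build the upstream solutions first so their outputs are fresh...
        RunMsBuild(@"..\ProjectXyz\ProjectXyz.sln");
        RunMsBuild(@"..\Macerus\Macerus.sln");

        // ...then clear Assets/Dependencies and copy the new outputs over
    }

    private static void RunMsBuild(string solutionPath)
    {
        var startInfo = new System.Diagnostics.ProcessStartInfo
        {
            FileName = @"C:\Path\To\MSBuild.exe", // placeholder path to MSBuild.exe
            Arguments = $"\"{solutionPath}\" /t:Build /p:Configuration=Debug",
            UseShellExecute = false,
        };
        using (var process = System.Diagnostics.Process.Start(startInfo))
        {
            process.WaitForExit();
        }
    }
}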

The lesson learned here was that sometimes you don’t need THE solution to your problem, but if you can make temporarily fixing it or troubleshooting it easier then it might be good enough to move forward for now.


Stitching – Combining Unity3D And Autofac

Stitching - Combining Unity3D And Autofac

Before We Talk About Stitching…

In Unity3D, the scripts we write and attach to GameObjects inherit from a base class called MonoBehaviour (and yes, that says Behaviour with a U in it, not the American spelling like Behavior… just a heads up). MonoBehaviour instances can be attached to GameObjects in code by calling the AddComponent method, which takes the component type as a generic type parameter and returns the new instance of the attached MonoBehaviour that it creates.

This API usage means that:

  • We cannot attach existing instances of a MonoBehaviour to a GameObject
  • Unity3D takes care of instantiating MonoBehaviours for us (thanks Unity!)
  • … We can’t pass parameters into the constructor of a MonoBehaviour because Unity3D only handles parameterless constructors (boo Unity!)

So what’s the problem with that? It kind of goes against some design patterns I’m a big fan of, where you pass your object’s dependencies in via the constructor. You can read my little primer about constructor parameter passing, dependency injection, and Autofac to learn more.

The challenge I’m trying to address is that my non-MonoBehaviour classes are all going to be setup to use constructor parameter passing as much as possible but the MonoBehaviour classes cannot. So I’d like to reduce the amount of disjoint coding styles as much as I can and make the MonoBehaviour classes feel like the rest of my stuff!

What Is “Stitching”?

Here’s where this little pattern I created called “Stitching” comes into play. Stitching involves using a class referred to as a Stitcher, whose single purpose is to take parameters in via a constructor and wire them up to either public properties or public fields (but I REALLY suggest using properties) on the MonoBehaviour that we instantiate through the GameObject.AddComponent() API.

The code ends up looking something like this:

public sealed class MyComponentStitcher
{
  private readonly IDependency _dependency;

  public MyComponentStitcher(IDependency dependency)
  {
    // take in our dependencies and save them as fields
    _dependency = dependency;
  }

  public MyComponent Stitch(GameObject gameObject)
  {
    // create the MonoBehaviour instance using the Unity3D API
    var componentInstance = gameObject.AddComponent<MyComponent>();

    // wire up our dependencies (assign our field to a property on the component)
    componentInstance.Dependency = _dependency;

    return componentInstance;
  }
}

Where you can see that:

  • We inject dependencies into the Stitcher’s constructor
  • We call AddComponent() with the component type we want on the object we want to “stitch” to
  • We mutate the component
  • We return the newly made component

How Do We Use Stitching In Practice?

Now that we see the pattern for a how a Stitcher works, how do we actually use Stitching in practice? Let’s start by using another example:

public sealed class SomeClass
{
  private readonly IMyComponentStitcher _stitcher;

  public SomeClass(IMyComponentStitcher stitcher)
  {
    _stitcher = stitcher;
  }

  public void MyMethod()
  {
    // create a new Unity3D game object
    var gameObject = new GameObject("My Game Object");

    // "stitch" our 
    var myComponent = _stitcher.Stitch(gameObject);

    // we can use some information that would have been injected into the constructor
    // this should print the injected value
    Debug.Log(myComponent.InjectedInfo);
  }
}

From this, you can see that:

  • We have a class called SomeClass following our constructor parameter passing paradigm
  • The method MyMethod()
    • Creates a new game object
    • Adds a MyComponent instance to our game object by calling the Stitch() method
    • Using our imagination and the example above, pretend our Stitcher implementation takes a parameter in its constructor to assign to the InjectedInfo property of our MonoBehaviour
  • Logs out the value of the InjectedInfo property found on our newly created instance

So What Makes Stitching Better?

You might feel like this is extra code right now, but this is where the power of Autofac comes into play. You can read my article about using Autofac with Unity3D for more information.

By creating a Stitcher, we can register it to our Autofac container. The Autofac container will then resolve any dependencies that our Stitcher requires. The net effect is that when we stitch MonoBehaviours to GameObjects, it feels like Autofac is resolving dependencies for our MonoBehaviours. We don’t need to mutate MonoBehaviour fields/properties all over our code to assign the dependencies the script needs to use. Instead, we treat the Stitcher class like a factory for our MonoBehaviour.
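For completeness, the registration itself is just ordinary Autofac. A minimal sketch (assuming an IMyComponentStitcher interface like the example above) might be:

public sealed class StitchingModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder
            .RegisterType<MyComponentStitcher>()
            .AsImplementedInterfaces() // resolved as IMyComponentStitcher
            .SingleInstance();
        // any IDependency registered elsewhere gets injected into the
        // Stitcher's constructor when Autofac resolves it
    }
}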

So in summary:

  • Stitching allows us to leverage Autofac for instantiating MonoBehaviours
  • Stitcher classes essentially become a factory class for our MonoBehaviours (with the side effect that they *must* mutate the GameObject that we need to attach the MonoBehaviour to)
  • Stitching allows the assignment of MonoBehaviour fields/properties for initialization to live in one spot, so the object-mutating code we’d rather avoid stays hidden away in one place

Multiple C# Projects In Your Unity 3D Solution

Unity

Problem: Visual Studio and Unity Aren’t Playing Nice!

Disclaimer: I develop on Windows, so I have no idea if any of this even applies to other operating systems. I assume not. Sorry.

I just started poking around in Unity 4.6 and I’ve been having a blast. I’ve made it to the point where I want to actually start hammering out some code, but I came across a bit of a problem: I want to start leveraging other projects I’ve written in my Unity solution while I’m in Visual Studio, and things are blowing up. So, what gives?

Okay, so let me start by explaining why I want to do this. I understand that if I’m making a simple game, I should have no problem breaking out my Unity scripts into subfolders and organizing them to be nice and pretty. The problem I’m encountering is that I have existing projects under source control, and I don’t want to copy and paste all of that code as scripts into my Unity folder. I also want to be able to create reusable code for my future games, so I’d like to start breaking things out into libraries as I see fit.

So, if you’ve been playing around in Unity for a bit, you might say “Oh, well you’re a dummy! Unity can totally leverage your C# DLLs once you drop them into your asset folder”! And you’d be 100% correct. But that’s not the workflow I want.

The underlying problem here is this: Unity will re-write your solution and project file when you flip between Unity and Visual Studio. But I’m sure they have it that way for a reason.

The Goal: Visual Studio and Unity Should… Play Nice!

My ideal state would be something like this:

  • Work in Visual Studio as much as I’d like, add new projects to my solution, and reference them accordingly
  • Flip back and forth from Unity and Visual Studio without having to reset things to compile/run again
  • Build from Visual Studio and have things end up in the right spot… NOT copy DLLs by hand
  • Not copy+paste my entire project(s) already under source control elsewhere

Is this something that can be achieved though? I was pretty determined that I should be able to do *something* to have this working. Could I get it perfect? I wasn’t sure… But I knew I could make it better.

The Solution: Give and Take with Unity

My *almost* perfect solution, which I’ll walk you through, is this: leveraging Visual Studio Tools for Unity, modify the Unity solution as you see fit and use directory junctions (symlinks) to the build output directories of other projects.

  1. Let’s get Visual Studio Tools for Unity installed. Visit that link and download the version that you need for the version of Visual Studio that you use. After installing, I opened up my project within Unity and had to import the Visual Studio Tools package.

    Import Package
    After selecting this menu item, I was presented with a dialog for picking the items to import. I left it as is.

    Import Package2
    After importing these items, I could see that Unity had successfully added these entries under my Assets folder. Okay, now we’re getting somewhere. Next up, I wanted to configure Unity to not modify my solution every time I go back and forth from Unity to Visual Studio. This is the part that makes or breaks keeping the projects I’ve added to my solution. For me, it’s critical to have the code I’m working on immediately accessible so that I can jump back and forth between projects. Lucky for us, this part is pretty easy. Go to the menu to access your new Visual Studio Tools menu item:

    VS Tools Configure
    Selecting “Configuration” opens up a really simple dialog. Let’s make sure “Generate solution file” is unchecked! It’s that easy.

    VS Tools Configure2
    Once we have all of this setup, we should be able to go into Visual Studio and add other projects to our solution.

  2. The one thing that I *could not* get this solution to do is have Unity leave my main game project alone in Visual Studio. As a result, the rest of this walkthrough is about playing by Unity’s rules. Unity is good at magically referencing all of the managed DLLs that you include within your Assets folder. If you drop DLLs somewhere within “Assets” and switch to Visual Studio, Unity will likely have modified your main project to reference those DLLs.

    My next step was creating a spot where I wanted to drop the build outputs of the extra projects I wanted to reference. In my Visual Studio solution, I have my original game project and some newly added projects I want to build from source. In Unity, I wanted these to end up in “Assets/Dependencies/bin”. No problem. Let’s make that folder structure (or your equivalent if you don’t like my naming):

    Bin Dependencies
    The next part is probably the “trickiest” part because it’s… well… unusual. You could technically stop here and manually copy DLLs back and forth, but I’m not about that life. I want things to happen automatically. For this, we’re going to use junction points. Browse to your newly created folder in an administrator command prompt. I say administrator because only certain users have permissions to create junction points. Your non-admin user might, but this is my “safe” way of instructing you. On the command prompt, we’re going to use “mklink” to create a junction. The command is “mklink /D /J <NAME_OF_YOUR_PROJECT> <RELATIVE_PATH_TO_YOUR_PROJECT>”. For example, if you had a C# project you wanted to reference that was “MyCoolLibrary.csproj” and was located in the directory above your Unity project, you might use the command “mklink /D /J MyCoolLibrary “..\..\..\..\MyCoolLibrary\bin\debug”” (see the example after this list). Note that I used two dots to go back up a directory several times (since we’re inside of Assets\Dependencies\bin and want to get outside of our Unity project). You should get a success message when your junction is created.

    Repeat this step for as many extra projects as you want to include. You can always come back and add more projects this way too, or remove¬†the junctions if you don’t want to include a project anymore.

    At this point, you’re technically done. If you build from Visual Studio, you should have your other projects’ DLLs end up in your Unity folder, and your main game project will be updated by Unity to reference these now!

  3. But… you’re not done if you use source control for your Unity project and have separate source control on your other projects. The scary thing here is that usually we don’t want our build outputs stored in source control… But if we do nothing else, your source control system will likely want to include the newly created “Assets\Dependencies\bin” folder and any of the contents you’re building into there. I just modified my git ignore file (I’m sure there’s an equivalent for SVN or other source control) to exclude the contents of “Assets\Dependencies\bin”.

    The reason I didn’t exclude the Dependencies folder altogether is that I can add other folders and DLL references here that I don’t want to build (like… the normal way). This gives me the flexibility of building the projects I want to control and still being able to just reference other pre-built DLLs!
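To make the junction step from above concrete, the commands look something like this; the paths are examples only, so adjust them to match where your library actually builds to:

cd <YourUnityProject>\Assets\Dependencies\bin
mklink /D /J MyCoolLibrary ..\..\..\..\MyCoolLibrary\bin\debug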

Summary

In three easy steps, you should be able to use Unity, Visual Studio, and multiple projects in one solution in a what-feels-like-normal way. Because there’s still some dynamic stuff going on with Unity updating your main project, you might find the odd time you need to build twice to fix up compilation problems. I’ve seen this happen maybe once or twice so far, but otherwise it feels like normal. It’s also important to note that you can’t escape the Unity project updating… Don’t add references to your main project manually. That’s what the “Assets\Dependencies” folder we made is for.

Here are a few shots of what my setup looks like (proof that it works):

Solution Explorer

Unity Dependencies

And of course… it’s not the perfect solution. There’s still these things:

  • Unity gets mad at you for using junctions within your project. It actually tells you not to do this because you can mess things up. It’s working awesome for me right now though… So I’m going to just ignore this warning.
  • Remember step 3 where we ignored the Assets\Dependencies\bin location in git? This actually ignored the junction points you created too. As a result, anyone else who clones your code will need to create the junctions as well. I’m working solo, so I’m not too worried about this step… But it’s definitely something that should be fixed up (again, I’m sure it’s doable, but I’m in no rush).

Hope that helps you feel more at home in Unity and Visual Studio! It certainly made it nicer for me.

 


IronPython: A Quick WinForms Introduction

IronPython: A Quick WinForms Introduction

Background

A few months ago I wrote up an article on using PyTools, Visual Studio, and Python all together. I received some much appreciated positive feedback for it, but really for me it was about exploring. I had dabbled with Python a few years back and hadn’t really touched it much since. I spend the bulk of my programming time in Visual Studio, so it was a great opportunity to try and bridge that gap.

I had an individual contact me via the Dev Leader Facebook group who had come across my original article. However, he wanted a little bit more out of it. Since I had my initial exploring out of the way, I figured it was probably worth trying to come up with a semi-useful example. I could get two birds with one stone here: help out at least one person, and get another blog post written up!

The request was really around taking the output from a Python script and being able to display it in a WinForms application. I took it one step further and created an application that either lets you choose a Python script from your file system or lets you type in a basic script directly on the form. There aren’t any fancy editor tools on the form, but someone could easily take this application and extend it into a little Python editor if they wanted to.

Leveraging IronPython

In my original PyTools article, I mention how to get IronPython installed into your Visual Studio project. In Visual Studio 2012 (and likely a very similar approach for other versions of Visual Studio), the following steps should get you setup with IronPython in your project:

  • Open an existing project or start a new one.
  • Make sure your project is set to be at least .NET 4.0
    • Right click on the project within your solution explorer and select “Properties”
    • Switch to the “Application” tab.
    • Under “Target framework”, select “.NET Framework 4.0”.
  • Right click on the project within your solution explorer and select “Manage NuGet Packages…”.
  • In the “Search Online” text field on the top right, search for “IronPython”.
  • Select “IronPython” from within the search results and press the “Install” button.
  • Follow the instructions, and you should be good to go!

Now that we have IronPython in a project, we’ll need to actually look at some code that gets us up and running with executing Python code from within C#. If you followed my original post, you’ll know that it’s pretty simple:


var py = Python.CreateEngine();
py.Execute("your python code here");

And there you have it. If it seems easy, that’s because it is. But what about the part about getting the output from Python? What if I wanted to print something to the console in Python and see what it spits out? After all, that’s the goal I was setting out to accomplish with this article. If you try the following code, you’ll notice you see a whole lot of nothing:


var py = Python.CreateEngine();
py.Execute("print('I wish I could see this in the console...')");

What gives? How are we supposed to see the output from IronPython? Well, it all has to do with setting the output Stream of the IronPython engine. It has a nice little method for letting you specify what stream to output to:


var py = Python.CreateEngine();
py.Runtime.IO.SetOutput(yourStreamInstanceHere, Encoding.GetEncoding(1252));

In this example, I wanted to output the stream directly into my own TextBox. To accomplish this, I wrote up my own little stream wrapper that takes in a TextBox and appends the stream contents directly to the Text property of the TextBox. Here’s what my stream implementation looks like:


private class ScriptOutputStream : Stream
{
  #region Fields
  private readonly TextBox _control;
  #endregion

  #region Constructors
  public ScriptOutputStream(TextBox control)
  {
    _control = control;
  }
  #endregion

  #region Properties
  public override bool CanRead
  {
    get { return false; }
  }

  public override bool CanSeek
  {
    get { return false; }
  }

  public override bool CanWrite
  {
    get { return true; }
  }

  public override long Length
  {
    get { throw new NotImplementedException(); }
  }

  public override long Position
  {
    get { throw new NotImplementedException(); }
    set { throw new NotImplementedException(); }
  }
  #endregion

  #region Exposed Members
  public override void Flush()
  {
  }

  public override int Read(byte[] buffer, int offset, int count)
  {
    throw new NotImplementedException();
  }

  public override long Seek(long offset, SeekOrigin origin)
  {
    throw new NotImplementedException();
  }

  public override void SetLength(long value)
  {
    throw new NotImplementedException();
  }

  public override void Write(byte[] buffer, int offset, int count)
  {
    _control.Text += Encoding.GetEncoding(1252).GetString(buffer, offset, count);
  }
  #endregion
}

Now while this isn’t pretty, it serves one purpose: Use the stream API to allow binary data to be appended to a TextBox. The magic is happening inside of the Write() method where I take the binary data that IronPython will be providing to us, convert it to a string via code page 1252 encoding, and then append that directly to the control’s Text property. In order to use this, we just need to set it up on our IronPython engine:


var py = Python.CreateEngine();
py.Runtime.IO.SetOutput(new ScriptOutputStream(txtYourTextBoxInstance), Encoding.GetEncoding(1252));

Now, any time you output to the console in IronPython you’ll get your console output directly in your TextBox! The ScriptOutputStream implementation and calling SetOutput() are really the key points in getting output from IronPython.

The Application at a Glance

I wanted to take this example a little bit further than the initial request. I didn’t just want to show that I could take the IronPython output and put it in a form control; I wanted to demonstrate being able to pick the Python code to run, too!

Firstly, you’re able to browse for Python scripts using the default radio button. Just type in the path to your script or use the browse button:

IronPython - Run script from file

Enter a path or browse for your script. Press “Run Script” to see the output of your script in the bottom TextBox.

Next, press “Run Script”, and you’re off! This simply uses a StreamReader to get the contents of the file, and then, once the contents are stored in a string, they are passed into the IronPython engine’s Execute() method. As you might have guessed, my “helloworld.py” script just contains a single line that prints out “Hello, World!”. Nothing too fancy in there!
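A hedged sketch of what that click handler can look like (the control names here are made up and may not match the actual sample project):

private void btnRunScript_Click(object sender, EventArgs e)
{
    // read the chosen script file into a string
    string scriptContents;
    using (var reader = new StreamReader(txtScriptPath.Text))
    {
        scriptContents = reader.ReadToEnd();
    }

    // point the engine's output at our output TextBox and execute
    var py = Python.CreateEngine();
    py.Runtime.IO.SetOutput(new ScriptOutputStream(txtOutput), Encoding.GetEncoding(1252));

    try
    {
        py.Execute(scriptContents);
    }
    catch (Exception ex)
    {
        // basic error handling: dump the exception and stack trace to the output box
        txtOutput.Text += ex.Message + Environment.NewLine + ex.StackTrace;
    }
}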

Let’s try running a script that we type into the input TextBox instead. There’s some basic error handling so if your script doesn’t execute, I’ll print out the exception and the stack trace to go along with it. In this case, I tried executing a Python script that was just “asd”. Clearly, this is invalid and shouldn’t run:

python_error_asd

Python interpreted the input we provided but, as expected, could not find a definition for “asd”.

That should be along the lines of what we expected: the script isn’t valid, and IronPython tells us why. What other errors can we see? Well, the IronPython engine will also let you know if you have bad syntax:

python_error_bad_syntax

Python interpreted the script, but found a syntax error in our silly input.

Finally, if we want to see some working Python we can do some console printing. Let’s try a little HelloWorld-esque script:

python_pass_hello_world

Python interpreted our simple Hello World script.

Summary

This sample was pretty short, but that just demonstrates how easy it is! Passing a script from C# into the IronPython engine is straightforward, but getting the output from IronPython is a bit trickier. If you’re not familiar with the different parts of the IronPython engine, it can be difficult to find the things you need to get this working. With a simple custom stream implementation, we’re able to get the output from IronPython easily. All we had to do was create our own stream implementation and pass it into the SetOutput() method that’s available via the IronPython engine class. Now we can easily hook the output of our Python scripts!

As always, all of the source for you to try this out is available online:

Some next steps might include:

  • Creating your own Python IDE. Figure out some nice text-editing features and you can run Python scripts right from your application.
  • Creating a test script dashboard. Do you write test scripts for other applications in Python? Why not have a dashboard that can report on the results of these scripts?
  • Add in some game scripting! Sure, you could have done this with IronPython alone, but maybe now you can skip the WinForms part of this and just make your own stream wrapper for getting script output. Cook up some simple scripts in a scripting engine and voila! You can easily pass information into Python and get the results back out.

Let me know in the comments if you come up with some other cool ideas for how you can leverage this!


Movember Prep – Weekly Article Dump

MoMagnets - Magnet Forensics' Movember Team

Movember Preparation

You might think we’re a bit early on this one, but at Magnet Forensics we’re going to take Movember to a whole new level this year. If you’re not familiar with Movember, you may want to head over here and get a rundown of its history. Movember started in Australia with a group of people who wanted to (somewhat jokingly) bring the moustache back into style. The next year they started getting people to grow mo’s for causes. Now people participate in Movember to raise awareness for men’s health, and it’s bigger than ever.

Our team members of MoMagnets have started discussing the various styles of mo’s that they’ll grow this year. It looks like there’s going to be some intra-team competition to grow the best mo. The top contenders? It’s looking like:

Matthew Chang - Movember

Matthew “The Chang” “Changarang” Chang sporting a well-groomed black moustache. Although it’s a standard ‘stache, the care put into keeping this beauty mo in tip-top shape is obvious. Can he do it again for this Movember?

Cameron Sapp - Movember

Cameron Sapp showing off a rock solid handle bar mo. The bars on this ‘stache are so impressive that it almost gives the illusion that this mo is taller than it is wide. Wait… is it?!

Check out the MoMagnets page and keep track of us! Please contribute what you can to help raise awareness for men’s health.

Articles

  • Python, Visual Studio, and C#‚Ķ So. Sweet.: First one on the list this week is the post I put out on Monday about using Python, C#, and Visual Studio all together. It’s definitely for the developers out there, but for those of you who aren’t programmers, it’s still interesting to see how PyTools and IronPython have bridged a gap between C# and Visual Studio. I was pretty happy with the number of people who responded on social media and thought that it was a good read. The tweets actually led me to find a related post by Scott Hanselman from earlier this year (that I wish I saw sooner). My article has also received some pretty good visibility at Code Project¬†which I’m excited about. Feel free to check it out over there too (people seem more likely to engage in discussion at Code Project versus on my blog)!
  • Want To Build A Business? Lead With Trust: David Hassell wrote an article that really hit home with me. Having a successful business means crafting a team and culture built upon trust. It needs to be the foundation of your team. Having high levels of trust makes everything else in the business come together more easily, but lacking trust can really make everything fall apart. Teams need to trust their leaders, and leaders need to trust their team members–it goes both ways.
  • Amazon CEO Jeff Bezos Had His Top Execs Read These Three Books: John Fortt¬†discusses his interview with Amazon CEO Jeff Bezos. Now while I don’t read as much as I should (and I’m consciously trying to get better at it), I thought this little list of books might be great to keep my eye out for:
  • Confidence ‘boosts pupils’ academic success: I thought this article was a great find. It’s primarily around research that’s shown confidence plays a big role in students’ success, but I believe it applies outside of the realm of formal education. As a leader or mentor, I think it’s incredibly important to instill confidence. You want your team members to know you trust them with what they’re doing. They need to know they can make mistakes and learn without having to be punished for doing so. Having that confidence is going to be what makes them successful.
  • Leadership Lessons From LEGO: What do leadership and Lego have in common? A whole lot, according to John Kotter. Consider innovation (get creative with those bricks!), overcoming challenges (can’t find that piece you were looking for?), teamwork (building things with friends is way more fun), and quality (it’s as good as you make it). It was an unexpected article for me to stumble upon, but I thought the parallels were interesting!
  • The Four Most Powerful Lessons in Management: Joel Peterson has some great points on being a successful leader or manager. Among them: putting actions behind your words, bringing the right people on board (noticing a trend with having the right people yet?), and having a meaningful mission.
  • What is a Thought Leader?: I found myself asking this question at one point, which is why I wanted to share Daniel Tunkelang’s article. It seems straightforward, really. It’s important to have an area of expertise in the ideas you want to share, and it’s important that the things you’re sharing have meaning. In my case with Dev Leader, I certainly haven’t mastered leadership and programming, but I’m sharing the ideas that I hope will someday get me there.
  • 17 Things You Should Never Say to Your Boss: This was definitely a great read. At first, I started thinking “How could anyone in their right mind say these things to their boss?” But then I realized I had actually heard some of these things (or similar things), and it really got me thinking. Dave Kerpen has put together a great list, and although it’s humourous, it’s still something important to watch out for. Just in it for the money? Not your role? Some people need to get a grip or find something else to do in their career.
  • Why These Happiness “Boosters” Might Actually Make You Feel Worse: Gretchen Rubin shares some ideas on why certain things we do to make ourselves happier may actually be counter-productive. One I found interesting: what we think of as our attitude shaping our behaviour may actually be our behaviour shaping our attitude. On weekends I often hang around in a pair of shorts until I have to head out of my condo. If I got in the habit of being prepped to leave the house and be productive from the beginning of the day, would I find that I’m actually more productive? Worth trying!
  • What Makes Developers Really Great: Deane Barker shares his experience with a software developer who was giving off some bad vibes. So what makes a good developer? Is it just someone who can code? Do they need to know all the latest and best languages, dream in code, and have four computer science degrees? It certainly helps (and I don’t think many would dismiss it), but the thing that’s really important is their attitude and ability to work within a team. Check out the comments on that blog post. If you’re working on a team and you can’t fit in with the team, you’ll bring the whole team down. If you’re all soft skills and no hard skills, you can’t contribute squat. If you’re all hard skills and no soft skills, you’re going to be a roadblock for your team. You need both to be a really great developer.

Remember to check out the MoMagnets page! We’d really appreciate it. Follow Dev Leader on social media outlets to get these updates through the week.

Nick Cosentino – LinkedIn
Nick Cosentino – Twitter
Dev Leader – Facebook
Dev Leader – Google+

You can also check out Dev Leader on FlipBoard.


Python, Visual Studio, and C#… So. Sweet.


Python & C# – Background

Let’s clear the air. Using Python and C# together isn’t anything new. If you’ve used one of these languages and at least heard of the other, then you’ve probably heard of IronPython. IronPython lets you use both C# and Python together. Pretty legit. If you haven’t tried it out yet, hopefully your brain is starting to whir and fizzle thinking about the possibilities.

My development experience is primarily in C#, and before that it was VB .NET (so I’m pretty attached to the whole .NET Framework… we’re basically best friends at this point). However, pretty early in my career (my first co-op at Engenuity Corporation, really) I was introduced to Python. I had never really used a dynamic or implicitly typed language, so it was quite an adventure and learning experience.

Unfortunately, aside from my time at EngCorp, I hadn’t really had a reason to continue on with Python development. Lately, though, I’ve had a spark of curiosity. I’m comfortable with C#, sure, but is that enough? There are lots of great programming languages out there! It’s hard for me to break out of my comfort zone, though. I’m used to C# and the awesomeness of Visual Studio, so how could I ever break free from these two things?

Well… I don’t have to yet.

Python Tools for Visual Studio

This was a nice little treasure to stumble upon:

But I didn’t really know what it was all about. I had heard of IronPython, and I knew I could use Python and C# together, so what exactly is “Python Tools”?

After I watched the video that the Visual Studio team tweeted out, I was captivated. Did this mean I could revisit Python without having to leave the comfort of my favourite IDE? You bet. The first thing I did after watching the video (and yes, I somehow managed to hold back the excitement and wait until it was done) was fire up Visual Studio. I run Visual Studio 2012 (the dark theme, too), so that’s what you’ll be seeing in my screenshots. Once Visual Studio has loaded:

  • Go to the “Tools” menu at the top of the IDE.
  • Select the “Extensions and Updates…” menu item.
  • You should see the “Extensions and Updates” dialog window now.

You’re going to want to search for “Python Tools” after you’ve selected the “Online” option on the left side of the dialog. It should look something like this:

Python Tools - Visual Studio Extensions and Updates

Installing Python Tools for Visual Studio is pretty easy. Make sure you’re searching online and search for “Python Tools”.

After you’ve followed all of the installation instructions, it’s time to make sure the installation worked. Simple enough!

  • Go to the “File” menu at the top of the IDE.
  • Go to the “New” menu item.
  • Select the “Project…” menu item.
  • You should now see the “New Project” dialog.

To ensure Python is now available, try seeing if you have Python project templates available:

Verify Python in Visual Studio

To verify that Python is now available in Visual Studio, check under the installed templates. It should be under “Other Languages”.

Hopefully it’s there. If not, or if you have any other install questions, I highly recommend you refer to the official site and follow along there. That’s what got me up and running on my current machine, but if your setup is slightly different you should definitely follow their instructions. That’s it! You have Python Tools! But what else would make your C#, Python, and Visual Studio experience EVEN BETTER? The answer to that question is, of course, IronPython. Head on over to this page and get yourself set up with the latest cut of IronPython. Once that’s set up, you should have all the fancy tools you need!

Print to Console – Your First C#/Python Application

I’m sure you feel the excitement building. I’ll start by saying the code is all available online, so even though I’ll have snippets and pictures here, you can download all of the source and follow along that way if you want. Otherwise, I’ll do my best to walk you through how I set things up! This application is going to be pretty simple. It’s a tiny bit bigger than a “Hello World” application, with the difference being that you tell Python what you want to print to the console. Easy-peasy, right?

First up, let’s make a new C# console project.

  • From Visual Studio, go to the “File” menu at the top of the IDE.
  • Select the “New” menu item.
  • Select the “Project” menu item.
  • You should see the “New Project” dialog.
  • Select the “Visual C#” template on the left of the dialog.
  • Select “Console Application”.
  • In the framework dropdown at the top of the dialog, select .NET 4.5.
  • Fill in the details for where you want to save your project.
  • Press “OK”! And we’re off!

Now that you have a console application, you’re going to want to add in the dependencies we need. In the solution explorer, add the following references to your project:

IronPython Dependencies in Visual Studio

Add the IronPython and Microsoft.Scripting dependencies through the solution explorer in Visual Studio.

If you’re having trouble getting the dependencies set up, remember you can always download the source projects I’ve put together. Now that you have all the necessary dependencies, here’s the source for our little application:

using System;
using System.Collections.Generic;
using System.Text;
using System.Diagnostics;

using IronPython.Hosting;

namespace PrintToConsole
{
    internal class Program
    {
        private static void Main()
        {
            // Ask the user what they'd like Python to print for them.
            Console.WriteLine("What would you like to print from python?");
            var input = Console.ReadLine();

            // Create the IronPython engine that will run our Python code.
            var py = Python.CreateEngine();
            try
            {
                // Build a Python print() call as a C# string and execute it.
                py.Execute("print('From Python: " + input + "')");
            }
            catch (Exception ex)
            {
                Console.WriteLine("Oops! We couldn't print your message because of an exception: " + ex.Message);
            }

            // Wait for the user before closing the console window.
            Console.WriteLine("Press enter to exit...");
            Console.ReadLine();
        }
    }
}

Let’s walk through what this code is doing:

  • First we’re getting input from the user. This is some pretty basic C# stuff, but we’re simply printing a message to the console and taking in the text the user enters before they press enter.
  • Next, we create a Python engine instance. This is the class that’s going to be responsible for executing python for us!
  • The code that exists within the try block tells our engine instance to execute some python code.
    • The print() call that you see being passed to the engine uses the function syntax introduced in Python 3.0.
    • The parameter that we’re passing into the print() call is a Python string… but we’re sticking our user input inside of it as well!
    • It’s also important to note that we’re building up a C# string that contains all of the Python code that will be executed and passing that to the engine.
  • I have a catch block here to catch any unexpected problems. Can you think of any?
    • What happens if your user inputs some text with a single quote? (There’s a small sketch a little further down that sidesteps this.)
  • The last part of the application just asks the user to press enter when they are all done.

Simple! There’s your first C# + Python application! You can see the source for the whole thing over here.
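About that single quote: because we’re splicing raw user input directly into the Python source string, a quote (or really any Python syntax) in the input can break the generated code. One way to sidestep that is to hand the input to the engine through a ScriptScope variable instead of concatenating it into the code. Here’s a minimal sketch of that idea; it’s not how the downloadable sample is written, and the PrintToConsoleSafer namespace and the message variable name are made up purely for illustration:

using System;

using IronPython.Hosting;

namespace PrintToConsoleSafer
{
    internal class Program
    {
        private static void Main()
        {
            Console.WriteLine("What would you like to print from python?");
            var input = Console.ReadLine();

            var py = Python.CreateEngine();

            // Put the user's text into a scope variable instead of splicing it
            // into the Python source, so quotes in the input can't break the code.
            var scope = py.CreateScope();
            scope.SetVariable("message", "From Python: " + input);

            // The Python source never changes; it simply reads the variable.
            py.Execute("print(message)", scope);

            Console.WriteLine("Press enter to exit...");
            Console.ReadLine();
        }
    }
}

The nice part of this approach is that the Python code stays the same no matter what the user types, so nothing they enter ever gets treated as code.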

Run External Script

So this is great: you can now run some Python code from within C#. Totally awesome. But what about all those Python scripts you already have written up? Do you need to start copying and pasting them into C# code files and trying to format them nicely? Thankfully, no! Let’s start by following the exact same steps as outlined in the first example. You should be able to set up a new .NET 4.5 C# console project and add in all the same dependencies. Once you have that put together, you can use the following source code:

using System;
using System.Collections.Generic;
using System.Text;

using IronPython.Hosting;

namespace RunExternalScript
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            // Wait for the user before kicking off the script.
            Console.WriteLine("Press enter to execute the python script!");
            Console.ReadLine();

            // Create the IronPython engine that will run our script file.
            var py = Python.CreateEngine();
            try
            {
                // Execute the script that gets copied to the output directory.
                py.ExecuteFile("script.py");
            }
            catch (Exception ex)
            {
                Console.WriteLine("Oops! We couldn't execute the script because of an exception: " + ex.Message);
            }

            // Wait for the user before closing the console window.
            Console.WriteLine("Press enter to exit...");
            Console.ReadLine();
        }
    }
}

This code looks similar, right? Before I explain what it does, let’s add in the Python script that you’ll be executing from this console application.

  • Right click on your project in the solution explorer.
  • Select the “Add” menu item from the context menu.
  • Select the “New Item…” menu item.
  • You should see the “Add New Item” dialog.
  • You’ll want to add a new text file called “script.py”.

It should look a little something like this:

Add new Python script in Visual Studio

In the “Add New Item” dialog, select “Text File” and rename it to “script.py”.

The next really important step is to ensure that this script gets copied to the output directory. To do this, select your newly added script file in the solution explorer and change the “Copy to Output Directory” setting to “Copy Always”. Now when you build your project, you should see your script.py file get copied to the build directory. Woo! You can put any python code you want inside of the script file, but I started with something simple:

print('Look at this python code go!')

Okay, so back to the C# code now. This example looks much like the first example.

  • Wait for the user to press enter before executing the Python script. Just to make sure they’re ready!
  • Create our engine instance, just like in the first example.
  • In the try block, we tell the engine to execute our script file. Because we had the file copy to the output directory, we can just use a relative path to the file here.
  • Again, we’ve wrapped the whole thing inside of a try/catch to ensure any mistakes you have in your python script get caught.
    • Try putting some erroneous Python code in the script file and running it. What happens? (There’s a sketch just below for getting a friendlier error report.)
  • Finally, make sure the user is content with the output and wait for them to press Enter before exiting.

Look how easy that was! Now you can choose to execute Python code generated in C# OR execute external Python scripts!
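If you do feed the engine a broken script, you’ll notice that ex.Message on its own can be pretty terse. Here’s a minimal sketch of one way to get a friendlier, Python-style error report by using the ExceptionOperations service from Microsoft.Scripting.Hosting (which comes along with the Microsoft.Scripting reference we already added); the rest of the program is the same as the example above, and the namespace name is just for illustration:

using System;

using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

namespace RunExternalScriptWithErrors
{
    internal class Program
    {
        private static void Main()
        {
            var py = Python.CreateEngine();
            try
            {
                py.ExecuteFile("script.py");
            }
            catch (Exception ex)
            {
                // Ask the engine to format the error the way Python would,
                // including the offending line of the script.
                var pythonError = py.GetService<ExceptionOperations>().FormatException(ex);
                Console.WriteLine("Oops! The script failed:");
                Console.WriteLine(pythonError);
            }

            Console.WriteLine("Press enter to exit...");
            Console.ReadLine();
        }
    }
}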

Summary

It’s awesome to see that you’re interested in marrying these two languages together inside of a powerful IDE. We’re only scratching the surface here, and admittedly I’m still quite new to integrating Python and C#. I need to re-familiarize myself with Python, but I can already see there’s a ton of potential for writing some really cool applications this way.

In the near future, I’ll be discussing how the dynamic keyword in C# can actually allow you to create classes in Python and use them right inside of C#… Dynamically!
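As a little teaser of what that looks like (a minimal sketch of my own, not taken from that upcoming post; the Calculator class and its add method are made up purely for illustration):

using System;

using IronPython.Hosting;

namespace DynamicClassTeaser
{
    internal class Program
    {
        private static void Main()
        {
            var py = Python.CreateEngine();
            var scope = py.CreateScope();

            // Define a tiny Python class inside the scope...
            py.Execute(
                "class Calculator(object):\n" +
                "    def add(self, left, right):\n" +
                "        return left + right\n",
                scope);

            // ...then pull it out and use it from C# through dynamic.
            dynamic calculatorClass = scope.GetVariable("Calculator");
            dynamic calculator = calculatorClass();
            Console.WriteLine(calculator.add(2, 3)); // prints 5

            Console.ReadLine();
        }
    }
}

There’s no generated interop layer and no reflection plumbing to write yourself; the dynamic keyword lets the DLR sort out the member lookups at runtime.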

Both of these pages were helpful in getting me up and running with C# and Python together:

Source code for these projects is available at the following locations:

