Tag: CSharp

What Makes Good Code? – Should Every Class Have An Interface? Pt 2

Should Every Class Have an Interface?

This is part two in the sub-series of “Should Every Class Have an Interface?“, and part of the bigger “What Makes Good Code?” series.

Other People’s Code

So in the last post, we made sure we could get an interface for every class we made. Okay, well that’s all fine and dandy (I say half sarcastically). But you and I are smart programmers, so we like to re-use other people’s code in our own projects. But wait just a second! It looks like Joe Shmoe didn’t use interfaces in the API that he created! We refuse to pollute our beautiful interface-rich code with his! What can we do about it?

Wrap it.

That’s right! If we add a little bit of code, we can get all the same benefits as the example we walked through originally. It’s not going to completely fix “the problem”, but I’ll touch on that afterward. So, we all remember our good friend encapsulation, right?

Let’s pretend that Joe Shmoe wrote some cool code that does string lookups from an Excel file. We want to use it in our code, but Joe didn’t use the IStringLookup interface (because… it’s in OUR code, not his) and he didn’t even use ANY interfaces. The constructor for his class looks like:


public ExcelParser(string pathToExcelFile);

On this class, there are two methods. One lets us find the column index for a certain heading, and the other lets us get a cell’s value given a column and row index. The method signatures look like:


public int GetColumnIndex(string columnName);

public string GetCellValue(int columnIndex, int rowIndex);

We can wrap that class by creating a wrapper class that meets our interface, like so:


public sealed class ExcelStringLookup : IStringLookup
{
  // ugh... we have to reference the class directly!
  private readonly ExcelParser _excelParser;

  // ugh... we have to reference the class directly!
  public ExcelStringLookup(ExcelParser excelParser)
  {
    _excelParser = excelParser;
  }

  public string GetString(string name)
  {
    var columnIndex = _excelParser.GetColumnIndex(name);
    // assumes all of our strings will be under a column header
    var cellValue = _excelParser.GetCellValue(columnIndex, 1);
    return cellValue;
  }
}

And now this will plug right into the rest of our code that we defined originally.
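
For example, the only spot that ever needs to know about Joe’s ExcelParser is wherever we construct the wrapper. Here’s a minimal sketch, reusing the CreateStringLookup() factory method from the original example (the file path is just a placeholder):

private static IStringLookup CreateStringLookup()
{
  // the ONLY place that references ExcelParser directly
  return new ExcelStringLookup(new ExcelParser(@"path\to\strings.xlsx"));
}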

This doesn’t totally eliminate “the problem” though (the problem being that some class doesn’t have an interface, which is what this post is trying to answer). There’s still a class we’re making use of that doesn’t have an interface, but we’ve reduced the exposure of that problem to JUST this class and the spot that constructs it. Are we okay with that?

Thoughts So Far…

Let’s do a little recap on what we’ve seen so far:

  • Having interfaces for our classes is a nice way to introduce a layer of abstraction
  • Interfaces are just *one* tool to get layers of abstraction introduced
  • If you wanted to have interfaces for all of the classes in your code and some third party didn’t use interfaces, that code is likely not as common in your code base (especially if you wrap it like I mentioned above). This may not always be true in your code base, but it’s likely the case.
  • The amount of work to wrap things can vary greatly. Some things are straightforward to wrap, but you need to add many methods/properties. Sometimes it’s the inverse and you only have a few things to wrap but they’re not straightforward.
  • The number of classes you’d need to wrap to get to this state can vary greatly… Since even built-in System classes aren’t all backed with interfaces!
  • There’s certainly a trade-off between the original work + maintenance to wrap a class in an interface versus the benefits it provides.

Is that last point blasphemy?! So there may actually be times we DON’T want to have an interface for a class?

Watch this space for part 3 where we start to look at a counter-example!

 


What Makes Good Code? – Should Every Class Have An Interface? Pt 1

What Makes Good Code? - Should Every Class Have An Interface?

What’s An Interface?

I mentioned in the first post of this series that I’ll likely be referring to C# in most of these posts. I think the concept of an interface in C# extends to other languages–sometimes by a different name–so the discussion here may still be applicable. Here are some examples in C++, Java, and Python to get you going for comparisons.

From MSDN:

An interface contains definitions for a group of related functionalities that a class or a struct can implement.
By using interfaces, you can, for example, include behavior from multiple sources in a class. That capability is important in C# because the language doesn’t support multiple inheritance of classes. In addition, you must use an interface if you want to simulate inheritance for structs, because they can’t actually inherit from another struct or class.

It’s also important to note that an interface decouples the definition of something from its implementation. Decoupled code is, in general, something that programmers are always after. If we refer back to the points I defined for what makes good code (again, in my opinion), we can see how interfaces should help with that.

  • Extensibility: Referring to interfaces in code instead of concrete classes allows a developer to swap out the implementation more easily (e.g., extend support for different data providers in your data layer). They provide a specification to be met should a developer want to extend the code base with new concrete implementations.
  • Maintainability: Interfaces make refactoring an easier job (when the interface signature doesn’t have to change). A developer can get the flexibility of modifying the implementation that already exists or creating a new one provided that it meets the interface.
  • Testability: Referring to interfaces in code instead of concrete classes allows mocking frameworks to supply mocked objects so that true unit tests are easier to write.
  • Readability: I’m neutral on this. I don’t think interfaces are overly helpful for making code more readable, but I don’t think they inherently make code harder to read.

I’m only trying to focus on some of the pros here, and we’ll use this sub-series to explore if these hold true across the board. So… should every class have a backing interface?

An Example

Let’s walk through a little example. In this example, we’ll look at an object that “does stuff”, but it requires something that can do a string lookup to “do stuff” with. We’ll look at how using an interface can make this type of code extensible!

First, here is our interface that we’ll use for looking up strings:

public interface IStringLookup
{
    string GetString(string name);
}

And here is our first implementation of something that can look up strings for us. It’ll just look up an XML node and pull a value from it. (How it actually does this stuff isn’t really important for the example, which is why I’m glossing over it):

public sealed class XmlStringLookup : IStringLookup
{
    private readonly XmlDocument _xmlDocument;

    public XmlStringLookup(XmlDocument xmlDocument)
    {
        _xmlDocument = xmlDocument;
    }

    public string GetString(string name)
    {
        return _xmlDocument
            .GetElementsByTagName(name)
            .Cast<XmlElement>()
            .First()
            .InnerText; // note: Value is null for element nodes, so InnerText gets the contents
    }
}

This will be used to plug into the rest of the code:

private static int Main(string[] args)
{
    var obj = CreateObj();
    var stringLookup = CreateStringLookup();
    
    obj.DoStuff(stringLookup);
 
    return 0;
}
 
private static IMyObject CreateObj()
{
    return new MyObject();
}
 
private static IStringLookup CreateStringLookup()
{
    return new XmlStringLookup(new XmlDocument());
}
 
public interface IMyObject
{
    void DoStuff(IStringLookup stringLookup);
}
 
public class MyObject : IMyObject
{
    public void DoStuff(IStringLookup stringLookup)
    {
        var theFancyString = stringLookup.GetString("FancyString");
        
        // TODO: do stuff with this string
    }
}

In the code snippet above, you’ll see our Main() method creating an instance of “MyObject” which is the thing that’s going to “DoStuff” with our XML string lookup. The important thing to note is that the DoStuff method takes in the interface IStringLookup that our XML class implements.
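
This is also exactly what makes the testability point from earlier work: because DoStuff() only depends on IStringLookup, a mocking framework can stand in for any real lookup. Here’s a quick sketch of what that could look like, assuming the Moq library:

// using Moq;
var stringLookup = new Mock<IStringLookup>();
stringLookup
    .Setup(x => x.GetString("FancyString"))
    .Returns("Hello, world!");

IMyObject obj = new MyObject();
obj.DoStuff(stringLookup.Object); // MyObject never knows it isn't a "real" lookup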

Now, XML string lookups are great, but let’s show why interfaces make this code extensible. Let’s swap out an XML lookup for an overly simplified CSV string lookup! Here’s the implementation:

public sealed class CsvStringLookup : IStringLookup
{
    private readonly StreamReader _reader;
 
    public CsvStringLookup(StreamReader reader)
    {
        _reader = reader;
    }
 
    public string GetString(string name)
    {
        string line;
        while ((line = _reader.ReadLine()) != null)
        {
            var split = line.Split(',');
            if (split[0] != name)
            {
                continue;
            }
 
            return split[1];
        }
 
        throw new InvalidOperationException("Not found.");
    }
}

Now to leverage this class, we only need to modify ONE line of code from the original posting! Just modify CreateStringLookup() to be:

private static IStringLookup CreateStringLookup()
{
    return new CsvStringLookup(new StreamReader(File.OpenRead(@"path\to\some\file.txt")));
}

And voila! We’ve been able to extend our code to use a COMPLETELY different implementation of a string lookup with barely any code change. You could make the argument that if you needed to modify the implementation for a buggy class, then as long as you were adhering to the interface, you wouldn’t need to modify much surrounding code (just like this example). This would be a point towards improved maintainability in code.

“But wait!” you shout, “I could have done the EXACT same thing with an abstract class instead of the IStringLookup interface you big dummy! Interfaces are garbage!”

And you wouldn’t be wrong about the abstract class part! It’s totally true that IStringLookup could instead have been an abstract class like StringLookupBase (or something…) and the benefits would still apply! That’s a really interesting point, so let’s keep that in mind as we continue on throughout this whole series. The little lesson here? It’s not the interface that gives us this bonus, it’s the API boundary and level of abstraction we introduced (something that does string lookups). Both an interface and an abstract class happen to help us a lot here.
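
Just to make that concrete, here’s roughly what the abstract class flavour could look like (StringLookupBase is a name I’m making up purely for illustration; it isn’t part of the example code):

public abstract class StringLookupBase
{
    public abstract string GetString(string name);
}

// the implementation only changes its declaration and adds "override"
public sealed class XmlStringLookup : StringLookupBase
{
    private readonly XmlDocument _xmlDocument;

    public XmlStringLookup(XmlDocument xmlDocument)
    {
        _xmlDocument = xmlDocument;
    }

    public override string GetString(string name)
    {
        return _xmlDocument
            .GetElementsByTagName(name)
            .Cast<XmlElement>()
            .First()
            .InnerText;
    }
}

Everything else stays the same; CreateStringLookup() would simply return a StringLookupBase instead of an IStringLookup.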

Continue to Part 2


Multiple C# Projects In Your Unity 3D Solution

Unity

Problem: Visual Studio and Unity Aren’t Playing Nice!

Disclaimer: I develop on Windows, so I have no idea if any of this even applies to other operating systems. I assume not. Sorry.

I just started poking around in Unity 4.6 and I’ve been having a blast. I’ve made it to the point where I want to actually start hammering out some code, but I came across a bit of a problem: I want to start leveraging other projects I’ve written in my Unity solution while I’m in Visual Studio, and things are blowing up. So, what gives?

Okay, so let me start by explaining why I want to do this. I understand that if I’m making a simple game, I should have no problem breaking out my Unity scripts into subfolders and organizing them to be nice and pretty. The problem I’m encountering is that I have existing projects under source control and I don’t want to copy and paste all of the code as scripts into my Unity folder. I also want to be able to create re-usable code for my future games, so I’d like to start breaking things out into libraries as I see fit.

So, if you’ve been playing around in Unity for a bit, you might say “Oh, well you’re a dummy! Unity can totally leverage your C# DLLs once you drop them into your asset folder”! And you’d be 100% correct. But that’s not the workflow I want.

The underlying problem here is this: Unity will re-write your solution and project file when you flip between Unity and Visual Studio. But I’m sure they have it that way for a reason.

The Goal: Visual Studio and Unity Should… Play Nice!

My ideal state would be something like this:

  • Work in Visual Studio as much as I’d like, add new projects to my solution, and reference them accordingly
  • Flip back and forth from Unity and Visual Studio without having to reset things to compile/run again
  • Build from Visual Studio and have things end up in the right spot… NOT copy DLLs around by hand
  • Not copy+paste my entire project(s) already under source control elsewhere

Is this something that can be achieved though? I was pretty determined that I should be able to do *something* to have this working. Could I get it perfect? I wasn’t sure… But I knew I could make it better.

The Solution: Give and Take with Unity

My *almost* perfect solution, which I’ll walk you through, is this: leverage Visual Studio Tools for Unity, modify the Unity solution as you see fit, and use directory junctions (symlinks) to the build output directories of other projects.

  1. Let’s get Visual Studio Tools for Unity installed. Visit that link and download the version that you need for the version of Visual Studio that you use. After installing, I opened up my project within Unity and I had to import the Visual Studio Tools package.

    Import Package
    After selecting this menu item, I was presented with a dialog for picking the items to import. I left it as is.

    Import Package2
    After importing these items, I could see that Unity had successfully added these entries under my Assets folder. Okay, now we’re getting somewhere. Next up, I wanted to configure Unity to not modify my solution every time I go back and forth from Unity to Visual Studio. This is the part that makes or breaks whether the projects I’ve added to my solution stick around. For me, it’s critical to have the code I’m working on immediately accessible so that I can jump back and forth between projects. Lucky for us, this part is pretty easy. Go to the menu to access your new Visual Studio Tools menu item:

    VS Tools Configure
    Selecting “Configuration” opens up a really simple dialog. Let’s make sure “Generate solution file” is unchecked! It’s that easy.

    VS Tools Configure2
    Once we have all of this setup, we should be able to go into Visual Studio and add other projects to our solution.

  2. The one thing that I *could not* get this solution to do is have Unity leave my main game project alone in Visual Studio. As a result, the rest of this walkthrough has us playing by Unity’s rules. Unity is good at magically referencing all of the managed DLLs that you include within your assets folder. If you drop DLLs somewhere within “Assets” and switch to Visual Studio, Unity will likely have modified your main project to reference those DLLs.

    My next step was creating a spot to drop the build outputs of the extra projects I wanted to reference. In my Visual Studio solution, I have my original game project and some newly added projects I want to build from source. In Unity, I wanted these to end up in “Assets/Dependencies/bin”. No problem. Let’s make that folder structure (or your equivalent if you don’t like my naming):

    Bin Dependencies
    The next part is probably the “trickiest” part because it’s… well… unusual. You could technically stop here and manually copy DLLs back and forth, but I’m not about that life. I want things to happen automatically. For this, we’re going to use junction points. Browse to your newly created folder in an administrator command prompt. I say administrator because only certain users have permissions to create junction points. Your non-admin user might, but this is my “safe” way of instructing you. On the command prompt, we’re going to use “mklink” to create a junction. The command is “mklink /D /J <NAME_OF_YOUR_PROJECT> <RELATIVE_PATH_TO_YOUR_PROJECT>”. For example, if you had a C# project you wanted to reference that was “MyCoolLibrary.csproj” and was located in the directory above your Unity project, you might use the command “mklink /D /J MyCoolLibrary “..\..\..\..\MyCoolLibrary\bin\debug””. Note that I used two dots to go back up a directory several times (since we’re inside of Assets\Dependencies\bin and want to get outside of our Unity project). You should get a success message when your junction is created.

    Repeat this step for as many extra projects as you want to include. You can always come back and add more projects this way too, or remove the junctions if you don’t want to include a project anymore.

    At this point, you’re technically done. If you build from Visual Studio, you should have your other projects’ DLLs end up in your Unity folder, and your main game project will be updated by Unity to reference these now!

  3. But… You’re not done if you use source control for your Unity project and have separate source control on your other projects. The scary thing here is that usually we don’t want our build outputs to be stored in source control… But if we do nothing else, your source control system will likely want to include the newly created “Assets\Dependencies\bin” folder and any of the contents you’re building into there. I just modified my gitignore file (I’m sure there’s an equivalent for SVN or other source control) to exclude the contents of “Assets\Dependencies\bin”.

    The reason I didn’t exclude dependencies altogether is that I can add other folders and DLL references here that I don’t want to build (like… the normal way). This gives me the flexibility of building the projects I want to control and still be able to just reference other pre-built DLLs!

Summary

In three easy steps, you should be able to use Unity, Visual Studio, and multiple projects in one solution in a what-feels-like-normal way. Because there’s still some dynamic stuff going on with Unity updating your main project, you might find the odd time you need to build twice to fix up compilation problems. I’ve seen this happen maybe once or twice so far, but otherwise it feels like normal. It’s also important to note that you can’t escape the Unity project updating… don’t add references to your main project manually. That’s what the “Assets\Dependencies” folder we made is for.

Here are a few shots of what my setup looks like (proof that it works):

Solution Explorer

Unity Dependencies

And of course… it’s not the perfect solution. There are still these things:

  • Unity gets mad at you for using junctions within your project. It actually tells you not to do this because you can mess things up. It’s working awesome for me right now though… So I’m going to just ignore this warning.
  • Remember step 3 where we ignored the Assets\Dependencies\bin location in git? This actually ignores the junction points you created too. As a result, anyone else who clones your code will need to create the junctions too. I’m working solo, so I’m not too worried about this step… But it’s definitely something that should be fixed up (again, I’m sure it’s doable, but I’m in no rush).

Hope that helps you feel more at home in Unity and Visual Studio! It certainly made it nicer for me.

 


C# Dev Connect 1 – Intro To Threading

C# Dev Connect

C# Dev Connect 1: Intro to Threading

In my last post, I mentioned we’d be hosting a C# Dev Connect meetup at our Magnet Forensics HQ in Waterloo. I figured I’d post to talk about how the event went so that if you couldn’t make it, you’ll have an idea for next time (and if you did make it, maybe you can comment on how you thought the event went). Our first Dev Connect was led by a colleague of mine, Chris Sippel, who wanted to give a talk on threading basics in C#. Threading can quickly become a really complex topic, so Chris wanted to keep it high level and talk about the different approaches you can use to start threading in your C# applications.

Dev Connect: Before the Talk

Before Chris gave his talk on threading, we had our attendees slowly streaming in. Sure, we had a lot of our Magnet development team show up to offer support for Chris, but I was still pleasantly surprised to see others from the area that I’ve never met before showing an interest in C#. Our colleague Amaris had done an awesome job coordinating the first event for us, and even got us stocked up with pizza and pop. For the first hour or so, we had our guests fed and introduced to each other. A great start to the evening!

Dev Connect: The Talk

After everyone was settled in, we had our attendees pull up some chairs at tables or post up in our comfy soft seating to listen to what Chris had to say. Chris walked us through various slides and coding examples and was able to show us working examples of code to back up what he was saying. I was really proud of Chris for taking the leap to be our first speaker, and I think he did a great job. His slides and sample code are available at the Dev Connect GitHub if you’re interested in taking a peek.

Dev Connect: The Aftermath

Once Chris was done talking, we had a few attendees leave to carry on with the rest of their evening, but it was great to see the majority hang back for further discussion. Some people were trying out the threading exercises that Chris had put together, others had tried it beforehand and had questions on it, and a bunch of people were sticking around just to talk .NET. When I envisioned what Dev Connect might have looked like, this was it. It felt great to help facilitate a positive discussion around using the .NET framework and C#, especially because people were clearly benefiting from it.

If you came for our first meetup, then thanks! If you couldn’t make it or you’re interested in the next one, be sure to check out our meetup page. We’ll see you next time!


First C# Dev Connect is Coming Up

C# Dev Connect

 

C# Dev Connect Meetup!

About a year ago I had thrown around the idea of creating a C#-specific group that would meet at a regular interval with some of my colleagues. I saw that there was interest, but between all of the things we had going on in our personal lives and work lives, we just hadn’t been able to co-ordinate something. I’m excited to announce that with some more solid planning over the last couple of months, C# Dev Connect will be able to host their first meetup! The company I work for, Magnet Forensics, has graciously offered our new office to host the event which will help tremendously. We’ll have a group of people from Magnet Forensics there to help out, but the only thing “Magnet” about the event is really just that it’s hosted at the office.

What’s on the Dev Connect Agenda?

This upcoming Tuesday (January 20th, 2015), C# Dev Connect will be hosting their first monthly meetup on the topic of Threading in C#. Directly from the event’s Meetup page:

An overview of the basics of threading in the C# language. Threading is a very complex idea with many different ways of handling the same problem; however, you have to learn to crawl before you can walk. We’ll be discussing the basics of threads in .NET 2.0 and .NET 4.0. In .NET 2.0 we’ll be discussing the Thread object, various ways to start/stop threads, and potential stumbling blocks when it comes to threading in C#. In .NET 4.0 we’ll be talking about the async and await operators and how to use them.

A colleague of mine, Chris Sippel, will be giving the talk. People are encouraged to bring their laptops so they can try out some C# exercises related to the discussion. This initial talk may be more geared at an introductory-level, but our goal is to be able to cover topics for all levels of knowledge in C# (From never used it before, to expert level). We’ll even provide some food! All you have to do is show up and be ready to learn some C#, or share your C# knowledge.

If you’re looking for our venue, we had this little map put together:

C# Dev Connect Venue Map

Go into the back of 156 Columbia Street West in Waterloo (at the corner of Phillip and Columbia). If you’re familiar with the area, this used to be called RIM/Blackberry 5.

 

More Dev Connect Info

Here are a few additional links to get you to more C# Dev Connect information online:

We’re excited for you to join us!


MyoSharp – Update On The Horizon

MyoSharp

If you haven’t checked it out already, my friend Tayfun and I created an open source C# wrapper for Thalmic’s Myo. It’s hosted on GitHub over here, so you can browse and pull down code whenever you want. We’ve had some great feedback from users of our API, so we continue to welcome it (both positive and negative!) in order to improve the usability.

Thalmic has plans to release a firmware update to allow more data to be accessible through their API. Right now, MyoSharp is a bit out of date, but once this big firmware update lands we’ll take some more time to get it up to date again. Remember, it’s open source so you can feel free to contribute!

Troubleshooting

The most common question I receive is “I keep getting an exception about not being able to connect when I run the sample code”. I’ve tried to help a few people through this so I just figured I’d mention it right here for clarity: It’s more than likely that your MyoConnect version and the version we packaged with MyoSharp have become out of date. You probably keep your Myo SDK more up to date than MyoSharp is.

Don’t worry! So far we’ve had reasonably good luck just replacing the Myo DLLs in the x86 and x64 folder of the solution. Provided Thalmic didn’t break any API compatibility, things should actually just work out of the box. If they *DID* break backwards compatibility, it’s likely not that big of a deal either. You can update the PInvokes used to match the signatures they expect, and again, you should be up and running pretty quickly.

With that said, hold tight! We’ll get something updated soon. If you can’t wait, then that’s my suggestion for how to get up and running. Please don’t hesitate to contact Tayfun or myself for troubleshooting. Just post in the comments here and we can try to help out!


ProjectXyz: Enforcing Interfaces (Part 2)

Enforcing Interfaces

This is my second installment of the series related to my small side project that I started. I mentioned in the first post that one of the things I wanted to try out with this project is coding by interfaces. There’s an article over at CodeProject that I once read (I’m struggling to dig it up now, arrrrrghh) that really gave me a different perspective about using interfaces when I program. Ever since then I’ve been a changed man. Seriously.

The main message behind the article was along the lines of: have your classes implement your interface, and to be certain nobody is going to come by and muck around with your class’s API, make sure they can’t directly make an instance of the class. One of the easiest ways to do this (and bear with me here, I’m not claiming this is right or wrong) is to have a hidden (private or protected) constructor on your class and static methods that let you create new instances of your class. However, the trick here is that your static method will return the interface your class implements.

An example of this might look like the following:


public interface IMyInterface
{
    void Xyz();
}

public sealed class MyImplementation : IMyInterface
{
    // we hid the constructor from the outside!
    private MyImplementation()
    {
    }

    public static IMyInterface Create()
    {
        return new MyImplementation();
    }

    public void Xyz()
    {
        // do some awesome things here
    }
}

Interesting Benefits

I was pretty intrigued by this article on enforcing interfaces for a few reasons and if you can stick around long enough to read this whole post, I’ll hit the cons/considerations I’ve encountered from actually implementing things this way. These are obviously my opinion, and you can feel free to agree or disagree with me as much as you like.

  • (In theory) it keeps people from coming along and tacking on random methods to my classes. If I have an object hierarchy that I’m creating, having different child classes magically have random public APIs changing independently seems kind of scary. People have a harder time finding ways to abuse this because they aren’t concerned with the concrete implementation, just the interface.
  • Along with the first point, enforcing interfaces makes people think about what they’re doing when they need to change the public API. Now you need to go change the interface. Now you might be affecting X number of implementations. Are you sure?
  • Sets people up nicely to play with IoC and dependency injection. You’re already always working with interfaces because of this, so rolling out something like Moq or Autofac should be easier for you (there’s a small sketch of this right after the list).
  • The static creation methods can be leveraged to do parameter checks BEFORE entering a constructor. Creating IDisposable implementations can be fun when your constructor fails and your disposable clean up code was expecting things to be initialized (not a terribly strong argument, but I’ve had cases where this makes life easier for me when working with streams).
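
Since the IoC point is easy to show, here’s a rough sketch of registering the earlier example with Autofac. The container calls are standard Autofac API; wiring it through the static Create() method is just my illustration of working around the hidden constructor:

using Autofac;

var builder = new ContainerBuilder();

// the constructor is hidden, so we register via the static factory method
builder.Register(c => MyImplementation.Create()).As<IMyInterface>();

using (var container = builder.Build())
{
    // consumers only ever see the interface
    var instance = container.Resolve<IMyInterface>();
    instance.Xyz();
}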

Enforcing Interfaces in ProjectXyz

I’ve only implemented a small portion of the back-end of ProjectXyz (from what I imagine the scope to be) but it’s enough where I have a couple of layers, some different class/interface hierarchies that interact with each other, and some tests to exercise the API. The following should help explain the current major hierarchies a bit better:

  • Stats are simple structures representing an ID and a value
  • Enchantments are simple structures representing some information about modifying particular stats (slightly more complex than stats)
  • Items are more complex structures that can contain enchantments
  •  Actors are complex structures that:
    • Have collections of stats
    • Have collections of enchantments
    • Have collections of items

Okay, so that’s the high level. There’s obviously a bit more going on with the multi-layered architecture I’m trying out here too (since the hierarchies are repeated in a similar fashion in both layers). However, this is a small but reasonable amount of code to be trying this pattern on.

I have a good handful of classes and associated interfaces that back them. I’ve designed my classes to take in references to interfaces (which, are of course backed by my other classes) and my classes are largely decoupled from implementations of other classes.

Now that I’ve had some time to play with this pattern, what are my initial thoughts? Well, it’s not pure sunshine and rainbows (which I expected), but there are definitely some cool pros I hadn’t considered and definitely some negative side effects that I hadn’t considered either. Stay tuned!

(The previous post in this series is here).


Lambdas: An Example in Refactoring Code

Lambdas: An Example in Refactoring Code

Background: Lambdas and Why This Example is Important

Based on your experience in C# or other programming languages, you may or may not be familiar with what a lambda is. If the word “Lambda” is new and scary to you, don’t worry. Hopefully after reading this you’ll have a better idea of how you can use them. My definition of a lambda expression is a function that you can define in local scope to pass as an argument provided it meets the delegate signature. It’s probably pretty obvious to you that you can pass object references and value types into all kinds of functions… But what about passing in a whole function as an argument? And what if you just want to declare a simple anonymous method right when you want to provide it to a function? Lambdas.
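
If the syntax is new to you, here’s about the smallest example I can come up with: one lambda stored in a delegate-typed variable, and one declared inline right where a function expects a delegate:

// a lambda assigned to a delegate type...
Func<int, bool> isEven = x => x % 2 == 0;
Console.WriteLine(isEven(4)); // True

// ...or declared inline, right where the function wants it
var evens = new[] { 1, 2, 3, 4, 5 }.Where(x => x % 2 == 0);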

So now you at least have a basic idea of what a Lambda is. What’s this article all about? I wanted to discuss a real-world coding experience that helped demonstrate the value of lambdas to me. In my honest opinion, I think having real world programming topics to learn from is more beneficial than many of the “ideal” scenario examples/tutorials you end up reading on the Internet. We can argue and debate that certain things are better or worse in an ideal sense, but when you have a real practical example, it really helps to drive the point home.

So for me, I love working with events. I’m very comfortable with the concept of delegation in C#. I can have one object that may notify anyone that’s interested that something is happening, and the other objects that do care are able to handle the event. Thus, actions can get delegated to those objects that care to be notified. One of my weaknesses at this point in my development experience is leveraging the concept of delegation outside of the realm of events. Delegation is powerful, but it’s certainly not limited to hooking onto events with event handlers.

The particular example I want to illustrate is a parallel of a real coding scenario. I was refactoring some code that was leveraging close to zero OOP practices. I wanted to create a nice extensible framework and class hierarchy to replace it. Once I was done, a few colleagues of mine at Magnet Forensics picked up on a bit of a code smell. We all agreed the new framework and class hierarchy was better, but there seemed to be a lot of boilerplate code going on. We got into the discussion of how lambdas could reduce a lot of the light-weight classes I had introduced. After taking their thoughts and refactoring my changes just a little bit more, the benefits of the lambdas were obvious to me.

So obvious, I had to write about it to share with all of you! Feel free to skip ahead to the downloads section to get the code and follow along with it. There are plenty of options for downloading.

The Scenario

I mentioned that this was a real world scenario. I’ve contrived a parallel example that hopefully demonstrates some of the real world issues while illustrating how lambdas are useful. Let’s imagine we have some big chunk of logic that does data processing. In my real-world scenario, this may have existed as one monolithic function. I would have one big function that, based on all the parameters I provide, can figure out how to process the data I feed it.

Problems:

  • Hard to test (You need to test the whole function even if you’re really just wanting to target a small part of it)
  • Error prone (Any small change to one part can potentially break an entire other part of the function as it grows in complexity)
  • Not extensible (As soon as you need to deviate a little bit from the structure that’s existed, suddenly things get really complicated)

By switching to more of an OOP approach, I can start to address all of the above problems. So in this example, I’ll illustrate what my initial refactoring would have looked like by introducing classes. Afterward, I’ll show what my second refactor may have looked like after taking lambdas into account. In order to stay true to some of the real world problems you might encounter when performing a big refactor like this, I’ve opted to include a fictitious dependency. I refer to this as the “mandatory argument” or “important reference”. You’ll notice in the code that I don’t really use it to do any work, but it demonstrates having to pass down some other critical information to my classes that the original function may have had easy access to.

Pre-Refactor: No Lambdas Here!

Let’s start with our new OOP layout. I want to have a factory that can create data processor instances for me. So let’s define what those look like.

First, we have the interface for our data processors:

using System;
using System.Collections.Generic;
using System.Text;

namespace LambdaRefactor.Processing
{
  public interface IProcessor
  {
    bool TryProcess(object input);
  }
}

And then a simple interface for a factory that can create the data processor instances for us:

using System;
using System.Collections.Generic;
using System.Text;

namespace LambdaRefactor.Processing
{
  public interface IProcessorFactory
  {
    IProcessor Create(ProcessorType type, object mandatoryArgument, object value);
  }
}

As you may have noticed, the factory interface I’ve provided above takes a ProcessorType enumeration. You may or may not agree that using an enumeration as an argument for the factory is good practice, but I’m using it to make my example simple. Here’s what our enumeration will look like:

using System;
using System.Collections.Generic;
using System.Text;

namespace LambdaRefactor.Processing
{
  public enum ProcessorType
  {
    GreaterThan,
    LessThan,
    NumericEqual,
    StringEqual,
    StringNotEqual,
    /* we could add countless more types of processors here. realistically,
     * an enum may not be the best option to accomplish this, but for
     * demonstration purposes it'll make things much easier.
     */
  }
}

And now we have a definition for all of the basic building blocks defined. These will also be used later when we refactor, so I wanted to get them out of the way right in the beginning.

Right. So, let’s create an extensible IProcessor implementation. We can address some of our basic requirements (like our artificial dependency) and create something that can easily be built on top of. All of our child classes will just have to handle validating their constructor input and overriding a single method. Easy!

using System;
using System.Collections.Generic;
using System.Text;

namespace LambdaRefactor.Processing.PreRefactor
{
  public abstract class Processor : IProcessor
  {
    private readonly object _importantReference;

    public Processor(object mandatoryArgument)
    {
      if (mandatoryArgument == null)
      {
        throw new ArgumentNullException("mandatoryArgument");
      }

      _importantReference = mandatoryArgument;
    }

    public bool TryProcess(object input)
    {
      if (input == null)
      {
        return false;
      }

      return Process(_importantReference, input);
    }

    protected abstract bool Process(object importantReference, object input);
  }
}

And now let’s provide the factory that’s going to be making all of these instances for us. Please note that the factory is left incomplete on purpose. I’ll only be providing two actual processor implementations and I’ll leave it up to you to try and fill out the rest!

using System;
using System.Collections.Generic;
using System.Text;

using LambdaRefactor.Processing.PreRefactor.Numeric;
using LambdaRefactor.Processing.PreRefactor.String;

namespace LambdaRefactor.Processing.PreRefactor
{
  public class ProcessorFactory : IProcessorFactory
  {
    public IProcessor Create(ProcessorType type, object mandatoryArgument, object value)
    {
      switch (type)
      {
        case ProcessorType.GreaterThan:
          return new GreaterProcessor(mandatoryArgument, value);
        case ProcessorType.StringEqual:
          return new StringEqualsProcessor(mandatoryArgument, value);
        /*
         * we still have to go implement all the other classes!
         */
        default:
          throw new NotImplementedException("The processor type '" + type + "' has not been implemented in this factory.");
      }
    }
  }
}

And now that we have a factory that can easily create our processors for us, let’s actually define some of our processor implementations.

We’ll start off with a simple processor for checking if some input is greater than a defined value. It should really only work with numeric values, but one of the challenges we need to work with is that our data is only provided to us as an object. As a result, we’ll have to do some type checking on our own.

using System;
using System.Collections.Generic;
using System.Text;
using System.Globalization;

namespace LambdaRefactor.Processing.PreRefactor.Numeric
{
  public class GreaterProcessor : Processor
  {
    private readonly decimal _value;

    public GreaterProcessor(object mandatoryArgument, object value)
      : base(mandatoryArgument)
    {
      if (value == null)
      {
        throw new ArgumentNullException("value");
      }

      _value = Convert.ToDecimal(value, CultureInfo.InvariantCulture); // will throw exception on mismatch
    }

    protected override bool Process(object importantReference, object input)
    {
      decimal numericInput;
      try
      {
        numericInput = Convert.ToDecimal(input, CultureInfo.InvariantCulture);
      }
      catch (Exception)
      {
        return false;
      }

      return numericInput > _value;
    }
  }
}

And to put a spin on things, let’s implement a processor that operates on string values only. We’ll implement the processor that checks if strings are equal. Like the GreaterProcessor, we’re forced to get object references passed in. We’ll need to convert these to strings to work with them.

using System;
using System.Collections.Generic;
using System.Text;

namespace LambdaRefactor.Processing.PreRefactor.String
{
  public class StringEqualsProcessor : Processor
  {
    private readonly string _value;

    public StringEqualsProcessor(object mandatoryArgument, object value)
      : base(mandatoryArgument)
    {
      if (value == null)
      {
        throw new ArgumentNullException("value");
      }

      _value = (string)value; // will throw exception on mismatch
    }

    protected override bool Process(object importantReference, object input)
    {
      return Convert.ToString(input, System.Globalization.CultureInfo.InvariantCulture).Equals(_value);
    }
  }
}

Where can we go from here?

  • We can make simple inverse processors by overriding others and inverting the return value on the Process() function. Want a StringDoesNotEqual processor? It’s just as easy as inheriting from the StringEqualsProcessor and then modifying the return of Process() (there’s a quick sketch of this right after the list). Then we add this to our factory.
  • Adding other various types of processors is easy. We just have to extend our base class and add a couple of lines to our factory.
  • This code is much easier to test than one monolithic function that does all types of processing. We can now put a nice testing framework around this, and test each method on each class individually.
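
To show that first bullet in action, here’s roughly what a StringDoesNotEqual processor could look like (the class name is mine; this one is deliberately left as an exercise in the sample code):

public class StringNotEqualsProcessor : StringEqualsProcessor
{
  public StringNotEqualsProcessor(object mandatoryArgument, object value)
    : base(mandatoryArgument, value)
  {
  }

  protected override bool Process(object importantReference, object input)
  {
    // simply invert the parent's equality check
    return !base.Process(importantReference, input);
  }
}

Then the factory just needs one more case that returns new StringNotEqualsProcessor(mandatoryArgument, value) for ProcessorType.StringNotEqual.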

Post-Refactor: All of the Lambdas!

So… Why don’t we stop here? Because we can do better.

I mentioned that to make a simple inverse processor, all I had to do was override a class and invert the return value of Process(). That’s pretty easy to do… Except I need an entire new class to do it. If I want to make more types of numeric processing, I need to provide similar type checking and conversion. This code gets duplicated every time I go to add another simple class.

I also have my factory class responsible for creating my processor instances. They’re relatively coupled already, but I want developers to have to use my factory to construct instances of the processor interface and not worry about the specific implementations. So what if my factory had a bit more say in the construction of the processors? I could use lambdas to pass in the logic that’s unique to each type of processor, and keep each type of processor pretty bare bones. This would move more logic into the factory, but reduce the number of processor implementations I have to make.

So let’s do better!

Let’s start with our new IProcessor implementation. We’ll provide a delegate signature that will be the basis for the lambda expressions we pass in:

using System;
using System.Collections.Generic;
using System.Text;

namespace LambdaRefactor.Processing.PostRefactor
{
  public abstract class Processor : IProcessor
  {
    private readonly object _importantReference;

    public Processor(object mandatoryArgument)
    {
      if (mandatoryArgument == null)
      {
        throw new ArgumentNullException("mandatoryArgument");
      }

      _importantReference = mandatoryArgument;
    }

    public delegate bool ProcessDelegate<T>(object importantReference, T processorValue, T input);

    public bool TryProcess(object input)
    {
      if (input == null)
      {
        return false;
      }

      return Process(_importantReference, input);
    }

    protected abstract bool Process(object importantReference, object input);
  }
}

From here, we can come up with some child classes that are generic enough for us to work with using lambdas while still providing enough functionality to exist on their own. We can break our processors up based on the type of data they’ll be working with. That is, we can have a processor for numeric values and a processor for string values. This will cover a lot of the duplicated functionality that exists in the current state of our refactor if we wanted to keep creating new IProcessor implementations.

Let’s start with our NumericProcessor:

using System;
using System.Collections.Generic;
using System.Text;
using System.Globalization;

namespace LambdaRefactor.Processing.PostRefactor.Numeric
{
  public class NumericProcessor : Processor
  {
    private readonly decimal _value;
    private readonly ProcessDelegate<decimal> _processDelegate;

    public NumericProcessor(object mandatoryArgument, object value, ProcessDelegate<decimal> processDelegate)
      : base(mandatoryArgument)
    {
      if (value == null)
      {
        throw new ArgumentNullException("value");
      }

      if (processDelegate == null)
      {
        throw new ArgumentNullException("processDelegate");
      }

      _value = Convert.ToDecimal(value, CultureInfo.InvariantCulture); // will throw exception on mismatch
      _processDelegate = processDelegate;
    }

    protected override bool Process(object importantReference, object input)
    {
      decimal numericInput;
      try
      {
        numericInput = Convert.ToDecimal(input, CultureInfo.InvariantCulture);
      }
      catch (Exception)
      {
        return false;
      }

      return _processDelegate(importantReference, _value, numericInput);
    }
  }
}

And similarly, a StringProcessor:

using System;
using System.Collections.Generic;
using System.Text;

namespace LambdaRefactor.Processing.PostRefactor.String
{
  public class StringProcessor : Processor
  {
    private readonly string _value;
    private readonly ProcessDelegate<string> _processDelegate;

    public StringProcessor(object mandatoryArgument, object value, ProcessDelegate<string> processDelegate)
      : base(mandatoryArgument)
    {
      if (value == null)
      {
        throw new ArgumentNullException("value");
      }

      if (processDelegate == null)
      {
        throw new ArgumentNullException("processDelegate");
      }

      _value = (string)value; // will throw exception on mismatch
      _processDelegate = processDelegate;
    }

    protected override bool Process(object importantReference, object input)
    {
      return _processDelegate(importantReference, _value, Convert.ToString(input, System.Globalization.CultureInfo.InvariantCulture));
    }
  }
}

With these two basic child classes built upon our new IProcessor implementation, we can restructure a new IProcessorFactory implementation. As I mentioned, we can leverage lambdas to move some logic back into the factory class and keep the processor implementations relatively basic.

Here’s the new factory:

using System;
using System.Collections.Generic;
using System.Text;

using LambdaRefactor.Processing.PostRefactor.Numeric;
using LambdaRefactor.Processing.PostRefactor.String;

namespace LambdaRefactor.Processing.PostRefactor
{
  public class ProcessorFactory : IProcessorFactory
  {
    public IProcessor Create(ProcessorType type, object mandatoryArgument, object value)
    {
      switch (type)
      {
        case ProcessorType.GreaterThan:
          return new NumericProcessor(mandatoryArgument, value, (_, x, y) => x < y);
        case ProcessorType.StringEqual:
          return new StringProcessor(mandatoryArgument, value, (_, x, y) => x == y);
        /*
         * Look how easy it is to add new processors! Exercise for you:
         * implement the remaining processors in the enum!
         */
        default:
          throw new NotImplementedException("The processor type '" + type + "' has not been implemented in this factory.");
      }
    }
  }
}

As you can see, our new factory is simple like our first implementation. The major difference? We’re passing very simple lambdas that would have otherwise been functionality defined in a very light-weight child class. This allows us to move away from having many potentially very bare-bones classes and minimizes the amount of boilerplate duplication.
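
To show just how little code a new processor takes now, here are a few of the remaining enum values filled in. These are the cases left as an exercise above, so treat this as one possible answer rather than the “official” one (remember that in these lambdas, x is the value the processor was configured with and y is the incoming input):

        case ProcessorType.LessThan:
          return new NumericProcessor(mandatoryArgument, value, (_, x, y) => x > y);
        case ProcessorType.NumericEqual:
          return new NumericProcessor(mandatoryArgument, value, (_, x, y) => x == y);
        case ProcessorType.StringNotEqual:
          return new StringProcessor(mandatoryArgument, value, (_, x, y) => x != y);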

Summary

I didn’t post it here, but the original implementation that this example paralleled in real life was a pain to deal with. It was hard to test, brittle to modify/extend, and just downright unwieldy. It was obvious to me that switching to a refactored object-oriented implementation was going to make this style of code easy to extend and easy to test.

The initial refactor posted in this example was a great step in the right direction. The code became easy to build upon by relying on simple OOP principles, and granular parts of the functionality became really easy to test. If I just wanted to test certain types of numeric processing, I didn’t have to set up a test for my entire massive “process” function. All I’d have to do is make an instance of the processor I want to test, and call the methods I’d like to cover. Incredibly easy.
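
For example, a test against the pre-refactor GreaterProcessor needs nothing but the class itself. A quick sketch, assuming NUnit (any test framework would look about the same):

[Test]
public void TryProcess_InputGreaterThanValue_ReturnsTrue()
{
  var processor = new GreaterProcessor(new object(), 5);
  Assert.IsTrue(processor.TryProcess(10));
}

[Test]
public void TryProcess_NullInput_ReturnsFalse()
{
  var processor = new GreaterProcessor(new object(), 5);
  Assert.IsFalse(processor.TryProcess(null));
}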

Lambdas took this to the next level though. By leveraging lambdas, I could refactor even more common code into a base class. It also meant that in order to use my processors properly, going through the final factory class implementation pretty much became required. It caused a paradigm shift where instead of making lots of light-weight child classes for additional processor implementations, I’d only need to implement some logic in the factory. All of my existing processors could be refactored into a handful of generic processor classes, and the factory would be responsible for passing in the necessary lambdas.

Lambdas let you accomplish some pretty powerful things, and this refactoring example was one case where they made code much easier to manage. Hopefully you can find a good use for lambda expressions in your next upcoming programming task!

Code Downloads


Cameron Sapp – Recognizing The New Guy

 

 

Cameron Sapp (Rocking awesome handlebars for Movember)

Cameron Sapp and a Little Background

A couple weeks ago I mentioned that I wanted to start publicly acknowledging some of my teammates. While this is the first one, it certainly won’t be the last. At Magnet Forensics, I’m surrounded by many individuals that bring a lot to the table. There’s certainly no reason and no way I’d only be able to pick one person to write about. Now there wasn’t a particular reason I picked this individual first, but I think I had some concrete things fresh in my head that I wanted to share. Without too much more rambling, I’d like to introduce Cameron Sapp!

New Kid on the Block

Cameron joined our team earlier this year. I don’t think any of us doubted his technical abilities and we were all excited to bring him on board. After all, we have a ton of stuff to work on and we need more great minds working with us! We were getting pretty impatient waiting for him to start, but it was definitely worth the wait.

Cam fit into the work culture really well and really quickly too. Heck, he’s one of Team Magnet’s awesome volleyball players! Something people may not pay attention to is how important a work culture fit is in a small organization. Being able to get along with all of your teammates and share a common vision is absolutely crucial for being successful. Luckily for us, Cameron fits in well with the team and definitely embraces the Magnet culture!

I was recently told by a bright individual, Dan Silivestru of tinyHippos, that there will be a time where someone younger is going to show up and surprise me with what they know. Of course, it’s not that I walk around doubting the ability of people, but unfortunately it’s pretty common for age and/or experience to bring about big assumptions about people’s abilities. I’m still young and early in my career, so I don’t think age is something I’m concerned with–but I might be guilty of thinking highly of my technical abilities. While Cameron isn’t the first, and certainly won’t be the last, he definitely was able to pull some tricks from his sleeves to impress me. For that, I would like to applaud him and recognize him here on The Internetz.

Whatcha Gonna Do With All Them Lambdas?

I’ve been programming in Microsoft’s C# for quite a few years now. I’m certainly not a master by any stretch of the imagination, but I’d say I’m pretty well versed. I’ve also written in the past about how I like to use events a lot when I’m programming (like here, here, and here) and almost always try to find an event-driven approach to things. But what does this have to do with Mr Cameron Sapp?

Well, you see… In C# it’s often the case where you hook up events like this:


someObject.DidSomething += SomeObject_DidSomething;

private void SomeObject_DidSomething(Object sender, EventArgs e)
{
    // do some awesome stuff.
}

That’s not so bad, right? Well, except if you’re making these suckers everywhere… And when you don’t want to have to type out a big ugly signature… Or when the type of your event arguments is obnoxiously long… Well, you get the point. If you’re not a C# programmer, take my word for it: if you use events a lot, having these event handlers all over the place sometimes just sucks to have to look at.

Enter… The lambda.

So once upon a time, Cameron stuck up a code review. Things were looking pretty good (as per usual with Cam’s code), but I noticed something right before I was going to give it the stamp of approval.


// Some code...

someClass.SomeEvent += (s, e) =>
{
    // event handler logic
};

// Some more code...

What the heck is that?! My alarms for event handler memory leaks weren’t going off (since this handler needed to exist for the entire lifetime of the objects in question), but I had no idea what I was looking at. Cameron’s a pretty smart guy, I remember thinking, so this code definitely had to compile on his machine before he pushed it up for me to review. Still… What was I looking at?

This was my first real shocker where someone caught me off guard for something I always felt really comfortable with. I mean… C# and events are my bread and butter. How was this guy showing me something I hadn’t seen before regarding events? How can he know something about them I don’t?! Well, he did it. And I’m sure that he’s got a lot more up there in that head of his that I don’t know. And I can’t wait for him to teach me it!

Summary

So this was pretty quick, and it probably doesn’t do Cameron enough justice, but I think it’s a start. We’re really fortunate to have Cameron as part of our team–both from a culture fit and a technical perspective. He’s a rock solid developer that is not only willing to adapt to our coding environment, but he’s also got lots of insight to bring to the table.

It’s important that we never put ourselves in a position where we think we know it all. As soon as you get comfortable with what you know, you stop learning. When you stop learning, you have people like Cameron show up and send you a wake up call. There isn’t a single person out there who knows everything, and you might be surprised who can teach you a thing or two.

Thanks for being part of our team, Cam. Let’s show ’em how it’s done.

More team member recognition to come! Stay tuned.


Dynamic Programming with Python and C#

Dynamic Coding with C# and Python

Dynamic Code: Background

Previously, I was expressing how excited I was when I discovered Python, C#, and Visual Studio integration. I wanted to save a couple examples regarding dynamic code for a follow up article… and here it is! (And yes… there is code you can copy and paste or download).

What does it mean to be dynamic? As with most things, Wikipedia provides a great start. Essentially, much of the work done for type checking and signatures is performed at runtime for a dynamic language. This could mean that you can write code that calls a non-existent method and you won’t get any compilation errors. However, once execution hits that line of code, you might get an exception thrown. This Stack Overflow post’s top answer does a great job of explaining it as well, so I’d recommend checking that out if you need a bit more clarification. So we have statically bound and dynamic languages. Great stuff!

So does that mean Python is dynamic? What about C#?

Well Python is certainly dynamic. The code is interpreted and functions and types are verified at run time. You won’t know about type exceptions or missing method exceptions until you go to execute the code. For what it’s worth, this isn’t to be confused with a loosely typed language. Ol’ faithful Stack Overflow has another great answer about this. The type of the variable is determined at runtime, but the variable type doesn’t magically change. If you set a variable to be an integer, it will be an integer. If you set it immediately after to be a string, it will be a string. (Dynamic, but strongly typed!)

As for C#, in C# 4 the dynamic keyword was introduced. By using the dynamic keyword, you can essentially get similar behaviour to Python. If you declare a variable of type dynamic, it will take on the type of whatever you assign to it. If I assign a string value to my dynamic variable, it will be a string. I can’t perform operations like pre/post increment (++) on the variable when it’s been assigned a string value without getting an exception. If I assign an integer value immediately after having assigned a string value, my variable will take on the integer type and my numeric operators become available.
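
To make that concrete, here’s a minimal sketch (not taken from the examples below, just an illustration of the behaviour described above):


using System;

internal static class DynamicSketch
{
    private static void Main()
    {
        dynamic value = "hello";          // value behaves like a string here
        Console.WriteLine(value.Length);  // fine: strings have a Length property

        // Uncommenting the next line still compiles, but it throws a
        // RuntimeBinderException at runtime because ++ isn't defined for strings.
        // value++;

        value = 42;                       // now value behaves like an int
        value++;                          // numeric operators are available again
        Console.WriteLine(value);         // prints 43
    }
}


The mistake with the string only shows up when that line actually executes, which is exactly the runtime-versus-compile-time trade-off described above.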

Where does this get us with C# and Python working together then?

Example 1: A Simple Class

After trying to get some functions to execute between C# and Python, I thought I needed to take it to the next level. I know I can declare classes in Python, but how does that look when I want to access it from C#? Am I limited to only calling functions from Python with no concept of classes?

The answer to the last question is no. Most definitely not. You can do some pretty awesome things with IronPython. In this example, I wanted to show how I can instantiate an instance of a class defined within a Python script from C#. The script doesn’t have to be created in code (you can load it from an external file; check out my last Python/C# post if you need more clarification on that), but I chose to do it this way so that all of the code for the example is in one spot. I figured it might be easier to show that way.

We’ll be defining a class in Python called “MyClass” (I know, I’m not very creative, am I?). It’s going to have a single method on it called “go” that will take one input parameter and print it to the console. It’s also going to return the input string so that we can consume it in C# and use it to validate that things are actually going as planned. Here’s the code:

using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.Scripting.Hosting;

using IronPython.Hosting;

namespace DynamicScript
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            Console.WriteLine("Enter the text you would like the script to print!");
            var input = Console.ReadLine();

            var script =
                "class MyClass:\r\n" +
                "    def __init__(self):\r\n" +
                "        pass\r\n" +
                "    def go(self, input):\r\n" +
                "        print('From dynamic python: ' + input)\r\n" +
                "        return input";

            try
            {
                var engine = Python.CreateEngine();
                var scope = engine.CreateScope();
                var ops = engine.Operations;

                engine.Execute(script, scope);
                var pythonType = scope.GetVariable("MyClass");
                dynamic instance = ops.CreateInstance(pythonType);
                var value = instance.go(input);

                if (!input.Equals(value))
                {
                    throw new InvalidOperationException("Odd... The return value wasn't the same as what we input!");
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("Oops! There was an exception while running the script: " + ex.Message);
            }

            Console.WriteLine("Press enter to exit...");
            Console.ReadLine();
        }
    }
}

Not too bad, right? The first block of code just takes some user input. It’s what we’re going to have our Python script output to the console. The next chunk of code is our Python script declaration. As I said, this script can be loaded from an external file and doesn’t necessarily have to exist entirely within our C# code files.

Within our try block, we’re going to set up our Python engine and “execute” our script. From there, we can ask Python for the type definition of “MyClass” and then ask the engine to create a new instance of it. Here’s where the magic happens though! How can we declare the variable’s type in C# when the type is actually declared in Python? Well, we don’t have to worry about it! If we make it the dynamic type, our variable will take on whatever type is assigned to it. In this case, it will be of type “MyClass”.

Afterwards, I use the return value from calling “go” so that we can verify the variable we passed in is the same as what we got back out… and it definitely is! Our C# string was passed into a Python function on a custom Python class and spat back out to C# just as it went in. How cool is that?

Some food for thought:

  • What happens if we change the C# code to call “go1” instead of “go”? Do we expect it to work? If it’s not supposed to work, will it fail at compile time or at runtime? (There’s a small sketch after this list if you want to try it out.)
  • Notice how our Python method “go” doesn’t have a type specified for the argument “input”? How and why does all of this work then?!
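
If you want to experiment with the first question, here’s a small hedged sketch; it assumes you drop it into the try block of the example above (after the existing call to “go”) so that instance and input are in scope. The method “go1” doesn’t exist in the Python script, and the interesting part is when and how that gets reported:


try
{
    // "go1" isn't defined in the Python script above. This still compiles,
    // because member lookup on a dynamic variable is deferred to runtime.
    var result = instance.go1(input);
    Console.WriteLine(result);
}
catch (Exception ex)
{
    // With dynamic, the missing method surfaces here, at runtime,
    // as an exception rather than as a compile error.
    Console.WriteLine("Calling go1 failed at runtime with: " + ex.GetType().Name);
}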

Example 2: Dynamically Adding Properties

I was pretty excited after getting the first example working. This meant I’d be able to create my own types in Python and then leverage them directly in C#. Pretty fancy stuff. I didn’t want to stop there though. The dynamic keyword is still new to me, and so is integrating Python and C#. What more could I do?

Well, I remembered something from my earlier Python days about dynamically modifying types at run-time. To give you an example, in C# if I declare a class with method X and property Y, instances of this class are always going to have method X and property Y. In Python, I have the ability to dynamically add a property to my class. This means that if I create a Python class that has method X but is missing property Y, at runtime I can go right ahead and add property Y. That’s some pretty powerful stuff right there. Now I don’t know of any situations off the top of my head where this would be really beneficial, but the fact that it’s doable had me really interested.

So if Python lets me modify methods and properties available to instances of my type at runtime, how does C# handle this? Does the dynamic keyword support this kind of stuff?

You bet. Here’s the code for my sample application:

using System;
using System.Collections.Generic;
using System.Text;

using Microsoft.CSharp.RuntimeBinder;

using IronPython.Hosting;

namespace DynamicClass
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            Console.WriteLine("Press enter to read the value of 'MyProperty' from a Python object before we actually add the dynamic property.");
            Console.ReadLine();

            // this script was taken from this blog post:
            // http://znasibov.info/blog/html/2010/03/10/python-classes-dynamic-properties.html
            var script =
                "class Properties(object):\r\n" +
                "    def add_property(self, name, value):\r\n" +
                "        # create local fget and fset functions\r\n" +
                "        fget = lambda self: self._get_property(name)\r\n" +
                "        fset = lambda self, value: self._set_property(name, value)\r\n" +
                "\r\n" +
                "        # add property to self\r\n" +
                "        setattr(self.__class__, name, property(fget, fset))\r\n" +
                "        # add corresponding local variable\r\n" +
                "        setattr(self, '_' + name, value)\r\n" +
                "\r\n" +
                "    def _set_property(self, name, value):\r\n" +
                "        setattr(self, '_' + name, value)\r\n" +
                "\r\n" +
                "    def _get_property(self, name):\r\n" +
                "        return getattr(self, '_' + name)\r\n";

            try
            {
                var engine = Python.CreateEngine();
                var scope = engine.CreateScope();
                var ops = engine.Operations;

                engine.Execute(script, scope);
                var pythonType = scope.GetVariable("Properties");
                dynamic instance = ops.CreateInstance(pythonType);

                try
                {
                    Console.WriteLine(instance.MyProperty);
                    throw new InvalidOperationException("This class doesn't have the property we want, so this should be impossible!");
                }
                catch (RuntimeBinderException)
                {
                    Console.WriteLine("We got the exception as expected!");
                }

                Console.WriteLine();
                Console.WriteLine("Press enter to add the property 'MyProperty' to our Python object and then try to read the value.");
                Console.ReadLine();

                instance.add_property("MyProperty", "Expected value of MyProperty!");
                Console.WriteLine(instance.MyProperty);
            }
            catch (Exception ex)
            {
                Console.WriteLine("Oops! There was an exception while running the script: " + ex.Message);
            }

            Console.WriteLine("Press enter to exit...");
            Console.ReadLine();
        }
    }
}

Let’s start by comparing this to the first example, because some parts of the code are similar. We start off by telling the user what’s going to happen and wait for them to press enter. Nothing special here. Next, we declare our Python script (again, you can have this as an external file), which I pulled from the blog post linked in the code comment. It was one of the first hits when searching for dynamically adding properties to classes in Python, and despite my limited Python knowledge, it worked exactly as I had hoped. So thank you, Zaur Nasibov.

Inside our try block, we have the Python engine creation just like in our first example. We execute our script right after, too, and create an instance of our type defined in Python. Again, this is all just like the first example so far. At this point, we have a reference in C# to a type declared in Python called “Properties”. I then try to print to the console the value stored in my instance’s property called “MyProperty”. If you were paying attention to what’s written in the code, you’ll notice we don’t have a property called “MyProperty”! Doh! Obviously that’s going to throw an exception, so I show that in the code as well.

So where does that leave us then? Well, let’s add the property “MyProperty” ourselves! Once we add it, we should be able to ask our C# instance for the value of “MyProperty”. And… voila!

Some food for thought:

  • When we added our property in Python, we never specified a type. What would happen if we tried to increment “MyProperty” after we added it? What would happen if we tried to assign an integer value of 4 to “MyProperty”?
  • When might it be useful to have types in C# dynamically get new methods or properties?

Summary

With this post, we’re still just scratching the surface of what’s doable when integrating Python and C#. Historically, these languages have been viewed as very different: C# is statically bound while Python is a dynamic language. However, it’s pretty clear that with a bit of IronPython magic we can quite easily marry the two languages together. Using the “dynamic” keyword within C# really lets us get away with a lot!

Source code for these projects is available at the following locations:

