Tag: Software

API: Top-Down? Bottom-Up? Somewhere in the Middle?

A Quick Brain-Dump on API Design

I’ll keep this one pretty brief as I haven’t totally nailed down my thoughts on this, but I still thought it was worth a quick little post:

When you’re creating a brand new API to expose some functionality of a system, should you design it with a strong focus on how the internals work? Should you ignore how internals work and make it as easy to consume as possible? Or is there an obvious balance?

I find myself trying to answer this question without ever explicitly asking it. Any time I’m looking to extend or connect systems, this is likely to come up.

Most Recently…

Most recently, I started looking at creating an API over AMQP to connect my game back-end to a Unity 3D front-end. I had been developing the back-end for a little while at that point, so I had a pretty good idea of how things needed to work from that perspective. The front-end? Not so much, really. I knew some basic actions that I was considering, so I tried coding up an early API for them.

A lot of my focus was around how I was going to implement the code on the back-end to make this API work. It resulted in some of the API calls looking a little bit gross. But the idea was that if I settled on an API that would make the back-end easy to implement, I could get it up and running faster.

After a little while, I feel like my API isn’t getting any cleaner from a consumer perspective, and funnily enough, it’s not actually as easy to implement on the back-end as I was hoping. That got me reflecting on a work example…

Once Upon a Time at Work…

Well, it was only a couple of months ago, really. I was working with a colleague on integrating a new system into an existing code base. We decided we wanted to approach the API problem from a consumer perspective. We said “let’s make this as easy to call and use as possible so that people ACTUALLY want to use it”.

We set out with this mission, and created a pretty simplistic API. The challenge? There was a lot of heavy lifting and a bit of voodoo going on behind the scenes. But you know what? We hid the magic in one spot of the code (instead of having ugly stuff scattered all over the code base) and it ended up being a very usable API.

So…

So does this consumer-first, top-down approach to API design always work? I’m not sure. Some similarities/differences in the scenarios:

  • In my current situation, I have a back-end implemented and very minimal code implemented on the caller side. In the work example, we had nothing implemented on the back-end, and a ton of code implemented where the caller side would be.
  • In both examples, at least one of the caller side or back-end side was reasonably well understood. For my game, the back-end was pretty well understood. For work, the caller side was pretty well understood and we had experience with what we’d call a “failed” back-end implementation (that we were actually setting out to redesign).
  • The work example was a relatively small subset of an API, but the game example was going to be a very specific implementation that I’d need to adapt into a pattern for all messaging in my AMQP system.

So there’s a few things to consider there. I think I’m at the point in my game where I’d like to revisit how I’m forming this API and try it from a client-first perspective. Now that I know some of the catches, maybe I’ll shed some new light!

How do you approach API design?


What Makes Good Code? – Should Every Class Have An Interface? Pt 1


What’s An Interface?

I mentioned in the first post of this series that I’ll likely be referring to C# in most of these posts. I think the concept of an interface in C# extends to other languages–sometimes by a different name–so the discussion here may still be applicable. Here are some examples in C++, Java, and Python to get you going for comparisons.

From MSDN:

An interface contains definitions for a group of related functionalities that a class or a struct can implement.
By using interfaces, you can, for example, include behavior from multiple sources in a class. That capability is important in C# because the language doesn’t support multiple inheritance of classes. In addition, you must use an interface if you want to simulate inheritance for structs, because they can’t actually inherit from another struct or class.

It’s also important to note that an interface decouples the definition of something from its implementation. Decoupled code is, in general, something that programmers are always after. If we refer back to the points I defined for what makes good code (again, in my opinion), we can see how interfaces should help with that.

  • Extensibility: Referring to interfaces in code instead of concrete classes allows a developer to swap out implementations more easily (e.g. extending support for different data providers in your data layer). They provide a specification to be met should a developer want to extend the code base with new concrete implementations.
  • Maintainability: Interfaces make refactoring an easier job (when the interface signature doesn’t have to change). A developer can get the flexibility of modifying the implementation that already exists or creating a new one provided that it meets the interface.
  • Testability: Referring to interfaces in code instead of concrete classes allows mocking frameworks to leverage mocked objects so that true unit tests are easier to write.
  • Readability: I’m neutral on this. I don’t think interfaces are overly helpful for making code more readable, but I don’t think they inherently make code harder to read.

I’m only trying to focus on some of the pros here, and we’ll use this sub-series to explore whether these hold true across the board. So… should every class have a backing interface?

An Example

Let’s walk through a little example. In this example, we’ll look at an object that “does stuff”, but it requires something that can do a string lookup to “do stuff” with. We’ll look at how using an interface can make this type of code extensible!

First, here is our interface that we’ll use for looking up strings:

public interface IStringLookup
{
    string GetString(string name);
}

And here is our first implementation of something that can look up strings for us. It’ll just look up an XML node and pull a value from it. (How it actually does this stuff isn’t really important for the example, which is why I’m glossing over it):

public sealed class XmlStringLookup : IStringLookup
{
    private readonly XmlDocument _xmlDocument;

    public XmlStringLookup(XmlDocument xmlDocument)
    {
        _xmlDocument = xmlDocument;
    }

    public string GetString(string name)
    {
        return _xmlDocument
            .GetElementsByTagName(name)
            .Cast<XmlElement>()
            .First()
            .InnerText;
    }
}

This will be used to plug into the rest of the code:

private static int Main(string[] args)
{
    var obj = CreateObj();
    var stringLookup = CreateStringLookup();
    
    obj.DoStuff(stringLookup);
 
    return 0;
}
 
private static IMyObject CreateObj()
{
    return new MyObject();
}
 
private static IStringLookup CreateStringLookup()
{
    return new XmlStringLookup(new XmlDocument());
}
 
public interface IMyObject
{
    void DoStuff(IStringLookup stringLookup);
}
 
public class MyObject : IMyObject
{
    public void DoStuff(IStringLookup stringLookup)
    {
        var theFancyString = stringLookup.GetString("FancyString");
        
        // TODO: do stuff with this string
    }
}

In the code snippet above, you’ll see our Main() method creating an instance of “MyObject” which is the thing that’s going to “DoStuff” with our XML string lookup. The important thing to note is that the DoStuff method takes in the interface IStringLookup that our XML class implements.

Now, XML string lookups are great, but let’s show why interfaces make this code extensible. Let’s swap out an XML lookup for an overly simplified CSV string lookup! Here’s the implementation:

public sealed class CsvStringLookup : IStringLookup
{
    private readonly StreamReader _reader;
 
    public CsvStringLookup(StreamReader reader)
    {
        _reader = reader;
    }
 
    public string GetString(string name)
    {
        string line;
        while ((line = _reader.ReadLine()) != null)
        {
            var split = line.Split(',');
            if (split[0] != name)
            {
                continue;
            }
 
            return split[1];
        }
 
        throw new InvalidOperationException("Not found.");
    }
}

Now to leverage this class, we only need to modify ONE line of code from the original posting! Just modify CreateStringLookup() to be:

private static IStringLookup CreateStringLookup()
{
    return new CsvStringLookup(new StreamReader(File.OpenRead(@"pathtosomefile.txt")));
}

And voila! We’ve been able to extend our code to use a COMPLETELY different implementation of a string lookup with essentially no code change. You could make the argument that if you needed to modify the implementation of a buggy class, then as long as you were adhering to the interface, you wouldn’t need to modify much of the surrounding code (just like in this example). That would be a point towards improved maintainability in the code.
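The same boundary helps with the testability point from earlier, too. Because DoStuff() only depends on IStringLookup, a unit test can hand it a fake lookup instead of a real XML or CSV one. Here’s a minimal hand-rolled sketch of that idea (a mocking framework could generate something equivalent for you):

public sealed class FakeStringLookup : IStringLookup
{
    // returns a predictable value so tests never need real XML or CSV files
    public string GetString(string name)
    {
        return "fake value for " + name;
    }
}

// somewhere in a unit test:
// new MyObject().DoStuff(new FakeStringLookup());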

“But wait!” you shout, “I could have done the EXACT same thing with an abstract class instead of the IStringLookup interface you big dummy! Interfaces are garbage!”

And you wouldn’t be wrong about the abstract class part! It’s totally true that IStringLookup could instead have been an abstract class like StringLookupBase (or something…) and the benefits would still apply! That’s a really interesting point, so let’s keep it in mind as we continue throughout this whole series. The little lesson here? It’s not the interface that gives us this bonus; it’s the API boundary and level of abstraction we introduced (something that does string lookups). Both an interface and an abstract class happen to help us a lot here.
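To make that comparison a bit more concrete, here’s roughly what the abstract class version could look like (just a sketch; StringLookupBase is the hypothetical name from the paragraph above):

public abstract class StringLookupBase
{
    public abstract string GetString(string name);
}

XmlStringLookup and CsvStringLookup would derive from StringLookupBase instead of implementing IStringLookup, CreateStringLookup() would return a StringLookupBase, and the call sites in Main() wouldn’t change at all.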

Continue to Part 2


What Makes Good Code? – Patterns and Practices Series

What Makes Good Code?

It’s been a while since I’ve had a programming-oriented post, and I figured this would be a great topic to write about. It’s been a topic I’ve been thinking about more and more over the last year, and I’ve been experimenting with certain patterns and practices to see if they actually make code “better”. A lot of the information presented in this series will be completely based on my opinion, but I’ll try to back up my opinion with as many concrete examples as I can. If you have a differing opinion, I’d love to hear it in the comments.

I’d also like to call out that much of what I’ll be discussing is in the context of object oriented programming. To be specific, most of the examples will be in C#. If this isn’t something you’re actively doing, then don’t worry! It would be great to hear if you see parallels in the work you’re doing.

What Is “Good”?

So let’s start by defining what “good” or “better” means (and I’ll leave this high level and we can dive in afterwards)…

  • Extensible: Re-writing of code is minimized when adding more functionality. It feels straightforward to extend the code. Developers won’t do “the wrong thing” when trying to extend the code.
  • Maintainable: Many of the same qualities that go with extensible. Fixing bugs or making tweaks involves touching few places in the code base.
  • Testable: It’s straightforward to write coded tests that can exercise functionality of the code under test.
  • Readable**: Developers should be able to read code and understand what it’s doing. The flow of execution within and between different modules of code shouldn’t cause anyone a headache.

All of these describe qualities of the code. I could argue that you could write very maintainable, extensible, and testable code and it would be awful if it didn’t actually solve the customers’ needs. I’d like to try and leave this aspect out of the discussion, and focus on the actual patterns/practices that we can implement in code. I’d love to write something separate about writing code that actually solves a problem versus code that may potentially solve some possible problem at some potential point in the future (and yes, all of the uncertainty in that sentence was on purpose).

Why Should We Care?

It’s kind of a funny question, I guess, but I think it’s a fair one to ask. Why should we care what good code is? If we agree on the definition of good code, so what? Maybe those criteria for good code are obvious to some people. Maybe they weren’t so obvious to some people, but they still don’t really care. So why the fuss about what good code is?

If you’ve read other posts on DevLeader or you know me personally, you may know that I started work at a digital forensics startup a few years back in Waterloo, Ontario. What you may not know is that that startup has grown significantly, and based on the accolades we’ve received as an entire organization, we’re actually one of the fastest growing software companies in North America in terms of revenue. I’m not trying to toot the horn without a reason though… I think one of the reasons we’ve been able to have such great success on the development side of things is because we’re always trying to improve.

We didn’t always write “good” code, and we certainly don’t always write “good” code right now. However, we’re always trying to figure out how we can get better. So why might WE care as developers at our office? I think it comes down to trade-offs.

In the real world of software development, you’re often faced with trade-offs. You can get a product out faster if you don’t test it. Or you can get a product out and test it, but maybe you had to take a lot of shortcuts in the code. Or maybe you can get all the features in the product and tested, but you can’t hit the deadline. There are countless more combinations of trade-offs that we make in real software development every day. I think that understanding what “good” code means allows a team to recognize just what kinds of corners they’re cutting sometimes. When teams talk about introducing “tech debt”, there’s a better grasp of what type of debt they’re introducing. If you need to get some extra features and bug fixes in but they’re getting added with some tech debt, what could that end up meaning?

Even if you don’t agree with what my criteria are for good code, I think it’s important that you establish this within your team. If everyone can agree on what good code is, it makes constructive conversations about different implementations much easier. Just because some new code is different doesn’t instantaneously make it scary and/or wrong… Maybe it’s a new way that emphasizes one of the criteria for good code a bit more than another implementation might emphasize. Perhaps it doesn’t… You can at least refer back to a reference point for what “good” is.

It’s also important to recognize that the criteria for “good” may change over time. Revisiting the definition periodically might allow you to recognize when your team is redefining what “good” means to them.

Enough Rambling! Where’s This Going?

Right. Okay. I want to write some follow up posts that will focus on a few of the following items:

  • Does every class need an interface?
  • Does every class need a factory that can create it?
  • Unit tests versus functional tests
  • Is there a benefit to only passing interfaces to objects around?
  • Is there a way to enforce that interfaces HAVE to be passed around?
  • Is the single responsibility principle even helpful?
  • Mutability and immutability
  • Is it always good to follow patterns and practices in all scenarios?

And after I feel that I’ve covered enough on these topics, I’d like to circle back and revisit what “good” code is. It’ll be cool to see if the definition changes at all!

**Readability was an afterthought, and I’m not sure how… I started writing the first example post for this series and QUICKLY realized I had omitted it.


Yeah, We’re an “Agile” Shop

Everybody Has Gone “Agile”

If you’re a software developer that’s done interviews in the past few years, then you already know that every software development shop has gone agile. Gone are the days of waterfall software development! Developers have learned that waterfall software development is the root of all evil, and the only way to be successful is to be agile. You need to be able to adapt quickly and do standups. You need to put story point estimates on your user stories. You need retrospectives… And agility! And… more buzz words! Yes! Synergy! In the cloud! You need it!

Okay, so why the sarcasm? Every single software development team is touting that they’re following the principles of agile software development, but almost no team truly is. Is it a problem if they aren’t actually following agile principles? Absolutely not, if they’re working effectively to deliver quality software. That’s not for me to say at all. I think the problem is that people are getting confused about “being agile”, but there’s nothing necessarily wrong with how they’re operating if it works for them.

Maybe We’re Not So “Agile”

I work at Magnet Forensics, and for a long time now, I’ve been saying “yeah, we’re an agile shop”. But you know what? I don’t think we are. I also don’t think that’s a problem. I think our software development process is best defined by “continuous improvement”. That’s right. I think we’re a “Continuous Improvement” shop. Our team has identified the things we think work well for us in how we develop software, and we experiment to improve on things that we think aren’t working well. I’m actually happy that we operate that way instead of operating by a set of guidelines that may or may not work for us.

So, why aren’t we agile? When I look at the Agile Manifesto, I feel like there are a few things we actually don’t do, and we don’t even worry about them. We don’t necessarily have business people working with developers daily, for example. Our delivery cycles are much longer than a couple of weeks most of the time. Sometimes people on our teams don’t communicate best face-to-face. I mean, just because we’re not focusing on those things isn’t necessarily proof that we aren’t agile, but I truly don’t think we’re trying to embody all of the components that make up agile.

As I stated previously, I do think that we try to focus on continuous improvement above all else, and I’m absolutely content with that. I think that if, over time, we continued our retrospectives and our team ended up operating closer to a traditional waterfall process, then that would be the better thing for our team. Why? Because we make incremental changes for our team in an attempt to keep improving. Switching to waterfall is a bit of a contrived example, but I definitely stand by it. Another example might be that maybe working with business people daily isn’t actually effective for our team. I don’t know, to be honest, because we’re currently tweaking other parts of our development process to improve them. Maybe we’ll get around to worrying about that at some other point.

I do know that the way we operate, we’re always trying to improve. Whether or not we get better sprint to sprint is for the retrospective to surface for us, but if we took a step back, at least we can try a different path when we try to take our next step forward.

So, We Don’t Need To Be Agile?

I think my only point of writing this post was to get this across: If you’re not actually an agile software development shop, then don’t call yourself that. There’s absolutely nothing wrong with not living and breathing agile. Maybe you’re a software shop that’s transitioning into Agile. Maybe you’re moving away from Agile. Maybe you have no parts of your software development process that are agile. Who’s to say that because you’re not 100% agile your setup is bad?

I can’t advocate that agile software development is the absolute best thing for all software development teams. I personally like to think that it’s a great way to develop software, but… I don’t know your team. I don’t know your codebase. I don’t know your products, services, or clients. How the heck could I tell you the best way to go make your software?

Again, there’s nothing wrong with not being 100% agile, but we should try to be honest with ourselves. Find what works for your team. Find out how you can effectively deliver quality software.


MyoSharp – Update On The Horizon

MyoSharp

If you haven’t checked it out already, my friend Tayfun and I created an open source C# wrapper for Thalmic’s Myo. It’s hosted on GitHub over here, so you can browse and pull down code whenever you want. We’ve had some great feedback from users of our API, so we continue to welcome it (both positive and negative!) in order to improve the usability.

Thalmic has plans to release a firmware update to allow more data to be accessible through their API. Right now, MyoSharp is a bit out of date, but once this big firmware update lands we’ll take some more time to get it up to date again. Remember, it’s open source so you can feel free to contribute!

Troubleshooting

The most common question I receive is “I keep getting an exception about not being able to connect when I run the sample code”. I’ve tried to help a few people through this, so I figured I’d mention it right here for clarity: it’s more than likely that your MyoConnect version and the version we packaged with MyoSharp have fallen out of sync. You probably keep your Myo SDK more up to date than MyoSharp is.

Don’t worry! So far we’ve had reasonably good luck just replacing the Myo DLLs in the x86 and x64 folder of the solution. Provided Thalmic didn’t break any API compatibility, things should actually just work out of the box. If they *DID* break backwards compatibility, it’s likely not that big of a deal either. You can update the PInvokes used to match the signatures they expect, and again, you should be up and running pretty quickly.

With that said, hold tight! We’ll get something updated soon. If you can’t wait, then that’s my suggestion for how to get up and running. Please don’t hesitate to contact Tayfun or myself for troubleshooting. Just post in the comments here and we can try to help out!


Controlling a Myo Armband with C#


Background

Thalmic Labs has started shipping their Myo armband that allows the wearer’s arm movements and gestures to control different pieces of integrated technology. How cool is that? My friend and I decided we wanted to give one a whirl and see what we could come up with. We’re both C# advocates, so we were a bit taken aback when we saw the only C# support in the SDK was made for Unity. We decided to take things into our own hands and open source a Myo C# library. We’re excited to introduce the first version of MyoSharp!

The underlying Myo components are written in C++, and there are only a handful of functions exposed from the library that we can access. In order to use them, we need to leverage platform invocation (PInvoke) from C# to tap into this functionality. Once you have the PInvokes set up, you can begin to play around!
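To give a feel for what that plumbing looks like, here’s a bare-bones PInvoke declaration. The DLL name, entry point, and signature below are illustrative placeholders rather than the actual Myo SDK exports, but the mechanics are identical: a DllImport attribute plus a matching static extern method.

using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // hypothetical native export -- swap in the real library name and entry point
    [DllImport("myo_sdk.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern int init_hub(out IntPtr hubHandle);
}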

The Workflow

Getting set up with the Myo is pretty straightforward, but it wasn’t obvious to us right away. We didn’t have anyone to walk us through how the different components were supposed to work together (just some good ol’ fashioned code), so we had to tinker around. Once we had everything mapped out, it was quite simple though.

  1. The first step is opening a communication channel with the Bluetooth module. You don’t need to worry about the implementation here since it’s all done in C++ by the Thalmic devs. Calling the correct methods using PInvokes from C# allows us to tap into a stream of “events” that come through the Bluetooth module.
  2. Now that we can intercept events, we need to be able to identify a Myo. After all, working with Myos is our main goal here! There’s a “pair” event that we can listen to from the Bluetooth module that notifies us of when a Myo has paired and provides us a handle to the device. This handle gets used for identifying events for a particular Myo or sending a particular Myo messages.
  3. There’s a connect event that will fire when a Myo connects after it’s been paired with the Bluetooth module. A Myo can be paired but disconnected.
  4. Now that we can uniquely identify a Myo, the only things we need to do are intercept events for a particular Myo and make sense of the data coming from the devices! Orientation change? Acceleration change? There’s a host of information that the device sends back, so we need to interpret it.
  5. When a Myo disconnects, there’s an event that’s sent back for that as well.

Getting Started with MyoSharp

I’m going to start this off with some simple code that should illustrate just how easy it is to get started with MyoSharp. I’ll describe what’s going on in the code immediately after.


using System;

using MyoSharp.Device;
using MyoSharp.ConsoleSample.Internal;

namespace MyoSharp.ConsoleSample
{
    /// <summary>
    /// This example will show you the basics for setting up and working with
    /// a Myo using MyoSharp. Primary communication with the device happens
    /// over Bluetooth, but this C# wrapper hooks into the unmanaged Myo SDK to
    /// listen on their "hub". The unmanaged hub feeds us information about
    /// events, so a channel within MyoSharp is responsible for publishing
    /// these events for other C# code to consume. A device listener uses a
    /// channel to listen for pairing events. When a Myo pairs up, a device
    /// listener publishes events for others to listen to. Once we have access
    /// to a channel and a Myo handle (from something like a Pair event), we
    /// can create our own Myo object. With a Myo object, we can do things like
    /// cause it to vibrate or monitor for poses changes.
    /// </summary>
    internal class BasicSetupExample
    {
        #region Methods
        private static void Main(string[] args)
        {
            // create a hub that will manage Myo devices for us
            using (var hub = Hub.Create())
            {
                // listen for when the Myo connects
                hub.MyoConnected += (sender, e) =>
                {
                    Console.WriteLine("Myo {0} has connected!", e.Myo.Handle);
                    e.Myo.Vibrate(VibrationType.Short);
                    e.Myo.PoseChanged += Myo_PoseChanged;
                };

                // listen for when the Myo disconnects
                hub.MyoDisconnected += (sender, e) =>
                {
                    Console.WriteLine("Oh no! It looks like {0} arm Myo has disconnected!", e.Myo.Arm);
                    e.Myo.PoseChanged -= Myo_PoseChanged;
                };

                // wait on user input
                ConsoleHelper.UserInputLoop(hub);
            }
        }
        #endregion

        #region Event Handlers
        private static void Myo_PoseChanged(object sender, PoseEventArgs e)
        {
            Console.WriteLine("{0} arm Myo detected {1} pose!", e.Myo.Arm, e.Myo.Pose);
        }
        #endregion
    }
}

In this example, we create a hub instance. A hub will manage a collection of Myos that come online and go offline and notify listeners that are interested. Behind the scenes, the hub creates a channel instance and passes this into a device listener instance. The channel and device listener combination allows for being notified when devices come online and is the core of the hub implementation. You can manage Myos on your own by completely bypassing the Hub class and creating your own channel and device listener if you’d like. It’s totally up to you.

In the code above, we’ve hooked up several event handlers. There’s an event handler to listen for when Myo devices connect, and a similar one for when the devices disconnect. We’ve also hooked up to an instance of a Myo device for when it changes poses. This will simply give us a console message every time the hardware determines that the user is making a different pose.

When devices go offline, the hub actually keeps the instance of the Myo object around. This means that if you have device A and you hook up to its PoseChanged event, then even if it goes offline and comes back online several times, your event will still be hooked up to the object that represents device A. This makes managing Myos much easier compared to trying to re-hook event handlers every time a device goes offline and comes back online. Of course, you’re free to make your own implementation using our building blocks, so there’s no reason to feel forced into this paradigm.

It’s worth mentioning that the UserInputLoop() method is only used to keep the program alive. The sample code on GitHub actually lets you use some debug commands to read some Myo statuses if you’re interested. Otherwise, you could just imagine this line is replaced by Console.ReadLine() to block waiting for the user to press enter.

Pose Sequences

Without even diving into the accelerometer, orientation, and gyroscope readings, we were looking for some quick wins building on the basic API that we created. One little improvement we wanted to make was the concept of pose sequences. The Myo will send events when a pose changes, but if you were interested in grouping some of these together, there’s no way to do this out of the box. With a pose sequence, you can declare a series of poses and get an event triggered when the user has finished the sequence.

Here’s an example:


using System;

using MyoSharp.Device;
using MyoSharp.ConsoleSample.Internal;
using MyoSharp.Poses;

namespace MyoSharp.ConsoleSample
{
    /// <summary>
    /// Myo devices can notify you every time the device detects that the user 
    /// is performing a different pose. However, sometimes it's useful to know
    /// when a user has performed a series of poses. A 
    /// <see cref="PoseSequence"/> can monitor a Myo for a series of poses and
    /// notify you when that sequence has completed.
    /// </summary>
    internal class PoseSequenceExample
    {
        #region Methods
        private static void Main(string[] args)
        {
            // create a hub to manage Myos
            using (var hub = Hub.Create())
            {
                // listen for when a Myo connects
                hub.MyoConnected += (sender, e) =>
                {
                    Console.WriteLine("Myo {0} has connected!", e.Myo.Handle);

                    // for every Myo that connects, listen for special sequences
                    var sequence = PoseSequence.Create(
                        e.Myo, 
                        Pose.WaveOut, 
                        Pose.WaveIn);
                    sequence.PoseSequenceCompleted += Sequence_PoseSequenceCompleted;
                };

                ConsoleHelper.UserInputLoop(hub);
            }
        }
        #endregion

        #region Event Handlers
        private static void Sequence_PoseSequenceCompleted(object sender, PoseSequenceEventArgs e)
        {
            Console.WriteLine("{0} arm Myo has performed a pose sequence!", e.Myo.Arm);
            e.Myo.Vibrate(VibrationType.Medium);
        }
        #endregion
    }
}

The same basic setup occurs as the first example. We create a hub that listens for Myos, and when one connects, we hook a new PoseSequence instance to it. If you recall how the hub class works from the first example, this will hook up a new pose sequence each time the Myo connects (which, in this case, isn’t actually ideal). Just for demonstration purposes, we were opting for this shortcut though.

When creating a pose sequence, we only need to provide the Myo and the poses that create the sequence. In this example, a user will need to wave their hand out and then back in for the pose sequence to complete. There’s an event provided that will fire when the sequence has completed. If the user waves out and in several times, the event will fire for each time the sequence is completed. You’ll also notice in our event handler we actually send a vibrate command to the Myo! Most of the Myo interactions are reading values from Myo events, but in this case this is one of the commands we can actually send to it.

Held Poses

The event stream from the Myo device only sends events for poses when the device detects a change. When we were trying to make a test application with our initial API, we were getting frustrated with the fact that there was no way to trigger some action as long as a pose was being held. Some actions like zooming, panning, or adjusting levels for something are best suited to be linked to a pose being held by the user. Otherwise, if you wanted to make an application that would zoom in when the user makes a fist, the user would have to make a fist, relax, make a fist, relax, etc… until they zoomed in or out far enough. This obviously makes for poor usability, so we set out to make this an easy part of our API.

The code below has a similar setup to the previous examples, but introduces the HeldPose class:


using System;

using MyoSharp.Device;
using MyoSharp.ConsoleSample.Internal;
using MyoSharp.Poses;

namespace MyoSharp.ConsoleSample
{
    /// <summary>
    /// Myo devices can notify you every time the device detects that the user 
    /// is performing a different pose. However, sometimes it's useful to know
    /// when a user is still holding a pose and not just that they've 
    /// transitioned from one pose to another. The <see cref="HeldPose"/> class
    /// monitors a Myo and notifies you as long as a particular pose is held.
    /// </summary>
    internal class HeldPoseExample
    {
        #region Methods
        private static void Main(string[] args)
        {
            // create a hub to manage Myos
            using (var hub = Hub.Create())
            {
                // listen for when a Myo connects
                hub.MyoConnected += (sender, e) =>
                {
                    Console.WriteLine("Myo {0} has connected!", e.Myo.Handle);

                    // setup for the pose we want to watch for
                    var pose = HeldPose.Create(e.Myo, Pose.Fist, Pose.FingersSpread);

                    // set the interval for the event to be fired as long as 
                    // the pose is held by the user
                    pose.Interval = TimeSpan.FromSeconds(0.5);

                    pose.Start();
                    pose.Triggered += Pose_Triggered;
                };

                ConsoleHelper.UserInputLoop(hub);
            }
        }
        #endregion

        #region Event Handlers
        private static void Pose_Triggered(object sender, PoseEventArgs e)
        {
            Console.WriteLine("{0} arm Myo is holding pose {1}!", e.Myo.Arm, e.Pose);
        }
        #endregion
    }
}

When we create a HeldPose instance, we can pass in one or more poses that we want to monitor for being held. In the above example, we’re watching for when the user makes a fist or when they have their fingers spread. We can hook up to the Triggered event on the held pose instance, and the event arguments that we get in our event handler will tell us which pose the event is actually being triggered for.

If you take my zoom example that I started describing earlier, we could have a single event handler responsible for both zooming in and zooming out based on a pose being held. If we picked two poses, say fist and fingers spread, to mean zoom in and zoom out respectively, then we could check the pose on the event arguments in the event handler and adjust the zoom accordingly. Of course, you could always make two HeldPose instances (one for each pose) and hook up to the events separately if you’d like. This would end up creating two timer threads behind the scenes–one for each HeldPose instance.
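For example, a single Triggered handler could drive both directions of the zoom by checking which pose fired the event (ZoomIn() and ZoomOut() here stand in for whatever your application actually does; they’re not part of MyoSharp):

private static void Pose_Triggered(object sender, PoseEventArgs e)
{
    // one HeldPose watching Fist and FingersSpread, one handler for both directions
    if (e.Pose == Pose.Fist)
    {
        ZoomIn();
    }
    else if (e.Pose == Pose.FingersSpread)
    {
        ZoomOut();
    }
}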

The HeldPose class also has an interval setting. This allows the programmer to adjust the frequency that they want the Triggered event to fire, provided that a pose is being held by the user. For example, if the interval is set to be two seconds, as long as the pose is being held the Triggered event will fire every two seconds.

Roll, Pitch, and Yaw

The data that comes off the Myo can become overwhelming unless you’re well versed in vector math and trigonometry. Something that we’d like to build up and improve upon is the usability of data that comes off the Myo. We don’t want each programmer to have to write similar code to get the values from the Myo into a usable form for their application. Instead, if we can build that into MyoSharp, then everyone will benefit.

Roll, pitch, and yaw are values that we decided to bake into the API directly. So… what exactly are these things? Here’s a diagram to help illustrate:

Roll, Pitch, and Yaw - MyoSharp

Roll, pitch, and yaw describe rotation around one of three axes in 3D space.

The following code example shows hooking up to an event handler to get the roll, pitch, and yaw data:


using System;

using MyoSharp.Device;
using MyoSharp.ConsoleSample.Internal;

namespace MyoSharp.ConsoleSample
{
    /// <summary>
    /// This example will show you how to hook onto the orientation events on
    /// the Myo and pull roll, pitch and yaw values from it.
    /// </summary>
    internal class OrientationExample
    {
        #region Methods
        private static void Main(string[] args)
        {
            // create a hub that will manage Myo devices for us
            using (var hub = Hub.Create())
            {
                // listen for when the Myo connects
                hub.MyoConnected += (sender, e) =>
                {
                    Console.WriteLine("Myo {0} has connected!", e.Myo.Handle);
                    e.Myo.OrientationDataAcquired += Myo_OrientationDataAcquired;
                };

                // listen for when the Myo disconnects
                hub.MyoDisconnected += (sender, e) =>
                {
                    Console.WriteLine("Oh no! It looks like {0} arm Myo has disconnected!", e.Myo.Arm);
                    e.Myo.OrientationDataAcquired -= Myo_OrientationDataAcquired;
                };

                // wait on user input
                ConsoleHelper.UserInputLoop(hub);
            }
        }
        #endregion

        #region Event Handlers
        private static void Myo_OrientationDataAcquired(object sender, OrientationDataEventArgs e)
        {
            Console.Clear();
            Console.WriteLine(@"Roll: {0}", e.Roll);
            Console.WriteLine(@"Pitch: {0}", e.Pitch);
            Console.WriteLine(@"Yaw: {0}", e.Yaw);
        }
        #endregion
    }
}

Of course, if we know of more common use cases that people will be using the orientation data for, then we’d love to bake this kind of stuff right into MyoSharp to make it easier for everyone.

Closing Comments

That’s just a quick look at how you can leverage MyoSharp to make your own C# application that works with a Myo! As I said, MyoSharp is open source, so we’d love to see contributions or suggestions. We’re aiming to build as much base functionality as we can into our framework while designing it in a way that lets developers extend each of the individual building blocks.


Continuous Improvement – One on One Tweaks


Continuous Improvement – Baby Steps!

Our development team at Magnet Forensics focuses a lot on continuous improvement. It’s one of the things baked into a retrospective often performed in agile software shops. It’s all about acknowledging that no system or process is going to be perfect and that as your landscape changes, a lot of other things will too.

The concept of continuous improvement isn’t limited to just the software we make or the processes we put in place for doing so. You can apply it to anything that’s repeated over time where you can measure positive and negative changes. I figured it was time to apply it to my leadership practices.

The One on One

I lead a team of software developers at Magnet, but I’m not the boss of any of them. They’re all equally my peers and we’re all working toward a common goal. One of my responsibilities is to meet with my team regularly to touch base with them. What are things they’ve been working on? What concerns do they have with the current state of things? What’s going well for them? What sort of goals are they setting?

The one on ones that we have set up are just another version of continuous improvement. It’s up to me to help empower the team to drive that continuous improvement, so I need to facilitate them wherever I can. Often this isn’t a case of “okay, I’ll do that for you” but a “yes, I encourage you to proceed with that” type of scenario. The next time we meet up, I check in to see if they were able to make headway with the goals they had set and we try to change things up if they’ve hit roadblocks.

No Change, No Improvement

I had been taking the same approach to one-on-ones for a while. I decided it was time for a change. If it didn’t work, it’s okay… I could always try something else. I had a good baseline to measure from, so I felt comfortable trying something different.

One on ones often consisted of my team members handing me a sheet of past actions, concerns, and status of goals before we’d jump into a quick 20 minute meeting together. I’d go over the sheet with them and we’d add in any missing areas and solidify goals for next time. But I wanted a change here. How helpful can I be if I get this sheet as we go into the room together?

I started asking to get these sheets ahead of time and started paraphrasing the whole sheet into a few bullet points. A small and simple change. But what impact did this have?

Most one on ones went from maxing out 20 minutes to only taking around 10 minutes to cover the most important topics. Additionally, it felt like we could really deep dive on topics because I was prepared with background questions or information to help progress through roadblocks. My team member and I could blast through the important pieces of information, and then at the end, I’d check to make sure there was nothing we’d missed going over. If I had accidentally omitted something, we’d have almost another 10 minutes to at least start discussing it.

Trade Off?

I have an engineering background, so for me it’s all about pros and cons. What was the trade-off for doing this?

The first thing is that initially it seemed like I was asking for the sheets super early. Maybe it still feels like I’m asking for them early. I try to get them by the weekend before the week where I start scheduling one on ones, so sometimes it feels like people had less than a month to fill them out. Is it really a problem? Maybe not. Maybe it just means there’s less stuff to try and cram in there. I think being able to go into the meeting with more information on my end makes it more productive.

The second thing is that since I paraphrase the sheet, I might miss something that my team member wanted to go over. However, because the time is used so much more effectively, we’re often able to cover anything that was missed with time to spare. I think there’s enough trust in the team for them to know that if I miss something, it’s not because I wanted to dodge a question or topic.

I think the positive changes this brought about have certainly outweighed the drawbacks. I think I’ll make this a permanent part of my one on one setup… Until continuous improvement suggests I should try something new!


Refactoring For Interfaces: An Adventure From The Trenches


Refactoring: Some Background

If you’re a seasoned programmer you know all about refactoring. If you’re relatively new to programming, you probably have heard of refactoring but don’t have that much experience actually doing it. After all, it’s easier to just rewrite things from scratch instead of trying to make a huge design change part way through, right? In any mature software project, it’s often the case where you’ll get to a point where your code base in its current state cannot properly sustain large changes going forward. It’s not really anyone’s fault–it’s totally natural. It’s impossible to plan absolutely everything that comes up, so it’s probable that at some point at least part of your software project will face refactoring.

In my real life example, I was tasked with refactoring a software project that has a single owner. I’m close with the owner and they’re a very technical person, but they’re also not a programmer. Because I’m not physically near the owner (and I have a full-time job, among other things I’m doing) it’s often difficult to debug any problems that come up. The owner can’t simply open an editor and get down to the code to fix things up.

So there was an obvious solution which I avoided in the first place… Unit tests. Duh. I need unit tests. So that’s an easy solution, right? I’ll bust out my favourite testing framework (I’m a fan of xUnit) and start getting some solid code coverage. Well… in an ideal world–the kind every programming article seems to be written for–that would have been the case. But it wasn’t. My software project does a lot of direct HTTP/FTP requests and interacts with particular hardware on the machine. How awesome is writing a unit test that contacts an HTTP server? Not very awesome.

What was I to do? I need to be able to write unit tests so that I can validate my software before putting it in my customer’s hands, but I can’t test it with unit tests because I don’t have the hardware!

Refactoring for Interfaces

Okay, so the first step in my master plan is to refactor for interfaces. What do I mean by that? Well, I have a lot of code that will call out and make HTTP requests and it has a specific dependency on System.Net.WebRequest. The same thing holds true for my FTP requests I want to make. Because I have that dependency within my classes, it means I have no choice but to call out to the network and go do these things.

I could design it a different way though. What if I abstracted the web requests away so that I didn’t actually have to call that class directly? What if I could have a reference to some instance that met some API that would just do that stuff for me? I mean, my class knows all about its own particular job, but it truthfully doesn’t know the first thing about calling out to the Internet to go post some HTTP requests. This means that if someone else is responsible for providing me with a mechanism for posting HTTP requests, this other entity could also fool me and not actually send out HTTP requests at all! That sounds like exactly what my test framework would want to do.

My first step was to look at the properties and methods I was using on the WebRequest class. What was shared between the HTTP and FTP requests that I was creating? The few things I had to consider were:

  • Some sort of Send() method to actually send the request
  • A URI to identify where the request is being sent
  • A timeout property

I then created an interface for a web request that had these properties/methods accessible and created some wrapper classes that implemented this interface but encapsulated the functionality of the underlying web requests. The next step was creating a class and interface for a “factory” that could create these requests. This is because my code that needs to make HTTP/FTP requests only knows it needs to make those requests–it doesn’t have any knowledge of how to actually create one.
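To make that a bit more concrete, here’s roughly the shape it took. The names below (IRequest, IRequestFactory, HttpRequestWrapper) are illustrative rather than the exact ones from the project, but the idea is the same: a small interface covering only what I actually used, plus a wrapper that hides System.Net.WebRequest behind it.

using System;
using System.Net;

public interface IRequest
{
    Uri Uri { get; }
    TimeSpan Timeout { get; set; }
    void Send();
}

public interface IRequestFactory
{
    IRequest CreateHttpRequest(Uri uri);
    IRequest CreateFtpRequest(Uri uri);
}

// wraps the real WebRequest so callers only ever see IRequest
public sealed class HttpRequestWrapper : IRequest
{
    private readonly WebRequest _request;

    public HttpRequestWrapper(Uri uri)
    {
        _request = WebRequest.Create(uri);
    }

    public Uri Uri
    {
        get { return _request.RequestUri; }
    }

    public TimeSpan Timeout
    {
        get { return TimeSpan.FromMilliseconds(_request.Timeout); }
        set { _request.Timeout = (int)value.TotalMilliseconds; }
    }

    public void Send()
    {
        // fire the request; this sketch ignores the response body
        using (_request.GetResponse())
        {
        }
    }
}

A concrete factory implementing IRequestFactory would just new up these wrappers, and the class doing the HTTP/FTP work would only ever receive the IRequestFactory.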

With my interfaces for my requests and my factory that creates them, I was able to move onto the next step.

Create Mocks for the Interfaces

Now that I had my classes leveraging interfaces instead of concrete classes internally, I could mock the inner-workings of my classes. This would provide two major benefits:

  • I could create tests that wouldn’t have to actually go out to the Internet/network.
  • I could create instrumented mocks that would let me test whether certain web requests were being made.

I started off by writing up some unit tests. I tried to get as much code coverage as I could by doing simple tests (i.e. create an object, check default values, call a method and check a result, etc…). Once I had exhausted a lot of the simple stuff, I targeted the other areas that I wasn’t hitting. I mean, how was my coded test supposed to test my method that does an HTTP request and an FTP of a file under the hood? Mocks.

So this is where you probably draw the line between your integration tests and unit tests and get all pedantic about it. But I don’t care how you want to separate it: I need coded tests that cover a section of my class so that I can ensure it behaves as I expect. But if I’m mocking my dependencies, how do I know my class is actually doing what I expect?

Instrument your mocks! This was totally cool for me to play around with for the first time. I had to create dummy FTP/HTTP requests that met the interface my class under test expected. Pretty easy. But I could then assert what requests my class under test was actually trying to send out! This meant that if my method was supposed to try and hit a certain URL, I could assert that easily by instrumenting my mocked instance to check just that. Was it supposed to FTP a certain set of bytes? No problem. Use my mocked instance to assert those bytes are actually the ones my class under test is trying to send.
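Here’s a sketch of what those instrumented mocks can look like against the hypothetical interfaces from the earlier sketch (again, the names are illustrative). The fakes never touch the network; they just remember what the class under test asked them to do so a test can assert on it afterwards:

using System;
using System.Collections.Generic;

public sealed class FakeRequest : IRequest
{
    public FakeRequest(Uri uri)
    {
        Uri = uri;
        Timeout = TimeSpan.FromSeconds(30);
    }

    public Uri Uri { get; private set; }
    public TimeSpan Timeout { get; set; }
    public bool WasSent { get; private set; }

    public void Send()
    {
        // record the call instead of hitting the network
        WasSent = true;
    }
}

public sealed class FakeRequestFactory : IRequestFactory
{
    private readonly List<FakeRequest> _created = new List<FakeRequest>();

    public IEnumerable<FakeRequest> CreatedRequests
    {
        get { return _created; }
    }

    public IRequest CreateHttpRequest(Uri uri)
    {
        var request = new FakeRequest(uri);
        _created.Add(request);
        return request;
    }

    public IRequest CreateFtpRequest(Uri uri)
    {
        var request = new FakeRequest(uri);
        _created.Add(request);
        return request;
    }
}

A test injects the FakeRequestFactory into the class under test, calls the method that should hit the network, and then asserts that CreatedRequests contains the expected URI and that Send() was actually called.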

Wrap Up

This was just a general post, so I’ve kept the code to a couple of rough sketches. I really just wanted to cover my experience with refactoring, interfaces, mocking, and code coverage because it was a great learning experience for me.

To recap on what I said in this post:

  • Identify the parts you want to mock. These are the things your class or method probably isn’t responsible for creating directly. Going out to the network? Accessing the disk? Accessing the environment your test is running under? Creating complex concrete classes because they hook into some other system for you? Great candidates for this.
  • Create interfaces by looking at the API you’re accessing. You know what classes you want to mock, so look how you’re using the API. If you need to access a few properties and methods, then make that part of your interface. If you see commonality between a few similar things, you might be able to create a single interface to handle all of the scenarios.
  • Inject factories that can create instances for you. These factories know how to create the concrete classes that meet your interfaces. In a real situation, they can create the classes you expect. In a test environment, they can create your mocks.
  • Write coded tests with your mocks! The last part is the most fun. You can finally inject some mocked classes into your classes/methods under test and then instrument them to ensure your code under test is accessing them in the way you expect. Run some code coverage tools after to prove you’re doing a good job.

I hope my experiences down this path are able to help you out!


IronPython: A Quick WinForms Introduction


Background

A few months ago I wrote up an article on using PyTools, Visual Studio, and Python all together. I received some much appreciated positive feedback for it, but really for me it was about exploring. I had dabbled with Python a few years back and hadn’t really touched it much since. I spend the bulk of my programming time in Visual Studio, so it was a great opportunity to try and bridge that gap.

I had an individual contact me via the Dev Leader Facebook group who had come across my original article. However, he wanted a little bit more out of it. Since I had my initial exploring out of the way, I figured it was probably worth trying to come up with a semi-useful example. I could kill two birds with one stone here–help out at least one person, and get another blog post written up!

The request was really around taking the output from a Python script and being able to display it in a WinForms application. I took it one step further and created an application that either lets you choose a Python script from your file system or lets you type in a basic script directly on the form. There aren’t any fancy editor tools on the form, but someone could easily take this application and extend it into a little Python editor if they wanted to.

Leveraging IronPython

In my original PyTools article, I mention how to get IronPython installed into your Visual Studio project. In Visual Studio 2012 (and with a likely very similar approach for other versions of Visual Studio), the following steps should get you set up with IronPython in your project:

  • Open an existing project or start a new one.
  • Make sure your project is set to be at least .NET 4.0
    • Right click on the project within your solution explorer and select “Properties”
    • Switch to the “Application” tab.
    • Under “Target framework”, select “.NET Framework 4.0”.
  • Right click on the project within your solution explorer and select “Manage NuGet Packages…”.
  • In the “Search Online” text field on the top right, search for “IronPython”.
  • Select “IronPython” from within the search results and press the “Install” button.
  • Follow the instructions, and you should be good to go!

Now that we have IronPython in a project, we’ll need to actually look at some code that gets us up and running with executing Python code from within C#. If you followed my original post, you’ll know that it’s pretty simple:


// requires: using IronPython.Hosting;
var py = Python.CreateEngine();
py.Execute("your python code here");

And there you have it. If it seems easy, that’s because it is. But what about the part about getting the output from Python? What if I wanted to print something to the console in Python and see what it spits out? After all, that’s the goal I was setting out to accomplish with this article. If you try the following code, you’ll notice you see a whole lot of nothing:


var py = Python.CreateEngine();
py.Execute("print('I wish I could see this in the console...')");

What gives? How are we supposed to see the output from IronPython? Well, it all has to do with setting the output Stream of the IronPython engine. It has a nice little method for letting you specify what stream to output to:


var py = Python.CreateEngine();
py.Runtime.IO.SetOutput(yourStreamInstanceHere, Encoding.GetEncoding(1252));

In this example, I wanted to output the stream directly into my own TextBox. To accomplish this, I wrote up my own little stream wrapper that takes in a TextBox and appends the stream contents directly to the Text property of the TextBox. Here’s what my stream implementation looks like:


private class ScriptOutputStream : Stream
{
  #region Fields
  private readonly TextBox _control;
  #endregion

  #region Constructors
  public ScriptOutputStream(TextBox control)
  {
    _control = control;
  }
  #endregion

  #region Properties
  public override bool CanRead
  {
    get { return false; }
  }

  public override bool CanSeek
  {
    get { return false; }
  }

  public override bool CanWrite
  {
    get { return true; }
  }

  public override long Length
  {
    get { throw new NotImplementedException(); }
  }

  public override long Position
  {
    get { throw new NotImplementedException(); }
    set { throw new NotImplementedException(); }
  }
  #endregion

  #region Exposed Members
  public override void Flush()
  {
  }

  public override int Read(byte[] buffer, int offset, int count)
  {
    throw new NotImplementedException();
  }

  public override long Seek(long offset, SeekOrigin origin)
  {
    throw new NotImplementedException();
  }

  public override void SetLength(long value)
  {
    throw new NotImplementedException();
  }

  public override void Write(byte[] buffer, int offset, int count)
  {
    _control.Text += Encoding.GetEncoding(1252).GetString(buffer, offset, count);
  }
  #endregion
}

Now while this isn’t pretty, it serves one purpose: Use the stream API to allow binary data to be appended to a TextBox. The magic is happening inside of the Write() method where I take the binary data that IronPython will be providing to us, convert it to a string via code page 1252 encoding, and then append that directly to the control’s Text property. In order to use this, we just need to set it up on our IronPython engine:


var py = Python.CreateEngine();
py.Runtime.IO.SetOutput(new ScriptOutputStream(txtYourTextBoxInstance), Encoding.GetEncoding(1252));

Now, any time you output to the console in IronPython you’ll get your console output directly in your TextBox! The ScriptOutputStream implementation and calling SetOutput() are really the key points in getting output from IronPython.

The Application at a Glance

I wanted to take this example a little bit further than the initial request. I didn’t just want to show that I could take the IronPython output and put it in a form control, I wanted to demonstrate being able to pick the Python code to run too!

Firstly, you’re able to browse for Python scripts using the default radio button. Just type in the path to your script or use the browse button:

[Screenshot: running a script from a file. Enter a path or browse for your script, then press “Run Script” to see the output of your script in the bottom TextBox.]

Next, press “Run Script”, and you’re off! This simply uses a StreamReader to get the contents of the file, and once the contents are stored in a string, they are passed into the IronPython engine’s Execute() method. As you might have guessed, my “helloworld.py” script just contains a single line that prints out “Hello, World!”. Nothing too fancy in there!
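
In case you’re curious, the file-loading path looks roughly like this (txtScriptPath is a made-up name for the path TextBox, and py is the engine we created earlier):


// txtScriptPath is a placeholder name for the TextBox holding the file path.
string scriptContents;
using (var reader = new StreamReader(txtScriptPath.Text))
{
  scriptContents = reader.ReadToEnd();
}

// Hand the file contents off to the IronPython engine we created earlier.
py.Execute(scriptContents);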

Let’s try running a script that we type into the input TextBox instead. There’s some basic error handling so if your script doesn’t execute, I’ll print out the exception and the stack trace to go along with it. In this case, I tried executing a Python script that was just “asd”. Clearly, this is invalid and shouldn’t run:

[Screenshot: Python interpreted the input we provided but, as expected, could not find a definition for “asd”.]
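
The error handling itself is nothing fancy; it’s along these lines, where txtScriptInput and txtOutput are made-up names for the input and output TextBoxes:


try
{
  // txtScriptInput is a placeholder name for the TextBox holding the typed-in script.
  py.Execute(txtScriptInput.Text);
}
catch (Exception ex)
{
  // Surface the exception message and stack trace in the output TextBox.
  txtOutput.Text += ex.Message + Environment.NewLine + ex.StackTrace;
}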

That should be along the lines of what we expected: the script isn’t valid, and IronPython tells us why. What other errors can we see? Well, the IronPython engine will also let you know if you have bad syntax:

[Screenshot: Python interpreted the script, but found a syntax error in our silly input.]

Finally, if we want to see some working Python we can do some console printing. Let’s try a little HelloWorld-esque script:

[Screenshot: Python interpreted our simple Hello World script.]

Summary

This sample was pretty short, but that just demonstrates how easy it is! Passing a script from C# into the IronPython engine is straightforward, but getting the output back from IronPython is a bit trickier. If you’re not familiar with the different parts of the IronPython engine, it can be difficult to find the pieces you need to get this working. With a simple custom stream implementation, we’re able to get the output from IronPython easily: all we had to do was write our own stream class and pass it into the SetOutput() method that’s available via the engine’s Runtime.IO. Now we can easily hook the output of our Python scripts!

As always, all of the source for you to try this out is available online.

Some next steps might include:

  • Creating your own Python IDE. Figure out some nice text-editing features and you can run Python scripts right from your application.
  • Creating a test script dashboard. Do you write test scripts for other applications in Python? Why not have a dashboard that can report on the results of these scripts?
  • Add in some game scripting! Sure, you could have done this with IronPython alone, but maybe now you can skip the WinForms part of this and just make your own stream wrapper for getting script output. Cook up some simple scripts in a scripting engine and voila! You can easily pass information into Python and get the results back out (see the sketch after this list).
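
Here’s that sketch: a rough idea of pushing a value into a script and pulling a result back out via a ScriptScope. The variable names are made up for illustration, and this leans on the standard hosting API rather than anything specific to my sample app:


var py = Python.CreateEngine();
var scope = py.CreateScope();

// Push a value from C# into the Python world ("health" is just an example name).
scope.SetVariable("health", 42);

// Let the script work with it...
py.Execute("health = health + 10", scope);

// ...and read the result back out on the C# side.
int newHealth = scope.GetVariable<int>("health");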

Let me know in the comments if you come up with some other cool ideas for how you can leverage this!


Be a Better Programmer – Weekly Article Dump

[Image by http://www.sxc.hu/]

Be a Better Programmer

It’s a new year and that means it’s all about resolutions, right? Well, I’m not a huge fan of keeping around a resolution that needs to wait for a new year, but I am a fan of reflecting on your goals and your skills. If you’re a programmer like me, then maybe this will be a great starting point. In my weekly article dumps I usually would just provide a couple of comments on a link like this, but I felt I should dive in a little bit more. You can find the original article by Amy Jollymore over here. Please have a look! I shared it with the whole dev team at Magnet Forensics because I felt there was a little bit of something for everyone.

Number one on this list, and perhaps the one I’d personally like to focus on most, is checking your code before blaming others. Blaming other people–in general, not just in programming–is an easy way out. When a problem occurs, it’s simple to assume that all of your work is right and that it must be someone else’s fault. But if everyone starts thinking like this, it turns into a nasty blame war. So next time the build breaks or your shiny new feature stops working as expected, don’t go blaming other people. Investigate what the problem is. See what your most recent changes were and whether they could have caused the problem. As you start to gain confidence that your changes aren’t responsible for the issue, try sitting down with one or two other people you think might have been around the problem area recently. But don’t go accusing them! Putting your heads together to figure out the problem can speed up the process and might even shed some light on some miscommunication over a design or some assumptions in the code that don’t actually hold true. It’s a lot more embarrassing to blame someone when it’s actually your fault than to put in the effort and admit you might have goofed up. Try it out!

Number two is also a great item. You should never put an end to your learning… especially as an individual in a technology space. There are so many great suggestions listed for this point that there’s no point in me repeating them. Just go read them! An interesting point worth mentioning is using podcasts for learning. This is a great option if you find your brain is still spinning when you lie down in bed or if you have a long commute to work (or something else you’re involved in). The author also mentions that you don’t need to be learning programming… What about domain expertise? If you’re writing code for banks, lawyers, or digital forensics… Why not learn about that too?!

The last point I’ll touch on from the article is number three: don’t be afraid to break things. I love this point. If you’re working on a big piece of software, there are almost certainly areas that seem brittle, scary, or just plain incomprehensible. If your project is still small, it may very well get to this point. It doesn’t mean that the code is bad or that you’re working with the worst programmers… It’s just something that happens when you’re continuously trying to build on your software. The real problem occurs when nobody is willing to take the time to go change things. If you have big, scary, brittle parts of code, then set aside some time, take a deep breath, and go refactor it! It might seem like hell at first, but once you get into it (and especially after it’s done) you’ll feel a million times better. Plus, now your code can continue to be built upon without people running in fear when you mention that section of code. Code can get nasty, but consider using a “tech debt” system or regularly setting aside time for refactoring parts of your code base.

Again, the original article is located at: 7 Ways to be a Better Programmer in 2014. Check it out!

Articles

  • How to Manage Dynamic Tensions — and Master the Balancing Act: This was an interesting article on some parts of leadership that often oppose each other. Author Chris Cancialosi does an excellent job discussing the balance between internal and external influences, as well as between leading and managing. A good takeaway from this article is at least acknowledging that there are certainly some things to balance. You may want to have the most flexible team, but have you considered whether there’s such a thing as “too flexible”? That’s just a bit of perspective this article might bring to light.
  • A Crash Course In Leadership For 20-Something CEOs: Barry Salzberg's article is geared toward young CEOs, but I think that means we can apply the lessons to anyone looking to lead! A few of the points I’d like to mention include being tough on problems and not on people. Your people are the ones who are going to solve problems and bring great ideas to the table. They’ll invest their time into your organization in order to accomplish great things, so don’t be hard on them. Instead, acknowledge that your problems and challenges are the things you want to crush, and work with your team to make sure you conquer every challenge that gets in the way of your goal. Another point is on taking risks. Never taking risks is a great way to stagnate. You need to learn from your failures, but keep pushing the boundaries. Finally, be ready to adapt. As your organization grows or as the market you’re working within evolves, you need to be ready to adapt and change. You might get lucky and things won’t change all that much over a long period of time, but the odds of that are pretty low. Be ready to adapt so that when the time comes, you don’t need to worry about everything falling apart.
  • Leading at Scale with Agility: Brad Smith has a few great points on what leading a team should encompass. First, a team should have a goal that it is trying to achieve. If that team is part of a larger organization, the team’s goal should align with the goal of the entire organization. Secondly, decisions for the team should involve those on the team. It’s easy to sit back and speculate what might be best, but why not involve the people directly affected? Of course, this is more difficult for large teams but maybe that’s an indication your teams would be more effective if they were smaller. Next, empower teams to arrive at solutions on their own. If a plan worked out well, try communicating it to others to try out. Conversely, if the plan had some problems, let others on the team (or other teams) know about the hurdles. Finally, Brad has a point on trust. Trust is arguably one of the most important parts of leading a team. Each team member needs to be able to trust the others. There should be an easy assumption that everyone is operating with best intentions.
  • For Leaders, Today is History: In this article, Steven Thompson gives a high-level overview of where he puts his focus: on the future, not on right now. Steven says the teams he is in charge of are often looking at the problems of “right now” and perhaps a little bit into the future. It would be counterproductive for him to try to butt in and help with those problems because he’s so far removed from them; those individuals have been empowered to focus on them. Instead, Steven focuses on the future–the direction of the teams. As a leader, it’s important to be thinking at least one step ahead.
  • What If You Had to Write a “User Manual” About Your Leadership Style?: After I read Adam Bryant's article, I thought the idea of a leadership “user manual” would be pretty cool. Even if there isn’t a single other individual who would benefit from it, at least it would help reveal some of my leadership quirks to myself. That’s useful on its own! I’ll be sure to post up my leadership “user manual” when I have it complete… and I imagine I’ll have to keep updating it over time as my style evolves. It’ll be really interesting to see the evolution of my leadership style! Why not consider doing one for yourself?
  • What Bosses Should Never Ask Employees to Do: Jeff Haden's article was a little bit controversial in my opinion–and in the opinion of some of the commenters. I think I get the underlying message behind a lot of what Jeff is saying for each of his points, but as one commenter said, it sounds like a bit of a personal complaint the whole way through. Consider the topic of donating to charities at work. The feeling I get after reading that segment is that your organization should not attempt to do fundraising through employees. While I don’t actually think that’s what Jeff is saying, that’s how I feel after reading it. I know that we’ve been able to do several charity events at Magnet, and we’ve always said that they are completely voluntary. I think that’s the crucial part. It’s the holiday season and your budget is a bit tight? How could anyone get mad at you for backing out of a completely optional charity donation? Busy with some personal matters, or want to focus on finishing up something at work the day we’re doing a charity event? No big deal, it’s optional. Anyway, the point is that, based on the wording in the article, I felt like some of the messaging might be misinterpreted. I think there are some good points buried in there. Check it out and let me know if you agree or not!

Follow Dev Leader on social media outlets to get these updates through the week. Thanks!

