Software gardening

Butchart-Gardens-Canada

A few months ago I discovered a new manifesto related to software development: the Software Gardening Manifesto. Even though the manifesto was only initiated in 2015 (by Papapetrou Patroklos), the concept of software gardening has existed for several years; Jeff Atwood wrote about the topic seven years ago.

If you work in software development like I do, you certainly know the struggle of explaining your work to non-technical people. The metaphor of a garden can be helpful here. But software gardening is not only that: it also comes with values and principles. I like what I read in this manifesto, and this is why I am writing this blog post.

Deep dive into the manifesto

So, what is the definition of a software gardener when we take a look at the manifesto?

Software Gardeners are professionals who perceive software development in a different way. Our goal is to spread the word of our core values, be mentors for the new developers and create software that it is developed at the highest quality bar.

As someone who values the principles of the software craftsmanship movement, I share this definition since it is similar to the definition of a software craftsman.

To find the analogy with a garden, let’s have a look at the “beliefs” present in the manifesto.

We treat software systems as gardens and code as flowers. Although we don’t disagree regarding software as a “craft”, we believe that software is a living and breathing “being”, not just an object, created using the best materials.

The fact that systems and code are considered living entities implies that they need attention over time. Your code gets updated to add features or fix bugs, the system changes over time to be improved, and refactoring phases occur. Depending on the context, the project you work on is likely updated every day even if it is already in production, and some features are removed to make room for new ones.

We constantly mentor young developers and we share our knowledge at every opportunity. Junior developers are like flowers that need to be irrigated to blossom. We are the water, the sun, the soil, the fertilizer for every (young) software professional.

Software gardening is not only about taking care of the code; it is also about a community where the more experienced developers help the younger ones by sharing their knowledge and skills (water, sun, soil and fertilizer). Building a community is a value shared with the software craftsmanship manifesto as well.

Software development is a lot more than slinging code. We know the practices and we apply them effectively. We make use of the most productive tools and our skill-set includes both soft and technical skills. We also understand that our overall attitude is what defines us as software gardeners.

Creating software and growing a garden both require tools and skills. It is definitely better to have the right tools for the job and to know how to use them effectively. Having values is also important in order to provide the best work possible.

We care about our code and this care, we show it continuously, day by day, every moment in every single line of code we write.

Code is like a flower: it is delicate and needs proper attention in order to flourish. A mistake can have painful repercussions later. Do not water your plants and they will die; do not refactor your code when needed and you will have to deal with technical debt.

We are not only able to respond to change but we are prepared about the endless – internal and external – environment reform.

I find this one very interesting because of the notion of “environment”. Growing a garden does not always require the same tools and techniques; it depends on the context. Is your garden in a forest, a plain, a jungle or maybe a desert? You definitely need to adapt to the constraints of the environment you work in. And it is the same for a software project: do you work for a small company or a big one? Are there a lot of processes or none at all? Is your team made of 4 developers or 20? Is the project new or a legacy system started 10 years ago?

You need to adapt to all of these variables, and they might change over time. Nothing is fixed, everything evolves; remember: “The only constant in this world is change”.

We treat customers as the people who will walk in our garden and smell our fragrant flowers. Having said that we engage them from the first day, to make sure that all their needs, requirements and expectations will be met.

Collaboration is essential for a software gardener: working closely with the customers is mandatory in order to create the best possible result. Understanding them is important to keep a beautiful garden year after year, through all seasons.

Conclusion

Software gardening is about commitment, quality, pride and love for the code and systems we work on. In many ways it is similar to software craftsmanship, with an emphasis on the “living” nature of the code and the analogy between the garden environment and the software environment.

Remember also that it is just a manifesto so you have to read between the lines in order to fully understand the values and principles behind this approach.

To me it is refreshing to see a new metaphor for our profession/passion/craft/art that is not related to classical engineering work, which is, in my opinion, not always a good comparison for software development. I recommend reading Chris Aitchison's article on software gardening: You are NOT a Software Engineer!

See you next time!


Image credits:

http://positivegardening.com/top-10-most-beautiful-gardens-around-the-world/

Unit tests and protected methods

protected-padlock

When working with Object Oriented Programming (OOP) languages, we have the ability to design our code and our classes using encapsulation. C# defines the “protected” accessibility level, which makes a member accessible only from its containing class and from the types that inherit from this class. It is very helpful when you follow the Open-Closed Principle (OCP).

On the other hand, you might ask yourself how we can test the behavior of these methods if they are protected, and how to do so without breaking the encapsulation of the class.

Testing a protected method

I created a small custom class to show my way of unit testing protected methods; here is the class to test:

public class MyClass
{
    public int Counter { get; private set; }

    protected void IncrementCounter()
    {
        Counter++;
    }
}

The easiest thing I could do to be able to test my method is to replace the “protected” keyword with “public”. But I will not do that because, in my opinion, when you write tests after the production code you should avoid modifying that code as much as possible, even to test it.

Another possibility is to make the method “internal”; this keyword specifies that the method can only be accessed from the current assembly. Then you can make internal members visible to your test project using the following attribute in the Properties/AssemblyInfo.cs file of your production project:

[assembly: InternalsVisibleTo("MyTestProject")]

This option is helpful when you want to limit the scope of an API defined in an assembly without losing the ability to unit test the entire project.
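Purely for illustration (I will not use it here), this is roughly what that approach could look like: a sketch with a hypothetical internal variant of the class from above, and a test living in the "MyTestProject" assembly declared in the attribute.

public class MyClassWithInternalMethod
{
    public int Counter { get; private set; }

    // internal instead of protected: visible to the test project thanks to InternalsVisibleTo
    internal void IncrementCounter()
    {
        Counter++;
    }
}

// In the "MyTestProject" test project
[TestMethod]
public void Test_MyClassWithInternalMethod()
{
    var myClass = new MyClassWithInternalMethod();
    myClass.IncrementCounter();
    myClass.Counter.ShouldBe(1);
}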

But again, I will not use this trick to test my method. Instead I will expose it through inheritance by creating a new class used only for testing purposes.

public class MyTestClass : MyClass
{
    public void ExposeIncrementCounter()
    {
         base.IncrementCounter();
    }
}

With this new class I can now test the behavior of the protected method without modifying it. And here is my test:

[TestMethod]
public void Test_MyClass()
{
    var myClass = new MyTestClass();
    myClass.ExposeIncrementCounter();
    myClass.Counter.ShouldBe(1);
}

Note that, once again, I use the Shouldly library to make my assertion.

Testing a call to a protected method

The other day one of my coworkers asked how he could test that a protected method is called when testing a public method. This is a tricky question and it depends on the context of your code. In his case the method in question was not part of our project and therefore impossible to modify. He was still able to test the behavior of the method by mocking another class used by this protected method.

I think this experience is interesting because it shows that you can still write automated tests by going one level deeper. You might ask yourself if this is still a unit test. Maybe, maybe not, but it is still a test for your project.
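To illustrate the general idea with a simplified sketch (hypothetical names, not my coworker's actual code): the protected method relies on a dependency that can be replaced by a hand-rolled fake, and the test observes the behavior through that fake instead of calling the protected method directly.

public interface IAuditLog
{
    void Write(string entry);
}

public class ClassWithProtectedMethod
{
    private readonly IAuditLog _auditLog;

    public ClassWithProtectedMethod(IAuditLog auditLog)
    {
        _auditLog = auditLog;
    }

    public void DoWork()
    {
        Process();
    }

    protected void Process()
    {
        // the protected logic uses the dependency
        _auditLog.Write("processed");
    }
}

// Hand-rolled fake used to observe the effect of the protected method
public class FakeAuditLog : IAuditLog
{
    public List<string> Entries { get; private set; }

    public FakeAuditLog()
    {
        Entries = new List<string>();
    }

    public void Write(string entry)
    {
        Entries.Add(entry);
    }
}

[TestMethod]
public void DoWork_Writes_To_The_Audit_Log()
{
    var fake = new FakeAuditLog();
    var instance = new ClassWithProtectedMethod(fake);
    instance.DoWork();
    fake.Entries.ShouldContain("processed");
}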

And this story made me think: “How do you test the call to a protected method?”. I created the following class to show you one way to do so.

public class MySecondClass
{
    protected void InnerMethod()
    {
        //a lot of logic and dependencies
    }

    public void DoWork(bool doWork)
    {
        if (doWork)
        {
            InnerMethod();
        }
    }
}

In my case I want to test that, depending on the parameter's value, the DoWork method calls the InnerMethod method, but without executing the code of the latter because it does too many things and would make the test too slow (if you have a similar situation you might have some code smells to fix).

To write my tests, this time I will slightly update the code to be able to do what I want. I am not a big fan of this, but I don't really see another option that would not imply more refactoring.

protected virtual void InnerMethod()
{
    //a lot of logic and dependencies
}

I simply made the method virtual in order to be able to override it. I did not really break the encapsulation, I just gave myself more options. Now I can create a new class for testing purposes:

public class MySecondTestClass : MySecondClass
{
    public bool InnerMethodHasBeenCalled { get; private set; }

    protected override void InnerMethod()
    {
        InnerMethodHasBeenCalled = true;
    }
}

This class inherits from the class I want to test and does not modify its behavior regarding the DoWork method. It is a test double using the spy technique. I can now write the two following tests to check the behavior of the code.

[TestMethod]
public void Test_MySecondClass_False()
{
    var myClass = new MySecondTestClass();
    myClass.DoWork(false);
    myClass.InnerMethodHasBeenCalled.ShouldBe(false);
}

[TestMethod]
public void Test_MySecondClass_True()
{
    var myClass = new MySecondTestClass();
    myClass.DoWork(true);
    myClass.InnerMethodHasBeenCalled.ShouldBe(true);
}

If you write tests after writing the production code, it is still possible to do so without changing too much of that code, even with protected methods. If the code has been written using encapsulation there must be a reason for it, and the tests should not force you to break everything down.

What about private methods?

Now you might ask yourself how to do this with private methods. With this kind of method we cannot use inheritance for tests, so how?

Well you can use Microsoft Fakes to mock the calls to private methods, but in my opinion this is definitely overkill. Our projects should be testable without tools like this.

The other solution is to change the accessibility of the method to make it testable, because you believe that “Tests trump Encapsulation“.

I think that when you want to test a private method, you should ask yourself some questions first. Why is this method private? Does it really need to be private? Can I extract its logic into another class I will be able to test? Can I cover its logic through the test of another method?
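For example, the extraction option could look like this minimal sketch (hypothetical names), where a computation that used to live in a private method becomes a small public collaborator that is easy to test on its own:

// The discount rule used to be a private method of OrderProcessor;
// extracted into its own class it becomes trivially testable.
public class DiscountCalculator
{
    public decimal Apply(decimal amount)
    {
        return amount * 0.9m;
    }
}

public class OrderProcessor
{
    private readonly DiscountCalculator _calculator = new DiscountCalculator();

    public decimal Process(decimal amount)
    {
        return _calculator.Apply(amount);
    }
}

[TestMethod]
public void DiscountCalculator_Applies_Ten_Percent_Discount()
{
    var calculator = new DiscountCalculator();
    calculator.Apply(100m).ShouldBe(90m);
}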

Using encapsulation should mean something from a design point of view. Writing tests after the code is better than writing no tests at all, and it also makes you ask questions about the code you wrote: “What do I want to test?”.

And I think this is why we saw the arrival of practices such as Test Driven Development (TDD), which force us to ask these questions first in order to help us with the design and encapsulation of the production code. I will not enter the debate of whether you should practice TDD or not, or whether TDD leads to good design or not. I don't know enough to answer these questions, and I especially think that it depends on the developer.

In this blog post I wanted to show you how to test protected methods without having too much impact on the code; I hope it will help you. And always ask yourself about the Why when it comes to tests, and only then about the How.

See you next time!

Writing acceptance tests with Specflow

Specflow logo

I already spoke about acceptance testing in older blog posts (here and here) but without showing any tool available to write them, until now! In this article I will show how it is possible to write acceptance tests using Specflow.

This framework allows you to write executable scenarios using the Gherkin syntax, which is non-technical and can be used by domain experts, business analysts and testers. The developers then implement the test details alongside the feature covered by these acceptance tests.

Setting up

Specflow works easily with Visual Studio; to do so you have to install the extension available in the extensions manager (see screenshot below).

specflow-extension

This extension offers various file templates for writing test scenarios; I will come back to this later. You will also need the NuGet package in the test project where you will add the acceptance tests:

Install-Package Specflow

By default Specflow uses NUnit as its unit test provider, but you can change it through configuration if you want to. For example I will use MsTest for the demo, so I have to update the App.config file:

<configuration>
  <configSections>
    <section name="specFlow" type="TechTalk.SpecFlow.Configuration.ConfigurationSectionHandler, TechTalk.SpecFlow" />
  </configSections>
  <specFlow>
    <unitTestProvider name="MsTest"/>
  </specFlow>
</configuration>

You should now have everything needed to write your scenarios.

Adding a feature

It is time to create our first Feature file to define the behavior of our system. When adding a new file to the test project you should be able to choose the Feature File option (installed by Specflow):

adding-feature

This will create a .feature file containing scenario definitions using the Gherkin syntax. For the example I have created a scenario testing the behavior of a simple calculator:

# Calculator.feature
Feature: Calculator
	I want to test the behavior of a Calculator

Scenario: Add two numbers
	Given I have a Calculator
	When I add 17 to 25
	Then the result should be 42

As you can see the syntax is very user-friendly and can be understood by everyone in the team (it’s just English!). Now if you try to run this test (the scenario) it will neither fail nor pass; it has no implementation and therefore it will be skipped:

skipped-test

The test name has been generated from the scenario name, which allows you to make it clear. Scenarios can be written and committed/checked in to source control without adding failing tests that might break a build (depending on its configuration). Therefore they can be added early in the process and not at the end of the development cycle.

Creating the steps

In order to implement the scenarios of a feature, Specflow needs step definitions to operate; you can use another file template to create a class defining the steps.

adding-steps

Or you can create the class yourself in order to obtain the following code.

[Binding]
public class CalculatorSteps
{
    [Given(@"I have a Calculator")]
    public void GivenIHaveACalculator()
    {
        ScenarioContext.Current.Pending();
    }
 
    [When(@"I add 17 to 25")]
    public void WhenIAddTo()
    {
        ScenarioContext.Current.Pending();
    }
 
    [Then(@"the result should be 42")]
    public void ThenTheResultShouldBe()
    {
        ScenarioContext.Current.Pending();
    }
}

The BindingAttribute lets Specflow know that step definitions are present in the file; if you omit it there will be no binding between the feature and the steps, so don't forget it! It is also possible to generate the steps from the feature file by right-clicking in it and using the “Generate Step Definitions” option (very helpful).

With this implementation the scenario will still be skipped without failing; you now have a skeleton to work with. Now it is time for the developers to do their part of the work.

Implementing the code

I will now create a Calculator class with an Add method having two integer parameters and a Result property to expose the result.

public class Calculator
{
    public int Result { get; private set; }
 
    public void Add(int first, int second)
    {
        Result = first + second;
    }
}

I will also implement the steps to use my new Calculator class.

[Binding]
public class CalculatorSteps
{
    private Calculator _calculator;
 
    [Given(@"I have a Calculator")]
    public void GivenIHaveACalculator()
    {
        _calculator = new Calculator();
    }
 
    [When(@"I add 17 to 25")]
    public void WhenIAddTo()
    {
        _calculator.Add(17, 25);
    }
 
    [Then(@"the result should be 42")]
    public void ThenTheResultShouldBe()
    {
        _calculator.Result.ShouldBe(42);
    }
}

I used Shouldly to make my assertion in the last step. And now the test is green!

passing-test

We have implemented our first test using Specflow, congratulations! Yet this demo is not over, there are a few more functionalities I would like to show.

Using scenario parameters

Right now our scenario tests only one combination for the Add method, and I definitely don't want to write a scenario for each combination I want to test. Of course Specflow can handle this, using a scenario outline.

# Calculator.feature
Feature: Calculator
	I want to test the behavior of a Calculator

Scenario Outline: Add two numbers
	Given I have a Calculator
	When I add <first> to <second>
	Then the result should be <result>
	
	Examples: 
	| first | second | result |
	| 17    | 25     | 42     |
	| 10    | 78     | 88     |
	| 2     | 25     | 27     |
	| 5     | -17    | -12    |

I will also need to update the step definitions to match the scenario outline; it is possible to add method parameters to steps and match them using regular expressions.

[Binding]
public class CalculatorSteps
{
    private Calculator _calculator;
 
    [Given(@"I have a Calculator")]
    public void GivenIHaveACalculator()
    {
        _calculator = new Calculator();
    }
 
    [When(@"I add (.*) to (.*)")]
    public void WhenIAddTo(int first, int second)
    {
        _calculator.Add(first, second);
    }
 
    [Then(@"the result should be (.*)")]
    public void ThenTheResultShouldBe(int result)
    {
        _calculator.Result.ShouldBe(result);
    }
}

My scenario can now take parameters and test several cases, I will have one test per example:

bad-test-naming

But as you can see the test names are not very relevant: Specflow uses the first parameter value to generate these names, and if you have several examples using the same first value it will be even less clear.

I know a trick to prevent that: adding a description parameter only for naming purposes.

| description | first | second | result |
| 17_25       | 17    | 25     | 42     |
| 10_78       | 10    | 78     | 88     |
| 2_25        | 2     | 25     | 27     |
| 5_minus17   | 5     | -17    | -12    |

good-test-naming

It is not perfect but it might help. If you know a better way, feel free to share it.

Using existing steps

Once a step is implemented you can use it again in another scenario, or even in another feature, so you avoid a lot of duplication. In my example I will now add a scenario to test the subtraction of the Calculator class.

Scenario Outline: Sub two numbers
	Given I have a Calculator
	When I subtract <first> to <second>
	Then the result should be <result>

	Examples: 
	| description | first | second | result |
	| 17_59       | 17    | 59     | -42    |
	| 10_3        | 10    | 3      | 7      |

I will add the following method to the Calculator class and the implementation of the new “when” step; the other steps already exist.

//Calculator.cs
public void Subtract(int first, int second)
{
    Result = first - second;
}
 
//CalculatorSteps.cs
[When(@"I subtract (.*) to (.*)")]
public void WhenISubtractTo(int first, int second)
{
    _calculator.Subtract(first, second);
}

And this is it, I now have two more tests without having to write a lot of testing logic.

sub-tests

It is essential to reuse existing steps when working with Specflow in order to avoid too much duplication.

Using the scenario context

Now I want to get rid of the Result property in my Calculator class because I think it does not “feel” right; I prefer having methods that directly return the result.

public class Calculator
{
    public int Add(int first, int second)
    {
        return first + second;
    }
 
    public int Subtract(int first, int second)
    {
        return first - second;
    }
}

But now I have a problem inside my steps: how can I test the result if it is returned in another step? I don't want to refactor the entire feature. The good news is that I don't need to; I will change the step definitions without touching the feature file.

I will use the ScenarioContext to pass data from one step to another, like this:

[Binding]
public class CalculatorSteps
{
    private Calculator _calculator;
 
    [Given(@"I have a Calculator")]
    public void GivenIHaveACalculator()
    {
        _calculator = new Calculator();
    }
 
    [When(@"I add (.*) to (.*)")]
    public void WhenIAddTo(int first, int second)
    {
        var result = _calculator.Add(first, second);
        ScenarioContext.Current.Add("result", result);
    }
 
    [When(@"I subtract (.*) to (.*)")]
    public void WhenISubtractTo(int first, int second)
    {
        var result = _calculator.Subtract(first, second);
        ScenarioContext.Current.Add("result", result);
    }
 
 
    [Then(@"the result should be (.*)")]
    public void ThenTheResultShouldBe(int result)
    {
        var actualResult = ScenarioContext.Current.Get<int>("result");
        actualResult.ShouldBe(result);
    }
}

The tests are passing again: I was able to refactor my production code (the Calculator class) without modifying the scenarios. Specflow creates a separation between the feature specification and its implementation.

Time to conclude

This is the end of my introduction to Specflow, which I believe is a great tool for writing acceptance tests. It is easy to write feature scenarios you can use as executable specifications. In my example I use English, but Gherkin is also available in several other languages, so your domain experts would need a very good excuse not to write them.

Specflow is also simple to configure inside Visual Studio and to couple with your favorite automated testing framework, and there is even auto-completion in the feature file for the Gherkin syntax and for finding existing steps.

It has a lot more functionalities I did not mention in this blog post. It can also work with Selenium in order to create end-to-end scenarios for web applications. You now have the tool to practice Behavior Driven Development (BDD) with your team.

Let me know if you would like to know more about Specflow.

See you next time!

Better testing experience with Shouldly

Shouldly Logo

I am a strong believer in unit testing: I like to write lots of tests for my code in order to make sure I do not introduce regressions when implementing new features on my projects. I love having a safety net when working.

It is therefore important to make sure that the tests are easy to write and that they give good feedback when failing. When a test fails you should be able to know exactly where the code under test behaves incorrectly, so you can fix it as soon as possible.

Shouldly is a .NET assertion library that is very helpful in this regard: it provides a lot of extension methods for various types, making them easy to test. It is available on GitHub and as a NuGet package:

Install-Package Shouldly

Simple assertions

Let’s dive into the topic to see how it can be used, I’ll start with simple assertions on an integer value:

var value = 5;
value.ShouldBe(5);
value.ShouldBeGreaterThan(4);
value.ShouldBeGreaterThanOrEqualTo(5);
value.ShouldBeInRange(0, 10);
value.ShouldBeLessThan(6);
value.ShouldBeOneOf(2, 5, 7);

It is as simple as that: no need for any Assert method, Shouldly does it for you. And there are many more extensions to test just about everything you want.

Now, we can have a look at the assertions available for the string type:

var myString = "foo";
myString.ShouldNotBeEmpty();
myString.ShouldNotBe(null);
myString.ShouldStartWith("f");
myString.ShouldEndWith("oo");
myString.ShouldNotContain("bar");
myString.ShouldBe("FOO", Case.Insensitive);

Likewise, Shouldly offers various methods in order to test a string value to make sure everything behaves as expected.

The library also offers some assertions to check the behavior of a method through actions:

Should.NotThrow(() => SuccessMethod());
Should.CompleteIn(() => SuccessMethod(), TimeSpan.FromMilliseconds(50));

As you can see, you can even write performance tests for your methods using the CompleteIn() method.
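Shouldly can also assert that an action throws a given exception type with Should.Throw. Here is a small example, assuming a FailingMethod that throws an ApplicationException (like the one used in the error message examples below):

// Should.Throw returns the caught exception so you can assert on its content
var exception = Should.Throw<ApplicationException>(() => FailingMethod());
exception.Message.ShouldNotBeEmpty();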

Specific error messages

Now that we have written some tests using Shouldly, let's have a look at what happens when a test fails. As I said, it is important to have comprehensive error messages when an error occurs in order to be able to debug quickly.

int value = 5;
value.ShouldBeGreaterThan(6);

Of course this test fails, and generates the following error message:

    value
        should be greater than
    6
        but was
    5

I find this message quite clear, and it gives us a good understanding of what went wrong; it even contains the name of the tested variable. Now let's see what the message looks like with an enumerable.

var numbers = new[] { 1, 2, 4 };
numbers.ShouldBe(new[] { 1, 2, 3 });

    numbers
        should be
    [1, 2, 3]
        but was
    [1, 2, 4]
        difference
    [1, 2, *4*]

The library highlights the difference for us. Again I find the message pretty clean, especially when you compare it with the MSTest version:

var numbers = new[] { 1, 2, 4 };
CollectionAssert.AreEqual(new[] { 1, 2, 3 }, numbers);

    CollectionAssert.AreEqual failed. (Element at index 2 do not match.)

Here are a few more examples.

Should.NotThrow(() => FailingMethod());

    Should
        not throw
    System.ApplicationException
        but does

Should.CompleteIn(() => LongMethod(), TimeSpan.FromMilliseconds(50));

    Task
        should complete in
    00:00:00.0500000
        but did not

Shouldly is easy to install and easy to use, and, in my opinion, it makes testing code much clearer. It provides a lot of helpful extensions, and if you find that one is missing you can contribute to the project to make it available to everyone.
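As an illustration of the kind of custom assertion you could write for your own needs (a standalone sketch using MSTest's AssertFailedException; this is not how Shouldly implements its built-in assertions):

public static class CustomAssertionExtensions
{
    // Naive custom assertion: fails the test when the value is odd
    public static void ShouldBeEven(this int actual)
    {
        if (actual % 2 != 0)
        {
            throw new AssertFailedException(string.Format("{0} should be even but was odd", actual));
        }
    }
}

// Usage in a test
var value = 4;
value.ShouldBeEven();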

I think that testing code should be treated with the same respect as production code, and this is why it should be made as clean and understandable as possible; this way your tests become part of the documentation of your code.

See you next time!

Testing Akka.NET actors with Akka.TestKit

In my last blog post I introduced the actor model using the Akka.NET framework. And since I like to write unit tests for my projects, I was wondering how to test the few actors I had created, even if there is not much logic inside them.

I believe that understanding early how to write tests for a given technology is important in order to avoid creating too much coupling. Tests also provide rapid feedback that helps us debug what we are creating: if you can debug quickly, you can learn quickly.

When working with Akka.NET, you can use Akka.TestKit to write the tests for your actor system. There are several NuGet packages depending on the testing framework you use:

Install-Package Akka.TestKit.VsTest

Install-Package Akka.TestKit.NUnit

Install-Package Akka.TestKit.Xunit

In my case I will use the VsTest package for MSTest.

Writing your first test

I will start by testing the behavior of the BlueActor, here is its definition:

public class BlueActor : ReceiveActor
{
    private const string ActorName = "BlueActor";
    private const ConsoleColor MessageColor = ConsoleColor.Blue;
 
    private int _counter = 0;
 
    protected override void PreStart()
    {
        base.PreStart();
        Become(HandleString);
    }
 
    private void HandleString()
    {
        Receive<string>(s =>
        {
            PrintMessage(s);
            _counter++;
            Sender.Tell(new MessageReceived(_counter));
        });
    }
 
    private void PrintMessage(string message)
    {
        Console.ForegroundColor = MessageColor;
        Console.WriteLine(
            "{0} on thread #{1}: {2}",
            ActorName,
            Thread.CurrentThread.ManagedThreadId,
            message);
    }
}

As you can see, there is not much to test; I can't really test that the actor writes something to the console. But I can test that it replies to the sender with a message containing a counter.

In my example the sender was an instance of a GreenActor but in a testing context it does not have to be that way. Let’s use TestKit to write a test.

[TestClass]
public class BlueActorTests : TestKit
{
    [TestCleanup]
    public void Cleanup()
    {
        Shutdown();
    }
 
    [TestMethod]
    public void BlueActor_Respond_With_Counter()
    {
        var actor = ActorOfAsTestActorRef<BlueActor>();
        actor.Tell("test");
        var answer = ExpectMsg<MessageReceived>();
        Assert.AreEqual(1, answer.Counter);
    }
}

The test class inherits from TestKit in order to access a set of functionalities for testing an actor. Since an actor “lives” inside an actor system in an asynchronous way, it would be a real pain to mock everything ourselves; TestKit does it for us.

I can now use the ActorOfAsTestActorRef<T> method in order to get an IActorRef wrapped in a specific instance dedicated to testing. In this context the message sender is the test class, so I can test that the actor replies with a MessageReceived instance. To do so I use the ExpectMsg<T> method, which performs the check and retrieves the message; if there is no message of this type the test fails.

It is also important to call the Shutdown method after running the tests in order to dispose of the actor system without risking a memory leak.

Testing another actor

I will now write a test for the YellowActor, see definition below.

public class YellowActor : UntypedActor
{
    private const string ActorName = "YellowActor";
    private const ConsoleColor MessageColor = ConsoleColor.Yellow;
 
    private IActorRef _greenActor;
 
    protected override void PreStart()
    {
        base.PreStart();
 
        _greenActor = Context.ActorOf<GreenActor>();
    }
 
    protected override void OnReceive(object message)
    {
        if (message is string)
        {
            var msg = message as string;
 
            PrintMessage(msg);
            _greenActor.Tell(msg);
        }
    }
 
    private void PrintMessage(string message)
    {
        Console.ForegroundColor = MessageColor;
        Console.WriteLine(
            "{0} on thread #{1}: {2}",
            ActorName,
            Thread.CurrentThread.ManagedThreadId,
            message);
    }
}

There is no reply to check; this actor just transfers the message to another actor after printing it. I can still write a test if I want.

[TestClass]
public class YellowActorTests : TestKit
{
    [TestCleanup]
    public void Cleanup()
    {
        Shutdown();
    }
 
    [TestMethod]
    public void YellowActor_Should_Not_Answer()
    {
        var actor = ActorOfAsTestActorRef<YellowActor>();
        actor.Tell("test");
        ExpectNoMsg();
    }
}

In this test I use the ExpectNoMsg method to verify that the actor does not answer. The test passes, but I would also like to test that the message is transferred to another actor. Unfortunately at the moment I cannot (or I did not find a way to do it) without changing the code of the actor.

The greenActor IActorRef is created by the actor itself and therefore is not a TestActorRef. To change this, I will inject the actor reference through the constructor; this way I will be able to use a testing reference when running tests. I add the following constructor to the YellowActor class:

public YellowActor(IActorRef greenActor)
{
    _greenActor = greenActor;
}

And I remove the PreStart method. At this point the code does not compile; I have to update the code that creates the actor.

var yellowProps = Props.Create<YellowActor>(actorSystem.ActorOf<GreenActor>());
var yellowActor = actorSystem.ActorOf(yellowProps);

This is one of several ways to inject dependencies into an actor. I can now update my test to check that a message is sent.

[TestMethod]
public void YellowActor_Should_Send_String_Message()
{
    var props = Props.Create<YellowActor>(TestActor);
    var actor = ActorOfAsTestActorRef<YellowActor>(props);
    actor.Tell("test");
    var message = ExpectMsg<string>();
    StringAssert.Equals("test", message);
}

I used the TestActor property available in the test class as the dependency for the yellow actor. Now I am able to verify that a message is sent using the ExpectMsg method. If I had kept the ExpectNoMsg method, the test would have failed because the actor system in my test context has received a message.
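As a side note, the dependency can also be provided through the lambda overload of Props.Create, which lets the compiler check the constructor arguments; a small variant of the same test setup:

// Strongly typed alternative: the constructor call is checked at compile time
var props = Props.Create(() => new YellowActor(TestActor));
var actor = ActorOfAsTestActorRef<YellowActor>(props);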

This is the end of this small introduction to testing with Akka.TestKit for Akka.NET.

Wrapping up

I am definitely not an expert on Akka.NET, even less on actor testing, so if you find mistakes in my approach feel free to teach me. I wanted to introduce testing early in order to make myself think more deeply when using Akka.NET. From this experience I learned that the actor model works as a whole, and it is something to consider in order to avoid making mistakes.

I am also glad to have been able to write some tests for my actors without too much pain using the tools provided by the community; a big thank you to them for the hard work.

I will continue my journey with this technology and I will share my thoughts with blog posts. There is much to discover and to learn.

See you next time!

The Actor Model with Akka.NET

akkadotnet-logo

A few weeks ago I discovered a framework implementing the actor model: Akka.NET. Since I knew very little about this programming model, I decided to give it a try. I really liked what I saw and the concepts behind the actor model, and this is why I'm writing this blog post.

The actor model adopts the idea that everything is an actor: an actor is an entity that receives messages, creates other actors and adapts its behavior depending on the messages it receives. The entire system relies on asynchronous communication, making it inherently concurrent.

Akka.NET is an open-source project (available on GitHub) bringing the actor model to the .NET world (C# & F#); it is based on the Akka framework (Scala & Java).

To install Akka.NET in one of your projects, you just have to install a NuGet package:

Install-Package Akka

Creating your first actor

I will now show you how to use the project to create your first actors. You'll find below the definition of an actor in C#.

public class YellowActor : UntypedActor
{
    private const string ActorName = "YellowActor";
    private const ConsoleColor MessageColor = ConsoleColor.Yellow;
 
    protected override void OnReceive(object message)
    {
        if (message is string)
        {
            var msg = message as string;
            PrintMessage(msg);
        }
    }
 
    private void PrintMessage(string message)
    {
        Console.ForegroundColor = MessageColor;
        Console.WriteLine(
            "{0} on thread #{1}: {2}",
            ActorName,
            Thread.CurrentThread.ManagedThreadId,
            message);
    }
}

In this case I created a class that inherits from UntypedActor, which allows me to override the OnReceive method. This way the system knows that this class represents an actor able to handle messages. Creating a basic actor is simple; I don't need to write much code.

As you can see the OnReceive method is protected: if I want to use this actor I will have to create an actor system, I cannot just instantiate the class and call it directly. Akka.NET provides a set of functionalities to deal with the actor model, and the ActorSystem is one of them. This is how you create a system and an actor:

var actorSystem = ActorSystem.Create("myActorSystem");
 
var yellowActor = actorSystem.ActorOf<YellowActor>();
 
actorSystem.AwaitTermination();

Since my actor is quite simple, I don't need more code to have an up-and-running actor system with an actor. The AwaitTermination method blocks the current thread to prevent it from exiting the program; it waits for the actor system to shut down.

Now I can send a message to the actor to see what will happen. To do so I will add the following lines:

Console.ForegroundColor = ConsoleColor.Red;
Console.WriteLine("Starting actor system on thread: {0}", Thread.CurrentThread.ManagedThreadId);
 
yellowActor.Tell("Message to yellow");

Let’s run the application to see what is happening.

YellowActor

The YellowActor has received the message and printed it on a different thread, which makes sense since the thread that created the actor system is blocked by this same system.

There is one important concept to know when manipulating actors with Akka.NET: in my example the yellowActor variable is not an instance of YellowActor, it is an IActorRef. Sending messages to actors is always done through references; the framework knows how to deliver the message to the associated actor.

Adding a second actor

Now that we have a working actor I will add a second one which will be used by the YellowActor. This way I will show you how to “create” an actor from another one. Let’s define a GreenActor:

public class GreenActor : ReceiveActor
{
    private const string ActorName = "GreenActor";
    private const ConsoleColor MessageColor = ConsoleColor.Green;
 
    protected override void PreStart()
    {
        base.PreStart();
 
        Become(HandleString);
    }
 
    private void HandleString()
    {
        Receive<string>(s =>
        {
            PrintMessage(s);
        });
    }
 
    private void PrintMessage(string message)
    {
        Console.ForegroundColor = MessageColor;
        Console.WriteLine(
            "{0} on thread #{1}: {2}",
            ActorName,
            Thread.CurrentThread.ManagedThreadId,
            message);
    }
}

This time I made the class inherit from ReceiveActor, allowing me to use the Receive<T> method to tell the actor how to handle different types of messages; in my case it's still a string.

Now I will update the YellowActor to use this new actor.

private IActorRef _greenActor;
 
protected override void PreStart()
{
    base.PreStart();
 
    _greenActor = Context.ActorOf<GreenActor>();
}
 
 
protected override void OnReceive(object message)
{
    if (message is string)
    {
        var msg = message as string;
 
        PrintMessage(msg);
        _greenActor.Tell(msg);
    }
}

The actor has an IActorRef field “pointing” toward the green actor. This actor will not only print the message it receives but it will also send it to the green actor. Let’s see what we have now.

GreenActor

The new actor has received the message and handled it on a different thread. I just showed you how to easily create actors and how to pass messages between them. You might think that the second actor looks overly complicated for the work it does, and you are right! Yet it will come in handy in just a minute when we create a third actor.

One more actor

It is time to create the BlueActor. Again this actor will display the message it receives, and it will also respond with a message telling how many messages it has displayed. Here is the blue actor:

public class MessageReceived
{
    public int Counter { get; private set; }
 
    public MessageReceived(int counter)
    {
        Counter = counter;
    }
}
 
public class BlueActor : ReceiveActor
{
    private const string ActorName = "BlueActor";
    private const ConsoleColor MessageColor = ConsoleColor.Blue;
 
    private int _counter = 0;
 
    protected override void PreStart()
    {
        base.PreStart();
        Become(HandleString);
    }
 
    private void HandleString()
    {
        Receive<string>(s =>
        {
            PrintMessage(s);
            _counter++;
            Sender.Tell(new MessageReceived(_counter));
        });
    }
 
    private void PrintMessage(string message)
    {
        Console.ForegroundColor = MessageColor;
        Console.WriteLine(
            "{0} on thread #{1}: {2}",
            ActorName,
            Thread.CurrentThread.ManagedThreadId,
            message);
    }
}

To send back a message inside an actor, you can use the Sender property which is an IActorRef “pointing” to the actor which sent the message. This is helpful if you have an actor hierarchy where a high-level actor delegates some work to lower-level actors.

It is time to update the GreenActor to make it use the BlueActor and to make it handle the MessageReceived type of message; the following code is added to the class:

private const ConsoleColor ResponseColor = ConsoleColor.DarkGreen;
 
private IActorRef _blueActor;
 
protected override void PreStart()
{
    base.PreStart();
 
    var lastActorProps = Props.Create<BlueActor>();
    _blueActor = Context.ActorOf(lastActorProps);
 
    Become(HandleString);
}
 
private void HandleString()
{
    Receive<string>(s =>
    {
        PrintMessage(s);
        _blueActor.Tell(s);
    });
 
    Receive<MessageReceived>(m => PrintResponse(m));
}
 
private void PrintResponse(MessageReceived message)
{
    Console.ForegroundColor = ResponseColor;
    Console.Write("{0} on thread #{1}: ",
        ActorName,
        Thread.CurrentThread.ManagedThreadId);
    Console.WriteLine("Receive message with counter: {0}",
        message.Counter);
}

I was able to use Receive again to specify how to handle MessageReceived instances; an actor can handle several types of messages. An actor can even send messages to itself using the Self property (see the small sketch further below). Let's try out our actor system.

BlueActor

All three actors have displayed the message and the green actor has received an answer from the blue actor. Let's see if the counter works as expected; we will tell the program to send 5 messages to the yellow actor.

SeveralMessages

The green actor received a message from the blue actor with a counter of 5, looks good! The actors communicate with each other using different threads.
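As a quick illustration of the Self property mentioned earlier, an actor can enqueue a follow-up message for itself from within a handler. This is a hypothetical handler reusing the names of the demo actors, not part of the code above:

private void HandleString()
{
    Receive<string>(s =>
    {
        PrintMessage(s);
        // The actor sends a follow-up message to itself;
        // it will be processed like any other incoming message.
        Self.Tell(new MessageReceived(_counter));
    });

    Receive<MessageReceived>(m => PrintResponse(m));
}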

Time to conclude

This blog post is just an introduction to the actor model using Akka.NET; the project offers a lot of powerful functionality. I always try to find new ways of developing software, and the actor model offers me a new paradigm I was not familiar with.

For now I think that this programming model can be very helpful in some situations, and I will try to post more on the topic; I really like the project so far!

If you want to learn more about Akka.NET I highly recommend trying the Petabridge bootcamp, which will help you understand the concepts behind the actor model and how they are implemented in Akka. You can find the bootcamp for free here. I also gave a presentation a few days ago and you can find the slides here.

See you next time!

Testing is a developer job

lack-of-testing

Maybe you have heard the various discussions about Test-Driven Development (TDD). Is it worth it? Does it lead to good design? … In this blog post I will not speak about this kind of practice, just “classical” testing.

When I started my software developer career, I knew nothing about automated testing (unit or otherwise). I wish I had; it would have saved me a lot of time and trouble back then.

The pain of legacy code

I started my life as a software developer in a small company; I had no experience and I was alone on the project, which was several years old. I had to deal with a “big ball of mud” where I was afraid of touching anything because I did not know what consequences it might have.

Yet I had to fix bugs and implement new functionality in order to improve the application. Of course, I did not test my changes much, only that the bug was fixed or the new feature worked as expected on my local machine. And every time there were unforeseen consequences, because the code was highly coupled and changing one part of the source code changed the behavior elsewhere.

I wish I had had tests at that time to keep me from working in fear: fear of breaking things, fear of regressions. But I'm also guilty in this story because the number of tests I added during this period is ZERO… My contribution was to make the whole thing worse by adding more legacy code.

I now realise that I was behaving unprofessionally. A legacy system is a real pain to work with, and it is my job as a software developer to avoid creating this kind of mess. We have the tools and practices to make things better: we can add tests, we can refactor badly written code.

Whatever… QA will test it

Now I work for a larger company with several development teams, each of them having a QA to validate the work done by the developers. I think that having QAs within the teams is a wonderful thing; they check new features and potential regressions before a production release.

But sometimes I feel that some developers see this situation as an excuse to be lazy. “I just code, I won't test it, this is the QA's job”. What?! Are you serious? Your code does not even compile! Sure, the QA will test it and they will just say: “It doesn't work”. They can't even test a single feature because the entire system cannot be built.

This might look far-fetched but I’ve seen situations like this one, several times unfortunately.

Sometimes the code “works” but what has been asked for is not done: the code has been written and it compiles, yet when opening the page (in the case of a website) the new element is not present… It's the developer's job to open the site and make sure that it works as expected from end to end, at least locally. Again, I've seen it many times, with my own work as well.

In my opinion, QA should find nothing; if they do, I have failed at some point. If the issue is technical, I made a mistake and I need to fix it ASAP and learn from it. What did I miss? How do I prevent it from happening again? Is there a unit test I can write? If the problem is a business issue (the code not doing what it is supposed to do), then again: what did I miss? Is there an acceptance test that needs to be written? Did I know all the domain-related details? If not, why?

Always learn from your mistakes, and a feature not validated by QA is a mistake. QAs are not hired to piss developers off; they are paid to make sure the products are viable from a quality point of view. We are not paid to write code, we are paid to automate processes and make them work! QAs are here to help, not to do our job.

I've been down this road and this is why I'm writing this blog post: to share my experience and my failures. I love my job as a software developer and I want to be proud of what I'm creating; I want to be considered a real software professional. There are ways to improve how we work, from both a technical point of view and an attitude point of view.

Testing is a developer job: unit testing, integration testing, manual testing, all of it. It's our job to make sure everything works.

See you next time!


Image credits:

http://www.mobilisationlab.org/six-testing-ideas-for-your-next-email-campaign/

Primitive type obsession

the-old-vs-the-new

As professional developers, we are asked to automate behaviors by writing high-quality code. Our favorite programming languages allow us to do so by providing primitive types we can manipulate in our programs. Yet it can be easy to fall into the trap of primitive type obsession. This code smell occurs when we use primitive data types (string, int, bool, …) to model our domain ideas.

The problem

In one of my previous blog posts, I demonstrated how to create custom model validation with ASP.NET MVC, using an IP address as an example. I had the following property in my model:

public string IPAddress { get; set; }

This is clearly primitive type obsession: using a string to store an IP address can be dangerous, and the compiler will not prevent me from writing the following code:

model.IPAddress = null;
model.IPAddress = "";
model.IPAddress = "foobar";

In C#/.NET a string can contain much more than an IP address; the latter is far more specific and should be designed accordingly. I will therefore use Object Oriented Programming and encapsulation to create a custom IPAddress type.

The refactoring

In the other article I explained that the address follows the IPv4 format, which is composed of 4 bytes, so let's create a class representing this concept.

public class IPAddress
{
    public byte FirstByte { get; set; }
    public byte SecondByte { get; set; }
    public byte ThirdByte { get; set; }
    public byte FourthByte { get; set; }
}

Does it make sense to be able to instantiate an IPAddress object without setting the bytes? Not really, but for now it is possible; let's fix that!

public class IPAddress
{
    public byte FirstByte { get; private set; }
    public byte SecondByte { get; private set; }
    public byte ThirdByte { get; private set; }
    public byte FourthByte { get; private set; }
 
    public IPAddress(byte first, byte second, byte third, byte fourth)
    {
        FirstByte = first;
        SecondByte = second;
        ThirdByte = third;
        FourthByte = fourth;
    }
}

I now have an object that actually represents an IP address and that must be instantiated with all the data it requires. I also have a class where I can add related logic to ease its use.

public class IPAddress
{
    public byte FirstByte { get; private set; }
    public byte SecondByte { get; private set; }
    public byte ThirdByte { get; private set; }
    public byte FourthByte { get; private set; }
 
    public IPAddress(byte first, byte second, byte third, byte fourth)
    {
        FirstByte = first;
        SecondByte = second;
        ThirdByte = third;
        FourthByte = fourth;
    }
 
    public static bool TryParse(string ipAddress, out IPAddress ipaddress)
    {
        // some validation logic & parsing logic
        // ...
    }
 
    public override string ToString()
    {
        return string.Format("{0}.{1}.{2}.{3}", FirstByte, SecondByte, ThirdByte, FourthByte);
    }
}
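The parsing and validation logic is left out above; as an indication, a possible TryParse implementation could look like the following sketch, based on splitting the string and byte.TryParse (not necessarily the most robust version):

public static bool TryParse(string input, out IPAddress ipAddress)
{
    ipAddress = null;
    if (string.IsNullOrEmpty(input))
    {
        return false;
    }

    var parts = input.Split('.');
    if (parts.Length != 4)
    {
        return false;
    }

    // Each of the four parts must be a valid byte (0-255)
    var bytes = new byte[4];
    for (var i = 0; i < 4; i++)
    {
        if (!byte.TryParse(parts[i], out bytes[i]))
        {
            return false;
        }
    }

    ipAddress = new IPAddress(bytes[0], bytes[1], bytes[2], bytes[3]);
    return true;
}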

I switched from primitive type obsession to “domain modeling” by creating a value object. This kind of refactoring is very helpful in a lot of other cases. For example, country codes stored in strings while they are only 2 or 3 characters long with a pre-defined set of possible values. Or an amount represented by an integer, which can be negative and completely leaves out the currency of that amount. These are only a few examples among many.
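The amount example could follow the same path and become a small value object that carries its currency; a minimal sketch (a hypothetical Money type, without equality members or arithmetic):

public class Money
{
    public decimal Amount { get; private set; }
    public string Currency { get; private set; }

    public Money(decimal amount, string currency)
    {
        if (amount < 0)
        {
            throw new ArgumentOutOfRangeException("amount", "An amount cannot be negative.");
        }
        if (string.IsNullOrEmpty(currency))
        {
            throw new ArgumentException("A currency is required.", "currency");
        }

        Amount = amount;
        Currency = currency;
    }

    public override string ToString()
    {
        return string.Format("{0} {1}", Amount, Currency);
    }
}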

It's easy to fall into the trap of this code smell: using primitive data types is quick, but it allows unwanted states you will have to check for, whereas value objects protect the code from such behaviors.

See you next time!


Image credits:

http://erikapov.blogspot.fr/2010/01/old-vs-new.html

NCrafts 2015 – EventStorming Workshop

sticky-notes

Earlier this year, in May, I attended a two-day conference about software craftsmanship in Paris: NCrafts. I was also able to attend a one-day workshop about EventStorming.

This workshop was hosted by Alberto Brandolini and Mathias Verraes. Alberto is the founder of the Italian Domain-Driven Design (DDD) community and runs Avanscoperta, a training company in Italy where he is a consultant as well. You can also encounter him as a speaker at various conferences across Europe. He is the one who came up with the word “EventStorming”.

Mathias is an independent Belgian consultant focusing on software practices, on dealing with legacy systems and especially on DDD. Like Alberto, you can meet him at conferences or during workshops; there is a list of all his upcoming ones on his personal website.

So you have probably guessed that EventStorming is about Domain-Driven Design and how to model a domain. I did this workshop because I consider that, as professional software developers, we have to understand the domain we work on in order to create the best possible applications.

Prerequisites

To practice EventStorming you need space on a wall, a lot of space. You also need sticky notes, a lot of them, in different colors. You have to bring markers for everybody as well; the rule is at least one per person. Remove the tables and chairs to be able to move freely in front of the wall.

Once the room is ready you need people with questions (the developers) and people with answers (domain experts, business analysts) in order to model the domain of the application.

An events driven approach

Everyone is ready, they have sticky notes and a marker; time to ask the first question: what is the most important event in the system (from a domain point of view)? In other words: what is the goal of the entire application? If the team works on a project without knowing what the main goal is, they will probably end up with a “not so good” product.

Once you know the Event, write it on a sticky note (an orange one) and put it on the wall. Most of the time events are written as past-participle phrases (InvoicePaid, UserRegistered, …). Now it's time to model around this main event by turning back time: what is the event that occurred before? And before that?

The goal is to find the entire event chain that leads to the main event; this will give you a good idea of what is happening in the system from a business perspective. You will create something like the following picture (this is just a small example):

events-chain

If you are not sure about the necessity of an event, you can ask the domain experts whether they care about it; they know if it is relevant or not. For instance a “ButtonClicked” event is probably not relevant from a business point of view, unless the domain you work in is about buttons being clicked. Focus only on domain events, not technical ones.

Linking the events

Now you have your event chain, but the work is far from over: you need to add the “links” between the events. An application cannot just be a succession of events. What happens between an event and the one before it? What kind of action is required? For example, what happens between “InvoiceSent” and “InvoicePaid”? The client has to pay the invoice!

This last sentence looks obvious but it contains two important components of the EventStorming approach. The first one is the Command: “pay the invoice”, the action that has to be done. Time to change the color of your sticky notes: take a blue one, write the command on it (Pay invoice) and put it between the two events.

Sometimes when you think of the command needed for an event to occur, you might discover that there are more events in the system than you initially thought, or that some are missing. If so, add them on the wall where they belong; you can add new events at any time.

In “the client has to pay the invoice” there is another valuable piece of information: the Actor, the person/system that acts on the application and triggers the command. In this case the client is the actor; put a yellow sticky note with “Client” on the command post-it.

Now you have a more detailed version of the domain, something like the following picture.

eventstorming-event-command-user

One last thing regarding commands: their execution can produce several events. This is not something to avoid; it all depends on your business domain.

Actors can also be external to the system; in this case you can use a different color of post-it (pink) in order to identify them quickly.

Adding more information

Producing an event is the work of a command, yet the output of a command is not necessarily always the same; it depends on the Business Rules. For example, given a scenario where a user wants to log in, the expected event is UserLoggedIn. But what should happen if the username or the password is incorrect? We definitely don't want the user to be logged into the application. In this case the “Log in” command has the following business rule: the username is known and the password is correct for the given username.

Business rules can be written on another kind of sticky note (big and yellow) when they are quite specific, unlike the one in the example I gave. Otherwise you will pollute the wall with irrelevant post-its containing obvious information.

To apply the business rules, a command needs information to know what it should do. In the log-in example this information is the username and the password. It is called the Message and it holds the data for the system.

Use a post-it (green) to list the data needed by the command and put it beside the related business rule/command pair. This way you are able to see which commands have complex business rules and what data they need.

After all of this you will have something like this:

full-eventstorming

As you can see, you really need a lot of space, and you can quickly locate every type of note. And in this picture the model was still not complete even after an entire day.

Do not be frightened by the time it takes; you are not forced to model the entire application in one go. Do it step by step, starting with a single sub-domain. You will add the others during the next EventStorming sessions, building on the base you have already created.

Conclusion

I really enjoyed this workshop. I liked the collaborative approach of EventStorming: everyone can be involved. It gives a very good representation of the business domain, what it should do and how.

Since the exercise is done with domain experts they use domain terms and therefore you are able to extract the ubiquitous language from the EventStorming session.

The format also favors storytelling, which is very helpful for gaining knowledge of the business domain (“Most of the time it works that way, but on rare occasions it works differently, like this time when…”).

To summarize: An Event is the result of a Command, triggered by an Actor, following a set of Business Rules using the data of a Message.

A big thanks to Alberto and Mathias for making this workshop an awesome experience. Do not hesitate to check out their work on the topic; there is plenty they can teach you about EventStorming.

See you next time!


Extreme Programming: Open Workspace

open-workspace

As professional developers, it is important that we can communicate with each other easily. Since face-to-face conversation has the highest bandwidth, it is essential to have a workspace that makes these conversations easy.

The whole team should share the same office space to work as closely as possible with each other. There should be no walls between the team members.

Face-to-face conversations

Imagine you need to ask one of your co-workers a quick question about the task you are working on. If this person is right next to you, you can simply ask and get the answer you are looking for almost instantly.

On the other hand, if this person is in an office across the building, you might think twice before going to see them to get the information you need. Maybe you have instant-messaging software to reach this colleague quickly, but it is not as effective as a face-to-face conversation.

In an environment where face-to-face conversations are facilitated, the entire team's communication is improved. A developer can ask another developer about a technical topic. A developer can check his/her assumptions with the business analyst (domain expert) and correct them if needed. A tester can show invalid behavior to the developer to help him reproduce the issue and fix it. The project manager can ask a developer for a status report on a specific user story.

Of course all of these exchanges can be done with instant messaging or email, but it will take at least minutes (most likely hours, or days?) to share the same amount of information that takes seconds in a face-to-face conversation.

Collaborative work

With an open workspace for the team, it is likely that you will also have plenty of space on the walls. These walls are perfect for displaying information for the team: you can put a scrum board or a kanban board on one of them, so the entire team is able to see what has been done and what still has to be done.

Whiteboards are also very helpful: they allow the developers to draw diagrams and discuss them with the rest of the team. And since this activity happens in the open space, more people can come in and give their input to improve the design: collaborative work becomes easier.

And of course, since Extreme Programming (XP) favors pair programming, it is much easier to achieve with an open workspace where each developer can sit next to another one. I've never worked in a cubicle environment, but I don't think it fits pair-programming sessions. And no, I don't consider a cubicle environment an open workspace.

Having a “war room” kind of workspace for a development team can increase communication by facilitating face-to-face conversations, which are the most effective way to share information. There is also a lot of space to display boards, charts and various information for the whole team; everything is visible and becomes more “concrete”.

See you next time!