Caffeine-Powered Life

A Pox on Static Languages

It’s no secret that I am tired of .NET, C#, and static language stupidity.

My current project went in for a code review, and I’m going to share the before snippet. This is a simple part of the project, containing no business logic — it’s just a way to check the status to make sure that the server is up.

public class Status
{
  public Status()
  {
    ServerTime = Clock.Instance.UtcNow;
    Timestamp = Clock.Instance.UtcNow.ToEpoch();
    Version = GetVersionString();
  }

  public DateTime ServerTime { get; set; }

  public long Timestamp { get; set; }

  public string Version { get; set; }

  private static string GetVersionString()
  {
    var version = Assembly.GetExecutingAssembly().GetName().Version;
    var versionString = string.Format("{0}.{1}", version.Build, version.Revision);
    return versionString;
  }
}

public class StatusController
{
  public Status Get()
  {
    return new Status();
  }
}

That’s all the relevant code in this particular example. Here is the reviewer’s comment.

Factor all of the work out of the Status class’s constructor. Treat Status as a DTO. Put unit tests around the new factory to ensure it is properly initializing Status.

Damn. The problem with that above review is that (1) it is a lot more code to write for such a simple thing, and (2) I completely agree that’s how it should be done in C# (and probably other static languages, too). From a testing and code design point of view, the above comment represents all of the best practices that good code should follow. It keeps the result object as a plain DTO. It makes sure that the controller is responsible for nothing beyond negotiation. The reviewer’s comment is valid and accepted for work.

If I sound too put off by this, please realize that this is less than 30 minutes of work for me to do. This is much more about my distaste with the current state of static language patterns than my critique of the code reviewer.

So, let’s start by adding a factory, moving the initialization logic to the factory. We are using Ninject, so we’ll also need to be sure to register this with our IoC container.

public interface IStatusFactory
{
  Status Create();
}

public class StatusFactoryImpl : IStatusFactory
{
  public Status Create()
  {
    return new Status
    {
      ServerTime = Clock.Instance.UtcNow,
      Timestamp = Clock.Instance.UtcNow.ToEpoch(),
      Version = GetVersionString()
    };
  }

  private static string GetVersionString()
  {
    var version = Assembly.GetExecutingAssembly().GetName().Version;
    var versionString = string.Format("{0}.{1}", version.Build, version.Revision);
    return versionString;
  }
}

// In our registration...
Kernel.Bind<IStatusFactory>().To<StatusFactoryImpl>().InRequestScope();

We also need to modify our existing StatusController.

public class StatusController
{
  private readonly IStatusFactory statusFactory;

  public StatusController(IStatusFactory statusFactory)
  {
    if (statusFactory == null)
      throw new ArgumentNullException("statusFactory");
    this.statusFactory = statusFactory;
  }

  public Status Get()
  {
    var status = statusFactory.Create();
    return status;
  }
}

Finally, let’s modify the Status object, removing the logic from the constructor.

public class Status
{
  public DateTime ServerTime { get; set; }
  public long Timestamp { get; set; }
  public string Version { get; set; }
}

Finally, we need to modify our StatusControllerTests file to get all of our tests to pass. We will introduce a mock object to the controller’s constructor.

public class StatusControllerTests
{
  private StatusController controller;
  private Mock<IStatusFactory> mockStatusFactory;

  [SetUp]
  public void Before_each_test()
  {
    mockStatusFactory = new Mock<IStatusFactory>();
    mockStatusFactory.Setup(x => x.Create()).Returns(new Status());
    controller = new StatusController(mockStatusFactory.Object);
  }

  /* snip */
}

All of this is easier how? The answer, obviously, is that this isn’t easier. I’m not even convinced that it is cleaner, even if it is more clearly separating concerns.

Total changes?

  • 6 files modified (Status, StatusController, StatusControllerTests, StatusTests, plus two .csproj files)
  • 3 files added (IStatusFactory, StatusFactoryImpl, StatusFactoryImplTests)

Ugh.

Reconciling with Ruby on Rails

How would I write the exact same thing in Ruby on Rails, and have the exact same level of testability (including mocking), etc.?

class Status
  attr_reader :server_time, :timestamp, :version

  def initialize
    @server_time = Time.now
    @timestamp = @server_time.to_i
    @version = Application::VERSION
  end
end

class StatusController < ApplicationController
  def index
    @status = Status.new
    render json: @status, status: :ok
  end
end

Wait. Doesn’t that look like the original code? Yes, it does. Thanks to Ruby’s basic attr_reader, I can quickly create the attributes for a class with very little code. The Ruby version is so much shorter because there is no reason to add extra layers (interfaces, factories) for the sake of object independence and testability in the dynamic objects world. We never require an actual type in dynamic languages, just something that conforms to the contract.

This is just another reason why I love Ruby. I can get more done faster, with less typing and more clarity, and with the same (or even better) testability than anything C# even comes close to accomplishing.

The Easiest WebAPI Authorization Tutorial Ever

So, it took a lot of scouring of the internets, but I was finally able to piece together the bare minimum necessary to create your own authorization handler in WebAPI.

Step 1: Create a DelegatingHandler

The DelegatingHandler is a class that will be attached to all requests coming into your API. This is going to be the easiest authorization handler ever. Here are the rules.

  1. If there is a token, any token at all, the request is valid.
  2. Otherwise, the request is unauthorized.

Yes, this is simple. In reality, you’d want to check the value of the token against something, at the very least a local database. That is left as an exercise for the reader.

public class AuthorizationHeaderHandler : DelegatingHandler
{
  private string authorizationParameter;

  public bool HasAuthorizationParameter
  {
    get { return authorizationParameter != null; }
  }

  protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
  {
    ParseAuthorizationHeader(request);
    if (HasAuthorizationParameter)
    {
      ConfigureClaimsIdentity();
      return base.SendAsync(request, cancellationToken);
    }

    return ConfigureUnauthorizedRequest();
  }

  private void ConfigureClaimsIdentity()
  {
    var claims = new List<Claim>();
    // Ideally, you'd add some claims here.
    var claimsIdentity = new ClaimsIdentity(claims);
    Thread.CurrentPrincipal = new ClaimsPrincipal(claimsIdentity);
  }

  private Task<HttpResponseMessage> ConfigureUnauthorizedRequest()
  {
    var response = new HttpResponseMessage(HttpStatusCode.Unauthorized);
    var taskCompletionSource = new TaskCompletionSource<HttpResponseMessage>();
    taskCompletionSource.SetResult(response);
    return taskCompletionSource.Task;
  }

  private void ParseAuthorizationHeader(HttpRequestMessage request)
  {
    if (request.Headers == null || request.Headers.Authorization == null || request.Headers.Authorization.Parameter == null)
      return;

    authorizationParameter = request.Headers.Authorization.Parameter;
  }
}

Step 2: Wire Your Handler to Your Configuration

Somewhere, you probably have a class that sets up your WebAPI configuration. You need to add this line to your configuration.

private static void ConfigureMessageHandlers(HttpConfiguration config)
{
  config.MessageHandlers.Add(new AuthorizationHeaderHandler());
}

Step 3: Add an Authorization Header to Your Requests

For this to work, you’ll need to add an authorization header to all of your requests. It needs to be in the following format.

Authorization: <scheme> <parameter>

Here’s a sample. (No, don’t actually use this token. It’s just the SHA-256 hash of "Hello, World!".)

Authorization: Token DFFD6021BB2BD5B0AF676290809EC3A53191DD81C7F70A4B28688A362182986F

Step 4: Test With and Without an Authorization Header

Fire up Fiddler. First, let’s create a request without an authorization header. In the response, we can see the 401 Unauthorized coming back from the server.

Now, let’s send the same request with an authorization header. The response? Now with more 200 OK.

We’re successfully grabbing the token from the request. Everything else is left as an exercise for the reader!

Unit Testing an MVC Authorize Attribute

When working with unit tests, I usually consider it a code smell when you have more fixture setup than production code. Today, I ran into just that condition.

This class has been written for a while, but it wasn’t covered by any tests. I decided to change that. First, I’ll show you the class. The class under question is called SessionBasedAuthorizeAttribute. It looks to the ASP.NET session to get the username, instead of relying on a forms token.

public class SessionBasedAuthorizeAttribute : AuthorizeAttribute
{
  protected override bool AuthorizeCore(HttpContextBase httpContext)
  {
    // If the session object has not yet been created, then there is nothing
    // we can do with this authorization.
    if (httpContext.Session == null)
      return false;

    IHttpSessionAdapter httpSessionAdapter = HttpSessionAdapter.Deserialize(httpContext.Session);
    if (httpSessionAdapter.IsAuthenticated)
    {
      SetHttpContextUser(httpContext, httpSessionAdapter);
      return true;
    }
    else
    {
      return false;
    }
  }

  private static void SetHttpContextUser(HttpContextBase httpContext, IHttpSessionAdapter httpSession)
  {
    var mediator = Mediator.Instance;
    var query = new UserByUsernameQuery(httpSession.Username);
    var user = mediator.Request(query);
    httpContext.User = user ?? new Guest();
  }
}

Writing your own AuthorizeAttribute isn’t difficult. It’s just a few lines, really, to pull a value from session, fetch a user from the database, and set the user. But what about unit testing?

Here’s the result.

[TestFixture]
public class SessionBasedAuthorizeAttribute_Tests
{
  private AuthorizeAttribute attribute;
  private AuthorizationContext authorizationContext;

  [SetUp]
  public void Before_each_test()
  {
    attribute = new SessionBasedAuthorizeAttribute();

    var controller = new FakeController();
    MvcTest.SetupControllerContext(controller);

    authorizationContext = new AuthorizationContext();
    authorizationContext.Controller = controller;
    authorizationContext.HttpContext = controller.HttpContext;

    var controllerDescriptor = new ReflectedControllerDescriptor(typeof(FakeController));
    var method = typeof(FakeController).GetMethod("Nothing");
    authorizationContext.ActionDescriptor = new ReflectedActionDescriptor(method, "Nothing", controllerDescriptor);
  }

  [Test]
  public void Fails_when_session_is_not_authenticated()
  {
    SetupSession(false);
    Authorize();
    AssertUnauthorized();
  }

  [Test]
  public void Fails_when_session_is_null()
  {
    Mock.Get(authorizationContext.HttpContext).SetupGet(x => x.Session).Returns((HttpSessionStateBase)null);
    Authorize();
    AssertUnauthorized();
  }

  [Test]
  public void Succeeds_when_session_is_authenticated()
  {
    SetupSession(true);
    SetupServices();
    Authorize();
    AssertSuccess();
  }

  private void AssertSuccess()
  {
    var user = authorizationContext.HttpContext.User;
    user.Should().BeOfType<User>();
    user.Identity.IsAuthenticated.Should().BeTrue();
    user.Identity.Name.Should().Be("tester");
  }

  private void AssertUnauthorized()
  {
    authorizationContext.Result.Should().BeOfType<HttpUnauthorizedResult>();
  }

  private void Authorize()
  {
    attribute.OnAuthorization(authorizationContext);
  }

  private void SetupServices()
  {
    var mediator = new Mock<IMediator>();
    mediator.Setup(x => x.Request(It.IsAny<UserByUsernameQuery>())).Returns(new User { Username = "tester" });
    Mediator.Instance = mediator.Object;
  }

  private void SetupSession(bool isAuthorized = true)
  {
    var mockContext = Mock.Get(authorizationContext.HttpContext);
    var mockSession = new Mock<HttpSessionStateBase>();
    mockSession.SetupGet(x => x["IsAuthenticated"]).Returns(isAuthorized);

    // Return the mock.
    mockContext.SetupGet(x => x.Session).Returns(mockSession.Object);
  }

  internal class FakeController : Controller
  {
    public ActionResult Nothing()
    {
      return new EmptyResult();
    }
  }
}

By the way, there’s even more setup code that happens as a part of the MvcTest.SetupControllerContext() method call, but I had already written that for another purpose.

13 lines of production code. 25 lines of fixture setup.

It turns out that testing is much more difficult, because we are relying on a lot of the inner guts of MVC. I had to pull up the source code for the OnAuthorization method to see where all the NullReferenceExceptions were coming from. When mocking out the HttpContext, you also have to mock out the Session (knew that) and Items (didn’t know that) properties. There are also some ActionDescriptor values that must be set for OnAuthorization to work, and I had never worked with those classes before, either.

I suppose the first conclusion is to praise the powers at Microsoft for open sourcing the MVC framework. Without access to the source, I would have never been able to write these tests. Even with the source, the terrible nature of the NullReferenceException made this much more difficult than it ought to be. Any time you have a chain of w.x.y.z, finding which of those threw the exception becomes a total PITA. Yeah, I know. The Law of Demeter is in play, too.

Check out Code Contracts. It makes this kind of in-code documentation much easier.

The second conclusion is the value of this kind of test. I’ve already had one coworker respond with, “this is where I begin to question the value add of unit testing.” I kinda have to agree with him. This was a hard test to write, even though the production code itself was quite easy. On one extreme, some people say that if it is important enough to write, then it is important enough to test. I think I agree with that sentiment. I believe in the value of testing. I’m just not sure what I could have done to make this testing easier and still remain within the bounds set by the MVC framework.

BCrypt Is Slow

For some reason, this just made my day.

See those yellow dots? That’s NCrunch telling me that these tests are slow. It’s BCrypt. It’s supposed to be slow! That’s the whole point of using BCrypt. If you’re storing passwords, it’s the only hashing algorithm you should be using.

JavaScript Objects

This is always a good topic to review. There are just too many developers who don’t know what they are doing when it comes to writing good JavaScript. The code for this project has been published to GitHub.

Understanding Scope

The first thing we must understand is how variables are scoped in JavaScript. When it comes to JS, there’s only one rule.

Variables are scoped to the function that defines them.

There’s no concept of block scope like in C# or Java. This means that we’re going to make use of something called a functional closure.
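A quick sketch of that rule in action: a var declared inside an if block is still visible anywhere in the enclosing function.

```javascript
// Variables are scoped to the function, not the block.
function demo() {
  if (true) {
    var x = 1; // belongs to demo(), not to the if block
  }
  return x; // still in scope here
}

console.log(demo()); // 1
```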

The Specs

Specs are a wonderful way of defining how a class should work. Don’t tell me this is BDUF or not how TDD is supposed to work. Both of these methodologies would say to only write one test, then write the code. Repeat until you are finished. This is why I refuse to strictly follow a dogma. If you know the specs up front, then write the specs.

The following tests are written with the Jasmine unit testing framework.

describe("Speaker", function () {

  it("can be instantiated");                     // 1.
  it("has a name attribute");                    // 2.
  it("can change the name in the constructor");  // 3.
  it("returns the expected message");            // 4.
  it("adds the message to an output buffer");    // 5.
  it("counts the number of times spoken");       // 6.
  it("counts the number of things said");        // 7.

});

Declaring the Constructor

var Speaker = (function () {

  function Speaker() {
  }

  return Speaker;

})();

Yes, we had to type the name of the class three times and the word function twice. I’m sorry.

The Functional Closure

The above code example is a functional closure. The more you work with good JavaScript, the more you’ll see this pattern. Yes, it’s a lot of character noise. I’m sorry Ruby developers.

(function(){})();

The above line creates an anonymous function that is executed immediately. In this example, we are creating a new Speaker function and then returning that function. The functional closure will let us hide variables that we don’t want exposed outside of the class.
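Here is that hiding trick in miniature, outside of the Speaker example: a counter whose backing variable is unreachable from the outside.

```javascript
var counter = (function () {
  var count = 0; // hidden inside the closure

  return {
    increment: function () {
      count += 1;
      return count;
    }
  };
})();

counter.increment();        // 1
counter.increment();        // 2
console.log(counter.count); // undefined -- no way to reach the variable
```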

This will let us create new instances of our Speaker class.

var speaker = new Speaker(); // Passes #1.

Objects Are Instances of Functions

What syntax makes function Speaker() any different from function speaker()? Nothing at all. The only reason that we are capitalizing Speaker is by convention of those who have come before us.

console.log(typeof Speaker);   // returns "function"
var speaker = new Speaker();
console.log(typeof speaker);   // returns "object"

Adding Properties

We also think our speaker should have a name. I like to know to whom I am speaking.

var Speaker = (function () {

  function Speaker() {
    this.name = "Ann";
  }

  return Speaker;

})();

We can now test that the speaker really does have a name attribute.

it("has a name attribute", function () {
  var speaker = new Speaker();
  expect(speaker.name).toBe("Ann"); // Passes #2.
});

Constructor Options

We should get used to using an options object in our JavaScript. Instead of having an explicit list of arguments, we accept a single options object as a method parameter. Since JavaScript has no compiler to check our argument lists for us, an options object makes the expected inputs to our methods explicit.

Having an options parameter also lets us set some defaults. Both jQuery and Underscore have an extend method that lets us merge settings, and they both do the exact same thing. For no particular reason, I’m choosing Underscore over jQuery.

var Speaker = (function () {

  var defaults = {
    name: "Ann"
  };

  function Speaker(options) {
    this.options = _.extend({}, defaults, options);
    this.name = this.options.name; // Passes #3.
  }

  return Speaker;

})();

So, by default, our speaker will be named “Ann,” but we can still change it by adding a name attribute to our options argument.

1
var speaker = new Speaker({ name: "Bonnie" });

Also, notice that the defaults are private to the functional closure. There is no way to access these defaults from outside the Speaker object. While we can access speaker.options.name, we cannot, in any way, access speaker.defaults.
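To make the privacy claim concrete, here is a stripped-down version of the closure above. The defaults merge is hand-rolled so this sketch doesn’t need Underscore; the real class uses _.extend.

```javascript
var Speaker = (function () {

  var defaults = {
    name: "Ann"
  };

  function Speaker(options) {
    options = options || {};
    // A hand-rolled stand-in for _.extend({}, defaults, options).
    this.options = {
      name: options.name || defaults.name
    };
    this.name = this.options.name;
  }

  return Speaker;

})();

var speaker = new Speaker({ name: "Bonnie" });
console.log(speaker.name);     // "Bonnie"
console.log(speaker.defaults); // undefined -- private to the closure
```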

Adding a Functional Prototype

To expose a public function, we need to use the prototype attribute.

var Speaker = (function () {

  var defaults = {
    name: "Ann"
  };

  function Speaker(options) {
    this.options = _.extend({}, defaults, options);
    this.name = this.options.name;
  }

  Speaker.prototype.say = function(toSay) {
    var message = '"' + toSay + '," said ' + this.name + '.';
    return message; // Passes #4.
  };

  return Speaker;

})();

Duck Typing

I’m not going to get into duck typing today. A lot has already been said by developers much smarter than I. Check out Eric Lippert’s post and Phil Haack’s first and second posts. Instead, here’s the short, short version…

I’m going to call something - a method or property - and you had better just “work.”

That’s all we are going to do with the output buffer in this case. I’m going to call a .write() method. I don’t care what you give me, but I’m going to call that method.
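In code, the contract is exactly that small. This speak helper accepts anything with a write() method; an inline object literal qualifies just as well as a full buffer class.

```javascript
function speak(buffer, message) {
  // No type check anywhere: if it quacks like a writer, it is one.
  buffer.write(message);
}

var captured = [];
speak({ write: function (m) { captured.push(m); } }, "Hello");
console.log(captured); // ["Hello"]
```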

We’re going to go back to our constructor here.

  • AlertBuffer: Writes to window.alert.
  • ConsoleBuffer: Writes to console.log.
  • MemoryBuffer: Writes to an internal array.

I wrote all three versions. They are available in the repository linked above. Here’s the code for the MemoryBuffer.

var MemoryBuffer = (function () {
  function MemoryBuffer() {
    this.messages = [];
  }
  MemoryBuffer.prototype.write = function(message) {
    this.messages.push(message);
  };
  return MemoryBuffer;
})();
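For comparison, a ConsoleBuffer might look like this (a sketch; the repository version may differ). Notice it shares nothing with MemoryBuffer except the write() contract.

```javascript
var ConsoleBuffer = (function () {

  function ConsoleBuffer() {
  }

  // Same write() contract as MemoryBuffer, different destination.
  ConsoleBuffer.prototype.write = function (message) {
    console.log(message);
  };

  return ConsoleBuffer;

})();
```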

We’ll set our buffer in the constructor of Speaker. We’ll go ahead and default to the MemoryBuffer. If you want the MemoryBuffer, then leave it alone. If not, then give us a new buffer when you create your Speaker instance.

var Speaker = (function () {

  var defaults = {
    buffer: new MemoryBuffer(),
    name: "Ann"
  };

  function Speaker(options) {
    this.options = _.extend({}, defaults, options);
    this.name = this.options.name;
  }

  Speaker.prototype.say = function(toSay) {
    var message = '"' + toSay + '," said ' + this.name + '.';
    this.options.buffer.write(message); // Passes #5.
    return message;
  };

  return Speaker;

})();

Creating a Private Function

Now we want to create a private function. This function needs to be available to the Speaker implementation, but not exposed outside of it. We can’t use a prototype for this, so how do we accomplish it? Answer: we declare a plain function inside the closure. That scopes it locally, preventing it from ever being exposed.

var Speaker = (function () {

  var defaults = {
    buffer: new MemoryBuffer(),
    name: "Ann"
  };

  function Speaker(options) {
    this.options = _.extend({}, defaults, options);
    this.name = this.options.name;
    this.thingsSaid = [];
    this.timesSpoken = 0;
  }

  Speaker.prototype.say = function(toSay) {
    var message = '"' + toSay + '," said ' + this.name + '.';
    this.options.buffer.write(message);
    addToThingsSaid(this, toSay);
    this.timesSpoken += 1; // Passes #6.
    return message;
  };

  function addToThingsSaid(speaker, toSay) {
    if (!_.contains(speaker.thingsSaid, toSay)) {
      speaker.thingsSaid.push(toSay); // Passes #7.
    }
  }

  return Speaker;

})();

With that, we will now pass all of our specs.
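To tie the post together, here is the finished example as one self-contained, runnable snippet. The two Underscore functions are replaced with tiny hand-rolled stand-ins so it runs on its own; everything else mirrors the code above.

```javascript
// Stand-ins for the two Underscore functions the class uses.
var _ = {
  extend: function (target) {
    for (var i = 1; i < arguments.length; i++) {
      var source = arguments[i];
      for (var key in source) {
        target[key] = source[key];
      }
    }
    return target;
  },
  contains: function (list, value) {
    return list.indexOf(value) !== -1;
  }
};

var MemoryBuffer = (function () {
  function MemoryBuffer() {
    this.messages = [];
  }
  MemoryBuffer.prototype.write = function (message) {
    this.messages.push(message);
  };
  return MemoryBuffer;
})();

var Speaker = (function () {

  var defaults = {
    buffer: new MemoryBuffer(),
    name: "Ann"
  };

  function Speaker(options) {
    this.options = _.extend({}, defaults, options);
    this.name = this.options.name;
    this.thingsSaid = [];
    this.timesSpoken = 0;
  }

  Speaker.prototype.say = function (toSay) {
    var message = '"' + toSay + '," said ' + this.name + '.';
    this.options.buffer.write(message);
    addToThingsSaid(this, toSay);
    this.timesSpoken += 1;
    return message;
  };

  // Private: visible to the closure, invisible to callers.
  function addToThingsSaid(speaker, toSay) {
    if (!_.contains(speaker.thingsSaid, toSay)) {
      speaker.thingsSaid.push(toSay);
    }
  }

  return Speaker;

})();

var speaker = new Speaker({ name: "Bonnie" });
speaker.say("Hello");
speaker.say("Hello");
speaker.say("Goodbye");

console.log(speaker.timesSpoken);       // 3
console.log(speaker.thingsSaid.length); // 2 -- "Hello" is only counted once
console.log(speaker.say("Hi"));         // "Hi," said Bonnie.
```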

Relying on Unreliable IDs

Relying on IDs means that we are checking a value set by the database. This may be an autoincrementing value, such as a SQL Server identity or an Oracle sequence.

Why Does This Happen?

This anti-pattern occurs because it is thought to be easier to hard code values into the software than to make changes to the database schema. When the schema is wrong, fix it!

Example: Include/Exclude by Value

In this case, the department in question is a virtual department, not a real one, and we don’t want it displayed on this particular UI screen.

var departments = departmentRepository.Where(d => d.ID != 6).ToList();

In this program, though, the ID column is a SQL Server identity. The value of 6 was assigned to the row in the developer’s environment, and there are three more environments to consider: QA, Staging, and Production.

There are two problems with this approach.

  1. It assumes that data will be entered in the exact same order. It also assumes that if there was a mistake before, the mistake would be repeated, since a failed insert will increment the identity.
  2. It assumes there will never be another department that we wish to exclude.

Solution: alter the schema to reflect the intent

The solution is to alter the database to reflect this logic.

ALTER TABLE [departments] ADD [exclude_from_user_management_ui] BIT NOT NULL DEFAULT (0);
UPDATE [departments] SET [exclude_from_user_management_ui] = 1 WHERE [id] = 6;

The application code is now extremely easy to understand, and there is no reliance on “magic” values.

var departments = departmentRepository.Where(d => d.ExcludeFromUserManagementUI == false).ToList();

Example: Setting a Default Value

In this example, we are using a value to reflect that something should have a default setting.

var task = new Task();
task.Description = "Be more awesome!";
task.Status = statuses.First(s => s.ID == "new");
task.CreatedAt = DateTime.UtcNow;

Again, this assumes that the “new” status will always be present. If you, as the developer, have complete control over the environments and deployments, then this does work. If so, you’re luckier than I am; I have never been at a client where I had complete control over deployments.

Solution: alter the schema to reflect the intent

ALTER TABLE [statuses] ADD [is_default] BIT NOT NULL DEFAULT (0);
UPDATE [statuses] SET [is_default] = 1 WHERE [id] = 'new';

The application code is similarly trivial.

var task = new Task();
task.Description = "Be more awesome!";
task.Status = statuses.First(s => s.IsDefault);
task.CreatedAt = DateTime.UtcNow;

Now, whether the query for .IsDefault should be a call to .First() or .Single() is an argument I don’t really want to get into. Ideally, it is probably correct that only one value can ever be the default value. However, I’ve been places where I don’t have any control over those particular administration screens, and the code quality is rather dubious. In that case, I chose the .First() method because I didn’t want my code to break because of another developer’s poor decisions.

Yes, I realize my code will still break if there are no default statuses. The [status_id] column in the database is not nullable, so .FirstOrDefault() is just prolonging the issue.

Lesson Learned: Don’t rely on magic.


DRYing Up Controller Actions With Responders

I read a few books over the long weekend. One of those books was Crafting Rails 4 Applications by José Valim. I’m going to say that this is a pretty advanced book on Rails, since it really digs into the Rails framework internals, especially when it comes to overriding the default functionality.

One of the topics is how to DRY up your controllers by writing custom responders. That got me thinking. Can we do the same thing in .NET?

The “Before”

I have two controllers in my Portfolio application: a TagsController and a TasksController. These controllers contain the following lines.

// In TasksController
[HttpDelete]
public ActionResult Delete(DeleteTaskCommand command)
{
  var task = mediator.Send(command);
  var message = string.Format("Successfully deleted task: {0}", task.Title);
  this.Flash("success", message);
  return Json(new { success = true });
}

// In TagsController
[HttpDelete]
public ActionResult Delete(DeleteTagCommand command)
{
  var tag = mediator.Send(command);
  var message = string.Format("Successfully deleted tag: {0}", tag.Description);
  this.Flash("success", message);
  return Json(new { success = true });
}

That is some pretty similar code. In fact, that’s about as un-DRY as you can get. One might say that’s going to happen if you try to create some sort of standard in what your controllers and actions should look like.

Delete Is Easy

Notice that [HttpDelete] attribute on our Delete() methods? That means the only way these methods will ever be called is via XHR: HTML forms only support GET and POST, so a browser can’t issue a DELETE request without the help of JavaScript. That means our controller pattern has no reason to be complicated.
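On the client side, the request that reaches these actions is built in script. A hypothetical helper like this one (the URL is made up for illustration) produces the settings object you would hand to jQuery’s $.ajax.

```javascript
// Hypothetical helper: builds the settings for an XHR DELETE call in the
// shape jQuery's $.ajax expects. A plain HTML form could never send this verb.
function deleteRequest(url) {
  return {
    url: url,       // e.g. "/tasks/42" -- illustrative only
    type: "DELETE",
    dataType: "json"
  };
}

var settings = deleteRequest("/tasks/42");
console.log(settings.type); // "DELETE"
```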

The DeleteResponder

It’s pretty obvious what we’re trying to achieve here, so let’s start by creating a branch and stubbing out some tests.

> git checkout -b responder-spike

Writing out your tests first is a good way to define a spec, so let’s do that. We can already tell what happens in our controller action, and we want our responder to mimic this behavior.

public class DeleteResponderTests
{
  private DeleteResponder deleteResponder;

  [SetUp]
  public void Before_each_test()
  {

  }

  [Test]
  public void It_should_send_a_request_to_the_mediator()
  {
    Assert.Fail();
  }

  [Test]
  public void It_should_add_a_successful_flash_message()
  {
    Assert.Fail();
  }

  [Test]
  public void It_should_have_an_expected_JSON_result()
  {
    Assert.Fail();
  }
}

What is our responder going to look like? I have a good idea what I think it should look like up front.

public class DeleteResponder
{
  private readonly ApplicationController controller;

  public DeleteResponder(ApplicationController controller)
  {
    if (controller == null) throw new ArgumentNullException("controller");
    this.controller = controller;
  }

  public ActionResult RespondWith<T>(IMediator mediator, ICommand<T> command)
  {
    throw new NotImplementedException();
  }
}

The easiest part is to send the command to the mediator. We can write that test fairly easily. We’re also going to need a fake controller.

public class DeleteResponderTests
{
  private ICommand<object> command;
  private DeleteResponder deleteResponder;
  private Mock<IMediator> mockMediator;

  [SetUp]
  public void Before_each_test()
  {
    mockMediator = new Mock<IMediator> { DefaultValue = DefaultValue.Mock };
    deleteResponder = new DeleteResponder(new FakeController());
    command = new FakeCommand();
  }

  [Test]
  public void It_should_send_a_command_to_the_mediator()
  {
    deleteResponder.RespondWith(mockMediator.Object, command);
    mockMediator.Verify(x => x.Send(command));
  }

  class FakeCommand : ICommand<object> { }

  class FakeController : Controller { }
}

This first implementation is extremely simple.

public ActionResult RespondWith<T>(IMediator mediator, ICommand<T> command)
{
  mediator.Send(command);
  return null;
}

Next, we want to make sure that the JSON result is returned appropriately.

[Test]
public void It_should_have_an_expected_JSON_result()
{
  var result = deleteResponder.RespondWith(mockMediator.Object, command);
  var jsonResult = MvcTest.HasExpectedActionResult<JsonResult>(result);
  var objectDictionary = jsonResult.Data.ToDictionary();
  Assert.IsTrue(objectDictionary["success"]);
}

This implementation is also fairly simple.

public ActionResult RespondWith<T>(IMediator mediator, ICommand<T> command)
{
  mediator.Send(command);
  var jsonResult = new JsonResult();
  jsonResult.Data = new
  {
    success = true
  };
  return jsonResult;
}

The only remaining piece is how we add the flash message. I’m actually going to leave this piece alone. If you want to see how I did it, then check out my version of the DeleteResponder.

The “After”

Our TagsController and TasksController now look like the following:

// TagsController
[HttpDelete]
public ActionResult Delete(DeleteTagCommand command)
{
  return new DeleteResponder(this)
      .RespondWith(mediator, command);
}

// TasksController
[HttpDelete]
public ActionResult Delete(DeleteTaskCommand command)
{
  return new DeleteResponder(this)
      .RespondWith(mediator, command);
}

Eight total lines of code become two total lines of code, and the common functionality is moved to a utility class.

Also, just because the delete is easy, that doesn’t mean that everything has to be difficult. Your controllers should be rather thin. They invoke a service. They do a query. They return an ActionResult. This responder pattern can be applied to any form post.
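
For example, a form post that should redirect after its command runs could get its own responder. This RedirectResponder is a hypothetical illustration (it assumes the same IMediator and ICommand<T> interfaces as the project), not code from the project itself:

```csharp
using System.Web.Mvc;
using System.Web.Routing;

// Hypothetical responder: same shape as DeleteResponder, but it answers
// a successful command with a redirect instead of a JSON payload.
public class RedirectResponder
{
  private readonly string action;

  public RedirectResponder(string action)
  {
    this.action = action;
  }

  public ActionResult RespondWith<T>(IMediator mediator, ICommand<T> command)
  {
    mediator.Send(command);
    return new RedirectToRouteResult(
        new RouteValueDictionary { { "action", action } });
  }
}
```

The controller action stays at two lines; only the responder it news up changes.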

Learning Code Contracts

Taking the advice of Patrick Smacchia, I decided to start learning more about Code Contracts. What better way to learn them than to work with them? So, I added them to my MVCFlashMessages project.

What Code Contracts Are

Code contracts are assertions in your code, much like unit tests are assertions. They provide both runtime and compile-time checking of conditions. Contracts are a debugging tool. They help direct you to finding (and eliminating) bugs. They are an elegant way of saying, “I expected this condition to be true. If it is, then go on. If not, I am going to throw an exception.”

Let’s look at an example from the FlashMessageCollection class.

public class FlashMessageCollection
{
  private readonly TempDataDictionary storage;

  public FlashMessageCollection(TempDataDictionary storage)
  {
    Contract.Requires<ArgumentNullException>(storage != null);
    this.storage = storage;
  }

  // snip
}

The .Requires<>() line is expecting that storage not be null. If the condition fails, an ArgumentNullException is thrown. It is functionally equivalent to this very common line of code.

if (storage == null)
  throw new ArgumentNullException("storage");

The fun part about code contracts is, of course, their integration into Visual Studio. The code contract can be statically checked; the if-then-throw exception cannot. This means that VS gives you hints that what you are about to do could possibly throw an exception. Neat!

There are three types of contracts you should be aware of.

  1. Preconditions
  2. Postconditions
  3. Assertions

Preconditions

Preconditions check state before a method starts. Usually, this is all the parameter checks or internal state validations that happen at the start of a method.

Here’s another example, this time requiring a valid value for the indexer.

public FlashMessage this[int index]
{
  get
  {
    Contract.Requires<ArgumentOutOfRangeException>(0 <= index && index < this.Count);
    EnsureFlashMessagesIsInitialized();
    return flashMessages[index];
  }
}

There’s nothing too fancy about .Requires(). As developers, we are pretty used to checking input parameters.

Postconditions

Postconditions are promises about the return values of methods. This is a newer concept that code contracts introduce. While they appear at the top of a method, they are making a statement about the value the method will return.

public int Count
{
  get
  {
    Contract.Ensures(Contract.Result<int>() >= 0);
    EnsureFlashMessagesIsInitialized();
    return flashMessages.Count;
  }
}

The above line of code says the following.

I promise that the value I am about to return will be greater than or equal to 0. If not, I am going to throw an exception at runtime.

The only other place where I have used an .Ensures() clause is to state that a return value will never be null.
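
Such a clause looks like this. The method here is an illustrative sketch in the style of FlashMessageCollection, not code lifted from the project:

```csharp
using System.Diagnostics.Contracts;
using System.Linq;

public FlashMessage GetFirstMessage()
{
  // Promise callers that they never need a null check on the result.
  Contract.Ensures(Contract.Result<FlashMessage>() != null);
  EnsureFlashMessagesIsInitialized();
  return flashMessages.First();
}
```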

Thus, code contracts fulfill the design requirement that code should fail as early as possible.

Assertions

Assertions happen in the middle of your code and are neither a precondition nor a postcondition. Again, you’re just checking that some condition is true. Unlike preconditions and postconditions, which can only appear at the beginning of a method block, assertions can appear anywhere in your code.

private void EnsureFlashMessagesIsInitialized()
{
  // If there are any flash messages already in the collection, do not do this
  // step again, or you will risk populating the collection twice.
  if (flashMessages.Any())
    return;

  IEnumerable<FlashMessage> objectFromStorage = storage.GetFlashMessages();
  Contract.Assert(objectFromStorage != null);
  flashMessages.AddRange(objectFromStorage);
}

We know that if we try to .AddRange(null), we will get an ArgumentNullException, so let’s add a contract to make sure that doesn’t happen.

This really isn’t so different from checking a null value early.

// 1. With code contracts, you will get a ContractException instead of a
//    NullReferenceException.
Contract.Assert(obj != null);

// 2. Or you can use a classic null check.
if (obj == null)
    throw new NullReferenceException("The obj is null!");

The above two examples accomplish the same thing, aside from the exception type that gets thrown.

What Code Contracts Are Not

Code contracts are a debugging tool. They are not a replacement for unit tests. Good code contracts and good unit tests work together.

Have you ever said to yourself or your team something like the following?

What’s the point of unit tests? We can’t possibly cover every scenario that would ever happen!

That’s where code contracts can really come into play. Have you ever seen the bowling kata? It’s basically a way to teach TDD. Imagine how we would add contracts to a bowling game.

public class Game
{
  private int currentFrame = 1;
  private int currentScore = 0;
  private List<int> rolls = new List<int>();

  public int CurrentFrame
  {
    get
    {
      Contract.Ensures(0 <= Contract.Result<int>() && Contract.Result<int>() <= 10);
      return currentFrame;
    }
  }

  public int CurrentScore
  {
    get
    {
      Contract.Ensures(0 <= Contract.Result<int>() && Contract.Result<int>() <= 300);
      return currentScore;
    }
  }

  public void Roll(int pins)
  {
    Contract.Requires<ArgumentOutOfRangeException>(0 <= pins && pins <= 10);
    rolls.Add(pins);
    // etc.
  }
}

We are still going to write tests. We are still going to follow all the rules of TDD. However, without knowing anything else about what a Game is going to look like, I absolutely know that the above contracts must always be true. A bowling frame of 11 or a score of 301 is never correct.
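
Assuming runtime contract checking is enabled in the build, a unit test can even drive the precondition directly. This test is my own sketch, not part of the kata:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class GameContractTests
{
  [Test]
  public void Rolling_more_than_ten_pins_violates_the_precondition()
  {
    var game = new Game();

    // With runtime checking on, Requires<ArgumentOutOfRangeException>
    // throws before any game state is touched.
    Assert.Throws<ArgumentOutOfRangeException>(() => game.Roll(11));
  }
}
```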

Code Contracts Are Cancerous

If you turn on the feature “Fail build on warnings,” be prepared to spend some time working through cascading contract errors. Here’s an example.

My FlashMessageCollection constructor has a code contract on it that says the TempDataDictionary can never be null, so the IDE flags the call site with a warning (the purple squiggly line in Visual Studio). I can eliminate that warning by adding an additional contract.

Contract.Requires<ArgumentNullException>(controller != null);
Contract.Requires<ArgumentException>(controller.TempData != null);
FlashMessageCollection flashMessageCollection = new FlashMessageCollection(controller.TempData);
FlashMessage flashMessage = new FlashMessage(key, message);
flashMessageCollection.Add(flashMessage);

The only problem with this is that my FlashMessage constructor also has contracts on the key and message parameters. Since the static checker makes no guarantees about what gets passed to a public method, I need to satisfy these contracts as well. To eliminate all the build warnings, this is what my method finally looks like.

public static void Flash(this Controller controller, string key, string message)
{
  Contract.Requires<ArgumentNullException>(controller != null);
  Contract.Requires<ArgumentException>(controller.TempData != null);
  Contract.Requires<ArgumentNullException>(key != null);
  Contract.Requires<ArgumentNullException>(message != null);
  FlashMessageCollection flashMessageCollection = new FlashMessageCollection(controller.TempData);
  FlashMessage flashMessage = new FlashMessage(key, message);
  flashMessageCollection.Add(flashMessage);
}

Thus, the “cancer” of contracts. Much like calling something dynamic or using async/await, once you put contracts in one part of your code, the contracts will start to spread. In this example, I have more contracts than lines of production code. This isn’t a bad thing. Remember that contracts enforce the design rule that you should fail as early as possible. It is something you should be aware of if you treat build warnings as errors.
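
If a boundary Requires starts to feel heavy, one alternative worth knowing about is Contract.Assume, which tells the static checker to accept a condition at a single point. This is a variation I am sketching, not the code from MVCFlashMessages:

```csharp
using System;
using System.Diagnostics.Contracts;
using System.Web.Mvc;

public static void Flash(this Controller controller, string key, string message)
{
  Contract.Requires<ArgumentNullException>(controller != null);
  Contract.Requires<ArgumentNullException>(key != null);
  Contract.Requires<ArgumentNullException>(message != null);

  // Instead of another Requires, tell the static checker to take
  // TempData's non-nullness on faith here. The warning stops at this
  // line rather than cascading out to every caller of Flash().
  Contract.Assume(controller.TempData != null);

  var flashMessageCollection = new FlashMessageCollection(controller.TempData);
  flashMessageCollection.Add(new FlashMessage(key, message));
}
```

Contract.Assume trades a compile-time guarantee for a runtime assertion, so it is best used sparingly.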

Happy coding!

Why the Repository?

The great news is that we have myriad ways to communicate, and the word is getting out there! As time marches on, developers are asking questions. Those questions are getting answered on blogs and StackOverflow. The problem, though, is that these wonderful communication tools encourage shortcuts.

I am finishing up work on a series of ASP.NET MVC projects. The work I’m seeing looks something like this…

public ActionResult NewItem(ItemViewModel form)
{
  var itemsRepository = new ItemsRepository();
  var item = ConvertViewModelToEntity(form);
  itemsRepository.Insert(item);
  return RedirectToAction("ListItems");
}

Mostly, it’s not so terrible. I think we can all agree that we’ve seen a lot worse.

The problem here is that this type of code doesn’t really provide us anything. When we look into the ItemsRepository, we see something like this.

public class ItemsRepository
{
  public Item[] GetAllItems()
  {
    using (SqlConnection connection = OpenConnection())
    {
      // etc.
    }
  }

  public Item GetItem(int id)
  {
    using (SqlConnection connection = OpenConnection())
    {
      // etc.
    }
  }

  public int Insert(Item item)
  {
    using (SqlConnection connection = OpenConnection())
    {
      // etc.
    }
  }

  public void Update(Item item)
  {
    using (SqlConnection connection = OpenConnection())
    {
      // etc.
    }
  }

  private static SqlConnection OpenConnection()
  {
    var connection = new SqlConnection(ConfigurationManager.ConnectionStrings[0].ConnectionString);
    connection.Open();
    return connection;
  }
}

Pay no attention to the fact that it is late 2013 and we are writing .NET 1.1 style data access.

This developer has pushed everything down one layer, but hasn’t made anything easier to test or decoupled any class from another. Furthermore, the way the data access layer is written, we cannot even code against the IDbConnection interface, because parts of it depend directly on the SqlConnection class.

Why do we write interfaces? Why do we inject dependencies? Why do we isolate classes? Why do we write small methods? Testing! These tools, these SOLID principles, we use these practices for a reason. Ultimately, they exist to improve code quality. If you’re not writing testable code, then what’s the point?
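
As a sketch of where those principles lead, extracting an interface and injecting it leaves the controller testable without a database. The interface name and constructor injection here are illustrative, not code from the project:

```csharp
using System.Web.Mvc;

// Illustrative interface; the concrete ItemsRepository would implement it.
public interface IItemsRepository
{
  int Insert(Item item);
}

public class ItemsController : Controller
{
  private readonly IItemsRepository itemsRepository;

  // The repository is injected, so a unit test can pass a fake or a mock
  // and assert that Insert was called, with no SqlConnection in sight.
  public ItemsController(IItemsRepository itemsRepository)
  {
    this.itemsRepository = itemsRepository;
  }

  [HttpPost]
  public ActionResult NewItem(ItemViewModel form)
  {
    var item = ConvertViewModelToEntity(form);
    itemsRepository.Insert(item);
    return RedirectToAction("ListItems");
  }
}
```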

Related Posts

Data Access Still Matters

This blog post was inspired by Bill Karwin’s book SQL Antipatterns: Avoiding the Pitfalls of Database Programming. I read this over Thanksgiving weekend. If you are a developer, and you do database work, this is on the must-read list.

I agreed with just about everything in his book except a few little details. In Chapter 4, “ID Required,” Mr. Karwin lays out two antipatterns.

  1. The primary key is always an autoincrementing column named id.
  2. The primary key is not necessary when there is another unique restriction present naturally in the data.

Regarding the First

The first condition is something like this.

CREATE TABLE employees (
  id INT NOT NULL IDENTITY(1, 1),
  last_name VARCHAR(64) NOT NULL,
  first_name VARCHAR(64) NOT NULL,
  department_id INT NOT NULL,
  -- ...
  CONSTRAINT [pk_employees] PRIMARY KEY CLUSTERED ([id] ASC)
);

I agree that not every table requires a primary key named id. There are lots of situations where not using id or not having autoincrementing values is helpful. However, understand that with technologies like NHibernate with fluent mapping or Rails’ ActiveRecord, it is terribly damn convenient for the developer.

NHibernate with explicit mapping won’t much matter, since you must define the exact purpose of each and every column. It’s only when you’re using some of the implicit mapping magic that this comes into play.
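
For instance, with Fluent NHibernate’s explicit mapping, nothing hinges on the column being named id; you state the key outright. This mapping is my own sketch, and the Employee class is assumed:

```csharp
using FluentNHibernate.Mapping;

// Explicit fluent mapping: the primary key column can be named anything,
// because the mapping declares it rather than relying on convention.
public class EmployeeMap : ClassMap<Employee>
{
  public EmployeeMap()
  {
    Table("employees");
    Id(x => x.EmployeeId).Column("employee_id").GeneratedBy.Identity();
    Map(x => x.LastName).Column("last_name");
    Map(x => x.FirstName).Column("first_name");
  }
}
```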

Also, if you’re using PostgreSQL, there is some syntactic sugar you can use when joining tables.

-- Using id
SELECT e.id AS employee_id, e.first_name, e.last_name, SUM(te.total_hours) AS sum_total_hours
FROM employees e
  INNER JOIN time_entries te ON e.id = te.employee_id
WHERE te.entry_date = '2013-06-16'
GROUP BY e.id, e.first_name, e.last_name

-- Using table-specific columns
SELECT e.employee_id, e.first_name, e.last_name, SUM(te.total_hours) AS sum_total_hours
FROM employees e
  INNER JOIN time_entries te USING (employee_id)
WHERE te.entry_date = '2013-06-16'
GROUP BY e.employee_id, e.first_name, e.last_name

In the above example, you can use the USING keyword to join two tables on columns whose names match. If you’re explicit about calling the employee ID employee_id instead of id, you can take advantage of this syntax.

Consider your data access. If you think you’ll be handcrafting a lot of SQL, then maybe this syntactic sugar is worth it. Are you using a database where this even matters? Are you using a data access tool (NHibernate with fluent mapping, ActiveRecord) that takes advantage of the primary key being called id? There’s a lot to consider here beyond, “Don’t do it.”

Regarding the Second

Mr. Karwin also describes the following as an antipattern.

CREATE TABLE users (
  user_id INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
  username VARCHAR(64),
  -- ...
);
CREATE UNIQUE INDEX [unique_username] ON [users] ([username]);

He says that this table has two unique identifiers, since both user_id and username uniquely identify a row. So I tried this in my Portfolio project. I have the ability to create tags and apply these tags to events. The slug for each tag is unique. So I tried dropping the id column in favor of just using the slug column as the primary key. I made sure to turn on cascading updates every place a tag was referenced. Imagine my surprise at the error message I got when I tried to edit a tag.

OK, I wasn’t surprised. I knew this would happen. But only because I’ve been burned by it before.

NHibernate won’t let you change the primary key of a loaded record. As it turns out, neither will Entity Framework or ActiveRecord. So I would offer up just one little addendum to the second antipattern. It is fine to use a naturally unique column as the primary key if that value is never going to change, depending on what kind of data access technology you are using in your application. Are usernames allowed to change? Are tag slugs going to be modified? Are you sure employees never get their social security numbers wrong? If so, I wouldn’t recommend using those columns as primary keys, with or without cascading updates.