# The Easiest WebAPI Authorization Tutorial Ever

So, it took a lot of scouring of the internets, but I was finally able to piece together the bare minimum necessary to create your own authorization client in WebAPI.

### Step 1: Create a DelegatingHandler

The DelegatingHandler is a class that will be attached to all requests coming into your API. This is going to be the easiest authorization handler ever. Here are the rules.

1. If there is a token, any token at all, the request is valid.
2. Otherwise, the request is unauthorized.

Yes, this is simple. In reality, you’d want to check the value of the token against something, at the very least a local database. That is left as an exercise for the reader.
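The handler itself appears to have been lost in migration. A minimal sketch that implements the two rules above (the class name `TokenAuthorizationHandler` is my own invention) might look like this:

```csharp
public class TokenAuthorizationHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Rule 2: no token at all means the request is unauthorized.
        var authorization = request.Headers.Authorization;
        if (authorization == null || string.IsNullOrWhiteSpace(authorization.Parameter))
        {
            return request.CreateResponse(HttpStatusCode.Unauthorized);
        }

        // Rule 1: any token at all, and the request continues down the pipeline.
        return await base.SendAsync(request, cancellationToken);
    }
}
```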

### Step 2: Add the Handler to Your Configuration

Somewhere, you probably have a class that sets up your WebAPI configuration. You need to add this line to your configuration.
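The registration line itself is missing; assuming the handler class above is named `TokenAuthorizationHandler`, it would be:

```csharp
config.MessageHandlers.Add(new TokenAuthorizationHandler());
```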

### Step 3: Add the Authorization Header

For this to work, you’ll need to add an authorization header to all of your requests. It needs to be in the following format.
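The original format line hasn’t survived; since the handler only cares that a token parameter is present, a Bearer-style header is a reasonable assumption:

```
Authorization: Bearer {your-token-here}
```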

Here’s a sample. (No, don’t actually use this token. It’s just the SHA-256 hash of “Hello, World!”.)
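The sample itself is gone; for illustration, the SHA-256 hash of “Hello, World!” gives a header like:

```
Authorization: Bearer dffd6021bb2bd5b0af676290809ec3a53191dd81c7f70a4b28688a362182986f
```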

### Step 4: Test With and Without an Authorization Header

Fire up Fiddler. Let’s create a request without an authorization header.

The request.
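(The original screenshot is gone; a raw request along these lines, with a hypothetical local URL, would reproduce it:)

```
GET http://localhost:12345/api/values HTTP/1.1
Host: localhost:12345
```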

The response. We can see the 401 Unauthorized coming back from the server.
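(Roughly what the missing screenshot showed:)

```
HTTP/1.1 401 Unauthorized
Content-Length: 0
```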

Now, let’s look at the same request with an authorization header.
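(Again, a sketch in place of the screenshot, using a token in the format above:)

```
GET http://localhost:12345/api/values HTTP/1.1
Host: localhost:12345
Authorization: Bearer dffd6021bb2bd5b0af676290809ec3a53191dd81c7f70a4b28688a362182986f
```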

The response. Now with more 200 OK.
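(The successful response, roughly:)

```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
```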

We’re successfully grabbing the token from the request. Everything else is left as an exercise for the reader!

# Unit Testing an MVC Authorize Attribute

When working with unit tests, I usually consider it a code smell when you have more fixture setup than production code. Today, I ran into just that condition.

This class has been written for a while, but it wasn’t covered by any tests. I decided to change that. First, I’ll show you the class. The class under question is called SessionBasedAuthorizeAttribute. It looks to the ASP.NET session to get the username, instead of relying on a forms token.
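The class listing didn’t survive here. Based on the description (and the summary two paragraphs down: pull a value from session, fetch a user, set the user), a sketch in this spirit would be; the session key, repository, and user type are my guesses:

```csharp
public class SessionBasedAuthorizeAttribute : AuthorizeAttribute
{
    protected override bool AuthorizeCore(HttpContextBase httpContext)
    {
        // Look to the ASP.NET session instead of a forms token.
        var username = httpContext.Session["username"] as string;
        if (string.IsNullOrEmpty(username))
        {
            return false;
        }

        // Fetch the user from the database and stash it for later use.
        var user = userRepository.FindByUsername(username); // hypothetical repository field
        httpContext.Items["user"] = user;
        return user != null;
    }
}
```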

Writing your own AuthorizeAttribute isn’t difficult. It’s just a few lines, really, to pull a value from session, fetch a user from the database, and set the user. But what about unit testing?

By the way, there’s even more setup code that happens as a part of the MvcTest.SetupControllerContext() method call, but I had already written that for another purpose.

### 13 lines of production code. 25 lines of fixture setup.

It turns out, that’s much more difficult, because we are relying on a lot of the inner guts of MVC. I had to pull up the source code for the OnAuthorize method to see what all the NullReferenceExceptions were. When mocking out the HttpContext, you also have to mock out the Session (knew that) and Items (didn’t know that) properties. There are also some ActionDescriptor values that must be set for OnAuthorize to work, and I have never worked with those classes before, either.
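With Moq, for instance, the context setup described above looks roughly like this (the names are mine, not the post’s):

```csharp
var httpContext = new Mock<HttpContextBase>();
var session = new Mock<HttpSessionStateBase>();

// Session must be mocked (knew that)...
httpContext.Setup(c => c.Session).Returns(session.Object);
session.Setup(s => s["username"]).Returns("testuser");

// ...and so must Items (didn't know that).
httpContext.Setup(c => c.Items).Returns(new Dictionary<object, object>());
```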

I suppose the first conclusion is to praise the powers at Microsoft for open sourcing the MVC framework. Without access to the source, I would have never been able to write these tests. Even with the source, the terrible nature of the NullReferenceException made this much more difficult than it ought to be. Any time you have a chain of w.x.y.z, finding which of those threw the exception becomes a total PITA. Yeah, I know. The Law of Demeter is in play, too.

Check out Code Contracts. It makes this kind of in-code documentation much easier.

The second conclusion is the value of this kind of test. I’ve already had one coworker respond with, “this is where I begin to question the value add of unit testing.” I kinda have to agree with him. This was a hard test to write, even though the production code itself was quite easy. On one extreme, some people say that if it is important enough to write, then it is important enough to test. I think I agree with that sentiment. I believe in the value of testing. I’m just not sure what I could have done to make this testing easier and still remain within the bounds set by the MVC framework.

# BCrypt Is Slow

For some reason, this just made my day.

See those yellow dots? That’s NCrunch telling me that these tests are slow. It’s BCrypt. It’s supposed to be slow! That’s the whole point of using BCrypt. If you’re storing passwords, it’s the only hashing algorithm you should be using.
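If you haven’t used it, BCrypt.Net usage looks like this (the work factor of 12 is my example, not a recommendation from the post):

```csharp
// Hashing is deliberately expensive; raise the work factor as hardware gets faster.
string hash = BCrypt.Net.BCrypt.HashPassword("correct horse battery staple", 12);

// Verification re-hashes the candidate password and compares in one call.
bool valid = BCrypt.Net.BCrypt.Verify("correct horse battery staple", hash);
```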

# JavaScript Objects

This is always a good topic to review. There are just too many developers who don’t know what they are doing when it comes to writing good JavaScript. The code for this project has been published to GitHub.

### Understanding Scope

The first thing we must understand is how variables are scoped in JavaScript. When it comes to JS, there’s only one rule.

Variables are scoped to the function that defines them.

There’s no concept of block scope like in C# or Java. This means that we’re going to make use of something called a functional closure.
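A quick demonstration of that rule (the function name is mine):

```javascript
function scopeDemo() {
    if (true) {
        var x = "declared inside the if block";
    }
    // x is still visible here: var is scoped to the function, not the block.
    return x;
}
```

In C# or Java, referencing `x` outside the `if` block would be a compile error; in JavaScript it works fine.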

### The Specs

Specs are a wonderful way of defining how a class should work. Don’t tell me this is BDUF or not how TDD is supposed to work. Both of these methodologies would say to only write one test, then write the code. Repeat until you are finished. This is why I refuse to strictly follow a dogma. If you know the specs up front, then write the specs.

The following tests are written with the Jasmine unit testing framework.
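The spec listing is missing here; reconstructed from the behavior described later in the post, the Jasmine specs would have read something like this (the exact expectations are my guesses):

```javascript
describe("Speaker", function () {
    it("defaults the speaker's name to Ann", function () {
        var speaker = new Speaker();
        expect(speaker.options.name).toEqual("Ann");
    });

    it("writes to the buffer when speaking", function () {
        var speaker = new Speaker();
        speaker.say("Hello");
        expect(speaker.buffer.messages.length).toEqual(1);
    });
});
```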

### Declaring the Constructor

Yes, we had to type the name of the class three times and the word function twice. I’m sorry.
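The snippet the line above describes (the class name three times, `function` twice) is the classic constructor-in-a-closure pattern:

```javascript
var Speaker = (function () {

    function Speaker() {
    }

    return Speaker;

}());
```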

### The Functional Closure

The above code example is a functional closure. The more you work with good JavaScript, the more you’ll see this pattern. Yes, it’s a lot of character noise. I’m sorry Ruby developers.

The above line creates an anonymous function that is executed immediately. In this example, we are creating a new Speaker function and then returning that function. The functional closure will let us hide variables that we don’t want exposed outside of the class.

This will let us create new instances of our Speaker class.

### Objects Are Instances of Functions

What syntax makes function Speaker() any different from function speaker()? Nothing at all. The only reason that we are capitalizing Speaker is by convention of those who have come before us.

We also think our speaker should have a name. I like to know to whom I am speaking.
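A sketch of the constructor at this stage, with a simple positional argument (the post moves to an options object in the next section):

```javascript
var Speaker = (function () {

    function Speaker(name) {
        this.name = name;
    }

    return Speaker;

}());

var speaker = new Speaker("Ann");
```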

We can now test that the speaker really does have a name attribute.

### Constructor Options

We should get used to using an options object in our JavaScript. Instead of having an explicit list of arguments, we accept a single options object as a method parameter. Since JavaScript has no compiler to catch argument mismatches, this allows us to be very clear about the expected inputs to our methods.

Having an options parameter also lets us set some defaults. Both jQuery and Underscore have an extend method that lets us merge settings. They both do the exact same thing. For no particular reason, I’m choosing Underscore over jQuery.

So, by default, our speaker will be named “Ann,” but we can still change it by adding a name attribute to our options argument.

Also, notice that the defaults are private to the functional closure. There is no way to access these defaults from outside the Speaker object. While we can access speaker.options.name, we cannot, in any way, access speaker.defaults.
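Putting the last two paragraphs together, a self-contained sketch (with a tiny hand-rolled stand-in for `_.extend` so it runs without Underscore):

```javascript
// Stand-in for _.extend / $.extend: copies source properties onto target.
function extend(target, source) {
    for (var key in source) {
        if (Object.prototype.hasOwnProperty.call(source, key)) {
            target[key] = source[key];
        }
    }
    return target;
}

var Speaker = (function () {

    // Private: nothing outside the closure can reach these defaults.
    var defaults = { name: "Ann" };

    function Speaker(options) {
        this.options = extend(extend({}, defaults), options || {});
    }

    return Speaker;

}());
```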

To expose a public function, we need to use the prototype attribute.
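For example (the method name `greet` is my own; the post’s speaker presumably spoke through its buffer):

```javascript
var Speaker = (function () {

    function Speaker(name) {
        this.name = name;
    }

    // Public: attached to the prototype, shared by every instance.
    Speaker.prototype.greet = function () {
        return "Hello, I am " + this.name + ".";
    };

    return Speaker;

}());
```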

### Duck Typing

I’m not going to get into duck typing today. A lot has already been said by developers much smarter than I. Check out Eric Lippert’s post and Phil Haack’s first and second posts. Instead, here’s the short, short version…

I’m going to call something - a method or property - and you had better just “work.”

That’s all we are going to do with the output buffer in this case. I’m going to call a .write() method. I don’t care what you give me, but I’m going to call that method.

We’re going to go back to our constructor here.

- AlertBuffer: Writes to window.alert.
- ConsoleBuffer: Writes to console.log.
- MemoryBuffer: Writes to an internal array.

I wrote all three versions. They are available in the repository linked above. Here’s the code for the MemoryBuffer.
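A sketch of the MemoryBuffer (the internal array name `messages` is my assumption):

```javascript
var MemoryBuffer = (function () {

    function MemoryBuffer() {
        this.messages = [];
    }

    // The one method our duck type requires: .write().
    MemoryBuffer.prototype.write = function (message) {
        this.messages.push(message);
    };

    return MemoryBuffer;

}());
```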

We’ll set our buffer in the constructor of Speaker. We’ll go ahead and default to the MemoryBuffer. If you want the MemoryBuffer, then leave it alone. If not, then give us a new buffer when you create your Speaker instance.
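A sketch of that default, repeating the MemoryBuffer so the example is self-contained:

```javascript
var MemoryBuffer = (function () {
    function MemoryBuffer() {
        this.messages = [];
    }
    MemoryBuffer.prototype.write = function (message) {
        this.messages.push(message);
    };
    return MemoryBuffer;
}());

var Speaker = (function () {

    function Speaker(options) {
        options = options || {};
        // Duck typing: we only care that the buffer has a .write() method.
        this.buffer = options.buffer || new MemoryBuffer();
    }

    return Speaker;

}());
```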

### Creating a Private Function

Now we want to create a private function. This function needs to be exposed to the Speaker instance, but not outside of the instance. We can’t use a prototype for this, so how do we accomplish this? Answer: we just create a method inside of the function. This will scope it locally, preventing it from being exposed.
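The pattern looks like this (the helper name `formatMessage` and the `say` method are my stand-ins for whatever the post used):

```javascript
var Speaker = (function () {

    function Speaker(name) {
        this.name = name;
    }

    // Private: declared inside the closure, invisible outside of it.
    function formatMessage(name, message) {
        return name + " says: " + message;
    }

    Speaker.prototype.say = function (message) {
        return formatMessage(this.name, message);
    };

    return Speaker;

}());
```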

With that, we will now pass all of our specs.

# Relying on Unreliable IDs

Relying on IDs means that we are depending on a value set by the database. This may be an autoincrementing value — a SQL Server identity or an Oracle sequence.

### Why This Happens

This anti-pattern occurs because it is thought to be easier to hard code values into the software than to make changes to the database schema. When the schema is wrong, fix it!

### Example: Include/Exclude by Value

In this case, this particular department is a virtual department, not a real department, and we don’t want it displayed in this particular UI screen.
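The original snippet is missing; it would have been something in this shape (entity and context names are mine):

```csharp
// Anti-pattern: excluding the virtual department by its identity value.
var departments = context.Departments
    .Where(d => d.Id != 6)
    .ToList();
```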

However, in this program, the ID column is a SQL Server identity. The value of 6 was assigned to the row in the developer’s environment. However, there are three more environments to consider: QA, Staging, and Production.

There are two faulty assumptions baked into this approach.

1. It assumes that data will be entered in the exact same order. It also assumes that if there was a mistake before, the mistake would be repeated, since a failed insert will increment the identity.
2. It assumes there will never be another department that we wish to exclude.

#### Solution: alter the schema to reflect the intent

The solution is to alter the database to reflect this logic.
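In SQL Server terms, the schema change would look something like this (the column and the department name are my examples):

```sql
-- Name the intent: a flag instead of a magic identity value.
ALTER TABLE departments
    ADD is_virtual BIT NOT NULL DEFAULT (0);

UPDATE departments SET is_virtual = 1 WHERE name = 'Virtual Department';
```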

The application code is now extremely easy to understand, and there is no reliance on “magic” values.
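Assuming the `is_virtual` flag above, the query becomes self-describing:

```csharp
var departments = context.Departments
    .Where(d => !d.IsVirtual)
    .ToList();
```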

### Example: Setting a Default Value

In this example, we are using a value to reflect that something should have a default setting.
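The missing snippet would have looked roughly like this (the `ticket` entity is my invention):

```csharp
// Anti-pattern: assuming the "new" status was inserted first and received identity 1.
ticket.StatusId = 1;
```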

Again, this assumes that the “new” status will always be present. If you, as the developer, have complete control over the environments and deployments, then this does work. Also, you’re luckier than I, since I have never been at a client where I have complete control over deployments.

#### Solution: alter the schema to reflect the intent

The application code is similarly trivial.
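Assuming an `is_default` flag on the status table, the lookup reads like the intent (and matches the `.First()` discussion below):

```csharp
ticket.StatusId = context.Statuses.First(s => s.IsDefault).Id;
```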

Now, whether the query for .IsDefault should be a call to .First() or .Single() is an argument I don’t really want to get into. Ideally, it is probably correct that only one value can ever be the default value. However, I’ve been places where I don’t have any control over those particular administration screens, and the code quality is rather dubious. In that case, I chose the .First() method because I didn’t want my code to break because of another developer’s poor decisions.

Yes, I realize my code will still break if there are no default statuses. The [status_id] column in the database is not nullable, so .FirstOrDefault() is just prolonging the issue.

Lesson Learned: Don’t rely on magic.

# DRYing Up Controller Actions With Responders

I read a few books over the long weekend. One of those books was Crafting Rails 4 Applications by José Valim. I’m going to say that this is a pretty advanced book on Rails, since it really digs into the Rails framework internals, especially when it comes to overriding the default functionality.

One of the topics is how to DRY up your controllers by writing custom responders. That got me thinking. Can we do the same thing in .NET?

### The “Before”

I have two controllers in my Portfolio application: a TagsController and a TasksController. These controllers contain the following lines.
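The listing is lost; given the responder built later in the post (mediator command, flash message, JSON result), each action would have looked approximately like this, with all helper names being my guesses:

```csharp
// TagsController (TasksController is nearly identical, with Task in place of Tag)
[HttpDelete]
public ActionResult Delete(int id)
{
    mediator.Send(new DeleteTagCommand { Id = id });
    this.Flash("success", "Tag deleted."); // hypothetical flash helper
    return Json(new { success = true });
}
```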

That is some pretty similar code. In fact, that’s about as un-DRY as you can get. One might say that’s going to happen if you try to create some sort of standard in what your controllers and actions should look like.

### Delete Is Easy

Notice that [HttpDelete] attribute on our Delete() methods? That means that the only way these methods are ever going to be called is via XHR. Currently, no browsers can create a DELETE request without the support of JavaScript. That means that our controller pattern has no reason to be complicated.

### The DeleteResponder

It’s pretty obvious what we’re trying to achieve here, so let’s start by creating a branch and stubbing out some tests.

> git checkout -b responder-spike


Writing out your tests first is a good way to define a spec, so let’s do that. We can already tell what happens in our controller action, and we want our responder to mimic this behavior.

What is our responder going to look like? I have a good idea what I think it should look like up front.

The easiest part is to send the command to the mediator. We can write that test fairly easily. We’re also going to need a fake controller.
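A sketch of that first test, using NUnit and Moq; every type name here is my stand-in:

```csharp
[Test]
public void Respond_sends_the_command_to_the_mediator()
{
    var mediator = new Mock<IMediator>();
    var controller = new FakeController(mediator.Object);
    var command = new DeleteTagCommand { Id = 42 };

    new DeleteResponder(controller).Respond(command);

    mediator.Verify(m => m.Send(command));
}
```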

This command implementation is extremely simple.
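Presumably something like this (name assumed):

```csharp
public class DeleteTagCommand
{
    public int Id { get; set; }
}
```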

Next, we want to make sure that the JSON result is returned appropriately.

This implementation is also fairly simple.
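A plausible shape for the responder at this point, leaving out the flash-message piece discussed next; how the mediator is resolved is omitted:

```csharp
public class DeleteResponder
{
    private readonly Controller controller;

    public DeleteResponder(Controller controller)
    {
        this.controller = controller;
    }

    public JsonResult Respond<TCommand>(TCommand command)
    {
        mediator.Send(command); // mediator obtained however your app resolves it
        return new JsonResult { Data = new { success = true } };
    }
}
```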

The only remaining piece is how we add the flash message. I’m actually going to leave this piece alone. If you want to see how I did it, then check out my version of the DeleteResponder.

### The “After”

Our TagsController and TasksController now look like the following:
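Assuming the responder sketched above, each action collapses to a single statement:

```csharp
[HttpDelete]
public ActionResult Delete(int id)
{
    return new DeleteResponder(this).Respond(new DeleteTagCommand { Id = id });
}
```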

Eight total lines of code become two total lines of code, and the common functionality is moved to a utility class.

Also, just because the delete is easy, that doesn’t mean that everything has to be difficult. Your controllers should be rather thin. They invoke a service. They do a query. They return an ActionResult. This responder pattern can be applied to any form post.

# Learning Code Contracts

Taking the advice of Patrick Smacchia, I decided to start learning more about Code Contracts. What better way to learn them than to work with them? So, I added them to my MVCFlashMessages project.

### What Code Contracts Are

Code contracts are assertions in your code, much like unit tests are assertions. They provide both runtime and compile-time checking of conditions. Contracts are a debugging tool. They help direct you to finding (and eliminating) bugs. They are an elegant way of saying, “I expected this condition to be true. If it is, then go on. If not, I am going to throw an exception.”

Let’s look at an example from the FlashMessageCollection class.
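The example itself is missing; from the description in the next paragraph, the constructor would have carried a precondition like this (the field assignment is my filler):

```csharp
public FlashMessageCollection(TempDataDictionary storage)
{
    Contract.Requires<ArgumentNullException>(storage != null);
    this.storage = storage;
}
```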

The .Requires<>() line is expecting that storage not be null. If the condition fails, an ArgumentNullException will be thrown. It is functionally equivalent to this very common line of code.
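That is, the familiar guard clause:

```csharp
if (storage == null)
{
    throw new ArgumentNullException("storage");
}
```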

The fun part about code contracts is, of course, their integration into Visual Studio. The code contract can be statically checked; the if-then-throw exception cannot. This means that VS gives you hints that what you are about to do could possibly throw an exception. Neat!

There are three types of contracts you should be aware of.

1. Preconditions
2. Postconditions
3. Assertions

#### Preconditions

Preconditions check state before a method starts. Usually, this is all the parameter checks or internal state validations that happen at the start of a method.

Here’s another example, this time requiring a valid value for the indexer.
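Reconstructed from the description (the backing store is my assumption):

```csharp
public FlashMessage this[int index]
{
    get
    {
        Contract.Requires(index >= 0);
        Contract.Requires(index < this.Count);
        return storage[index]; // hypothetical backing store
    }
}
```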

There’s nothing too fancy about .Requires(). As developers, we are pretty used to checking input parameters.

#### Postconditions

Postconditions are promises about the return values of methods. This is a newer concept for code contracts. While they appear at the top of a method, they are telling you information about the return clause.
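The line being discussed hasn’t survived; given the promise quoted below, it would have been an `.Ensures()` like this (the `Count` property is my framing):

```csharp
public int Count
{
    get
    {
        Contract.Ensures(Contract.Result<int>() >= 0);
        return storage.Count; // hypothetical backing store
    }
}
```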

The above line of code says the following.

I promise that the value I am about to return will be greater than or equal to 0. If not, I am going to throw an exception at runtime.

The only other place where I have used an .Ensures() clause is to state that a return value will never be null.
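Which looks like this (the result type is my example):

```csharp
Contract.Ensures(Contract.Result<FlashMessageCollection>() != null);
```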

Thus, code contracts fulfill the design requirement that code should fail as early as possible.

#### Assertions

Assertions happen in the middle of your code and are neither a precondition nor a postcondition. Again, you’re just checking that some condition is true. Unlike preconditions and postconditions, which can only appear at the beginning of a method block, assertions can appear anywhere in your code.

We know that if we try to .AddRange(null), we will get an ArgumentNullException, so let’s add a contract to make sure that doesn’t happen.
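A sketch of that contract (the method body is my filler):

```csharp
public void AddRange(IEnumerable<FlashMessage> messages)
{
    Contract.Assert(messages != null);

    foreach (var message in messages)
    {
        this.Add(message);
    }
}
```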

This really isn’t so different from checking a null value early.

The above two examples do the exact same thing.

### What Code Contracts Are Not

Code contracts are a debugging tool. They are not a replacement for unit tests. Good code contracts and good unit tests work together.

Have you ever said to yourself or your team something like the following?

What’s the point of unit tests? We can’t possibly cover every scenario that would ever happen!

That’s where code contracts can really come into play. Have you ever seen the bowling kata? It’s basically a way to teach TDD. Imagine how we would add contracts to a bowling game.
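The contracts themselves are missing here; given the frame-of-11 and score-of-301 examples below, they would have been along these lines (class shape is my sketch):

```csharp
public class BowlingGame
{
    public void Roll(int pins)
    {
        // A single roll can only knock down 0 through 10 pins.
        Contract.Requires(pins >= 0 && pins <= 10);
        // ...
    }

    public int Score()
    {
        // A game's score is always between 0 and 300.
        Contract.Ensures(Contract.Result<int>() >= 0);
        Contract.Ensures(Contract.Result<int>() <= 300);
        // ...
        return 0;
    }
}
```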

We are still going to write tests. We are still going to follow all the rules of TDD. However, without knowing anything else about what a Game is going to look like, I absolutely know that the above contracts must always be true. A bowling frame of 11 or a score of 301 is never correct.

### Code Contracts Are Cancerous

If you turn on the feature “Fail build on warnings,” be prepared to spend some time working through the issue of Contracts Cascading Errors. Here’s an example.

See that purple squiggly line? My FlashMessageCollection constructor has a code contract on it that says the TempDataDictionary can never be null. The IDE is giving me a warning. I can eliminate that warning by adding an additional contract.

The only problem with this is that my FlashMessage constructor also has contracts on the key and message parameters. Since there’s no guarantee on public methods, I need to cover these contracts as well. To eliminate all the build warnings, this is what my method finally looks like.
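The final listing is gone; from the warnings described, the method would have ended up covered in contracts roughly like this (the signature itself is a guess):

```csharp
public static void Flash(this Controller controller, string key, string message)
{
    Contract.Requires(controller != null);
    Contract.Requires(controller.TempData != null); // covers the FlashMessageCollection contract
    Contract.Requires(key != null);                 // covers the FlashMessage contracts
    Contract.Requires(message != null);

    var collection = new FlashMessageCollection(controller.TempData);
    collection.Add(new FlashMessage(key, message));
}
```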

Thus, the “cancer” of contracts. Much like calling something dynamic or using async/await, once you put contracts in one part of your code, the contracts will start to spread. In this example, I have more contracts than lines of production code. This isn’t a bad thing. Remember that contracts enforce the design rule that you should fail as early as possible. It is something you should be aware of if you treat build warnings as errors.

Happy coding!

# Why the Repository?

The great news is that we have myriad ways to communicate, and the word is getting out there! As time marches on, developers are asking questions. Those questions are getting answered on blogs and StackOverflow. The problem, though, is that these wonderful communication tools are shortcuts.

I am finishing up work on a series of ASP.NET MVC projects. The work I’m seeing looks something like this…
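Roughly this shape (the names are mine, matching the repository discussed below):

```csharp
public ActionResult Index()
{
    var repository = new ItemsRepository();
    var items = repository.GetAllItems();
    return View(items);
}
```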

Mostly, it’s not so terrible. I think we can all agree that we’ve seen a lot worse.

The problem here is that this type of code doesn’t really provide us anything. When we look into the ItemsRepository, we see something like this.
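A sketch of the kind of repository described (table and column names are my fillers):

```csharp
public class ItemsRepository
{
    public List<Item> GetAllItems()
    {
        var items = new List<Item>();

        // Hard dependency on SqlConnection; nothing here can be mocked or injected.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT id, name FROM items", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Map each column by hand, .NET 1.1 style.
                    items.Add(new Item
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1)
                    });
                }
            }
        }

        return items;
    }
}
```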

Pay no attention to the fact that it is late 2013 and we are writing .NET 1.1 style data access.

This developer has pushed everything down one layer, but hasn’t made anything easier to test or decoupled any class from another. Furthermore, the way the data access layer is written, we cannot even code against an IDbConnection interface. We are using pieces that depend upon the SqlConnection class.

Why do we write interfaces? Why do we inject dependencies? Why do we isolate classes? Why do we write small methods? Testing! These tools, these SOLID principles, we use these practices for a reason. Ultimately, they exist to improve code quality. If you’re not writing testable code, then what’s the point?

# Data Access Still Matters

This blog post was inspired by Bill Karwin’s book SQL Antipatterns: Avoiding the Pitfalls of Database Programming. I read this over Thanksgiving weekend. If you are a developer, and you do database work, this is on the must-read list.

I agreed with just about everything in his book except a few little details. In Chapter 4, “ID Required,” Mr. Karwin lays out two antipatterns.

1. The primary key is always an autoincrementing column named id.
2. The primary key is not necessary when there is another unique restriction present naturally in the data.

### Regarding the First

The first condition is something like this.
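That is, something like the following (SQL Server flavor; the column details are my example):

```sql
CREATE TABLE employees (
    id INT IDENTITY(1,1) PRIMARY KEY,
    name VARCHAR(100) NOT NULL
    -- every table gets an autoincrementing "id", whether it needs one or not
);
```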

I agree that not every table requires a primary key named id. There are lots of situations where not using id or not having autoincrementing values is helpful. However, understand that with technologies like NHibernate with fluent mapping or Rails’ ActiveRecord, it is terribly damn convenient for the developer.

NHibernate with explicit mapping won’t much matter, since you must define the exact purpose of each and every column. It’s only when you’re using some of the implicit mapping magic that this comes into play.

Also, if you’re using PostgreSQL, there is some syntactic sugar you can use when joining tables.
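For instance (assuming both tables carry a column named `department_id`):

```sql
-- With matching column names, USING replaces the usual ON clause:
SELECT *
FROM employees
JOIN departments USING (department_id);
```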

In the above example, you can use the USING keyword to join two tables on columns when the names match. If you’re explicit about calling the employee ID employee_id instead of id, you can take advantage of this syntax.

Consider your data access. If you think you’ll be handcrafting a lot of SQL, then maybe this syntactic sugar is worth it. Are you using a database where this even matters? Are you using a data access tool (NHibernate with fluent mapping, ActiveRecord) that takes advantage of the primary key being called id? There’s a lot to consider here beyond, “Don’t do it.”

### Regarding the Second

Mr. Karwin also describes the following as an antipattern.
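Something like this (PostgreSQL flavor; the column sizes are mine):

```sql
CREATE TABLE users (
    user_id  SERIAL PRIMARY KEY,
    username VARCHAR(50) NOT NULL,
    UNIQUE (username)
);
```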

He says that this table has two unique identifiers, since both user_id and username uniquely identify a row. So I tried this in my Portfolio project. I have the ability to create tags and apply these tags to events. The slugs for these tags are unique. So I tried dropping the id column in favor of just using the slug column as the primary key. I made sure to turn on cascading updates every place a tag was referenced. Imagine my surprise when I saw this error message when I tried to edit a tag.

OK, I wasn’t surprised. I knew this would happen. But only because I’ve been burned by it before.

NHibernate won’t let you change the primary key of a loaded record. As it turns out, neither will Entity Framework or ActiveRecord. So I would offer up just one little addendum to the second snafu. It is fine to use a real-valued column if that value isn’t going to change, depending on what kind of data access technology you are using in your application. Are usernames allowed to change? Are tag slugs going to be modified? Are you sure employees never get the social security numbers wrong? If so, I wouldn’t recommend using those as primary keys — with or without cascading.

# Fixing allowDefinition=MachineToApplication Build Error

Just a quick fix for this build error.

> It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS.

If you look in your project’s obj folder, you will probably see multiple outputs. If you haven’t made any changes to the default build configurations, then you will have both Debug and Release folders. This is the cause of this error.

The fix is simple: we just need to get rid of these folders. We’re going to add a pre-build event that will get rid of all obj folders except the one we are building. Paste the following into your Pre-build event command line.

rmdir /s /q $(ProjectDir)\obj
mkdir $(ProjectDir)\obj
mkdir $(ProjectDir)\obj\$(ConfigurationName)


Like so…

I need this on every project I work on. Time to put it out here instead of looking it up from scratch each time.