Running an ASP.NET MVC application on a fresh IIS8 install

IIS ships with ever-increasing amounts of security locked down by default, so you can’t publish a basic ASP.NET MVC website any more and expect IIS to host it without some additional work. The default config settings that MVC uses are locked down in IIS, so it issues an error when you try to navigate to your fresh site.

Initially you may get a screen that says something bland and non-descriptive, like “Internal Server Error” with no further information.

To get more detailed error messages, modify your web application’s web.config file and add the following line to the system.webServer section:

<httpErrors errorMode="Detailed" />
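
For context, here is where that line sits in a minimal web.config (a sketch; a real file will contain other sections too):

<configuration>
  <system.webServer>
    <httpErrors errorMode="Detailed" />
  </system.webServer>
</configuration>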

Now, you’ll get a more detailed error message. It will look something like this:

The key to the message is: This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="false".

The “Config Source” section of the error message will highlight in red the part that is denied.

In order to allow the web.config to modify the identified configuration element you need to find and modify the ApplicationHost.config file. It is located in C:\Windows\System32\inetsrv\config. You’ll need to be running as an Administrator-level user in order to modify the file.

Find the section group the setting belongs to, e.g.

<sectionGroup name="system.webServer">

Then the section itself:

<section name="handlers" overrideModeDefault="Deny" />

Then update overrideModeDefault to "Allow" in order to allow the web.config to override it.
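
The modified line then looks like this:

<section name="handlers" overrideModeDefault="Allow" />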

When you refresh the page for the website the error will be gone (or replaced with an error for the next section that you are not permitted to override).

Tip of the day: Quickly finding commented out code in C-like languages

This is a quick, rough-and-ready way of finding commented-out code in languages like C#, C++, JavaScript and Java. It won’t find all instances, but it will probably find most.

Essentially, you can use a regular expression to search through the source code looking for a specific pattern.

The following works in Visual Studio for searching C#. The same expression would also likely work in other similar languages where a comment starts with a double slash and a statement ends in a semi-colon.

So, for Visual Studio, press Ctrl+F to bring up the search bar, set the search option to “Regular Expression”, set it to look in the “Entire Solution” (or whatever extent you are searching) and then enter the regular expression into the search box:

^\s*//.*;\s*$

Visual Studio 2012 Search Bar

Then tell it to “Find all” and you’ll be presented with a list of lines of code that appear to be commented out.
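
As a quick illustration, here is what the expression does and does not pick up (hypothetical lines):

// var total = ComputeTotal();   <- matched: a whole-line comment ending in a semi-colon
// TODO: tidy this method up     <- not matched: no trailing semi-colon
var x = 1; // trailing comment   <- not matched: the line does not start with //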

Caveats

It is not foolproof. It does not find code commented out in the style:

/*
* Commented = out.code.here();
*/

It also does not find commented-out code on lines that do not end in a semi-colon, such as namespace, class and method declarations, nor on lines that simply contain scope operators (the { and } braces that define the scope of a namespace, class, method, etc.).

And although I’ve said C-like languages in the title, C originally didn’t have the double-slash comment style (it was only added in C99), although many languages based on it do, hence “C-like”.

Update: Updated the regular expression to ignore white space at the start and end of the line.

DROP/CREATE vs ALTER on SQL Server Stored Procedures

We have a number of migration scripts that modify the database as our applications progress. As a result we have occasion to alter stored procedures, but during development the migrations may be run out of sequence on various test and development databases. We sometimes don’t know if a stored procedure will exist on a particular database because of the state of various branches of development.

So, which is better: dropping a stored procedure and recreating it, or simply altering the existing one? And if we’re altering it, what needs to happen if it doesn’t already exist?

DROP / CREATE

This is the code we used for the DROP/CREATE cycle of adding/updating stored procedures:

IF EXISTS (SELECT * FROM sys.objects WHERE type = 'P' AND name = 'MyProc')
BEGIN
    DROP PROCEDURE MyProc;
END
GO

CREATE PROCEDURE MyProc
AS
    -- Body of proc here
GO

CREATE / ALTER

We recently changed the above for a new way of dealing with changes to stored procedures. What we do now is detect whether the procedure exists or not; if it doesn’t, we create a dummy. Regardless of whether the stored procedure previously existed or not, we can now ALTER it (whether it is an existing procedure or the dummy one we just created).

That code looks like this:

IF NOT EXISTS (SELECT * FROM sys.objects WHERE type = 'P' AND name = 'MyProc')
BEGIN
    EXEC('CREATE PROCEDURE [dbo].[MyProc] AS BEGIN SET NOCOUNT ON; END')
END
GO

ALTER PROCEDURE [dbo].[MyProc]
AS
    -- Body of proc here
GO

It may look counter-intuitive to create a dummy procedure just to immediately alter it, but it has some advantages over drop/create.

If an error occurs during the CREATE of a drop/create, and there had originally been a stored procedure to drop, you are now left without anything. With create/alter, if an error occurs when altering the stored procedure, the original is still available.

Altering a stored procedure keeps any security settings you may have on the procedure. If you drop it, you’ll lose those settings. The same goes for dependencies.
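
For example, a grant like the one below (SomeRole is a hypothetical role) survives an ALTER but is lost with a DROP, so with drop/create it would have to be re-applied every time the migration runs:

GRANT EXECUTE ON [dbo].[MyProc] TO SomeRole;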

Tip of the day: Forcing a file in to your Git repository

Normally, you’ll have a .gitignore file that defines what should not go into a git repository. If you find that there is a file or two that is ignored that you really need in the repository (e.g. a DLL dependency in a BIN folder for some legacy application that doesn’t do NuGet, or other package management) then you need a way to force that into Git.

The command you need is this:

git add --force <filename>

Replace <filename> with the file that you need to force in. This will force-stage the file ready for your next commit.

You can also use . in place of a file name to force in an entire directory, or you can use wild cards to determine what is staged.
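
For example, to force-stage a legacy binary dependency, or everything matching a wildcard (the paths here are purely illustrative):

git add --force bin/legacy/SomeDependency.dll
git add --force bin/*.dll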

Afterwards you can use

git status

to see that it has been staged.

Injecting a Dependency into an IHttpModule with Unity

We’re starting a new project, and as part of that we want to get better at certain things. One is unit testing the things we didn’t last time around because they were in hard-to-reach places. Pretty much most things that interact with ASP.NET have hard-to-reach places. Even ASP.NET MVC, which was supposed to be wonderful and much more unit testable than vanilla ASP.NET, has lots of places where this falls down completely. However, we’re gradually finding ways to overcome these obstacles.

In this post, I’m going to concentrate on custom IHttpModule implementations.

We have a custom IHttpModule that requires the services of another class. It is already set up in our IoC container and we just want to inject it into the module we’re writing. However, modules are instantiated by ASP.NET before our IoC framework can get to it.

How I got around this was by creating an additional module (an InjectorModule) that wires up all the other modules that need dependencies injected, using Unity’s BuildUp method to inject the dependencies into the existing objects.

Setting up the HttpApplication

The application object stores the container and implements an interface through which the InjectorModule can access the container.

public interface IHttpUnityApplication
{
    IUnityContainer UnityContainer { get; } 
}

And the Application class in the global.asax.cs file looks like this:

public class MvcApplication : System.Web.HttpApplication, IHttpUnityApplication
{
    // This is static because it is normal for ASP.NET to create
    // several HttpApplication objects (pooling) but only the first
    // will run Application_Start(), which is where this is set.
    private static IUnityContainer _unityContainer;

    protected void Application_Start()
    {
        _unityContainer = UnityBootstrapper.Initialise();
        // Do other initialisation stuff here
    }

    // This implements the IHttpUnityApplication interface
    public IUnityContainer UnityContainer
    {
        get { return _unityContainer; }
    }
}

The UnityBootstrapper initialises the container for MVC. It is created by the Unity.Mvc4 NuGet package (there’s also a Unity.Mvc3 package). You can read more about it here.

The InjectorModule

Next up, the InjectorModule is created:

public class InjectorModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Get the IoC container from the application class through
        // the common interface.
        var app = (IHttpUnityApplication) context;
        IUnityContainer container = app.UnityContainer;

        // Wire up each module that is registered with the IoC container
        foreach (var module in context.GetRegisteredModules(container))
            container.BuildUp(module.GetType(), module);
    }

    public void Dispose()
    {
    }
}

I’ve also been a wee bit sneaky and created an extension method on HttpApplication to work out which modules are registered with the container, so that the code above is a bit nicer. That code is:

using System.Collections.Generic;
using System.Linq;
using System.Web;
using Microsoft.Practices.Unity;

public static class HttpApplicationExtensions
{
    public static IEnumerable<IHttpModule> GetRegisteredModules(this HttpApplication context, IUnityContainer container)
    {
        var allModules = context.Modules.AllKeys.Select(k => context.Modules[k]);
        var registeredModules = allModules.Where(m => container.IsRegistered(m.GetType()));
        return registeredModules;
    }
}

Wiring it all up

The container must be told which modules have dependencies to inject and which properties to set, e.g.

container.RegisterType<MyCustomModule>(
    new InjectionProperty("TheDependentProperty"));

MyCustomModule is the class that implements the IHttpModule interface, and you need to supply an InjectionProperty for each of the properties through which the IoC containers will inject a dependency.
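
For illustration, the receiving end might look something like this (IMyService and TheDependentProperty are hypothetical names for your own service and property):

public class MyCustomModule : IHttpModule
{
    // Set by Unity during BuildUp, thanks to the InjectionProperty
    // registration above.
    public IMyService TheDependentProperty { get; set; }

    public void Init(HttpApplication context)
    {
        // The dependency has already been injected by the time this
        // runs, because the InjectorModule's Init() ran first.
    }

    public void Dispose()
    {
    }
}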

You can also decorate the properties with the [Dependency] attribute, but then you are just wiring in a dependency on the IoC container itself… which is not good.

Finally, this new module has to be wired up in the web.config file:

  <system.webServer>
    <modules>
      <add name="InjectorModule"
           type="Web.IoC.Unity.InjectorModule" />
      <!-- Other modules go here, after the Injector Module -->
    </modules>
  </system.webServer>

Putting the Injector module ahead of the other modules in the web.config means it gets a chance to run and inject the dependencies into the other modules before they are initialised.

Other considerations

The IHttpModule interface defines a method, Init(), that takes an HttpApplication as a parameter. Naturally, that’s difficult to mock out in a unit test.

What I did was to extract all the bits that I needed in the Init() method and pass them to another method to do the work. For example, HttpContext is easy to do because ASP.NET MVC provides an HttpContextWrapper and the method that is doing all the work just takes an HttpContextBase, which is easily mocked in a unit test.

public void Init(HttpApplication context)
{
    var wrapper = new HttpContextWrapper(context.Context);
    InitImpl(wrapper);
}

public void InitImpl(HttpContextBase httpContext)
{
    // Do stuff with the HttpContext via the abstract base class
}

Tip of the day: How to tell why your app couldn’t log on to SQL Server

When you get a login failure on SQL Server, the message you get back from SQL Server Management Studio, or in a .NET exception, is deliberately vague for security reasons: they don’t want to give away too much information, just in case.

For example, the exception message will be something like “Login failed for user ‘someUser’.” which doesn’t give you much of a clue as to what is actually happening. There could be a multitude of reasons that login failed.

If you want more information about why a login failed, you can open up the Event Viewer (the Windows Application log) on the machine that SQL Server is installed on and have a look. You’ll find a more detailed message there.

The fuller messages may be things like:

  • “Login failed for user ‘someUser’. Reason: Could not find a login matching the name provided. [CLIENT: <local machine>]”
  • “Login failed for user ‘someUser’. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]”
  • “Login failed for user ‘someUser’. Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>]”
    Note: This could be because the database doesn’t exist, or because the user doesn’t have permissions to the database.
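
The same messages are also written to the SQL Server error log, so you can search for them from a query window too. A sketch using the undocumented (but long-standing) sp_readerrorlog procedure, where 0 means the current log and 1 means the SQL Server log (as opposed to the SQL Agent log):

EXEC sp_readerrorlog 0, 1, 'Login failed';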

Linking Perforce Merge to Git

Git’s built-in merge conflict resolution is awful. Although all the information is there, it is difficult to use for all but the simplest of conflicts. Luckily, it is relatively easy to wire up third-party diff and merge tools to help.

Setting up as a Diff Tool

You can download the Perforce Visual Merge Tool here. The only part of the installer that is needed is the “Visual Merge Tool (P4Merge)”.

Perforce Installation Wizard – Feature Selection

To configure Git to use p4merge as the diff tool, the global config needs to be edited. The global config, on Windows 7 and 8, is found in C:\Users\<username>\.gitconfig

The following needs to be added:

[diff]
    tool = P4Merge
[difftool "P4Merge"]
    cmd = p4merge "$LOCAL" "$REMOTE"

The [diff] section sets up the default tool to use; you can configure as many tools as you like. The [difftool "toolname"] section sets up the options for a specific tool.
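
If you’d rather not edit the file by hand, the equivalent settings can also be made from the command line; these commands write to the same global config:

git config --global diff.tool P4Merge
git config --global difftool.P4Merge.cmd 'p4merge "$LOCAL" "$REMOTE"'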

Now, in Git Bash, you can type git difftool and it will show the diffs in the Perforce merge tool between the current file and the previous commit.

If you have multiple files that have changes it will prompt you one-by-one to view them in the diff tool.

If you’ve already staged the files (prior to a commit) then you’ll need to type git difftool --cached in order for them to show up.

If you wish to see just a specific file you can use git difftool name-of-file

Again, add the --cached option (just before the filename) if you’ve already staged the file prior to a commit.

Setting up as a Merge Tool

Open up the .gitconfig file, as above, and add the following sections, which are similar to those for the diff tool.

[merge]
    tool = P4Merge
[mergetool "P4Merge"]
    cmd = p4merge "$BASE" "$LOCAL" "$REMOTE" "$MERGED"
    keepTemporaries = false
    trustExitCode = false
    keepBackup = false
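
Again, the same settings can be applied from the command line instead of editing the file directly:

git config --global merge.tool P4Merge
git config --global mergetool.P4Merge.cmd 'p4merge "$BASE" "$LOCAL" "$REMOTE" "$MERGED"'
git config --global mergetool.P4Merge.keepTemporaries false
git config --global mergetool.P4Merge.trustExitCode false
git config --global mergetool.P4Merge.keepBackup false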

If you get a merge conflict when merging branches or pulling down from the remote repository you can now use git mergetool to merge the changes.

Getting Tortoise Git to work with GitHub repositories

In this post I’ll walk you through installing Tortoise Git in a way that allows it to interact easily with GitHub repositories.

Download msysgit

First off, download msysgit, a prerequisite for running Tortoise Git. (At the time of writing this was v1.8.3.)

For the installation, I mostly accepted all the default options. The only change I made was to allow the system’s PATH environment variable to be updated. This will be required for a later step.

I also left the default “Checkout Windows-style, commit Unix-style line endings”, which is equivalent to the git option core.autocrlf being set to true. You probably also want to set this if you don’t have it set already. GitHub also has an article on their site about file-specific options that you might want to include in a .gitattributes file in your repository.

If you have any existing repositories on your system you can now use GitBash to work with them. At the moment each command, however, will require you to type your user name and password.

Download Tortoise Git

Then download Tortoise Git (v1.8.4 at the time of writing). If you have Windows 8 you should go for the 64-bit edition. Again, I just accepted all the default installation options.

As with GitBash in the msysgit installation, once this is set up you’ll be able to work with any existing repositories, and again each operation will require a user name and password to be allowed.

Download the git-credential-winstore

GitHub has an article on how to set up password caching (skip to “password caching” for the download link) if you are using tools other than GitHub for Windows. The installer requires that the PATH variable contains the git bin folder. This will be the case if the option above was chosen when installing msysgit. I also found that a machine reboot was required before installing, as it didn’t immediately find git in the path after installing msysgit.

git-credential-winstore installs very quickly. It asks one slightly confusingly worded question: “Do you want to install git-credential-winstore to prompt for passwords?”. The correct answer is “yes”. It doesn’t mean that it will always prompt you instead of the command line or the GUI tool; it will only prompt for a password if it does not know the credentials to use. After that it uses what’s in its credential store, so you don’t get asked all the time.

When git-credential-winstore is installed it will create a [credential] section in your .gitconfig file, which should be at C:\Users\<username>\.gitconfig
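
The section simply points git at the helper executable. It will look something like this (the exact path depends on where the installer put it; this one is illustrative):

[credential]
    helper = !'C:/Users/<username>/AppData/Roaming/GitCredStore/git-credential-winstore.exe'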

Be aware, however, that GitHub for Windows does have a nasty habit of removing the [credential] section from the .gitconfig file. To get around this, copy the credential section to the gitconfig file in the msysgit directory (if you followed the installation defaults it will probably be in C:\Program Files (x86)\Git\etc). You’ll have to run as administrator in order to edit that gitconfig file due to its location.

If you have multiple users on your machine you may also want to move the installed location of git-credential-winstore, as it installs in your AppData directory. However, I’ve not tried this as I’m the only user on my machine.

You can now use GitBash and Tortoise Git with your GitHub repository.

Getting Started with AngularJS – The Application Module

As with all applications there has to be a starting point. Where does the application start? In AngularJS that starting point is the module.

And because a module is, well, modular, you can plug modules into each other to build the application, share components and so on.

Actually, I suppose in Angular it starts with a directive that points to the module to start with because, if you have more than one module, which one do you start with?

<html ng-app="angularCatalogue">
  
</html>

The ng-app directive bootstraps the application by telling AngularJS which module contains the root of the application.

A module is defined like this:

angular.module("angularCatalogue",[])

The name of the above module is "angularCatalogue", the name of the application, which is what is placed in the ng-app directive in the html element previously.

You can also pass, as a second parameter to module, an array of the names of other modules to inject. The modules don’t necessarily have to be loaded in any particular order, so it is okay to refer to a module that may not exist at that point.
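
For example, an application that also depends on the ngResource module (provided by the angular-resource.js script) would be declared like this:

angular.module("angularCatalogue", ["ngResource"]);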

The module function returns a Module object, which you can then set up as you need it. Typically an application will have some sort of configuration, controllers, directives, services and so on.

Wiring up the view

In the html you will need to indicate where the view is to be placed.

You can do this via the ng-view directive, which can look like this:

<ng-view></ng-view>

or

<div ng-view></div>

Everything inside the element will be replaced with the contents of the view.

The application then needs to be told where the view is. You can configure the application module with that information, like this:

angular.module("angularCatalogue") 
    .config(["$routeProvider", function($routeProvider){
        $routeProvider.when("/",
            {
                templateUrl:"/ngapp/templates/search.html",
                controller: "productSearchController"
            });
    }]);

The config call on the module allows the module to be configured. It takes an array consisting of the names of the objects to be injected into the configuration followed by the function that performs the configuration.

The function has a $routeProvider injected into it. This allows routing to be set up. In the example above a route is set up for the home page of the application ("/") that inserts the given template into the element designated as the view (ng-view) and uses the given controller.

I’ll move onto controllers in an upcoming post.

A note on the dependency injection

If you never minify your JavaScript you can get away with something like this:

angular.module('myApplication')
    .config(function($routeProvider){
        ...
     });

You’ll notice that there is no array; it is just taking a function. Angular can work out from the parameter names what needs to be injected. However, if the code is minified, most minifiers will alter the parameter names to save space, in which case Angular’s built-in dependency injection fails because it no longer knows what to resolve things to. Minifiers do not, however, minify string literals. If the string literals exist then Angular will use them to determine what gets resolved into which parameter position. The strings must match the position of their counterpart in the function parameters.

Therefore the minifier-friendly version of the previous snippet becomes:

angular.module('myApplication')
    .config(['$routeProvider', function($routeProvider){
        ...
     }]);

A note on naming conventions

You can name things what you like but AngularJS has some conventions reserved for itself.

  • Its own services are prefixed with a $ (dollar). Never name your services with a dollar prefix as your code may become incompatible with future versions of Angular.
  • Its own directives are prefixed with ng. Similarly, don’t name any of your directives with an ng prefix as it may clash with what’s in future versions of Angular.
  • In JavaScript everything is camel-cased (the first word is all lower case, subsequent words have the first letter capitalised); in the HTML, dashes separate the words. So if you create a directive called myPersonalDirective, when that directive is placed in HTML it becomes my-personal-directive.

Getting Started with AngularJS – Bundling the files

When you are building AngularJS apps you will probably want to store all your various controllers, directives, filters, etc. in different files to keep it all nicely separated and easy to manage. However, putting script blocks for all those files in your HTML is not efficient in the least. Not only do you have several round-trips to the server, the browser will also be downloading a lot of code that is designed to be readable and maintainable, potentially with lots of additional whitespace and comments.

If the back end of your application is using .NET then you can bundle together CSS and JavaScript files to make them more optimised.

For example, I have a small AngularJS prototype application that uses bundling so that, when it is run with the optimisations turned on, it needs fewer files and more compact JavaScript and CSS. The method that creates these bundles looks like this:

public static void RegisterBundles(BundleCollection bundles)
{
    bundles.Add(new StyleBundle("~/Content/base-styles.css")
        .Include("~/Content/bootstrap.css")
        .Include("~/Content/angular-ui.css")
        .Include("~/Content/angularCatalogue.css"));

    bundles.Add(new ScriptBundle("~/Scripts/base-frameworks.js")
        .Include("~/Scripts/jquery-{version}.js")
        .Include("~/Scripts/angular.js")
        .Include("~/Scripts/angular-resource.js")
        .Include("~/Scripts/angular-ui.js")
        .Include("~/Scripts/bootstrap.js"));

    bundles.Add(new ScriptBundle("~/Scripts/angular-catalogue.js")
        // Configure the Angular Application
        .Include("~/ngapp/app.js")

        // Filters
        .Include("~/ngapp/filters/idFilter.js")
        .Include("~/ngapp/filters/allBut.js")

        // The services
        .Include("~/ngapp/Services/colourService.js")
        .Include("~/ngapp/Services/brandService.js")
        .Include("~/ngapp/Services/productTypeService.js")
        .Include("~/ngapp/Services/productService.js")
        .Include("~/ngapp/Services/sizeService.js")

        // Directives
        .Include("~/ngapp/Directives/userFilter.js")
        .Include("~/ngapp/Directives/productDetailsDirective.js")

        // Controllers
        .Include("~/ngapp/Controllers/productSearchController.js")
        .Include("~/ngapp/Controllers/productDetailController.js")
        .Include("~/ngapp/Controllers/editProductController.js"));
}

This method is called from the Application_Start() method in global.asax.cs.
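
Assuming the method lives in a BundleConfig class, as in the default MVC project template, the call looks like this:

protected void Application_Start()
{
    // Other start-up code (routes, filters, etc.) goes here too.
    BundleConfig.RegisterBundles(BundleTable.Bundles);
}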

What this does is set up three bundles: one for the CSS, and two for JavaScript (one is a set of standard third-party libraries, the other is the AngularJS application itself).

In the layout or view you can then reference these bundles using the path passed in to the constructor. Like this:

<html>
  <head>
    <!-- Other bits go here -->
    @Styles.Render("~/Content/base-styles.css")
  </head>
  <body>
    @RenderBody()
    @Scripts.Render("~/Scripts/base-frameworks.js")
    @Scripts.Render("~/Scripts/angular-catalogue.js")
  </body>
</html>

Remember to use the tilde notation just like in the code that defines the bundles.

When the optimisations are turned off the scripts render as one script block per include. When the optimisations are turned on it outputs one script block per bundle. When the server receives a request for that script it resolves the name to the bundle and sends back an amalgamated and minified version of it. This loads much faster on the client as there are fewer round-trips to the server and it takes much less bandwidth.
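
A note on switching the optimisations on: by default they follow the compilation debug flag in web.config (debug="false" turns them on), or they can be forced on in code, which is handy for testing the bundles in a debug build:

// Forces bundling and minification even when <compilation debug="true" />
BundleTable.EnableOptimizations = true;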

Here’s what the two scenarios look like:

Optimisations turned off

This is what the two @Scripts.Render() blocks at the end of the HTML look like:

<script src="/Scripts/jquery-1.9.1.js"></script>
<script src="/Scripts/angular.js"></script>
<script src="/Scripts/angular-resource.js"></script>
<script src="/Scripts/angular-ui.js"></script>
<script src="/Scripts/bootstrap.js"></script>

<script src="/ngapp/app.js"></script>
<script src="/ngapp/filters/idFilter.js"></script>
<script src="/ngapp/filters/allBut.js"></script>
<script src="/ngapp/Services/colourService.js"></script>
<script src="/ngapp/Services/brandService.js"></script>
<script src="/ngapp/Services/productTypeService.js"></script>
<script src="/ngapp/Services/productService.js"></script>
<script src="/ngapp/Services/sizeService.js"></script>
<script src="/ngapp/Directives/userFilter.js"></script>
<script src="/ngapp/Directives/productDetailsDirective.js"></script>
<script src="/ngapp/Controllers/productSearchController.js"></script>
<script src="/ngapp/Controllers/productDetailController.js"></script>
<script src="/ngapp/Controllers/editProductController.js"></script>

And when this is rendered in the browser, the following calls are made.

There are 18 requests in the above example. 901 KB is transferred to the browser and it took 911 ms to complete loading everything. (This does not count the images, CSS or AJAX calls that are also downloaded as part of the page.)

Optimisations turned on

Now, compare the above to this representation of the same section of the page:

<script src="/Scripts/base-frameworks.js?v=oHeDdLNj8HfLhlxvF-JO29sOQaQAldq0rEKGzugpqe01"></script>
<script src="/Scripts/angular-catalogue.js?v=fF1y8sFMbNn8d7ARr-ft_HBP_vPDpBfWVNTMCseNPC81"></script>

And when rendered in the browser, it makes the following requests:

There are now just two requests, one for the base-framework bundle, and one for the angular-catalogue (our application code) bundle.

Because the bundling process minifies the files, the amount of data transferred is much smaller too, in this case 223 KB (a saving of 678 KB, or roughly 75%). For established frameworks that ship with a *.min.js version, the bundling framework will spot that convention and use the existing minified file. If it can’t find one it will minify the file for you.

And because there is less data to transfer and fewer network round-trips to wait for, the time to fully load the page is reduced to 618 ms (a saving of 293 ms; roughly ⅔ of the time of the previous request).

More information

There is a lot more to bundling than I’ve mentioned here. For a more in depth view of bundling read Scott Guthrie’s blog on Bundling and Minification Support.
