Recruitment Agents take note

A few times a week I get emails from recruitment agencies, and they are pretty much all along the same lines. The email seems to be a standard template that tells me absolutely nothing of importance about the job and gives me next to zero incentive to find out more.

I’m in a pretty great job at the moment that I’m really enjoying, so I’m not actually looking to move. Had this been about a year ago (before things got restructured) I would have moved if anyone had given me a reasonable incentive to do so. But based on the generic emails recruitment agents send out, which say nothing of consequence, it is better the devil you know than the devil you don’t. And I really don’t know.

So, here’s an example of something I received earlier this week:

From: <Name-of-agent>
Sent: <date>
To: Colin Mackay
Subject: Possible Synergy

Hi Colin,

We’ve not spoken before, I’m a head-hunter in the <technology-misspelled> development space and your name has came to light as a top talent in the <technology-misspelled> space.

I know your not actively on the market and I would not be contacting you if I didn’t feel I had something truly exceptional.

My role not only gives you interesting programme work in the <technology-misspelled> space but also strong career progression route in a growing business, work life balance, supportive environment, stability and a final salary pension. <name-of-city-I-live-in> based role.

Are you free for a discreet chat about this, what is the best time and number to call you on?

Kind Regards,

<name-of-agent>

<contact details>

This tells me very little. She has at least identified that I work with the relevant technology (although sometimes I think that might just be a fluke, given the number of emails I receive about things I’m not remotely competent in) and the city I live in, so I suppose that’s a good start.

Pretty much every recruitment agent sends out something similar. Every email I receive says the job is “truly exceptional”, “exciting” or that it’s an “amazing opportunity”. Those words are so overused that, more often than not, the email gets binned at that point. A lesson from many a primary school teacher trying to improve her pupils’ vocabulary is that they can’t use the word “nice” any more, and they’ll get marked down if they do.

Nothing here sells me on the idea that a change would be a good one, even though they acknowledged I’m not actively on the market.

The agent did not mention the type of company. Even if they can’t mention the name of the company at this stage, the following would be useful: Is it a software consultancy? A digital agency? A software house with a defined product? An internal software department in a larger company? Which industry does the company operate in?

Some of the answers might turn me off, but it is better to know now than to waste time finding out later. Some of the answers might pique my interest, which is obviously a good thing.

They mention the “<name-of-technology> space”. For the moment, we’ll ignore that it was misspelled (lots of technologies have strange ways of spelling or capitalising things, but it doesn’t take long to find out the official way).

They don’t really define what “XYZ space” actually means. There are so many subgroups of technology in that “space” that it could mean anything, including things I’m either unsuitable for or have no interest in. What’s the database technology (assuming there is one)? What’s the front-end technology (assuming there is one)? Or is the role wholly at one end or the other (e.g. mostly in the business logic, or mostly in the front end)? What tool sets and frameworks are involved? Include version numbers (e.g. Visual Studio 2012): I’m interested in progressing forward, but if they’re still on Visual Studio 2008 I’m not interested, and it’s better that you know that now. Is the company all single-vendor based (i.e. only using a tool if that vendor produced it) or do they use technologies from third parties (open source or commercial)?

There is nothing about training in the description they’ve provided. That would be a big bonus to me. I already spend in the region of £2000 a year keeping myself up-to-date (books, on-line videos, conferences, etc.), so it would be nice to find an employer that is genuinely interested in contributing in that area beyond buying the occasional book or giving me the occasional day off, outside of my annual leave, to attend a conference that I’m already paying for. After all, they are the ones benefiting from all that training. Occasionally emails do mention training, but it is sometimes couched in language that suggests reluctance (e.g. “as and when required by the business”); it’s there because the company or agent knows that mentioning training will attract potential candidates.

If the prospective company doesn’t provide training then I’d remind them that it is “better to train your developers and risk they leave, than keep them stupid and risk they stay”. If the prospective company has a really negative view of training then I really wouldn’t want to work for them – I have already worked for a company that seemed to proactively provide disincentives for any sort of training.

Finally, there is no mention about salary. While, on the whole, I’m more interested in other things, I do have a mortgage to pay. If the salary won’t cover my bills with enough left over for a nice holiday (it’s no fun sitting at home watching Jeremy Kyle on days off) then that would be a showstopper even if all other things were perfect.

Also, stating the salary as “£DOE” or “£market rate” is equally useless. Companies have a budget. They might say “£DOE” (depending on experience), but if it goes above their budget then the budget is still all they are going to offer. If that is not enough then it is better to know that up front than later on.

I’ve also been in situations where I’ve felt that the recruitment agent knew my salary expectation wasn’t going to fly with the hiring company, but strung me along for a bit until finally saying that they rejected my CV. It would be better to let potential recruits know up front without wasting everybody’s time.

While providing more information up front might reduce the interest from some potential candidates, at least they won’t waste their own valuable time, or the recruitment agent’s, pursuing something that is never going to come to anything. On the other hand, providing more information might be the catalyst that gets someone who is not actively looking to sit up and think about making that change.

Certainly, if I keep receiving generic emails like the one above, especially ones that acknowledge I’m not actively looking, then I’m never going to look unless my current employer does something to make me question why I’m there.

Injecting a Dependency into an IHttpModule with Unity

We’re starting a new project, and as part of that we want to get better at certain things. One is unit testing the things we didn’t test last time around because they were in hard-to-reach places. Pretty much everything that interacts with ASP.NET has hard-to-reach places. Even ASP.NET MVC, which was supposed to be wonderful and much more unit testable than vanilla ASP.NET, has lots of places where this falls down completely. However, we’re gradually finding ways to overcome these obstacles.

In this post, I’m going to concentrate on custom IHttpModule implementations.

We have a custom IHttpModule that requires the services of another class. That class is already set up in our IoC container and we just want to inject it into the module we’re writing. However, modules are instantiated by ASP.NET before our IoC framework can get to them.

The way I got around this was to create an additional module (an InjectorModule) that wires up all the other modules that need dependencies injected, using Unity’s BuildUp method to inject the dependencies into the existing objects.

Setting up the HttpApplication

The application object stores the container and implements an interface through which the InjectorModule can access the container.

public interface IHttpUnityApplication
{
    IUnityContainer UnityContainer { get; } 
}

And the Application class in the global.asax.cs file looks like this:

public class MvcApplication : System.Web.HttpApplication, IHttpUnityApplication
{
    // This is static because it is normal for ASP.NET to create
    // several HttpApplication objects (pooling) but only the first
    // will run Application_Start(), which is where this is set.
    private static IUnityContainer _unityContainer;

    protected void Application_Start()
    {
        _unityContainer = UnityBootstrapper.Initialise();
        // Do other initialisation stuff here
    }

    // This implements the IHttpUnityApplication interface
    public IUnityContainer UnityContainer
    {
        get { return _unityContainer; }
    }
}

The UnityBootstrapper initialises the container for MVC. It is created by the Unity.Mvc4 NuGet package (there’s also a Unity.Mvc3 package). You can read more about it here.

The InjectorModule

Next up, the InjectorModule is created:

public class InjectorModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Get the IoC container from the application class through
        // the common interface.
        var app = (IHttpUnityApplication) context;
        IUnityContainer container = app.UnityContainer;

        // Wire up each module that is registered with the IoC container
        foreach (var module in context.GetRegisteredModules(container))
            container.BuildUp(module.GetType(), module);
    }

    public void Dispose()
    {
    }
}

I’ve also been a wee bit sneaky and created an extension method on HttpApplication to work out which modules are registered, so that the code above is a bit nicer. That code is:

public static class HttpApplicationExtensions
{
    public static IEnumerable<IHttpModule> GetRegisteredModules(this HttpApplication context, IUnityContainer container)
    {
        var allModules = context.Modules.AllKeys.Select(k => context.Modules[k]);
        var registeredModules = allModules.Where(m => container.IsRegistered(m.GetType()));
        return registeredModules;
    }
}

Wiring it all up

The container must be told which modules have dependencies to inject and which properties to set, e.g.

container.RegisterType<MyCustomModule>(
    new InjectionProperty("TheDependentProperty"));

MyCustomModule is the class that implements the IHttpModule interface, and you need to supply an InjectionProperty for each of the properties through which the IoC container will inject a dependency.

You can also decorate the properties with the [Dependency] attribute, but then you are just wiring in a dependency on the IoC container itself… which is not good.

Finally, this new module has to be wired up in the web.config file:

  <system.webServer>
    <modules>
      <add name="InjectorModule"
           type="Web.IoC.Unity.InjectorModule" />
      <!-- Other modules go here, after the Injector Module -->
    </modules>
  </system.webServer>

By putting the injector module ahead of the other modules in the web.config, it gets a chance to run and inject the dependencies into the other modules before they are initialised.

Other considerations

The IHttpModule interface defines a method, Init(), that takes an HttpApplication as a parameter. Naturally, that’s difficult to mock out in a unit test.

What I did was to extract all the bits that I needed in the Init() method and pass them to another method to do the work. For example, HttpContext is easy to deal with because ASP.NET MVC provides an HttpContextWrapper, and the method that does all the work just takes an HttpContextBase, which is easily mocked in a unit test.

public void Init(HttpApplication context)
{
   var wrapper = new HttpContextWrapper(context.Context);
   InitImpl(wrapper);
}

public void InitImpl(HttpContextBase httpContext)
{
    // Do stuff with the HttpContext via the abstract base class
}

Tip of the Day: Make Outlook 2013 display the Weather in Celsius

At work we’re upgrading to Outlook 2013. One of the new features is that the Calendar will display the weather for the next few days.

Outlook 2013 Displaying the Weather in Fahrenheit

If you like this, that’s great. However, most of the world uses Celsius, yet out of the box Outlook displays Fahrenheit (regardless of the locale set up on your machine). In fact, it also defaults to New York as the city.

Changing the Weather Location

It is easy enough to change the weather location. Just click on the name of the city and you get a drop down box.

Weather Location Drop Down Box

If the city you want is not in the list, just select “Add Location” and you get a search box to type in the city that you want.

Search for Weather Location

Change the Temperature Scale to Celsius

Changing the temperature scale to Celsius is a little bit more involved. First click the “File” menu in the top left corner of the window.

File Menu Button in Outlook 2013

Then select the “Options” button from the new menu.

File Menu Details

This will display the options dialog. First, click the “Calendar” button in the left-hand menu. Then scroll down to the bottom of the options that are presented; the Weather options are last. You’ll see that you can then select the temperature scale (“Celsius” or “Fahrenheit”) that you want, or even turn the weather off entirely.

Options Dialog

Then you can just “OK” the dialog and the weather will be updated to Celsius. Much more civilised.

Much more civilised, the temperature in Celsius

Tip of the Day: Getting TFS to remember you each time you open Visual Studio

Because the TFS server where I work is not on the domain, it will prompt you for credentials each time you log in (unless you’ve previously used the web access portal and checked the “Remember Me” option). If you don’t want to use the web access portal, you can still get TFS to remember your credentials so it doesn’t ask you each time you log in.

Go into the control panel and select “User Accounts”.

In the next screen click “Manage Windows Credentials”.

In the next screen click “Add Windows Credential”.

Then type your details into the form and press “OK”.

You’ll see your new set of credentials appear in the Credential Manager page:

Now when you open up Visual Studio it won’t prompt you for your credentials all the time.

Tip of the day: How to tell why your app couldn’t log on to SQL Server

When you get a login failure on SQL Server, the message you get back from SQL Server Management Studio, or in a .NET exception, is deliberately vague for security reasons; they don’t want to give away too much information, just in case.

For example, the exception message will be something like “Login failed for user ‘someUser’.”, which doesn’t give you much of a clue as to what is actually happening. There could be a multitude of reasons the login failed.

If you want more information about why a login failed, you can open up the Event Viewer on the machine that SQL Server is installed on and have a look. You’ll find a more detailed message there.

The fuller messages may be things like:

  • “Login failed for user ‘someUser’. Reason: Could not find a login matching the name provided. [CLIENT: <local machine>]”
  • “Login failed for user ‘someUser’. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]”
  • “Login failed for user ‘someUser’. Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>]”
    Note: This could be because the database doesn’t exist, or because the user doesn’t have permissions to the database.

Linking Perforce Merge to Git

Git’s built-in merge conflict resolution is awful. Although all the information is there, it is difficult to use for all but the simplest of conflicts. Luckily, it is relatively easy to wire up third-party diff and merge tools to help.

Setting up as a diff tool

You can download the Perforce Visual Merge Tool here. The only part of the installer that is needed is the “Visual Merge Tool (P4Merge)”.

Perforce Installation Wizard - Feature Selection

To configure Git to use p4merge as the diff tool, the global config needs to be edited. The global config, on Windows 7 and 8, is found in c:\users\<username>\.gitconfig

The following needs to be added:

[diff]
    tool = P4Merge
[difftool "P4Merge"]
    cmd = p4merge "$LOCAL" "$REMOTE"

The [diff] section sets up the default tool to use; you can configure as many tools as you like. The [difftool "toolname"] section sets up the options for a specific tool.

Now, in Git Bash, you can type git difftool and it will show the diffs between the current file and the previous commit in the Perforce merge tool.

If you have multiple files with changes, it will prompt you one-by-one to view them in the diff tool.

If you’ve already staged the files (prior to a commit) then you’ll need to type git difftool --cached in order for them to show up.

If you wish to see just a specific file you can use git difftool name-of-file.

Again, add the --cached option (just before the filename) if you’ve already staged the file prior to a commit.

Setting up as a Merge Tool

Open up the .gitconfig file, as above, and add the following sections, which are similar to those for the diff tool.

[merge]
    tool = P4Merge
[mergetool "P4Merge"]
    cmd = p4merge "$BASE" "$LOCAL" "$REMOTE" "$MERGED"
    keepTemporaries = false
    trustExitCode = false
    keepBackup = false

If you get a merge conflict when merging branches or pulling down from the remote repository you can now use git mergetool to merge the changes.

Getting Tortoise Git to work with GitHub repositories

In this post I’ll walk you through installing Tortoise Git in a way that allows it to interact easily with GitHub repositories.

Download msysgit

First off, download msysgit, a prerequisite for running Tortoise Git. (At the time of writing this was v1.8.3.)

For the installation, I mostly accepted the default options. The only change I made was to allow the system’s PATH environment variable to be updated. This will be required for a later step.

I also left the default “Checkout Windows-style, commit Unix-style line endings”, which is equivalent to the git option core.autocrlf being set to true. You probably want this set if you don’t have it already. GitHub also has an article on their site about file-specific options that you might want to include in a .gitattributes file in your repository.

If you have any existing repositories on your system you can now use Git Bash to work with them. At the moment, however, each command will require you to type your user name and password.

Download Tortoise Git

Then download Tortoise Git (v1.8.4 at the time of writing). If you are running a 64-bit version of Windows you should go for the 64-bit edition. Again, I just accepted all the default installation options.

As with Git Bash in the msysgit installation, once this is set up you’ll be able to work with any existing repositories, and again each operation will require a user name and password.

Download the git-credential-winstore

GitHub has an article on how to set up password caching (skip to “password caching” for the download link) if you are using tools other than GitHub for Windows. The tool requires that the PATH variable includes git’s bin folder, which will be the case if the option above was selected when installing msysgit. I also found that a machine reboot was required before installing it, as it didn’t immediately find git in the path after installing msysgit.

git-credential-winstore installs very quickly. It asks one slightly confusingly worded question: “Do you want to install git-credential-winstore to prompt for passwords?” The correct answer is “yes”. This doesn’t mean that it will always prompt you instead of the command line or the GUI tool; it will only prompt for a password if it does not already know the credentials to use. After that it uses what’s in its credential store, so you don’t get asked all the time.

When git-credential-winstore is installed it will create a [credentials] section in your .gitconfig file, which should be at C:\Users\<username>\.gitconfig.

Be aware, however, that GitHub does have a nasty habit of removing the [credentials] section of the .gitconfig file. To get around this, copy the credentials section to the gitconfig file in the msysgit directory (if you followed the installation defaults it will probably be in C:\Program Files (x86)\Git\etc). You’ll have to run your editor as administrator in order to edit that gitconfig file, due to its location.

If you have multiple users on your machine you may also want to move the installed location of git-credential-winstore, as it installs in your AppData directory. However, I’ve not tried this as I’m the only user on my machine.

You can now use GitBash and Tortoise Git with your GitHub repository.

Getting Started with AngularJS – The Application Module

As with all applications there has to be a starting point. Where does the application start? In AngularJS that starting point is the module.

And because a module is, well, modular, you can plug modules into each other to build the application, share components and so on.

Actually, I suppose in Angular it starts with a directive that points at the module to start with, because if you have more than one module, which one do you start with?

<html ng-app="angularCatalogue">
  
</html>

The ng-app directive bootstraps the application by telling AngularJS which module contains the root of the application.

A module is defined like this:

angular.module("angularCatalogue",[])

The name of the above module is "angularCatalogue", the name of the application, which is what was placed in the ng-app directive on the html element previously.

You can also pass, as the second parameter to module, an array of other modules to inject. The modules don’t have to be loaded in any particular order, so it is okay to refer to a module that may not exist at that point.

The module function returns a Module object, which you can then set up as you need it. Typically an application will have some sort of configuration, controllers, directives, services and so on.

Wiring up the view

In the html you will need to indicate where the view is to be placed.

You can do this via the ng-view directive, which can look like this:

<ng-view></ng-view>

or

<div ng-view></div>

Everything inside the element will be replaced with the contents of the view.

The application then needs to be told where the view is. You can configure the application module with that information, like this:

angular.module("angularCatalogue") 
    .config(["$routeProvider", function($routeProvider){
        $routeProvider.when("/",
            {
                templateUrl:"/ngapp/templates/search.html",
                controller: "productSearchController"
            });
    }]);

The config call on the module allows the module to be configured. It takes an array consisting of the names of the objects to be injected into the configuration, followed by the function that performs the configuration.

The function has a $routeProvider injected into it, which allows routing to be set up. In the example above a route is set up from the home page of the application ("/") that inserts the given template into the element designated as the view (ng-view) and wires it up to the given controller.

I’ll move onto controllers in an upcoming post.

A note on the dependency injection

If you never minify your JavaScript you can get away with something like this:

angular.module('myApplication')
    .config(function($routeProvider){
        ...
     });

You’ll notice that there is no array; the config call just takes a function. Angular can work out from the parameter names what needs to be injected. However, if the code is minified, most minifiers will rename the parameters to save space, and Angular’s built-in dependency injection framework fails because it no longer knows what to resolve each parameter to. Minifiers do not, however, minify string literals. If the string literals exist, Angular uses them to determine what gets resolved into which parameter position. The strings must match the positions of their counterparts in the function’s parameter list.
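The way this breaks can be sketched in plain JavaScript. This is an illustration of the idea only, not Angular’s actual injector code, and the getParamNames function is made up for the demonstration:

```javascript
// Illustration only -- not Angular's real injector. Implicit injection
// works by parsing parameter names out of the function's source text.
function getParamNames(fn) {
  var match = fn.toString().match(/\(([^)]*)\)/);
  return match[1].split(',')
    .map(function (s) { return s.trim(); })
    .filter(function (s) { return s.length > 0; });
}

// Before minification the parameter name survives...
console.log(getParamNames(function ($routeProvider) {}));  // ["$routeProvider"]

// ...but a minifier renames the parameter, and the injector can no
// longer tell which service was wanted.
console.log(getParamNames(function (a) {}));  // ["a"]
```

Because "a" matches no registered service, resolution fails; the string-literal annotation survives minification precisely because minifiers leave string literals alone.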

Therefore the minifier friendly version of the previous snippet becomes:

angular.module('myApplication')
    .config(['$routeProvider', function($routeProvider){
        ...
     }]);

A note on naming conventions

You can name things what you like but AngularJS has some conventions reserved for itself.

  • Its own services are prefixed with a $ (dollar). Never name your services with a dollar prefix, as your code may become incompatible with future versions of Angular.
  • Its own directives are prefixed with ng. Similarly, don’t name any of your directives with an ng prefix, as it may clash with what’s in future versions of Angular.
  • In JavaScript everything is camel cased (the first word is all lower case, subsequent words have the first letter capitalised); in the HTML, dashes separate the words. So if you create a directive called myPersonalDirective, when that directive is placed in HTML it becomes my-personal-directive.
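That last name translation is mechanical, and can be sketched in a few lines. The toDashCase helper is made up for illustration; it is not Angular’s actual normalisation code:

```javascript
// Hypothetical helper showing how a camel-cased directive name maps
// to its dash-separated form in the HTML.
function toDashCase(name) {
  return name.replace(/[A-Z]/g, function (c) {
    return '-' + c.toLowerCase();
  });
}

console.log(toDashCase('myPersonalDirective'));  // my-personal-directive
console.log(toDashCase('ngApp'));                // ng-app
```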

Getting Started with AngularJS – Bundling the files

When you are building AngularJS apps you will probably want to store your various controllers, directives, filters, etc. in different files to keep it all nicely separated and easy to manage. However, putting script blocks for all those files in your HTML is not efficient in the least. Not only do you have several round-trips to the server, the browser will be downloading a lot of code that is designed to be readable and maintainable, potentially with lots of additional whitespace and comments.

If the back end of your application is using .NET then you can bundle together CSS and JavaScript files to make them more optimised.

For example, I have a small AngularJS prototype application that uses bundling so that, when it is run with the optimisations turned on, it needs fewer files and serves more compact JavaScript and CSS. The method that creates these bundles looks like this:

public static void RegisterBundles(BundleCollection bundles)
{
    bundles.Add(new StyleBundle("~/Content/base-styles.css")
        .Include("~/Content/bootstrap.css")
        .Include("~/Content/angular-ui.css")
        .Include("~/Content/angularCatalogue.css"));

    bundles.Add(new ScriptBundle("~/Scripts/base-frameworks.js")
        .Include("~/Scripts/jquery-{version}.js")
        .Include("~/Scripts/angular.js")
        .Include("~/Scripts/angular-resource.js")
        .Include("~/Scripts/angular-ui.js")
        .Include("~/Scripts/bootstrap.js"));

    bundles.Add(new ScriptBundle("~/Scripts/angular-catalogue.js")
    // Configure the Angular Application
      .Include("~/ngapp/app.js")

    // Filters
      .Include("~/ngapp/filters/idFilter.js")
      .Include("~/ngapp/filters/allBut.js")

    // The services
      .Include("~/ngapp/Services/colourService.js")
      .Include("~/ngapp/Services/brandService.js")
      .Include("~/ngapp/Services/productTypeService.js")
      .Include("~/ngapp/Services/productService.js")
      .Include("~/ngapp/Services/sizeService.js")

  // Directives
      .Include("~/ngapp/Directives/userFilter.js")
      .Include("~/ngapp/Directives/productDetailsDirective.js")

  // Controllers
      .Include("~/ngapp/Controllers/productSearchController.js")
      .Include("~/ngapp/Controllers/productDetailController.js")
      .Include("~/ngapp/Controllers/editProductController.js"));
}

This method is called from the Application_Start() method in global.asax.cs.

What this does is set up a number of bundles; in this case, three. One is for the CSS, and two are for JavaScript (one a set of standard third-party libraries, the other the AngularJS application itself).

In the layout or view you can then reference these bundles using the path passed into the constructor, like this:

<html>
  <head>
    <!-- Other bits go here -->
    @Styles.Render("~/Content/base-styles.css")
  </head>
  <body>
    @RenderBody()
    @Scripts.Render("~/Scripts/base-frameworks.js")
    @Scripts.Render("~/Scripts/angular-catalogue.js")
  </body>
</html>

Remember to use the tilde notation just like in the code that defines the bundles.

When the optimisations are turned off, the scripts render as one script block per include. When the optimisations are turned on, it outputs a single script block per bundle. When the server receives a request for that script it resolves the name to the matching bundle, then sends back an amalgamated and minified version of the scripts. This loads much faster on the client as there are fewer round-trips to the server and it takes much less bandwidth.

Here’s what the two scenarios look like:

Optimisations turned off

This is what the two @Script.Render() blocks at the end of the HTML look like:

<script src="/Scripts/jquery-1.9.1.js"></script>
<script src="/Scripts/angular.js"></script>
<script src="/Scripts/angular-resource.js"></script>
<script src="/Scripts/angular-ui.js"></script>
<script src="/Scripts/bootstrap.js"></script>

<script src="/ngapp/app.js"></script>
<script src="/ngapp/filters/idFilter.js"></script>
<script src="/ngapp/filters/allBut.js"></script>
<script src="/ngapp/Services/colourService.js"></script>
<script src="/ngapp/Services/brandService.js"></script>
<script src="/ngapp/Services/productTypeService.js"></script>
<script src="/ngapp/Services/productService.js"></script>
<script src="/ngapp/Services/sizeService.js"></script>
<script src="/ngapp/Directives/userFilter.js"></script>
<script src="/ngapp/Directives/productDetailsDirective.js"></script>
<script src="/ngapp/Controllers/productSearchController.js"></script>
<script src="/ngapp/Controllers/productDetailController.js"></script>
<script src="/ngapp/Controllers/editProductController.js"></script>

And when this is rendered in the browser, the following calls are made.

There are 18 requests in the above example. 901kb is transferred to the browser and it took 911ms to complete loading everything. (The above does not show images, CSS or AJAX calls that are also downloaded as part of the page.)

Optimisations turned on

Now, compare the above to this representation of the same section of the page:

<script src="/Scripts/base-frameworks.js?v=oHeDdLNj8HfLhlxvF-JO29sOQaQAldq0rEKGzugpqe01"></script>
<script src="/Scripts/angular-catalogue.js?v=fF1y8sFMbNn8d7ARr-ft_HBP_vPDpBfWVNTMCseNPC81"></script>

And when rendered in the browser, it makes the following requests:

There are now just two requests, one for the base-framework bundle, and one for the angular-catalogue (our application code) bundle.

Because the bundling process minifies the files, the amount of data transferred is much smaller too; in this case 223kb (a saving of 678kb, or roughly 75%). For established frameworks that ship with a *.min.js version, the bundling framework will spot that convention and use the existing minified file. If it can’t find one, it will minify the file for you.

And because there is less data to transfer and fewer network round-trips to wait for, the time to fully load the page has been reduced to 618ms (a saving of 293ms; the page now loads in roughly ⅔ of the time of the previous request).

More information

There is a lot more to bundling than I’ve mentioned here. For a more in depth view of bundling read Scott Guthrie’s blog on Bundling and Minification Support.

Authenticating Across Virtual Directories

Suppose you have an application set up in a way similar to the previous post: essentially a domain that contains a number of web applications hosted in various virtual directories on the server.

In my previous example, the root of the domain contains the application that handles account management (the sign in, password retrieval, account set-up, etc.); however, each of the applications in each virtual directory must know who is logged in.

Assuming you are using .NET’s built-in authentication mechanisms, this is unlikely to work out of the box. There is some configuration that needs to happen to allow each of the applications to sync up.

Setting up the web.config file

In MVC 4, Forms Authentication must be set up explicitly:

<system.web>
  <authentication mode="Forms">
  </authentication>
  <!-- Other config settings -->
</system.web>

To ensure that each application can decrypt the authentication ticket in the cookie, they must all share the same machine key, as by default IIS assigns each application its own encryption and decryption keys for security.

<system.web>
  <machineKey decryptionKey="10FE3824EFDA35A7EE5E759651D2790747CEB6692467A57D" validationKey="E262707B8742B1772595A963EDF00BB0E32A7FACA7835EBE983A275A5307DEDBBB759B8B3D45CA44DA948A51E68B99195F9405780F8D80EE9C6AB46B9FEAB876" />
  <!-- Other config settings -->
</system.web>

Do not use the above key – it is only an example.

These two settings must be shared across each of the applications sitting in the one domain.

Generating a Machine Key

To generate a machine key:

  • Open “Internet Information Services (IIS) Manager” on your development machine.
  • Set up a dummy application so that it won’t affect anything else on the machine.
  • Open up the Machine Key feature in the ASP.NET section

    IIS Manager

  • (1) In the “Validation key” section uncheck “Automatically generate at runtime” and “Generate a unique key for each application”.

    Machine Key Configuration in the IIS Manager

  • (2) In the “Decryption key” section uncheck “Automatically generate at runtime” and “Generate a unique key for each application”.
  • (3) Click “Generate Keys” (this will change the keys randomly each time it is pressed)
  • (4) Click “Apply”

The web.config for this web application will now contain the newly generated machine key in the system.web section. Copy the complete machineKey element to the applications that are linked together.

There is an “Explore” link on the site’s main page in IIS that opens Windows Explorer on the folder containing the web site and its web.config file.
