Linking Perforce Merge to Git

Git’s built-in merge conflict resolution is awful. Although all the information is there, it is difficult to use for all but the simplest of conflicts. Luckily, it is relatively easy to wire up third-party diff and merge tools to help.

Setting up as a Diff Tool

You can download the Perforce Visual Merge Tool from the Perforce website. The only part of the installer that is needed is the “Visual Merge Tool (P4Merge)”.

Perforce Installation Wizard – Feature Selection

To configure Git to use P4Merge as the diff tool, the global config needs to be edited. The global config on Windows 7 and 8 is found in C:\Users\<username>\.gitconfig

The following needs to be added:

[diff]
    tool = P4Merge
[difftool "P4Merge"]
    cmd = p4merge "$LOCAL" "$REMOTE"

The [diff] section sets the default tool to use; you can configure as many tools as you like. The [difftool "toolname"] section sets the options for a specific tool.

Now, in Git Bash, you can type git difftool and it will show the differences between the current file and the previous commit in the Perforce merge tool.

If multiple files have changes it will prompt you, one by one, to view them in the diff tool.

If you’ve already staged the files (prior to a commit) then you’ll need to type git difftool --cached in order for them to show up.

If you wish to see just a specific file you can use git difftool name-of-file.

Again, add the --cached option (just before the filename) if you’ve already staged the file prior to a commit.
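To summarise, the common invocations look like this (readme.txt is a hypothetical file name):

git difftool                        # diff all changed files, one at a time
git difftool --cached               # diff the staged files
git difftool readme.txt             # diff a single file
git difftool --cached readme.txt    # diff a single staged file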

Setting up as a Merge Tool

Open up the .gitconfig file, as above, and add the following sections, which are similar to those for the diff tool.

[merge]
    tool = P4Merge
[mergetool "P4Merge"]
    cmd = p4merge "$BASE" "$LOCAL" "$REMOTE" "$MERGED"
    keepTemporaries = false
    trustExitCode = false
    keepBackup = false

If you get a merge conflict when merging branches or pulling down from the remote repository you can now use git mergetool to merge the changes.
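A typical session looks something like this (feature-branch is a hypothetical branch name):

git merge feature-branch    # the merge stops, reporting conflicts
git mergetool               # opens P4Merge for each conflicted file in turn
git commit                  # concludes the merge once everything is resolved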

Getting Tortoise Git to work with GitHub repositories

In this post I’ll walk you through installing Tortoise Git in a way that allows it to interact easily with GitHub repositories.

Download msysgit

First off, download msysgit, a prerequisite for running Tortoise Git. (At the time of writing this was v1.8.3.)

For the installation, I mostly accepted the default options. The only change I made was to allow the system’s PATH environment variable to be updated. This will be required for a later step.

I also left the default “Checkout Windows-style, commit Unix-style line endings”, which is equivalent to the git option core.autocrlf being set to true. You probably want to turn this on if you don’t have it set already. GitHub also has an article on their site about file-specific options that you might want to include in a .gitattributes file in your repository.
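If you skipped that option, or want to set it for an existing installation, it is a single global setting:

git config --global core.autocrlf true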

If you have any existing repositories on your system you can now use Git Bash to work with them. At the moment, however, each command will require you to type your user name and password.

Download Tortoise Git

Then download Tortoise Git (v1.8.4 at the time of writing). If you have Windows 8 you should go for the 64-bit edition. Again, I just accepted all the default installation options.

As with Git Bash in the msysgit installation, once this is set up you’ll be able to work with any existing repositories; again, each operation will require a user name and password to be allowed.

Download the git-credential-winstore

GitHub has an article on how to set up password caching (skip to “password caching” for the download link) if you are using tools other than GitHub for Windows. The installer requires that the PATH variable contains Git’s bin folder, which will be the case if the option above was chosen when installing msysgit. I also found that a machine reboot was required before installing it, as it didn’t immediately find git in the path after msysgit was installed.

git-credential-winstore installs very quickly. It asks one slightly confusingly worded question: “Do you want to install git-credential-winstore to prompt for passwords?” The correct answer is “yes”. It doesn’t mean that it will always prompt you instead of the command line or the GUI tool; it will only prompt for a password if it does not already know the credentials to use. After that it uses what’s in its credential store, so you don’t get asked all the time.

When git-credential-winstore is installed it will create a [credential] section in your .gitconfig file, which should be at C:\Users\<username>\.gitconfig
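On my machine the generated entry looked something like this (the exact path will vary depending on where the installer put the executable):

[credential]
    helper = !'C:\Users\<username>\AppData\Roaming\GitCredStore\git-credential-winstore.exe'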

Be aware, however, that GitHub does have a nasty habit of removing the [credential] section of the .gitconfig file. To get around this, copy the credential section to the gitconfig file in the msysgit directory. (If you followed the installation defaults it will probably be in C:\Program Files (x86)\Git\etc.) You’ll have to run your editor as administrator in order to edit that gitconfig file due to its location.

If you have multiple users on your machine you may also want to move the installed location of git-credential-winstore, as it installs in your AppData directory. However, I’ve not tried this as I’m the only user on my machine.

You can now use Git Bash and Tortoise Git with your GitHub repository.

Getting Started with AngularJS – The Application Module

As with all applications there has to be a starting point. Where does the application start? In AngularJS that starting point is the module.

And because a module is, well, modular, you can plug modules into each other to build the application, share components and so on.

Actually, I suppose in Angular it starts with a directive that points to the module to start with, because, if you have more than one module, which one do you start with?

<html ng-app="angularCatalogue">
  
</html>

The ng-app directive bootstraps the application by telling AngularJS which module contains the root of the application.

A module is defined like this:

angular.module("angularCatalogue",[])

The name of the above module is "angularCatalogue", the name of the application, which is what was placed in the ng-app directive on the html element previously.

The second parameter to module is an array of other modules to inject. The modules don’t have to be loaded in any particular order, so it is okay to refer to a module that may not exist at that point.

The module function returns a Module object, which you can then set up as you need it. Typically an application will have some sort of configuration, controllers, directives, services and so on.
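For illustration, registering a controller and a service on the returned module might look like this (the controller and service names here are hypothetical; this is just a sketch of the chaining):

angular.module("angularCatalogue", [])
    .controller("productSearchController", ["$scope", function($scope){
        $scope.searchTerm = "";
    }])
    .factory("productService", [function(){
        return {
            search: function(term) { return []; }
        };
    }]);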

Wiring up the view

In the html you will need to indicate where the view is to be placed.

You can do this via the ng-view directive, which can look like this:

<ng-view></ng-view>

or

<div ng-view></div>

Everything inside the element will be replaced with the contents of the view.

The application then needs to be told where the view is. You can configure the application module with that information, like this:

angular.module("angularCatalogue") 
    .config(["$routeProvider", function($routeProvider){
        $routeProvider.when("/",
            {
                templateUrl:"/ngapp/templates/search.html",
                controller: "productSearchController"
            });
    }]);

The config call on the module allows the module to be configured. It takes an array consisting of the names of the objects to be injected into the configuration and, as the final element, the function that performs the configuration.

The function has the $routeProvider injected into it, which allows routing to be set up. In the example above a route is set up for the home page of the application ("/") that inserts the given template into the element that designates the view (ng-view) and wires up the given controller.
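Additional routes chain onto the same provider, and otherwise() provides a fallback for any URL that doesn’t match; a sketch with hypothetical template paths:

$routeProvider
    .when("/products/:productId", {
        templateUrl: "/ngapp/templates/detail.html",
        controller: "productDetailController"
    })
    .otherwise({ redirectTo: "/" });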

I’ll move onto controllers in an upcoming post.

A note on the dependency injection

If you never minify your JavaScript you can get away with something like this:

angular.module('myApplication')
    .config(function($routeProvider){
        ...
     });

You’ll notice that there is no array; it is just taking a function. Angular can work out from the parameter names what needs to be injected. However, if the code is minified, most minifiers will alter the parameter names to save space, in which case Angular’s built-in dependency injection fails because it no longer knows what to resolve. Minifiers do not, however, minify string literals. If the string literals exist, Angular will use them to determine what gets resolved into which parameter position. The strings must match the positions of their counterparts in the function parameters.

Therefore the minifier-friendly version of the previous snippet becomes:

angular.module('myApplication')
    .config(['$routeProvider', function($routeProvider){
        ...
     }]);

A note on naming conventions

You can name things what you like but AngularJS has some conventions reserved for itself.

  • Its own services are prefixed with a $ (dollar). Never name your services with a dollar prefix as your code may become incompatible with future versions of Angular.
  • Its own directives are prefixed with ng. Similarly, don’t name any of your directives with an ng prefix as it may clash with what’s in future versions of Angular.
  • In JavaScript everything is camel cased (the first word is all lower case, subsequent words have their first letter capitalised); in the HTML, dashes separate the words. So if you create a directive called myPersonalDirective, when that directive is placed in HTML it becomes my-personal-directive, as in the sketch below.
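For example, a hypothetical directive registered with a camel-cased name:

angular.module("angularCatalogue")
    .directive("myPersonalDirective", function(){
        return {
            restrict: "A",
            link: function(scope, element){
                element.text("rendered by myPersonalDirective");
            }
        };
    });

is then referenced in the HTML with dashes:

<div my-personal-directive></div>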

Getting Started with AngularJS – Bundling the files

When you are building AngularJS apps you will probably want to store all your various controllers, directives, filters, etc. in separate files to keep everything nicely separated and easy to manage. However, putting script tags for all those files in your HTML is not efficient in the least. Not only do you have several round-trips to the server, the browser will be downloading a lot of code that is designed to be readable and maintainable, potentially with lots of additional whitespace and comments.

If the back end of your application is using .NET then you can bundle together CSS and JavaScript files to make them more optimised.

For example, I have a small AngularJS prototype application that uses bundling so that, when it is run with the optimisations turned on, it needs fewer files and serves more compact JavaScript and CSS. The method that creates these bundles looks like this:

public static void RegisterBundles(BundleCollection bundles)
{
    bundles.Add(new StyleBundle("~/Content/base-styles.css")
        .Include("~/Content/bootstrap.css")
        .Include("~/Content/angular-ui.css")
        .Include("~/Content/angularCatalogue.css"));

    bundles.Add(new ScriptBundle("~/Scripts/base-frameworks.js")
        .Include("~/Scripts/jquery-{version}.js")
        .Include("~/Scripts/angular.js")
        .Include("~/Scripts/angular-resource.js")
        .Include("~/Scripts/angular-ui.js")
        .Include("~/Scripts/bootstrap.js"));

    bundles.Add(new ScriptBundle("~/Scripts/angular-catalogue.js")
    // Configure the Angular Application
      .Include("~/ngapp/app.js")

    // Filters
      .Include("~/ngapp/filters/idFilter.js")
      .Include("~/ngapp/filters/allBut.js")

    // The services
      .Include("~/ngapp/Services/colourService.js")
      .Include("~/ngapp/Services/brandService.js")
      .Include("~/ngapp/Services/productTypeService.js")
      .Include("~/ngapp/Services/productService.js")
      .Include("~/ngapp/Services/sizeService.js")

  // Directives
      .Include("~/ngapp/Directives/userFilter.js")
      .Include("~/ngapp/Directives/productDetailsDirective.js")

  // Controllers
      .Include("~/ngapp/Controllers/productSearchController.js")
      .Include("~/ngapp/Controllers/productDetailController.js")
      .Include("~/ngapp/Controllers/editProductController.js"));
}

This method is called from the Application_Start() method in global.asax.cs.
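For reference, the call site looks something like this, assuming the RegisterBundles method above lives in a class called BundleConfig (as in the default MVC project template):

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RouteConfig.RegisterRoutes(RouteTable.Routes);
    BundleConfig.RegisterBundles(BundleTable.Bundles);
}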

What this does is set up a number of bundles, in this case three: one for the CSS and two for JavaScript (one a set of standard third-party libraries, the other the AngularJS application itself).

In the layout or view you can then reference these bundles using the path passed into the constructor, like this:

<html>
  <head>
    <!-- Other bits go here -->
    @Styles.Render("~/Content/base-styles.css")
  </head>
  <body>
    @RenderBody()
    @Scripts.Render("~/Scripts/base-frameworks.js")
    @Scripts.Render("~/Scripts/angular-catalogue.js")
  </body>
</html>

Remember to use the tilde notation just like in the code that defines the bundles.

When the optimisations are turned off the scripts render as one script block per include. When the optimisations are turned on, a single script block per bundle is output. When the server receives a request for that script it resolves the name to the matching bundle and sends back an amalgamated and minified version of the scripts. This loads much faster on the client, as there are fewer round-trips to the server, and it takes much less bandwidth.
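Whether you get the optimised output is governed by the compilation element’s debug attribute in web.config; it can also be forced on in code, which is handy for checking the bundled output locally:

BundleTable.EnableOptimizations = true; // bundle and minify even when debug="true"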

Here’s what the two scenarios look like:

Optimisations turned off

This is what the two @Scripts.Render() blocks at the end of the HTML look like:

<script src="/Scripts/jquery-1.9.1.js"></script>
<script src="/Scripts/angular.js"></script>
<script src="/Scripts/angular-resource.js"></script>
<script src="/Scripts/angular-ui.js"></script>
<script src="/Scripts/bootstrap.js"></script>

<script src="/ngapp/app.js"></script>
<script src="/ngapp/filters/idFilter.js"></script>
<script src="/ngapp/filters/allBut.js"></script>
<script src="/ngapp/Services/colourService.js"></script>
<script src="/ngapp/Services/brandService.js"></script>
<script src="/ngapp/Services/productTypeService.js"></script>
<script src="/ngapp/Services/productService.js"></script>
<script src="/ngapp/Services/sizeService.js"></script>
<script src="/ngapp/Directives/userFilter.js"></script>
<script src="/ngapp/Directives/productDetailsDirective.js"></script>
<script src="/ngapp/Controllers/productSearchController.js"></script>
<script src="/ngapp/Controllers/productDetailController.js"></script>
<script src="/ngapp/Controllers/editProductController.js"></script>

And when this is rendered in the browser, the following requests are made.

There are 18 requests in the above example: 901KB is transferred to the browser and it took 911ms to complete loading everything. (The above does not include the images, CSS or AJAX calls that are also downloaded as part of the page.)

Optimisations turned on

Now, compare the above to this representation of the same section of the page:

<script src="/Scripts/base-frameworks.js?v=oHeDdLNj8HfLhlxvF-JO29sOQaQAldq0rEKGzugpqe01"></script>
<script src="/Scripts/angular-catalogue.js?v=fF1y8sFMbNn8d7ARr-ft_HBP_vPDpBfWVNTMCseNPC81"></script>

And when rendered in the browser, it makes the following requests:

There are now just two requests, one for the base-framework bundle, and one for the angular-catalogue (our application code) bundle.

Because the bundling process minifies the files, the amount of data transferred is much smaller too, in this case 223KB (a saving of 678KB, or roughly 75%). For established frameworks that ship with a *.min.js version, the bundling framework will recognise that convention and use the existing minified file. If it can’t find one it will minify the file for you.

And because there is less data to transfer and fewer network round-trips to wait for, the time to fully load the page is reduced to 618ms (a saving of 293ms; the page now loads in roughly ⅔ of the time of the previous request).

More information

There is a lot more to bundling than I’ve mentioned here. For a more in-depth view of bundling, read Scott Guthrie’s blog post on Bundling and Minification Support.

Authenticating Across Virtual Directories

You may have an application set up in a way similar to the previous post: essentially a domain that contains a number of web applications hosted in various virtual directories on the server.

In my previous example, the root of the domain contains the application that handles account management (the sign in, password retrieval, account set-up, etc.); however, each of the applications in each virtual directory must know who is logged in.

Assuming you are using .NET’s built-in authentication mechanisms, this is unlikely to work out of the box. Some configuration needs to happen to allow each of the applications to sync up.

Setting up the web.config file

In MVC 4 Forms Authentication must be set up explicitly.

<system.web>
  <authentication mode="Forms">
  </authentication>
  <!-- Other config settings -->
</system.web>
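A slightly fuller version might look like this (the loginUrl is a hypothetical value; the name attribute defaults to ".ASPXAUTH" and must be the same in every application so that they all read the same cookie):

<system.web>
  <authentication mode="Forms">
    <forms name=".ASPXAUTH" loginUrl="~/Account/SignIn" />
  </authentication>
</system.web>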

To ensure that each application can decrypt the authentication ticket in the cookie, they must all share the same machine key, as by default IIS assigns each application its own encryption and decryption keys for security.

<system.web>
  <machineKey decryptionKey="10FE3824EFDA35A7EE5E759651D2790747CEB6692467A57D" validationKey="E262707B8742B1772595A963EDF00BB0E32A7FACA7835EBE983A275A5307DEDBBB759B8B3D45CA44DA948A51E68B99195F9405780F8D80EE9C6AB46B9FEAB876" />
  <!-- Other config settings -->
</system.web>

Do not use the above key – it is only an example.

These two settings must be shared across each of the applications sitting in the one domain.

Generating a Machine Key

To generate a machine key:

  • Open “Internet Information Services (IIS) Manager” on your development machine.
  • Set up a dummy application so that it won’t affect anything else on the machine.
  • Open up the Machine Key feature in the ASP.NET section

    IIS Manager

  • (1) In the “Validation key” section uncheck “Automatically generate at runtime” and “Generate a unique key for each application”.

    Machine Key Configuration in the IIS Manager

  • (2) In the “Decryption key” section uncheck “Automatically generate at runtime” and “Generate a unique key for each application”.
  • (3) Click “Generate Keys” (this will change the keys randomly each time it is pressed)
  • (4) Click “Apply”

The web.config for this web application will now contain the newly generated machine key in the system.web section. Copy the complete machineKey element to the applications that are linked together.

There is an “Explore” link on the site’s main page in IIS to open up Windows Explorer on the folder which contains the web site and the web.config file.

Setting up a website that uses multiple projects

I’m looking at the possibility of restructuring some of our applications to unify them under one brand and one site. Currently our applications are on different sub-domains of our main domain and we’d like to bring all that under one roof so our application can be something like https://app.example.com and that’s it.

To that end I’m looking at setting up a central project (a portal, if you like) that the user enters and logs into, and from there they can move off into the various applications depending on what they want to do. Each of the applications would sit in a virtual directory off the main application.

Basic Setup

Each of the projects needs to have the project properties on the Web tab synchronised so that they agree with each other. I decided on a port number to use and duplicated it across each of the projects.

Root Project

To start with, the root project (that’s the one that appears at the root of the domain) should be set to use IIS Express.

  • In the Solution Explorer, right-click the project and then select “Properties” from the menu; alternatively, click the project then press Alt+Enter.
  • Once the properties appear, go to the “Web” tab and scroll down to the “Server” section.
  • Ensure that “Use Local IIS Web Server” is selected.
  • Check “Use IIS Express” if it isn’t already.
  • In the Project URL choose a port number that you want to use across each of the projects. (You can leave the default for this project if you wish, but take a note of it for the others.)
  • Press “Create Virtual Directory” to set up IIS Express.

Setting up the root application

Remember the port number that was used for the root project as it will be needed for the other projects.
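For example, with a hypothetical port of 12345 and hypothetical directory names, the Project URLs would end up along these lines:

http://localhost:12345/            (root project)
http://localhost:12345/billing     (first application)
http://localhost:12345/reporting   (second application)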

Set up the first application

In the first application project put similar details in Project Properties.

The only difference is that the Project URL has a virtual directory added to it.

Setting up the first application

Set up the second application

This is similar to the first application, except that the Project URL has a different virtual directory added to it.

Setting up the second application

Tip of the Day: A DateOnly function in SQL Server

It occurs to me that I’ve probably written several versions of this function in various installations of SQL Server over the last 10+ years (10 years, 2 months, 26 days, to be exact) since I started using SQL Server. I should really put it somewhere that I can refer to it easily and not have to re-write it again. (So why not put it in my blog?)

It’s a simple little thing that takes a DATETIME and returns the same value but without the time elements; basically, it returns the date only.

CREATE FUNCTION [dbo].[DateOnly]
(
  @DateTime DATETIME
)
RETURNS DATETIME
AS
BEGIN
  RETURN 
    DATEADD(MILLISECOND, -DATEPART(MILLISECOND, @DateTime),
      DATEADD(SECOND, -DATEPART(SECOND, @DateTime),
        DATEADD(MINUTE, -DATEPART(MINUTE, @DateTime), 
          DATEADD(HOUR, -DATEPART(HOUR, @DateTime), @DateTime))));
END
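Using it is straightforward, and the time portion comes back as midnight:

SELECT dbo.DateOnly(GETDATE()) AS Today;  -- current date, time set to 00:00:00.000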

Configuring Nancy to use views in a separate assembly

I’m in the process of setting up a Nancy application that will run in ASP.NET on IIS and on Ubuntu (using Mono). As a result I put the main Nancy application into an assembly all of its own and created two host assemblies, one for each environment.

I found pretty quickly that it was a real pain to get the FileSystemViewLocationProvider to work properly in this scenario without a lot of futzing about… and I don’t like it when you have to manually mess around with things just to get an application deployed properly, or even just running in the debugger.

My solution was to use the ResourceViewLocationProvider instead and just have the views added as resources to the assembly.

I also created a custom bootstrapper for my application so that it would know to pick up the resources instead of the files.

using Nancy;
using Nancy.Bootstrapper;
using Nancy.TinyIoc;
using Nancy.ViewEngines;
using Nancy.ViewEngines.Razor;

namespace HelloWorld.Web
{
  public class HelloWorldBootstrapper : DefaultNancyBootstrapper
  {
    protected override void ConfigureApplicationContainer(TinyIoCContainer container)
    {
      base.ConfigureApplicationContainer(container);

      // Configure the resource view location provider
      var assembly = GetType().Assembly;
      ResourceViewLocationProvider
          .RootNamespaces
          .Add(assembly, "HelloWorld.Web.Views");
    }
    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
      StaticConfiguration.CaseSensitive = true;
      StaticConfiguration.DisableErrorTraces = false;
      StaticConfiguration.EnableRequestTracing = true;
      base.ApplicationStartup(container, pipelines);
    }
    protected override NancyInternalConfiguration InternalConfiguration
    {
      get
      {
        var result = NancyInternalConfiguration
          .WithOverrides(nic => nic.ViewLocationProvider = typeof (ResourceViewLocationProvider));
        return result;
      }
    }

    protected override System.Collections.Generic.IEnumerable<System.Type> ViewEngines
    {
      get 
      { 
        yield return typeof (RazorViewEngine);
      }
    }
  }
}
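For the ResourceViewLocationProvider to find anything, the view files must be embedded in the assembly rather than copied to the output directory. In Visual Studio that means setting each view’s Build Action to “Embedded Resource”, which amounts to something like this in the project file (Index.cshtml is a hypothetical view name):

<ItemGroup>
  <EmbeddedResource Include="Views\Index.cshtml" />
</ItemGroup>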

I also found that adding the Razor view engine via NuGet adds a post-build action to the project file which doesn’t work on Ubuntu. I had to strip that out to allow the build to work; however, since my bootstrapper explicitly references the RazorViewEngine, the assembly is copied by the build engine to the output directory anyway.

Tip of the day: Going quickly to an item in the Entity Model Diagram

After a conversation recently about how difficult it can be to find stuff in the EDMX diagram, because it can often be a right pig’s breakfast, I stumbled across this today.

In Visual Studio there is a Model Browser that is available when viewing the diagram. It appears in the same space as the Solution Explorer. If you don’t see it in the tab list you can add it by going to View -> Other Windows -> Entity Data Model Browser, like this:

Menu to open Entity Data Model Browser

Once there, you can open the tree to get to the item you want much more easily than finding it on the diagram. Open “Entity Types” to see a list:

The model browser window

Right-click the entity you want to move the diagram to and select “Show in Designer”.

Show in Designer

The designer will shift to the location of the table and put it in the centre of the window for you. It will also select the table.

It may be a really simple thing, but I wish I’d discovered it sooner.

Automatically replacing an image on an HTML page when it is not found

The project I’m working on has just moved its image hosting to Amazon S3. Previously we had a big folder full of images that had been uploaded by our users. If the site needed to render an image it would check the directory for the image it needed; if it didn’t have it, it would look for the original and then resize and render that (storing the resized version in the folder so it could be found the next time). If the original couldn’t be found either, it displayed a replacement image in its place that basically said “There is no image available.”

That worked well enough with a small number of users but it really didn’t scale well.

Now that we’ve moved the hosting to Amazon S3, we create all the image sizes needed at the time they are initially uploaded. If we need a new size, we have a tool that will go and create all the resized versions for us. The only issue that remains is that some images don’t exist for various reasons; much of the legacy data came from systems that were installed on people’s desktops, and the image data simply never got synced to the central server properly.

But there is a way around this in the browser. The img tag can have an onerror attribute applied, which can call a function that replaces the image src with a dummy image containing the message for when there is no image.

For example:

<div>
  <img src="error.jpg" onerror="replaceImage(this, 'replacement.jpg');" title="This image is replaced on an error"/>
</div>

<script type="text/javascript">
  function replaceImage(image, replacementUrl){
    image.removeAttribute("onerror");
    image.src=replacementUrl;
  }
</script>

Although this looks a little ugly (putting lots of onerror attributes on images), there is a lot less code to be written. When I tried to achieve the same result in jQuery I eventually gave up. That’s not to say that it can’t be done, just that for pragmatic reasons I didn’t pursue it, as I was spending too much time trying to get it to work.

The function does two things. First, it removes the onerror, because if the replacementUrl is also broken the browser would recurse back into the error handler and slow right down. Second, it performs the actual replacement.

To see it in action there is an example page to demonstrate it.

I also tried to create a jQuery based solution to fit in with everything else. However, there were a couple of problems with a jQuery solution that were less than ideal.

  • You can’t attach an error event to the images, because by the time you have done so the error event will be long past. You have to loop around all the images initially to find out which didn’t load before jQuery got a chance to get going.
  • For images that are added to the page by jQuery itself, .on() does not work, because delegated events (which allow you to create event handlers for elements before they are created) need the events to bubble up to a parent that existed at the point the handler was attached. The error event, among a small set of other events, does not bubble up. And if you attach the handler directly to the newly created element it will likely be too late, especially on a fast connection, as the error event will already have fired. You could do the same as before and check manually to see whether the image loaded, but then the code gets rather unwieldy and unmanageable.

In the end I found that the small bit of code called from the onerror attribute on each img element that needed it was more compact, and didn’t require lots of extra code to ensure that all the errors were handled in cases where jQuery just didn’t get there in time.

Finally, if anyone has a solution in jQuery that does not require cluttering up the HTML, I’d like to see it.
