continuous integration, continuous delivery, Bazooka, automated deploy

I started developing professionally six years ago, fresh out of university. At the time I didn’t know much about the topics surrounding development: continuous integration, continuous delivery, capacity planning, and so on.

When I started developing my first application, the process I was taught was simple: verify that it builds, then deploy it to a network folder with the Visual Studio Publish option. This worked fine while I was working alone, but it had numerous problems:

  • builds were not exactly reproducible, as they reflected what was on my computer rather than what was in source control
  • I had to remember to build and deploy the correct configuration, which was different from the one used during development
  • in one strange case my computer was the only one able to build the application, due to a specific compiler version that only I had installed

All these things summed up to many little problems, so I started looking for better solutions, reading books about continuous delivery and deployment. At the same time, the number of applications we were working on grew, magnifying the problems we had, so I started working on Bazooka, a tool to automate application deployments.

Last week I was looking at its statistics page and noticed one thing: in less than two years (from June 2015 to February 2017) over thirty developers worked on ninety applications making over ten thousand deploys.

Deployments by month

As you can see, usage has been steady, with an increase during the summer months, as many of our applications have a usage peak in summer.

Over 90 applications were published; some of them are very active, while others are in maintenance mode and see only a few deploys per year.

Deployments by application

During these two years many bugs have been fixed and features added, but there are still some features that could make for an easier deployment experience, like release promotion through environments. That, however, is a topic for another post.

OData, ASP.NET Core

At work we’re in the process of migrating some applications to ASP.NET Core to take advantage of some of its features:

  • the ability to self-host in process without IIS, making it perfect for creating standalone services
  • its design based on dependency injection

While we would love to be able to use the .NET Core version, there are too many libraries that have not been ported yet, so we are forced to run on the full .NET Framework and keep an eye on future developments to move to the .NET Core framework.

While this solves many problems, some libraries that integrated with ASP.NET have not been ported; in our case, what was missing was OData support. Searching online you can find some information about it:

  • https://github.com/OData/WebApi/issues/772
  • https://github.com/OData/WebApi/issues/229
  • https://github.com/OData/WebApi/issues/744

but all signs point to the porting work having been stopped due to other priorities. This was a showstopper for us, as we rely on OData to easily connect a React component representing a sorted, paginated and filterable table directly to a data source, without having to code all possible queries.

Fortunately we didn’t rely on the classic ODataController but instead bound ODataQueryOptions as an action parameter and then built some extension methods to apply it and obtain a PageResult (a type representing your data along with the total number of elements in the collection, which is necessary to draw a paginated table).

    [HttpGet]
    public PageResult<Person> Teens(ODataQueryOptions opts)
    {
        return dataSource.Persons().Where(x => x.Age < 18).PagedFilter(opts);
    }
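The PagedFilter extension and PageResult wrapper used above are helpers from our codebase and are not shown in the post. As a rough sketch only, assuming the Web API OData ApplyTo/ODataQuerySettings API and a hypothetical default page size, such a method could look like this:

```csharp
// Rough sketch of the PagedFilter helper mentioned above (the real
// implementation is not shown in the post). It applies the OData query
// options to the IQueryable and wraps one page of data in a PageResult
// together with a total count for drawing the pager.
public static class ODataQueryExtensions
{
    public static PageResult<T> PagedFilter<T>(this IQueryable<T> source, ODataQueryOptions opts)
    {
        // Simplification: the total is taken over the whole source; a real
        // implementation would count after $filter but before $top/$skip.
        long total = source.LongCount();

        // PageSize here is an arbitrary default for the sketch.
        var settings = new ODataQuerySettings { PageSize = 50 };
        var page = opts.ApplyTo(source, settings) as IQueryable<T>;

        return new PageResult<T>(page.ToList(), null, total);
    }
}
```

With something like this in place, a request such as GET /teens?$top=10&$skip=20&$orderby=Age comes back already paged, with the count the table component needs.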

The main problem is that this type by default won’t bind to the request; in Web API this was done by ODataQueryParameterBindingAttribute. As the source is easily available thanks to Microsoft’s new openness, we can easily adapt it to the IModelBinder and IModelBinderProvider used by ASP.NET Core.

public class ODataQueryOptionsModelBinder : IModelBinder
{
    private struct AsyncVoid
    {
    }

    private static MethodInfo _createODataQueryOptions = typeof(ODataQueryOptionsModelBinder).GetMethod("CreateODataQueryOptions");

    private static readonly Task _defaultCompleted = Task.FromResult<AsyncVoid>(default(AsyncVoid));

    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        if (bindingContext == null)
        {
            throw new ArgumentNullException("bindingContext");
        }
        
        var request = bindingContext.HttpContext.Request;
        if (request == null)
        {
            throw new ArgumentNullException("request");
        }

        var actionDescriptor = bindingContext.ActionContext.ActionDescriptor;
        if (actionDescriptor == null)
        {
            throw new ArgumentNullException("actionDescriptor");
        }


        Type entityClrType = GetEntityClrTypeFromParameterType(actionDescriptor) ?? GetEntityClrTypeFromActionReturnType(actionDescriptor as ControllerActionDescriptor);

        Microsoft.Data.Edm.IEdmModel model = actionDescriptor.GetEdmModel(entityClrType);
        ODataQueryContext entitySetContext = new ODataQueryContext(model, entityClrType);
        ODataQueryOptions parameterValue = CreateODataQueryOptions(entitySetContext, request, entityClrType);
        bindingContext.Result = ModelBindingResult.Success(parameterValue);

        return _defaultCompleted;
    }

    private static ODataQueryOptions CreateODataQueryOptions(ODataQueryContext ctx, HttpRequest req, Type entityClrType) {
        var method = _createODataQueryOptions.MakeGenericMethod(entityClrType);
        var res = method.Invoke(null,new object[] { ctx,req}) as ODataQueryOptions;
        return res;
    }

    public static ODataQueryOptions<T> CreateODataQueryOptions<T>(ODataQueryContext context, HttpRequest request)
    {
        var req = new System.Net.Http.HttpRequestMessage(System.Net.Http.HttpMethod.Get, request.Scheme + "://" + request.Host + request.Path + request.QueryString);
        return new ODataQueryOptions<T>(context, req);
    }

    internal static Type GetEntityClrTypeFromActionReturnType(ControllerActionDescriptor actionDescriptor)
    {
        if (actionDescriptor.MethodInfo.ReturnType == null)
        {
            throw new Exception("Cannot use ODataQueryOptions when return type is null");
        }

        return TypeHelper.GetImplementedIEnumerableType(actionDescriptor.MethodInfo.ReturnType);
    }

    internal static Type GetEntityClrTypeFromParameterType(ActionDescriptor parameterDescriptor)
    {
        Type parameterType = parameterDescriptor.Parameters.First(x => typeof(ODataQueryOptions).IsAssignableFrom(x.ParameterType)).ParameterType;

        if (parameterType.IsGenericType &&
            parameterType.GetGenericTypeDefinition() == typeof(ODataQueryOptions<>))
        {
            return parameterType.GetGenericArguments().Single();
        }

        return null;
    }
}
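The binder also depends on a TypeHelper.GetImplementedIEnumerableType helper that the post doesn’t show (in Web API it lives in the framework internals). A minimal sketch of what it needs to do, written here as an assumption rather than the actual framework code:

```csharp
// Minimal sketch of the TypeHelper dependency used by the binder: given an
// action return type such as IQueryable<Person> or List<Person>, find the
// implemented IEnumerable<T> and return T, so the binder can build an
// ODataQueryOptions<Person>. The real Web API helper is more thorough.
internal static class TypeHelper
{
    internal static Type GetImplementedIEnumerableType(Type type)
    {
        if (type == null)
        {
            return null;
        }

        // The type itself may be IEnumerable<T>.
        if (type.IsGenericType && type.GetGenericTypeDefinition() == typeof(IEnumerable<>))
        {
            return type.GetGenericArguments()[0];
        }

        // Otherwise look for IEnumerable<T> among its interfaces.
        foreach (var iface in type.GetInterfaces())
        {
            if (iface.IsGenericType && iface.GetGenericTypeDefinition() == typeof(IEnumerable<>))
            {
                return iface.GetGenericArguments()[0];
            }
        }

        return null;
    }
}
```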

the ODataModelHelper

public static class ODataModelHelper
{
    private const string ModelKeyPrefix = "MS_EdmModel";

    private static System.Web.Http.HttpConfiguration configuration = new System.Web.Http.HttpConfiguration();

    internal static Microsoft.Data.Edm.IEdmModel GetEdmModel(this ActionDescriptor actionDescriptor, Type entityClrType)
    {
        if (actionDescriptor == null)
        {
            throw new ArgumentNullException("actionDescriptor");
        }

        if (entityClrType == null)
        {
            throw new ArgumentNullException("entityClrType");
        }

        if (actionDescriptor.Properties.ContainsKey(ModelKeyPrefix + entityClrType.FullName))
        {
            return actionDescriptor.Properties[ModelKeyPrefix + entityClrType.FullName] as Microsoft.Data.Edm.IEdmModel;
        }
        else
        {
            ODataConventionModelBuilder builder = new ODataConventionModelBuilder(ODataModelHelper.configuration, isQueryCompositionMode: true);
            EntityTypeConfiguration entityTypeConfiguration = builder.AddEntity(entityClrType);
            builder.AddEntitySet(entityClrType.Name, entityTypeConfiguration);
            Microsoft.Data.Edm.IEdmModel edmModel = builder.GetEdmModel();
            actionDescriptor.Properties[ModelKeyPrefix + entityClrType.FullName] = edmModel;
            return edmModel;

        }
    }
}

the IModelBinderProvider

public class ODataModelBinderProvider : IModelBinderProvider
{
    public IModelBinder GetBinder(ModelBinderProviderContext context)
    {
        if (typeof(ODataQueryOptions).IsAssignableFrom(context.Metadata.ModelType))
        {
            return new ODataQueryOptionsModelBinder();
        }

        return null;
    }
}

and finally register it in the ConfigureServices method

var mvc = services.AddMvc(options =>
{
    options.ModelBinderProviders.Insert(0, new ODataModelBinderProvider());
});

bazooka, continuous delivery

After more than one year and over five thousand deploys a new release of Bazooka is ready. Grab the new and shiny 0.3 version from the release page.

This version is a complete rewrite of the user interface, to make it more appealing and easier to add functionality to.

Also it comes with some new features:

  • the agent no longer stops if multiple packages are found when updating
  • application groups can be reordered on the homepage, customizing the placement for every user
  • an application environment configuration can be cloned from another environment
  • an entire application configuration can be cloned from another one

Stay tuned for more features.

docker, ASP.NET5, VS2015

If you are reading this article you probably already know what Docker is and as a .NET developer you’ve always wanted to be able to use it to deploy your applications.

With the beta 8 of ASP.NET 5, a new deploy option is available that lets you deploy your website to a virtual machine in Azure running Docker. This is actually pretty simple and well covered by the documentation, but what if you don’t have an Azure account or network access, or simply prefer to try things on your local computer?

This situation is not covered by the docs, so I’ll go through the necessary steps. Note that I’ve tried these instructions on Windows 10, but there should be no problem on Windows 7, 8 or 8.1.

Prerequisites

First of all, make sure you’ve installed all of the following:

and verify your Docker installation is working.

Deploying to Docker

The first thing to do is to create a new ASP.NET Web Application with the ASP.NET Preview templates. Choose Web Application as in the image and uncheck the “Host in the cloud” option.

Proceed with project creation and then build it by pressing Ctrl+Shift+B. To start the deploy simply right-click on the project and select the Publish option.

Choose the Docker Containers option and then Custom Docker Host.

dockerhost.png

Now you will have to specify the connection info for your local Docker host.

To obtain this info, launch the Docker Quickstart Terminal installed with the Docker Tools.

You will see, drawn in green, the name of your machine and its IP, in this case 192.168.99.100. Return to the connection info dialog and insert **tcp://192.168.99.100:2376**. Then execute in the terminal

docker-machine config default

substituting default with your machine name. You will see all the config options needed to connect to your Docker host. Copy all of it except the last parameter (-H tcp://192.168.99.100:2376), as it is not necessary, and paste it in the **Docker Advanced Options > Auth Options** field.

Now you’re done. Press Publish and wait until the build terminates; the browser will automatically open your website running in Docker.

javascript, application development, spa

Sometimes when developing an application, especially in certain fields, you’ll receive the same old request:

I want to select all the elements in the table and [print|delete|close|whatever] them

While this request is (sometimes) reasonable, it carries a lot of problems that must be solved to implement it correctly:

  • how many elements can be selected together? 10, 20, 50, a thousand?
  • how many users of the system can use this feature at the same time?
  • does the user have to see the status of his pending jobs?

This type of feature can easily cause problems if not thought through. Imagine a user coming in, selecting a thousand rows and clicking “Print”. Now the system has to print maybe a couple thousand pages, which may take some time.

If the action completes quickly there is usually no problem, but imagine a user who gives the command and doesn’t receive a timely response. Usually one of the things he tries is to execute the action again, which means your system is now printing double the pages.

This problem is compounded if many users try to use the same function, swamping your application with pending requests until it grinds to a halt, especially if these jobs are processed by a queue that cannot be emptied fast enough.

In some of these cases you’ll find that a really simple solution can be used, one that can simply be called “cheating”. Suppose you have the list of elements to process, and for each of them you have to make an AJAX call to the server. Instead of creating a new action to process all the elements in a batch, you can simply chain all the requests together to execute in sequence and show an update on screen, like this:

var d = jQuery.Deferred();
var p = d.promise();

for (var i = 0; i < chosen.length; i++) {
  p = p.then(_.bind(function (index) {
    return $.post("youraction", function (response) {
      // show an update on screen for element `index`
    });
  }, this, i));
}

// Start the chain by resolving the initial deferred.
d.resolve();

As you can see, all we do is create an initial promise through jQuery and then concatenate a promise for each element to process. The last line starts the whole process by resolving the initial deferred, which triggers the first request, then the second, and so on.
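The same sequential chaining can be written without jQuery. Here is a sketch using plain promises, where processOne and onProgress are hypothetical stand-ins for the $.post call and the on-screen update in the snippet above:

```javascript
// Process the chosen elements one at a time, sequentially.
// `processOne` stands in for the AJAX call ($.post in the original snippet)
// and `onProgress` for the on-screen update; both names are assumptions.
async function processSequentially(chosen, processOne, onProgress) {
  const results = [];
  for (let i = 0; i < chosen.length; i++) {
    // Each awaited call starts only after the previous one has finished,
    // so the server never sees more than one in-flight request per batch.
    results.push(await processOne(chosen[i]));
    if (onProgress) onProgress(i + 1, chosen.length);
  }
  return results;
}
```

The design point is the same as with the deferred chain: sequencing happens on the client, so the server-side code stays a plain one-element action.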

While I called this solution “cheating”, as it doesn’t really process all the elements in one batch, it has some nice properties:

  • it fully solves the user’s problem, as it allows him to select some elements and apply an action to them in bulk
  • it doesn’t swamp the system with huge requests, as only one element is processed at a time
  • it eliminates the problem of having to stop the user from re-queuing the same batch because he thinks it is taking too long, or of having to implement a way to show the user the status of his pending jobs

All in all, a simple solution that pleases almost everyone. The only drawback is that if the user closes the browser not all the elements will be processed, but in many cases you’ll find that this doesn’t matter.