We deploy our app using TFS TeamBuild. To deploy to multiple environments (dev, test, acceptance and production) we use solution configurations and config transformations.

Because we want to leverage .NET 4.5 for multiple reasons, we needed to use the Azure SDK 1.8. I noticed the ServiceDefinition.csdef transformation was not working as before when deploying with the 1.8 Azure SDK.
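
For context: the transform files themselves are plain XDT documents. A minimal sketch of what a ServiceConfiguration.Test.cscfg transform could look like (the role name and setting are illustrative, not from our project):

<?xml version="1.0"?>
<ServiceConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
                      xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <Role name="WebRole">
    <ConfigurationSettings>
      <!-- Overwrites the value of the matching setting in ServiceConfiguration.cscfg -->
      <Setting name="Environment" value="Test"
               xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>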

The .ccproj file contained the following xml:

<ItemGroup>
  <ServiceDefinition Include="ServiceDefinition.csdef" />
  <None Include="ServiceDefinition.Local.csdef" />
  <None Include="ServiceDefinition.Development.csdef" />
  <None Include="ServiceDefinition.Test.csdef" />
  <None Include="ServiceDefinition.Acceptance.csdef" />
  <None Include="ServiceDefinition.Release.csdef" />
  <ServiceConfiguration Include="ServiceConfiguration.cscfg" />
  <None Include="ServiceConfiguration.Local.cscfg" />
  <None Include="ServiceConfiguration.Development.cscfg" />
  <None Include="ServiceConfiguration.Test.cscfg" />
  <None Include="ServiceConfiguration.Acceptance.cscfg" />
  <None Include="ServiceConfiguration.Production.cscfg" />
  <None Include="ServiceConfiguration.Release.cscfg" />
</ItemGroup>
<UsingTask TaskName="TransformXml" AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" />
<PropertyGroup>
  <ServiceConfigurationTransform>ServiceConfiguration.$(AzureDeployEnvironment).cscfg</ServiceConfigurationTransform>
</PropertyGroup>
<Target Name="TransformServiceConfiguration" BeforeTargets="ValidateServiceFiles" Condition="exists('$(ServiceConfigurationTransform)')">
  <TransformXml Source="@(ServiceConfiguration)" Destination="%(Filename)%(Extension).tmp" Transform="$(ServiceConfigurationTransform)" />
</Target>
<PropertyGroup>
  <ServiceDefinitionTransform>ServiceDefinition.$(AzureDeployEnvironment).csdef</ServiceDefinitionTransform>
</PropertyGroup>
<Target Name="TransformServiceDefinition" BeforeTargets="ValidateServiceFiles" Condition="exists('$(ServiceDefinitionTransform)')">
  <TransformXml Source="@(ServiceDefinition)" Destination="%(Filename)%(Extension).tmp" Transform="$(ServiceDefinitionTransform)" />
</Target>
<Target Name="CopyTransformedEnvironmentConfigurationXmlBuildServer" AfterTargets="AfterPackageComputeService" Condition="'$(AzureDeployEnvironment)'!='' and '$(IsDesktopBuild)'!='true' ">
  <Copy SourceFiles="ServiceConfiguration.cscfg.tmp" DestinationFiles="$(OutDir)ServiceConfiguration.cscfg" />
  <Copy SourceFiles="ServiceDefinition.csdef.tmp" DestinationFiles="ServiceDefinition.build.csdef" />
</Target>

Listing 1

As you can see on line 31 of listing 1, ServiceDefinition.csdef.tmp (which holds the transformed xml) is copied to ServiceDefinition.build.csdef. CSPack knew about the convention that the .build.csdef file needed to be used to create the package. The package can then be deployed to Azure.

When we started to use the 1.8 SDK this did not work anymore. I found a Stack Overflow post describing a change in this process, but not my issue.
After a lot of reading log files and studying .targets files, I came to the conclusion that the .build.csdef convention does not function anymore. Instead I tried changing the DestinationFiles attribute value to "$(OutDir)\ServiceDefinition.csdef". That works!

Mind the \ after $(OutDir); if you leave it out, the value of $(OutDir) and the filename are concatenated into one longer, wrong filename.
So line 31 becomes:

<Copy SourceFiles="ServiceDefinition.csdef.tmp" DestinationFiles="$(OutDir)\ServiceDefinition.csdef" />

Listing 2
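
For completeness: $(AzureDeployEnvironment), which drives these transforms, is an ordinary MSBuild property, so it can be supplied per environment, for example in the build definition's MSBuild arguments. A minimal example (MyAzureProject.ccproj is a placeholder; the environment name must match your transform file names):

msbuild MyAzureProject.ccproj /p:AzureDeployEnvironment=Test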

Henry Cordes
My thoughts exactly…


In my previous post I wrote about how we (the ROBIN dev-team) decided to use WebSync for our real-time solution. This time around I want to share how we implemented WebSync in ROBIN.

Architecture
WebSync Server supports multiple application architectures.

Integrated application architecture:

Img 01: Integrated Application Architecture WebSync

The integrated architecture means that the web application and WebSync Server run in the same IIS application pool. In Azure this means you can spin up a new instance when the running instance(s) run out of resources.

Isolated Application Architecture:

Img 02: Isolated Application Architecture WebSync

The isolated architecture means that the web application and WebSync Server run in separate IIS application pools.
At this moment we have implemented the integrated architecture, but we are debating whether we need to go for the isolated architecture: we think it is more robust and gives us the opportunity to decouple the real-time, or notification, logic even more.

ROBIN Deployment Architecture
We have the following deployment architecture:

Img 03: ROBIN Deployment Architecture

As image 03 shows, we use the integrated architecture; real-time is part of our Process Services layer. In our Frontends, APIs and Background Worker layers we leverage WebSync to do the real-time handling for us.

Storage Provider
WebSync needs storage, and supports the following providers out-of-the-box:

  • Sticky Providers
    • In Memory Provider (Default)
    • Sticky Sql Provider
  • Stateless Providers
    • Stateless Sql Provider
    • Azure Table Provider

We use the Stateless Sql Provider, because it is the fastest provider WebSync supports that works on Azure. We already had a working on-premise version that used the Sticky Sql Provider, so switching seemed the easiest. We think switching to the Azure Table Provider will be easy, and we will do so if we need it for scalability reasons, but we would like to avoid it because we want the best performance possible.

Implementation details
To use WebSync we had to implement a few parts:

  • Database deployment script (we do not want WebSync to generate the tables it needs in our production environment)
  • Configuration (web.config for the most part; we really want to leverage Azure's ServiceConfiguration for this, but WebSync does not support it at this moment)
  • Server-side publishing
  • Client real-time handling

Database deployment script and configuration
The database deployment was easy, because WebSync can generate the tables for you (the default). The only issue we had: if you do not want WebSync generating the tables, but want to create them yourself, you have to say so in config, using the manageSchema attribute:

<websync>
  <server providertype="FM.WebSync.Server.Providers.Stateless.SqlProvider">
    <providersettings>
      <add name="connectionStringName" value="ConnectionstringToUse" />
      <add name="manageSchema" value="false" />
    </providersettings>
  </server>
</websync>

Listing 01: manageSchema = false

Server-side publishing
On the server-side we lean heavily on Inversion of Control and Dependency Injection. We find it really helpful to compose our objects as we see fit, and it also helps tremendously in our TDD workflow. Because we need to support all kinds of clients (web applications, iPhone, other mobile devices) we evolved to the following (simplified) implementation:

IRequestHandlerWrapper

namespace Robin.FrontEnds.Realtime.Interfaces
{
    public interface IRequestHandlerWrapper
    {
        void Publish(long userPersonId, string data, string channelBase);
    }
}

Listing 02: IRequestHandlerWrapper

IRequestHandlerWrapper is the interface we use to abstract away the Publish method of WebSync.

RequestHandlerWrapper

public class RequestHandlerWrapper : IRequestHandlerWrapper
{
    private const string DashboardChannelFormat = "/{0}/{1}";

    public void Publish(long userPersonId, string data, string channelBase)
    {
        var channel = CreateDashboardChannel(channelBase, userPersonId);
        var publication = new Publication
        {
            Channel = channel,
            DataJson = data
        };

        RequestHandler.Publish(publication);
    }

    private static string CreateDashboardChannel(string channelBase, long userPersonId)
    {
        return string.Format(CultureInfo.InvariantCulture, DashboardChannelFormat, channelBase, userPersonId);
    }
}

Listing 03: RequestHandlerWrapper

RequestHandlerWrapper implements IRequestHandlerWrapper. Line 14 uses FM.WebSync.Server.RequestHandler and executes its Publish() method.

Of course, our implementation consists of more, but this, in short, is how we wrap WebSync. We have a RealtimeService (which implements IRealtimeService) and the notion of a 'Publisher': for every frontend (web application, device) we have a Publisher implementation, and we inject these using Dependency Injection. It makes us really flexible: for every new frontend type we only have to create a new Publisher implementation, or leverage an existing one.
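
A minimal sketch of that Publisher idea; IPublisher and DesktopPublisher are illustrative names, not our exact types:

public interface IPublisher
{
    void Publish(long userPersonId, string data);
}

// One implementation per frontend type; each delegates to the WebSync wrapper.
public class DesktopPublisher : IPublisher
{
    private readonly IRequestHandlerWrapper requestHandlerWrapper;

    // The wrapper is injected by the IoC container.
    public DesktopPublisher(IRequestHandlerWrapper requestHandlerWrapper)
    {
        this.requestHandlerWrapper = requestHandlerWrapper;
    }

    public void Publish(long userPersonId, string data)
    {
        requestHandlerWrapper.Publish(userPersonId, data, "/desktop");
    }
}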

Client real-time handling
Our main web application has a 'one page architecture': we have a single page and use JavaScript to show different parts of the application on that page. We use the module and jQuery widget factory patterns. We are looking into Backbone, Ember and Knockout. We have the notion of a 'Communicator' in our client-side architecture. In the Communicator we handle the subscriptions and all incoming real-time (push) messages.

Robin.communicator

robin.communicator = (function ()
{
    var self = this;

    fm.websync.client.init({
        requestUrl: robin.ui.constants.WebSyncRequestUrl
    });

    fm.websync.client.connect({
        onSuccess: function (args)
        {
            self.connected = true;
        }
    });

    var addDashboardSubscription = function (agentId)
    {
        fm.websync.client.subscribe
        ({
            channel: '/desktop/' + agentId,
            onReceive: function (args)
            {
                var data;
                var notification = args.data;
                if (notification.NotificationType === robin.ui.constants.ConversationAdded)
                {
                    data = {
                        InboxItems: [notification.InboxItem]
                    };

                    $.publish(robin.ui.buildDashboardTopic(robin.ui.constants.NewInboxItemsTopic), [data]);
                }
                else if (notification.NotificationType === robin.ui.constants.ConversationRead)
                {
                    data = {
                        'ConversationId': notification.ConversationId
                    };
                    $.publish(robin.ui.buildConversationTopic(notification.ConversationId, robin.ui.constants.ReadTopic), [data]);
                }
            }
        });
    };

    var removeDashboardSubscription = function (agentId)
    {
        fm.websync.client.unsubscribe
        ({
            channel: '/desktop/' + agentId,
            onSuccess: function (args)
            {
            }
        });
    };

    // 'destroy' is returned below but was not defined in the original listing;
    // assumed here to simply disconnect the WebSync client.
    var destroy = function ()
    {
        fm.websync.client.disconnect();
    };

    return {
        addDashboardSubscription: addDashboardSubscription,
        removeDashboardSubscription: removeDashboardSubscription,
        destroy: destroy
    };

})();

Listing 04: robin.communicator

As you can see in listing 04, the communicator also uses a jQuery plugin for pub/sub. This plugin makes it possible to use the publish-subscribe pattern throughout our modules and widgets, and it uses topics to differentiate between publications. Whenever a real-time message is pushed from the server, a $.publish(…) event for a particular topic is fired and all subscribers to this topic will receive the message.
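
For illustration, a widget on the other side of a topic could subscribe like this (a sketch; $.subscribe comes from the pub/sub plugin, and renderInboxItems is a hypothetical handler):

$.subscribe(robin.ui.buildDashboardTopic(robin.ui.constants.NewInboxItemsTopic), function (event, data)
{
    // data.InboxItems was published by the communicator's onReceive handler above.
    renderInboxItems(data.InboxItems);
});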

I made the listings as small as possible, to show the intent. The WebSync client API is clean and self-explanatory: you have to init, then connect and subscribe.
We get lots of advantages from using WebSync: the administration of which clients are connected to which subscriptions is abstracted away for the most part. The way WebSync handles all browsers is a big benefit also; we implemented WebSockets ourselves before and had to do all that work manually until now.

Henry Cordes
My thoughts exactly…


Because we think it is very important to serve our users changes and notifications in a real-time fashion, we evaluated a lot of tools and components to achieve this. First we looked at Socket.IO, NowJS and Node.js, but we soon found out that this is not very easy if your team is used to Microsoft tooling. Since the ROBIN frontend (we call it the Agent Desktop) is a web application that is built using the Microsoft stack, we need a framework that is better integrated with the stack we know.

Azure and other considerations

We run ROBIN on Azure, which added another complexity. We started out using raw WebSockets with Nugget and only targeted the newest browsers like Chrome and Firefox, but as soon as the WebSocket specs evolved, Chrome no longer supported our version and Firefox dropped WebSocket support. So we did an evaluation of real-time Comet frameworks that work with .NET. This was before SignalR existed.

Evaluation
We evaluated Nugget, PokeIn, Kaazing and WebSync.

The points we evaluated are:

  • features (WebSockets, Comet)
  • client and server framework
  • browsers that are supported
  • maturity
  • learning curve
  • effort
  • price
  • maintainability
  • performance
  • Azure support

We found that while WebSync and PokeIn did not support WebSockets at the time, they handled Comet really well. We knew that WebSync will support WebSockets as soon as the specs are official and browsers support those official specs. We found that WebSync was more mature: it is robust and the APIs are well documented. Although it is more expensive, it is also the most mature. The Google group they use to communicate with their community is very responsive; they will help you as soon as they can.
Up to now, whenever we had an issue they helped us and the issue was fixed quickly.

Implementation
We implemented WebSync and have been using it in production for a while now; we also have an iPhone app that uses WebSync. This was easy, because they provide iOS examples.

Things to improve
We find that the Azure integration works, but could be better. The configuration of the database (WebSync needs four tables in a database) is handled in web.config. When working in Azure, ServiceConfiguration is preferred, but not supported by WebSync at this moment. So all instances need a configured web.config, and service configurations will not work.
SQL Azure is supported and Azure Table Storage can be used for WebSync storage also, but performance is better on SQL Azure.

Conclusion
All in all, we are very pleased with WebSync: it abstracts the real-time communication away and works really well for us. We hope to grow together with WebSync to a real-time enabled future for ROBIN!

Henry Cordes
My thoughts exactly...


At Trinicom, the place I work, we are creating a new customer experience product. We lean heavily on good engineering practices and use Scrum as our development process.
For the application we defined cornerstones, or fundamentals: usability being one of them.
Because our application needs to be zero-footprint web based, we use JavaScript a lot, mostly jQuery actually. We structure our JavaScript code using the module and jQuery widget patterns. We are thinking about implementing an MVC framework for JavaScript, or using something like knockout.js, because we depend on real-time data.
Our application needs real-time info, and because we develop for the future we are leveraging WebSockets to make the real-time updates a reality.
As I mentioned, we lean on good engineering practices. TDD is important for our C# code, so as we develop more and more JavaScript code, we are using TDD for our client-side code also.

QUnit
We use QUnit for our JavaScript unit tests. We use Continuous Integration and run our unit tests at every check-in, on TFS 2010. We wanted that for our JavaScript unit tests also.
So we started searching for information, and there was very little. It seems that most ASP.NET devs do not integrate JavaScript unit tests into their builds, or do not unit test their JavaScript at all.

Web Client Developer Guidance
The Web Client Developer Guidance from Patterns & Practices on CodePlex provides some information on how to integrate QUnit unit tests into a TFS 2008 build. The problem was that the guidance is incomplete and unclear. The P&P group created QUnitExtensions.js, which provides the familiar asserts C# developers know and love: isTrue, isFalse, areEqual, isNull, isNotNull, isUndefined, isNullOrUndefined, isNotNullNorUndefined.
The guidance consists of an ASP.NET MVC app that takes JSON and serializes it to some xml format. The xml is then persisted to disk. The JSON is provided by the QUnit testrunner, which is extended to use the url of a Controller of the ASP.NET MVC app. The extended testrunner serializes the testresults into a hidden form field and submits the contents of that field to the MVC app's controller.
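
Conceptually the extended testrunner does something like the following sketch (this is not the P&P code; the form and field ids are illustrative):

// When QUnit finishes, put the result summary in a hidden form field and
// post the form to the MVC controller that accepts the JSON.
QUnit.done(function (details)
{
    $('#testResults').val(JSON.stringify(details));
    $('#resultsForm').submit();
});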

The format that is persisted to disk is unfamiliar and does not seem to map to any TFS format. The guidance says all you have to do is write the xml file to disk and tell the buildserver where it is using an msbuild script, and the result will be written to the buildlog etc. That did not do the trick for us.

Workflow Activity
We created a Workflow activity that we use in our build workflow.

[RequiredArgument]
public InArgument<string> ResultFile { get; set; }

[RequiredArgument]
public InArgument<IBuildDetail> BuildDetail { get; set; }

protected override void Execute(CodeActivityContext context)
{
	IBuildDetail buildDetail = context.GetValue(this.BuildDetail);
	string file = context.GetValue(this.ResultFile);

	TestRun testRun = DeserializeTestRun(file);
	string outcome = GetOutcome(testRun.TestsFailed);

	IActivityTracking currentTracking = context.GetExtension<IBuildLoggingExtension>().GetActivityTracking(context);
	IBuildInformationNode childNode = currentTracking.Node.Children.CreateNode();
	childNode.Type = currentTracking.Node.Type;

	IBuildStep buildStep = childNode.Children.AddBuildStep("JS Unittest Build Step", string.Format("The Javascript Unittests for build: {0} have run", testRun.Name));
	if(testRun.TestsFailed > 0)
	{
		IBuildError buildError = childNode.Children.AddBuildError(string.Format("The Javascript UnittestRun {0} ", outcome), DateTime.Now);
		buildError.Save();
	}

	WriteNewBuildInformationNode(currentTracking, string.Format("The Javascript UnittestRun for build: {0} {1}",  testRun.Name,outcome));
	WriteNewBuildInformationNode(currentTracking, string.Format("{0} test(s) run", testRun.TestsTotal));
	WriteNewBuildInformationNode(currentTracking, string.Format("{0} test(s) succesfull", testRun.TestsPassed));
	WriteNewBuildInformationNode(currentTracking, string.Format("{0} test(s) failed", testRun.TestsFailed));
	WriteTestCaseInformation(testRun.TestCases, currentTracking, buildDetail);

	SetBuildStatus(testRun.TestsFailed > 0, buildStep, buildDetail);

	if (testRun.TestsFailed == 0)
	{
		WriteToBuildSummary(buildDetail, string.Format("The Javascript UnittestRun for build: {0} {1}", testRun.Name, outcome));
	}

	buildStep.FinishTime = DateTime.Now;
	buildStep.Save();
	buildDetail.Information.Save();
}	

Listing 1

The activity takes a file and deserializes the xml into objects; it then uses the object graph to report to the buildlog whether the unit tests were successful or not.
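
The DeserializeTestRun helper is not shown in the post. A minimal sketch of what it could look like, assuming the file was written with XmlSerializer as in Listing 2 (shown later), and that System.Xml and System.Xml.Serialization are imported:

private static TestRun DeserializeTestRun(string file)
{
    // Read the result file back into the TestRun object graph.
    var serializer = new XmlSerializer(typeof(TestRun));
    using (var reader = XmlReader.Create(file))
    {
        return (TestRun)serializer.Deserialize(reader);
    }
}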

TestRun Domain objects


Img 1: The TestRun domain

Image 1 shows the domain objects of the TestRun. A TestRun has one or more TestCases, and a TestCase can have an Output that is of the ErrorInfo type.
A TestCase has an Outcome and a StatusPassed; if an Exception was thrown, the details of that Exception are found in the Output property.
The TestRun has three properties that hold the result for the run: how many tests were run (TestsTotal), how many tests failed (TestsFailed) and how many tests passed (TestsPassed).
The Activity of course uses these three properties of the TestRun to decide whether the run was successful or not. When the TestRun is not successful, the TestCases that failed are written to the buildlog and the build will fail.
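
Since the class diagram is hard to reproduce here, this is a hypothetical reconstruction of the TestRun domain based on the description above (the property types are assumptions):

using System.Collections.Generic;

public class TestRun
{
    public string Name { get; set; }
    public int TestsTotal { get; set; }
    public int TestsPassed { get; set; }
    public int TestsFailed { get; set; }
    public List<TestCase> TestCases { get; set; }
}

public class TestCase
{
    public string Name { get; set; }
    public string Outcome { get; set; }
    public bool StatusPassed { get; set; }
    public ErrorInfo Output { get; set; }  // filled when an exception was thrown
}

public class ErrorInfo
{
    public string Message { get; set; }
    public string StackTrace { get; set; }
}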

ITestResultSerializer interface
The Web Client Developer Guidance contains an MVC application that uses an interface named ITestResultSerializer. I wrote our own implementation, which is simpler than the implementations provided by Patterns & Practices.
They use XmlTextWriter to write out the xml; I just serialize the object graph to xml.

ITestResultSerializer Tfs implementation

using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Serialization;
using QUnitTestResult.Contract;
using QUnitTestResult.Models.Tfs;

namespace QUnitTestResult.Models
{
    public class TfsTestResultSerializer : ITestResultSerializer
    {

        public void Serialize(TestRun testRun, string resultsFilePath, string fileName, string buildName)
        {
            string filename = Path.Combine(resultsFilePath, fileName);
            if (!Directory.Exists(resultsFilePath))
            {
                Directory.CreateDirectory(resultsFilePath);
            }

            using (var textWriter = new XmlTextWriter(filename, Encoding.UTF8))
            {
                var xmlSerializer = new XmlSerializer(typeof(TestRun));
                xmlSerializer.Serialize(textWriter, testRun);
            }
        }

    }
}

Listing 2: TfsTestResultSerializer

The TfsTestResultSerializer writes the xml to a location on disk. The build workflow knows this location, picks up the xml, and the Activity publishes the results to the buildlog of our TFS 2010 TeamBuild.
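
For illustration, a call to the serializer could look like this (all values are made up; the QUnitResult_<buildnumber>.xml naming matches the MatchPattern used later in the build workflow):

var serializer = new TfsTestResultSerializer();
serializer.Serialize(testRun, @"C:\Builds\1\TestResults", "QUnitResult_MyBuild_20121201.1.xml", "MyBuild_20121201.1");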

Workflow – BuildTemplate

To make it possible to run JavaScript unittests in a TFS 2010 TeamBuild, I made some changes to the workflow of the BuildTemplate.

Arguments
In the workflow I defined a few Arguments that can be set in the builddefinition. The arguments are:

  • RunJavascriptUnittests (datatype boolean - default value = false)
  • JavascriptUnittestBrowserFiles (datatype List<string> – default value = New String() {"c:\program files\apple\safari.exe"})
  • TestresultAcceptorWebsite (datatype string – default value = “localhost:8081”)
  • JavascriptTestRunnerFile (datatype string – default value = "scripts/unittests/testrunner.htm")

Img 2: Buildtemplate arguments

In the default BuildTemplate there is an Activity that is called: “Run Tests”. You can find it if you follow the default BuildTemplate and click your way through the following path in the workflow:

Process > Sequence > Run On Agent > Compile, Test, and Associate Changesets and Work Items > Sequence > Compile, Test, and Associate Changesets and Work Items > Try Compile and Test > Compile and Test > For Each Configuration in BuildSettings.PlatformConfigurations > Compile and Test for Configuration > If Not DisableTests > Run Tests

Below the "If Then" Activity "If Not TestSpec Is Nothing" I add my own "If Then" Activity and call it "If RunJavascriptUnittests is True".

Img 3: Run Tests Sequence Build Workflow

In the If I check the boolean Argument I added to the builddefinition (as stated in the Arguments section above), which is used to specify whether we want to run JavaScript Unittests in the build.

Img 4: If RunJavaScriptUnitTests is set to true

If this argument is set to true, the "Then" branch of the "If Then" Activity is followed. In the "Then" part I add a Sequence called "Run Javascript Unittests".

Img 5: Run JS UnitTests

The sequence holds a “For Each” Activity: “ForEach Browser exe in JavascriptUnittestBrowserFiles”.

Img 6: ForEach browser exe in Argument

In the “ForEach” Activity all strings in the “JavascriptUnittestBrowserFiles” argument are iterated over, these strings hold the path and name of the executable of a browser on the buildserver (for example “C:\users\username\AppData\Local\Google\Chrome\Application\chrome.exe”).
Inside the Body another "ForEach" activity, "Foreach testRunner in JavascriptTestRunnerFiles", exists.

Sequence


Img 7: For each testrunner in argument

This "ForEach" Activity iterates over all strings in the Argument "JavascriptTestRunnerFile", so that we can have more than one html file containing JavaScript unittests. Inside the body I put another Sequence activity (img 8).

Img 8: Sequence that does the work

WriteBuildMessage
The "WriteBuildMessage" activity only writes to the buildlog for debugging purposes; it also comes in handy when something goes wrong in a build.

Start testrunner js unittests (InvokeProcess)
Next the "Start testrunner js unittests" InvokeProcess Activity is run. The properties are shown in img 9.


Img 9: InvokeProcess calls browser exe with html file as parameter

The “FileName” property is filled with the “browserFile” variable from img 6 and holds the exe of the browser.

The "Arguments" property is the string that concatenates all variables into one string holding the important pieces of the puzzle.

Img 10: Arguments property

The string looks like listing 4:

"""file:///" + BuildDirectory + "\Binaries\_PublishedWebsites\" + JavascriptTestRunnerFile + "?submitResults=" + TestresultAcceptorWebsite + "/qunit/acceptor&resultfilePath=" + TestResultsDirectory + "&build=" + BuildDetail.BuildNumber + "&platform=" + platformConfiguration.Platform + "&flavor=" + platformConfiguration.Configuration + "&teamproject=" + BuildDetail.BuildDefinition.TeamProject + """"

Listing 4

Listing 4 shows how the Arguments are built up. It tells the browser to open the "JavascriptTestRunnerFile", which holds the html filename in our project, as a file on the filesystem; we use the "BuildDirectory" variable and the knowledge of where the websites are published in this directory to point to the right file.
Next we add querystring variables and values so the testrunner javascript file knows where to send the results of the tests, which build we run, what solutionconfiguration we are building, etc.
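
For illustration only, with made-up values for all variables, the resolved argument could look like:

"file:///C:\Builds\1\Sources\Binaries\_PublishedWebsites\scripts/unittests/testrunner.htm?submitResults=localhost:8081/qunit/acceptor&resultfilePath=C:\Builds\1\TestResults&build=MyBuild_20121201.1&platform=Any CPU&flavor=Release&teamproject=MyProject"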

 

browserFile.Substring(0, browserFile.LastIndexOf("\"))

Listing 5 Working Directory

Listing 5 shows the value for the Working directory property.

FindMatchingFiles
The next step is to check if the results are written to disk as expected, with the “FindMatchingFiles” activity using the MatchPattern shown in Listing 6:

TestResultsDirectory + String.Format("\QUnitResult_{0}.xml", BuildDetail.BuildNumber)

Listing 6 MatchPattern

When the file is found, the variable JSUnitTestMatchingFileResult is filled with the matching file(s).

If Js UnitTestResultFile is found
The “If Then” activity “If Js UnitTestResultFile is found” is run.

Img 11: "If Then" activity

If the results Count() is equal to 1, the Workflow Activity mentioned at the beginning of the post ("JsUnitTestResultReader") is called; if not, a message is written to the buildlog.

Img 12: Properties JsUnitTestResultReader

We pass the BuildDetail to the Activity, so the context is known.
We also need the name and path of the resultfile, to read the results and write them to the buildlog.

Img 13: ResultFile property

TestResultsDirectory + String.Format("\QUnitResult_{0}.xml", BuildDetail.BuildNumber)

Listing 7 ResultFile

Now the circle is complete, and the results can be written to the buildlog.

Img 14: Passed unittests in log details

Of course, when a test fails the build fails partially (like with normal unittests) and the result shows in the buildlog summary; the results like those shown in img 14 are only visible in the details (diagnostics) part of the buildlog.

Henry Cordes
My thoughts exactly…


We needed to deploy a WindowsService in our nightly build. We deploy to a Development, a Test and an Acceptance environment. That all these 'environments' are on the same physical server for the moment does not really matter. We are going to change this, but for now this is how it works.

This situation creates the requirement to deploy different instances of the same WindowsService on the same server (a Development, Test and Acceptance instance).
To install a WindowsService through our build, we created a WindowsService that is capable of installing itself when called with some arguments via the command line (a sketch of one common approach follows below).
Maybe I will blog about how we did this in the future, but for now, I think the TeamBuild Workflow is more interesting.
In the main sequence of the build workflow some extra elements exist; we added these elements because of the special build functionalities we need.
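
As promised, a sketch of a self-installing Windows service; this uses the standard ManagedInstallerClass route and is not necessarily our exact code (for example, our service also accepts a -name argument, which this sketch omits):

using System.Configuration.Install;
using System.Reflection;
using System.ServiceProcess;

internal static class Program
{
    private static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "-install")
        {
            // Runs the installers in this assembly, like installutil.exe would.
            ManagedInstallerClass.InstallHelper(new[] { Assembly.GetExecutingAssembly().Location });
        }
        else if (args.Length > 0 && args[0] == "-uninstall")
        {
            ManagedInstallerClass.InstallHelper(new[] { "/u", Assembly.GetExecutingAssembly().Location });
        }
        else
        {
            ServiceBase.Run(new MyWindowsService()); // MyWindowsService is illustrative
        }
    }
}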

The main sequence used in this workflow looks like image 1:

Img 1: Complete Build Sequence

The deployment of the services takes place in the fifth element of the workflow, another nested sequence I call 'Deploy WindowsServices', as shown on image 2:

Img 2: Deploy WindowsServices (Sub)Sequence

Inside the sequence an 'If Then' activity is placed that checks if compilation succeeded and if the tests have run successfully (image 3):

Img 3: If compilation and Tests are successful activity

The Condition has the syntax shown in listing 1:

BuildDetail.CompilationStatus = BuildPhaseStatus.Succeeded And (BuildDetail.TestStatus = BuildPhaseStatus.Succeeded Or BuildDetail.TestStatus = BuildPhaseStatus.Unknown)

Listing 1

Arguments
If you need variables that you want to influence in the Builddefinition, the best option is to use Arguments; these are variables that are accessible in the Builddefinition. The Arguments can be added or changed by clicking the tab 'Arguments' in the bottom-left of the workflow editor (image 4):

Img 4: Arguments Build (can be set in builddefinition)

In the Arguments I added the Argument WindowsServicesToDeploy; the datatype (or argumenttype) is a string array. The WindowsServicesToDeploy argument holds the projectnames of the windowsservices that have to be deployed and that are part of the solution that has to be built.

Argument WindowsServicesToDeploy:

The Argument of type string[] that is called WindowsServicesToDeploy has the following value:

New String() {"WebSocketServerService"}

Listing 2

The string array has only one value for now, but when I add more windowsservices to the solution, all I have to do is add the name of the project to the array of values in the Builddefinition and we're done.

In the If Then activity shown on image 3, in the Then section I added a ForEach activity. Image 5 shows the properties of this Activity:

Img 5: For each Service in Argument (List<string>)

Listings 3 and 4 show the syntax and value for the TypeArgument and the Values properties of the inner ForEach activity that iterates over the solution configurations (a System.Activities.ForEach<PlatformConfiguration>):

TypeArgument:

Microsoft.TeamFoundation.Build.Workflow.Activities.PlatformConfiguration

Listing 3

Values:

BuildSettings.PlatformConfigurations

Listing 4

SolutionConfigurations
As I mentioned, our Development, Test and Acceptance environments are all on one box (for now). To differentiate between these environments, we created Solution Configurations for all of them in our solution. By using preprocessor directives we then change the configuration of connection strings, urls etc. in our code and our automated tests. So we have to do a different deployment for every Solution Configuration that is defined in the Builddefinition.
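
For illustration, the preprocessor approach could look like the sketch below; the DEVELOPMENT and TEST symbols would be defined as conditional compilation symbols in the corresponding Solution Configurations (all names and urls here are made up):

public static class EnvironmentConfig
{
#if DEVELOPMENT
    public const string ServiceUrl = "http://dev-server/websocketservice";
#elif TEST
    public const string ServiceUrl = "http://test-server/websocketservice";
#else
    public const string ServiceUrl = "http://acceptance-server/websocketservice";
#endif
}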

The ForEach activity that loops through all strings in the string array defined in the argument WindowsServicesToDeploy (which holds the projectnames of the windowsservices that have to be deployed) contains yet another ForEach activity, shown on image 6. In this ForEach I loop through all SolutionConfigurations.

Img 6: For each Solution Configuration in BuildDefinition

When we double-click to view the contents of this ForEach activity, we see the contents of image 7:

Img 7: Try Catch in For Each

The ForEach that loops through all SolutionConfigurations contains a TryCatch activity; as shown on image 8, an InvalidOperationException and a general Exception are caught in this TryCatch activity:


Img 8: Deploy WindowsService Sequence inside try Catch

In the Catches, BuildErrorMessages are written. In the Try block another Sequence is added, called 'Deploy WindowsService'.

Variables
To hold state, variables can be used. Variables can be added or changed by clicking the tab 'Variables' in the bottom-left of the workflow editor (image 9):

Img 9: Variables

The variables psExecOutput (Int32) and matchingFileResult (IEnumerable<String>) are added.

The ‘Deploy WindowsService’ activity
As we can see on image 8, the sequence 'Deploy WindowsService' is executed inside the Try block.
In the sequence I start off with a FindMatchingFiles activity; this is a standard build workflow activity that is available in Team Build 2010.

Img 10: Deploy WindowsService Sequence

The FindMatchingFiles activity has the properties that are shown on image 11, where the MatchPattern is the most important:

Img 11: FindMatchingFiles Activity

The MatchPattern describes the pattern by which the files you need will be found.

MatchPattern:

String.Format("\\<servername>\Services\{0}_{1}\{0}.exe", windowsService, solutionConfiguration.Configuration)

Listing 5

The next activity is another 'If Then' that checks if the FindMatchingFiles activity found exactly one file, as shown on image 12:

Img 12: If then containing a Invoke Process Activity

PSExec.exe
If one file is found, an 'Invoke Process Activity' is executed; if not, a buildmessage is written. Image 13 shows all properties of the InvokeProcess Activity, used to invoke PsExec to execute the uninstallation of the WindowsService: if the file is found at the location the MatchPattern points to, an older version of the windowsservice is installed, so we need to uninstall it first.

Img 13: Invoke Process Activity's properties

We renamed psexec.exe to prevent a security breach.
Listings 6, 7 and 8 show the values of the properties for the uninstall action for our WindowsService:

Arguments:

String.Format("\\<servername> -d C:\Services\{1}_{0}\{1}.exe -uninstall -name ""{0}"" /accepteula", solutionConfiguration.Configuration, windowsService)

Listing 6

Listing 6 shows the value for the Arguments property: the arguments that you pass to the process you invoke. In our case we invoke psexec.exe, which calls our windowsservice on another machine, so these are arguments that we pass to psexec. The arguments used are:

  • -d, which instructs psexec not to wait for the application to terminate, so the build can continue in case something takes a very long time;
  • /accepteula, which takes care of the dialog that pops up the first time a user calls psexec; without it the build would fail, because the dialog would never be closed;
  • C:\Services\{1}_{0}\{1}.exe -uninstall -name ""{0}"", which will result in something like 'C:\Services\servicename_Development\servicename.exe -uninstall -name "servicename"'; this is the exe that psexec will call, and we pass the parameters for the service right in there (-uninstall -name "servicename").

DisplayName:

Invoke PSExec Process Uninstall

Listing 7

FileName:

"D:\Tools\SysInternals\PSTools\PsExecT.exe"

Listing 8

Listing 8 shows the Local Path to PSExec.exe on the buildserver.

Result:

psExecOutput (Int32)

Listing 9

WorkingDirectory:

"D:\Tools\SysInternals\PSTools" 

Listing 10

Listing 10 shows the working directory for PSExec on the buildserver.

After the uninstallation, or if the service was not installed on the server, the CopyDirectory activity is executed:


Img 14: CopyDirectory Activity

The CopyDirectory Activity uses the following properties:

Destination

String.Format("\\<servername>\Services\{0}_{1}", windowsService, solutionConfiguration.Configuration)

Listing 11

Source

String.Format("{0}\{1}\bin\{2}", BuildDetail.DropLocation, windowsService, solutionConfiguration.Configuration)

Listing 12

When the copy action is finished, another Invoke Process Activity is executed that calls psexec to install the windowsservice.

Img 15: Invoke Process (Install Service)

The details of this activity look like image 16:

Img 16: Invoke Process details

The properties are shown on image 17:

Img 17: Properties Invoke Install Process

All properties have almost the same values as the uninstall invoke process in image 13; only the Arguments are slightly different.

Arguments:

String.Format("\\<servername> -d C:\Services\TX.Communication.Prototype\{1}_{0}\{1}.exe -install -name ""{0}"" /accepteula", solutionConfiguration.Configuration, windowsService)

Listing 13

These steps take care of installing our WindowsServices in our build, so we can deploy in an automated fashion. After the deployment, in our nightly build we also run automated UI Tests. Another topic that is quite interesting…

Henry Cordes
My thoughts exactly…