Orchestration: a first glance

September 30, 2012


So what exactly is orchestration? This is something I have been delving into pretty heavily of late. There is a multitude of orchestration tools on the market today, but I am going to start with one that is 100% free and not too dissimilar from other leading names on the market. Orchestration can work at the technical layer and/or the human layer to handle Business Process Management (BPM).

VMware Orchestrator, as the name implies, is primarily designed to orchestrate virtual machine infrastructure in VMware vSphere, along with many other products in the VMware and Microsoft stacks.

Most orchestration tools follow a pretty similar pattern: a visual workflow of actions/activities that perform a task. Each action/activity may require input parameters and has the potential to pass on output parameters describing the activity that just took place.

Sample Workflow


Each box on screen represents an action/activity. Let's take a very simple action that could be performed by a VMware administrator. Let's say we wish to power on a machine. Before we can power on a machine via the console we need to identify which machine we wish to power on. Once we have selected our machine in the console we can right-click it and power it on. An orchestration tool can perform all the same actions that we do via the console; however, we need to program each step as we go along. Once we have powered that machine on via the console we need to wait for the power-on action to complete before we can use the machine.

If we were to do this in a workflow we would need to do the same, except each step would have to be pre-defined in code first. The diagram above shows multiple boxes, each connected with links to other actions. Each of those actions may define different input/output parameters: inputs are expected to be provided to the action prior to its execution, and outputs are subsequently passed on to succeeding actions as they execute.
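The pattern can be sketched in a few lines. The names below are purely illustrative (this is not the real vCO API); the point is the shape: each action declares its inputs, and its outputs feed the next action in the chain.

```python
def power_on(vm_name):
    """Action: power on a machine; outputs a task handle plus the vm name."""
    return {"task": f"poweron-{vm_name}", "vm": vm_name}

def wait_for_task(task, vm):
    """Action: block until the preceding action's task completes."""
    # a real engine would poll the task here; we simply pass the result on
    return {"vm": vm, "state": "poweredOn"}

def run_workflow(vm_name):
    # the outputs of one action become the inputs of its successor
    out = power_on(vm_name)
    out = wait_for_task(out["task"], out["vm"])
    return out

print(run_workflow("web01"))  # {'vm': 'web01', 'state': 'poweredOn'}
```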

Step 1: Understanding Actions / Activities

So let's step away from the VMware realm for a second and look at a very simple script to create a user. Then we will define how this could be set up as an action/activity. (I keep using the word activity as Edwin and I are focusing quite heavily on Windows Workflow Foundation, within Microsoft Team Build 2010 inside Team Foundation Server.) I will follow up with more of this in the next post.

So back to the task at hand. Let's say we wanted to orchestrate the creation of a user.

We could create a simple script such as:

set objContainer = GetObject("LDAP://OU=users,DC=installpac,DC=com")
set objUser = objContainer.Create("user", "CN=john_mcfadyen")
objUser.Put "sAMAccountName", "johnmcfadyen"
objUser.SetInfo
objUser.SetPassword "secret"
set objUser = nothing

This script does not really promote re-use and would be better written as a function/sub such as:

function CreateUser(ContainerName, UserName, Password)
   on error resume next
   set objContainer = GetObject("LDAP://" & ContainerName)
   set objUser = objContainer.Create("user", "CN=" & UserName)
   objUser.Put "sAMAccountName", UserName
   objUser.SetInfo
   objUser.SetPassword Password
   if err.Number = 0 then
      CreateUser = "success"
   else
      CreateUser = "fail"
   end if
   set objUser = nothing
end function

Now that the code is more dynamic and can accept input parameters, we have code that is suitable for use as an orchestrated action/activity. Our function takes three input parameters:

  1. ContainerName
  2. UserName
  3. Password


A Sample Action in vCO

The following screenshot shows an action written in vCO requesting two input parameters.

Using actions in Workflows

So once you have your base actions written you can start gluing them all together within your workflow. In my case I wanted to see if I could get vCO to help me out with some mass production of servers. For those of you familiar with vCloud Director (VCD), I wanted to create a vApp (or group of servers) without having to pay for VCD. I actually built all this before I even realised there was such a system as VCD. That's what you get for not keeping up with current trends. So I ended up wanting the following steps to be done:

  • Get a list of machines to clone
  • Clone a machine from an existing template
  • Allocate any additional disk required
  • Allocate any additional networks required
  • Allocate machine on a physical host
  • Sysprep the machine and assign networking detail
  • Join appropriate domain
  • Copy software to machine
  • Install software

Consuming Data

So the next trick was trying to consume some data to perform the actions. On my initial inspection I thought I would be able to easily consume some XML via the XML plugin, but I quickly came to realise it would be much harder than this to do within a workflow. Under most programmatic scenarios, consuming the content of an XML file would be relatively simple. But one of the issues with workflow-based systems is that your workflow needs to consume objects that are recognised by the workflow system. As such I quickly realised that I would have to encapsulate the data stream within my XML into another object that was recognised by the workflow solution.

VMware Orchestrator offers a construct for doing exactly that, known as a configuration template. So my task at hand was to wrap my XML data into this orchestration construct.

So I wrote a few activities to parse my XML file and load the data found into configuration templates.

Once I had my data in configuration templates, it was simply a matter of looping through the data and performing the desired actions against each machine.
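Outside of vCO, the parsing step can be sketched like this in Python. The element and attribute names are assumptions for illustration, not the actual format I used; plain dictionaries stand in for the configuration templates.

```python
import xml.etree.ElementTree as ET

# hypothetical machine-definition XML (names/attributes are assumptions)
SAMPLE = """
<machines>
  <machine name="web01" template="w2k8-base" domain="dev.local"/>
  <machine name="sql01" template="w2k8-sql" domain="dev.local"/>
</machines>
"""

def load_machine_configs(xml_text):
    """Parse each <machine> element into a plain dict the workflow can use."""
    root = ET.fromstring(xml_text)
    return [dict(m.attrib) for m in root.iter("machine")]

for cfg in load_machine_configs(SAMPLE):
    # each iteration would drive the clone/customise actions for one machine
    print(cfg["name"], cfg["template"])
```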

Looping through objects

Once you have your dataset in a configuration template it is a simple matter of looping through each item in the configuration template.

If I was to do this again, chances are I would do things differently, perhaps by leveraging AMQP or SOAP calls directly into vCO. This would allow me to monitor each aspect of the deployment on a case-by-case basis and subsequently determine steps to take after each action was validated. But for the sake of this demonstration I kept things simple.

Here I pass my array list of configuration item content into the loop, then process the actions against the current item; then it's just a matter of repeating the steps against each new object in the array list. Here you can see I am cloning a machine from a template whilst waiting for the cloning action to complete.

Creating machine template types

One issue I found with cloning machines from templates is that not all machines have the same requirements. Some need one NIC, some need multiple NICs; some need one disk, some need multiple. So I created the concept of a machine template. A machine template allowed me to add disk/network items to each cloned instance after the cloning process took place.

The act of cloning a machine gives you whatever disk/network configuration was available in the VM template you just cloned. The machine template allowed me to modify the base cloned VM without the need to create multiple VM templates and therefore waste unnecessary disk space.

As you can see from the diagram on the left, this template allows me to specify the disk/network allocations for each machine template type. When cloning a VM you specify which machine template to apply after the cloning process takes place.

This allowed me to use a minimum number of VM templates with an unlimited number of machine templates applied over the top. In addition to the machine template, I then kick off an unattended build script which provisions software to the VM after the initial cloning process and machine templates are applied.
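As a sketch (all names hypothetical), a machine template can be thought of as a delta applied over whatever the cloned VM already has:

```python
# hypothetical machine templates: extra disks/NICs to bolt onto a fresh clone
MACHINE_TEMPLATES = {
    "small":  {"extra_disks_gb": [],         "extra_nics": 0},
    "dbhost": {"extra_disks_gb": [100, 200], "extra_nics": 1},
}

def apply_machine_template(cloned_vm, template_name):
    """Return the VM config after layering the machine template over the clone."""
    tpl = MACHINE_TEMPLATES[template_name]
    vm = dict(cloned_vm)  # keep whatever the VM template already provided
    vm["disks_gb"] = cloned_vm["disks_gb"] + tpl["extra_disks_gb"]
    vm["nics"] = cloned_vm["nics"] + tpl["extra_nics"]
    return vm

base = {"name": "sql01", "disks_gb": [40], "nics": 1}  # as cloned
print(apply_machine_template(base, "dbhost"))
# {'name': 'sql01', 'disks_gb': [40, 100, 200], 'nics': 2}
```

One VM template plus many small machine templates is cheaper on disk than one fully-built VM template per shape.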


Customisation Spec

VMware has a concept of a customisation spec, but it falls pretty short when you are attempting to sysprep a number of machines. As I was in an environment that had multiple domains, I found the customisation spec cumbersome to configure, so I generated a sysprep file by parsing the content of this XML.

The result was a dynamic sysprep script which allowed delivery into any domain combination. This was basically a replacement for the built-in VMware customisation spec.
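As a rough illustration (the section and key names below follow the classic sysprep.inf layout; the machine fields are assumptions), generating the answer file per machine is just templating over the parsed values:

```python
# minimal sysprep.inf-style answer file, rendered per machine
SYSPREP_TEMPLATE = """[Identification]
JoinDomain={domain}
DomainAdmin={join_user}

[UserData]
ComputerName={name}
"""

def render_sysprep(machine):
    """Fill the template from one machine's parsed XML values."""
    return SYSPREP_TEMPLATE.format(**machine)

spec = render_sysprep({"name": "web01", "domain": "dev.local",
                       "join_user": "dev\\joiner"})
print(spec)
```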


The Results

A running execution within a VMware environment could deliver an unlimited number of machines in minutes.

Improvements for the next time around

I think there are a number of improvements that could easily be made here, but this is a simple demonstration of how you can orchestrate your way into mass deployment of environments/systems without spending huge amounts on a full cloud infrastructure. This is not intended to compete with a full-blown cloud infrastructure, but it does help you achieve similar results at a fraction of the cost. It is obviously nowhere near as maintainable, but I built this well and truly before I knew cloud concepts existed.

Currently I am working on a way to link all of this to TFS. Yes, I understand TFS already has Lab Manager in place, but guess what: not everyone runs on Hyper-V, so in my case that system is pretty much useless to our organisation.

So keep an eye out, as we plan to release a similar concept linking VMware vCenter instead of Hyper-V into TFS 2010/2012, for those of you who are in the same situation as we were.



Introducing DevOps

September 30, 2012

So lately Edwin Ashdown and I have been doing a stack of work with ALM and DevOps.

For those of you in the packaging space that haven't heard of DevOps, it's probably about time you hit Google and did a little research. DevOps is all about bridging the gaps between developers and operations teams, along with a bunch of other ideas and initiatives.

So typically, developers and operations/infrastructure people speak different languages, and they each have fairly different parts of the SDLC puzzle to look after. For the last 10 years I have sat firmly between both groups and handled the translation of what each group is discussing. This has made my transition to DevOps relatively painless.

Here's a typical SDLC cycle. I would give the credit to whoever drew it, but the truth is I have no idea where I got it from (so I apologise in advance to the owner).


So DevOps is a buzzword that's being thrown around a lot of late. The short story is it's an attempt to bridge the divide between:

  • Developer and Operations 
  • ITIL vs delivery
  • Security vs Productivity
  • Orchestration vs SneakerNet
  • Self Service vs no service

So those of you who know me probably realise I like digging pretty deep into the technical realm, and DevOps has been no exception to this rule. With the advent of cloud technologies, SaaS, PaaS, IaaS and DaaS are now easily within the reach of your typical enterprise IT organisation. Interestingly, although most of this is so much easier to do than it was 5 years ago, many companies are still struggling to take that leap into DevOps or cloud technologies.

From my observations this seems to be not for technical reasons, but more because the culture of an organisation is not ready to accept such a significant change. As such, introducing a lot of these systems has become an issue of social acceptance.

Unfortunately, for those of you who are in a similar situation, I don't have the answer to change that. So instead I will head back down the technical path, and hopefully bringing some understanding of these new technologies and concepts will help you ease these initiatives into your organisation.

Developer and Operations

So unless you have amazing devs in your company like Oliver Reeves or Matthew Erbs, chances are extracting deployment-related information from them is like pulling hen's teeth. What DevOps brings to the table here is not new; it has just been re-badged and tweaked a little bit:

  • Continuous Integration
  • Continuous Deployment
  • Continuous Testing

By strictly enforcing the above, DevOps removes much of the communication overhead between developers and infrastructure. This is more often handled by taking environment delivery into the cloud or into orchestrated delivery.


ITIL vs delivery

So I think everyone will agree ITIL is a necessary evil; however, in some companies ITIL is so heavily implemented it's crippling the organisation. In a recent company I was with, we couldn't even spool up a single server without going through a 3-month design phase, implementation plans, release schedules, and the list goes on. The effect this had on the organisation was that nobody would bother with any form of innovation, as the layers of cr@p you had to wade through were just not worth the investment. As a result, nearly all of the infrastructure was approaching end of life or the end of its support agreements.

Current cloud technologies deal with this ITIL nightmare with technology: placing infrastructure requests in the hands of the user via self-service portals, with business process management (BPM) handling the ITIL workflow with ease. I will follow up with some posts on how VMware vCloud Director and VMware Request Manager deal with the workflow aspects.

I am sure you will all agree that making it easier for the end user to get what they want is on all of our agendas. And yes, these kinds of tools also handle VM sprawl, so things don't spiral out of control just because it has become so easy that people go a little crazy.

Security vs Productivity

Is there such a thing as too much security? Well, I look at this one with a pretty simple rule of thumb: when security is so tight that it starts costing the organisation huge amounts of money by blocking productivity, that's when there is too much security. I have been lucky enough to work in one of those companies (lucky me).

For an IT technical resource who couldn't even run a Google search if it had "SQL" or "C#" in it, life gets pretty difficult. Searching for the syntax of a command went from seconds to hours. To me that's just plain ridiculous, but hey, I am pretty sure there is a good reason to block productivity and increase company expenditure somewhere. I bet the bean counters are ecstatic about how security teams are allowed to reduce the bottom line (*grin*).

Tools such as Microsoft's System Center Configuration Manager (SCCM), coupled with OSD, deliver dynamic operating system deployment with less cumbersome security models. Microsoft has kindly exposed the API to make automation of this toolset a breeze (more on this later).

Orchestration vs SneakerNet

So orchestration is one of my favourite pastimes at the moment; I pretty much gave up my packaging life (where possible) to delve into orchestration. If, like me, you sit part way between developer and operations, then orchestration should definitely be something you are looking at.

Orchestration tools are typically workflow-based systems that are used to "orchestrate" other tools into doing your bidding. There are a stack of tools out there designed expressly with automation of SDLC systems in mind. My weapon of choice in this arena is VMware Orchestrator (it is a bit tricky to get into, but the price is perfect at $0); this is something your bean counter friends will likely approve with limited hesitation. Another good one is Microsoft Opalis, which has recently been re-badged as System Center Orchestrator 2012.

So what's on offer from an orchestration tool? The answer is very simple:

  • Automation
  • Automation
  • Automation

Getting back to the SDLC side of things, an orchestration tool makes Continuous Deployment a reality. Take this scenario as a typical orchestration benefit:

  • Developer checks in code
  • Continuous integration compiles code
  • Unit Testing is done
  • A call is made to the orchestration tool to provision infrastructure
  • A call is made to the orchestration tool to configure a distribution tool such as SCCM
  • A call is made to the orchestration tool to instruct the distribution tool to advertise products to the newly provisioned environment
  • Build validation testing is run
  • A call is made to the orchestration tool to tear down the environment

Self Service vs no service

A typical scenario in the SDLC world: testing needs an environment. Somebody designs the environment, IP addressing is assigned, software is installed. The environment is delivered to testing, testing requests a release of code, testing starts. Defects are logged and the cycle starts again.

Now, from start to finish this can take anything from days to months (yes, I am talking about another of my great clients here; I can't mention any names). So for those of you who are stuck in the "months" provisioning life-cycle, self service is going to be of great interest.

A typical cycle for this scenario is:

  1. Tester requests environment, email is sent to infrastructure team
  2. Infrastructure approves environment build
  3. Environment is built “auto-magically”
  4. IP addressing is assigned, NAT firewalls are set up
  5. Email is sent to testing environment is ready

All this is done without anyone really lifting a finger, other than to click "yes, this is OK".

So the long and the short of all of this is that DevOps is about making life easier. This might mean some of us will be without a job as the technology starts to control/deliver itself. I think the developers are pretty safe, but it's those of us in the middle who are at risk.

So the way I see the IT of the future: if you are not a developer, sooner or later your job might be put at risk. Companies are currently looking to slash budgets, and DevOps is going to be one significant, painless way they can achieve it. So if you're currently in an IT field and you're not a developer, you had better start looking at how DevOps is going to impact your future.

Orchestration is here to stay. Fortunately it's not that mature yet, nor is it for those who are half-heartedly following IT as a career.

Over the next few posts I am going to deep dive into a number of orchestration tools and show some examples on how you can leverage them to automate your entire infrastructure.

First cab off the rank will be VMware Orchestrator. This is one of my favourite tools because it is priced very competitively at $0. This is likely because it's relatively difficult to use, and why would people pay for something that requires a significant investment in knowledge and time before you can leverage it to do anything remotely useful? But hey, on my last project time was a luxury I had plenty of. So I got in, nutted it out, and presto: a few months later I had even more time on my hands.

One last picture as food for thought on what a typical DevOps toolchain will comprise. These are by no means my favourite tools, merely guidelines on what you should be covering in your automation arsenal.

Feel free to swap out your tools and technologies; as long as you are still covering each of the major processes you should be in a pretty good state.


So if you haven't already read my post on conflict management, it might be worth reading up on that first.


The documentation on WiX is somewhat obfuscated to the normal reader, and not many people have made an attempt to clear up that documentation too well.

There is some pretty good detail out there on WiX in general, but it doesn't go too deep into anything if you're serious about SDLC packaging.


So here I will attempt to detail some of the more important aspects of application packaging, particularly when using WiX to generate MSI packages from a Continuous Integration (CI) build.

These days I typically use Continuous Integration (CI) and Continuous Delivery (CD) for most packaging and deployment activities. Often the CD will deliver into a Platform as a Service (PaaS) cloud. In most instances I will deliver into a VMware cloud architecture, but that is getting a little off topic.

So back to the original topic: conflict management within an SDLC packaging setup.

So, as many of you will know, there are component rules; here is an excerpt on component rules from the godfather himself: http://robmensching.com/blog/posts/2003/10/18/Component-Rules-101

So what Rob states is good information, but rarely does he go into detail on how to achieve some of this with his uber-geek toolset, WiX.

So there are two parts to this puzzle:

  1. Creating component GUIDs
  2. Naming components

So the first part is actually pretty easy, but it's probably the least documented and as such is probably the one that is done incorrectly more often than not.

So using the following command line with heat, you get the following .wxs file generated:

heat dir .\<path to gac files> -gg -dr INSTALLDIR -out .\testSample.wxs -sfrag -sreg

The important part of this call to note is the -gg option. This tells heat to generate component GUIDs immediately. This option is actually not the best choice; my preference is to use the -ag option for components:

heat dir .\<path to gac files> -ag -dr INSTALLDIR -out .\testSample.wxs -sfrag

The -ag option doesn't appear to be all that different, but the results are significantly different.

Here you can see the component Guid attributes are all tagged with "*". This is actually very good, because it tells the linker to deterministically create the component codes at link time.
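A fragment harvested with -ag looks roughly like this (Ids and paths are illustrative, not the actual heat output from my harvest):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Fragment>
    <DirectoryRef Id="INSTALLDIR">
      <!-- Guid="*" defers GUID generation to the linker -->
      <Component Id="cmpFileA" Guid="*">
        <File Id="filFileA" KeyPath="yes" Source="SourceDir\FileA.dll" />
      </Component>
    </DirectoryRef>
  </Fragment>
</Wix>
```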

What is actually happening here is very important to understand, because it is the entire premise conflict management was built on (go figure; I love your work, Rob). During link time the target path of each file is resolved, and then the component codes are generated based on the resolved target path.

In effect this means that a file delivered to a given path will always generate the same component code.

For example.

If you have a file targeted to c:\windows\system32\file_xxxx.dll, then the component code will be generated exactly the same for every heat call you make that delivers a file to that same path. The end result is that if you compiled 20 different applications that all delivered the file c:\windows\system32\file_xxxx.dll, then all 20 packages would have the same component code for that target path.

For those of you who are familiar with standard GUID generation, this would typically never happen, so you might think it a strange thing to do; but it is actually very smart. The entire premise Windows Installer was built on is that files delivered to the same path should use the same component code, to ensure reference counting is put into effect.
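You can mimic the idea with name-based (version 5) UUIDs. This is not the linker's actual namespace or algorithm, just a demonstration that hashing the resolved target path yields a stable GUID, the same one in every package that delivers to that path:

```python
import uuid

NAMESPACE = uuid.NAMESPACE_URL  # stand-in; WiX uses its own namespace GUID

def component_guid(target_path):
    """Derive a deterministic GUID from a (case-normalised) target path."""
    return str(uuid.uuid5(NAMESPACE, target_path.lower()))

a = component_guid(r"c:\windows\system32\file_xxxx.dll")
b = component_guid(r"C:\Windows\System32\file_xxxx.dll")
assert a == b  # same target path => same component code, in any package
print(a)
```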

For those of you who don't have a full grasp of Windows Installer conflict management, the long and the short of this is: if you do this, you will not have issues with applications that share files in common areas. Reference counting ensures applications that share content do not break other shared applications during uninstallation of your products.

So this is a very good thing to be doing. Unfortunately hardly anyone actually does it correctly, or for that matter is even aware of the issue in the first place. Now, this is actually only half of the puzzle. For those of you familiar with Wise Package Studio and InstallShield, this is a pretty big improvement on how those tools handle conflict management. Interestingly enough, it's a pretty simple mod to their code to fix as well (so one wonders why they both fail to deliver such a simple technique in their setup capture toolsets).

So the next part is somewhat more difficult, and requires that you have a grasp of some relatively simple XSLT and a little more heat command-line action.

We now need to call an XSLT to transform the output of heat into something a little more useful. The technique I use here is to turn off unique IDs using the heat -suid option.

The effect of using non-unique names is that the ComponentId and FileId use the name of the file that has just been harvested (as shown above). In an application which is not too complicated this is actually a simple fix. It does, however, become an issue when you have the same filename in multiple WiX harvested fragments.

For example, if you have:

DirectoryA with FileA.dll within it.

DirectoryB with FileA.dll within it.

and each harvest is called from a separate heat call, the above method creates duplicate ComponentId and FileId attributes within your harvested WiX fragments. Obviously this is bad and causes the compiler to raise errors, so this fix alone is not enough. This is where the XSLT comes into play, allowing you to correct the issue.

So, using an XSLT, I prefix each Id from a heat call with the name of the harvested content; in the case shown above the results would be as below.



I use this as a base XSLT for every heat call I make.
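The original XSLT was shown as a screenshot, so here is a minimal sketch of the idea instead: an identity transform that prefixes Component/@Id and File/@Id with a harvest name. The parameter default stands in for however you inject the name, and the matches assume the WiX v3 namespace:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:wix="http://schemas.microsoft.com/wix/2006/wi">

  <xsl:param name="HarvestName" select="'DirectoryA'"/>

  <!-- identity: copy everything through unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>

  <!-- prefix component and file ids with the harvest name -->
  <xsl:template match="wix:Component/@Id | wix:File/@Id">
    <xsl:attribute name="Id">
      <xsl:value-of select="concat($HarvestName, '_', .)"/>
    </xsl:attribute>
  </xsl:template>
</xsl:stylesheet>
```

Note that any ComponentRef elements referencing the harvested components need the same prefixing applied, otherwise the references will no longer resolve.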

To call the XSLT you simply add the -t option to your heat call:

heat dir .\<path to gac files> -ag -dr INSTALLDIR -out .\testSample.wxs -sfrag -t <path to xslt file>

The result of adding the XSLT to your heat call is that each non-unique filename is prefixed with the name of the heat call used.

Components and files each get a prefix derived from the name of the heat call that harvested them.
This naming convention makes it very simple to identify files/components from each harvest (it looks pretty, particularly when looking in from referencing tables).

The end result is this.

So there you have it: this is how to conflict-manage WiX installations. This is also a prerequisite phase for patching, which I might follow up on sometime later.

PS: please keep commenting and I will keep up the posting.

Hi all,

Yes, I know it has been years since I last posted. Kids, family and work have taken their toll on the old blog.

But I have a few friends who have revitalised my enthusiasm to get this blog back on track. Edwin Ashdown is a master build engineer who could easily take a chair alongside the well-known build experts of the world. Edwin is currently releasing a beta for online Continuous Integration, a topic which coincides with the reason for this post.

Those of you who have followed me over the years will know that my past exploits were all based around deployment, typically within the Windows Installer realm. For the most part this was about 3rd-party products being released and silenced within the corporate environment, a role commonly referred to as "application repackaging".

Well, during my quiet bout I completely turned this role on its head, and although I mainly support the corporate environment, I do dabble with delivery to the end-user community as well, primarily in SDLC areas.

But the reason for this post is how I have transitioned from your everyday packager to what I have coined an "Enterprise Deployment Architect".

For those of you who are in the trenches packaging for the corporate enterprise and looking for your next step, look no further, for this post should hopefully put you on that path.

So exactly what is an "Enterprise Deployment Architect" (EDA)? Well, for the most part it doesn't formally exist yet. So let's take a look at some sibling roles and determine how they fit into an organisation.

Application Architect

The application architect is a high-end developer who modularises an application so that it is scalable and available. Typically the application architect follows patterns and practices of development. This is something I find very interesting.

Patterns and practices are a common theme in most development groups. Now let's take a look at deployment: hands up anyone who can name a deployment pattern off the cuff! I bet none of you put your hands up, and the reason is they just don't exist. I hope to change that by floating a few deployment patterns I have come up with to handle large-scale integrated deployments into massive SDLC environments.

Solution Architect

The Solutions Architect is responsible for the development of the overall vision that underlies the projected solution and transforms that vision through execution into the solution. The Solutions Architect becomes involved with a project at the time of inception and is involved in the Functional analysis (FA) of developing the initial requirements. They then remain involved throughout the balance of the project.

Again, there are pretty solid patterns and guidelines here (go figure, there seems to be a pattern of using patterns).

Enterprise Architect

The enterprise architect is a master of the corporate environment: an expert in leadership and a subject matter expert (SME). The role covers a holistic view of the organisation's strategy, processes, information, and information technology assets.

The enterprise architect works across the enterprise to drive common approaches and to expose information assets and processes across the enterprise. These people are instrumental in making multiple solutions scale within and external to the corporate environment.

So this leads into what I now call the "Enterprise Deployment Architect".

Enterprise Deployment Architect

To date this role doesn't exist; if it does, it is pretty poorly documented, or at least poorly recognised from where I stand. If we take a look at the enterprise architect, these fellows drive the business to success through common development concepts and standard OS architectures.

So am I dreaming, or is there really an opening for this role? Ask yourself a few of these questions that I asked my current employer:

Q: How many development teams are there?

A: 20 or so

Q: How many dev / test environments do we have ?

A: between 100-150 environments (ouch)

Q: Do they all develop with standard Continuous Integration practices?

A: 1 team does ? …….WTH…. 1 team does?

Q: Do they all package their software the same way?

A: Deployment receives flat file, scripted, MSI, self-extracting exe, Inno Setup and xcopy deployments!

Surely this was wrong: there are 20 teams and only a single team is following standard dev practices. (Yes, well done Adrian, you know I am talking about you! Love your work.)

Q: So how does integration take place in a corporation with over 100 environments and 1000 servers in dev/test?

A: Not very well.

OK, so by now you're getting the picture. When it comes to deployment there are virtually no patterns and limited practices; there is no commonality, and the systems are getting more and more complicated without anyone trying to do something about the integration of the corporation.

These reasons alone are why I believe there needs to be a driver to get this role introduced, and my following blogs are going to explain what exactly that role should cover. And, more importantly, create some patterns and practices around how to achieve it in a large-scale environment with numerous environments and connected solutions.

So, for those of you who are specialising in server-based deployment and application packaging and looking for the next career move, an "Enterprise Deployment Architect" may very well be that next step.

So keep posted, as I will follow up with some of these key concepts across a multitude of environments and organisational layouts:

  • Continuous Integration
  • Continuous Delivery
  • Referential Integrity
  • Dynamic Deployment
  • Database / Schema Versioning
  • IIS metabase
  • Websphere automation
  • SSRS, SSIS, SSAS, SSMS deployment
  • Orchestration (Opalis, VCO)
  • Cloud provisioning (VCD, Lab Manager)

Anyone keen on developing this role feel free to comment and drop that “Enterprise Deployment Architect” role whenever you can.


Useful Custom Actions

April 11, 2009

I have been using these for some time and only recently realised how often that was.
I thought they may be useful to some of you out there.
A simple little routine to write to the MSI log:
function WriteToMsiLog(strMessage)
   ' INSTALLMESSAGE_INFO: write an informational record to the MSI log
   Const msiMessageTypeInfo = &H04000000
   Set objMessage = Session.Installer.CreateRecord(1)
   objMessage.StringData(0) = "Log: [1]"   ' format template
   objMessage.StringData(1) = strMessage   ' the message itself
   ' If the CUSTDEBUG property is set, also pop the message for debugging
   If Session.Property("CUSTDEBUG") <> "" Then MsgBox "MESSAGE: " & strMessage
   Session.Message msiMessageTypeInfo, objMessage
end function
This is a nice easy way to get some output into your logs. As most of you know, I spend more time in the repackaging area as opposed to SDLC solutions, so VBScript CA’s are more prominent in my current work.
Note: if you pass the CUSTDEBUG property during installation you will get debugging feedback messages, which are useful when testing new VBScript CustomActions.
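As a quick sketch of usage (the action name CA_ConfigureApp and the messages are just made-up examples, not from any real package), calling the routine from an immediate CustomAction looks like this:

```vbscript
' Log a marker so this CA is easy to find in the MSI log
WriteToMsiLog "Entering CA_ConfigureApp"

' Log a resolved property value for troubleshooting
WriteToMsiLog "INSTALLDIR = " & Session.Property("INSTALLDIR")
```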
Next, how to format records such as [#FileKey] and [$ComponentKey]:
function FormatRecord(strRecordName)
  on error resume next
  ' Wrap the formatted-text expression (e.g. "[#FileKey]") in a record
  set objInstaller = Session.Installer
  set objRecord = objInstaller.CreateRecord(1)
  objRecord.StringData(0) = strRecordName
  ' Session.FormatRecord resolves the expression to its runtime value
  strFormattedRecord = Session.FormatRecord(objRecord)
  WriteToMsiLog "MESSAGE: Formatting record key " & strRecordName & vbcr & "Result = " & strFormattedRecord
  FormatRecord = strFormattedRecord
  set objRecord = nothing
end function
This is a handy way to resolve those MSI variables. For those of you setting permissions with subinacl or other third-party tools (i.e. not the LockPermissions table), you may well need this to ensure your packages stay dynamic. This is a particularly common issue amongst repackagers, as more often than not a static path is acceptable in their CA’s. However, in the unlikely event that someone changes the installation path, those repackagers would fall victim to failed CA’s, as the paths referenced in the CA’s would not exist.
Using a formatted record and passing that record value into the deferred phase via CustomActionData will ensure your packages maintain support for dynamic installation (i.e. variable installation directories).
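A minimal sketch of that hand-off (the deferred action name SetPerms and the file key MyConfig.xml are hypothetical examples) might look like this:

```vbscript
' Immediate phase: resolve the runtime path of a file by its File table key,
' then stage it for the deferred action named "SetPerms" via a property of the same name
strTargetPath = FormatRecord("[#MyConfig.xml]")
Session.Property("SetPerms") = strTargetPath

' Deferred phase (inside the CA named "SetPerms"): catch the staged value
strTargetPath = Session.Property("CustomActionData")
```

The CustomActionData mechanism itself is covered in detail in the December 2008 post below.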


December 18, 2008

This topic is a little overdue; I started something and never finished it. The last couple of posts were a lead-up to this.
I discussed a little about the client / server sides of the Windows Installer service and the related security concepts around the Immediate / Deferred phases. I bring this up as, time and time again, I see installers which blatantly ignore these issues. I see packages which don’t take into account the security concepts of the Installer service. More often than not this is due to a complete lack of clear and readable information on the topic.
I am going to attempt to resolve that issue here and now, but bear with me: there is a lot happening and it can be a little difficult to grasp.
One of the key points I mentioned in the previous post was that you should only modify the system using a Deferred CustomAction. This is what I like to teach as the golden rule of packaging. If you’re going to write to the registry, or edit, copy or delete files, you need to do it during the deferred phase, because this is the only time you have access to the elevated server-side portion of the Windows Installer service. This is highly important if you ever intend to deploy your packages to a locked-down environment or to Windows Vista or a later OS.
Let’s run through a few examples of good and bad configuration.
1) Editing an XML that exists prior to installation.
Let’s assume we already have a file installed on the machine, such as:
c:\program files\testapplication\myXmlFile.xml
We now want to edit this XML file, and for the sake of this demonstration we will use a simple VB script to do the job. (Yes, there are better ways, but I want to keep this simple for the demonstration.)
Our VB script may be something very simple, such as editing an attribute in the XML, say a server name:
set objNode = objXmlDocument.SelectSingleNode("//Environment/ServerDetails")
objNode.SetAttribute "ServerName", Session.Property("ServerName")
We pop in the Windows Installer property [ServerName].
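Fleshed out into a self-contained CA body (a sketch only; the file path is the example path from above, and error handling is omitted for brevity), the script might look like:

```vbscript
' Load the existing XML file, update the attribute, and save it back
Set objXmlDocument = CreateObject("Msxml2.DOMDocument")
objXmlDocument.Load "c:\program files\testapplication\myXmlFile.xml"

Set objNode = objXmlDocument.SelectSingleNode("//Environment/ServerDetails")
objNode.SetAttribute "ServerName", Session.Property("ServerName")

objXmlDocument.Save "c:\program files\testapplication\myXmlFile.xml"
```

Note that run in the deferred phase, Session.Property("ServerName") would return an empty string, which is exactly the problem explored below.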
Now, as this file already exists on the machine, some people may argue an Immediate CustomAction would suffice to edit it. Interestingly enough, run as an immediate CustomAction this may well work in many cases. Here’s why:
a) the file exists already so no issues with the file not being present during installation
b) vb script is ok to edit the file
c) ServerName property is present during immediate phase
Ok, so looking at this, chances are it could work. Running a test installation potentially works as well. So what’s the issue, you ask?
Now let’s throw a little complexity into the cycle. Try installing this under a user account with limited access, in particular no access to edit c:\program files\*. We run the same previously successful installation under a locked-down user account and the result is: FAIL.
The locked-down user no longer has access to edit the file, so the security permissions on the machine deny access to it, causing the CustomAction to fail, initiating a rollback, and resulting in a failed installation.
So what do we do now?
Ok, let’s try the same scenario in the deferred phase using an elevated CustomAction.
a) the file exists already so no issues with the file not being present during installation
b) vb script is ok to edit the file
c) user has access to file as running in local system elevated context
d) ServerName property is not present during deferred phase
The result: the file is edited “successfully”, dropping a blank value into the ServerName attribute of the XML.
So what’s the go now? Neither solution works?
Immediate phase fails due to access rights.
Deferred phase fails due to properties not being available.
So now we are in a catch-22 situation; how do we work around this? Remember earlier I stated the golden rule: editing the system must be done during the deferred phase. Was I wrong about this, or is there something missing from the equation?
The solution is a concept referred to as CustomActionData. CustomActionData is a special property used to pass property values into the deferred phase. The idea is a simple throw-and-catch scenario, where the data you need is thrown across from the Immediate phase into the Deferred phase. Initially this process seems a little complicated, but once you get your head around it, like everything, it becomes very simple.
The requirements are as follows.
1) we need to edit the system only during the deferred phase
2) we need properties which are only available during the immediate phase, but must use them during the deferred phase.
So how do we achieve this?
The process is like this.
1) Collect the properties required during the immediate phase
2) Throw those properties across into the deferred phase
3) Catch the properties in the deferred phase
4) Run our CustomAction to edit XML using the catch results from previous steps
Seems pretty simple when you put it like that, huh?
Technical Implementation
1 & 2) Use a Set-Property (Type 51) CustomAction to set a property that will be accessed during the deferred phase.
For example, assume we have a PROPERTY called SERVERNAME with a value of TEST.
The Set-Property CA creates a new property called DEFERREDVALUE with a value of [SERVERNAME], which resolves to TEST. The net result: DEFERREDVALUE = TEST.
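In table terms, the throw step might look like this (a hypothetical row using the example names above; 51 is the set-property CustomAction type, with the Source column naming the property to set and the Target column holding the formatted value):

```
CustomAction table
Action            Type  Source         Target
SetDeferredValue    51  DEFERREDVALUE  [SERVERNAME]
```

The deferred script CustomAction itself must then be named DEFERREDVALUE for the catch to work, as explained next.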
Now we have completed the first portion, throwing values across into the deferred phase. So how do we catch those values on the other side? The answer:
We now need to access the CustomActionData property. This is a special property which gains its value based on the name of the CustomAction running in the deferred phase.
To access the CustomActionData property we do this.
strServerName = session.property("CustomActionData")
msgbox strServerName 
This took me a little while to come to grips with, as the documentation in the SDK is pretty light. You’re probably wondering at this point how CustomActionData contains the correct property; there are plenty of properties in default packages, so how does CustomActionData access the correct one? The trick is the name of your CustomAction.
If the CustomAction we set up in the deferred phase is called DEFERREDVALUE, then the CustomActionData property will contain the value of the property DEFERREDVALUE, which in this case = TEST.
If the CustomAction is instead called SERVERNAME, then CustomActionData will = the value of SERVERNAME.
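When more than one value needs to cross over, a common pattern (a sketch, not from this post; the semicolon delimiter is an arbitrary choice) is to pack the values into one string and split them on the other side:

```vbscript
' Immediate phase: pack two values into the property named after the deferred CA
Session.Property("DEFERREDVALUE") = Session.Property("SERVERNAME") & ";" & Session.Property("INSTALLDIR")

' Deferred phase: unpack them from CustomActionData
arrValues = Split(Session.Property("CustomActionData"), ";")
strServerName = arrValues(0)
strInstallDir = arrValues(1)
```

Just make sure the delimiter you pick can never appear inside the values themselves.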
I know this seems like a multitude of additional steps when one could argue that an immediate CustomAction after InstallFinalize would suffice. But this is the only way to obtain access to the elevated context of the Windows Installer server process, which in turn means it is the only way to successfully deploy to a locked-down environment.
So, to cut a long story short: if you’re currently writing CA’s which edit the system during the immediate phase, you need to change your practice and implement the CustomActionData solution.
Yeah, sure, it’s a lot more difficult; it involves a few more steps and is a little fiddly. But the result is that your installation should work in any state of lock-down.
This is quite a bit to chew over, so have a read, let me know if it makes any sense at all, and I will follow up with some real examples of how this can be implemented. (I don’t have an editor with me at the moment to create samples for you.)
Please give me some feedback on this one, as it’s hard to know if I covered it clearly; it’s a pretty muddy topic to tackle without a whiteboard.

Sequencing PART II

October 14, 2008

Hi all, it’s been a while since my last post as I have been pretty busy with RL lately. A few people have been asking why I haven’t been posting; although there are many reasons, I won’t bore you with the details.

Anyway, this latest topic is something that so many people misunderstand, misuse, or simply don’t care about, which in my mind is all the more reason to write about it.

This is heavily related to my first post on installation sequences, so you should probably have a quick look at that first. I left out some important bits of information there that I just assumed everyone knew. Interestingly enough, the more packagers I speak to lately, the more I realise the complete lack of understanding that is out there. Even some of the major players and training institutions are getting this wrong, but I won’t go into who, where and why; you can work that out for yourselves. (I hope I don’t goof the explanation and do the same.)

I have updated the last diagram I drew to try to clarify these next explanations.


From the diagram above it’s a little difficult to see how this relates back to the tables, so bear with me while I try to explain it all. As we already know, under standard installation conditions there are two tables which are processed: the InstallUISequence and InstallExecuteSequence tables. In the diagram above, the green sections representing the Acquisition phase consist of those two tables. While processing these two tables, the installer service is acquiring information about what actions need to be performed during installation (hence the name).

The Execution phase on the right, shown in purple, has two sub-phases known as Deferred and Commit/Rollback. This is explained in more detail in my previous blog, Installation Sequences. What I am going to cover in this blog are the smaller items at the bottom of the picture, which represent the actual Windows Installer service and how things are processed during the Acquisition and Execution phases.

There are two processes: CLIENT and SERVER. The CLIENT process is protected and has limited access to the Windows Installer database (MSI) and the underlying operating system. The CLIENT portion also runs programs which are only in memory for the duration of a task and are then terminated.

The SERVER process usually consists of processes which don’t terminate, the most important of which is the actual Windows Installer service. The SERVER process needs full access to the operating system to allow installation of the application and its resources. For this reason a portion of the installation needs to be elevated, but only for as long as installing the application requires. In order to optimise security and limit the time the installation has full rights, the Execution phase was born.

I often hear people ask why this needs to be so difficult. The answer is that, to install successfully on the myriad of platforms and user-rights combinations, coupled with locked-down environments, the sequencing needs to be complex enough to accommodate all of these requirements whilst maintaining optimum security.

So let’s run through an installation again, this time taking notice of the processes and how they affect the installation.

Let’s assume we double-click an MSI, running the standard installation. A CLIENT and a SERVER process are launched. We first start to process the actions in the InstallUISequence; those actions are passed through to the CLIENT process. This means that during the UI phase we have limited access to the underlying operating system. During this CLIENT-side processing we also do not have full access to the Windows Installer database.

We then start to process the InstallExecuteSequence. During this part of the acquisition phase we have access to both the CLIENT and SERVER processes (this can be verified by checking an installer log and looking for the (c) and (s) entries). As such we have both read and write access to the Windows Installer database, however we still have only limited access to the underlying operating system. So, according to my diagram above, the yellow section has access to both the pink and purple sections in the upper row of the process portion of the diagram.

When we reach the execution phase (also commonly referred to as Deferred), a very important thing happens: two additional processes are launched, running in user and system context (or service-account context) respectively. During the deferred phase, the actions that were written to the installation script by the acquisition phase are passed to one of these two newly created processes. Because these two processes are launched here, this is how we gain full access and elevation to the underlying operating system.

Now the most important part to understand here is this.

  1. We are now disconnected from the MSI and running as either the user or SYSTEM.
  2. We now have full access to the underlying operating system.

This is a very important and often overlooked distinction. Because the two additional processes are launched at the start of the deferred phase, it is the only location where we have true elevation capabilities, and as such it is also the only place where we have the capability to install an application in a locked-down environment. It is also the reason you cannot access the session object during the deferred phase.

This is why you hear so many people say that when you modify the system you must use a deferred CA: the deferred phase is the only phase that has access to the Windows Installer service account’s elevated context.

To summarise all of this:

  1. InstallUISequence runs in CLIENT context
  2. InstallExecuteSequence runs in both CLIENT and SERVER context
  3. Deferred runs as USER or LOCAL SYSTEM
  4. Deferred is the only area which has FULL elevation context
  5. Immediate Custom actions have access to the session object
  6. Deferred Custom actions run a script which is disconnected from the MSI, meaning we lose access to the session object.
  7. This is complex because it needs to cater for all scenarios whilst maintaining least privilege and maximum security
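To make points 3 to 6 concrete, here is a sketch (mine, not from the post) of what a deferred VBScript CA can still see; as far as I am aware only a handful of pass-through properties, such as CustomActionData, ProductCode and UserSID, remain readable once disconnected from the live session:

```vbscript
' Inside a DEFERRED VBScript CustomAction:

strData = Session.Property("CustomActionData")  ' works: the value staged during the immediate phase
strProd = Session.Property("ProductCode")       ' works: one of the few pass-through properties

strDir  = Session.Property("INSTALLDIR")        ' returns "": ordinary properties are no longer available
```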

I was going to cover Custom Actions and their placement within these sequences, but I think I will leave that for the next post, as it’s getting late where I am.

Please feel free to ask any questions you have on this or post comments, as this will inspire me to post more often.

