Orchestration: a first glance

September 30, 2012


Orchestration

So what exactly is orchestration? It is something I have been delving into pretty heavily of late. There is a multitude of orchestration tools on the market today, but I am going to start with one that is 100% free and not too dissimilar from other leading names on the market. Orchestration can work at the technical layer and/or the human layer to handle Business Process Management (BPM).

VMware Orchestrator, as the name implies, is primarily designed to orchestrate virtual machine infrastructure in VMware vSphere, along with many other products in the VMware and Microsoft stacks.

Most orchestration tools follow a pretty similar pattern: a visual workflow of actions / activities that perform a task. Each action / activity may require input parameters and has the potential to pass output parameters describing the activity that just took place.

Sample Workflow

 

Each box on screen represents an action / activity. Let's take a very simple action that could be performed by a VMware administrator: say we wish to power on a machine. Before we can power on a machine via the console we need to identify which machine we wish to power on. Once we have selected our machine in the console we can right-click that machine and power it on. An orchestration tool can perform all the same actions that we do via the console; however, we need to program each step as we go along. Once we have powered that machine on via the console we need to wait for the power-on action to complete before we can use the machine.

If we were to do this in a workflow we would need to do the same, except each step would have to be pre-defined in code first. The diagram above shows multiple boxes, each connected with links to other actions. Each of those actions may define different input / output parameters which are expected to be provided to the action prior to its execution, and subsequently outputs are passed on to succeeding actions as they are executed.

Step 1: Understanding Actions / Activities

So let's step away from the VMware realm for a second and look at a very simple script to create a user. Then we will define how this could be set up as an action / activity. (I keep using the word activity as Edwin and I are focusing quite heavily on Windows Workflow Foundation, within Microsoft Team Build 2010 inside Team Foundation Server. I will follow up with more of this in the next post.)

So back to the task at hand. Let's say we wanted to orchestrate the creation of a user.

We could create a simple script such as:

Set objContainer = GetObject("LDAP://OU=users,DC=installpac,DC=com")

Set objUser = objContainer.Create("user", "CN=john_mcfadyen")

objUser.Put "sAMAccountName", "johnmcfadyen"

objUser.SetInfo

objUser.SetPassword "secret"

Set objUser = Nothing

This script does not really promote re-use and would be better written as a function / sub, such as:

Function CreateUser(ContainerName, UserName, Password)

    On Error Resume Next

    Set objContainer = GetObject("LDAP://" & ContainerName)

    Set objUser = objContainer.Create("user", "CN=" & UserName)

    objUser.Put "sAMAccountName", UserName

    objUser.SetInfo

    objUser.SetPassword Password

    If Err.Number = 0 Then
        CreateUser = "success"
    Else
        CreateUser = "fail"
    End If

    Set objUser = Nothing

End Function

Now that the code is more dynamic and can accept input parameters, it is suitable for use as an orchestrated action / activity. Our function has 3 input parameters:

  1. ContainerName
  2. UserName
  3. Password
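As a rough sketch of the same idea (in Python rather than VBScript, and with illustrative names rather than a real vCO API): the action takes named inputs and hands a success / fail output to whatever action follows it.

```python
def create_user_action(container_name, user_name, password):
    """Hypothetical action wrapper: named inputs in, a status output back out."""
    try:
        # placeholder for the real directory call the VBScript above makes
        if not (container_name and user_name and password):
            raise ValueError("missing input parameter")
        return "success"
    except ValueError:
        return "fail"

# succeeding actions in the workflow would branch on this output parameter
print(create_user_action("OU=users,DC=installpac,DC=com", "john_mcfadyen", "secret"))
```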

 

A Sample Action in VCO 

The following screenshot shows an action written in VCO requesting two input parameters.


Using actions in Workflows

So once you have your base actions written you can start gluing them all together within your workflow. In my case I wanted to see if I could get vCO to help me out with some mass production of servers. For those of you familiar with vCloud Director (VCD), I wanted to create a vApp (or group of servers) without having to pay for VCD. I actually built all this up before I even realised there was such a system as VCD. That's what you get for not keeping up with current trends. So I ended up wanting the following steps to be done:

  • Get a list of machines to clone
  • Clone a machine from an existing template
  • Allocate any additional disk required
  • Allocate any additional networks required
  • Allocate machine on a physical host
  • Sysprep the machine and assign networking detail
  • Join appropriate domain
  • Copy software to machine
  • Install software
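The steps above can be sketched as a simple driver loop (in Python here rather than vCO's JavaScript); every function name is a hypothetical stand-in for one of the workflow's actions.

```python
# Each stub below stands in for an orchestrated action in the workflow.
def clone_from_template(name): return f"{name}: cloned"
def add_disks(name): return f"{name}: disks added"
def add_networks(name): return f"{name}: networks added"
def place_on_host(name): return f"{name}: placed on host"
def sysprep(name): return f"{name}: sysprepped"
def join_domain(name): return f"{name}: joined domain"
def install_software(name): return f"{name}: software installed"

STEPS = [clone_from_template, add_disks, add_networks, place_on_host,
         sysprep, join_domain, install_software]

def build_machines(names):
    log = []
    for name in names:       # "get a list of machines to clone"
        for step in STEPS:   # run each action in order for that machine
            log.append(step(name))
    return log

print(build_machines(["web01", "db01"])[-1])  # db01: software installed
```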

Consuming Data

So the next trick was trying to consume some data to perform the actions. On my initial inspection I thought I would be able to easily consume some XML via the XML plugin, but I quickly came to realise it would be much harder to do within a workflow. Under most programmatic scenarios, consuming the content of an XML file would be relatively simple. But one of the issues with workflow-based systems is that your workflow needs to consume objects that are recognised by the workflow system. As such I quickly realised that I would have to encapsulate the data stream within my XML in another object that was recognised by the workflow solution.

VMware Orchestrator offers a construct for doing exactly that, known as a configuration template. So my task at hand was to wrap my XML data into this orchestration construct.


So I wrote a few activities to parse my XML file and load the data into configuration templates.

Once the data is in configuration templates, you simply need to loop through it and perform the desired actions against each machine.
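The parsing step looks something like this sketch. The XML layout and field names here are assumptions for illustration; plain dictionaries stand in for the configuration template entries.

```python
import xml.etree.ElementTree as ET

# Assumed layout for the machine-list XML (illustrative only).
SAMPLE = """
<machines>
  <machine name="web01" template="win2008-base" disks="2" nics="1"/>
  <machine name="db01" template="win2008-base" disks="4" nics="2"/>
</machines>
"""

def load_config_templates(xml_text):
    """Parse the machine list into dicts standing in for configuration templates."""
    root = ET.fromstring(xml_text)
    return [dict(m.attrib) for m in root.findall("machine")]

templates = load_config_templates(SAMPLE)
print(templates[0]["name"])  # web01
```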

Looping through objects

Once you have your dataset in a configuration template it is a simple matter of looping through each item in the configuration template.

If I was to do this again, chances are I would do things differently, perhaps by leveraging AMQP or SOAP calls directly into vCO. This would allow me to monitor each aspect of the deployment on a case-by-case basis and subsequently determine steps to take after each action was validated. But for the sake of this demonstration a simple loop will do.

Here I pass my array list of configuration item content into the loop, process the actions against the current item, and then it's just a matter of repeating the steps against each new object in the array list. Here you can see I am cloning a machine from a template while waiting for the cloning action to complete.
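The clone-then-wait pattern can be sketched as below. The task object here is a fake that completes after a couple of polls; in vCO you would poll the real cloning task instead.

```python
import itertools

def clone_task(name):
    """Fake async clone task: reports 'running' twice, then 'done' forever."""
    return itertools.chain(["running", "running"], itertools.repeat("done"))

def clone_and_wait(items):
    finished = []
    for item in items:
        task = clone_task(item["name"])  # kick off the clone action
        while next(task) != "done":      # wait for completion before moving on
            pass
        finished.append(item["name"])
    return finished

print(clone_and_wait([{"name": "web01"}, {"name": "db01"}]))  # ['web01', 'db01']
```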

Creating machine template types

One issue I found with cloning machines from templates is that not all machines have the same requirements. Some need one NIC, some need multiple NICs; some need one disk, some need multiple. So I created the concept of a machine template. A machine template allowed me to add disk / network items to each cloned instance after the cloning process took place.

The act of cloning a machine gives you whatever disk / network configuration was available in the VM template you just cloned. The new machine template allowed me to modify the base cloned VM without the need to create multiple VM templates, thereby avoiding wasted disk space.

As you can see from the diagram, this template allows me to specify the disk / network allocations for each machine template type. When cloning a VM you specify which machine template to apply after the cloning process takes place.

This allowed me to use a minimum number of VM templates with an unlimited number of machine templates to be applied over the top. In addition to the machine template I then kick-off an unattended build script which would provision software to the VM after the initial cloning process and machine templates were applied.
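The machine template idea can be sketched as below: one base VM template, with per-role disk / NIC additions applied after the clone. The data structures are assumptions for illustration.

```python
# Hypothetical machine templates: extra resources layered over a base clone.
MACHINE_TEMPLATES = {
    "web": {"extra_disks": [40], "extra_nics": 1},
    "db": {"extra_disks": [100, 100], "extra_nics": 2},
}

def apply_machine_template(vm, template_name):
    """Modify the freshly cloned VM with the chosen machine template."""
    tpl = MACHINE_TEMPLATES[template_name]
    vm["disks"] += tpl["extra_disks"]  # add disks to the cloned VM
    vm["nics"] += tpl["extra_nics"]    # add network adapters
    return vm

# as cloned from the one base VM template
base_clone = {"name": "db01", "disks": [60], "nics": 1}
print(apply_machine_template(base_clone, "db"))
```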

 

Customisation Spec

VMware has a concept of a customisation spec, but it falls pretty short when you are attempting to sysprep a number of machines. As I was in an environment that had multiple domains, I found the customisation spec cumbersome to configure, so I generated a sysprep file by parsing the XML content instead.

The result was a dynamic sysprep script which allowed delivery into any domain combination. This was basically a replacement for the inbuilt VMware customisation spec.
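Generating the file per machine is straightforward once the XML values are in hand. The sketch below emits an illustrative subset of a sysprep answer file; it is not a complete sysprep.inf, and the helper name is hypothetical.

```python
def build_sysprep(machine_name, domain):
    """Render a minimal, illustrative sysprep-style answer file per machine."""
    return "\n".join([
        "[Unattended]",
        "OemSkipEula=Yes",
        "[UserData]",
        f"ComputerName={machine_name}",
        "[Identification]",
        f"JoinDomain={domain}",  # the per-domain value parsed from the XML
    ])

print(build_sysprep("web01", "corp.example.com"))
```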

 

The Results


A running execution within a VMware environment could deliver an unlimited number of machines in minutes.

Improvements for the next time around

I think there are a number of improvements that could easily be made here, but this is a simple demonstration of how you can orchestrate your way into mass deployment of environments / systems without spending huge amounts on a full cloud infrastructure. This is not intended to compete with a full-blown cloud infrastructure, but it does help you achieve similar results at a fraction of the cost. It is obviously nowhere near as maintainable, but I built this well and truly before I knew cloud concepts existed.

Currently I am working on a way to link all of this to TFS. Yes, I understand TFS already has Lab Manager in place, but not everyone runs on Hyper-V, so in my case that system is pretty much useless to our organisation.

So keep an eye out, as we plan to release a similar concept linking VMware vCenter instead of Hyper-V into TFS 2010 / 2012, for those of you who are in the same situation as we were.


Introducing DevOps

September 30, 2012


So lately Edwin Ashdown and I have been doing a stack of work with ALM and DevOps.

For those of you in the packaging space that haven't heard of DevOps, it's probably about time you hit Google and did a little research. DevOps is all about bridging the gaps between developers and operations teams, along with a bunch of other ideas and initiatives.

So typically Developers and Operations / Infrastructure people speak different languages and they both have fairly different parts of the SDLC puzzle to look after. For the last 10 years I have sat firmly between both groups and handled the translation of what each group is discussing. This has made my transition to DevOps relatively painless.

Here's a typical SDLC cycle. I would give credit to whoever drew it, but the truth is I have no idea where I got it from (so I apologise in advance to the owner).

 

So DevOps is a buzzword that's being thrown around a lot of late. The short story is it's an attempt to bridge the divide between:

  • Developer and Operations 
  • ITIL vs delivery
  • Security vs Productivity
  • Orchestration vs SneakerNet
  • Self Service vs no service

So those of you that know me probably realise I like digging pretty deep into the technical realm, and DevOps has been no exception to this rule. With the advent of cloud technologies, SaaS, PaaS, IaaS and DaaS are now easily within the reach of your typical enterprise IT organisation. Interestingly, although most of this is so much easier to do than it was 5 years ago, many companies are still struggling to take that leap into DevOps or cloud technologies.

From my observations this seems to be not for technical reasons, but more because the culture of an organisation is not ready to accept such a significant change. As such, introducing a lot of these systems has become an issue of social acceptance.

Unfortunately, for those of you who are in a similar situation, I don't have the answer to change that. So instead I will head back down the technical path, and hopefully bringing some understanding of these new technologies and concepts will help you ease these initiatives into your organisation.

Developer and Operations

So unless you have amazing devs in your company like Oliver Reeves or Matthew Erbs, chances are extracting deployment-related information from them is like pulling hen's teeth. What DevOps brings to the table here is not new; it has just been re-badged and tweaked a little bit:

  • Continuous Integration
  • Continuous Deployment
  • Continuous Testing

By strictly enforcing the above, DevOps removes much of the communication overhead between developers and infrastructure. This is most often handled by taking environment delivery into the cloud or into orchestrated delivery.

 

ITIL vs delivery

So I think everyone will agree ITIL is a necessary evil; however, in some companies ITIL is so heavily implemented that it is crippling the organisation. In a recent company I was with, we couldn't even spool up a single server without going through a 3-month design phase, implementation plans, release schedules, and the list goes on. The effect this had on the organisation was that nobody would bother with any form of innovation, as the layers of cr@p you had to wade through were just not worth the investment. As such, the infrastructure was nearly all at end of life / end of support.

Current cloud technologies deal with this ITIL nightmare through technology, placing infrastructure requests in the hands of the user via self-service portals, with business process management (BPM) handling the ITIL workflow with ease. I will follow up with some posts on how VMware vCloud Director / VMware Request Manager deal with the workflow aspects.

I am sure you will all agree that making it easier for the end user to get what they want is on all of our agendas. And yes, these kinds of tools also handle VM sprawl, so things don't spiral out of control when it becomes so easy that people go a little crazy.

Security vs Productivity

Is there such a thing as too much security? I look at this one with a pretty simple rule of thumb: when security is so tight that it starts costing the organisation huge amounts of money by blocking productivity, that's when there is too much security. I have been lucky enough to be in one of those companies (lucky me).

Being an IT technical resource that couldn't even run a Google search if it had "SQL" or "C#" in it made life pretty difficult. Searching for the syntax of a command went from seconds to hours. To me that's just plain ridiculous, but hey, I am pretty sure there is a good reason to block productivity and increase company expenditure somewhere. I bet the bean counters are ecstatic about how security teams are allowed to reduce the bottom line (*grin*).

Tools such as Microsoft's System Center Configuration Manager (SCCM), coupled with OSD, deliver dynamic operating system deployment with less cumbersome security models. Microsoft has kindly exposed the API to make automation of this toolset a breeze (more on this later).

Orchestration vs SneakerNet

So orchestration is one of my favourite pastimes at the moment; I pretty much gave up my packaging life (where possible) to delve into orchestration. If, like me, you sit part way between developer and operations, then orchestration should definitely be something you are looking at.

Orchestration tools are typically workflow-based systems that are used to "orchestrate" other tools into doing your bidding. There are a stack of tools out there that are designed expressly with automation of SDLC systems in mind. My weapon of choice in this arena is VMware Orchestrator (it is a bit tricky to get into, but the price is perfect at $0). This is something your bean counter friends will likely approve with limited hesitation. Another good one is Microsoft Opalis, which has recently been re-badged as System Center Orchestrator 2012.

So what's on offer from an orchestration tool? The answer to this is very simple:

  • Automation
  • Automation
  • Automation

Getting back to the SDLC side of things, an orchestration tool makes continuous deployment a reality. Take this scenario as a typical orchestration benefit:

  • Developer checks in code
  • Continuous integration compiles code
  • Unit Testing is done
  • A call made to orchestration tool to provision infrastructure
  • A call made to orchestration tool to configure distribution tool such as SCCM
  • A call made to orchestration tool to instruct distribution tool to advertise products to newly provisioned environment
  • Build validation testing is run
  • A call made to orchestration tool to tear down the environment

Self Service vs no service

A typical scenario in the SDLC world is this: testing needs an environment. Somebody designs the environment, IP addressing is assigned, software is installed. The environment is delivered to testing, testing requests a release of code, testing starts. Defects are logged and the cycle starts again.

Now from start to finish this can take anything from days to months (yes, I am talking about another of my great clients here; I can't mention any names). So for those of you who are stuck in the "months to provision" life-cycle, self service is going to be something of great interest.

A typical cycle for this scenario is:

  1. Tester requests an environment; an email is sent to the infrastructure team
  2. Infrastructure approves the environment build
  3. Environment is built "auto-magically"
  4. IP addressing is assigned, NAT firewalls are set up
  5. Email is sent to testing: the environment is ready

All this is done without anyone really lifting a finger other than to click “yes” this is ok.
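The five-step cycle above can be sketched as code, with the approval flag as the only human input; every other step is an assumed automated action.

```python
def self_service_request(tester, approved):
    """Run the self-service cycle; the only human input is the approval flag."""
    log = [f"request received from {tester}"]       # 1. tester requests environment
    if not approved:                                # 2. infrastructure approves (or not)
        log.append("request rejected")
        return log
    log.append("environment built")                 # 3. built "auto-magically"
    log.append("IP addressing and NAT configured")  # 4. networking set up
    log.append(f"email sent to {tester}: ready")    # 5. tester notified
    return log

print(self_service_request("alice", approved=True)[-1])  # email sent to alice: ready
```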

So the long and the short of all of this is that DevOps is about making life easier. This might mean some of us will be without a job as the technology starts to control / deliver itself. I think the developers are pretty safe, but it's those of us in the middle that are at risk.

So the way I see IT of the future: if you are not a developer, sooner or later your job might be put at risk. Companies are currently looking to slash budgets, and DevOps is going to be one significant, relatively painless way they can achieve it. So if you're currently in an IT field and you're not a developer, you had better start looking at how DevOps is going to impact your future.

Orchestration is here to stay. Fortunately it's not that mature yet, nor is it for those who are half-heartedly following IT as a career.

Over the next few posts I am going to deep dive into a number of orchestration tools and show some examples on how you can leverage them to automate your entire infrastructure.

First cab off the rank will be VMware Orchestrator. This is one of my favourite tools because it is priced very competitively at $0. This is likely because it's relatively difficult to use, and why would people pay for something that requires significant investment in knowledge and time before you can leverage it to do anything remotely useful? But hey, on my last project time was a luxury I had plenty of. So I got in, nutted it out, and presto, a few months later I had even more time on my hands.

One last picture as food for thought on what a typical DevOps toolset will comprise. These are by no means my favourite tools, merely guidelines on what you should be covering off in your automation arsenal.

Feel free to swap out your tools and technologies; as long as you're still covering each of the major processes you should be in a pretty good state.