How to use a Master Image to easily deploy any Server Role in your organization, without having to keep track of multiple images, configurations, and builds

There are many advantages to using a single image to deploy all servers across your organization. Consolidating images can save you and your organization time, money, and disk space, and it gives you an easy, predefined, trainable process for deploying new servers. This tutorial will teach you how to accomplish this in a manner that does not require top-level technical expertise or advanced scripting knowledge.

Sysprep and Server Roles

When first trying to put this solution together, the first thing you will realize is that Windows has limited support for deploying server roles through Sysprep. My first reaction was one of disbelief, but these things happen; it's not a perfect world. Below is a table of the main supported and unsupported roles.

Server role | Windows Server 2008 | Windows Server 2008 R2
Active Directory Certificate Services (AD CS) | No | No
Active Directory Domain Services (AD DS) | No | No
Active Directory Federation Services (AD FS) | No | No
Active Directory Lightweight Directory Services (AD LDS) | No | No
Active Directory Rights Management Server (AD RMS) | No | No
Application Server | Yes | Yes
DHCP Server | Yes | No
DNS Server | No | No
Fax Server | No | No
File Services | No | Yes
Hyper-V™ | Not applicable | Yes (not supported for a virtual network on Hyper-V)
Network Policy and Access Services | Yes | No
Routing and Remote Access Services | Yes | Not applicable
Print Services | No | Yes
Remote Desktop Session Host (Terminal Services) | Yes (not supported where the master Windows image is joined to a domain) | Yes (not supported where the master Windows image is joined to a domain)
UDDI Services | No | Not applicable
Web Server (Internet Information Services) | Yes (not supported with encrypted credentials in the Applicationhost.config file) | Yes (not supported with encrypted credentials in the Applicationhost.config file)
Windows Deployment Services (WDS) | No | No
Windows Server Update Services (WSUS) | No | No

If your organization is like mine, the majority of servers deployed use roles that are unsupported by Sysprep, meaning you cannot simply define a role in the answer file to be installed when Sysprep runs. Since roles cannot be installed by Sysprep, one would conclude that the best way to template out your servers is to create a template for each role (i.e. one for Exchange 2010 Mailbox, one for CAS, one for HUB, one for Domain Controller, one for Web Server, etc.). This approach works well: you simply prepare a server with all prerequisites for a role, Sysprep that server, and you have a template for that role. The downside is that this means multiple templates taking up valuable disk space in your infrastructure.

The solution I have authored uses the Sysprep answer file to call a .cmd file once OOBE (Out of Box Experience) completes, but before Explorer and the desktop launch. Using this logic, we can create a batch file that, with minimal deployment effort, calls a series of PowerShell scripts to install the most commonly used roles for the most commonly used servers in your organization.

In theory this means one template for Standard Edition and one for Enterprise Edition, each with a master script folder we can use to install any role prerequisites during initial spin-up. This solution is hypervisor-agnostic, so it can be performed with any virtualization technology that supports templates. Below are the steps we are going to take; while not difficult, the process is time consuming, so let's get started.

Steps for Completion

1) Get PowerShell scripts

2) Create .cmd script

3) Create Sysprep answer file with appropriate settings

4) Create master image

5) Run Sysprep

6) Create template from Sysprepped machine

7) Run Sysprepped machine and test

Get PowerShell Scripts

Step 1 is actually quite simple. If you are using well-documented roles for the servers, such as Exchange 2010, web servers, etc., script creation is a breeze. For this example we will use Exchange roles (http://technet.microsoft.com/en-us/library/bb691354.aspx). To create my scripts I simply copied the PowerShell code from the Microsoft website listed above, inserted it into a text file, and saved it as a .ps1 file. I used the available cmdlets to create scripts for Exchange CAS and HUB, Mailbox, and all roles for a single-server deployment. Here are my examples:

Filename: Exchange_CA_HUB_MB.ps1
Code:    Import-Module ServerManager

Add-WindowsFeature NET-Framework,RSAT-ADDS,Web-Server,Web-Basic-Auth,Web-Windows-Auth,Web-Metabase,Web-Net-Ext,Web-Lgcy-Mgmt-Console,WAS-Process-Model,RSAT-Web-Server,Web-ISAPI-Ext,Web-Digest-Auth,Web-Dyn-Compression,NET-HTTP-Activation,Web-Asp-Net,Web-Client-Auth,Web-Dir-Browsing,Web-Http-Errors,Web-Http-Logging,Web-Http-Redirect,Web-Http-Tracing,Web-ISAPI-Filter,Web-Request-Monitor,Web-Static-Content,Web-WMI,RPC-Over-HTTP-Proxy

Filename: Exchange_CA_HUB.ps1
Code:    Import-Module ServerManager

Add-WindowsFeature NET-Framework,RSAT-ADDS,Web-Server,Web-Basic-Auth,Web-Windows-Auth,Web-Metabase,Web-Net-Ext,Web-Lgcy-Mgmt-Console,WAS-Process-Model,RSAT-Web-Server,Web-ISAPI-Ext,Web-Digest-Auth,Web-Dyn-Compression,NET-HTTP-Activation,Web-Asp-Net,Web-Client-Auth,Web-Dir-Browsing,Web-Http-Errors,Web-Http-Logging,Web-Http-Redirect,Web-Http-Tracing,Web-ISAPI-Filter,Web-Request-Monitor,Web-Static-Content,Web-WMI,RPC-Over-HTTP-Proxy

Filename: Exchange_MB.ps1
Code:    Import-Module ServerManager

Add-WindowsFeature NET-Framework,RSAT-ADDS,Web-Server,Web-Basic-Auth,Web-Windows-Auth,Web-Metabase,Web-Net-Ext,Web-Lgcy-Mgmt-Console,WAS-Process-Model,RSAT-Web-Server

If you need other types of roles, they are readily available to you on the Internet. If you are using Server 2008 features and roles whose combination is specific to your builds or applications, it will require a little more work to write out your cmdlets. A list of all available features can be brought up in PowerShell with the Get-WindowsFeature cmdlet. You can only use this cmdlet after you have imported the Server Manager module into PowerShell, which you do by running Import-Module ServerManager. You can use this list to build your script with the following syntax:

Import-Module ServerManager
Add-WindowsFeature role, role, role

Don't forget to replace the word role in this command with the Windows features/roles you would like to install.
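
For example, to browse just the web-related feature names before committing them to a script, you could run something like this (a quick sketch; the Web-* filter is only an illustration):

Import-Module ServerManager
# List every available feature whose name starts with "Web-"
Get-WindowsFeature | Where-Object { $_.Name -like "Web-*" } | Format-Table Name, DisplayName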

**Important** There should be one PowerShell script for each server type you want to deploy! Also make sure your execution policy in PowerShell is set to RemoteSigned by using the Set-ExecutionPolicy RemoteSigned command.

Create CMD Script File

Once you have completed the PowerShell scripts, you will have to create a .cmd script that will run after Sysprep's OOBE. This script runs via the SynchronousCommand option in the answer file, but we will get into that later. I know we could get a lot more complex when creating scripts, possibly even creating a single PowerShell script with all the options we need. Unfortunately, like many other overloaded administrators and engineers who wear many different hats, I do not have the PowerShell acumen required to make such a script, nor the time to learn it. Since we all know basic scripting, this seemed like the next logical choice for accomplishing what I want without having to invest days in learning more advanced features. So let's dive in.

We will create a basic script that presents a choice to run each of the PowerShell scripts we created. The code for this is pretty simple and straightforward. First I will provide the code, then the output, and then a brief explanation.

CODE:
@echo off
color 0a
:home
title Server Type Selection Screen
cls
echo Server Type Selection
echo =====================
echo.
echo 1) Exchange Single Server (MB,HUB,CA)
echo 2) Exchange Mailbox Server
echo 3) Exchange Hub and Client Access
echo 5) Web Server
echo 6) Lync 2010
echo 00) Exit
echo.
echo =====================
set /p var=Enter Option Number: 
if %var%==1 PowerShell -ExecutionPolicy RemoteSigned -file "c:\deploy\Exchange_CA_HUB_MB.ps1"
if %var%==2 PowerShell -ExecutionPolicy RemoteSigned -file "c:\deploy\Exchange_MB.ps1"
if %var%==3 PowerShell -ExecutionPolicy RemoteSigned -file "c:\deploy\Exchange_CA_HUB.ps1"
REM Option 4 is reserved for a script that has not been written yet
if %var%==5 PowerShell -ExecutionPolicy RemoteSigned -file "c:\deploy\Web.ps1"
if %var%==6 PowerShell -ExecutionPolicy RemoteSigned -file "c:\deploy\Set-Lync2010Features.ps1"
if %var%==00 Exit
goto home
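
OUTPUT (reconstructed from the echo lines above; this is what the command window displays when the script runs):

Server Type Selection
=====================

1) Exchange Single Server (MB,HUB,CA)
2) Exchange Mailbox Server
3) Exchange Hub and Client Access
5) Web Server
6) Lync 2010
00) Exit

=====================
Enter Option Number: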

Explanation:
The very first line, @echo off, stops the command processor from echoing each command (including the current directory prompt) to the screen.
As you might deduce, every line that begins with echo is displayed as readable text in the command window. The set /p line accepts user input; the /p switch displays a prompt string, waits until a value is entered, and stores that value in the variable var. Each if line then compares the stored value against an option and runs the matching command. It is fairly straightforward, though it is easy to make mistakes, so check your work by running the script and selecting an option; you can cancel once you see the telltale PowerShell progress output at the top of the command window.

Create the Answer File

If you are not familiar with creating an answer file, I will provide an in-depth tutorial in the future. For now I will give you a down-and-dirty outline of the steps required.

First things first: you have to download and install the Windows Automated Installation Kit (AIK). After installation, navigate to your Start menu and open Windows System Image Manager. This will allow you to create your answer file, but before you do that you will need to specify the image to use. You can do this by going to the File menu and selecting "Select Windows Image". Navigate to the Sources folder on the root of your Windows installation disk and select the appropriate image for your installation.

There is a plethora of options to select from when creating the answer file, but for our purposes we only need two: one to specify the default Administrator password, and one to run the script on first logon. Both options can be found in the amd64_Microsoft-Windows-Shell-Setup component.

You will want to drag both of these selections into the oobeSystem pass in the answer file and fill in all required information, as shown below. Once complete, save this file and copy it to your master image in the Sysprep folder under System32.
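
Here is a minimal sketch of the relevant fragment of the answer file (the deploy.cmd filename and the C:\deploy folder follow this tutorial's conventions; the password value is a placeholder you must replace):

<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="oobeSystem">
    <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
        publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"
        xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">
      <UserAccounts>
        <AdministratorPassword>
          <!-- Placeholder: replace with your own password -->
          <Value>YourPasswordHere</Value>
          <PlainText>true</PlainText>
        </AdministratorPassword>
      </UserAccounts>
      <FirstLogonCommands>
        <!-- Runs the server-type selection menu on first logon -->
        <SynchronousCommand wcm:action="add">
          <Order>1</Order>
          <CommandLine>cmd /c C:\deploy\deploy.cmd</CommandLine>
          <Description>Install server role prerequisites</Description>
        </SynchronousCommand>
      </FirstLogonCommands>
    </component>
  </settings>
</unattend>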

**Caution** The administrator password is stored in plain text in the answer file, and the file remains in the Sysprep folder after deployment. You can use another SynchronousCommand with an Order of 2 to run a batch file that deletes the answer file, or you can delete it manually.
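
A minimal sketch of such a cleanup script (this assumes the answer file is named Sysprep.xml and sits in the Sysprep folder, as in this tutorial; the cleanup.cmd filename is my own):

REM cleanup.cmd - removes the answer file that holds the plain-text password
del /q /f C:\Windows\System32\Sysprep\Sysprep.xml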

Create Master Image

This should be pretty straightforward and shouldn't require much explanation. Simply create a VM and install a fresh image of Windows Server 2008 R2; make sure you install all virtual hardware, drivers, and software that will be shared among all images. Install no features or roles, then copy all the scripts into a folder named Deploy on the root of C:.
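
For example (a sketch; the file-share path is hypothetical):

mkdir C:\Deploy
REM Pull the role scripts and the menu script from wherever you keep them
copy \\fileserver\scripts\*.ps1 C:\Deploy\
copy \\fileserver\scripts\deploy.cmd C:\Deploy\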

**Important** Make sure your execution policy in PowerShell is set to RemoteSigned by using the Set-ExecutionPolicy RemoteSigned command.

Run Sysprep

In my experience, the best way to do this is via the command line. Simply open a new command window, navigate to the Sysprep folder (C:\Windows\System32\Sysprep), and run the following:

sysprep.exe /oobe /generalize /shutdown /unattend:Sysprep.xml

Create Template and Test

Once the VM has shut down after running the Sysprep command, you will want to convert it to a template, which you will use to deploy all servers in the future. This process varies depending on your hypervisor, and we will not be discussing it further since this tutorial is hypervisor-agnostic.

Once you have created your template, deploy a new VM from it and test. Verify that your scripts are installing the required roles and features. Once you are happy with the result, simply bring your new template into production and you are done.
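
One quick way to check (a sketch using the same ServerManager module as the deployment scripts):

Import-Module ServerManager
# Show every role and feature currently installed on the freshly deployed VM
Get-WindowsFeature | Where-Object { $_.Installed } | Format-Table Name, DisplayName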

The Tony – @TonyInTheCloud

Project Managers – Get your Head “IN” the Cloud – Budgets (Part 3)

Continued from Part 2

Last in the series, but certainly not least, is Budgets. Money makes the world go 'round, but what about the Cloud? It does take money, but does it take more, less, or a different type of money?

We all know that Project Budgets can get real big, real fast when it comes to infrastructure costs. Servers, Storage, Appliances, Controllers, Licenses, Setup Costs, and wow! You are at several hundred thousand or a million and you haven't even started solving the business problem, built the "next big thing" that will create a new revenue stream for your company, or done whatever that Requirements Document said the Business line wanted. Without the infrastructure to support the IT part of the project, nothing happens except uncomfortable meetings for CIOs and their Technology Departments. And all the while the Business lines keep falling further and further behind the competition.

Cloud Computing changes the PM game in a few ways. First, the Cloud provider absorbs the upfront cost of the infrastructure; at least a good one does. All of that juicy technology, ready to take your business to the next level. Imagine all of those gigabytes of RAM, a plethora of CPUs, and storage ready to take your data, process it, and then store it for your fancy reports. It is racked and ready to go. And it is being managed by folks who are trained and current on said technology.

A More Predictable Cost Model

With the sunk costs being absorbed by someone else, you just need to get onboard and pay for your little piece of the Cloud. Paying as you go means a lot less upfront cost and allows more of your company's projects to get approved. We spoke about scheduling earlier in this series, but to reiterate, more projects can be accomplished as well. This is great for Project Managers and Directors of the PMO. IT is typically known as the largest cost center. Now it can become a service provider to the business, and those monthly Cloud Provider charges can be cross-charged to the corresponding business unit.

Infrastructure Project budgets will be more of a contracted rate, and the budget will shift more to the software side for development. That too can be a contracted rate if the company outsources development, but that is out of scope for this article. Contract negotiations will become an even larger part of the Project Manager's role.

Most Cloud companies charge for bandwidth usage, kind of like a cell phone plan: you pay for the traffic you use, plus overage if any, and the model scales linearly. If you get onboard with a provider like Venturian, you have an even more predictable cost model. How is it predictable? You don't pay for usage; you pay by CPU. Again, that is another topic for a future article.

Shift to OpEx versus CapEx

One shift will be the move to an OpEx model. As a PM, you have to track and report your CapEx so it can be properly depreciated over the upcoming years. What do you do if there is no upfront cost? Some people will start to throw up their hands and say, "What will we do without CapEx?" Well, you can do more with less. If you don't spend a million dollars on servers, licenses, support, maintenance, and setting them up, then you have a million dollars to do something else with, like hire more employees or invest in R&D. There is no rule saying you have to have CapEx. It is just an accounting tool to spread expenses over the life of the asset so the company doesn't have to take the full hit in year 1. And don't forget, the company still has to pay for the asset in year 1, so it comes off the bottom line unless they get financing. And financing costs interest.

Final Word

To round out this series: the Cloud will change Project Management. For better or worse, only time will tell. But change is good when properly managed, and that is what Project Managers do. We as PMs must constantly challenge ourselves as businesses start to evolve into Cloud-based models. Being able to help the executive wing navigate, negotiate, and make the right business decisions will become more of our charge. Whatever you do, be sure to educate yourself on how Cloud Computing is changing Project Management.

Terry Worrell, PMP – Director of Strategy and PMO

Project Managers – Get your Head “IN” the Cloud – (Part 2) RESOURCES AND TEAMS


Continued from Part 1

"Managing this project is like herding cats." "Putting more people in the car won't get us there any faster." How many times have we, as Project Managers, either uttered these types of phrases or heard a colleague exclaim one in exasperation? Resource management, or Workforce management, as I like to say, is one of the trickier parts of managing a project. Projects are like the quote from the Gettysburg Address: "…of the people, by the people, and for the people". Without people there would be no project, which is why I do not like to refer to them as Resources. It dehumanizes them, which makes it easier to devalue them.

My hope is that Cloud Computing can put some of the humanity back into projects. As business evolves to an even faster pace, the IT/Technology/Computer guys, or whatever you call the folks in your company who support and implement those shiny machines that keep your business humming along, are struggling to keep up. That sounds kinda funny given how fast technology is evolving. The mismatch happens when the dollars required to keep up with the latest technology cannot be justified to a non-technical company. So an in-house IT department must suffer along and "make it work". Do "more with less". Any of those management phrases that translate into "just shut up and make sure my Excel sheet pops up when I click the button". That is the rub. How can the business ensure they are supported by current technology so they can focus on running the business?

Enter, "The Cloud". Cloud providers ARE tech companies. Their revenue streams are tied to their investment in current technology. Their reason for being is to provide technology to other companies, so they cannot afford to not stay current. That investment translates not only into a non-technical company being able to leverage state-of-the-art System Resources, but also into fully trained and competent Human Resources who are intimate with the current technology and are available and able to help you.

The shift for PMs will be more around communicating with and managing virtual teams, and handling vendor/partner management to a different degree than is done currently. Virtual teams have been around a long time, but they are usually one vendor, one application, or some programmers on a project. We are talking about an entire infrastructure team being remote.

Imagine your in-house data center disappearing and being replaced by a cube farm. Project Managers will be needed to stay on top of things and keep the communication channels open, now more than ever. With the bulk of the project and support work being done through the company's partnership with a Cloud Provider, maybe a few Cloud Providers, PMs will plan with the Business, IT Management, and all of the key stakeholders, as they do now. Where it will be different is that the team performing the execution will be able to address needs much quicker and more efficiently, with systems that can keep up with business demands in a time-sensitive fashion.

Project Managers will need to embrace the partnership model that will come with the Cloud. With a better-oiled machine on the project team, projects will execute with more consistency and on a more professional level. Deadlines will actually become more than just a line item on a project schedule. Of course there will be bumps in the road (which is why we are needed), but you won't have to deal with a short-staffed IT Manager who is also doubling as the DBA and the Business Analyst. The technology folks at your Cloud provider will be committed to serving their clients. They will be more positive to work with because they will have proper training, be supported by a robust infrastructure, and be contractually bound to provide a certain level of service: humanity rediscovered! The tools for the project team will, of course, change with the evolution of telepresence, consolidated project servers, collaboration tools, and mobile technology.

Client-side Project Management will deal more with the business and implement business solutions, which will provide more value-add to the business. In a fully Managed Services environment, the entire IT department may be outsourced, and your technology project team may consist of a Cloud provider, an application vendor, a business sponsor, and you. Sound scary? It really is not. Being organized and able to coordinate everyone's efforts is what we have done since the first project manager built the first pyramid. We are just being challenged with a new level of evolution. Are you ready for the challenge?

The next blog in this series will be around Budgets.

Terry Worrell, PMP – Director of Strategy and PMO

Commercial Real Estate and Datacenters


While connecting buildings to 'the Internet' is not a new thing, Cloud Connected Real Estate adds a completely new perspective to the utilization of these technologies. As an enthusiast for everything Cloud as well as Connected Real Estate, I have seen the changes take place and have noticed different approaches in different situations. One of the most interesting and unexplored approaches is the connection to a new or existing datacenter.

Some of the larger projects, like Lake Nona in Orlando and PlanIT Valley in Portugal, are actually starting out with a datacenter and building their communities around it. Others, like the 600 Brickell project in Miami, connect to an existing datacenter.

The big question in connecting to a datacenter, existing or not, is the reason behind it. What does it add to the project, the owners and the tenants?

First of all, let's explore the concept of the datacenter. A proper datacenter is everything that a proper office building is not. In its basic form, the differences are clear: in a datacenter, the air is cold and dry, there are no windows, it is noisy, and access is reserved for the (un)happy few. This might be a 'Class A' environment for electronics, but for people it is less than ideal. A 'Class A' office environment, on the other hand, provides proper accessibility for people and offers a conditioned, well-lit environment.

Datacenters usually provide more robust redundancy facilities than office buildings do, since the requirements for availability are based on the needs of electronic equipment rather than people. Since physical equipment cannot move itself to a more appropriate location, special facilities that are lacking (or significantly less available) in office buildings are often an integral part of datacenters. Examples include flywheel generators that bridge fluctuations in electric current between a power outage and the generators taking over, redundant cooling systems, and arrangements for emergency fuel that office buildings do not have.

Looking at these descriptions, one thing becomes immediately clear – People do not belong in data centers and data centers do not belong in office buildings. Many have tried with various levels of success, but one or the other always deviates from its ideal picture.

When connecting real estate to a datacenter with a properly sized and designed connection, a bond is forged that offers an incredible benefit to all parties involved: a marriage made in heaven, a merger of Class A office space and Class A computer space, without concessions on either side. When it comes to the benefits of Cloud Computing, in all of its forms, a connection to a datacenter truly enables the Cloud and makes Cloud-hosted applications seamless to the end user.

A direct connection to a datacenter removes the added latency (slowness and delays) that is inherent to the Internet. No matter how fast or how good the connection with your provider is, at some point your data gets routed over a public Internet connection, at which point nobody has control over the capacity or latency of the connection(s) that you take to your destination. It's like driving to a destination during rush hour: you are aware that traffic jams may (or even will) occur, but there is no telling how long the delays will be.

Connecting directly to a datacenter takes this issue away, at least for the path between your office and the datacenter you are connected to. If your Cloud provider is located in the same datacenter, a cross-connect can be put in place to guarantee fast, wide bandwidth for the whole company, at a fraction of the price of an Internet connection that could handle the same load. If the Cloud provider is physically located in a different datacenter, there is often the possibility of using the infrastructure that inter-connects datacenters, such as a backbone provided by a Tier 1 connectivity provider or by the datacenter itself. Either way, going fully virtual on the desktop is no longer a wanna-have, but a can-do.

Another great benefit of datacenters is the choice of carriers. While not every carrier is available in every datacenter, these locations are the 'wholesale clubs' of Internet connectivity. A little shopping around will find you incredible deals on Internet connections: business-grade connections at SOHO prices, with more scalability than you will ever need. Choice is a good thing.

The same goes for providers of other services, from telephony to all forms of Cloud computing. Since many providers are in the same building, you will be able to negotiate great services for amazing prices.

When deciding on the next location for your business, check out Cloud Connected Real Estate and find out what the location is connected to. A good connection to a single Internet provider is great; a great connection to a datacenter is better.

Cas Mollien – Founder, SmallBizCIO

Project Managers – Get your Head “IN” the Cloud – (Part 1) TIME


What is one of the biggest differences between managing an Infrastructure project and a Software Development project? A few years ago, I would have answered with confidence, "Procurement lead time". Now, the answer has evolved with the adoption of the "Cloud".

There are many blogs out there stating how Cloud Computing will change Project Management. I suppose this blog falls along the same lines, except that working for a Private Cloud Computing company has given me what I feel is a unique perspective. As a seasoned PMP, I have run the gauntlet of managing business and IT, Infrastructure and Software Development, multimillion-dollar projects and small enhancements, and everything in between. But this is neither a resume nor a blog about me. Rather, it is meant to help fellow Project Management Professionals get ready for the next incarnation of managing projects.

Since Project Management is generally concerned with the triple constraint (remember Time, Budget, and Resources?), I will be following that format and splitting this blog into three parts so as not to inundate you with a novel.

If dot-coms were a wave, Cloud Computing will be the tsunami. Get ready to throw out those old project templates, or at least perform some major surgery on the assumptions used to build them. Managing the Cloud is a bit different from the old vanilla projects we have grown accustomed to.

IT shops will be changing, and so will the need for a Project Manager to be in close proximity to the infrastructure. IT will be a service like "electricity" or "telephone". You do not need to sit next to the electric guy, right?

Time Impact

Schedule impact will be drastic. Your hosted infrastructure project will be similar to adding a phone line: make a phone call or send in a ticket request with your specs and voilà, you can have instant servers, domain controllers, extra hard drive space, etc. It sounds like an old Tommy Vu infomercial, but if you choose the right Cloud provider, it can be that easy. Instead of a three-month procurement lead time, try 1-3 days... heck, 1-3 hours in some cases.

Prep time: less analysis of scalability and processing power. While analysis and pre-gaming are paramount to any project's success, you no longer need to worry about making a $100K mistake if you purchased the wrong server. Scalability is a key reason for going to the Cloud. Venturian has a fixed cost model, which makes pricing more predictable. Not all providers do, so do your homework. This peace of mind will allow projects to move quicker through the early part of the project lifecycle.

The next blog in this series will be around Resources and Teams.

Terry Worrell, PMP – Director of Strategy and PMO
