Dienstag, 26. Juli 2011

Windows 7 Lite Touch installation with MDT 2010 – Part 3


In the meantime, MDT 2010 has reached Update 1 – time to pick up where we left off in part 2.


We will configure WDS to use PXE boot, use SQL Server to retrieve the computer name and have some beer afterwards.


PXE Boot


First configure a DHCP scope to serve the clients with an IP address. Then install the Windows Deployment Services (WDS) role on your MDT box and configure WDS. I like to have a PXE delay of 3 seconds, and I’m running DHCP on the same server as WDS, so I need to check ‘Do not listen on port 67’ and ‘Configure DHCP option 60 to indicate that this server is also a PXE server’.
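If you prefer the command line, the same WDS settings can be applied with wdsutil – a sketch only, assuming a remote installation folder of D:\RemoteInstall (verify the option names against your WDS version):

```
wdsutil /Initialize-Server /RemInst:"D:\RemoteInstall"
wdsutil /Set-Server /UseDhcpPorts:No /DhcpOption60:Yes
wdsutil /Set-Server /AnswerClients:All
```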


Windows Deployment Services - PXE Response Windows Deployment Services - DHCP


Now we have to import the WinPE boot images, previously generated by MDT, into WDS:


Windows Deployment Services - Add Boot Image


Browse to the Boot folder in the deployment share and select the LiteTouch WIM file(s) (I’m importing the x64 version only, as I don’t use Windows 7 x86 for now):


Windows Deployment Services - Add Boot Image Windows Deployment Services - Importing Boot Image


Windows Deployment Services - Boot Image added


Boot your client machine and hit F12 to boot into PXE, or choose boot from network card in the BIOS.


PXE boot client


PXE boot client, loading from wim file


We still have to choose a computer name during deployment:


MDT - configure the computer name


Using MS SQL Server (Express) you can fully automate this!


Preparing SQL Server


In my test lab I will use SQL Server 2008 Express SP1. Open SQL Server Configuration Manager, set SQL Server Browser to Automatic and start the service:


Start the SQL Server Browser service


Enable Named Pipes in SQL Server Configuration Manager:


Enable Named Pipes


Restart the SQL Server service:


Restart the SQL Server service


Start SQL Management Studio and create a Security Login (I’ll use my MDT domain-join-user):


Create a Security Login for the MDT database Create a Security Login for the MDT database


Add the db_datareader and db_datawriter permissions for the domain\svc-join user to the MDT database:
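The same login and role memberships can also be created with T-SQL instead of clicking through Management Studio – a sketch, where the database name MDT and the account thedspot\svc-join are from my lab and will differ in yours:

```sql
-- Create a Windows login for the MDT service account
CREATE LOGIN [thedspot\svc-join] FROM WINDOWS;
GO
USE MDT;
GO
-- Map the login to a database user and grant read/write access
CREATE USER [thedspot\svc-join] FOR LOGIN [thedspot\svc-join];
EXEC sp_addrolemember N'db_datareader', N'thedspot\svc-join';
EXEC sp_addrolemember N'db_datawriter', N'thedspot\svc-join';
GO
```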


Set permissions on the MDT database



Create a database


Open the Deployment Workbench and create a new database:


MDT New Database MDT New Database


MDT New Database MDT SQL Share


We have finished creating the MDT database.


Now we have to configure CustomSettings.ini before we can use the database:


Configure Database Rules - Update CustomSettings.ini


By clicking Configure Database Rules, you are actually adding extra lines to CustomSettings.ini in order to make a connection to the database. Select what you need:


Configure DB Wizard



Take a look at your CustomSettings.ini file (by right-clicking the DeploymentShare > Properties > Rules tab):


CustomSettings.ini
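The lines the wizard generates look roughly like the following – server, instance and share names here are from my lab, so yours will differ:

```ini
[Settings]
Priority=CSettings, Default

[CSettings]
SQLServer=MDT01
Instance=SQLEXPRESS
Database=MDT
Netlib=DBNMPNTW
SQLShare=DeploymentShare$
Table=ComputerSettings
Parameters=UUID, AssetTag, SerialNumber, MacAddress
ParameterCondition=OR
```

Netlib=DBNMPNTW tells MDT to connect over named pipes, which is why we enabled that protocol earlier.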


You can modify CustomSettings.ini further – to join a domain, for example:


SkipDomainMembership=YES
JoinDomain=thedspot.local
DomainAdmin=svc-join
DomainAdminDomain=thedspot.local
DomainAdminPassword=*
MachineObjectOU=OU=Computers,OU=Unmanaged,DC=thedspot,DC=local



Obtaining Computer names from the SQL database


Hit Computers > New to add a MAC address and its corresponding computer name (OSDComputerName):


Add a new computer to the MDT database Add a new computer to the MDT database


OSDComputerName
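During deployment, MDT’s gather process matches the client’s MAC address against this table and pulls back OSDComputerName. As a toy model of that lookup – with sqlite3 standing in for SQL Server, and table/column names that only approximate the real MDT schema:

```python
import sqlite3

# Toy model of the MDT computer-name lookup; sqlite3 stands in
# for SQL Server and the schema is only an approximation.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ComputerSettings (MacAddress TEXT, OSDComputerName TEXT)")
db.execute("INSERT INTO ComputerSettings VALUES ('00:15:5D:01:02:03', 'WS-FINANCE-01')")

def computer_name_for(mac):
    """Return the OSDComputerName for a MAC address, or None if unknown."""
    row = db.execute(
        "SELECT OSDComputerName FROM ComputerSettings WHERE MacAddress = ?",
        (mac.upper(),),
    ).fetchone()
    return row[0] if row else None

print(computer_name_for("00:15:5d:01:02:03"))  # → WS-FINANCE-01
```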



Our LiteTouch deployment succeeded:


Deployment done


The computer name was retrieved from the database and domain join was successful.



Exchange 2010: PowerShell script for the installation


If you frequently install Exchange 2010 in different environments, you can download a really great PowerShell script here: it installs all the prerequisites for the selected Exchange roles. I like to use it to set up a test environment quickly.


http://www.ucblogs.net/files/folders/powershell/entry125.aspx


The script was written by Pat Richard; the accompanying blog post can be found here:


http://www.ucblogs.net/blogs/exchange/archive/2009/12/12/Automated-prerequisite-installation-via-PowerShell-for-Exchange-Server-2010-on-Windows-Server-2008-R2.aspx


Many thanks to him at this point.


PS: Before running the script, don’t forget to execute the following command:


Set-ExecutionPolicy RemoteSigned

"

VMware Enterprise or Enterprise Plus?


You may have heard about the coming licensing changes from VMware. You may have heard about the new vRAM licensing and some of the impacts it will have on current implementations. And you may have heard about the release of vSphere 5! If not, do a little Googling around and you’ll find oodles and oodles of information.

With all the changes coming toward the end of the year, this is probably a good time for customers to start looking at whether Enterprise Plus is a good option for them in the future.

You can refer to this link from VMware for a full breakdown of the editions:
http://www.vmware.com/vmwarestore/vsphere_purchaseoptions.html (List prices as well)

If you clicked the link, you may also notice that Advanced is now gone. If you had a vSphere 4 Advanced license, you now have a vSphere 5 Enterprise license. (Congrats!)

The obvious license enhancements are:
The vRAM entitlement goes from 32GB per socket in Enterprise to 48GB per socket in Enterprise Plus. The maximum of 8 vCPUs per VM also jumps to a whopping 32 vCPUs per VM – more than enough to run the most demanding mission-critical VM.

In the past, I wasn’t too impressed with the additional technical features of the ‘Plus’ edition, but with the release of vSphere 5 they have put some of the cooler features in the Plus version, and I expect to see it a lot more in the field now.

Storage DRS:
This is perhaps my favorite feature of vSphere 5. Much like DRS, which automates the distribution of CPU and RAM resources via system-initiated vMotions, Storage DRS automates the distribution of storage resources via system-initiated Storage vMotions. It attempts to baseline I/O requirements and move VMs to stay within those baselines. It is also on guard to make sure space doesn’t run out on a datastore, which includes intelligent placement of VMs on datastores during creation. Storage DRS will require Enterprise Plus.

Auto Deploy:
In my mind, this is similar to Citrix Provisioning Services: the ability to PXE boot a machine, have it connect to vCenter and stream the ESXi operating system down to the hardware. For all this to work, vSphere leverages Host Profiles to complete the configuration once the base machine boots up. This should make deployment, patching and scaling pretty painless.

I/O Control:
Both Storage and Network control will be available only in Enterprise Plus. These features will allow companies to better define and ensure Virtual Machine resources based on business needs and SLAs.

These are some of the new changes in vSphere 5 that might make a company re-evaluate their current licensing edition needs.



"

Hyper-V upgrade – The process


This article describes the Hyper-V upgrade process in a clustered environment.


In my previous post I described why we decided to upgrade our highly available Hyper-V environment from version 1 to version 2: we wanted to take advantage of Cluster Shared Volumes, Dynamic Memory, and Live Migration. Next I’ll describe the process I went through in my test lab to test the new features.


Upgraded one Hyper-V server to revision 2



  1. Moved VMs to one host – A fresh install of Windows Server 2008 R2 would eventually be done, so before starting, the virtual machines needed to be moved to one host using the Failover Cluster Manager console.

  2. Evicted host from cluster – Now that one host (we’ll call it Host1) was not hosting any virtual machines, I began decommissioning it by evicting it from the cluster using the Failover Cluster Manager console. With the Cluster service no longer running, all iSCSI targets were disconnected in iSCSI Initiator.

  3. Removed host from Active Directory – The final step to completely decommission the server.

  4. Set up additional iSCSI disks – On the SAN, I created two new iSCSI targets. The first, R2VMClust, was to serve as the quorum disk for a new cluster. The second, CSV1, was a very large iSCSI target that would serve as my Cluster Shared Volume.

  5. Fresh installed Windows Server 2008 R2 – During the process, I chose to format the system drive for a fresh install as opposed to an upgrade. Finally, I added Host1 to the domain and installed Hyper-V and Failover Clustering.

  6. Created new cluster – On Host1, I created a cluster, HVR2Clust, that included Host1. I then modified the quorum to use Node and Disk Majority with R2VMClust; Node and Disk Majority is necessary to have quorum with only one node online. To finish this step, I enabled Cluster Shared Volumes for the cluster and added CSV1 as a Cluster Shared Volume.


Hyper-V Upgrade - Live Migration




Moved virtual machines to the new cluster



  1. Shut down VMs – I shut down the VMs so that resources weren’t being accessed.

  2. Exported/Imported VMs – With the VMs shut down, I exported the configurations only. I then moved all VM resource files to CSV1. On Host1, I imported each VM.

  3. Upgraded the remaining Hyper-V server to revision 2

  4. Destroyed cluster and disconnected from SAN – Now that Host2 was no longer being used, I destroyed the old cluster in Failover Cluster Manager. I then disconnected all iSCSI targets in iSCSI Initiator.

  5. Fresh installed Windows Server 2008 R2 – Host2 was then removed from the domain. Windows Server 2008 R2 was installed, Host2 was added back to the domain, and Hyper-V and Failover Clustering were installed. Finally, I added Host2 to the existing cluster.


Hyper-V Upgrade - Cluster Shared Volume




Finalized VM Configuration


At this point, I redistributed the VMs so that half were running on each host. I then shut down the VMs, modified memory settings to test Dynamic Memory, and upgraded the Integration Services.


Cluster Shared Volumes, Dynamic Memory, and Live Migration are all working as advertised.


Having simulated the upgrades, I’m ready to move forward, confident that Hyper-V 2.0 will provide the desired improvements to my virtual machine environment.


My next post will be a final summary once my live environment is upgraded.


Author: Aaron Denton


Copyright © 2006-2011, 4sysops



New from Sysinternals: FindLinks 1.0

Mark Russinovich’s new FindLinks utility lists the file index as well as all hard links (alternative file paths on the same volume) that exist for a specific file. The handy tool is now available as a free download in version 1.0.

Best Practice: Configuring a Software Library for Group Policy Software Deployment


This article is a continuation of my previous blog post, Best Practice: How to deploy software using Group Policy. I highly recommend that you take the time to review that post before continuing with this one. In particular, if you are looking at using Group Policy to deploy software, please review Tip #1 of the aforementioned article to make sure this method of software deployment is right for you.


One of the pitfalls of deploying software using Group Policy is that you can only specify a single UNC path for the installation files. The problem with this is that if you are in a multi-site environment, you may end up trying to deploy a fairly large software package over a slow WAN link (see image below).


image


This creates the obvious problem that it makes the computer unusable for a long time while the software downloads and installs. The problem can be exacerbated further if multiple clients from the same site try to install the software at the same time.


So to get around this problem, I will show you a number of different options that can help mitigate the performance issues of installing software via GPO in a multi-site environment.


Software Library Naming Conventions


First of all, I recommend that you implement a good naming convention for the software library in your environment. The installation files for all programs you deploy should be located in the software library so that they are easy to find and maintain.


The image below shows a tried and true structure for a software library that I have seen work many times for multiple organisations.



image


This structure makes it very easy to find the programs you are looking for from an administrative point of view, and it allows for easy tracking of what versions of programs you have in your environment.


An example of this structure would look like this:


image


Sharing and Securing the Software Library


Because a computer may need to install software before a user logs on, the computer’s domain account will need permission to read the files from the software library. To do this, at the top level of the folder structure called “Software”, make sure you grant the group “Domain Computers” read access to all files and sub-folders.
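The NTFS grant can also be scripted with icacls – a sketch, with a hypothetical D:\Software path; (OI)(CI)RX means read & execute, inherited by sub-folders and files:

```
icacls "D:\Software" /grant "thedspot\Domain Computers":(OI)(CI)RX /T
```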


image


Now that you have secured your top-level “Software” folder, you need to share it out so that computers can access it via the network (see image below). I would also recommend that you make it a hidden share to help hide it from any users that want to snoop around your network.
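The hidden share can be created from the command line too – a sketch, assuming the same hypothetical D:\Software path; the trailing $ is what hides the share from casual browsing:

```
net share Software$="D:\Software" /grant:Everyone,READ
```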


image


While you need to apply read permission on the software library for all domain computers, you should tightly control modify access to this folder, as it is possible that someone or something could plant something nefarious there and have it deployed to all your computers. Normally I don’t recommend that you control access to files using share-level permissions; however, in this case you may want to consider leaving the share as “read” only permission for everyone as an extra level of protection. By doing this you prevent anyone (even an IT administrator) from accidentally changing the files or folders, which could potentially cause a LOT of issues.


image


Now that we have the software library created, we will move on to the various methods that can be used to more efficiently distribute these files for your computers to use as an installation point.


Replicated Software Library (Only)


One way to get around the issue of distributing software is to make sure that you have a copy of the software library located at each site where you have workstations. Simply set up a DFSR Replication Group for the top-level “Software” folder and make another copy of the files at Site B. To make sure workstations in Site B install from the server in Site B, you will need to create another software deployment GPO identical to the GPO in Site A, with the exception of the UNC path, which points to the server in Site B. This way workstations in Site A will install from FileServerA and workstations in Site B will install from FileServerB, thus avoiding the clients pulling the install files via the WAN.


TIP: Remember there might be some replication latency when copying new files to the software library, so make sure that all your files are fully replicated before you change your Group Policy Objects.


image


If you do use this method, you should target the GPOs for Site A and Site B to an OU specific to each site. Doing it this way also means that any computer configured in the Site A OU but physically located in Site B (e.g. a laptop) would still try to install programs via the WAN.


You may therefore be tempted to target your GPOs to the Active Directory site, but this is something I would definitely NOT recommend. Targeting a GPO to an Active Directory site means you would also be targeting all your servers in that site. For more on this, see the “Linking GPO’s” section in my Best Practice- Group Policy Design Guidelines – Part 2 blog post.


This method does have one advantage: workstations that are not located in Site A or Site B will not attempt to install software via the WAN either.


Pros



  • Clients install software via LAN

  • Suitable for Windows Server 2003 R2 or later

  • Suitable for Windows XP clients or later

  • Only applies to selected sites

  • Low WAN Bandwidth


Cons



  • Difficult to manage due to the multiple GPOs required for each site.

  • Large infrastructure requirement for hosting multiple copies of Software Library


I don’t recommend using this method by itself, as the other methods below can be much easier to administer.


Replicated Software Library using a DFS Namespace


The obvious issue with the “Replicated Software Library (Only)” method is that you need to create, maintain and target multiple GPOs in your environment to ensure that software is distributed. To get around this, you can deploy a domain-based DFS Namespace in conjunction with your DFSR Replication Group, which will allow you to manage a single set of GPOs for all your software deployment needs.


This method allows you to have one UNC path that can be used to distribute software to all your workstations, no matter which site they are connected to. Having only one UNC path also means that you don’t need to create multiple GPOs for software deployment in each site.


Tip: As you are relying on a DFS Namespace, you also have a reliance on your Active Directory Sites, as this is how a workstation figures out which file server is closest. It is therefore highly recommended that your AD Sites are configured correctly; otherwise you might find your workstations still installing from file servers in the wrong site.


image


A downside to this method is that if a computer connects to Site X and there is no file server in that site, the workstation will try to find the next closest file server in another site (this would be bad). To mitigate this issue, you really need to be sure that you have a software distribution point located in each of your sites so your workstations always have a local file server to pull the install files from.


Pros



  • Clients install software via LAN

  • Suitable for Windows Server 2003 R2 or later

  • Suitable for Windows XP clients or later

  • Low management due to single GPO for all workstations

  • Low WAN Bandwidth


Cons



  • Software is slow to install if site does not have a copy of the software library.

  • Large infrastructure requirement for hosting multiple copies of Software Library


This is probably the most commonly used configuration in most environments today. If you are in doubt as to which method to pick, this is probably the solution with the best balance of management overhead and performance.


Replicated Software Library using a DNS Alias


This method of software deployment is very similar to the “Replicated Software Library using a DFS Namespace” option mentioned above, but it instead relies upon DNS netmask ordering for the client to find the local file server.


image


This option is configured on your DNS servers (see image below); it tries to return the closest IP address to the workstation, based on the IP of the workstation and the IPs of the multiple A records for the software library servers.


image


For this option to work you also need to have multiple DNS A Records configured to point to all the servers that have a replica of the Software library (see below).


image


It also requires that your workstation IP address ranges are close to or the same as the file servers’. This means the option would not work if your workstations were in the 10.1.0.0/24 subnet and your servers were in the 10.0.0.0/24 subnet, as they are not logically close to each other.
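The “closest wins” behaviour can be modelled in a few lines – a rough sketch only, using a fixed /24 comparison similar to the default class-C-style matching, with illustrative function and variable names:

```python
import ipaddress

# Rough model of DNS netmask ordering: A records that fall inside the
# client's subnet (/24 here) are returned first. Illustration only,
# not the actual Windows DNS implementation.
def netmask_order(client_ip, record_ips, prefix=24):
    client_net = ipaddress.ip_network(f"{client_ip}/{prefix}", strict=False)
    local = [ip for ip in record_ips if ipaddress.ip_address(ip) in client_net]
    remote = [ip for ip in record_ips if ipaddress.ip_address(ip) not in client_net]
    return local + remote

# A workstation in 10.1.0.0/24 is steered to the file server in its own subnet:
print(netmask_order("10.1.0.23", ["10.0.0.5", "10.1.0.5"]))  # → ['10.1.0.5', '10.0.0.5']
```

Note that a workstation in 10.1.0.0/24 with all servers in 10.0.0.0/24 gets no useful ordering at all, which is exactly the limitation described above.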


If you do use this option, you will also need to set DisableStrictNameChecking on the file servers hosting the software library so they will respond to the DNS alias address.
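DisableStrictNameChecking is a registry value on each file server – a sketch of setting it from an elevated prompt (restart the Server service afterwards for it to take effect):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1 /f
```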


Pros



  • Clients install software via LAN

  • Suitable for Windows Server 2003 R2 or later

  • Suitable for Windows XP clients or later

  • Lower management due to single GPO for all workstations

  • Low WAN Bandwidth


Cons



  • Software is slow to install if site does not have a copy of the software library.

  • Large infrastructure requirement for hosting multiple copies of Software Library.

  • Difficult to set up, and requires a specific IP address scheme


This option is definitely not recommended; however, it is one you can use if you are not able to configure a DFS Namespace but don’t want the overhead of maintaining lots of Group Policy Objects.


Central Software Library using Branch Cache


BranchCache is an awesome new feature of Windows Server 2008 R2 and Windows 7 that allows clients and servers to cache any SMB or HTTP/S traffic. As Group Policy performs software deployment via a UNC path from an SMB file server, clients can cache any files they pull down via the WAN. This means that after an initial workstation in a site has pulled down the install files, that workstation can act as a temporary cache for other computers on the network, making subsequent installs much quicker. The big advantage of this method is that you don’t need any server infrastructure at remote sites, yet you still get the benefits of reduced WAN traffic and quicker install speeds.



image


In addition, if you have a Public Key Infrastructure in your organisation, it is very easy to enable BranchCache on a server. All the BranchCache clients would then send a copy of the files they download to the BranchCache server in the site so it can act as a “Hosted Cache”. This would reduce the amount of WAN traffic even further, because with distributed caching a workstation would of course need to be turned on to act as a cache for the other workstations.


Tip: By default BranchCache is disabled even if it is installed on a computer. Therefore you need to enable the “Turn on BranchCache” group policy setting.
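On individual Windows 7 machines, the cache mode can also be set directly with netsh – a sketch, where the hosted-cache server name is hypothetical:

```
netsh branchcache set service mode=DISTRIBUTED
netsh branchcache set service mode=HOSTEDCLIENT location=CacheServer.thedspot.local
```

Use the first line for peer-to-peer (distributed) caching, or the second if you have a hosted-cache server in the site.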


Pros



  • Clients install software via the LAN from the second install onwards

  • Lower management due to single GPO for all workstations

  • Low Infrastructure Requirements


Cons



  • Only suitable for Windows Server 2008 R2 and / or Windows 7

  • First client to install will be slower


If you are running Windows 7 and/or Windows Server 2008 R2 in your organisation, then you should really consider implementing BranchCache. It delivers the best of both worlds, as you can implement it with a low amount of infrastructure at your remote sites yet still reduce WAN bandwidth, all using a single GPO/UNC path to deploy the software.


Summary


As you can see, there are many different options available for distributing your software via Group Policy. In selecting a method of deployment that is right for your environment, I would first pick the solution that gives the best end-user experience, and then the one with the lowest administrative overhead.





"