Pushing the Limits of Windows: Paged and Nonpaged Pool

In previous Pushing the Limits posts, I described the two most basic system resources, physical memory and virtual memory. This time I’m going to describe two fundamental kernel resources, paged pool and nonpaged pool, that are based on those, and that are directly responsible for many other system resource limits including the maximum number of processes, synchronization objects, and handles.

Here’s the index of the entire Pushing the Limits series. While each post can stand on its own, they are easiest to follow if read in order.

Pushing the Limits of Windows: Physical Memory

Pushing the Limits of Windows: Virtual Memory

Pushing the Limits of Windows: Paged and Nonpaged Pool

Pushing the Limits of Windows: Processes and Threads

Pushing the Limits of Windows: Handles

Pushing the Limits of Windows: USER and GDI Objects – Part 1

Pushing the Limits of Windows: USER and GDI Objects – Part 2

Paged and nonpaged pools serve as the memory resources that the operating system and device drivers use to store their data structures. The pool manager operates in kernel mode, using regions of the system’s virtual address space (described in the Pushing the Limits post on virtual memory) for the memory it sub-allocates. The kernel’s pool manager operates similarly to the C-runtime and Windows heap managers that execute within user-mode processes.  Because the minimum virtual memory allocation size is a multiple of the system page size (4KB on x86 and x64), these subsidiary memory managers carve up larger allocations into smaller ones so that memory isn’t wasted.

For example, if an application wants a 512-byte buffer to store some data, a heap manager takes one of the regions it has allocated and notes that the first 512-bytes are in use, returning a pointer to that memory and putting the remaining memory on a list it uses to track free heap regions. The heap manager satisfies subsequent allocations using memory from the free region, which begins just past the 512-byte region that is allocated.
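That carve-up can be illustrated with a toy first-fit sub-allocator in Python. This is a sketch only: the names are invented, and the real heap managers add block headers, alignment, and free-list coalescing.

```python
# Toy first-fit sub-allocator over a page-granular region (illustrative only;
# real heap managers add block headers, alignment, and coalescing).
PAGE_SIZE = 4096

class ToyHeap:
    def __init__(self, pages=1):
        # The free list tracks (offset, size) runs within the region.
        self.free = [(0, pages * PAGE_SIZE)]

    def alloc(self, size):
        for i, (off, run) in enumerate(self.free):
            if run >= size:
                # Hand out the front of the run; the remainder stays free.
                rest = (off + size, run - size)
                self.free[i:i + 1] = [rest] if rest[1] else []
                return off
        return None  # a real heap would grow itself with another page-granular allocation

heap = ToyHeap()
first = heap.alloc(512)    # offset 0 is handed out
second = heap.alloc(512)   # satisfied from the free region starting at 512
```

After the first allocation, the free list holds a single run beginning just past the 512-byte region, exactly as described above.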

Nonpaged Pool

The kernel and device drivers use nonpaged pool to store data that might be accessed when the system can’t handle page faults. The kernel enters such a state when it executes interrupt service routines (ISRs) and deferred procedure calls (DPCs), which are functions related to hardware interrupts. Page faults are also illegal when the kernel or a device driver acquires a spin lock. Because spin locks are the only type of lock that can be used within ISRs and DPCs, they must be used to protect data structures that are accessed both from within an ISR or DPC and from other ISRs, DPCs, or code executing on kernel threads. Failure by a driver to honor these rules results in the most common crash code, IRQL_NOT_LESS_OR_EQUAL.

Nonpaged pool is therefore always kept resident in physical memory, and nonpaged pool virtual memory is always assigned physical memory. Common system data structures stored in nonpaged pool include the objects that represent processes and threads, synchronization objects like mutexes, semaphores and events, file objects that represent references to files, and I/O request packets (IRPs), which represent I/O operations.

Paged Pool

Paged pool, on the other hand, gets its name from the fact that Windows can write the data it stores to the paging file, allowing the physical memory it occupies to be repurposed. Just as for user-mode virtual memory, when a driver or the system references paged pool memory that’s in the paging file, an operation called a page fault occurs, and the memory manager reads the data back into physical memory. The largest consumer of paged pool, at least on Windows Vista and later, is typically the Registry, since references to registry keys and other registry data structures are stored in paged pool. The data structures that represent memory mapped files, called sections internally, are also stored in paged pool.

Device drivers use the ExAllocatePoolWithTag API to allocate nonpaged and paged pool, specifying the type of pool desired as one of the parameters. Another parameter is a 4-byte Tag, which drivers are supposed to use to uniquely identify the memory they allocate, and that can be a useful key for tracking down drivers that leak pool, as I’ll show later.
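Because the Tag is just four ASCII bytes stored as a 32-bit value, the tag characters show up literally when you scan a driver image. Here is a quick Python sketch of the packing, purely for illustration (drivers pass this value to ExAllocatePoolWithTag from C):

```python
# A pool tag is four ASCII characters packed little-endian into 32 bits,
# which is why tags like 'Leak' appear literally inside driver images.
def pool_tag(chars):
    assert len(chars) == 4
    return int.from_bytes(chars.encode("ascii"), "little")

print(hex(pool_tag("Leak")))  # 'L'=0x4C, 'e'=0x65, 'a'=0x61, 'k'=0x6B
```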

Viewing Paged and Nonpaged Pool Usage

There are three performance counters that indicate pool usage:

  • Pool nonpaged bytes
  • Pool paged bytes (virtual size of paged pool – some may be paged out)
  • Pool paged resident bytes (physical size of paged pool)
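If you prefer the command line to Performance Monitor, the built-in typeperf utility can sample the same three counters (counter paths as they appear on an English-language system; -si sets the sample interval in seconds):

```
typeperf "\Memory\Pool Nonpaged Bytes" "\Memory\Pool Paged Bytes" "\Memory\Pool Paged Resident Bytes" -si 5
```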

However, there are no performance counters for the maximum size of these pools. The maximums can be viewed with the kernel debugger !vm command, but on Windows Vista and later, using the kernel debugger in local kernel debugging mode requires booting the system in debugging mode, which disables MPEG2 playback.

So instead, use Process Explorer to view both the currently allocated pool sizes and the maximums. To see the maximums, you’ll need to configure Process Explorer to use symbol files for the operating system. First, install the latest Debugging Tools for Windows package. Then run Process Explorer, open the Symbol Configuration dialog from the Options menu, point it at the dbghelp.dll in the Debugging Tools for Windows installation directory, and set the symbol path to point at Microsoft’s symbol server:


After you’ve configured symbols, open the System Information dialog (click System Information in the View menu or press Ctrl+I) to see the pool information in the Kernel Memory section. Here’s what that looks like on a 2GB Windows XP system:


    2GB 32-bit Windows XP

Nonpaged Pool Limits

As I mentioned in a previous post, on 32-bit Windows, the system address space is 2GB by default. That inherently caps the upper bound for nonpaged pool (or any type of system virtual memory) at 2GB, but it has to share that space with other types of resources such as the kernel itself, device drivers, system Page Table Entries (PTEs), and cached file views.

Prior to Vista, the memory manager on 32-bit Windows calculates how much address space to assign each type at boot time. Its formulas take into account various factors, the main one being the amount of physical memory on the system. The amount it assigns to nonpaged pool starts at 128MB on a system with 512MB of RAM and goes up to 256MB for a system with a little over 1GB or more. On a system booted with the /3GB option, which expands the user-mode address space to 3GB at the expense of the kernel address space, the maximum nonpaged pool is 128MB. The Process Explorer screenshot shown earlier reports the 256MB maximum on a 2GB Windows XP system booted without the /3GB switch.

The memory manager in 32-bit Windows Vista and later, including Server 2008 and Windows 7 (there is no 32-bit version of Windows Server 2008 R2), doesn’t carve up the system address space statically; instead, it dynamically assigns ranges to different types of memory according to changing demands. However, it still sets a maximum for nonpaged pool that’s based on the amount of physical memory: either slightly more than 75% of physical memory or 2GB, whichever is smaller. Here’s the maximum on a 2GB Windows Server 2008 system:


    2GB 32-bit Windows Server 2008

64-bit Windows systems have a much larger address space, so the memory manager can carve it up statically without worrying that different types might not have enough space. 64-bit Windows XP and Windows Server 2003 set the maximum nonpaged pool to a little over 400K per MB of RAM or 128GB, whichever is smaller. Here’s a screenshot from a 2GB 64-bit Windows XP system:


2GB 64-bit Windows XP

64-bit Windows Vista, Windows Server 2008, Windows 7 and Windows Server 2008 R2 memory managers match their 32-bit counterparts (where applicable – as mentioned earlier, there is no 32-bit version of Windows Server 2008 R2) by setting the maximum to approximately 75% of RAM, but they cap the maximum at 128GB instead of 2GB. Here’s the screenshot from a 2GB 64-bit Windows Vista system, which has a nonpaged pool limit similar to that of the 32-bit Windows Server 2008 system shown earlier.


2GB 64-bit Windows Vista

Finally, here’s the limit on an 8GB 64-bit Windows 7 system:


8GB 64-bit Windows 7

Here’s a table summarizing the nonpaged pool limits across different version of Windows:

                              32-bit                        64-bit
XP, Server 2003               up to 1.2GB RAM: 32-256MB     min(~400K/MB of RAM, 128GB)
                              > 1.2GB RAM: 256MB
Vista, Server 2008,           min(~75% of RAM, 2GB)         min(~75% of RAM, 128GB)
Windows 7, Server 2008 R2
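The table can also be expressed as a small function. This is a Python sketch using the approximations from this post (“a little over 400K per MB” is rounded to exactly 400KB, “~75%” to exactly 75%, and the pre-Vista 32-bit ramp is a rough interpolation), so treat the outputs as estimates rather than exact kernel values:

```python
# Approximate nonpaged pool maximums from the table above (sizes in MB).
def nonpaged_pool_limit_mb(ram_mb, bits=64, dynamic_kernel=False):
    """dynamic_kernel=True means Vista, Server 2008, Windows 7, Server 2008 R2."""
    if dynamic_kernel:
        cap_mb = 2 * 1024 if bits == 32 else 128 * 1024
        return min(int(ram_mb * 0.75), cap_mb)
    if bits == 32:
        # XP / Server 2003: 128MB at 512MB of RAM, capped at 256MB above ~1.2GB
        return min(max(128, ram_mb // 4), 256)
    return min(ram_mb * 400 // 1024, 128 * 1024)  # ~400K per MB of RAM

print(nonpaged_pool_limit_mb(2048, bits=32, dynamic_kernel=True))  # 1536 (~1.5GB)
```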

Paged Pool Limits

The kernel and device drivers use paged pool to store any data structures that won’t ever be accessed from inside a DPC or ISR or when a spinlock is held. That’s because the contents of paged pool can either be present in physical memory or, if the memory manager’s working set algorithms decide to repurpose the physical memory, be sent to the paging file and demand-faulted back into physical memory when referenced again. Paged pool limits are therefore primarily dictated by the amount of system address space the memory manager assigns to paged pool, as well as the system commit limit.

On 32-bit Windows XP, the limit is calculated based on how much address space is assigned to other resources, most notably system PTEs, with an upper limit of 491MB. The 2GB Windows XP system shown earlier has a limit of 360MB, for example:


2GB 32-bit Windows XP

32-bit Windows Server 2003 reserves more space for paged pool, so its upper limit is 650MB.

Since 32-bit Windows Vista and later have dynamic kernel address space, they simply set the limit to 2GB. Paged pool will therefore run out either when the system address space is full or the system commit limit is reached.

64-bit Windows XP and Windows Server 2003 set their maximums to four times the nonpaged pool limit or 128GB, whichever is smaller. Here again is the screenshot from the 64-bit Windows XP system, which shows that the paged pool limit is exactly four times that of nonpaged pool:


     2GB 64-bit Windows XP

Finally, 64-bit versions of Windows Vista, Windows Server 2008, Windows 7 and Windows Server 2008 R2 simply set the maximum to 128GB, allowing paged pool’s limit to track the system commit limit. Here’s the screenshot of the 64-bit Windows 7 system again:


    8GB 64-bit Windows 7

Here’s a summary of paged pool limits across operating systems:

                              32-bit                          64-bit
XP, Server 2003               XP: up to 491MB                 min(4 * nonpaged pool limit, 128GB)
                              Server 2003: up to 650MB
Vista, Server 2008,           min(system commit limit, 2GB)   min(system commit limit, 128GB)
Windows 7, Server 2008 R2
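The paged pool table can be captured the same way. Again this is a Python sketch of the figures in this post; on Vista and later the value is only a cap, with the real limit tracking the system commit limit:

```python
# Approximate paged pool maximums from the table above (sizes in MB).
def paged_pool_limit_mb(bits, dynamic_kernel, nonpaged_limit_mb=None,
                        server2003=False):
    if dynamic_kernel:
        # Vista and later: a fixed cap; the commit limit governs in practice.
        return 2 * 1024 if bits == 32 else 128 * 1024
    if bits == 32:
        return 650 if server2003 else 491  # XP: 491MB, Server 2003: 650MB
    return min(4 * nonpaged_limit_mb, 128 * 1024)  # 64-bit XP / Server 2003

print(paged_pool_limit_mb(64, False, nonpaged_limit_mb=800))  # 3200
```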

Testing Pool Limits

Because the kernel pools are used by almost every kernel operation, exhausting them can lead to unpredictable results. If you want to witness first hand how a system behaves when pool runs low, use the Notmyfault tool. It has options that cause it to leak either nonpaged or paged pool in the increment that you specify. You can change the leak size while it’s leaking to vary the rate of the leak, and Notmyfault frees all the leaked memory when you exit it:


Don’t run this on a system unless you’re prepared for possible data loss, as applications and I/O operations will start failing when pool runs out. You might even get a blue screen if the driver doesn’t handle the out-of-memory condition correctly (which is considered a bug in the driver). The Windows Hardware Quality Laboratory (WHQL) stresses drivers using the Driver Verifier, a tool built into Windows, to make sure that they can tolerate out-of-pool conditions without crashing, but you might have third-party drivers that haven’t gone through such testing or that have bugs that weren’t caught during WHQL testing.

I ran Notmyfault on a variety of test systems in virtual machines to see how they behaved and didn’t encounter any system crashes, but did see erratic behavior. After nonpaged pool ran out on a 64-bit Windows XP system, for example, trying to launch a command prompt resulted in this dialog:


On a 32-bit Windows Server 2008 system where I already had a command prompt running, even simple operations like changing the current directory and directory listings started to fail after nonpaged pool was exhausted:


On one test system, I eventually saw this error message indicating that data had potentially been lost. I hope you never see this dialog on a real system!


Running out of paged pool causes similar errors. Here’s the result of trying to launch Notepad from a command prompt on a 32-bit Windows XP system after paged pool had run out. Note how Windows failed to redraw the window’s title bar and the different errors encountered for each attempt:


And here’s the start menu’s Accessories folder failing to populate on a 64-bit Windows Server 2008 system that’s out of paged pool:


Here you can see the system commit level, also displayed on Process Explorer’s System Information dialog, quickly rise as Notmyfault leaks large chunks of paged pool and hits the 2GB maximum on a 2GB 32-bit Windows Server 2008 system:


The reason that Windows doesn’t simply crash when pool is exhausted, even though the system is unusable, is that pool exhaustion can be a temporary condition caused by an extreme workload peak, after which pool is freed and the system returns to normal operation. When a driver (or the kernel) leaks pool, however, the condition is permanent and identifying the cause of the leak becomes important. That’s where the pool tags described at the beginning of the post come into play.

Tracking Pool Leaks

When you suspect a pool leak and the system is still able to launch additional applications, Poolmon, a tool in the Windows Driver Kit, shows you the number of allocations and outstanding bytes of allocation by type of pool and the tag passed into calls to ExAllocatePoolWithTag. Various hotkeys cause Poolmon to sort by different columns; to find the leaking allocation type, use either ‘b’ to sort by bytes or ‘d’ to sort by the difference between the number of allocations and frees. Here’s Poolmon running on a system where Notmyfault has leaked 14 allocations of about 100MB each:


After identifying the guilty tag in the left column, in this case ‘Leak’, the next step is finding the driver that’s using it. Since the tags are stored in the driver image, you can do that by scanning driver images for the tag in question. The Strings utility from Sysinternals dumps printable strings in the files you specify (by default a minimum of three characters in length), and since most device driver images are in the %Systemroot%\System32\Drivers directory, you can open a command prompt, change to that directory and execute “strings * | findstr Leak” (substituting the tag you’re after). After you’ve found a match, you can dump the driver’s version information with the Sysinternals Sigcheck utility. Here’s what that process looks like when looking for the driver using “Leak”:
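The strings-and-findstr scan works because the tag bytes sit verbatim in the driver image. The same search can be sketched in a few lines of Python (illustrative only; Strings and Sigcheck remain the convenient tools on a real system):

```python
# Find files in a directory whose contents contain a given pool tag.
import os

def find_tag(directory, tag):
    hits = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            with open(path, "rb") as f:
                if tag in f.read():  # tag bytes appear verbatim in the image
                    hits.append(name)
    return hits

# e.g. find_tag(r"C:\Windows\System32\Drivers", b"Leak")
```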


If a system has crashed and you suspect that it’s due to pool exhaustion, load the crash dump file into the Windbg debugger, which is included in the Debugging Tools for Windows package, and use the !vm command to confirm it. Here’s the output of !vm on a system where Notmyfault has exhausted nonpaged pool:


Once you’ve confirmed a leak, use the !poolused command to get a view of pool usage by tag that’s similar to Poolmon’s. !poolused by default shows unsorted summary information, so specify 1 as the option to sort by paged pool usage and 2 to sort by nonpaged pool usage:


Then use Strings on the system the dump came from to find the driver that is using the tag causing the problem.

So far in this blog series I’ve covered the most fundamental limits in Windows, including physical memory, virtual memory, paged and nonpaged pool. Next time I’ll talk about the limits for the number of processes and threads that Windows supports, which are limits that derive from these.

How to add a new disk to a SQL Server Failover Cluster

When you need to add a new disk to a SQL Server 2008 R2 Failover Cluster (running on Windows Server 2008 R2), you just need to follow these steps:


1. Configure your SAN to present your disks to the Cluster Servers.

2. Detect the new disks in Windows Storage Manager (on both servers):



3. Add the new disk to the Cluster service (go to Server Manager –> Features –> Failover Cluster Manager –> Storage –> right-click and click Add a disk):



4. Go to Failover Cluster Manager –> Services and Applications, and right click SQL Server and then select “Add storage”:



5. This step is important: after step 4 you can see the disk in the cluster service, associated with the SQL Server service, but in SQL Server Management Studio you can’t see the disk! So you just need to add a resource dependency:

Right-click the SQL Server resource, and click Properties:


Click the Dependencies tab, then click the last line to add a new row, select “AND”, and then select the new disk resource:



Now click Apply and OK, and the new disk is available to the SQL Server service!




How to Activate and Use Active Directory Recycle Bin with PowerShell

Important note: this action is irreversible! Once you activate the Active Directory Recycle Bin feature, you will not be able to disable it.


First open Active Directory Module for Windows PowerShell (You can find it on your DC Administrative Tools).

Then type this command:

Enable-ADOptionalFeature -Identity "CN=Recycle Bin Feature,CN=Optional Features,CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=domain,DC=com" -Scope ForestOrConfigurationSet -Target "domain.com"

You need to change "DC=domain,DC=com" and the -Target value "domain.com" to your domain information.


If you receive this error:



You just need to run the Active Directory Module for Windows PowerShell with a Domain Admin account and with “Run as Administrator”.


After doing this, here is the result:



Now the Active Directory Recycle Bin is enabled, and if you need to restore an AD object you just need to list the Recycle Bin objects:

Get-ADObject -SearchBase "CN=Deleted Objects,DC=domain,DC=com" -ldapFilter "(objectClass=*)" -IncludeDeletedObjects | FT ObjectGUID,Name -A




Then you can recover the Object using this command:

Restore-ADObject -Identity dd83eec4-f136-4aed-b1e1-437f7fed4f92
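The listing and restore steps can also be combined by piping the lookup into Restore-ADObject. This is a sketch only; the name filter and domain components are placeholders you must adjust for your environment:

```
Get-ADObject -SearchBase "CN=Deleted Objects,DC=domain,DC=com" -Filter 'Name -like "*SomeDeletedUser*"' -IncludeDeletedObjects | Restore-ADObject
```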




Hope you enjoy ;)


‘Error: The following Active Directory error occurred: Access is denied’ Adding delegation

When you try to add delegation to a specific user in Active Directory Users and Computers, you may receive this error:


The following Active Directory error occurred



You should do this:


You need to go to the domain controllers’ group policy:



Open the Group Policy Tab:



Expand “Default Domain Controllers Policy –> Computer Configuration –> Windows Settings –> Security Settings –> Local Policies –> User Rights Assignment” and locate “Enable computer and user accounts to be trusted for delegation”:



Open this Policy and add the group or user who should have rights to add delegations:



After this, you can try again and you should have no problems adding delegation to the user.


Source: http://support.microsoft.com/kb/250874/en-us

Expand and Extend To Increase Capacity On A Virtual Hard Disk

Before following these instructions… PLEASE back up your machine. The best way to get a good backup of a machine is to do an export, so please export the machine, or at the very least test your backup procedures to make sure they are valid. Also, if you want a way to do this through a GUI, check out the video and go to the end. It’s a very cool shortcut that is much easier than the command prompt, but it requires Windows 2008 R2.




I ran this on a Windows Server 2008 R2 SP1 machine. These instructions will be identical for Windows 7. I am not certain of the changes from earlier versions of the OS; I think these commands are all available in Vista, but I am not sure and I do not have a Vista machine available for testing. The commands in this post must be run while the VHD file is closed: it cannot be attached with Disk Manager and it cannot be attached to a running VM. You also have to be in an elevated command prompt: Start | type cmd | right-click cmd (top of menu) | Run as Administrator.

I have done several posts recently on managing VHDs. I wanted to simplify the process of adding additional space to the C: drive (boot drive) of a virtual machine, and have collapsed it down to a few simple steps using diskpart. Thanks to all who posted comments suggesting that I write this post. If you have snapshots on the volume or are using differencing disks, check out How To Merge a Chained Differential VHD Disk So It Can Be Extended and Expanded to learn what you need to do before you can expand and extend these disks.

Here are the commands I ran to expand and extend my disk (all together). Note that your commands will be different: you will have to put in your own name and path, and look to see what volume number your disk is so you are working with the correct volume. A couple of things came up as I worked through these instructions again and produced a video; the most important is that the disk has to be completely closed before you can start the machine up.

I did a video walkthrough of these instructions. You can download the video from Expand and Extend VHD.vmv

diskpart
select vdisk file="D:\W510\BRS Fargo – {SQL, FEP Server ConfigMgr } .10\BRS-FargoNew.vhd"
list vdisk
expand vdisk maximum=40000
attach vdisk
list disk

– If the disk is not online, use “online disk” to bring it online

list volume
select volume 9
extend
list volume
detach vdisk
exit

Now let’s look in more detail at what I am doing with each command…

Command – Description
diskpart – Launches the DiskPart utility.
select vdisk file="D:\W510\BRS Fargo – {SQL, FEP Server ConfigMgr } .10\BRS-FargoNew.vhd" – Selects the VHD file. Note that if the path or the file name has spaces, you have to put quotes around it.
list vdisk – Shows the list of vdisks. The * at the left marks the one that is selected.
expand vdisk maximum=40000 – Changes the size of the vdisk to the new size given by the maximum= parameter, which is specified in megabytes; 40000 MB is almost 40GB.
attach vdisk – Once the disk is expanded you have to mount it to work on it; this command mounts the disk. If you look in Disk Manager or Windows Explorer after you run this, you will see a new drive. The attach applies to the currently selected vdisk, which is the one we just expanded.
list disk – Shows the list of all mounted disks, including any mounted vdisks. The disk we want is already selected because we just attached it; if it were not, you could select it with select disk #. Notice that we now have 19GB of free space thanks to the expand vdisk command we ran above.
online disk – If the disk is not showing online, brings it online.
list volume – Lists volumes (partitions). The ### column is the most important: it holds the number you need to select the volume you will work with. Notice that the volume is currently 19GB.
select volume 9 – Selects the volume to work with; in my case that is volume 9.
extend – Extends the currently selected volume to use all contiguous available space on the same disk.
list volume – Run again to show the new size of 39GB; when we ran it before, it was only 19GB.
detach vdisk – Dismounts the vdisk. Taking the volume offline is required because Hyper-V cannot load a disk that is mounted.
exit – Exits the DiskPart utility.
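The whole sequence can also be run non-interactively by putting the commands in a script file and passing it to diskpart with /s. This is a sketch; the path and volume number are the ones from this walkthrough and must match your own system:

```
rem expand-extend.txt -- run with: diskpart /s expand-extend.txt
select vdisk file="D:\W510\BRS Fargo – {SQL, FEP Server ConfigMgr } .10\BRS-FargoNew.vhd"
expand vdisk maximum=40000
attach vdisk
select volume 9
extend
detach vdisk
exit
```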

And finally, a screenshot of what I typed and the result.


SharePoint 2010: Load Balance Central Administration

I am logged on to LAB-INDX as PINTOLAKE\Service-SharePoint. LAB-INDX is one of the servers hosting Central Administration; to avoid double hops, I will configure the farm directly on a server which hosts Central Administration.

Open “SharePoint 2010 Central Administration” from the “Microsoft SharePoint 2010 Products” under All Programs

Select “Configure alternate access mappings”

On the right side of the screen pull down the menu next to “Alternate Access Mapping Collection”

Select “Change Alternate Access Mapping Collection”

Select “Central Administration”

Notice that both of the servers we added the Central Administration web site to are present; however, if you look at the URL on the right, it’s doing a redirect to http://lab-indx:22222, which is not what we want.

Click “Edit Public URLs”

The default is “http://lab-indx:22222”. I added “http://lab-indx-mr:22222” for the Intranet zone and “http://spca.pintolake.net” under Custom

CUSTOM is optional; it will require a DNS entry pointing “spca.pintolake.net” to the Load Balanced Virtual IP Address of the two indexing servers (LAB-INDX and LAB-INDX-MR)

Now our public URLs for the zone are correct.

If you did not enter a “Custom” zone, you do not need to do the next step. At this point you will be able to access the Central Administration web site via “http://lab-indx:22222” and “http://lab-indx-mr:22222”, which should be fine in most cases.



Creating an Alternate Access Mapping does not change the bindings in IIS; we need to manually edit the bindings on all servers that are hosting the web application for that host header entry.

We will repeat the “Edit HOST HEADER” process on both servers which you are hosting the Central Administration site. In this case I am making this change on LAB-INDX and LAB-INDX-MR

Open “Internet Information Services (IIS) Manager” from “Administrative Tools”

Right-click the “SharePoint Central Administration v4” site and select “Edit Bindings…”

Click “Add…”

Enter “spca.pintolake.net”

Press “OK”

Click “Close” and exit IIS Manager

Remember to repeat this process on both servers hosting the Central Administration site; in this case, LAB-INDX and LAB-INDX-MR.


Try to access the Central Administration site from all three URLs. If you cannot access the load-balanced IP, it could be that you need to set the DisableLoopbackCheck registry value on all of your SharePoint 2010 servers, including the SQL Server.

I was able to access the Central Administration web site via all three URLs.

Perform NLB testing on http://spca.pintolake.net by disabling NLB on each server then testing access from a remote location.

Moving an SSL certificate from Windows 2003 Server to Windows Server 2008

In this article I will move an SSL certificate from a Windows 2003 Server box to a Windows Server 2008 machine. We will need to export the certificate on the 2003 Server and import and configure the SSL certificate on the 2008 Server. This article is divided into 4 sections:

  1. Export SSL Certificate on Windows 2003 Server
  2. Import an SSL Certificate on Windows Server 2008
  3. Configure an SSL Certificate on Windows Server 2008
  4. Testing the website

Export SSL Certificate on Windows 2003 Server


  • Open IIS manager on the 2003 Server
  • Under the “Directory Security” tab, select “Server Certificate”

  • Press “Next” to continue

  • Select “Export the current certificate to a .pfx file”
  • Press “Next” to continue

  • Export the .pfx file to the desired location; you will need to copy this file to the 2008 Server, so make sure you know where you saved it
  • Press “Next” to continue

  • Enter a password for the certificate

  • Remember the location of the .pfx file
  • Press “Next” to continue

  • Press “Finish” to complete the export

Import an SSL Certificate on Windows Server 2008


Copy the PFX file to the 2008 Server. You will need to import it so remember the location you copy it to.

  • Open Internet Information Services (IIS) Manager
  • Select the SERVERNAME in the left-side Connections panel, then under “Features View”, double-click “Server Certificates”

  • Select “Import” on the right hand side of the screen
  • Select the path of the .pfx file
  • Enter the password for the SSL certificate
  • Select “Allow the certificate to be exported”, if you want this option. I like to select it just in case I need to export the SSL certificate again
  • Press OK to continue

  • You should now see the SSL certificate in the list of certificates
  • This concludes installation of the SSL certificate; we now need to configure it

Configure an SSL Certificate on Windows Server 2008


After we import the SSL certificate in Windows Server 2008 we need to configure the website to use the certificate

  • Right-click the web site for which the SSL certificate was imported
  • Select “Edit Bindings…”

  • Select “Add…”
  • Select the “Type” as “https”
  • Select “IP Address” as “All Unassigned”. NOTE: You can assign multiple SSL certificates to a server as long as each SSL certificate uses a DIFFERENT IP ADDRESS, because each IP address can have only one certificate bound to port 443 at a time with IIS
  • Select the “SSL certificate”, select the SSL certificate that you have imported for this website
  • Press OK to continue

You should see the binding for “https” on the list of bindings now

  • Press “Close” to continue

We could stop the configuration here if we wanted users to access the site via HTTP or HTTPS; I want to force users to use HTTPS, so we will make the next configuration change

  • Under the “Features View”, double click “SSL Settings”

  • Check “Require SSL” and press “Apply”

Testing the website



To test, go to https://www.sitename.com; you should reach the site with no problems.


If you go to http://www.sitename.com you will get a 403.4 error. You can automatically redirect HTTP to HTTPS with some minor tweaks; search my site for HTTP to HTTPS and you will find some useful articles on that change

Install and Configure NLB (WLBS) on Windows Server 2008

In this article I will load balance 2 servers and take you through the process step-by-step. Load Balancing takes 2 or more servers and lets them share one IP address so both servers can serve client requests. At the end of this article you should be able to configure NLB.

Gathering Information

Log onto both of the servers and run IPCONFIG /ALL from the command prompt. We need the name, domain and IP address of each server that will be in the NLB cluster. We will also need to make up an additional name for the cluster; in this example we will use PL2008-V for the virtual cluster name.

The 2 servers we will be Load Balancing are PL2008-01 and PL2008-02. The virtual cluster name will be PL2008-V. So if this was a web server users would go to http://PL2008-V, depending how we configure NLB either PL2008-01, PL2008-02 or both servers will service the web request.

PL2008-01.pintolake.net Server 1
PL2008-02.pintolake.net Server 2
PL2008-V.pintolake.net Virtual cluster name and IP address of Servers 1/2

In this example both servers have only one network card. If you have multiple network cards you will still be able to load balance the two servers. You need to configure one NIC per server for NLB; both NICs should be on the same VLAN and should be able to contact each other.



Installation of NLB feature on all NLB nodes

This should be done on ALL NODES in the NLB Cluster. In this case we are performing this installation on PL2008-01 and PL2008-02.

Open Server Manager; you can open it several different ways in Windows Server 2008. Probably the quickest is to right-click “My Computer” and choose “Manage”. Another way is to open “Control Panel”, go to “Programs and Features” and select “Turn Windows features on or off”. A third way is the “Server Manager” option under Administrative Tools.

  • Select “Features” from the Server Manager menu on the left
  • Press “Add Features”

  • Select the checkbox next to “Network Load Balancing”
  • Press “Next”

  • Press “Install”

Installation will proceed to install the necessary components.

Installation has succeeded. It is highly recommended that you repeat this process on all nodes in the NLB cluster at this point, before continuing with configuration.

  • Press “Close”

NOTE: Network Load Balancing may also be installed from a command prompt with elevated privileges (right click on the command prompt in the Start menu and select Run as administrator) by running the servermanagercmd -install nlb command.

For example:

C:\Windows\system32>servermanagercmd -install nlb


Start Installation...

[Installation] Succeeded: [Network Load Balancing].


Success: Installation succeeded.

Configuring NLB on NODE 1 (PL2008-01)

Network Load Balanced clusters are built using the Network Load Balancing Manager, which you can start from the Start -> All Programs -> Administrative Tools menu or from a command prompt by executing nlbmgr.

  • Under the Cluster Menu option select “New”

  • Enter the first node in the cluster which is PL2008-01
  • Press “Connect”


You will have the option to choose which network adapter you want to use; the NIC should be on the same subnet as the other servers in the NLB cluster.

  • Press “Next”

  • Enter the Priority ID as 1 (each node in the NLB cluster must have a UNIQUE ID)
  • Make sure the correct adapter was selected under “Dedicated IP Address”
  • Select “Started” for the “Initial host state” (this tells NLB whether you want this node to participate in the cluster at startup)
  • Press “Next”

  • Press “Add”
  • Enter the Cluster IP and Subnet mask
  • Press “OK”

You can add multiple IP addresses for the cluster; enter as many as you need.

  • Make sure the “Cluster IP addresses” are correct
  • Press “Next”

  • Select the IP Address for this cluster
  • Enter the NLB address “PL2008-V.pintolake.net”
  • Select “Unicast” as the “Cluster operation mode”
  • Press “Next”

Unicast vs Multicast

Unicast/Multicast determines how the MAC address for the virtual IP is presented to the routers. In my experience I have almost always used Multicast; if you use it, you should enter a persistent (static) ARP entry on all upstream switches, or you will not be able to ping the servers remotely.

In the unicast method:

  • The cluster adapters for all cluster hosts are assigned the same unicast MAC address.
  • The outgoing MAC address for each packet is modified, based on the cluster host’s priority setting, to prevent upstream switches from discovering that all cluster hosts have the same MAC address.

In the multicast method:

  • The cluster adapter for each cluster host retains the original hardware unicast MAC address (as specified by the hardware manufacturer of the network adapter).
  • The cluster adapters for all cluster hosts are assigned a multicast MAC address.
  • The multicast MAC is derived from the cluster’s IP address.
  • Communication between cluster hosts is not affected, because each cluster host retains a unique MAC address.

Selecting the Unicast or Multicast Method of Distributing Incoming Requests http://technet.microsoft.com/en-us/library/cc782694.aspx
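The static ARP entry mentioned above needs the multicast MAC that NLB derives from the cluster IP. As a rough sketch (assuming the standard 03-BF prefix NLB uses for non-IGMP multicast; always verify the actual MAC shown in NLB Manager's cluster properties before configuring a switch), you can compute it like this:

```python
# Sketch: derive the multicast MAC address that NLB constructs from a
# cluster IP. Assumption: NLB's non-IGMP multicast MAC is the 03-BF
# prefix followed by the four octets of the cluster IP -- confirm the
# value NLB Manager displays before putting it in a switch config.
def nlb_multicast_mac(cluster_ip: str) -> str:
    raw = "03bf" + "".join(f"{int(o):02x}" for o in cluster_ip.split("."))
    # Format in Cisco-style dotted groups of four hex digits
    return ".".join(raw[i:i + 4] for i in range(0, 12, 4))

# Hypothetical cluster IP used purely for illustration
print(nlb_multicast_mac("192.168.1.50"))  # -> 03bf.c0a8.0132
```

On a Cisco switch the corresponding static ARP entry would look something like `arp 192.168.1.50 03bf.c0a8.0132 ARPA`; consult your switch documentation for the exact syntax.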




I am leaving all the defaults for the port rules; by default it's set to all ports with Single affinity, which is sticky. For more information on port rules, see my note below.

  • Press “Finish”

NOTE: Add/Edit Port Rule Settings

For most scenarios I would keep the default settings. The most important setting is probably the filtering mode. "Single" works well for most web applications; it maintains a user's session on one server, so if a user's requests go to PL2008-01, PL2008-01 will continue to serve that user for the duration of the session.


None

  • You want to ensure even load balancing among cluster hosts.
  • Client traffic is stateless (for example, HTTP traffic).

Single

  • You want to ensure that requests from a specific client (IP address) are sent to the same cluster host.
  • Client state is maintained across TCP connections (for example, HTTPS traffic).

Class C

  • Client requests from a Class C IP address range (instead of a single IP address) are sent to the same cluster host.
  • Clients use multiple proxy servers to access the cluster, and they appear to have multiple IP addresses within the same Class C IP address range.
  • Client state is maintained across TCP connections (for example, HTTPS traffic).

For more information on this please see this TechNet article:

Specifying the Affinity and Load-Balancing Behavior of the Custom Port Rule http://technet.microsoft.com/en-us/library/cc759039.aspx
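To make the three affinity modes concrete, here is a conceptual sketch, not NLB's actual hashing algorithm, of how each mode chooses the key that maps a client to a cluster host. The host names are the two nodes from this article:

```python
# Conceptual sketch of NLB affinity modes -- illustrative only, not the
# real NLB hash. Host names are the two nodes from this article.
import hashlib

HOSTS = ["PL2008-01", "PL2008-02"]

def pick_host(client_ip: str, client_port: int, affinity: str) -> str:
    if affinity == "none":
        # Every connection hashed independently (good for stateless HTTP)
        key = f"{client_ip}:{client_port}"
    elif affinity == "single":
        # All connections from one client IP stick to one host (sticky sessions)
        key = client_ip
    elif affinity == "class-c":
        # All clients in the same /24 network stick to one host
        key = ".".join(client_ip.split(".")[:3])
    else:
        raise ValueError(f"unknown affinity: {affinity}")
    digest = hashlib.sha256(key.encode()).hexdigest()
    return HOSTS[int(digest, 16) % len(HOSTS)]
```

With Single affinity, repeated connections from the same client IP always land on the same node regardless of source port; with Class C, clients at 10.1.2.3 and 10.1.2.99 land on the same node because they share a /24.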


You should see a couple of things in the NLB Manager that will let us know this node successfully converged on our new PL2008-V.pintolake.net NLB cluster:

  • Make sure the node’s status changes to “Converged”
  • Make sure you see a “succeeded” message in the log window

Configuring NLB for NODE 2 (PL2008-02)

We will configure PL2008-02 from PL2008-01. If we wanted to configure this from PL2008-02 then we would need to connect to the PL2008-V cluster first then add the host to the cluster.

  • Right click the cluster name “PL2008-V.pintolake.net” and select “Add Host to Cluster”

  • Enter PL2008-02 and press “Connect”

A list of Network adapters will show up

  • Select the network adapter you want to use for Load Balancing
  • Press “Next”

This step is very important: each node in the NLB cluster must have a unique identifier, which is used to identify the node within the cluster.

  • Enter the Priority ID as 2 (each node in the NLB cluster must have a UNIQUE ID)
  • Make sure the correct adapter was selected under “Dedicated IP Address”
  • Select “Started” for the “Initial host state” (this tells NLB whether you want this node to participate in the cluster at startup)
  • Press “Next”

  • Press “Finish”

You should see a couple of things in the NLB Manager that will let us know both nodes successfully converged on our new PL2008-V.pintolake.net NLB cluster:

  • Make sure that both node’s status changes to “Converged”
  • Make sure each node has a unique “host priority” ID
  • Make sure each node is “started” under “initial host state”
  • Make sure you see a “succeeded” message in the log window for the second node

A closer look at the configuration information for this NLB cluster


  • Go to the command prompt and type “wlbs query”; as you can see, HOST 1 and HOST 2 have converged successfully on the cluster, which means things are working well
  • Ping each server locally and remotely
  • Ping the virtual IP locally and remotely – you should do this three times from each location, once for each scenario below. If you cannot ping remotely you may need to add a static ARP entry in your switches and/or routers where the host machines reside
    • 1 – Both nodes up
    • 2 – Node 1 down
    • 3 – Node 2 down

NLB Documentation (from Windows Help)

Availability, scalability, and clustering technologies

Windows Server 2008 provides two clustering technologies: failover clusters and Network Load Balancing (NLB). Failover clusters primarily provide high availability; Network Load Balancing provides scalability and at the same time helps increase availability of Web-based services.

Your choice of cluster technologies (failover clusters or Network Load Balancing) depends primarily on whether the applications you run have long-running in-memory state:

Failover clusters are designed for applications that have long-running in-memory state, or that have large, frequently updated data states. These are called stateful applications, and they include database applications and messaging applications. Typical uses for failover clusters include file servers, print servers, database servers, and messaging servers.

Network Load Balancing is intended for applications that do not have long-running in-memory state. These are called stateless applications. A stateless application treats each client request as an independent operation, and therefore it can load-balance each request independently. Stateless applications often have read-only data or data that changes infrequently. Front-end Web servers, virtual private networks (VPNs), File Transfer Protocol (FTP) servers, and firewall and proxy servers typically use Network Load Balancing. Network Load Balancing clusters can also support other TCP- or UDP-based services and applications.

Network Load Balancing overview

The Network Load Balancing (NLB) service enhances the availability and scalability of Internet server applications such as those used on Web, FTP, firewall, proxy, virtual private network (VPN), and other mission-critical servers.

What are NLB clusters?

A single computer running Windows can provide a limited level of server reliability and scalable performance. However, by combining the resources of two or more computers running one of the products in Windows Server 2008 into a single virtual cluster, NLB can deliver the reliability and performance that Web servers and other mission-critical servers need.

Each host runs a separate copy of the desired server applications (such as applications for Web, FTP, and Telnet servers). NLB distributes incoming client requests across the hosts in the cluster. The load weight to be handled by each host can be configured as necessary. You can also add hosts dynamically to the cluster to handle increased load. In addition, NLB can direct all traffic to a designated single host, which is called the default host.

NLB allows all of the computers in the cluster to be addressed by the same set of cluster IP addresses, and it maintains a set of unique, dedicated IP addresses for each host. For load-balanced applications, when a host fails or goes offline, the load is automatically redistributed among the computers that are still operating. When a computer fails or goes offline unexpectedly, active connections to the failed or offline server are lost. However, if you bring a host down intentionally, you can use the drainstop command to service all active connections prior to bringing the computer offline. In any case, when it is ready, the offline computer can transparently rejoin the cluster and regain its share of the workload, which allows the other computers in the cluster to handle less traffic.

Hardware and software considerations for NLB clusters

  • NLB is installed as a standard Windows networking driver component.
  • NLB requires no hardware changes to enable and run.
  • NLB Manager enables you to create new NLB clusters and to configure and manage clusters and all of the cluster’s hosts from a single remote or local computer.
  • NLB lets clients access the cluster by using a single, logical Internet name and virtual IP address—known as the cluster IP address (it retains individual names for each computer). NLB allows multiple virtual IP addresses for multihomed servers.


In the case of virtual clusters, the servers do not need to be multihomed to have multiple virtual IP addresses.

NLB can be bound to multiple network adapters, which allows you to configure multiple independent clusters on each host. Support for multiple network adapters is different from virtual clusters in that virtual clusters allow you to configure multiple clusters on a single network adapter.

Installing the NLB feature

To use Network Load Balancing (NLB), a computer must have only TCP/IP on the adapter on which NLB is installed. Do not add any other protocols (for example, IPX) to this adapter. NLB can load balance any application or service that uses TCP/IP as its network protocol and is associated with a specific Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) port.

To install and configure NLB, you must use an account that is listed in the Administrators group on each host. If you are not using an account in the Administrators group as you install and configure each host, you will be prompted to provide the logon credentials for such an account. To set up an account that NLB Manager will use by default: in NLB Manager, expand the Options menu, and then click Credentials. We recommend that this account not be used for any other purpose.

You can use Initial Configuration Tasks or Server Manager to install NLB. To install NLB, in the list of tasks, click Add features and in the list of features in the wizard, click Network Load Balancing.

Managing NLB

Server roles and features are managed by using Microsoft Management Console (MMC) snap-ins. To open the Network Load Balancing Manager snap-in, click Start, click Administrative Tools, and then click Network Load Balancing Manager. You can also open Network Load Balancing Manager by typing Nlbmgr at a command prompt.

Additional references for NLB

To learn more about NLB, you can view the Help on your server. To do this, open Network Load Balancing Manager as described in the previous section and press F1.

For deployment information for NLB, see http://go.microsoft.com/fwlink/?LinkId=87253

For instructions on how to configure NLB with Terminal Services, see http://go.microsoft.com/fwlink/?LinkId=80406

For operations information for NLB, see http://go.microsoft.com/fwlink/?LinkId=87254

For troubleshooting information for NLB, see http://go.microsoft.com/fwlink/?LinkId=87255

Disable the Shutdown Event Tracker in Server 2003/2008

A nice feature that can sometimes be annoying is the Shutdown Event Tracker, which asks you to record a reason every time you shut down a server. While I do understand the auditing requirements for having this in place on production servers, this feature is unnecessary for development and test servers and can be disabled.

To disable the Shutdown Event Tracker we need to open the Group Policy Editor: press “START > RUN”, type “GPEDIT.MSC”, then press ENTER.

From the navigation menu on the left side, navigate to “Computer Configuration > Administrative Templates > System”, then on the right side double-click “Display Shutdown Event Tracker”.

Change the option from “Not Configured” to “Disabled”, press “Apply”, then press “OK”.

Note that you can also use this to enable the Shutdown Event Tracker in Vista, although I have no idea why someone would want to do that.

A restart is not required for this change to take effect.
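If you prefer to script this rather than click through GPEDIT, the policy can also be applied as a .reg file. The value names below are my understanding of what the “Display Shutdown Event Tracker” policy writes to the registry, so treat them as an assumption and verify against GPEDIT before rolling this out:

```reg
Windows Registry Editor Version 5.00

; Assumed registry mapping of the "Display Shutdown Event Tracker"
; policy -- verify these value names with GPEDIT before deploying
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Reliability]
"ShutdownReasonOn"=dword:00000000
"ShutdownReasonUI"=dword:00000000
```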

Enable Remote Desktop:

In this article we will change the default listening port for RDC/RDP from 3389 to 5555. This is useful when you want to prevent external or internal users from scanning port 3389 to see what computers are available to connect to. Sure, they can scan port 5555, but it is another step for an intruder, who will need to figure out what is running on port 5555 once they see it is open.

Right click “My Computer”

Select “Properties”

Click “Advanced system settings”, depending on your version of Windows you might be able to skip this step

Select the “Remote” tab

Select “Allow users to connect remotely to this computer” or “Allow connections from computers running any version of Remote Desktop”

Press “Select Remote Users” or “Select Users”

Select the users you want to be able to login remotely.

Press “OK”, until you close out of System Properties

By default RDC/RDP runs on port 3389. When you connect using Remote Desktop Connection it uses port 3389 even though you cannot see it.

Changing the RDP Port:

Select Start, Run then type “Regedit”

Press “OK”

Locate and then click the following registry subkey:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp

You can see the “PortNumber” value is set to 3389 by default.

Select “PortNumber”

On the Edit menu, click Modify

Select “Decimal” – *very important*

Type the new port number of “5555”

Click “OK”

You can close REGEDIT at this point. A restart or reboot is NOT required for this change.
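Equivalently, instead of editing the value by hand you can apply the change as a .reg file. Note that dwords in .reg files are written in hexadecimal, and 5555 decimal is 15B3 hex:

```reg
Windows Registry Editor Version 5.00

; 0x15B3 = 5555 decimal -- the new RDP listening port
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp]
"PortNumber"=dword:000015b3
```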

Open Remote Desktop Connection (“mstsc” from the run window, or you can usually find this under “Start > All Programs > Accessories”)

Type the SERVERNAME of the server on which you changed the RDP port and append :5555 to it

So you should enter SERVERNAME:5555

Press “Connect”

Be sure to open port 5555 on any firewalls that may be in the way
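On Windows Firewall, for example, a rule for the new port can be added from an elevated command prompt (the rule name here is arbitrary):

```bat
rem Allow inbound TCP 5555 for the relocated RDP listener
netsh advfirewall firewall add rule name="RDP on 5555" dir=in action=allow protocol=TCP localport=5555
```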
