Configuration of a two-node file server failover cluster – Part 2


A failover cluster configuration includes two (or more) server nodes that share external storage. Based on iSCSI technology, StarWind from StarWind Software Inc. lets you create external storage in a Windows environment without implementing expensive Fibre Channel or external SCSI solutions. With StarWind you can create a shared disk array on a host running Microsoft Windows.

An example of a two-node failover cluster using StarWind:

image

Requirements for Windows Server 2008 R2 Failover Cluster

Here’s a review of the minimum requirements to create a Windows Server 2008 R2 Cluster:

    • Two or more compatible servers: The hardware should be compatible across nodes; it is highly recommended to always use the same type of hardware when creating a cluster.
    • Shared storage: This is where we can use the StarWind iSCSI SAN software.
    • Two network cards on each server: one for the public network (from which we usually access the Active Directory network) and a private one for the heartbeat between servers. This is actually an optional requirement, since using a single network card is possible, but it is not suitable for almost any environment.
      When we are using the iSCSI protocol for our shared storage, Microsoft recommends three network cards on each host: public network, private network, and one dedicated to iSCSI communication from the servers to the storage, which in our case will be a server running the StarWind iSCSI software.
    • Windows Server 2008 R2 Enterprise or Datacenter Edition for the hosts that will be part of the cluster. Always keep in mind that Failover Clustering is not supported in the Standard Edition.
    • All hosts must be members of an Active Directory domain. To install and configure a cluster we don’t need a Domain Admin account, but we do need a domain account that is included in the local Administrators group on each host.
    • DNS host records for all nodes must be configured.
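Before moving on, the edition and domain-membership requirements can be spot-checked from a PowerShell prompt on each node. A quick sketch (the WMI classes are standard; adjust the adapter check for your environment):

```shell
# Report the OS edition (must be Enterprise or Datacenter for clustering)
(Get-WmiObject Win32_OperatingSystem).Caption

# Confirm the host is joined to an Active Directory domain
(Get-WmiObject Win32_ComputerSystem).Domain

# List enabled network adapters to verify at least two (public + heartbeat) are present
Get-WmiObject Win32_NetworkAdapter -Filter "NetEnabled=True" | Select-Object Name, NetConnectionID
```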
Requirements for StarWind iSCSI SAN Software

Here are the requirements for installing the component which will be in charge of receiving the iSCSI connections:

      • Windows Server 2008 or Windows Server 2008 R2
      • 10 GB of disk space for StarWind application data and log files
      • [Highly Recommended] 4 GB of RAM
      • 1 Gigabit Ethernet or 10 Gigabit Ethernet.

Installing StarWind iSCSI SAN Software:

Perform the steps below on the Windows Server 2008 machine that you want to use as the SAN:

Download the StarWind software from http://www.starwindsoftware.com/starwind-free

After you’ve downloaded the installation file, just double click it and the wizard will start.


Follow the wizard as you would any installation. In the process you will find one of its interesting features: you can install the service separately from the console used to administer StarWind iSCSI.


This way you can install the console on any compatible machine to access the server or servers running StarWind iSCSI and manage storage, permissions, etc. In this case, I’ll select the full installation.

The next steps are pretty straightforward, so you won’t have any problems. Near the end you’ll get a warning that the Microsoft iSCSI Initiator Service must be running before the StarWind iSCSI Service is installed.


You just need to open the “Services” console, start the service, and set its startup type to Automatic.
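The same change can be made from an elevated command prompt; MSiSCSI is the service name of the Microsoft iSCSI Initiator:

```shell
:: Set the Microsoft iSCSI Initiator Service to start automatically
sc config MSiSCSI start= auto

:: Start the service now
net start MSiSCSI
```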


After you click “Install”, the process only takes a few seconds. You will also see some drivers being installed on the operating system; click “Install” when prompted.


Preparing Quorum Volume:

Start the StarWind iSCSI console. On the “General” screen we’ll find summary information plus how to connect to a local or remote StarWind host.


In the “Configuration” section we can find the common StarWind iSCSI parameters, for example the “Network” options, which enable iSCSI communications (TCP port 3260) on any of the identified network adapters.


If we are using a dedicated LAN/VLAN to separate our iSCSI traffic, as recommended, then we should enable only the IP address used for that purpose.
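If Windows Firewall is active on the StarWind host, the iSCSI port must also be reachable from the cluster nodes. A minimal sketch (the rule name is arbitrary):

```shell
:: Allow inbound iSCSI traffic on TCP port 3260
netsh advfirewall firewall add rule name="StarWind iSCSI" dir=in action=allow protocol=TCP localport=3260
```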

Launch the StarWind Management Console by selecting Start -> All Programs -> StarWind Software -> StarWind -> StarWind. After the console is launched, its icon appears in the system tray. Double-click the icon with the left mouse button, or single-click it with the right button and select the Start Management pop-up menu item. From the StarWind Servers tree, select the computer you wish to connect to. Right-click the desired host (computer) and select the Connect menu item. You will be prompted to enter the login and password; the defaults are root and starwind. You can always change them later.

With the host added, we can start creating the storage that will be published through iSCSI: right-click the server, select “Add target”, and a new wizard will appear.


Enter the “Target alias” that will identify the LUN we are about to create and later configure for the cluster. In my case I’m using a simple name, “w2k8r2-clstr”; click “Next”.


Since we are going to use hard drives to present our storage, in “Storage Type” select “Hard Disk” and click “Next”.


In “Device Type”, note that we can present either physical or virtual drives to our clients using iSCSI. We are going to select “Basic Virtual”, which creates a file (.img) that represents the LUN; click “Next”.


Select “Image File device” and click on “Next”.


Since we are creating a new one, select “Create new virtual disk” and click on “Next”.


If you have decided to create a new virtual disk, specify the location and name of the virtual disk you wish to create. You also have to provide the virtual disk size in megabytes, and check any additional parameters you wish to use. Refer to the online help for details on those additional parameters (Compressed and Encrypted).

Here I’m using a separate drive where I’m going to save all of my LUNs.


In the cache parameters, leave the default option, “Normal (no caching)”, selected; click “Next”.


In the last screen, just click on “Finish” and we’ll have our LUN ready.

Optionally, and recommended, review the options for “CHAP permissions” and “Access Rights”. With these options we can configure all the parameters needed for secure environments.

Once we’ve completed this, we can access this file from a Windows Server 2008 R2 host.

Preparing Cluster Nodes:

Change the following settings on Node 1:

Network settings:

Each adapter will be assigned a static IP address. Select the “Use the following IP address” option and type the IP address you wish to use. The subnet mask and DNS server address must also be provided. All values must be chosen correctly for the corporate LAN that the cluster will be part of. As this interface is for the public network, a default gateway must also be assigned.

image
Press the OK button.

Just as was done for the first network adapter, assign appropriate values to the TCP/IP configuration of the second network adapter using the following example image as guidance. This interface is used for iSCSI target storage communications and a default gateway need not be specified.
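The same TCP/IP settings can be applied from the command line. A sketch for Node 1, assuming it uses 192.168.1.21 for the LAN and 192.168.2.21 for the iSCSI/heartbeat network (to match the Node 2 addresses given later), and that the connections are named “Public” and “iSCSI”; substitute your own interface names, gateway, and DNS server:

```shell
:: Public (LAN) adapter: static address with default gateway
netsh interface ip set address name="Public" static 192.168.1.21 255.255.255.0 192.168.1.1

:: DNS server for the public adapter (use your domain's DNS server)
netsh interface ip set dns name="Public" static 192.168.1.10

:: iSCSI/heartbeat adapter: static address, no default gateway
netsh interface ip set address name="iSCSI" static 192.168.2.21 255.255.255.0
```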

image

Configuring iSCSI initiator

Launch the Microsoft iSCSI Software Initiator application from Administrative Tools -> iSCSI Initiator.

image

Select the Discovery Tab.

In the Target Portals group, click the Add Portal… button.

image


In the Add Target Portal dialog, enter the IP address or DNS name of the StarWind target server.

image

Click on the Targets tab. Select the IQN of the target just added.

In my case, I’ve created two LUNs available for the cluster.


Press the Log On… button.

The Log On to Target dialog now appears. In this dialog, check the “Automatically restore this connection when the system boots” checkbox to make the connection persistent.
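The same discovery and persistent login can be scripted with the built-in iscsicli utility. A sketch; the portal address and target IQN shown are placeholders for your StarWind server’s actual values:

```shell
:: Register the StarWind server as a target portal
iscsicli QAddTargetPortal 192.168.2.100

:: List discovered targets to find the target IQN
iscsicli ListTargets

:: Log on persistently (restored at boot); the asterisks accept defaults
iscsicli PersistentLoginTarget iqn.2008-08.com.starwindsoftware:target1 T * * * * * * * * * * * * * * * 0
```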

image

Initializing, formatting and creating partitions
When the StarWind Disks are connected, they show up on the initiator machine as new disk devices. Before these devices can be used as cluster disks, they have to be initialized and formatted. Launch the Computer Management console.


Bring disks online. Press the right mouse button over the disk and select Online.

image

Initialize the Disks. Press the right mouse button over the Disk and select Initialize Disk. Follow the wizard to initialize the new disks.

image

Right-click over the unallocated space and select New Simple Volume. Follow the instructions in the wizard to create an NTFS partition for use as the quorum disk.
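The online/initialize/format sequence can also be scripted with diskpart. A sketch assuming the new StarWind LUN appeared as Disk 1 (confirm the disk number with `list disk` first); save it as quorum.txt and run `diskpart /s quorum.txt`:

```shell
rem Bring the quorum disk online, initialize it as MBR, and format it NTFS
select disk 1
online disk
attributes disk clear readonly
convert mbr
create partition primary
format fs=ntfs quick label=Quorum
assign letter=Q
```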

image

image

Change the below settings on Node2:

Perform the above steps on Node 2. The only changes needed are:

IP address for network adapter 1 – 192.168.1.22 (LAN)

IP address for network adapter 2 – 192.168.2.22 (heartbeat)

Configure the rest of the settings just as on Node 1.

Install Failover Cluster feature and run cluster validation

Perform the below steps on both nodes:

Before configuring the cluster, we need to enable the “Failover Clustering” feature on all hosts in the cluster, and we’ll also run the verification tool provided by Microsoft to validate the consistency and compatibility of our scenario.

In “Server Manager”, go to “Features” and select “Failover Clustering”. This feature does not need a reboot to complete.
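On Windows Server 2008 R2 the same feature can be installed from PowerShell on each node:

```shell
# Load the Server Manager cmdlets, then install the clustering feature
Import-Module ServerManager
Add-WindowsFeature Failover-Clustering
```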

image

Once installed, open the console from “Administrative Tools”. Within the console, the option we are interested in at this stage is “Validate a Configuration”.


Click Next and select “Run all tests (recommended)” to test the complete configuration.


Check that the parameters are correct; press the Previous button should any changes be required.


We can also have a detailed report about the results on each test.
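Validation can also be run from PowerShell once the feature is installed (the node names are placeholders for your own):

```shell
# Load the clustering cmdlets and validate both nodes
Import-Module FailoverClusters
Test-Cluster -Node node1, node2
```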


Create a Cluster
It is now time to create the cluster. Click the Create a Cluster item in the Actions panel shown on the right.


Add the names of servers you wish to use as cluster nodes.
image

Next, specify the Cluster Name and Cluster IP address.

image

Click Next until you reach Finish.


Now that the creation of the cluster is complete it will be shown in the panel on the left. Expand the cluster by clicking on the ‘+’ symbol next to the cluster, then click on Storage. The Failover Cluster Management console should look like the example picture provided below. Both cluster disk resources will be shown as online.
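The whole creation step can also be condensed into one PowerShell command (the cluster name, node names, and IP shown are placeholders; pick values valid for your LAN):

```shell
# Create the cluster from both nodes with a static administrative IP
Import-Module FailoverClusters
New-Cluster -Name fs-cluster -Node node1, node2 -StaticAddress 192.168.1.25
```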


Adding services to a cluster

To add a service, open the Failover Cluster Management tool, browse to your cluster and right-click Services And Applications.  From the shortcut menu, choose the Configure A Service Or Application option.  This starts the High Availability Wizard.

Select Service Or Application

After the introduction page of the wizard, you’re asked to choose the service or application that you want to make highly available. Check with your software vendor to determine cluster compatibility. The next steps of the wizard will change depending on what you select on this page.

image

For each service you configure, you must specify how clients will access it. Remember that the clustered service will appear as a single entity to client computers. Name the service and give it a unique IP address.

image

Next, choose the shared storage device that will be used by the chosen service.

image

Once you make this selection, you have the opportunity to confirm your choices. Afterwards, the wizard will make the selected service highly available on your network.
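For a file server specifically, the FailoverClusters module offers an equivalent one-liner (the role name, disk name, and IP are placeholders; match them to your own cluster disks and LAN):

```shell
# Create a highly available file server on a given cluster disk
Import-Module FailoverClusters
Add-ClusterFileServerRole -Name fs1 -Storage "Cluster Disk 2" -StaticAddress 192.168.1.30
```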

image

Here you can see the shared file server in action.

image
