Configuring a read-only replicated folder on Windows Server 2008 R2


Pre-deployment considerations

Please read the following notes carefully before deploying the read-only replicated folders feature.

a) Feature applicability: The read-only replicated folders feature is available only on replication member servers which are running Windows Server 2008 R2. In other words, it is not possible to configure a replicated folder to be read-only on a member server running either Windows Server 2003 R2 or Windows Server 2008.

b) Backwards compatibility: Only the server hosting a read-only replicated folder needs to be running Windows Server 2008 R2. The member server that hosts a read-only replicated folder can replicate with partners running Windows Server 2003 R2 or Windows Server 2008. However, to configure and administer a replication group that has a read-only replicated folder, you need to use the DFS Management MMC snap-in on Windows Server 2008 R2.

c) Administration of read-only replicated folders: In order to configure a replicated folder as read-only replicated folder, you need to use the DFS Management MMC snap-in on Windows Server 2008 R2. Older versions of the snap-in (available on Windows Server 2003 R2 or Windows Server 2008) cannot configure or manage a read-only replicated folder. In other words, these snap-ins will not display the option to mark a replicated folder ‘read-only’.

d) Schema updates: If you have an older version of the schema (pre-Windows Server 2008), you will need to update your Active Directory schema to include the DFS Replication schema extensions for Windows Server 2008. Details regarding how to do so are given below:

Warning 

Before deploying:

The read-only replicated folders feature depends on the ‘msDFSR-ReadOnly’ flag in Active Directory. This flag is available as part of the Windows Server 2008 schema.

A detailed list of the schema extensions for the DFS Replication service can be found at: http://blogs.technet.com/askds/archive/2008/07/02/what-are-the-schema-extension-requirements-for-running-windows-server-2008-dfsr.aspx

 

There are no schema extensions for the DFS Replication service in the Windows Server 2008 R2 release. The rest of this discussion assumes that the Active Directory schema has been updated to include the Windows Server 2008 schema updates for DFS Replication.
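If you want to confirm the schema level before deploying, one quick check is to read the objectVersion attribute on the schema partition. This is a sketch, not an official procedure; the distinguished name assumes a domain called contoso.com, so substitute your own. The Windows Server 2008 schema corresponds to objectVersion 44 (Windows Server 2008 R2 to 47):

    rem Query the AD schema version from a machine with the AD DS tools installed
    dsquery * "cn=schema,cn=configuration,dc=contoso,dc=com" -scope base -attr objectVersion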

 

1. Configuring a new read-only replicated folder

First of all, let’s take a look at how to configure a folder for replication between a couple of member servers. We will configure a folder containing reports to be replicated from Contoso’s hub server to the server in the sales office. The replicated folder will be configured as a read-only replicated folder on the sales office member server. The steps to create a new replication group are listed below.

The only difference when configuring a read-only replicated folder is that a check box must be selected while choosing the path of the replicated folder on the read-only member. This change is explained in Step 10 below.

Step 1: Launch the DFS Management Console (on Windows Server 2008 R2 member).

The DFS Management console (dfsmgmt.msc) is an MMC snap-in that can be used to configure and manage DFS Namespaces as well as DFS Replication.

Warning  Note: The new Windows Server 2008 R2 features (read-only replicated folders and clustered DFS Replication) can be configured only using the DFS Management snap-in that ships with Windows Server 2008 R2.

The DFS Management console on Windows Server 2003 R2 or Windows Server 2008 servers cannot be used to configure read-only replicated folders.

Select ‘Replication’ in the left-hand pane to configure and manage DFS Replication. The ‘Actions’ pane on the right can be used to configure replication groups and the folders to be replicated using DFS Replication.

[Screenshot: DFS Management console]

Step 2: Click on the ‘New Replication Group…’ action.

In the ‘Actions’ pane on the right, click on ‘New Replication Group…’. This launches the ‘New Replication Group Wizard’, illustrated in the screenshot below. The wizard walks through the set of operations that need to be performed while configuring the new replication group.

[Screenshot: New Replication Group wizard]

Step 3: Select the type of replication group.

First of all, select the type of replication group to be created. The ‘Multipurpose replication group’ option can be used to configure custom replication topologies, such as ‘hub and spoke’ and ‘full mesh’. It is also possible to create a fully custom topology by first adding a set of servers to the replication group and then configuring connections between them to achieve the desired layout.

[Screenshot: Replication group type selection]

The second type of replication group (‘Replication group for data collection’) is a special replication topology used to add two servers to a replication group in such a way that a hub (destination) server collects data from a branch server. Let’s select ‘Multipurpose replication group’.

Step 4: Select the name and domain for the replication group.

In the wizard page that follows, enter a name for the replication group as well as the domain in which to create the replication group.

[Screenshot]

 

Step 5: Add replication group members.

In the wizard page that follows, add the member servers between which data is to be replicated. We now select CONTOSO-HUB and CONTOSO-SALES as the replication member servers, as seen in the screenshot below.

[Screenshot: Adding replication group members]

 

Step 6: Configure the replication topology, replication schedule and bandwidth utilization.

In this wizard page, configure the desired connection topology for replication.

  • The ‘Hub and spoke’ option is available only when three or more replication member servers are selected; it is therefore grayed out in the screenshot below.
  • The ‘Full mesh’ connection topology connects every replication member server with all other members in the replication group. This usually works fine when there are 10 or fewer servers in the replication group.
  • The ‘No topology’ option can be selected in order to configure a custom connection topology. In this case, the connections between the servers need to be added so as to create the desired topology.

[Screenshot: Selecting the replication topology]

Since there are only two servers in the replication group and we want them to be connected to each other, we select ‘Full mesh’ as the connection topology.

A custom replication schedule and bandwidth throttling can also be configured. The default option configures the DFS Replication service to replicate continuously without any bandwidth restrictions.

[Screenshot: Replication schedule and bandwidth options]

However, it is also possible to configure replication to take place only during specific time windows (for example, after office hours). This can be done by selecting the option ‘Replicate during the specified days and times’ and then setting the replication schedule in the wizard page that is launched. For example, the screenshot below illustrates a configuration where replication uses all available bandwidth between 6 PM and 6 AM (after office hours).

[Screenshot: Configuring the replication schedule]

 

Step 7: Configure the primary member in the replication group.

The primary member is the server whose copy of the data is authoritative when the DFS Replication service performs the initial synchronization. After the initial synchronization is complete, DFSR switches to multi-master replication mode and changes can be made on any of the replication member servers.

[Screenshot: Selecting the primary member]

Therefore, while setting up replication, choose the member server that has the authoritative copy of the data for the replicated folder as primary. In this example, the hub server has been configured to be the primary member.

Warning  Note: While setting up replication, the replicated folder cannot be configured to be read-only on the primary member, because the authoritative copy of the data must reside on a read-write member.

 

 

Step 8: Configure the replicated folder path on the primary member.

In the following screen, configure the local path to the replicated folder on the primary member. Remember that this screen allows you to configure the replicated folder path for the primary member in the replication group. Since the primary member contains the authoritative copy of data that is to be replicated to the other member(s) in the replication group, it is not possible to mark the replicated folder read-only for this member. If the path does not exist, then the DFS Management console will create it.

[Screenshots: Selecting the replicated folder path on the primary member]

 

Step 9: Configure the path to the replicated folder on other replication member servers.

In the following screen, configure the replicated folder path for the remaining members in the replication group. By default, the membership of each of the replicated folders is set to Disabled. The membership status can be enabled by configuring the path to the replicated folder on the other members in the replication group.

In order to set the local path to the replicated folder on the remaining members and to enable the membership, click on the ‘Edit…’ button after selecting the member on which you want to set the path. In the dialog that appears, enter the local path to the replicated folder on the member.

[Screenshot: Setting the replicated folder path on the remaining members]

 

Step 10: Configure the replicated folder to be read-only on desired members.

Notice that the dialog box that appears when the ‘Edit…’ button is clicked has an option to configure the replicated folder as read-only on that member. First, select a local path for the replicated folder on this member server.

In order to configure the replicated folder to be read-only, select the check box ‘Make the selected replicated folder on this member read-only’.

[Screenshot: Making the replicated folder read-only]

That’s it. The replication group can now be created.

[Screenshot]

Remember that replication does not begin until the configuration settings for this new replication group have reached the domain controller that each member’s DFS Replication service polls for configuration information. There will therefore be a delay corresponding to the time it takes for the settings to replicate between the domain controllers in the domain, plus the time taken for each replication member server to pick up these changes from Active Directory.

[Screenshot: Replication delay message]
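If you don’t want to wait for the next polling cycle, you can nudge each member to poll Active Directory immediately. A minimal sketch using the dfsrdiag tool that ships with the DFS Replication role (the member names are the ones from this example):

    rem Force the DFS Replication service on each member to poll AD for configuration changes
    dfsrdiag pollad /member:CONTOSO\CONTOSO-HUB
    dfsrdiag pollad /member:CONTOSO\CONTOSO-SALES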

Once the replication group has been configured, the membership status reflects which folders have been configured to be read-only. For example, in the screenshot below the replicated folder ‘C:\Reports’ has a membership status of ‘Enabled (read-only)’ on the member CONTOSO-SALES.

[Screenshot: Membership status showing Enabled (read-only)]

Configuring a read-only replicated folder to be read-write is as simple as selecting ‘Make read-write’ from the right-click menu. Remember that this causes the DFS Replication service on the read-only member to rebuild its database before it switches to being read-write.

[Screenshot: Make read-write menu option]
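The same toggle can be scripted with the dfsradmin tool. A sketch, assuming the replication group and folder names used in this walkthrough and a Windows Server 2008 R2 build of dfsradmin that exposes the /ReadOnly flag:

    rem Mark the Reports replicated folder read-only on CONTOSO-SALES (use /readonly:false to revert)
    dfsradmin membership set /rgname:ContosoReports /rfname:Reports /memname:CONTOSO\CONTOSO-SALES /readonly:true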

 

 

2. Configuring an existing replicated folder to be read-only

It is also possible to convert a read-write replicated folder to read-only at any point in time. To upgrade an existing replicated folder to a read-only replicated folder, first upgrade the machine to Windows Server 2008 R2 and then follow the steps below to configure the replicated folder as read-only on that member.

Step 1: Launch the DFS Management Console (on WS2008 R2 member).


Warning  Note:
The new Windows Server 2008 R2 features (read-only replicated folders and clustered DFS Replication) can be configured and managed only using the DFS Management snap-in that ships with Windows Server 2008 R2.

The console on Windows Server 2003 R2 or Windows Server 2008 servers cannot be used to configure read-only replicated folders.

Launch the DFS Management snap-in and connect to the corresponding replication group. The screenshot below illustrates what to expect in the DFS Management console: it displays the replicated folders and the member servers in the replication group.

[Screenshot: Read-write replicated folders in DFS Management]

The screenshot above shows that the replicated folder ‘Reports’ is currently configured to be read-write on both members of the replication group ‘ContosoReports’. In order to configure the replicated folder to be read-only on the member CONTOSO-SALES, follow the steps listed below.

Step 2: Configure the replicated folder to be read-only.

Select the member on which the replicated folder needs to be configured as read-only, right-click, and select ‘Make read-only’ from the context menu.

[Screenshot: Make read-only menu option]

After the replicated folder has been configured to be read-only on a particular member, the membership status changes to ‘Enabled (read-only)’ for that member in the DFS Management MMC snap-in. This is how read-only replicated folders can be identified in the DFS Management snap-in. The DFS Replication service on the member that has this newly configured read-only replicated folder notices the change the next time it polls Active Directory for configuration information. Thereafter, the replicated folder will begin to behave like a read-only replicated folder.

[Screenshot: Membership status after making the folder read-only]

Warning  Note: Active Directory replication ensures that configuration changes are replicated among all domain controllers, so that any domain controller polled by the DFS Replication service has up-to-date configuration information. Therefore, the rate at which the DFS Replication service notices configuration changes depends on AD replication latencies as well as the frequency with which it polls Active Directory for configuration information.

Hence, it will take a while before the DFS Replication service treats the replicated folder as a read-only replicated folder.

Configuration of two node file server failover clustering – Part 2


A failover cluster configuration includes two (or more) server nodes that share external storage. Based on iSCSI technology, StarWind Software Inc.’s StarWind lets you create external storage in a Windows environment without implementing expensive Fibre Channel or external SCSI solutions. With StarWind you can create a shared disk array on a host running Microsoft Windows.

A sample two-node failover cluster using StarWind:

[Diagram: two-node failover cluster using StarWind]

Requirements for Windows Server 2008 R2 Failover Cluster

Here’s a review of the minimum requirements to create a Windows Server 2008 R2 Cluster:

    • Two or more compatible servers: You need hardware that is mutually compatible; it is highly recommended to use the same type of hardware when creating a cluster.
    • Shared storage: This is where we can use the StarWind iSCSI SAN software.
    • Two network cards on each server: one for the public network (from which we usually access the Active Directory network) and a private one for the heartbeat between servers. This is technically optional, since using one network card is possible, but it is not suitable for most environments.
      When using the iSCSI protocol for shared storage, Microsoft recommends three network cards on each host: public network, private network, and one dedicated to iSCSI communication from the servers to the storage, which in our case is a server running the StarWind iSCSI software.
    • Windows Server 2008 R2 Enterprise or Datacenter edition for the hosts that will be part of the cluster. Keep in mind that failover clustering is not supported in the Standard edition.
    • All hosts must be members of an Active Directory domain. To install and configure a cluster we don’t need a Domain Admin account, but we do need a domain account that is in the local Administrators group of each host.
    • DNS host records for all nodes must be configured.
Requirements for StarWind iSCSI SAN Software

Here are the requirements for installing the component which will be in charge of receiving the iSCSI connections:

      • Windows Server 2008 or Windows Server 2008 R2
      • 10 GB of disk space for StarWind application data and log files
      • [Highly Recommended] 4 GB of RAM
      • 1 Gigabit Ethernet or 10 Gigabit Ethernet.

Installing StarWind iSCSI SAN Software:

Perform the following steps on the Windows Server 2008 machine that you want to act as the SAN:

Download the StarWind software from http://www.starwindsoftware.com/starwind-free

After you’ve downloaded the installation file, just double click it and the wizard will start.

[Screenshot]

Follow the wizard as you would any installation. Along the way you will find one of its interesting features: the service can be installed separately from the console used to administer StarWind iSCSI.

[Screenshot]

This way you can install the console on any compatible machine to access the server or servers running StarWind iSCSI and manage storage, permissions, and so on. In this case, I’ll select the full installation.

The next steps are straightforward, so you won’t have any problems. Once the final steps are completed, you’ll get a warning that the Microsoft iSCSI Initiator service must be running before the StarWind iSCSI service is installed.

[Screenshot]

You just need to open the “Services” console and set the Microsoft iSCSI Initiator service to started and automatic.

[Screenshot]
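The same change can be made from an elevated command prompt; a minimal sketch (MSiSCSI is the service name of the Microsoft iSCSI Initiator):

    rem Set the Microsoft iSCSI Initiator service to start automatically, then start it
    sc config MSiSCSI start= auto
    net start MSiSCSI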

After you click install, the process only takes a few seconds and you will additionally see some drivers that will be installed on the operating system; click “Install”.

[Screenshot]

Preparing Quorum Volume:

Start the StarWind iSCSI console. In the “General” screen we’ll find summary information, plus how to connect to a local or remote StarWind host.

[Screenshot]

In the “Configuration” section we can find the common parameters for configuring StarWind iSCSI, for example the “Network” options, which enable iSCSI communications (port 3260) on any of the identified network adapters.

[Screenshot]

If we are using a special LAN/VLAN to separate our iSCSI traffic as recommended, then we should only enable the IP address used for that purpose.

Launch the StarWind Management Console by selecting Start -> All Programs -> StarWind Software -> StarWind -> StarWind. After the console is launched, its icon appears in the system tray. Double-click the icon with the left mouse button, or single-click it with the right and select the Start Management pop-up menu item. From the StarWind Servers tree, select the computer you wish to connect to. Right-click the desired host (computer) and select the Connect menu item. You will be prompted for a login and password; the defaults are root and starwind. You can always change them later.

With the host added, we can start creating the storage that will be published through iSCSI: Right-click the server and select “Add target” and a new wizard will appear.

[Screenshot]

Enter the “Target alias” that will identify the LUN we are about to create and later configure for clustering. In my case I’m using a simple name, “w2k8r2-clstr”; click “Next”.

[Screenshot]

Since we are going to be using hard drives to present our storage, in “Storage Type” select “Hard Disk”, click on “Next”.

[Screenshot]

In “Device Type”, note that we can present physical as well as virtual drives to our clients using iSCSI. We are going to select “Basic Virtual”, which creates a file (.img) that represents the LUN; click “Next”.

[Screenshot]

Select “Image File device” and click on “Next”.

[Screenshot]

Since we are creating a new one, select “Create new virtual disk” and click on “Next”.

[Screenshot]

If you have decided to create a new virtual disk, specify the location and name of the virtual disk you wish to create. You must also provide the virtual disk size in megabytes, and check any additional parameters you wish to use. Refer to the online help for details about those additional parameters (Compressed and Encrypted).

Here, I’m using a separate drive where I’ll save all of my LUNs.

[Screenshot]

In the cache parameters, leave the default option, “Normal (no caching)”, selected; click “Next”.

[Screenshot]

In the last screen, just click on “Finish” and we’ll have our LUN ready.

Optionally (and recommended), review the options for “CHAP permissions” and “Access Rights”. These options let you configure all the parameters needed for secure environments.

Once we’ve completed this, we can access this file from a Windows Server 2008 R2 host.

Preparing Cluster Nodes:

Configure the following settings on Node1:

Network settings:

Each adapter will be assigned a static IP address. Select the ‘Use the following IP address’ option and type in the IP address you wish to use. The subnet mask and DNS server address must also be provided. All values must be chosen correctly for the networking configuration of the corporate LAN that the cluster will be a part of. As this interface is for the public network, a default gateway will need to be assigned.

[Screenshot]
Press the OK button.

Just as was done for the first network adapter, assign appropriate values to the TCP/IP configuration of the second network adapter using the following example image as guidance. This interface is used for iSCSI target storage communications and a default gateway need not be specified.

[Screenshot]
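For reference, the same settings can be applied from the command line with netsh. This is a sketch only: the adapter names “Public” and “iSCSI”, the Node1 addresses (mirroring the Node2 addresses given later), the gateway, and the DNS server are all assumed values to adapt to your environment:

    rem Public (LAN) adapter: static IP, mask, gateway, and DNS server
    netsh interface ipv4 set address name="Public" source=static address=192.168.1.21 mask=255.255.255.0 gateway=192.168.1.1
    netsh interface ipv4 set dnsservers name="Public" source=static address=192.168.1.10

    rem iSCSI/heartbeat adapter: IP address and mask only, no gateway
    netsh interface ipv4 set address name="iSCSI" source=static address=192.168.2.21 mask=255.255.255.0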

Configuring iSCSI initiator

Launch the Microsoft iSCSI Software Initiator application: Administrative Tools -> iSCSI Initiator.

[Screenshot]

Select the Discovery Tab.

In the Target Portals group, click the Add Portal… button.

[Screenshot]


In the Add Target Portal dialog enter IP address or DNS name of the StarWind target server.

[Screenshot]

Click on the Targets tab. Select the IQN of the target just added.

In my case, I’ve created two LUNs available for the cluster.

[Screenshot]

 

Press the Log On… button.

The Log On to Target dialog now appears. In this dialog click on the checkbox Automatically restore this connection when the system boots to make this connection persistent.

[Screenshot]
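Discovery and logon can also be scripted with the iscsicli utility. A sketch, where the portal address and the IQN are placeholders for your StarWind server’s actual values (note that QLoginTarget creates a non-persistent session; the checkbox above, or iscsicli PersistentLoginTarget, is what makes it survive reboots):

    rem Add the StarWind server as a target portal, list discovered targets, then log on
    iscsicli QAddTargetPortal 192.168.2.10
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:w2k8r2-clstr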

Initializing, formatting and creating partitions
When the StarWind Disks are connected, they show up on the initiator machine as new disk devices. Before these devices can be used as cluster disks, they have to be initialized and formatted. Launch the Computer Management console.

[Screenshot]

Bring disks online. Press the right mouse button over the disk and select Online.

[Screenshot]

Initialize the Disks. Press the right mouse button over the Disk and select Initialize Disk. Follow the wizard to initialize the new disks.

[Screenshot]

Right-click over the unallocated space and select New Simple Volume. Follow the instructions in the wizard to create an NTFS partition for use as the quorum disk.

[Screenshots]
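The same preparation can be done with a diskpart script. A sketch, assuming the new StarWind LUN shows up as Disk 1 and the quorum volume should become drive Q (adjust the disk number to what Disk Management shows):

    rem Run inside diskpart: bring the disk online, clear read-only, create and format an NTFS volume
    select disk 1
    online disk
    attributes disk clear readonly
    create partition primary
    format fs=ntfs label="Quorum" quick
    assign letter=Q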

Configure the following settings on Node2:

Perform the above steps on Node 2. The only changes needed are:

IP address for network adapter 1 – 192.168.1.22 (LAN)

IP address for network adapter 2 – 192.168.2.22 (heartbeat)

Configure the remaining settings as was done on Node 1.

Install Failover Cluster feature and run cluster validation

Perform the below steps on both nodes:

Before configuring the cluster, we need to enable the “Failover Clustering” feature on all hosts in the cluster, and we’ll also run the verification tool provided by Microsoft to validate the consistency and compatibility of our scenario.

In “Server Manager”, access “Features” and select “Failover Clustering”. Installing this feature does not require a reboot.

[Screenshot]
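On Windows Server 2008 R2 the feature can also be added from PowerShell; a minimal sketch:

    # Load the Server Manager module and add the Failover Clustering feature
    Import-Module ServerManager
    Add-WindowsFeature Failover-Clustering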

Once installed, open the console from “Administrative Tools”. Within the console, the option we are interested in at this stage is “Validate a Configuration”.

[Screenshot]

Click Next and select “Run all tests (recommended)” to test the complete configuration.

[Screenshot]
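Validation can also be run from PowerShell using the FailoverClusters module; a sketch with hypothetical node names:

    # Validate the prospective cluster nodes
    Import-Module FailoverClusters
    Test-Cluster -Node NODE1, NODE2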

Check that the parameters are correct. Press the Previous button should any changes be required.

[Screenshot]

We can also get a detailed report about the results of each test.

[Screenshot]

Create a Cluster
It is now time to create the cluster. Click the Create a Cluster item in the Actions panel on the right.

[Screenshot]

Add the names of the servers you wish to use as cluster nodes.

[Screenshot]

Next, specify the cluster name and cluster IP address.

[Screenshot]

Click Next until Finish.

[Screenshot]

Now that the creation of the cluster is complete it will be shown in the panel on the left. Expand the cluster by clicking on the ‘+’ symbol next to the cluster, then click on Storage. The Failover Cluster Management console should look like the example picture provided below. Both cluster disk resources will be shown as online.

[Screenshot]

Adding services to a cluster

To add a service, open the Failover Cluster Management tool, browse to your cluster and right-click Services And Applications.  From the shortcut menu, choose the Configure A Service Or Application option.  This starts the High Availability Wizard.

Select Service Or Application

After the introduction page of the wizard, you’re asked to choose the service or application that you want to make highly available. Check with your software vendor to determine cluster compatibility. The next steps of the wizard will change depending on what you select on this page.

[Screenshot]

For each service you configure, you must specify how clients will access the service. Remember that the clustered service will appear as a single entity to client computers. Name the service and provide it with a unique IP address.

[Screenshot]

Next, choose the shared storage device that will be used by the chosen service.

[Screenshot]

Once you make this selection, you have the opportunity to confirm your choices. Afterwards, the wizard will make the selected service highly available on your network.

[Screenshot]

You can see the shared file server in action.

[Screenshot]

Failover Clustering in Windows Server 2008 R2 Part 1


Introduction

A failover cluster is a group of independent computers that work together to increase the availability of applications and services. The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another node begins to provide service (a process known as failover). Users experience a minimum of disruptions in service.

Windows Server Failover Clustering (WSFC) is a feature that can help ensure that an organization’s critical applications and services, such as e-mail, databases, or line-of-business applications, are available whenever they are needed. Clustering can help build redundancy into an infrastructure and eliminate single points of failure. This, in turn, helps reduce downtime, guards against data loss, and increases the return on investment.

Failover clusters provide support for mission-critical applications—such as databases, messaging systems, file and print services, and virtualized workloads—that require high availability, scalability, and reliability.

What is a Cluster?

A cluster is a group of machines acting as a single entity to provide resources and services to the network. In the event of a failure, a failover occurs to another system in the group, which maintains the availability of those resources to the network.

How Failover Clusters Work

A failover cluster is a group of independent computers, or nodes, that are physically connected by a local-area network (LAN) or a wide-area network (WAN) and that are programmatically connected by cluster software. The group of nodes is managed as a single system and shares a common namespace. The group usually includes multiple network connections and data storage connected to the nodes via storage area networks (SANs). The failover cluster operates by moving resources between nodes to provide service if system components fail.

Normally, if a server that is running a particular application crashes, the application will be unavailable until the server is fixed. Failover clustering addresses this situation by detecting hardware or software faults and immediately restarting the application on another node without requiring administrative intervention—a process known as failover. Users can continue to access the service and may be completely unaware that it is now being provided from a different server.

[Figure: Failover clustering]

Failover Clustering Terminology

1. Failover and Failback Clustering
Failover is the act of another server in the cluster group taking over where the failed server left off. An example of a failover system can be seen in the figure below. If you have a two-node cluster for file access and one node fails, the service fails over to the other server in the cluster. Failback is the capability of the failed server to come back online and take the load back from the node it failed over to.

[Figure]

2. Active/Passive cluster model:

Active/Passive is defined as a cluster group where one server handles the entire load and, in case of failure or disaster, a passive node stands by waiting for failover.

· One node in the failover cluster typically sits idle until a failover occurs. After a failover, this passive node becomes active and provides services to clients. Because it was passive, it presumably has enough capacity to serve the failed-over application without performance degradation.

[Figure]

3. Active/Active failover cluster model

All nodes in the failover cluster are functioning and serving clients. If a node fails, the resource will move to another node and continue to function normally, assuming that the new server has enough capacity to handle the additional workload.

[Figure]

4. Resource. A hardware or software component in a failover cluster (such as a disk, an IP address, or a network name).

5. Resource group.

A combination of resources that are managed as a unit of failover. Resource groups are logical collections of cluster resources. Typically a resource group is made up of logically related resources such as applications and their associated peripherals and data. However, resource groups can contain cluster entities that are related only by administrative needs, such as an administrative collection of virtual server names and IP addresses. A resource group can be owned by only one node at a time and individual resources within a group must exist on the node that currently owns the group. At any given instance, different servers in the cluster cannot own different resources in the same resource group.

6. Dependency. An alliance between two or more resources in the cluster architecture.

[Figure]

7. Heartbeat.

The cluster’s health-monitoring mechanism between cluster nodes. This health checking allows nodes to detect failures of other servers in the failover cluster by sending packets to each other’s network interfaces. The heartbeat exchange enables each node to check the availability of other nodes and their applications. If a server fails to respond to a heartbeat exchange, the surviving servers initiate failover processes including ownership arbitration for resources and applications owned by the failed server.

The heartbeat is simply packets sent between the passive node and the active node. When the passive node no longer sees the active node, it comes online.

[Figure]

8. Membership. The orderly addition and removal of nodes to and from the cluster.

9. Global update. The propagation of cluster configuration changes to all cluster members.

10. Cluster registry. The cluster database, stored on each node and on the quorum resource, maintains configuration information (including resources and parameters) for each member of the cluster.

11. Virtual server.

A combination of configuration information and cluster resources, such as an IP address, a network name, and application resources.

Applications and services running on a server cluster can be exposed to users and workstations as virtual servers. To users and clients, connecting to an application or service running as a clustered virtual server appears to be the same process as connecting to a single, physical server. In fact, the connection to a virtual server can be hosted by any node in the cluster. The user or client application will not know which node is actually hosting the virtual server.

[Figures]

12. Shared storage.

All nodes in the failover cluster must be able to access data on shared storage. The highly available workloads write their data to this shared storage. Therefore, if a node fails, when the resource is restarted on another node, the new node can read the same data from the shared storage that the previous node was accessing. Shared storage can be created with iSCSI, Serial Attached SCSI, or Fibre Channel, provided that it supports persistent reservations.

[Figure]

13. LUN

LUN stands for Logical Unit Number. A LUN is used to identify a disk or a disk volume that is presented to a host server or multiple hosts by a shared storage array or a SAN. LUNs provided by shared storage arrays and SANs must meet many requirements before they can be used with failover clusters but when they do, all active nodes in the cluster must have exclusive access to these LUNs.

Storage volumes or logical unit numbers (LUNs) exposed to the nodes in a cluster must not be exposed to other servers, including servers in another cluster. The following diagram illustrates this.

[Figure]

14. Services and Applications group

Cluster resources are contained within a cluster in a logical set called a Services and Applications group or historically referred to as a cluster group. Services and Applications groups are the units of failover within the cluster. When a cluster resource fails and cannot be restarted automatically, the Services and Applications group this resource is a part of will be taken offline, moved to another node in the cluster, and the group will be brought back online.

15.  Quorum

The cluster quorum maintains the definitive cluster configuration data and the current state of each node, each Services and Applications group, and each resource and network in the cluster. Furthermore, when each node reads the quorum data, depending on the information retrieved, the node determines if it should remain available, shut down the cluster, or activate any particular Services and Applications groups on the local node. To extend this even further, failover clusters can be configured to use one of four different cluster quorum models and essentially the quorum type chosen for a cluster defines the cluster. For example, a cluster that utilizes the Node and Disk Majority Quorum can be called a Node and Disk Majority cluster.

A quorum is simply a configuration database for Microsoft Cluster Service, and is stored in the quorum log file. A standard quorum uses a quorum log file that is located on a disk hosted on a shared storage interconnect that is accessible by all members of the cluster

Why quorum is necessary

When network problems occur, they can interfere with communication between cluster nodes. A small set of nodes might be able to communicate together across a functioning part of a network, but might not be able to communicate with a different set of nodes in another part of the network. This can cause serious issues. In this “split” situation, at least one of the sets of nodes must stop running as a cluster.

To prevent the issues that are caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster must use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster will know how many “votes” constitutes a majority (that is, a quorum). If the number drops below the majority, the cluster stops running. Nodes will still listen for the presence of other nodes, in case another node appears again on the network, but the nodes will not begin to function as a cluster until the quorum exists again.

For example, in a five node cluster that is using a node majority, consider what happens if nodes 1, 2, and 3 can communicate with each other but not with nodes 4 and 5. Nodes 1, 2, and 3 constitute a majority, and they continue running as a cluster. Nodes 4 and 5 are a minority and stop running as a cluster, which prevents the problems of a “split” situation. If node 3 loses communication with other nodes, all nodes stop running as a cluster. However, all functioning nodes will continue to listen for communication, so that when the network begins working again, the cluster can form and begin to run.

There are four quorum modes:

    • Node Majority: Each node that is available and in communication can vote. The cluster functions only with a majority of the votes, that is, more than half.
    • Node and Disk Majority: Each node plus a designated disk in the cluster storage (the “disk witness”) can vote, whenever they are available and in communication. The cluster functions only with a majority of the votes, that is, more than half.
    • Node and File Share Majority: Each node plus a designated file share created by the administrator (the “file share witness”) can vote, whenever they are available and in communication. The cluster functions only with a majority of the votes, that is, more than half.
    • No Majority: Disk Only. The cluster has quorum if one node is available and in communication with a specific disk in the cluster storage. Only the nodes that are also in communication with that disk can join the cluster. This is equivalent to the quorum disk in Windows Server 2003. The disk is a single point of failure, so only select scenarios should implement this quorum mode.

 

16. Witness Disk

The witness disk is a disk in the cluster storage that is designated to hold a copy of the cluster configuration database. (A witness disk is part of some, but not all, quorum configurations.)

Configuration of two node Failover Cluster and Quorum Configuration:

A multi-site cluster is a disaster recovery solution and a high availability solution rolled into one. A multi-site cluster gives you the best recovery point objective (RPO) and recovery time objective (RTO) available for your critical applications. With Windows Server 2008 failover clustering, a multi-site cluster has become much more feasible thanks to the introduction of cross-subnet failover and support for high-latency network communications.

Which editions include failover clustering?

The failover cluster feature is available in Windows Server 2008 R2 Enterprise and Windows Server 2008 R2 Datacenter. The feature is not available in Windows Web Server 2008 R2 or Windows Server 2008 R2 Standard.

Network Considerations

All Microsoft failover clusters must have redundant network communication paths. This ensures that the failure of any one communication path will not result in a false failover, and it keeps your cluster highly available. A multi-site cluster has this requirement as well, so plan your network with that in mind. There are generally two kinds of traffic that travel between nodes: replication traffic and cluster heartbeats. In addition, you will also need to consider client connectivity and cluster management activity.

Quorum model:

For a 2-node multi-site cluster configuration, the Microsoft-recommended configuration is a Node and File Share Majority quorum.

Step 1 – Configure the Cluster

Add the Failover Clustering feature to both nodes of your cluster, following these steps:

1. Click Start, click Administrative Tools, and then click Server Manager. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.)

2. In Server Manager, under Features Summary, click Add Features. Select Failover Clustering, and then click Install.

[Screenshot]

3. Follow the instructions in the wizard to complete the installation of the feature. When the wizard finishes, close it.

4. Repeat the process for each server that you want to include in the cluster.

5. Next you will want to have a look at your network connections. It is best if you rename the connections on each of your servers to reflect the network that they represent. This will make things easier to remember later.

Go to the properties of the Cluster (or private) network and clear the ‘Register this connection’s addresses in DNS’ checkbox.

[Screenshot]

6. Next, go to Advanced Settings of your Network Connections (hit Alt to see Advanced Settings menu) of each server and make sure the Public network (LAN) is first in the list:

[Screenshot]

7. Your private network should only contain an IP address and Subnet mask. No Default Gateway or DNS servers should be defined. Your nodes need to be able to communicate across this network, so make sure the servers can communicate across this network; add static routes if necessary.

Step 2 – Validate the Cluster Configuration:

1. Open up the Failover Cluster Manager and click on Validate a Configuration.

2. The Validation Wizard launches and presents you the first screen as shown below. Add the two servers in your cluster and click Next to continue.

[Screenshot]

3. We need this cluster to be supported, so we must run all the required tests.

[Screenshot]

4. Select ‘Run all tests (recommended)’.

[Screenshot]

5. Click Next until it produces a report like the one below.

[Screenshot]

When you click on View Report, it displays a report similar to the one below:

[Screenshot]

Step 3 – Create a Cluster:

In the Failover Cluster Manager, click on Create a Cluster.

[Screenshot]

Next, you must choose a name for this cluster and an IP address for administering it. This is the name you will use to administer the cluster, not the name of the SQL cluster resource which you would create later. Enter a unique name and IP address and click Next.

Note: This is also the computer name that will need permission to the File Share Witness as described later in this document.

[Screenshot]

Confirm your choices and click Next.

[Screenshot]

Click Next until Finish; this creates the cluster named MYCLUSTER.
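For reference, the creation can also be done from PowerShell. A sketch, assuming hypothetical node names NODE1 and NODE2 and an administrative IP address of 192.168.1.25:

    # Create a two-node cluster with a static administrative IP address
    Import-Module FailoverClusters
    New-Cluster -Name MYCLUSTER -Node NODE1, NODE2 -StaticAddress 192.168.1.25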

Step 4 – Implementing a Node and File Share Majority quorum

First, we need to identify the server that will hold the File Share witness. This file share witness should be located in a third location, accessible by both nodes of the cluster. Once you have identified the server, share a folder as you normally would. In my case, I created a share called MYCLUSTER on a server named NYDC01.

[Screenshot]

The key thing to remember about this share is that you must give the cluster computer account (the cluster name, MYCLUSTER in this example) read/write permissions at both the share level and the NTFS level.

[Screenshots]
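A sketch of granting those permissions from the command line, assuming the share lives at D:\MYCLUSTER on NYDC01 and the cluster computer account is CONTOSO\MYCLUSTER$ (the domain name is an assumption; yours will differ):

    rem Create the share and grant the cluster computer account change access at the share level
    net share MYCLUSTER=D:\MYCLUSTER /grant:CONTOSO\MYCLUSTER$,CHANGE

    rem Grant modify access at the NTFS level
    icacls D:\MYCLUSTER /grant CONTOSO\MYCLUSTER$:(OI)(CI)M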

Now with the shared folder in place and the appropriate permissions assigned, you are ready to change your quorum type. From Failover Cluster Manager, right-click on your cluster, choose More Actions and Configure Cluster Quorum Settings.

[Screenshot]

On the next screen choose Node and File Share Majority and click Next.

[Screenshot]

In this screen, enter the path to the file share you previously created and click Next.

[Screenshot]

Confirm that the information is correct and click Next till summary page and click Finish.

Now when you view your cluster, the Quorum Configuration should say “Node and File Share Majority” as shown below.

[Screenshot]
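The quorum change can likewise be scripted; a minimal sketch using the share created above:

    # Switch the cluster to a Node and File Share Majority quorum
    Import-Module FailoverClusters
    Set-ClusterQuorum -NodeAndFileShareMajority \\NYDC01\MYCLUSTER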

The steps I have outlined up to this point apply to any multi-site cluster, whether it is a SQL, Exchange, file server or other type of failover cluster. The next step in creating a multi-site cluster involves integrating your storage and replication solution into the failover cluster.

Auditing Windows Server 2008 File and Folder Access


File and folder auditing is enabled and disabled using either Group Policy (for auditing domains, sites and organizational units) or local security policy (for single servers). To enable file and folder auditing for a single server, select Start -> All Programs -> Administrative Tools -> Local Security Policy. In the Local Security Policy tool, expand the Local Policies branch of the tree and select Audit Policy.
Configuring Local Audit Policy
Double click on the Audit Object Access item in the list to display the corresponding properties page and choose whether successful, failed, or both types of access to files or folders may be audited:

 

Setting the Audit Object Properties to enable file and folder access tracking
Once the settings are configured, click Apply to commit the changes and then OK to close the properties dialog. With file and folder auditing enabled, the next task is to select which files and folders are to be audited.
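The same policy can be set from an elevated command prompt with auditpol, which works per subcategory; a minimal sketch enabling both success and failure auditing for the file system:

    rem Enable success and failure auditing for the Object Access / File System subcategory
    auditpol /set /subcategory:"File System" /success:enable /failure:enable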

Configuring which Files and Folders are to be Audited

Once file and folder access auditing has been enabled, the next step is to configure which files and folders are to be audited. As with permissions, auditing settings are inherited unless otherwise specified. By default, configuring auditing on a folder will result in access to all child subfolders and files also being audited. Just as with inherited permissions, the inheritance of auditing settings can be turned off for either all, or individual, files and folders.

To configure auditing for a specific file or folder, begin by right-clicking on it in Windows Explorer and selecting Properties. In the properties dialog, select the Security tab and click on Advanced. In the Advanced Security Settings dialog, select the Auditing tab. Auditing requires elevated privileges; if not already logged in as an administrator, click the Continue button to elevate privileges for the current task. At this point, the Auditing dialog will display the Auditing entries list containing any users and groups for which auditing has been enabled, as shown below:
The file and folder auditing entries dialog
To add new users or groups whose access attempts to the selected file or folder are to be audited, click the ‘Add…’ button to access the Select User or Group dialog. Enter the names of the groups or users to audit, or Everyone to audit access attempts by all users. Click OK to display the ‘Auditing Entry for’ dialog as illustrated below:
Configuring file and folder auditing for a specific user or group
Use the drop-down list to control whether the auditing setting is to be applied to the current file or folder, or whether it should propagate down to all child files and/or sub-folders. Finally, select which types of access are to be audited and, for each type, whether successful, failed or both kinds of attempt are to be audited. Once configured, click OK to dismiss the current dialog and then click Apply in the Auditing Entries dialog.

From this point on, access attempts on the selected file or folder by the specified users and groups of the types specified will be recorded in the server’s security logs which may be accessed using the Events Viewer, accessible from Computer Management.
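If you prefer the command line, the recorded events can be pulled from the Security log with wevtutil. A sketch filtering on event ID 4663 (“an attempt was made to access an object” on Windows Server 2008):

    rem Show the ten most recent file-access audit events in readable text
    wevtutil qe Security /q:"*[System[(EventID=4663)]]" /c:10 /rd:true /f:text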

Sending ‘As’


Send As

Being able to send messages directly as the manager means that the recipient of the message will think that the manager has sent the message, even though it was actually the assistant that sent it. The key to achieving this process is the Send As permission. This is an Active Directory permission that is granted by the system administrator; it cannot be granted from within Outlook. To grant the Send As permission, the administrator needs to perform the following steps:

  1. Run the Active Directory Users and Computers snap-in.
  2. Click the View menu and then make sure that the Advanced Features option is selected. This will make sure you see the Security tab referenced later in step 4.
  3. Locate the relevant user account, in this case the manager’s user account, and bring up its properties.
  4. Go to the Security tab and click the Add button.
  5. Add in the assistant’s account that you’d like to send as the manager and make sure that you grant the assistant’s account the Send As right. This is shown in Figure 1.


Figure 1: Granting Send As Rights
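If the mailbox lives on Exchange 2007, the same permission can be granted from the Exchange Management Shell instead of Active Directory Users and Computers. A sketch with hypothetical account names:

    # Grant the assistant Send As rights on the manager's mailbox (Exchange 2007)
    Add-ADPermission -Identity "Manager Name" -User "CONTOSO\assistant" -ExtendedRights "Send-As"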

Making these changes will allow the assistant to use the From field in Outlook and choose the manager’s mailbox as the sending mailbox. This is shown below in Figure 2. If you don’t see the From field when composing a new message in Outlook, click the View / From Field option in the new message window. This applies to Outlook 2003. For Outlook 2007 (beta 2 for this article), you’ll find the Show From button on the Options tab of the ribbon.


Figure 2: Using Outlook’s From Field

However, it’s important to note that it can take a while, possibly up to two hours, for the permissions changes to take effect, which has proved to be a source of much frustration amongst Exchange administrators. Once the permissions changes have been made and the Outlook From field completed, it’s quite common for the assistant to receive a non-delivery report just after sending the message. These non-delivery reports will look like the sample shown below in Figure 3:


Figure 3: Permissions Failure Non-Delivery Report

Of course, the key wording above is the line that reads You do not have permission to send to this recipient. Is it possible to speed up this permissions change process? Well, I haven’t been able to get someone from Microsoft to confirm this, but I believe it’s possible via the Mailbox Cache Age Limit registry key documented in KB article 327378. The KB article mentions changing the Mailbox Cache Age Limit registry key, which according to the article is used to re-read logon quota information. In my experience, modifying this key (or creating it if it doesn’t exist) with a suitable value, in minutes, speeds up the permissions change process. Note that you must restart the Information Store service after modifying this registry key. The general consensus is not to make the value too low; a sensible value is 15 minutes. The alternative to creating or modifying this registry key is simply to restart the Information Store service, which appears to make the permissions changes take effect immediately. Of course, restarting the Information Store service is rarely practical during business hours, and you may prefer not to go poking around in the registry, so you can also choose to wait for the permissions to be re-read at the next interval, which, as stated earlier, could be up to 2 hours.
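As a purely hypothetical sketch of that registry change (the value name comes from KB 327378; verify the exact key path against the article before applying, and note the value is in minutes):

    rem Assumed location per KB 327378 - confirm before use
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem" /v "Mailbox Cache Age Limit" /t REG_DWORD /d 15

    rem The Information Store service must be restarted for the change to take effect
    net stop MSExchangeIS && net start MSExchangeIS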

Once the permissions have been granted and successfully taken effect, the assistant can send the message as normal. What does the recipient of the message actually see? Quite simply, the recipient will not be able to tell that it was the assistant who actually sent this message as it will appear just as if the manager had sent it. We’ll talk about another method, the Send on Behalf of method, a little later in this article.

Sending as a Group or Public Folder

Administrators often ask how they can send as a distribution group, or even a public folder. One of the most common applications of this scenario is where an organization creates a helpdesk-style distribution group, meaning that multiple users receive messages addressed to the distribution group. It’s then typically a requirement that these users send messages so that they appear to come from the distribution group rather than from the individual members of the group. The good news is that the Send As permission works for these objects too. To send as a distribution group, the steps are identical to those that I detailed earlier, the only difference being that you’d obviously need to locate the distribution group and bring up its properties, rather than a user account. An example of this is shown below in Figure 4, where my own user account has been granted the Send As rights for the IT Consultants distribution group.


Figure 4: Send As a Distribution Group

Of course, it’s also possible to send as a public folder. In this case, the steps are a little different but the concept is the same. The steps are:

  1. Run the Exchange System Manager snap-in.
  2. Under the relevant administrative group, navigate to Folders / Public Folders and then find the relevant public folder that you’d like to send messages as.
  3. Bring up the properties of the folder and go to the Permissions tab.
  4. Click the Directory Rights button and then add your chosen user account as before, making sure that the Send As right has been granted.
  5. If the Directory Rights button is not available, make sure that the public folder is mail-enabled. This can be done by first right-clicking the public folder in Exchange System Manager, then choosing All Tasks / Mail Enable.
  6. Back in the properties of the public folder, switch to the Exchange Advanced tab and make sure that the Hide from Exchange address lists option is not selected, otherwise you won’t be able to locate the folder when clicking the From button in Outlook.

Send On Behalf Of

Now let’s go back to our manager/assistant example and consider the scenario where the manager requires the assistant to send email messages on their behalf, making sure that the recipient knows that the assistant has indeed sent the message on behalf of the manager. To achieve this, Outlook’s delegate access feature can be used.

The important difference between delegate access and the Send As permission that I covered earlier in this article is that the delegate access feature can be set by the user or by the administrator. Therefore, in our example, the manager can set delegate access by choosing Tools / Options from within Outlook and then choosing the Delegates tab. Figure 5 shows how this looks.


Figure 5: Delegate Access Tab

Clicking the Add button will then allow the manager to choose their assistant that will act as the delegate from the Global Address List (GAL). Once the assistant has been chosen, the Delegate Permissions window is displayed, an example of which is shown in Figure 6. Here you can see that the assistant has been given Editor permissions by default to the Calendar and Tasks folders, but not the Inbox folder. Therefore, the next thing to do is to ensure the assistant also has Editor permissions against the manager’s Inbox folder. Once done, this will allow the assistant to send messages on behalf of the manager.


Figure 6: Default Delegate Permissions Window

Another way to set delegate access can be performed by the Exchange administrator. This can be performed via the following series of steps:

  1. Run the Active Directory Users and Computers snap-in.
  2. Locate the relevant user, in this case the manager’s user account, and bring up its properties.
  3. Go to the Exchange General tab and click the Delivery Options button.
  4. In the Send on behalf area, click the Add button and add in the assistant’s account that you’d like to send on behalf of the manager. This is shown below in Figure 7.


Figure 7: Administrator Granting Send On Behalf Of
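On Exchange 2007, the administrator can grant the same thing from the Exchange Management Shell; a sketch with hypothetical names:

    # Grant the assistant Send on Behalf rights for the manager's mailbox (Exchange 2007)
    Set-Mailbox -Identity "Manager Name" -GrantSendOnBehalfTo "CONTOSO\assistant"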

Once delegate access has been set, the assistant can now use the From field in Outlook as previously shown in Figure 2 above. The difference is how the message recipient sees the sender of the message. You’ll remember from earlier in the article that if the administrator grants direct Send As rights, the message will be shown as if it was sent directly by the manager. With the Send on Behalf of permission, the recipient will see that the message has been sent by the assistant on behalf of the manager. This is shown in Figure 8.


Figure 8: ‘Send on Behalf of’ Sample Message

It’s also worth noting what happens when the recipient replies to this message. In Figure 8 above, if I reply to the message, the reply will be addressed to the manager and not the assistant. If the assistant wishes replies to go back to them, the assistant needs to use the ‘Have replies sent to:’ option when composing the original message. This is shown in Figure 9.


Figure 9: Setting The Reply Destination

Finally, note that it’s also possible to send on behalf of a public folder. This option can be found by bringing up the properties of the public folder in Exchange System Manager, clicking the Exchange General tab, and then clicking the Delivery Options button.

Backup

How do you configure NetBackup to work with NDMP?


This answer assumes the hostname of your NDMP box is “toaster” and the hostname of your Master Server is “dumpster”.

Do the following:

1) Log in to dumpster as root and install the NDMP package (SUNWnbdmp). Be aware that you have to purchase the NDMP option from Veritas for NetBackup; it includes the NDMP package and documentation.

2) Set your NDMP authorization:

    dumpster# /usr/openv/volmgr/bin/set_ndmp_attr -auth toaster root

When prompted for a password, enter toaster’s root password.

3) Put the following line in /usr/openv/netbackup/bp.conf:

    ALLOW_NDMP

4) Connect toaster to one of the drives in your jukebox and reboot it so it can recognize the drive. Unfortunately, NetApp filers don’t have drvconfig or the like. Check that the drive is recognized after the reboot:

    toaster% sysconfig -t

This shows the drive and all the device files you can use with it. I normally use the no-rewind device nrst0a (or b, or whatever comes up in sysconfig’s output).

5) On toaster, start the ndmpd daemon. ndmpd ships with Data ONTAP, so it should be there (at least in recent versions):

    toaster% ndmpd on

To see the usage of ndmpd, just enter ndmpd with no arguments.

6) Come back to your master server (dumpster) and add the NDMP drive: pull up xdevadm and select DRIVES -> ADD DRIVE. This brings up the ADD DRIVE window. In that window, select/provide the following information:

DRIVE TYPE: DLT (or whatever type your drive is)

DRIVE INDEX: 0 (or any number of your choice)

DRIVE NAME: toaster_jukeboxname_drive# (or whatever you like)

NO REWIND DEVICE: toaster:nrst0a

DRIVE STATUS: UP

CLEANING FREQUENCY: 300 (or whatever you like)

ROBOTIC DRIVE: YES

ROBOT TYPE: TLD (or whatever type your jukebox is)

ROBOT NUMBER: <your robot’s number>

ROBOT DRIVE: <the drive number that’s connected to toaster>

At this point, you’re ready to test NDMP backups. Use xbpadm to create a class of type NDMP, include toaster as a client, and add a sample directory to the file list. Create a schedule “manual_backup” (don’t put any regular dates on it), then start a manual backup of that NDMP class for toaster and see how it goes.

You do not have to install any software on toaster. All you need to do is start ndmpd. You will want to put that in its rc file so it is started every time the filer reboots.
