Converting a VMware Fusion VM to an ESXi-based VM

There are occasions when we need to create custom-built Linux VMs on behalf of our clients. For example, we may build a Linux VM that incorporates all the best practices for a 12c Oracle Database or a 12c WebLogic Server. We sometimes do this in VMware Fusion and sometimes in a vSphere/ESXi configuration.
In this example we will show how we migrate VMs built in Fusion to an ESXi-based environment.
It is assumed that the Linux VM has already been created in VMware Fusion.
My VMware Fusion runs on Mac OS X 10.9.5.

The key tool in this migration/conversion is called vmware-vdiskmanager, which ships inside the VMware Fusion application bundle.

vmware-vdiskmanager has the following capabilities (per its help output):

NitinV$ ./vmware-vdiskmanager -h

VMware Virtual Disk Manager - build 1945692.
Usage: vmware-vdiskmanager OPTIONS <disk-name> | <mount-point>
Offline disk manipulation utility
Operations, only one may be specified at a time:
-c : create disk. Additional creation options must
be specified. Only local virtual disks can be
created.
-d : defragment the specified virtual disk. Only
local virtual disks may be defragmented.
-k : shrink the specified virtual disk. Only local
virtual disks may be shrunk.
-n : rename the specified virtual disk; need to
specify destination disk-name. Only local virtual
disks may be renamed.
-p : prepare the mounted virtual disk specified by
the volume path for shrinking.
-r : convert the specified disk; need to specify
destination disk-type. For local destination disks
the disk type must be specified.
-x : expand the disk to the specified capacity. Only
local virtual disks may be expanded.
-R : check a sparse virtual disk for consistency and attempt
to repair any errors.
-e : check for disk chain consistency.
-D : make disk deletable. This should only be used on disks
that have been copied from another product.

Other Options:
-q : do not log messages

Additional options for create and convert:
-a : (for use with -c only) adapter type
(ide, buslogic, lsilogic). Pass lsilogic for other adapter types.
-s : capacity of the virtual disk
-t : disk type id

Disk types:
0 : single growable virtual disk
1 : growable virtual disk split in 2GB files
2 : preallocated virtual disk
3 : preallocated virtual disk split in 2GB files
4 : preallocated ESX-type virtual disk
5 : compressed disk optimized for streaming
6 : thin provisioned virtual disk - ESX 3.x and above

Below is the command I ran to convert my Linux VM (Openfiler) to an ESXi-type vmdk:
/Applications/VMware\ -r OpenFiler1.vmwarevm/Virtual\ Disk.vmdk -t 4 /Volumes/Oracle-images\ 1/LinuxStones/linuxStones.vmdk
Creating disk '/Volumes/Oracle-images 1/LinuxStones/linuxStones.vmdk'
Convert: 100% done.

Virtual disk conversion successful.

ls -l ~/LinuxStones


This conversion produces two files: a small .vmdk descriptor file and a large -flat.vmdk data file.
Once the .vmdk and -flat.vmdk files are generated, the next step is to import them into ESXi. I used the vSphere Client to execute this workflow:
1. Create a new VM, using the usual method; e.g., File->New->Virtual Machine->Custom-> Choose Datastore location
2. Choose Virtual Machine Version -> Guest CPU/Memory/Network/SCSI controller settings -> Select “Do Not Create Disk” -> Finish
3. Go back to VM Configuration-> Datastore -> Browse DataStore -> Upload
4. Upload .vmdk and -flat.vmdk
5. Go back to VM configuration (Virtual Machine Properties) -> Add -> Device Type (Hard Disk) -> “Use an existing virtual disk”
6. Locate the datastore and select the existing disk -> Finish -> OK
7. Start up the VM

VMware Fusion vmx conversion to OVA for VM appliance shipments

As in the earlier post on VMware VM appliance shipments, there are occasions when we need to build VM appliances for our clients. Depending on the scenario, requirements, or convenience, we build VMs in VMware Fusion and convert them to OVA files so our clients can simply import them (“Deploy OVF Template”).

In this simple case, where I don’t have to edit the OVF descriptor, I use ovftool to convert the VMware Fusion .vmx file to an OVA.

$ ./ovftool -st=VMX -tt=OVA /Volumes/Oracle-images/OEL66/OEL66Stones.vmwarevm/OEL66Stones.vmx /Volumes/Oracle-images/OEL66/VMX/OEL66Stones.ova
Opening VMX source: /Volumes/Oracle-images/OEL66/OEL66Stones.vmwarevm/OEL66Stones.vmx
Opening OVA target: /Volumes/Oracle-images/OEL66/VMX/OEL66Stones.ova
Writing OVA package: /Volumes/Oracle-images/OEL66/VMX/OEL66Stones.ova
Transfer Completed
Completed successfully

The -st flag states that the source type is VMX, and the -tt flag indicates that the target type is OVA. Once it is converted, I simply send this VM image to clients, who import it into vSphere.

Here’s a link to the ovftool User’s Guide:

Just a bit on ovftool: OVF Tool is used to distribute and import virtual machines and vApps. For example, you can create a virtual machine and use OVF Tool to export it into an OVF package for installation, either within your organization or for distribution to other organizations. OVF facilitates the use of vApps, which consist of preconfigured virtual machines that package applications with the operating system they require. OVF Tool 1.0 replaces an earlier, experimental Java-based OVF Tool, and supports OVF version 1.0.

Setting Round-Robin Multipathing Policy in VMware ESXi 6.0

Storage Array Type Plugins (SATP) and Path Selection Plugins (PSP) are part of the VMware APIs for Pluggable Storage Architecture (PSA). The SATP has all the knowledge of the storage array: it aggregates I/Os across multiple channels and has the intelligence to send failover commands when a path has failed. The Path Selection Policy can be “Fixed”, “Most Recently Used”, or “Round Robin”.

If a VMware VM is using RDMs with all-flash arrays, then the Round Robin policy should be used. Furthermore, inside the Linux guest, the noop I/O scheduler should be used. Both need to be configured for proper throughput.
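As a sketch of the in-guest half (the device name sda and the grub file location are illustrative assumptions; on newer multi-queue kernels the equivalent scheduler choice is “none”):

```shell
# Check the current scheduler, then switch the (illustrative) device sda
# to noop at runtime; this change does not persist across reboots.
cat /sys/block/sda/queue/scheduler
echo noop > /sys/block/sda/queue/scheduler

# To persist, append elevator=noop to the kernel command line, e.g. in
# /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... elevator=noop"
# then regenerate the grub configuration and reboot.
```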

As a best practice, the preferred method to set the Round Robin policy is to create a rule that automatically applies the Round Robin PSP, with an IO Operation Limit of 1, to any newly added FlashArray device. In this blog I’ll refer to the Pure Storage array for setting the Round Robin policy as well as the IO limit.

The following command creates a rule that achieves both of these for only Pure Storage FlashArray devices:

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"

This must be repeated for each ESXi host.
This can also be accomplished through PowerCLI. Once connected to a vCenter Server, this script iterates through all of the hosts in that vCenter and creates a default rule to set Round Robin for all Pure Storage FlashArray devices, with an I/O Operation Limit of 1.

$hosts = get-vmhost
foreach ($esx in $hosts)
{
    $esxcli = get-esxcli -VMHost $esx
    $esxcli.storage.nmp.satp.rule.add($null, $null, "PURE FlashArray RR IO Operation Limit Rule", $null, $null, $null, "FlashArray", $null, "VMW_PSP_RR", "iops=1", "VMW_SATP_ALUA", $null, $null, "PURE")
}

It is important to note that existing, previously presented devices must either be manually set to Round Robin with an I/O Operation Limit of 1, or be unclaimed and reclaimed (through a host reboot or a manual device reclaim) so that they inherit the configuration set forth by the new rule. To set a new I/O Operation Limit on an existing device, use the following procedure:

The first step is to change the particular device to use the Round Robin PSP. This must be done on every ESXi host, and can be done through the vSphere Web Client, the Pure Storage Plugin for the vSphere Web Client, or via command-line utilities.

Via esxcli:
esxcli storage nmp device set -d naa. --psp=VMW_PSP_RR

Note that changing the PSP using the Web Client Plugin is the preferred option, as it will automatically configure Round Robin across all of the hosts. This does not set the IO Operation Limit to 1; that is a command-line-only setting and must be done separately.

Round Robin can also be set on a per-device, per-host basis using the standard vSphere Web Client actions; this is the procedure to set up the Round Robin policy for a single Pure Storage volume. Again, this does not set the IO Operation Limit to 1, which is a command-line-only setting that must be done separately.

The IO Operations Limit cannot be checked from the vSphere Web Client—it can only be verified or altered via command line utilities. The following command can check a particular device for the PSP and IO Operations Limit:

esxcli storage nmp device list -d naa.

To set a pre-existing device to an IO Operation Limit of 1, run the following command:

esxcli storage nmp psp roundrobin deviceconfig set -d naa. -I 1 -t iops

Setting Jumbo Frames – Portrait of a Large MTU size

There are cases where we need to ensure that large-packet addressability exists, i.e., to verify the configuration for non-standard packet sizes such as an MTU of 9000. For example, when we are deploying a NAS or backup server across the network.

Setting the MTU can be done by editing the configuration script for the relevant interface in /etc/sysconfig/network-scripts/. In our example, we will use the eth1 interface, thus the file to edit would be ifcfg-eth1.

Add a line to specify the MTU, for example:
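A minimal ifcfg-eth1 fragment might look like the following (the DEVICE, BOOTPROTO, and ONBOOT lines are illustrative; MTU=9000 is the line this post is concerned with):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1 (illustrative fragment)
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
MTU=9000
```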

Assuming the MTU is now set in the configuration, just run ifdown eth1 followed by ifup eth1.
Running ifconfig eth1 will show whether it is set correctly:

eth1 Link encap:Ethernet HWaddr 00:0F:EA:94:xx:xx
inet addr: Bcast: Mask:
inet6 addr: fe80::20f:eaff:fe91:407/64 Scope:Link
RX packets:141567 errors:0 dropped:0 overruns:0 frame:0
TX packets:141306 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:101087512 (96.4 MiB) TX bytes:32695783 (31.1 MiB)
Interrupt:18 Base address:0xc000

To validate end-to-end handling of MTU 9000 packets

Execute the following on Linux systems:

ping -M do -s 8972 [destinationIP]
For example: ping -s 8972

The reason for 8972 on Linux/Unix systems is that the ping size argument excludes the 28 bytes of header overhead: the 20-byte IP header plus the 8-byte ICMP header. Therefore, take 9000 and subtract 28 = 8972.
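The header arithmetic can be sketched directly in the shell:

```shell
# Largest ping payload that fits in a 9000-byte MTU:
# MTU minus the IP header (20 bytes) minus the ICMP header (8 bytes).
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"   # prints 8972
```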

[root@racnode01]# ping -s 8972 -M do
PING ( 8972(9000) bytes of data.
8980 bytes from ( icmp_seq=0 ttl=64 time=0.914 ms

To illustrate what happens when proper MTU addressability is not in place, I can set a larger packet size in the ping (8993). The packet would need to be fragmented, so you will see
“Frag needed and DF set”. In this example, the ping command uses -s to set the packet size, and -M do sets the Do Not Fragment flag.

[root@racnode01]# ping -s 8993 -M do
PING ( 8993(9001) bytes of data.
From ( icmp_seq=0 Frag needed and DF set (mtu = 9000)

5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 0.859/0.955/1.167/0.109 ms, pipe 2

By adjusting the packet size, you can figure out what the MTU of the link is. This represents the lowest MTU allowed by any device in the path, e.g., the switch, the source or target node, or anything else in between.

Finally, another way to verify the correct MTU size is the command netstat -a -i -n (the MTU column should read 9000 when you are performing tests on Jumbo Frames).
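This check can be scripted. The snippet below filters a netstat -i-style table for interfaces whose MTU column reads 9000; the sample table is illustrative, and in practice you would pipe real netstat -i output into the awk filter:

```shell
# Print interface names whose MTU column (field 2) is 9000,
# skipping the header row.
awk 'NR > 1 && $2 == 9000 {print $1}' <<'EOF'
Iface   MTU   RX-OK   TX-OK
eth0    1500  141567  141306
eth1    9000  141567  141306
EOF
```

Here only eth1 is printed, confirming jumbo frames are enabled on that interface.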