Occasionally, you will need to P2V a physical Linux machine that doesn’t quite match up with the accepted configuration defaults for vSphere, particularly a distribution (or even a fork of a distribution) that’s not 100% supported. For example, I recently had to P2V a CentOS 2.1 box, which you might assume would appear to vSphere as a Red Hat Enterprise Linux 2.1 VM, seeing as the two distributions are binary-compatible. I assumed exactly that, and it turned out I was wrong. 🙂
Given that CentOS is binary-compatible with RHEL, it’s very possible that you could simply edit /etc/redhat-release on the CentOS box to mirror what’s on a RHEL box and be done with it. (I haven’t tried, but if anyone wants me to, just holler.)
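If you want to try it, the change would look something like this. (I’m demonstrating against a scratch file; on the real box you’d back up and overwrite /etc/redhat-release itself. And I’m quoting the RHEL 2.1 release string from memory, so check it against an actual RHEL box first.)

```shell
# Hypothetical: make CentOS identify itself as RHEL 2.1 before running Converter.
# Writing to a scratch file here; on the real box, back up and then overwrite
# /etc/redhat-release. The exact RHEL 2.1 release string is an assumption.
echo 'Red Hat Linux Advanced Server release 2.1AS (Pembroke)' > /tmp/redhat-release
cat /tmp/redhat-release
```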
Out of the box, because VMware Converter can’t figure out what distribution you’re using, it can’t/won’t/doesn’t install the correct drivers when you P2V your old, busted Linux host. So, when you boot up the VM, you get a nice “NO OPERATING SYSTEM FOUND” message or something similar. It’s relatively easy to fix, though, so don’t worry! The easiest way to fix it is to boot the “new” Linux VM after it’s been P2Ved, and when I say “boot”, I mean “boot via the Linux rescue CD that’s relevant for that OS.” So, for CentOS 2.1, I power on the new VM but boot it off the CentOS rescue disc.
Once we’re there, a few relatively simple commands get us to where we need to be. Mount the filesystems of your P2Ved VM, and then run the following. Note that these all need to be done in a chrooted environment. CentOS’s rescue disc does the chroot for you by default, but RHEL’s will not, so you need to enter something like chroot /mnt/sysimage (where /mnt/sysimage is the location of your VM’s filesystem(s), not the rescue disc’s filesystem(s)!).
First things first, we need a SCSI driver that’s going to work. For vSphere, you have two (okay, three, really) choices: LSI Logic, BusLogic, or PVSCSI. BusLogic is really a legacy option, and PVSCSI is a non-starter on a kernel this old, so let’s choose LSI Logic. Now, you’ll need to edit /etc/modules.conf on the VM, and make its only SCSI configuration entry read like this:
alias scsi_hostadapter mptscsih
If there are any other SCSI-related entries in the modules file, remove them; an example would be a second entry like alias scsi_hostadapter1 cciss. We do this, of course, because vSphere presents its virtualized storage device as an LSI Logic device, which is supported in your VM’s OS by the Fusion-MPT (mptbase/mptscsih) driver bundle.
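If you’d rather script the cleanup than hand-edit it, something like this works. (I’m demonstrating on a scratch copy with made-up contents, since 2.4-era sed doesn’t have -i; on the VM, the real target is /etc/modules.conf inside the chroot.)

```shell
# Scratch copy standing in for the VM's /etc/modules.conf; contents are an example.
cat > /tmp/modules.conf <<'EOF'
alias eth0 tg3
alias scsi_hostadapter cciss
alias scsi_hostadapter1 cciss
EOF

# Strip every existing scsi_hostadapter entry, then add the LSI Logic one.
grep -v '^alias scsi_hostadapter' /tmp/modules.conf > /tmp/modules.conf.new
echo 'alias scsi_hostadapter mptscsih' >> /tmp/modules.conf.new
mv /tmp/modules.conf.new /tmp/modules.conf

cat /tmp/modules.conf
```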
Now, there’s no point in specifying a driver for which your kernel doesn’t have the appropriate module! For this reason, we’ll need to rebuild your initrd, which you can think of as a small package containing the drivers necessary to boot your kernel. Can’t boot a kernel on a SCSI device if you don’t load the SCSI driver first, can you!? mkinitrd is easy, if a little cumbersome, to use. You do need to know the exact kernel version of your VM, though; you can check that by looking for the default kernel as specified in /etc/grub.conf (sorry, LILO users, but you need to step out of 1996.) This is an appropriate mkinitrd command:
mkinitrd -v -f --preload mptbase --with=mptbase /boot/initrd-2.4.9-e.40enterprise.img 2.4.9-e.40enterprise
As you can see, I’m running kernel 2.4.9 with a build called “e.40enterprise” (blame CentOS for this, not me.) And, because it’s Linux, you need to specify the mptbase module two different times. With two different syntaxes. Yay!
We have two things to do in GRUB. First, we need to make sure (and, again, this may or may not affect you; it depends on your choice of Linux distribution and hardware) that GRUB is pointing the kernel at the right root device. If your physical HP server used the CCISS driver and the virtual server uses the mptbase driver, the VM won’t be able to find /dev/cciss/anything! So, make sure that a line like this:
kernel /vmlinuz-2.4.9-e.40enterprise ro root=/dev/cciss/c0d0p3
Becomes something like this:
kernel /vmlinuz-2.4.9-e.40enterprise ro root=/dev/sda3
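If you’d rather not trust your fingers inside the rescue environment’s editor, a sed one-liner can do the swap. (Shown here against a scratch file; the real target is /etc/grub.conf in the chroot, and your kernel version and device names will differ.)

```shell
# Scratch stand-in for the kernel line in /etc/grub.conf.
echo 'kernel /vmlinuz-2.4.9-e.40enterprise ro root=/dev/cciss/c0d0p3' > /tmp/grub.line

# Swap the cciss root device for its SCSI equivalent; partition numbers
# line up, so c0d0p3 becomes sda3.
sed 's#root=/dev/cciss/c0d0p3#root=/dev/sda3#' /tmp/grub.line
# -> kernel /vmlinuz-2.4.9-e.40enterprise ro root=/dev/sda3
```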
Now, you need to run the grub command itself and reinstall the GRUB boot loader into the boot sector, so it can find your new initrd. This is, remarkably, much easier than it sounds. Ironically, of course, it is more difficult than it was in LILO’s day. Meet the new boss, same as the old boss. Invoke /sbin/grub and then do this from the prompt:
grub> root (hd0,0)
(stuff snipped)
grub> setup (hd0)
(more stuff snipped)
Again, note the Linux world’s infinite wisdom in simply throwing away the real device names (i.e., the fact that /dev/cciss/c0d0p1 and /dev/sda1 are different things) and, confusingly, going with plain hd0. Which came out of nowhere. It’s not even an IDE reference, either; if it were, it would be /dev/hda! And people wonder why Linux isn’t ready for the desktop…
That’s it for GRUB. Yay! Oh, wait, we still have one more thing: fstab.
On the off chance that you’re using device names (such as /dev/sda3) in your /etc/fstab file, as opposed to, say, ext2 or ext3 filesystem labels, you’ll need to make sure fstab knows where all of your partitions are. Come to think of it, even if you’re using filesystem labels, you still need to edit your fstab, because on a kernel this old you can’t label a swap partition! So, you have to tell Linux where that lives now:
/dev/cciss/c0d0p7 swap swap defaults 0 0
Needs to be something like:
/dev/sda7 swap swap defaults 0 0
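The same sed trick works here, and because the cciss and sd partition numbers line up one-to-one, a single substitution catches every cciss entry at once. (Again, a scratch copy with example contents; the real file is /etc/fstab in the chroot.)

```shell
# Scratch stand-in for the VM's /etc/fstab; contents are an example.
cat > /tmp/fstab <<'EOF'
LABEL=/              /     ext3  defaults  1 1
/dev/cciss/c0d0p7    swap  swap  defaults  0 0
EOF

# Rewrite every cciss device to its SCSI equivalent (c0d0pN -> sdaN).
sed 's#/dev/cciss/c0d0p#/dev/sda#' /tmp/fstab
```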
And with that, you’re done! I swear.
Okay, I lied…
VMware Converter, Linux and helper VMs
It’s not very well-documented in VMware’s GUI, but it is in the manual, so I’ll give VMware a pass on it. Here goes: if you’re P2Ving a Linux host, you almost certainly want to take advantage of the “helper VM” option in VMware Converter. And, if you do, for the love of Godot, put the helper VM in the same VLAN as your physical host. Or your vCenter Server host. Or both. Or whatever… either way, know that it needs to be able to talk to everything!