Quickly benchmarking vSphere’s PVSCSI driver

There is a lot of discussion going on about VMware’s PVSCSI driver (available in vSphere, under select guest OSs), so I figured I’d give it a try. Although I’m a little disappointed that it isn’t (currently?) supported to boot off a PVSCSI-controlled disk, there are people who have gotten it to work. I thought I’d do a quick-and-dirty test using Oracle’s ORION tool; the guest operating system is Windows Server 2008 SP1 (64-bit). I’m booting off a standard 20GB partition (connected to the LSI Logic SCSI controller), with a simple, raw 1GB partition (connected to the PVSCSI controller) as a test LUN. Here are the results:

orion.exe -run simple -testname mytest -num_disks 1

Using paravirtualized SCSI driver:

Maximum Large MBPS=103.97 @ Small=0 and Large=2
Maximum Small IOPS=6159 @ Small=5 and Large=0
Minimum Small Latency=0.51 @ Small=1 and Large=0

Not using paravirtualized SCSI driver:

Maximum Large MBPS=108.16 @ Small=0 and Large=2
Maximum Small IOPS=6543 @ Small=5 and Large=0
Minimum Small Latency=0.56 @ Small=2 and Large=0

Notice that, when not under load, latency is lower (which is better) with the paravirtualized driver, but throughput is also slightly lower (which is worse). To use a raw partition in ORION under Windows, you need to specify something like this in your .lun file:

\\.\e:

where E: is the drive letter assigned to the raw, unformatted partition. I’ll do some more tests as time allows; I have a feeling that, when the VM is under higher load, the PVSCSI numbers will look better. I’ll also need to play with block sizes and disk counts.
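To tie the pieces together: ORION looks for its target list in a file named <testname>.lun, so the `-testname mytest` flag above implies a `mytest.lun` file in the working directory. A minimal sketch of the setup (the drive letter E: is just my example; substitute whichever letter Windows assigned to your raw partition):

```shell
# ORION reads its target list from <testname>.lun, so "-testname mytest"
# expects a file called mytest.lun in the current directory.
# \\.\e: is the Windows raw-device path for the unformatted test partition.
printf '\\\\.\\e:\n' > mytest.lun

# Then run the simple workload against that LUN (same command as above):
# orion.exe -run simple -testname mytest -num_disks 1
```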

Here is a new test (stolen from here), which gives the VM a bit more of an I/O beating. The results are very interesting:

orion.exe -run advanced -testname mytest -num_disks 1 -type rand -write 5 -matrix row -num_large 1

Using paravirtualized SCSI driver:

Maximum Large MBPS=83.17 @ Small=0 and Large=1
Maximum Small IOPS=956 @ Small=5 and Large=1
Minimum Small Latency=5.21 @ Small=1 and Large=1

Not using paravirtualized SCSI driver:

Maximum Large MBPS=70.46 @ Small=5 and Large=1
Maximum Small IOPS=806 @ Small=5 and Large=1
Minimum Small Latency=5.46 @ Small=1 and Large=1

Here we can see quite a big difference in the numbers. Latency is down by about 4.6%, while throughput is up by roughly 18% in both MBPS and IOPS. For the record, the ESX host is vSphere build 175625; the datastore is NFS-mounted from a Sun Fire X4540 storage server with 48 500GB SATA disks.
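For anyone who wants to sanity-check the relative differences, here is a quick one-liner that recomputes them from the raw ORION figures quoted above (the values are copied straight from the two advanced runs):

```shell
# Recompute the PVSCSI vs. LSI Logic deltas from the advanced-test output.
awk 'BEGIN {
  printf "MBPS:    +%.1f%%\n", (83.17 - 70.46) / 70.46 * 100  # large-transfer throughput
  printf "IOPS:    +%.1f%%\n", (956  - 806)  / 806  * 100     # small-transfer IOPS
  printf "Latency: -%.1f%%\n", (5.46 - 5.21) / 5.46 * 100     # lower is better
}'
```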
