Yuri Astrakhan

Reputation: 9965

Performance of NVMe vs SCSI for Local SSDs in GCP using Container OS

In Google Cloud, I ran a simple performance test comparing two "local SSD" drives attached to the same VM: the first attached via NVMe, the second via SCSI. I expected NVMe to be somewhat faster, but instead saw a roughly 5% performance hit:

        NVMe (s)  SCSI (s)
real    157.3     150.1
user    107.2     107.1
sys      21.6      22.2

The Google Compute Engine VM was running COS (Container-Optimized OS), and the Docker container itself was a busybox image running md5sum on the same 45 GB file. The results (averaged over 3 runs) are a bit puzzling: sys time is lower for NVMe and user time is about the same, yet the real time for NVMe is about 5% higher. The container was run with

docker run -v /mnt/disks/nvme:/tmp1 -v /mnt/disks/scsi:/tmp2 -it busybox

The test was executed with

time md5sum largefile
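For reference, the "averaged over 3 runs" procedure can be scripted; a minimal sketch (the default file path and the fallback test file are placeholders I've added, not part of the original setup; inside the container the real files would sit under /tmp1 and /tmp2):

```shell
# Average three md5sum timings over one file.
file=${1:-/tmp/md5bench-file}
# Create a small throwaway file if the target doesn't exist
# (a real run would point at the 45 GB file instead).
[ -f "$file" ] || head -c 1048576 /dev/urandom > "$file"
total_ms=0
for i in 1 2 3; do
  start=$(date +%s%N)          # nanoseconds since epoch (GNU date)
  md5sum "$file" > /dev/null
  end=$(date +%s%N)
  total_ms=$(( total_ms + (end - start) / 1000000 ))
done
echo "average over 3 runs: $(( total_ms / 3 )) ms"
```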

Upvotes: 6

Views: 3626

Answers (1)

Dan

Reputation: 7737

I believe there was a recent improvement to the guest NVMe driver which might help with this. I heard that it's shipped by default on the latest Ubuntu images, but may not be included in the COS distribution yet. The patch is available here.

FWIW, md5sum is also not meant as a storage performance benchmarking tool, so your results may not be very reproducible: it has CPU overhead (calculating the checksum), it runs on top of your local filesystem (which may or may not be fragmented), and it is unclear what I/O size it uses to read the data, all of which adds variability to your test. If you want to do true I/O benchmarking, Google's docs have a pretty good guide explaining how to use fio for that directly on top of local SSDs.
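For illustration, a fio job along those lines might look like the sketch below. The mount point, file size, and queue depth here are assumptions to adjust for your own setup; Google's guide lists the recommended parameters for local SSDs.

```ini
; seqread.fio - sequential read against a file on the local SSD mount
; (path, size, and iodepth are placeholder values)
[seqread]
filename=/mnt/disks/nvme/fio-test
rw=read
bs=1M
size=4G
ioengine=libaio
direct=1
iodepth=32
runtime=60
time_based
group_reporting
```

Run it with `fio seqread.fio`, then repeat with the filename pointing at the SCSI mount to compare the two interfaces directly.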

Upvotes: 3
