Lowjax Cluster updated specs, and some performance details.

The cluster consists of four hosts contributing to VSAN.

  • VMware ESXi 6.0.0, 2715440
  • VCSA 6.0
  • VDP 6.0
  • VSAN 6.0
  • VRLI 6.0
  • intel-nvme vib: 1.0e.1.1-1OEM
  • scsi-mpt2sas vib:
  • net-igb vib: 5.2.10-1OEM
  • LSI 2308 P20 Firmware (IT/Pass through)
  • Supermicro 1U Short Chassis (CSE-512L-260B)
  • Supermicro Xeon UP UATX Server Motherboard (X10SL7-F)
  • Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30GHz (BX80646E31230V3)
  • (2x) Crucial 16GB Kit for 32GB Memory (CT2KIT102472BD160B)
  • HDPLEX 160W DC-ATX Power Supply
  • Mini-Box Y-PWR, Hot Swap, Load Sharing Controller
  • (2x) HDPLEX Internal 80W AC Power Adapter (160W peak with failover)
  • Dynatron Copper Heatsink K129 (1U CPU)
  • Enzotech BGA Copper heatsinks (northbridge)
  • SuperMicro Mylar Air Shroud
  • Avago (LSI) 2308 SAS2 on-board HBA Controller
  • Samsung 850 Pro 128GB SSD (MZ-7KE128BW)
  • Intel 750 Series 400GB PCIe NVMe 3.0 SSD (SSDPEDMW400G4R5)
  • (3x) Seagate 1TB Enterprise Capacity HDD SAS (ST1000NM0023)
  • Intel i210 on-board NIC with 2x 1GbE
  • Intel 10GbE Converged Ethernet Network Adapter (X540T1)

I am still trying to find some way to slow the IOPS down: workloads, missing hosts, component resyncs, client IOPS benchmarking, and simulated user events across multiple clients. Nothing I do seems to slow things down at the storage layer. Even ten VDP backups running simultaneously, on top of everything listed above, doesn't have any effect. My latencies remain between 0.1ms and 1.2ms.

A Windows client with a two-stripe storage policy reads sequentially at 1000MB/s – 1700MB/s and writes at 600MB/s – 1300MB/s. The same client performs 50k – 60k random 4K write IOPS and 80k – 120k read IOPS. The benchmark results have never dipped below the stated minimums. Even ten other benchmarks running simultaneously generate the same numbers as if nothing else were running. Even during a VDP performance test everything remains the same!
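As a sanity check on those figures, the 4K random IOPS numbers can be converted to raw throughput (IOPS × block size). A quick sketch in Python, using the ranges quoted above:

```python
# Sanity check: convert the quoted 4K random IOPS figures into MB/s.
# The IOPS ranges are the figures from the post; block size is 4 KiB.
BLOCK = 4096  # bytes per I/O

def iops_to_mb_s(iops, block=BLOCK):
    """Throughput in MB/s (decimal megabytes) for a given IOPS rate."""
    return iops * block / 1_000_000

write_low, write_high = 50_000, 60_000
read_low, read_high = 80_000, 120_000

print(f"4K random write: {iops_to_mb_s(write_low):.0f}-{iops_to_mb_s(write_high):.0f} MB/s")
print(f"4K random read:  {iops_to_mb_s(read_low):.0f}-{iops_to_mb_s(read_high):.0f} MB/s")
```

So the 4K random write range works out to roughly 205–246 MB/s of small-block traffic, well within what the sequential numbers suggest the stack can sustain.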

vMotions complete in just seconds, and on average a host enters VSAN maintenance mode in just under a minute. The capabilities and performance so far are out of this world. If your hardware is near the HCL and you have 10GbE, VMware VSAN truly delivers! Eventually I will put together some comprehensive benchmark numbers for various scenarios.

4 thoughts on “Lowjax Cluster updated specs, and some performance details.”

  1. Hello, great article!

    I am having HUGE performance issues with my single node VSAN at home in my lab. I’m currently using the following:

    Supermicro X10SL7-F
    1 x Samsung 850 Pro PCIe NVMe SSD 256GB for cache tier
    2 x Samsung Pro 840 128GB and 1 x Samsung 850 Pro 512GB for capacity tier (all SATA drives). These drives are connected to the SAS ports on the LSI-2308 controller on the motherboard.
    32GB RAM

    I’m running ESXi 6 update 2 but the performance is TERRIBLE. I’m barely getting 20MB/s on an ALL FLASH VSAN!

    I’ve read so much now that my head is spinning: disk queue depths, HBAs, SAS etc etc.

    What I’m trying to find out is: what is the cause of my poor performance? I have budget to buy new drives but want to be sure of what to buy that will work well with VSAN (and I want it to be all flash). Do I need new drives? A new HBA? You seem to have a similar setup to mine from a hardware point of view (ignoring the drives), and you said you were getting amazing VSAN performance, so I was wondering what I need to do to get the speeds you’re experiencing! 😉

    Do you have any ideas? Great site!

    1. What does your network stack look like? You will definitely need 10GbE, especially for all-flash. You will get very mixed results in a mixed storage tier environment; the drives should all be the same.
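      To put a number on the 10GbE point: a 1GbE link's nominal line rate sits below a single SATA SSD's sequential throughput, so with 1GbE the network, not the flash, becomes the bottleneck for VSAN writes (which always traverse the network to remote replicas). A rough sketch; the 550 MB/s SSD figure is an illustrative assumption, and protocol overhead is ignored:

```python
# Why 10GbE for all-flash VSAN: compare nominal NIC line rates against a
# typical SATA 6Gb/s SSD's sequential throughput. Figures are nominal,
# not benchmarks, and ignore Ethernet/TCP overhead.
def line_rate_mb_s(gbit):
    """Nominal line rate in decimal MB/s for a gbit/s link."""
    return gbit * 1000 / 8

sata_ssd_mb_s = 550  # typical SATA SSD sequential throughput (assumption)

print(f"1GbE  line rate: {line_rate_mb_s(1):.0f} MB/s")   # 125 MB/s
print(f"10GbE line rate: {line_rate_mb_s(10):.0f} MB/s")  # 1250 MB/s
print("1GbE saturates before one SSD:", line_rate_mb_s(1) < sata_ssd_mb_s)  # True
```

      A single 1GbE link caps out around 125 MB/s, which lines up with the ~20MB/s-class numbers people see when several VSAN data streams contend for it.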

  2. Thanks for the reply Jon! I discovered my problem: using consumer SSD drives with ESXi is a no-no! Swapping out my Samsung 950 Pro SATA drives for Samsung Enterprise SM863 SSDs has helped.

    I have a couple more questions I hope you can help with:

    1) Do you think using a consumer Samsung PCIe NVMe SSD drive is ok in the cache tier (in a home lab)?

    2) How many VMs do you think you could run on a single SATA SSD in the capacity tier (again in a home lab setup)? Is 10 VMs reasonable if I use a Samsung 950/960 Pro PCIe NVMe SSD for cache and a single Samsung SM863 SATA SSD for capacity?

    I’m wanting to build a 3 or 4 node VSAN cluster with Supermicro’s new super-small servers (the SYS-E200-8D), but they can only take a single M.2 drive and one 2.5″ SATA SSD.

    1. I am currently using the Intel 750 Series 400GB PCIe NVMe 3.0 SSD (SSDPEDMW400G4R5) in my lab. They work reliably well, albeit they occasionally see some congestion during extreme rebuilds. I haven’t tried the new consumer Samsung NVMe drives, but you may get similar results…

      Can’t really say how many VMs would work. It’s not really the number of VMs, but the load the VMs introduce. Without a 10G VSAN stack you are likely to run into problems serving data over the network to VMs. The VMs could be servers doing basic things, but large reads/writes on the guest side will cause contention/congestion. Unfortunately I have not used SATA SSDs in all-flash without an HBA.

      As for what will run on a single SATA SSD of capacity, I don’t know; please provide full detail on the cluster. I would never recommend one disk for capacity: components will be very difficult to distribute, there is no fault tolerance beyond the host itself, and rebuilds especially will be a nightmare. Two disks really is the minimum.
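      The component math behind that recommendation can be sketched roughly. With the standard VSAN RAID-1 policy, FTT=1 keeps two replicas plus at least one witness and needs 2×FTT+1 hosts; mirroring also halves usable capacity. The 960GB SM863 size and the 30% free-space slack below are illustrative assumptions, not figures from this thread:

```python
# Rough VSAN 6.x RAID-1 policy math: hosts, components, and usable space.
def min_hosts(ftt):
    """Minimum hosts for a given failures-to-tolerate setting."""
    return 2 * ftt + 1

def min_components(ftt, stripe_width=1):
    # (FTT+1) data mirrors, each split into stripe_width components,
    # plus at least one witness component for quorum.
    return (ftt + 1) * stripe_width + 1

def usable_gb(hosts, disk_gb, ftt, slack=0.30):
    # Mirroring divides raw capacity by (FTT+1); keep ~30% slack free
    # so rebuilds and rebalances have room to work (assumption).
    raw = hosts * disk_gb
    return raw / (ftt + 1) * (1 - slack)

print(min_hosts(1))                      # 3
print(min_components(1, stripe_width=2)) # 5
print(f"{usable_gb(4, 960, 1):.0f} GB")  # 4 nodes x one 960 GB SSD each
```

      With one capacity disk per host there is only one place per host for all of an object's stripe components to land, which is why distribution and rebuilds get so constrained.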

      To get VSAN working in any stable, lasting configuration you really do need high-end hardware: PCIe/SAS SSD, SAS HDD, SAS2+ HBA. It is very hard to conform to small-form-factor servers, which makes it difficult to get the PCI Express lanes needed for 10GbE, NVMe, and the HBA.
