I decided to do some R&D for the storage parts. Available to me are four 128GB Samsung M.2 PCIe SSDs, four 128GB Samsung EVO 850 SSDs, four Intel DC S3700 SSDs, eight HGST 2.5″ 1TB 7.2K HDDs, eight Seagate Enterprise ES.3 1TB 7.2K SAS HDDs, and a plethora of other disks. Obviously the performance with all-SATA magnetic drives in VSAN is appalling, and the DC S3700s hardly get the meaningful workload they need; the setup is pretty much incapable of sustaining any acceptable use case. When a single Linux or Windows guest starts a copy to itself at more than 25% read/write, the latency needle bounces between 120-190ms. Unworkable.
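For reference, the kind of in-guest self-copy load that produced those numbers can be sketched in a few lines of Python. File names and the 16 MiB size are placeholders; the real copies were far larger, and on the all-SATA vSAN this is the pattern that sent latency past 100ms:

```python
import os
import shutil
import time

# Hypothetical in-guest self-copy load: write a file, then copy it back onto
# the same disk, pushing simultaneous reads and writes through the vSAN layer.
SIZE_MB = 16  # kept small here; the real test moved far more data

with open("testfile.bin", "wb") as f:
    f.write(os.urandom(SIZE_MB * 1024 * 1024))

start = time.monotonic()
shutil.copyfile("testfile.bin", "testfile.copy")
elapsed = time.monotonic() - start
print(f"copied {SIZE_MB} MiB in {elapsed:.2f}s")
```

Running a few of these in parallel across guests is enough to watch the latency graphs come apart in VSAN Observer.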
Currently I am switching out the SATA drives for the Seagate SAS drives I have on hand, leaving the DC S3700s in as the flash tier, since in theory they should outperform the M.2 NGFF drives given their more enterprise-style handling of queues. The numbers on the Samsungs are hard to ignore, though, leaving me to wonder if brute speed could offset the queue balance. A comparison I am excited about.
Worth special mention is the noise level coming from the cabinet. It sits idle at around 20-35dB, measured 3-5 feet away with the exhaust pointed away from me, and occasionally maxes out near 44dB. All in all it's very quiet. The power supplies are passive, and both the PSU and AC/DC converter heat up considerably, though not enough to affect system temperatures given their position in the chassis.
So VSAN host storage will look like:
1x Samsung EVO 850 Pro 128GB (ESXi, host cache, dump), backed by iSCSI or a 16GB SSD
1x Intel DC S3700 / Samsung M.2 (NGFF)
2x Seagate Constellation ES.3 1TB 7.2K SAS2 6Gb/s
(will maybe play around and add 1+ HGST 2.5″ 1TB per host)
On the last benchmark of the SATA vSAN I forgot to save and grab the VSAN Observer results. Needing to benchmark the SATA configuration again is annoying; having to do the same thing twice now, I should have automated it to begin with. The solution I previously had was PowerShell based, built around a SQL cluster and vCenter, and the vCenter Appliance requires I make the needed glue changes. Hopefully I will also deem it worthy to throw up here and on GitHub. Essentially it would start and stop VSAN Observer via SSH, passing the needed RVC commands, and dump the data into SAN-accessible CSVs.
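The rebuilt version will probably look something like the sketch below: Python rather than the original PowerShell, but the same shape. The host name, RVC inventory path, and CSV layout are all hypothetical; it assumes key-based SSH to the vCenter Appliance and that `rvc` is available in the appliance shell.

```python
import csv
import subprocess

VCENTER = "vcsa.lab.local"                 # hypothetical vCenter Appliance host
CLUSTER = "/localhost/lab/computers/vsan"  # hypothetical RVC inventory path

def observer_argv(action: str) -> list:
    """Build the ssh argv that drives vsan.observer through RVC on the VCSA."""
    commands = {
        "start": f"vsan.observer {CLUSTER} --run-webserver --force",
        "stop": "exit",  # Observer stops when the RVC session ends
    }
    rvc_cmd = commands[action]
    return ["ssh", f"root@{VCENTER}",
            "rvc", "-c", rvc_cmd, "-c", "exit", "root@localhost"]

def run_observer(action: str) -> None:
    """Fire the start/stop command at the appliance over SSH."""
    subprocess.run(observer_argv(action), check=True)

def dump_stats(rows: list, path: str) -> None:
    """Dump collected samples to a SAN-accessible CSV for later comparison."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["timestamp", "host", "latency_ms"])
        writer.writeheader()
        writer.writerows(rows)
```

Wrapping it this way means the SAS rerun and any later M.2 comparison get benchmarked identically, instead of me forgetting to grab the Observer output a third time.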