Cooling down those hot SAS drives!

Got more aluminum today, and couldn’t resist haphazardly laying it across the top. The bottom aluminum bar is 1/4″ thick, 12″ X 16″. The top bar is 1/4″ thick, 16″ X 6″. And the two aluminum finned heat-sinks are roughly 1/4″ thick plus fins, about 1.2″ thick overall, 8.5″ X 6″. The final product will be trimmed accordingly and sandwiched together using aluminum round bars as posts.

Proper Power Supply power on/off trick (RIP paper clip)

In the end my solution involved combining two 2-pin power connectors into one four-pin connector. Using wire cutters I cut off the two unneeded prongs and pulled out the wire and leads. The two remaining wires are then spliced together, so when the fabricated power connector is plugged in, the PSU powers on. I have since replaced the spliced wire with a push-button power switch, seen in the photo. When I replace the other HDPLEX’s spliced connector with a push button, I will post photos offering a better understanding.

Custom Direct Attached Storage power supply testing photos (2x HDPLEX 250w)

Snapped some photos while playing around with different power supply options for C-DAS. The absolute winner is HDPLEX’s 250W power supply. Currently using two generic AC adapters, but once all the aluminum parts come in, I will probably switch to two HDPLEX 120W AC adapters (heat is an issue here). The current and probably final configuration is three Seagate E.3 7200K 1TB SAS2 6Gb disks per host, totaling 12 disks (10.91TB VSAN datastore). The flash storage is currently one Samsung XP941 128GB M.2 NGFF PCI-E, and won’t change until the new Intel 400GB NVMe SSD cards are more available. The last photos show an “External Mini SAS SFF-8088 to four SATA” stacked, which might eventually replace the current straight SATA-to-SATA backplane connections, since external SAS is supposedly supported in vSphere 6 VSAN.

Additional software/vibs added to each ESXi host (drivers, firmware, misc)

This list contains S3/CloudFront-backed download links. Enjoy!

I can only recommend the following for the Supermicro X10SL7-F motherboard and ESXi 5.5+. The following are the result of extensive reliability and performance testing across all four hosts. Some, if not most, of these links are general and not specific to the X10SL7-F, but I cannot vouch for their use in such a scenario. A hedged sketch of the esxcli install commands follows the list below.

Note (04-09-15): The CloudFront geo-restriction issues have been fixed site-wide.

Offline Package/VIB:
igb-5.2.7-1331820-offline_bundle-2157967.zip {
(Updated to the latest Intel IGB drivers/software v5.2.7 as of 051715)
}
vmware-esx-sas2flash.vib {
(third-party LSI software allows for LSI HBA firmware updates from within ESXi, v20.00.00.00 as of 051715)
}
9207_8i_Package_P20_IR_IT_Firmware_BIOS_for_MSDOS_Windows.zip {
(LSI archive containing the latest “9207-8.bin” P20 IT firmware and mptsas2.rom)
}
mpt2sas-20.00.00.00.1vmw-offline_bundle-2253936.zip {
(latest LSI driver/software package for LSI 2308, v20.00.00.00.1 as of 051715)
*The latest ESXi 5.5u2 does not ship with VSAN-compatible drivers; it comes with a version below v16, and VSAN certification only starts above v16, so this update is needed*
}
cpu-microcode-1.7.0-2.x86_64.vib {
(includes latest third-party errata and whatnots, v1.7.0-2 as of 051715) V-Front.de
}
sata-xahci-1.28-1-offline_bundle.zip {
(latest third-party sata-xahci drivers for unsupported hardware Samsung M.2 XP941, v1.28-1 as of 051715) V-Front.de
}
esxcli-shell-1.1.0-15.x86_64.vib {
(latest third-party extended esxcli capability, v1.1.0-15 as of 051715) V-Front.de
}
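For reference, this is roughly how I would push these onto a host from the ESXi shell. The datastore path is a placeholder, and the community-supported V-Front packages need the acceptance level lowered (or the install run with --no-sig-check):

    # Allow the community-supported V-Front packages to install.
    esxcli software acceptance set --level=CommunitySupported

    # Offline bundles (.zip) go in with -d, single .vib files with -v;
    # paths must be absolute, and the datastore name here is a placeholder.
    esxcli software vib install -d /vmfs/volumes/datastore1/igb-5.2.7-1331820-offline_bundle-2157967.zip
    esxcli software vib install -d /vmfs/volumes/datastore1/mpt2sas-20.00.00.00.1vmw-offline_bundle-2253936.zip
    esxcli software vib install -v /vmfs/volumes/datastore1/vmware-esx-sas2flash.vib

    # Confirm what actually landed before rebooting the host.
    esxcli software vib list | grep -iE "igb|mpt2sas|sas2flash|microcode|xahci|esxcli-shell"
    reboot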

Creating a cheap direct attached storage chassis for the cluster.

Well, after much contemplation I have opted to move the two storage disks and the SSD into a custom quasi direct attached storage chassis. In this configuration each host will have three external SAS/SATA 3.5″ bays, connected via three 7-pin SATA cables. The four backplanes will be arranged on their sides on a 1U sliding tray shelf, filling up roughly 80% of the 19 inches. The blank space will be used for something eventually. The void space behind the backplanes will hold an HDPLEX 160W plus its internal AC-DC adapter, providing power to all four backplanes. Hopefully zip ties, super glue, and plastic bonding stuff will suffice; however, I might need some kind of ratcheted nylon tie-down.

Maybe go Lego brick kung-fu action and Kragle the shit out of it. Darn, I named it… Lego it is 😉 Picture a Joker hostage situation with Batman and SWAT responding. The Lego blocks could, in imagination land, turn the front of the 4Us into the side of a building. Pulling some of the blocks out would expose room-sized cavities, which could be lit with an LED and become one part of the hostage crisis (comic book page). Batman climbing up a SATA cable bundle, representing an elevator shaft, to get to the Joker in the penthouse suite. Sourcing figures won’t be easy/cheap (need 10+), but blocks not so much. Plenty of time to sketch out the structure and figure out which bad scene idea isn’t actually that bad.

  • Penn-Elcom R1290/1U Sliding Rack Tray w/Fixing Points
  • HDPLEX 160W DC-ATX Power Supply
  • HDPLEX Internal 120W AC Power Adapter
  • 4x ICY DOCK DataCage Classic MB453IPF-B 3 x 3.5″ HDD in 2 x 5.25″ Bay SAS/SATA

This should give a slight bump to the available power for each host, and allow much-needed elbow room when modifying the VSAN storage. There should also be a significant temperature drop for each node, sys/CPU etc. If you’re reading this you might have been asking yourself: why not just go with a 1U server chassis with 4x 3.5″ bays? Well, the answer is simple: depth. I don’t have it, since I wanted cable arms for each host and a manageable cabinet. Not to mention the noise generated by proper backplane cooling systems. I chose these “MB453IPF-B” drive bays because I didn’t want to convert four internal SAS ports to SFF-*, rather just go straight 7-pin <--> 7-pin. Another reason was the detachable fan, and the capability to provide dual 7-pin SAS. The fan can also be easily replaced with superior Noctua fans.

The vertical space is pretty costly, resting just under 4U. As of this moment (022615) the UPS is drawing 297W of 900W, or 33%. In my random checks it has never been seen over 36%, and goes as low as 30%. Runtime on batteries ranges from 18-25 min. The PDU is not metered, so unfortunately I’m limited to UPS and host-level sensors. I do plan on implementing the SAS drives before a custom DAS comes into play, mostly because I want to see what these HDPLEXs can handle, and the temp/power numbers would be interesting (see the IPMI sketch below for how I’d poll them).
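Since the boards have BMCs anyway, those host-level numbers can be scraped over IPMI. A rough sketch, assuming placeholder BMC hostnames and credentials, and noting that the DCMI power reading may or may not be exposed by the BMC:

    # Pull temps (and power, if the BMC exposes it) from each node's IPMI interface.
    # Hostnames and credentials are placeholders; run from any box with ipmitool installed.
    for BMC in node1-ipmi node2-ipmi node3-ipmi node4-ipmi; do
        echo "== $BMC =="
        ipmitool -I lanplus -H "$BMC" -U ADMIN -P 'PASSWORD' sdr type Temperature
        ipmitool -I lanplus -H "$BMC" -U ADMIN -P 'PASSWORD' dcmi power reading || true
    done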

Evaluating different VSAN storage options.

I decided to do some R&D for the storage parts. Available to me are four 128GB Samsung M.2 PCIe SSDs, four Samsung EVO 850 128GB SSDs, four Intel DC3700 SSDs, eight HGST 2.5″ 1TB 7200 RPM HDDs, eight Seagate E.3 1TB 7200 RPM SAS HDDs, and a plethora of other disks. Obviously the performance with all-SATA magnetic drives in VSAN is appalling. The DC3700s hardly get the meaningful attention they need, and the setup is pretty much incapable of sustaining acceptable performance in any use case. When a Linux and a Windows guest each start a copy to themselves at more than 25% r/w, the latency needle bounces between 120-190ms. Unworkable.
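Those latency numbers are the sort of thing esxtop’s disk-device view will show you alongside VSAN Observer. A minimal sketch from the ESXi shell, with a placeholder datastore path for the batch dump:

    # Interactive: run esxtop, press 'u' for the disk-device view and watch
    # DAVG/cmd (device), KAVG/cmd (kernel) and GAVG/cmd (what the guest sees).
    esxtop

    # Batch mode: capture 30 samples at 10-second intervals to CSV for later graphing.
    esxtop -b -d 10 -n 30 > /vmfs/volumes/datastore1/latency-run.csv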

Currently I am switching out the SATA drives for the on-hand Seagate SAS drives, leaving the DC3700s in as flash, since in theory they should perform better than the M.2 NGFF given their more enterprise-style handling of queues. The numbers on the Samsungs are hard to ignore, leaving me to wonder if brute speed could offset the queue balance; a comparison I am excited about.
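A quick way to sanity-check the queue side of that comparison from the ESXi shell, since esxcli reports the maximum queue depth per device:

    # Show each device alongside its maximum queue depth.
    esxcli storage core device list | grep -E "Display Name|Device Max Queue Depth"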

Worth special mention is the noise level coming from the cabinet. It sits idle at around 20-35dB, measured from 3-5 feet away with the exhaust pointed away. Occasionally the noise maxes out near 44dB. All in all it’s very quiet. The power supplies are passive, but both the PSUs and AC-DC adapters heat up considerably; not enough, however, to affect system temperatures considering their position in the chassis.

So VSAN host storage will look like this (a rough esxcli claim sketch follows the list):
1x Sam EVO 850 Pro 128GB (esxi, host cache, dump), backed by iSCSI or 16GB SSD
1x DC3700 / Sam M.2 (NGFF)
2x Seagate Con E.3 7200K 1TB SAS2 6gb
(will maybe play around and add 1+ HGST 2.5″ 1TB per host)
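If I end up claiming the disk groups by hand instead of through the Web Client, it would look roughly like this from the ESXi shell. The naa IDs are placeholders, and it’s worth double-checking whether your build accepts repeated -d flags or wants one call per disk:

    # Find the device IDs and confirm which ones are flagged as SSD.
    esxcli storage core device list | grep -E "Display Name|Is SSD|Size"

    # Claim one flash device plus the two SAS disks into a VSAN disk group.
    # The naa IDs below are placeholders for whatever the command above reports.
    esxcli vsan storage add -s naa.500xxxxxxxxxxxx1 -d naa.500xxxxxxxxxxxx2 -d naa.500xxxxxxxxxxxx3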

For the last benchmark of the SATA VSAN I forgot to save/grab the VSAN Observer results. Needing to benchmark the SATA configuration again is annoying, having to do the same thing twice now; I should have automated it to begin with. The solution I previously had was PowerShell-based against the SQL cluster + vCenter, and the vCenter Appliance requires that I make the needed glue changes. Hopefully I will also deem it worthy of throwing up here and on GitHub. Essentially it would start and stop via SSH, passing the needed RVC commands and dumping the data into SAN-accessible CSVs, etc. Something along the lines of the sketch below.
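A rough sketch only: the hostnames, credentials, cluster path, and vsan.observer flags are all placeholders/assumptions to be checked against the actual RVC build on the appliance:

    #!/bin/bash
    # Rough sketch: kick off VSAN Observer on the vCenter appliance over SSH,
    # let it run for an hour, then pull the generated stats bundle back to SAN storage.
    VCSA="vcsa.lab.local"                     # placeholder VCSA hostname
    CLUSTER="/localhost/DC/computers/VSAN"    # placeholder RVC cluster path
    RUN="observer-$(date +%Y%m%d-%H%M)"

    ssh root@"$VCSA" "mkdir -p /tmp/$RUN && rvc -c 'vsan.observer $CLUSTER --force --generate-html-bundle /tmp/$RUN --max-runtime 1' -c 'exit' 'administrator@vsphere.local'@localhost"

    # Copy the bundle somewhere SAN/NFS-accessible for later comparison runs.
    scp -r root@"$VCSA":/tmp/$RUN /mnt/san/vsan-benchmarks/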

Here are some outdated internal photos of a host (each host is identical).

More build photos…

While I continue the build and draft posts, enjoy these build photos. Here is a teaser of future post titles:

  • Using GIT to keep track of ESXi advanced configuration settings across hosts
  • Using GIT to manage the BIOS settings across all hosts
  • Flashing the motherboard’s BIOS using Supermicro’s SUM tool
  • Mounting ESXi installer ISO over iKVM then installing ESXi 5.5
  • Using BASH, a task server, and ESXi CLI/SSH to schedule scripted events
  • BIOS (SUM) extended configuration pre ESXi install
  • Zyxel Switch VSAN checklist, with multicast traffic