A cursory look under the hood of an ESXi host

I'll state right off that I have yet to finalize the placement of the power supplies and internal AC adapters. Each host does have a Mylar air shroud, but it is shown removed in the photo below.

The HDPLEX 160W and 120W AC adapters sit atop a block of aluminum, which improves their cooling drastically; even under the most punishing of tests, the AC adapters survive. Both the PSU and the internal AC adapter (IAC) will eventually be screwed directly into the aluminum base, and the cable management in that area will be fine-tuned at that time. The configuration in the third ESXi host positions the IAC toward the front with a Noctua fan blowing into it.

The aluminum plate itself will secure to the chassis using the factory PSU mounting holes and a few holes drilled into the aluminum block. The underbelly of the block will be shaved down a bit to accommodate the protrusions from the chassis, allowing far better surface contact for the diffusion block. The rear of the plate, where the power comes in, will get some reinforcement and provide a nice attachment point for the cable arms, which will all be secured down to the aluminum block.

The above picture shows the final placement of the PCIe cards. I opted to have the PCIe M.2 adapter reside in the first x8 (x16) slot and the Intel x2 GbE card in the second x8 (x8) slot. Prior to this the reverse was used, which made a simple PCIe riser viable; for this configuration two PCIe ribbons are used instead. The reasoning for the change felt like best practice to me: I didn't want my VSAN SSD interface sharing a bus with on-board storage, and the x2 GbE card doesn't need much bandwidth anyway.
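
If you want to confirm which bus each card actually landed on, the ESXi shell (which ships with Python and the esxcli utility) can dump the PCI layout for you. The snippet below is only a minimal sketch of that idea, not something from my build: it runs the standard "esxcli hardware pci list" command and prints each device's bus address next to its name, so the M.2/VSAN adapter and the on-board storage controller can be checked against one another.

    # Minimal sketch (assumption: run from the ESXi shell, which includes
    # Python and esxcli). Lists every PCIe device with its bus address so
    # you can verify the M.2 adapter and the on-board controller sit on
    # different buses.
    import subprocess

    output = subprocess.check_output(
        ["esxcli", "hardware", "pci", "list"], universal_newlines=True
    )

    address = None
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("Address:"):
            address = line.split(":", 1)[1].strip()
        elif line.startswith("Device Name:") and address:
            # Print the bus address alongside the device name for a quick check.
            print("%-14s %s" % (address, line.split(":", 1)[1].strip()))
            address = None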

I am also eagerly awaiting 128GB Samsung M.2 NVMe devices to hit the market. The Intel 750 variant is 400GB at $1 a GB. Moreover, the Samsung M.2 is a more manageable form factor than even a small PCIe card, and my hosts are already set up with PCIe 3.0 M.2 NGFF adapters.

All ESXi hosts are now maxed out on memory; the cluster offers up a total of 128GB. The entire 12U unit draws between 330W and 360W at idle, and anywhere between 360W and 500W under load. Keep in mind that usage includes everything (SonicWall, switches, PoE, modem, console servers, hosts, storage, PDUs, etc.). The unit is completely self-contained.

According to the CyberPower UPS, I have 18 minutes of battery power during idle to low usage. Even at peak times, a good 8 minutes should be enough for power-saving scripts to kick in and gracefully shut things down, a topic I hope to cover in the near future.
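
Since I haven't written those scripts up yet, here is only a rough sketch of the shape they might take, with everything in it assumed rather than taken from my setup: the hostnames are placeholders, the hosts are assumed to be reachable over SSH as root, and the guests are assumed to have VMware Tools so a guest shutdown works. It asks each VM to shut down, puts the host into maintenance mode (a prerequisite for esxcli's poweroff), and then powers the host off.

    # Rough sketch of a UPS-triggered shutdown script. Assumptions, not from
    # my actual setup: SSH access as root, VMware Tools in every guest, and
    # placeholder hostnames below.
    import subprocess

    HOSTS = ["esxi-01.lab", "esxi-02.lab", "esxi-03.lab"]  # hypothetical names

    def ssh(host, command):
        """Run one command on an ESXi host over SSH and return its exit status."""
        return subprocess.call(["ssh", "root@" + host, command])

    for host in HOSTS:
        # List registered VMs and ask each one to shut down its guest OS.
        vms = subprocess.check_output(
            ["ssh", "root@" + host, "vim-cmd vmsvc/getallvms"],
            universal_newlines=True,
        )
        for line in vms.splitlines()[1:]:
            parts = line.split()
            if parts and parts[0].isdigit():
                # Already powered-off VMs will just report an error; harmless.
                ssh(host, "vim-cmd vmsvc/power.shutdown " + parts[0])

        # Maintenance mode first; on a VSAN cluster this step may need
        # additional handling before the host will comply.
        ssh(host, "esxcli system maintenanceMode set --enable true")
        ssh(host, "esxcli system shutdown poweroff --reason 'UPS on battery'")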

You will notice the CPU heatsink has been replaced with a copper one, and the northbridge heatsink with a copper BGA heatsink from Enzotech. The Samsung 850 Pro 128GB uses the on-board primary controller and is marked as an SSD in the motherboard's BIOS. This drive contains the ESXi hypervisor install. The boot sequence is as follows: 1) Virtual CD-ROM, 2) Samsung 850, 3) Built-in EFI Console.

You will also notice the three SATA cables coming in through the back and plugging into the on-board SAS2 ports. From there they connect to three female SATA extension cables, which run to another set of extensions at the rear of the storage chassis and finally into the rear of the ICY DOCK. With a little fine-tuning this situation could admittedly be more ideal. Still being on VSAN 5.5, I don't yet have the luxury of external SAS, though I have already built out the solution for it.

 
