New LSI 2308 – 9207-8i IT Firmware (Pass-through) released 20.00.04.00

Since September 25th, 2014 we haven’t seen any firmware update from LSI for the 9207-8i (20.00.00.00). Today I checked LSI’s website and found the same P20 firmware package listed, but this time with a different release date: May 21st, 2015. After digging through the archive, the only thing that changed is the firmware itself, 20.00.04.00 – 9207-8.bin.
LSI Downloads Link

Inside the downloaded archive you will find “9207-8.bin” in the “firmware/HBA_9207_8i_IT” folder. Use this firmware binary to update your LSI 9207-8i based HBA with the sas2flash tool for ESXi 5.5+.

To do a rolling update, place the first host in VSAN maintenance mode and run the following command, replacing “<datastore>” with the datastore that holds the firmware binary.

/opt/lsi/bin/sas2flash -o -f /vmfs/volumes/<datastore>/9207-8.bin

Reboot the host, then take it out of maintenance mode, and proceed with the next host.
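Putting it together, a per-host pass could look like the sketch below; the esxcli maintenance-mode flags and the ensureObjectAccessibility VSAN mode are assumptions based on ESXi 6.0, so verify them on your build before running anything.

# enter maintenance mode while keeping VSAN objects accessible (flags assumed)
esxcli system maintenanceMode set -e true -m ensureObjectAccessibility
# confirm the controller is detected and note the current firmware (20.00.00.00)
/opt/lsi/bin/sas2flash -listall
# flash the new 20.00.04.00 image
/opt/lsi/bin/sas2flash -o -f /vmfs/volumes/<datastore>/9207-8.bin
# reboot, wait for the host to rejoin the cluster, then exit maintenance mode
reboot
esxcli system maintenanceMode set -e false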

So far I am seeing a positive benefit from the firmware update. Do note, however, that the 2308 driver for ESXi is still at 20.00.00.00.1; hopefully there will be a corresponding update soon. I could not find any release notes describing the firmware changes.

LowJax Cluster updated specs, and some performance details.

The cluster consists of four hosts contributing to VSAN.

Software:
  • VMware ESXi 6.0.0, 2715440
  • VCSA 6.0
  • VDP 6.0
  • VSAN 6.0
  • VRLI 6.0
  • intel-nvme vib: 1.0e.1.1-1OEM
  • scsi-mpt2sas vib: 20.00.00.00.1vmw-1OEM
  • net-igb vib: 5.2.10-1OEM
  • LSI 2308 P20 Firmware (IT/Pass through)
Core:
  • Supermicro 1U Short Chassis (CSE-512L-260B)
  • Supermicro Xeon UP UATX Server Motherboard (X10SL7-F)
  • Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30GHz (BX80646E31230V3)
  • (2x) Crucial 16GB Kit for 32GB Memory (CT2KIT102472BD160B)
Power:
  • HDPLEX 160W DC-ATX Power Supply
  • Mini-Box Y-PWR, Hot Swap, Load Sharing Controller
  • (2x) HDPLEX Internal 80W AC Power Adapter (160W peak with failover)
Cooling:
  • Dynatron Copper Heatsink K129 (1U CPU)
  • Enzotech BGA Copper heatsinks (northbridge)
  • SuperMicro Mylar Air Shroud
Storage:
  • Avago (LSI) 2308 SAS2 on-board HBA Controller
  • Samsung 850 Pro 128GB SSD (MZ-7KE128BW)
  • Intel 750 Series 400GB PCIe NVMe 3.0 SSD (SSDPEDMW400G4R5)
  • (3x) Seagate 1TB Enterprise Capacity HDD SAS (ST1000NM0023)
Networking:
  • Intel i210 on-board NIC with 2x 1GbE
  • Intel 10GbE Converged Ethernet Network Adapter (X540T1)
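To confirm the driver vib versions listed under Software above, something like this from an SSH session on each host should do it (the grep pattern just matches the three OEM vibs):

esxcli software vib list | grep -E 'intel-nvme|scsi-mpt2sas|net-igb'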

I am still trying to somehow slow the IOPS down with workloads, missing hosts, component syncs, client IOPS benchmarking, and simulated user events across multiple clients. There seems to be nothing I can do to slow things down at the storage layer. Even ten VDP backups running simultaneously, on top of everything listed above, doesn’t have any effect. My latencies remain between 0.1ms and 1.2ms.

A Windows client with a two-stripe storage policy reads sequentially at 1000MB/s – 1700MB/s and writes at 600MB/s – 1300MB/s. The same client performs 50k – 60k random 4k write IOPS, and 80k – 120k read IOPS. The benchmark results have never dipped below the stated minimums. Running ten other benchmarks simultaneously generates the same numbers as if nothing else were running, and even during a VDP performance test everything remains the same!

vMotions happen in just seconds, and on average a host enters VSAN maintenance mode in just under a minute. The capabilities and performance so far are out of this world. If your near-line hardware is on the HCL and you have 10GbE, VMware VSAN truly delivers! Eventually I will put together some comprehensive benchmark numbers for various scenarios.

DIY Sonicwall Rack chassis

Browsing through some project photos, I realized justice wasn’t given to the DIY Sonicwall setup. The materials are extremely simple and end up costing much less than a $400 Sonicwall-made rack housing. The benefit of using one is pretty straightforward: you gain convenient access to the ports, not to mention the much desired form factor, with a result similar to the photo. The parts list:

  • 1U rack shelf with lower venting slits
  • 12-port patch panel face cover for keystone jacks
  • Ten pack of Cat6 RJ45 keystone jacks
  • Some blank keystones to keep things tidy
  • (Coax) connector keystone module, if you want nicer cable modem cable management

You will also need a whole bunch of flat Cat6 3″ patch cables and a variety of zip ties. Do note this does require real zip tie competence.

Custom Direct Attached Storage progress, final stages

In this stage I decided to switch back to two 160W HDPLEX power supplies. Those got mounted to an aluminum block, and I made some custom cables. I also attached the front bezel, which serves multiple purposes. First, it keeps the ICY Docks firmly in place, since there is another bar going across the backside. It also allows the ICY Docks to tilt into place, and it can easily be removed. Finally, the bezel covers a portion of the disk LEDs, making the appearance much more pleasant (about 25% as bright).

I decided to use fancy sleeved red cables for the 12V power, which first runs through a terminal block and then into female Molex connectors. For stage three I will attach the aluminum rods that connect the top and bottom, and at the same time make custom cables for the power supplies so that everything is sleeved in red. Hopefully I will also figure out where to mount the two power switches; in one of the photos they are shown placed in the rack’s square holes.

Raspberry Pi 2 w/Debian, out-of-band and in-band consolidated device/task management server

Recently I decided to plan out the deployment of a Raspberry Pi 2 into the LowJax cluster. I wanted a way to centralize the management of the devices in the cluster, reachable both in-band and out-of-band. Initially its core responsibilities would be to trap SNMP data from the UPS and PDU, as well as to consolidate the console servers for the two Lantronix devices (four ports). The running code and scripts would be synced to a Git repo.
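As a rough sketch of the trap side, assuming the Pi runs net-snmp’s snmptrapd and the UPS/PDU send traps with a “public” community string (both assumptions), the config could be as small as:

# /etc/snmp/snmptrapd.conf (hypothetical)
authCommunity log,execute,net public
traphandle default /home/pi/scripts/trap-handler.sh   # hypothetical script that logs/acts on UPS and PDU traps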

The new capabilities include accessing the consoles of four devices from one SSH session, and the ability to fine-tune shutdown and power-saving scripts. For instance, when on battery this Raspberry Pi could gracefully reduce the power footprint by staggering the shutdown of non-essential VMs and devices. It could also monitor essential devices and be capable of executing an emergency routine. Add a second Pi for failover…

For instance, if a switch fails uptime/connectivity checks, the routine would attempt all three stages of restarting it: a standard SSH command, then a restart via the serial console connection, and finally a command to the PDU to power-cycle that outlet. In theory, even the most extreme case of issuing a reboot to the primary PDU via the UPS would be possible. A rough sketch of that escalation is shown below.
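This is only a sketch of the idea; every hostname, port number, and remote command here is hypothetical and would depend on the actual switch, the Lantronix port mapping, and the PDU’s CLI.

#!/bin/bash
# Hypothetical three-stage restart escalation for a switch that fails connectivity checks.
SWITCH="switch1.lowjax.lan"   # assumed management hostname
CONSOLE_HOST="lantronix1"     # assumed Lantronix console server
CONSOLE_PORT=10001            # assumed raw TCP port mapped to the switch's serial line
PDU_HOST="pdu1"               # assumed PDU hostname
PDU_OUTLET=3                  # assumed outlet feeding the switch

up() { ping -c 3 -W 2 "$SWITCH" > /dev/null; }

up && exit 0                  # switch answers, nothing to do

# Stage 1: ask the switch to reboot itself over SSH
ssh -o ConnectTimeout=5 admin@"$SWITCH" "reload"
sleep 120; up && exit 0

# Stage 2: send the same command over the serial console (syntax is switch-specific)
echo "reload" | nc -w 5 "$CONSOLE_HOST" "$CONSOLE_PORT"
sleep 120; up && exit 0

# Stage 3: power-cycle the switch's outlet on the PDU (CLI command is PDU-specific)
ssh -o ConnectTimeout=5 admin@"$PDU_HOST" "reboot outlet $PDU_OUTLET"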

Once I am comfortably on my way, I will post a deep dive into its usage along with the code repository.

RASPBIAN — Raspberry Pi Debian Wheezy (File Details)
File: 2015-02-16-raspbian-wheezy.zip
Desc: Raspberry Pi Debian Wheezy “Raspbian” image
SHA1: b71d7b61f44e9bd582df71c9be494c271c97650f
Size: 974MB
Date: 2015-02-16
Download: LowJax Cloud
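To verify the download against the SHA1 above and write it to an SD card (the /dev/sdX device name and the extracted .img filename are assumptions; double-check both before running dd):

sha1sum 2015-02-16-raspbian-wheezy.zip    # should match b71d7b61f44e9bd582df71c9be494c271c97650f
unzip 2015-02-16-raspbian-wheezy.zip
sudo dd if=2015-02-16-raspbian-wheezy.img of=/dev/sdX bs=4M && sync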


External Info:

Raspberry Pi Debian Wheezy Raspbian
Raspberry Pi 2 Website

(Updated) VMware vCenter Appliance 5.5+ Database Archiving tool

I decided to add some features and improve things a bit. In this iteration you can now back up your Inventory Service database and restore it: back it up with “./vcdb-tool -b” or restore it with “./vcdb-tool -r <inv.bak>”.
You can also list the contents of your archive folder with “./vcdb-tool -l”.

GitHub URL: https://github.com/jonretting/vcdb-tool

git clone git@github.com:jonretting/vcdb-tool.git
Sample Output:

INFO:

  • Tested on vCenter Server Appliance 5.5d and 5.5e
  • Works only on an embedded local Postgres Database
  • When using the “-p” purge option, a value of “30” deletes all archives older than 30 days
  • Edit “vcdbt_backup_dest” variable to change backup destination path

OPTIONS:

vcdb-tool [-e] [-i file] [-b] [-r file] [-l] [-p days] [-h]
-e    Export a backup archive to the backup path
-i    Import the specified backup file from the backup path (-i file.bak)
-b    Backup the Inventory Service database
-r    Restore an Inventory Service database archive (-r file.bak)
-l    List contents of archive folder
-p    Purge outdated backup archives older than # days (-p 30)
-h    this cruft

EXAMPLES:

  • Run a Postgres Database Export ./vcdb-tool -e
  • Run a Postgres Database Import ./vcdb-tool -i VCDB.db.042615.1430061195.bak
  • Run an Inventory Service Backup ./vcdb-tool -b
  • Run an Inventory Service Restore ./vcdb-tool -r inv-backup.042615.1430061173.bak
  • List archive folder contents ./vcdb-tool -l
  • Clean up backup archives older than thirty days ./vcdb-tool -p 30
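For unattended use, the export (-e) and purge (-p) options could be combined in a root cron entry on the appliance; the install path below is just an example:

# run a nightly export at 02:00 and prune archives older than 30 days
0 2 * * * /root/vcdb-tool/vcdb-tool -e && /root/vcdb-tool/vcdb-tool -p 30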

Based on official VMware KB-2034505 and KB-2062682

A cursory look under the hood of an ESXi host

Stating right off, I have yet to solidify the placement of the power supplies and internal AC adapters. Each host does have a Mylar air shroud, but it is shown removed in the photo below.

The HDPLEX 160W and the 120W AC adapter sit atop a block of aluminum, which improves their cooling drastically; even under the most punishing tests the AC adapters survive. Both the PSU and internal AC adapter (IAC) will eventually be screwed directly into the aluminum base, and the cable management in that area will be fine-tuned at that time. The configuration in the third ESXi host positions the IAC towards the front with a Noctua fan blowing into it.

The aluminum plate itself will secure into the chassis using the factory PSU mounting holes and some holes drilled into the aluminum block. The block’s underbelly will be shaved a bit to accommodate the protrusions from the chassis, allowing far better surface contact for the diffusion block. Toward the rear of the plate, where the power comes in, there will be some reinforcement to provide a nice attachment area for the cable arms, which will all be secured down to the aluminum block.

The above picture shows the final placement of the PCIe cards. I opted to have the PCIe M.2 adapter reside in the first x8 (x16) slot and the Intel x2 GbE card in the second x8 (x8) slot. Prior to this the reverse was used, which made an easy PCIe riser viable; for this configuration two PCIe ribbons were used instead. The reasoning for the switch felt like best practice to me: I didn’t want my VSAN SSD interface sharing a bus with the on-board storage, and the GbE card does not need much bandwidth anyway.

I am also eagerly awaiting 128GB Samsung M.2 NVMe devices to hit the market. The Intel 750 variant is 400GB at $1 a GB. Moreover, the Samsung M.2 is a more manageable form factor compared to even a small PCIe card, and my hosts are already set up with PCIe 3.0 M.2 NGFF adapters.

All ESXi hosts are now maxed out on memory, and the cluster offers a total of 128GB. The entire 12U unit draws between 330W-360W idle and anywhere between 360W-500W under load. Keep in mind that usage includes everything (Sonicwall, switches, PoE, modem, console servers, hosts, storage, PDUs, etc.); the unit is completely self-contained.

According to the CyberPower UPS, I have 18min of battery power during idle-to-low usage. Even at peak times, a good 8min should be enough for power-saving scripts to kick in and gracefully shut things down, a topic I hope to go over in the near future.

You will notice the CPU heatsink has been replaced by a copper one, and the northbridge by a copper BGA heatsink from Enzotech. The Samsung 850 Pro 128GB uses the on-board primary controller and is marked as an SSD in the motherboard’s BIOS. This drive contains the ESXi hypervisor install. The boot sequence is as follows: 1) Virtual CD-ROM, 2) Samsung 850, 3) Built-in EFI Console.

You will also notice the three SATA cables coming in through the back and plugging into the on-board SAS2 ports. From there they connect to three female SATA extension cables, which run all the way to another set of extensions at the rear of the storage chassis, and finally into the rear of the ICY DOCK. With a little fine tuning this situation could admittedly be more ideal. Still being on VSAN 5.5, I don’t yet have the luxury of utilizing external SAS, but I have already built out the solution for this.


Photos of Lantronix Console server mod

I decided I wanted to combine two console servers into a more space-friendly form factor, so I took the Lantronix EDS2100/UDS2100 devices, broke off the two mounting tabs, then super-glued/epoxied/Bondic’d them together. Considering my space limitations, this makes cabling and mounting much easier.