Hope you are all doing great,
I wanted to bring this to the table because it is something I assumed rarely happens, but in fact it can be catastrophic for everyone when the energy is restored after a power outage.
How much time a SAN will take to successfully boot is hard to predict; in fact, it depends a lot on:
- The hardware configuration it has (how many JBODs, drive count and types, connections, and so on).
- Whether it was correctly powered off.
- Whether it had a UPS, and whether that UPS had enough capacity to carry it through a proper shutdown.
- Data integrity checks performed at startup.
- Flushing cached data to the drives (only on the models that support it).
Depending on these and other factors, a SAN can take anywhere from 10 minutes up to 30 minutes to boot.
Most of us configure our servers so that after a power outage, once the energy is recovered, everything starts at the same time.
Newer servers have improved their boot time (down to 3 minutes on some occasions), which can cause ESXi to time out while trying to find the storage devices that were supposed to be available for whatever lives on them to load correctly.
People rely on SAN volumes to store their VMs, for example, so when this happens, ESXi ends up loading without mapping the LUNs, and the datastores associated with those LUNs, along with the VMs that live on them, appear inaccessible.
A manual rescan will be required, once everything has finished booting, to see the storage devices and VMs again.
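For reference, that manual rescan from the ESXi shell (SSH or DCUI) looks roughly like this; these are the standard esxcli calls, and the same rescan can also be triggered per host from the vSphere client:

```shell
# Rescan all storage adapters so ESXi rediscovers the LUNs
/sbin/esxcli storage core adapter rescan --all

# Rescan for VMFS filesystems so the datastores on those LUNs mount again
/sbin/esxcli storage filesystem rescan
```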
Some storage folks and I evaluated this scenario and came up with three recommendations that might help keep this issue from happening.
Number #1: Load a script into local.sh that waits for a certain amount of time and then rescans the storage devices
This gives the SAN enough time to finish booting and be available for the ESXi storage scan and LUN mapping; the datastores will then be available for the VMs to boot.
You need to add these two lines to the local.sh script. The 300 represents 5 minutes in seconds; if you want to extend this time, just multiply the number of minutes by 60.
/bin/sleep 300
/sbin/esxcli storage core adapter rescan --all
[[email protected]:/etc/rc.local.d] cat local.sh
#!/bin/sh

# local configuration options

# Note: modify at your own risk! If you do/use anything in this
# script that is not part of a stable API (relying on files to be in
# specific places, specific tools, specific output, etc) there is a
# possibility you will end up with a broken system after patching or
# upgrading. Changes are not supported unless under direction of
# VMware support.

# Note: This script will not be run when UEFI secure boot is enabled.

exit 0
[[email protected]:/etc/rc.local.d] cat local.sh
#!/bin/sh

# local configuration options

# Note: modify at your own risk! If you do/use anything in this
# script that is not part of a stable API (relying on files to be in
# specific places, specific tools, specific output, etc) there is a
# possibility you will end up with a broken system after patching or
# upgrading. Changes are not supported unless under direction of
# VMware support.

# Note: This script will not be run when UEFI secure boot is enabled.

/bin/sleep 300
/sbin/esxcli storage core adapter rescan --all

exit 0
The local.sh file is found under /etc/rc.local.d.
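If you would rather not guess at a fixed sleep, a variant of the same local.sh idea is to poll until the rescan actually finds a VMFS volume, with an upper bound on the wait. This is only a sketch under my own assumptions: the 600-second cap, the 30-second interval, and the helper names (retry_until_ok, san_is_up) are illustrative choices, not an official VMware recipe.

```shell
#!/bin/sh
# Sketch: retry the storage rescan until a VMFS volume appears, instead of
# sleeping for a fixed time. Timeout and interval values are illustrative.

retry_until_ok() {
    # $1 = max seconds to wait, $2 = seconds between attempts,
    # remaining args = command to retry until it exits 0
    max="$1"; step="$2"; shift 2
    waited=0
    while [ "$waited" -lt "$max" ]; do
        "$@" && return 0
        sleep "$step"
        waited=$((waited + step))
    done
    return 1
}

# Rescan the adapters, then check whether any VMFS filesystem is visible.
san_is_up() {
    /sbin/esxcli storage core adapter rescan --all
    /sbin/esxcli storage filesystem list | grep -q VMFS
}

# Only meaningful on an ESXi host, so guard on esxcli being present.
if [ -x /sbin/esxcli ]; then
    retry_until_ok 600 30 san_is_up
fi
```

The advantage over a plain sleep is that the host stops waiting as soon as the SAN is actually up, and gives up after the cap instead of hanging forever.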
Number #2: Disable the auto power recovery option (if available).
The majority of today's servers come with options controlling how to react to a power outage and what to do when the energy comes back. We have seen lots of servers that power on immediately when the energy returns. That is great on its own, but when a SAN is present and will take longer to boot, we don't recommend it: the server and OS will boot faster and fail to map the LUNs. (Unless you apply option 1, in which case you can keep auto power recovery enabled.)
Disabling this will require manual intervention to power on the servers.
You can use this option if you don't want to add the lines to the local.sh script and you don't mind powering the servers on physically or remotely (iDRAC/iLO/BMC).
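On many servers this policy can also be checked and changed out-of-band through the BMC. As a hedged sketch, assuming a host whose BMC is reachable with ipmitool (iDRAC/iLO expose the same setting in their own interfaces, and the exact wording varies by vendor):

```shell
# Show chassis status; look for the "Power Restore Policy" field
ipmitool chassis status

# Keep the server powered off after power is restored
# (manual power-on required, which is what option 2 describes)
ipmitool chassis policy always-off

# Other standard values: always-on, previous (restore the last state)
ipmitool chassis policy list
```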
Number #3: Use an intelligent outlet (PDU or UPS)
As time passes, technology evolves, and intelligent PDUs/UPSes have become common in data centers around the globe. They let you configure how to react when the energy is restored, in terms of how much time to wait before the outlets feeding a given component get energized.
This comes in handy because you can create a rule that sets a "boot order" (by energizing the outlets in sequence) in a way that allows all components to come up and communicate correctly.
This is not a cheap solution, but if you already have one and face this "server booting faster than the SAN" issue, make sure to take advantage of that feature.
Hope this was a cool post for you guys,
Do not hesitate to contact me if you have any comments,