Host node reboots triggered by the Azure monitoring systems migrate the VM but are never surfaced in the portal
When a host node reboot is triggered by the Azure monitoring systems detecting a potential failure condition on the physical node hosting the virtual machine, the VM is automatically moved to a different physical node to avoid further impact. While that migration is in progress the VM is in an unusable state, yet the health status still reports HEALTHY. It should not.
I was scratching my head for over an hour wondering why I couldn't access the server, nor shut it down or reboot it, while this went on for 40 minutes.
I had to raise a Priority A support ticket to get this explained to me, as the Azure management portal said nothing.
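For anyone stuck in the same blind spot: the Resource Health availabilityStatuses endpoint can report a platform-initiated unavailability even when the portal blade still looks healthy. A minimal sketch using the Azure CLI's generic `az rest` command, assuming you are logged in; the subscription ID, resource group, and VM name below are placeholders, not values from this incident:

```shell
# Placeholders - substitute your own values.
SUB="<subscription-id>"
RG="<resource-group>"
VMNAME="<vm-name>"

# Query the current Resource Health availability status for the VM.
az rest --method get \
  --url "https://management.azure.com/subscriptions/${SUB}/resourceGroups/${RG}/providers/Microsoft.Compute/virtualMachines/${VMNAME}/providers/Microsoft.ResourceHealth/availabilityStatuses/current?api-version=2020-05-01" \
  --query "properties.{state:availabilityState, summary:summary}"
```

An `availabilityState` of `Unavailable` together with a platform-initiated summary is roughly the information that, in my case, only arrived via the support ticket.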
VandeMore, Adam commented
Additionally, the information the portal shows when such an event happens can be misleading. For example, one notification contains this:
"A reboot was triggered from inside the virtual machine..."
when in reality the reboot was triggered from the hypervisor, e.g.:
"We identified that your VM *redacted* became unavailable at *redacted* and availability was restored at *redacted*. This unexpected occurrence was caused by an Azure initiated host node reboot action.
The host node reboot was triggered by our Azure monitoring systems detecting a potential failure condition with the physical node where the virtual machine was hosted. This caused your VM to get rebooted."
Both quoted blocks of text are from the exact same incident and timeframe. However, I needed to raise an Azure support ticket to obtain the latter.