
26 December 2024

Watch our 'Azure Incident Retrospective' video about this incident:

What happened? 

Between 18:40 UTC on 26 December and 19:30 UTC on 27 December 2024, multiple Azure services were impacted by a power event that occurred in one datacenter, within one Availability Zone (physical zone AZ03), in the South Central US region. Within the impacted datacenter, our automated power systems managed the event as expected, without interruption for two of the three data halls. However, one data hall did not successfully transition to an alternate power supply. This failure led to a loss of compute, network, and storage infrastructure in this data hall.  

Customer workloads configured for multi-zone resiliency would have seen no impact, or only brief impact, as automated mitigations occurred. Only customer workloads without multi-zone resiliency, and with dependencies on the impacted infrastructure, became degraded or unavailable. Impacted downstream services included: 

Azure Alerts Management – between 18:40 UTC on 26 December 2024 and 05:00 UTC on 27 December 2024, impacted customers may have experienced high latency in alert notifications and persistence.

Azure App Service – between 18:40 UTC on 26 December and 12:00 UTC on 27 December, impacted customers may have received intermittent HTTP 500-level response codes, or experienced timeouts or high latency, when accessing App Service (Web, Mobile, and API Apps), App Service (Linux), or Function deployments hosted in the South Central US region.

Azure Application Gateway – between 18:40 UTC on 26 December and 06:58 UTC on 27 December, impacted customers may have experienced data plane disruptions when trying to access their backend applications using Application Gateway hosted in the South Central US region.

Azure Backup – between 20:40 UTC on 26 December and 02:21 UTC on 27 December, impacted customers may have experienced failures in backup operations for Azure File shares in the South Central US region.

Azure Cache for Redis – between 18:45 and 21:35 UTC on 26 December, impacted customers may have lost cache availability and/or been unable to connect to cache resources hosted in the South Central US region.

Azure Cosmos DB – between 18:47 UTC on 26 December and 03:59 UTC on 28 December, impacted customers may have experienced a degradation in service availability and/or request latency. Some requests may have resulted in server errors or timeouts.

Azure Database for PostgreSQL – between 18:48 UTC on 26 December and 13:12 UTC on 27 December, impacted customers may have experienced connectivity failures and timeouts when executing operations, as well as unavailability of resources hosted in the South Central US region.

Azure Database Migration Service – between 19:01 UTC on 26 December and 12:17 UTC on 27 December, impacted customers may have experienced timeout errors when attempting to create a new migration service, or when using an existing migration service, in the South Central US region.

Azure Event Hubs | Azure Service Bus – Customers with Standard SKU or Premium SKU namespaces, or AZ-enabled dedicated Event Hubs clusters, experienced an availability drop for approximately five minutes when the incident started – this issue was mitigated automatically once namespace resources were reallocated to other availability zones. However, a subset of customers using Event Hubs Dedicated non-AZ clusters experienced an availability issue for an extended period of time when trying to access their Event Hubs namespaces in the region. The affected Event Hubs dedicated clusters recovered once the underlying failing VMs in their clusters were brought back online, the last of which were restored by 05:52 UTC on 27 December.

Azure Firewall – between 18:44 UTC on 26 December and 11:30 UTC on 27 December, impacted customers with an Azure Firewall deployed with multi-zone resilience may have seen partial throughput degradation and no availability loss. Customers with an Azure Firewall not utilizing multi-zone resiliency may have had resources dependent on the impacted Availability Zone (physical zone AZ03) which could have resulted in performance degradation or availability impact. Customers attempting control plane operations (for example, making changes to Firewall policies/rules) may have experienced failures during this incident.

Azure Logic Apps – between 18:47 UTC on 26 December and 03:10 UTC on 27 December, impacted customers may have encountered delays in run executions and failing data or control plane calls.

Azure SQL Database – between 20:12 UTC on 26 December and 18:22 UTC on 27 December, impacted customers may have experienced issues accessing services. New connections to databases in the South Central US region may have resulted in an error or timeout. Existing connections remained available to accept new requests; however, if those connections were terminated and then re-established, they may have failed.

Azure Storage – between 18:45 UTC on 26 December and 08:50 UTC on 27 December, impacted customers may have experienced timeouts and failures when accessing storage resources hosted in the South Central US region. This affected both Standard and Premium tiers of Blobs, Files and Managed Disks.

Azure Synapse Analytics – between 18:53 UTC on 26 December and 13:52 UTC on 27 December, impacted customers may have experienced Spark job execution failures, and/or errors when attempting to create clusters, in the South Central US, East US 2, and/or Brazil South regions.

Azure Virtual Machines – between 18:41 UTC on 26 December and 22:26 UTC on 27 December, impacted customers may have experienced connection failures when trying to access some Virtual Machines hosted in the South Central US region. These Virtual Machines may have also restarted unexpectedly.

Azure Virtual Machine Scale Sets – between 19:04 UTC on 26 December and 11:18 UTC on 27 December, impacted customers may have experienced error notifications when performing service management operations – such as create, delete, update, scale, start, and stop – for resources hosted in the South Central US region.

This incident also impacted a subset of Microsoft 365 services – further details are provided in the Microsoft 365 Admin Center, under incident ID MO966473.

 

What went wrong and why? 

This incident was initially triggered by a utility power loss, itself caused by a localized ground fault – in which a high voltage underground line failed. After a phase to ground short developed in the buried feeder cables, the breaker feeding the datacenter tripped – leading to a loss of utility power, at 18:40 UTC.  

By design, the power distribution systems transferred the load to diesel backup generators, with UPS batteries carrying the load during the transition. This transition completed successfully for two of the three affected data halls. In the third data hall, UPS battery faults during the transition to generator power caused the load to drop.

In any power-related event, our first priority is to ensure the safety of our staff and infrastructure before any power restoration work can begin. Following our assessment, we were able to safely begin restoration at 20:13 UTC. IT power loads were manually re-energized on backup diesel generator power, by performing a bypass on the failed UPS devices. We began seeing infrastructure services returning by 20:35 UTC, with power fully restored by 20:56 UTC. As power and infrastructure recovered, the next validation steps were to ensure that Azure Networking and Azure Storage services were recovering as expected. By 21:00 UTC, almost all storage and network infrastructure services were confirmed as fully operational. A single storage scale unit remained significantly degraded, due to hardware that required deeper inspection and ultimately, replacement. 

As storage scale units recovered, 85% of the impacted Virtual Machines (VMs) recovered by 21:40 UTC as their Virtual Hard Disks (VHDs) became available. A further 13% of VMs recovered between 06:00 and 06:30 UTC, as the final storage scale unit became available. Even after all the storage issues were resolved, fewer than 2% of the VMs impacted by this event remained unhealthy. These issues are detailed below and explain why impacted downstream services with dependencies on these VMs experienced long-tail recoveries. The incident was declared as mitigated at 19:30 UTC on 27 December 2024.

Azure Storage:

For Zone Redundant Storage (ZRS) accounts, there was no availability impact – as data was served from replicas in other Availability Zones during this incident.  
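For illustration, zone redundancy for storage is selected at the account level through the SKU. Below is a minimal sketch of creating a Standard_ZRS general-purpose v2 account with the Python management SDK; it assumes the azure-identity and azure-mgmt-storage packages, and the subscription, resource group, and account names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

# Placeholder identifiers for illustration only.
subscription_id = "<subscription-id>"
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Standard_ZRS keeps synchronous copies of the data in multiple
# Availability Zones, so losing one zone does not take the account offline.
poller = client.storage_accounts.begin_create(
    resource_group_name="example-rg",
    account_name="examplezrsaccount",
    parameters=StorageAccountCreateParameters(
        sku=Sku(name="Standard_ZRS"),
        kind="StorageV2",
        location="southcentralus",
    ),
)
account = poller.result()
print(account.name, account.sku.name)
```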

The power loss event impacted six Storage scale units. After power restoration, scale units hosting Standard SSD Managed Disks, Premium SSD Managed Disks, Premium Blobs, and Premium Files, fully recovered automatically in around 30 minutes. For most of the HDD-based Standard Storage LRS/GRS scale units, the storage services took approximately one hour to recover. 

Unfortunately, within one Standard Storage scale unit, multiple network switches were non-functional following the power event, causing a significant portion of the data in that scale unit to be inaccessible because all replicas were unreachable. This caused significant impact to VMs and dependent services that were using Standard HDD managed disks and LRS blob or file storage accounts hosted on this scale unit. Mitigation required replacement networking equipment to be sourced from spares and installed by datacenter technicians. Network engineers then configured and validated these devices, before bringing them online. Additional actions were taken to recover storage nodes under the replaced switches. For the majority of accounts, availability was restored by 06:10 UTC on 27 December 2024 (overall availability at 99.5%), with repairs required on a handful of servers to restore 100% availability by 08:50 UTC on 27 December 2024.

Azure Compute / Virtual Machines: 

For customers using VM/compute workloads that leveraged multi-zone resiliency (such as VMSS flex across availability zones), there was no availability impact.  
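One way to find compute deployments that lack this protection is to inspect the zones property on existing scale sets. A minimal sketch using the Python management SDK, assuming the azure-identity and azure-mgmt-compute packages and a placeholder subscription ID:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"  # placeholder
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Scale sets pinned to a single zone, or deployed regionally with no zone,
# were the deployments exposed to a single-zone power loss like this one.
for vmss in compute.virtual_machine_scale_sets.list_all():
    zones = vmss.zones or []
    if len(zones) < 2:
        print(f"{vmss.name}: zones={zones or 'none'} - review for zone resiliency")
```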

For incidents like this, Azure has an automated recovery suite called ‘Defibrillator’ that starts automatically to recover the VMs, and the Host machines they run on, after datacenter power has been restored. It orchestrates power-on for all affected Host machines, monitors the boot-up and bootstrap sequences, and ensures that the VMs are up and running. While this is running, Azure’s automated steady-state health detection and remediation systems suspend all activities, in order to avoid disrupting the disaster recovery process.

At approximately 22:00 UTC on 26 December 2024, some compute scale units were found not to be tracking at the expected level of recovery. The final 2% of VMs mentioned above experienced an extended recovery – we observed three separate issues that contributed to this.

  • The first scenario was Host machines initializing without a connection to a network device. Because the network devices were not fully configured before the Host machines were powered on, a race condition was triggered during the Host bootstrap process. This issue is specific to a certain hardware configuration within localized compute scale units, and it necessitated the temporary disabling of some validation checks during the bootstrap process.
  • The second scenario delaying recovery was some machines failing to boot into the Host OS, due to a newly discovered bootloader bug impacting a small subset of host hardware with higher levels of offline memory pages. When the hardware reports repeated corrected memory errors to the Host OS, the Host will offline certain memory ranges to prevent repeated use of that memory. In the small subset of host hardware where a large range of offline memory had accumulated, this new Host OS bug was discovered – resulting in a failure to bootstrap the Host OS. This category was mitigated by clearing and/or ignoring the offline memory list and allowing the Host OS to make forward progress where it could, then rebuilding its offline memory list once the full OS was running.
  • The third scenario that prevented compute recovery in some cases involved the control plane devices that sit inline to execute power operations on the Host machines. Datacenter technicians were required to reseat that infrastructure manually.

By 10:50 UTC on 27 December, >99.8% of the impacted VMs had recovered, with our team re-enabling Azure’s automated detection and remediation mechanisms. Some targeted remediation efforts were required for a remaining small percentage of VMs, requiring manual intervention to bring these back online. 

Azure Cosmos DB:

For Azure Cosmos DB accounts configured with availability zones, there was no impact, and the account maintained availability for reads and writes.

Impact on other Cosmos DB accounts varied depending on the customer database account regional configurations and consistency settings:  

  • Database accounts with multiple read regions and a single write region outside South Central US maintained availability for reads and writes if configured with session or lower consistency. Accounts using strong or bounded staleness consistency may have experienced write throttling to preserve consistency guarantees until the South Central US region was either taken offline or recovered. This behavior is by design.  
  • Active-passive database accounts with multiple read regions and a single write region in South Central US maintained read availability, but write availability was impacted until the South Central US region was taken offline or recovered. 
  • Single-region database accounts in South Central US without Availability Zone configuration were impacted if any partition resided on the affected instances.
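For reference, zone redundancy for Cosmos DB is chosen per region when the account's locations are configured. A minimal sketch with the Python management SDK, assuming the azure-identity and azure-mgmt-cosmosdb packages and placeholder resource names:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient
from azure.mgmt.cosmosdb.models import (
    DatabaseAccountCreateUpdateParameters,
    Location,
)

subscription_id = "<subscription-id>"  # placeholder
client = CosmosDBManagementClient(DefaultAzureCredential(), subscription_id)

# Marking a region as zone redundant spreads its replicas across
# Availability Zones, the configuration that stayed available here.
params = DatabaseAccountCreateUpdateParameters(
    location="southcentralus",
    locations=[
        Location(
            location_name="southcentralus",
            failover_priority=0,
            is_zone_redundant=True,
        )
    ],
    database_account_offer_type="Standard",
)
poller = client.database_accounts.begin_create_or_update(
    "example-rg", "example-cosmos-account", params
)
account = poller.result()
print(account.name, [loc.is_zone_redundant for loc in account.locations])
```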

Azure SQL Database: 

For Azure SQL Databases configured with zone redundancy, there was no impact.

A subset of customers in this region experienced unavailability and slow or stuck control plane operations, such as updating the service level objective, for databases not configured as zone redundant. Customers with an active geo-replication configuration were asked, at approximately 22:31 UTC, to consider failing out of the region.

Impact duration varied. Most databases recovered after Azure Storage recovered. Some databases took an extended time to recover due to the aforementioned long recovery time of some underlying VMs.
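For reference, zone redundancy is a per-database setting, and geo-replicated databases are typically failed over via their failover group. A minimal sketch with the Python management SDK, assuming the azure-identity and azure-mgmt-sql packages; the server, database, and failover group names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

subscription_id = "<subscription-id>"  # placeholder
sql = SqlManagementClient(DefaultAzureCredential(), subscription_id)

# List databases on a server that are not zone redundant.
for db in sql.databases.list_by_server("example-rg", "example-sqlserver"):
    if not db.zone_redundant:
        print(f"{db.name}: not zone redundant - review for zone resiliency")

# For geo-replicated databases, a failover of the failover group makes the
# secondary region the writable primary; it is initiated against the
# secondary server that should become primary.
poller = sql.failover_groups.begin_failover(
    "example-rg", "example-secondary-sqlserver", "example-failover-group"
)
poller.result()
```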

Azure Application Gateway: 

Application Gateway experienced issues with data path, control plane, and auto-scale operations, leading to service disruptions. Impact on Application Gateways varied depending on customer configuration (see the sketch after this list):

  • Customers who deployed Application Gateways with zone redundancy may have experienced latency issues and overall degraded performance.
  • Customers who deployed Application Gateways to a single zone, or did not specify zone information during deployment, may have experienced data path loss if their deployments had instances in the affected zone.
  • Gateways with instances deployed in the affected zone may have experienced failures or delays in configuration updates.
  • Gateways with instances deployed in the affected zone may have experienced failures or delays in auto-scale operations.
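A minimal sketch of checking which Application Gateways carry zone information, using the Python management SDK and assuming the azure-identity and azure-mgmt-network packages with a placeholder subscription ID:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"  # placeholder
network = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Gateways deployed to a single zone, or with no zones specified, were the
# deployments exposed to data path loss during this incident.
for gw in network.application_gateways.list_all():
    zones = gw.zones or []
    if len(zones) < 2:
        print(f"{gw.name}: zones={zones or 'none specified'} - review for zone resiliency")
```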

Azure Firewall: 

For Azure Firewalls deployed to all Availability Zones of the region, customers would not have experienced any data path impact. 

However, customers with an Azure Firewall deployed only to the impacted Availability Zone (physical zone AZ03) may have experienced some performance degradation, affecting the ability to scale out. In addition, customers attempting control plane operations (for example, making changes to Firewall policies/rules) may have experienced failures during this incident. Both of these impacts were experienced between 18:40 UTC on 26 December and 07:22 UTC on 27 December 2024.

Azure Synapse:  

Some users of Azure Synapse Analytics faced Spark job execution failures in the South Central US, Brazil South, and East US 2 regions. This impacted less than 1% of Synapse calls in those regions. Customer logs may include one or more of the following errors resulting from this issue: “CLUSTER_CREATION_TIMED_OUT”, “FAILED_CLUSTER_CREATION”, “CLUSTER_FAILED_AFTER_RUNNING”. During this period, Azure Synapse could not provision on-demand compute due to a failure to retrieve the Management Group ancestry needed for RBAC evaluations. The underlying storage for the South Central US instance of this ancestry data – on which the South Central US, Brazil South, and East US 2 regions depend – was impacted by this incident. The data is replicated globally and regional failover attempts were made, but these did not succeed due to a gateway error. The issue was resolved across all regions once the South Central US region was recovered.

How did we respond? 

  • 18:40 UTC on 26 December 2024 – Initial power event occurred which led to power loss in the affected data hall. 
  • 18:45 UTC on 26 December 2024 – Technicians from the datacenter operations team engaged. 
  • 18:46 UTC on 26 December 2024 – Portal Communications started being sent to impacted subscriptions. 
  • 19:02 UTC on 26 December 2024 – Datacenter incident call began to support triaging and troubleshooting issues. 
  • 19:08 UTC on 26 December 2024 – Azure engineering teams joined a central incident call, to triage and troubleshoot Azure service impact. 
  • 20:13 UTC on 26 December 2024 – Power restoration assessed safe and began. 
  • 20:35 UTC on 26 December 2024 – Compute, Network, and Storage infrastructure began to recover. 
  • 20:54 UTC on 26 December 2024 – Communications published to our public status page.  
  • 20:56 UTC on 26 December 2024 – Power had been restored. Infrastructure recovery continued. 
  • 21:40 UTC on 26 December 2024 – 85% of the VMs impacted by underlying VHD availability recovered. 
  • 06:30 UTC on 27 December 2024 – Additional 13% of VMs impacted by VHD availability recovered. 
  • 08:30 UTC on 27 December 2024 – Ongoing mitigation of additionally impacted services. 
  • 13:00 UTC on 27 December 2024 – Mitigation to most affected services confirmed. 
  • 19:30 UTC on 27 December 2024 – Incident mitigation confirmed and declared.  

How are we making incidents like this less likely or less impactful? 

  • The datacenter has been returned to utility power, after ensuring battery health for the UPS transition from generator power (Completed).
  • We are reviewing the nature of the UPS battery failures in line with our global battery standards and maintenance procedures, to identify improvements to de-risk this class of issue across the fleet. (Estimated completion: February 2025)
  • Repairs to the offline failed utility line are in progress. (Estimated completion: February 2025)
  • The mitigation to bypass various checks during the bootstrap process has been applied to all impacted machines, and is being evaluated and executed for other hardware configurations where needed. (Estimated completion: March 2025) 

How can customers make incidents like this less impactful? 

How can we make our incident communications more useful? 

You can rate this PIR and provide any feedback using our quick 3-question survey: