24 April 2026

Join either of our upcoming 'Azure Incident Retrospective' livestreams discussing this incident, to hear from our engineering leaders and get any questions answered by our experts, or watch a recording of the livestream, available the following week on YouTube.

This is our Preliminary PIR to share what we know so far. After our internal retrospective is completed (generally within 14 days) we will publish a Final PIR with additional details.

What happened?

Between 11:30 UTC on 24 April and 00:15 UTC on 25 April 2026, customers may have experienced failures or delays when attempting to provision, scale, or update resources in East US. Beyond this, a smaller subset of impacted customers may have experienced intermittent connectivity issues on existing workloads (including Virtual Machines and Azure Virtual Desktop sessions) in scenarios that depended on the unhealthy internal services.

The issue initially began with impact to a subset of customers in a single Availability Zone (physical AZ-01) but, as demand shifted, similar symptoms were observed impacting a subset of customers in AZ-02 and AZ-03. While no zone was impacted for the full duration of the incident, customers experienced periods of impact in each of the three zones.

The following services were among those affected: Azure Application Gateway, Azure App Service, Azure Batch, Azure Cache for Redis, Azure Data Explorer, Azure Data Factory, Azure Databricks, Azure Health Data Services, Azure Kubernetes Service (AKS), Azure Red Hat OpenShift, Azure Service Fabric, Azure Synapse Analytics, Azure Virtual Desktop, Azure Virtual Machines, Azure Virtual Network Manager, Azure VMware Solution, Oracle Database@Azure, Virtual Machine Scale Sets – and potentially additional services that were dependent on new compute allocations in the region.

Note: Logical availability zones assigned to customer subscriptions may map to different physical availability zones. Customers can use the Locations API to understand this mapping.
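
For reference, a minimal sketch of retrieving this mapping through the ARM Locations API might look like the following (shown in Python using the azure-identity and requests packages; the subscription ID is a placeholder, and the availabilityZoneMappings property is only returned for regions that support availability zones):

```python
# A sketch of querying the logical-to-physical availability zone mapping via
# the ARM Locations API. Requires the azure-identity and requests packages;
# <your-subscription-id> is a placeholder.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<your-subscription-id>"

# Acquire an ARM access token with whichever credential is available locally.
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default"
).token

resp = requests.get(
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}/locations",
    params={"api-version": "2022-12-01"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

# Print the logical -> physical zone mapping for East US, the region in this PIR.
for location in resp.json()["value"]:
    if location["name"] == "eastus":
        for mapping in location.get("availabilityZoneMappings", []):
            print(mapping["logicalZone"], "->", mapping["physicalZone"])
```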

What went wrong and why?

The Azure PubSub service is a key component of the networking control plane, acting as an intermediary between resource providers and networking agents on Azure hosts. Resource providers, such as the Network Resource Provider, publish customer configurations during Virtual Machine or networking create, update, or delete operations. Networking agents (subscribers) on the hosts retrieve these configurations to program the host's networking stack. Additionally, the service functions as a cache, ensuring efficient retrieval of configurations during VM reboots or restarts. This capability is essential for deployments, resource allocation, and traffic management in Azure Virtual Network (VNet) environments.
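
To make the pattern concrete, here is a minimal publish/subscribe-with-cache sketch in Python. This is not the actual service implementation; the class and parameter names (NetworkConfigPubSub, resource_id) are hypothetical:

```python
# Illustrative only: a publish/subscribe intermediary that also caches the
# latest configuration per resource, so a late or restarting subscriber (such
# as a networking agent after a VM reboot) still receives the current state.
from collections import defaultdict
from typing import Callable, Dict, List

class NetworkConfigPubSub:
    def __init__(self) -> None:
        self._cache: Dict[str, dict] = {}  # latest published config per resource
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def publish(self, resource_id: str, config: dict) -> None:
        """Called by a resource provider on create/update/delete operations."""
        self._cache[resource_id] = config            # cache for later retrieval
        for callback in self._subscribers[resource_id]:
            callback(config)                         # push to live subscribers

    def subscribe(self, resource_id: str, callback: Callable[[dict], None]) -> None:
        """Called by a host networking agent; replays any cached state first."""
        self._subscribers[resource_id].append(callback)
        if resource_id in self._cache:               # e.g. agent restart / VM reboot
            callback(self._cache[resource_id])
```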

During normal platform operations, one partition of this PubSub service in AZ-01 became unhealthy and automatically attempted to fail over to a secondary replica. The failover did not complete successfully, resulting in a partial loss of control plane availability within AZ-01. We intervened to investigate and attempted a manual failover of the primary partition, but this attempt was also unsuccessful.
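
As a rough sketch of the failover step described above (assumed names and structure, not the actual replication logic):

```python
# Illustrative only: when a partition's primary replica is unhealthy, promote
# a healthy secondary. If no viable target exists, or promotion itself fails
# (as both the automatic and manual attempts did in this incident), the
# partition loses availability.
from dataclasses import dataclass

@dataclass
class Replica:
    role: str       # "primary" or "secondary"
    healthy: bool

def fail_over(replicas: list) -> Replica:
    candidates = [r for r in replicas if r.role == "secondary" and r.healthy]
    if not candidates:
        raise RuntimeError("failover failed: no healthy secondary available")
    new_primary = candidates[0]
    new_primary.role = "primary"
    return new_primary
```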

Shortly afterward, we observed a similar condition in AZ-03, which led to a partial loss of control plane availability in AZ-03 as well. As the investigation progressed, we suspected that a previously deployed update to a regional control plane dependency had introduced a latent regression. This issue did not surface during earlier validation and only manifested when failover conditions were triggered under sustained production load.

As part of our mitigation efforts, we identified a version from the prior week that represented a Last Known Good (LKG) state. We first applied this rollback in AZ-03, which successfully restored control plane service health in that zone. Based on this, we began rolling back the affected components in AZ-01. By design, rollback operations are executed in stages by Azure Fabric controllers using update domains to ensure platform safety, while recovery proceeds incrementally.
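
A simplified sketch of this staged pattern follows, with hypothetical names standing in for the Fabric controller's far more involved orchestration:

```python
# Illustrative only: apply the Last Known Good (LKG) version one update domain
# (UD) at a time, checking health before proceeding, so a faulty change cannot
# take down every domain at once. All names here are hypothetical.
import time

def rollback_by_update_domain(update_domains, apply_lkg, is_healthy,
                              settle_seconds=60):
    for ud in update_domains:
        apply_lkg(ud)                  # apply the LKG version to this UD only
        time.sleep(settle_seconds)     # allow the domain to settle
        if not is_healthy(ud):
            # Halt rather than risk further domains; an orchestration fault
            # during this incident left AZ-03's rollback incomplete until it
            # was manually unblocked and restarted.
            raise RuntimeError(f"update domain {ud} unhealthy; halting rollback")
```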

While mitigation was in progress, the platform was unable to maintain two healthy instances of the PubSub service across availability zones simultaneously, which is a requirement for normal replication and control plane operations. This resulted in a loss of quorum for the service. As the system attempted to rebalance, impact shifted between availability zones, leading to periods of degraded behavior across multiple zones.
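
The quorum condition can be expressed as a trivial check, sketched below with hypothetical zone-health inputs:

```python
# Illustrative only: replication requires at least two healthy PubSub
# instances across availability zones; with fewer, quorum is lost.
REQUIRED_HEALTHY = 2

def has_quorum(instance_health: dict) -> bool:
    """instance_health maps a zone (e.g. 'AZ-01') to whether it is healthy."""
    return sum(instance_health.values()) >= REQUIRED_HEALTHY

print(has_quorum({"AZ-01": False, "AZ-02": True, "AZ-03": False}))  # False: quorum lost
print(has_quorum({"AZ-01": True, "AZ-02": True, "AZ-03": False}))   # True: quorum held
```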

Similar failure patterns began to appear in AZ-02 and again in AZ-03, expanding the scope of impact across the region. We initiated and completed a rollback for AZ-02. Although AZ-03 had previously shown recovery following its rollback, subsequent instability indicated that, because of an orchestration fault, the rollback in that zone had not fully completed. As impact re-emerged, rollback operations in AZ-03 were restarted and then completed, fully restoring service health.

How did we respond?

  • 11:30 UTC on 24 April 2026 – Customer impact began. We observed failures or delays when customers attempted to provision, scale, or update resources in the affected region.
  • 11:38 UTC on 24 April 2026 – We detected an issue in AZ-01. A control plane partition became unhealthy and automatic failover attempts did not complete successfully.
  • 11:38–13:40 UTC on 24 April 2026 – We attempted manual failover in AZ-01. These efforts did not successfully restore service.
  • 13:40 UTC on 24 April 2026 – We identified a recently deployed update as the likely cause of the issue.
  • 13:50 UTC on 24 April 2026 – We began observing similar symptoms in AZ-03, indicating the issue was affecting multiple availability zones.
  • 14:07 UTC on 24 April 2026 – We initiated rollback to a previously known good version in AZ-03.
  • 15:03 UTC on 24 April 2026 – We observed significant recovery in AZ-03. Control plane availability exceeded 99%.
  • 15:04 UTC on 24 April 2026 – We initiated rollback actions to AZ-01.
  • 18:52 UTC on 24 April 2026 – We observed significant improvement in AZ-01 as rollback progressed.
  • 19:02 UTC on 24 April 2026 – We confirmed AZ-01 had recovered to greater than 99% availability, while the rollback continued in the background.
  • 19:05 UTC on 24 April 2026 – We observed similar symptoms in AZ-02 as load redistributed across the region.
  • 19:10 UTC on 24 April 2026 – We initiated rollback to a known good version in AZ-02.
  • 21:02 UTC on 24 April 2026 – We observed instability reappear in AZ-03. We determined this was because the rollback had not yet completed across all update domains. Consequently, we manually unblocked the rollback across the remaining update domains in AZ-03 to ensure stable recovery.
  • 22:39 UTC on 24 April 2026 – We confirmed rollback was fully completed in AZ-03.
  • 23:22 UTC on 24 April 2026 – We confirmed rollback was fully completed in AZ-02, completing PubSub mitigation across all affected zones.
  • 00:15 UTC on 25 April 2026 – We validated downstream service recovery and PubSub health across all zones in the region.

How are we making incidents like this less likely or less impactful?

  • We have assessed the risk of occurrence in other high-volume regions, and have taken steps to roll back this PubSub service in those regions out of an abundance of caution. (Completed)
  • We are investing in improving our test coverage surrounding the failure cases and load patterns that contributed to this incident, to catch issues like this one before they reach production. (Estimated completion: TBD)
  • We are working to reduce rollback complexity, so that we can mitigate issues like this more quickly in the future. (Estimated completion: TBD)

How can customers make incidents like this less impactful?

How can we make our incident communications more useful?

You can rate this PIR and provide any feedback using our quick 3-question survey.