24 April 2026

Join either of our upcoming 'Azure Incident Retrospective' livestreams discussing this incident (to hear from our engineering leaders, and to get any questions answered by our experts) or watch a recording of the livestream (available the following week, on YouTube):

This is our Preliminary PIR to share what we know so far. After our internal retrospective is completed (generally within 14 days) we will publish a Final PIR with additional details.

What happened?

Between 11:30 UTC on 24 April and 00:15 UTC on 25 April 2026, customers may have experienced failures or delays when attempting to provision, scale, or update resources in East US. Beyond this, a smaller subset of impacted customers may have experienced intermittent connectivity issues on existing workloads (including Virtual Machines and Azure Virtual Desktop sessions) for scenarios dependent on unhealthy internal service dependencies.

The issue initially began with impact to a subset of customers in a single Availability Zone (physical AZ-01) but as demand shifted, similar symptoms were observed impacting a subset of customers in AZ-02 and AZ-03. While none of these zones were impacted for the full duration of the incident, customers experienced periods of impact in each zone for portions of the incident.

The following services were among those affected: Azure Application Gateway, Azure App Service, Azure Batch, Azure Cache for Redis, Azure Data Explorer, Azure Data Factory, Azure Databricks, Azure Health Data Services, Azure Kubernetes Service (AKS), Azure Red Hat OpenShift, Azure Service Fabric, Azure Synapse Analytics, Azure Virtual Desktop, Azure Virtual Machines, Azure Virtual Network Manager, Azure VMware Solution, Oracle Database@Azure, Virtual Machine Scale Sets – and potentially additional services that were dependent on new compute allocations in the region.

Note: Logical availability zones assigned to customer subscriptions may map to different physical availability zones. Customers can use the Locations API to understand this mapping:
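
As a rough illustration of checking this mapping programmatically, the sketch below calls the Azure Resource Manager Locations API for a subscription and prints the logical-to-physical zone mapping for East US. It assumes the azure-identity and requests Python packages, a placeholder subscription ID, and that the API version shown returns the availabilityZoneMappings field – please verify the exact API version against current Azure documentation.

```python
# Illustrative sketch: query the ARM Locations API to see how this subscription's
# logical availability zones map to physical zones. Assumes the azure-identity
# and requests packages; the api-version shown is believed to expose
# availabilityZoneMappings -- verify against current Azure documentation.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
resp = requests.get(
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}/locations",
    params={"api-version": "2022-12-01"},
    headers={"Authorization": f"Bearer {token.token}"},
    timeout=30,
)
resp.raise_for_status()

for location in resp.json().get("value", []):
    if location.get("name") == "eastus":
        for mapping in location.get("availabilityZoneMappings", []) or []:
            # e.g. logical zone "1" -> physical zone "eastus-az3"
            print(mapping.get("logicalZone"), "->", mapping.get("physicalZone"))
```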

What went wrong and why?

The Azure PubSub service is a key component of the networking control plane, acting as an intermediary between resource providers and networking agents on Azure hosts. Resource providers, such as the Network Resource Provider, publish customer configurations during Virtual Machine or networking create, update, or delete operations. Networking agents (subscribers) on the hosts retrieve these configurations to program the host's networking stack. Additionally, the service functions as a cache, ensuring efficient retrieval of configurations during VM reboots or restarts. This capability is essential for deployments, resource allocation, and traffic management in Azure Virtual Network (VNet) environments.
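
As a conceptual illustration only (not the internal Azure implementation – all names below are hypothetical), the publish/subscribe-with-cache pattern described above can be sketched as follows:

```python
# Conceptual illustration of a publish/subscribe service that also caches the
# latest configuration per key, so subscribers (e.g. host networking agents)
# can re-read it after a reboot. Names are hypothetical, not Azure internals.
from collections import defaultdict
from typing import Callable, Dict, List


class ConfigPubSub:
    def __init__(self) -> None:
        self._cache: Dict[str, dict] = {}  # last-known configuration per resource
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def publish(self, resource_id: str, config: dict) -> None:
        """Resource provider pushes a new goal-state configuration."""
        self._cache[resource_id] = config
        for callback in self._subscribers[resource_id]:
            callback(config)  # notify live subscribers

    def subscribe(self, resource_id: str, callback: Callable[[dict], None]) -> None:
        """Host agent registers for updates and immediately gets the cached config."""
        self._subscribers[resource_id].append(callback)
        if resource_id in self._cache:
            callback(self._cache[resource_id])  # serve from cache (reboot/restart case)


# Example: a networking agent re-attaching after a VM restart still receives
# the last published NIC configuration from the cache.
bus = ConfigPubSub()
bus.publish("vm-123/nic-0", {"vnet": "vnet-a", "subnet": "10.0.0.0/24"})
bus.subscribe("vm-123/nic-0", lambda cfg: print("program host NIC with", cfg))
```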

During normal platform operations, one partition of this PubSub service in AZ-01 became unhealthy and automatically attempted to fail over to a secondary replica. The failover did not complete successfully, resulting in a partial loss of control plane availability within AZ-01. We intervened to investigate and attempted a manual failover of the primary partition, but this attempt was also unsuccessful.

Shortly afterward, we observed a similar condition in AZ-03, which led to a partial loss of control plane availability in AZ-03 as well. As the investigation progressed, we suspected that a previously deployed update to a regional control plane dependency had introduced a latent regression. This issue did not surface during earlier validation and only manifested when failover conditions were triggered under sustained production load.

As part of our mitigation efforts, we identified a version from the prior week that represented a Last Known Good (LKG) state. We first applied this rollback in AZ-03, which successfully restored control plane service health in that zone. Based on this, we began rolling back the affected components in AZ-01. By design, rollback operations are executed in stages by Azure Fabric controllers using update domains to ensure platform safety, while recovery proceeds incrementally.
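
To illustrate the staged, update-domain-by-update-domain rollback pattern described above, here is a minimal conceptual sketch. It is not the Azure Fabric controller implementation, and the function names and timings are hypothetical:

```python
# Conceptual sketch of a staged rollback across update domains (UDs): each UD is
# rolled back and health-checked before the next one starts, so a bad step can
# be halted early. Illustrative only; not the Azure Fabric controller.
import time
from typing import Callable, Iterable


def staged_rollback(update_domains: Iterable[str],
                    rollback_ud: Callable[[str], None],
                    is_healthy: Callable[[str], bool],
                    settle_seconds: int = 60) -> None:
    for ud in update_domains:
        rollback_ud(ud)             # apply the Last Known Good version to this UD
        time.sleep(settle_seconds)  # allow the UD to settle before evaluating health
        if not is_healthy(ud):
            # Halting here limits the blast radius: remaining UDs stay untouched.
            raise RuntimeError(f"Rollback halted: {ud} unhealthy after LKG was applied")


# Usage sketch (functions are hypothetical):
# staged_rollback(["UD-0", "UD-1", "UD-2"], apply_lkg, check_ud_health)
```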

While mitigation was in progress, the platform was unable to maintain two healthy instances of the PubSub service across availability zones simultaneously, which is a requirement for normal replication and control plane operations. This resulted in a loss of quorum of the service. As the system attempted to rebalance, impact shifted between availability zones, leading to periods of degraded behavior across multiple zones.

Similar failure patterns began to appear in AZ-02 and again in AZ-03, expanding the scope of impact across the region. For AZ-02, we initiated and completed a rollback. Although AZ-03 had previously shown recovery following its rollback, subsequent instability indicated that the rollback in that zone had not fully completed, due to an orchestration fault. As impact reemerged, rollback operations in AZ-03 were restarted and then completed, fully restoring service health.

How did we respond?

  • 11:30 UTC on 24 April 2026 – Customer impact began. We observed failures or delays when customers attempted to provision, scale, or update resources in the affected region.
  • 11:38 UTC on 24 April 2026 – We detected an issue in AZ-01. A control plane partition became unhealthy and automatic failover attempts did not complete successfully.
  • 11:38–13:40 UTC on 24 April 2026 – We attempted manual failover in AZ-01. These efforts did not successfully restore service.
  • 13:40 UTC on 24 April 2026 – We identified a recently deployed update as the likely cause of the issue.
  • 13:50 UTC on 24 April 2026 – We began observing similar symptoms in AZ-03, indicating the issue was affecting multiple availability zones.
  • 14:07 UTC on 24 April 2026 – We initiated rollback to a previously known good version in AZ-03.
  • 15:03 UTC on 24 April 2026 – We observed significant recovery in AZ-03. Control plane availability exceeded 99%.
  • 15:04 UTC on 24 April 2026 – We initiated rollback actions to AZ-01.
  • 18:52 UTC on 24 April 2026 – We observed significant improvement in AZ-01 as rollback progressed.
  • 19:02 UTC on 24 April 2026 – We confirmed AZ-01 had recovered to greater than 99% availability, while the rollback continued in the background.
  • 19:05 UTC on 24 April 2026 – We observed similar symptoms in AZ-02 as load redistributed across the region.
  • 19:10 UTC on 24 April 2026 – We initiated rollback to a known good version in AZ-02.
  • 21:02 UTC on 24 April 2026 – We observed instability reappear in AZ-03. We determined this was because the rollback had not yet completed across all update domains. Consequently, we manually unblocked the rollback across the remaining update domains in AZ-03 to ensure stable recovery.
  • 22:39 UTC on 24 April 2026 – We confirmed rollback was fully completed in AZ-03.
  • 23:22 UTC on 24 April 2026 – We confirmed rollback was fully completed in AZ-02, completing PubSub mitigation across all affected zones.
  • 00:15 UTC on 25 April 2026 – We validated downstream service recovery and PubSub health across all zones in the region.

How are we making incidents like this less likely or less impactful?

  • We have assessed the risk of occurrence in other high-volume regions, and have taken steps to roll back this PubSub service in those regions out of an abundance of caution. (Completed)
  • We are investing in improving our test coverage surrounding the failure cases and load patterns that contributed to this incident, to catch issues like this one before they reach production. (Estimated completion: TBD)
  • We are working to reduce rollback complexity, to be able to mitigate issues like this more quickly in future. (Estimated completion: TBD)
  • This is our Preliminary PIR to share what we know so far. After our internal retrospective is completed (generally within 14 days) we will publish a Final PIR with additional details.   

How can customers make incidents like this less impactful?

How can we make our incident communications more useful?

You can rate this PIR and provide any feedback using our quick 3-question survey:

9 March 2026

Watch our 'Azure Incident Retrospective' video about this incident: 

What happened? 

Between 23:20 UTC on 9 March and 19:32 UTC on 10 March 2026, a platform issue resulted in impact to the Azure OpenAI Service. Impacted customers experienced HTTP 400 and HTTP 429 error responses, specifically for the GPT-5.2 model. All other GPT models were unaffected during this time.

This incident impacted customer resources and queries in the following regions: Australia East, Central US, East US 2, Korea Central, Norway East, Sweden Central, and UK South. 

What went wrong and why? 

The Azure OpenAI Service processes customer requests through model engines deployed across multiple Azure regions and supported by traffic routing systems. Depending on the selected deployment model (Global, Data Zone, or Regional), requests may be routed across multiple Azure regions within defined geographic boundaries to support availability and resilience, while customer data remains stored at rest in the selected Azure region. Learn more at:

A recent update to the Azure OpenAI GPT-5.2 model introduced a configuration change that was not compatible with the version of the model engine code running in production. As part of this update, certain feature settings were enabled to improve service efficiency and resilience – however, the deployed engine version did not yet support those settings. As a result, when customer requests were routed to engines with this mismatch, the service was unable to process these requests correctly.
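
As a hypothetical sketch of the kind of compatibility gate that could catch such a mismatch, the example below refuses to apply a model configuration that enables features the running engine version does not support. The feature names and engine versions are invented for illustration:

```python
# Illustrative compatibility gate: block a model configuration that enables
# features the running engine version does not support. Names are hypothetical.
SUPPORTED_FEATURES_BY_ENGINE = {
    "engine-1.7": {"streaming", "tool_calls"},
    "engine-1.8": {"streaming", "tool_calls", "efficiency_mode"},
}


def validate_config(engine_version: str, enabled_features: set[str]) -> None:
    supported = SUPPORTED_FEATURES_BY_ENGINE.get(engine_version, set())
    unsupported = enabled_features - supported
    if unsupported:
        raise ValueError(
            f"Config enables {sorted(unsupported)} which engine {engine_version} "
            "does not support; blocking rollout."
        )


# validate_config("engine-1.7", {"streaming", "efficiency_mode"})  # raises -> rollout blocked
```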

Generally speaking, any updates to the service are rolled out in line with our Safe Deployment Practices (SDP) which deploys to different regions gradually, in stages. During this update, the earlier stages of the rollout did not include sufficient backend model instances for this issue to surface before the update progressed to additional regions. As a result, the rollout had completed its deployment across our fleet before we were able to determine customer impact.

During mitigation of the primary issue, we identified a secondary issue that affected service recovery. Azure OpenAI relies on internal telemetry to understand real-time service capacity across regions and to route traffic accordingly. While recovery actions were underway, an unrelated issue in this internal telemetry system meant that routing decisions were temporarily based on incomplete capacity data, which led to a disproportionate amount of traffic being directed to a limited set of available regions. This created additional resource pressure in those regions and resulted in continued intermittent request failures (HTTP 429 errors) for some customers, even as the rollback of the configuration issue was progressing and other regions were available to receive requests. Once the routing updates were completed and full capacity information was restored across regions, traffic distribution normalized and service recovery progressed as expected.
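
The sketch below illustrates, with hypothetical regions and numbers, how capacity-weighted routing collapses onto a small set of regions when the capacity feed is incomplete, and how a cached last-known-good snapshot (one of the repair items listed later in this PIR) keeps the distribution spread out:

```python
# Illustrative sketch of capacity-weighted routing. When the telemetry feed
# reports capacity for only some regions, all traffic collapses onto those
# regions (the 429 pressure described above); falling back to a cached
# last-known-good snapshot keeps the distribution spread out.
import random
from typing import Dict, Optional


def pick_region(live_capacity: Dict[str, float],
                cached_capacity: Optional[Dict[str, float]] = None) -> str:
    capacity = live_capacity or cached_capacity or {}
    if not capacity:
        raise RuntimeError("No capacity information available")
    regions = list(capacity)
    weights = [capacity[r] for r in regions]
    return random.choices(regions, weights=weights, k=1)[0]


cached = {"eastus2": 100, "swedencentral": 80, "australiaeast": 60}  # last-known-good snapshot
incomplete = {"eastus2": 100}  # telemetry issue: only one region visible

# With incomplete data every request lands in eastus2; with the cached snapshot
# traffic is still spread roughly in proportion to real capacity.
print(pick_region(incomplete))   # always "eastus2"
print(pick_region({}, cached))   # weighted across all three regions
```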

During the incident, we also identified a monitoring gap related to anomalous HTTP 400 error patterns. While HTTP 400 responses do occur during normal service usage, our monitoring was configured only for client-side errors – which are typically caused by incorrect parameters in user requests – and not for service-side anomalies. This gap delayed detection and response during the initial stages of the incident.

How did we respond? 

  • 23:20 UTC on 09 March 2026 – Customer impact began, triggered by the recent service update. 
  • 00:19 UTC on 10 March 2026 – We detected the issue via service monitoring. This prompted us to begin our investigation, engage with other teams to troubleshoot, and start developing a hot fix. 
  • 03:18 UTC on 10 March 2026 – We determined that a rollback would mitigate more quickly than a hotfix, so we started the rollback for the impacted model.
  • 10:55 UTC on 10 March 2026 – Traffic routes were determined to be using incomplete data, due to the aforementioned dependency issue. 
  • 12:40 UTC on 10 March 2026 – We identified and investigated resource constraints.
  • 18:00 UTC on 10 March 2026 – Rollback actions completed across all affected regions. 
  • 19:30 UTC on 10 March 2026 – Full capacity information was restored across regions, and traffic distribution normalized.
  • 19:32 UTC on 10 March 2026 – Once monitoring confirmed stable recovery, we determined the service was fully restored and all customer impact had been mitigated. 

How are we making incidents like this less likely or less impactful? 

  • We have already conducted additional engineer training on our operating procedures – including which scenarios can be quickly rolled back – to reduce the time to mitigate similar issues. (Completed) 
  • We have incorporated storing a ‘cached’ known good version of the traffic routing details, as an additional layer of resilience in case the dependent service is not able to serve the latest information on regional capacity availability. (Completed) 
  • We have improved our monitoring surrounding HTTP 400 errors, by establishing thresholds of errors on the service side. (Completed) 
  • To expand that monitoring further, we are improving our anomaly detection surrounding HTTP 4xx errors – to alert on anomalous error rates that may not meet our usual thresholds. (Estimated completion: April 2026) 
  • We are incorporating additional signals to our deployment systems, to reduce potential impact by stopping problematic rollouts automatically. (Estimated completion: May 2026)
  • Finally, we are improving our safe deployment practices by ensuring that early stages have sufficient backend model instances to catch issues like this earlier. (Estimated completion: June 2026) 

How can customers make incidents like this less impactful?

  • Consider reviewing our guidance and best practices related to Business Continuity and Disaster Recovery (BCDR) scenarios for Azure OpenAI:
  • Consider reviewing and implementing our best practices surrounding retry patterns, especially with exponential backoff, to improve workload resiliency during intermittent issues (a minimal sketch follows this list):
  • More generally, consider evaluating the reliability of your applications using guidance from the Azure Well-Architected Framework and its interactive Well-Architected Review:
  • The impact times above represent the full incident duration, so are not specific to any individual customer. Actual impact to service availability varied between customers and resources – for guidance on implementing monitoring to understand granular impact:
  • Finally, consider ensuring that the right people in your organization will be notified about any future service issues – by configuring Azure Service Health alerts. These can trigger emails, SMS, push notifications, webhooks, and more:
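
As a minimal sketch of the retry-with-exponential-backoff guidance above (standard library only; parameter values are illustrative and should be tuned to your workload), the helper below wraps any callable, such as an Azure OpenAI request:

```python
# Minimal retry wrapper with exponential backoff and jitter, for transient
# errors such as the HTTP 429 responses seen during this incident.
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def with_backoff(call: Callable[[], T], max_attempts: int = 5, base_delay: float = 1.0) -> T:
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception as exc:  # narrow this to throttling/transient errors in practice
            if attempt == max_attempts:
                raise
            # Exponential backoff with a small random jitter to avoid retry storms.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.5)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)
    raise RuntimeError("unreachable")


# Usage sketch: result = with_backoff(lambda: client.chat.completions.create(...))
```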

How can we make our incident communications more useful?

You can rate this PIR and provide any feedback using our quick 3-question survey: