22 April 2024

We are hosting two 'Azure Incident Retrospective' customer livestreams to discuss this incident and answer any questions - join us live, or watch the recording:

What happened?

Between 03:45 CST and 06:00 CST on 23 April 2024, a configuration change performed through a domain registrar resulted in a service disruption to two system domains (chinacloudapp.cn and chinacloudsites.cn) that are critical to our cloud operations in China. This caused the following impact in the Azure China regions:

Customers may have had issues connecting to multiple Azure services, including Cosmos DB, Azure Virtual Desktop, Azure Databricks, Backup, Site Recovery, Azure IoT Hub, Service Bus, Logic Apps, Data Factory, Azure Kubernetes Service, Azure Policy, Azure AI Speech, Azure Machine Learning, API Management, Azure Container Registry, and Azure Data Explorer.

Customers may have had issues viewing or managing resources via the Azure Portal (portal.azure.cn) or via APIs.

Azure Monitor offerings, including Log Analytics and Microsoft Sentinel, in the China East 2 and China East 3 regions experienced intermittent data latency and failures to query and retrieve data, which could have resulted in alerts failing to activate and/or in failures of create, update, retrieve, or delete operations in Log Analytics.

What went wrong and why?

To comply with a regulatory requirement of the Chinese government, we conducted an internal audit to ensure that all our domains had the appropriate ownership and were documented properly. During this process, ownership of two critical system domains for Azure in China was misattributed and, as a result, they were flagged as potential candidates for decommissioning.

The next step of the decommissioning process is a period of monitoring active traffic on a flagged domain before continuing with its decommissioning. However, the management tool that provides DNS zone and hosting information was not scoped to include zones hosted within Azure China, which caused our system to report that the zone file did not exist. It is common for end-of-life domains that are no longer in use not to have a zone file, so the non-existent zone file notice did not raise any alerts with the operator. The workflow then proceeded to the next stage, where the nameservers of these two domains were updated to a set of inactive servers, which serves as a final check to identify any hidden users or services dependent on the domain.
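
As a minimal sketch of the kind of external validation this process was missing (and which the repair items below describe), the following code resolves a candidate domain's delegation over the public internet rather than relying on a single internal inventory tool. It assumes the third-party dnspython package; the helper name and workflow are illustrative, not Microsoft's actual tooling.

```python
# Illustrative pre-decommission check: does the domain still have a publicly
# resolvable delegation and zone? Assumes the third-party 'dnspython' package.
import dns.resolver
import dns.exception

def zone_appears_live(domain: str) -> bool:
    """Return True if the domain still has a publicly resolvable delegation."""
    try:
        ns_answer = dns.resolver.resolve(domain, "NS")
        soa_answer = dns.resolver.resolve(domain, "SOA")
    except dns.resolver.NXDOMAIN:
        return False                      # no delegation at all
    except dns.exception.DNSException:
        return True                       # resolution error: fail safe, treat as live
    nameservers = sorted(str(r.target).rstrip(".") for r in ns_answer)
    print(f"{domain}: delegated to {nameservers} (NS TTL {ns_answer.rrset.ttl}s)")
    return bool(nameservers) and len(soa_answer) > 0

# A decommission workflow would block if this check reports a live zone.
for candidate in ("chinacloudapp.cn", "chinacloudsites.cn"):
    if zone_appears_live(candidate):
        print(f"Blocking decommission of {candidate}: zone is still serving DNS.")
```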

As DNS caches across the Internet gradually timed out, DNS resolvers made requests to refresh the information for the two domains and received responses containing the inactive nameservers, resulting in failures to resolve FQDNs in those domains. Our health signals detected this degradation in our Azure China Cloud and alerted our engineers. Once we understood the issue, the change was reverted in a matter of minutes. However, the mitigation time was prolonged due to the caching applied by DNS resolvers.
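
To make the recovery-time bound concrete: a resolver that cached the bad delegation just before the revert keeps serving it until its cached copy expires. The sketch below uses the revert time from the timeline and a purely hypothetical TTL value, since the zones' real NS TTL is not stated here.

```python
# Worst-case recovery after reverting a bad delegation is roughly
# revert time + the TTL on the cached NS records (hypothetical value below).
from datetime import datetime, timedelta

revert_completed = datetime.fromisoformat("2024-04-23T05:13:00")  # CST, from the timeline
ns_ttl = timedelta(minutes=60)   # hypothetical NS-record TTL for illustration

worst_case_recovery = revert_completed + ns_ttl
print("Last resolvers recover by about:", worst_case_recovery)
```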

The issue impacted only specific Microsoft-owned domains, and it did not affect the Azure DNS platform availability or DNS services serving any other zone hosted on Azure.

How did we respond?

  • 03:45 CST on 23 April 2024 – Nameserver configuration was updated. Due to existing DNS TTLs (Time To Live), the impact was not immediate.
  • 04:37 CST on 23 April 2024 – Our internal monitors alerted us to degradation in the service and created an incident.
  • 04:39 CST on 23 April 2024 – The incident was acknowledged by our engineering team.
  • 04:57 CST on 23 April 2024 – We determined that the resolution failures coincided with a change in nameservers for the chinacloudapp.cn and chinacloudsites.cn domains.
  • 04:59 CST on 23 April 2024 – We reverted to the previously known-good nameservers.
  • 05:13 CST on 23 April 2024 – The reversion was completed, at which point services began to recover.
  • 06:00 CST on 23 April 2024 – Full recovery was declared after verifying that traffic for the services and affected DNS zones had returned to pre-incident levels.

How are we making incidents like this less likely or less impactful?

  • We have suspended any further runs of this domain lifecycle process until updates to the management tool for Azure in China are completed.
  • We will update our validation process for domain lifecycle management to ensure that signals from all cloud regions are incorporated (Estimated completion: June 2024).
  • We will implement additional validations that obtain nameserver information by resolving the zone directly over the internet (Estimated completion: July 2024).

How can our customers and partners make incidents like this less impactful?

  • As this issue impacted two domains used in operating the management plane of the Azure China Cloud and naming Azure services offered in the Azure China Cloud, users of the China Cloud did not have many opportunities to design their services to be resilient to this type of outage.
  • More generally, consider ensuring that the right people in your organization will be notified about any future service issues, by configuring Azure Service Health alerts. These can trigger emails, SMS, webhooks, push notifications (via the Azure mobile app), and more.

How can we make our incident communications more useful?

You can rate this PIR and provide any feedback using our quick 3-question survey:

19 April 2024

What happened?

Between 04:26 UTC and 07:30 UTC on 19 April 2024, a platform issue in the West US region resulted in impact for the following services:

  • Azure Database for MariaDB: Connectivity issues: all connections may have failed for a subset of customers, and retries would have been unsuccessful.
  • Azure Database for MySQL - Single Server: Connectivity issues: all connections may have failed for a subset of customers, and retries would have been unsuccessful.
  • Azure Databricks: Due to Azure Databricks dependencies on the impacted database service, customers in West US, West US 2, and/or South Central US may have experienced failures and timeouts with workspace login and authentication requests. Cluster CRUD requests and jobs relying on cluster start/resize/termination may not have executed, and jobs submitted through APIs or schedulers may not have executed. UI and Databricks SQL queries may have timed out, users may have experienced failures launching Databricks SQL Serverless Warehouses and may have been unable to access UC APIs, and customers may have observed errors citing “Authentication is temporarily unavailable” or “TEMPORARILY_UNAVAILABLE”.

What went wrong and why?

All connections to the Azure Database for MySQL - Single Server and Azure Database for MariaDB services are established through a gateway responsible for routing incoming connections to their respective servers. This gateway service is hosted on a group of stateless compute nodes sitting behind an IP address. As part of ongoing service maintenance, the compute hardware hosting the gateway service is periodically refreshed to ensure we provide the most secure and performant experience. Before new incoming connection requests are moved to a new gateway ring, customers whose servers connect through older gateway rings are notified via email and in the Azure portal to update their outbound rules to allow the new gateway IP address.

When the gateway hardware is refreshed, a new ring of gateway compute nodes is built out first. This new ring serves the traffic for all newly created Azure Database for MySQL servers, and it has a different IP address from older gateway rings in the same region, to differentiate the traffic. Once the new ring is fully functional, existing server connection traffic is routed to the new gateway by updating DNS, and the older gateway hardware serving existing servers is planned for decommissioning. After the DNS update, all new connections from clients are automatically routed to the new gateway ring.
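
As a rough illustration of how this DNS-based cutover surfaces to client applications, the sketch below checks which gateway IP a server FQDN currently resolves to; this is the address that a customer's outbound firewall rules need to allow. The server name is a hypothetical example, not a real server.

```python
# Check which gateway IP a server's FQDN currently resolves to (standard library only).
import socket

server_fqdn = "contoso-mysql.mysql.database.azure.com"   # hypothetical server name

addresses = sorted({info[4][0] for info in socket.getaddrinfo(server_fqdn, 3306)})
print(f"{server_fqdn} currently resolves to: {addresses}")
# If this address is not yet in your outbound allow rules (as notified via email
# and the Azure portal), new connections will be blocked after the cutover.
```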

On 19 April 2024, as part of a planned maintenance event, the move of existing servers' incoming traffic to the new gateway ring was scheduled in the West US region. At 04:26 UTC, the DNS update was made for a batch of Azure Database for MySQL and MariaDB servers, to route new incoming connections for those servers to a new gateway ring. Due to a procedural error, the new connections were erroneously routed to a newly built gateway ring whose configuration was incomplete. Because this gateway ring was not yet ready to accept connections, login requests failed at the gateway, and the gateway rejected all new incoming connections to the Azure Database for MySQL and Azure Database for MariaDB servers. We initially failed to detect the failures caused by this change because the new gateway ring's configuration was incomplete, so it did not yet have telemetry and monitoring configured. As a result, the issue went undetected by service telemetry until we received reports from internal teams about unreachable databases.
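
The repair items below mention additional checks before switching production gateway rings. The following is a minimal operator-side sketch of that idea under stated assumptions; it is not Microsoft's internal tooling, and the gateway address is a placeholder. It simply verifies that the new ring accepts a TCP connection and sends the initial MySQL handshake (in the MySQL protocol, the server speaks first) before any DNS cutover proceeds.

```python
# Pre-cutover readiness check for a new gateway ring (standard library only).
import socket

NEW_GATEWAY_RING = ("203.0.113.10", 3306)   # placeholder IP (TEST-NET-3) and MySQL port

def gateway_ring_ready(endpoint, timeout=5.0) -> bool:
    """Return True if the endpoint accepts a TCP connection and sends the
    initial MySQL handshake packet."""
    try:
        with socket.create_connection(endpoint, timeout=timeout) as sock:
            sock.settimeout(timeout)
            greeting = sock.recv(128)
    except OSError:
        return False
    return len(greeting) > 0

if not gateway_ring_ready(NEW_GATEWAY_RING):
    raise SystemExit("New gateway ring is not accepting connections; abort DNS cutover.")
print("New gateway ring responded; proceed to staged DNS cutover and sign-off.")
```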

How did we respond?

  • 19 April 2024 @ 04:46 UTC - Our MySQL team received a report from Azure Databricks that multiple databases were unreachable.
  • 19 April 2024 @ 05:14 UTC - Our MySQL on-call engineers determined that the databases were healthy but that no connections were being made.
  • 19 April 2024 @ 05:44 UTC - Our networking team was engaged to assist with the investigation.
  • 19 April 2024 @ 06:46 UTC - The issue was correlated to the attempt to move traffic to the new gateway ring, and mitigation steps to roll back the change were initiated.
  • 19 April 2024 @ 07:30 UTC - The incident was mitigated when the change was fully rolled back, and all the connections were moved back to the previous functional gateway ring after validation. 

How are we making incidents like this less likely or less impactful?

  • MySQL engineering will improve operational practices by adding additional checks, testing, and sign-off procedures before switching production gateway rings. (Estimated completion: May 2024)
  • Azure Databricks engineering will introduce alerts to detect multiple databases becoming unavailable within a short span of time (an illustrative sketch follows this list). (Estimated completion: May 2024)
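
The sketch below illustrates the second item with a simple sliding-window rule: fire an alert when the number of distinct databases reported unreachable within a window crosses a threshold. It is not Azure Databricks' actual monitoring; the window and threshold values are hypothetical.

```python
# Sliding-window alert: many distinct databases unreachable in a short span.
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # hypothetical window
THRESHOLD = 5                    # hypothetical number of distinct databases

events = deque()                 # (timestamp, database_id) pairs

def record_unreachable(database_id: str, now: datetime) -> bool:
    """Record a failed connection and return True if an alert should fire."""
    events.append((now, database_id))
    while events and now - events[0][0] > WINDOW:
        events.popleft()                          # drop reports outside the window
    distinct = {db for _, db in events}
    return len(distinct) >= THRESHOLD

# Example: five different databases failing within a few minutes triggers the alert.
start = datetime(2024, 4, 19, 4, 30)
fired = False
for i in range(5):
    fired = record_unreachable(f"db-{i}", start + timedelta(minutes=i))
print("Alert fired:", fired)
```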

How can customers make incidents like this less impactful?

  • Azure Databricks customers should consider reviewing our best practices surrounding disaster recovery for Azure Databricks; see: 
  • As Azure Database for MySQL – Single Server and Azure Database for MariaDB are on the retirement path, we recommend that customers upgrade to Azure Database for MySQL – Flexible Server. The Flexible Server architecture has no shared gateways and uses a single VM per tenant. All planned maintenance in Flexible Server is scheduled in a maintenance window defined by the end user for that server, which architecturally avoids incidents like this one. For details, see: 
  • More generally, consider evaluating the reliability of your applications using guidance from the Azure Well-Architected Framework and its interactive Well-Architected Review: 
  • Finally, consider ensuring that the right people in your organization will be notified about any future service issues - by configuring Azure Service Health alerts. These can trigger emails, SMS, push notifications, webhooks, and more: 

How can we make our incident communications more useful?

You can rate this PIR and provide any feedback using our quick 3-question survey:

14 March 2024

Watch our 'Azure Incident Retrospective' video about this incident: 

What happened?

Between 10:33 UTC on 14 March 2024 and 11:00 UTC on 15 March 2024, customers using Azure services in the South Africa North and/or South Africa West regions may have experienced network connectivity failures, including extended periods of increased latency or packet drops when accessing resources. This incident was part of a broader continental issue, impacting telecom services to multiple countries in Africa.

The incident resulted from multiple concurrent fiber cable cuts that occurred on the west coast of Africa (specifically the WACS, MainOne, SAT3, and ACE cables) in addition to earlier ongoing cable cuts on the east coast of Africa (including the EIG and Seacom cables). These cables are part of the submarine cable system that connects Africa’s internet to the rest of the world, and they serve Microsoft’s cloud network for our Azure regions in South Africa. In addition to the cable cuts, we later experienced a failure that reduced our backup capacity path, leading to congestion that impacted services.

Some customers may have experienced degraded performance including extended timeouts and/or service failures across multiple Microsoft services – while some customers may have been unaffected. Customer impact varied depending on the service(s), region(s), and configuration(s). Impacted downstream services included Azure API Management, Azure Application Insights, Azure Cognitive Services, Azure Communication Services, Azure Cosmos DB, Azure Databricks, Azure Event Grid, Azure Front Door, Azure Key Vault, Azure Monitor, Azure NetApp Files, Azure Policy, Azure Resource Manager, Azure Site Recovery, Azure SQL DB, Azure Virtual Desktop, Managed identities for Azure resources, Microsoft Entra Domain Services, Microsoft Entra Global Secure Access, Microsoft Entra ID, and Microsoft Graph. For service specific impact details, refer to the ‘Health history’ section of Azure Service Health within the Azure portal.

What went wrong, and why?

The Microsoft network is designed to support multiple failures of our Wide Area Network (WAN) capacity at any given point in time. Specifically, our regions in South Africa are connected via multiple diverse physical paths, both subsea and terrestrial within South Africa. The network is designed to withstand multiple failures and continue operating with only a single physical path. In this case, our South Africa regions have four physically diverse subsea cable systems serving the country, and the designed failure mode is that three of the four can fail with no impact to our customers.
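
The following back-of-the-envelope sketch illustrates that survivability rule with entirely made-up capacities and demand figures: for the "three of four can fail" design to hold, peak demand must fit on any single surviving path.

```python
# Survivability check with hypothetical numbers: can any one path carry peak demand?
from itertools import combinations

paths_gbps = {"west-1": 400, "west-2": 400, "east-1": 300, "east-2": 300}  # hypothetical
peak_demand_gbps = 250                                                      # hypothetical

for failed in combinations(paths_gbps, 3):          # any three paths cut
    surviving = sum(c for p, c in paths_gbps.items() if p not in failed)
    ok = surviving >= peak_demand_gbps
    print(f"Cut {failed}: {surviving} Gbps remaining -> {'OK' if ok else 'CONGESTION'}")

# In the March incident, concurrent cable cuts plus a line-card failure pushed the
# surviving headroom below demand, which is the CONGESTION case this check flags.
```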

Following news of geopolitical risks in the Red Sea, we ran internal simulations and capacity planning analysis. On 5 February, we initiated capacity additions to our African network. On 24 February, multiple cable cuts in the Red Sea impacted our east coast network capacity to Africa. This east coast capacity was unavailable; however, there was no customer impact because of the built-in redundancy.

Before our capacity additions from February had come online, on 14 March we experienced multiple additional concurrent fiber cable cuts, this time on the west coast of Africa, which further reduced the total network capacity for our Azure regions in South Africa. These cable cuts were due to a subsea seismic event (likely an earthquake and/or mudslide) that impacted multiple subsea systems, one of which is used by Microsoft. Additionally, after the west coast cable cuts had occurred, we experienced a line card optic failure on a Microsoft router inside the region that further reduced network headroom. Microsoft experiences hundreds of line card optic failures every day across the 500k+ devices that operate our network; such an event would normally have been invisible to our customers. However, the combination of concurrent cable cuts and this line card failure removed the necessary headroom on the failover path, which led to the congestion experienced.

This combination of events affected Azure services including Compute, Storage, Networking, Databases, and App Services, as well as Microsoft 365 services. While many customers leverage local instances of their services within the South Africa regions, some services rely on API calls made to regions outside of South Africa. The reduced bandwidth to/from the South Africa regions impacted these specific API calls, and therefore impacted service availability and/or performance.

How did we respond?

The timeline that follows includes network availability figures, which represent the breadth of impact to our network capacity but may not represent the impact experienced by any specific customer or service.

  • 3 February 2024 – News articles surfaced geopolitical risk to Red Sea subsea cable infrastructure.
  • 5 February 2024 – Based on our internal simulations, we began the process of requesting capacity augments to Microsoft’s west coast Africa network.
  • 24 February 2024 – Multiple cable cuts in the Red Sea impacted east coast capacity (EIG, and Seacom cables), no impact to customers/services.
  • 4 March 2024 – Local fiber providers began work on approved capacity augments.
  • 14 March 2024 @ 10:02 UTC – Multiple cable cuts impacted west coast capacity (WACS + MAINONE + SAT3).
  • 14 March 2024 @ 10:33 UTC – Customer impact began, as reduced capacity began to cause networking latency and packet drops, our on-call engineers began investigating. Network availability dropped as low as 77%.
  • 14 March 2024 @ 11:55 UTC – Azure Front Door failed out of the region, to reduce inter-region traffic.
  • 14 March 2024 @ 12:00 UTC – Individual cloud service teams began reconfigurations to optimize network traffic to reduce congestion.
  • 14 March 2024 @ 15:44 UTC – After the combination of our mitigation efforts and the end of the business day in Africa, network traffic volume reduced – network availability rose above 97%.
  • 14 March 2024 @ 16:25 UTC – We continued implementing traffic engineering measures to throttle traffic and reduce congestion – network availability rose above 99%.
  • 15 March 2024 @ 06:00 UTC – As network traffic volumes increased, availability degraded, and customers began experiencing congestive packet loss – network availability dropped to 96%.
  • 15 March 2024 @ 11:00 UTC – We shifted capacity from Microsoft's edge site in Lagos to increase headroom for South Africa; the last packet drops were observed on our WAN. While this effectively mitigated customer impact, we continued to monitor until additional capacity provided more headroom.
  • 17 March 2024 @ 21:00 UTC – First tranche of emergency capacity came online.
  • 18 March 2024 @ 02:00 UTC – Second tranche of emergency capacity came online, Azure Front Door brought back into our South Africa regions, incident declared mitigated.

How are we making incidents like this less likely or less impactful?

  • We have added Wide Area Network (WAN) capacity to the region, in the form of a new physically diverse cable system with triple the capacity of pre-incident levels (Completed).
  • We are reviewing our capacity augmentation processes to help accelerate urgent capacity additions when needed (Estimated completion: April 2024).
  • We continue to work with our fiber providers to restore WAN paths after the cable cuts on the west coast of Africa (Estimated completion: April 2024) and on the east coast of Africa (Estimated completion: May 2024).
  • We are evaluating adding a fifth WAN path between South Africa and the United Arab Emirates, to build even more resiliency to the rest of the world (Estimated completion: June 2024).
  • We are increasingly shifting services to run locally from within our South Africa regions, to reduce dependencies on international regions where possible, including Exchange Online Protection (Estimated completion: June 2024).
  • In the longer term, we are investing in WAN Gateways in Nigeria to improve our fault isolation and routing capabilities. (Estimated completion: December 2024)
  • Finally, we are working to build out and activate Microsoft-owned fiber capacity to these regions, to reduce dependencies on local fiber providers. This includes investments in our own capacity on the new submarine cables going to Africa (specifically the Equiano, 2Africa East and West) which will substantially increase capacity to serve our regions in South Africa. Importantly, this capacity will also be controlled by Microsoft – giving us more operational flexibility to add/change/move capacity in our WAN, versus relying on third-party telecom operators. These WAN fiber investments on new cable systems will land on the west coast of Africa (Estimated completion: December 2024) as well as on the east coast of Africa (Estimated completion: December 2025).

How can our customers and partners make incidents like this less impactful?

How can we make our incident communications more useful?

You can rate this PIR and provide any feedback using our quick 3-question survey: