
2 February 2026

What happened?

Between 19:46 UTC on 02 February 2026 and 06:05 UTC on 03 February 2026, a platform issue impacted multiple Azure services across multiple regions.

  • Azure Virtual Machines – Customers may have experienced failures when deploying or scaling virtual machines, including errors during provisioning and lifecycle operations.
  • Azure Virtual Machine Scale Sets – Customers may have experienced failures when scaling instances or applying configuration changes.
  • Azure Kubernetes Service – Customers may have experienced failures in node provisioning and extension installation.
  • Azure DevOps and GitHub Actions – Customers may have experienced pipeline failures when tasks required virtual machine extensions or related packages.
  • Managed identities for Azure resources – Customers may have experienced authentication failures for workloads relying on this service.
  • Other dependent services (including Azure Arc-enabled servers and Azure Database for PostgreSQL) – Customers may have experienced degraded performance or failures in operations that required downloading extension packages from Microsoft-managed storage accounts. One way to check whether individual resources were affected is sketched after this list.
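For customers assessing impact, the following is a minimal sketch (not an official remediation script) that lists the extensions on a virtual machine together with their provisioning states, so installs that failed during the impact window can be identified and retried. The subscription, resource group, and VM names are placeholders, and it assumes the azure-identity and azure-mgmt-compute Python packages.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "<resource-group>"    # placeholder
VM_NAME = "<vm-name>"                  # placeholder

# List every extension on the VM together with its provisioning state.
client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
extensions = client.virtual_machine_extensions.list(RESOURCE_GROUP, VM_NAME)

for ext in extensions.value or []:
    # States other than 'Succeeded' (for example 'Failed') may warrant a retry
    # now that extension package downloads have been restored.
    print(f"{ext.name}: {ext.provisioning_state}")
```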

What do we know so far?

A policy change was unintentionally applied to a subset of Microsoft-managed storage accounts, including those used to host virtual machine extension packages. The policy blocked public read access, which disrupted scenarios such as virtual machine extension package downloads. This triggered widespread extension installation failures and caused downstream impact for services that rely on virtual machine scale set provisioning.
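The sketch below is one way, from a customer's perspective and using placeholder names (it is not the process used internally), to observe this failure mode: it reads the account-level allowBlobPublicAccess setting via the management plane, then attempts an unauthenticated blob download. It assumes the azure-identity, azure-mgmt-storage, and azure-storage-blob Python packages.

```python
from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.storage.blob import BlobClient

SUBSCRIPTION_ID = "<subscription-id>"    # placeholder
RESOURCE_GROUP = "<resource-group>"      # placeholder
ACCOUNT_NAME = "<storage-account-name>"  # placeholder

# Control plane: does the account-level policy allow anonymous (public) blob reads?
mgmt = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
account = mgmt.storage_accounts.get_properties(RESOURCE_GROUP, ACCOUNT_NAME)
print("allowBlobPublicAccess:", account.allow_blob_public_access)

# Data plane: attempt an anonymous download (no credential supplied), the same kind
# of request an extension installer makes when fetching a public package.
blob = BlobClient(
    account_url=f"https://{ACCOUNT_NAME}.blob.core.windows.net",
    container_name="<container>",         # placeholder
    blob_name="<extension-package.zip>",  # placeholder
)
try:
    data = blob.download_blob().readall()
    print(f"anonymous read succeeded: {len(data)} bytes")
except HttpResponseError as exc:
    # With public read access blocked, the request fails here instead of returning data.
    print("anonymous read blocked:", exc.message)
```

When the policy blocks public read access, the unauthenticated download fails with an HTTP error rather than returning the package, which is the behavior the affected extension installations encountered.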

How did we respond?

  • 19:46 UTC on 02 February 2026 – Customers began experiencing issues while attempting to complete service management operations in multiple regions.
  • 19:55 UTC on 02 February 2026 – Service monitoring detected failure rates exceeding configured thresholds.
  • 20:10 UTC on 02 February 2026 – We started collaborating with additional teams to devise a mitigation solution.
  • 21:15 UTC on 02 February 2026 – We applied a primary mitigation and validated it was successful.
  • 21:50 UTC on 02 February 2026 – We began applying the broader mitigation to impacted storage accounts.
  • 21:53 UTC on 02 February 2026 – In parallel, we completed an additional workstream to prevent recurrence of the triggering process.
  • 01:52 UTC on 03 February 2026 – We completed mitigation for all impacted storage accounts.
  • 02:15 UTC on 03 February 2026 – We reviewed additional data and monitored downstream services to confirm that mitigations were in place for all impacted storage accounts.
  • 06:05 UTC on 03 February 2026 – We concluded our monitoring and confirmed that all customer impact was mitigated.

What happens next?

Our team will be completing an internal retrospective to understand the incident in more detail. Once that is complete, generally within 14 days, we will publish a Post Incident Review (PIR) to all impacted customers. To get notified when that happens, and to stay informed about future Azure service issues, make sure that you configure and maintain Azure Service Health alerts – these can trigger emails, SMS, push notifications, webhooks, and more.

The impact times above represent the full incident duration and are not specific to any individual customer. Actual impact to service availability may vary between customers and resources, so consider implementing your own monitoring to understand granular impact. Further information is available on Post Incident Reviews, and broader guidance is available on preparing for cloud incidents.
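As one example of monitoring for granular impact, the sketch below (placeholder subscription ID; azure-identity and azure-mgmt-monitor Python packages assumed) lists ServiceHealth entries from the subscription's Activity Log over the incident window. Azure Service Health alerts are activity log alerts on this same event category.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Activity Log filter covering the full incident duration (UTC).
time_filter = (
    "eventTimestamp ge '2026-02-02T19:00:00Z' and "
    "eventTimestamp le '2026-02-03T07:00:00Z'"
)

for event in client.activity_logs.list(filter=time_filter):
    # Service Health notifications appear in the Activity Log with category 'ServiceHealth'.
    if event.category and event.category.value == "ServiceHealth":
        title = event.operation_name.value if event.operation_name else ""
        status = event.status.value if event.status else ""
        print(event.event_timestamp, title, status)
```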