Update - We have new information regarding the incident that affected your service(s).
Please find below an update on the situation:
Update: A fix has been implemented by our teams.
Ongoing Actions: Our teams are currently recovering impacted services.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 22, 2026 - 18:09 UTC
Identified - We are currently investigating an incident affecting our Compute - Instance offering, which is causing a temporary availability issue in the GLOBAL region.
Here are some supplementary details:
Start time: 22/03/2026 10:55 UTC
Impacted Service(s): Some instances in the GLOBAL region are unreachable via IAM (Identity and Access Management).
Customers Impact: Some customers are temporarily unable to log in to their Public Cloud instances using IAM (Identity and Access Management).
Root Cause: A service disruption occurred due to an unexpected underlying infrastructure malfunction.
Ongoing Actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution.
Mar 22, 2026 - 16:55 UTC
Update - We have new information regarding the incident that affected your service(s).
Update: Our provider is still working to restore the service as quickly as possible. Based on our current assessment, we expect the incident to be resolved by early March. We will keep you informed as soon as we have more information.
We apologize for any inconvenience caused and appreciate your understanding.
Feb 24, 2026 - 13:41 UTC
Update - We have new information regarding the incident that affected your service(s).
Please find below an update on the situation:
Update: Our teams are still working closely with our provider to resolve the incident.
Ongoing Actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Feb 18, 2026 - 13:46 UTC
Update - We have new information regarding the incident that affected your service(s).
Please find below an update on the situation:
Update: We continue to work with our provider on the incident.
Ongoing Actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Jan 07, 2026 - 19:22 UTC
Update - We have new information regarding the incident that affected your service(s).
Please find below an update on the situation:
Update: The incident originated on 28/10/2025 12:00 UTC, when attempts to upgrade Kafka from version 3.8 to the target x.y release began returning an error. The problem is limited to the upgrade path; existing Kafka clusters continue to operate normally, and no other functionality of the Data & Analytics platform is impacted.
Actions in progress: Our teams are working closely with the provider to diagnose the root cause of the upgrade failure. We are monitoring the provider's remediation work in real time and have prepared a fallback path that allows customers to deploy alternative supported versions (e.g., 3.9 or 4.0) in parallel while the primary upgrade path is being restored.
Key guidance for customers: No immediate action is required. If you need a newer Kafka version before the official migration is available, you may provision a parallel cluster running version 3.9 or 4.0. We will continue to provide updates as the investigation progresses and will notify you when the upgrade path is fully restored.
Root Cause: This incident is caused by our provider.
Ongoing Actions: The incident has been identified and corrected. It is being closely monitored to ensure long-term stability.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Dec 26, 2025 - 12:58 UTC
Monitoring - Monitoring remains active for this incident.
Update: Our provider is still working to restore the service as quickly as possible. The migration from version 3.8 to x.y is expected to be available by the end of January 2026.
We apologize for any inconvenience caused and appreciate your understanding.
Dec 19, 2025 - 08:32 UTC
Update - We are continuing to work on a fix for this issue.
Dec 15, 2025 - 14:36 UTC
Update - We have new information regarding the incident that affected your service(s).
Please find below an update on the situation:
Update: Our provider has informed us that the migration from version 3.8 to x.y is more complex than initially anticipated, which is why this feature is currently unavailable. However, customers can deploy Kafka version 3.9 or 4.0 in parallel if needed. The upgrade path is expected to be available by the end of January 2026.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Nov 03, 2025 - 14:50 UTC
Identified - We are experiencing an ongoing incident and have determined the origin of the issue affecting Kafka services in our Data & Analytics offer.
Here are some supplementary details:
Start time: 28/10/2025 12:00 UTC
Impacted Service(s): Upgrading the version triggers an error ("cannot upgrade from version 3.8 to x.y").
Customers Impact: Customers are temporarily unable to upgrade their Kafka version from 3.8 to x.y.
Root Cause: This incident is caused by our provider.
Ongoing Actions: The incident has been identified and our teams are working with our provider to restore service as quickly as possible.
We will keep you updated on the progress and resolution.
Oct 31, 2025 - 12:18 UTC
Investigating - We are currently investigating an incident affecting our Data & Analytics offering, which is causing a temporary functionality issue on Kafka services.
Here are some supplementary details:
Start time: 28/10/2025 12:00 UTC
Impacted Service(s): Upgrading the version triggers an error ("cannot upgrade from version 3.8 to x.y").
Ongoing Actions: Our teams are investigating to determine the origin of the incident and fix it.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Oct 30, 2025 - 16:15 UTC
As part of our continuous improvement plan, maintenance is scheduled on our Data Platform offer.
Here are the details of the maintenance:
Start time: 23/03/2026 13:00 UTC
End time: 23/03/2026 16:00 UTC
Service impact: None
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on our Data Platform infrastructure.
Thank you for your understanding. Posted on
Mar 10, 2026 - 15:34 UTC
As part of our continuous improvement plan, maintenance is scheduled on our AI Endpoints offer.
Here are the details of the maintenance:
Start time: 25/03/2026 05:00 UTC
End time: 25/03/2026 16:00 UTC
Service impact: As part of a scheduled maintenance operation (https://network.status-ovhcloud.com/incidents/kx8qjdnr3gld), a temporary performance slowdown may occur on the service.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on our AI Endpoints offer.
Thank you for your understanding. Posted on
Mar 09, 2026 - 16:32 UTC
As part of our continuous improvement plan, maintenance is scheduled on our AI Endpoints offer.
Here are the details of the maintenance:
Start time: 02/04/2026 04:00 UTC
End time: 02/04/2026 15:00 UTC
Service impact: As part of a scheduled maintenance operation (https://network.status-ovhcloud.com/incidents/dj3gd4jknvp0), a temporary performance slowdown may occur on the service.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on our AI Endpoints offer.
Thank you for your understanding. Posted on
Mar 09, 2026 - 16:37 UTC
As part of our continuous improvement plan, maintenance is scheduled on our Public Cloud Load Balancer in the US-EAST-VA-1/US-WEST-OR-1 regions.
Here are the details of the maintenance:
Start time: 02/04/2026 08:00 UTC
End time: 02/04/2026 13:00 UTC
Service impact: The API will be temporarily unavailable for 1 minute during the maintenance.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on our Load Balancer service infrastructure. This maintenance is necessary to ensure the continued reliability and performance of our platform. If you have any questions or concerns, please don't hesitate to reach out to our support team.
Thank you for your understanding. Posted on
Mar 03, 2026 - 16:10 UTC
As part of our continuous improvement plan, maintenance is scheduled on our Object Storage offer.
Start time: 07/04/2026 04:30 UTC
End time: 07/04/2026 09:30 UTC
Service impact: Some data storage will be temporarily unavailable during the maintenance window.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on our Object Storage offer in the YNM1 region.
Thank you for your understanding. Posted on
Mar 18, 2026 - 07:54 UTC
As part of our continuous improvement plan, maintenance is scheduled on our AI Endpoints offer.
Here are the details of the maintenance:
Start time: 13/04/2026 04:00 UTC
End time: 13/04/2026 15:00 UTC
Service impact: As part of a scheduled maintenance operation (https://network.status-ovhcloud.com/incidents/smqvqzmbfwql), a temporary performance slowdown may occur on the service.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on our AI Endpoints offer.
Thank you for your understanding. Posted on
Mar 09, 2026 - 16:53 UTC
As part of our continuous improvement plan, maintenance is scheduled on our AI Endpoints offer.
Here are the details of the maintenance:
Start time: 21/04/2026 04:00 UTC
End time: 21/04/2026 15:00 UTC
Service impact: As part of a scheduled maintenance operation (https://network.status-ovhcloud.com/incidents/7xp1b1vrgmkb), a temporary performance slowdown may occur on the service.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on our AI Endpoints offer.
Thank you for your understanding. Posted on
Mar 09, 2026 - 16:58 UTC
As part of our continuous improvement plan, maintenance is scheduled on our Managed Private Registry.
Start time: 21/04/2026 07:30 UTC
End time: 21/04/2026 16:00 UTC
Service impact: The upgrade of the Harbor version will set Harbor in read-only mode (only pulls will be allowed) for 1 to 2 minutes.
Service improvement: Following our continuous improvement policy, we are rolling out an upgrade of the registries.
Thank you for your understanding. Posted on
Mar 05, 2026 - 01:18 UTC
As part of our continuous improvement plan, maintenance is scheduled on our Managed Kubernetes Service.
Start time: 28/04/2026 07:00 UTC
End time: 30/04/2026 16:00 UTC
Service impact: Customers should plan for this maintenance in order to adapt to the new Kubernetes version.
Service improvement: As part of our continuous improvement policy, customers using Kubernetes version 1.30 will be migrated to version 1.31.
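Customers preparing for this migration may want to confirm which of their nodes still report a 1.30 kubelet. As an illustrative sketch only (the node inventory below is a hypothetical stand-in for output you would collect from `kubectl get nodes`, not live data), a small helper can flag nodes below the target minor version:

```python
# Sketch: flag nodes still running a Kubernetes minor version below the
# migration target (1.31). The node dict is a hypothetical stand-in for
# data gathered from `kubectl get nodes` or the Kubernetes API.

TARGET = (1, 31)

def minor_version(kubelet_version: str) -> tuple[int, int]:
    """Parse a version string like 'v1.30.4' into (1, 30)."""
    major, minor = kubelet_version.lstrip("v").split(".")[:2]
    return int(major), int(minor)

def nodes_needing_upgrade(nodes: dict[str, str]) -> list[str]:
    """Return names of nodes whose kubelet minor version is below TARGET."""
    return [name for name, ver in nodes.items() if minor_version(ver) < TARGET]

if __name__ == "__main__":
    # Hypothetical inventory for illustration only.
    nodes = {"pool-a-1": "v1.30.4", "pool-a-2": "v1.30.4", "pool-b-1": "v1.31.0"}
    print(nodes_needing_upgrade(nodes))  # -> ['pool-a-1', 'pool-a-2']
```

Running workloads against node pools flagged this way is where version-skew issues (e.g., removed APIs) would surface first.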
Thank you for your understanding. Posted on
Mar 18, 2026 - 21:56 UTC
As part of our continuous improvement plan, maintenance is scheduled on our AI Endpoints offer.
Here are the details of the maintenance:
Start time: 29/04/2026 04:00 UTC
End time: 29/04/2026 15:00 UTC
Service impact: As part of a scheduled maintenance operation (https://network.status-ovhcloud.com/incidents/lj16ckd9xb80), a temporary performance slowdown may occur on the service.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on our AI Endpoints offer.
Thank you for your understanding. Posted on
Mar 09, 2026 - 17:03 UTC
Past Incidents
Mar 22, 2026
Unresolved incident: [GLOBAL][Compute - Instance] - IAM Login incident notification.
Resolved -
We are pleased to inform you that the incident affecting our Compute - Instance offering in the GRA5, GRA7, GRA9 and GRA11 regions has been resolved.
Start time: 20/03/2026 08:06 UTC
End time: 20/03/2026 08:56 UTC
Root Cause: A service disruption occurred due to an unexpected underlying infrastructure malfunction.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 20, 09:01 UTC
Identified -
We are experiencing an ongoing incident and have determined the origin of the issue affecting our Compute - Instance offer in the affected regions.
Here are some supplementary details:
Start time: 20/03/2026 08:06 UTC
Impacted Service(s): Some instances in the GRA5, GRA7, GRA9 and GRA11 regions are unreachable.
Customers Impact: Some customers are temporarily unable to access and use their instances in the GRA5, GRA7, GRA9 and GRA11 regions.
Root Cause: A service disruption occurred due to an unexpected underlying infrastructure malfunction.
Ongoing Actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 20, 08:28 UTC
Resolved -
We are pleased to inform you that the incident affecting our Compute - Instance offer has been resolved.
Start time: 19/03/2026 18:16 UTC
End time: 19/03/2026 19:21 UTC
Root Cause: A service disruption occurred due to an unexpected underlying infrastructure malfunction.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 19, 19:26 UTC
Identified -
We are experiencing an ongoing incident and have determined the origin of the issue affecting our Compute - Instance offer in the affected region.
Here are some supplementary details:
Start time: 19/03/2026 18:16 UTC
Impacted Service(s): Some instances in the BHS1 region are unreachable.
Customers Impact: Some customers are temporarily unable to access and use their instances in the specified region.
Root Cause: A service disruption occurred due to an unexpected underlying infrastructure malfunction.
Ongoing Actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
Mar 19, 18:49 UTC
Completed -
The scheduled maintenance has been completed.
Mar 19, 15:31 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 19, 13:00 UTC
Scheduled -
As part of our continuous improvement plan, maintenance is scheduled on our Public Cloud Load Balancer in the EU-WEST-PAR region.
Here are the details of the maintenance:
Start time: 19/03/2026 13:00 UTC
End time: 19/03/2026 16:00 UTC
Service impact: The API will be temporarily unavailable for 1 minute during the maintenance.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on the Load Balancer service infrastructure. This maintenance is necessary to ensure the continued reliability and performance of our platform. If you have any questions or concerns, please don't hesitate to reach out to our support team.
Thank you for your understanding.
Mar 10, 15:55 UTC
We apologize for any inconvenience caused and appreciate your understanding.
Mar 18, 20:34 UTC
Identified -
We are currently investigating an incident affecting our MKS offering, which is causing a temporary availability issue in GRA11.
Here are some supplementary details:
Start time: 18/03/2026 13:30 UTC
Impacted Service(s): The creation of new Octavia load balancers through MKS may be delayed.
Customers Impact: No impact on existing load balancers.
Root Cause: https://public-cloud.status-ovhcloud.com/incidents/4s4tt0lqmmk4
Ongoing Actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 18, 16:02 UTC
Resolved -
We are pleased to inform you that the incident affecting our Load Balancer Octavia offering has been resolved.
Start time: 18/03/2026 13:30 UTC
End time: 18/03/2026 20:08 UTC
Root Cause: This incident was caused by a software issue.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 18, 20:13 UTC
Investigating -
We are currently investigating an incident affecting our Load Balancer Octavia offering, which is causing a temporary issue in GRA11.
Here are some supplementary details:
Start time: 18/03/2026 13:30 UTC
Impacted Service(s): Pending operations are slowed down, but the data plane is not impacted.
Customers Impact: Customers may experience delays when performing load balancer management operations.
Ongoing Actions: Our teams are investigating to determine the origin of the incident and fix it.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 18, 14:54 UTC
Completed -
The scheduled maintenance has been completed.
Mar 18, 10:18 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 18, 08:00 UTC
Scheduled -
As part of our continuous improvement plan, maintenance is scheduled on our Public Cloud Load Balancer in the GRA11 region.
Here are the details of the maintenance:
Start time: 18/03/2026 08:00 UTC
End time: 18/03/2026 11:00 UTC
Service impact: The API will be temporarily unavailable for 1 minute during the maintenance.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on our Load Balancer service infrastructure. This maintenance is necessary to ensure the continued reliability and performance of our platform. If you have any questions or concerns, please don't hesitate to reach out to our support team.
Thank you for your understanding.
Feb 18, 08:23 UTC
Completed -
All services are operational.
Mar 17, 15:36 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 17, 09:00 UTC
Scheduled -
As part of our continuous improvement plan, we will be carrying out maintenance on our Object Storage offer.
This may temporarily affect latency.
Start time: 17/03/2026 09:00 UTC
End time: 17/03/2026 16:00 UTC
Service impact: You may experience latency during the maintenance.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on our Object Storage offer. This upgrade includes a new feature: checksum validation on the S3 API. This will allow our customers to use the latest SDK versions.
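For context on what checksum validation means client-side: recent S3 SDKs send a digest of the payload with each upload so the server can verify integrity. A minimal standard-library sketch of that digest (the base64-of-SHA-256 form shown matches the convention used by S3 `ChecksumSHA256` values; whether your SDK sends it by default is an assumption to verify against your SDK version):

```python
# Sketch: compute the base64-encoded SHA-256 digest of an object body,
# the form recent S3 SDKs attach to uploads (e.g. as a ChecksumSHA256
# value) so the server can validate the payload. Standard library only.
import base64
import hashlib

def sha256_checksum_b64(body: bytes) -> str:
    """Return the base64-encoded SHA-256 digest of the payload."""
    return base64.b64encode(hashlib.sha256(body).digest()).decode("ascii")

if __name__ == "__main__":
    print(sha256_checksum_b64(b"hello world"))
```

If the digest the server computes does not match the one sent, the upload is rejected, which is what protects against silent corruption in transit.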
Thank you for your understanding.
Mar 10, 22:50 UTC
Completed -
The scheduled maintenance has been completed.
Mar 17, 11:49 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 17, 08:30 UTC
Update -
The current maintenance has been re-scheduled. Below you can find the new end time of the operation.
Start time: 17/03/2026 08:30 UTC
End time: 17/03/2026 12:00 UTC
Service impact: The API will be temporarily unavailable for 1 minute during the maintenance.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on the Load Balancer service in GRA9. This maintenance is necessary to ensure the continued reliability and performance of our platform. If you have any questions or concerns, please don't hesitate to reach out to our support team.
Feb 18, 05:18 UTC
Update -
Below you can find the new end time of the operation.
Start time: 09/03/2026 08:30 UTC
End time: 09/03/2026 12:00 UTC
Service impact: The API will be temporarily unavailable for 1 minute during the maintenance.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on the Load Balancer service in GRA9. This maintenance is necessary to ensure the continued reliability and performance of our platform. If you have any questions or concerns, please don't hesitate to reach out to our support team.
Feb 13, 16:36 UTC
Scheduled -
As part of our continuous improvement plan, maintenance is scheduled on our Public Cloud Load Balancer in the GRA9 region.
Here are the details of the maintenance:
Start time: 09/03/2026 08:30 UTC
End time: 09/03/2026 12:00 UTC
Service impact: The API will be temporarily unavailable for 1 minute during the maintenance.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on the Load Balancer service in GRA9. This maintenance is necessary to ensure the continued reliability and performance of our platform. If you have any questions or concerns, please don't hesitate to reach out to our support team.
Thank you for your understanding.
Feb 10, 19:58 UTC
Resolved -
We are pleased to inform you that the incident affecting MongoDB services has been resolved.
Start time: 16/03/2026 14:00 UTC
End time: 16/03/2026 20:05 UTC
Root Cause: A service disruption occurred due to an unexpected underlying infrastructure malfunction.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 17, 09:21 UTC
Monitoring -
We have new information regarding the incident that affected your service(s).
Please find below an update on the situation:
Update: The teams have rolled out a fix that has resolved the issue. It is being closely monitored to ensure long-term stability.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 16, 20:05 UTC
Update -
We are continuing to investigate this issue.
Mar 16, 15:21 UTC
Investigating -
We are currently investigating an incident affecting our MongoDB offering, which is causing a temporary availability issue in GRA/DE/UK/RBX/MUM/WAW/SGP/SBG/BHS/EU-WEST-PAR/EU-SOUTH-MIL.
Here are some supplementary details:
Start time: 16/03/2026 14:00 UTC
Impacted Service(s): New service creation is temporarily unavailable. Existing database services continue to run.
Customers Impact: Customers are temporarily unable to order new MongoDB services in the GRA/DE/UK/RBX/MUM/WAW/SGP/SBG/BHS/EU-WEST-PAR/EU-SOUTH-MIL regions.
Ongoing Actions: Our teams are investigating to determine the origin of the incident and fix it.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 16, 15:20 UTC
Completed -
The scheduled maintenance has been completed.
Mar 16, 15:04 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 16, 12:00 UTC
Scheduled -
As part of our continuous improvement plan, maintenance is scheduled on our Public Cloud Load Balancer in the DE1 region.
Here are the details of the maintenance:
Start time: 16/03/2026 12:00 UTC
End time: 16/03/2026 15:00 UTC
Service impact: The API will be temporarily unavailable for 1 minute during the maintenance.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on the Load Balancer service in DE1. This maintenance is necessary to ensure the continued reliability and performance of our platform. If you have any questions or concerns, please don't hesitate to reach out to our support team.
Thank you for your understanding.
Feb 17, 16:48 UTC
Resolved -
We are pleased to inform you that the incident affecting our MongoDB Discovery offering has been resolved.
Start time: 10/03/2026 23:40 UTC
End time: 13/03/2026 09:00 UTC
Root Cause: A service disruption occurred due to an unexpected underlying infrastructure malfunction.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 16, 09:09 UTC
Monitoring -
MongoDB Discovery offering has been fully restored since 13/03/2026 09:00 UTC. Monitoring remains active to ensure long-term stability.
Here are some supplementary details:
Start time: 10/03/2026 23:40 UTC
Impacted Service(s): New service creation and management operations were temporarily unavailable. Existing database services continued to run but might have been impacted due to degraded self-healing capabilities.
Root cause: A service disruption occurred due to an unexpected underlying infrastructure malfunction.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 13, 10:08 UTC
Identified -
We are currently investigating an incident affecting our MongoDB Discovery offering in the GRA Region.
Here are some supplementary details:
Start time: 10/03/2026 23:40 UTC
Impacted Service(s): New service creation and management operations are temporarily unavailable. Existing database services continue to run but may be impacted due to degraded self-healing capabilities.
Ongoing Actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
Mar 12, 09:33 UTC
Resolved -
We had an incident on our Storage offer, which has now been resolved.
Here are some supplementary details:
Start time: 13/03/2026 07:20 UTC
End time: 13/03/2026 08:15 UTC
Impacted Service(s): Severe network traffic disruptions affected the S3 object storage clusters between 07:20 and 07:30 UTC. From 07:30 to 08:15 UTC, network instability persisted, causing 500/503 errors and authentication problems.
Customers Impact: Customers may have encountered 500/503 errors and authentication issues.
Root Cause: This incident was caused by the following incident: https://network.status-ovhcloud.com/incidents/dt09splp43wh
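During short windows like this one, 500/503 responses are typically transient and retriable. As a general-purpose sketch (not an official client recommendation; the status codes and callable are illustrative), a client can wrap its S3 calls in bounded retries with exponential backoff:

```python
# Sketch: bounded retries with exponential backoff for transient 5xx
# errors, the usual client-side mitigation during brief windows of
# 500/503 responses. Illustrative only.
import time

RETRIABLE = {500, 503}

def with_retries(call, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Invoke `call()` until it returns a non-retriable status or attempts run out.

    `call` returns an HTTP status code here; real code would invoke an
    S3 operation and inspect the response or exception instead.
    """
    for attempt in range(max_attempts):
        status = call()
        if status not in RETRIABLE:
            return status
        if attempt < max_attempts - 1:
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    return status

if __name__ == "__main__":
    responses = iter([503, 503, 200])  # simulated transient failure
    print(with_retries(lambda: next(responses), sleep=lambda s: None))  # -> 200
```

Most S3 SDKs ship retry behavior like this out of the box; the sketch just makes the mechanism explicit.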
We thank you for your understanding and patience throughout this incident.
Mar 13, 11:33 UTC
Resolved -
We had an incident on our Containers & Orchestration offer that impacted the Load Balancer and Managed Kubernetes Service in GRA11, which has now been resolved.
Here are some supplementary details:
Start time: 12/03/2026 15:00 UTC
End time: 12/03/2026 19:30 UTC
Impacted Service(s): Network perturbations were temporarily observed on the Load Balancer and Managed Kubernetes Service in GRA11.
Customers Impact: Customers temporarily faced issues using the Load Balancer and Managed Kubernetes Service in the specified region.
Root Cause: This incident was caused by the following incident: https://public-cloud.status-ovhcloud.com/incidents/2t9ck6jjtylr
We thank you for your understanding and patience throughout this incident.
Mar 12, 19:43 UTC
Resolved -
We are pleased to inform you that the incident affecting our Compute - Instance in GRA11 has been resolved.
Start time: 12/03/2026 15:00 UTC
End time: 12/03/2026 19:30 UTC
Root Cause: This incident was caused by a software issue.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 12, 19:32 UTC
Update -
We are currently investigating an incident affecting our Compute - Instance offering, which is causing a temporary functionality issue in the GRA11 region.
Here are some supplementary details:
Start time: 12/03/2026 15:00 UTC
Impacted Service(s): Creation of new Public Cloud instances is temporarily unavailable. Network instabilities are also being observed.
Customers Impact: Customers are temporarily unable to create new Public Cloud instances or modify the network configuration of their existing instances.
Ongoing Actions: Our teams are investigating to determine the origin of the incident and fix it.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 12, 17:02 UTC
Investigating -
We are currently experiencing an event affecting our Compute - Instance offer.
Our teams are fully committed to investigating this issue and working towards a resolution as soon as possible. As investigations are ongoing, we will share any new findings or updates with you as soon as possible.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 12, 16:59 UTC
Resolved -
We are pleased to inform you that the incident affecting our Managed Kubernetes Service offer has been resolved.
Start time: 12/03/2026 09:11 UTC
End time: 12/03/2026 10:21 UTC
Root Cause: This incident was caused by a software issue.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 12, 16:35 UTC
Monitoring -
We had an incident on Managed Kubernetes Service offer which has now been resolved. Monitoring remains active to ensure long-term stability.
Here are some supplementary details:
Start time: 12/03/2026 09:11 UTC
End time: 12/03/2026 10:21 UTC
Impacted Service(s): All MKS clusters on GRA11 were experiencing slow API server access from worker nodes.
Customers Impact: Access to the MKS cluster API server was slow from some worker nodes, which could impact applications that frequently interact with the API server.
Ongoing Actions: Our teams are continuing to monitor the system to ensure stable service.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 12, 11:09 UTC
Completed -
The scheduled maintenance has been completed.
Mar 12, 14:48 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 12, 12:00 UTC
Scheduled -
As part of our continuous improvement plan, maintenance is scheduled on our Public Cloud Load Balancer in the GRA7 region.
Here are the details of the maintenance:
Start time: 12/03/2026 12:00 UTC
End time: 12/03/2026 15:00 UTC
Service impact: The API will be temporarily unavailable for 1 minute during the maintenance.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on our Load Balancer service infrastructure. This maintenance is necessary to ensure the continued reliability and performance of our platform. If you have any questions or concerns, please don't hesitate to reach out to our support team.
Thank you for your understanding.
Feb 18, 08:20 UTC
Resolved -
We are pleased to inform you that the incident affecting our Compute - Instance in GRA11 region has been resolved.
Start time: 11/03/2026 17:00 UTC
End time: 11/03/2026 19:35 UTC
Root Cause: This incident was caused by a software issue.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 11, 19:36 UTC
Investigating -
We are currently investigating an incident affecting our Compute - Instance offering, which is causing a temporary functionality issue in the GRA11 region.
Here are some supplementary details:
Start time: 11/03/2026 17:00 UTC
Impacted Service(s): Creation of new Public Cloud instances and modification of network configuration are temporarily unavailable.
Customers Impact: Customers are temporarily unable to create new Public Cloud instances or modify the network configuration of their existing instances.
Ongoing Actions: Our teams are investigating to determine the origin of the incident and fix it.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 11, 18:49 UTC
Completed -
The scheduled maintenance has been completed.
Mar 11, 18:34 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 11, 17:30 UTC
Update -
We will be undergoing scheduled maintenance during this time.
Mar 10, 05:33 UTC
Scheduled -
As part of our continuous improvement plan, maintenance is scheduled on our Public Cloud infrastructure.
Start time: 11/03/2026 17:30 UTC
End time: 11/03/2026 20:00 UTC
Service impact: Short intermittent downtime on hosted LZ APIs during the upgrade.
Service improvement: As part of our continuous improvement policy, we will be performing maintenance on our Public Cloud infrastructure in the regions listed below: AF-CENTRAL-LZ-ABJ, AF-NORTH-LZ-RBA
Thank you for your understanding.
Mar 9, 18:40 UTC
Completed -
The scheduled maintenance has been completed.
Mar 11, 17:59 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 11, 15:00 UTC
Scheduled -
As part of our continuous improvement plan, maintenance is scheduled on our Managed Kubernetes Service and LocalZones offer.
Start time : 11/03/2026 15:00 UTC End time : 11/03/2026 21:00 UTC Service impact : Customers may experience short, intermittent downtimes on hosted LZ APIs during the upgrade. Service improvement : As part of our continuous improvement policy, we will be performing an upgrade on our Managed Kubernetes Service offering in the EU-WEST-LZ.
Thank you for your understanding.
Mar 2, 22:31 UTC
Completed -
The scheduled maintenance has been completed.
Mar 11, 16:57 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 11, 12:00 UTC
Scheduled -
As part of our continuous improvement plan, maintenance is scheduled on our Public Cloud Load Balancer in the SBG5 region.
Here are the details of the maintenance: Start time : 11/03/2026 12:00 UTC End time : 11/03/2026 15:00 UTC Service impact : The API will be temporarily unavailable for 1 minute during the maintenance. Service improvement : As part of our continuous improvement policy, we will be performing maintenance on the Load Balancer service in SBG5. This maintenance is necessary to ensure the continued reliability and performance of our platform. If you have any questions or concerns, please don't hesitate to reach out to our support team.
Thank you for your understanding.
Mar 4, 16:09 UTC
Completed -
The scheduled maintenance has been completed.
Mar 11, 16:56 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 11, 14:30 UTC
Scheduled -
As part of our continuous improvement plan, maintenance is scheduled on our Public Cloud infrastructure.
Start time : 11/03/2026 14:30 UTC End time : 11/03/2026 20:00 UTC Service impact : Short, intermittent downtime on hosted LZ APIs (EU-WEST) during the upgrade. Service improvement : As part of our continuous improvement policy, we will be performing maintenance on our Public Cloud infrastructure in the regions listed below : EU-WEST-LZ-AMS, EU-WEST-LZ-BRU, EU-WEST-LZ-DLN, EU-WEST-LZ-LUX, EU-WEST-LZ-MNC, EU-WEST-LZ-MRS, EU-WEST-LZ-VIE, EU-WEST-LZ-ZRH
Thank you for your understanding.
Mar 9, 18:24 UTC
Resolved -
We are pleased to inform you that the incident affecting our AI & Machine Learning offer has been resolved.
Start time : 04/03/2026 10:00 UTC End time : 11/03/2026 07:00 UTC Root Cause : This incident was caused by a software issue.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 11, 12:56 UTC
Identified -
We have new information regarding the incident that affected your service(s).
Please find below the latest update on the situation: Update : Latency is still being observed on the LLama3.3-70B model. Our teams are actively monitoring the situation to ensure long-term stability. Ongoing Actions : The root cause of the incident has been identified, and our teams remain fully mobilised to restore normal service as quickly as possible.
We will continue to keep you informed about the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 9, 10:26 UTC
Monitoring -
Service has been fully restored since 06/03/2026 10:00 UTC. Monitoring remains active to ensure long-term stability.
Here are some supplementary details :
Start time : 06/03/2026 15:22 UTC Impacted Service(s) : LLama3.3-70B model is experiencing performance issues.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 6, 15:23 UTC
Identified -
We have new information regarding the incident that affected your service(s).
Please find below an update on the situation: Update : The issue has been identified and is related to an unusually high load on the LLama3.3-70B model. Our teams are monitoring the situation and working to mitigate the impact. Ongoing Actions : The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 6, 09:59 UTC
Investigating -
We are currently investigating an incident affecting our AI & Machine Learning offering, which is causing temporary latency issues with the LLama3.3-70B model.
Here are some supplementary details :
Start time : 04/03/2026 10:00 UTC Impacted Service(s) : LLama3.3-70B model is experiencing performance issues. Customers Impact : Latency and unavailability can be observed on the customer side. Ongoing Actions : Our teams are investigating to determine the origin of the incident and fix it.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 4, 14:16 UTC
Resolved -
We are pleased to inform you that the incident affecting our MongoDB offering has been resolved.
Start time : 10/03/2026 16:40 UTC End time : 10/03/2026 23:40 UTC Root Cause : A service disruption occurred due to an unexpected underlying infrastructure malfunction.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 11, 00:01 UTC
Identified -
We are currently experiencing an ongoing incident. We have determined the origin of the issue affecting our MongoDB offer in the GRA, DE, SBG, UK, BHS and WAW regions.
Here are some supplementary details :
Start time : 10/03/2026 16:40 UTC Impacted Service(s) : Management operations are temporarily unavailable (e.g., adding users, updating flavors). Existing database services remain operational and continue to serve customer workloads. Root Cause : A service disruption occurred due to an unexpected underlying infrastructure malfunction. Ongoing Actions : The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 10, 21:31 UTC
Investigating -
We are currently investigating an incident affecting our MongoDB offering in the DE and GRA regions.
Here are some supplementary details : Start time : 10/03/2026 16:40 UTC Impacted Service(s) : Management operations are temporarily unavailable (e.g., adding users, updating flavors). Existing database services remain operational and continue to serve customer workloads. Ongoing Actions : Our teams are investigating to determine the origin of the incident and fix it.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
Mar 10, 20:44 UTC
Resolved -
We would like to inform you that the incident on our Storage Volume Snapshot (Ceph) offer has now been resolved.
Here is detail for this incident : Start time : 10/03/2026 07:26 UTC End time : 10/03/2026 17:33 UTC Root Cause : This incident was caused by a network equipment issue.
We thank you for your understanding and patience throughout this incident.
Mar 10, 17:46 UTC
Identified -
We have new information regarding the incident that affected your service(s).
Please find below an update on the situation: Update : Our teams have replaced the faulty network equipment.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 10, 16:57 UTC
Investigating -
We are currently investigating an incident affecting our Storage offering, which is causing temporary availability issues in the GRA2 region.
Here are some supplementary details :
Start time : 10/03/2026 07:26 UTC Impacted Service(s) : Some CEPH volumes are not available in this region. Customers Impact : Customers may experience performance degradation and deterioration in their data redundancy. Ongoing Actions : Our teams are investigating to determine the origin of the incident and fix it.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 10, 16:52 UTC
Resolved -
We thank you for your understanding and patience throughout this incident.
Mar 10, 17:38 UTC
Identified -
We have determined the origin of the incident affecting our Managed Private Registry offer on the GRA Region.
Here are some supplementary details : Start time : 10/03/2026 09:47 UTC Impacted Service(s) : Some registries are temporarily unavailable. Customers Impact : Customers are temporarily unable to access their registries located in the specified region. Root Cause : This incident is caused by the following incident : https://public-cloud.status-ovhcloud.com/incidents/myp81nc676b8 Ongoing Actions : The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
Mar 10, 17:30 UTC
Completed -
The scheduled maintenance has been completed.
Mar 10, 17:36 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 10, 15:00 UTC
Scheduled -
As part of our continuous improvement plan, maintenance is scheduled on our Managed Kubernetes Service and LocalZones offer.
Start time : 10/03/2026 15:00 UTC End time : 10/03/2026 21:00 UTC Service impact : Customers may experience short, intermittent downtimes on hosted LZ APIs during the upgrade. Service improvement : As part of our continuous improvement policy, we will be performing an upgrade on our Managed Kubernetes Service offering in the US-EAST-LZ.
Thank you for your understanding.
Mar 2, 22:06 UTC
Resolved -
We would like to inform you that we had an incident on our Managed Private Registry. This incident has now been resolved.
Here are some supplementary details : Start time : 10/03/2026 09:47 UTC End time : 10/03/2026 13:51 UTC Impacted Service(s) : Some registries were unavailable for several minutes between 09:47 and 13:51 UTC. Root Cause : This incident was caused by the following incident : https://public-cloud.status-ovhcloud.com/incidents/9ty1dd080xgw
We thank you for your understanding and patience throughout this incident.
Mar 10, 16:49 UTC
Resolved -
We are pleased to inform you that the incident affecting our Cold Archive offer has been resolved. However, some time will be required to fully clear the backlog.
Start time : 07/11/2025 10:00 UTC End time : 10/03/2026 12:00 UTC Root Cause : A service disruption occurred due to an unexpected underlying infrastructure malfunction.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 10, 13:10 UTC
Update -
Please find below an update on the situation:
Update : Archive recovery is still underway. We would like to clarify the situation regarding restorations: in the meantime, if a customer needs to restore a bucket in "archiving" status, they must submit a support ticket (the criticality level is at the customer's discretion) so that we can perform the restoration. Ongoing Actions : Our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
Jan 27, 10:36 UTC
Update -
Please find below an update on the situation: Update : The completion rate has now returned to normal levels; however, a considerable amount of time will still be required to fully clear the backlog. Ongoing Actions : Our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
Jan 8, 16:42 UTC
Update -
Please find below an update on the situation: Update : We continue to see a backlog of archiving operations that began on 07 Nov 2025 10:00 UTC. The backlog is being processed, but the rate of completion remains below normal levels, resulting in extended latency for new archiving requests. No new tasks are being blocked, but they are progressing more slowly than expected. Ongoing Actions : Our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
Dec 29, 16:17 UTC
Update -
Please find below an update on the situation: Update : We are continuing our archiving efforts, though completion is taking longer than expected. Ongoing Actions : Our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
Dec 26, 21:12 UTC
Update -
Please find below an update on the situation: Update : We are still catching up on archiving, but it's slower than expected. Ongoing Actions : Our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
Dec 17, 10:37 UTC
Update -
We have new information regarding the incident that affected your service(s).
Please find below an update on the situation: Update : Archiving tasks are no longer stuck, but slowness is still observed. As soon as a customer requests archiving, the data will be billed as if it were on tape, even though it may take a while for the data to actually reach the tape. Ongoing Actions : The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Dec 12, 16:58 UTC
Update -
We have new information regarding the incident that affected your service(s).
Please find below an update on the situation: Update : We are currently catching up on archiving, but we do not have an estimated time frame to share with you at this time. Ongoing Actions : The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Dec 1, 14:21 UTC
Identified -
We are currently experiencing an ongoing incident. We have determined the origin of the issue affecting our Cold Archive offer in the affected region.
Here are some supplementary details :
Start time : 07/11/2025 10:00 UTC Impacted Service(s) : Archiving tasks are temporarily stuck in "Archiving" state. Customers Impact : Customers are temporarily unable to archive their data. Root Cause : A service disruption occurred due to an unexpected underlying infrastructure malfunction. Ongoing Actions : The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution.
Nov 20, 10:48 UTC
Resolved -
We are pleased to inform you that the incident affecting our Dedicated Servers has been resolved.
Start time : 10/03/2026 07:26 UTC End time : 10/03/2026 10:11 UTC Root Cause : This incident was caused by a cooling system issue.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 10, 10:46 UTC
Monitoring -
Service has been fully restored since 10/03/2026 10:05 UTC. Monitoring remains active to ensure long-term stability.
Here are some supplementary details :
Start time : 10/03/2026 07:26 UTC Impacted Service(s) : Some CEPH volumes were not available in this region. Customers Impact : Customers may have experienced performance degradation and deterioration in their data redundancy. Root Cause : This incident was caused by a network equipment issue.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 10, 10:30 UTC
Identified -
We have new information regarding the incident that affected your service(s).
Please find below an update on the situation: Update : Our teams have replaced the faulty network equipment.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 10, 10:29 UTC
Update -
We are currently investigating an incident affecting our Storage offering, which is causing temporary availability issues in the GRA2 region.
Here are some supplementary details :
Start time : 10/03/2026 07:26 UTC Impacted Service(s) : Some CEPH volumes are not available in this region. Customers Impact : Customers may experience performance degradation and deterioration in their data redundancy. Ongoing Actions : Our teams are investigating to determine the origin of the incident and fix it.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 10, 08:06 UTC
Investigating -
We are currently experiencing an event affecting our Storage offer.
Start time : 10/03/2026 07:26 UTC
Our teams are fully committed to investigating this issue and working towards a resolution as soon as possible. As investigations are ongoing, we will share any new findings or updates with you as soon as possible.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 10, 07:55 UTC
Completed -
The scheduled maintenance has been completed.
Mar 9, 19:12 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 9, 11:00 UTC
Scheduled -
As part of our continuous improvement plan, we will be carrying out maintenance on our cooling infrastructure.
Start time : 09/03/2026 11:00 UTC End time : 09/03/2026 19:00 UTC
Service impact : During this maintenance, the cooling system's efficiency may be temporarily reduced for some servers, potentially lowering performance. Despite ongoing mitigation efforts, customers could still experience a temporary reboot or shutdown in the worst case. Service improvement : As part of our continuous improvement policy, we will be performing maintenance on our cooling infrastructure.
Thank you for your understanding.
Mar 3, 10:05 UTC
Resolved -
We are pleased to inform you that the incident affecting our Public Cloud offer has been resolved.
Start time : 09/03/2026 13:30 UTC End time : 09/03/2026 17:10 UTC Root Cause : A service disruption occurred due to an unexpected underlying infrastructure malfunction.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 9, 17:12 UTC
Update -
We are currently investigating an incident affecting our API Network, which is causing temporary functionality issues in the GRA11 region.
Here are some supplementary details : Start time : 09/03/2026 13:30 UTC Impacted Service(s) : Degraded API in the GRA11 region Customers Impact : Customers may experience slow responses or intermittent HTTP 500 errors when performing certain operations through the API. Ongoing Actions : Our teams are investigating to determine the origin of the incident and fix it.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 9, 16:26 UTC
Investigating -
We are currently experiencing an event affecting our API Network on our Public Cloud offer in GRA11.
Start time : 09/03/2026 13:30 UTC
Our teams are fully committed to investigating this issue and working towards a resolution as soon as possible. As investigations are ongoing, we will share any new findings or updates with you as soon as possible.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 9, 16:11 UTC
Resolved -
We are pleased to inform you that the incident affecting our Compute - Instance has been resolved.
Start time : 03/03/2026 16:33 UTC End time : 03/03/2026 18:45 UTC Root Cause : A service disruption occurred due to an unexpected infrastructure malfunction.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 9, 15:50 UTC
Update -
We are pleased to inform you that the incident affecting our Compute - Instance has been resolved.
Start time : 03/03/2026 16:33 UTC End time : 03/03/2026 18:45 UTC Root Cause : A service disruption occurred due to an unexpected infrastructure malfunction.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 3, 18:45 UTC
Update -
We have new information regarding the incident that affected your service(s).
Please find below an update on the situation: Update : Most of the instances are now operational again. Ongoing Actions : Our teams are mobilised to restore the remaining instances as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
Mar 3, 17:47 UTC
Update -
We have new information regarding the incident that affected your service(s).
Please find below an update on the situation:
Start time : 03/03/2026 16:33 UTC Impacted Service(s) : Some instances in the Gravelines region are unreachable. Customers Impact : Some customers are temporarily unable to access and use their instances in the specified region. Root Cause : A service disruption occurred due to an unexpected infrastructure malfunction. Ongoing Actions : The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution.
Mar 3, 16:54 UTC
Identified -
We are currently investigating an incident affecting our Compute - Instance offering, which is causing temporary availability issues in the Gravelines region.
Here are some supplementary details :
Start time : 03/03/2026 16:33 UTC Impacted Service(s) : Some instances in the Gravelines region are unreachable. Ongoing Actions : Our teams are investigating to determine the origin of the incident and fix it.
We will keep you updated on the progress and resolution.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 3, 16:45 UTC
Completed -
The scheduled maintenance has been completed.
Mar 9, 14:05 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 9, 13:00 UTC
Scheduled -
As part of our continuous improvement plan, maintenance is scheduled on our Data Platform offer.
Here are the details of the maintenance : Start time : 09/03/2026 13:00 UTC End time : 09/03/2026 16:00 UTC Service impact : None Service improvement : As part of our continuous improvement policy, we will be performing maintenance on our Data Platform infrastructure.
Thank you for your understanding.
Feb 27, 09:59 UTC
Resolved -
We are pleased to inform you that the incident affecting our Network has been resolved.
Start time : 08/03/2026 21:40 UTC End time : 08/03/2026 22:55 UTC Root Cause : A service disruption occurred due to an unexpected underlying infrastructure malfunction.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 8, 22:56 UTC
Investigating -
We are currently investigating an incident affecting our Network, which is causing temporary instability in the GRA11 region.
Here are some supplementary details :
Start time : 08/03/2026 21:40 UTC Impacted Service(s) : There is ongoing network instability in this region. Ongoing Actions : Our teams are investigating to determine the origin of the incident and fix it.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
Mar 8, 22:44 UTC
Resolved -
We are pleased to inform you that the incident affecting our Compute - Instance offer has been resolved.
Start time : 08/03/2026 09:00 UTC End time : 08/03/2026 13:38 UTC Root Cause : A service disruption occurred due to an unexpected underlying infrastructure malfunction.
We apologize for any inconvenience caused and appreciate your understanding.
Mar 8, 13:40 UTC
Identified -
We are currently experiencing an ongoing incident. We have determined the origin of the issue affecting our Compute - Instance offer in the GRA1 region.
Here are some supplementary details :
Start time : 08/03/2026 09:00 UTC Impacted Service(s) : Some instances in the GRA1 region are unreachable. Customers Impact : Some customers are temporarily unable to access and use their instances in the specified region. Root Cause : A service disruption occurred due to an unexpected underlying infrastructure malfunction. Ongoing Actions : The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
Mar 8, 09:47 UTC