Our team is investigating an emerging issue where some customers are receiving delayed event notifications or notification emails for activities performed on Cloud Manager. We will share additional updates as we have more information.
We are investigating a critical service issue affecting NVIDIA RTX 4000 Ada GPU nodes across multiple regions, including Osaka (osa1), Seattle (sea1), and Chicago (ord1). Affected GPU nodes may report an unrecoverable error state, leading to failures in Vulkan initialization and GPU-accelerated workloads. Additionally, some LKE clusters in the Osaka region are currently experiencing Control Plane connectivity issues, resulting in timed-out API requests and errors. Our engineering teams are investigating the root cause, focusing on a potential regression in the underlying host hypervisor or GPU firmware. We will provide more information as it becomes available.
Our team is investigating an issue affecting connectivity in our Mumbai, IN (in-mum) data center. During this time, users may experience intermittent connection timeouts and errors for all services deployed in this data center. We will share additional updates as we have more information.
Our team is investigating an issue affecting connectivity in our Singapore data center starting at approximately 1905 UTC, February 27, 2026. During this time, users may experience intermittent connection timeouts and errors for all services deployed in this data center. We will share additional updates as we have more information.
Our team is investigating an issue affecting connectivity in our Chennai, IN (in-maa) and Mumbai Expansion, IN (in-bom-2) data centers. During this time, users may experience intermittent connection timeouts and errors for all services deployed in these data centers. We will share additional updates as we have more information.
Our team is aware of an issue that affected the Block Storage service in Stockholm between 07:40 and 10:52 AM UTC on February 12, 2026. During this time, users may have experienced stuck operations on attached volumes. At this time we have been able to correct the issues affecting the Block Storage service. We will be monitoring this to ensure that it remains stable. If you continue to experience problems, please open a Support ticket for assistance.
Starting around 11:04 UTC on February 10, 2026, the Longview graph dashboard became unavailable. Our investigation revealed that an expired internal certificate caused the issue. The impact was limited to reading existing reporting data, and no reporting data was permanently lost as a result of this issue. The affected certificate was rotated, and the impact was mitigated at 23:44 UTC on February 10, 2026. We will continue to monitor to ensure that the issue is fully resolved.