All Systems Operational

About This Site

CityCloud status page for public regions

BUF1 Operational
Compute Operational
Storage Operational
Network Operational
API Operational
DX1 Operational
Compute Operational
Storage Operational
Network Operational
API Operational
FRA1 Operational
Compute Operational
Storage Operational
Network Operational
API Operational
KNA1 Operational
Compute Operational
Storage Operational
Network Operational
API Operational
LON1 Operational
Compute Operational
Storage Operational
Network Operational
API Operational
STO2 Operational
Compute Operational
Storage Operational
Network Operational
API Operational
TKY1 Operational
Compute Operational
Storage Operational
Network Operational
API Operational
Online backup Operational
City Control Panel Operational
Past Incidents
Jul 16, 2020

No incidents reported today.

Jul 15, 2020
Resolved - The Octavia load balancer service is working again, and this incident will be closed.
The incident was caused by the amphora servers failing to get responses to their DHCP requests, which resulted in the loss of their IP addresses. As a result, the control plane lost contact with the amphora servers and tried to fail them over.
Jul 15, 15:46 CEST
Monitoring - The load balancer service is restored and all load balancers are in an active state. We will keep monitoring this issue for a while; if you still experience non-working load balancers, please contact support (a quick way to verify your load balancers is sketched after this incident).
Jul 15, 13:45 CEST
Identified - Technicians have identified the cause of this issue and are working on restoring the service.
Jul 15, 12:37 CEST
Update - Our technicians are still investigating the Octavia load balancer issue.
Jul 15, 10:46 CEST
Investigating - We are currently investigating issues with Octavia load balancers in Stockholm.
Jul 15, 09:50 CEST
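
If you want to verify your own load balancers after an incident like this, the lines below are a minimal sketch using the openstacksdk Python client to list load balancers and their provisioning and operating status. The cloud name "citycloud" is a placeholder for whatever entry exists in your clouds.yaml; this is an illustration, not an official recovery procedure.

import openstack

# Connect using a clouds.yaml entry; "citycloud" is a placeholder name.
conn = openstack.connect(cloud="citycloud")

# List load balancers and flag any that are not back to ACTIVE / ONLINE.
for lb in conn.load_balancer.load_balancers():
    healthy = lb.provisioning_status == "ACTIVE" and lb.operating_status == "ONLINE"
    flag = "" if healthy else "  <-- check or contact support"
    print(f"{lb.name}: provisioning={lb.provisioning_status}, operating={lb.operating_status}{flag}")
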
Jul 14, 2020
Resolved - This incident has been resolved.
Jul 14, 12:20 CEST
Monitoring - Evacuation has been completed and the net node has been taken out of production. We will keep monitoring.
Jul 14, 10:48 CEST
Identified - Evacuation of the net node has begun.
Jul 14, 10:13 CEST
Investigating - We have experienced a loss of connectivity in the STO2 region. After investigation we pinned down the specific net node, which will be taken out of production.
We will start evacuating the net node, which might lead to intermittent connectivity loss.
The estimated time is around 60 minutes - we will keep updating the status.
Jul 14, 10:06 CEST
Jul 13, 2020
Resolved - Incident resolved and connectivity has been restored.
Jul 13, 06:01 CEST
Update - Investigation completed.
Jul 13, 06:01 CEST
Investigating - Due to a disruption in the Software Defined Network on a specific compute node in the STO2 region, connectivity was lost for instances running on that node for between 1 and 140 seconds. The loss occurred during re-provisioning of the network configuration of instances on the node in question.
Jul 13, 05:58 CEST
Jul 12, 2020

No incidents reported.

Jul 11, 2020

No incidents reported.

Jul 10, 2020

No incidents reported.

Jul 9, 2020
Resolved - Incident is now resolved.
Jul 9, 18:38 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Jul 9, 17:37 CEST
Identified - Technicians have identified issues with a net node.
We are currently restarting the network node to restore the service.
Jul 9, 17:34 CEST
Investigating - We are currently investigating this issue.
Jul 9, 17:15 CEST
Jul 8, 2020

No incidents reported.

Jul 7, 2020

No incidents reported.

Jul 6, 2020

No incidents reported.

Jul 5, 2020

No incidents reported.

Jul 4, 2020

No incidents reported.

Jul 3, 2020

No incidents reported.

Jul 2, 2020

No incidents reported.