All Systems Operational

Component status (uptime over the past 90 days):

Interxion FRA8: Operational (99.98 % uptime)
Edge Network: Operational (99.95 % uptime)
DDoS Protection: Operational (99.99 % uptime)
Facilities: Operational (100.0 % uptime)

maincubes FRA01: Operational (99.99 % uptime)
Core Network: Operational (100.0 % uptime)
Virtual Servers: Operational (99.98 % uptime)
Ceph Cluster: Operational (100.0 % uptime)
Network Storage: Operational (100.0 % uptime)
Dedicated Servers: Operational (100.0 % uptime)
Plesk Web Hosting: Operational (99.99 % uptime)
TeamSpeak3 Servers: Operational (99.99 % uptime)
Facilities: Operational (100.0 % uptime)

General Services: Operational (100.0 % uptime)
Avoro CP & Support: Operational (100.0 % uptime)
PHP-Friends CRM & Support: Operational (100.0 % uptime)
Cloud Control Panel: Operational (100.0 % uptime)
Dedicated Control Panel: Operational (100.0 % uptime)
IPMI VPN: Operational (100.0 % uptime)
DDoS Manager: Operational (100.0 % uptime)
Domain Robot: Operational (100.0 % uptime)
Scheduled Maintenance
Webhosting maintenance / web03 & web04 Feb 6, 2025 22:00-23:00 CET
During the specified time window, we will upgrade the operating system of the webhosting systems listed above. We expect a maximum downtime of approx. 10 minutes per webhost during the maintenance work.
Posted on Feb 01, 2025 - 20:11 CET
Past Incidents
Feb 6, 2025

No incidents reported today.

Feb 5, 2025

No incidents reported.

Feb 4, 2025

No incidents reported.

Feb 3, 2025

No incidents reported.

Feb 2, 2025
Resolved - As the hypervisor's load is fine now and no further issues have occurred, we are closing this incident.
Feb 2, 13:27 CET
Monitoring - All VMs have been booted. The performance of the hypervisor will be back to normal in a few minutes. Booting a large number of VMs causes high load, please refrain from manually rebooting your VMs as it will slow down the process overall.
Feb 2, 13:19 CET
Investigating - We identified web03.dataforest.net as another affected VM which will recover in the next 1-2 minutes.
Feb 2, 13:09 CET
Identified - The host system epyc9754-1, hosting customer VMs as well as the TS3 instances ts05.avoro.eu and ts15.avoro.eu, rebooted unexpectedly 5 minutes ago and is currently recovering.
Feb 2, 13:01 CET
Feb 1, 2025
Completed - The maintenance was successfully completed without any unforeseen issues or outages. During the individual line card reboots, minimal packet loss (~ 1%) occurred within a one- to two-second time frame, which was expected. Full redundancy and capacity have been restored.
Feb 1, 03:08 CET
In progress - During the preparation work currently in progress, we traced the root cause of the DDoS protection packet loss a few hours earlier (see https://status.dataforest.net/incidents/p27xf22kllpt) to a known Junos OS bug triggered by a routine commit operation.

Given the nature of the bug, we have decided not only to remove the specific configuration elements that triggered it during the upcoming maintenance window - this being the official workaround recommended by our vendor, Juniper Networks - but also to reboot all line cards and routing engines in the affected router as a precautionary measure.

Since all components are fully redundant, no service impact is expected. Should any unforeseen issues arise, we will immediately update this status post.

A software update that mitigates the bug will be implemented in the coming months during a separate maintenance window. For that update, the affected PoP will be fully drained of traffic beforehand to prevent any customer impact. As removing the problematic configuration resolves the immediate issue, this update, which requires additional preparation time, is no longer urgent.

Feb 1, 02:06 CET
Scheduled - Following a global increase in network latency (~200–400 ms) observed at 19:25 CET for approximately two minutes, we identified a malfunctioning Packet Forwarding Engine (PFE) in our active edge router. The router automatically disabled the faulty PFE, leading to an immediate recovery of services. Until its deactivation, traffic routed through the affected PFE experienced degradation.

Thanks to our redundant network architecture, no global outage occurred; however, latency-sensitive applications may have experienced severe delays.

During the upcoming maintenance window, we will reboot the line card hosting the affected PFE. This action is not expected to impact the network, but a residual risk remains.

Jan 31, 19:47 CET
Jan 31, 2025
Resolved - This incident has been resolved.
Jan 31, 15:51 CET
Monitoring - Since the workaround was applied, the situation has been stable; however, our DDoS protection remains degraded while we work on a permanent solution. IPv4 addresses and prefixes without DDoS protection were not affected, and IPv6 was entirely unaffected.
Jan 31, 15:39 CET
Identified - Starting at 15:09 CET, we observed packet loss for IPv4 addresses under active mitigation / DDoS protection. The issue was identified and a workaround applied at 15:14. We are working on a permanent fix.
Jan 31, 15:18 CET
Jan 30, 2025

No incidents reported.

Jan 29, 2025

No incidents reported.

Jan 28, 2025
Resolved - Since our status update, there has been no further instability. We therefore consider this incident resolved.
Jan 28, 04:25 CET
Monitoring - From 01:29 AM to 02:41 AM (CET), at least three timeouts (lasting around 60-90 seconds) were visible on routes via DE-CIX. There is no disruption within our network. We are actively monitoring the situation and will intervene manually if necessary.
Jan 28, 02:46 CET
Jan 27, 2025

No incidents reported.

Jan 26, 2025

No incidents reported.

Jan 25, 2025
Resolved - This incident has been resolved.
Jan 25, 13:21 CET
Identified - VMs on the host are experiencing performance and reachability issues.
Jan 25, 13:15 CET
Jan 24, 2025
Completed - The maintenance was successfully completed without unforeseen issues or outages within the announced maintenance window. Our 100G/400G aggregation switching platform is now running on the latest vendor-recommended software version (Juniper Networks), addressing minor bugs observed recently. Full redundancy and capacity have been restored.
Jan 24, 05:14 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jan 24, 03:00 CET
Scheduled - During the specified timeframe, we will perform necessary maintenance on our Edge Routing Point of Presence (PoP) at Interxion FRA8. This PoP handles traffic for all dataforest services across all datacenters. The maintenance is carried out on our aggregation switches to ensure stable operation and address minor software bugs.

Thanks to our network redundancy and the sequential maintenance of switches, no outages are expected. However, the availability of the following 100G and 400G uplink ports will be reduced:

- Direct Peering (IXP): Apple, ByteDance (TikTok), Digital Ocean, Fastly, i3D.net, LWLcom, Microsoft, Netflix, Telefonica (o2)
- Private Peering (PNI): Amazon, Cloudflare, Hetzner
- Public Peering: MegaIX, GNM-IX
- Transit: Inter.link
- Additionally, 40G / 100G / 400G Dedicated Server customers located at this datacenter are required to use LACP to avoid an outage

Traffic with these networks will reroute over alternative paths, which may result in slightly increased latency. To avoid packet loss, affected BGP sessions will be gracefully shut down at the beginning of the maintenance, and each device will only be touched once it is fully drained of traffic. The total bandwidth capacity of the dataforest network and our DDoS protection services will remain sufficient throughout the maintenance.

We would like to point out that a residual risk always exists during such maintenance activities due to potential software issues or unforeseen circumstances. Should any unforeseen issues arise, such as a disruption to our network availability, we will immediately update this status post. As a precautionary measure, the maintenance is scheduled during a period of low traffic to minimize potential disruptions.
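For the dedicated server customers asked to use LACP above, a link-aggregation bond keeps connectivity up while one member link is drained. The following is a minimal illustrative sketch using netplan on Linux; the interface names, address, and gateway are placeholders, not dataforest-provided values:

```yaml
# Hypothetical sketch: 802.3ad (LACP) bond across two NICs via netplan.
# Interface names and addresses are placeholders, not dataforest defaults.
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad            # LACP: either member link can be drained
        lacp-rate: fast          # quicker failure detection during maintenance
        transmit-hash-policy: layer3+4
      addresses:
        - 203.0.113.10/24        # placeholder address
      routes:
        - to: default
          via: 203.0.113.1       # placeholder gateway
```

With such a bond in place, the switch side can gracefully take one member port out of service during line-card maintenance while traffic continues over the remaining link.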

Jan 9, 00:30 CET
Jan 23, 2025

No incidents reported.