Update - 193.46.81.0/24 is reachable again.
Dec 28, 2025 - 00:32 CET
Update - 193.46.81.0/24 is currently unreachable, we are working on it.
Dec 28, 2025 - 00:09 CET
Update - Yesterday, we received customer reports indicating that HTTPS connections to and from the United States – particularly involving destinations geographically close to Chicago – experienced intermittent issues for a few hours.

Based on our analysis, the scope was very limited: Most customers reported alerts from external monitoring probes located in the US, while their real visitors and production traffic were unaffected. In a smaller number of cases, specific endpoints (e.g., Docker registries hosted in the US) could not be reached reliably, again without broader customer-facing impact. Only IPv4 traffic was affected.
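
For anyone who wants to verify whether a similar issue affects only IPv4, the following minimal Python sketch (a hypothetical check, not part of our tooling; the target host and timeout are example values) attempts a TLS handshake to a host over IPv4 and IPv6 separately:

import socket, ssl

def https_reachable(host, family, timeout=5.0):
    """Try a TCP + TLS handshake to host:443 restricted to one address family."""
    try:
        infos = socket.getaddrinfo(host, 443, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no address of that family published for the host
    ctx = ssl.create_default_context()
    for af, socktype, proto, _, addr in infos:
        try:
            with socket.socket(af, socktype, proto) as raw:
                raw.settimeout(timeout)
                raw.connect(addr)
                with ctx.wrap_socket(raw, server_hostname=host):
                    return True  # handshake completed
        except OSError:
            continue
    return False

if __name__ == "__main__":
    target = "registry-1.docker.io"  # example endpoint, adjust as needed
    print("IPv4:", https_reachable(target, socket.AF_INET))
    print("IPv6:", https_reachable(target, socket.AF_INET6))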

As soon as the first reports came in, we began an immediate analysis and traced the behavior to the activation of additional network protection measures. To minimize further impact, we reverted these changes and continued debugging while reactivating them.

We implemented a corrective change at the start of tonight’s maintenance window announced under https://status.dataforest.net/incidents/tbtvg8bg6ct6, which resolved the issue permanently.

Dec 27, 2025 - 05:25 CET
Update - Two days ago, we published a post-mortem on our blog for the October 23, 2025 network outage. The article also discusses the network upgrades announced here on our status page, in particular the addition of another PoP (Point of Presence).

German version: dataforest.net/blog/post-mortem-23-10-25
English version: dataforest.net/en/blog/post-mortem-23-10-25

Dec 12, 2025 - 01:33 CET
Update - Between 13:26 and 13:27 (CET), packet loss occurred in our network due to a changed attack vector. The situation has been stable since then, and additional measures have already been implemented.
Nov 02, 2025 - 14:35 CET
Update - The situation has remained stable for over a week with no disruptions or packet loss, despite ongoing daily attacks. This incident will remain open until the post-mortem for last week’s outage has been published.
Nov 02, 2025 - 10:56 CET
Update - Connectivity was restored at 14:00 CEST; BGP sessions have been re-establishing since then.

A software bug in our aggregation switches led to a major outage when we committed a (regular) change in our edge network related to the DDoS attacks. Layer 2 connectivity was lost completely and could only be restored by rebooting the devices. A post-mortem will follow.

Oct 23, 2025 - 14:03 CEST
Update - We are seeing a major outage that began one minute ago.
Oct 23, 2025 - 13:35 CEST
Monitoring -

For several days, parts of our network have experienced occasional packet loss and increased latency, usually lasting a few seconds, sometimes up to a minute. Some days it happens two or three times. Other days it does not happen at all. The cause is ongoing DDoS attacks driven by the “Aisuru” botnet, which is pushing many networks worldwide to their capacity limits. Alongside us, many other large cloud and hosting providers are affected, including some with significantly larger networks. Trade press reports we link below estimate the botnet’s capacity at 20-30 Tbps. Because the attacks target completely different customers and regularly our entire network (“carpet bombing”) at record bandwidths, effective filtering is very complex.

Your data is not at risk: the attacks aim solely to overload servers and network infrastructure.

Since the first attacks, we have been working almost 24/7 on various countermeasures. We have improved our filtering and detection mechanisms, expanded capacity, and used traffic engineering to distribute attacks across our upstreams to prevent overload on individual uplinks. We are also working closely with our transit providers on pre-filtering and early detection.
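
To illustrate the idea behind spreading traffic across several upstreams (a simplified Python sketch only, not our actual router configuration; the uplink names and flow tuples are made up), each flow's 5-tuple is hashed and mapped to one uplink, ECMP-style, so that no single uplink carries a disproportionate share:

import hashlib
from collections import Counter

UPLINKS = ["transit-a", "transit-b", "transit-c", "ix-peering"]  # placeholder names

def pick_uplink(src_ip, dst_ip, src_port, dst_port, proto):
    """Deterministically map a flow's 5-tuple to one uplink."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return UPLINKS[int.from_bytes(digest[:4], "big") % len(UPLINKS)]

# Simulate many flows and show that they spread roughly evenly.
flows = [("198.51.100.7", "203.0.113.9", 40000 + i, 443, "tcp") for i in range(10000)]
print(Counter(pick_uplink(*f) for f in flows))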

These measures are now having an effect – the vast majority of attacks are fully filtered and no longer cause damage. However, it is unavoidable that new measures must occasionally be deployed. During that time, latency-sensitive applications like game servers may experience issues. For TCP-based applications like web and mail servers, outages are usually not observed because these applications are not sensitive to short packet loss or latency spikes.
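
If you want to observe such short events yourself, a rough Python sketch like the one below (the target host and the 200 ms spike threshold are arbitrary example values) samples TCP connection setup latency once per second and reports loss or spikes:

import socket, time

def sample_latency(host, port=443, timeout=3.0):
    """Measure the time for one TCP connection setup, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None  # treat a failed attempt as loss

if __name__ == "__main__":
    host = "example.org"  # placeholder target
    for _ in range(30):
        rtt = sample_latency(host)
        if rtt is None:
            print("loss")
        elif rtt > 0.2:
            print(f"spike: {rtt * 1000:.0f} ms")
        time.sleep(1)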

We would like to communicate transparently that it will take several months to fully implement all measures and network expansions. Unfortunately, upgrades in the terabit range, as well as the provisioning of cross-connects and dark fibers, involve lead times that we cannot avoid. We use these unavoidable “waiting periods” to implement other measures, prepare the planned upgrades, and bring new network hardware into operation ahead of schedule, so that you will generally not notice any issues. The situation improves on a daily basis, and most days and weeks will pass completely without disruption.

Rest assured: we are working continuously to restore the usual dataforest quality, and we thank you for your understanding, the motivating words, and your loyalty. A detailed blog post about our network expansion was already planned and will now appear a bit earlier.

Over the coming months, various network maintenance activities will be required, some of which may be scheduled with short notice. We will keep you informed via our status page. This status post will also remain open even if we do not expect or observe any (negative) impact over longer periods of time. Please make sure to contact us as early as possible if you notice any unexpected issues lasting longer than a few seconds.

Media reports:
Aisuru’s 30 Tbps botnet traffic crashes through major US ISPs
Another record-breaking DDoS? Aisuru botnet suspected behind 29.69 Tbps gaming outages
US ISP-hosted IoT devices fuel Aisuru DDoS botnet
DDoS Botnet Aisuru Blankets US ISPs in Record DDoS
Researchers Warn RondoDox Botnet is Weaponizing Over 50 Flaws Across 30+ Vendors

Oct 15, 2025 - 01:15 CEST
Component status (uptime over the past 90 days):

[dataforest Backbone] Interxion FRA8 - Operational - 99.99 % uptime
  Edge Network (affects all locations) - Operational - 99.97 % uptime
  DDoS Protection (affects all locations) - Operational - 100.0 % uptime
  Dedicated Servers (40G / 100G / 400G) - Operational - 100.0 % uptime
  Facilities - Operational - 100.0 % uptime

[Datacenter] maincubes FRA01 - Operational - 99.99 % uptime
  Datacenter Routing and Switching Infrastructure - Operational - 100.0 % uptime
  Ceph Cluster - Operational - 100.0 % uptime
  Virtual Servers - Operational - 99.98 % uptime
  Managed Servers - Operational - 100.0 % uptime
  Dedicated Servers - Operational - 100.0 % uptime
  Plesk Web Hosting - Operational - 100.0 % uptime
  TeamSpeak3 Servers - Operational - 99.99 % uptime
  Network Storage - Operational - 100.0 % uptime
  Colocation Racks - Operational - 100.0 % uptime
  Facilities - Operational - 100.0 % uptime

[Datacenter] firstcolo FRA4 - Operational - 99.99 % uptime
  Datacenter Routing and Switching Infrastructure - Operational - 99.99 % uptime
  Dedicated Servers - Operational - 99.99 % uptime
  Colocation Racks - Operational - 100.0 % uptime
  Facilities - Operational - 100.0 % uptime

General Services - Operational - 99.99 % uptime
  Avoro CP & Support - Operational - 100.0 % uptime
  PHP-Friends CRM & Support - Operational - 99.99 % uptime
  Hotline - Operational - 100.0 % uptime
  Dedicated Control Panel - Operational - 100.0 % uptime
  IPMI VPN - Operational - 100.0 % uptime
  DDoS Manager - Operational - 99.99 % uptime

Scheduled Maintenance

Network Maintenance Notification Jan 17, 2026 00:00-06:00 CET

To finalize the integration of our new dataforest PoP (Point of Presence) at Equinix, which increases the redundancy and capacity of our network, we will perform network maintenance to apply prepared configuration changes that have been thoroughly tested in our lab environment.

No customer impact is expected at any datacenter, but it cannot be ruled out entirely. A residual risk remains. In the unlikely event of a relevant impact, we will provide updates via this status post.

As part of this maintenance work, we will complete follow-up improvements outlined in our blog post Post-mortem: Network outage on 23.10.2025 for our maincubes FRA01 location. With this maintenance, the main preparations for the full go-live of our Equinix PoP will be completed, as we have finished migrating all other dataforest locations to our new MPLS and ECMP environment.

Our status page layout will be updated shortly to reflect the new network structure.

Posted on Jan 14, 2026 - 00:05 CET
Jan 15, 2026

No incidents reported today.

Jan 14, 2026
Resolved - Yesterday, some physical links at our Equinix PoP flapped. While these flaps caused no customer impact, they led to an imbalance between internal links connecting our new Equinix PoP and our core PoP at Interxion. Increased bandwidth usage during the day resulted in critical utilization on one port, which required immediate correction.

During the initial corrective change, one port at our firstcolo FRA4 location became saturated at 14:03, causing increased latency for some customers (without packet loss). At 14:15, a further adjustment was applied, successfully and permanently resolving the issue.

Additional configuration changes were implemented over the following hours to prevent similar imbalances within our new MPLS environment in the event of link flaps.
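
For context, an imbalance of this kind shows up directly in per-link utilization derived from interface byte counters. The short Python sketch below is illustrative only: the link names, counter values, link speed, and the 80 % threshold are invented, and real numbers would come from the devices' telemetry:

LINK_SPEED_BPS = 100e9  # assumed 100G links

def utilization(prev_bytes, curr_bytes, interval_s, speed_bps=LINK_SPEED_BPS):
    """Average utilization of a link between two byte-counter samples (0.0 - 1.0)."""
    return ((curr_bytes - prev_bytes) * 8) / (interval_s * speed_bps)

# Two counter samples taken 60 s apart (fabricated numbers for illustration).
prev = {"eqx-link-1": 1.20e12, "eqx-link-2": 1.18e12}
curr = {"eqx-link-1": 1.92e12, "eqx-link-2": 1.25e12}

for link in sorted(prev):
    u = utilization(prev[link], curr[link], 60)
    flag = "  <-- critical" if u > 0.8 else ""
    print(f"{link}: {u:.0%}{flag}")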

We are still investigating the root cause of the link flaps at Equinix. As redundancy is fully in place and the balancing issue has been resolved, we do not expect any further impact.

Jan 14, 19:33 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Jan 14, 14:17 CET
Investigating - We are currently experiencing increased latency at our firstcolo FRA4 location in Frankfurt. There is no packet loss.
Jan 14, 14:04 CET
Jan 13, 2026
Completed - The scheduled maintenance has been completed.
Jan 13, 03:39 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jan 13, 01:00 CET
Scheduled - To continue the integration of our new dataforest PoP (Point of Presence) at Equinix, which increases the redundancy and capacity of our network, we will perform network maintenance at our firstcolo location, which hosts some of our dedicated servers and colocation racks. If your dedicated server does not have "fc-fra4" in its name, it's not operated at the affected datacenter.

No customer impact is expected, but it cannot be ruled out entirely. A residual risk remains. In the unlikely event of relevant impact, we will provide updates via this status post.

As part of this maintenance work, we will complete follow-up improvements outlined in our blog post Post-mortem: Network outage on 23.10.2025. We will drain the MPLS links to Equinix (already in production since the last maintenance) in a controlled manner and verify that all redundancy mechanisms work as intended. Afterwards, we will perform a software update at Equinix.

100G Business Dedicated Server customers directly affected at our Equinix location have been informed in advance. Our status page layout will be updated shortly to reflect the new network structure.

Jan 10, 23:09 CET
Jan 12, 2026

No incidents reported.

Jan 11, 2026

No incidents reported.

Jan 10, 2026

No incidents reported.

Jan 9, 2026

No incidents reported.

Jan 8, 2026

No incidents reported.

Jan 7, 2026
Resolved - All VMs are up.
Jan 7, 22:29 CET
Update - epyc9374f-5 is back online. The BIOS update was completed successfully. The first VMs are already up, and the remaining VMs are starting one after another automatically. Please do not intervene manually. It may take up to 15 minutes until the last VM is running again.
Jan 7, 22:23 CET
Monitoring - The affected TS3 instances were booted on replacement hardware and are available again.
Other VMs on the host epyc9374f-5 will remain affected for a bit longer while we apply a BIOS update. This host contains some VMs for our Avoro Power Rootservers. The update mitigates an urgent CPU issue that caused the outage.

Jan 7, 22:03 CET
Identified - The issue has been identified and a fix is being implemented.
You can check in our control panels if you have products on the affected hosts.

Jan 7, 21:45 CET
Jan 6, 2026

No incidents reported.

Jan 5, 2026

No incidents reported.

Jan 4, 2026

No incidents reported.

Jan 3, 2026

No incidents reported.

Jan 2, 2026
Resolved - All VMs have been up for 10 minutes and the host system appears to be stable again.
Jan 2, 03:11 CET
Monitoring - The affected host is currently in recovery and all VMs are starting. We will continue to monitor the situation.
Jan 2, 02:54 CET
Investigating - We are currently experiencing an outage with the vServer node epyc7513-4. Our team is actively working to resolve this issue.
You can check in the control panel if you have VMs on the affected host.

Jan 2, 02:46 CET
Completed - The scheduled maintenance has been completed.
Jan 2, 00:00 CET
In progress -
Dec 24, 09:00 CET
Scheduled - 🎄✨ We wish you relaxing, peaceful holidays and a Merry Christmas! Even between the holidays, we’ve got you covered — our emergency support and on-call service remain available 24/7 as usual. Otherwise, the following applies:

• Dec 24–26: Handling of incidents & emergencies
• Dec 27–30: Regular support operations with full staffing (holiday backlog possible)
• Dec 31–Jan 01: Incidents & emergencies only
• From Jan 02: Normal operations

This year, we expanded our support availability to Monday–Sunday, 9:00–00:00. Our regular support is now staffed seven days a week. Thank you for your trust, and have a great start to the new year!

Dec 24, 01:57 CET
Jan 1, 2026

No incidents reported.