Update - Two days ago, we published a post-mortem of the October 23, 2025 network outage on our blog. The article also discusses the network upgrades announced here on our status page, in particular the addition of another PoP (Point of Presence).

German version: dataforest.net/blog/post-mortem-23-10-25
English version: dataforest.net/en/blog/post-mortem-23-10-25

Dec 12, 2025 - 01:33 CET
Update - Between 13:26 and 13:27 CET, packet loss occurred in our network due to a changed attack vector. The situation has been stable since then, and we have implemented additional countermeasures in the meantime.
Nov 02, 2025 - 14:35 CET
Update - The situation has remained stable for over a week with no disruptions or packet loss, despite ongoing daily attacks. This incident will remain open until the post-mortem for last week’s outage has been published.
Nov 02, 2025 - 10:56 CET
Update - Connectivity was restored at 14:00 CEST; BGP sessions have been re-establishing since then.

A software bug in our aggregation switches led to a major outage when we committed a (regular) change in our edge network related to the DDoS attacks. Layer 2 connectivity was lost completely and could only be restored by rebooting the devices. A post-mortem will follow.

Oct 23, 2025 - 14:03 CEST
Update - We have been seeing a major outage for the past minute.
Oct 23, 2025 - 13:35 CEST
Monitoring - For several days, parts of our network have experienced occasional packet loss and increased latency, usually lasting a few seconds and at most about a minute. Some days it happens two or three times; other days it does not happen at all. The cause is ongoing DDoS attacks driven by the "Aisuru" botnet, which is pushing many networks worldwide to their capacity limits. Alongside us, many other large cloud and hosting providers are affected, including some with significantly larger networks. The trade-press reports linked below estimate the botnet's capacity at 20-30 Tbps. Because the attacks target completely different customers, and regularly our entire network at once ("carpet bombing"), at record bandwidths, effective filtering is very complex.
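
To see why carpet bombing evades classic mitigation: no single destination crosses a per-host threshold, so detection has to aggregate traffic across a whole prefix. Below is a minimal sketch of that idea in Python; the thresholds and flow records are made up for illustration and are not our actual detection pipeline.

```python
from collections import defaultdict

# Hypothetical thresholds; real values depend on link capacity and baselines.
PER_HOST_MBPS = 500      # classic per-destination trigger
PER_PREFIX_MBPS = 5000   # trigger on the aggregate of an entire /24

def detect(flows):
    """flows: iterable of (dst_ip, mbps) samples collected at the edge."""
    per_prefix = defaultdict(float)
    per_host_hits = []
    for dst_ip, mbps in flows:
        if mbps > PER_HOST_MBPS:
            per_host_hits.append(dst_ip)
        # Bucket by /24: drop the last octet of the dotted-quad address.
        per_prefix[dst_ip.rsplit(".", 1)[0] + ".0/24"] += mbps
    prefix_hits = [p for p, total in per_prefix.items() if total > PER_PREFIX_MBPS]
    return per_host_hits, prefix_hits

# Carpet-bombing pattern: 20 Mbps to each of 254 hosts in a single /24.
flows = [(f"203.0.113.{h}", 20.0) for h in range(1, 255)]
hosts, prefixes = detect(flows)
print(hosts)     # [] - no single destination crosses the per-host threshold
print(prefixes)  # ['203.0.113.0/24'] - the prefix-level aggregate does
```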

Your data is not at risk: the attacks aim solely to overload servers and network infrastructure.

Since the first attacks, we have been working almost 24/7 on various countermeasures. We have improved our filtering and detection mechanisms, expanded capacity, and used traffic engineering to distribute attacks across our upstreams to prevent overload on individual uplinks. We are also working closely with our transit providers on pre-filtering and early detection.
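
As a rough illustration of the traffic-engineering goal (spreading attack load so that no single uplink saturates), here is a hedged sketch of the kind of balancing decision involved. The upstream names and capacities are invented, and in practice this is achieved through BGP announcements toward our transits rather than a script.

```python
# Illustrative only: greedily place attack flows on whichever upstream has the
# most remaining headroom - the effect our BGP traffic engineering aims for.
def spread_attack(capacity_gbps, attack_gbps):
    load = {u: 0.0 for u in capacity_gbps}
    assignment = {}
    for prefix, gbps in sorted(attack_gbps.items(), key=lambda kv: kv[1], reverse=True):
        best = max(load, key=lambda u: capacity_gbps[u] - load[u])  # most headroom
        load[best] += gbps
        assignment[prefix] = best
    return assignment, load

# Invented upstream names and sizes, for illustration only.
upstreams = {"transit-a": 400, "transit-b": 400, "transit-c": 100}
attacks = {"198.51.100.0/24": 220, "203.0.113.0/24": 180, "192.0.2.0/24": 90}
assignment, load = spread_attack(upstreams, attacks)
print(assignment)  # each target prefix steered to the emptiest uplink in turn
print(load)        # no single uplink is pushed past its capacity
```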

These measures are now having an effect – the vast majority of attacks are fully filtered and no longer cause damage. However, it is unavoidable that new measures must occasionally be deployed. During that time, latency-sensitive applications like game servers may experience issues. For TCP-based applications like web and mail servers, outages are usually not observed because these applications are not sensitive to short packet loss or latency spikes.
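
A toy model makes the difference concrete: during a brief loss episode, TCP-style delivery retransmits and merely arrives late, while UDP-style game updates are simply gone. All numbers below are illustrative.

```python
# Toy model: a 2-second loss episode within a 60-second stream of 20 ms ticks.
TICK_MS = 20
LOSS_START_MS, LOSS_END_MS = 10_000, 12_000  # assumed 2 s episode
RETRANSMIT_MS = 200                          # rough retransmission delay

udp_lost = tcp_delayed = 0
for t in range(0, 60_000, TICK_MS):
    if LOSS_START_MS <= t < LOSS_END_MS:
        udp_lost += 1      # a game-state update that is simply gone
        tcp_delayed += 1   # the same bytes arrive ~RETRANSMIT_MS later instead

total = 60_000 // TICK_MS
print(f"UDP: {udp_lost}/{total} updates lost -> visible stutter on a game server")
print(f"TCP: 0/{total} bytes lost, {tcp_delayed} segments arrive ~{RETRANSMIT_MS} ms late")
```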

We expect this situation to continue for at least a few more days until all measures and network expansions are in place. Unfortunately, upgrades in the terabit range and the provisioning of cross-connects and fiber routes come with lead times we cannot circumvent. We are using this “waiting period” to implement other measures, prepare the planned upgrades, and bring new network hardware online ahead of schedule.

Rest assured: we are working continuously to restore the usual dataforest quality, and we thank you for your understanding, the motivating words, and your loyalty. A detailed blog post about our network expansion was already planned and will now appear a bit earlier.

In the coming days, urgent network maintenance may be necessary with short notice. We will keep you updated in this status post, which we will keep open until further notice, even if we do not expect any (negative) impact. Please contact us early if you notice unexpected issues that last longer than a few seconds.

Media reports:
Aisuru’s 30 Tbps botnet traffic crashes through major US ISPs
Another record-breaking DDoS? Aisuru botnet suspected behind 29.69 Tbps gaming outages
US ISP-hosted IoT devices fuel Aisuru DDoS botnet
DDoS Botnet Aisuru Blankets US ISPs in Record DDoS
Researchers Warn RondoDox Botnet is Weaponizing Over 50 Flaws Across 30+ Vendors

Oct 15, 2025 - 01:15 CEST
Current component status (uptime over the past 90 days):

[dataforest Backbone] Interxion FRA8: Operational (99.99 % uptime)
  Edge Network (affects all locations): Operational (99.97 % uptime)
  DDoS Protection (affects all locations): Operational (100.0 % uptime)
  Dedicated Servers (40G / 100G / 400G): Operational (100.0 % uptime)
  Facilities: Operational (100.0 % uptime)

[Datacenter] maincubes FRA01: Operational (100.0 % uptime)
  Datacenter Routing and Switching Infrastructure: Operational (100.0 % uptime)
  Ceph Cluster: Operational (100.0 % uptime)
  Virtual Servers: Operational (100.0 % uptime)
  Managed Servers: Operational (100.0 % uptime)
  Dedicated Servers: Operational (100.0 % uptime)
  Plesk Web Hosting: Operational (100.0 % uptime)
  TeamSpeak3 Servers: Operational (100.0 % uptime)
  Network Storage: Operational (100.0 % uptime)
  Colocation Racks: Operational (100.0 % uptime)
  Facilities: Operational (100.0 % uptime)

[Datacenter] firstcolo FRA4: Operational (99.99 % uptime)
  Datacenter Routing and Switching Infrastructure: Operational (100.0 % uptime)
  Dedicated Servers: Operational (99.97 % uptime)
  Colocation Racks: Operational (100.0 % uptime)
  Facilities: Operational (100.0 % uptime)

General Services: Operational (100.0 % uptime)
  Avoro CP & Support: Operational (100.0 % uptime)
  PHP-Friends CRM & Support: Operational (100.0 % uptime)
  Hotline: Operational (100.0 % uptime)
  Cloud Control Panel: Operational (100.0 % uptime)
  Dedicated Control Panel: Operational (100.0 % uptime)
  IPMI VPN: Operational (100.0 % uptime)
  DDoS Manager: Operational (100.0 % uptime)

Scheduled Maintenance

Maintenance - vServer rack redundancy restoration, Dec 17, 2025 23:00 - Dec 18, 2025 00:00 CET

We will be performing essential maintenance on one of our vServer racks to resolve an issue with its redundancy. During the maintenance window, brief interruptions to network connectivity may occur.

Affected hosts:

Avoro
epyc9374f-4
epyc9374f-5
epyc9374f-6
epyc9374f-7
epyc9374f-8
epyc9754-3
epyc9754-4
vnode001 (Legacy)
vnode003 (Legacy)
vnode004 (Legacy)

PHP-Friends
vnode30

Customers can check the respective control panel to see which of their products run on these hosts.

We will make every effort to complete the maintenance as quickly and smoothly as possible. Thank you for your understanding and continued trust!

Posted on Dec 11, 2025 - 23:53 CET
Dec 13, 2025

No incidents reported today.

Dec 12, 2025

Unresolved incident: Update on Sporadic Packet Loss Issues.

Dec 11, 2025

No incidents reported.

Dec 10, 2025

No incidents reported.

Dec 9, 2025

No incidents reported.

Dec 8, 2025

No incidents reported.

Dec 7, 2025

No incidents reported.

Dec 6, 2025

No incidents reported.

Dec 5, 2025

No incidents reported.

Dec 4, 2025

No incidents reported.

Dec 3, 2025
Completed - The planned reinstallation of the full configuration completed successfully and restored connectivity with all upstreams. No outage occurred at any time before or during the maintenance. After the successful commit, we performed step-by-step debugging on the router and were able to pinpoint the exact trigger of the bug, which we will report to Juniper together with our findings.
Dec 3, 01:42 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Dec 3, 00:30 CET
Scheduled - Please note that there is currently no impact on our overall network availability.

We have encountered a critical Junos software bug on one of our Juniper MX routers, which took multiple upstream sessions offline. All traffic is currently handled by the remaining upstreams, and full redundancy is preserved. There was no traffic disruption or outage when the bug occurred, and all services are operating normally.

To stabilise the router, we need to perform emergency maintenance on short notice. During this maintenance, we will re-install the router's configuration in a controlled manner, which is expected to resolve the issue.
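
For readers curious what "controlled manner" means here: Junos offers a commit-confirmed mechanism that automatically rolls back a committed configuration unless the operator confirms it within a timeout. Below is a generic Python sketch of that rollback-unless-confirmed pattern; the Router class and timings are hypothetical, not our actual tooling.

```python
import threading

# Generic sketch of the rollback-unless-confirmed idea behind Junos'
# "commit confirmed". The Router class is hypothetical, not our tooling.
class Router:
    def __init__(self):
        self.active = "known-good configuration"

    def commit_confirmed(self, candidate, timeout_s):
        previous = self.active
        self.active = candidate  # candidate config goes live immediately
        # Revert automatically unless the operator confirms within timeout_s.
        timer = threading.Timer(timeout_s, lambda: setattr(self, "active", previous))
        timer.start()
        return timer

r = Router()
rollback = r.commit_confirmed("re-installed full configuration", timeout_s=600)
# ... verify BGP sessions, traffic levels and alarms during the window ...
rollback.cancel()  # all checks passed: keep the new configuration
print(r.active)
```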

We will carry out this work as cautiously as possible and do not plan for any downtime as part of the maintenance. However, depending on the router's behaviour and any additional steps that may become necessary, we cannot fully rule out a temporary loss of connectivity.

We will update this announcement as soon as new information becomes available.

Customers with BGP sessions to dataforest are advised, as a precaution, to shift their traffic away from our network before the maintenance window begins.

Dec 2, 14:51 CET
Dec 2, 2025

No incidents reported.

Dec 1, 2025

No incidents reported.

Nov 30, 2025

No incidents reported.

Nov 29, 2025

No incidents reported.