All Systems Operational

Uptime figures below cover the last 90 days.

[dataforest Backbone] Interxion FRA8: Operational (100.0 % uptime)
- Edge Network (affects all locations): Operational (100.0 % uptime)
- DDoS Protection (affects all locations): Operational (100.0 % uptime)
- Dedicated Servers (40G / 100G / 400G): Operational (100.0 % uptime)
- Facilities: Operational (100.0 % uptime)

[Datacenter] maincubes FRA01: Operational (99.99 % uptime)
- Datacenter Routing and Switching Infrastructure: Operational (100.0 % uptime)
- Ceph Cluster: Operational (100.0 % uptime)
- Virtual Servers: Operational (99.98 % uptime)
- Managed Servers: Operational (100.0 % uptime)
- Dedicated Servers: Operational (100.0 % uptime)
- Plesk Web Hosting: Operational (100.0 % uptime)
- TeamSpeak3 Servers: Operational (99.99 % uptime)
- Network Storage: Operational (100.0 % uptime)
- Colocation Racks: Operational (100.0 % uptime)
- Facilities: Operational (100.0 % uptime)

[Datacenter] firstcolo FRA4: Operational (99.99 % uptime)
- Datacenter Routing and Switching Infrastructure: Operational (99.99 % uptime)
- Dedicated Servers: Operational (100.0 % uptime)
- Colocation Racks: Operational (100.0 % uptime)
- Facilities: Operational (100.0 % uptime)

General Services: Operational (99.99 % uptime)
- Avoro CP & Support: Operational (100.0 % uptime)
- PHP-Friends CRM & Support: Operational (99.99 % uptime)
- Hotline: Operational (100.0 % uptime)
- Dedicated Control Panel: Operational (100.0 % uptime)
- IPMI VPN: Operational (100.0 % uptime)
- DDoS Manager: Operational (99.99 % uptime)
Feb 4, 2026

No incidents reported today.

Feb 3, 2026
Completed - Maintenance on racks FC-FRA4-X10 and FC-FRA4-X11 was completed successfully.
The outage at firstcolo FRA4 occurred during maintenance on racks FC-FRA4-X12 and FC-FRA4-X13. To avoid further disruption tonight, we are cancelling tonight's maintenance window for these racks and for our maincubes FRA01 location. We will reschedule once we have fully analyzed the root cause and understood why the impact spread across nearly the entire firstcolo FRA4 core switching infrastructure.

Feb 3, 01:17 CET
Verifying - At 00:25:20, we applied a planned change as part of today’s maintenance at our firstcolo FRA4 location. The change immediately caused severe packet loss, impacting a large number of customers. The incident was escalated internally at 00:27. At 00:31:44, we rolled back the configuration change, which restored connectivity for affected customers. We are currently analyzing the root cause and will provide an update once confirmed. We apologize for the disruption.
Feb 3, 00:37 CET
Update - We are seeing network issues at our firstcolo FRA4 location related to this maintenance and are investigating them at the moment.
Feb 3, 00:31 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 3, 00:00 CET
Scheduled - We will be performing planned maintenance on the network connectivity of six dedicated server racks at maincubes FRA01 and firstcolo FRA4.

What we are doing
We will interconnect each rack pair via MLAG (Multi-Chassis Link Aggregation) between the respective top-of-rack switches. This is an improvement to further increase resilience and operational flexibility.

Expected impact
During the maintenance window, there will be one brief network interruption per affected rack pair, expected to be up to ~30 seconds (link flap / LACP re-negotiation). Traffic will recover automatically; no customer action is required.
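If you want to verify the impact on your own systems, a minimal monitoring sketch along these lines can log how long the interruption actually lasts. This is an illustrative example only, not an official dataforest tool; HOST and PORT are placeholders for your own server and a TCP port that is normally reachable.

    # Probe a server once per second over TCP and log the length of any interruption,
    # to confirm it stays within the expected ~30 seconds.
    import socket
    import time
    from datetime import datetime

    HOST = "203.0.113.10"   # placeholder: your server's address
    PORT = 22               # placeholder: any TCP port that is normally reachable
    TIMEOUT = 2.0           # seconds per probe

    def reachable(host, port, timeout):
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    down_since = None
    while True:
        now = datetime.now()
        if reachable(HOST, PORT, TIMEOUT):
            if down_since is not None:
                outage = (now - down_since).total_seconds()
                print(f"{now:%H:%M:%S} reachable again after {outage:.0f} s")
                down_since = None
        elif down_since is None:
            down_since = now
            print(f"{now:%H:%M:%S} connection lost")
        time.sleep(1)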

Affected rack pairs

- maincubes FRA01: MC-FRA01-AZ17 + MC-FRA01-AZ18
- firstcolo FRA4: FC-FRA4-X10 + FC-FRA4-X11
- firstcolo FRA4: FC-FRA4-X12 + FC-FRA4-X13

How to check if you are affected
In our control panel at https://dedicated.dataforest.cloud, open the server and check "Statistics". If one of your ports shows a description containing one of the rack identifiers listed above, your server is part of the maintenance scope.
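If you prefer to script that check, a small sketch like the following compares port descriptions against the affected rack identifiers. The example descriptions are hypothetical; paste the values shown under "Statistics" for your own ports.

    # Report whether any port description mentions one of the affected racks.
    AFFECTED_RACKS = [
        "MC-FRA01-AZ17", "MC-FRA01-AZ18",
        "FC-FRA4-X10", "FC-FRA4-X11",
        "FC-FRA4-X12", "FC-FRA4-X13",
    ]

    # Hypothetical example values; replace with your own port descriptions.
    port_descriptions = [
        "uplink sw01 FC-FRA4-X10 port 17",
        "uplink sw02 MC-FRA01-AZ03 port 5",
    ]

    for desc in port_descriptions:
        hits = [rack for rack in AFFECTED_RACKS if rack in desc]
        if hits:
            print(f"affected ({', '.join(hits)}): {desc}")
        else:
            print(f"not affected: {desc}")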

We will complete the work as quickly as possible and aim to keep the interruption to the minimum stated above. Thank you for your understanding.

Jan 27, 15:53 CET
Feb 2, 2026

No incidents reported.

Feb 1, 2026

No incidents reported.

Jan 31, 2026

No incidents reported.

Jan 30, 2026
Resolved - The situation has been stable for many weeks. The network modifications and upgrades described in the blog have largely been implemented, as you may have already noticed from the latest maintenance work. We expect to complete the remaining work (as planned) by the end of February.
To keep the status page clear, we are closing this entry. We do not expect any further relevant issues.

Jan 30, 23:22 CET
Update - 193.46.81.0/24 is reachable again.
Dec 28, 00:32 CET
Update - 193.46.81.0/24 is currently unreachable, we are working on it.
Dec 28, 00:09 CET
Update - Yesterday, we received customer reports indicating that HTTPS connections to and from the United States – particularly involving destinations geographically close to Chicago – experienced intermittent issues for a few hours.

Based on our analysis, the scope was very limited: Most customers reported alerts from external monitoring probes located in the US, while their real visitors and production traffic were unaffected. In a smaller number of cases, specific endpoints (e.g., Docker registries hosted in the US) could not be reached reliably, again without broader customer-facing impact. Only IPv4 traffic was affected.
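An IPv4-only symptom like this can be confirmed by attempting the same connection over both address families. The sketch below is an illustration only, not a tool we provide; the hostname is a placeholder for whichever endpoint you are testing.

    # Compare IPv4 vs. IPv6 reachability of an endpoint on port 443.
    import socket

    HOST = "registry.example.com"   # placeholder endpoint
    PORT = 443

    def check(family, label):
        """Try a TCP connection restricted to one address family."""
        try:
            infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
        except socket.gaierror as exc:
            print(f"{label}: no address ({exc})")
            return
        addr = infos[0][4]
        try:
            with socket.socket(family, socket.SOCK_STREAM) as s:
                s.settimeout(5)
                s.connect(addr)
            print(f"{label}: connected to {addr[0]}")
        except OSError as exc:
            print(f"{label}: failed ({exc})")

    check(socket.AF_INET, "IPv4")
    check(socket.AF_INET6, "IPv6")

If only the IPv4 attempt fails while IPv6 succeeds, the behavior matches what was reported here.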

As soon as the first reports came in, we began an immediate analysis and traced the behavior to the activation of additional network protection measures. To minimize further impact, we reverted these changes and continued debugging during reactivation.

We implemented a corrective change at the start of tonight’s maintenance window announced under https://status.dataforest.net/incidents/tbtvg8bg6ct6, which resolved the issue permanently.

Dec 27, 05:25 CET
Update - Two days ago, we published a post-mortem on our blog for the October 23, 2025 network outage. The article also discusses the network upgrades announced here on our status page, in particular the addition of another PoP (Point of Presence).

German version: dataforest.net/blog/post-mortem-23-10-25
English version: dataforest.net/en/blog/post-mortem-23-10-25

Dec 12, 01:33 CET
Update - Between 13:26 and 13:27 (CET), packet loss occurred in our network due to a changed attack vector. The situation has been stable since then, and additional measures have already been implemented in the meantime.
Nov 2, 14:35 CET
Update - The situation has remained stable for over a week with no disruptions or packet loss, despite ongoing daily attacks. This incident will remain open until the post-mortem for last week’s outage has been published.
Nov 2, 10:56 CET
Update - Connectivity was restored at 14:00 CEST; BGP sessions have been re-establishing since then.

A software bug in our aggregation switches led to a major outage when we committed a (regular) change in our edge network related to the DDoS attacks. Layer 2 connectivity was completely lost and could only be restored by rebooting the devices. A post-mortem will follow.

Oct 23, 14:03 CEST
Update - We have been seeing a major outage for the past minute.
Oct 23, 13:35 CEST
Monitoring - For several days, parts of our network have experienced occasional packet loss and increased latency, usually lasting a few seconds, sometimes up to a minute. Some days it happens two or three times; other days it does not happen at all. The cause is ongoing DDoS attacks driven by the “Aisuru” botnet, which is pushing many networks worldwide to their capacity limits. Alongside us, many other large cloud and hosting providers are affected, including some with significantly larger networks. Trade press reports, which we link below, estimate the botnet’s capacity at 20-30 Tbps. Because the attacks target completely different customers and regularly hit our entire network (“carpet bombing”) at record bandwidths, effective filtering is very complex.

Your data is not at risk: the attacks aim solely to overload servers and network infrastructure.

Since the first attacks, we have been working almost 24/7 on various countermeasures. We have improved our filtering and detection mechanisms, expanded capacity, and used traffic engineering to distribute attacks across our upstreams to prevent overload on individual uplinks. We are also working closely with our transit providers on pre-filtering and early detection.

These measures are now having an effect – the vast majority of attacks are fully filtered and no longer cause damage. However, it is unavoidable that new measures must occasionally be deployed. During that time, latency-sensitive applications like game servers may experience issues. For TCP-based applications like web and mail servers, outages are usually not observed because these applications are not sensitive to short packet loss or latency spikes.

We would like to communicate transparently that it will take several months to fully implement all measures and network expansions. Unfortunately, upgrades in the terabit range, as well as the provisioning of cross-connects and dark fibers, involve lead times that we cannot avoid. We use these unavoidable “waiting periods” to implement other measures, prepare the planned upgrades, and bring new network hardware into operation ahead of schedule, so that you will generally not notice any issues. The situation improves on a daily basis, and most days and weeks will pass completely without disruption.

Rest assured: we are working continuously to restore the usual dataforest quality, and we thank you for your understanding, the motivating words, and your loyalty. A detailed blog post about our network expansion was already planned and will now appear a bit earlier.

Over the coming months, various network maintenance activities will be required, some of which may be scheduled with short notice. We will keep you informed via our status page. This status post will also remain open even if we do not expect or observe any (negative) impact over longer periods of time. Please make sure to contact us as early as possible if you notice any unexpected issues lasting longer than a few seconds.

Media reports:
Aisuru’s 30 Tbps botnet traffic crashes through major US ISPs
Another record-breaking DDoS? Aisuru botnet suspected behind 29.69 Tbps gaming outages
US ISP-hosted IoT devices fuel Aisuru DDoS botnet
DDoS Botnet Aisuru Blankets US ISPs in Record DDoS
Researchers Warn RondoDox Botnet is Weaponizing Over 50 Flaws Across 30+ Vendors

Oct 15, 01:15 CEST
Jan 29, 2026

No incidents reported.

Jan 28, 2026

No incidents reported.

Jan 27, 2026
Resolved - Recovery of first VM: 22:19
Recovery of last VM: 22:25

Jan 27, 22:27 CET
Monitoring - A defective RAM module caused the host system vps-018 to fail. The host is accessible again and is currently starting up all VMs, most of which are already back online.
Jan 27, 22:21 CET
Investigating - We are currently investigating this issue.
You can check in our control panels whether you have products on the affected hosts.

Jan 27, 22:11 CET
Jan 26, 2026

No incidents reported.

Jan 25, 2026

No incidents reported.

Jan 24, 2026

No incidents reported.

Jan 23, 2026

No incidents reported.

Jan 22, 2026

No incidents reported.

Jan 21, 2026

No incidents reported.