Update - All servers are powered on again. Should you still see any issues with your server, please check it via IPMI (a minimal example is sketched below) before reaching out to our support.

The technicians will replace the defective PDU during the day, after which we will need to move the servers back to the correct feed. We will keep you updated.
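If you would like to verify the power state yourself, the IPMI check mentioned above can be scripted. The following is a minimal sketch assuming ipmitool is installed; the IPMI address and credentials are placeholders for your own server's values.

# Minimal sketch: query the chassis power state of a server over IPMI.
# The host, username, and password are placeholders; replace them with
# the IPMI details of your own server.
import subprocess

IPMI_HOST = "192.0.2.10"   # placeholder IPMI address
IPMI_USER = "admin"        # placeholder username
IPMI_PASS = "secret"       # placeholder password

result = subprocess.run(
    ["ipmitool", "-I", "lanplus", "-H", IPMI_HOST,
     "-U", IPMI_USER, "-P", IPMI_PASS, "chassis", "power", "status"],
    capture_output=True, text=True, check=False,
)
print(result.stdout.strip() or result.stderr.strip())
# A healthy server reports: "Chassis Power is on"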

Oct 17, 2025 - 03:18 CEST
Monitoring - Affected servers are currently booting; the first ones are already up.
Oct 17, 2025 - 03:02 CEST
Update - We unplugged all servers from the affected PDU, reinserted the fuse, and reconnected the servers one by one to identify which system caused the fuse to trip. Since none of the servers powered on, it became clear that the PDU itself is faulty. We are currently moving the servers to another feed so they can boot.

We are also aware that this is the second time this has happened. We are already in contact with the datacenter and the manufacturer to permanently resolve the issue.

Oct 17, 2025 - 02:48 CEST
Identified - One of the power feeds in X10 (containing only dedicated servers) has failed, while the other remains operational. Servers equipped with redundant power supplies are unaffected. Systems with only a single PSU may be impacted, depending on which feed they are connected to, resulting in roughly a 50% chance of downtime.
If your server is currently unreachable, please rest assured that we are already addressing the issue. There is no need to contact our support team.

Oct 17, 2025 - 02:07 CEST
Monitoring - For several days, parts of our network have experienced occasional packet loss and increased latency, usually lasting a few seconds and sometimes up to a minute. On some days this happens two or three times; on other days it does not happen at all. The cause is ongoing DDoS attacks driven by the “Aisuru” botnet, which is pushing many networks worldwide to their capacity limits. Alongside us, many other large cloud and hosting providers are affected, including some with significantly larger networks. Trade press reports, linked below, estimate the botnet’s capacity at 20-30 Tbps. Because the attacks target completely different customers and regularly hit our entire network (“carpet bombing”) at record bandwidths, effective filtering is very complex.

Your data is not at risk: the attacks aim solely to overload servers and network infrastructure.

Since the first attacks, we have been working almost 24/7 on various countermeasures. We have improved our filtering and detection mechanisms, expanded capacity, and used traffic engineering to distribute attacks across our upstreams to prevent overload on individual uplinks. We are also working closely with our transit providers on pre-filtering and early detection.

These measures are now having an effect – the vast majority of attacks are fully filtered and no longer cause damage. However, it is unavoidable that new measures must occasionally be deployed. During that time, latency-sensitive applications like game servers may experience issues. For TCP-based applications like web and mail servers, outages are usually not observed because these applications are not sensitive to short packet loss or latency spikes.

We expect this situation to continue for at least a few more days until all measures and network expansions are in place. Unfortunately, upgrades in the terabit range and the provisioning of cross-connects and fiber routes come with lead times we cannot circumvent. We are using this “waiting period” to implement other measures, prepare the planned upgrades, and bring new network hardware online ahead of schedule.

Rest assured: we are working continuously to restore the usual dataforest quality, and we thank you for your understanding, the motivating words, and your loyalty. A detailed blog post about our network expansion was already planned and will now appear a bit earlier.

In the coming days, urgent network maintenance may be necessary with short notice. We will keep you updated in this status post, which we will keep open until further notice, even if we do not expect any (negative) impact. Please contact us early if you notice unexpected issues that last longer than a few seconds.
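If you want to keep an eye on this yourself, a simple probe that logs short connection-time spikes can help tell brief mitigation changeovers apart from longer problems. The following is a minimal sketch, not our monitoring; the target host, port, and threshold are placeholders.

# Minimal sketch: measure TCP connect times once per second and flag
# spikes or failures. Target and threshold are placeholders.
import socket
import time
from datetime import datetime

TARGET = ("example.org", 443)   # replace with your own server
THRESHOLD = 0.2                 # seconds; flag connects slower than this

while True:
    start = time.monotonic()
    try:
        with socket.create_connection(TARGET, timeout=5):
            elapsed = time.monotonic() - start
        if elapsed > THRESHOLD:
            print(f"{datetime.now():%H:%M:%S} slow connect: {elapsed * 1000:.0f} ms")
    except OSError:
        print(f"{datetime.now():%H:%M:%S} connect failed")
    time.sleep(1)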

Media reports:
https://www.csoonline.com/article/4071594/aisurus-30-tbps-botnet-traffic-crashes-through-major-us-isps.html
https://fastnetmon.com/2025/10/08/another-record-breaking-ddos-aisuru-botnet-suspected-behind-29-69-tbps-gaming-outages/
https://www.scworld.com/brief/us-isp-hosted-iot-devices-fuel-aisuru-ddos-botnet
https://krebsonsecurity.com/2025/10/ddos-botnet-aisuru-blankets-us-isps-in-record-ddos/
https://thehackernews.com/2025/10/researchers-warn-rondodox-botnet-is.html

Oct 15, 2025 - 01:15 CEST
Current component status (uptime over the past 90 days):

[dataforest Backbone] Interxion FRA8 - Degraded Performance - 99.99 %
- Edge Network (affects all locations) - Degraded Performance - 99.99 %
- DDoS Protection (affects all locations) - Operational - 99.98 %
- Dedicated Servers (40G / 100G / 400G) - Operational - 100.0 %
- Facilities - Operational - 100.0 %

[Datacenter] maincubes FRA01 - Operational - 99.99 %
- Datacenter Routing and Switching Infrastructure - Operational - 100.0 %
- Ceph Cluster - Operational - 100.0 %
- Virtual Servers - Operational - 99.98 %
- Managed Servers - Operational - 99.99 %
- Dedicated Servers - Operational - 100.0 %
- Plesk Web Hosting - Operational - 99.99 %
- TeamSpeak3 Servers - Operational - 100.0 %
- Network Storage - Operational - 100.0 %
- Colocation Racks - Operational - 100.0 %
- Facilities - Operational - 100.0 %

[Datacenter] firstcolo FRA4 - Operational - 99.94 %
- Datacenter Routing and Switching Infrastructure - Operational - 100.0 %
- Dedicated Servers - Operational - 99.79 %
- Colocation Racks - Operational - 100.0 %
- Facilities - Operational - 100.0 %

General Services - Operational - 99.98 %
- Avoro CP & Support - Operational - 100.0 %
- PHP-Friends CRM & Support - Operational - 99.94 %
- Hotline - Operational - 100.0 %
- Cloud Control Panel - Operational - 99.94 %
- Dedicated Control Panel - Operational - 100.0 %
- IPMI VPN - Operational - 100.0 %
- DDoS Manager - Operational - 99.98 %

Scheduled Maintenance

Planned maintenance on selected host systems Oct 17, 2025 22:00 - Oct 18, 2025 06:00 CEST

We need to carry out maintenance on the following host systems:

PHP-Friends virtual servers
- vnode14
- vnode15
- vnode22
- vnode27
- vnode29
- vnode30
- vnode31

Avoro virtual servers (legacy products)
- vnode001
- vnode003
- vnode004

Other products of both labels
- nfs1 (network storage, also used for backups on web hosting and managed servers)
- ernie (Plesk web hosting)

Each host will be shut down and rebooted individually. The process is expected to take no longer than 30 minutes per host.
Only a small number of systems are affected. Customers can verify in the respective control panels which of their products are hosted on these servers.
If you operate a virtual server on one of the affected hosts, please ensure that your system correctly handles the shutdown signal and automatically restarts all required services after reboot.
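One way to sanity-check the restart behaviour on a systemd-based guest is to confirm that the services you depend on are enabled for automatic start. The following is a minimal sketch; the service names are placeholders for whatever you actually run.

# Minimal sketch: check that important services are enabled to start
# automatically after a reboot on a systemd-based system.
# The service names below are placeholders.
import subprocess

SERVICES = ["nginx", "postgresql", "ssh"]   # replace with your own services

for name in SERVICES:
    result = subprocess.run(
        ["systemctl", "is-enabled", name],
        capture_output=True, text=True, check=False,
    )
    state = (result.stdout or result.stderr).strip()
    marker = "OK   " if state == "enabled" else "CHECK"
    print(f"{marker} {name}: {state}")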

Posted on Oct 13, 2025 - 04:09 CEST
Oct 17, 2025

Unresolved incident: Power Feed Outage in Rack X10.

Oct 16, 2025

No incidents reported.

Oct 15, 2025

Unresolved incident: Update on Sporadic Packet Loss Issues.

Oct 14, 2025

No incidents reported.

Oct 13, 2025

No incidents reported.

Oct 12, 2025

No incidents reported.

Oct 11, 2025
Resolved - This incident has been resolved.
Oct 11, 22:08 CEST
Update - We are continuing to work on a fix for this issue.
Oct 10, 19:35 CEST
Identified - We observed packet loss between 16:30 and 16:33 CEST. The situation has been stable since then.
Oct 10, 16:37 CEST
Oct 10, 2025
Oct 9, 2025

No incidents reported.

Oct 8, 2025

No incidents reported.

Oct 7, 2025

No incidents reported.

Oct 6, 2025

No incidents reported.

Oct 5, 2025

No incidents reported.

Oct 4, 2025

No incidents reported.

Oct 3, 2025

No incidents reported.