Once a year, just before the Christmas break, cybersecurity experts from around the world gather to compete in the SANS NetWars Tournament of Champions, an invite-only cybersecurity competition featuring the 200 top-scoring players from regional NetWars events. This competition tests cybersecurity professionals across hands-on challenges in penetration testing, forensics, and threat detection. Competitors solve increasingly difficult problems under time pressure, earning points for each successful challenge.
This year, Airbus Protect sent two of its cybersecurity experts to represent the company at the event in Washington, D.C. We sat down with Simon Hilchenbach and Kynan Jones to hear about their experience.
Tell us about your roles at Airbus Protect.
Simon Hilchenbach: I work as a Cybersecurity Engineer at Airbus Protect in Germany. I focus on the engineering side of our SOC services, which includes system design, building internal tooling, and automation. A portion of my work also involves operations, making sure everything runs smoothly. Basically, my job is to ensure that our SOC analysts have the right infrastructure and tools to deliver high-quality security monitoring and response to our customers.
Kynan Jones: I’m an Incident Responder at Airbus Protect UK, part of a multinational DFIR (Digital Forensics & Incident Response) team covering the UK, France, and Germany. We specialise in helping organisations navigate critical security events, providing everything from rapid doubt removal to complex ransomware mitigation and aiding in recovery across multi-platform environments.
How did you end up at the Tournament of Champions?
Simon Hilchenbach: To qualify for the Tournament of Champions, you first have to compete in a regional SANS NetWars event and score high enough to earn an invitation. I was attending a SANS training workshop in Amsterdam in August, and the NetWars competition was running alongside it. I’ve always enjoyed competitive challenges, so I decided to participate. Finishing first there earned me the invitation to Washington.
What does the competition actually involve and what was the atmosphere like?
Kynan Jones: Over two days, you work through hands-on cybersecurity challenges covering areas like web pentesting, binary exploitation, and forensics. You earn points for each challenge you solve, and the harder the challenge, the more points you get. The goal is to climb the leaderboard and finish as high as possible.
Simon Hilchenbach: You could really see how much effort was put into the competition’s venue. The event took place in a dark ballroom bathed in red light, with intense music playing the whole time. Players sat at long rows of tables, eyes glued to their screens. Everyone came prepared: custom keyboards, portable monitors, one guy even brought VR glasses. There was a large screen next to the stage showing the top 10 leaderboard in real time. Watching the names move up and down, people fighting for those top spots, the competitive atmosphere was palpable.
How did you both perform?
Simon Hilchenbach: We finished in 4th and 19th place, respectively. At this incredibly high level of competition, I still can’t quite believe it. The 4th-place finish also earned us a nice trophy to showcase!
When you’re in the heat of a tournament like NetWars, or a real-world breach, what is the core philosophy that keeps you grounded?
Kynan Jones: Objective-oriented focus. It is so, so easy to spot something interesting in the middle of a task and steer off into a rabbit hole. That can result in significant findings and learnings, or a huge time sink. Note it down and continue with your objective. Timelining as you go is also imperative, because humans aren’t perfect; in many instances early on, I would spend time going back through my process just to find said gateway to the rabbit hole.
Simon Hilchenbach: 100%. With experience in systems-level thinking, you develop an intuition for where it’s worth digging deeper and where it’s not. There were moments in the competition where I had to resist the urge to explore something interesting and just move on to the next challenge.
Infrastructure is becoming increasingly complex. How do you handle network-based forensics when you’re dealing with massive amounts of data?
Kynan Jones: For massive scale, we move away from traditional full-packet capture, which is often too heavy or unfortunately unfeasible. We rely heavily on on-prem netflow and cloud platform telemetry. Many tools utilise a self-learning AI approach to baseline “normal” behavior, allowing us to spot anomalies without needing to manually sift through every bit. In the cloud, we leverage native logs such as AWS VPC Flow Logs or Azure VNet flows to gain high-level visibility. It’s about finding the signal in the noise before we commit to a deep-dive.
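To illustrate the “signal in the noise” idea, here is a minimal Python sketch that baselines per-source traffic volume from flow-style records and flags statistical outliers. The field names are hypothetical, loosely modeled on AWS VPC Flow Log attributes; real platforms use far more sophisticated, self-learning baselines than a simple z-score:

```python
from collections import defaultdict
from statistics import mean, pstdev

def flag_anomalous_sources(flows, z_threshold=3.0):
    """Flag source IPs whose total byte count deviates strongly from the baseline.

    `flows` is an iterable of dicts with 'srcaddr' and 'bytes' keys
    (a hypothetical schema, loosely modeled on VPC Flow Log records).
    """
    totals = defaultdict(int)
    for flow in flows:
        totals[flow["srcaddr"]] += flow["bytes"]

    values = list(totals.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:  # perfectly uniform traffic: nothing stands out
        return []
    # Report sources whose volume sits well above the fleet baseline.
    return sorted(src for src, b in totals.items() if (b - mu) / sigma > z_threshold)
```

The point of the sketch is the workflow, not the math: summarise cheap telemetry first, and only commit to a packet-level deep dive on the handful of sources that fall outside the baseline.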
Speaking of the cloud, we’ve seen a massive shift in how attackers compromise businesses. Can you explain this further?
Kynan Jones: While the cloud has brought massive capability and scalability to businesses of all sizes, it has introduced new threat vectors. We’re seeing a surge in session token theft, which is particularly dangerous because it often requires no MFA once the token is hijacked. We’re also seeing a shift away from traditional credentials in favor of API keys, which are frequently left exposed, hard-coded or leaked.
This has created a significant knowledge gap for security professionals. You can’t just be a “Windows guy” anymore; you have to understand the specific monitoring and identity frameworks of multiple cloud platforms to stop a modern attack in its tracks.
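To show why exposed API keys are such low-hanging fruit, here is a hedged sketch of the pattern matching a secret scanner performs. The rule set is deliberately tiny and partly hypothetical; real scanners such as gitleaks or TruffleHog ship hundreds of tuned rules:

```python
import re

# Two illustrative rules: the well-known AWS access key ID prefix,
# and a generic "api_key = '...'" assignment pattern.
KEY_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"""(?i)\bapi[_-]?key\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]"""
    ),
}

def scan_for_keys(text):
    """Return (rule_name, matched_text) pairs for likely hard-coded credentials."""
    hits = []
    for name, pattern in KEY_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Running something like this over repositories and config shares before an attacker does is a cheap way to shrink exactly the attack surface described above.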
Sometimes a broad view isn’t enough. You’ve described some of your work as “pinhole surgery.” Can you elaborate on that?
Kynan Jones: While our standard modus operandi is rapid DFIR, some cases, such as insider threats and Advanced Persistent Threats (APTs), demand a different gear. In these instances, a fine-toothed-comb approach is mandatory. Advanced actors (or employees) often have intimate knowledge of the environment, or use sophisticated anti-forensic and defense-evasion capabilities that wipe away easy detection opportunities. “Pinhole surgery” involves researching and investigating lesser-known or undocumented artifacts, and in many cases doom-scrolling through a wealth of logs! We look for the tiny forensic breadcrumbs that were never intended as logs, allowing us to reconstruct a timeline even when the attacker thought they’d left no trace.
What about the malware itself? Are we seeing entirely new threats, or just better versions of the old ones?
Kynan Jones: It’s a bit of both… we still see tools that have been around longer than I have been in the field. However, there is a clear shift toward vibe-coded malware in the form of scripts and executables. When utilised, it means traditional signatures fail because the TTPs (Tactics, Techniques, and Procedures) and functionality during the detonation phase change at a much faster rate than before.
Vibe-coding refers to the trend of building software, including malware, primarily through natural-language prompts given to AI models (think Gemini, ChatGPT or other LLMs) rather than by a human manually writing every line of code.
In the context of an incident, “vibe-coded” malware isn’t a specific family or strain; it is a method of production characterised by rapid, iterative generation that sometimes behaves in ways even its author didn’t intend.
In the Windows ecosystem, thanks to the rise of EDR and Microsoft’s enhanced security, attackers are leaning into LoTL (Living off the Land) and fileless malware. We’re even seeing EDR killers designed to disable defenses before the main payload arrives, and the creative repurposing of legitimate tools used by DFIR teams, like Velociraptor, for Command and Control (C2) and remote code execution. Interesting as this is, it can make initial detection much harder for SOCs.
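LoTL activity is often hunted with parent/child process heuristics rather than file signatures, because the binaries involved are all legitimate. A minimal sketch of that idea, assuming a hypothetical event schema (real telemetry would come from an EDR feed or Sysmon process-creation events):

```python
# Trusted applications spawning scripting hosts is a classic LoTL tell,
# e.g. a Word document launching PowerShell via a malicious macro.
# This pair list is illustrative only; production rule sets are much larger.
SUSPICIOUS_PAIRS = {
    ("winword.exe", "powershell.exe"),
    ("excel.exe", "powershell.exe"),
    ("winword.exe", "cmd.exe"),
    ("mshta.exe", "powershell.exe"),
}

def flag_lotl_events(events):
    """Return process-creation events where a trusted binary spawns a scripting host.

    Each event is a dict with (hypothetical) 'parent' and 'child' keys
    holding image names.
    """
    return [
        e for e in events
        if (e["parent"].lower(), e["child"].lower()) in SUSPICIOUS_PAIRS
    ]
```

The heuristic flags the *relationship* between two legitimate binaries, which is exactly why fileless tradecraft is caught by behavioral rules when signature-based detection sees nothing.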
AI has been the buzzword for quite some time now, how are attackers and defenders utilising the technology?
Simon Hilchenbach: Interestingly, the competition had some AI-related challenges too, which is a good reminder that both sides are adopting these tools. In improving our SOC architecture, we’re currently looking at how AI can reduce the time from security incident to understanding. However, in our field, speed means nothing if accuracy suffers. We’re careful not to sacrifice quality for efficiency.
AI is still relatively new in our field, so best practices are still emerging. What we’ve learned is that LLMs work best when they’re properly constrained. You need to treat them as powerful but unpredictable tools that require the right guardrails. This means having strong infrastructure in place: sandboxed environments for testing, well-organised data for context, and systems for auditing AI decisions. Crucially, the infrastructure should be robust, and any automatic action taken by an AI should be revertible. In a mature SOC that was already designed with automation in mind from the start, before anybody even foresaw the current developments, AI can be a force multiplier. But in less mature environments, it could actually introduce new risks.
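The “any automatic action should be revertible” principle can be sketched as a small audit-log wrapper that pairs every action with its undo. This is purely illustrative, not a description of Airbus Protect’s actual tooling; in production, the log would be persisted and the callables wired to real response actions (host isolation, rule pushes, and so on):

```python
class RevertibleActionLog:
    """Audit log for automated (e.g. AI-triggered) actions.

    Every action is recorded together with an undo callable, so any
    step an AI takes can be audited and rolled back.
    """

    def __init__(self):
        self._log = []

    def perform(self, description, do, undo):
        """Execute `do`, record the action, and keep `undo` for rollback."""
        result = do()
        self._log.append((description, undo))
        return result

    def rollback_last(self):
        """Undo the most recent action; returns its description, or None."""
        if not self._log:
            return None
        description, undo = self._log.pop()
        undo()
        return description
```

For example, an AI-driven responder blocking an IP would register both the block and its removal, so an analyst reviewing the audit trail can reverse a bad call in one step.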
Turning to the attacker side: as Kynan mentioned, AI is changing how malware is created, but it’s also lowering the barrier to entry. You no longer need deep technical expertise to write functional malware, since AI can get you 80% of the way there. State-of-the-art models have filters to prevent such abuse, but methods to bypass them are discovered regularly, and even with consumer-grade hardware, attackers can get surprisingly far running an open-source model. This has changed the threat landscape irreversibly. In our SOC service, we’re continuously updating our detection capabilities to stay ahead of these developments.
Kynan Jones: Attackers are using AI to scale social engineering, such as highly convincing automated phishing, and to generate code snippets that vary just enough to bypass legacy detection. One point that isn’t discussed much: just as defenders can use LLMs to interpret niche system logs, attackers can use them to understand a system’s functionality and, ultimately, how to break it. For defenders, the rise of Model Context Protocols (MCPs) and agentic solutions makes this a very interesting space to watch and learn from. It’s a difficult topic to conceptualise because there are so many niche use cases, from emulated sandbox environments for deceptive technologies, to parsing malware samples and memory images, to automating the kill chain.
Beyond malware generation and phishing campaign automation, are there other ways AI might change the threat landscape?
Simon Hilchenbach: There is another perspective I haven’t really seen discussed much yet. Attackers could use coding agents to build trust in open-source projects more efficiently, with the objective of eventually inserting a malicious backdoor. The XZ Utils case showed how feasible this can be: an attacker, likely state-sponsored, spent two years building credibility before implementing a backdoor. That was before coding agents existed, when the amount of code contributed to a project was a proxy for the effort the contributor put in. Now, with AI assistance, a single attacker could efficiently maintain multiple personas across different projects simultaneously, making this approach much more scalable. It’s possible that this is already happening. Large open-source projects with active communities have some protection, but critical infrastructure projects maintained by a few volunteers, as XZ Utils was, have just become easier targets. This makes proper software supply chain management even more important. In the event that such an attack is publicly documented, you need to quickly assess whether and where you are affected.
We have touched on automation a few times. What role does it play in the SOC work, and where do you still rely on human judgement?
Simon Hilchenbach: Automation is essential for handling the sheer volume of data and alerts in a modern SOC. As I hinted at before, it should be a guiding principle when designing SOC infrastructure. We use it for enrichment, correlation, and triaging straightforward cases. All the repetitive work that would overwhelm human analysts. This frees up our SOC analysts to focus on the complex investigations that genuinely require human judgment and critical thinking. Beyond that, we also have extensive automation in place to serve our engineering department itself, to help us as engineers use our time and attention more effectively. The goal isn’t to replace people but to let them work on the problems that actually need their expertise.
As an engineer building SOC infrastructure, what principles guide your decisions about what to automate? What makes for good automation versus automation that creates more problems than it solves?
Simon Hilchenbach: The best candidates for automation are tasks that are repetitive, time-consuming, or error-prone when done manually. I look for processes with clear, well-defined inputs and outputs. If the logic is ambiguous or requires significant judgment calls, automation probably isn’t the right fit. The maintenance aspect is crucial. I’ve seen automation that’s so brittle or complex that maintaining it takes more effort than just doing the task manually would have. Good automation should be understandable and maintainable. If it becomes a black box that nobody wants to touch because they’re afraid it’ll break, you’ve created a new problem.
Another pitfall is false confidence. Automation can make people trust the system too much, and they stop questioning whether it’s working correctly. I also don’t think every piece of automation needs to have a directly quantifiable ROI. Sometimes it’s about the mindset. Small automations stack on top of each other over time. Each one might seem minor, but together they create an environment where repetitive work is handled systematically, and people can focus on what actually matters.
How do you ensure consistent service quality for customers with very different environments and needs?
Simon Hilchenbach: In our SOC services, we bring security to a variety of different customers, each with their own policies, processes and company networks. Notably, we also serve customers in the industries of OT, defence, critical infrastructure, and, naturally, being part of the Airbus family, aviation. No two customer environments are the same. We work closely with each customer to tailor our services to their specific setup, their risk profile, and their priorities. At the same time, we maintain a strong baseline of best practices and continuously refine our service based on feedback and lessons learned. It’s a balance between standardization and customization, with enough structure to ensure quality but enough flexibility to provide value to our customers.