Pentests and the SOC
Penetration testing is a critical part of any robust cybersecurity strategy. But a “successful” penetration test shouldn’t be judged only by what the testers found; it should also be judged by what the organisation detected, how quickly it made sense of what it saw, and whether the monitoring stack told a coherent story of what happened.
That’s where the SOC comes in.
I’ve led security operations capabilities supporting multi-cloud banking platforms, and one lesson sticks: a SOC is only as good as the visibility it can operationalise. You can ingest all the logs you like, apply baseline detection rules, and build dashboards that look impressive in steering meetings, but the real test is whether meaningful activity gets detected and investigated at the right time, for the right reasons, without drowning analysts in noise.
Penetration testing is a perfect opportunity to test that, because it produces “attacker-like” behaviour you can schedule, scope, observe, and iterate on. Too often, though, pentests are run in parallel to the SOC rather than with it. The SOC either gets no context and wastes time chasing false positives, or gets too much context and mentally suppresses anything that looks related—creating the more dangerous outcome where real malicious activity can hide in the shadow of a test.
The goal isn’t to neuter the SOC during a pentest. The goal is to coordinate just enough that the SOC stays effective, and the pentest becomes a joint exercise in both prevention and detection.
The coordination problem no one likes talking about
The first awkward truth is that not every penetration test can be integrated cleanly. Some tests are run by third parties, some are deliberately offline, some are constrained by contractual rules, and some target environments that don’t feed telemetry into the SOC at all. In those cases, the collaboration ceiling is simply lower.
But wherever logs do feed into the SOC, leaving the team out squanders real value, and with it the one chance you have to validate whether visibility and alerting actually reflect reality.
What the SOC needs before the first packet lands
Before the test starts, the SOC needs a clear understanding of scope: what is being tested, what isn’t, and what “success” looks like from the tester’s perspective. This isn’t about giving the defenders a cheat sheet; it’s about avoiding the chaos where analysts burn cycles investigating test traffic against systems that were never in scope, while ignoring the things that actually matter.
Timing matters just as much. If the SOC doesn’t know the testing window, you’ll generate noise and fatigue—and fatigue is how alerts get missed. The schedule also needs to include what happens when the test runs long, because “it’ll finish by 5pm” has a habit of becoming “we just need another hour”.
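One lightweight way to make the overrun problem concrete is to classify events against the agreed window plus an explicit buffer. The sketch below is illustrative only: the dates, times, and buffer are hypothetical values, not a recommendation for any particular window length.

```python
from datetime import datetime, timedelta

# Hypothetical agreed testing window; the buffer covers the
# "we just need another hour" overrun that always happens.
WINDOW_START = datetime(2024, 6, 3, 9, 0)
WINDOW_END = datetime(2024, 6, 3, 17, 0)
OVERRUN_BUFFER = timedelta(hours=2)

def window_status(event_time: datetime) -> str:
    """Classify an event timestamp against the agreed testing window."""
    if WINDOW_START <= event_time <= WINDOW_END:
        return "in-window"
    if WINDOW_END < event_time <= WINDOW_END + OVERRUN_BUFFER:
        # Plausibly the test running long: confirm with the testers.
        return "overrun"
    # Outside any agreed period: treat as real until proven otherwise.
    return "outside-window"
```

The point of the three-way split is that "overrun" activity gets confirmed via the direct contact route rather than silently assumed to be test traffic.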
Source matters too. If you don’t provide the originating IP ranges, domains, and relevant infrastructure identifiers used by the testers, the SOC can’t reliably triage what it sees. But this is where it gets subtle: “allowlisting” test traffic is rarely the right answer. It creates blind spots, and blind spots are exactly what attackers look for. The more useful pattern is to provide context that helps analysts classify activity, not suppress it.
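The "classify, don't suppress" pattern can be sketched as an enrichment step: alerts matching declared tester infrastructure get tagged and still reach an analyst, rather than being dropped by an allowlist. Everything here is hypothetical: the IP range is a documentation placeholder (TEST-NET-3) and the domain and field names are invented for illustration.

```python
from ipaddress import ip_address, ip_network

# Hypothetical tester-provided context. 203.0.113.0/24 is a reserved
# documentation range (TEST-NET-3), used here purely as a placeholder.
PENTEST_RANGES = [ip_network("203.0.113.0/24")]
PENTEST_DOMAINS = {"tester-c2.example.com"}

def enrich_alert(alert: dict) -> dict:
    """Tag alerts that match declared pentest infrastructure.

    The alert is labelled, not suppressed: it keeps its severity and
    still reaches an analyst, who triages it with full context.
    """
    src = alert.get("src_ip")
    domain = alert.get("dest_domain")
    from_test_range = src and any(ip_address(src) in net for net in PENTEST_RANGES)
    if from_test_range or domain in PENTEST_DOMAINS:
        alert.setdefault("tags", []).append("possible-pentest")
    return alert
```

Because nothing is dropped, genuinely malicious activity arriving from outside the declared ranges during the test window still looks anomalous instead of vanishing into an allowlist blind spot.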
And then there’s the simplest operational requirement that gets overlooked embarrassingly often: a direct contact route. If testing starts to impact production, or if analysts see something genuinely alarming, the SOC needs a fast way to reach someone who can confirm intent, pause activity, or escalate appropriately. When that line doesn’t exist, organisations default to the worst possible behaviour: they improvise under pressure.
Turning a pentest into detection engineering
Where this becomes genuinely interesting is when you treat the pentest as a detection calibration run. A good SOC doesn’t just want to know “did we alert?” It wants to know whether the alerts were meaningful, whether they were too late, whether they fired for the right reason, and whether the telemetry captured enough detail to tell the story end-to-end.
This is also where mapping activity to MITRE ATT&CK becomes useful, not as a buzzword but as a shared structure. When red-team actions are described in terms of tactics and techniques, the SOC can reason about what should be visible, which detections should trigger, and which parts of the kill chain are currently dark. It also stops post-test conversations becoming vague. Instead of “we didn’t see much”, you can say “we missed the lateral movement technique entirely”, which is the kind of statement that drives concrete engineering changes.
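A minimal version of that shared structure is a coverage map: the techniques in scope for the test on one side, the detections believed to cover them on the other, and a function that surfaces the dark spots. The technique IDs below are real ATT&CK identifiers, but the detection names and coverage are placeholders, not a real rule set.

```python
# Techniques agreed as in scope for the test (real ATT&CK IDs).
planned_techniques = {
    "T1078": "Valid Accounts",                          # identity abuse
    "T1021": "Remote Services",                         # lateral movement
    "T1059": "Command and Scripting Interpreter",       # remote execution
    "T1068": "Exploitation for Privilege Escalation",   # priv-esc paths
}

# Hypothetical detection inventory, keyed by technique ID.
deployed_detections = {
    "T1078": ["impossible-travel-login", "dormant-account-reuse"],
    "T1059": ["suspicious-child-process"],
}

def dark_techniques(planned: dict, detections: dict) -> dict:
    """Return in-scope techniques with no detection coverage at all."""
    return {tid: name for tid, name in planned.items() if not detections.get(tid)}
```

Running this before the test turns "we didn't see much" into a specific, pre-agreed list of techniques the SOC already knows it cannot see, which is exactly what the post-test debrief should confirm or refute.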
A strong pattern here is to agree, up front, a small set of scenarios the SOC explicitly wants to validate. Not a laundry list, just the areas that matter most to the organisation right now—identity abuse, suspicious remote execution, data access anomalies, privilege escalation paths. Then the pentest becomes a two-way exercise: the testers probe, the SOC watches, and both sides compare notes against what actually happened.
The feedback loop is where the value lives
The most important part of this collaboration happens after the test. If the pentest report lands, gets turned into tickets, and nothing else changes, you’ve missed half the value. The SOC should sit down with the testers and replay key moments: what was attempted, what worked, what should have alerted, what actually alerted, what telemetry was missing, and which detection rules need refinement.
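That replay can be captured in a simple joint timeline that distinguishes a rule gap (the telemetry existed but nothing fired) from a visibility gap (the log was never collected). The timeline data below is an invented example of what a debrief might reconstruct, not real results.

```python
from dataclasses import dataclass

@dataclass
class Action:
    technique: str     # what the testers attempted (ATT&CK ID + label)
    time: str          # when they ran it
    alerted: bool      # did any detection fire?
    telemetry: bool    # was the underlying log even collected?

def classify_gap(a: Action) -> str:
    """Separate detection outcomes into the three debrief categories."""
    if a.alerted:
        return "detected"
    return "rule gap" if a.telemetry else "visibility gap"

# Hypothetical debrief data, reconstructed jointly with the testers.
timeline = [
    Action("T1078 initial access", "09:14", alerted=True, telemetry=True),
    Action("T1021 lateral movement", "10:02", alerted=False, telemetry=True),
    Action("T1048 exfiltration test", "11:30", alerted=False, telemetry=False),
]

for a in timeline:
    print(f"{a.time}  {a.technique:28s} {classify_gap(a)}")
```

The distinction matters because the fixes differ: a rule gap is detection engineering work, while a visibility gap is a logging or architecture problem, which is the point the next paragraph makes.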
This is also where uncomfortable discoveries show up. Sometimes the SOC “missed” activity because the logging wasn’t there. Sometimes because the logs existed but weren’t parsed correctly. Sometimes because the rule logic was wrong. And sometimes because analysts saw it and dismissed it, because the organisation taught them that pentest windows are periods where “everything is probably fine”.
A pentest where the SOC has no visibility is not just a missed opportunity—it’s a signal. It tells you your monitoring doesn’t describe reality, and that’s an architectural problem, not a tooling problem.
If you want pentesting to be more than an annual compliance ritual, bring the SOC into the loop early, keep them effective during the test, and treat the outcome as a joint calibration of both offensive findings and defensive capability. The result isn’t just a list of vulnerabilities. It’s a clearer understanding of what the organisation can actually see when something starts going wrong.
