We’ve had a very educational and interesting couple of weeks here in Austin. No, not just because of the Annular Eclipse, which was very cool (and a good test run for the total eclipse that is coming to Austin in a few more months), but because I’ve been studying historical breaches. In doing so, I’ve learned a little bit about how 2024 may play out. Interested? Then let me explain.

As part of a presentation I’m putting together, I wanted to understand how companies actually get hurt out there on the Big Bad Internet. To that end, I figured I’d steal inspiration from Shakespeare and remind myself that “what’s past is prologue” (which also happens to be from one of my favorite Discovery episodes, but I digress). If we want to know where we’re going, we have to understand where we’ve been.

Phishing and Insecure Systems: The Buckets I Expected

My trip down memory lane was actually a little surprising. I was expecting two big buckets of insecurity: users being phished (or otherwise attacked using some variation of social engineering) and insecure systems sitting out on the Internet. Within that second bucket, I was mentally including issues that related to misconfiguration as well as CVEs hanging out in public. A review of the last couple of years did indeed turn up a lot of content in these buckets and I continue to believe they represent a large segment of the risk companies face.

At the end of the day, your people are going to be targeted, and someone, somewhere, is going to make a mistake. What you need to focus on is the safety net that catches those inevitable missteps. Accepting and dealing with that reality is a very powerful technique, but it is going to require a profound lens shift. As we discussed in a recent blog post, this is inherently a process problem, not a people problem.

Similarly, you’ll have a bucket of technical issues: known vulnerabilities hanging out on your attack surface. Attack Surface Management (ASM) does a solid job of handling these and, if you do it correctly, it doesn’t just give you a big “to do” list of places you need to patch. Rather, it allows you to carefully prioritize these weaknesses in terms of risk. Experience has taught us that not all vulnerabilities are created equal, and many CVSS 9-point-somethings go for years without exploitation in the wild (though leaving them in place is still not something I would recommend). Even if you knew where every asset was, you’d have too many to patch them all. Prioritization is one of the best tools you have in your toolbox for improving defensive ROI.
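To make the prioritization idea concrete, here is a minimal sketch in Python. The fields and weights are entirely made up for illustration (real programs would draw on signals like CISA’s Known Exploited Vulnerabilities catalog and actual exposure data), but it shows how a known-exploited, internet-facing 7.5 can reasonably outrank an unexploited internal 9.1:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cve: str
    cvss: float
    exploited_in_wild: bool  # e.g., listed in a known-exploited catalog
    internet_facing: bool

def risk_score(f: Finding) -> float:
    """Toy risk score: base CVSS, boosted for known exploitation and exposure.

    The +5.0 and +2.0 weights are illustrative assumptions, not a standard.
    """
    score = f.cvss
    if f.exploited_in_wild:
        score += 5.0
    if f.internet_facing:
        score += 2.0
    return score

findings = [
    Finding("db01", "CVE-2023-0001", 9.1, False, False),   # scary number, quiet in the wild
    Finding("web01", "CVE-2023-0002", 7.5, True, True),    # actively exploited and exposed
]

# web01's exploited, internet-facing 7.5 outranks db01's unexploited 9.1
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.host, f.cve, round(risk_score(f), 1))
```

The point isn’t the specific formula; it’s that any explicit scoring of exploitation and exposure beats sorting your patch queue by raw CVSS alone.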

API Insecurity: The Bucket I Didn’t Expect

The surprise for me, though, was my third bucket, one I forgot about when making my mental list: API insecurity. While I know this happens, a review of history highlighted just how impactful these weaknesses can be when things go wrong. Moreover, traditional ASM solutions are not going to pick up most of these issues, and solutions that rely solely on automation are particularly prone to missing this type of vulnerability.

A good example is worth a thousand words, so without further ado, I give you an issue Twitter had which led to the release of several million user records. While the data disclosed was not super-private (mostly phone numbers, etc.), its release is clearly undesirable. If that’s not enough, Microsoft also inadvertently exposed a large number of accounts via an API that should have required authentication but did not. I could go on (and on), but the idea is pretty simple: large companies get breached through their own vulnerable or misconfigured APIs.
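The general shape of this flaw class is easy to show. The following is a hypothetical Python sketch, not the actual vulnerable services’ code: a lookup endpoint that returns account data with no authentication check lets anyone enumerate users, while the fixed version demands a credential first (the data and token are, of course, invented):

```python
from typing import Optional

# Hypothetical backing store and credential set for illustration only.
USERS = {"+15551234567": {"handle": "alice", "email": "alice@example.com"}}
VALID_TOKENS = {"secret-token"}

def lookup_vulnerable(phone: str) -> Optional[dict]:
    # BUG: no caller authentication -- anyone on the internet can map
    # phone numbers to accounts, one request at a time.
    return USERS.get(phone)

def lookup_fixed(phone: str, token: Optional[str]) -> Optional[dict]:
    # FIX: require a valid credential before returning any account data.
    if token not in VALID_TOKENS:
        raise PermissionError("authentication required")
    return USERS.get(phone)
```

Notice that nothing about the vulnerable endpoint looks “broken” to a port scanner or version check: it responds happily and correctly. That is exactly why this bucket slips past traditional ASM.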

Why That Third Bucket Matters

The reasons I found this interesting are two-fold. First, the impact was surprising: I know Praetorian’s engineers find this kind of thing on AppSec reviews we do all the time during engagements, but I hadn’t mentally connected it to some of the biggest data breaches in the wild. Second, a Red Team or AppSec engagement would pick up these kinds of issues with relative ease (provided the scope wasn’t too broad), but automation won’t typically find them. I write “typically” here as I do see potential utility in some LLM-based automation, but at least today, that’s a research topic more than it is a solution (not to mention the security challenges LLMs can have when handling content that is untrusted, which is the topic of another post). That means that human testing is the number one way to find these issues. This leaves defenders with a couple different challenges.

Reactive Red Team

First, the more I see, the less I believe that the rigid point-in-time Red Team model is the optimal way to use offensive security. The rate of change of systems is high, so having a Red Team make an assessment gives us just a single pixel of a much more complex picture. Yes, it’s data, and yes, it can help reduce operational risk, but the time-bound nature of the engagement means that significant windows of vulnerability are likely to exist around these momentary check-ins. For that reason, I feel like traditional Red Teaming is a bit more reactive than it could be. In contrast, continuous Red Teaming, assisted by automation, can give you that quick detection of new risk in your environment.

Automation Gaps

Second, I think this leads to some questions you should be asking your vendor in the EASM space. How much manual testing do they actually do? What kind of coverage beyond traditional “Hey, you’re running Apache 0.6.2” do they provide? How is that work done, and at what cadence? While a weak answer here doesn’t mean the service isn’t providing value (it is… my Saturday afternoon breach reading list had lots of examples of issues EASM would pick up on), it does allow you as the buyer to contextualize that value and understand your gaps in coverage. You can then figure out how to deal with those omissions most effectively.

Conclusion

The harsh reality of being a CISO in today’s threat environment is that it is not about eliminating all risk: that’s simply impractical. You have users, you have interfaces, you have systems. Sooner or later, the wrong set of stars will align and things will go wrong. Your job is to make that alignment a rarity in the most efficient way possible. Doing so requires you to use every tool at your disposal, and to understand exactly what you’re getting when you deploy them. While a risk of zero just isn’t possible, a meaningful reduction in actual risk is. To get there, remember that what’s past is prologue. While times change, it’s a pretty good bet that last year’s breaches can tell us a lot about this year’s effective defenses. History is a wonderful teacher.