Capital One and EC2 Hack – An Overview

There’s been a ton of coverage of the recently discovered Capital One breach.

I’m generally very skeptical when AWS security makes the news; so far, most “breaches” have been the result of the customer implementing AWS services in an insecure manner, usually by allowing unrestricted internet access and often by overriding defaults to remove safeguards (I’m looking at you, NICE and Accenture and Dow Jones!).  Occasionally, a discovered “AWS vulnerability” impacts a large number of applications in AWS – and it also impacts any similarly configured applications that are *not* in AWS (see, for example, this PR piece…um, I mean “article” from SiliconAngle).  Again, this comes down to a lack of basic security hygiene – anyone who’s worked in IT in the last 20 years knows that you need to patch any internet-facing software before an attacker finds it (and, incidentally, the window between a vulnerability being discovered and being exploited keeps shrinking, so you’d better find a way to automate that – but that’s another discussion for another post).

When I looked at the Capital One breach, I immediately assumed it would fit into one of those categories, but instead it looks like we finally have an honest-to-goodness AWS-specific hack.  Furthermore, from what I can tell, it was the result of a customer trying to follow best practices.

Although I didn’t have a chance to look at the exploit before it was taken down, we can get some idea of how it worked from the text of the complaint (primarily by reading between the lines of the agent’s description of the attacker’s deployment).  I’ll go into technical detail in another post, but the short version is that the attacker found a way – almost certainly through some misconfigured third-party software – to retrieve temporary AWS credentials from an EC2 instance’s metadata service.  The temporary credentials gave the attacker access to an S3 bucket that contained sensitive data, which she then posted online.
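To make the mechanics concrete, here’s a minimal sketch of what that credential retrieval looks like, assuming the original instance metadata service (IMDSv1, the only version that existed at the time) and a hypothetical role name.  The complaint doesn’t spell out the exact request path the attacker used, but anything that can coerce the instance into issuing HTTP GETs on an attacker’s behalf (say, an SSRF bug in misconfigured third-party software) can read these same two responses:

```python
# Minimal sketch: how IMDSv1 hands out the temporary credentials for the IAM
# role attached to an EC2 instance.  Run on the instance itself, this is normal
# SDK behavior; reached through an SSRF, it becomes credential theft.
# The role name returned below (e.g., "example-app-role") is hypothetical.
import json
import urllib.request

METADATA = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# Step 1: the bare endpoint returns the name of the role attached to the instance.
role_name = urllib.request.urlopen(METADATA, timeout=2).read().decode()

# Step 2: appending the role name returns short-lived keys for that role.
creds = json.loads(
    urllib.request.urlopen(METADATA + role_name, timeout=2).read().decode()
)
print(creds["AccessKeyId"], creds["Expiration"])  # plus SecretAccessKey and Token
```

Unless a policy explicitly restricts where they can be used from, those temporary keys work from any machine until they expire, which is why an attacker doesn’t need to stay on the instance to use them.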

Notice that I didn’t write “a misconfigured EC2 instance” above; the EC2 configuration in question (an IAM role associated with the instance) is a recommended practice when developing applications for AWS.  This is, unfortunately, an increasingly common issue with security-oriented tools and best practices: having them in place but not using them correctly (or, in the case of this attack, using them *almost* correctly) can sometimes be even worse than not using them at all.  As a security professional, this is particularly heartbreaking to see – they tried to do this correctly, but it completely backfired and opened a backdoor.
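To illustrate what “almost correctly” can look like in practice, here’s a hedged sketch (the actual policy involved isn’t public, and the role, policy, and bucket names below are made up) comparing an instance-role policy that grants blanket S3 access with one scoped to what the application actually needs:

```python
# Hypothetical illustration of the "almost correct" failure mode: the
# instance-role pattern is right, but the policy attached to the role is far
# broader than the application needs.  All names are invented for the example.
import json
import boto3

iam = boto3.client("iam")

# Too broad: anyone holding this role's temporary credentials can list and
# read every bucket in the account.
too_broad = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
}

# Scoped: the same role pattern, limited to the one bucket and the one action
# the application actually performs.
scoped = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-app-bucket/*",
    }],
}

# Attach the scoped policy inline on the instance's role.
iam.put_role_policy(
    RoleName="example-app-role",
    PolicyName="example-app-s3-access",
    PolicyDocument=json.dumps(scoped),
)
```

Stolen instance credentials are still a problem with the scoped version, but the blast radius shrinks from every bucket in the account to the objects in a single bucket.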

I will leave it to the reader to decide whether Capital One should be forgiven for this; my personal opinion is that they have more than enough money and resources to perform a detailed security review, particularly for applications that collect sensitive information from people.  A cursory security review would probably have passed the application, but a deep dive would likely have revealed the underlying vulnerabilities (or at least reduced or eliminated the impact).

I’ll get into some of the tech details in part 2 of this article, because they really are very interesting, and in part 3 I’ll dive into what organizations can do to protect themselves from this type of attack in the future.
