In parts one and two, I described how the Capital One breach took advantage of an EC2-specific feature to obtain AWS credentials, which were then used to retrieve multiple files containing sensitive information. If you haven’t already done so, I’d encourage you to read those installments before continuing. You might also want to pull up the complaint for reference; the juicy bits describing the attack are on pages 6-8.
In this final installment of the article, I’ll describe some measures that Capital One could have taken to prevent this kind of attack. Before I do, though, I want to say something in defense of Capital One: on the surface of it, this application probably looked secure. I don’t have any way to verify most of these, but I’m going to guess they did the following:
- Only allowed required ports for their application, both internally and externally
- Enforced HTTPS on connections from the Internet
- Enabled automatic encryption of objects in the S3 buckets and EBS Volumes
- Used associated IAM roles rather than static credentials (*)
- Enabled CloudTrail (*)
- Implemented a Web Application Firewall (WAF) (*)
(*) – The complaint states or strongly implies that this was implemented
Honestly, this puts Capital One ahead of many other implementations I’ve seen. If Capital One followed a security review checklist (and I’m guessing they did), this application ticked all the boxes.
With that qualifier out of the way, here are some relatively easy additional steps Capital One could have taken to avoid this issue:
Easy step 1: practice least privilege in IAM Roles
Simply put, don’t give an application any more permissions than it needs. If this server was only functioning as a WAF (and not, for example, also as an application server), then it probably didn’t need any S3 access except perhaps to back up and restore its configuration. It definitely didn’t need the ability to list the S3 buckets owned by the account, and it probably didn’t need the ability to list anything at all. Had Capital One simply denied any “s3:List*” API access in the policy, the attacker could have had full read and write privileges but would still have been effectively blind. Better still, allow only the specific S3 API calls the application requires, against only the resources it explicitly needs. As it is, the breadth of access implies that the Role simply had list and read privileges for all S3 buckets and objects.
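As a sketch, a least-privilege policy for a WAF instance role might look something like the following. The bucket name and statement IDs are hypothetical; the "Effect", "Action", and "Resource" fields follow AWS's standard IAM policy grammar:

```python
import json

# Hypothetical least-privilege policy for a WAF instance role: it may only
# back up and restore its own configuration in one known bucket, and all
# s3:List* calls are explicitly denied. Bucket name is made up for illustration.
waf_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowConfigBackupOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-waf-config-backups/*",
        },
        {
            "Sid": "DenyAllListing",
            "Effect": "Deny",  # an explicit Deny always overrides any Allow
            "Action": "s3:List*",
            "Resource": "*",
        },
    ],
}

print(json.dumps(waf_role_policy, indent=2))
```

Even if credentials for this role were stolen, the attacker could read and write one configuration prefix but couldn’t enumerate buckets or objects.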
Easy step 2: limit S3 access to sensitive data to the local VPC
S3 bucket policies can restrict access to requests originating inside a specific VPC – requests from the Internet are denied, so even if the attacker had the credentials, she wouldn’t have been able to do anything with them.
I’m a little hesitant to put this here; if the attacker was already able to get the IAM credentials, then theoretically she should have been able to craft HTTP requests to do her misdeeds through the EC2 instance, so adding this step would have slowed her down but might not have stopped her. In general, though, it’s another form of least privilege that absolutely should be exercised.
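A minimal sketch of such a bucket policy, assuming access flows through a VPC endpoint (the bucket name and VPC ID are placeholders; `aws:SourceVpc` is the real S3 condition key for this):

```python
import json

# Hypothetical bucket policy: deny all S3 actions on this bucket unless the
# request arrives through the named VPC. aws:SourceVpc is only populated on
# requests coming via a VPC endpoint, so Internet-originated requests fail
# the StringNotEquals test and are denied.
vpc_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessFromOutsideVpc",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-sensitive-data",
                "arn:aws:s3:::example-sensitive-data/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpc": "vpc-0example"}},
        }
    ],
}

print(json.dumps(vpc_only_policy, indent=2))
```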
Easy step 3: use separate KMS keys in S3 for different projects
AWS generally offers two choices for encrypting S3 objects: AES-256 or KMS. These names are a bit of a misnomer – it’s really a choice between a master key that AWS manages for you and a master key that your account manages. The AWS-managed option effectively adds no access control of its own, so even though the data is encrypted at rest (thereby checking the relevant compliance boxes), anyone whose credentials can read the bucket can read the data. A customer-managed key, on the other hand, has a default “deny” policy and, much like S3 itself, requires both the key policy and the requester’s IAM policy to allow access. The result of using KMS encryption in S3 is that even if credentials with an overly generous S3 policy are breached, any data encrypted with KMS is still safe unless that policy also allows decrypting with that key.
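To make the difference concrete, here is a sketch of the upload parameters for a KMS-encrypted object. `ServerSideEncryption` and `SSEKMSKeyId` are the actual S3 PutObject parameter names; the bucket, object key, and KMS key ARN are placeholders:

```python
# Sketch: uploading an object encrypted with a customer-managed KMS key
# instead of the default "AES256" (SSE-S3) option. Bucket, object key, and
# KMS key ARN below are made up for illustration.
put_kwargs = {
    "Bucket": "example-sensitive-data",
    "Key": "reports/2019/applications.csv",
    "Body": b"...",
    "ServerSideEncryption": "aws:kms",  # "aws:kms" rather than "AES256"
    "SSEKMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
}
# With boto3 this would be passed as: s3_client.put_object(**put_kwargs)
# Reading the object back then requires kms:Decrypt on that key *in addition*
# to s3:GetObject -- a second, independent access check.
print(put_kwargs["ServerSideEncryption"])
```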
The three suggestions above are relatively easy to implement and can easily be added to security checklists for projects with sensitive data. Although none of them would have stopped the attack, they would have greatly reduced the impact (referred to as the “blast radius” in security parlance).
There are also some general steps that can be taken to cover multiple projects, which should have been standard practice for a bank the size of Capital One:
Shared step 1: monitor API requests
This really should have already been implemented: any AWS API access from a known anonymizer VPN or Tor exit IP should raise some alarms. CloudTrail provides full logging of pretty much all S3 API calls (which is how Capital One was able to give the FBI such detailed forensic data later on), and there are plenty of tools that can scour the logs for successful API requests from suspicious IPs. Honestly, it’s a little disconcerting that Capital One didn’t catch this attack from CloudTrail logs.
Checking for suspicious IPs in CloudTrail is the tip of the iceberg and pretty easy to implement – an advanced DevSecOps team should also be looking for irregularities: why is this IAM role that typically hits the API once every few days suddenly mass downloading? Why are we seeing tons of new requests from this new IP that doesn’t belong to us or to AWS? These take time, money, and engineering brainpower, but Capital One should have plenty of all three.
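The simple end of that spectrum can be sketched in a few lines. The record fields (`eventName`, `sourceIPAddress`, `errorCode`) match CloudTrail’s documented event schema; the IP set here is a stand-in for a real threat-intelligence feed of VPN and Tor exit nodes:

```python
# Minimal sketch of scanning CloudTrail records for successful calls from
# suspicious IPs. SUSPICIOUS_IPS stands in for a real VPN/Tor exit-node feed;
# 203.0.113.99 is a TEST-NET address used purely as a placeholder.
SUSPICIOUS_IPS = {"203.0.113.99"}

def flag_suspicious(records):
    alerts = []
    for rec in records:
        # CloudTrail omits errorCode on successful calls
        succeeded = "errorCode" not in rec
        if succeeded and rec.get("sourceIPAddress") in SUSPICIOUS_IPS:
            alerts.append((rec["eventName"], rec["sourceIPAddress"]))
    return alerts

sample = [
    {"eventName": "ListBuckets", "sourceIPAddress": "203.0.113.99"},
    {"eventName": "GetObject", "sourceIPAddress": "198.51.100.7"},
]
print(flag_suspicious(sample))  # [('ListBuckets', '203.0.113.99')]
```

A production version would pull records from the CloudTrail S3 bucket or CloudWatch Logs and refresh the IP feed continuously, but the core check is this small.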
Shared step 2: filter out the metadata IP address with a WAF
All EC2 metadata (including IAM role credentials) is accessed via an HTTP call to the IP address “169.254.169.254”. I can’t think of any conceivable reason to have this IP address appear in a request URL or POST payload; therefore, any request that includes it should probably get dropped. You can use AWS WAF to create a rule like this or add it to your own WAF (although if the WAF itself is the attack vector, as it was here, that might not help).
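The logic of such a rule is trivial; here is an illustrative filter (this is not AWS WAF rule syntax, just the check a rule would encode):

```python
# Sketch of the WAF idea: drop any request whose path, query string, or body
# contains the EC2 metadata IP. Illustrative only -- a real AWS WAF rule would
# express this as a string-match condition on the URI and body.
METADATA_IP = "169.254.169.254"

def should_drop(request_path: str, request_body: str) -> bool:
    """Return True if the request references the EC2 metadata endpoint."""
    return METADATA_IP in request_path or METADATA_IP in request_body

# An SSRF-style request like the one in the complaint would be caught:
print(should_drop("/proxy?url=http://169.254.169.254/latest/meta-data/", ""))  # True
print(should_drop("/index.html", "user=alice"))  # False
```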
The above suggestions are far from the full list of precautions that Capital One could have (and should have) taken, but implementing any one of them would have prevented the attack, sharply limited its impact, or at least alerted Capital One while the attack was happening.
Besides taking the above recommended steps as general practice, any project that collects sensitive data and is exposed to the Internet should get a detailed security review as a best practice. A security architect would have asked questions like “How are we implementing least privilege?” and “For each of these components, how are we limiting the fallout if it’s compromised?” Security architects aren’t inexpensive, but they’re cheaper than a lawsuit.
For the reader: if you got some value out of this, please let me know about it in the comments. If you’d like to further discuss your security posture in AWS, feel free to reach out to me or contact us through our website. For those in South Florida, I’m going to be presenting a talk based on this article on August 22, 2019 at Venture Cafe Miami – hope to see you there!