Understanding Identity and Access Management Roles for ECS/EKS

IAM Roles for ECS/EKS: Right Permissions, Right Place

IAM role configuration is central to container security in AWS. The principle of least privilege applies everywhere, but it gets ignored in container orchestration more often than almost anywhere else. A common pattern: the infrastructure is locked down tight, but the container task runs with AdministratorAccess because someone couldn’t get S3 writes working late on a Friday. That’s a serious security gap worth closing.

The Role Confusion: Workers vs. Managers

IAM roles are the gatekeepers of your AWS estate. In ECS and EKS, getting them right is the difference between a minor application bug and a full-scale cloud compromise. The key is understanding that not all roles do the same job, and mixing them up is where the trouble starts.

Each IAM role has two sides: a permission policy (what the role can do) and a trust policy (who or what is allowed to assume the role). Getting the permission policy right is critical, but neglecting the trust policy is equally dangerous — an overly permissive trust relationship can let unintended principals assume a role and inherit its permissions.
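For example, a role meant for ECS tasks should trust only the ECS tasks service principal. A minimal trust policy sketch (the exact statement you need depends on your setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

In practice it's worth tightening this further with condition keys such as aws:SourceAccount, so a task in someone else's account can't ask the ECS service to assume your role on its behalf.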

First, there’s the Task Role. This is the one people get wrong most often. The Task Role is attached to the specific task (in ECS) or the Pod (in EKS), and it defines what the application code inside the container is allowed to do. If your Python script reads a file from S3 and writes a record to DynamoDB, those permissions belong here. This is the primary tool for least privilege. If a specific container only calculates tax, it has no business holding permissions to delete your customer database. Generic, over-privileged “AppRoles” are alarmingly common.
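For the S3-and-DynamoDB example above, the Task Role's permission policy might grant exactly two actions and nothing else. The bucket name, table name, region, and account ID below are illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-input-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "dynamodb:PutItem",
      "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/ExampleRecords"
    }
  ]
}
```

Note that the resources are scoped to one bucket and one table, not "*" — that scoping is half of what least privilege means.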

In ECS, there is a major distinction between the Task Role and the Task Execution Role that often trips people up. While the Task Role covers your app, the Task Execution Role is for the ECS agent — the infrastructure itself. It allows ECS to pull your Docker image from ECR and send logs to CloudWatch. A frequent mistake is dumping application permissions into the Execution Role, blurring the lines between the plumbing and the water flowing through it.
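The split is visible in the task definition itself, where the two roles are separate top-level fields — which also makes it easy to audit. A trimmed sketch with illustrative names and ARNs:

```json
{
  "family": "tax-calculator",
  "taskRoleArn": "arn:aws:iam::123456789012:role/TaxCalculatorTaskRole",
  "executionRoleArn": "arn:aws:iam::123456789012:role/EcsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/tax-calculator:1.0"
    }
  ]
}
```

If you find ECR or CloudWatch Logs permissions in the role referenced by taskRoleArn, or application permissions in the one referenced by executionRoleArn, something has been mixed up.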

Then there are the Service Roles. These are the permissions the underlying machinery — the scheduler or the cluster — uses to keep things running. They allow ECS or EKS to manage Elastic Load Balancers, register targets, and spin up or terminate EC2 instances. In most cases, AWS handles these Service-Linked Roles automatically now, which helps — though certain custom configurations still require you to define them yourself. The goal is to give the orchestration layer the power to manage infrastructure without exposing those permissions to the application code. Your web app container has no reason to know how to terminate an EC2 instance, and with proper role separation, it never will.

The Art of Least Privilege

Keeping this tidy requires a disciplined approach to permissions. Start with zero and add them back one by one. If a developer asks for s3:*, it’s worth asking why they need to delete buckets when they only need to read objects. It’s painful at first, but it pays off later.

For EKS, the strategy shifts slightly. Attaching IAM policies to the worker nodes (EC2) gives every pod on that node the same permissions — a poor outcome. Instead, use IRSA (IAM Roles for Service Accounts) or the newer EKS Pod Identity. These map an AWS IAM role to a specific Kubernetes Service Account, allowing granular, pod-level isolation. A word of caution on AWS Managed Policies: they are convenient but often overly broad. A “Read Only” managed policy might still grant access to data you consider sensitive, so always review the JSON before attaching it.
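With IRSA, the mapping is an annotation on the Kubernetes Service Account; pods opt in by setting serviceAccountName in their spec. A sketch with illustrative names (the role ARN is hypothetical, and the role's trust policy must reference your cluster's OIDC provider plus this namespace/service-account pair):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tax-calculator
  namespace: billing
  annotations:
    # Illustrative ARN; only pods using this Service Account
    # receive credentials for this role.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/TaxCalculatorPodRole
```

Two pods on the same node can then carry entirely different AWS permissions, which is exactly the isolation node-level policies can't give you.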

One threat that deserves special attention is credential theft via the Instance Metadata Service (IMDS). If your ECS tasks or EKS pods run on EC2 and an attacker lands a Server-Side Request Forgery (SSRF) vulnerability, they can query http://169.254.169.254 to steal the IAM credentials attached to the instance or task role. Enforce IMDSv2 (which requires a session token and defeats simple SSRF attacks) and, on EKS, use IRSA or Pod Identity so that credentials are scoped to the pod rather than inherited from the node.
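IMDSv2 enforcement can be baked into the node's launch template. A sketch of the MetadataOptions block (a hop limit of 1 also stops containers one network hop away from reaching the node's credentials; some ECS-on-EC2 networking modes need a higher limit, so verify against your setup):

```json
{
  "MetadataOptions": {
    "HttpEndpoint": "enabled",
    "HttpTokens": "required",
    "HttpPutResponseHopLimit": 1
  }
}
```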

Hardening the Container Runtime

Even with perfectly configured IAM roles, an insecurely configured container still leaves you exposed. It starts with the golden rule: never run as root. Unless the image says otherwise, the process inside a Docker container runs as root, so if an attacker breaks out of that container, they effectively have root access on the host node.

Fixing this is straightforward. In your Dockerfile, create a user and switch to it:

FROM alpine:3.19
# -S creates system accounts: no password, no login shell
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Everything from here on, including the container's main process, runs as appuser
USER appuser

For an extra layer of defence, look at User Namespaces. These map the root user inside the container to a non-privileged user on the host, acting as a failsafe if your configuration slips up.
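On a Docker host, user namespace remapping is a daemon-level setting in /etc/docker/daemon.json; "default" tells Docker to create and use a dedicated dockremap user for the mapping:

```json
{
  "userns-remap": "default"
}
```

Enabling it requires a daemon restart and changes file ownership semantics for volumes, so test it before rolling out broadly.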

Beyond identity, strip the container down to the essentials. Use minimal base images like Alpine or “Distroless” variants. If curl or bash isn’t in the image, an attacker has a much harder time running a script after gaining access. Also drop Linux capabilities that aren’t needed — your web app likely doesn’t need NET_ADMIN rights — and mount the root filesystem as read-only. In ECS, set readonlyRootFilesystem to true in your task definition’s container properties; in Kubernetes, set readOnlyRootFilesystem: true in the pod’s securityContext. If your app can’t write to the disk, malware struggles to download and persist executables. Some applications need a writable /tmp; you can mount a small tmpfs for that purpose without opening up the entire filesystem.
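On the Kubernetes side, the read-only root filesystem, dropped capabilities, and a tmpfs-backed /tmp can all be combined in one pod spec. A sketch with illustrative names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
    - name: app
      image: example/app:1.0
      securityContext:
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]        # add back only what the app actually needs
      volumeMounts:
        - name: tmp
          mountPath: /tmp      # writable scratch space, nothing else
  volumes:
    - name: tmp
      emptyDir:
        medium: Memory          # tmpfs-backed, vanishes with the pod
        sizeLimit: 64Mi
```

The ECS equivalent lives in the task definition: readonlyRootFilesystem and linuxParameters.capabilities.drop on the container definition, with a tmpfs entry under linuxParameters for the scratch mount.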

That covers IAM roles and container hardening. Getting these foundations right is what lets you sleep at night while your clusters scale up and down.