Is there any benefit to using separate ECS task execution roles? - security

When deploying containers on ECS Fargate, ECS itself uses a task execution role to pull Docker images, fetch secrets, ship logs, etc. The role's trust policy (AssumeRolePolicy) must allow ECS tasks to assume it:
{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "",
    "Effect": "Allow",
    "Principal": {
      "Service": "ecs-tasks.amazonaws.com"
    },
    "Action": "sts:AssumeRole"
  }
}
Now, I've heard some people recommend using separate ECS task execution roles for separate tasks, but I don't see how this would result in greater security. (Provided no other IAM principals are allowed to assume the role.)
I can imagine that, under the hood, the role is assumed by an agent running on an underlying EC2 instance, but at least with Fargate, the responsibility to secure it is AWS's.
So is there any good reason to have separate task execution roles?
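One concrete benefit worth noting: the execution role is the identity that pulls images and reads injected secrets, so if several task definitions share one execution role, any of them can be pointed at any secret that role can read. A minimal sketch of per-task scoping (the `execution_role_policy` helper and all ARNs are hypothetical, not from the question):

```python
import json

def execution_role_policy(secret_arns):
    """Build an execution-role policy scoped to the secrets one task
    actually injects (hypothetical helper; ARNs are placeholders)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["secretsmanager:GetSecretValue"],
                "Resource": list(secret_arns),
            }
        ],
    }

# Separate execution roles keep the two tasks' secret sets disjoint:
# a task definition referencing role A cannot be edited to pull task
# B's secrets without also being granted role B.
api_policy = execution_role_policy(
    ["arn:aws:secretsmanager:us-east-1:123456789012:secret:api-db-creds"]
)
worker_policy = execution_role_policy(
    ["arn:aws:secretsmanager:us-east-1:123456789012:secret:worker-queue-key"]
)
print(json.dumps(api_policy, indent=2))
```

With a single shared role, both statements would collapse into one policy and the blast radius of any one compromised task definition grows accordingly.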

Related

Can't access S3 bucket from within Fargate container (Bad Request and unable to locate credentials)

I created a private S3 bucket and a Fargate cluster with a simple task that attempts to read from that bucket using Python 3 and boto3. I've tried this on two different Docker images: on one I get a ClientError from boto saying HeadObject Bad Request (400), and on the other I get NoCredentialsError: Unable to locate credentials.
The only real difference between the images is that the one saying "bad request" is being run normally, while the other is being run manually by me via SSH to the task container. So I'm not sure why one image says "bad request" and the other "unable to locate credentials".
I have tried a couple different IAM policies, including (terraform) the following policies:
data "aws_iam_policy_document" "access_s3" {
  statement {
    effect    = "Allow"
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::bucket_name"]
  }

  statement {
    effect = "Allow"
    actions = [
      "s3:GetObject",
      "s3:GetObjectVersion",
      "s3:GetObjectTagging",
      "s3:GetObjectVersionTagging",
    ]
    resources = ["arn:aws:s3:::bucket_name/*"]
  }
}
Second try:
data "aws_iam_policy_document" "access_s3" {
  statement {
    effect    = "Allow"
    actions   = ["s3:*"]
    resources = ["arn:aws:s3:::*"]
  }
}
And the final one I tried was a built-in policy:
resource "aws_iam_role_policy_attachment" "access_s3" {
  role       = "${aws_iam_role.ecstasks.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}
The bucket definition is very simple:
resource "aws_s3_bucket" "bucket" {
  bucket = "${var.bucket_name}"
  acl    = "private"
  region = "${var.region}"
}
Code used to access the S3 bucket:
import traceback

import boto3
from botocore.exceptions import ClientError

try:
    s3 = boto3.client('s3')
    tags = s3.head_object(Bucket='bucket_name', Key='filename')
    print(tags['ResponseMetadata']['HTTPHeaders']['etag'])
except ClientError:
    traceback.print_exc()
No matter what I do, I'm unable to use boto3 to access AWS resources from within a Fargate container task. I'm able to access the same S3 bucket with boto3 on an EC2 instance without providing any kind of credentials, using only the IAM roles/policies. What am I doing wrong? Is it not possible to access AWS resources in the same way from a Fargate container?
Forgot to mention that I am assigning the IAM roles to the task definition execution policy and task policy.
UPDATE: It turns out that the "unable to locate credentials" error I was having is a red herring. The reason I could not get the credentials was that my direct SSH session did not have the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable set.
AWS Fargate injects an environment variable named AWS_CONTAINER_CREDENTIALS_RELATIVE_URI on your behalf, which contains a URL that boto should use for grabbing API access credentials. So the Bad Request error is the one I'm actually getting and need help resolving. I checked the environment variables inside the container, and the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI value is being set by Fargate.
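If you are unsure whether a session is picking up the task role, one way to check from inside the container (a stdlib-only sketch, assuming the documented ECS link-local credentials endpoint at 169.254.170.2) is:

```python
import os

# ECS/Fargate injects only a relative path; boto's credential provider
# resolves it against the fixed link-local endpoint 169.254.170.2.
ECS_CREDS_HOST = "http://169.254.170.2"

def container_credentials_url():
    """Return the full metadata URL boto would query, or None if the
    variable is missing (e.g. in a plain ssh/docker-exec session)."""
    relative = os.environ.get("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
    if relative is None:
        return None
    return ECS_CREDS_HOST + relative

print(container_credentials_url())
```

Inside the task you can also `curl` that URL to see the temporary credentials the task role provides; in a bare SSH session the function returns None, which matches the NoCredentialsError described above.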
I struggled quite a bit with this issue, constantly having AWS_CONTAINER_CREDENTIALS_RELATIVE_URI wrongly set to None, until I added a custom task role in addition to my existing task execution role.
1) The task execution role is responsible for pulling the container image from ECR and for running the task itself, while 2) the task role is what your Docker container uses to make API requests to other authorized AWS services.
1) For my task execution role I'm using AmazonECSTaskExecutionRolePolicy, with the following JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
2) I finally got rid of the NoCredentialsError: Unable to locate credentials when I added a task role in addition to the task execution role; for instance, one responsible for reading from a certain bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::bucket_name/*"
    }
  ]
}
In summary: make sure your task definition sets both 1) executionRoleArn, for access to run the task, and 2) taskRoleArn, for access to make API requests to authorized AWS services.
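As a sketch of what that distinction looks like when registering a task definition (all names and ARNs here are hypothetical placeholders), the two fields sit side by side; this dict is the shape you would pass to `ecs.register_task_definition(**task_definition)` with a boto3 ECS client:

```python
# Minimal Fargate task definition showing both role fields (placeholders).
task_definition = {
    "family": "my-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "256",
    "memory": "512",
    # Used by ECS itself: pull from ECR, fetch secrets, write logs.
    "executionRoleArn": "arn:aws:iam::123456789012:role/my-app-execution",
    # Used by your code inside the container (e.g. boto3 S3 calls).
    "taskRoleArn": "arn:aws:iam::123456789012:role/my-app-task",
    "containerDefinitions": [
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
        }
    ],
}
```

If taskRoleArn is omitted, the container gets no AWS_CONTAINER_CREDENTIALS_RELATIVE_URI for its own API calls, which is consistent with the NoCredentialsError in the question.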
To allow Amazon S3 read-only access for your container instance role
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles.
Choose the IAM role to use for your container instances (this role is likely titled ecsInstanceRole). For more information, see Amazon ECS Container Instance IAM Role.
Under Managed Policies, choose Attach Policy.
On the Attach Policy page, for Filter, type S3 to narrow the policy results.
Select the box to the left of the AmazonS3ReadOnlyAccess policy and choose Attach Policy.
You need an IAM role for your ECS task to access your S3 bucket.
resource "aws_iam_role" "AmazonS3ServiceForECSTask" {
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": [
          "ecs-tasks.amazonaws.com"
        ]
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
data "aws_iam_policy_document" "bucket_policy" {
  statement {
    principals {
      type        = "AWS"
      identifiers = [aws_iam_role.AmazonS3ServiceForECSTask.arn]
    }
    actions = [
      "s3:ListBucket",
    ]
    resources = [
      "arn:aws:s3:::${var.your_bucket_name}",
    ]
  }

  statement {
    principals {
      type        = "AWS"
      identifiers = [aws_iam_role.AmazonS3ServiceForECSTask.arn]
    }
    actions = [
      "s3:GetObject",
    ]
    resources = [
      "arn:aws:s3:::${var.your_bucket_name}/*",
    ]
  }
}
Then add your IAM role as the task_role_arn of your task definition.
resource "aws_ecs_task_definition" "_ecs_task_definition" {
  task_role_arn            = aws_iam_role.AmazonS3ServiceForECSTask.arn
  execution_role_arn       = aws_iam_role.ECS-TaskExecution.arn
  family                   = "${var.family}"
  network_mode             = var.network_mode[var.launch_type]
  requires_compatibilities = var.requires_compatibilities
  cpu                      = var.task_cpu[terraform.workspace]
  memory                   = var.task_memory[terraform.workspace]
  container_definitions    = module.ecs-container-definition.json
}
ECS Fargate task not applying role
After countless hours of digging, this parameter finally solved the issue for me:
auto_assign_public_ip = true inside a network_configuration block on ecs service.
It turns out the tasks run by this service didn't have a public IP assigned, and thus no connection to the outside world.
Boto3 has a credential lookup order: https://boto3.readthedocs.io/en/latest/guide/configuration.html. When you use AWS-provided images to create your EC2 instance, the instance comes with the aws command pre-installed and AWS credentials available through the instance profile. However, Fargate is only a container, so you need to get AWS credentials to it some other way. One quick (if crude) solution is to add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to the Fargate container's environment.

Custom IAM policy for RDS security group not working

Our project is currently hosted on AWS. We are using the RDS service for the data tier. I need to give one of my IAM users permission to handle IP address addition/removal requests for the security group associated with my RDS instance. I tried making a custom policy for this case. Below is the JSON for the policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "rds:AuthorizeDBSecurityGroupIngress",
        "rds:ListTagsForResource",
        "rds:DownloadDBLogFilePortion",
        "rds:RevokeDBSecurityGroupIngress"
      ],
      "Resource": [
        "arn:aws:rds:ap-south-1:608862704225:secgrp:<security-group name>",
        "arn:aws:rds:ap-south-1:608862704225:db:<db name>"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "rds:DescribeDBClusterSnapshots",
        "rds:DownloadCompleteDBLogFile"
      ],
      "Resource": "*"
    }
  ]
}
This isn't working despite various changes. Can anybody suggest where I am going wrong? Any solution would also be welcome.
Got the answer myself: I was actually trying to make this work through permissions on the RDS instance directly. Instead, security group permissions need to be handled in EC2 policies.

S3 VPC end point Bucket policy

I have multiple EC2 instances originating from a single VPC, and I want to assign a bucket policy to my S3 bucket to make sure that only traffic from that VPC is allowed to access the bucket. So I created an endpoint for that VPC, which added all the policies and routes to the routing table. I assigned the following policy to my bucket:
{
  "Version": "2012-10-17",
  "Id": "Policy1415115909153",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::examplebucket",
        "arn:aws:s3:::examplebucket/*"
      ],
      "Condition": {
        "StringtEquals": {
          "aws:sourceVpce": "vpce-111bbb22"
        }
      }
    }
  ]
}
but it does not work: when I connect to my bucket using the AWS SDK for Node.js, I get an access denied error. The Node.js application is actually running in an EC2 instance launched in the same VPC as the endpoint.
I even tried a VPC-level bucket policy, but I still get an access denied error. Can anyone tell me if I need to include an endpoint parameter in the SDK S3 connection, or anything else?
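Note that the condition operator in the policy above is misspelled ("StringtEquals" instead of "StringEquals"), which alone may explain the failure. One way to catch typos like this before deploying is to check a policy's condition operators against the known IAM set; a quick sketch (the operator list here is deliberately partial, an assumption to extend as needed):

```python
import json

# Partial set of IAM condition operators (extend as needed).
KNOWN_OPERATORS = {
    "StringEquals", "StringNotEquals", "StringLike", "StringNotLike",
    "NumericEquals", "Bool", "IpAddress", "NotIpAddress", "ArnEquals",
}

def unknown_condition_operators(policy_json):
    """Return condition operator names in the policy that are not in
    the known set, e.g. typos like 'StringtEquals'."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # "Statement" may be a single object
        statements = [statements]
    bad = []
    for stmt in statements:
        for op in stmt.get("Condition", {}):
            if op not in KNOWN_OPERATORS:
                bad.append(op)
    return bad

policy = ('{"Statement": [{"Condition": '
          '{"StringtEquals": {"aws:sourceVpce": "vpce-111bbb22"}}}]}')
print(unknown_condition_operators(policy))  # → ['StringtEquals']
```

Tools like `aws accessanalyzer validate-policy` or the IAM console's policy editor perform this kind of validation for you.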

Setting up Amazon Linux instance for CodeDeploy with IAM user credentials

I have created everything needed for a successful deployment.
I tried to make the deployment without configuring the CodeDeploy agent on the Amazon instance, and the deployment [obviously] failed.
After setting it up, though, it succeeded.
So, my question is, should I configure every instance that I use manually?
What if I have 100 instances in the deployment group?
Should I create an AMI with the CodeDeploy agent tool already configured?
EDIT
I have watched this:
https://www.youtube.com/watch?v=qZa5JXmsWZs
with this:
https://github.com/andrewpuch/code_deploy_example
and read this:
http://blogs.aws.amazon.com/application-management/post/Tx33XKAKURCCW83/Automatically-Deploy-from-GitHub-Using-AWS-CodeDeploy
I just cannot understand why I must configure the instance with the IAM creds. Isn't it supposed to take the creds from the role I launched it with?
I am not an expert in AWS roles and policies, but this is what I understood from the CD documentation.
Is there a way to give the IAM user access to the instance so I won't have to set up the CD agent?
EDIT 2
I think that this post kind of answers: http://adndevblog.typepad.com/cloud_and_mobile/2015/04/practice-of-devops-with-aws-codedeploy-part-1.html
But as you can see, I launched multiple instances but only installed the CodeDeploy agent on one instance. What about the others? Do I have to repeat myself and log in to each of them to install it separately? That is OK since I just have 2 or 3, but what if I have hundreds or even thousands of instances? Actually, there are different solutions for this. One of them is: I set up the whole environment on one instance and create an AMI from it. When I launch my working instances, I create them from the one I've already configured instead of the AWS default ones. Some other solutions are available.
Each instance only requires the CodeDeploy agent installed on it. It does not require the AWS CLI to be installed. See AWS CodeDeploy Agent Operations for installation and operation details.
You should create an instance profile/role in IAM that will grant any instance the correct permissions to accept a code deployment through CodeDeploy service.
Create a role called ApplicationServer. To this role, add the following policy. This assumes you are using S3 for your revisions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::codedeploy-example-com/*"
      ]
    },
    {
      "Sid": "Stmt1414002531000",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Sid": "Stmt1414002720000",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
To your specific questions:
So, my question is, should I configure every instance that I use
manually?
What if I have 100 instances in the deployment group? Should I create
an AMI with the aws-cli tool already configured?
Configure an AMI with your base tools, or use CloudFormation or Puppet to manage software installation on a given instance as needed. Again, the AWS CLI is not required for CodeDeploy; only the most current version of the CodeDeploy agent is required.

Issue connecting to AWS SQS using IAM Role with Boto 2.38

I cannot authenticate to AWS Simple Queue Service (SQS) from an EC2 instance using its associated IAM Role with Boto 2.38 library (and Python 3).
I couldn't find anything specific in the documentation about it, but as far as I could understand from examples and other questions around, it was supposed to work just by opening a connection like this:
import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')
queue = conn.get_queue('my_queue')
Instead, I get a null object from the connect method, unless I provide credentials on my environment, or explicitly to the method.
I'm pretty sure my role is OK, because it works for other services like S3, describing EC2 tags, sending metrics to CloudWatch, etc., all transparently. My SQS policy is like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SQSFullAccess",
      "Effect": "Allow",
      "Action": [
        "sqs:*"
      ],
      "Resource": [
        "arn:aws:sqs:us-east-1:<account_id>:<queue_name1>",
        "arn:aws:sqs:us-east-1:<account_id>:<queue_name2>"
      ]
    }
  ]
}
In order to get rid of any suspicion about my policy, I even associated a FullAdmin policy to my role temporarily, without success.
I also verified that it doesn't work with the AWS CLI either (which, as far as I know, uses Boto as well). So the only conclusion I could come up with is that this is a Boto issue with the SQS client.
Would anyone have a different experience with it? I know that switching to Boto 3 would probably solve it, but I'm not considering that right now, and if it really is a bug, I think it should be reported on GitHub anyway.
Thanks.
Answering myself.
Boto 2.38's SQS client does work with IAM roles; I had a bug in my application.
As for the AWS CLI, a credentials file (~/.aws/credentials) was present in my local account and was being used instead of the instance's role, because the role is the last place the CLI looks.
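The shadowing described above follows directly from the lookup order. A toy sketch of the chain (the dicts here are stand-ins for real credential stores, and the provider list is simplified from the boto configuration guide):

```python
def resolve_credentials(env, shared_file, instance_role):
    """Return (source, creds) from the first provider that has any
    credentials, mimicking the boto/AWS CLI lookup chain:
    environment variables -> shared credentials file -> instance role."""
    chain = [
        ("environment", env),
        ("shared-credentials-file", shared_file),  # ~/.aws/credentials
        ("instance-role", instance_role),          # checked last
    ]
    for source, creds in chain:
        if creds:
            return source, creds
    return None, None

# A stale ~/.aws/credentials entry wins over a perfectly good IAM role:
source, _ = resolve_credentials(
    env={},
    shared_file={"aws_access_key_id": "AKIA..."},
    instance_role={"role": "my-instance-role"},
)
print(source)  # → shared-credentials-file
```

Deleting or renaming the local credentials file lets the chain fall through to the instance role, which is why the CLI started working once the file was removed.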