I'm unable to delete a role policy from my AWS account with Boto3. I get an error:
botocore.errorfactory.NoSuchEntityException: An error occurred (NoSuchEntity) when calling the DeleteRolePolicy operation: The role policy with name potatoman9000Policy cannot be found.
The policy and role are created and deleted within the same script. The policy is detached before this particular bit of code runs. I'm not sure why it can't find the policy name.
Here is the creation:
import boto3
import json

# Create IAM policy and Role
def iam_creation(client_name):
    iam_client = boto3.client('iam')
    # Policy template
    client_onboarding_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowListingOfUserFolder",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation"
                ],
                "Effect": "Allow",
                "Resource": [
                    f"arn:aws:s3:::{client_name}"
                ]
            },
            {
                "Sid": "HomeDirObjectAccess",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObjectVersion",
                    "s3:DeleteObject",
                    "s3:GetObjectVersion"
                ],
                "Resource": f"arn:aws:s3:::{client_name}/*"
            }
        ]
    }
    # Trust policy template for the role
    role_onboarding_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": [
                        "transfer.amazonaws.com",
                        "s3.amazonaws.com"
                    ]
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
    # Create managed policy from template
    iam_client.create_policy(
        PolicyName=f'{client_name}Policy',
        PolicyDocument=json.dumps(client_onboarding_policy)
    )
    # Create role from template and establish the trust relationship
    iam_client.create_role(
        RoleName=f'{client_name}',
        AssumeRolePolicyDocument=json.dumps(role_onboarding_policy)
    )
    # Attach created policy to created role
    iam_client.attach_role_policy(
        PolicyArn=f'arn:aws:iam::111111111111:policy/{client_name}Policy',
        RoleName=f'{client_name}'
    )
The creation goes off without any issues. Here is the deletion:
# Delete IAM policy and role
def iam_delete(client_name):
    iam_client = boto3.client('iam')
    iam_resource = boto3.resource('iam')
    role_policy = iam_resource.RolePolicy(f'{client_name}', f'{client_name}Policy')
    role = iam_resource.Role(f'{client_name}')
    # Detach policy from role
    iam_client.detach_role_policy(
        PolicyArn=f'arn:aws:iam::111111111111:policy/{client_name}Policy',
        RoleName=f'{client_name}'
    )
    # Delete policy
    role_policy.delete()
    # Delete role
    role.delete()
I imagine it has something to do with the way I've named the role policy, or not named it. I have confirmed that the Role potatoman9000 does exist in IAM, as does the Policy potatoman9000Policy. Any help is greatly appreciated.
RolePolicy is for inline policies, not managed policies.
When you call delete, it errors out because you are using a managed policy.
From the docs for RolePolicy.delete:
Deletes the specified inline policy that is embedded in the specified IAM role.
To delete a managed policy you should be using delete_policy:
Deletes the specified managed policy.
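A minimal sketch of a corrected iam_delete, keeping the question's placeholder account ID:
import boto3

# Delete IAM managed policy and role
def iam_delete(client_name):
    iam_client = boto3.client('iam')
    policy_arn = f'arn:aws:iam::111111111111:policy/{client_name}Policy'
    # Detach the managed policy from the role first
    iam_client.detach_role_policy(
        PolicyArn=policy_arn,
        RoleName=client_name
    )
    # Delete the managed policy by its ARN (delete on RolePolicy is for inline policies)
    iam_client.delete_policy(PolicyArn=policy_arn)
    # Finally, delete the role itself
    iam_client.delete_role(RoleName=client_name)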
I'm trying to setup my AWS CLI to assume a role using MFA and expiring the creds after 15 minutes (minimum duration_seconds allowed, apparently).
My IAM role's trust policy is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::XXXXXXXXXXXX:user/myuser"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "Bool": {
                    "aws:MultiFactorAuthPresent": "true"
                },
                "NumericLessThan": {
                    "aws:MultiFactorAuthAge": "900"
                }
            }
        }
    ]
}
My CLI config is setup as follows:
[profile xxx]
role_arn = arn:aws:iam::XXXXXXXXXXXX:role/mfa
mfa_serial = arn:aws:iam::XXXXXXXXXXXX:mfa/foobar
source_profile = mfa
When I run a command using the xxx profile above, I'm prompted for MFA the first time, and the credentials remain valid for all subsequent requests. However, after 15 minutes the token is still valid and MFA isn't requested again.
$ aws s3 ls --profile xxx
I tried setting the duration_seconds parameter on my CLI as below:
[profile xxx]
role_arn = arn:aws:iam::XXXXXXXXXXXX:role/mfa
mfa_serial = arn:aws:iam::XXXXXXXXXXXX:mfa/foobar
source_profile = mfa
duration_seconds = 900
But now I'm asked for the MFA token on every command issued, even when the time difference is on the order of seconds.
Am I missing something here?
AWS CLI version: aws-cli/2.0.49 Python/3.7.4 Darwin/19.6.0 exe/x86_64
Appreciate any help.
Thanks in advance!
So, I discovered the reason for this behavior:
As described in this GitHub issue, the AWS CLI treats any session that expires within the next 15 minutes as already expired, refreshing the credentials automatically (or asking for a new one-time passcode, in the case of MFA).
So, setting the session duration to 15 min (900 s) is basically the same as getting a one-time credential.
I just tested setting duration_seconds to 930 (15 min + 30 s), and the session is indeed valid for 30 seconds.
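In other words, a config like this (same profile as above, only the duration changed) should give a 30-second usable window:
[profile xxx]
role_arn = arn:aws:iam::XXXXXXXXXXXX:role/mfa
mfa_serial = arn:aws:iam::XXXXXXXXXXXX:mfa/foobar
source_profile = mfa
# 900s CLI refresh window + 30s of usable session
duration_seconds = 930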
It seems that the issue is with your conditions.
    "Bool": {
        "aws:MultiFactorAuthPresent": "true"
    },
This will always require MFA.
Instead, try something like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowActionsForEC2",
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances",
                "ec2:DescribeInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowActionsForEC2WhenMFAIsPresent",
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                "NumericLessThan": {"aws:MultiFactorAuthAge": "300"}
            }
        }
    ]
}
Users with short-term credentials older than 300 seconds must reauthenticate to gain access to this API.
So for your policy it would look like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::XXXXXXXXXXXX:user/myuser"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "NumericLessThan": {
                    "aws:MultiFactorAuthAge": "900"
                }
            }
        }
    ]
}
The issue is that every condition needs to be met in order for you to be able to assume the role. "aws:MultiFactorAuthPresent": "true" will always check whether you used MFA to log in.
I have created a lambda in region A and an S3 bucket in region B, and I'm trying to access the bucket from the lambda with a boto3 client, but I'm getting an error (access denied). Please suggest a solution for this in Python CDK. Will I need to create any specific policy for it?
Your lambda function requires permissions to read S3.
The easiest way to enable that is to add the AWS managed policy:
arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
to your lambda execution role.
Specifying a region is not required, as S3 bucket names are globally scoped.
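Since the question asks about the Python CDK, here's a minimal sketch of attaching that managed policy; fn is a hypothetical Function construct name:
from aws_cdk import aws_iam as iam

# Attach the AWS managed read-only S3 policy to the lambda's execution role
fn.role.add_managed_policy(
    iam.ManagedPolicy.from_aws_managed_policy_name("AmazonS3ReadOnlyAccess")
)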
You have to explicitly pass the region name of the bucket if it is not in the same region as the lambda (because AWS has region-specific endpoints for S3, which need to be queried explicitly when working with the S3 API).
Initialize your boto3 S3 client as:
import boto3
client = boto3.client('s3', region_name='region_name where bucket is')
See this for the full reference of the boto3 client:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html#boto3.session.Session.client
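If you don't know the bucket's region up front, you can look it up first; a small sketch (the bucket name is a placeholder):
import boto3

# Find the bucket's region, then build a client pinned to it
s3 = boto3.client('s3')
loc = s3.get_bucket_location(Bucket='YOUR-BUCKET-NAME')['LocationConstraint']
region = loc or 'us-east-1'  # S3 returns None for us-east-1
client = boto3.client('s3', region_name=region)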
Edit:
You also need the following policy attached to (or inline in) the role of your lambda:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ExampleStmt",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME/*"
            ]
        }
    ]
}
If you need to list and delete the objects too, then you need to have the following policy instead, attached to (or inline in) the role of the lambda:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ExampleStmt1",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME/*"
            ]
        },
        {
            "Sid": "ExampleStmt2",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME"
            ]
        }
    ]
}
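If you are building the role in Python CDK rather than raw JSON, a rough equivalent of the second policy might look like this; fn is a hypothetical Function construct and YOUR-BUCKET-NAME is still a placeholder:
from aws_cdk import aws_iam as iam

bucket_arn = "arn:aws:s3:::YOUR-BUCKET-NAME"

# Object-level permissions (get and delete objects)
fn.add_to_role_policy(iam.PolicyStatement(
    actions=["s3:GetObject", "s3:DeleteObject"],
    resources=[bucket_arn + "/*"],
))
# Bucket-level permission (list the bucket)
fn.add_to_role_policy(iam.PolicyStatement(
    actions=["s3:ListBucket"],
    resources=[bucket_arn],
))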
I have a lambda function in account A trying to access resources in account B. I created a new lambda role with basic execution permissions plus access to upload logs to CloudWatch.
Here's my function code in Python 3.7:
import boto3
import json

allProfiles = ['account2']

def lambda_handler(event, context):
    sts_connection = boto3.client('sts')
    acct_b = sts_connection.assume_role(
        RoleArn="arn:aws:iam::2222222222:role/role-on-source-account",
        RoleSessionName="account2"
    )
    for profiles in allProfiles:
        print("\nusing profile %s" % profiles)
        newMethod..
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
I also modified the trust policy of the assumed role in account B as described in the documentation: https://aws.amazon.com/premiumsupport/knowledge-center/lambda-function-assume-iam-role/
ERROR: An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::11111111:assumed-role/lambda-role/lambda-function is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::2222222222:role/role-on-source-account
Note: I am able to run this locally without issue, but not in lambda.
Created a new lambda role with basic execution with access to upload logs to cloud watch
Basic execution role for lambda is not enough. You need to explicitly allow your function to AssumeRole. The following statement in your execution role should help:
{
    "Effect": "Allow",
    "Action": [
        "sts:AssumeRole"
    ],
    "Resource": [
        "arn:aws:iam::2222222222:role/role-on-source-account"
    ]
}
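Once that statement is in place, the credentials returned by assume_role can be used to build clients that act in account B. A minimal sketch, with the S3 call only as an example of using the temporary credentials:
import boto3

sts = boto3.client('sts')
resp = sts.assume_role(
    RoleArn="arn:aws:iam::2222222222:role/role-on-source-account",
    RoleSessionName="account2"
)
creds = resp['Credentials']

# Build an S3 client in account B from the temporary credentials
s3_b = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
print(s3_b.list_buckets()['Buckets'])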
Ok, so what we have is:
Your AccountA (your own, trusted account) needs to assume a specific role in the AccountB account.
A role in AccountB (the trusting account) gives your lambda access to a resource there, let's say a bucket: AccountBBucket.
You mentioned you had basic execution for your lambda, and that alone would not be enough...
Solution:
Create a role "UpdateBucket":
You need to establish trust between AccountB (account ID number: 999) and AccountA.
Create an IAM role in AccountB, define AccountA as a trusted entity, and specify a permission policy that allows trusted users to update the AccountB resource (AccountBBucket).
We are in AccountB now (this is the permission policy attached to the UpdateBucket role):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::AccountBBucket"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::AccountBBucket/*"
        }
    ]
}
Grant access:
We will now use the role we created earlier, the (UpdateBucket) role.
This needs to be added to your AccountA permissions, i.e. attached to your lambda's execution role.
We are in AccountA now:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::999:role/UpdateBucket"
    }
}
Note that the 999 above is AccountB's account ID and UpdateBucket is the role that was created in AccountB.
This creates the trust you need for your lambda to access the bucket on AccountB.
More info here:
Delegate Access Across AWS Accounts Using IAM Roles
I have a terraform script that provisions a lambda function on AWS to send emails. I pieced this terraform script together from tutorials and templates on the web in order to use the AWS SES, API Gateway, Lambda and CloudWatch services.
To get permissions to work, though, I had to run the script and then, separately, build a policy in the AWS console and apply it to the lambda function so that it could fully access the SES and CloudWatch services. But it's not at all clear to me how to take that working policy and adapt it to my terraform script. Could anyone please provide or point to guidance on this matter?
The limited/inadequate but otherwise working role in my terraform script looks like this:
resource "aws_iam_role" "iam_for_lambda" {
name = "${var.role_name}"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Effect": "Allow",
"Sid": ""
}
]
} EOF
}
... and the working policy generated in the console (by combining two managed policies for all-CloudWatch and all-SES access):
{
    "permissionsBoundary": {},
    "roleName": "las_role_new",
    "policies": [
        {
            "document": {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Action": [
                            "autoscaling:Describe*",
                            "cloudwatch:*",
                            "logs:*",
                            "sns:*",
                            "iam:GetPolicy",
                            "iam:GetPolicyVersion",
                            "iam:GetRole"
                        ],
                        "Effect": "Allow",
                        "Resource": "*"
                    },
                    {
                        "Effect": "Allow",
                        "Action": "iam:CreateServiceLinkedRole",
                        "Resource": "arn:aws:iam::*:role/aws-service-role/events.amazonaws.com/AWSServiceRoleForCloudWatchEvents*",
                        "Condition": {
                            "StringLike": {
                                "iam:AWSServiceName": "events.amazonaws.com"
                            }
                        }
                    }
                ]
            },
            "name": "CloudWatchFullAccess",
            "id": "ANPAIKEABORKUXN6DEAZU",
            "type": "managed",
            "arn": "arn:aws:iam::aws:policy/CloudWatchFullAccess"
        },
        {
            "document": {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Effect": "Allow",
                        "Action": [
                            "ses:*"
                        ],
                        "Resource": "*"
                    }
                ]
            },
            "name": "AmazonSESFullAccess",
            "id": "ANPAJ2P4NXCHAT7NDPNR4",
            "type": "managed",
            "arn": "arn:aws:iam::aws:policy/AmazonSESFullAccess"
        }
    ],
    "trustedEntities": [
        "lambda.amazonaws.com"
    ]
}
So my question in summary, and put most generally, is this:
given a "policy" built in the aws console (by selecting a bunch of managed policies, etc.), how do you convert that to a "role" as required for the terraform script?
To anyone else who might struggle to understand terraform-aws-policy matters, here's my understanding after some grappling. The game here is to carefully distinguish the various similar-sounding terraform structures (aws_iam_role, aws_iam_role_policy, aws_iam_role_policy_attachment, assume_role_policy, etc.) and to work out how these black-box structures fit together.
First, the point of an aws role is to collect together policies (i.e. permissions to do stuff). By assigning such a role to a service (e.g. lambda), you thereby give that service the permissions described by those policies. A role must have at least one policy sort of built-in to it: the 'assume-role' policy that specifies which service(s) can use ('assume') that role. This assume-role policy is relatively simple and so 'might as well' be included in the terraform script explicitly (using the <<EOF ... EOF syntax above).
Secondly, if you want to now let that service with the (basic) role do anything to other services, then you need to somehow associate additional policies with that role. I've learned that there are several ways to do this but, in order to answer my question most succinctly, I'll now describe the most elegant way I have found to incorporate multiple template policies offered in the AWS console into one's terraform script.
The code is:
# Define variable for the name of the lambda role
variable "role_name" {
  description = "Name for the Lambda role."
  default     = "las-role"
}

# Create role with basic policy enabling lambda service to use it
resource "aws_iam_role" "iam_for_lambda" {
  name               = "${var.role_name}"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": [ "lambda.amazonaws.com" ]
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# Define a list of policy arns given in the AWS console
variable "iam_policy_arn_list" {
  type        = list(string)
  description = "IAM Policies to be attached to role"
  default     = ["arn:aws:iam::aws:policy/CloudWatchFullAccess", "arn:aws:iam::aws:policy/AmazonSESFullAccess"]
}

# Create attachment of the policies for the above arns to our named role
# The count syntax has the effect of looping over the items in the list
resource "aws_iam_role_policy_attachment" "role-policy-attachment" {
  role       = var.role_name
  count      = length(var.iam_policy_arn_list)
  policy_arn = var.iam_policy_arn_list[count.index]
  depends_on = [aws_iam_role.iam_for_lambda]
}
As you can see, the template policies are included here using their arns, which can be found in the AWS console (for example, the arn for full access to Amazon SES appears on the AmazonSESFullAccess policy page in the IAM section of the AWS Management Console).
When you successfully deploy your lambda to AWS using terraform, it will pull in these policies from the arns and generate a permissions json for your lambda function (which you can view in the lambda service section of the aws console) that looks a lot like the json I posted in the question.
I have an EMR cluster with steps that write and delete objects on an S3 bucket. I have been trying to create a bucket policy on the S3 bucket that denies delete access to all principals except for the EMR role and the instance profile. Below is my policy.
{
    "Version": "2008-10-17",
    "Id": "ExamplePolicyId123458",
    "Statement": [
        {
            "Sid": "ExampleStmtSid12345678",
            "Effect": "Deny",
            "Principal": "*",
            "Action": [
                "s3:DeleteBucket",
                "s3:DeleteObject*"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name",
                "arn:aws:s3:::bucket-name/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:userId": [
                        "AROAI3FK4OGNWXLHB7IXM:*", # EMR Role ID
                        "AROAISVF3UYNPH33RYIZ6:*", # Instance Profile Role ID
                        "AIPAIDBGE7J475ON6BAEU"    # Instance Profile ID
                    ]
                }
            }
        }
    ]
}
As I found somewhere, it is not possible to use wildcard entries to match every role session in the "NotPrincipal" element, so I have used the aws:userId condition to match instead.
Whenever I run the EMR step without the bucket policy, the step completes successfully. But when I add the policy to the bucket and re-run, the step fails with the following error.
diagnostics: User class threw exception:
org.apache.hadoop.fs.s3a.AWSS3IOException: delete on s3://vr-dump/metadata/test:
com.amazonaws.services.s3.model.MultiObjectDeleteException: One or more objects could not be deleted
(Service: null; Status Code: 200; Error Code: null; Request ID: 9FC4797479021CEE; S3 Extended Request ID: QWit1wER1s70BJb90H/0zLu4yW5oI5M4Je5aK8STjCYkkhZNVWDAyUlS4uHW5uXYIdWo27nHTak=), S3 Extended Request ID: QWit1wER1s70BJb90H/0zLu4yW5oI5M4Je5aK8STjCYkkhZNVWDAyUlS4uHW5uXYIdWo27nHTak=: One or more objects could not be deleted (Service: null; Status Code: 200; Error Code: null; Request ID: 9FC4797479021CEE; S3 Extended Request ID: QWit1wER1s70BJb90H/0zLu4yW5oI5M4Je5aK8STjCYkkhZNVWDAyUlS4uHW5uXYIdWo27nHTak=)
What is the problem here? Is this related to EMR Spark Configuration or the bucket policy?
Assuming these role IDs are correct (they start with AROA, so they have a valid format), I believe you also need the AWS account number in the policy. For example:
{
    "Version": "2008-10-17",
    "Id": "ExamplePolicyId123458",
    "Statement": [
        {
            "Sid": "ExampleStmtSid12345678",
            "Effect": "Deny",
            "Principal": "*",
            "Action": [
                "s3:DeleteBucket",
                "s3:DeleteObject*"
            ],
            "Resource": [
                "arn:aws:s3:::vr-dump",
                "arn:aws:s3:::vr-dump/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:userId": [
                        "AROAI3FK4OGNWXLHB7IXM:*", # EMR Role ID
                        "AROAISVF3UYNPH33RYIZ6:*", # Instance Profile Role ID
                        "AIPAIDBGE7J475ON6BAEU",   # Instance Profile ID
                        "1234567890"               # Your AWS Account Number
                    ]
                }
            }
        }
    ]
}
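For reference, the unique IDs used above can be looked up with the AWS CLI; the role and instance profile names below are placeholders:
# Role ID (starts with AROA)
aws iam get-role --role-name YOUR-EMR-ROLE --query 'Role.RoleId'
# Instance profile ID (starts with AIPA)
aws iam get-instance-profile --instance-profile-name YOUR-INSTANCE-PROFILE --query 'InstanceProfile.InstanceProfileId'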