I have written some Python code for my Lambda function. It should take the CSV data from the URL and upload/put it in one of the S3 buckets in the same AWS account. All policies and the IAM role have been set up, but the Lambda still isn't performing its task. The code is below. Can someone please check the code and let me know the error?
from urllib.request import urlopen
import boto3
import os
import time

BUCKET_NAME = '***'
CSV_URL = 'http://***'

def lambda_handler(event, context):
    response = urlopen(CSV_URL)
    s3 = boto3.client('s3')
    s3.upload_fileobj(response, BUCKET_NAME, time.strftime('%Y/%m/%d'))
    response.close()
I have attached the following policy to my Lambda function's role, in addition to the basic execution policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::**",
                "arn:aws:s3:::*/*"
            ]
        }
    ]
}
Related
I have created a Lambda in region A and an S3 bucket in region B, and I am trying to access the bucket from the Lambda with a boto3 client, but I am getting an error (access denied). Please suggest a solution for this in Python CDK. Will I need to create any specific policy for it?
Your lambda function requires permissions to read S3.
The easiest way to enable that is to add the AWS managed policy:
arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
to your lambda execution role.
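If you prefer to do this in code rather than the console, a minimal boto3 sketch for attaching that managed policy could look like the following (the role name is a placeholder; use your function's actual execution role):

import boto3

iam = boto3.client('iam')

# Attach the AWS managed read-only policy to the Lambda execution role.
# "my-lambda-execution-role" is a placeholder name.
iam.attach_role_policy(
    RoleName='my-lambda-execution-role',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess'
)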
Specifying region is not required, as S3 buckets have global scope.
You have to explicitly pass the region name of the bucket if it is not in the same region as the Lambda, because AWS has region-specific endpoints for S3 that need to be queried explicitly when working with the S3 API.
Initialize your boto3 S3 client as:
import boto3
client = boto3.client('s3', region_name='region_name where bucket is')
See this for the full reference of the boto3 client:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html#boto3.session.Session.client
---------Edited------------
You also need the following policy attached to (or inline in) the role of your lambda:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ExampleStmt",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME/*"
            ]
        }
    ]
}
If you need to list and delete the objects too, then you need to have the following policy instead, attached to (or inline in) the role of the lambda:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ExampleStmt1",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME/*"
            ]
        },
        {
            "Sid": "ExampleStmt2",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME"
            ]
        }
    ]
}
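For reference, a minimal boto3 sketch of the calls those statements authorize (bucket name and key are placeholders) would be:

import boto3

s3 = boto3.client('s3')
BUCKET = 'YOUR-BUCKET-NAME'  # placeholder

# s3:ListBucket -> list keys in the bucket
for page in s3.get_paginator('list_objects_v2').paginate(Bucket=BUCKET):
    for obj in page.get('Contents', []):
        print(obj['Key'])

# s3:GetObject -> read a single object
body = s3.get_object(Bucket=BUCKET, Key='some/key.csv')['Body'].read()

# s3:DeleteObject -> delete a single object
s3.delete_object(Bucket=BUCKET, Key='some/key.csv')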
I'm getting a malformed-syntax error when running the boto3 create_policy command. Surprisingly, I don't get the error in the AWS console: I tried to debug this using the console's "Policy Editor", clicked the "Validate" button, and it creates the policy with no error. Does anyone know what I'm doing wrong?
iam_client.create_policy(PolicyName='xxxxx-policy',
                         PolicyDocument=json.dumps(dir_name + 'xxxxx-policy.json'))
Running it raises the following error: botocore.errorfactory.MalformedPolicyDocumentException: An error occurred (MalformedPolicyDocument) when calling the CreatePolicy operation: Syntax errors in policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*",
                "iam:ListRoles",
                "sts:AssumeRole"
            ],
            "Resource": "*"
        }
    ]
}
json.dumps turns a Python dictionary into a JSON string; the input shouldn't be a file name. In fact, you don't need the json package for this at all:
import boto3

with open('xxx-policy.json', 'r') as fp:
    iam_client = boto3.client('iam')
    iam_client.create_policy(
        PolicyName='xxx-policy',
        PolicyDocument=fp.read()
    )
You are reading your document from a file:

with open(dir_name + 'xxxxx-policy.json', 'r') as f:
    policy_document = f.read()

iam_client.create_policy(
    PolicyName='xxxxx-policy',
    PolicyDocument=policy_document)
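As an aside, json.dumps is the right tool when the policy lives in your code as a Python dict rather than in a file. A minimal sketch, reusing the policy document from the question:

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:Describe*", "iam:ListRoles", "sts:AssumeRole"],
        "Resource": "*"
    }]
}

iam_client = boto3.client('iam')
iam_client.create_policy(
    PolicyName='xxxxx-policy',
    PolicyDocument=json.dumps(policy)  # serialize the dict, not a file name
)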
I have a Lambda function in account A trying to access resources in account B. I created a new Lambda role with basic execution permissions plus access to upload logs to CloudWatch.
Here's my function code in Python 3.7:
import json
import boto3

allProfiles = ['account2']

def lambda_handler(event, context):
    sts_connection = boto3.client('sts')
    acct_b = sts_connection.assume_role(
        RoleArn="arn:aws:iam::2222222222:role/role-on-source-account",
        RoleSessionName="account2"
    )

    for profiles in allProfiles:
        print("\nusing profile %s" % profiles)
        newMethod..

    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
Also modified the trust policy of the assumed role in account B as mentioned in documentation: https://aws.amazon.com/premiumsupport/knowledge-center/lambda-function-assume-iam-role/
ERROR: An error occurred (AccessDenied) when calling the AssumeRole
operation: User:
arn:aws:sts::11111111:assumed-role/lambda-role/lambda-function is not
authorized to perform: sts:AssumeRole on resource:
arn:aws:iam::2222222222:role/role-on-source-account"
Note: I am able to run this successfully on my local machine, but not in Lambda.
Created a new lambda role with basic execution with access to upload logs to cloud watch
A basic execution role for Lambda is not enough. You need to explicitly allow your function to call AssumeRole. The following statement in your execution role should help:
{
    "Effect": "Allow",
    "Action": [
        "sts:AssumeRole"
    ],
    "Resource": [
        "arn:aws:iam::2222222222:role/role-on-source-account"
    ]
}
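A minimal boto3 sketch for adding that statement as an inline policy on the execution role (the role name and policy name below are placeholders):

import json
import boto3

iam = boto3.client('iam')

assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["sts:AssumeRole"],
        "Resource": ["arn:aws:iam::2222222222:role/role-on-source-account"]
    }]
}

# "lambda-role" and "allow-assume-role" are placeholder names.
iam.put_role_policy(
    RoleName='lambda-role',
    PolicyName='allow-assume-role',
    PolicyDocument=json.dumps(assume_role_policy)
)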
OK, so what we have is:
Your AccountA (the trusted account) needs to assume a specific role in AccountB.
A role in AccountB (the trusting account) grants access to the resource your Lambda wants to use, let's say a bucket: AccountBBucket.
You mentioned you had basic execution for your Lambda, and that alone is not enough.
Solution:
Create a role "UpdateBucket":
You need to establish trust between AccountB (account ID 999) and AccountA.
Create an IAM role in AccountB, define AccountA as a trusted entity, and attach a permission policy that allows trusted users to update the AccountB resource (AccountBBucket).
We are in AccountB now (this is the permission policy on the UpdateBucket role):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::AccountBBucket"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::AccountBBucket/*"
        }
    ]
}
Grant access:
We will now use the role we created earlier, the UpdateBucket role.
This needs to be added to your Lambda's permissions in AccountA.
We are in AccountA now (attach this to the Lambda's execution role):
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::999:role/UpdateBucket"
    }
}
Note that 999 above is AccountB's account ID and UpdateBucket is the role that was created in AccountB.
This would create the trust you need for your lambda to access the bucket on AccountB
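Once that trust is in place, the Lambda in AccountA can assume the role and use the temporary credentials against the bucket. A minimal boto3 sketch (account ID, role name, and bucket name are the placeholders from above):

import boto3

def lambda_handler(event, context):
    # Assume the UpdateBucket role in AccountB (999 is a placeholder account ID).
    sts = boto3.client('sts')
    creds = sts.assume_role(
        RoleArn='arn:aws:iam::999:role/UpdateBucket',
        RoleSessionName='cross-account-s3'
    )['Credentials']

    # Use the temporary credentials to talk to the bucket in AccountB.
    s3 = boto3.client(
        's3',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken']
    )
    return s3.list_objects_v2(Bucket='AccountBBucket').get('KeyCount', 0)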
More info here:
Delegate Access Across AWS Accounts Using IAM Roles
I need to access a file and print its content from a subfolder in an S3 bucket. My file (file_abc) is in a subfolder (subfolder_abc) in a folder (folder_abc) in the S3 bucket.
I am using the following code to do so -
s3_client = boto3.client('s3')

response = s3_client.get_object(Bucket='Bucket_abc',
                                Key='folder_abc/subfolder_abc' + "/" + 'file_abc')
result = str(response["Body"].read())
print(result)
I am getting the following error -
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
How to access data of files in subfolders?
Can you show us the permissions for the bucket?
The way you are attempting to read the file looks correct, so I assume you have an issue with the permissions for reading files in that bucket.
If you can show us the permissions for the bucket and the role your function is executing as, we can be of more help.
Here is a policy example that would allow all access:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::MyExampleBucket/*"
            ]
        }
    ]
}
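To gather those details, here is a small boto3 sketch that prints the bucket policy and the managed policies attached to the execution role (the role name is a placeholder, and get_bucket_policy raises an error if no bucket policy is set):

import boto3

s3 = boto3.client('s3')
iam = boto3.client('iam')

# Bucket policy, if one exists.
print(s3.get_bucket_policy(Bucket='Bucket_abc')['Policy'])

# Managed policies attached to the Lambda's execution role ("my-lambda-role" is a placeholder).
for p in iam.list_attached_role_policies(RoleName='my-lambda-role')['AttachedPolicies']:
    print(p['PolicyArn'])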
I am getting an error when calling the assume-role method of STS. It says that the user is not authorized to perform sts:AssumeRole on resource xxx.
I did the following:
1. I created a role to give access to an S3 bucket.
2. I ran a test in the policy simulator and it works fine.
3. I created a new group, and in it I created a new policy that enables all sts actions over all resources.
4. I ran a test with the policy simulator for sts:AssumeRole, pointing to the ARN of the role created at step 1, and it works fine.
5. I created a new user and put it in the group created at step 3.
6. With the credentials of the new user, I try to get new credentials using sts assume-role, but it throws an error saying my user is not authorized to perform sts:AssumeRole.
What am I doing wrong?
Policy in Group
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "some-large-id",
            "Effect": "Allow",
            "Action": [
                "sts:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
Policy in role
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "another-large-id",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket-name/*"
            ]
        }
    ]
}
And finally I call it like this:

let policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "new-custom-id",
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": ["arn:aws:s3:::my-bucket-name/*"]
        }
    ]
};

let params = {
    DurationSeconds: 3600,
    ExternalId: 'some-value',
    Policy: JSON.stringify(policy),
    RoleArn: "arn:aws:iam::NUMBER:role/ROLE-NAME", // Checked, the role is the same as in step one
    RoleSessionName: this.makeNewSessionId()
};

let sts = new AWS.STS({ apiVersion: '2012-08-10' });

sts.assumeRole(params, (err, data) => {
    if (err) console.log(err);
    else console.log(data);
});
There is a step that was missing: set a trust relationship on the role created in step one. No matter what privileges the user has, if the trust relationship is not set, STS will refuse the request.
Troubleshooting IAM Roles explains how it works.
On the role that you want to assume, for example using the STS Java V2 API (not Node), you need to set a trust relationship. In the trust relationship, specify the user to trust. For example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AWS Account ID>:user/JohnDoe" // Specify the ARN of your IAM user.
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Now you can, for example, run a Java program to invoke the assumeRole operation.
package com.example.sts;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sts.StsClient;
import software.amazon.awssdk.services.sts.model.AssumeRoleRequest;
import software.amazon.awssdk.services.sts.model.StsException;
import software.amazon.awssdk.services.sts.model.AssumeRoleResponse;
import software.amazon.awssdk.services.sts.model.Credentials;
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.time.format.FormatStyle;
import java.util.Locale;

/**
 * To make this code example work, create a Role that you want to assume.
 * Then define a Trust Relationship in the AWS Console. You can use this as an example:
 *
 * {
 *   "Version": "2012-10-17",
 *   "Statement": [
 *     {
 *       "Effect": "Allow",
 *       "Principal": {
 *         "AWS": "<Specify the ARN of the IAM user you are using in this code example>"
 *       },
 *       "Action": "sts:AssumeRole"
 *     }
 *   ]
 * }
 *
 * For more information, see "Editing the Trust Relationship for an Existing Role" in the AWS Directory Service guide.
 */
public class AssumeRole {

    public static void main(String[] args) {

        String roleArn = "arn:aws:iam::000540000000:role/s3role"; // args[0];
        String roleSessionName = "mysession101"; // args[1];

        Region region = Region.US_EAST_1;
        StsClient stsClient = StsClient.builder()
                .region(region)
                .build();

        try {
            AssumeRoleRequest roleRequest = AssumeRoleRequest.builder()
                    .roleArn(roleArn)
                    .roleSessionName(roleSessionName)
                    .build();

            AssumeRoleResponse roleResponse = stsClient.assumeRole(roleRequest);
            Credentials myCreds = roleResponse.credentials();

            // Display the time when the temp creds expire
            Instant exTime = myCreds.expiration();

            // Convert the Instant to a readable date
            DateTimeFormatter formatter =
                    DateTimeFormatter.ofLocalizedDateTime(FormatStyle.SHORT)
                            .withLocale(Locale.US)
                            .withZone(ZoneId.systemDefault());
            formatter.format(exTime);
            System.out.println("The temporary credentials expire on " + exTime);

        } catch (StsException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
}
Without setting the Trust Relationship, this code does not work.
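For completeness, the same flow in Python with boto3 is only a few lines (the role ARN and session name below are the same placeholders as in the Java example):

import boto3

sts = boto3.client('sts')
resp = sts.assume_role(
    RoleArn='arn:aws:iam::000540000000:role/s3role',  # placeholder role ARN
    RoleSessionName='mysession101'
)
print('The temporary credentials expire on', resp['Credentials']['Expiration'])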
I met the same problem. These are the steps I took to fix it:
Create a new role and attach the policy AmazonS3FullAccess (copy the role ARN; it is used in the code below).
Select the Trust relationships tab - Edit Trust Relationship:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::IAM_USER_ID:user/haipv", // the ARN to be granted access; use * for all
                "Service": "s3.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
update Trust relationships
Example code:
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;
import com.amazonaws.services.securitytoken.model.AssumeRoleResult;
import com.amazonaws.services.securitytoken.model.Credentials;

public class Main {

    public static void main(String[] args) {

        Regions clientRegion = Regions.AP_SOUTHEAST_1;
        String roleARN = "arn:aws:iam::IAM_USER_ID:role/haipvRole"; // the role ARN copied above
        String roleSessionName = "haipv-session";
        String bucketName = "haipv.docketName"; // file_example_MP4_640_3MG.mp4
        String accesskey = "YOURKEY";
        String secretkey = "YOUR SECRET KEY";

        try {
            BasicAWSCredentials credentials = new BasicAWSCredentials(accesskey, secretkey);

            // Creating the STS client is part of your trusted code. It has
            // the security credentials you use to obtain temporary security credentials.
            AWSSecurityTokenService stsClient = AWSSecurityTokenServiceClientBuilder.standard()
                    .withCredentials(new AWSStaticCredentialsProvider(credentials))
                    .withRegion(clientRegion)
                    .build();

            // Obtain credentials for the IAM role. Note that you cannot assume the role of an AWS root account;
            // Amazon S3 will deny access. You must use credentials for an IAM user or an IAM role.
            AssumeRoleRequest roleRequest = new AssumeRoleRequest()
                    .withRoleArn(roleARN)
                    .withRoleSessionName(roleSessionName);

            AssumeRoleResult roleResponse = stsClient.assumeRole(roleRequest);
            Credentials sessionCredentials = roleResponse.getCredentials();

            // Create a BasicSessionCredentials object that contains the credentials you just retrieved.
            BasicSessionCredentials awsCredentials = new BasicSessionCredentials(
                    sessionCredentials.getAccessKeyId(),
                    sessionCredentials.getSecretAccessKey(),
                    sessionCredentials.getSessionToken());

            // Provide temporary security credentials so that the Amazon S3 client
            // can send authenticated requests to Amazon S3. You create the client
            // using the sessionCredentials object.
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
                    .withRegion(clientRegion)
                    .build();

            // Verify that assuming the role worked and the permissions are set correctly
            // by getting a set of object keys from the bucket.
            ObjectListing objects = s3Client.listObjects(bucketName);
            System.out.println("No. of Objects: " + objects.getObjectSummaries().size());
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
Refer to the official documentation in this link.
It works for me.
In my case, in addition to adding "Action": "sts:AssumeRole" (for the specific ARN) under the Trust relationships tab, I also had to add the following in the Permissions tab:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
In addition to setting a trust relationship: if the region configured in your environment is one of the opt-in ("enabled") regions, e.g. af-south-1, and that region is not enabled in the account of the role you assume, you will get an unauthorized error, even if all your permissions are configured correctly.
Just putting this here for people also encountering this. In my .aws/config file I had a line for role_arn and I mistakenly put in my user ARN. You don't need to have that line at all unless you actually want the profile to assume a role.
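For reference, a role-assuming profile in .aws/config typically looks like the sketch below (the profile name, account ID, and role name are placeholders); a plain user profile should not contain role_arn at all:

# ~/.aws/config
[profile assume-example]
role_arn = arn:aws:iam::111111111111:role/SomeRole   ; role to assume, not a user ARN
source_profile = default                             ; profile whose credentials are used to call AssumeRole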