Read and write from/to S3 bucket using access points with boto3 - python-3.x

I have to access an S3 bucket using access points with boto3.
I have created an access point with a policy to allow reading and writing (<access_point_arn> is my access point ARN):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "<access_point_arn>/object/*"
        }
    ]
}
The official boto3 documentation mentions access points and says that the access point ARN can be used in place of the bucket name (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html). There are no examples on the official documentation site for developers either (https://docs.aws.amazon.com/AmazonS3/latest/dev/using-access-points.html).
So, based on this information, I assume that the right way to use it is:
import boto3
s3 = boto3.resource('s3')
s3.Bucket('<access_point_arn>').download_file('hello.txt', '/tmp/hello.txt')
When I execute this code in a Lambda function with the AmazonS3FullAccess managed policy attached, I get a ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden.
Both the Lambda function and the S3 access point are connected to the same VPC.

My first guess is that you are missing permissions that have to be defined (1) on the bucket (bucket policy) and (2) on the IAM user or role which you are using in the boto3 SDK.
(1) From the documentation I can see that
For an application or user to be able to access objects through an access point, both the access point and the underlying bucket must permit the request.
You could, for instance, add a bucket policy that delegates access control to the access points so that you don't have to specify each principal that comes via the access points. An example is given in the linked docs; a minimal sketch also follows the notes below.
(2) As stated in your question, you are already using the AmazonS3FullAccess policy in your LambdaExecutionRole. My only guess (i.e. what happened to me) is that there is, e.g., KMS encryption on the objects in your bucket and your role is missing permissions for the KMS actions. Try executing the function with the Admin policy attached and see if it works. If it does, find out which specific permissions are missing.
Some further notes: I assume you
didn't restrict the access point to be available within a specific VPC only.
are blocking public access.
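Here is a rough sketch of such a delegating bucket policy applied with boto3 (the bucket name and account ID are placeholders; the condition key follows the pattern described in the linked docs):
import json
import boto3

bucket_name = "my-bucket"        # placeholder: the bucket behind the access point
account_id = "111122223333"      # placeholder: the account that owns the access points

# Allow any action on the bucket as long as the request comes through an
# access point owned by this account; the access point policy then decides
# what is actually permitted.
delegation_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "*",
            "Resource": [
                f"arn:aws:s3:::{bucket_name}",
                f"arn:aws:s3:::{bucket_name}/*",
            ],
            "Condition": {
                "StringEquals": {"s3:DataAccessPointAccount": account_id},
            },
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(delegation_policy))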

Replace the Resource in your policy and the bucket reference in your code with the following:
"Resource": "arn:aws:s3:region_name:<12-digit account_id>:bucket_name"
s3.Bucket('bucket_name').download_file('hello.txt', '/tmp/hello.txt')
Hope it helps.

Related

Why are my lambda/alexa-hosted skill permissions being denied?

My goal is to integrate an Alexa-hosted skill with AWS IoT. I'm getting an access denied exception running the following Python code from this thread:
import codecs
import csv
import urllib.request

import boto3

iota = boto3.client('iotanalytics')
response = iota.get_dataset_content(datasetName='my_dataset_name', versionId='$LATEST', roleArn = "arn:aws:iam::123456789876:role/iotTest")
contentState = response['status']['state']
if contentState == 'SUCCEEDED':
    url = response['entries'][0]['dataURI']
    stream = urllib.request.urlopen(url)
    reader = csv.DictReader(codecs.iterdecode(stream, 'utf-8'))
What's weird is that the get_dataset_content() method described here has no mention of needing permissions or credentials. Despite this, I have also gone through the steps to use personal AWS resources with my alexa-hosted skill with no luck. As far as I can tell there is no place for me to specify the ARN of the role with the correct permissions. What am I missing?
Oh, and here's the error message the code above throws:
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the GetDatasetContent operation: User: arn:aws:sts::123456789876:assumed-role/AlexaHostedSkillLambdaRole/a224ab4e-8192-4469-b56c-87ac9a34a3e8 is not authorized to perform: iotanalytics:GetDatasetContent on resource: arn:aws:iotanalytics:us-east-1:123456789876:dataset/my_project_name
I have created a role called demo, which has complete admin access. I have also given it the following trust relationship:
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "iotanalytics.amazonaws.com"
},
"Action": "sts:AssumeRole"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789876:role/AlexaHostedSkillLambdaRole"
},
"Action": "sts:AssumeRole"
}
]
}
The Trust relationships tab displays this as well:
Trusted entities
The identity provider(s) iotanalytics.amazonaws.com
arn:aws:iam::858273942573:role/AlexaHostedSkillLambdaRole
I ran into this today and, after an hour of pondering what was going on, I figured out my problem; I think it may be the same as what you were running into.
As it turns out, most of the guides out there don't mention that you have to do some work to make the assumed role the actual role that is used when you build the boto3 resource or client.
This is a good reference for that - AWS: Boto3: AssumeRole example which includes role usage
Basically, from my understanding, if you do not do that, the boto3 calls will still execute under the same base role that the Alexa Lambda uses; you must first assume the role and then use the returned credentials.
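In code, that looks roughly like this (a minimal sketch; the role ARN is the 'demo' role from your trust policy and the session name is arbitrary):
import boto3

# 1. Assume the target role with STS (this call runs under the Lambda's base role).
sts = boto3.client('sts')
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::123456789876:role/demo',      # the role you created
    RoleSessionName='alexa-iotanalytics-session'        # arbitrary session name
)
creds = assumed['Credentials']

# 2. Build the iotanalytics client from the temporary credentials,
#    so subsequent calls are authorized as the assumed role.
iota = boto3.client(
    'iotanalytics',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

response = iota.get_dataset_content(datasetName='my_dataset_name', versionId='$LATEST')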
Additionally, the role you're assuming must have the privileges it needs to do whatever you are trying to do, but that's the easy part.
As I look at your code, I see: roleArn = "arn:aws:iam::123456789876:role/iotTest"
Replace it with the ARN of a role that is allowed to perform iotanalytics:GetDatasetContent.
In addition, I assume you didn't paste all of your code, since you are trying to access arn:aws:iotanalytics:us-east-1:123456789876:dataset/my_project_name. I doubt that your account ID is really 123456789876, so it looks like there are some more ARNs in your code that still need to be filled in.

How can I assign bucket-owner-full-control when creating an S3 object with boto3?

I'm using the Amazon boto3 library in Python to upload a file into another user's bucket. The bucket policy applied to the other user's bucket is configured like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateS3BucketList",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::uuu"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bbb"
        },
        {
            "Sid": "DelegateS3ObjectUpload",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::uuu"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::bbb",
                "arn:aws:s3:::bbb/*"
            ]
        }
    ]
}
where uuu is my user id and bbb is the bucket name belonging to the other user. My user and the other user are IAM accounts belonging to different organisations. (I know this policy can be written more simply, but the intention is to add a check on the upload to block objects without appropriate permissions being created).
I can then use the following code to list all objects in the bucket and also to upload new objects to the bucket. This works; however, the owner of the bucket has no access to the object, due to Amazon's default of making objects private to the creator of the object:
import base64
import hashlib
from boto3.session import Session
access_key = "value generated by Amazon"
secret_key = "value generated by Amazon"
bucketname = "bbb"
content_bytes = b"hello world!"
content_md5 = base64.b64encode(hashlib.md5(content_bytes).digest()).decode("utf-8")
filename = "foo.txt"
sess = Session(aws_access_key_id=access_key, aws_secret_access_key=secret_key)
bucket = sess.resource("s3").Bucket(bucketname)
for o in bucket.objects.all():
    print(o)

s3 = sess.client("s3")
s3.put_object(
    Bucket=bucketname,
    Key=filename,
    Body=content_bytes,
    ContentMD5=content_md5,
    # ACL="bucket-owner-full-control"  # Uncomment this line to generate the error
)
As soon as I uncomment the ACL option, the code generates an Access Denied error message. If I redirect this to point to a bucket inside my own organisation, the ACL option succeeds and the owner of the bucket is given full permission to the object.
I'm now at a loss to figure this out, especially as Amazon's own advice appears to be to do it the way I have shown.
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/
https://aws.amazon.com/premiumsupport/knowledge-center/s3-require-object-ownership/
It's not enough to have permissions in the bucket policy only.
Check whether your IAM user (or role) is missing the s3:PutObjectAcl permission in IAM.
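For example, a minimal inline policy attached to the uploading IAM user with boto3 might look like this (a sketch; the user name is a placeholder and bbb is the bucket from your policy):
import json
import boto3

iam = boto3.client("iam")

# Grant the uploading user both PutObject and PutObjectAcl on the target
# bucket, so it can set the bucket-owner-full-control ACL at upload time.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:PutObjectAcl"],
        "Resource": "arn:aws:s3:::bbb/*",
    }],
}

iam.put_user_policy(
    UserName="my-uploading-user",        # placeholder user name
    PolicyName="AllowPutObjectWithAcl",
    PolicyDocument=json.dumps(policy_document),
)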
When using the resource methods in boto3, there can be several different API calls being made, and it isn't always obvious which calls are being made.
In comparison, when using client methods in boto3, there is a 1-to-1 mapping between the API call that is being made in boto3, and the API call received by AWS.
Therefore, it is likely that the resource.put_object() method is calling an additional API, such as PutObjectAcl. You can confirm this by looking in AWS CloudTrail and seeing which API calls are being made from your app.
In such a case, you would need the additional s3:PutObjectAcl permission. This would be needed if the upload process first creates the object, and then updates the object's Access Control List.
When using the client methods for uploading a file, there is also the ability to specify an ACL, which I think gets applied directly rather than requiring a second API call. Thus, using the client method to create the object probably would not require this additional permission.
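For instance, with the client's upload_file the ACL travels with the upload itself (a sketch; the local path is a placeholder, and bbb / foo.txt are taken from your example):
import boto3

s3 = boto3.client("s3")

# The ACL is sent as part of the upload request itself, so the bucket owner
# gets full control of the object without a separate PutObjectAcl call.
s3.upload_file(
    Filename="/tmp/foo.txt",             # placeholder local path
    Bucket="bbb",
    Key="foo.txt",
    ExtraArgs={"ACL": "bucket-owner-full-control"},
)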

Terraform - how to remove ability to edit via console?

I have looked in the Terraform documentation for a solution to this issue but have not found anything. I have a problem where my AWS account has thousands of EC2s, SQS queues, SNS topics, DynamoDB tables and tons of other stuff. Some of this stuff is managed by Terraform and some of it is not. I want to be able to make it so a given Terraform resource is not able to be edited via the console. A simple example of an ideal configuration is as follows:
resource "aws_sns_topic" "my_topic" {
name = "my_topic_name"
is_console_configurable = false
}
Is something like the above possible to do? Or what is the best way to go about solving this issue?
Thanks in advance
Terraform itself can't directly control what the AWS console allows or does not allow.
I think in order to get an effect like this you'd need to use very granular IAM policies so that the credentials that your team is using to log in to the AWS Console do not have access to make changes to the objects managed by Terraform. You'd then use different credentials to run Terraform which do have the necessary access.
Coordinating policies at such a fine level of detail will be complicated, though. I think the closest approximation of what you showed in your example would be an IAM policy containing "Deny" statements, which you would then associate with all of the principals associated with users who have AWS Console access.
resource "aws_sns_topic" "my_topic" {
name = "my_topic_name"
}
resource "aws_iam_policy" "disable_sns_console" {
name = "SNS Topic Disable Console"
# ...
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Resource": aws_sns_topic.my_topic.arn,
},
]
})
}
You'd need to find some suitable IAM user, role, or group object to attach this policy to and ensure that every credential used for console access is associated with whatever object confers this policy.
This sort of "default allow, deny specific objects" policy is tricky because it will "fail open" if you don't set it up correctly. However, if your goal is more to inspire good behavior than to implement an infallible security layer then perhaps this compromise is reasonable.

Unable to access S3 file with IAM role from EC2

I created an IAM role 'test' and assigned it to an EC2 instance. I also created an S3 bucket with this bucket policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1475837721706",
    "Statement": [
        {
            "Sid": "Stmt1475837720370",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::770370070203:role/test"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::test-role-123/*"
        }
    ]
}
From EC2, I got the AccessKey and SecretKey from this AWS article by sending a curl request to
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
Using the response from the above, I wrote a Node script to make a request for the resource in the bucket:
var AWS = require('aws-sdk');
var d = {
"Code" : "Success",
"LastUpdated" : "2016-10-07T12:28:09Z",
"Type" : "AWS-HMAC",
"AccessKeyId" : "ASIAIMJBHYLH6GWOWNMQ",
"SecretAccessKey" : "7V/k5nvFdhXOcT+nhYjGqHM4QmUWjNBUM1ERJQJs",
"Token" : "FQoDYXdzEO7//////////wEaDGG+SgxD4Es4Z1RBZCKzAz855JuKfm8s7LDcP5T9TGvDdJELsYTzPi47HJ9Q5oaK8OTb0Us0RjvpGW278Mb1gg1dNip1VD2N/GW5/1TFC6xhNpnnZ9+LNkJAwVVZg5raGM91k56X/VOA++/5WivSpO4jWg8fZDibivVyHuoMJJTkurFtEXrweDOCqpiabypTCc5jFtX8NfQuHubwl4C1jp2pMasVS1jwhjU72TA8Pn9EsIIvh8JXDC1dVfppwnslolAeJyOOAHdL1AQSs3nI6IvPCtKhBjtDaVuoiH/lHrnKrw6AeMHoTYQay4wOYRnE4ffngtksekZEULXvERWE4NCs3leXGMqrdzOr8xdZ9m0j3IkshqSS56fkq6E9JtLhSVGyy44ELrL7kYW/dpHE03V+dwQPXMhRafjsVsPD7sUnBfH/+4yyL0VDX1vlFRKbRi50i/Eqvxsb9bcSTsE0W5yWmOWR8reTTYWcWyQXGvxKVYVxLWZKVRfmNfx6IX2sqan7e7pjCtUrqXB1TBMpXdy8KSH9qoJtNAQTYBXws7oFLYY+F2esnNCma0bdNcCeAQ6t/6aPfUdpdLgv8BcGciZxayiqqd6/BQ==",
"Expiration" : "2016-10-07T18:51:57Z"
};
AWS.config.accessKeyId = d.AccessKeyId;
AWS.config.secretAccessKey = d.SecretAccessKey;
var s3params = {Key: "test.json", Bucket:"test-role-123"};
AWS.config.region = 'ap-south-1';
var s3 = new AWS.S3();
s3.getSignedUrl('getObject', s3params, function(err, url) {
console.log(url);
});
On running this code I do get a signed URL, but using it gives an InvalidAccessKeyId error. I suspected the S3 bucket policy might be wrong, so I tried a similar policy with IAM user credentials, and that works completely.
Any hints or suggestions are welcome.
There are three things to note:
How credentials are provided and accessed from an Amazon EC2 instance
How to assign permissions for access to Amazon S3
How Pre-Signed URLs function
1. How credentials are provided and accessed from an Amazon EC2 instance
When an Amazon EC2 instance is launched with an IAM Role, the Instance Metadata automatically provides temporary access credentials consisting of an Access Key, Secret Key and Token. These credentials are rotated approximately every six hours.
Any code that uses an AWS SDK (eg Python, Java, PHP) knows how to automatically retrieve these credentials. Therefore, code running on an Amazon EC2 instance that was launched with an IAM role does not require you to retrieve nor provide access credentials -- it just works automagically!
So, in your above code sample, you could remove any lines that specifically refer to credentials. Your job is simply to ensure that the IAM Role has sufficient permissions for the operations you wish to perform.
This also applies to the AWS Command-Line Interface (CLI), which is actually just a Python program that provides command-line access to AWS API calls. Since it uses the AWS SDK for Python, it automatically retrieves the credentials from Instance Metadata and does not require credentials when used from an Amazon EC2 instance that was launched with an IAM Role.
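For example, in Python with boto3, the credentials never need to appear in the code at all (a sketch, assuming the instance was launched with your 'test' role and using the bucket from your question):
import boto3

# No access key, secret key or token anywhere: on an EC2 instance launched
# with an IAM Role, the SDK fetches (and refreshes) temporary credentials
# from the Instance Metadata service automatically.
s3 = boto3.client("s3", region_name="ap-south-1")

response = s3.get_object(Bucket="test-role-123", Key="test.json")
print(response["Body"].read())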
2. How to assign permissions for access to Amazon S3
Objects in Amazon S3 are private by default. There are three ways to assign permission to access objects:
Object ACLs (Access Control Lists): These are permissions on the objects themselves
Bucket Policies: This is a set of rules applied to the bucket as a whole, but it can also specify permissions related to a subset of a bucket (eg a particular path within the bucket)
IAM Policies that are applied to IAM Users, Groups or Roles: These permissions apply specifically to those entities
Since you want to grant access to Amazon S3 objects to a specific IAM User, it is better to assign permissions via an IAM Policy attached to that user, rather than as part of the Bucket Policy.
Therefore, you should:
Remove the Bucket Policy
Create an Inline Policy in IAM and attach it to the desired IAM User. The policy then applies to that User and does not require a Principal
Here is a sample policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::MY-BUCKET/*"
            ]
        }
    ]
}
I have recommended an Inline Policy because this policy applies to just one user. If you are assigning permissions to many users, it is recommended to attach the policy to an IAM Group, and the Users assigned to that group will then inherit the permissions. Alternatively, create an IAM Policy and then attach that policy to all relevant Users.
3. How Pre-Signed URLs function
Amazon S3 Pre-Signed URLs are a means of granting temporary access to Amazon S3 objects. The generated URL includes:
The Access Key of an IAM User that has permission to access the object
An expiration time
A signature created via a hash operation that authorises the URL
The key point to realise is related to the permissions used when generating the pre-signed URL. As mentioned in the Amazon S3 documentation Share an Object with Others:
Anyone with valid security credentials can create a pre-signed URL. However, in order to successfully access an object, the pre-signed URL must be created by someone who has permission to perform the operation that the pre-signed URL is based upon.
This means that the credentials used when generating the pre-signed URL are also the credentials used as part of the pre-signed URL. The entity associated with those credentials, of course, needs permission to access the object -- the pre-signed URL is merely a means of on-granting access to an object for a temporary period.
What this also means is that, in the case of your example, you do not need to create a specific role for granting access to the object(s) in Amazon S3. Instead, you can use a more permissive IAM Role with your Amazon EC2 instance (for example, one that can also upload objects to S3) but when it generates a pre-signed URL it is only granting temporary access to the object (and not other permissions, such as the upload permission).
If the software running on your Amazon EC2 instance only interacts with AWS to create signed URLs, then your Role that has only GetObject permissions is fine. However, if your instance wants to do more, then create a Role that grants the instance the appropriate permissions (including GetObject access to S3) and generate Signed URLs using that Role.
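For example, generating a pre-signed URL with boto3 from the instance looks like this (a sketch; the one-hour expiry is an arbitrary choice), and the URL is signed with whatever credentials the client picked up, in this case the instance's Role:
import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

# The URL is signed with the caller's credentials (here, the EC2 instance
# Role), so that Role must be allowed to GetObject on the key.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "test-role-123", "Key": "test.json"},
    ExpiresIn=3600,  # one hour; arbitrary choice
)
print(url)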
If you wish to practice generating signed URLs, recent versions of the AWS Command-Line Interface (CLI) include an aws s3 presign s3://path command that can generate pre-signed URLs. Try it with various --profile settings to see how it works with different IAM Users.

User is not authorized to perform: dynamodb:PutItem on resource

I am trying to access DynamoDB from my Node app deployed on AWS Elastic Beanstalk. I am getting the error
User is not authorized to perform: dynamodb:PutItem on resource
It works perfectly fine locally, but when I deploy it to AWS it stops working.
A DynamoDB access denied error is generally a policy issue. Check the IAM user/role policies that you are using. A quick check is to add the
AmazonDynamoDBFullAccess
policy to your role via the "Permissions" tab in the AWS console. If it works after that, it means you need to create the right access policy and attach it to your role.
Check the access key you are using to connect to DynamoDB in your Node app on AWS. This access key will belong to a user that does not have the necessary privileges in IAM. So, find the IAM user, create or update an appropriate policy and you should be good.
For Beanstalk you need to set up user policies when you publish. Check out the official docs here.
And check out the example from here too, courtesy of @Tirath Shah.
Granting full DynamoDB access using the AWS managed policy AmazonDynamoDBFullAccess is not recommended and is not a best practice.
Instead, try adding your table ARN to the Resource key in your role's policy JSON:
"Resource": "arn:aws:dynamodb:<region>:<account_id>:table/<dynamodb_table_name>"
In my case (I was trying to write to a DynamoDB table from a SageMaker notebook for experimental purposes), the complete error looks like this:
ClientError: An error occurred (AccessDeniedException) when calling the UpdateItem operation: User: arn:aws:sts::728047644461:assumed-role/SageMakerExecutionRole/SageMaker is not authorized to perform: dynamodb:UpdateItem on resource: arn:aws:dynamodb:eu-west-1:728047644461:table/mytable
I needed to go to AWS Console -> IAM -> Roles -> SageMakerExecutionRole, and Attach these two Policies:
AmazonDynamoDBFullAccess
AWSLambdaInvocation-DynamoDB
In a real-world scenario, though, I'd advise following the least-privilege philosophy and applying a policy that allows only the put/update item calls to go through, in order to avoid accidents (e.g. deleting a record from your table).
Sign in to IAM > Roles and select the role used by your service. Make sure the DynamoDB resource in its policy is correct.
