Amazon Web Services: Setting S3 policy to allow putObject and getObject but deny listBucket

I am using getObject and putObject requests on Amazon S3, and while creating a policy for access to the bucket I discovered that if I don't allow listBucket I get an 'access denied' error.
The problem is that listBucket means a user can list the keys in the bucket, and this presents a security threat.
Is it possible to allow getObject and putObject without allowing listBucket, or is there a workaround for this?
Here is the policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt##",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::myBucket"
      ]
    },
    {
      "Sid": "Stmt##",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::myBucket/*"
      ]
    }
  ]
}

From the Get Object documentation:
You need the s3:GetObject permission for this operation. For more information, go to Specifying Permissions in a Policy in the Amazon Simple Storage Service Developer Guide. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
I've confirmed this behavior by testing with a policy essentially identical to yours.
I am able to get an existing object without trouble, whether or not I have the s3:ListBucket privilege, as long as I have the s3:GetObject privilege.
The behavior changes only if I don't also have the s3:ListBucket privilege, and I request an object that does not exist. In that case, S3 will not admit to me whether that object exists -- I'm denied access to knowledge about the existence of the object, since I'm not authorized to see the list.
Response for a valid request to a nonexistent object, without s3:ListBucket:
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>xxxx</RequestId>
  <HostId>xxxx</HostId>
</Error>
Response for a valid request for the same nonexistent object, with s3:ListBucket:
<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
  <Key>fakefile.txt</Key>
  <RequestId>xxxx</RequestId>
  <HostId>xxxx</HostId>
</Error>
So, on objects that don't actually exist, "access denied" is the expected response without s3:ListBucket. Otherwise, it works as expected.
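To see the difference programmatically, here is a minimal boto3 sketch (the bucket and key names are hypothetical) that distinguishes the two error codes:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
try:
    s3.get_object(Bucket="myBucket", Key="fakefile.txt")  # key does not exist
except ClientError as e:
    # Prints "NoSuchKey" if the caller has s3:ListBucket on the bucket,
    # and "AccessDenied" if it does not.
    print(e.response["Error"]["Code"])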

Related

Why are my Lambda/Alexa-hosted skill permissions being denied?

My goal is to integrate an Alexa-hosted skill with AWS IoT. I'm getting an access denied exception running the following Python code from this thread:
import codecs
import csv
import urllib.request

import boto3

iota = boto3.client('iotanalytics')
response = iota.get_dataset_content(datasetName='my_dataset_name',
                                    versionId='$LATEST',
                                    roleArn='arn:aws:iam::123456789876:role/iotTest')
contentState = response['status']['state']
if contentState == 'SUCCEEDED':
    url = response['entries'][0]['dataURI']
    stream = urllib.request.urlopen(url)
    reader = csv.DictReader(codecs.iterdecode(stream, 'utf-8'))
What's weird is that the get_dataset_content() method described here has no mention of needing permissions or credentials. Despite this, I have also gone through the steps to use personal AWS resources with my Alexa-hosted skill, with no luck. As far as I can tell, there is no place for me to specify the ARN of the role with the correct permissions. What am I missing?
Oh, and here's the error message the code above throws:
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the GetDatasetContent operation: User: arn:aws:sts::123456789876:assumed-role/AlexaHostedSkillLambdaRole/a224ab4e-8192-4469-b56c-87ac9a34a3e8 is not authorized to perform: iotanalytics:GetDatasetContent on resource: arn:aws:iotanalytics:us-east-1:123456789876:dataset/my_project_name
I have created a role called demo, which has complete admin access. I have also given it the following trust relationship:
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "iotanalytics.amazonaws.com"
},
"Action": "sts:AssumeRole"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789876:role/AlexaHostedSkillLambdaRole"
},
"Action": "sts:AssumeRole"
}
]
}
The trust relationships tab displays this as well:
Trusted entities
The identity provider(s) iotanalytics.amazonaws.com
arn:aws:iam::858273942573:role/AlexaHostedSkillLambdaRole
I ran into this today, and after an hour of pondering what was going on, I figured out my problem; I think it may be the same as what you were running into.
As it turns out, most of the guides out there don't mention that you have to do some work to make the assumed role the actual role that is used when you build the boto3 resource or client.
This is a good reference for that: AWS: Boto3: AssumeRole example which includes role usage.
Basically, from my understanding, if you do not do that, the boto3 commands will still execute under the same base role that the Alexa Lambda uses; you must first assume the role, and then use the credentials it returns.
Additionally, the role you're assuming must have the privileges it needs to do what you are trying to do, but that's the easy part.
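For reference, here is a minimal sketch of that assume-then-use pattern (the role ARN and session name are placeholders):

import boto3

# Explicitly assume the target role; its trust relationship must trust
# the caller (here, the Alexa-hosted skill's Lambda role).
sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789876:role/demo",  # placeholder ARN
    RoleSessionName="alexa-iot-session",
)
creds = assumed["Credentials"]

# Build the client from the temporary credentials rather than letting
# boto3 fall back to the Lambda's base role.
iota = boto3.client(
    "iotanalytics",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)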
As I look at your code, I see: roleArn = "arn:aws:iam::123456789876:role/iotTest"
Replace it with the correct ARN of a role that allows iotanalytics:GetDatasetContent.
In addition, I assume you didn't paste all of your code, since you are trying to access arn:aws:iotanalytics:us-east-1:123456789876:dataset/my_project_name.
I doubt your account ID is really 123456789876; it looks like you've sanitized some of the other ARNs in your code as well.

How can I assign bucket-owner-full-control when creating an S3 object with boto3?

I'm using the Amazon boto3 library in Python to upload a file into another user's bucket. The bucket policy applied to the other user's bucket is configured like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DelegateS3BucketList",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::uuu"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bbb"
    },
    {
      "Sid": "DelegateS3ObjectUpload",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::uuu"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::bbb",
        "arn:aws:s3:::bbb/*"
      ]
    }
  ]
}
where uuu is my user ID and bbb is the bucket name belonging to the other user. My user and the other user are IAM accounts belonging to different organisations. (I know this policy can be written more simply, but the intention is to add a check on the upload to block objects from being created without appropriate permissions.)
I can then use the following code to list all objects in the bucket and also to upload new objects to it. This works; however, the owner of the bucket has no access to the object, due to Amazon's default of making objects private to the creator of the object.
import base64
import hashlib

from boto3.session import Session

access_key = "value generated by Amazon"
secret_key = "value generated by Amazon"
bucketname = "bbb"
content_bytes = b"hello world!"
content_md5 = base64.b64encode(hashlib.md5(content_bytes).digest()).decode("utf-8")
filename = "foo.txt"

sess = Session(aws_access_key_id=access_key, aws_secret_access_key=secret_key)
bucket = sess.resource("s3").Bucket(bucketname)
for o in bucket.objects.all():
    print(o)

s3 = sess.client("s3")
s3.put_object(
    Bucket=bucketname,
    Key=filename,
    Body=content_bytes,
    ContentMD5=content_md5,
    # ACL="bucket-owner-full-control"  # Uncomment this line to generate error
)
As soon as I uncomment the ACL option, the code generates an Access Denied error message. If I redirect this to point to a bucket inside my own organisation, the ACL option succeeds and the owner of the bucket is given full permission to the object.
I'm now at a loss to figure this out, especially as Amazon's own advice appears to be to do it the way I have shown:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/
https://aws.amazon.com/premiumsupport/knowledge-center/s3-require-object-ownership/
It's not enough to have the permission in the bucket policy only.
Check whether your user (or role) is missing the s3:PutObjectAcl permission in IAM.
When using the resource methods in boto3, there can be several different API calls involved, and it isn't always obvious which calls are being made.
In comparison, when using client methods in boto3, there is a 1-to-1 mapping between the API call that is being made in boto3, and the API call received by AWS.
Therefore, it is likely that the resource.put_object() method is calling an additional API, such as PutObjectAcl. You can confirm this by looking in AWS CloudTrail and seeing which API calls are being made from your app.
In such a case, you would need the additional s3:PutObjectAcl permission. This would be needed if the upload process first creates the object, and then updates the object's Access Control List.
When using the client methods for uploading a file, there is also the ability to specify an ACL, which I think gets applied directly rather than requiring a second API call. Thus, using the client method to create the object probably would not require this additional permission.
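As a hedged illustration of that last point, the managed-transfer method can attach the ACL as part of the upload itself via ExtraArgs (the file and bucket names here are taken from the question and otherwise hypothetical):

import boto3

s3 = boto3.client("s3")
# upload_file passes the ACL with the upload request itself, so no
# separate PutObjectAcl call should be needed.
s3.upload_file(
    "foo.txt",                 # local file
    "bbb",                     # destination bucket
    "foo.txt",                 # destination key
    ExtraArgs={"ACL": "bucket-owner-full-control"},
)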

AWS: Authentication issues trying to push metrics to CloudWatch from EC2

I am trying to set up an EC2 (RHEL7) instance to push metrics to CloudWatch using the Perl scripts described in http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html
I get an HTTP status 400 message: "The security token included in the request is invalid"
An instance profile is associated with the instance which has the following policy attached:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeTags"
      ],
      "Resource": "*"
    }
  ]
}
I am pulling the AWSAccessKeyId and AWSSecretKey from the instance meta-data as follows:
# Name of the role attached to the instance profile
ROLE=$(curl http://169.254.169.254/latest/meta-data/iam/security-credentials/)
# Temporary credentials for that role (JSON)
CRED=$(curl http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE)
# Extract the key ID and secret from the JSON response
AWSAccessKeyId=$(sed '/AccessKeyId/!d; s/.*:\ \+"\(.\+\)",/\1/g' <<< "$CRED")
AWSSecretKey=$(sed '/SecretAccessKey/!d; s/.*:\ \+"\(.\+\)",/\1/g' <<< "$CRED")
...the values set in the above variables are correct when I check them...
I am running the script to push to cloudwatch as follows (using creds stored in variables from above):
./mon-put-instance-data.pl --mem-util --verbose --aws-access-key-id $AWSAccessKeyId --aws-secret-key $AWSSecretKey
Any ideas why it is rejecting my credentials?
Thanks in advance
If you're using an IAM profile, you don't need to put in the credentials for your script call. Remove the access key and secret key from your call.
./mon-put-instance-data.pl --mem-util --verbose
Yes, you are both right: it automatically picks up the role creds if you don't specify any, and it seems to work.
Not sure why it doesn't work when manually setting the creds, but I can look at the Perl script to work this out fairly easily.
Thanks for your help.
From the documentation:
The following examples assume that you provided an IAM role or awscreds.conf file. Otherwise, you must provide credentials using the --aws-access-key-id and --aws-secret-key parameters for these commands.
Another thing to check when you get an error like
ERROR: Failed to call CloudWatch: HTTP 400. Message: The security token included in the request is invalid.
is whether the script is reading a config file (possibly containing wrong credentials):
aws-scripts-mon/awscreds.conf
So an IAM role, awscreds.conf, or the --aws* params have to be provided. (Note also that credentials fetched from instance metadata are temporary STS credentials that include a session token; passing only the access key and secret key without that token is a likely reason the manual approach is rejected.)
And make sure the IAM role has the correct permissions:
cloudwatch:PutMetricData
cloudwatch:GetMetricStatistics
cloudwatch:ListMetrics
ec2:DescribeTags
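If the Perl scripts remain troublesome, a minimal boto3 sketch under the same instance profile shows the equivalent cloudwatch:PutMetricData call (the namespace and metric name are hypothetical); boto3 resolves the temporary credentials, including the session token, automatically:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
cloudwatch.put_metric_data(
    Namespace="Custom/System",  # hypothetical namespace
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Value": 42.0,
        "Unit": "Percent",
    }],
)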

How do I connect my Alexa app to DynamoDB with the Alexa Node SDK?

I have created a Lambda function that attempts to make a connection with DynamoDB through the Alexa Skills Kit for Node. According to the documentation, all you need to connect to the database is
alexa.dynamoDBTableName = 'YourTableName'; // That's it!
For some reason I get the following error:
User: arn:aws:sts::XXXXXXXXXXX:assumed-role/lambda_basic_dynamo/MyApp is not authorized to perform: dynamodb:GetItem on resource: arn:aws:dynamodb:us-east-1:XXXXXXXXX:table/McCannHealth
The weird thing is that I made a new role called lambda_full_access and changed it for the skill, but it's still assuming another role. What am I doing wrong?
I don't know if you already figured it out, but you have to edit the permission JSON yourself. When you're creating a new IAM role, open the "Advanced settings" and change the content of the JSON to:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:*",
        "cognito-identity:ListIdentityPools",
        "cognito-sync:GetCognitoEvents",
        "cognito-sync:SetCognitoEvents",
        "dynamodb:*",
        "events:*",
        "iam:ListAttachedRolePolicies",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PassRole",
        "kinesis:DescribeStream",
        "kinesis:ListStreams",
        "kinesis:PutRecord",
        "lambda:*",
        "logs:*",
        "s3:*",
        "sns:ListSubscriptions",
        "sns:ListSubscriptionsByTopic",
        "sns:ListTopics",
        "sns:Subscribe",
        "sns:Unsubscribe",
        "sns:Publish",
        "sqs:ListQueues",
        "sqs:SendMessage",
        "kms:ListAliases",
        "ec2:DescribeVpcs",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "iot:GetTopicRule",
        "iot:ListTopicRules",
        "iot:CreateTopicRule",
        "iot:ReplaceTopicRule",
        "iot:AttachPrincipalPolicy",
        "iot:AttachThingPrincipal",
        "iot:CreateKeysAndCertificate",
        "iot:CreatePolicy",
        "iot:CreateThing",
        "iot:ListPolicies",
        "iot:ListThings",
        "iot:DescribeEndpoint"
      ],
      "Resource": "*"
    }
  ]
}
The above gives full access to DynamoDB. The JSON for other permissions is available on AWS as well.
This is clearly a permissions issue. You have selected the role "lambda_full_access". If you created that role yourself, please check that you have given it the dynamodb:GetItem permission. If you selected one of the default roles, you can edit that role and attach a custom policy with the policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "YourID",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Scan"
      ],
      "Resource": [
        "YOUR DYNAMODB ARN HERE"
      ]
    }
  ]
}
Your role will then have full Lambda access, plus DynamoDB access for only GetItem and Scan. If you want more permissions, such as PutItem, you can add them.
Alternatively, you can create a custom role, attach policies for Lambda access, and create a custom policy with the settings given above.
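As a quick sanity check for the "still assuming another role" symptom, a sketch like this, run inside the Lambda, prints the ARN the code is actually executing as:

import boto3

# Shows the effective identity, e.g. arn:aws:sts::...:assumed-role/<role>/<session>
print(boto3.client("sts").get_caller_identity()["Arn"])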

Amazon S3 - Failed to load resource: the server responded with a status of 403 (Forbidden)

This is the first time I use Amazon S3. I've read questions and answers. They all seem similar to this problem but none of the answers fixed it for me.
I can successfully upload pictures, but I can't get them to display (403 Forbidden status).
This is the bucket's policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1475848347662",
  "Statement": [
    {
      "Sid": "Stmt1475848335256",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::235314345576:user/userx"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": "arn:aws:s3:::bucketdev/*"
    }
  ]
}
This is the CORS config:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
This is the user's policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:GetObject",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
Using this component: https://www.npmjs.com/package/react-dropzone-s3-uploader.
Can anyone help?
Thanks.
There are two things to note:
Where to assign permissions for access to Amazon S3
Which permissions to assign
Where to assign permissions for access to Amazon S3
Objects in Amazon S3 are private by default. There are three ways to assign permission to access objects:
Object ACLs (Access Control Lists): These are permissions on the objects themselves
Bucket Policies: This is a set of rules applied to the bucket as a whole, but it can also specify permissions related to a subset of a bucket (eg a particular path within the bucket)
IAM Policies that are applied to IAM Users, Groups or Roles: These permissions apply specifically to those entities
If your intention is to keep the content of the S3 bucket private but allow access to a specific user, then you should assign permissions to the IAM User (as you have done). It also means that you do not require a Bucket Policy since granting access via any one of the above methods is sufficient.
See documentation: Guidelines for Using the Available Access Policy Options
Also, a CORS Policy is only required if an HTML page served from one domain refers to content from another domain. It is quite possible that you do not require the CORS Policy; do some testing to confirm whether this is the case.
Which permissions to assign
This is always confusing... Some permissions are associated with the Bucket, while some permissions are associated with the contents of the Bucket.
The following permissions from your policy should be at the Bucket level (arn:aws:s3:::MyBucket):
s3:CreateBucket
s3:DeleteBucket
s3:DeleteBucketPolicy
s3:GetBucketPolicy
s3:GetLifecycleConfiguration
s3:ListBucket
s3:ListBucketMultipartUploads
s3:PutBucketPolicy
s3:PutLifecycleConfiguration
Other API calls (eg GetObject) should be at the object-level (eg arn:aws:s3:::MyBucket/*).
See: Specifying Permissions in a Policy
Therefore, the policy associated with your IAM User should look more like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::MY-BUCKET"
      ]
    },
    {
      "Sid": "Stmt2",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::MY-BUCKET/*"
      ]
    }
  ]
}
This grants GetObject permission to objects within the bucket, rather than on the bucket itself.
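A short sketch of how the two resource levels line up with API calls, assuming the hypothetical bucket name MY-BUCKET from the policy above (the object key is also hypothetical):

import boto3

s3 = boto3.client("s3")
# Requires s3:ListBucket on arn:aws:s3:::MY-BUCKET (the bucket itself)
s3.list_objects_v2(Bucket="MY-BUCKET", MaxKeys=10)
# Requires s3:GetObject on arn:aws:s3:::MY-BUCKET/* (the objects)
s3.get_object(Bucket="MY-BUCKET", Key="pictures/photo1.jpg")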
In case anyone else faces the same problem: be sure that all files were actually uploaded to the bucket, because the "Add files" button does not upload nested folders. It's better to use drag and drop.
