To rename a file in a bucket, I copy the file to the new name and delete the old one. But when I migrated from the old aws-sdk to the new S3-client, I started getting an Access Denied error on the CopyObject command. I have triple-checked the permissions on the account accessing the objects and nothing seems wrong to me. I have tried applying all the permissions, but sadly with the same results. My permissions look like this:
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion",
"s3:GetObjectTagging",
"s3:PutObjectVersionTagging",
"s3:ListBucket",
"s3:PutObjectTagging",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::bucket/*",
"arn:aws:s3:::bucket"
]
The parameters I give to the command look like this:
{
    "Bucket": "bucket",
    "CopySource": "pictures/1014/2.png",
    "Key": "pictures/1014/1.png"
}
And the output of the command is a 403 AccessDenied. The same S3 client gets used to do normal puts and gets on the same bucket with no problem there. Thanks for helping.
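For reference, here is a minimal sketch of that copy-then-delete rename flow with the v3 client (the region and helper name are assumptions for illustration). Per the SDK documentation, CopySource takes the form source-bucket/source-key:

import {
  S3Client,
  CopyObjectCommand,
  DeleteObjectCommand,
} from "@aws-sdk/client-s3";

// Region is an assumption for this sketch.
const client = new S3Client({ region: "us-east-1" });

// Hypothetical helper: copy the object to the new key, then delete the old one.
async function renameObject(bucket: string, oldKey: string, newKey: string) {
  await client.send(
    new CopyObjectCommand({
      Bucket: bucket,
      // Per the SDK docs, CopySource must include the source bucket.
      CopySource: `${bucket}/${oldKey}`,
      Key: newKey,
    })
  );
  await client.send(new DeleteObjectCommand({ Bucket: bucket, Key: oldKey }));
}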
Related
My IAM account supposedly has "admin" privileges; as far as I can tell, I can perform all operations in the web console. Recently I downloaded the aws-cli and quickly configured it by supplying access keys, a default region, and an output format. I then tried to issue some commands and found that most of them, but not all, have permission issues. For example:
$ aws --version
aws-cli/1.16.243 Python/3.7.4 Windows/10 botocore/1.12.233
$ aws s3 ls s3://test-bucket
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
$ aws ec2 describe-instances
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
$ aws iam get-user
{
    "User": {
        "Path": "/",
        "UserName": "xxx#xxx.xxx",
        "UserId": "xxxxx",
        "Arn": "arn:aws:iam::nnnnnnnnnn:user/xxx#xxx.xxx",
        "CreateDate": "2019-08-21T17:09:25Z",
        "PasswordLastUsed": "2019-09-21T16:11:34Z"
    }
}
It appears to me that the CLI, which is authenticated using an access key, has a different permission set from the web console, which is authenticated using MFA.
Why are the permissions inconsistent between the CLI and the GUI? How can I make them consistent?
It turns out the following statement in one of my policies blocked CLI access because MFA was missing:
{
    "Condition": {
        "BoolIfExists": {
            "aws:MultiFactorAuthPresent": "false"
        }
    },
    "Resource": "*",
    "Effect": "Deny",
    "NotAction": [
        "iam:CreateVirtualMFADevice",
        "iam:EnableMFADevice",
        "iam:GetUser",
        "iam:ListMFADevices",
        "iam:ListVirtualMFADevices",
        "iam:ResyncMFADevice",
        "sts:GetSessionToken"
    ],
    "Sid": "DenyAllExceptListedIfNoMFA"
},
If you replace BoolIfExists with Bool, it should work: your CLI requests will no longer be denied for not using MFA.
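That is, the condition block in the statement above would become:

"Condition": {
    "Bool": {
        "aws:MultiFactorAuthPresent": "false"
    }
},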
This is the opposite of https://aws.amazon.com/premiumsupport/knowledge-center/mfa-iam-user-aws-cli/
To remain really secure, check this good explanation: MFA token for AWS CLI.
In a few steps:
Get a temporary session token (valid for up to 36 hours):
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user --token-code code-from-token
{
"Credentials": {
"SecretAccessKey": "secret-access-key",
"SessionToken": "temporary-session-token",
"Expiration": "expiration-date-time",
"AccessKeyId": "access-key-id"
}
}
Save these values in an mfa profile configuration (for example in ~/.aws/credentials):
[mfa]
aws_access_key_id = example-access-key-as-in-returned-output
aws_secret_access_key = example-secret-access-key-as-in-returned-output
aws_session_token = example-session-Token-as-in-returned-output
Call the CLI with that profile:
aws --profile mfa
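For example, to repeat the bucket listing from the question above with the temporary credentials (bucket name taken from that question):

aws s3 ls s3://test-bucket --profile mfa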
PS: Don't set up the cron job suggested there; it goes against the point of the security.
I had this same issue and fixed it by adding my user to a new group with administrator access in IAM.
To do this, go to IAM > Users, click on your user, and then [Add permissions].
On the next screen, click [Create group] and then pick administrator access.
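For reference, a rough CLI equivalent of those console steps (the group name and user name are assumptions):

aws iam create-group --group-name Admins
aws iam attach-group-policy --group-name Admins --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam add-user-to-group --group-name Admins --user-name your-user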
I created a CodeBuild project that uses a Docker image for Node 8. The purpose of this CodeBuild project is to do unit testing. It takes an input artifact from CodeCommit, and in the buildspec.yml it runs a test command.
This is my (simple) buildspec file:
version: 0.2

phases:
  install:
    commands:
      - echo "install phase started"
      - npm install
      - echo "install phase ended"
  pre_build:
    commands:
      - echo "pre_build aka test phase started"
      - echo "mocha unit test"
      - npm test
      - echo "mocha unit test ended"
  build:
    commands:
      - echo "build phase started"
      - echo "build complete"
The build is failing at the DOWNLOAD_SOURCE phase with the following:
PHASE - DOWNLOAD_SOURCE
Start time 2 minutes ago
End time 2 minutes ago
Message Access Denied
The only logs in the build logs are the following
[Container] 2018/01/12 11:30:22 Waiting for agent ping
[Container] 2018/01/12 11:30:22 Waiting for DOWNLOAD_SOURCE
Thanks in advance.
Screenshot of the CodeBuild policies.
I found a fix: it was a problem with my permissions. I added this policy to make it work.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:logs:eu-west-1:723698621383:log-group:/aws/codebuild/project",
                "arn:aws:logs:eu-west-1:723698621383:log-group:/aws/codebuild/project:*"
            ],
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ]
        },
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::codepipeline-eu-west-1-*"
            ],
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetObjectVersion"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameters"
            ],
            "Resource": "arn:aws:ssm:eu-west-1:723698621383:parameter/CodeBuild/*"
        }
    ]
}
I had the same error: a permissions issue accessing the S3 bucket URL. Originally I used an auto-generated codepipeline-us-west-2-* bucket name with the policy:
{
    "Effect": "Allow",
    "Resource": [
        "arn:aws:s3:::codepipeline-us-west-2-*"
    ],
    "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetBucketAcl",
        "s3:GetBucketLocation"
    ]
}
After changing to my own bucket name, the policy had to be updated to:
{
    "Effect": "Allow",
    "Resource": [
        "arn:aws:s3:::project-name-files/*"
    ],
    "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetBucketAcl",
        "s3:GetBucketLocation"
    ]
}
I had a similar error and will post my fix in case it helps anyone else. I was using CodePipeline and had two separate builds happening. Build #1 would complete its build, and its output artifact was to be the input artifact for Build #2. Build #2 was failing in the DOWNLOAD_SOURCE phase with the following error:
AccessDenied: Access Denied status code: 403
The problem was that in my buildspec for Build #1, I didn't have the artifacts defined. After calling out the artifact files/folders in Build #1 (see the sketch below), Build #2 was able to download the source without issue.
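As a sketch, the missing piece in Build #1's buildspec.yml was an artifacts section along these lines (the file pattern is an assumption):

artifacts:
  files:
    - '**/*'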
I was experiencing the same symptoms but my issue was due to the default encryption on the S3 bucket as described in this post.
Everything in S3 is encrypted at rest. When you don't specify how you want objects encrypted, S3 encrypts them with the default KMS key, and other accounts won't be able to access objects in the bucket because they don't have that KMS key for decryption. To get around this, you need to create your own KMS key and use it for encryption (in this case, let CodeBuild use the KMS key you created), then allow roles in other accounts to use this key by configuring AssumeRole permissions. From what I have seen, most S3 access denials happen because objects cannot be decrypted; this is covered in Troubleshoot S3 403 Access Denied, where encrypted objects are listed as a cause of 403 Access Denied.
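As a rough sketch, a statement in the KMS key policy that lets a role in another account use the key for decryption might look like this (the account ID is a placeholder):

{
    "Sid": "AllowCrossAccountDecrypt",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::OTHER-ACCOUNT-ID:root"
    },
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey"
    ],
    "Resource": "*"
}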
In my case, the keys being used were mismatched, which was causing the decryption failure.
I faced the same issue.
My source was an S3 folder. The fix involved putting a / at the end of the source path; it seems that without the trailing /, CodeBuild treats the path as an object key.
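For example (bucket and path hypothetical):

bucket-name/sources/app     <- treated as an object key
bucket-name/sources/app/    <- treated as a folder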
Hope this helps someone save time.
In my case I fixed the issue this way: when you create a build project, there is a step in which you have to provide a service role and role name. There are two options for that step: 1) create a new one, or 2) choose an existing one. I created a new one, and after that I faced the issue the author described. After some research I added these policies to that role in IAM, and the issue went away:
AWSCodeDeployRoleForECS (AWS managed policy)
AWSCodeDeployRole (AWS managed policy)
AWSCodeDeployRoleForCloudFormation (AWS managed policy)
AWSCloudFormationFullAccess (AWS managed policy)
AWSCodeDeployRoleForLambda (AWS managed policy)
I've figured out how to set up CORS and IAM so I can post images to and display images from S3. I have two main issues:
What I have seems insecure because, from my understanding, anyone could access it.
If I secure it, I can no longer test properly on localhost, and I don't have the option of making it accessible to localhost from a work network because we're all remote.
Policy
{"Version": "2012-10-17",
"Statement": [
{
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:PutObjectAcl",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::bucket-name",
"arn:aws:s3:::bucket-name/*"
]
},
{
"Sid": "read only policy",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket-name/*"
}
]}
CORS
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
So, how do I secure my configurations while still allowing development from localhost?
Will using "Condition": {"StringLike": {"aws:Referer": [ ... ]}} prevent access from localhost?
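For illustration, the read-only statement with such a condition might look like this (the origins are hypothetical, and note that the Referer header can be spoofed):

{
    "Sid": "read only policy",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::bucket-name/*",
    "Condition": {
        "StringLike": {
            "aws:Referer": [
                "https://www.example.com/*",
                "http://localhost:*"
            ]
        }
    }
}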
You can lock down the S3 bucket without using the Referer condition; it is probably more secure to add that only after testing is complete.
In the AWS S3 console, you can grant yourself permission to access the bucket. As long as the grantee does not say 'Everyone', everyone does not have access to the bucket.
You can then generate an IAM access key for yourself in IAM > Users > 'user_name'. This key can be used to authenticate with the bucket.
You will also want to grant the user AmazonS3FullAccess by attaching it as a policy.
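For example, attaching that managed policy from the CLI would look like this (the user name is an assumption):

aws iam attach-user-policy --user-name your-user --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess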
You should be able to use the user's credentials that you just generated to access and modify files in S3.
I've got an ASG that assigns an IAM Role to each of the instances that join it. Therefore, each instance has the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables baked-in, which will be used upon instantiation to download and decrypt credentials that are stored in an S3 bucket and encrypted using KMS keys.
So I'll have the following components:
An S3 bucket called top-secret.myapp.com
All objects in this bucket are encrypted using a KMS key called My-KMS-Key
An IAM instance role with inline policies attached granting it the ability to interact with both the bucket and the KMS key used to encrypt/decrypt the contents of the bucket (see below)
A user data script that installs the aws-cli upon instantiation and then goes about attempting to download and decrypt an object from the top-secret.myapp.com bucket.
The User Data Script
Upon instantiation, any given instance runs the following script:
#!/bin/bash
# Install pip and the AWS CLI on first boot
apt-get update
apt-get -y install python-pip
apt-get -y install awscli
cd /home/ubuntu
# Download the encrypted secrets file from S3
aws s3 cp s3://top-secret.myapp.com/secrets.sh . --region us-east-1
chmod +x secrets.sh
# Source the secrets into the environment, then securely delete the file
. secrets.sh
shred -u -z -n 27 secrets.sh
IAM Role Policies
The IAM role for my ASG instances has three policies attached inline:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::top-secret.myapp.com"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:List*",
                "s3:Get*"
            ],
            "Resource": [
                "arn:aws:s3:::top-secret.myapp.com/secrets.sh"
            ]
        }
    ]
}

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:*"
            ],
            "Resource": "arn:aws:kms:us-east-1:UUID-OF-MY-SECRET-KEY-HERE"
        }
    ]
}
The first policy is essentially a full-root-access policy with no restrictions. Or so I thought, but it doesn't work. So I thought I might need to explicitly apply policies that allow interaction with S3 encryption and/or KMS, which makes sense.
So I added the second policy, which allows the IAM instance role to list the top-secret.myapp.com bucket and to LIST and GET the secrets.sh object within it. But this produced the error illustrated below.
The (Unknown) Error I'm Getting
download failed: s3://top-secret.myapp.com/secrets.sh to ./secrets.sh
A client error (Unknown) occurred when calling the GetObject operation: Unknown
Anyone have any idea what could be causing this error?
Note: This method for transferring encrypted secrets from S3 and decrypting them on-instance works fine when using the standard Amazon S3 service master key.
For me, the issue was two-fold:
If you're using server-side encryption via KMS, you need to supply the --sse aws:kms flag to the aws s3 cp [...] command.
I was installing an out-of-date version of awscli (version 1.2.9) via apt, and that version didn't recognize the --sse aws:kms option.
Running apt-get remove awscli and installing via pip install awscli gave me version 1.10.51, which worked.
EDIT:
If you're using a different KMS key than the default master key for your account, you will need to also add the following flag:
--sse-kms-key-id [YOUR KMS KEY ID]
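Putting the answer's flags together, the copy command from the question would become (key ID placeholder left as-is):

aws s3 cp s3://top-secret.myapp.com/secrets.sh . --region us-east-1 --sse aws:kms --sse-kms-key-id [YOUR KMS KEY ID]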
I've deployed a Node.js worker. However, whenever I try to start it, it goes red and this error is shown:
ERROR Instance: i-6eef007a Module: AWSEBAutoScalingGroup ConfigSet: null Command failed on instance. Return code: 1 Output: Error occurred during build: Command 01-start-sqsd failed .
I don't know if it's related, but sometimes I get this error on the screen:
IamInstanceProfile: The environment does not have an IAM instance profile associated with it. To improve deployment speed please associate an IAM instance profile with the environment.
I've already granted permission to SQS and set the key and secret. I don't know what else to do.
Log attached.
Thank you very much.
You need to have an IAM role with the appropriate permissions to create an Elastic Beanstalk worker environment.
The IAM role should have the following permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "QueueAccess",
            "Action": [
                "sqs:ChangeMessageVisibility",
                "sqs:DeleteMessage",
                "sqs:ReceiveMessage"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Sid": "MetricsAccess",
            "Action": [
                "cloudwatch:PutMetricData"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
Detailed documentation: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.iam.roles.aeb.html#AWSHowTo.iam.policies.actions.worker
For debugging, you can SSH into the instance and look at /var/log/aws-sqsd/default.log to see the logs. If you want to avoid SSHing into the instance, you can also snapshot logs from the AWS Console as shown here.
You can read more about worker role environments here.