How can I get PutObject access to S3 from a specific EC2 instance - node.js

I created an S3 static website (a public bucket), and by default every EC2 instance in my account can upload files to the bucket.
My goal is to limit uploads to the bucket to one specific instance (my bastion instance).
So I created a role with full S3 permissions, attached it to the bastion instance, and then put this policy on the bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::name/*"
        },
        {
            "Sid": "allow only OneUser to put objects",
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": "arn:aws:iam::3254545218:role/Ec2AccessToS3"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::name/*"
        }
    ]
}
But now none of the EC2 instances, including the bastion instance, can upload files to the S3 bucket.
If I change this ARN line:
    "NotPrincipal": {
        "AWS": "arn:aws:iam::3254545218:role/Ec2AccessToS3"
to a user ARN, it works. But I want it to work with the role.
So: I was able to scope the operation to a specific user, but not to a specific instance (role).
What am I doing wrong?

Refer to the "Granting same-account bucket access to a specific role" section of this AWS blog; the gist is given below. The underlying problem is that a Deny with NotPrincipal does not behave the way you expect with assumed roles: requests made with the role's temporary credentials carry an assumed-role session principal rather than the role ARN, so the exception never matches. Use the role's unique ID in a condition instead.
Each IAM entity (user or role) has a defined aws:userid variable. You will need this variable within the bucket policy to carve out the role or user as an exception in a conditional element. An assumed role's aws:userId value is defined as UNIQUE-ROLE-ID:ROLE-SESSION-NAME (for example, AROAEXAMPLEID:userdefinedsessionname).
To get the AROAEXAMPLEID for the IAM role, do the following (a Node.js alternative is sketched after these steps):
Be sure you have the AWS CLI installed, and open a command prompt or shell.
Run the following command: aws iam get-role --role-name ROLE-NAME.
In the output, look for the RoleId string, which begins with AROA. You will use this value in the bucket policy to scope bucket access to only this role.
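If you prefer to stay in Node.js, the same RoleId can be fetched with the AWS SDK. A sketch, assuming SDK v2 and the Ec2AccessToS3 role name from the question:
```
// Sketch: look up the role's unique ID (the AROA... string) via the AWS SDK.
var AWS = require('aws-sdk');
var iam = new AWS.IAM();

iam.getRole({ RoleName: 'Ec2AccessToS3' }, function (err, data) {
    if (err) throw err;
    // data.Role.RoleId is the AROA... value used in the bucket policy condition
    console.log(data.Role.RoleId);
});
```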
Use this aws:userId in the policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::MyExampleBucket",
                "arn:aws:s3:::MyExampleBucket/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:userId": [
                        "AROAEXAMPLEID:*",
                        "111111111111"
                    ]
                }
            }
        }
    ]
}
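With that policy in place, an upload from the bastion instance should succeed with no explicit credentials in code, because the Node.js SDK resolves the attached instance profile (the Ec2AccessToS3 role) automatically. A minimal sketch to verify, with the bucket and key names assumed:
```
// Sketch: run on the bastion instance; credentials come from the instance profile.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

s3.putObject({ Bucket: 'name', Key: 'test.txt', Body: 'hello from the bastion' }, function (err, data) {
    if (err) console.error('PutObject failed:', err.code);
    else console.log('PutObject succeeded, ETag:', data.ETag);
});
```
The same call from any other instance (whose credentials do not match one of the allowed aws:userId values) should fail with AccessDenied.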

For reference, this is the aws iam get-role output for the role:
{
    "Role": {
        "Description": "Allows EC2 instances to call AWS services on your behalf.",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "sts:AssumeRole",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "ec2.amazonaws.com"
                    }
                }
            ]
        },
        "MaxSessionDuration": 3600,
        "RoleId": "AROAUXYsdfsdfsdfsdfL",
        "CreateDate": "2023-01-09T21:36:26Z",
        "RoleName": "Ec2AccessToS3",
        "Path": "/",
        "RoleLastUsed": {
            "Region": "eu-central-1",
            "LastUsedDate": "2023-01-10T05:43:20Z"
        },
        "Arn": "arn:aws:iam::32sdfsdf218:role/Ec2AccessToS3"
    }
}

Just to update: I also tried giving access to a specific user instead, and this is not working either.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::name.com",
                "arn:aws:s3:::name.com/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:userId": [
                        "AIDOFTHEUSER",
                        "ACCOUNTID"
                    ]
                }
            }
        }
    ]
}
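For reference, the AIDA... unique ID that belongs in place of AIDOFTHEUSER can be looked up the same way as the role ID. A sketch, with a hypothetical user name:
```
// Sketch: look up an IAM user's unique ID (the AIDA... string).
var AWS = require('aws-sdk');
var iam = new AWS.IAM();

iam.getUser({ UserName: 'my-upload-user' }, function (err, data) {
    if (err) throw err;
    console.log(data.User.UserId); // begins with AIDA
});
```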

Related

Access Denied issue in AWS Cross Account S3 PutObject encrypted by AWS Managed Key

I am trying to put a text file from a Lambda in Account B to an S3 bucket in Account A. The S3 bucket (test-bucket) has AWS KMS encryption enabled with the aws/s3 managed key.
1. I added the following permissions to the Account A S3 bucket (test-bucket):
```
{
    "Version": "2012-10-17",
    "Id": "ExamplePolicy",
    "Statement": [
        {
            "Sid": "ExampleStmt",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AccountB:role/Lambda-Role"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::test-bucket/*"
        }
    ]
}
```
2. I added the following inline policy to my Lambda execution role in Account B:
{"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:Encrypt",
"kms:GenerateDataKey",
"kms:DescribeKey",
"kms:ReEncrypt*"
],
"Resource": [
"arn:aws:kms:us-west-2:AccountA:key/AWS-KMS-ID"
]
}
]
}
This is my Lambda code:
import boto3

s3 = boto3.client('s3')

# message and file_name are set earlier in the handler
res = s3.put_object(
    Body=message,
    Key=file_name,
    Bucket='test-bucket',
    ACL='bucket-owner-full-control'
)
I am getting the error below when running this code from the Account B Lambda:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Since the S3 bucket is encrypted with the AWS managed key, I cannot edit the KMS key policy the way I would with a customer managed key.
Can someone please guide me on what I am missing?
Try granting your Lambda function permission for the s3:PutObject action. For cross-account access, both the bucket policy in Account A and the Lambda role's identity policy in Account B must allow the action, so the inline policy of your Lambda role should be something like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:Encrypt",
                "kms:GenerateDataKey",
                "kms:DescribeKey",
                "kms:ReEncrypt*"
            ],
            "Resource": [
                "arn:aws:kms:us-west-2:AccountA:key/AWS-KMS-ID"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::test-bucket/*"
        }
    ]
}
I've been troubleshooting this for a couple of hours myself. I don't believe this is possible with the default AWS managed key when using SSE-KMS. Instead, you have to create a CMK and grant the cross-account user access to that key.
HTH
Cross-account access cannot be granted for an AWS managed key; you need to use a customer managed key or default encryption.
This can be useful: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-access-default-encryption/
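If you do move the bucket to a customer managed key, the uploader can also name the key explicitly on each put. A sketch in Node.js terms (boto3 takes the same ServerSideEncryption / SSEKMSKeyId parameters; the key ARN here is hypothetical):
```
// Sketch: upload with an explicit customer managed key (CMK), whose key policy
// can grant the cross-account role access (not possible with the aws/s3 managed key).
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

s3.putObject({
    Bucket: 'test-bucket',
    Key: 'file.txt',
    Body: 'payload',
    ServerSideEncryption: 'aws:kms',
    SSEKMSKeyId: 'arn:aws:kms:us-west-2:AccountA:key/CMK-ID', // hypothetical CMK ARN
    ACL: 'bucket-owner-full-control'
}, function (err) {
    if (err) console.error('PutObject failed:', err.code);
});
```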

Download S3 files without using the CLI from specific IP sources

I have a requirement to download an S3 object (e.g. https://hematestpolicy.s3.amazonaws.com/test/ca-dev2.png) across many instances in my AWS VPC without having to install the AWS CLI. The file should be protected and accessible only from within the VPC. I have applied the bucket policy below on my S3 bucket hematestpolicy. I am able to view the file from my instances using aws s3 ls, but I am unable to download it using wget. Can anyone suggest whether this is achievable, or a better solution for keeping the file private to the VPC while downloading it without the AWS CLI?
{
    "Version": "2012-10-17",
    "Id": "CreditApplications",
    "Statement": [
        {
            "Sid": "AllowCreditAppProcessing",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::975472539761:root",
                    "arn:aws:iam::975472539761:role/hema-ghh"
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::hematestpolicy",
                "arn:aws:s3:::hematestpolicy/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "172.31.0.0/16",
                        "192.168.2.6/16"
                    ]
                }
            }
        },
        {
            "Sid": "DenyEveryoneElse",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::hematestpolicy",
                "arn:aws:s3:::hematestpolicy/*"
            ],
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": [
                        "172.31.0.0/16",
                        "192.168.2.6/16"
                    ]
                },
                "ArnNotEquals": {
                    "aws:PrincipalArn": [
                        "arn:aws:iam::975472539761:role/hema-ghh",
                        "arn:aws:iam::975472539761:root"
                    ]
                }
            }
        }
    ]
}
Unless you have a VPC endpoint, all outgoing connections leave via a public source (for public instances this is their public IP via an internet gateway; for private instances it is via a NAT).
If you want objects to be retrievable only from within the VPC, you should look at using a VPC endpoint for S3. Creating one and adding it to your route tables also gives you an internal connection to S3 instead of going over the public internet.
Once it is in place, you can create a bucket policy that limits requests to the source of that VPC endpoint. Note that wget sends unauthenticated requests, so alongside the Deny you would also need an Allow statement granting s3:GetObject to Principal "*", scoped by the same aws:SourceVpce condition, for anonymous downloads to work.
For example, the policy below denies access whenever the request does not come from the VPC endpoint:
{
    "Version": "2012-10-17",
    "Id": "Policy1415115909152",
    "Statement": [
        {
            "Sid": "Access-to-specific-VPCE-only",
            "Principal": "*",
            "Action": "s3:*",
            "Effect": "Deny",
            "Resource": [
                "arn:aws:s3:::awsexamplebucket1",
                "arn:aws:s3:::awsexamplebucket1/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:SourceVpce": "vpce-1a2b3c4d"
                }
            }
        }
    ]
}
Be aware that a bucket policy that denies everything will restrict all access to that bucket (including management tasks) to that source VPC endpoint only, so you should try to limit the scope of actions, e.g. to s3:GetObject.

Python3.7 script to export CloudWatch logs to S3

I am using the code below to copy CloudWatch logs to S3:
import boto3
import collections
from datetime import datetime, date, time, timedelta

region = 'eu-west-1'

def lambda_handler(event, context):
    yesterday = datetime.combine(date.today() - timedelta(1), time())
    today = datetime.combine(date.today(), time())
    unix_start = datetime(1970, 1, 1)
    client = boto3.client('logs')
    response = client.create_export_task(
        taskName='Export_CloudwatchLogs',
        logGroupName='/aws/lambda/stop-instances',
        fromTime=int((yesterday - unix_start).total_seconds() * 1000),
        to=int((today - unix_start).total_seconds() * 1000),
        destination='bucket',
        destinationPrefix='bucket-{}'.format(yesterday.strftime("%Y-%m-%d"))
    )
    return 'Response from export task at {} :\n{}'.format(datetime.now().isoformat(), response)
I attached the policy below to the role:
policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogStreams",
                "logs:CreateExportTask",
                "logs:DescribeExportTasks",
                "logs:DescribeLogGroups"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}
EOF
The second policy:
policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetBucketAcl"
            ],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::${var.source_market}-${var.environment}-${var.bucket}/*"],
            "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } }
        }
    ]
}
EOF
I am getting the error below when I execute this in the AWS console:
{
    "errorMessage": "An error occurred (InvalidParameterException) when calling the CreateExportTask operation: GetBucketAcl call on the given bucket failed. Please check if CloudWatch Logs has been granted permission to perform this operation.",
    "errorType": "InvalidParameterException"
}
I have referred to many blogs and attached the appropriate policies to the role.
Check the encryption settings on your bucket. I had the same problem, and it was because the bucket was set to AWS-KMS. I was getting this error with the same permissions you have, and it started working as soon as I switched the encryption to AES-256.
It seems like an issue with the S3 bucket permissions. You need to attach the policy below to your S3 bucket. Please amend it by changing the bucket name and the AWS region of the CloudWatch Logs service principal:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:GetBucketAcl",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-exported-logs",
            "Principal": { "Service": "logs.us-west-2.amazonaws.com" }
        },
        {
            "Action": "s3:PutObject",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-exported-logs/random-string/*",
            "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
            "Principal": { "Service": "logs.us-west-2.amazonaws.com" }
        }
    ]
}
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html
I had the same error. The issue was that my destination parameter was something like bucket/something, while the policy only named bucket; removing the something prefix from the parameter fixed the problem. Check that the policy and the parameter match.
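In other words, destination and destinationPrefix have to line up with what the bucket policy allows. A sketch of a matching call (shown with the Node.js SDK; the boto3 parameters in the question are the same, apart from boto3's fromTime vs the API's from):
```
// Sketch: destination must be the bare bucket name; the prefix goes in
// destinationPrefix and must match the policy resource, e.g.
// arn:aws:s3:::my-exported-logs/random-string/* from the answer above.
var AWS = require('aws-sdk');
var logs = new AWS.CloudWatchLogs({ region: 'us-west-2' });

logs.createExportTask({
    taskName: 'Export_CloudwatchLogs',
    logGroupName: '/aws/lambda/stop-instances',
    from: Date.now() - 24 * 60 * 60 * 1000, // last 24 hours, in milliseconds
    to: Date.now(),
    destination: 'my-exported-logs',        // bucket only, no prefix here
    destinationPrefix: 'random-string'      // prefix goes here
}, function (err, data) {
    if (err) console.error(err);
    else console.log('Task ID:', data.taskId);
});
```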

IAM policy attached to role not working

I have a node application that is invoking assumeRoleWithWebIdentity in the following manner:
var AWS = require('aws-sdk');
var sts = new AWS.STS();

var params = {
    DurationSeconds: 3600,
    RoleArn: "arn:aws:iam::role/my_test_role",
    RoleSessionName: "session_name",
    WebIdentityToken: req.body.id_token
};
sts.assumeRoleWithWebIdentity(params, function(err, data) {
    // create an S3 client with data.Credentials.SecretAccessKey, AccessKeyId, SessionToken
    // then call s3.listObjectsV2({Bucket: 'my-bucket'})
});
Now, I have a role in IAM called my_test_role. Attached to that role is a policy called my_test_policy, which looks as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my_bucket",
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "",
                        "home/",
                        "home/BOB/*"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my_bucket/home/BOB",
                "arn:aws:s3:::my_bucket/home/BOB/*"
            ]
        }
    ]
}
In S3, I have a bucket called my_bucket, and in that bucket is the folder home. In home are a bunch of user folders:
my_bucket/home/ALICE
my_bucket/home/BOB
my_bucket/home/MARY
When my node application lists objects, it lists all the objects in home. The intention of my policy is to limit the listing to the user that has assumed the role: if BOB has assumed the role, he should only see my_bucket/home/BOB and nothing else. I'll eventually replace the hard-coded 'BOB' in the policy with ${my_oidc_url:sub}, but before getting to that step I thought I would just hardcode "BOB" and see if that works. It does not; the assumed role sees all of the folders. Any suggestions?
In your s3:ListBucket statement you have allowed the home/ prefix to be listed, so of course it will list everything in there.
If you allowed only the home/BOB/* prefix, I think you would get the desired behavior.
To test this situation, I did the following:
Created an Amazon S3 bucket and uploaded files.
The contents are:
2018-05-11 08:57:55 10096 foo
2018-05-11 08:57:38 10096 home/alice/foo
2018-05-11 08:57:32 10096 home/bob/foo
2018-05-11 08:57:51 10096 home/foo
2018-05-11 08:57:45 10096 home/mary/foo
Created an IAM Role called bob.
The permissions are:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjects",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my-bucket",
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "home/bob/*"
                    ]
                }
            }
        },
        {
            "Sid": "AccessObjects",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my-bucket/home/bob/*"
        }
    ]
}
(ListObjects is equivalent to ListBucket)
Assumed the bob role
Via the CLI command:
aws sts assume-role --role-arn arn:aws:iam::123456789012:role/bob --role-session-name bob
Saved the resulting credentials to a bob profile
I could then do anything in the home/bob path, but nothing in other paths:
$ aws --profile bob s3 ls s3://my-bucket/home/bob/
2018-05-11 09:16:23 10096 foo
$ aws --profile bob s3 cp foo s3://my-bucket/home/bob/foo
upload: ./foo to s3://my-bucket/home/bob/foo
$ aws --profile bob s3 cp s3://my-bucket/home/bob/foo .
download: s3://my-bucket/home/bob/foo to ./foo
$ aws --profile bob s3 ls s3://my-bucket/home/
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
$ aws --profile bob s3 ls s3://my-bucket/
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
Policy Variables
While an IAM user can easily be substituted into a policy variable, this is not as straightforward with an assumed role, because the variables will be set as follows:
aws:username will be undefined
aws:userid will be set to role-id:caller-specified-role-name
This is not as simple as referencing a value of 'bob'. You'd effectively need to name the S3 path something like AROAJQABLZS4A3QDU576Q:bob.
OK, it ended up being a few things:
My node.js app wasn't using my temporary credentials but the static ones instead, because my S3 client was initialized incorrectly after assuming the role (see the sketch below).
My node.js app was also sending an empty prefix, and in one instance an incorrect prefix.
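For anyone hitting the first problem: after assumeRoleWithWebIdentity, the S3 client has to be constructed from the returned temporary credentials, otherwise the SDK silently keeps using whatever static credentials it resolved at startup. A sketch, reusing the sts and params from the question:
```
// Sketch: build the S3 client from the temporary credentials returned by STS.
sts.assumeRoleWithWebIdentity(params, function (err, data) {
    if (err) throw err;
    var s3 = new AWS.S3({
        accessKeyId: data.Credentials.AccessKeyId,
        secretAccessKey: data.Credentials.SecretAccessKey,
        sessionToken: data.Credentials.SessionToken
    });
    // Scope the listing to the user's own folder; an empty or wrong
    // Prefix was the second problem described above.
    s3.listObjectsV2({ Bucket: 'my-bucket2', Prefix: 'home/BOB/' }, function (err, out) {
        if (err) throw err;
        console.log(out.Contents.map(function (o) { return o.Key; }));
    });
});
```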
The following policy worked for me
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUserToSeeBucketListInTheConsole",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Sid": "AllowRootAndHomeListingOfCompanyBucket",
            "Action": "s3:ListBucket",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-bucket2"
        },
        {
            "Sid": "DenyAllListingExpectForHomeAndUserFolders",
            "Effect": "Deny",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-bucket2",
            "Condition": {
                "Null": {
                    "s3:prefix": "false"
                },
                "StringNotLike": {
                    "s3:prefix": [
                        "",
                        "home/",
                        "home/${MY_OIDC_URL:sub}/*"
                    ]
                }
            }
        },
        {
            "Sid": "AllowAllS3ActionsInUserFolder",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my-bucket2/home/${MY_OIDC_URL:sub}/*"
        }
    ]
}

Limiting access to AWS S3 with policy is not working as expected

I have a user group which we use for one of our environments in AWS.
We are trying to limit that group's access to one specific S3 bucket.
So, I created a policy as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::staging"
        }
    ]
}
If I use the AWS policy simulator, everything shows up as expected (or at least it looks that way).
But through the app, which uses the API key of a user in this group, I get access denied when I upload a file.
What am I doing wrong?
This policy gives the same result:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::staffila-staging",
                "arn:aws:s3:::staffila-staging/*"
            ]
        }
    ]
}
Use the policy below and it will work. The key point is that object-level actions such as s3:PutObject apply to object ARNs (staging/*), while bucket-level actions such as s3:ListBucket apply to the bucket ARN; your first policy granted s3:* on the bucket ARN only, which is why uploads were denied:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::staging"]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": ["arn:aws:s3:::staging/*"]
        }
    ]
}
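A quick way to confirm the fix from the app side (a sketch, assuming the group user's API key is configured in the environment and the key name is hypothetical):
```
// Sketch: with the corrected policy, PutObject on an object key under the
// bucket should now succeed for users in the group.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

s3.putObject({ Bucket: 'staging', Key: 'uploads/test.txt', Body: 'hello' }, function (err) {
    if (err) console.error('Upload failed:', err.code);
    else console.log('Upload succeeded');
});
```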
