I have a requirement to download an S3 object (for example: https://hematestpolicy.s3.amazonaws.com/test/ca-dev2.png) across many instances in my AWS VPC without having to install the AWS CLI. The file should be protected and accessible only from within the VPC. I have applied the bucket policy below on my S3 bucket hematestpolicy. I am able to list the file from my instances using aws s3 ls, but I am unable to download it using wget. Can anyone suggest whether this is achievable, or a better solution for keeping the file private to the VPC while downloading it without the AWS CLI?
```
{
  "Version": "2012-10-17",
  "Id": "CreditApplications",
  "Statement": [
    {
      "Sid": "AllowCreditAppProcessing",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::975472539761:root",
          "arn:aws:iam::975472539761:role/hema-ghh"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::hematestpolicy",
        "arn:aws:s3:::hematestpolicy/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "172.31.0.0/16",
            "192.168.2.6/16"
          ]
        }
      }
    },
    {
      "Sid": "DenyEveryoneElse",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::hematestpolicy",
        "arn:aws:s3:::hematestpolicy/*"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "172.31.0.0/16",
            "192.168.2.6/16"
          ]
        },
        "ArnNotEquals": {
          "aws:PrincipalArn": [
            "arn:aws:iam::975472539761:role/hema-ghh",
            "arn:aws:iam::975472539761:root"
          ]
        }
      }
    }
  ]
}
```
Unless you have a VPC endpoint, all outgoing connections reach S3 from a public source IP (for public instances this is their public IP via an internet gateway; for private instances it is the NAT gateway's IP).
If you want to allow objects to be retrieved only from within the VPC, you should look at using a VPC endpoint for S3. Creating one and adding it to your route tables also gives you an internal connection to S3 instead of going over the public internet.
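As a sketch, a Gateway endpoint for S3 can be created with the AWS CLI; the VPC ID, region, and route table ID below are placeholders, so substitute your own values:

```shell
# Create a Gateway VPC endpoint for S3 and attach it to a route table.
# vpc-0123456789abcdef0 and rtb-0123456789abcdef0 are placeholders.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.s3 \
  --vpc-endpoint-type Gateway \
  --route-table-ids rtb-0123456789abcdef0
```

Gateway endpoints for S3 have no additional charge, which is why they are usually preferred over Interface endpoints for this use case.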
Once you have this in place, you can create a bucket policy that limits requests to the source of that VPC endpoint.
For example, the policy below denies access when the request does not come from the VPC endpoint.
{
  "Version": "2012-10-17",
  "Id": "Policy1415115909152",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Principal": "*",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::awsexamplebucket1",
        "arn:aws:s3:::awsexamplebucket1/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}
Be aware that a bucket policy denying everything restricts all access to that bucket (including management tasks) to only that source VPC endpoint, so you should limit the scope of the denied actions, e.g. to s3:GetObject.
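As an illustrative sketch of that narrower scope (the bucket name and endpoint ID are the placeholder values from above), only object reads would be restricted to the endpoint while management actions stay unaffected:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Deny-GetObject-outside-VPCE",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::awsexamplebucket1/*",
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}
```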
Related
I created an S3 static website (public bucket), and by default all the EC2 instances in my account can upload files to the bucket.
My goal is to limit upload access to the bucket to one specific instance (my bastion instance).
So I created a role with all S3 permissions and attached the role to my bastion instance, then I put this policy in the bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::name/*"
    },
    {
      "Sid": "allow only OneUser to put objects",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::3254545218:role/Ec2AccessToS3"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::name/*"
    }
  ]
}
But now none of the EC2 instances, including the bastion instance, can upload files to the S3 bucket.
If I change this ARN line:
"NotPrincipal": {
"AWS": "arn:aws:iam::3254545218:role/Ec2AccessToS3"
to a user ARN, it works. But I want this to work with the role.
I was able to make this work for a specific user, but not for a specific instance (role).
What am I doing wrong?
Refer to the "Granting same-account bucket access to a specific role" section of this AWS blog. The gist is given below.
Each IAM entity (user or role) has a defined aws:userId variable. You will need this variable for use within the bucket policy to specify the role or user as an exception in a conditional element. An assumed role's aws:userId value is defined as UNIQUE-ROLE-ID:ROLE-SESSION-NAME (for example, AROAEXAMPLEID:userdefinedsessionname).
To get AROAEXAMPLEID for the IAM role, do the following:
Be sure you have installed the AWS CLI, and open a command prompt or shell.
Run the following command: aws iam get-role --role-name ROLE-NAME.
In the output, look for the RoleId string, which begins with AROA. You will be using this in the bucket policy to scope bucket access to only this role.
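As an aside (my own suggestion, not from the blog), if you can run a command from the instance that has already assumed the role, aws sts get-caller-identity shows the same unique ID as the prefix of the UserId field:

```shell
# Run from the instance that has assumed the role; the UserId field
# has the form UNIQUE-ROLE-ID:ROLE-SESSION-NAME.
aws sts get-caller-identity
```

The ROLE-SESSION-NAME part varies per session, which is why the bucket policy below matches it with a trailing wildcard.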
Use this aws:userId in the policy,
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::MyExampleBucket",
        "arn:aws:s3:::MyExampleBucket/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userId": [
            "AROAEXAMPLEID:*",
            "111111111111"
          ]
        }
      }
    }
  ]
}
{
  "Role": {
    "Description": "Allows EC2 instances to call AWS services on your behalf.",
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "sts:AssumeRole",
          "Effect": "Allow",
          "Principal": {
            "Service": "ec2.amazonaws.com"
          }
        }
      ]
    },
    "MaxSessionDuration": 3600,
    "RoleId": "AROAUXYsdfsdfsdfsdfL",
    "CreateDate": "2023-01-09T21:36:26Z",
    "RoleName": "Ec2AccessToS3",
    "Path": "/",
    "RoleLastUsed": {
      "Region": "eu-central-1",
      "LastUsedDate": "2023-01-10T05:43:20Z"
    },
    "Arn": "arn:aws:iam::32sdfsdf218:role/Ec2AccessToS3"
  }
}
I just want to add an update: I tried giving access to a specific user instead, and this is not working either.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::name.com",
        "arn:aws:s3:::name.com/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userId": [
            "AIDOFTHEUSER",
            "ACCOUNTID"
          ]
        }
      }
    }
  ]
}
I am trying to implement a proxy to our Aurora RDS instance, but having difficulty getting the IAM access to work properly. We have a microservice in an ECS container that is attempting to access the database. The steps I've followed so far:
Created a secret containing the DB credentials
Created the proxy with the following config options:
Engine compatibility: MySQL
Require TLS - enabled
Idle timeout: 20 minutes
Secret - Selected DB credential secret
IAM Role - Chose to create new role
IAM Authentication - Required
Modified the policy of the proxy IAM role as per the details on this page.
Enabled enhanced logging
When issuing GET requests to the microservice, I see the following in the CloudWatch logs:
Credentials couldn't be retrieved. The IAM role "arn:our-proxy-role"
is not authorized to read the AWS Secrets Manager secret with the ARN
"arn:our-db-credential-secret"
Another interesting wrinkle to all of this: I pulled up the policy simulator, selecting the RDS proxy role and all of the actions under the Secrets Manager service, and all actions show up as being allowed.
I would sincerely appreciate any kind of guidance to indicate what I'm missing here.
The policy attached to arn:our-proxy-role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "rds-db:connect"
      ],
      "Resource": [
        "arn:aws:rds:us-east-1:ACCOUNT:dbuser:*/*"
      ]
    },
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetRandomPassword",
        "secretsmanager:CreateSecret",
        "secretsmanager:ListSecrets"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "secretsmanager:*",
      "Resource": [
        "arn:aws:our-db-credential-secret"
      ]
    },
    {
      "Sid": "GetSecretValue",
      "Action": [
        "secretsmanager:GetSecretValue"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:our-db-credential-secret"
      ]
    },
    {
      "Sid": "DecryptSecretValue",
      "Action": [
        "kms:Decrypt"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:kms:us-east-1:ACCOUNT:key/our-db-cluster"
      ],
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "secretsmanager.us-east-1.amazonaws.com"
        }
      }
    }
  ]
}
The issue was related to security groups. I needed to specify an additional inbound rule to allow incoming traffic from itself so as to facilitate communication between resources that are part of the same security group.
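For reference, a self-referencing inbound rule can be added with the AWS CLI; the group ID below is a placeholder for the shared security group, and the port assumes Aurora MySQL's default of 3306:

```shell
# Allow members of the security group to reach each other on the DB port
# by using the group itself as the traffic source.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3306 \
  --source-group sg-0123456789abcdef0
```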
I am trying to put a text file from a Lambda in Account B into an S3 bucket in Account A. The S3 bucket (test-bucket) has AWS KMS encryption enabled with the AWS managed key aws/s3.
1. I added the below permissions to the Account A S3 bucket (test-bucket):
```
{
  "Version": "2012-10-17",
  "Id": "ExamplePolicy",
  "Statement": [
    {
      "Sid": "ExampleStmt",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AccountB:role/Lambda-Role"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test-bucket/*"
    }
  ]
}
```
2. Added the below inline policy to my Lambda execution role in Account B:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:GenerateDataKey",
        "kms:DescribeKey",
        "kms:ReEncrypt*"
      ],
      "Resource": [
        "arn:aws:kms:us-west-2:AccountA:key/AWS-KMS-ID"
      ]
    }
  ]
}
This is my Lambda code:
import boto3

s3 = boto3.client('s3')  # S3 client used for the upload below

res = s3.put_object(
    Body=message,
    Key=file_name,
    Bucket='test-bucket',
    ACL='bucket-owner-full-control'
)
I get the below error when running this code from the Account B Lambda:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Since the S3 bucket is encrypted with an AWS managed key, I cannot edit the KMS key policy as I would with a customer managed key.
Can someone please tell me what I am missing?
Try granting your Lambda function permission for the s3:PutObject action. The inline policy of your Lambda role should then look something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:GenerateDataKey",
        "kms:DescribeKey",
        "kms:ReEncrypt*"
      ],
      "Resource": [
        "arn:aws:kms:us-west-2:AccountA:key/AWS-KMS-ID"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test-bucket/*"
    }
  ]
}
I've been troubleshooting this for a couple of hours myself.
I don't believe this is possible with the default AWS managed key when using SSE-KMS. Instead you have to create a CMK (customer managed key) and grant the cross-account user access to that key.
HTH
Cross-account access cannot be granted for an AWS managed key. You need to use a customer managed key or default (SSE-S3) encryption instead.
This can be useful- https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-access-default-encryption/
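To sketch what the customer managed key route looks like (the account alias and role name are the placeholders used in this question), the CMK's key policy would include a statement such as the following so the Account B role can use the key:

```json
{
  "Sid": "AllowCrossAccountUseOfTheKey",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::AccountB:role/Lambda-Role"
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:GenerateDataKey",
    "kms:DescribeKey",
    "kms:ReEncrypt*"
  ],
  "Resource": "*"
}
```

Note that in a KMS key policy the "Resource": "*" refers to the key the policy is attached to, not to all keys.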
I'm working on setting up an Elasticsearch instance on AWS. My goal is to only allow HTTP requests from my Lambda function to the Elasticsearch instance. I have created one policy that gives the Lambda access to the Elasticsearch instance. The part I'm struggling with is the inline resource policy for Elasticsearch that will deny all other requests that aren't from the Lambda.
I have tried setting the Elasticsearch resource policy to deny all requests and then giving my Lambda a role with access to Elasticsearch. While the Lambda is using that role, I am signing my HTTP requests using axios and aws4, but the requests are rejected with The request signature we calculated does not match the signature you provided. I don't think the issue is the actual signing of the request, but rather the policies I created. If anyone can steer me in the right direction, that would really help.
Lambda Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "es:ESHttpGet",
        "es:CreateElasticsearchDomain",
        "es:DescribeElasticsearchDomainConfig",
        "es:ListTags",
        "es:ESHttpDelete",
        "es:GetUpgradeHistory",
        "es:AddTags",
        "es:ESHttpHead",
        "es:RemoveTags",
        "es:DeleteElasticsearchDomain",
        "es:DescribeElasticsearchDomain",
        "es:UpgradeElasticsearchDomain",
        "es:ESHttpPost",
        "es:UpdateElasticsearchDomainConfig",
        "es:GetUpgradeStatus",
        "es:ESHttpPut"
      ],
      "Resource": "arn:aws:es:us-east-1:<account-id>:domain/<es-instance>"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "es:PurchaseReservedElasticsearchInstance",
        "es:DeleteElasticsearchServiceRole"
      ],
      "Resource": "*"
    }
  ]
}
ElasticSearch Inline Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": {
        "AWS": [
          "*"
        ]
      },
      "Action": [
        "es:*"
      ],
      "Resource": "arn:aws:es:us-east-1:<account-number>:domain/<es-instance>/*"
    }
  ]
}
Lambda Code Using Aws4 and Axios
// process.env.HOST = search-<es-instance>-<es-id>.us-east-1.es.amazonaws.com
function createRecipesIndex(url, resolve, reject) {
  axios(aws4.sign({
    host: process.env.HOST,
    method: "PUT",
    url: "https://" + process.env.HOST,
    path: '/recipes/',
  }))
    .then(response => {
      console.log("----- SUCCESS INDEX CREATED -----");
      resolve();
    })
    .catch(error => {
      console.log("----- FAILED TO CREATE INDEX -----");
      console.log(error);
      reject();
    });
}
Note: I have tried creating my index with the inline policy on Elasticsearch set to allow * (all) and the aws4 signing removed, and it works fine. Right now I just want to secure access to this resource.
I found the solution to my issue, and it was two-fold. The first issue was the inline resource policy on my Elasticsearch instance. I needed to update it to allow the role that I had given to my Lambda. This was done by getting the role ARN from IAM and then attaching the below policy inline on the Elasticsearch instance.
My second issue was with aws4: the path and the url I set did not match. My path had /xxxx/ while my url was https://search-<es-instance>-<es-id>.us-east-1.es.amazonaws.com/xxxx. Since the path contained an extra trailing slash not found in the url, the signing failed. For anyone else using the library, make sure those values are consistent. I hope this helps someone else out in the future :D
Elastic Search Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account-id>:role/service-role/<role-name>"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:<account-id>:domain/<es-instance>/*"
    }
  ]
}
I need to integrate an AWS IoT based MQTT service. Another developer has already set up MQTT and given me the AWS account credentials. They also gave us two topic names: one for publishing data, and one to subscribe to for status data.
For testing purposes I created a device in the AWS IoT panel, which gave me the Node.js IoT SDK download. I set this up on my local machine and then played with the device-example script in the examples folder. I modified the AWS policy attached to my device to allow access to the two topics, one for publish and one for subscribe.
But all this failed. The script gives the following output:
connect
offline
close
reconnect
connect
offline
close
and so on..
When I checked the AWS CloudWatch logs for IoT, I found the issue:
{
  "timestamp": "2018-10-25 07:13:10.056",
  "logLevel": "ERROR",
  "traceId": "TRACEID",
  "accountId": "ACCOUNTID",
  "status": "Failure",
  "eventType": "Subscribe",
  "protocol": "MQTT",
  "topicName": "status topic name",
  "clientId": "sdk-nodejs-uuid",
  "principalId": "clientid",
  "sourceIp": "IP",
  "sourcePort": PORT,
  "reason": "AUTHORIZATION_FAILURE"
}
My changed policy is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:Publish",
        "iot:Receive"
      ],
      "Resource": [
        "arn:aws:iot:us-east-2:clientid:topic/publish-topic-name"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iot:Subscribe",
        "iot:Receive"
      ],
      "Resource": [
        "arn:aws:iot:us-east-2:clientid:topic/subscribe-topic-name"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iot:Connect"
      ],
      "Resource": [
        "arn:aws:iot:us-east-2:clientid:client/sdk-nodejs-*",
        "arn:aws:iot:us-east-2:clientid:topic/publish-topic-name",
        "arn:aws:iot:us-east-2:clientid:topic/subscribe-topic-name"
      ]
    }
  ]
}
Then I even gave all IoT permissions for all topics, but I still get the authorization error:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:*"
      ],
      "Resource": [
        "arn:aws:iot:us-east-2:clientid:client/sdk-nodejs-*",
        "arn:aws:iot:us-east-2:clientid:topic/*"
      ]
    }
  ]
}
For publish I only get the connect console output, and I also do not get any logs in CloudWatch, so I am not sure whether it succeeded or not.
UPDATE: OK, I found the issue after some searching: you have to add a topicfilter resource along with the topic resource in the policy. It looks like this is required for subscribing to topics. The updated policy is below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:*"
      ],
      "Resource": [
        "arn:aws:iot:us-east-2:clientid:client/sdk-nodejs-*",
        "arn:aws:iot:us-east-2:clientid:topicfilter/*",
        "arn:aws:iot:us-east-2:clientid:topic/*"
      ]
    }
  ]
}
Have you also configured an IoT policy? To connect to the IoT platform with an IAM user (MQTT over WSS), you need not only an IAM policy which allows access, but also an IoT policy which does so. On top of this, you should check that your policies use the correct resource identifier: there is a difference between how resources are defined for iot:Publish (topic/...) versus iot:Subscribe (topicfilter/...).
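To make the topic vs. topicfilter distinction concrete, a least-privilege sketch (the region, account ID, and topic names are placeholders taken from the question) would look like this: iot:Subscribe is authorized against a topicfilter resource, while iot:Receive and iot:Publish are authorized against topic resources.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Connect",
      "Resource": "arn:aws:iot:us-east-2:ACCOUNT_ID:client/sdk-nodejs-*"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Publish",
      "Resource": "arn:aws:iot:us-east-2:ACCOUNT_ID:topic/publish-topic-name"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Subscribe",
      "Resource": "arn:aws:iot:us-east-2:ACCOUNT_ID:topicfilter/subscribe-topic-name"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Receive",
      "Resource": "arn:aws:iot:us-east-2:ACCOUNT_ID:topic/subscribe-topic-name"
    }
  ]
}
```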