I am trying to implement a proxy to our Aurora RDS instance, but I'm having difficulty getting IAM access to work properly. We have a microservice in an ECS container that is attempting to access the database. These are the steps I've followed so far:
Created a secret containing the DB credentials
Created the proxy with the following config options:
Engine compatibility: MySQL
Require TLS: enabled
Idle timeout: 20 minutes
Secret: selected the DB credential secret
IAM role: chose to create a new role
IAM authentication: Required
Modified the policy of the proxy IAM role as per the details on this page.
Enabled enhanced logging
When issuing GET requests to the microservice, I see the following in the CloudWatch logs:
Credentials couldn't be retrieved. The IAM role "arn:our-proxy-role"
is not authorized to read the AWS Secrets Manager secret with the ARN
"arn:our-db-credential-secret"
Another interesting wrinkle to all of this: I pulled up the policy simulator, selected the RDS proxy role and all of the actions under the Secrets Manager service, and every action shows up as allowed.
I would sincerely appreciate any kind of guidance to indicate what I'm missing here.
The policy attached to arn:our-proxy-role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"rds-db:connect"
],
"Resource": [
"arn:aws:rds:us-east-1:ACCOUNT:dbuser:*/*"
]
},
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"secretsmanager:GetRandomPassword",
"secretsmanager:CreateSecret",
"secretsmanager:ListSecrets"
],
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "secretsmanager:*",
"Resource": [
"arn:aws:our-db-credential-secret"
]
},
{
"Sid": "GetSecretValue",
"Action": [
"secretsmanager:GetSecretValue"
],
"Effect": "Allow",
"Resource": [
"arn:aws:our-db-credential-secret"
]
},
{
"Sid": "DecryptSecretValue",
"Action": [
"kms:Decrypt"
],
"Effect": "Allow",
"Resource": [
"arn:aws:kms:us-east-1:ACCOUNT:key/our-db-cluster"
],
"Condition": {
"StringEquals": {
"kms:ViaService": "secretsmanager.us-east-1.amazonaws.com"
}
}
}
]
}
The issue was related to security groups. I needed to add an inbound rule allowing incoming traffic from the security group itself, so that resources belonging to the same security group can communicate with each other.
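The fix above can be sketched as the ingress-rule parameters you would pass to boto3's `authorize_security_group_ingress` (a minimal sketch; the security group ID and port are placeholders, not values from the question):

```python
# Placeholder security group shared by the ECS task, the RDS Proxy,
# and the database; sg-0123456789abcdef0 is a hypothetical ID.
SECURITY_GROUP_ID = "sg-0123456789abcdef0"

ingress_params = {
    "GroupId": SECURITY_GROUP_ID,
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,  # MySQL/Aurora port
            "ToPort": 3306,
            # Referencing the group's own ID makes the rule
            # self-referencing: members of this group may reach
            # each other on this port.
            "UserIdGroupPairs": [{"GroupId": SECURITY_GROUP_ID}],
        }
    ],
}

# The self-reference is the key detail: source and target group are the same.
assert ingress_params["IpPermissions"][0]["UserIdGroupPairs"][0]["GroupId"] == ingress_params["GroupId"]
```

You would pass `ingress_params` to `ec2_client.authorize_security_group_ingress(**ingress_params)`; the same rule can of course be added in the console.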
I created a public S3 bucket for static web hosting, and by default all the EC2 instances in my account can upload files to it.
My goal is to limit upload access to just one specific instance (my bastion instance).
So I created a role with all S3 permissions, attached the role to my bastion instance, and then put this policy in the bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Statement1",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::name/*"
},
{
"Sid": "allow only OneUser to put objects",
"Effect": "Deny",
"NotPrincipal": {
"AWS": "arn:aws:iam::3254545218:role/Ec2AccessToS3"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::name/*"
}
]
}
But now none of the EC2 instances, including the bastion instance, can upload files to the S3 bucket.
When I change this ARN line:
"NotPrincipal": {
"AWS": "arn:aws:iam::3254545218:role/Ec2AccessToS3"
to a user ARN, it works. But I want it to work with the role.
So I was able to make this work for a specific user, but not for a specific instance (role).
What am I doing wrong?
Refer to the "Granting same-account bucket access to a specific role" section of this AWS blog. The gist is as given below.
Each IAM entity (user or role) has a defined aws:userId variable. You will need this variable within the bucket policy to specify the role or user as an exception in a conditional element. An assumed role's aws:userId value is defined as UNIQUE-ROLE-ID:ROLE-SESSION-NAME (for example, AROAEXAMPLEID:userdefinedsessionname).
To get AROAEXAMPLEID for the IAM role, do the following:
Be sure you have installed the AWS CLI, and open a command prompt or shell.
Run the following command: aws iam get-role --role-name ROLE-NAME.
In the output, look for the RoleId string, which begins with AROA. You will be using this in the bucket policy to scope bucket access to only this role.
Use this aws:userId in the policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::MyExampleBucket",
"arn:aws:s3:::MyExampleBucket/*"
],
"Condition": {
"StringNotLike": {
"aws:userId": [
"AROAEXAMPLEID:*",
"111111111111"
]
}
}
}
]
}
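To see why the trailing `:*` wildcard matters, the StringNotLike condition can be simulated with Python's `fnmatch` (a simplification of IAM's evaluation logic; the IDs are illustrative):

```python
from fnmatch import fnmatch

# Patterns from the bucket policy's aws:userId condition.
allowed_patterns = ["AROAEXAMPLEID:*", "111111111111"]

def denied(user_id: str) -> bool:
    # The Deny fires when the caller's aws:userId matches NONE of the
    # patterns (StringNotLike semantics).
    return not any(fnmatch(user_id, p) for p in allowed_patterns)

# A session of the whitelisted role is exempt from the Deny,
# whatever its session name...
assert not denied("AROAEXAMPLEID:i-0abcd1234")
# ...while any other principal is denied.
assert denied("AROAOTHERROLE:session")
```

The wildcard is what lets every session of the role (each with a different session name) through, which a plain user ID comparison cannot do.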
{
"Role": {
"Description": "Allows EC2 instances to call AWS services on your behalf.",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
},
"MaxSessionDuration": 3600,
"RoleId": "AROAUXYsdfsdfsdfsdfL",
"CreateDate": "2023-01-09T21:36:26Z",
"RoleName": "Ec2AccessToS3",
"Path": "/",
"RoleLastUsed": {
"Region": "eu-central-1",
"LastUsedDate": "2023-01-10T05:43:20Z"
},
"Arn": "arn:aws:iam::32sdfsdf218:role/Ec2AccessToS3"
}
}
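Putting the pieces together: take the RoleId from the get-role output and build the aws:userId wildcard used in the bucket policy condition (a sketch; the RoleId here is the illustrative one from the blog, not the real one):

```python
import json

# Illustrative get-role output, truncated to the fields we need.
get_role_output = json.loads("""
{
  "Role": {
    "RoleName": "Ec2AccessToS3",
    "RoleId": "AROAEXAMPLEID"
  }
}
""")

role_id = get_role_output["Role"]["RoleId"]

# An assumed role's aws:userId is UNIQUE-ROLE-ID:ROLE-SESSION-NAME,
# so the policy matches it with a trailing wildcard.
user_id_pattern = f"{role_id}:*"

assert user_id_pattern == "AROAEXAMPLEID:*"
```

That `user_id_pattern` string is what goes into the `StringNotLike` condition's `aws:userId` list.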
Just to update: I'm now trying to give access to a specific user instead, and this is not working either:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::name.com",
"arn:aws:s3:::name.com/*"
],
"Condition": {
"StringNotLike": {
"aws:userId": [
"AIDOFTHEUSER",
"ACCOUNTID"
]
}
}
}
]
}
In my terraform script I have the following resource -
resource "aws_api_gateway_account" "demo" {
cloudwatch_role_arn = var.apigw_cloudwatch_role_arn
}
In the Apply stage, I see the following error -
2020/09/21 20:20:48 [ERROR] <root>: eval: *terraform.EvalApplyPost, err: Updating API Gateway Account failed: AccessDeniedException:
status code: 403, request id: abb0662e-ead2-4d95-b987-7d889088a5ef
Is there a specific permission that needs to be attached to the role in order to get rid of this error?
Ran into the same problem as #bdev03; it took me 2 days to identify that the missing permission is "iam:PassRole". It would be so good if Terraform were able to point that out. Hope this helps.
Since neither this thread (so far) nor the official documentation does a very good job of solving this problem... The minimal policies required for this action are:
{
"Sid": "AllowPassingTheRoleToApiGateway",
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:PassedToService": ["apigateway.amazonaws.com"]
}
}
}
{
"Sid": "AllowAPIGatewayUpdate",
"Effect": "Allow",
"Action": [
"apigateway:UpdateRestApiPolicy",
"apigateway:PATCH",
"apigateway:GET"
],
"Resource": "*"
}
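Assembled into a complete policy document, the two statements above look like this (a sketch built from the fragments above; the `"*"` resources are kept as given, though you could scope them further):

```python
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPassingTheRoleToApiGateway",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            # Restrict PassRole so the role can only be handed
            # to API Gateway, not to arbitrary services.
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": ["apigateway.amazonaws.com"]
                }
            },
        },
        {
            "Sid": "AllowAPIGatewayUpdate",
            "Effect": "Allow",
            "Action": [
                "apigateway:UpdateRestApiPolicy",
                "apigateway:PATCH",
                "apigateway:GET",
            ],
            "Resource": "*",
        },
    ],
}

# A valid policy document serializes cleanly to JSON.
print(json.dumps(policy, indent=2))
```

This is the policy you would attach to the principal running `terraform apply`, not to the CloudWatch role itself.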
I haven't tested this, but I believe the role needs what's shown below. For more context, see the "To enable CloudWatch Logs" section at https://docs.aws.amazon.com/apigateway/latest/developerguide/stages.html
For common application scenarios, the IAM role could attach the
managed policy of AmazonAPIGatewayPushToCloudWatchLogs, which contains
the following access policy statement:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents",
"logs:GetLogEvents",
"logs:FilterLogEvents"
],
"Resource": "*"
}
]
}
The IAM role must also contain the following trust relationship
statement:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
I am trying to put a text file from a Lambda in Account B to an S3 bucket in Account A. The S3 bucket (test-bucket) has AWS KMS encryption enabled with the aws/s3 managed key.
1. I added the below permissions to the Account A S3 bucket (test-bucket):
{
"Version": "2012-10-17",
"Id": "ExamplePolicy",
"Statement": [
{
"Sid": "ExampleStmt",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::AccountB:role/Lambda-Role"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::test-bucket/*"
}
]
}
2. I added the below inline policy to my Lambda execution role in Account B:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:Encrypt",
"kms:GenerateDataKey",
"kms:DescribeKey",
"kms:ReEncrypt*"
],
"Resource": [
"arn:aws:kms:us-west-2:AccountA:key/AWS-KMS-ID"
]
}
]
}
This is my Lambda code:
# boto3 client created outside the handler; message and file_name
# come from the handler's logic (not shown).
import boto3

s3 = boto3.client('s3')

res = s3.put_object(
    Body=message,
    Key=file_name,
    Bucket='test-bucket',
    ACL='bucket-owner-full-control'
)
I get the below error while running this code from the Account B Lambda:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Since the S3 bucket is encrypted with an AWS managed key, I cannot edit the KMS key policy as we would with a customer managed key.
Can someone please guide me on what I'm missing?
Try granting your Lambda function permission for the s3:PutObject action. The inline policy of your Lambda role should then look something like:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:Encrypt",
"kms:GenerateDataKey",
"kms:DescribeKey",
"kms:ReEncrypt*"
],
"Resource": [
"arn:aws:kms:us-west-2:AccountA:key/AWS-KMS-ID"
]
},
{
"Effect": "Allow",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::test-bucket/*"
}
]
}
I've been troubleshooting this for a couple of hours myself.
I don't believe this is possible with the default "AWS Managed Key" when using SSE-KMS. Instead, you have to create a CMK and grant the cross-account user access to that key.
HTH
Cross-account access cannot be granted for an AWS managed key. You need to use a customer managed key or default encryption.
This may be useful: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-access-default-encryption/
I have a requirement to download an S3 object (for example https://hematestpolicy.s3.amazonaws.com/test/ca-dev2.png) across many instances in my AWS VPC without having to install the AWS CLI. The file should be protected and accessible only from within the VPC. I have applied the below bucket policy on my S3 bucket hematestpolicy. I am able to list the file from my instances using aws s3 ls, but unable to download it using wget. Can anyone suggest whether this is achievable, or a better solution for keeping the file private to the VPC while downloading it without the AWS CLI?
{
"Version": "2012-10-17",
"Id": "CreditApplications",
"Statement": [
{
"Sid": "AllowCreditAppProcessing",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::975472539761:root",
"arn:aws:iam::975472539761:role/hema-ghh"
]
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::hematestpolicy",
"arn:aws:s3:::hematestpolicy/*"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"172.31.0.0/16",
"192.168.2.6/16"
]
}
}
},
{
"Sid": "DenyEveryoneElse",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::hematestpolicy",
"arn:aws:s3:::hematestpolicy/*"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"172.31.0.0/16",
"192.168.2.6/16"
]
},
"ArnNotEquals": {
"aws:PrincipalArn": [
"arn:aws:iam::975472539761:role/hema-ghh",
"arn:aws:iam::975472539761:root"
]
}
}
}
]
}
Unless you have a VPC endpoint, all outgoing connections come from a public source (for public instances this is their public IP via an internet gateway; for private instances it is the NAT's address).
If you want objects to be retrievable only from within the VPC, you should look at using a VPC endpoint for S3. Creating one and adding it to your route tables also gives you an internal connection to S3 instead of going over the public internet.
Once you have this in place, you can create a bucket policy that limits requests to the source of that VPC endpoint.
For example, the policy below denies access when the request does not come from the VPC endpoint.
{
"Version": "2012-10-17",
"Id": "Policy1415115909152",
"Statement": [
{
"Sid": "Access-to-specific-VPCE-only",
"Principal": "*",
"Action": "s3:*",
"Effect": "Deny",
"Resource": ["arn:aws:s3:::awsexamplebucket1",
"arn:aws:s3:::awsexamplebucket1/*"],
"Condition": {
"StringNotEquals": {
"aws:SourceVpce": "vpce-1a2b3c4d"
}
}
}
]
}
Be aware that a bucket policy denying everything will restrict all access to that bucket (including management tasks) to only that source VPC endpoint, so you should try to limit the scope of actions, e.g. to s3:GetObject.
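Following that advice, a version scoped to object reads only might look like this (a sketch; the bucket name and endpoint ID are the same placeholders as above):

```python
import json

# Deny only GetObject outside the endpoint, so management actions
# (policy edits, listing, console access) remain reachable.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyGetObjectOutsideVpce",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::awsexamplebucket1/*",
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": "vpce-1a2b3c4d"}
            },
        }
    ],
}

print(json.dumps(scoped_policy, indent=2))
```

With this in place, a plain `wget` from an instance routed through the endpoint can fetch the object, while requests from anywhere else are denied.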
I need to integrate an AWS IoT based MQTT service. Another developer already set up MQTT and gave me the AWS account credentials, along with two topic names: one for publishing data and one to subscribe to for status data.
For testing purposes I created a device in the AWS IoT console, which gave me the Node.js IoT SDK download. I set it up on my local machine and played with the device-example script in the examples folder. I modified the AWS policy attached to my device to allow access to the two topics, one for publish and one for subscribe.
But all of this failed. The script gives the following output:
connect
offline
close
reconnect
connect
offline
close
and so on..
When I checked the AWS CloudWatch logs for IoT, I found the issue:
{
"timestamp": "2018-10-25 07:13:10.056",
"logLevel": "ERROR",
"traceId": "TRACEID",
"accountId": "ACCOUNTID",
"status": "Failure",
"eventType": "Subscribe",
"protocol": "MQTT",
"topicName": "status topic name",
"clientId": "sdk-nodejs-uuid",
"principalId": "clientid",
"sourceIp": "IP",
"sourcePort": PORT,
"reason": "AUTHORIZATION_FAILURE"
}
My changed policy is:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iot:Publish",
"iot:Receive"
],
"Resource": [
"arn:aws:iot:us-east-2:clientid:topic/publish-topic-name"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Subscribe",
"iot:Receive"
],
"Resource": [
"arn:aws:iot:us-east-2:clientid:topic/subscribe-topic-name"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Connect"
],
"Resource": [
"arn:aws:iot:us-east-2:clientid:client/sdk-nodejs-*",
"arn:aws:iot:us-east-2:clientid:topic/publish-topic-name",
"arn:aws:iot:us-east-2:clientid:topic/subscribe-topic-name"
]
}
]
}
Then I even gave all IoT permissions for all topics, but I still get an authorization error:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iot:*"
],
"Resource": [
"arn:aws:iot:us-east-2:clientid:client/sdk-nodejs-*",
"arn:aws:iot:us-east-2:clientid:topic/*"
]
}
]
}
For publish I only get the connect console output, and I also did not get any logs in CloudWatch, so I am not sure whether it succeeded or not.
UPDATE: OK, I found the issue after some searching: you have to add a topicfilter resource along with the topic in the policy. It looks like this is required for subscribing to topics. The updated policy is below.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iot:*"
],
"Resource": [
"arn:aws:iot:us-east-2:clientid:client/sdk-nodejs-*",
"arn:aws:iot:us-east-2:clientid:topicfilter/*",
"arn:aws:iot:us-east-2:clientid:topic/*"
]
}
]
}
Have you also configured an IoT policy? To connect to the IoT platform with an IAM user (MQTT over WSS), you need not only an IAM policy which allows access, but also an IoT policy which does so. On top of this, you should check whether your policies use the correct resource identifiers: there is a difference between how resources are defined for iot:Publish versus iot:Subscribe.
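To make the topic/topicfilter distinction concrete, a least-privilege version of the working policy might be sketched like this (the topic names mirror the placeholders above; the account ID is hypothetical):

```python
import json

ACCOUNT = "123456789012"  # placeholder account ID
REGION = "us-east-2"

iot_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["iot:Connect"],
            "Resource": [f"arn:aws:iot:{REGION}:{ACCOUNT}:client/sdk-nodejs-*"],
        },
        {
            "Effect": "Allow",
            "Action": ["iot:Publish", "iot:Receive"],
            # Publish/Receive are evaluated against topic/ ARNs...
            "Resource": [
                f"arn:aws:iot:{REGION}:{ACCOUNT}:topic/publish-topic-name",
                f"arn:aws:iot:{REGION}:{ACCOUNT}:topic/subscribe-topic-name",
            ],
        },
        {
            "Effect": "Allow",
            "Action": ["iot:Subscribe"],
            # ...but Subscribe is evaluated against topicfilter/ ARNs,
            # which is why the update above had to add them.
            "Resource": [f"arn:aws:iot:{REGION}:{ACCOUNT}:topicfilter/subscribe-topic-name"],
        },
    ],
}

print(json.dumps(iot_policy, indent=2))
```

Compared with the `iot:*` / wildcard-resource policy in the update, this grants only what the device-example script actually needs.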