Our project is currently hosted on AWS, and we are using the RDS service for the data tier. I need to give one of my IAM users permission to handle IP address addition/removal requests for the security group associated with my RDS instance. I tried making a custom policy for this case. Below is my policy JSON:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "rds:AuthorizeDBSecurityGroupIngress",
                "rds:ListTagsForResource",
                "rds:DownloadDBLogFilePortion",
                "rds:RevokeDBSecurityGroupIngress"
            ],
            "Resource": [
                "arn:aws:rds:ap-south-1:608862704225:secgrp:<security-group name>",
                "arn:aws:rds:ap-south-1:608862704225:db:<db name>"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBClusterSnapshots",
                "rds:DownloadCompleteDBLogFile"
            ],
            "Resource": "*"
        }
    ]
}
This isn't working despite various changes. Can anybody suggest where I am going wrong? Any solution would also be welcome.
Got the answer myself: I was actually trying to make this work through permissions on the RDS instance directly. Instead, security group permissions need to be handled through EC2 policies.
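For example, a minimal EC2 policy for that user might look like the following sketch. The account ID and security group ID are placeholders, not values from the question; note that ec2:DescribeSecurityGroups does not support resource-level permissions, so it needs "*":

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:RevokeSecurityGroupIngress"
            ],
            "Resource": "arn:aws:ec2:ap-south-1:123456789012:security-group/sg-0123456789abcdef0"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeSecurityGroups",
            "Resource": "*"
        }
    ]
}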
We are building a number of Azure Function Apps; each function app by default has its own IP whitelist.
We have multiple third parties that will consume these function apps. Each third party will likely have multiple IP addresses. Some function apps may be consumed by all third parties, others by one but not another, and so on.
We would like a central way of managing this. We have a PowerShell script that we've used in the past to maintain the IP addresses, but I was wondering if there is a better solution. Perhaps there are some templates built into Azure itself?
This must be a fairly common problem; does anyone have any suggestions, please?
You can use the Microsoft.Web/sites/config ARM object. You can deploy the config object on top of your existing functions, or include it in the ARM definition of a complete function app template. That way you can centrally manage the IP rules and version-control them. With PowerShell, you can orchestrate ARM deployments of the IP rules based on your criteria.
https://learn.microsoft.com/en-us/azure/templates/microsoft.web/2018-11-01/sites/config
{
    "type": "Microsoft.Web/sites/config",
    "apiVersion": "2018-11-01",
    "name": "[concat(variables('functionName'), '/web')]",
    "location": "East US",
    "dependsOn": [
        "[resourceId('Microsoft.Web/sites', variables('functionName'))]"
    ],
    "properties": {
        "ipSecurityRestrictions": [
            {
                "ipAddress": "00.00.00.00/00",
                "action": "Allow",
                "tag": "Default",
                "priority": 1000,
                "name": "Rule 1"
            },
            {
                "ipAddress": "00.00.00.00/00",
                "action": "Allow",
                "tag": "Default",
                "priority": 2000,
                "name": "Rule 2"
            },
            {
                "ipAddress": "Any",
                "action": "Deny",
                "priority": 2147483647,
                "name": "Deny all",
                "description": "Deny all access"
            }
        ]
    }
}
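Rolling the template out with PowerShell could then be a single deployment per function app; something like this sketch, where the resource group name and template path are placeholders:

```powershell
# Deploy the IP-rule config template into an existing resource group.
New-AzResourceGroupDeployment `
    -ResourceGroupName "my-function-rg" `
    -TemplateFile "./ip-rules.json"
```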
The main problem is that IP addresses can change quite often. I prefer to control this with subscription keys per client, or per client and API.
To do that, you can add API Management (the API Gateway pattern) in front of your APIs. You can also keep controlling access per IP address using API Management, but I would say the API key is good practice.
more info:
https://learn.microsoft.com/en-us/azure/api-management/api-management-access-restriction-policies#RestrictCallerIPs
https://microservices.io/patterns/apigateway.html
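If you do keep per-IP control in API Management, the restriction policy from the first link is applied in the inbound policy section; a sketch with placeholder addresses:

```xml
<!-- Allow only the listed callers; everything else is rejected. -->
<ip-filter action="allow">
    <address>13.66.201.169</address>
    <address-range from="13.66.140.128" to="13.66.140.143" />
</ip-filter>
```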
Hello, I am having an issue with S3 buckets.
I have different environments (dev, QA, staging, and production), and each S3 bucket has a specific IAM role per environment, so it won't be possible to share the same AWS API keys.
What I need is the ability to sync content from {ENV_1}_S3_Bucket to {ENV_2}_S3_Bucket using the Node.js AWS SDK.
Is there anything that can help? I wouldn't like to mess with the IAM roles too much.
Thanks in advance and regards.
The role you want to access the bucket with must be explicitly listed in the S3 bucket policy:
(S3 web console -> the bucket -> Permissions tab -> Bucket Policy button)
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111111111111:role/your-bucket-role-1",
                    "arn:aws:iam::222222222222:role/your-bucket-role-1"
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::your-bucket",
                "arn:aws:s3:::your-bucket/*"
            ]
        }
    ]
}
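Once the destination environment's role is listed in the source bucket's policy, the sync itself can stay in Node.js. A sketch, assuming the bucket names and the keysToSync helper are illustrative and not from the original question:

```javascript
// Pure helper: given the key lists of both buckets, decide which
// objects still need to be copied to the destination.
function keysToSync(sourceKeys, destKeys) {
  const have = new Set(destKeys);
  return sourceKeys.filter((key) => !have.has(key));
}

// With the AWS SDK for JavaScript (v2), the copy loop could look like:
//
// const AWS = require('aws-sdk');
// const s3 = new AWS.S3();
// const src = await s3.listObjectsV2({ Bucket: srcBucket }).promise();
// const dst = await s3.listObjectsV2({ Bucket: dstBucket }).promise();
// const missing = keysToSync(src.Contents.map((o) => o.Key),
//                            dst.Contents.map((o) => o.Key));
// for (const key of missing) {
//   await s3.copyObject({
//     Bucket: dstBucket,
//     CopySource: `${srcBucket}/${key}`,
//     Key: key,
//   }).promise();
// }
```

copyObject performs a server-side copy, so the instance running the sync never downloads the object bodies; it only needs s3:GetObject on the source and s3:PutObject on the destination.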
I'm working on a serverless project with the aws-python3 template.
The Lambda needs to write to an Amazon Elasticsearch Service database.
How do I specify this in the serverless.yml file, in the IAM roles section?
Build it up using CloudFormation syntax in the resources: section.
Below is an example IAM policy that allows a Lambda function to write to an Amazon Elasticsearch Service database.
I suggest you use the popular serverless-pseudo-parameters plugin to fill in the account ID, region, etc.
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": [
"es:*"
],
"Resource": "arn:aws:es:us-west-1:987654321098:domain/test-domain/*"
}
]
}
Ref: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-ac.html
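In serverless.yml itself, the Lambda execution role's permissions can also be expressed as iamRoleStatements under provider. A sketch, assuming the serverless-pseudo-parameters plugin is installed (the #{...} placeholders come from that plugin; the domain name and runtime are illustrative):

```yaml
provider:
  name: aws
  runtime: python3.8
  iamRoleStatements:
    - Effect: Allow
      Action:
        - es:ESHttpPost
        - es:ESHttpPut
      Resource: arn:aws:es:#{AWS::Region}:#{AWS::AccountId}:domain/test-domain/*
```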
I have multiple EC2 instances originating from a single VPC, and I want to attach a bucket policy to my S3 bucket to make sure that only traffic from that VPC is allowed to access it. So I created an endpoint for that VPC, and it added all the policies and routes to the routing table. I attached the following policy to my bucket:
{
    "Version": "2012-10-17",
    "Id": "Policy1415115909153",
    "Statement": [
        {
            "Sid": "Access-to-specific-VPCE-only",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::examplebucket",
                "arn:aws:s3:::examplebucket/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:sourceVpce": "vpce-111bbb22"
                }
            }
        }
    ]
}
But it does not work: when I connect to my bucket using the AWS SDK for Node.js, I get an access denied error. The Node.js application is actually running on an EC2 instance launched in the same VPC as the endpoint.
I even tried a VPC-level bucket policy, but I still get an access denied error. Can anyone tell me whether I need to include an endpoint parameter in the SDK S3 connection, or anything else?
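For reference, the pattern AWS documents for restricting a bucket to a specific VPC endpoint uses an explicit Deny with StringNotEquals (and a Principal element, which the policy above is missing) rather than an Allow; a sketch reusing the question's placeholder bucket name and endpoint ID:

{
    "Version": "2012-10-17",
    "Id": "Policy-VPCE-restriction",
    "Statement": [
        {
            "Sid": "Deny-unless-from-specific-VPCE",
            "Principal": "*",
            "Action": "s3:*",
            "Effect": "Deny",
            "Resource": [
                "arn:aws:s3:::examplebucket",
                "arn:aws:s3:::examplebucket/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:sourceVpce": "vpce-111bbb22"
                }
            }
        }
    ]
}

With this shape, access still has to be granted by an IAM policy on the instance's role; the bucket policy only blocks traffic that does not come through the endpoint.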
I have created all that are needed for a successful deployment.
I tried to make the deployment without configuring the CodeDeploy agent on the Amazon instance, and the deployment [obviously] failed.
After setting it up, though, it succeeded.
So, my question is, should I configure every instance that I use manually?
What if I have 100 instances in the deployment group?
Should I create an AMI with the CodeDeploy agent tool already configured?
EDIT
I have watched this:
https://www.youtube.com/watch?v=qZa5JXmsWZs
with this:
https://github.com/andrewpuch/code_deploy_example
and read this:
http://blogs.aws.amazon.com/application-management/post/Tx33XKAKURCCW83/Automatically-Deploy-from-GitHub-Using-AWS-CodeDeploy
I just cannot understand why I must configure the instance with the IAM creds. Isn't it supposed to take the creds from the role I launched it with?
I am not an expert in AWS roles and policies, but this is what I understood from the CD documentation.
Is there a way to give the IAM user access to the instance so I won't have to set up the CD agent?
EDIT 2
I think this post kind of answers it: http://adndevblog.typepad.com/cloud_and_mobile/2015/04/practice-of-devops-with-aws-codedeploy-part-1.html
But as you can see, I launched multiple instances and only installed the CodeDeploy agent on one of them. What about the others? Do I have to repeat myself, log in to each one, and install it separately? That is OK since I just have 2 or 3, but what if I have hundreds or even thousands of instances? Actually, there are different solutions for this. One of them: set up the whole environment on one instance and create an AMI from it, then launch working instances from that pre-configured image instead of the default AWS ones. Other solutions are also available.
Each instance only requires the CodeDeploy agent installed on it. It does not require the AWS CLI to be installed. See AWS CodeDeploy Agent Operations for installation and operation details.
You should create an instance profile/role in IAM that will grant any instance the correct permissions to accept a code deployment through CodeDeploy service.
Create a role called ApplicationServer. To this role, add the following policy. This assumes you are using S3 for your revisions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::codedeploy-example-com/*"
            ]
        },
        {
            "Sid": "Stmt1414002531000",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:PutMetricData"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "Stmt1414002720000",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
To your specific questions:
So, my question is, should I configure every instance that I use
manually?
What if I have 100 instances in the deployment group? Should I create
an AMI with the aws-cli tool already configured?
Configure an AMI with your base tools, or use CloudFormation or Puppet to manage software installation on a given instance as needed. Again, the AWS CLI is not required for CodeDeploy; only the most current version of the CodeDeploy agent is required.
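Another option is installing the agent at launch via EC2 user data, so every new instance comes up ready without baking a custom AMI. A sketch for Amazon Linux, following the agent installation docs linked above (the installer bucket is per-region, aws-codedeploy-<region>; the region shown is a placeholder):

```shell
#!/bin/bash
# EC2 user data: install the CodeDeploy agent on Amazon Linux at boot.
yum -y update
yum -y install ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
./install auto
service codedeploy-agent status
```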