I'm currently using CloudFormation to deploy a stack that includes, among other things:
A VPC
A Subnet inside the created VPC
An EC2 Instance inside the created Subnet
An RDS Postgres database
At first I couldn't connect to the DBInstance because it didn't have a properly configured SecurityGroup.
When I tried to create the SecurityGroup, the deploy failed because the DBInstance and the SecurityGroup were being created in different VPCs.
But I can't find a property on any RDS-related CloudFormation resource that controls which VPC my database will be created in. Searching around, I've found the alternative of creating a DBSubnetGroup.
But in order to use a DBSubnetGroup, I need to have at least two subnets (because it needs to cover at least 2 Availability Zones). I wish to avoid creating an empty subnet on another AZ just to make this work.
Is there a better alternative? What's the easiest way to give my EC2 instances access to my DBInstance using only CloudFormation?
If you don't want to go with the DBSubnetGroup approach, the only option is to create the RDS instance in the default VPC: if you do not specify a DBSubnetGroup, your RDS instance will be created in the default VPC.
Now there are two ways for your EC2 instance to access the RDS Instance.
Make your RDS instance publicly accessible. Ensure that you have tight SecurityGroup rules to reduce the attack surface. The EC2 instance will then be able to reach the database instance.
Mark publicly accessible as false. Connect the default VPC to the VPC you created using a VPC Peering Connection. I recommend this approach, since your RDS instance will not be publicly accessible and you still get the job done.
On top of this, you mentioned:
But in order to use a DBSubnetGroup, I need to have at least two subnets (because it needs to cover at least 2 Availability Zones). I wish to avoid creating an empty subnet on another AZ just to make this work.
RDS doesn't work that way. When you specify MultiAZ = true and a DBSubnetGroup in your RDS template, a standby replica of your DB instance is maintained in another subnet in a different AZ. When your primary node goes down, the standby comes up and acts as the master. Keeping this in mind, I would strongly recommend using a DBSubnetGroup when creating the RDS instance.
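For what it's worth, a minimal sketch of that approach in CloudFormation might look like this (MyVPC, SubnetA, EC2SecurityGroup, DBPassword and all names/CIDRs are placeholders for things already in your template, not a drop-in solution):

```yaml
Resources:
  # Second subnet in another AZ; a DB subnet group must span at least two AZs.
  SubnetB:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.2.0/24
      AvailabilityZone: !Select [1, !GetAZs '']

  MyDBSubnetGroup:
    Type: AWS::RDS::DBSubnetGroup
    Properties:
      DBSubnetGroupDescription: Subnets for the Postgres instance
      SubnetIds:
        - !Ref SubnetA   # the existing subnet that also hosts the EC2 instance
        - !Ref SubnetB

  DBSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow Postgres from the EC2 instance
      VpcId: !Ref MyVPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 5432
          ToPort: 5432
          SourceSecurityGroupId: !Ref EC2SecurityGroup

  MyDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: postgres
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'
      MasterUsername: masteruser
      MasterUserPassword: !Ref DBPassword        # placeholder: pass in as a NoEcho parameter
      DBSubnetGroupName: !Ref MyDBSubnetGroup    # this is what places the instance in your VPC
      VPCSecurityGroups:
        - !GetAtt DBSecurityGroup.GroupId
```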
More reading available here
Hope this helps.
I have AWS EKS nodes accessing RDS, where I have whitelisted the EKS nodes' public IPs in RDS's security group. But this is not a viable solution, because EKS nodes can get replaced and their public IPs change with them.
How can I make the EKS nodes' connection to RDS more stable?
Last year we introduced a new feature that assigns Security Groups to Kubernetes pods directly, to overcome having to assign them at the node level (this avoids the ephemerality problem you call out and creates a more secure environment where only the pod that needs to talk to RDS can do so, rather than the ENTIRE node). You can follow this tutorial to configure the feature, or refer to the official documentation.
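For illustration, the feature is driven by a SecurityGroupPolicy object; a minimal sketch might look like this (the name, namespace, labels and security group ID are placeholders, not values from your cluster):

```yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: rds-access            # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: needs-rds          # only pods carrying this label get the security group
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0  # the group you then allow in the RDS security group
```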
If your EKS cluster is in the same VPC as the RDS instance, you can just whitelist your VPC's private IP address (CIDR) range in the RDS security group. If they are in different VPCs, connect both VPCs with VPC peering and whitelist the EKS VPC's IP range in the RDS security group. Don't use public IPs, as that traffic goes outside the AWS network; always prefer private connections wherever possible, since they are faster, more reliable, and more secure.
If you don't want to whitelist the complete CIDR, you can also create a NAT gateway for your EKS cluster, route traffic leaving the cluster through that NAT gateway, and then whitelist the NAT gateway's IP in the RDS security group.
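For illustration, whitelisting a VPC CIDR range in the RDS security group could look roughly like this in CloudFormation (the CIDR, port and logical IDs are placeholders):

```yaml
RDSIngressFromEksVpc:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref RdsSecurityGroup   # the security group attached to the RDS instance
    IpProtocol: tcp
    FromPort: 5432                   # use your engine's port (5432 for Postgres, 3306 for MySQL)
    ToPort: 5432
    CidrIp: 10.0.0.0/16              # the EKS VPC's (or peered VPC's) CIDR range
```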
I am unable to use AWS SSM Session Manager for secure login to my private instances when NACL rules are applied, whereas AWS SSM works if I open the NACL rules to the public (0.0.0.0/0). I want my private instances to be secure and not have open connections in the NACL.
Please help.
I want my private instances to be secure and not have open connections
To use AWS SSM in a completely private subnet that has no inbound or outbound access to the internet, you need to use VPC endpoints. Follow the steps described in the AWS docs to do this:
Amazon EC2 instances must be registered as managed instances to be managed with AWS Systems Manager. Follow these steps:
1. Verify that SSM Agent is installed on the instance.
2. Create an AWS Identity and Access Management (IAM) instance profile for Systems Manager. You can create a new role, or add the needed permissions to an existing role.
3. Attach the IAM role to your private EC2 instance.
4. Open the Amazon EC2 console, and then select your instance. On the Description tab, note the VPC ID and Subnet ID.
5. Create a VPC endpoint for Systems Manager. For Service Name, select com.amazonaws.[region].ssm (for example, com.amazonaws.us-east-1.ssm). For a full list of Region codes, see Available Regions. For VPC, choose the VPC ID for your instance. For Subnets, choose a Subnet ID in your VPC. For high availability, choose at least two subnets from different Availability Zones within the Region. Note: if you have more than one subnet in the same Availability Zone, you don't need to create VPC endpoints for the extra subnets; any other subnets within the same Availability Zone can access and use the interface. For Enable DNS name, select Enable for this endpoint (for more information, see Private DNS for interface endpoints). For Security group, select an existing security group, or create a new one. The security group must allow inbound HTTPS (port 443) traffic from the resources in your VPC that communicate with the service. If you created a new security group, open the VPC console, choose Security Groups, and then select the new security group. On the Inbound rules tab, choose Edit inbound rules, add a rule with the following details, and then choose Save rules: for Type, choose HTTPS; for Source, choose your VPC CIDR (for advanced configuration, you can allow the specific subnets' CIDRs used by your EC2 instances). Note the Security group ID; you'll use this ID with the other endpoints. Optional: for advanced setup, create policies for the VPC interface endpoints for AWS Systems Manager.
6. Repeat step 5 with the following change: for Service Name, select com.amazonaws.[region].ec2messages.
7. Repeat step 5 with the following change: for Service Name, select com.amazonaws.[region].ssmmessages. You must do this if you want to use Session Manager.
After the three endpoints are created, your instance appears in Managed Instances and can be managed using Systems Manager.
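If you prefer to keep this in CloudFormation rather than clicking through the console, a minimal sketch of the three interface endpoints might look like this (the VPC, subnet and security group references are placeholders for resources in your own template):

```yaml
Resources:
  SsmEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcEndpointType: Interface
      ServiceName: !Sub 'com.amazonaws.${AWS::Region}.ssm'
      VpcId: !Ref PrivateVpc
      SubnetIds: [!Ref PrivateSubnetA, !Ref PrivateSubnetB]
      SecurityGroupIds: [!Ref EndpointSecurityGroup]   # must allow 443 from the VPC CIDR
      PrivateDnsEnabled: true

  Ec2MessagesEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcEndpointType: Interface
      ServiceName: !Sub 'com.amazonaws.${AWS::Region}.ec2messages'
      VpcId: !Ref PrivateVpc
      SubnetIds: [!Ref PrivateSubnetA, !Ref PrivateSubnetB]
      SecurityGroupIds: [!Ref EndpointSecurityGroup]
      PrivateDnsEnabled: true

  SsmMessagesEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcEndpointType: Interface
      ServiceName: !Sub 'com.amazonaws.${AWS::Region}.ssmmessages'   # required for Session Manager
      VpcId: !Ref PrivateVpc
      SubnetIds: [!Ref PrivateSubnetA, !Ref PrivateSubnetB]
      SecurityGroupIds: [!Ref EndpointSecurityGroup]
      PrivateDnsEnabled: true
```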
I want to send mail using Nodemailer in Node.js, but my Lambda function is deployed in the default VPC because I also have to access RDS from the Lambda function.
I am unable to send the success mail for data successfully inserted into RDS when my Lambda function is deployed in the default VPC. What changes do I need to make so I can send it?
If I choose no VPC, then I am unable to write data to the database.
From https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html,
When you connect a function to a VPC in your account, it does not have access to the internet unless your VPC provides access.
I take this to mean that if you wish to access both RDS and the internet from Lambda within your VPC, you need a NAT gateway (or to spin up your own NAT instance). In other words, Lambda does not support internet access with a public IP through an Internet Gateway, which is the usual mechanism for internet access within your VPC.
If you don't mind the cost, about 4.5 cents an hour plus data transfer last I checked, the simplest solution is probably:
Add another subnet to your VPC.
Add a NAT Gateway to your VPC (it lives in a public subnet).
Add a route table to the new subnet that routes 0.0.0.0/0 through the NAT Gateway.
Put your Lambda in that subnet.
This essentially creates a connection to the internet in that VPC without your lambda holding a Public IP address.
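A rough CloudFormation sketch of those steps (all logical IDs, the runtime and the code location are placeholders, not a drop-in template):

```yaml
Resources:
  NatEip:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc

  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatEip.AllocationId
      SubnetId: !Ref PublicSubnet          # the NAT gateway sits in a public subnet

  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref MyVpc

  DefaultRouteViaNat:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway

  PrivateSubnetRouteAssoc:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      SubnetId: !Ref LambdaSubnet          # the subnet the Lambda is attached to

  MailerFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: nodejs18.x
      Handler: index.handler
      Role: !GetAtt LambdaRole.Arn
      Code:
        S3Bucket: my-bucket                # placeholder code location
        S3Key: mailer.zip
      VpcConfig:
        SubnetIds: [!Ref LambdaSubnet]
        SecurityGroupIds: [!Ref LambdaSecurityGroup]
```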
Some of my data is in Mongo replicas hosted in Docker containers running in a Kubernetes cluster. I need to access this data from an AWS Lambda that runs in the same VPC and subnet as the Kubernetes minions hosting the Mongo containers. The Lambda and the Kubernetes minions run under the same security group. I am trying to connect using the URL "mongodb://mongo-rs-1-svc,mongo-rs-2-svc,mongo-rs-3-svc/res?replicaSet=mongo_rs", where mongo-rs-x-svc are three Kubernetes services that enable access to the appropriate replicas. When I try to connect using this URL, it fails to resolve the Mongo replica hostnames (e.g. mongo-rs-2-svc). The same URL works fine for my web service, which runs in its own Docker container in the same Kubernetes cluster.
Here is the error I get from the Mongo client that I use:
{"name":"MongoError","message":"failed to connect to server [mongo-rs-1-svc:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongo-rs-1-svc mongo-rs-1-svc:27017]"}
I tried replacing mongo-rs-x-svc with their internal IP addresses in the URL. In that case the name resolution error disappeared, but I got another error:
{"name":"MongoError","message":"failed to connect to server [10.0.170.237:27017] on first connect [MongoError: connection 5 to 10.0.170.237:27017 timed out]"}
What should I be doing to enable this access successfully?
I understand that I can use the web service as an intermediary to access this data, but since my Lambda is in a VPC, I would have to deploy NAT gateways and that would increase the cost. Is there a way to access the web service using an internal endpoint instead of the public URL? Maybe that is another way to get the data.
If any of you have a solution for this scenario, please share. I went through many threads that showed up as similar questions or in search results, but none had a solution for this case.
This is a common confusion with Kubernetes. The Service object in Kubernetes is only accessible from inside Kubernetes by default (i.e. when type: ClusterIP is set). If you want to be able to access it from outside the cluster you need to edit the service so that it is type: NodePort or type: LoadBalancer.
I'm not entirely sure, but it sounds like your network setup would allow you to use type: NodePort for your Service in Kubernetes. That will open a high-numbered port (e.g. 32XXX) on each of the Nodes in your cluster that forwards to your Mongo Pod(s). DNS resolution for the service names (e.g. mongo-rs-1-svc) will only work inside the Kubernetes cluster, but by using NodePort I think you should be able to address them as mongodb://ec2-instance-1-ip:32XXX,ec2-instance-2-ip:32XXX,....
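For illustration, one of the three Services switched to NodePort might look roughly like this (the name, selector labels and the fixed nodePort value are assumptions, not taken from your cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-rs-1-svc
spec:
  type: NodePort            # changed from the default ClusterIP
  selector:
    app: mongo-rs-1         # whatever label your first replica's pods carry
  ports:
    - port: 27017           # port inside the cluster
      targetPort: 27017     # container port on the Mongo pod
      nodePort: 32017       # fixed high port opened on every node
```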
Coreyphobrien's answer is correct. Subsequently you asked how to keep the exposure private. For that I want to add some information:
You need to make the Lambdas part of the VPC that your cluster is in. For this, you use the --vpc-config parameter when creating or updating the Lambdas. This will create a virtual network interface in the VPC that gives the Lambda access. For details see this.
After that you should be able to set the AWS security group for your instances so that the NodePort will only be accessible from another security group that is used for your Lambdas network interface.
This blog discusses an example in more detail.
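As a rough CloudFormation sketch of that security-group pairing (logical IDs and the NodePort value are placeholders; NodeSecurityGroup stands for whatever group your Kubernetes nodes already use):

```yaml
LambdaSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Attached to the Lambda's VPC network interfaces
    VpcId: !Ref ClusterVpc

NodePortIngressFromLambda:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref NodeSecurityGroup            # the Kubernetes nodes' security group
    IpProtocol: tcp
    FromPort: 32017                            # the NodePort exposed for Mongo
    ToPort: 32017
    SourceSecurityGroupId: !Ref LambdaSecurityGroup
```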
The AWS docs are almost useless when trying to describe an entire system. Is there any resource or compiled list of all the resources that can belong to a security group and the different types of security groups?
Here is what I have so far:
EC2-Classic instance
EC2-VPC instance
RDS
ElastiCache
Anything else I'm missing? Any really good doc resource I'm missing?
The main concept to understand about an AWS Security Group is that it determines what traffic is permitted in/out of a resource on a virtual network.
Therefore, think about what can be launched "into" a virtual network:
Amazon EC2 instances
Services that launch EC2 instances:
AWS Elastic Beanstalk
Amazon Elastic MapReduce
Services that use EC2 instances (without appearing directly in the EC2 service):
Amazon RDS (Relational Database Service)
Amazon Redshift
Amazon ElastiCache
Amazon CloudSearch
Elastic Load Balancing
Lambda
Resources do not "belong" to a security group. Rather, one or more Security Groups are associated to a resource. This is often a difficult concept to understand since Security Groups have similar abilities to firewalls, and firewalls generally "encase" a number of devices. Rather than "belonging to", or "being encased by", a security group, the virtual network simply uses the definitions contained within a security group to determine what traffic to permit in/out of the resource.
For example, imagine two EC2 instances that are associated with a "Web" security group, and the security group is configured to permit incoming traffic on port 80. While both instances are associated with the same security group, they cannot communicate with each other. This is because they do not "belong" to the security group, and are not "within" the security group. Rather, the security group definition is used to filter traffic in/out of the instances. The security group can, of course, be configured to permit incoming traffic from the security group itself (a self-reference), which really means that incoming traffic is permitted from any resource that is, itself, associated with the security group. (See, I told you that it's a difficult concept to grasp!)
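For illustration, a self-referencing security group of that kind might be declared in CloudFormation roughly like this (logical IDs and ports are placeholders); the self-reference is added as a separate ingress resource so the group doesn't have to reference itself inside its own definition:

```yaml
WebSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Web tier
    VpcId: !Ref MyVpc
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0            # inbound web traffic from anywhere

WebSelfReference:
  Type: AWS::EC2::SecurityGroupIngress   # separate resource avoids a circular reference
  Properties:
    GroupId: !Ref WebSecurityGroup
    IpProtocol: -1                        # all traffic
    SourceSecurityGroupId: !Ref WebSecurityGroup   # from anything associated with the same group
```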
Also, a security group is not actually associated with an EC2 instance within a VPC. Rather, the security group is associated with the Elastic Network Interface (ENI) that is attached to an EC2 instance. Think of the ENI as a "network card" that links an instance to a VPC subnet. An instance can have multiple ENIs and can therefore connect to multiple subnets. Each ENI can have its own association with security groups. Thus, the actual security groups being used depend upon where the traffic flows in/out of the instance, rather than being associated with the instance itself.
There are only two "types" of security groups:
EC2 Classic (the legacy network configuration)
EC2 VPC (the modern private network configuration)
Either type of security group can be associated with any other resource, as long as they are in the same network type (classic or VPC).
A Lambda Function can also be associated with a Security Group. That might not have been the case in 2015, when the original answer was written.
Fargate tasks can also be assigned to security groups.
AWS EFS (Elastic File System) needs a security group attached.
From the AWS document:
To connect your Amazon EFS file system to your Amazon EC2 instance, you must create two security groups: one for your Amazon EC2 instance and another for your Amazon EFS mount target.
Reference: https://docs.aws.amazon.com/efs/latest/ug/accessing-fs-create-security-groups.html
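A minimal CloudFormation sketch of the mount-target security group that quote describes (logical IDs are placeholders; the EC2 instances' security group is assumed to exist as Ec2SecurityGroup):

```yaml
EfsMountTargetSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Attached to the EFS mount target
    VpcId: !Ref MyVpc
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 2049                                # NFS
        ToPort: 2049
        SourceSecurityGroupId: !Ref Ec2SecurityGroup  # the instances' security group
```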
Interface endpoints can also be associated with security groups. This is a good question, and so far the answer is not easy to find in the AWS documentation.