Building AWS Infrastructure - Security Questions

I am building cloud infrastructure on AWS.
I have some backend applications (like database servers) and some front-end apps (like web servers) that need inbound/outbound traffic.
I also have some devops/dev apps, like Jenkins and Airflow (a workflow manager that has a web UI), that I need to protect. Some of these apps, like Airflow, don't have a security mechanism (for example, login/password), and I still need to access them on port 80 from the Internet.
I was thinking of setting up an AWS VPC with a private subnet and a public subnet. In the public subnet I will put the front-end apps, and in the private subnet I will put the backend services (like databases).
For the backend services, I need a way for my dev team to connect to, for example, a MySQL database (port 3306).
What is the correct way to do this?
Do I need to expose port 3306?
Do I need a NAT or a bastion host? What is the difference between them?
If I set up a NAT/bastion host, I will have to do port forwarding, right? If I have two instances of a MySQL database, how can I connect to each of them through the bastion? Do I need to allocate different ports on the bastion and forward each one?
For the devops/dev apps:
Which subnet do I choose?
If I put them in the private subnet, how can my team access them on port 80?
Do I need an intranet/VPC for these applications?

These are all quite common problems people are faced with on AWS. You have lots of options.
You could put all of your backend and devops services in the private subnet. You then have a number of choices for connecting to them securely.
Option 1
Use Security Groups to limit access to these nodes. Security Groups can be configured to allow only specific IP addresses to connect to your resources.
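For example (a sketch only; the security group ID and CIDR below are placeholders, not values from your setup), you could allow only your team's address range to reach MySQL and the devops web UIs:

    # Allow MySQL (3306) only from the team's address range
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 3306 \
        --cidr 203.0.113.0/24

    # Same idea for the devops tools exposed on port 80
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 80 \
        --cidr 203.0.113.0/24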
Option 2
Use a bastion host.
Referring to your question "What is the difference between NAT and a bastion host?":
NAT simply allows instances inside a private subnet to connect to the internet by routing all their traffic through the NAT instance. The NAT instance then directs the return traffic from the internet back to the correct nodes in the private subnet. NAT alone does not allow you to connect to instances inside your private subnet from the outside; you'd need to combine it with Port Address Translation (port forwarding) to achieve this.
A bastion host is an instance that you place in a public subnet of your VPC. You can therefore connect to it from the internet. Once you're connected to your bastion host, you can connect to any other instance inside your VPC using its private IP. As long as you lock the bastion host down properly, you're in business.
As a result, you could use a bastion host to connect to all those special nodes in your private subnet.
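To answer the port-forwarding part of your question: you don't have to permanently allocate a port on the bastion per database. Each developer can open SSH tunnels through the bastion, one local port per MySQL instance. A sketch (key path, bastion address and private IPs are placeholders):

    # Forward one local port to each MySQL instance via the bastion
    ssh -i ~/.ssh/bastion-key.pem \
        -L 3307:10.0.2.10:3306 \
        -L 3308:10.0.2.11:3306 \
        ec2-user@bastion.example.com

    # In another terminal on the developer's machine:
    mysql -h 127.0.0.1 -P 3307 -u devuser -p   # first database
    mysql -h 127.0.0.1 -P 3308 -u devuser -p   # second database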
Option 3
Set up a VPN connection to your VPC, either using the built-in VPC VPN functionality or by running a VPN instance with something like OpenSwan on it.
VPN connections are extremely secure but can often be a tad temperamental (personal opinion from personal experience).
So, you have lots of choices. I'd recommend doing a few more Google searches and digging deeper into the AWS docs, as these are all commonly asked questions!
Good luck! :)

Related

Azure Kubernetes Cluster - Accessing and interacting with Service(s) from on-premise

Currently we have the following scenario:
We have established our connection (network-wise) from on-premises to the Azure Kubernetes cluster (private cluster!) without any problems.
The following ports are routed and allowed:
TCP 80
TCP 443
So far, we are in a development environment and test different configurations.
For setting up our AKS, we need to set the virtual network (via CNI) and Service CIDR. We have set the following configuration (just an example)
Virtual Network: 10.2.0.0 /21
Service CIDR: 10.2.8.0 /23
So our pods get their IPs from the virtual network subnet, and the services get theirs from the Service CIDR. So far so good. A route table for the virtual network (the subnet has been associated with the route table) forwards all traffic to our firewall and vice versa: interacting with the virtual network works without any issue. The network team (which is new to Azure cloud stuff as well) has said that the connection and access to the Service CIDR should be working.
Sadly, we are unable to access the Service CIDR.
For example, let's say we want to set up the Kubernetes dashboard (web UI) via https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/. After applying the YAML, the Kubernetes dashboard pod and service are successfully created. The pod can be pinged and "accessed", but the service, which should make the dashboard reachable on port 443, cannot be accessed, for example via https://10.2.8.42/.
My workaround so far is that the Kubernetes dashboard Service (type: ClusterIP) has been given an external IP from the virtual network. This all sounds great, but I am not really fond of it, since I have to interact with the virtual network rather than the Service CIDR.
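For reference, the workaround boils down to a Service manifest roughly like this (the external IP is a placeholder from the virtual network range; the selector and port values assume the standard dashboard deployment):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard-vnet
      namespace: kubernetes-dashboard
    spec:
      type: ClusterIP
      externalIPs:
        - 10.2.3.4   # address taken from the virtual network, not the Service CIDR
      selector:
        k8s-app: kubernetes-dashboard
      ports:
        - port: 443
          targetPort: 8443
    EOF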
Is this really the correct way? Any hints on how to make the Service CIDR accessible? What am I missing?
Any help would be appreciated.

Azure WebApp SNAT Exhaustion - could Private Endpoints improve

Dear Azure Networking Experts,
Our WebApps are frequently running out of outbound TCP connections. Most of the outbound connections are actually Azure-internal connections (SQL, BlobStore, Backend Services). But we don't have Virtual Network and Private Endpoints in place yet.
Could Virtual Network and Private Endpoints help solve our issue? I'd expect that by using internal IP addresses, there's no SNAT IP matching required at all?
I'm unfortunately no expert in networking, but I'm looking at this issue from a WebApp developer's perspective (the recommendations for how to save connections, like keep-alive etc., are just not enough to fix the issue). Any advice is appreciated; however, we definitely prefer to use managed Azure services like auto-scalable WebApp farms.
SNAT is used when you route outbound traffic through a public load balancer. App Service plans are provisioned with public IP addresses and would not need SNAT out of the box.
App Service plans support VNet integration and can access other resources on their private IP addresses if the VNet peering is correctly configured.
Q: Could Virtual Network and Private Endpoints help solve our issue?
A: Yes, it would be a matter of configuring the VNet integration on the App Service plans and configuring private endpoints on the other Azure resources. One might also look to utilize the public IPs of the services rather than putting things behind a load balancer.
Q: I'd expect that by using internal IP addresses, there's no SNAT IP matching required at all?
A: Yes, you should not need any SNAT IP matching.
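As a rough sketch (all resource names below are placeholders, and the private-endpoint parameters vary by target service), the two steps might look like this with the Azure CLI:

    # 1. Integrate the Web App with a subnet of your virtual network
    az webapp vnet-integration add \
        --resource-group my-rg \
        --name my-webapp \
        --vnet my-vnet \
        --subnet webapp-integration-subnet

    # 2. Create a private endpoint for, e.g., the SQL server in another subnet
    az network private-endpoint create \
        --resource-group my-rg \
        --name sql-private-endpoint \
        --vnet-name my-vnet \
        --subnet private-endpoints-subnet \
        --private-connection-resource-id $(az sql server show \
            --resource-group my-rg --name my-sqlserver --query id -o tsv) \
        --group-id sqlServer \
        --connection-name sql-pe-connection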

AWS, NodeJS - Connecting app to Mongodb on another EC2 instance

I am trying to connect my app, running on one EC2 instance, to MongoDB, running on another EC2 instance. I'm pretty sure the problem is in the security settings, but I'm not quite sure how to handle that.
First off, my app's instance is in an autoscaling group that sits behind an ELB. The inbound security settings for the instance and ELB allow access to port 80 from anywhere, as well as all traffic from its own security group.
The EC2 instance that runs Mongo is able to take connections if the security group for that instance accepts all inbound traffic from anywhere. Any other configuration that I've tried causes the app to say that it cannot make a connection with the remote address. I've set rules to accept inbound traffic from all security groups that I have, but it only seems to work when I allow all traffic from anywhere.
Also, my db instance is set up with an elastic ip. Should I have this instance behind an ELB as well?
So my questions are these:
1) How can I securely make connections to my EC2 instance running mongo?
2) In terms of architecture, does it make sense to run my database this way, or should I have this behind a load balancer as well?
This issue is tripping me up a lot more than I thought it would, so any help would be appreciated.
NOTE
I have also set the bind_ip=0.0.0.0 in /etc/mongo.conf
Your issue is that you are using the public elastic IP to connect to your database server from your other servers. This means that the connection is going out to the internet and back into your VPC, which presents the following issues:
Security issues due to the data transmission not being contained within your VPC
Network latency issues
Your database server's security group can't identify the security group of the inbound connections
Get rid of the elastic IP on the MongoDB server; there is no need for it unless you plan to connect to it from outside your VPC. Modify your servers to use the private internal IP address assigned to your database server when creating connections to it. Finally, lock your security group back down to only allow access to the DB from your other security group(s).
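As a sketch (the group IDs below are placeholders; the first is the MongoDB server's security group, the second is the app/ELB instances' group):

    # Remove the open-to-the-world rule on the DB security group
    aws ec2 revoke-security-group-ingress \
        --group-id sg-0aaaaaaaaaaaaaaaa \
        --protocol tcp --port 27017 --cidr 0.0.0.0/0

    # Allow MongoDB traffic only from instances in the app security group
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0aaaaaaaaaaaaaaaa \
        --protocol tcp --port 27017 \
        --source-group sg-0bbbbbbbbbbbbbbbb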
Optional: Create a private hosted zone in Route53, with an A record pointing to your database server's private IP address, then use that hostname instead of the internal IP address.
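That optional step could look roughly like this (zone name, VPC ID, hosted zone ID and IP address are all placeholders):

    # Create a private hosted zone associated with the VPC
    aws route53 create-hosted-zone \
        --name db.internal \
        --caller-reference db-zone-$(date +%s) \
        --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0

    # Add an A record pointing at the MongoDB server's private IP
    aws route53 change-resource-record-sets \
        --hosted-zone-id Z1EXAMPLE \
        --change-batch '{
          "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
              "Name": "mongo.db.internal",
              "Type": "A",
              "TTL": 300,
              "ResourceRecords": [{"Value": "10.0.2.15"}]
            }
          }]
        }'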

AWS CLI to restrict inbound connections from a dynamic IP

My internet provider doesn't offer static IPs, so I have to connect to my AWS instances from a dynamic IP. That means that my VPC security group in AWS has an SSH port that can be accessed from any IP (source: 0.0.0.0/0), obviously only if you have the key.
I would like to restrict this rule, and I was thinking of writing a CLI script that revokes this 0.0.0.0/0 rule and creates a new inbound rule with my current (dynamic) IP.
Is it possible? Is it a good idea?
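Roughly what I have in mind (a sketch; the security group ID is a placeholder, and checkip.amazonaws.com is just one way to discover my current public IP):

    #!/bin/bash
    SG_ID=sg-0123456789abcdef0
    MY_IP=$(curl -s https://checkip.amazonaws.com)

    # Drop the open rule (ignore the error if it has already been removed)
    aws ec2 revoke-security-group-ingress \
        --group-id "$SG_ID" --protocol tcp --port 22 --cidr 0.0.0.0/0

    # Allow SSH only from my current address
    aws ec2 authorize-security-group-ingress \
        --group-id "$SG_ID" --protocol tcp --port 22 --cidr "$MY_IP/32"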
You could connect through a VPN, then SSH from inside the VPN.
Set up a software VPN (OpenVPN, OpenSwan) on an existing instance and open just that port to the outside world. Once set up, it would essentially be free if you are running it on an instance you would normally run anyway. This involves a little more setup, but it's not too hard.
Previously I suggested the Amazon VPC VPN, but that requires a static IP, so it will not work.

How does one setup two non-load-balanced VM web servers in Azure, capable of communicating on private ports?

I'd like to set up the infrastructure below in Azure. I have one possible solution, but it seems like it makes compromises in the security architecture. Is there a better way to do this in Azure than my compromised workaround?
VM #1: Role: SQL Server and IIS. Server should have a unique public IP address. The hosted websites will be available through public port 80, and connect to local SQL Server.
VM #2: Role: IIS. Server should have a unique public IP address. The hosted websites will be available through public port 80, and will connect to SQL Server on VM #1.
This has been my experience so far:
No issues setting up VM #1.
With VM #2, I tried building it in the same cloud service as VM #1. When I did that, it was assigned the same public IP address as VM #1. Thus, in this scenario, hosting websites on port 80 on both machines doesn't work.
Next I tried building VM #2 in a different cloud service. This resulted in assignment of a unique public IP address. However, I was unable to obtain connectivity to SQL Server on VM #1.
Things I tried for the above: VM #1 SQL Server set as mixed mode, named SQL account provisioned (and connectivity confirmed locally), SQL configured to allow incoming remote TCP connections, firewall rule opened for incoming connections on TCP port that SQL runs under, but so far have not been able to connect to it from VM #2.
One architecture I believe would work is to open a public port on VM #1 and map that to the private SQL Server port. Then VM #2 could connect using the fully-qualified public DNS name of VM #1. I believe Azure would also allow connectivity to be constrained to the public IP address of VM #2.
However, this seems less than ideal, because now SQL communication is being routed through a more public route than one would normally design for a data center, and an extra public port has to be opened on VM #1 (even if constrained by IP address, I'd rather not expose that surface area if not necessary). Additionally, sending the SQL Server data over a more public network hypothetically means transport security may need to be considered.
Research indicates connectivity between 2 VMs on different cloud services may not be possible using private ports, although the info I've found so far is not conclusive. So again, is there a better way to do this in Azure?
A single cloud service is a security boundary and the only way into it is through a public (input) endpoint on the unique public VIP of the service. A Virtual Network (VNET) can be used to host multiple cloud services and allow private visibility among them without going through a public endpoint.
A typical model would be to put an IIS website in a PaaS cloud service with a public VIP and the backend SQL Server in an IaaS cloud service with a public VIP but NO public endpoints declared on it. Both these cloud services would be hosted in the same VNET. This allows the front end web role instances access to the backend SQL Server instance over the private VNET. There is a hands-on lab in the Windows Azure Training Kit that describes precisely how to implement this.
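For instance, once both cloud services sit in the same VNET, the web tier can reach SQL Server directly on VM #1's private VNET address. A quick sketch (the address, credentials and database name are placeholders):

    # Connectivity test from a web role instance / VM #2
    sqlcmd -S 10.0.1.4,1433 -U webappuser -P "********" -Q "SELECT @@VERSION"

    # The website then uses the same private address in its connection string:
    #   Server=10.0.1.4,1433;Database=MyAppDb;User Id=webappuser;Password=...;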
In this case I would recommend separating the IIS/SQL Server combination so that the SQL Server box is in an IaaS cloud service with no public endpoint (although it will always have a public VIP). I would also recommend using either a Point-to-Site or Site-to-Site VPN which would allow you to access the VMs without exposing a public RDP endpoint. A point-to-site VPN is developer focused and very easy to configure. A site-to-site VPN is more of an IT thing since it requires configuration of a VPN router such as Cisco, Juniper or Windows Server.
