Linux and Windows instances on AWS - Linux

How can I create a VPC/subnet on AWS and launch a Windows instance and a Linux instance in that same subnet? Whenever I try to create a VPC, it will not give SSH access to other terminals even though I give permissions in the route tables.

This could be many things. I'd probably recommend looking at your security groups (add a TCP/22 inbound rule from your IP address), making sure an IGW is attached to the VPC, and making sure a public IP address is assigned to your instances (add an Elastic IP if your instance doesn't already have one). Without more information on your environment, I can only provide some background to help you troubleshoot.
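As a starting point, here is a minimal boto3 sketch of that security-group change; the region, security group ID and source CIDR are placeholders you would replace with your own values.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Allow SSH (TCP/22) inbound from a single workstation IP.
    # The group ID and CIDR below are placeholders.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "my workstation"}],
        }],
    )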
The network path to an instance in a VPC looks like this:
Internet -> AWS Boundary -> VPC Boundary -> VPC Router -> Subnet boundary -> Elastic Network Interface
AWS Boundary
Most of the time this part isn't important. From the Internet, eventually your traffic will be routed into an Amazon autonomous system. This is generally transparent. But if you have a public-type Direct Connect, the Amazon side of the connection dumps you here. In this area, you can access public API endpoints, but you cannot access anything within a VPC without progressing further down the stack. See below.
VPC Boundary
This is the point at which traffic enters/leaves the VPC. There are a number of ways this can occur.
Internet Gateway (IGW)
This performs a one-to-one NAT between an Amazon-owned publicly routable IP address and a VPC-internal private IP address (do not confuse this with a NAT Gateway, which is different and described below). Only instances assigned a public IP address can use an IGW. Because it is a one-to-one NAT, from the instance's perspective it is a connection between its private IP address and an Internet-routable IP; from the outside it is a connection to the instance's public IP address. To stress: instances without a public IP address assigned (Elastic IP or otherwise) will not be able to communicate with the Internet in either direction through an IGW. Conversely, if you don't have an IGW, there is no use for public IP addresses on your instances. Also worth mentioning: egress-only internet gateways allow IPv6 connectivity for outbound-initiated connections only.
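For illustration, a minimal boto3 sketch of creating an IGW and attaching it to a VPC might look like this (the region and VPC ID are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Create an Internet Gateway and attach it to an existing VPC (placeholder ID).
    igw = ec2.create_internet_gateway()
    igw_id = igw["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId="vpc-0123456789abcdef0")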
Virtual Private Gateway (VGW)
This can be seen as a router of sorts. If you have a VPN connecting to AWS' VPN service, or if you're using private Direct Connect, traffic will traverse this virtual device, which is associated with a VPC. What's noteworthy is that this is a peering of sorts and uses VPC private IP addresses for communication. You don't need a public IP address to talk through a VGW, and you cannot traverse a VGW by using the public IP addresses of instances in your VPC.
VPC Peering Connections (PCX)
This is very similar to a VGW. It can only connect two VPCs (with Transit Gateway in the picture this is admittedly an oversimplification), and it connects them at the private IP address layer. You cannot reference resources across a PCX by using their public IP addresses.
VPC Endpoint (VPC-E)
This is only accessible from inside the VPC, for connections going out (return traffic will, of course, come back through it). It is a connection to a specific AWS endpoint within the AWS public boundary (like the S3 API endpoint). This doesn't use an instance's public IP address.
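As a rough example, a gateway endpoint to S3 could be created with boto3 along these lines (the region, VPC ID and route table ID are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Gateway endpoint to S3: adds routes for the S3 prefix list to the given
    # route tables, so traffic to S3 stays off the public path. IDs are placeholders.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",  # matches the client's region
        RouteTableIds=["rtb-0123456789abcdef0"],
    )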
VPC Router
All traffic exiting or entering the VPC hits this router, and it routes to/from all of the egress/ingress points at the VPC boundary and to/from each of the subnets. You can adjust this router to control which traffic goes where, and you can have different route tables for each subnet. Thus a "public" subnet is one in a VPC that has an IGW attached and whose route table has a default route (0.0.0.0/0) to that IGW. A private subnet has no route to an IGW. If there isn't a route to an IGW, having a public IP address on your instance is useless and wasteful.
You can also route to an ENI if you want to control traffic within your VPC and send it to an EC2 instance (web proxying / IDS / traffic capturing / etc.); however, that ENI has to reside in a subnet with a different route table, otherwise its own outbound traffic will be routed back to itself. All traffic exiting/entering any subnet and all traffic exiting/entering the VPC traverses this router and is subject to the routes you configure. You cannot configure routing within your subnets; any packet destined for an address within your VPC's private IP space will be automatically routed to that particular subnet, and you cannot override this functionality with more specific routes.
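To make that concrete, a minimal boto3 sketch of turning a subnet into a "public" subnet (the region and all IDs are placeholders): create a route table, add a default route to the IGW, and associate it with the subnet.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Route table with a default route to the IGW, associated with the subnet
    # that should be "public". All IDs are placeholders.
    rt = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")
    rt_id = rt["RouteTable"]["RouteTableId"]
    ec2.create_route(
        RouteTableId=rt_id,
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId="igw-0123456789abcdef0",
    )
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId="subnet-0123456789abcdef0")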
Subnet Boundary
At the subnet boundary, traffic is subject to the Network Access Control Lists (NACLs). This is a stateless, rule-based firewall. By default it is wide open and requires no configuration to allow traffic. There is no rule to allow "existing connections", so if you start to lock down a subnet with NACLs, you'll probably need to open up all of the ephemeral ports in the direction you're expecting return traffic. Any traffic between instances within the same subnet will not hit the NACL, but anything that leaves or enters the subnet (whether it's going to another subnet in the same VPC, or leaving the VPC altogether) will hit the NACL and be subject to its rules. Generally, leave these alone unless you need to protect traffic at the subnet level; NACLs are a little unwieldy.
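If you do start locking down a subnet, a hedged boto3 sketch of an inbound ephemeral-port allow rule looks roughly like this (the NACL ID, rule number and port range are placeholders; adjust the range to your clients' operating systems):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # NACLs are stateless, so return traffic needs its own inbound allow rule
    # on the ephemeral port range. The NACL ID is a placeholder.
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",
        RuleNumber=200,
        Protocol="6",          # TCP
        RuleAction="allow",
        Egress=False,          # inbound rule
        CidrBlock="0.0.0.0/0",
        PortRange={"From": 1024, "To": 65535},
    )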
Elastic Network Interface (ENI)
Finally traffic goes through an ENI, where it is subject to a security group. This is a stateful implicit-deny firewall for which you can only add allow rules. If the security group doesn't have a rule allowing outbound traffic from the instance, that traffic will never leave the ENI. If the security group doesn't have a rule allowing a kind of inbound traffic, that kind of inbound traffic will never be sent to the Instance (i.e. the OS will not be able to detect it).
NAT Gateway
This is a device that can reside in a subnet. It needs a route to an IGW and a public IP address (Elastic IPs work). It performs a many-to-one NAT of any private IP addresses within your VPC to a publicly routable IP (technically it translates the many private IP addresses within your VPC to its own single private IP address, and when it communicates with the IGW, that does a one-to-one NAT to translate the NAT Gateway's private IP address into a public IP address). This only works for IPv4, and it only works if instances send their traffic to the NAT Gateway's ENI. Generally, you'd have a NAT Gateway reside in a public subnet (with a default route to the IGW), and all of your instances in private subnets would have a default route to the NAT Gateway's ENI.
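A rough boto3 sketch of that pattern (the region and all IDs are placeholders): allocate an Elastic IP, create the NAT Gateway in a public subnet, then point the private subnet's default route at it.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Elastic IP for the NAT Gateway.
    eip = ec2.allocate_address(Domain="vpc")

    # The NAT Gateway lives in a *public* subnet (placeholder ID).
    natgw = ec2.create_nat_gateway(
        SubnetId="subnet-0aaa1111bbb22222c",
        AllocationId=eip["AllocationId"],
    )
    natgw_id = natgw["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw_id])

    # The private subnet's route table (placeholder ID) sends its default route
    # to the NAT Gateway.
    ec2.create_route(
        RouteTableId="rtb-0ddd3333eee44444f",
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=natgw_id,
    )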
Summary
Bare minimum to connect to an EC2 Instance:
VPC has an IGW attached
NACL has allow rules for the desired traffic in both directions (configured by default)
EC2 Instance has a security group allowing the desired traffic (TCP/22 for SSH, etc.)
EC2 Instance has a public IP address associated with it (either configured when you launch the instance, or added afterward by attaching an Elastic IP).
This configuration allows you to directly connect to the instance over the public internet.
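If your instance was launched without a public IP, a minimal boto3 sketch of attaching an Elastic IP afterward (the region and instance ID are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Allocate an Elastic IP and associate it with an existing instance (placeholder ID).
    eip = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        AllocationId=eip["AllocationId"],
        InstanceId="i-0123456789abcdef0",
    )
    print("Connect to:", eip["PublicIp"])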
Well-Architected VPC Pattern
The generic architectural pattern advised by AWS for simple best-practice networks is:
VPC with an IGW attached
two or more public subnets (each in a separate Availability Zone) with a default route to the IGW.
a NAT Gateway in each of the public subnets
two or more private subnets (each in the same AZs as the public subnets), each with a default route to the NAT Gateway in the public subnet of the same AZ.
a bastion host in an auto-scaling group of 1 spanning all of the public subnets allowing SSH inbound (Whether this is advisable is debatable)
if needed, a VPN connection from your corporate network to the VPC
security groups on the private instances allowing inbound from specific resources within the VPC (referencing by security group ID where possible) and whatever other inbound traffic is needed, and TCP/443 outbound to the world (or more/less depending on your risk tolerance and needs).
if needed, a VPC Endpoint to S3 or to whichever API endpoints you expect to send large volumes of traffic.
This architecture allows your private instances to connect to public internet resources outbound (using IPv4, at least), and inbound traffic has to go through the VPN or the bastion host. For public-facing services, setting up an Elastic Load Balancer in the public subnets is the desired way to provide public connectivity for your instances, so that you can continue to keep them protected in a private subnet.

Related

Target VPC server FROM the private IP address of a public server

I have two servers: ExternalSrv and InternalSrv, on the same EC2 VPC.
I have a very simple setup using Nodejs, Express and Axios.
ExternalSrv handles requests from the public, which, of course, come in to ExternalSrv's public IP address. ExternalSrv calls InternalSrv to do some of the work.
In order to simplify the security group inbound rules on InternalSrv, I would like to allow ALL VPC IP addresses, but nothing else.
I find that ExternalSrv always uses its Public IP address when making requests to InternalSrv's Private IP address. Therefore, the security group needs to be updated with ExternalSrv's Public IP address whenever that address changes (Stop/Start, new instance, more instances, etc.). That seems like a fragility point in ongoing maintenance.
This seems like this should be easy, but I've been searching for an answer for quite some time.
Any insight would be appreciated.
Bill
When two Amazon EC2 instances in the same VPC communicate with each other, it is best to perform this communication via private IP addresses. This has several benefits:
Security Groups can refer to other Security Groups
Traffic stays within the VPC (if communicating via Public IP addresses, the traffic will exit the VPC and then come back in)
It is cheaper (there is a 1c/GB charge for traffic going out of the VPC and then back in)
The best-practice security setup for your situation would be:
Create a Security Group on ExternalSrv (SG-External) that allows inbound traffic as necessary (e.g. ports 80, 443), together with the default "Allow All" outbound traffic
Create a Security Group on InternalSrv (SG-Internal) that allows inbound traffic from SG-External
That is, SG-Internal specifically references SG-External in its rules. This way, inbound traffic will be accepted from ExternalSrv without needing to know its IP address. It also allows other servers to be added to the Security Group in the future, and they will also be permitted access.
Yes, you could simply add a rule that limits inbound access to the CIDR of the VPC, but good security is always about having multiple layers of security. Restricting access will cut down potential attack vectors.
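For reference, a hedged boto3 sketch of that SG-to-SG rule (the group IDs and application port are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # SG-Internal allows inbound on the app port only from members of SG-External.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef1",  # SG-Internal (placeholder)
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 8080,            # assumed application port
            "ToPort": 8080,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0123456789abcdef2"}  # SG-External (placeholder)
            ],
        }],
    )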

Restrict inbound traffic to only come through Azure Load Balancer

Please can someone advise how to restrict access on ports 80/443 to some Azure VMs, so that they can only be accessed via the public IP address that is associated with an Azure Load Balancer.
Our current setup has load balancing rules passing through traffic from public IP on 80=>80 and 443=>443, to back end pool of 2 VMs. We have health probe setup on port 80. Session persistence is set to client IP and floating IP is disabled.
I thought the answer was to deny access (via a Network Security Group) to the Internet (service tag) on 80/443, then add a rule to allow the AzureLoadBalancer service tag on the same ports. But that didn't seem to have an effect. Having read up a little more on this, it seems the AzureLoadBalancer tag is only there to allow the health probe access, not inbound traffic arriving through the load balancer.
I have also tried adding rules to allow the public IP address of the load balancer, but again no effect.
I was wondering if I need to start looking into Azure Firewall and somehow restrict access to inbound traffic that comes through that?
The only way I can get the VMs to respond on those ports is to add rules allowing 80/443 from any to any...
After reading your question, my understanding is that you have a Public load balancer and the backend VMs also have instance level Public IPs associated with them and hence direct inbound access to the VMs is possible. But you would like to make sure that the direct inbound access to VMs is restricted only via the load balancer.
The simple solution to achieve this is to disassociate the instance-level public IPs of the VMs; this will make the LB public IP the only point of contact for your VMs.
Keep in mind that the LB is not a proxy; it is just a layer 4 resource that forwards traffic. Therefore, your backend VMs will still see the source IP of the clients and not the LB IP, so you will still need to allow the traffic at the NSG level using "Any" as the source.
However, if your requirement is to enable outbound connectivity from the Azure VMs while avoiding SNAT exhaustion, I would advise you to create a NAT Gateway, where you can assign multiple public IP addresses for SNAT, and remove the public IPs from the VMs. This setup will make sure that inbound access is provided by the public load balancer only and outbound access is provided by the NAT gateway, as described in the links below:
Refer : https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/nat-gateway-resource#nat-and-vm-with-standard-public-load-balancer
https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal
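As a rough outline only, the NAT Gateway piece might look like this with the azure-mgmt-network Python SDK; all resource names, IDs and the region are placeholders, and exact model names can vary between SDK versions.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import NatGateway, NatGatewaySku, SubResource

    # All names and IDs below are placeholders.
    sub_id = "<subscription-id>"
    client = NetworkManagementClient(DefaultAzureCredential(), sub_id)
    rg, vnet, subnet_name = "my-rg", "my-vnet", "backend-subnet"

    # Standard NAT gateway using an existing public IP resource.
    natgw = client.nat_gateways.begin_create_or_update(
        rg, "my-natgw",
        NatGateway(
            location="eastus",
            sku=NatGatewaySku(name="Standard"),
            public_ip_addresses=[SubResource(
                id=f"/subscriptions/{sub_id}/resourceGroups/{rg}"
                   "/providers/Microsoft.Network/publicIPAddresses/my-natgw-pip"
            )],
        ),
    ).result()

    # Attach the NAT gateway to the backend subnet so outbound traffic uses it.
    subnet = client.subnets.get(rg, vnet, subnet_name)
    subnet.nat_gateway = SubResource(id=natgw.id)
    client.subnets.begin_create_or_update(rg, vnet, subnet_name, subnet).result()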
You could also configure port forwarding in Azure Load Balancer for the RDP/SSH connections to individual instances.
Refer : https://learn.microsoft.com/en-us/azure/load-balancer/manage#-add-an-inbound-nat-rule
https://learn.microsoft.com/en-us/azure/load-balancer/tutorial-load-balancer-port-forwarding-portal

How to whitelist source IPs on Azure VMs fronted by Azure Load Balancer

I have a public-facing, standard SKU Azure Load Balancer that forwards incoming requests for a certain port to a virtual machine, using load balancing rules. This virtual machine has an NSG defined at the subnet level that allows incoming traffic for that port, with the source set to 'Internet'.
Presently, this setup works, but I need to implement whitelisting - to allow only a certain set of IP addresses to be able to connect to this virtual machine, through the load balancer. However, if I remove the 'Internet' source type in my NSG rule, the VM is no longer accessible through the Load Balancer.
Has anyone else faced a similar use case, and what is the best way to set up IP whitelisting on VMs that are accessible through a Load Balancer? Thanks!
Edit: to provide more details
Screenshot of NSGs
These are the top level NSGs defined at the subnet.
We have a public load balancer that fronts the virtual machine where above NSGs are applied. This virtual machine doesn’t have a specific public IP and relies on the Load Balancer’s public IP.
The public Load Balancer forwards all traffic on port 8443 and port 8543 to this virtual machine, without session persistence and with Outbound and inbound using the same IP.
Below are the observations I have made so far:
Unless I specify the source for NSG rule Port_8443 (in the table above) as 'Internet', this virtual machine is not accessible on this port via the load balancer's public IP.
When I retain the NSG rule Port_8543, which whitelists only specific IP addresses, this virtual machine is not accessible on this port via the load balancer's public IP, even when one of those whitelisted clients tries to connect to it.
I tried adding the NSG rule Custom_AllowAzureLoadBalancerInBound at a higher priority than Port_8543, but it still didn't open up this access.
I also tried adding the Azure Load Balancer VIP (168.63.129.16) to the Port_8543 NSG rule, but that too didn't open up access to port 8543 on the load balancer's public IP.
I have played with the load balancing rule options too, but nothing seems to achieve what I am looking for, which is:
Goal 1: to open up the virtual machine's access on port 8443 and port 8543 to only the whitelisted client IPs, AND
Goal 2: allow whitelisted client IPs to be able to connect to these ports on this virtual machine, using the load balancer’s public IP
I am only able to achieve one of the above goals, but not both of them.
I have also tried the same whitelisting with a dedicated public IP assigned to the virtual machine; that too loses connectivity on the ports where I don't use the 'Internet' source tag.
Azure has default rules in each network security group, which allow inbound traffic from the Azure Load Balancer resources.
If you want to restrict which clients can access your VMs, you just need to add a new inbound port rule with the public IP addresses of your clients as the Source, and specify the Destination port ranges and Protocol in your specific inbound rules. You can check a client's public IPv4 address by opening an IP-lookup site on that client's machine.
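As a hedged sketch with the azure-mgmt-network Python SDK (the resource group, NSG name, priority, ports and client IPs are all placeholders), such a whitelisting rule might look like this:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import SecurityRule

    # All names, ports and IPs below are placeholders.
    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

    rule = SecurityRule(
        name="Allow-Whitelisted-Clients",
        protocol="Tcp",
        direction="Inbound",
        access="Allow",
        priority=200,                                  # lower number = evaluated first
        source_address_prefixes=["198.51.100.10/32",   # whitelisted client IPs
                                 "203.0.113.0/28"],
        source_port_range="*",
        destination_address_prefix="*",
        destination_port_ranges=["8443", "8543"],
    )
    client.security_rules.begin_create_or_update(
        "my-rg", "my-nsg", rule.name, rule
    ).result()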
Just wanted to add a note for anyone else stumbling here:
If you are looking to whitelist an Azure VM (available publicly or privately) for a few specific client IPs, below are the steps you must perform:
Create an NSG for the VM (or subnet), if one is not already available
Add NSG rules to allow inbound traffic from the specific client IPs on specific ports
Add an NSG rule to deny inbound traffic from all other sources [This is optional but will help ensure the security of your setup]
Also, please note: look at all the public IPs that your client machines will be connecting from. Especially while testing, use public IPs and not the VPN gateway address ranges, which is what we used, and we ended up with a false negative on our whitelisting test.

Where can I find the configuration of my VNet with my Web App on Azure?

The scenario here is that I have created a Web App which has dynamic outbound IPs, and we needed those IPs to be whitelisted on the DB side. Since there were too many IPs, we created a NAT Gateway, a VNet and a single public IP address through which we will communicate with the DB.
I need to know where the configuration of the VNet with my Azure web app lies.
You need to whitelist the public IP address in your DB's firewall, because the NAT gateway provides source network address translation (SNAT) for the subnet.
NAT gateway resources specify which static IP addresses virtual machines use when creating outbound flows. Static IP addresses come from public IP address resources, public IP prefix resources, or both. If a public IP prefix resource is used, all IP addresses of the entire public IP prefix resource are consumed by a NAT gateway resource. A NAT gateway resource can use a total of up to 16 static IP addresses from either.
If you have enabled VNet Integration for the web app, then by default BGP routes affect only your RFC1918 destination traffic. If WEBSITE_VNET_ROUTE_ALL is set to 1, all outbound traffic can be affected by your BGP routes.

Source security group isn't working as expected in AWS

I have an EC2 node, node1 (security group SG1) which should be accessible from another EC2 node, node2 (security group SG2) on port 9200. Now, when I add an inbound rule in SG1 with port 9200 and specify SG2 as source in Custom IP section, I can't access node1 from node2. On the other hand, if I specify an inbound rule in SG1 with source as 0.0.0.0/0 or IP of node2, it works fine. What is wrong in my approach?
Are you attempting to connect to node1's public or private address? From the documentation:
When you specify a security group as the source or destination for a rule, the rule affects all instances associated with the security group. For example, incoming traffic is allowed based on the private IP addresses of the instances that are associated with the source security group.
I've been burned on this before by trying to connect to an EC2 instance's public address... sounds very similar to your setup, actually. When you wire up the inbound rule so that the source is a security group, you must communicate through the source instance's private address.
Some things to be aware of:
In EC2 Classic, private IP addresses can change on stop/start of an EC2 instance. If you're using EC2 classic you may want to look into this discussion on Elastic DNS Names for a more static addressing solution.
If you set up your environment in VPC, private IP addresses are static. You can also change security group membership of running instances.
Reason: Inter security-group communication works over private addressing. If you use the public IP address the firewall rule will not recognise the source security group.
Solution: You should address your instances using the Public DNS record - this will actually be pointed at the private IP address when one of your instances queries the DNS name.
e.g. if your instance has public IP 203.0.113.185 and private IP 10.1.234.12, you are given a public DNS name like ec2-203-0-113-185.eu-west-1.compute.amazonaws.com.
ec2-203-0-113-185.eu-west-1.compute.amazonaws.com will resolve to 203.0.113.185 if queried externally, or 10.1.234.12 if queried internally. This will enable your security groups to work as intended.
This also works with an Elastic IP: you simply use the public DNS entry of the Elastic IP. Also, having the DNS resolve to the internal IP means that you are not incurring bandwidth charges for your data between instances:
Instances that access other instances through their public NAT IP address are charged for regional or Internet data transfer, depending on whether the instances are in the same region.
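As a small illustration, a boto3 lookup of an instance's public DNS name (the region and instance ID are placeholders); clients inside the VPC that connect to this name resolve it to the private IP, so source-security-group rules match:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

    # Look up the instance's public DNS name (placeholder instance ID). Inside the
    # VPC this name resolves to the private IP, so source-SG rules match.
    resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
    instance = resp["Reservations"][0]["Instances"][0]
    print("Public DNS :", instance.get("PublicDnsName"))
    print("Private IP :", instance.get("PrivateIpAddress"))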
The Public DNS didn't work for me.
What I did instead was create a custom inbound rule using the security group of the other instance.
