I have Hadoop running on Amazon EC2 in two different sites, but when the components start, they pick up their internal IPs. I want the components in the different sites to communicate with each other using internal IPs. I'm not discussing whether it's safe. My idea is to set up a DNS server that translates the internal IPs to external IPs without the components noticing, so that when traffic is addressed to an internal IP, the DNS setup relays it to the other site.
Is this possible? Any suggestions on how to set up a DNS server in EC2?
Two options:
Use VPC, in which case you have control over which internal IPs are assigned to your instances. There are some limitations, however.
Use Elastic IPs. Resolving the DNS name associated with an Elastic IP from within the same AWS region returns the internal IP. An example of this split-horizon behaviour is sketched below.
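As a minimal sketch (my own illustration, not part of the original answer): resolving an instance's public DNS hostname with the standard library returns the private IP when run from inside the same AWS region, and the public (Elastic) IP when run from outside AWS. The hostname below is a placeholder.

```python
import socket

# Placeholder public DNS name of an Elastic IP / instance.
public_dns = "ec2-203-0-113-10.compute-1.amazonaws.com"

# From another instance in the same AWS region this returns the private IP;
# from outside AWS it returns the public (Elastic) IP.
print(socket.gethostbyname(public_dns))
```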
Disclaimer: I come from an AWS background but am relatively new to GCP. I know there are a number of existing similar questions (e.g. here and here), but I still cannot get this to work since exact, detailed instructions are missing. So please bear with me asking this again.
My simple design:
Public HTTP/S Traffic (Ingress) >> GCP Load Balancer >> GCP Servers
The GCP Load Balancer holds the SSL certificate and uses port 80 for downstream connections to the servers. Therefore, traffic from the LB to the servers is plain HTTP.
My question:
How do I prevent incoming public HTTP/S traffic from reaching the GCP servers directly, and instead only allow the load balancer (as well as its health check traffic)?
What I tried so far:
I went into Firewall Rules and removed the rule that previously allowed ingress traffic on ports 80/443 from 0.0.0.0/0, and then added (allowed) the external IP address of the load balancer.
At this point, I expected public traffic to be rejected but the load balancer's traffic to get through. In reality, both seemed to be rejected: nothing reached the servers anymore, and the load balancer's external IP didn't seem to be recognised.
Later I noticed the health checks were no longer recognised either, so they couldn't reach the servers and failed, and the instances were dropped by the load balancer.
Please also note: I cannot pursue the approach of simply removing the external IPs from the servers (although many people say this would work), because we still want direct SSH access to the servers (without using a bastion instance). Therefore I still need an external IP on each and every web server.
Any clear (and kind) instructions will be very much appreciated. Thank you all.
You can set up HTTPS connectivity between your load balancer and your back-end servers while using the HTTP(S) load balancer. To achieve this, install HTTPS certificates on your back-end servers and configure the web servers to use them. If you decide to completely switch to HTTPS and disable HTTP on your back-end servers, you should also switch your health check from HTTP to HTTPS.
To make the health checks work again after removing the default firewall rule that allows connections from 0.0.0.0/0 to ports 80 and 443, you need to whitelist the subnets 35.191.0.0/16 and 130.211.0.0/22, which are the source IP ranges for health checks. You can find step-by-step instructions in the documentation. After that, access to your web servers will still be restricted, but your load balancer will be able to run its health checks and serve your customers.
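As a minimal sketch (my own addition, assuming the google-api-python-client package and application-default credentials; the project name and the "web-server" target tag are placeholders), a firewall rule allowing only the health check ranges could be created like this:

```python
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

firewall_body = {
    "name": "allow-lb-health-checks",
    "network": "global/networks/default",
    # Google's documented source ranges for load balancer health checks.
    "sourceRanges": ["35.191.0.0/16", "130.211.0.0/22"],
    "allowed": [{"IPProtocol": "tcp", "ports": ["80"]}],
    # Apply the rule only to instances tagged as web servers (placeholder tag).
    "targetTags": ["web-server"],
}

request = compute.firewalls().insert(project="my-project", body=firewall_body)
print(request.execute())
```

The same rule can of course be created in the Cloud Console or with gcloud; the point is simply that the source ranges are the two health check subnets rather than 0.0.0.0/0.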
I'm trying to use Google Cloud Platform's Cloud DNS to resolve internal IPs of Compute Engine instances by DNS from my local machine. I was able to set up an OpenVPN server on an instance by following this guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-openvpn-server-on-ubuntu-16-04
My VPN configuration successfully connects to the OpenVPN server and allows me to ping the internal IPs of my GCE instances. The instance hosting my OpenVPN server is able to resolve and ping Cloud DNS entries, but my local client machine is unable to do the same.
Here's the content of my /etc/resolv.conf file after connecting to the VPN server.
search openvpn
nameserver 169.254.169.254
What additional configuration do I need to do to allow my local machine to resolve Cloud DNS addresses?
In Compute Engine, DNS resolution is performed against the metadata server, which always has the IP 169.254.169.254. The issue arises from the fact that this IP is link-local and non-routable, and thus will not work over VPN/IPsec.
There are a few solutions/workarounds for it:
You could map all internal GCE instance IPs in the hosts files of the servers in your private network. The drawback is that the process is manual and time-consuming, depending on how many instances you have.
The second option is to run an internal GCE instance (an internal resolver) with a DNS server that can forward queries across networks; a client-side check for this setup is sketched after this list. More information is available in the documentation.
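As a minimal sketch (my own addition, using the dnspython package; the resolver IP, instance name, and project are placeholders), you can verify from the VPN client that the internal resolver answers for GCE internal names:

```python
import dns.resolver

# Point at the internal resolver reachable over the VPN instead of the
# local /etc/resolv.conf configuration.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.128.0.5"]  # internal IP of the forwarding DNS server

# GCE internal DNS names follow the INSTANCE.c.PROJECT.internal pattern.
answer = resolver.resolve("my-instance.c.my-project.internal", "A")
for record in answer:
    print(record.address)
```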
I was browsing the web using Firefox from my EC2 instance located in Ashburn, Virginia (IP address: 54.159.107.46). I visited www.supremenewyork.com and it did not load (other websites like Google did load). I did some research and found the IP of Supreme's site: 52.6.25.180. It turns out that IP is also located in Ashburn, Virginia, which suggests Supreme is using AWS to host their site. This is an issue for my instance because I want to connect to Supreme from it, but since the IPs are in the same data center or in Amazon's IP range, I can't. Is there a workaround for this? Please help.
By the way: I tried pinging Supreme's IP from my EC2 instance – 100% packet loss.
NOTE THAT I CAN ACCESS SUPREME FROM MY HOME COMPUTER: IT IS NOT DOWN
Is there a security problem because I am trying to connect to their site?
I ran some tests locally and on AWS machines. My conclusion: www.supremenewyork.com blocks traffic that originates from AWS. It is easy to block traffic from AWS using iptables: AWS publishes its IP address ranges, and it is easy to write a simple script like AWS Blocker to block all traffic from AWS IPs.
Why do some vendors block traffic from AWS? Because of increasing DDoS traffic and bot attacks from AWS-hosted machines. Many attackers exploit compromised machines running in AWS to launch their attacks; I have seen too many such incidents. AWS does its best to thwart such attempts, but if you see most of the attacks coming from a set of IP ranges, naturally you will try to block traffic from those IPs. I suspect the same is happening in this case.
The website is not pingable because ICMP traffic is blocked from all IPs. There is nothing you can do (unless you go through a VPN) to access the vendor website from your EC2 machine.
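As a minimal sketch (my own addition): the ranges a site operator would block come from AWS's published ip-ranges.json, and you can check whether a given address falls inside them with the standard library:

```python
import ipaddress
import json
from urllib.request import urlopen

RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

def is_aws_ip(ip: str) -> bool:
    """Return True if the address is inside one of AWS's published IPv4 ranges."""
    data = json.load(urlopen(RANGES_URL))
    addr = ipaddress.ip_address(ip)
    return any(
        addr in ipaddress.ip_network(prefix["ip_prefix"])
        for prefix in data["prefixes"]
    )

print(is_aws_ip("52.6.25.180"))  # the IP from the question
```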
We have two servers hosting a particular service on Google Cloud. How do we do a simple round-robin DNS configuration to distribute the load?
According to this thread, Google Cloud DNS does not support round-robin.
You can set up DNS round robin with Cloud DNS simply by adding more than one IP address to your DNS record.
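As a minimal sketch (my own addition, using the Cloud DNS v1 API via google-api-python-client; the project, zone, record name, and IPs are placeholders), a record with two addresses looks like this:

```python
from googleapiclient import discovery

dns = discovery.build("dns", "v1")

change_body = {
    "additions": [
        {
            "name": "service.example.com.",
            "type": "A",
            "ttl": 300,
            # Two rrdatas entries = simple DNS round-robin across the two servers.
            "rrdatas": ["203.0.113.10", "203.0.113.11"],
        }
    ]
}

request = dns.changes().create(
    project="my-project", managedZone="my-zone", body=change_body
)
print(request.execute())
```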
You might want to look into Google Compute Engine's Load Balancing options. This will allow you to have one IP address that sends traffic to your two servers. This has a few advantages, including that you can configure it to automatically stop sending traffic to an instance if it fails a health check.
I have set up an EC2 instance and an Elastic IP which is associated with the instance. I have also set an A record in my DNS provider's zone editor so that the domain name points to the Elastic IP, e.g. example.com = 123.123.123.123.
After reading many posts, this seems like it should be enough to work, but my domain name still isn't resolving. I can't even ping the IP address! Weirdly, I CAN SSH into the EC2 instance via the Elastic IP and everything seems fine, except that my domain name doesn't resolve to the EC2 instance!
Any thoughts?
DNS names take a while to propagate, so that is probably your first issue.
Go to http://www.whatsmydns.net/ and enter your domain name. If all of the locations return the correct IP, you can safely assume it's not a DNS propagation issue.
Enable ICMP rules in the security group. If using the AWS console, create a new rule for "All ICMP" with a source of "0.0.0.0/0". Enabling this creates a security risk for your server, so only enable it temporarily while testing. At this point you should be able to ping your instance.
If using HTTP or HTTPS, enable the correct ports in the security group for those protocols, and as long as the instance is configured correctly with Apache, you should be up and running.
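As a minimal sketch (my own addition, using boto3 with already-configured AWS credentials; the security group ID is a placeholder), the ICMP and web rules can be added like this:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # your instance's security group
    IpPermissions=[
        # "All ICMP" so the instance answers pings; remove again after testing.
        {"IpProtocol": "icmp", "FromPort": -1, "ToPort": -1,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        # Standard web ports.
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)
```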
Please check your EC2 security group and make sure the desired ports are open.