I have an ECS cluster in which I run a task with many containers. Three of them need to be reached from the internet. These containers are exposed on ports 80, 8080 and 8880 of the cluster's EC2 instance.
I have a DNS name registered (say example.com), and I can create a CNAME record that points to the EC2 instance's DNS name, but if I do so, the apps will be reachable as
example.com:80
example.com:8080
example.com:8880
Instead, what I would like is to reach the three containers like this:
app1.example.com (instead of example.com:80)
app2.example.com (instead of example.com:8080)
app3.example.com (instead of example.com:8880)
I can't do this with a DNS CNAME alone, because a CNAME record cannot point to a specific port.
I hope the question makes sense.
Any suggestion would be appreciated.
Thanks in advance!
You will need to place an AWS Application Load Balancer in front of the ECS containers in order to accomplish this. You would have 3 different target groups (one for each container) and configure the Load Balancer to use host-based routing to send the traffic to the appropriate target group/container.
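As a rough sketch with the AWS CLI (the VPC id, listener ARN, and target group names below are all placeholders, and an ALB with an HTTP listener is assumed to exist already):

# One target group per container port
aws elbv2 create-target-group --name app1-tg --protocol HTTP --port 80 --vpc-id vpc-0abc123
aws elbv2 create-target-group --name app2-tg --protocol HTTP --port 8080 --vpc-id vpc-0abc123
aws elbv2 create-target-group --name app3-tg --protocol HTTP --port 8880 --vpc-id vpc-0abc123

# One listener rule per hostname, forwarding to the matching target group
aws elbv2 create-rule --listener-arn "$LISTENER_ARN" --priority 10 \
    --conditions Field=host-header,Values=app1.example.com \
    --actions Type=forward,TargetGroupArn="$APP1_TG_ARN"

You would then point app1, app2 and app3.example.com at the load balancer's DNS name (one CNAME or Route 53 alias record each), so all three hostnames reach the same ALB and the Host header decides which container receives the request.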
Related
I have an EC2 box running Ubuntu 18.04. I can connect using "ssh -i {pemfile} ubuntu@{ip-address}" and also "ssh -i {pemfile} ubuntu@{ip-address.us-east-2.compute.internal}" from another EC2 box. Now I want to change the hostname and use it in ssh. I followed some Linux and AWS articles (AWS Article) and changed the /etc/hostname and /etc/hosts files. I cannot use a Route 53 DNS entry, per requirement.
/etc/hosts = 10.0.1.190 dev-host.example.trade
/etc/hostname = dev-host.example.trade
Running "ssh -i {pemfile} ubuntu@dev-host.example.trade" gives the error below:
ssh: Could not resolve hostname dev-host.example.trade: Name or service not known.
As you’ve made the changes on the server only, the name will only be resolvable on that host (otherwise anyone could use any domain).
There are a few options you can take if you want to use a custom domain name.
The obvious one is to use a domain you control; this will allow the name to resolve from any host that can resolve your public DNS. If you don’t own a domain, you can purchase one through a registrar (such as Route 53).
The second option is to use a Route 53 private hosted zone (sketched below). By attaching the zone to your VPC you can set DNS records that resolve within that VPC. If you want these records to resolve in a hybrid network, you would need to look at adding a DNS resolver.
The third option is to use a resource that can resolve the domain name. To do this you would either join a domain (using a service such as AWS Managed Microsoft AD or Simple AD), or you could set up an EC2 host to resolve DNS. This is an expensive solution and the most complex if you’re using a hybrid architecture.
Take a look at the "Centralized DNS management of hybrid cloud with Amazon Route 53 and AWS Transit Gateway" post for more information about hybrid DNS.
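For the private hosted zone option, a minimal sketch with the AWS CLI (zone name, VPC id, and hosted zone id are placeholders) might look like this:

# Create a private hosted zone attached to the VPC
aws route53 create-hosted-zone --name example.trade \
    --caller-reference dev-zone-1 \
    --vpc VPCRegion=us-east-2,VPCId=vpc-0abc123

# Add an A record for the host inside that zone
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
    --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"dev-host.example.trade","Type":"A","TTL":300,"ResourceRecords":[{"Value":"10.0.1.190"}]}}]}'

After that, dev-host.example.trade resolves from any instance in the VPC (the VPC needs enableDnsSupport and enableDnsHostnames turned on).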
Your local machine knows nothing about changes you've made to the EC2 configuration. Those changes are local to the EC2 instance.
One way to connect to your cloud instance via a DNS name like dev-host.example.trade is to associate an Elastic IP with the EC2 instance. Unlike the default public IP, an Elastic IP persists even when the instance is stopped and restarted.
Next, create a new A-type DNS record at your DNS provider pointing to the newly issued IP address.
You can now connect to the server with the DNS name.
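A sketch of those steps with the AWS CLI (instance id and allocation id are placeholders):

# Allocate an Elastic IP and attach it to the instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0abc123 --allocation-id eipalloc-0def456

# At the DNS provider, create an A record such as:
#   dev-host.example.trade  ->  <the allocated Elastic IP>

ssh -i {pemfile} ubuntu@dev-host.example.trade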
How do I add multiple Ingresses or load balancers in Kubernetes for separate services?
Here is the post I followed to create an Ingress for my sub-domain. Is there any way to specify the same IP address created by GCE when launching multiple Ingress resources?
I am using GCE to host my cluster. Is there a better way to handle this scenario: expose two entirely different apps under the sub-domains www.app1.domain.com and www.app2.domain.com, with two Ingress resources that point to these two services using the same external IP address?
Following the post I was able to create the Ingress, but I am unable to specify the external IP address for it.
Any help is much appreciated, thank you.
You can just define multiple Ingress resources and submit them to Kubernetes; they don't have to be in the same YAML file. All Ingress resources share the same proxy, and requests are routed via the defined hostname and path to the desired service.
I am not sure what you mean by the external IP address.
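That said, here is a sketch of two separate Ingress resources routing two hostnames to two different services (service names, hosts, and ports are placeholders; the networking.k8s.io/v1 syntax shown is newer than the API available when this was asked):

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
spec:
  rules:
  - host: www.app1.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2-ingress
spec:
  rules:
  - host: www.app2.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
EOF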
I am running a kubernetes cluster with 1 master (also a node) and 2 nodes on Azure. I am using Ubuntu with Flannel overlay network. So far everything is working well. The only problem I have is exposing the service to the internet.
I am running the cluster on an azure subnet. The master has a NIC attached to it that has a public IP. This means if I run a simple server that listens on port 80, I can reach my server using a domain name (Azure gives an option to have a domain name for a public IP).
I am also able to reach the Kubernetes guestbook frontend service with a hack: I checked all the listening ports on the master and tried each one against the public IP. I was able to hit the service and get a response. Based on my understanding, this goes directly to the pod running on the master (which is also a node) rather than through the service IP (which would have load balanced across any of the pods).
My question is how do I map the external IP to the service IP? I know kubernetes has a setting that works only on GCE (which I can't use right now). But is there some neat way of telling etcd/flannel to do this?
If you use the kubectl expose command:
--external-ip="": External IP address to set for the service. The service can be accessed by this IP in addition to its generated service IP.
Or, if you create the service from a JSON or YAML file, use the spec.externalIPs array.
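For example (deployment name and IP are placeholders; the IP should be an address that actually reaches one of your nodes, such as the master's public-facing IP):

kubectl expose deployment frontend --port=80 --external-ip=40.76.1.2

# Or the equivalent field in a Service manifest:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
  externalIPs:
  - 40.76.1.2
EOF

kube-proxy on each node then intercepts traffic arriving for that IP on the service port and load balances it across the service's pods, instead of the hit-one-pod-directly behaviour described in the question.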
I want to make a CoreOS cluster that consists of local machines behind one public IP address plus CoreOS hosts on a cloud service like DigitalOcean.
I am wondering whether this is possible, since all the local machines will share the same public IP address. If it is possible, please let me know how to do this setup.
Thank you
Jake He
You can achieve this using DNS, since a single domain name can hold multiple records. Look here.
You can also achieve this using a load balancer: create a virtual IP and a pool containing all the local IPs of the CoreOS servers. Take into consideration, though, that some load balancers (F5 BIG-IP, for instance) force you to create a separate pool and virtual server for each service port you would like to balance.
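For the DNS approach, the idea is several A records under the same name, which resolvers hand back in round-robin order. A sketch (all names and addresses are placeholders):

# Zone-file style records, for illustration:
#   core.example.com.  300  IN  A  203.0.113.10   (shared public IP of the local machines)
#   core.example.com.  300  IN  A  198.51.100.21  (DigitalOcean droplet 1)
#   core.example.com.  300  IN  A  198.51.100.22  (DigitalOcean droplet 2)
dig +short core.example.com   # returns all three addresses

Keep in mind the local machines sit behind that one shared address, so the router would still have to port-forward the relevant service ports (e.g. etcd's client and peer ports) to the right machine; DNS alone can't tell them apart.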
I have Hadoop running on Amazon EC2 at two different sites, but when the components start, they pick up the internal IP. I want components at different sites to communicate with each other using internal IPs (I'm not discussing whether that's safe). My idea is to run a DNS server that translates the internal IPs to external IPs without the components noticing, so that traffic addressed to an internal IP is relayed to the other site.
Is this possible? Any suggestions on how to set up a DNS server in EC2?
Two options:
Use VPC, in which case you control which internal IPs are assigned to your instances. There are some limitations, however.
Use Elastic IPs. Connecting to the public DNS name of an Elastic IP resolves to the internal IP from within the same AWS region.
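You can see that split view with a quick lookup (the hostname is a placeholder):

dig +short ec2-203-0-113-10.compute-1.amazonaws.com
# From the internet this returns the public (elastic) IP, e.g. 203.0.113.10;
# from an instance in the same region it returns the private IP, e.g. 10.0.1.190.

So if the Hadoop configuration references the public DNS names rather than raw internal IPs, each side resolves them to whichever address is reachable from where it sits.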