So I have a simple "hello node" Node.js app running on a Kubernetes cluster. I have exposed it as a service and configured an ingress for it, and now I would like to target that service with a domain name so I can use that instead of the IP address.
Does anybody know how to configure this? I have tried searching for "external domain names" and "DNS", but the results keep pointing me back to configuring some part of Kubernetes that manages external DNS, which is not what I am looking for.
I am perhaps not using the correct terminology for this. Can anybody help?
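To illustrate what I mean, this is roughly the kind of host rule I am after (hello-node and hello.example.com are just placeholders):

    # Rough sketch of an ingress with a host rule; hello-node and
    # hello.example.com are placeholders. The domain's A record would
    # presumably still need to point at the ingress controller's
    # external IP.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: hello-node
    spec:
      rules:
      - host: hello.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-node
                port:
                  number: 8080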
Related
When I run aws eks update-kubeconfig, my ~/.kube/config file contains the following line:
server: https://1234567890ABCDEF1234567890ABCDEF.xx0.region.eks.amazonaws.com
This hostname resolves to some IP address in our VPC.
This used to work fine, but now my company is migrating to a DNS-based VPN and, due to factors outside my team's control, blanket DNS routing of a domain we don't control, such as eks.amazonaws.com, is not an option. The server hostnames also change constantly because we use blue-green deployment.
There's a really crappy workaround in which we manually keep a CNAME record in Route53 and manually edit that address in the kubeconfig after we run update-kubeconfig.
Is there a way to tell EKS to use a Route53 record instead of those amazonaws.com URLs, in a way that update-kubeconfig will know about?
DNS is the core discovery mechanism for EKS and for Kubernetes in general. Having said that, a potential solution is ExternalDNS, which integrates with Route53.
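If you do stick with the Route53 CNAME workaround, the manual edit amounts to changing the cluster entry in the kubeconfig. A minimal sketch of what that can look like, assuming a hypothetical CNAME eks.internal.example.com that you keep pointed at the current endpoint, and a kubectl recent enough to honor tls-server-name:

    # Sketch of a kubeconfig cluster entry; eks.internal.example.com is a
    # hypothetical CNAME maintained in Route53. tls-server-name keeps
    # certificate validation working against the real EKS hostname.
    clusters:
    - name: arn:aws:eks:region:123456789012:cluster/my-cluster  # hypothetical
      cluster:
        certificate-authority-data: <base64 CA data>
        server: https://eks.internal.example.com
        tls-server-name: 1234567890ABCDEF1234567890ABCDEF.xx0.region.eks.amazonaws.com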
I have a Tomcat server running on port 8080 on a Google Cloud Platform VM instance, and I have enabled SSL for the server. I have deployed my web application on it. When I enter my domain name in the browser, my application loads, but the hostname is appended with port 8443, so it looks like hostname:8443. I understand I can get rid of the port by using load balancing in GCP, but I am new to GCP and don't know how to configure it; even after configuring it, I get an error like "problem with backend service".
Can anyone kindly help me resolve this?
If I understand correctly, you would like to know whether you need to add the VM instance's external IP or the load balancer's external IP to the DNS record. If my understanding is correct: in order to use the load balancer, you need to put the load balancer's external IP in your DNS A record.
Regarding your unhealthy backend service, I would ask you to check the 'Firewall rules' section of GCP's Creating Health Checks documentation. You need to create ingress firewall rules, applicable to all VMs being load balanced, that allow traffic from the health check prober IP ranges. You didn't mention which load balancer you are using; you will find GCP's load balancer offerings at this link. Based on the load balancer you are using, you need to create an appropriate health check firewall rule.
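If you keep your GCP configuration as code, a hedged sketch of such a rule as a Deployment Manager resource might look like the following; the default network and port 8443 are assumptions, while 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health check prober ranges for most load balancer types:

    # Sketch of a health-check firewall rule; the network and port
    # are assumptions about your setup.
    resources:
    - name: allow-gcp-health-checks
      type: compute.v1.firewall
      properties:
        network: global/networks/default
        direction: INGRESS
        sourceRanges:
        - 130.211.0.0/22
        - 35.191.0.0/16
        allowed:
        - IPProtocol: tcp
          ports:
          - "8443"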
I would recommend posting this type of question on ServerFault, as StackOverflow is a Q&A site for professional and enthusiast programmers.
I'm new to Kubernetes and Rancher, but I have a cluster set up and a workload deployed. I'm looking at setting up an ingress, but am confused by what my DNS should look like.
I'll keep it simple: I have a domain (example.com) and I want to be able to configure the DNS so that it's routed through to the correct IP in my 3 node cluster, then to the right ingress and load balancer, eventually to the workload.
I'm not interested in this xip.io stuff as I need something real-world, not a sandbox, and there's no documentation on the Rancher site that points to what I should do.
Should I run my own DNS via Kubernetes? I'm using DigitalOcean droplets and haven't found any way to get Rancher to setup DNS records for me (as it purports to do for other cloud providers).
It's really frustrating as it's basically the first and only thing you need to do... "expose an application to the outside world", and this is somehow not trivial.
Would love any help, or for someone to explain to me how fundamentally dumb I am and what I'm missing!
Thanks.
You aren't dumb, man. This stuff gets complicated. Are you using AWS or GKE? Most methods of deploying Kubernetes will deploy an internal DNS resolver by default for intra-cluster communication. These names are only resolvable inside the cluster; they take the form <service-name>.<namespace>.svc.cluster.local and have no meaning to the outside world.
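For example, a Service like the following (names hypothetical) is reachable from other pods as web.default.svc.cluster.local:

    # A Service named "web" in namespace "default"; in-cluster DNS
    # resolves it as web.default.svc.cluster.local.
    apiVersion: v1
    kind: Service
    metadata:
      name: web
      namespace: default
    spec:
      selector:
        app: web
      ports:
      - port: 80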
However, exposing a service to the outside world is a different story. On AWS you can do this by setting the service's type to LoadBalancer, in which case Kubernetes will automatically spin up an AWS load balancer, along with a public domain name, and configure it to point to the service inside the cluster. From there, you can configure any domain name that you own to point to that load balancer.
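A minimal sketch of that, with hypothetical names and ports:

    # LoadBalancer Service sketch; the app label and ports are
    # hypothetical. On AWS, the provisioned load balancer's DNS name
    # shows up under the service's status.loadBalancer once it is
    # ready (visible via kubectl get svc my-app).
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080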
How do I add multiple ingresses or load balancers in Kubernetes for separate services?
Here is the post I followed when I ended up creating an ingress for my sub-domain. Is there any way to specify the same IP address created by GCE when launching multiple Ingress resources?
I am using GCE for hosting my cluster. Is there a better way to handle this scenario: exposing two entirely different apps under the sub-domains www.app1.domain.com and www.app2.domain.com, with two ingress resources that point to these two specific services using the same external IP address?
Following that post I was able to create the ingress, but I am unable to specify the external IP address for it.
Any help is much appreciated, thank you.
You can just define multiple Ingress resources and apply them to Kubernetes; they don't have to be in the same YAML file. All Ingress resources share the same proxy, and requests are routed via the defined hostname and path to the desired service.
I am not sure what you mean by the external IP address.
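If you mean guaranteeing that both hostnames are served from one reserved address on GCE, one hedged option is a single Ingress carrying both host rules, pinned to a reserved global static IP via the kubernetes.io/ingress.global-static-ip-name annotation; the service names and my-static-ip below are hypothetical:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: apps
      annotations:
        # hypothetical name of a reserved global static IP in GCE
        kubernetes.io/ingress.global-static-ip-name: my-static-ip
    spec:
      rules:
      - host: www.app1.domain.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-service  # hypothetical
                port:
                  number: 80
      - host: www.app2.domain.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-service  # hypothetical
                port:
                  number: 80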
I am running a kubernetes cluster with 1 master (also a node) and 2 nodes on Azure. I am using Ubuntu with Flannel overlay network. So far everything is working well. The only problem I have is exposing the service to the internet.
I am running the cluster on an azure subnet. The master has a NIC attached to it that has a public IP. This means if I run a simple server that listens on port 80, I can reach my server using a domain name (Azure gives an option to have a domain name for a public IP).
I am also able to reach the Kubernetes guestbook frontend service with a bit of a hack: I checked all the listening ports on the master and tried each port with the public IP. I was able to hit the service and get a response. Based on my understanding, this goes directly to the pod that is running on the master (which is also a node) rather than through the service IP (which would have load-balanced across any of the pods).
My question is how do I map the external IP to the service IP? I know kubernetes has a setting that works only on GCE (which I can't use right now). But is there some neat way of telling etcd/flannel to do this?
If you use the kubectl expose command:

    --external-ip="": External IP address to set for the service. The service can be accessed by this IP in addition to its generated service IP.

Or, if you create the service from a JSON or YAML file, use the spec.externalIPs array.
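A minimal sketch of the YAML variant, using 203.0.113.10 as a stand-in for your node's public IP and hypothetical app names:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: my-app            # hypothetical label
      ports:
      - port: 80
        targetPort: 8080
      externalIPs:
      - 203.0.113.10           # stand-in for the master's public IP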