CoreOS Cluster DNS setup

There is a CoreOS cluster of 3 units - 2 web servers and an nginx load balancer - each residing on its own DigitalOcean instance. How do I set up DNS in such a way that it always points to the load balancer instance, given that it may end up on any of the machines?
Thanks!

Largely, that will depend on your DNS infrastructure. Using TSIG-signed dynamic updates with the nsupdate client is going to be your best bet.
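A minimal sketch of such an update, assuming a BIND-style authoritative server with dynamic updates enabled; the key file, server, zone, name, and IP below are all illustrative placeholders:

    # Push the balancer's current address with a TSIG-signed dynamic update
    nsupdate -k /etc/dns/lb-update.key <<'EOF'
    server ns1.example.com
    zone example.com
    update delete lb.example.com. A
    update add lb.example.com. 60 A 203.0.113.10
    send
    EOF

A short TTL (60s here) keeps the stale window small when the balancer gets rescheduled onto another machine.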

An alternative would be to take the load balancer(s) out of the cluster and use sidekick units to announce the various services' IPs to the load balancer. That way your load balancers have static IPs, which are easier to manage from a DNS standpoint.
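For reference, a sidekick in fleet is typically a unit that runs next to the service and publishes its location while it is healthy. A rough sketch (unit names, the etcd key, and the port are illustrative):

    [Unit]
    Description=Announce web@%i to the load balancer
    BindsTo=web@%i.service
    After=web@%i.service

    [Service]
    EnvironmentFile=/etc/environment
    # Re-announce this host's IP under a TTL'd etcd key while the service is up
    ExecStart=/bin/sh -c "while true; do etcdctl set /services/web/%i '${COREOS_PRIVATE_IPV4}:80' --ttl 60; sleep 45; done"
    ExecStop=/usr/bin/etcdctl rm /services/web/%i

    [X-Fleet]
    MachineOf=web@%i.service

The load balancer side can then watch /services/web (e.g. with confd) and regenerate its upstream list whenever a key appears or expires.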

You can use the MachineID option of fleet and always run the load balancer on a specific CoreOS host.
Reference: https://coreos.com/docs/launching-containers/launching/fleet-unit-files/
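A minimal sketch of a pinned unit; the MachineID value below is illustrative and would come from /etc/machine-id on the host you want to pin to:

    [Unit]
    Description=nginx load balancer

    [Service]
    ExecStart=/usr/bin/docker run --name lb -p 80:80 nginx
    ExecStop=/usr/bin/docker stop lb
    ExecStopPost=-/usr/bin/docker rm lb

    [X-Fleet]
    # Always schedule this unit on the machine with this ID
    MachineID=148a18ff6e954cd892dac9de9bb90d5a

Note that pinning to one machine reintroduces a single point of failure: if that host dies, fleet has nowhere else to put the unit.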

I prefer to start nginx on all my CoreOS servers. This way I get rid of the "this machine does that" thing: I don't care where my services are running, and if something goes wrong I just add or remove a machine from the cluster.
Nginx being super light, that's not an issue.
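In fleet terms this is a global unit; a minimal sketch (image and names are illustrative):

    [Unit]
    Description=nginx on every host

    [Service]
    ExecStartPre=-/usr/bin/docker rm -f nginx
    ExecStart=/usr/bin/docker run --name nginx -p 80:80 nginx
    ExecStop=/usr/bin/docker stop nginx

    [X-Fleet]
    # Schedule one copy of this unit on every machine in the cluster
    Global=true

DNS can then hold one A record per machine, and any of them will accept and proxy the traffic.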

Related

How do I configure my DNS to work with Rancher 2.0 ingress?

I'm new to Kubernetes and Rancher, but have a cluster setup and a workload deployed. I'm looking at setting up an ingress, but am confused by what my DNS should look like.
I'll keep it simple: I have a domain (example.com) and I want to be able to configure the DNS so that it's routed through to the correct IP in my 3-node cluster, then to the right ingress and load balancer, and eventually to the workload.
I'm not interested in this xip.io stuff as I need something real-world, not a sandbox, and there's no documentation on the Rancher site that points to what I should do.
Should I run my own DNS via Kubernetes? I'm using DigitalOcean droplets and haven't found any way to get Rancher to set up DNS records for me (as it purports to do for other cloud providers).
It's really frustrating as it's basically the first and only thing you need to do... "expose an application to the outside world", and this is somehow not trivial.
Would love any help, or for someone to explain to me how fundamentally dumb I am and what I'm missing!
Thanks.
You aren't dumb, man. This stuff gets complicated. Are you using AWS or GKE? Most methods of deploying Kubernetes will deploy an internal DNS resolver by default for intra-cluster communication. Those names are only resolvable inside the cluster. They take the form of <service-name>.<namespace>.svc.cluster.local and have no meaning to the outside world.
However, exposing a service to the outside world is a different story. On AWS you may do this by setting the Service's type to LoadBalancer, in which case Kubernetes will automatically spin up an AWS load balancer, along with a public DNS name, and configure it to point to the service inside the cluster. From there, you can configure any domain name that you own to point to that load balancer.
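A minimal sketch of such a Service; the names, selector, and ports are illustrative and must match your own workload:

    # Service of type LoadBalancer: the cloud provider provisions an external LB
    apiVersion: v1
    kind: Service
    metadata:
      name: my-workload
    spec:
      type: LoadBalancer
      selector:
        app: my-workload      # must match your pod labels
      ports:
        - port: 80            # port the load balancer exposes
          targetPort: 8080    # port your containers listen on

Once it is up, `kubectl get svc my-workload` shows the external IP or hostname, and you point a CNAME or A record for your domain at it.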

Azure migrating Load Balancer to Standard tier with no downtime

Hello, I have a Virtual Machine Scale Set behind a Basic-tier Load Balancer and I would like to migrate to the Standard tier. Is it possible with no downtime, or do I have to simply remove the Basic LB's backend pool and afterwards create a backend pool for my VMSS within the Standard LB?
Thank you in advance.
Since you can't add multiple load balancers to a VMSS, the only alternative you'd have for no downtime would be to place a substitute load balancer there for a while.
Build an HAProxy box out front, configure it to load balance your apps, and switch DNS over to the new IP. When DNS has propagated, delete the old load balancer, build the new one, and switch back.
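A minimal sketch of such a temporary HAProxy front end; the backend IPs and ports are illustrative placeholders for your VMSS instances:

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend web_in
        bind *:80
        default_backend vmss_nodes

    backend vmss_nodes
        balance roundrobin
        option httpchk GET /
        server node1 10.0.0.4:80 check
        server node2 10.0.0.5:80 check

Lower the DNS TTL well before the switch so both cutovers propagate quickly.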
If you're using the frontend IP rather than DNS, then I can't think of a way to do this without downtime. But swapping the frontend IP over to a temporary VM would be quicker than deleting and recreating an LB.

DNS configuration to distribute traffic to multiple hosts on Google Cloud

We have 2 servers hosting a particular service on Google Cloud. How do we do a simple round-robin DNS configuration to distribute the load?
According to this thread, Google Cloud DNS does not support round-robin.
You can set up DNS round robin with Cloud DNS simply by adding more than one IP address to your DNS record.
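For example, with the gcloud CLI (zone name, record name, and IPs are illustrative placeholders):

    # Add one A record with two IPs; clients will receive both answers
    gcloud dns record-sets transaction start --zone=my-zone
    gcloud dns record-sets transaction add --zone=my-zone \
        --name=service.example.com. --type=A --ttl=300 \
        "203.0.113.10" "203.0.113.11"
    gcloud dns record-sets transaction execute --zone=my-zone

Keep in mind that plain DNS round robin does no health checking: a dead server keeps receiving its share of traffic until you remove its record.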
You might want to look into Google Compute Engine's Load Balancing options. This will allow you to have one IP address that sends traffic to your two servers. This has a few advantages, including that you can configure it to automatically stop sending traffic to an instance if it fails a health check.

CoreOS Clustering Local and Cloud Hosts

I want to make a CoreOS cluster consisting of local machines behind one public IP address and CoreOS hosts on a cloud service like DigitalOcean.
I am wondering whether this is possible, since all the local machines will have the same public IP address. If this is possible, please let me know how to do this setup.
Thank you
Jake He
You can achieve this using DNS, since it can hold multiple records for the same domain name. Look here
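For example, a BIND-style zone fragment with one A record per site (names and IPs are illustrative):

    ; Two A records for the same name; resolvers rotate between them
    cluster.example.com.  300  IN  A  198.51.100.10   ; DigitalOcean droplet
    cluster.example.com.  300  IN  A  203.0.113.20    ; public IP of the local site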
You can achieve this using a load balancer: create a virtual IP and a pool with all the local IPs of the CoreOS servers. But take into consideration that some load balancers force you to create a separate pool and virtual server for each service port you would like to balance (BIG-IP, for instance).

How to test that an Availability Set is working in Azure?

I have 2 Virtual Machines in the same Availability Set under Azure. Let's call them A and B.
I created A first, then cloned the VHD and created B from it. I can connect to both using RDP and both are the same. Both are under the same domain xxxxx.cloudbox.net, as the cloud service says.
I have a domain testAB.com pointing to the common IP of both, let's say 10.0.0.1 for example. I can connect to testAB.com without any problem.
As far as I understand, if I turn off A, then I should be able to connect to B in a transparent way.
But this is not working: when I turn off A and try to reach testAB.com, B doesn't get the traffic.
Ideas?
An availability set is not the same thing as a load balancer. When you talk about connecting in a transparent way, I think you mean through a load balancer. In that case you need to set up what Azure calls load-balanced endpoints on each VM, say port 80. Then you should be able to connect via HTTP to both VMs "transparently". Keep in mind that failover is not instantaneous.
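A rough sketch with the classic (ASM) Azure CLI of that era; the VM names and flags here are assumptions, and exact option names varied between CLI versions, so check `azure vm endpoint create --help`:

    # Put port 80 of both VMs into the same load-balanced set so the
    # cloud service IP distributes traffic across A and B
    azure vm endpoint create A 80 80 --lb-set-name web-lb --probe-port 80
    azure vm endpoint create B 80 80 --lb-set-name web-lb --probe-port 80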
The answer to my question is to use the Traffic Manager option in Azure. It has nothing to do with availability sets. Just follow the Traffic Manager instructions here
