Standby setup for redundancy using Kubernetes Ingress / DNS

I am trying to set up a slave/standby as below:
Normally, clients connect to the primary via a hostname, for example updates.mysite.com, which resolves to an Ingress LoadBalancer IP (say A) behind which sits my actual (primary) application.
I have set up another, slave copy of the application in a different data center, which clients would need to connect to if the primary site goes down. They would then connect to LoadBalancer IP B, behind which sits the slave application.
Options I considered:
1. When the primary goes down, bring up the slave so that updates.mysite.com now points to IP B. We set the TTL very low, say 1 minute, so clients may try the primary for at most a minute before being switched to the new IP (same hostname). The problem here is that the low TTL causes unnecessary DNS lookups all the time, for a failover that happens only rarely.
2. Set up updates2.mysite.com and let clients know that, on disconnection or when updates.mysite.com is unavailable, they should connect to updates2.mysite.com instead.
3. Set up updates2.mysite.com pointing to IP B, and expose an API that returns the hostname to connect to. Clients would query this API before connecting, and again after any disconnection, and we make sure the API always returns the correct hostname.
What would be the best approach, and are there other effective ways to achieve this? I prefer Option 1 because it requires the least work and the simplest setup, but I might be missing something.
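In zone-file terms, Option 1 is just a single low-TTL A record that gets repointed on failover. A minimal sketch, assuming BIND-style syntax and placeholder addresses standing in for IPs A and B:

    ; updates.mysite.com with a 60-second TTL, pointing at the primary
    ; Ingress (the IP called A above; 203.0.113.10 is a placeholder)
    updates.mysite.com.   60   IN   A   203.0.113.10
    ; on failover, the record is changed to the standby Ingress (IP B):
    ; updates.mysite.com.   60   IN   A   198.51.100.20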

Related

How to point a single subdomain to the same server with two IP addresses

For example, I have a server hosted at my home with two NICs, for redundancy obviously.
NIC1 has been assigned the public IP 103.204.82.22 from ISP1.
NIC2 has been assigned the public IP 144.110.12.64 from ISP2.
I can access the server via both IPs as usual.
Now, I have a domain, acme.com, and I've created a subdomain server.acme.com. I want to point server.acme.com to both IPs so that if one ISP fails to provide connectivity, my server remains online via the other.
I've already tried A and CNAME records, but it isn't working. It does work with an A record if I use only one IP for the subdomain.
Can anyone tell me how I can point both IPs to the single subdomain?
Thanks in advance
What you are describing is called DNS round robin, but that won't give you your expected outcome.
Whatever you do with DNS, if one ISP connection is down, some traffic will still be sent to it.
You may have your terminology mixed up a little to start with: in this case, I suspect you really mean that server.acme.com is a host record rather than a subdomain. (A subdomain would mean the server's address would be at servername.server.acme.com.)
If you create an A record with both IP addresses in it and keep the TTL (time to live) short, then when a client wants to contact your machine it will randomly pick one of the addresses and, if that address is unavailable, move on to the next. If a cached address stops working, clients will keep trying it for up to the TTL.
Presuming that the IP addresses don't change, which would be a different problem altogether, this provides basic load balancing and failover across both connections.
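In zone-file terms this is simply two A records for the same name. A minimal sketch, assuming BIND-style syntax, the IPs from the question, and a 5-minute TTL:

    server.acme.com.   300   IN   A   103.204.82.22   ; via ISP1
    server.acme.com.   300   IN   A   144.110.12.64   ; via ISP2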
Amazon provides a more advanced type of DNS (Route 53) that will actively monitor your connections and only return addresses that are live: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html

How to get haproxy to use a specific cluster computer via the URI

I have successfully set up haproxy on my server cluster, but I have run into one snag that I can't find a solution for...
TESTING INDIVIDUAL CLUSTER COMPUTERS
It can happen that, for one reason or another, one computer in the cluster gets a configuration variation, and I can't find a way to tell haproxy that I want to use a specific computer out of the cluster.
Basically, mysite.com (and several other domains) is served up by boxes web1, web2 and web3, and they round-robin perfectly.
I want to add something to the URL to tell haproxy that I specifically want to talk to web2 only, because in a specific case only that server is throwing an error on one web page.
Does anyone know how to do that without building a new cluster with a URI filter and only one computer in it? I am hoping to use the cluster as-is but add something to the URI that tells haproxy which server in the cluster to use.
Thanks!
Have you thought about using a different port for this, i.e. defining a new listen section on a different port? As I understand it, you are free to modify your URL by any means.
Basically, haproxy cannot do what I was hoping: there is no way to add a parameter to the URL to select which host in the cluster to use.
I solved my testing issue by setting up unique ports for each server in the cluster at the firewall. This could also be done at the haproxy level (see the sketch below).
To secure this path from the outside world, I told the firewall to only accept traffic from inside our own network.
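For reference, a sketch of what the haproxy-level variant could look like; the server name, IPs, port and internal subnet are assumptions, not the poster's actual values:

    # one extra listen section per backend, so testers can reach a
    # specific server directly on its own port
    listen web2_direct
        bind :8082
        mode http
        # mirror the firewall rule: only accept internal clients
        acl internal src 10.0.0.0/8
        http-request deny if !internal
        server web2 10.0.0.12:80 check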
This lets us test specific servers within the cluster. We did have to add a trap in our PHP app to deal with an oversized session cookie, because haproxy manipulates this cookie to keep users on the server they first hit; when the invalid session cookie is detected, the page simply drops the session and reloads.
This is working well for our testing purposes.

How to have a VIP for two servers (Prod and DR)

We have prod and DR servers and would like to have a VIP for them. They are not exposed to the internet. One server is active for the first six months; after a DR drill, the DR server acts as prod for the next six months. Upstream systems push files (CSV, text or zip) to our servers via SFTP, roughly 200-300 MB per day. Currently, before every DR drill, these upstream systems need to raise a change request to update the IP, which takes at least two weeks. To resolve this issue we decided to provide a VIP from our end, so that they can use the VIP to transfer files via SFTP.
Note: the DR server will be up but won't be active; app services won't be running.
File transfer via SFTP is not recommended on an F5 network (we are not on F5).
Both servers, Prod and DR, run on VMware.
We would like to have a VIP for these servers. Need your advice and suggestions.
Thanks in advance.
Bala
Bala, I think I understand your question. It's not quite clear what the question is, but my perception leads me to believe that you are trying to determine how to load balance the two server nodes.
First of all, your group will have to acquire an F5 load balancer that is configured in accordance with your network requirements; I am assuming the load balancer is already live on the network. In order to load balance the two servers, you will have to create a pool consisting of the two servers; once the pool has been created, you then create a virtual server and associate the pool with it. Below are the essential steps required to make this happen. Also note that the server nodes have to be added as Nodes on the load balancer first.
Add Node:
Go to Local Traffic -> Nodes -> Create
a. Give the node a name.
b. Enter the IP address of the node in the IP field.
c. In the Configuration section, select "Node Default" for the Health Monitor and leave the rest at their default settings of 1, 0, 0.
Create Pool:
From the GUI, go to Local Traffic -> Pools -> Create
a. Give the pool a name.
b. For now, use tcp as the monitor (select from the available options).
c. In the Resources section, set the Load Balancing Method to Round Robin (traffic distributed in a circular fashion). Other options include Least Connections, Observed, Random and more; a good reference, with links to creating pools, VIPs, etc.:
https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm_configuration_guide_10_0_0/ltm_pools.html
Create the Virtual Server:
In the GUI, go to Local Traffic -> Virtual Servers -> Create
a. Give the virtual server a name.
b. For the Type, select "Standard" in this case. There are other options that do not apply to your request at the moment, but I advise you to read up on them: Forwarding (Layer 2), Forwarding (IP), Performance (HTTP), Performance (Layer 4), Stateless, Reject, Internal.
c. In the Source field, enter 0.0.0.0/0.
d. In the Destination field, select "Host" and enter the IP address of the VIP, which is normally what the URL's DNS name resolves to.
e. Select the service port; for HTTP traffic select http/80, but this could be whatever port your services listen on. Note that for port 443/HTTPS you will require an SSL certificate.
f. In the Configuration section, select Advanced and set the following:
Protocol: TCP
Protocol Profile (Client): tcp
HTTP Profile: http
SNAT: Auto Map
I am assuming you are using SNAT Auto Map here, as it is much simpler to deal with; otherwise a SNAT pool will have to be created.
g. At the bottom, under "Resources", select the pool you created above in the "Default Pool" drop-down.
h. Select "Source Address" for the "Default Persistence Profile".
Click Finish.
At this point, if the server nodes are live and configured with the appropriate content, the resources should be reachable through the virtual server. There are other criteria, such as monitors that can be configured to check a particular page, but that is for another session.
I hope I pointed you in the right direction.
Note: you have to determine the type of service and application running on the servers; if the URL requires requests to return to the same server, the Source Address persistence profile set in step h above takes care of that.
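For completeness, the same setup can be scripted from the BIG-IP command line instead of the GUI. A rough tmsh sketch of the steps above, with placeholder names and IPs, and SFTP's port 22 substituted for http/80 (all values are assumptions):

    # add the two nodes (Add Node)
    create ltm node prod_node address 10.1.1.10
    create ltm node dr_node address 10.1.1.20
    # pool with a tcp monitor and round-robin distribution (Create Pool)
    create ltm pool sftp_pool monitor tcp load-balancing-mode round-robin members add { 10.1.1.10:22 10.1.1.20:22 }
    # virtual server on the VIP address with automap SNAT and
    # source-address persistence (Create the Virtual Server)
    create ltm virtual sftp_vip destination 10.1.1.100:22 ip-protocol tcp pool sftp_pool source-address-translation { type automap } persist replace-all-with { source_addr }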

Azure load balancer session affinity not sticking. Why?

My client makes two HTTP requests to my cloud service, which has two replicas.
According to the documentation (1), and since the connection is kept alive, I'd expect the two requests to go to the same replica.
However, I see each request goes to a different replica. For performance reasons, this is undesirable.
What is causing the distribution?
How do I debug load balancer?
(1) https://azure.microsoft.com/en-us/documentation/articles/load-balancer-distribution-mode/
The default distribution mode is a 5-tuple hash (source IP, destination IP, source port, destination port, protocol). This means that each new connection initiated by a client may land on a different server.
If you use the sourceIP mode, stickiness will be based on the client IP address instead.
If you need application-level stickiness (such as cookie-based affinity), then you may look at Application Gateway: https://azure.microsoft.com/en-us/documentation/services/application-gateway/
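If you manage the load balancer configuration yourself, the distribution mode can be switched to source-IP affinity. A hedged sketch with the current Azure CLI, using placeholder resource names (the classic cloud-service model in the question is configured through the older Azure PowerShell cmdlets instead):

    # change an existing rule from the default 5-tuple hash to
    # source-IP-based affinity
    az network lb rule update \
        --resource-group myResourceGroup \
        --lb-name myLoadBalancer \
        --name myHttpRule \
        --load-distribution SourceIP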
Yves

Two external IPs, one web server/website

I'm having the following dilemma: I have a website on IIS with two internal IPs, each NATed to a different external IP (each from a different ISP). I also configured round-robin DNS (two A records with the same name but different IPs). Basically, this balances traffic between the two ISPs, which is what we want. The thing is that this configuration (DNS round robin) is apparently meant for a cluster of servers where each server has its own ISP on its own NIC, so that traffic from the web server back to the client goes out over that ISP.
Right now we are being told that no matter where our inbound traffic comes from, the outbound traffic always goes through our main WAN. That is also OK, because we have tested that when the primary WAN link is down, the website keeps working over the secondary link.
OK, the question is: do you think there may be a problem with this configuration? Is DNS round robin also useful in this setup?
Thanks a lot for your feedback.
Normally, when you host a web service, the responses are much bigger than the inbound traffic (you typically receive an HTTP GET and deliver the whole content back), so it would make much more sense to balance the outbound traffic over your ISPs to get value out of your additional bandwidth.
Does it make sense? Yes: you can lose one ISP and your site is still available, assuming your DNS server health-checks the sites before handing back an IP address. If you always return both IPs even when one ISP is down, it won't help you at all.
It would be better to add an additional server, or to do policy-based routing on your single server so that each response is sent out of the interface on which the request was received (see the sketch below).
Hope that helps!
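The poster's box is IIS on Windows, but to illustrate the policy-based-routing idea, here is what it looks like with Linux iproute2; the interface names, gateways and internal IPs are all placeholders:

    # one routing table per ISP, each with its own default gateway
    ip route add default via 192.168.1.1 dev eth0 table 101   # ISP1
    ip route add default via 192.168.2.1 dev eth1 table 102   # ISP2
    # replies sourced from an ISP's address leave via that ISP's link
    ip rule add from 192.168.1.10 table 101
    ip rule add from 192.168.2.10 table 102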
