How to create and manage a floating IP for a local highly available cluster? - linux

I currently have a highly available cluster for multiple services of my application. The cluster works without any problems on AWS, and now I want to replicate and adapt the whole structure within a local network.
I use Pacemaker/Corosync to share an AWS Elastic IP between two HAProxy instances, but I'm not sure whether the same flow can be recreated within my local network, since I don't know how to share a single local IP between two of the computers.
Is it possible to manage a single local IP as a floating IP within a local network?

Have a look at an HAProxy with VRRP and Keepalived setup. I will have to run a test in my homelab if you need configs.
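If you go the Keepalived route, here is a minimal sketch of the MASTER node's keepalived.conf; the interface name, password, and the 192.168.1.50/24 floating address are placeholders, and the BACKUP node would use state BACKUP with a lower priority:

    vrrp_instance VI_1 {
        state MASTER              # the peer node uses BACKUP
        interface eth0            # NIC that should carry the floating IP
        virtual_router_id 51      # must match on both nodes
        priority 101              # peer uses a lower value, e.g. 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme    # shared secret, placeholder
        }
        virtual_ipaddress {
            192.168.1.50/24       # the floating (virtual) IP
        }
    }

Alternatively, since you already run Pacemaker/Corosync, the same floating address can be managed as an ordinary cluster resource with the ocf:heartbeat:IPaddr2 agent, roughly:

    pcs resource create cluster_vip ocf:heartbeat:IPaddr2 ip=192.168.1.50 cidr_netmask=24 op monitor interval=30s

No cloud-specific API is involved on a plain LAN; the address just has to be unused in the local subnet.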

Related

CoreOS Clustering Local and Cloud Hosts

I want to make a CoreOS cluster consisting of local machines behind one public IP address plus CoreOS hosts on a cloud service like DigitalOcean.
I am wondering whether this is possible, since all the local machines will share the same public IP address. If it is possible, please let me know how to do this setup.
Thank you
Jake He
You can achieve this using DNS, since it can hold multiple records for the same domain name. Look here.
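As a sketch of the DNS idea, one name can simply carry several A records and clients will be spread across them; the zone, names, and addresses below are hypothetical:

    ; zone file fragment for example.com (placeholder addresses)
    coreos.example.com.   300  IN  A  203.0.113.10   ; cloud host on DigitalOcean
    coreos.example.com.   300  IN  A  203.0.113.11   ; another cloud host
    coreos.example.com.   300  IN  A  198.51.100.7   ; the local network's single public IP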
You can achieve this using a load balancer: create a virtual IP and a pool with all the local IPs of the CoreOS servers, but take into consideration that some load balancers (such as BIG-IP, for instance) force you to create a separate pool and virtual server for each service port you would like to balance.
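The answer doesn't name a particular load balancer; purely as an illustration, an HAProxy-style config of a virtual IP fronting a pool of the local CoreOS machines (placeholder names and addresses, and note it covers exactly one service port, as mentioned above) could look like:

    # haproxy.cfg fragment, illustrative only
    frontend coreos_http
        bind 203.0.113.20:80            # the virtual IP for this service port
        default_backend coreos_pool

    backend coreos_pool
        balance roundrobin
        server core1 192.168.0.11:80 check
        server core2 192.168.0.12:80 check
        server core3 192.168.0.13:80 check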

Recommended replica set config in Azure

We're running MongoDB on Azure and are in the process of setting up a production replica set (no shards) and I'm looking at the recommendations here:
http://docs.mongodb.org/ecosystem/tutorial/install-mongodb-on-linux-in-azure/
I see that the replica set config is such that the members will talk to each other via external IP addresses. Isn't this going to 1) incur additional Azure costs, since the replication traffic goes through the external IPs, and 2) add replication latency for the same reason?
At least one of our applications that will talk to Mongo will be running outside of Azure.
AWS has a feature where external DNS names resolve to internal IPs when looked up from the VMs, and to the external IP when resolved from outside, which makes things significantly easier :) In my previous job, I ran a fairly large sharded MongoDB in AWS...
I'm curious what you folks recommend. I had two ideas...
1) Configure each mongo host with an external IP (not entirely sure how to do this in Azure, but I'm sure it's possible...) and configure DNS to point to those IPs externally. Then give each VM an /etc/hosts file that points those same names to internal IP addresses (sketched after these two options). Run Mongo on port 27017 in all cases (or really whatever port). This means the set does replication traffic over internal IPs, but external clients can talk to it using the same DNS names.
2) Similar to #1, but run mongo on 3 different ports with only one external IP address, and point all three external DNS names to that external IP address. We achieve the same results, but it's cleaner I think.
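For reference, here is a sketch of what option 1 would look like, with hypothetical names and addresses; public DNS points these names at the external IPs, while each VM overrides them locally:

    # /etc/hosts on each Azure VM (internal addresses are placeholders)
    10.0.0.4   mongo1.example.com
    10.0.0.5   mongo2.example.com
    10.0.0.6   mongo3.example.com

and the replica set is initiated with the same names from the mongo shell:

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "mongo1.example.com:27017" },
        { _id: 1, host: "mongo2.example.com:27017" },
        { _id: 2, host: "mongo3.example.com:27017" }
      ]
    })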
Thanks!
Jerry
There is no best way, but let me clarify a few of the "objective" points:
There is no charge for any traffic moving between services / VMs / storage in the same region. Even if you connect from one VM to the other using servicename.cloudapp.net:port. No charge.
Your choice whether you make the mongod instances externally accessible. If you do create external endpoints, you'll need to worry about securing those endpoints (e.g. Access Control Lists). Since your app is running outside of Azure, this is an option you'll need to consider. You'll also need to think about how to encrypt the database traffic (mongodb Enterprise edition supports SSL; otherwise you need to build mongod yourself).
Again, if you expose your mongod instances externally, you need to consider whether to place them within the same cloud service (sharing an IP address, with separate ports per mongod instance) or in multiple cloud services (a unique IP address per cloud service). If the mongod instances are within the same cloud service, they can then be grouped into an availability set, which reduces downtime by avoiding simultaneous host OS updates across all VMs and splits the VMs across multiple fault domains.
In the case where your app/web tier lives within Azure, you can use internal IP addresses, with both your app and MongoDB VMs within the same virtual network.

How to create Azure Input Endpoint to VRRP Virtual IP Address

I'm setting up a test web farm in Azure. Consisting of:
Four Ubuntu Servers
Two balancers running HAProxy + Keepalived
Two web servers running Apache
Keepalived has been configured and everything has been working fine. HAProxy performs great.
My issue is that I want to enable the Keepalived failover clustering, but I can't seem to figure out how to create an Input Endpoint in Azure for the virtual IP address that the Keepalived VRRP is using.
In other words, I want to create an Input Endpoint for a virtual IP address in Azure, but not for an existing VM. So far, the only thing I've been able to do is create Input Endpoints for existing VMs (using their IP) for specific port numbers. I want to be able to configure:
Take TCP requests on port XX and map them to IP address YY.YY.YY.YY on port ZZ
Anyone know of a way to do this? I've looked on both portals (new and old), and the closest thing I see is that the Cloud Services page for my VNET lists the Input Endpoints, but there's no add/edit button.
This is not currently possible in Azure. Azure IaaS VMs do not yet support multiple IPs per interface, so keepalived will not be able to move a VIP between the nodes. We tried to do the same thing and were told it's not currently available. However, it's supposed to be on the road-map and it is "coming", as is the ability to have multiple interfaces per machine.
Input endpoints exist to expose a service on a single VM (it's a NAT), and they are not attachable to an actual interface. The only option I thought through was to use Azure's Traffic Manager to round-robin between the two HAProxy instances using two exposed endpoints, with a health check to fail over to a single HAProxy instance if one goes down.
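A rough sketch of that Traffic Manager setup with the classic (Service Management) Azure PowerShell cmdlets; the profile name, DNS names, and probe settings are placeholders, and the exact parameter names should be checked against your module version:

    # create a round-robin profile with an HTTP health probe (placeholder names)
    $tm = New-AzureTrafficManagerProfile -Name "haproxy-tm" `
            -DomainName "myhaproxy.trafficmanager.net" -LoadBalancingMethod "RoundRobin" `
            -MonitorProtocol "Http" -MonitorPort 80 -MonitorRelativePath "/" -Ttl 30

    # register the two cloud services hosting the HAProxy VMs as endpoints
    $tm = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $tm `
            -DomainName "haproxy-a.cloudapp.net" -Type "CloudService" -Status "Enabled"
    $tm = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $tm `
            -DomainName "haproxy-b.cloudapp.net" -Type "CloudService" -Status "Enabled"

    # commit the profile changes
    Set-AzureTrafficManagerProfile -TrafficManagerProfile $tm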

Exposing a floating IP address inside an Azure Virtual Network to the outside world

I'm dealing with the following scenario. I've got two Linux VMs on the same virtual network. High Availability is implemented through Pacemaker which maintains a floating IP for the cluster so that either VM A or VM B is reachable under that IP.
I haven't figured out a way to expose a well known port on the floating cluster IP inside the virtual network to the public internet. There's nothing in the Azure Portal that would indicate that you can forward a port for a reserved IP to an arbitrary address inside a virtual network.
Any suggestions?
Your two Linux VMs will exist in a Cloud Service, and hopefully you have deployed them as part of an Availability Set. If you have made the machines operate in an Availability Set, you ensure that at least one VM is operational (note this isn't based on the health of a particular service on your VM).
If you have deployed both instances into a single Cloud Service, then you already have a load balancer in front of these VMs, and incoming traffic is already being load balanced between the two.
While I'm not familiar with Pacemaker, I don't believe you'll need it if you follow the standard approach to provisioning and running virtual machines in an HA configuration. There is a good overview of how to achieve this here: http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-manage-availability/
If in ARM, I suggest you look at adding a Load Balancer in front. That will help your scenario.

Fixed internal IPs in Azure

We are currently evaluating Azure to see if we can use it for our stress and production environments.
Our environment is pretty complex, including web servers, MySQL servers, Hadoop and Cassandra servers, as well as monitoring and deployment servers.
To set up the stress environment, we need to install the environment and then load large amounts of data into it before we can run a stress test. This takes time and effort, so, since we pay by the hour, we would like to be able to completely shut down the environment and start it up again, ready to go, when we want to run additional stress tests.
Here's our issue: we could not find a way to set a fixed internal IP address for a VM in Azure. In AWS it is possible with a VPC, but in Azure, even if you define a virtual network, there seems to be no way to set a fixed internal IP (at least none that we can find).
This creates several issues for us:
1. Hadoop relies on all nodes in the cluster being able to resolve all the nodes' hostnames to IP addresses.
2. A Cassandra cluster whose IP addresses all change at once freaks out. We actually lost data in a test Cassandra cluster because of this.
Our questions are:
1. Is there a way to set a fixed internal IP for a VM in Azure?
2. If not, does anyone have experience running Hadoop and Cassandra on Azure? How did you handle the changing IP addresses when the cluster is shut down?
Any advice on these issues will be much appreciated,
Thanks
Amir
Please note that the portal doesn't always expose all the capabilities of Azure. Some of the features in Azure are only possible through the REST API and PowerShell.
If you take a look at the new release of the PowerShell Cmdlets, you'll notice there is a new option for Static IPs in VNets.
https://github.com/WindowsAzure/azure-sdk-tools
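Presumably this refers to the Set-AzureStaticVNetIP cmdlet added in that release; a short sketch of how it is used (VNet name, cloud service, VM name, and address are placeholders):

    # check that the address is still free in the VNet, then pin it to the VM
    Test-AzureStaticVNetIP -VNetName "StressVNet" -IPAddress "10.0.0.7"
    Get-AzureVM -ServiceName "stress-env" -Name "cassandra-1" |
        Set-AzureStaticVNetIP -IPAddress "10.0.0.7" |
        Update-AzureVM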
