How to create Azure Input Endpoint to VRRP Virtual IP Address

I'm setting up a test web farm in Azure, consisting of:
Four Ubuntu Servers
Two balancers running HAProxy + Keepalived
Two web servers running Apache
Keepalived has been configured and everything has been working fine. HAProxy performs great.
My issue is that I want to enable the Keepalived failover clustering, but I can't seem to figure out how to create an Input Endpoint in Azure for the virtual IP address that the Keepalived VRRP is using.
In other words, I want to create an Input Endpoint for a virtual IP address in Azure, but not for an existing VM. So far, the only thing I've been able to do is create Input Endpoints for existing VMs (using their IP) for specific port numbers. I want to be able to configure:
Take TCP requests on port XX and map them to IP address YY.YY.YY.YY on port ZZ
Anyone know of a way to do this? I've looked in both portals (new and old), and the closest thing I can find is that the Cloud Services page for my VNET lists the Input Endpoints, but there's no add/edit button.

This is not currently possible in Azure. Azure IaaS VMs do not yet support multiple IPs per interface, so keepalived will not be able to move a VIP between the nodes. We tried to do the same thing and were told it's not currently available. However, it's supposed to be on the road-map and it is "coming", as is the ability to have multiple interfaces per machine.
Input endpoints exist to expose a service on a single VM (they are a NAT), and they cannot be attached to an arbitrary interface. The only workaround I could think of was to use Azure Traffic Manager to round-robin between the two HAProxy instances via two exposed endpoints, with a health check that fails over to a single HAProxy instance if the other goes down.
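If it helps, here is a rough PowerShell sketch of that Traffic Manager workaround using the ARM-era Traffic Manager cmdlets. The resource group, profile name, DNS names and endpoint targets are all placeholders, and weighted routing with equal weights is used to approximate round robin; the classic setup in the question would use different tooling, so treat this only as a sketch of the shape of the config.

    # Requires the AzureRM module and an authenticated session (Login-AzureRmAccount).
    # Weighted routing with equal weights spreads traffic across both HAProxy nodes,
    # and the HTTP monitor pulls a dead node out of rotation.
    New-AzureRmTrafficManagerProfile -Name "haproxy-tm" -ResourceGroupName "web-farm-rg" `
        -ProfileStatus Enabled -TrafficRoutingMethod Weighted `
        -RelativeDnsName "haproxy-webfarm" -Ttl 30 `
        -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"

    # One external endpoint per HAProxy box (their public DNS names go in -Target).
    New-AzureRmTrafficManagerEndpoint -Name "haproxy1" -ProfileName "haproxy-tm" `
        -ResourceGroupName "web-farm-rg" -Type ExternalEndpoints `
        -Target "haproxy1.example.com" -EndpointStatus Enabled -Weight 1

    New-AzureRmTrafficManagerEndpoint -Name "haproxy2" -ProfileName "haproxy-tm" `
        -ResourceGroupName "web-farm-rg" -Type ExternalEndpoints `
        -Target "haproxy2.example.com" -EndpointStatus Enabled -Weight 1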

Related

Azure gateway with a virtual network

I've got multiple questions on the setup of a gateway and VMs, so here is what I currently have.
I've got an Application Gateway and two Ubuntu VMs, everything hosted on Azure. They are all on the same Virtual Network. Both VMs have only a private IP (10.1.0.4 and 10.1.0.5), and the Gateway has a private IP (10.1.1.4) and a public IP. Because only the Gateway has a public IP, I assume everything has to go through it, which is what I want.
The goals I'm trying to achieve:
Load-balance port 1680, redirected to port 1680.
Redirect SSH so I can connect to a specific VM, because at the moment they have no public IP. Is it possible to do this with a path-based rule, like www.example.com/VM1 to connect by SSH to the first VM? If not, what can be used to differentiate the SSH connections to VM1 and VM2?
Redirect port 80 of the gateway to port 8080 of a specific VM. As in my previous example, www.example.com/adminPanelVM1 to connect to the first VM on port 80 (redirected to port 8080 on the VM).
I already managed to set up the redirection of port 1680 on the Gateway with an HTTP setting, a Listener and a Rule.
Azure Application Gateway
The Azure Application Gateway operates at layer 7 of the OSI model, on the HTTP/HTTPS/WebSocket protocols; because of that, it cannot route any other protocol (such as SSH).
You have a few options, though.
You can use a Network Security Group (NSG) for access control to your virtual machines. In the NSG you define where traffic that is allowed to reach the VMs may come from.
An NSG behaves like an access control list, filtering traffic based on source and destination information and evaluating rules in order of priority. See this page for more information about NSGs.
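For illustration only (the resource group, VNet, subnet and address ranges below are made up), an NSG can be built and attached to the VMs' subnet with the AzureRM PowerShell module along these lines:

    # Requires the AzureRM module and an authenticated session (Login-AzureRmAccount).
    # Allow SSH only from a hypothetical admin network; everything else falls through
    # to the NSG's default rules.
    $sshRule = New-AzureRmNetworkSecurityRuleConfig -Name "allow-ssh-admin" `
        -Description "SSH from the admin network only" `
        -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
        -SourceAddressPrefix "203.0.113.0/24" -SourcePortRange "*" `
        -DestinationAddressPrefix "*" -DestinationPortRange 22

    $nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName "my-rg" `
        -Location "westeurope" -Name "vm-subnet-nsg" -SecurityRules $sshRule

    # Attach the NSG to the subnet that holds the two VMs (VNet/subnet names are placeholders).
    $vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "my-rg" -Name "my-vnet"
    Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "vm-subnet" `
        -AddressPrefix "10.1.0.0/24" -NetworkSecurityGroup $nsg
    Set-AzureRmVirtualNetwork -VirtualNetwork $vnet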
Another option is to use a load balancer.
Azure Load Balancer
If you need to do port mapping, like you describe in your question, then a simple load balancer might be a better solution for you. An Azure Load Balancer works at a lower level of the OSI model, namely layer 4 (the transport layer), handling TCP/UDP traffic.
So, if you are using a load balancer, then you can set up NAT rules to forward your traffic to specific machines, in other words, if you want to do:
LB port 1234 redirects to VM1 port 22 and
LB port 4312 redirects to VM2 port 22
you can do that using PowerShell as described in the Creating a public load balancer in Resource Manager by using PowerShell article.
There are quite a few steps but it walks you through the whole process of creating NAT rules, NICs and associated virtual machines.
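To give a feel for it, here is a compressed sketch of just the NAT-rule portion (resource group, load balancer and NIC names are placeholders; the article covers creating the public IP, NICs and VMs in full):

    # Requires the AzureRM module and an authenticated session (Login-AzureRmAccount).
    # Two inbound NAT rules: LB port 1234 -> VM1 port 22, LB port 4312 -> VM2 port 22.
    $publicIp = New-AzureRmPublicIpAddress -Name "lb-pip" -ResourceGroupName "my-rg" `
        -Location "westeurope" -AllocationMethod Static

    $frontend = New-AzureRmLoadBalancerFrontendIpConfig -Name "lb-frontend" -PublicIpAddress $publicIp
    $pool     = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "lb-backend"

    $natVm1 = New-AzureRmLoadBalancerInboundNatRuleConfig -Name "ssh-vm1" `
        -FrontendIpConfiguration $frontend -Protocol Tcp -FrontendPort 1234 -BackendPort 22
    $natVm2 = New-AzureRmLoadBalancerInboundNatRuleConfig -Name "ssh-vm2" `
        -FrontendIpConfiguration $frontend -Protocol Tcp -FrontendPort 4312 -BackendPort 22

    $lb = New-AzureRmLoadBalancer -ResourceGroupName "my-rg" -Name "my-lb" -Location "westeurope" `
        -FrontendIpConfiguration $frontend -BackendAddressPool $pool -InboundNatRule $natVm1,$natVm2

    # Each NAT rule then gets bound to the IP configuration of the matching VM's NIC
    # (repeat for the second NIC with $lb.InboundNatRules[1]).
    $nic1 = Get-AzureRmNetworkInterface -ResourceGroupName "my-rg" -Name "vm1-nic"
    $nic1.IpConfigurations[0].LoadBalancerInboundNatRules.Add($lb.InboundNatRules[0])
    Set-AzureRmNetworkInterface -NetworkInterface $nic1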
Azure Application Gateway vs Azure Load Balancer?
These are two distinctly different services that are trying to solve different problems, although those problems might look similar :)
The primary uses of an Application Gateway are:
SSL termination
cookie-based session affinity
round robin for load balancing traffic
Whereas the Azure Load Balancer service works at the TCP/UDP level and supports, for example, port mapping.
Cost-wise, the Load Balancer service is free, while the Application Gateway is billed per hour.
There are many great articles on when to pick which service. See for example these links for more details:
When to use Azure Load Balancer or Application Gateway
Frequently asked questions for Application Gateway

Configuring Azure load balancer and NAT rules

I'm trying to build a simple two-tier wordpress environment on CentOS 7.2 in Azure.
I've defined a virtual network, have connected it to my home-lab via IPsec VPN, and I've defined several subnets in Azure (for Web tier, SQL tier, and utility tier role segregation using Network Security Groups).
I have two web-tier VMs, both members of the same Availability Set, and both on the web-tier subnet. They have outbound internet access, I can SSH to them from my home-lab, and they seem fine operationally to me: httpd is listening on 80/tcp, and I can hit the web pages from my home-lab network by visiting each web server directly on its 192.168.x address.
I should mention my web servers DO NOT have public IPs assigned, but I can't see this being an issue; they're intended to be behind the load balancer.
So, I've created a Load Balancer, and:
assigned a public IP to the LB
added a backend pool (selected my availability set, and chose my two web servers)
added a probe (http probing the two web servers)
added a load balancer rule
Notice I did NOT add an inbound NAT rule. I can't figure out what that's for, or if I need it.
On my web tier, I tcpdump port 80 and see the probes. In httpd logs, I see 200 success messages for the probes. I go to a web browser, hit the external VIP I assigned to the LB, and nothing. It just times out. I cannot connect to the LB VIP.
What am I missing? What are the NAT rules about?
Any help would be appreciated. All I can find online are examples doing this in powershell etc.. and I'm using the Azure web interface.
Thanks!
Edit: Found the issue - needed the NSG to allow not just AzureLoadBalancer, but also "Internet", to hit port 80/tcp. Should have thought of that sooner.
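In case it helps anyone else, the equivalent rule looks roughly like this in PowerShell (NSG and resource group names are placeholders; I actually added it through the portal, so treat this as an untested sketch). The AzureLoadBalancer tag only covers the health probes, hence the separate rule for the Internet tag:

    # Allow inbound HTTP from the "Internet" service tag on the web-tier NSG.
    $nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName "wp-rg" -Name "web-tier-nsg"
    Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "allow-http-internet" `
        -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 `
        -SourceAddressPrefix Internet -SourcePortRange "*" `
        -DestinationAddressPrefix "*" -DestinationPortRange 80
    Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg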

Recommended replica set config in Azure

We're running MongoDB on Azure and are in the process of setting up a production replica set (no shards) and I'm looking at the recommendations here:
http://docs.mongodb.org/ecosystem/tutorial/install-mongodb-on-linux-in-azure/
And I see the replica set config is such that the members will talk to each other via external IP addresses. Isn't this going to 1) incur additional Azure costs, since the replication traffic goes through the external IPs, and 2) incur replication latency for the same reason?
At least one of our applications that will talk to Mongo will be running outside of Azure.
AWS has a feature where external DNS names, when looked up from the VMs, resolve to internal IPs, and when resolved from outside, to the external IP, which makes things significantly easier :) In my previous job, I ran a fairly large sharded MongoDB in AWS...
I'm curious what you folks recommend. I had two ideas...
1) Configure each Mongo host with an external IP (not entirely sure how to do this in Azure, but I'm sure it's possible...) and configure DNS to point to those IPs externally. Then configure each VM to have an /etc/hosts file that points those same names to internal IP addresses. Run Mongo on port 27017 in all cases (or really whatever port). This means that the set does replication traffic over internal IPs, but external clients can talk to it using the same DNS names.
2) Similar to #1, but run Mongo on 3 different ports with only one external IP address, and point all three external DNS names to this external IP address. We achieve the same results, but it's cleaner I think.
Thanks!
Jerry
There is no best way, but let me clarify a few of the "objective" points:
There is no charge for any traffic moving between services / VMs / storage in the same region. Even if you connect from one VM to the other using servicename.cloudapp.net:port. No charge.
It's your choice whether you make the mongod instances externally accessible. If you do create external endpoints, you'll need to worry about securing those endpoints (e.g. with Access Control Lists). Since your app is running outside of Azure, this is an option you'll need to consider. You'll also need to think about how to encrypt the database traffic (MongoDB Enterprise edition supports SSL; otherwise you need to build mongod yourself).
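As a rough classic-model (ASM) sketch of locking an exposed mongod endpoint down with an ACL (the cloud service, VM name and permitted subnet are placeholders, so adapt before using):

    # Requires the classic Azure module (Add-AzureAccount / Select-AzureSubscription).
    # Permit only the external app's address range to reach the mongod endpoint.
    $acl = New-AzureAclConfig
    Set-AzureAclConfig -AddRule -ACL $acl -Action Permit `
        -RemoteSubnet "198.51.100.0/24" -Order 1 -Description "External app servers only"

    Get-AzureVM -ServiceName "my-mongo-svc" -Name "mongo1" |
        Add-AzureEndpoint -Name "mongod" -Protocol tcp -LocalPort 27017 -PublicPort 27017 -ACL $acl |
        Update-AzureVM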
Again, if you expose your mongod instances externally, you need to consider whether to place them within the same cloud service (sharing an IP address, getting separate ports per mongod instance) or in multiple cloud services (unique IP address per cloud service). If the mongod instances are within the same cloud service, they can then be clustered into an availability set, which reduces downtime by avoiding simultaneous host OS updates across all VMs and splits the VMs across multiple fault domains.
In the case where your app/web tier lives within Azure, you can use internal IP addresses, with both your app and MongoDB VMs within the same virtual network.

How to do load balancing / port forwarding on Azure?

I am evaluating the convenience of moving to Azure. Currently, I am trying to figure out how to balance the load and make routing for different websites on the same machine. I saw tutorials where a user created a separate LB on a different VM. I also found many articles about the possibility to balance the load using Azure load balancing.
So I assume both are possible, is that correct?
I would like to know how to connect between machines on Azure. Would it be possible to do so using a local IP, machine name, or DNS?
I also need to figure out how to forward traffic to different ports based on the HTTP header. Is that possible without a separate machine as a load balancer? I see the endpoint config in my Azure dashboard and found the official documentation, but unfortunately it's not enough for my understanding.
Currently, I am trying to figure out how to balance the load and make routing for different websites on the same machine.
You can have different web sites on the same machine by configuring virtual hosting on IIS. This is accomplished using host headers. VMs, Cloud Services and even Websites support this functionality. VMs and Cloud Services should be pretty straightforward. Example using Websites:
Hosting multiple domains under one Azure Website
http://blogs.msdn.com/b/cschotte/archive/2013/05/30/hosting-multiple-domains-under-one-azure.aspx
I also found many articles about the possibility to balance the load using Azure load balancing.
LB for VMs is as easy as creating a load-balanced set inside the endpoint configuration wizard. Once you create a balanced set, for example endpoint HTTP port 80, you can assign this balanced set to any VM in the same cloud service. All requests to port 80 will be automatically balanced across all VMs in the set.
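For example, the same thing can be scripted with the classic PowerShell module instead of the wizard, roughly like this (cloud service and VM names are placeholders):

    # Requires the classic Azure module (Add-AzureAccount / Select-AzureSubscription).
    # Adding an endpoint with the same -LBSetName on each VM creates one load-balanced
    # set; Azure then spreads incoming port 80 across every VM in the set.
    foreach ($vmName in "web1", "web2") {
        Get-AzureVM -ServiceName "my-cloud-svc" -Name $vmName |
            Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 `
                -LBSetName "web-lb" -ProbeProtocol http -ProbePort 80 -ProbePath "/" |
            Update-AzureVM
    }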
So I assume both are possible, is that correct?
Yes.
I would like to know how to connect between machines on Azure. Would it be possible to do so using a local IP, machine name, or DNS?
You just have to create a virtual network and deploy the VMs to it. Websites (through the preview portal only), Cloud Services and VMs support VNets.
Virtual Network Overview
https://msdn.microsoft.com/library/azure/jj156007.aspx/
I also need to figure out how to forward traffic to different ports based on the HTTP header. Is that possible without a separate machine as a load balancer?
Not at this moment. The best you can have with native Azure services is a 3-tuple (Source IP, Destination IP, Protocol) load-balancing configuration.
Azure Load Balancer new distribution mode
http://azure.microsoft.com/blog/2014/10/30/azure-load-balancer-new-distribution-mode/
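For completeness, that post's source-IP distribution mode is set per load-balanced set; with the classic module it looks roughly like this (cloud service and set names are placeholders, so double-check against the post before relying on it):

    # Switch an existing load-balanced set to source-IP affinity so a given client
    # keeps hitting the same VM.
    Set-AzureLoadBalancedEndpoint -ServiceName "my-cloud-svc" -LBSetName "web-lb" `
        -Protocol tcp -LocalPort 80 -ProbeProtocolTCP -ProbePort 80 `
        -LoadBalancerDistribution "sourceIP"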
Depending on how you're deploying, there are a couple of options:
First of all: LB sets for VMs in a cloud service. For this, the cloud service acts as the LB. This can only be achieved when using a Standard SKU VM.
Second, in Azure Web Apps: load balancing is achieved automagically when deploying through standard means, since scaling is built in.
Third, there are Cloud Services with roles, which also do this "automagically".
Now, none of that seems to apply to your needs. You could also start thinking about using Traffic Manager, something with a little more bite :-)
Have you read this article by any chance? http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-load-balance/
I'd advise you to add different endpoints to your VMs, work with Traffic Manager, and make sure your IIS has all the host headers on the correct ports (because I'm assuming that's what you're doing already).

azure multiple public ip for a vm

I want to have three public IP addresses for my VM in Azure. I got one when I created the VM, and now I want to assign two reserved IP addresses to the VM. I was able to create the reserved IP addresses, but I'm not sure how to assign them to an existing VM, or how to assign multiple to a new VM. Any suggestions on how to do this?
In Azure, a Load Balancer is required in order to direct traffic from multiple VIP addresses to a single (or multiple) VMs.
If, for example, you want a single VM to host multiple websites, all of which need to be accessible externally via port 443, you'd need three VIP addresses assigned to the Load Balancer, with a NAT on at least two of the VIPs; i.e.
Site a: Incoming 443-443 to VM
Site b: Incoming 443-444 to VM
Site c: Incoming 443-445 to VM
All the traffic from the Load Balancer could then be routed to one VM, where you'd direct traffic to the required website based on the incoming port. This MS article explains it really well: https://azure.microsoft.com/en-gb/documentation/articles/load-balancer-multivip/
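A compressed sketch of the multi-VIP part of that article with the classic (ASM) cmdlets, as far as I recall them (cloud service, VM and VIP names are placeholders; site a stays on the default VIP with 443 to 443):

    # Requires the classic Azure module. Add two extra VIPs to the cloud service,
    # then hang a 443 endpoint off each VIP, NATing to a different local port on the VM.
    Add-AzureVirtualIP -ServiceName "my-cloud-svc" -VirtualIPName "Vip2"
    Add-AzureVirtualIP -ServiceName "my-cloud-svc" -VirtualIPName "Vip3"

    Get-AzureVM -ServiceName "my-cloud-svc" -Name "web1" |
        Add-AzureEndpoint -Name "siteB" -Protocol tcp -PublicPort 443 -LocalPort 444 -VirtualIPName "Vip2" |
        Add-AzureEndpoint -Name "siteC" -Protocol tcp -PublicPort 443 -LocalPort 445 -VirtualIPName "Vip3" |
        Update-AzureVM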
Reserved IP addresses are a way of ensuring that your VIP is no longer dynamic, which they are by default. The following article explains it well, including how to take an existing Cloud Service's currently-running dynamic VIP and making it static (Reserved): https://azure.microsoft.com/en-gb/documentation/articles/virtual-networks-reserved-public-ip/
An Azure VM can have two public IP addresses - one is the VIP of the cloud service containing the VM (as long as there are endpoints configured for the VM) and the other is the PIP (or public instance IP address) associated with the VM. A reserved IP address is an orthogonal concept to VIPs and PIPs and its use is documented here. I did a post on VIPs, DIPs and PIPs that you may find helpful.
