In Windows Azure, is it possible to have a load balanced endpoint that's only accessible by traffic from a connected virtual network?

I have a Cloud Service that is connected to a LAN through a virtual network. I have a web role that machines on the LAN will be hitting for tasks like telling the cloud service that data needs to be refreshed. Is it possible to have an endpoint that's load-balanced, but that only accepts traffic through the virtual network?

Well... you have a few things to think about.
You could set up your own load balancer in a separate role. You'd probably want two instances for high availability, and if there were any stateful/sticky-session data you'd need to sync it between your two load balancers. OR...
Now: if the code needing load-balancing lived in a Virtual Machine, rather than in a web/worker role, you could take advantage of the brand-new IP-level endpoint ACL feature introduced at TechEd. With this feature, you can have an endpoint that allows/blocks traffic based on source IP address. So you could have a load-balanced endpoint balancing traffic between a few virtual machines, and you could limit access to, say, your LAN machines. You could even add your existing Cloud Service (web/worker) VIP, so that your web and worker role instances could reach the service through the endpoint without going through the VPN. This way, you get to take advantage of Azure's built-in load balancer while still providing secure access to your app's services.
You can see more details of endpoint ACLs here.
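For reference, attaching such an ACL looked roughly like this with the classic Azure PowerShell cmdlets of that era (a sketch only: the service, VM, and endpoint names are placeholders, and the permitted subnet is whatever range your LAN uses):

```powershell
# Build an ACL that permits only the LAN's address range (placeholder subnet)
$acl = New-AzureAclConfig
Set-AzureAclConfig -AddRule -ACL $acl -Order 100 -Action Permit `
    -RemoteSubnet "10.1.0.0/24" -Description "LAN machines only"

# Attach the ACL to the load-balanced endpoint on each VM in the set
Get-AzureVM -ServiceName "myservice" -Name "web1" |
    Set-AzureEndpoint -Name "HttpIn" -Protocol tcp -PublicPort 80 -LocalPort 80 -ACL $acl |
    Update-AzureVM
```

Traffic from any source not matching a Permit rule is dropped at the endpoint, before it reaches the VM.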

No. The load balancer for a cloud service is public only. You can't predict the IP addresses of the individual instances on the virtual network, so you can't even hook them into your own load balancer. Yes, you can do it with VMs (as David recommends), but then you're doing old-school IIS, not a cloud service. I went through this in November 2012 and was unable to find a decent solution.


Inside load balancer in Azure

In Azure, I have 3 Web Apps (for simplicity):
Frontend website
Endpoint 1
Endpoint 2
The frontend website requests data from an endpoint.
Both endpoints are synchronized all the time (outside the scope of this question), but sometimes I need to do some maintenance on them, which gives me some downtime.
Can I somehow set up a load balancer that only my frontend website can see, and have it return any of the online endpoints, like this:
The last line of this article says Internal Load Balancers might be the fit:
Can I use ILB on PaaS services (Web/Worker roles)?
ILB is designed to work with web/worker roles as well, and it is available from SDK 2.4 onwards.
Does anyone know of a guide, or have tried making this with Web Apps?
I don't think this is something you can achieve "natively" with load balancers. App Services are not actually bound to the VNet. Previously you could only use point-to-site VPN to connect them to a VNet; right now there is a new VNet integration feature in preview, which might allow you to use internal load balancers, but I doubt that, because load balancers only allow virtual machines/scale sets/availability sets as backend pools.
Application Gateways can be bound to App Services, and they can be internal as well. You'd also need to restrict the App Service(s) to receive traffic only from your Application Gateway.
You can use Traffic Manager or Front Door for this sort of load balancing, but the endpoints won't be private.

Trying to understand load balancing in azure cloud service

I am maintaining an Azure cloud service which has 1 web role and a few worker roles. The web role has multiple instances. When I open the cloud service from the resource list, I can see the service endpoint and public IP address. I want to understand how traffic is load balanced in this Azure cloud service. I searched for load balancers but could not find one in the subscription. I was also not able to find a document that explains load balancing for cloud services specifically.
Any info in this regard?
Long story short,
The default distribution mode for Azure Load Balancer is a 5-tuple hash. The tuple is composed of the source IP, source port, destination IP, destination port, and protocol type. The hash is used to map traffic to the available servers and the algorithm provides stickiness only within a transport session.
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode
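As an illustration only (Azure's actual hash function is internal), the mapping idea can be sketched like this: the same 5-tuple always hashes to the same backend, so a single TCP session stays on one server, while a new connection with a new source port may land elsewhere.

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Map a 5-tuple to one backend; the same tuple always yields the same server."""
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

backends = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]

# Within one transport session the tuple is constant, so the choice is sticky...
a = pick_backend("203.0.113.7", 50321, "10.0.0.100", 80, "tcp", backends)
b = pick_backend("203.0.113.7", 50321, "10.0.0.100", 80, "tcp", backends)
assert a == b

# ...but a new connection (new source port) may be sent to a different instance.
c = pick_backend("203.0.113.7", 50999, "10.0.0.100", 80, "tcp", backends)
print(a, c)
```

This is why the docs say stickiness holds only within a transport session: there is no cookie or session table, just the hash of the connection's tuple.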
Internal load balancer is supported for cloud services. An internal load balancer endpoint created in a cloud service that is outside a regional virtual network will be accessible only within the cloud service.
I found these docs which might be helpful to you. These explain setting internal load balancer for cloud services.
Classic : https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-get-started-ilb-classic-cloud
ARM : https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-get-started-ilb-arm-ps
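For the classic model, those guides amount to declaring the internal load balancer in the service configuration (.cscfg) and pointing an endpoint at it in the service definition (.csdef). A rough sketch, with placeholder names, subnet, and address:

```xml
<!-- ServiceConfiguration (.cscfg): define the ILB frontend on a VNet subnet -->
<NetworkConfiguration>
  <VirtualNetworkSite name="MyVNet" />
  <AddressAssignments>
    <InstanceAddress roleName="WebRole1">
      <Subnets>
        <Subnet name="Subnet-1" />
      </Subnets>
    </InstanceAddress>
  </AddressAssignments>
  <LoadBalancers>
    <LoadBalancer name="MyInternalLB">
      <FrontendIPConfiguration type="private" subnet="Subnet-1"
                               staticVirtualNetworkIPAddress="10.0.1.100" />
    </LoadBalancer>
  </LoadBalancers>
</NetworkConfiguration>
```

```xml
<!-- ServiceDefinition (.csdef): bind a role endpoint to that load balancer -->
<InputEndpoint name="Internal" protocol="tcp" port="80" localPort="80"
               loadBalancer="MyInternalLB" />
```

The endpoint is then reachable only at the private frontend IP inside the VNet, not from the public internet.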
Just to make it clear, the information below is about classic services. For the differences between the classic and Resource Manager deployment models, see this page.
In cloud services you get a load balancer automatically configured when you create the service. If you want to configure it, you can do so using the service model.
The load balancer can be one of two types:
internal load balancer
external load balancer
The internal one can only be accessed inside the cloud service, while the external one has a public IP. See this page for how to make an internal load balancer.
Load balancers keep track of the health state of the endpoints by regularly probing them. Check out this page for how to configure the probing. As long as the internal services return an HTTP 200, they are kept in the load balancer's pool.
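A custom probe is declared in the service definition (.csdef) and referenced from the endpoint; a sketch with placeholder names (the probe path must be a URL your role actually serves):

```xml
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <LoadBalancerProbes>
    <!-- Probed every 15 seconds; instance is pulled from rotation on failure -->
    <LoadBalancerProbe name="HealthProbe" protocol="http" path="/healthcheck"
                       intervalInSeconds="15" timeoutInSeconds="31" />
  </LoadBalancerProbes>
  <WebRole name="WebRole1">
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80"
                     loadBalancerProbe="HealthProbe" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```

Without a custom probe, the load balancer falls back to a default probe that only checks whether the role instance reports itself Ready.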
Have a look at this page for more general information on load balancers for cloud services.
Also see this page; it contains good information about the service.

How do you set up Azure load balancing for micro-services?

We've got an API micro-services infrastructure hosted on Azure VMs. Each VM hosts several APIs, which are separate sites running on Kestrel. All external traffic comes in through a reverse proxy (RP) running on IIS.
We have some APIs that are designed to accept external requests and some that are internal-only.
The internal APIs are hosted on scale sets, with each scale-set VM being a replica that hosts all of the internal APIs. There is an internal load balancer (ILB)/VIP in front of the scale set. The root issue is that we have internal APIs that call other internal APIs hosted on the same scale set. Ideally these calls would go to the VIP (using internal DNS) and the VIP would route to one of the machines in the scale set. But it looks like Azure doesn't allow this... per the documentation:
You cannot access the ILB VIP from the same Virtual Machines that are being load-balanced
So how do people set this up with micro-services? I can see three ways, none of which are ideal:
Separate out the APIs to different scale sets. Not ideal, as the services are very lightweight and I don't want to triple my Azure VM expenses.
Convert the internal LB to an external LB (add a public IP address), then put that LB in its own network security group/subnet to only allow calls from our Azure IP range. I would expect more latency here, and exposing the endpoints externally in any way creates more attack surface as well as more configuration complexity.
Set up the VM to loop back if it needs a call to the ILB, meaning any requests originating from a VM will be handled by the same VM. This defeats the purpose of micro-services behind a VIP. An internal micro-service may be down on one machine for some reason and available on another; that's the reason we set up health probes on the ILB for each service separately. If the call just goes back to the same machine, you lose resiliency.
Any pointers on how others have approached this would be appreciated.
Thanks!
I think your problem is related to service discovery.
Load balancers are obviously not designed for that. You should consider dedicated software such as Eureka (which can work outside of AWS).
Service discovery lets your microservices call each other directly once they have been discovered.
Also take a look at client-side load balancing tools such as Ribbon.
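In case it helps to see the idea concretely: client-side load balancing just means the caller keeps the backend list and rotates through the healthy instances itself (Ribbon adds health checks, retries, and pluggable rules on top of this). A minimal sketch, with a hypothetical is_healthy callback standing in for a real health check:

```python
import itertools

class ClientSideBalancer:
    """Rotate through backends on the client, skipping unhealthy ones."""

    def __init__(self, backends, is_healthy=lambda b: True):
        self._cycle = itertools.cycle(backends)
        self._count = len(backends)
        self._is_healthy = is_healthy

    def next_backend(self):
        # Try each backend at most once per call before giving up.
        for _ in range(self._count):
            candidate = next(self._cycle)
            if self._is_healthy(candidate):
                return candidate
        raise RuntimeError("no healthy backends")

balancer = ClientSideBalancer(
    ["10.0.0.4:5000", "10.0.0.5:5000", "10.0.0.6:5000"],
    is_healthy=lambda b: not b.startswith("10.0.0.5"),  # pretend one instance is down
)
print([balancer.next_backend() for _ in range(4)])
```

Because the rotation happens in the caller, there is no VIP hairpin problem: a service on one scale-set VM can address its peers directly.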
@Cdelmas's answer on Service Discovery is awesome. Please allow me to add my thoughts:
For services such as yours, you can also look into Netflix's Zuul proxy for server- and client-side load balancing. You could even use Hystrix on top of Eureka for latency and fault tolerance. Netflix is way ahead of the game on this.
You may also look into Consul for your case if you want to use the Go language. It has scriptable configuration for better managing your services, allows advanced security configurations, and supports non-REST endpoints. Eureka also does these, but requires you to add a configuration server (Netflix Archaius, Apache Zookeeper, Spring Cloud Config), coded security, and access via Zuul/Sidecar.

How to do load balancing / port forwarding on Azure?

I am evaluating the convenience of moving to Azure. Currently, I am trying to figure out how to balance load and set up routing for different websites on the same machine. I saw tutorials where a user created a separate LB on a different VM. I also found many articles about balancing load using Azure load balancing.
So I assume both are possible, is that correct?
I would like to know how to connect between machines on Azure. Would it be possible to do so using a local IP, machine name, or DNS?
I also need to figure out how to forward traffic to different ports based on the HTTP header. Is that possible without a separate machine as load balancer? I see the endpoint config in my Azure dashboard and found the official documentation, but unfortunately it's not enough for my understanding.
Currently, I am trying to figure out how to balance the load and make routing for different websites on the same machine.
You can have different websites on the same machine by configuring virtual hosting on IIS. This is accomplished using host headers. VMs, Cloud Services, and even Websites support this functionality. VMs and Cloud Services should be pretty straightforward. Example using Websites:
Hosting multiple domains under one Azure Website
http://blogs.msdn.com/b/cschotte/archive/2013/05/30/hosting-multiple-domains-under-one-azure.aspx
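As a sketch of the VM/IIS variant, host-header bindings can be added with appcmd (the site names, host names, and paths below are placeholders):

```shell
REM Two sites sharing port 80 on one VM, distinguished by host header
appcmd add site /name:"Site1" /bindings:http/*:80:www.site1.example.com /physicalPath:"C:\inetpub\site1"
appcmd add site /name:"Site2" /bindings:http/*:80:www.site2.example.com /physicalPath:"C:\inetpub\site2"
```

IIS inspects the Host header of each incoming request and dispatches it to the matching site, so no extra load-balancer machine is needed for this part.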
I also found many articles about the possibility to balance the load using Azure load balancing.
LB for VMs is as easy as creating a load-balanced set inside the endpoint configuration wizard. Once you create a balanced set, for example endpoint HTTP port 80, you can assign it to any VM on the same cloud service. All requests to port 80 will be automatically balanced across all VMs in the set.
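With the classic PowerShell cmdlets, creating such a load-balanced set looked roughly like this (a sketch: the service, VM, and set names are placeholders):

```powershell
# Add the same load-balanced endpoint, with a health probe, to each VM
"web1", "web2" | ForEach-Object {
    Get-AzureVM -ServiceName "myservice" -Name $_ |
        Add-AzureEndpoint -Name "HttpIn" -Protocol tcp -PublicPort 80 -LocalPort 80 `
            -LBSetName "WebFarm" -ProbeProtocol http -ProbePort 80 -ProbePath "/" |
        Update-AzureVM
}
```

Endpoints sharing the same LBSetName on the same cloud service form one balanced pool behind the service's public VIP.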
So I assume both are possible, is that correct?
Yes.
I would like to know how to connect between machines on azure. Would it be possible to do so using a local ip, machinename, or dns?
You just have to create a virtual network and deploy the VMs to it. Websites (through the preview portal only), Cloud Services, and VMs support VNets.
Virtual Network Overview
https://msdn.microsoft.com/library/azure/jj156007.aspx/
I also need to figure out how to forward traffic to different ports based on http header, is that possible without a seperate machine as load balancer?
Not at this moment. Best you can have with native Azure Services is a 3-tuple (Source IP, Destination IP, Protocol) load balance configuration.
Azure Load Balancer new distribution mode
http://azure.microsoft.com/blog/2014/10/30/azure-load-balancer-new-distribution-mode/
Depending on how you're deploying, there are a couple of options:
First of all: LB sets for VMs in a cloud service. For this, the cloud service acts as the LB. This can only be achieved when using a standard-SKU VM.
Second of all, in Azure Web Apps: load balancing is achieved automagically when deploying through standard means, since scaling is foreseen here.
Third of all, there's Cloud Services with roles, which also do this "automagically".
Now none of that seems to apply to your needs. You can also start thinking about using Traffic Manager, something with a little more bite :-)
have you read this article by any chance? http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-load-balance/
I'd advise you to add different endpoints to your VMs, work with Traffic Manager, and make sure your IIS has all the headers on the correct ports (because I'm assuming that's what you're doing already).

Does Azure Point-to-Site or Site-to-Site VPN support cloud services?

I haven't dug in completely, but I'm trying to figure out if the new Azure VPN offerings are just for your own VMs, or if they will allow cloud services to connect to your corporate network. For example, can I use it to have my worker role print to a network printer on my corporate network?
As long as your cloud service is part of a virtual network, it will have an IP address of the VPN subnet assigned to it, and all addresses are accessible (subject to your own networking configuration). Two things to be careful of:
The VPN IP addresses of the individual instances are subject to change. Every time a role recycles, or you redeploy, the instance IP address will change. This may be a problem if your security requires specific IP addresses. This can be helped by maintaining these IP addresses in your own DNS.
The cloud service load balancer is 'external' and cannot be placed on the virtual network. This means that your cloud service is not addressable as a single endpoint. You have to communicate with each individual role and load balance yourself. Similarly, outgoing data comes from individual roles, not the cloud service (see 1 above).
I haven't tried personally, but you should be able to do just that by joining your cloud service to a virtual network. See this article for details on how to do this: http://convective.wordpress.com/2012/08/26/windows-azure-cloud-services-and-virtual-networks/.
