Load balancing for a custom client-server app in the cloud - Azure

I'm designing a custom client-server TCP/IP app. The networking requirements for the app are:
1. Be able to speak a custom application-layer protocol through a secure TCP/IP channel (opened at a designated port).
2. The client-server connection/channel needs to remain persistent.
3. If multiple instances of the server-side app are running, be able to dispatch the client connection to a specific instance of the server-side app (based on a server-side unique ID).
One of the design goals is to make the app scale, so load balancing is particularly important. I've been researching the load-balancing capabilities of EC2 and Windows Azure. I believe requirement 1 is supported by most offerings today, but I'm not so sure about requirements 2 and 3. In particular:
Do any of these services (EC2, Azure) allow the app to influence the load-balancing policy by specifying additional application-layer requirements? Azure, for example, uses round-robin allocation for cloud services, but requirement 3 above clearly needs to be factored into the load-balancing decision, i.e. forward based on the unique ID, but fall back to round-robin allocation if the unique ID is not found on any of the server-side instances.
Does the load balancer work with persistent connections, per requirement 2? My understanding from Azure is that you can specify a public and private port pair as an endpoint, so the load balancer monitors the public port and forwards the connection request to the private port of some running instance, and you can then do whatever you want with that connection thereafter. Is this the correct understanding?
Any help would be appreciated.

Windows Azure has input endpoints on a hosted service, which are public-facing ports. If you have one or more instances of a VM (Web or Worker role), the traffic will be distributed amongst the instances; you cannot choose which instance to route to (i.e. you must support a stateless app model).
If you wanted to enforce a sticky-session model, you'd need to run your own front-end load balancer (in a Web / Worker role). For instance, you could use IIS + ARR (Application Request Routing), nginx, or another server supporting this.
What I said above also applies to Windows Azure IaaS (Virtual Machines). In this case, you create load-balanced endpoints. But you also have the option of non-load-balanced endpoints: Maybe 3 servers, each with a unique port number. This bypasses any type of load balancing, but gives direct access to each Virtual Machine. You could also just run a single Virtual Machine running a server (again, nginx, IIS+ARR, etc.) which then routes traffic to one of several app-server Virtual Machines (accessed via direct communication between load-balancer Virtual Machine and app server Virtual Machine).
Note: The public-to-private-port mapping does not let you do any load-balancing. This is more of a convenience to you: Sometimes you'll run software that absolutely has to listen on a specific port, regardless of the port you want your clients to visit.
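If you do run your own front-end load balancer as suggested above, the routing rule from the question (dispatch on a server-side unique ID, fall back to round-robin) is straightforward to express yourself. Below is a minimal, hypothetical sketch in Python using asyncio; the backend addresses, ports, and the framing of the unique ID (a newline-terminated first line) are all assumptions for illustration, not anything Azure provides.

```python
import asyncio
from itertools import cycle

# Hypothetical app-server instances behind the front end (assumed addresses).
BACKENDS = {
    "instance-a": ("10.0.0.4", 9000),
    "instance-b": ("10.0.0.5", 9000),
}
_round_robin = cycle(list(BACKENDS.values()))

def pick_backend(unique_id):
    """Route to the instance owning the ID; fall back to round-robin otherwise."""
    return BACKENDS.get(unique_id) or next(_round_robin)

async def pump(reader, writer):
    # Copy bytes one way until that side closes; keeps the channel persistent.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    # Assumed framing: the client sends its unique ID as the first line.
    unique_id = (await client_reader.readline()).decode().strip()
    host, port = pick_backend(unique_id)
    backend_reader, backend_writer = await asyncio.open_connection(host, port)
    backend_writer.write(f"{unique_id}\n".encode())  # replay the ID to the backend
    await backend_writer.drain()
    # Proxy both directions for the lifetime of the connection.
    await asyncio.gather(pump(client_reader, backend_writer),
                         pump(backend_reader, client_writer))

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8443)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```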

Related

Load balancer for Azure Service Fabric Cluster on-premises

As developers we wrote microservices on Azure Service Fabric, and we can run them in Azure in a PaaS fashion for many customers. But some of our customers do not want to run in the cloud, as their databases are on-premises and are not going to be available from the outside, not even through a DMZ. That's ok; we promised to support it, since Azure Service Fabric can be installed as a cluster on-premises.
We have an API-gateway microservice running inside the cluster on every virtual machine. It uses the name resolver, and requests are routed and distributed accordingly. The API that this gateway microservice provides is the entry point for another piece of client software which our customers use; that software runs outside of the cluster and has to send requests to the API.
I suggested using a load balancer like HAProxy or nginx on a separate machine (or machines) that the client software would send its requests to; the reverse proxy would then forward them to an available machine inside the cluster.
It seems that is not what our customers want; another machine acting as a load balancer is not an option. They suggest making the client software smarter, so it can figure out which host to go to. In other words, we should write our own fail-over/load-balancing logic inside the client software (roughly along the lines of the sketch below).
What other options do we have?
Install the Network Load Balancing feature on each of the virtual machines to give the cluster a single IP address; is this even possible? Something like https://www.poweradmin.com/blog/configuring-network-load-balancing-in-windows-server/
Suggest an API gateway outside the cluster, like Kong (https://getkong.org/)
Something else?
PS: The client applications do not send many requests per second, maybe a few per minute.
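For reference, the client-side fail-over idea mentioned above could look roughly like the sketch below: the client keeps a list of cluster node addresses and simply tries the next one when a request fails. The node addresses, port, and API path are made-up placeholders, and this is only an illustration of the approach, not a recommendation.

```python
import urllib.request

# Hypothetical Service Fabric node addresses exposing the API gateway (assumed).
NODES = ["http://sf-node-1:8080", "http://sf-node-2:8080", "http://sf-node-3:8080"]

def call_api(path, timeout=5.0):
    """Try each known node in turn and return the first successful response."""
    last_error = None
    for base in NODES:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except OSError as exc:  # urllib.error.URLError is an OSError subclass
            last_error = exc    # node down or unreachable; try the next one
    raise ConnectionError(f"all nodes failed, last error: {last_error}")

# Example usage (hypothetical endpoint):
# data = call_api("/api/orders/42")
```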
We faced a very similar problem: we have many services and a Service Fabric cluster that runs on-premises. When it was time to add a load balancer, we installed IIS on the same machines where the Service Fabric cluster runs. Since IIS is a good load balancer, we use it as a reverse proxy only for the API gateway; Kestrel hosting is used for the other services that communicate over HTTP. The API-gateway microservice is the single entry point for all clients and always has a static URI inside Service Fabric, and we used that URI to configure IIS.
If you do not have the possibility of using IIS, then look at using nginx as an HTTP load balancer.
You don't need another machine just for HTTP forwarding. Just use/run it as a service on the cluster.
Did you consider using the built-in reverse proxy of Service Fabric? This runs on all nodes, and it will forward HTTP calls to services inside the cluster.
You can also run nginx as a guest executable or inside a Container on the cluster.
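To illustrate the built-in reverse proxy route: once it is enabled in the cluster configuration it listens on every node (19081 is the commonly used default port), and calls are addressed by application and service name. A minimal sketch, assuming a hypothetical application `MyApp` with a service `ApiGateway` and an invented API path:

```python
import urllib.request

# Service Fabric reverse proxy URI format (port and names are assumptions):
#   http://<node>:<port>/<ApplicationName>/<ServiceName>/<api path>
url = "http://localhost:19081/MyApp/ApiGateway/api/orders/42"

# The reverse proxy resolves the service's current location and forwards the call.
with urllib.request.urlopen(url, timeout=5) as resp:
    print(resp.status, resp.read())
```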
We also faced the same situation when we started working with a Service Fabric cluster. We configured Azure Application Gateway as the proxy, but it did not provide features like HTTP-to-HTTPS redirection.
For that reason, we configured nginx instead of Azure Application Gateway as the proxy to the Service Fabric application.

Making locally hosted server accessible ONLY by AWS hosted instances

Our system has 3 main components:
A set of microservices running in AWS that together comprise a webapp.
A very large monolithic application that is hosted within our network, comprises several other webapps, and exposes a public API that is consumed by the AWS instances.
A locally hosted (and very large) database.
This all works well in production.
We also have a testing version of the monolith that is inaccessible externally.
I would like to be able to spin up any number of copies of the AWS environment for testing or demo purposes that can access the testing version of the monolith. However, because it's a test system, it needs to remain inaccessible to the public. I know how to achieve this with AWS easily enough (security groups etc.), but how can I secure the monolith so it can be accessed ONLY by any number of dynamically created instances running in AWS (given that the IP addresses are dynamic and can therefore not be whitelisted)?
The only idea I have right now is to use an access token, but I'm not sure how secure that is.
Edit - My microservices are each running on an EC2 instance.
Assuming you are running your microservices on EC2, if you want API calls from your application servers running in AWS to come from a known IP/IPs then this can be accomplished by using a NAT instance or a proxy. This way even though your application servers are dynamic, the apparent source of the requests is not.
For a NAT, you would run your EC2 instances in a private subnet and configure them to send all of their Internet traffic out over the NAT instance, which will have a constant IP. Using a proxy server or a fleet of proxy servers can be accomplished in much the same way, but would require your microservice applications to be configured to use it.
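A rough sketch of the NAT approach using boto3; the route table and instance IDs are placeholders, and it assumes a NAT instance (e.g. one built from an Amazon-provided NAT AMI) is already running in a public subnet with a fixed public IP you can whitelist on the monolith:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs -- substitute your private subnet's route table and the
# NAT instance (which holds the fixed, whitelistable public IP).
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
NAT_INSTANCE_ID = "i-0123456789abcdef0"

# Send all Internet-bound traffic from the private subnet through the NAT
# instance, so the monolith sees one constant source IP.
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId=NAT_INSTANCE_ID,
)

# Source/destination checking must be disabled on a NAT instance.
ec2.modify_instance_attribute(
    InstanceId=NAT_INSTANCE_ID,
    SourceDestCheck={"Value": False},
)
```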
The better approach would be to simply not send the traffic between your microservices and the monolith over the public Internet at all.
This can be accomplished by establishing a VPN from your company network to your VPC. Alternatively, you could establish a Direct Connect to bridge the networks.
Side note, if your microservices are actually running in AWS Lambda then this answer does not apply.

Traffic manager with multiple endpoints in same location

I'm trying to add web app endpoints from the same location to an Azure Traffic Manager profile. When I try to do this, it tells me that App Service will use load balancing to do this for me, since the apps are in the same location.
My understanding is that load balancing is for distributing requests between multiple VMs within one web app. The plan was to use our single DNS name and allow Traffic Manager to determine which endpoint to go to using round-robin or failover. How will load balancing know to direct to one of the web apps from this single address?
Azure Web Apps already have built in load balancing between instances within the web app. So for example if you have a web app with 10 instances under the endpoint: tester.azurewebsites.net, Azure load balances appropriately across those instances.
When you bring in Traffic Manager, it expects distinct endpoints to route between. Incoming requests will be routed based on proximity to the endpoints it is managing, load, and whether the endpoint is available. Traffic Manager takes care of all of those complexities for you.
This allows you to have a single endpoint, myapp.trafficmanager.net, which may route to myapp-west.azurewebsites.net and myapp-east.azurewebsites.net. That routing, as I indicated, is based on proximity, load, and availability.
How it actually works is the magic sauce of Azure Traffic Manager. I use it in production and it has been working very well for me. I primarily use it for routing based on proximity, and I have yet to experience a web app failure that would exercise a production failover reroute.
Hope that helps!
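For what it's worth, the mechanism is DNS-based: Traffic Manager answers the DNS query for your profile name with the endpoint it currently prefers, and the client then connects to that endpoint directly. You can observe this from any client; the profile name below is a made-up example:

```python
import socket

# Hypothetical Traffic Manager profile name used for illustration.
name = "myapp.trafficmanager.net"

# Traffic Manager only participates at DNS time: the resolved address belongs
# to whichever endpoint (e.g. myapp-west / myapp-east) it selected for you.
for family, _, _, _, sockaddr in socket.getaddrinfo(name, 443, proto=socket.IPPROTO_TCP):
    print(family, sockaddr)
```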

Allowing only local network connections to a Windows Azure VM?

I am trying to lock down a virtual machine that acts as an app server for a web application. I have two VMs: one for the app server and another one running the web server. I have to open a ton of ports to allow the web server to talk to some WCF services, but I only want to allow those connections from the web server and no one outside of that network. I have to add endpoints in order for the web server to access the WCF services, but this also makes them accessible on the public IP. How can I only allow this traffic on the local network?
For Virtual Machines, the only way of accessing ports from outside the hosted service is by defining input endpoints (with or without load-balancing across a set of machines). In your case, you'd just open, say, 80 and 443, specifically for your web server (i.e. not load-balanced). This is considered a port-forwarded endpoint, since traffic on these two ports gets forwarded directly to your web server. For more clarity around port-forwarded endpoints, I suggest Michael Washam's blog post, here.
At this point, you'd open various other ports on your app server (through its firewall config), and now your web server can talk to the app server, yet the outside world won't be able to reach the app server. Note: I'm assuming you placed your web server and app server in the same hosted service. Otherwise, you'd need to find a different way to connect between web and app servers, such as configuring a Virtual Network.
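To make that concrete, the app server's firewall rules can be scoped to the web server's internal IP, so the WCF ports are reachable only from inside the hosted service. A rough sketch driving Windows Firewall from Python via netsh; the port number, rule name, and internal IP are placeholders:

```python
import subprocess

# Placeholder values: the WCF service port on the app server and the
# web server's internal (DIP) address inside the hosted service.
WCF_PORT = "808"
WEB_SERVER_IP = "10.0.0.4"

# Allow inbound TCP on the WCF port only when the source is the web server.
subprocess.run(
    [
        "netsh", "advfirewall", "firewall", "add", "rule",
        "name=AllowWcfFromWebServer",
        "dir=in", "action=allow", "protocol=TCP",
        f"localport={WCF_PORT}", f"remoteip={WEB_SERVER_IP}",
    ],
    check=True,
)
```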
EDIT 6/5/2013 You can now enable ACLs on input endpoints, allowing (or blocking) IP ranges. Today ACLs may only be managed through PowerShell, with the June 2013 update. See this post to learn more.
Machines that exist on the same virtual network will be able to talk to each other as long as the local firewall has been opened for those ports. The problem was with the configuration in my application, not because of this. I also didn't have the correct ports open. Now it works like a charm.

How does a client know where a WCF endpoint is when it calls a Windows Azure WCF service

For example, you have:
2 instances of a worker role and 2 instances of a web role.
The worker role calls a WCF service on the web role. If I have only one web role instance, it knows its address and all is fine. But if I have 2 web role instances, how do they accomplish load balancing? How does the worker role know which instance of the web role to call?
Load balancing in general works by having a separate piece of hardware that acts as the designated target for the service that is being balanced. As each new request arrives it is then simply forwarded to one of the actual target machines that provide this service implementation.
In your particular case the load balancer will be the single public endpoint for your web roles. DNS lookups or direct IP addressing will result in requests arriving at the load balancer machine and not directly to any of the web roles. The balancer then forwards the request to one of the two web role instances that are known by the load balancer.
One of the advantages of this approach is that you can quickly start new web role instances if you anticipate a spike in traffic. All Azure needs to do is inform the load balancer those new instances are available and they will immediately start accepting new requests. Likewise you can scale back the number of instances. Because the load balancer itself is not being restarted it means your service is not disrupted.
You can find more detailed information at...
Cisco Definition
Wikipedia
DevCentral
Windows Azure load balances Web Roles automatically, you don't have to do anything.
If you need to address specific web role instances, I would suggest you reconsider your architecture; specifically, look at using shared state via the SQL Server session state provider, or look at the AppFabric cache provider in the training kit labs.
