I am trying to load-balance RPC calls to multiple Node.js backends from a single Node.js client, using an HAProxy load balancer doing TCP-based load balancing. I am providing this load balancer's dns-name:port when creating the gRPC client, which, according to https://github.com/grpc/grpc/blob/master/doc/load-balancing.md, should be treated as a load-balancer address, so a subchannel should be opened with each of the LB's backend servers. But I can see that the only channel that opens is with the load balancer, and all RPCs are sent to a single server only, until the TCP idle connection timeout hits and a new TCP connection is set up with a new server.
I just wanted to ask: how does gRPC detect whether the client is connecting to a load balancer or to a server? And is there any way to tell the client that the address it is connecting to is a load-balancer address, so that it should use the grpclb policy instead of pick-first?
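For context, here is a minimal sketch of how such a client might be created with @grpc/grpc-js in Node.js (the proto file, service name and hostnames are placeholders, not from my setup). By default the channel uses pick-first; the 'grpc.service_config' channel option can request another policy, but round_robin only helps when DNS resolves to the backends directly rather than to the TCP proxy, and grpclb additionally needs the resolver to return balancer addresses (see below).

const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

// Hypothetical proto/service names, for illustration only.
const packageDef = protoLoader.loadSync('echo.proto');
const echoPackage = grpc.loadPackageDefinition(packageDef).echo;

const client = new echoPackage.Echo(
  'dns:///lb.example.com:50051',          // the load balancer's dns-name:port
  grpc.credentials.createInsecure(),
  {
    // Without a service config the channel defaults to pick_first, so every
    // RPC rides the single TCP connection opened through the proxy.
    'grpc.service_config': JSON.stringify({
      loadBalancingConfig: [{ round_robin: {} }],
    }),
  }
);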
I later understood that for the load-balancing policy to figure out whether the address it is given belongs to a load balancer, it needs an additional boolean marking it as an LB address. So I needed to set SRV records for my load-balancer address to provide this additional information required by gRPC's load-balancing policy, as described in https://github.com/grpc/proposal/blob/master/A5-grpclb-in-dns.md.
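Roughly, the records look like this (names, TTLs and ports are placeholders): the DNS resolver queries _grpclb._tcp.<server name>, and any SRV targets it finds are returned to the client marked as balancer addresses, which is what triggers the grpclb policy. Note that some gRPC implementations only perform this SRV lookup when the c-ares DNS resolver is enabled.

; hypothetical zone entries
_grpclb._tcp.my-service.example.com. 300 IN SRV 0 0 443 lb.example.com.
lb.example.com.                      300 IN A   203.0.113.10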
I am not a system administrator or network administrator, so I am having a hard time figuring out a workaround to support IPv6 on an Azure Service Fabric cluster without using the Load Balancer.
From here: IPv6 support for Azure other than the load balancer thing
I have checked that IPv6 is only supported by the load-balancer appliances, but the entry point of my current cluster is an application gateway.
Is there a recommended workaround for adding IPv6 support when using an Azure App Gateway?
Is there a recommended workaround for adding IPv6 support when using an Azure App Gateway?
There is no nice way to do that, only work-arounds.
Anyway, you can do the following:
instantiate an Azure back-end server,
configure this server to establish an IPv6-over-IPv4 tunnel to a public IPv6 tunnel broker,
install a reverse proxy on your back-end server, listening on an IP address chosen from the IPv6 prefix offered by your tunnel broker,
configure this reverse proxy to translate the accepted IPv6 https connections into outgoing http or https IPv4 requests to your Azure app gateway (the connection stays inside the Azure network, so you may accept not to encrypt it, using http instead of https), as sketched below.
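An nginx or HAProxy instance would be the usual choice for that reverse proxy; purely as an illustration, a bare-bones Node.js version could look like this (hostnames and the IPv6 address are placeholders, and TLS termination is omitted for brevity):

const http = require('http');

// Hypothetical names: the application gateway reached over IPv4, and an
// address taken from the tunnel broker's IPv6 prefix.
const APP_GATEWAY_HOST = 'myappgw.example.com';
const IPV6_LISTEN_ADDR = '2001:db8::10';

const proxy = http.createServer((clientReq, clientRes) => {
  // Forward the incoming IPv6 request to the IPv4-only application gateway.
  const upstream = http.request({
    host: APP_GATEWAY_HOST,
    port: 80,                 // plain http, since this hop stays inside the Azure network
    path: clientReq.url,
    method: clientReq.method,
    headers: clientReq.headers,
  }, (upstreamRes) => {
    clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(clientRes);
  });
  upstream.on('error', () => {
    clientRes.writeHead(502);
    clientRes.end();
  });
  clientReq.pipe(upstream);
});

// Listen on the IPv6 address assigned from the tunnel broker's prefix.
proxy.listen(80, IPV6_LISTEN_ADDR);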
But this will not be very efficient because:
1- it is your back-end server that will terminate and decrypt SSL connections;
2- IPv6 packets from/to your servers in Azure will go through your tunnel broker, so you will not have direct connections between the clients and Azure.
To find a free IPv6 tunnel broker, see for instance Hurricane Electric.
I need to connect Firebase to a Node setup on AWS/Elastic Beanstalk. There are 1-4 Node servers behind an ALB and an Nginx proxy. Firebase uses the WSS protocol (hence the need for the ALB, because the classic ELB does not support WebSockets). When a Node instance authenticates with Firebase it gets a socket the app can listen to.
My question: Since there could be any of the Node servers communicating with Firebase, how can the sockets be made sticky, so that regardless of which of the Node servers opened a socket, it will be the right socket for each communication?
Thanks!
ZE
You can enable sticky sessions on the AWS ALB so that the WSS protocol works and traffic is sent to the same EC2 instance over a period of time.
Also note that you need to configure stickiness at the Target Group level.
I created a 2nd Target Group called "xxxSocket" with Stickiness enabled, leaving the 1st Target Group "xxxHTTP" without Stickiness (default). Finally, in my Application Load Balancer, I added a new Rule with "Path Pattern" = /socket.io that routes to Target Group "xxxSocket", leaving the default pattern routing to "xxxHTTP".
Reference: AWS Forum thread "Anyone gotten the new Application Load Balancer to work with websockets?"
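For reference, the same setup could be scripted with the AWS CLI, roughly as follows (the ARNs, names and rule priority are placeholders):

# Enable cookie-based stickiness on the WebSocket target group.
aws elbv2 modify-target-group-attributes \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/xxxSocket/abc123 \
    --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie

# Route /socket.io traffic to that target group; everything else keeps hitting xxxHTTP.
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/def456/ghi789 \
    --priority 10 \
    --conditions Field=path-pattern,Values='/socket.io/*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/xxxSocket/abc123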
Also, WebSocket connections are inherently sticky on the ALB:
WebSockets connections are inherently sticky. If the client requests a connection upgrade to WebSockets, the target that returns an HTTP 101 status code to accept the connection upgrade is the target used in the WebSockets connection. After the WebSockets upgrade is complete, cookie-based stickiness is not used.
Reference: Target Groups for Your Application Load Balancers
I have the following setup in Azure Resource Manager:
1 scale set with 2 virtual machines running Windows Server 2012
1 Azure Redis cache (C1 standard)
1 Azure load balancer (Layer 4 in the OSI network reference stack)
The load balancer is basically configured following https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-get-started-internet-portal. Session persistence is set to None for the rules.
On both VMs in the scale set I have deployed a test web app which uses SignalR 2 on .NET 4.5.2.
The test web app uses the Azure Redis cache as a backplane.
The web app project can be found here on GitHub: https://github.com/gaclaudiu/SignalrChat-master.
During the tests I noticed that after a SignalR connection is opened, all the data sent from the client in subsequent requests arrives at the same server in the scale set; it seems to me that the SignalR connection knows which server in the scale set to go to.
I am curious to know more about how this works; I tried to do some research on the internet but couldn't find anything clear on this point.
I am also curious to know what happens in the following case:
Client 1 has an open SignalR connection to Server A.
The next request from client 1 through SignalR goes to Server B.
Will this cause an error?
Or will the client just be notified that no connection is open, so that it tries to open a new one?
Well, I am surprised that it is working at all. The problem is: SignalR performs multiple requests until the connection is up and running, and there is no guarantee that all requests go to the same VM, especially if there is no session persistence enabled. I had a similar problem. You can activate session persistence in the Load Balancer, but as you pointed out, acting on OSI layer 4 it will do this using the client IP (imagine everyone from the same office hitting your API from the same IP). In our project we use the Azure Application Gateway, which works with cookie affinity at the OSI application layer. So far it seems to work as expected.
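For reference, a sketch of how cookie-based affinity can be enabled when creating an Application Gateway with the Azure CLI (resource names and back-end addresses are placeholders):

az network application-gateway create \
    --resource-group myResourceGroup \
    --name myAppGateway \
    --vnet-name myVnet \
    --subnet appGatewaySubnet \
    --servers 10.0.1.4 10.0.1.5 \
    --cookie-based-affinity Enabled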
I think you misunderstand how the load balancer works. Every TCP connection must send all of its packets to the same destination VM and port. A TCP connection would not work if, after sending several packets, it then suddenly has the rest of the packets sent to another VM and/or port. So the load balancer makes a decision on the destination for a TCP connection once, and only once, when that TCP connection is being established. Once the TCP connection is setup, all its packets are sent to the same destination IP/port for the duration of the connection. I think you are assuming that different packets from the same TCP connection can end up at a different VM and that is definitely not the case.
So when your client creates a WebSocket connection the following happens. An incoming request for a TCP connection is received by the load balancer. It decides, determined by the distribution mode, which destination VM to send the request onto. It records this information internally. Any subsequent incoming packets for that TCP connection are automatically sent to the same VM because it looks up the appropriate VM from that internal table. Hence, all the client messages on your WebSocket will end up at the same VM and port.
If you create a new WebSocket it could end up at a different VM but all the messages from the client will end up at that same different VM.
Hope that helps.
On your Azure Load Balancer you'll want to configure Session persistence. This will ensure that when a client request gets directed to Server A, then any subsequent requests from that client will go to Server A.
Session persistence specifies that traffic from a client should be handled by the same virtual machine in the backend pool for the duration of a session. "None" specifies that successive requests from the same client may be handled by any virtual machine. "Client IP" specifies that successive requests from the same client IP address will be handled by the same virtual machine. "Client IP and protocol" specifies that successive requests from the same client IP address and protocol combination will be handled by the same virtual machine.
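If you go this route, an existing rule's distribution mode can be switched with the Azure CLI, for example (resource and rule names are placeholders):

az network lb rule update \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHTTPRule \
    --load-distribution SourceIP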
SignalR only knows the url you provided when starting the connection and it uses it to send requests to the server. I believe Azure App Service uses sticky sessions by default. You can find some details about this here: https://azure.microsoft.com/en-us/blog/azure-load-balancer-new-distribution-mode/
When using multiple servers and scale-out the client can send messages to any server.
Thank you for your answers guys.
Doing a bit of reading, it seems that the Azure load balancer uses 5-tuple distribution by default.
Here is the article https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode
The problem with 5-tuple is that it is sticky per transport session.
And I think this is what causes the client requests using SignalR to hit the same VM in the scale set; the balancer interprets the SignalR connection as a single transport session.
The Application Gateway wasn't an option from the beginning because it has many features which we do not need (so it doesn't make sense to pay for something we don't use).
But now it seems that the Application Gateway is the only balancer in Azure capable of doing round robin when balancing traffic.
What do you suggest as the best way to protect your web server's IP address for outgoing requests? I'm already using Cloudflare for inbound requests, but if my web server (Node.js) is making outbound connections for sending webhooks or something, I would prefer not to expose my origin's IP. I have a firewall set up to block any inbound connections not coming from Cloudflare, but I don't want my IP to reveal where I'm hosted only to have my datacenter receive a DDoS.
There actually aren't any good articles I can find anywhere regarding protecting your IP with outbound connections.
Two thoughts:
1) Set up a second datacenter containing proxy servers and route outbound web server traffic through the proxy servers.
2) Set up a webhook queue, send webhooks to the queue and have servers in a 2nd datacenter work the queue.
Ideas?
I have worked with a number of models at my company over the years, including both of the ones that you listed. We started out using a queue that was consumed by webhook processors in remote data centers, but we transitioned to a model with less emphasis on queues and instead simplified it: an originating server chooses one of the available notification/webhook senders, which in turn calls the webhook subscriber. The sender also takes care of buffering, resending, alerting and aging of messages.
For the purpose of protecting your IP address, it depends on a number of variables. In our case, we acquire additional IP address ranges for the senders, but you can achieve your goal by having the proxy hosted on AWS or similar.
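As a concrete illustration of the proxy idea (option 1 in the question), outbound webhook calls in Node.js can be routed through a forward proxy running elsewhere, so the subscriber only ever sees the proxy's IP. Hostnames below are placeholders and https-proxy-agent is a third-party npm package (the exact import form depends on its version):

const https = require('https');
const { HttpsProxyAgent } = require('https-proxy-agent');

// Hypothetical forward proxy running in the second datacenter.
const agent = new HttpsProxyAgent('http://proxy.second-dc.example.com:3128');

function sendWebhook(url, payload) {
  const body = JSON.stringify(payload);
  const req = https.request(url, {
    method: 'POST',
    agent,                                   // all egress goes out via the proxy's IP
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body),
    },
  }, (res) => {
    console.log(`webhook ${url} -> ${res.statusCode}`);
    res.resume();
  });
  req.on('error', (err) => console.error('webhook failed:', err.message));
  req.end(body);
}

sendWebhook('https://subscriber.example.com/hooks/order-created', { id: 42 });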
Why would you want to do this? Your inbound requests are already dropped if they aren't from Cloudflare.
Typically, access to Azure workers is done via endpoints that are defined in the service definition. These endpoints, which must be TCP or HTTP(S), are passed through a load balancer and then connected to the actual IP/port of the Azure machines.
My application would benefit dramatically from the use of UDP, as I'm connecting from cellular devices where bytes are counted for billing and the overhead of SYN/ACK/FIN dwarfs the 8 byte packets I'm sending. I've even considered putting my data directly into ICMP message headers. However, none of this is supported by the load balancer.
I know that you can enable ping on Azure virtual machines and then ping them -- http://weblogs.thinktecture.com/cweyer/2010/12/enabling-ping-aka-icmp-on-windows-azure-roles.html.
Is there anything preventing me from using a TCP-based service (exposed through the load balancer) that would simply hand out an IP address and port of an Azure VM address, and then have the application communicate directly to that worker? (I'll have to handle load balancing myself.) If the worker gets shut down or moved, my application will be smart enough to reconnect to the TCP endpoint and ask for a new place to send data.
Does this concept work, or is there something in place to prevent this sort of direct access?
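As a sketch of the idea (addresses and ports below are made up), the TCP "directory" service could be as simple as this in Node.js; the answers that follow discuss whether clients can actually reach the handed-out instance directly:

const net = require('net');

// Hypothetical list of worker endpoints, refreshed by other code when
// instances are moved or recycled.
const workers = [
  { host: '104.43.128.10', port: 5005 },
  { host: '104.43.128.11', port: 5005 },
];
let next = 0;

// Tiny TCP service exposed through the load balancer: a device connects,
// receives "ip:port" of one worker, then sends its UDP data there directly.
net.createServer((socket) => {
  const w = workers[next++ % workers.length];
  socket.end(`${w.host}:${w.port}\n`);
}).listen(10000);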
You'd have to run your own router which exposes an input (external) endpoint and then routes to an internal endpoint of your service, either on the same role or a different one (this is actually how Remote Desktop works). You can't directly connect to a specific instance by choice.
There's a 2-part blog series by Benjamin Guinebertière that describes IIS Application Request Routing to provide sticky sessions (part 1, part 2). This might be a good starting point.
Ryan Dunn also talked about http session routing on the Cloud Cover Show, along with a follow-up blog post.
I realize these two examples aren't exactly what you're doing, as they're routing http, but they share a similar premise.
There's a thing called InstanceInputEndpoint which you can use to define ports on the public IP that are directed to a local port on a particular VM instance: each instance in the role is assigned its own public port from the configured range, all forwarding to the same local port on that instance. So you will have a particular port+IP combination which can directly access a particular VM.
<InstanceInputEndpoint name="HttpInstanceEndpoint" protocol="tcp" localPort="80">
  <AllocatePublicPortFrom>
    <FixedPortRange max="8089" min="8081" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>
More info:
http://msdn.microsoft.com/en-us/library/windowsazure/gg557552.aspx