I'd like to implement NAT punchthrough as part of a client application to allow clients to connect to each other when behind a router. I'm hoping to use Azure Mobile Services to accomplish this, but in order to do so, the server needs to save the IP address and port of all incoming connections in a database (so that other clients can look up the host and connect back to the client that posted the data).
Is there any way to acquire this connection information (IP address and port) in the server-side scripts? If not, what alternative services exist that would let me set up an API like this?
Thanks!
I found an answer on another thread over on the Windows Azure forums.
Headers are exposed through the Mobile Services custom API feature. Additionally, Azure uses a forwarding machine to route incoming requests to the appropriate VM. This machine is a proxy that records the incoming connection information in the X-Forwarded-For HTTP header. Thus, from a custom script, we can read the incoming connection information from the headers. It should be noted that the X-Forwarded-For header is supposed to include both the IP address and the port number.
Here's the custom API example given in the other thread.
exports.get = function(request, response) {
    // Azure's front-end proxy records the original client as "ip:port"
    // in the X-Forwarded-For header before handing the request to the VM.
    var ip = request.headers['x-forwarded-for'];
    response.send(statusCodes.OK, ip);
};
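Since the header value arrives as ip:port (and, as an assumption worth verifying against your own traffic, can grow into a comma-separated chain when more than one proxy is involved), here is a minimal sketch of splitting the two apart in the same custom API style:

exports.get = function(request, response) {
    // May look like "203.0.113.7:51823" or a chain such as
    // "203.0.113.7:51823, 10.0.0.1:80" - the first entry is the client.
    var forwarded = request.headers['x-forwarded-for'] || '';
    var client = forwarded.split(',')[0].trim();
    // Note: this simple split would mangle a bare IPv6 address.
    var idx = client.lastIndexOf(':');
    var ip = idx >= 0 ? client.substring(0, idx) : client;
    var port = idx >= 0 ? client.substring(idx + 1) : null;
    response.send(statusCodes.OK, { ip: ip, port: port });
};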
The other thread is here: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/a6aa306c-f117-4893-a50a-94418fafc1a9/client-ip-address-from-serverside-scripts-azure-mobile-services?forum=azuremobile&prof=required
For the moment this isn't available. The Azure team is working on increasing the amount of information about the request that is exposed to the script. As for when this will be available, I'm unsure.
I have the following setup in Azure Resource Manager:
1 scale set with 2 virtual machines running Windows Server 2012.
1 Azure Redis Cache (C1 Standard).
1 Azure load balancer (Layer 4 in the OSI network reference stack).
The load balancer is basically configured following this guide: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-get-started-internet-portal
Session persistence is set to None for the rules.
On both VMs in the scale set I have deployed a test web app which uses SignalR 2 on .NET 4.5.2.
The test web app uses the Azure Redis Cache as a backplane.
The web app project can be found here on GitHub: https://github.com/gaclaudiu/SignalrChat-master.
During the tests I noticed that after a SignalR connection is opened, all the data sent from the client in subsequent requests arrives at the same server in the scale set; it seems the SignalR connection knows which server in the scale set to go to.
I am curious to know more about how this works. I tried to do some research on the internet but couldn't find anything clear on this point.
I am also curious to know what happens in the following case:
Client 1 has an open SignalR connection to Server A.
The next request from Client 1 through SignalR goes to Server B.
Will this cause an error?
Or will the client just be notified that no connection is open and try to open a new one?
Well, I am surprised that it is working at all. The problem is: SignalR performs multiple requests until the connection is up and running. There is no guarantee that all requests go to the same VM, especially if there is no session persistence enabled. I had a similar problem. You can activate session persistence in the load balancer, but as you pointed out, acting on OSI layer 4 will do this using the client IP (imagine all the guys from the same office hitting your API using the same IP). In our project we use Azure Application Gateway, which works with cookie affinity -> OSI application layer. So far it seems to work as expected.
I think you misunderstand how the load balancer works. Every TCP connection must send all of its packets to the same destination VM and port. A TCP connection would not work if, after sending several packets, the rest were suddenly sent to another VM and/or port. So the load balancer makes a decision on the destination for a TCP connection once, and only once, when that TCP connection is being established. Once the TCP connection is set up, all its packets are sent to the same destination IP/port for the duration of the connection. I think you are assuming that different packets from the same TCP connection can end up at different VMs, and that is definitely not the case.
So when your client creates a WebSocket connection, the following happens. An incoming request for a TCP connection is received by the load balancer. It decides, according to the distribution mode, which destination VM to send the request to. It records this information internally. Any subsequent incoming packets for that TCP connection are automatically sent to the same VM because it looks up the appropriate VM in that internal table. Hence, all the client messages on your WebSocket will end up at the same VM and port.
If you create a new WebSocket it could end up at a different VM, but all the messages from the client will end up at that same different VM.
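To make that concrete, here is a toy sketch (purely illustrative; the real balancer's hash is internal) of how a 5-tuple hash pins a connection to one backend: every packet of a connection carries the same five values, so the same VM keeps being chosen, while a new WebSocket gets a fresh source port and may land elsewhere.

// Toy illustration only - not the actual Azure algorithm.
function pickBackend(srcIp, srcPort, dstIp, dstPort, protocol, backends) {
    var key = [srcIp, srcPort, dstIp, dstPort, protocol].join('|');
    var hash = 0;
    for (var i = 0; i < key.length; i++) {
        hash = (hash * 31 + key.charCodeAt(i)) >>> 0; // simple string hash
    }
    return backends[hash % backends.length];
}

var vms = ['VM-A', 'VM-B'];
// Same 5-tuple every time -> same VM for the whole connection:
pickBackend('198.51.100.4', 51823, '13.69.0.10', 443, 'TCP', vms);
// A new connection (new source port) may pick the other VM:
pickBackend('198.51.100.4', 51902, '13.69.0.10', 443, 'TCP', vms);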
Hope that helps.
On your Azure Load Balancer you'll want to configure Session persistence. This will ensure that when a client request gets directed to Server A, then any subsequent requests from that client will go to Server A.
Session persistence specifies that traffic from a client should be handled by the same virtual machine in the backend pool for the duration of a session:
"None" specifies that successive requests from the same client may be handled by any virtual machine.
"Client IP" specifies that successive requests from the same client IP address will be handled by the same virtual machine.
"Client IP and protocol" specifies that successive requests from the same client IP address and protocol combination will be handled by the same virtual machine.
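If you deploy through templates or the SDK rather than the portal, the same setting is the loadDistribution property on the load-balancing rule. A sketch of the rule object (property names from memory of the Microsoft.Network/loadBalancers schema, so verify before use):

// "SourceIP" is the template/SDK value behind the portal's "Client IP";
// "SourceIPProtocol" corresponds to "Client IP and protocol".
var lbRule = {
    name: 'myLbRule',
    properties: {
        protocol: 'Tcp',
        frontendPort: 80,
        backendPort: 80,
        loadDistribution: 'SourceIP'
    }
};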
SignalR only knows the URL you provided when starting the connection, and it uses it to send requests to the server. I believe Azure App Service uses sticky sessions by default. You can find some details about this here: https://azure.microsoft.com/en-us/blog/azure-load-balancer-new-distribution-mode/
When using multiple servers and scale-out, the client can send messages to any server.
Thank you for your answers guys.
Doing a bit of reading, it seems that the Azure load balancer uses 5-tuple distribution by default.
Here is the article: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode
The problem with 5-tuple is that it is sticky per transport session.
And I think this is what causes the client requests using SignalR to hit the same VM in the scale set: the balancer interprets the SignalR connection as a single transport session.
Application Gateway wasn't an option from the beginning because it has many features which we do not need (so it doesn't make sense to pay for something we don't use).
But now it seems that Application Gateway is the only balancer in Azure capable of doing round-robin when balancing traffic.
I'm running an Azure Worker Role with a self-hosted OWIN Web API.
Currently, the host is initialized with a single URL, like so:
var options = new StartOptions(endpoint);
_app = WebApp.Start<Startup>(options);
The endpoint has a port hard-coded in it. I'd like to have it listen on a range of ports.
My real issue is as follows:
My Web API host is not getting round-robined by Azure's load balancer. I believe this is because the default ("None") affinity setting uses source IP, source port, target IP, target port, and protocol type to perform its load balancing. However, the clients of my Web API (in the thousands) are always the same clients; they connect every minute to perform some operations. Thus, these clients' ports and IPs don't change. My listening port and IP do not change either, since the host is hard-coded to a port. All of the requests from these clients are getting routed to the same instance all the time. I've verified this over and over again. My first worker role instance gets all the traffic; even after the 2nd instance is rebooted, the 2nd instance never kicks in.
I would like to try to have my OWIN hosted Web API listen on a range of ports. Is this the right approach? If so, how can this be done?
What do you suggest as the best way to protect your web server's IP address for outgoing requests? I'm already using Cloudflare for inbound requests, but if my web server (Node.js) is making outbound connections to send webhooks or something, I would prefer not to expose my origin's IP. I have a firewall set up to prevent any inbound connections not coming from Cloudflare, but I don't want my IP to reveal where I'm hosted, only to have my datacenter receive a DDoS.
There actually aren't any good articles I can find anywhere about protecting your IP on outbound connections.
Two thoughts:
1) Set up a second datacenter containing proxy servers and route outbound web server traffic through the proxy servers.
2) Set up a webhook queue, send webhooks to the queue and have servers in a 2nd datacenter work the queue.
Ideas?
I have worked at my company with a number of models over the years, including both of the ones you listed. We started out using a queue that was available to webhook processors in remote datacenters, but we transitioned to a model with less emphasis on queues and instead simplified it: an originating server chooses one of the available notification/webhook senders, which in turn calls the webhook subscriber. The sender also takes care of buffering, resending, alerting, and aging of messages.
For the purpose of protecting your IP address, it depends on a number of variables. In our case, we acquire additional IP address ranges for the senders, but you can achieve your goal by having the proxy hosted on AWS or similar.
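A minimal sketch of that hand-off, using only Node built-ins (relay.example.com is a placeholder for a sender box in the second datacenter): the origin server never contacts the subscriber itself, so only the relay's IP is ever visible.

var https = require('https');

// Hand a webhook job to the relay instead of calling the subscriber directly.
function enqueueWebhook(job, done) {
    // job looks like { url: subscriberUrl, payload: {...} }
    var body = JSON.stringify(job);
    var req = https.request({
        host: 'relay.example.com',   // hypothetical sender in the 2nd datacenter
        path: '/webhooks',
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Content-Length': Buffer.byteLength(body)
        }
    }, function (res) {
        done(null, res.statusCode);  // the relay handles delivery, retries, aging
    });
    req.on('error', done);
    req.end(body);
}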
Why would you want to do this? Your inbound requests are already dropped if they aren't from Cloudflare.
From Bluemix I want to access an application in a customer's data center using the Secure Gateway service. I also want to restrict access to the destination (the customer application) to the Bluemix application only.
In the Secure Gateway dashboard, under Advanced options of the gateway or the destination definition, there is a Network option where I can specify an IP address or address range plus a port or port range. The help text says: "Set this destination to private to only allow access from specific IPs and ports." This is exactly what I am looking for.
But: how can I use this with a Bluemix app? I don't know the IP address of the Bluemix app. I am aware that I can figure it out, but it is not static; the moment I stop and restart an app on Bluemix, the IP address may change. So this setting of the Network option would have to be done by some API call from the Bluemix application itself. Is this possible?
If not, why have this function at all?
The cloud application will use the "cap-sg-prd-<#>.integration.ibmcloud.com" hostname and the port it was given to connect into the cloud service. The client uses the destination configuration, which is downloaded to the client, to perform the backend, on-premises connection to the on-premises resource. So only the cloud application needs to know about the cap-sg* host and port number; all other connectivity is taken care of by the already established Secure Gateway client connection.
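In practice that means the Bluemix app simply opens a TCP connection to that public endpoint. A sketch with made-up host and port values (take the real ones from your gateway's destination details):

var net = require('net');

var socket = net.connect({
    host: 'cap-sg-prd-5.integration.ibmcloud.com', // placeholder endpoint
    port: 15000                                    // placeholder assigned port
}, function () {
    console.log('connected to the on-premises resource via the Secure Gateway');
});
socket.on('data', function (chunk) { console.log(chunk.toString()); });
socket.on('error', function (err) { console.error(err); });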
In the form for the IP address you can also specify hostnames. You could try to provide the hostname of your Bluemix app. In my tests I did not succeed and had the entire connection cut off, so I cannot recommend trying to restrict connections right now.
By binding your Secure Gateway to the app or, even better, utilizing user-provided services to bind a database to an app, you can keep the connection information internal to Bluemix. There is a blog post with steps for user-provided services, and on GitHub there is a demo for on-premises database integration utilizing user-provided services and the Secure Gateway.
The hint regarding hostnames can be found in the Bluemix documentation for the Secure Gateway. The information about the Secure Gateway in the Knowledge Center says little about it.
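For the user-provided-service route, the app can read the endpoint at runtime from VCAP_SERVICES instead of hard-coding it. A sketch assuming a hypothetical service named my-onprem-db was created with host/port credentials:

// Cloud Foundry injects bound services as JSON in VCAP_SERVICES.
var services = JSON.parse(process.env.VCAP_SERVICES || '{}');
var entry = (services['user-provided'] || []).filter(function (svc) {
    return svc.name === 'my-onprem-db';   // hypothetical service name
})[0];
var creds = entry ? entry.credentials : {};
// creds.host and creds.port would carry the cap-sg* endpoint configured above.
console.log('connecting to', creds.host, creds.port);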
I am using Node.js and Express to build a RESTful web service with no DB behind it; instead it communicates with different remote RESTful web services, and I have encountered the following problem.
My server is located in the US, but I have users from all around the world.
One of the remote RESTful services that I am working with does not support geo querying (not even by IP), and its owner asked me to forward the TCP requests to his API so that he can recognize the users' geo on his side.
I am using hyper-request as my module for sending requests, but a relevant solution with any other module will be helpful. Thanks.
If you want the other remote services to look up the geographic data of the IP, you will need to proxy the requests with an extra X-Forwarded-For HTTP header, and they will have to whitelist your server's IP as a trustworthy proxy.
But you cannot spoof your source IP so that it matches your original user's.
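A minimal sketch of such a forwarding proxy in Express, appending the caller's address to any existing X-Forwarded-For chain (api.example.com stands in for the remote service; otherwise Node built-ins only):

var express = require('express');
var https = require('https');

var app = express();
var UPSTREAM_HOST = 'api.example.com'; // placeholder for the remote API

app.use(function (req, res) {
    // Append the caller's address to any existing X-Forwarded-For chain.
    var prior = req.headers['x-forwarded-for'];
    var forwardedFor = prior ? prior + ', ' + req.ip : req.ip;

    var upstream = https.request({
        host: UPSTREAM_HOST,
        path: req.originalUrl,
        method: req.method,
        headers: Object.assign({}, req.headers, {
            host: UPSTREAM_HOST,
            'x-forwarded-for': forwardedFor
        })
    }, function (upstreamRes) {
        // Stream the upstream response straight back to the caller.
        res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
        upstreamRes.pipe(res);
    });

    upstream.on('error', function () { res.sendStatus(502); });
    req.pipe(upstream); // forward the request body, then end the upstream request
});

app.listen(3000);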