Are there any forwarding proxies that hold state in Azure? - azure

I'm wondering if there are any serverless services on Azure that I could use to implement a rate limit for outgoing requests. The requests will be sent from different workers, and a distributed lock doesn't feel like the right way forward.
One idea was to host a forwarding proxy that manages and limits the rate of outgoing HTTP requests to a specific endpoint, but using something serverless would be preferred.
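Whatever hosts the proxy, the limiting itself usually comes down to something like a token bucket in front of the outgoing calls. A minimal sketch in Node (the rate numbers are placeholders, and all workers would need to share one instance, e.g. inside the proxy process, for the limit to be global):

```javascript
// Minimal token-bucket limiter: allows `ratePerSec` requests per second,
// with bursts up to `capacity` requests.
function createTokenBucket(ratePerSec, capacity) {
  var tokens = capacity;
  var last = Date.now();
  return {
    tryRemoveToken: function () {
      var now = Date.now();
      // Refill based on elapsed time, capped at capacity.
      tokens = Math.min(capacity, tokens + ((now - last) / 1000) * ratePerSec);
      last = now;
      if (tokens >= 1) {
        tokens -= 1;
        return true; // caller may send the request now
      }
      return false; // caller should queue or retry later
    }
  };
}
```

A forwarding proxy would call tryRemoveToken() before relaying each request, and queue or reject the request when it returns false.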

Related

Azure Application Gateway - How to control traffic for different application

I am creating an application gateway that will be a single point of entry for my multi-tenant application. That means multiple applications will be served through this application gateway, and I need to route each one to its backend pool. If application A is deployed in App Service A, it will listen on port 80 of the app gateway. Similarly, if I have another application, I can expose it the same way on a different port. How can I achieve this? I tried creating multiple rules, but it's not working.
If I read your question correctly, you want multiple App Services, each on a potentially different port, to be served by a single application gateway. And it sounds like you might want to make requests to that application gateway on different ports. Sound right?
If so, then what you need to do is something along these lines:
Set up a backend pool for each app service.
Set up an HTTP setting for each backend pool, specifying port, session affinity, protocol, etc. - This will be the port that your App Service takes requests on.
Create a frontend IP configuration to expose a public and/or private IP address.
Create a listener for each app service and port that you want to support. This will be the port you want the client requests to use. You can create two listeners per service to allow both HTTP (80) and HTTPS (443) traffic, for example.
Create a rule to connect each listener to its backend pool and HTTP setting combination.
Optional - set up health probes that target monitoring endpoints based on a URL and a specific HTTP setting entry.
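Assuming an existing gateway named appgw in resource group rg, the steps above map roughly onto Azure CLI calls like these (all names and ports are placeholders; check az network application-gateway --help for your CLI version):

```shell
# One backend pool per App Service
az network application-gateway address-pool create \
  --gateway-name appgw -g rg -n poolA \
  --servers appservice-a.azurewebsites.net

# HTTP setting: the port the App Service takes requests on
az network application-gateway http-settings create \
  --gateway-name appgw -g rg -n settingA \
  --port 80 --protocol Http --cookie-based-affinity Disabled

# Listener: the port clients use (needs a frontend port resource first)
az network application-gateway frontend-port create \
  --gateway-name appgw -g rg -n fp8081 --port 8081
az network application-gateway http-listener create \
  --gateway-name appgw -g rg -n listenerA \
  --frontend-port fp8081

# Rule tying the listener to its pool + HTTP setting
az network application-gateway rule create \
  --gateway-name appgw -g rg -n ruleA \
  --http-listener listenerA --address-pool poolA --http-settings settingA
```

Repeat the pool/setting/listener/rule set for each additional app service.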

Azure Application Gateway for on-premise load balancer

We have a Cisco load balancer on-premise which routes traffic to our DMZ servers on-premise.
We want to use Azure Load Balancer or an Azure solution (Application Gateway) to balance traffic to our on-premise DMZ servers - basically replace the Cisco with Azure.
Is it possible? We have SFTP/HTTPS sites currently hosted in our DMZ environment.
TIA
What you're proposing isn't the use-case for Application Gateways. Application Gateways are Layer 7 load balancers / reverse proxies. What you want to do is almost treat them as a one-site forward proxy. It's not a good architecture and even if it were possible would ultimately be more costly in the long-run since you would pay for data egress as your App Gateway accepts requests and then forwards on to your web servers via an outbound connection over the Internet. They then receive the response headers/body from your web servers and again send that result on to the original caller.
In that scenario, you are forced to have to use end-to-end SSL for your applications, removing any possibility of using the App Gateway for SSL offload in the future. If your traffic isn't encrypted or doesn't need to be, the predictability of the source and destination of your traffic increases the security risk to your website's users and your company.
You also have the possible security implications of this type of architecture. Your web servers still need to be accessible at the very least by your Application Gateway, which means they are either freely available on the Internet anyway (in which case why bother with an App Gateway at all) or they're firewalled at a single layer and permit only traffic from the source IP address of your Application Gateway.
The bad news with the firewall approach is that you cannot assign a static public IP address to an Application Gateway; it is forced to be dynamic. Realistically the public IP won't change until the App Gateway is rebooted, but you should know that when - not if - it does, your firewall rules will be wrong and your App Gateway won't be able to reach your DMZ servers any more, which means an outage. The only true solution for that is a firewall that can do DNS-name-based rules. The impact there is the cost (time and CPU) of performing a DNS lookup to check whether the traffic is from the App Gateway by its DNS address - something like bd8f86bb-5d5a-4498-bc0c-e1a48b3873bf.cloudapp.net - and then either permit or deny the request.
As discussed above, a further security consideration is that your traffic will be fairly consistently originating from one location (the App Gateways) and arriving at your DMZ. If there's a well defined source of traffic, that fact could be used in an attack against your servers/DMZ. While I'm sure attacking this is non-trivial, you damage your security posture by making source and destination traffic predictable across the Internet.
I've configured a good number of Application Gateways now for enterprise applications, and out of morbid curiosity I had a go at configuring a very basic one using HTTP to do what you're attempting - fortunately (yes, fortunately) I received an HTTP 502, so I'm going to say that this isn't possible. I'll add that I'm glad it isn't possible, because it's a Bad Idea (TM).
My suggestion is that you either migrate your DMZ servers to Azure (for the best performance/network latency) or implement a VPN or (preferably) ExpressRoute. You'll then be able to deploy an Application Gateway using the correct architecture where you terminate your users' connections at the App Gateway and that re-transmits the request within your RFC1918 network to your DMZ servers which respond within the network back to the App Gateway and ultimately back to the requestor.
Sorry it's not what you wanted to hear. If you're determined to do this, perhaps nginx could be made to do it?
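For completeness: if you were determined to front on-prem servers from a cloud VM, a plain nginx reverse proxy (rather than an App Gateway) would look something like this minimal sketch. The hostnames and paths are placeholders, and all the caveats above about egress cost and exposing your DMZ still apply:

```nginx
# /etc/nginx/conf.d/dmz.conf - relay public traffic to an on-prem DMZ host
server {
    listen 443 ssl;
    server_name app.example.com;            # placeholder public name
    ssl_certificate     /etc/nginx/tls/cert.pem;
    ssl_certificate_key /etc/nginx/tls/key.pem;

    location / {
        proxy_pass https://dmz.example.internal;   # placeholder on-prem host
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```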

Protect outbound IP

What do you suggest as the best way to protect your web server's IP address for outgoing requests? I'm already using Cloudflare for inbound requests, but if my web server (Node.js) is making outbound connections to send webhooks or similar, I would prefer not to expose my origin's IP. I have a firewall set up to block any inbound connections not coming from Cloudflare, but I don't want my IP to reveal where I'm hosted only to have my datacenter receive a DDoS.
There actually aren't any good articles I can find anywhere regarding protecting your IP with outbound connections.
Two thoughts:
1) Set up a second datacenter containing proxy servers and route outbound web server traffic through the proxy servers.
2) Set up a webhook queue, send webhooks to the queue and have servers in a 2nd datacenter work the queue.
Ideas?
I have worked with a number of models at my company over the years, including both of the ones you listed. We started out using a queue that was available to webhook processors in remote data centers, but we transitioned to a model with less emphasis on queues and instead simplified it: an originating server chooses one of the available notification/webhook senders, which in turn calls the webhook subscriber. The sender also takes care of buffering, resending, alerting, and ageing of messages.
For the purpose of protecting your IP address, it depends on a number of variables. In our case, we acquired additional IP address ranges for the senders, but you could achieve your goal by hosting the proxy on AWS or similar.
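The sender model described above might be sketched like this in Node. The round-robin choice, the deliver function, and the buffering policy are all hypothetical stand-ins for the real pieces:

```javascript
// Originating servers pick a sender (running in the second datacenter,
// with its own IP range); the sender owns buffering and resending, so
// the web server's origin IP never appears to webhook subscribers.
function createSenderPool(senders) {
  var next = 0;
  return {
    // Round-robin selection over the available senders.
    pick: function () {
      var s = senders[next % senders.length];
      next += 1;
      return s;
    }
  };
}

function createSender(deliver) {
  var buffer = []; // messages awaiting (re)delivery
  return {
    send: function (webhook) {
      if (!deliver(webhook)) {
        buffer.push(webhook); // keep for resend / ageing / alerting
      }
    },
    pending: function () { return buffer.length; }
  };
}
```

A real deliver function would POST to the subscriber's URL; failed messages stay buffered for the resend/ageing logic the answer mentions.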
Why would you want to do this? Your inbound requests are already dropped if they aren't from Cloudflare.

Routing clients using Azure Mobile Services

I'd like to implement NAT punchthrough as part of a client application to allow clients to connect to each other when behind a router. I'm hoping to use Azure Mobile Services to accomplish this, but to do so, the server needs to save the IP address and port of all incoming connections in a database (so that other clients can look up the host and connect back to the client that posted the data).
Is there any way to acquire this connection information (IP address and port) in the server-side scripts? If not, what alternative services exist that would let me set up an API like this?
Thanks!
I found an answer in another thread over on the Windows Azure forums.
Headers are exposed through the Mobile Services custom API feature. Additionally, Azure uses a forwarding machine to route incoming requests to the appropriate VM. This machine is a proxy which saves incoming connection information into the x-forwarded-for HTTP header. Thus, from a custom script, we can read the incoming connection information from the headers. It should be noted that the x-forwarded-for header is supposed to include both the IP address and the port number.
Here's the custom api example given in the other thread.
exports.get = function(request, response) {
    // Azure's front-end proxy records the original caller's address
    // (and, per the thread, port) in the x-forwarded-for header.
    var ip = request.headers['x-forwarded-for'];
    response.send(statusCodes.OK, ip);
};
The other thread is here: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/a6aa306c-f117-4893-a50a-94418fafc1a9/client-ip-address-from-serverside-scripts-azure-mobile-services?forum=azuremobile&prof=required
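For the NAT-punchthrough lookup table, the header value would then need splitting into its parts. A minimal sketch, assuming the "ip:port" format the thread describes (IPv6 addresses would need different handling):

```javascript
// Split an "ip:port" x-forwarded-for value into the pieces a peer
// needs in order to connect back. Returns null if no port is present.
function parseForwardedFor(value) {
  var idx = value.lastIndexOf(':');
  if (idx < 0) {
    return null;
  }
  return {
    ip: value.substring(0, idx),
    port: parseInt(value.substring(idx + 1), 10)
  };
}
```

The custom API script would store the result in a table so other clients can look up the host and connect back.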
For the minute this isn't available. The Azure team is working on increasing the amount of information about the request that is exposed to the script. As to timescales for when this will be available, I'm unsure.

Directly accessing Azure workers; bypassing the load balancer

Typically, access to Azure workers is done via endpoints that are defined in the service definition. These endpoints, which must be TCP or HTTP(S), are passed through a load balancer and then connected to the actual IP/port of the Azure machines.
My application would benefit dramatically from the use of UDP, as I'm connecting from cellular devices where bytes are counted for billing and the overhead of SYN/ACK/FIN dwarfs the 8 byte packets I'm sending. I've even considered putting my data directly into ICMP message headers. However, none of this is supported by the load balancer.
I know that you can enable ping on Azure virtual machines and then ping them -- http://weblogs.thinktecture.com/cweyer/2010/12/enabling-ping-aka-icmp-on-windows-azure-roles.html.
Is there anything preventing me from using a TCP-based service (exposed through the load balancer) that would simply hand out an IP address and port of an Azure VM address, and then have the application communicate directly to that worker? (I'll have to handle load balancing myself.) If the worker gets shut down or moved, my application will be smart enough to reconnect to the TCP endpoint and ask for a new place to send data.
Does this concept work, or is there something in place to prevent this sort of direct access?
You'd have to run your own router which exposes an input (external) endpoint and then routes to an internal endpoint of your service, either on the same role or a different one (this is actually how Remote Desktop works). You can't directly connect to a specific instance by choice.
There's a 2-part blog series by Benjamin Guinebertière that describes IIS Application Request Routing to provide sticky sessions (part 1, part 2). This might be a good starting point.
Ryan Dunn also talked about http session routing on the Cloud Cover Show, along with a follow-up blog post.
I realize these two examples aren't exactly what you're doing, as they're routing http, but they share a similar premise.
There's a thing called InstanceInputEndpoint which you can use to define ports on the public IP that will be directed to a local port on a particular VM instance. So you will have a particular port+IP combination which can directly reach a particular VM.
<InstanceInputEndpoint name="HttpInstanceEndpoint" protocol="tcp" localPort="80">
  <AllocatePublicPortFrom>
    <FixedPortRange max="8089" min="8081" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>
More info:
http://msdn.microsoft.com/en-us/library/windowsazure/gg557552.aspx
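The asker's reconnect idea - a TCP endpoint behind the load balancer that hands out a direct instance address, with the client re-asking when a worker moves - could be sketched like this. The lookup and trySend functions are hypothetical stand-ins for the real broker call and the direct UDP send:

```javascript
// Ask the broker for a worker address; if sending fails (worker moved
// or was shut down), ask again and retry, up to `maxAttempts` times.
function sendDirect(lookup, trySend, data, maxAttempts) {
  for (var attempt = 0; attempt < maxAttempts; attempt++) {
    var addr = lookup();           // e.g. TCP request through the LB
    if (trySend(addr, data)) {     // e.g. raw UDP datagram to addr
      return addr;                 // delivered; cache this address
    }
  }
  return null;                     // give up after maxAttempts lookups
}
```

The client would cache the returned address and fall back to sendDirect again whenever a send to the cached address fails.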
