What do you suggest as the best way to protect your web server's IP address for outgoing requests? I'm already using Cloudflare for inbound requests, but if my web server (Node.js) is making outbound connections to send webhooks or similar, I would prefer not to expose my origin's IP. I have a firewall set up to block any inbound connections not coming from Cloudflare, but I don't want my IP to reveal where I'm hosted, only to have my datacenter receive a DDoS.
I actually can't find any good articles anywhere about protecting your IP on outbound connections.
Two thoughts:
1) Set up a second datacenter containing proxy servers and route outbound web server traffic through the proxy servers.
2) Set up a webhook queue, send webhooks to the queue and have servers in a 2nd datacenter work the queue.
Ideas?
I have worked with a number of models at my company over the years, including both of the ones you listed. We started out using a queue that was consumed by webhook processors in remote data centers, but we transitioned to a model with less emphasis on queues, and instead simplified it: an originating server chooses one of the available notification/webhook senders, which in turn calls the webhook subscriber. The sender also takes care of buffering, resending, alerting, and aging of messages.
For the purpose of protecting your IP address, it depends on a number of variables. In our case, we acquired additional IP address ranges for the senders, but you can achieve your goal by having the proxy hosted on AWS or similar.
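For illustration, here's a minimal sketch of the proxy approach in Node.js/TypeScript, assuming a standard HTTP forward proxy at a hypothetical egress.example.com in the second datacenter (plain HTTP only; HTTPS targets would additionally need a CONNECT tunnel). The subscriber only ever sees the proxy's IP:

import * as http from 'node:http';

// Hypothetical egress proxy running in a second datacenter or on AWS.
const PROXY_HOST = 'egress.example.com';
const PROXY_PORT = 3128;

function sendWebhook(targetUrl: string, body: string): void {
  const target = new URL(targetUrl);
  // A forward proxy expects the absolute URL in the request line, so the
  // request is addressed to the proxy but carries the subscriber's URL.
  const req = http.request({
    host: PROXY_HOST,
    port: PROXY_PORT,
    method: 'POST',
    path: targetUrl,
    headers: {
      Host: target.host,
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body),
    },
  });
  req.end(body);
}

sendWebhook('http://subscriber.example.com/hook', JSON.stringify({ event: 'ping' }));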
Why would you want to do this? Your inbound requests are already dropped if they aren't from Cloudflare.
How are two app services in the same App Service plan uniquely identified? I understand they have different URLs, and it is explained that at the backend these URLs are converted into IP addresses; however, the app services have the same outbound IP addresses.
URLs are not converted into IP addresses on the backend side. Domain names like myapp.azurewebsites.net are resolved to IP addresses by DNS servers, and afterwards the client sends an HTTP request to the resolved IP address, which belongs to a server on the Azure side. Indeed, this means the Azure backend wouldn't be able to assign a request to the right app service from the IP address alone, so another property is necessary for this matching: the HTTP Host header. This header is used by an internal load balancer (called a "front end" on the Azure side) which distributes the request to the worker(s) your application is running on.
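To illustrate the Host header's role (hypothetical app names and front-end IP), two requests sent to the same inbound IP are distinguished purely by the Host header; a quick Node.js sketch:

import * as http from 'node:http';

// Hypothetical shared inbound IP of the Azure front end.
const FRONT_END_IP = '203.0.113.20';

// Same destination IP and port; only the Host header differs, and that is
// what the front end uses to pick the right app service.
for (const appHost of ['app-one.azurewebsites.net', 'app-two.azurewebsites.net']) {
  http.get(
    { host: FRONT_END_IP, path: '/', headers: { Host: appHost } },
    (res) => console.log(`${appHost} -> HTTP ${res.statusCode}`)
  );
}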
Yes, the inbound and outbound IP addresses have little to do with the selection of app services running on App Service plans; the outbound IP can even be totally different from the plan's addresses if you are using a NAT gateway or other services.
If you want a more specific answer, please ask a more specific question: what problem are you trying to solve?
I am using the Azure Standard Load Balancer (client -> external LB => firewall => internal LB => server). When my API request gets to the server, I need to be able to identify the originating client's IP address.
I have tried to use X-Forwarded-For and some other request headers, but it looks like they're either not supported or have been stripped.
I have not been able to find any documentation online pertaining to the issue - does anyone know how I can access the client ip address?
Thanks
It sounds like you are using the LB for an HTTP backend. Thus, it's important to understand what the LB does, and what it doesn't. There are many good articles out there if you search for "azure load balancer vs application gateway". Here is one example which sums it up well:
The Load Balancer is a TCP/UDP load balancing and port forwarding engine only. It does not terminate, respond, or otherwise interact with the traffic. It simply routes traffic based on source IP address and port, to a destination IP address and port.
Thus, it does not add anything to your HTTP headers etc.
So, LB is more like a router than a proxy. If you want the latter, I suggest you look at Azure Application Gateway. This, btw, can include Web Application Firewall. So you might be able to reduce three components into one.
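For completeness, if you do switch to an Application Gateway, it populates the X-Forwarded-For header with the caller's address, which your backend can then read. A minimal Node.js sketch (note the header value can carry a port suffix and multiple hops):

import * as http from 'node:http';

const server = http.createServer((req, res) => {
  // Behind Application Gateway the original caller appears as the first
  // entry of X-Forwarded-For (possibly with a port appended). With only
  // the layer-4 Load Balancer in front, fall back to the socket address,
  // which the LB preserves.
  const forwarded = req.headers['x-forwarded-for'];
  const clientIp = typeof forwarded === 'string'
    ? forwarded.split(',')[0].trim()
    : req.socket.remoteAddress;
  res.end(`client ip: ${clientIp}\n`);
});

server.listen(8080);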
I am trying to find a PowerShell command that can confirm there are no open connections and no traffic still flowing to endpoint1, or confirm that traffic is moving smoothly to endpoint2, after disabling endpoint1:
# Flip the endpoint object to Disabled locally, then push the change to Azure
$e[0].EndpointStatus = "Disabled"
Set-AzureRmTrafficManagerEndpoint -TrafficManagerEndpoint $e[0]
Is there a command to do this? I am not able to find anything on Google. Or should I use some wait command to wait for a minute or so to flush out all open connections?
Basically I am looking for a way to make sure all in-flight connections are drained from one endpoint before disabling it.
Traffic does not flow through your Traffic Manager instance. Therefore, the functionality you are asking for from Traffic Manager does not exist. Traffic Manager simply resolves DNS queries to an IP address of one of your endpoints using the routing method (priority, weighted, performance, etc) you configured it for.
After disabling an endpoint, you could still see traffic going to the disabled endpoint for a period of time determined by your Traffic Manager profile's DNS TTL setting. For example, if you disable an endpoint at 3:01:00 and your DNS TTL setting is 90 seconds, then you could see traffic until 3:02:30, because that's how long it could take for any client's DNS cache to expire. One way to monitor this is through the Queries by Endpoint Returned metric described here. This should work in most cases. However, it's not 100%: disabling an endpoint in Traffic Manager won't stop a client that knows the IP address of your endpoint from calling it directly. You can decide whether or not this scenario is likely for your application and clients. So, to be absolutely certain there are no active clients using the endpoint, you will need some monitoring in place at the endpoint itself.
Finally, if you gracefully stop your web app, virtual machine, or other service hosting the endpoint you want disabled, then any active requests to your application will complete before the service shuts down, assuming your application completes requests in a reasonable time (a few seconds).
Documentation on how to test and verify your Traffic Manager settings is available here.
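As a rough sketch of that kind of check from a client's perspective (hypothetical profile name and endpoint IP; assumes you know which IP belongs to endpoint1), you could poll the profile's DNS name until the disabled endpoint stops being returned, then wait out one TTL for cached answers:

import { promises as dns } from 'node:dns';

const PROFILE_FQDN = 'myprofile.trafficmanager.net'; // hypothetical profile
const ENDPOINT1_IP = '203.0.113.10';                 // hypothetical endpoint1 IP
const TTL_MS = 90_000;                               // match your profile's DNS TTL

async function waitForEndpoint1Drain(): Promise<void> {
  // Poll until Traffic Manager stops answering with endpoint1's address.
  while ((await dns.resolve4(PROFILE_FQDN)).includes(ENDPOINT1_IP)) {
    await new Promise((resolve) => setTimeout(resolve, 10_000));
  }
  // Clients may still hold cached answers for up to one TTL.
  await new Promise((resolve) => setTimeout(resolve, TTL_MS));
  console.log('endpoint1 should no longer receive new connections');
}

waitForEndpoint1Drain();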
We have an on-premises Cisco load balancer which routes traffic to our on-premises DMZ servers.
We want to use Azure Load Balancer or another Azure solution (Application Gateway) that can balance traffic to our on-premises DMZ servers, basically replacing the Cisco with Azure.
Is it possible? We have SFT/HTTPS sites currently hosted in our DMZ environment.
TIA
What you're proposing isn't the use-case for Application Gateways. Application Gateways are Layer 7 load balancers / reverse proxies. What you want to do is almost treat one as a one-site forward proxy. It's not a good architecture, and even if it were possible it would ultimately be more costly in the long run, since you would pay for data egress as your App Gateway accepts requests and then forwards them on to your web servers via an outbound connection over the Internet. It then receives the response headers/body from your web servers and again sends that result on to the original caller.
In that scenario, you are forced to have to use end-to-end SSL for your applications, removing any possibility of using the App Gateway for SSL offload in the future. If your traffic isn't encrypted or doesn't need to be, the predictability of the source and destination of your traffic increases the security risk to your website's users and your company.
You also have the possible security implications of this type of architecture. Your web servers still need to be accessible at the very least by your Application Gateway, which means they are either freely available on the Internet anyway (in which case why bother with an App Gateway at all) or they're firewalled at a single layer and permit only traffic from the source IP address of your Application Gateway.
The bad news with the firewall approach is that you cannot assign a static public IP address to an Application Gateway; it is forced to be dynamic. Realistically, the public IP won't change until the App Gateways are rebooted, but you should know that when, not if, they do, your firewall rules will be wrong and your App Gateways won't be able to reach your DMZ servers any more, which means an outage. The only true solution for that is a firewall that can do URI-based firewall rules... the impact there is cost (time and CPU) to perform a DNS lookup, see if the traffic is from the App Gateway by its DNS address (something like bd8f86bb-5d5a-4498-bc0c-e1a48b3873bf.cloudapp.net) and then either permit or deny the request.
As discussed above, a further security consideration is that your traffic will be fairly consistently originating from one location (the App Gateways) and arriving at your DMZ. If there's a well-defined source of traffic, that fact could be used in an attack against your servers/DMZ. While I'm sure attacking this is non-trivial, you damage your security posture by making source and destination traffic predictable across the Internet.
I've configured a good number of Application Gateways now for Enterprise applications, and out of morbid curiosity I had a go at configuring a very basic one using HTTP to do what you're attempting - fortunately (yes, fortunately) I received an HTTP 502, so I'm going to say that this isn't possible. I'll add that I'm glad it isn't possible, because it's a Bad Idea (TM).
My suggestion is that you either migrate your DMZ servers to Azure (for the best performance/network latency) or implement a VPN or (preferably) ExpressRoute. You'll then be able to deploy an Application Gateway using the correct architecture where you terminate your users' connections at the App Gateway and that re-transmits the request within your RFC1918 network to your DMZ servers which respond within the network back to the App Gateway and ultimately back to the requestor.
Sorry, it's not what you wanted to hear. If you're determined to do this, perhaps nginx could be made to do it?
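For what it's worth, here is a toy sketch of what such a relay amounts to in Node (hypothetical on-premises address; nginx's proxy_pass does this same job far more robustly), mainly to show that it's just a hand-rolled reverse proxy:

import * as http from 'node:http';

// Hypothetical on-premises DMZ server the cloud relay forwards to.
const DMZ_HOST = 'dmz.example.com';
const DMZ_PORT = 80;

const relay = http.createServer((clientReq, clientRes) => {
  // Forward the incoming request unchanged and stream the response back.
  const upstream = http.request(
    {
      host: DMZ_HOST,
      port: DMZ_PORT,
      method: clientReq.method,
      path: clientReq.url,
      headers: clientReq.headers,
    },
    (upstreamRes) => {
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
    }
  );
  upstream.on('error', () => {
    clientRes.writeHead(502);
    clientRes.end();
  });
  clientReq.pipe(upstream);
});

relay.listen(8080);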
Typically, access to Azure workers is done via endpoints that are defined in the service definition. These endpoints, which must be TCP or HTTP(S), are passed through a load balancer and then connected to the actual IP/port of the Azure machines.
My application would benefit dramatically from the use of UDP, as I'm connecting from cellular devices where bytes are counted for billing and the overhead of SYN/ACK/FIN dwarfs the 8-byte packets I'm sending. I've even considered putting my data directly into ICMP message headers. However, none of this is supported by the load balancer.
I know that you can enable ping on Azure virtual machines and then ping them -- http://weblogs.thinktecture.com/cweyer/2010/12/enabling-ping-aka-icmp-on-windows-azure-roles.html.
Is there anything preventing me from using a TCP-based service (exposed through the load balancer) that would simply hand out the IP address and port of an Azure VM, and then have the application communicate directly with that worker? (I'll have to handle load balancing myself.) If the worker gets shut down or moved, my application will be smart enough to reconnect to the TCP endpoint and ask for a new place to send data.
Does this concept work, or is there something in place to prevent this sort of direct access?
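For concreteness, a minimal sketch of the "hand out an address" service described above (hypothetical worker list; selection logic is just round robin here):

import * as net from 'node:net';

// Hypothetical pool of worker instances reachable over UDP once known.
const workers = [
  { host: '203.0.113.30', port: 5000 },
  { host: '203.0.113.31', port: 5000 },
];
let next = 0;

// TCP directory service: each connection receives one worker address and
// is then closed; the device talks UDP to that worker directly afterwards.
const directory = net.createServer((socket) => {
  const worker = workers[next];
  next = (next + 1) % workers.length;
  socket.end(`${worker.host}:${worker.port}\n`);
});

directory.listen(4000);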
You'd have to run your own router which exposes an input (external) endpoint and then routes to an internal endpoint of your service, either on the same role or a different one (this is actually how Remote Desktop works). You can't directly connect to a specific instance by choice.
There's a 2-part blog series by Benjamin Guinebertière that describes IIS Application Request Routing to provide sticky sessions (part 1, part 2). This might be a good starting point.
Ryan Dunn also talked about http session routing on the Cloud Cover Show, along with a follow-up blog post.
I realize these two examples aren't exactly what you're doing, as they're routing http, but they share a similar premise.
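As a sketch of the "router" idea (hypothetical internal endpoint; real instance selection and internal-endpoint discovery would come from the role environment), an external-facing role could simply pipe each accepted TCP connection to the chosen internal endpoint:

import * as net from 'node:net';

// Hypothetical internal endpoint of the instance we want to reach.
const INTERNAL_HOST = '10.0.0.5';
const INTERNAL_PORT = 9000;

// External endpoint: every accepted connection is piped, both directions,
// to the internal endpoint. Sticky-session or other selection logic would
// decide which instance's internal endpoint to dial.
const router = net.createServer((client) => {
  const upstream = net.connect(INTERNAL_PORT, INTERNAL_HOST, () => {
    client.pipe(upstream);
    upstream.pipe(client);
  });
  upstream.on('error', () => client.destroy());
  client.on('error', () => upstream.destroy());
});

router.listen(8080);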
There's a thing called InstanceInputEndpoint which you can use to define ports on the public IP that will be directed to a local port on a particular VM instance. So you will have a particular port+IP combination which can directly access a particular VM.
<!-- One public port per role instance is taken from the 8081-8089 range;
     traffic arriving on an instance's public port is forwarded to local
     port 80 on that specific instance. -->
<InstanceInputEndpoint name="HttpInstanceEndpoint" protocol="tcp" localPort="80">
  <AllocatePublicPortFrom>
    <FixedPortRange max="8089" min="8081" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>
More info:
http://msdn.microsoft.com/en-us/library/windowsazure/gg557552.aspx