I am struggling to have a simple service acting as a reverse proxy in Azure.
I need it because the API that I want to communicate with uses IP whitelisting. That's why I want to set up a reverse proxy service with a static IP. My application (whose IP cannot be static) would communicate with the target API via the reverse proxy.
With some research I found the following options:
Creating a Web App with some custom IIS config - I'm not sure if that's still valid, because the guides I found are pretty old
API Management - that seems pretty heavy and I've heard it's not going to be easy to configure
Application Gateway - that requires a VNet, which I do not even need.
Azure Function with Proxy - I think that option is no longer available with Functions V4. What's more, I would also have to set up a NAT Gateway to get a single outbound IP. That seems overly complicated.
Creating a custom Web App with code that does the proxy logic
All these options seem too complicated for the simple task I have. Basically, I want a public reverse proxy: when I hit https://my-reverse-proxy.com/*, the proxy returns data from https://my-whitelisted-api.com/*.
Is there any easier alternative?
Most of the Azure services where you can host your app have a single outbound IP or a range of outbound IP addresses that can be whitelisted, but it's hard to know whether that works in your case, as you did not mention which Azure service hosts your app.
A generic solution could be to provision an Nginx proxy in Azure Container Apps. You would then have the Container Apps environment's public IP address as the outbound IP to whitelist. Beware, though, that anyone can call your proxy from any IP, which means you are completely disabling the protection they put in place by whitelisting IPs and opening their API up to the whole world. So without knowing your circumstances, this would probably not be recommended.
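For illustration, a minimal nginx.conf sketch for such a proxy might look like the following (my-whitelisted-api.com is the placeholder hostname from the question; a real deployment would also want TLS on the proxy itself and some form of authentication in front of it):

```
# Minimal sketch: forward every request to the whitelisted API.
# my-whitelisted-api.com is the placeholder host from the question.
events {}

http {
  server {
    listen 80;

    location / {
      proxy_pass              https://my-whitelisted-api.com;
      proxy_set_header        Host my-whitelisted-api.com;   # present the upstream's own host header
      proxy_ssl_server_name   on;                            # send SNI for the upstream TLS handshake
      proxy_set_header        X-Forwarded-For $remote_addr;  # keep the original caller's IP for logging
    }
  }
}
```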
Related
I am more of a C# dev than a network admin, and I don't understand what happened.
I have a website hosted on an Azure Web App, and I started to get a lot of repetitive requests from IP address 172.16.5.1, to the point that it affected the web server's stability.
The only way I found to fix the problem was to block this IP address, but I still have questions.
1) Is blocking the IP the best solution to the problem?
2) After googling, I found that this IP address is in the range of Private IP addresses. How can a private address reach my public web server?
3) Could another resource in my Azure subscription be making these requests? I only have a web app configured, so I don't know where these requests could be coming from internally
4) Can this be a DDoS attack?
This IP address is in the private range, as you found, but more information is needed to answer your questions.
I would say blocking the IP is not the best solution; you need to find out which resource in your Azure environment uses that IP and why it is sending requests to your Web App.
This is possible when your Web App is connected to a Virtual Network; discuss it with your network admin or system architect.
I'm quite sure your Web App is connected to a Virtual Network, or it could be another instance of your Web App making requests to it.
I don't think this was an external DDoS attack.
That appears to be the default gateway for a subnet. Check the Networking blade to see if you are integrated with a VNet. I would expect that probes from something in the VNet (AppGW, Azure Firewall, NVA, etc.) would come from the instance IP of that resource and not the default gateway, but you really need to see the subnet range and know what's in there. If this is a WebApp that is integrated with a VNet via point-to-site VPN, then maybe this is something from the VNet Gateway, like Keep Alives. That might be apparent in a network trace. Blocking that IP could result in some other service marking the WebApp as unhealthy and not routing traffic to it.
Lots of conjecture here, but like Rudy said, you're not getting an external DDoS attack from a private IP.
So I have set up a web app, a virtual network, and an application gateway using this link. I also added a virtual network gateway to the VNet so that I can integrate my web app with the VNet.
Now correct me if I am wrong, but isn't the purpose of integrating your web app with a virtual network to make it more secure? If so, shouldn't I only be able to access my web app through the application gateway's public IP?
Currently, when I hit myapp.azurewebsites.net, I get to the application directly.
Do I have to do something extra here?
You are mixing several different things here.
Application Gateway is just a proxy (more or less). It has no control over whatever server it is routing traffic to; it cannot magically make that server refuse traffic that didn't come through the gateway. That is for the server to decide (in your case, as far as I remember, you need to use web.config to restrict incoming traffic to the Application Gateway's IP addresses; a sketch is below). So adding an Application Gateway to the mix doesn't, by itself, make the app more or less secure.
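For reference, a hedged sketch of what that web.config restriction could look like (203.0.113.10 is a made-up address standing in for the Application Gateway's IP; the ipSecurity section must be enabled/unlocked for the site for this to take effect):

```
<!-- Sketch only: deny everything by default, then allow just the Application Gateway's IP(s). -->
<configuration>
  <system.webServer>
    <security>
      <ipSecurity allowUnlisted="false">
        <!-- 203.0.113.10 is a placeholder for the gateway's address -->
        <add ipAddress="203.0.113.10" allowed="true" />
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>
```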
VNet integration works only one way, FROM the web app into the VNet. Things inside the VNet cannot talk back to the web app using "internal" VNet traffic; they have to use the external IP address. So this won't really help you.
If you want your web app to be available only inside the VNet, your best bet is an App Service Environment, but it's a lot more expensive :(
We have a Cisco load balancer on-premises which routes traffic to our on-premises DMZ servers.
We want to use Azure Load Balancer or another Azure solution (Application Gateway) that can balance traffic to our on-premises DMZ servers, basically replacing the Cisco device with Azure.
Is it possible? We currently have SFT/HTTPS sites hosted in our DMZ environment.
TIA
What you're proposing isn't the use-case for Application Gateways. Application Gateways are Layer 7 load balancers / reverse proxies. What you want to do is almost treat them as a one-site forward proxy. It's not a good architecture and even if it were possible would ultimately be more costly in the long-run since you would pay for data egress as your App Gateway accepts requests and then forwards on to your web servers via an outbound connection over the Internet. They then receive the response headers/body from your web servers and again send that result on to the original caller.
In that scenario, you are forced to use end-to-end SSL for your applications, removing any possibility of using the App Gateway for SSL offload in the future. If your traffic isn't encrypted or doesn't need to be, the predictability of the source and destination of your traffic increases the security risk to your website's users and your company.
You also have the possible security implications of this type of architecture. Your web servers still need to be accessible at the very least by your Application Gateway, which means they are either freely available on the Internet anyway (in which case why bother with an App Gateway at all) or they're firewalled at a single layer and permit only traffic from the source IP address of your Application Gateway.
The bad news with the firewall approach is that you cannot assign a static public IP address to an Application Gateway; it is forced to be Dynamic. Realistically the public IP won't change until the App Gateways are rebooted, but you should know that when, not if, they do, your firewall rules will be wrong and your App Gateways won't be able to reach your DMZ servers any more, which means an outage. The only true solution for that is a firewall that can do URI-based firewall rules... the impact there is the cost (time and CPU) to perform a DNS lookup, see whether the traffic is from the App Gateway by its DNS address - something like bd8f86bb-5d5a-4498-bc0c-e1a48b3873bf.cloudapp.net - and then either permit or deny the request.
As discussed above, a further security consideration is that your traffic will be fairly consistently originating from one location (the App Gateways) and arriving at your DMZ. If there's a well defined source of traffic, that fact could be used in an attack against your servers/DMZ. While I'm sure attacking this is non-trivial, you damage your security posture by making source and destination traffic predictable across the Internet.
I've configured a good number of Application Gateways now for Enterprise applications, and out of morbid curiosity I had a go at configuring a very basic one using HTTP to do what you're attempting - fortunately (yes, fortunately) I received an HTTP 502, so I'm going to say that this isn't possible. I'll add that I'm glad it isn't possible because it's a Bad Idea (TM).
My suggestion is that you either migrate your DMZ servers to Azure (for the best performance/network latency) or implement a VPN or (preferably) ExpressRoute. You'll then be able to deploy an Application Gateway using the correct architecture where you terminate your users' connections at the App Gateway and that re-transmits the request within your RFC1918 network to your DMZ servers which respond within the network back to the App Gateway and ultimately back to the requestor.
Sorry it's not what you wanted to hear. If you're determined to do this, perhaps nginx could be made to do it?
We are trying to migrate our platform from classic IIS hosting to a Service Fabric microservice architecture. So far we have learned that a Service Fabric cluster lives in a virtual machine scale set and uses a load balancer to communicate with the outside world.
The problem we are now facing is that we have different access points to our application, e.g. one for the browser and one for the mobile app. Both use the standard HTTPS port, but they are different applications.
In IIS we could use host headers to direct traffic to one application or the other, but with Service Fabric we can't. The easiest way for us would be multiple public IPs; with those we could handle it via DNS.
We considered a couple of solutions, with no success.
Load balancer with multiple public IPs. Problem: it looks like that only works with Cloud Services, and we need to work in the new Resource Manager world, where it does not seem to be possible to have multiple public IPs.
Multiple public load balancers. Problem: scale sets accept only one load balancer instance per load balancer type.
Application Gateway. It seems not to support multiple public IPs or host header mapping.
Path mapping. Problem: we have the same paths in different applications.
My questions are:
Is there any solution that uses multiple IPs and maps the traffic internally to different ports?
Is there any option to use host header mapping with service fabric?
Any suggestion how I can solve my problem?
Piling on some Service Fabric-specific info to Eli's answer: yes, you can do all of this by using an http.sys-based self-hosted web server, such as Katana or WebListener in ASP.NET Core 1, to host multiple sites with different host names on a single VIP.
The piece to this that is currently missing in Service Fabric is a way to configure the hostname in your endpoint definition in ServiceManifest.xml. Service Fabric services run under Network Service by default on Windows, which means the service will not have access to create a URL ACL for the URL it wants to open an endpoint on. To help with that, when you specify an HTTP endpoint in an endpoint definition in ServiceManifest.xml, Service Fabric automatically creates the URL ACL for you. But currently, there is no place to specify a hostname, so Service Fabric uses "+", which is the strong wildcard that matches everything.
For now, this is merely an inconvenience: you'll have to create a setup entry point for your service that runs under elevated privileges and runs netsh to set up the URL ACL manually (a sketch is below).
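For example, the setup entry point could be a one-line batch script along these lines (the host name and port are illustrative; the account is the default Network Service identity the service runs under):

```
rem Sketch: reserve a host-specific URL for Network Service so the service can open its listener.
rem "browser.contoso.com" and port 443 are placeholders for your own host name and endpoint port.
netsh http add urlacl url=https://browser.contoso.com:443/ user="NT AUTHORITY\NETWORK SERVICE"
```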
We do plan on adding a hostname field in ServiceManifest.xml to make this easier.
It's definitely possible to use ARM templates to deploy a Service Fabric cluster with multiple IPs. You'll just have to tweak the template a bit (a trimmed sketch follows the list below):
Create multiple IP address resources (e.g. using copy) - make sure you review all the resources using the IP and modify them appropriately
In the load balancer:
Add multiple frontendIPConfigurations, each tied to its own IP
Add loadBalancingRules for each port you want to redirect to the VMs from a specific frontend IP configuration
Add probes
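As a rough, trimmed illustration (not a complete template), the load balancer resource could end up with a second frontend and a rule bound to it along these lines; all names, ports and the lbId variable are placeholders:

```
"frontendIPConfigurations": [
  { "name": "frontend-browser",
    "properties": { "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', 'pip-browser')]" } } },
  { "name": "frontend-mobile",
    "properties": { "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', 'pip-mobile')]" } } }
],
"loadBalancingRules": [
  { "name": "https-mobile",
    "properties": {
      "frontendIPConfiguration": { "id": "[concat(variables('lbId'), '/frontendIPConfigurations/frontend-mobile')]" },
      "backendAddressPool":      { "id": "[concat(variables('lbId'), '/backendAddressPools/pool0')]" },
      "probe":                   { "id": "[concat(variables('lbId'), '/probes/probe-mobile')]" },
      "protocol": "Tcp",
      "frontendPort": 443,
      "backendPort": 8443
    } }
]
```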
As for host header mapping, this is handled by the Windows HTTP Server API (see this article). All you have to do is use a specific host name (or even a URL path) when configuring an HTTP listener URL (in OWIN/ASP.NET Core).
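For instance, with Katana (Microsoft.Owin.Hosting plus the HttpListener host package) the listener URL can carry a host name instead of the "+" wildcard, and http.sys will then route by Host header. Everything below (host names, port, responses) is illustrative:

```
// Sketch: two self-hosted OWIN listeners on the same port, distinguished by host name.
// "browser.contoso.com" and "mobile.contoso.com" are placeholder host names that would
// resolve (via DNS) to the same VIP; http.sys dispatches on the Host header.
using System;
using Microsoft.Owin.Hosting;
using Owin;

class Program
{
    static void Main()
    {
        using (WebApp.Start("http://browser.contoso.com:80/", app =>
                   app.Run(ctx => ctx.Response.WriteAsync("browser app"))))
        using (WebApp.Start("http://mobile.contoso.com:80/", app =>
                   app.Run(ctx => ctx.Response.WriteAsync("mobile app"))))
        {
            Console.WriteLine("Listening on both host names; press Enter to exit.");
            Console.ReadLine();
        }
    }
}
```

Note that this is exactly where the URL ACL discussion above comes in: host-specific URLs like these need a matching reservation for the account the service runs as.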
From Bluemix I want to access an application in a customer's data center using the Secure Gateway service. I also want to give access to the destination (the customer application) to the Bluemix application only.
In the Secure Gateway dashboard under Advanced options of the gateway or the destination definition is a Network option where I can specify an IP address or address range plus port or port range. The help text says: "Set this destination to private to only allow access from specific IPs and ports." This is exactly what I am looking for.
But: how can I use this with a Bluemix app? I don't know the IP address of the Bluemix app. I am aware that I can figure it out, but it is not static; the moment I stop and restart an app on Bluemix, the IP address may change. So this Network option setting would have to be done by some API call from the Bluemix application itself. Is this possible?
If not, why have this function at all?
The cloud application will use the "cap-sg-prd-<#>.integration.ibmcloud.com" hostname and the port it was given to connect into the cloud service. The client uses the destination configuration, which is downloaded to the client, to make the backend, on-premises connection to the on-premises resource. So only the cloud application needs to know about the cap-sg* host and port number; all other connectivity is taken care of by the already established Secure Gateway client connection.
In the form for the IP address you can also specify hostnames, so you could try to provide the hostname of your Bluemix app. In my tests I did not succeed and had the connection cut off entirely, so I cannot recommend trying to restrict connections right now.
By binding your Secure Gateway to the app or, even better, utilizing user-provided services to bind a database to an app, you can keep the connection information internal to Bluemix. Here is a blog post with steps for user-provided services, and on GitHub there is a demo for on-premises database integration utilizing user-provided services and the Secure Gateway.
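As a rough sketch of the user-provided-service approach (the service and app names and the credential fields are made up; the cap-sg host and port come from your gateway's destination details):

```
# Create a user-provided service holding the Secure Gateway endpoint details,
# then bind it so the app reads them from VCAP_SERVICES instead of hard-coding them.
cf create-user-provided-service onprem-db -p '{"host":"cap-sg-prd-<#>.integration.ibmcloud.com","port":"<port>","user":"<dbuser>","password":"<password>"}'
cf bind-service my-bluemix-app onprem-db
cf restage my-bluemix-app
```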
The hint regarding hostnames can be found in the Bluemix documentation for the Secure Gateway; the information about the Secure Gateway in the Knowledge Center barely mentions it.