Azure Firewall Limitation - Updating Rules - azure

I'm trying to get the company to use Azure Firewall as we start to move production workloads to Azure, however the network team have stated there are limitations when using Azure Firewall. For example, they have said the firewall reboots or drops all connections when you update a rule on it.
Is this true? Would anyone know of any limitations of using Azure Firewall? The network team prefer to use Checkpoint firewalls in Azure, which are fine, but I would rather use Azure Firewall if it's not going to fall down every time we update the rules.
That just doesn't sound right, as Azure Firewall is a production-ready resource.

Please find the link below for the detailed, documented known limitations of Azure Firewall:
https://learn.microsoft.com/en-us/azure/firewall/overview#known-issues
It clearly lists the known issues regarding rule configuration, NAT, UDRs, and other Azure features that are used in integration with the firewall. It also states that configuration updates to Azure Firewall take three to five minutes on average to take effect, and that each update is applied independently, i.e. if multiple configuration updates are made to the Azure Firewall, each one takes its own time to take effect. Please check that page; as far as updating rules is concerned, I don't believe the existing rules defined on Azure Firewall stop working, nor does the firewall as an appliance go down for any period of time while the rules are being updated.
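If you want to verify this behaviour yourself, a rough sketch along the following lines (Python with the azure-mgmt-network SDK; the subscription, resource group, and firewall names are placeholders) watches the firewall's provisioning state while a rule change is being applied:

```python
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-network"   # hypothetical
FIREWALL_NAME = "fw-hub"        # hypothetical

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Poll the firewall's provisioning state after pushing a rule change.
# While the state is "Updating", existing traffic keeps flowing (per the
# documentation above); the new rule simply isn't enforced yet.
while True:
    firewall = client.azure_firewalls.get(RESOURCE_GROUP, FIREWALL_NAME)
    print(f"Provisioning state: {firewall.provisioning_state}")
    if firewall.provisioning_state == "Succeeded":
        break
    time.sleep(30)  # rule updates typically take a few minutes to settle
```

You can run this alongside a long-lived connection through the firewall to confirm nothing is dropped during the update.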

Related

Outbound IP addresses for Azure Functions

I'm running an Azure Function which gets data from an API and stores it in a blob. Everything worked fine until it stopped working out of nowhere. We then got in contact with our provider and they told us they had made some changes to their API. After we made the necessary changes in our code, we started getting an IP-denied error from their side. I then searched and found the possible outbound IP addresses for the Azure Function. They whitelisted the whole list, and still:
- They aren't getting any requests from those IPs.
- We are not able to access the data because our IP is denied.
We've been running the code on a local machine and it works completely fine, but this is just a temporary fix and we want to keep everything in the cloud.
I've been stuck on this for about 3 weeks. I've looked into different solutions and found out about Azure Logic Apps and Azure Service Fabric.
Is there something missing in my Azure Function that isn't allowing me to make requests to the API? Am I using the wrong outbound IPs? Also, if I use either of the other two services, will I encounter the same problem? I did some research on them and I think they both also use multiple outbound IP addresses, so I'm worried I'll hit the same issue.
Using a NAT gateway you can specify a static IP address for outbound traffic. Your function app needs to be attached to a subnet, which is not available on the Consumption plan.
Here is where you should be getting the Azure IP address ranges from. Azure Functions originate from the App Service plan ranges. Note that the list is updated weekly; things change, but not too often. Your provider will need to open all the relevant ranges and keep up to date with any changes. If your solution is not mission critical with a high SLA, then having your service provider open the relevant ranges and deal with failures, updating the ranges on an ad-hoc basis, should be fine.
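As a rough illustration of how to pull out just the App Service ranges from that weekly download (assuming the current ServiceTags_Public JSON format; the local file name is whatever you saved it as):

```python
import json

# Hypothetical local copy of the weekly "ServiceTags_Public_<date>.json" download.
with open("ServiceTags_Public.json") as f:
    tags = json.load(f)

# Collect the address prefixes for the App Service tag, globally and per region.
for entry in tags["values"]:
    name = entry["name"]
    if name == "AppService" or name.startswith("AppService."):
        prefixes = entry["properties"]["addressPrefixes"]
        print(f"{name}: {len(prefixes)} prefixes")
        for prefix in prefixes:
            print(" ", prefix)
```

Your provider would then need to whitelist the prefixes for the region your Function App runs in.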
Secondly, if you have a good relationship with the provider, ask them to check the firewall, they will be able to give you an indication of the IP's getting blocked by checking the firewall logs. This will help you find the right range.
The only guaranteed way to solve this in a mission-critical solution is to run your Azure Functions from a dedicated App Service plan with a dedicated IP address. This is an expensive option, but it will be the most robust.
Additional helpful information on how App Service works with IPs can be found here.
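For completeness, a minimal sketch (Python with azure-mgmt-web; the subscription, resource group, and app names are placeholders) for listing the outbound IPs your specific Function App reports:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-functions"   # hypothetical
APP_NAME = "my-function-app"      # hypothetical

client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
site = client.web_apps.get(RESOURCE_GROUP, APP_NAME)

# Comma-separated strings on the Site object; the "possible" list is the
# superset of addresses the app could ever use on its current scale unit.
print("Current outbound IPs: ", site.outbound_ip_addresses)
print("Possible outbound IPs:", site.possible_outbound_ip_addresses)
```

The "possible" list is the one your provider should whitelist, since the current outbound IPs can change within that set.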

Azure SQL DB Vulnerability Assessment marks high risk after database firewall setup

I have set up Azure ATP on an Azure SQL server and removed all the firewall settings from the server. After that, when I ran the vulnerability assessment, it cleared the “Server-level firewall rules should be tracked and maintained at a strict minimum” finding, which is high risk. However, when I add the firewall setting from the database view (not from the server view) and run the vulnerability assessment scan, the above risk shows up again. How is that possible when even the server firewall isn't showing the rule?
I am a bit confused about how this is happening.
If I understand correctly, you are clicking the "Set server firewall" button from within the database blade in the Azure portal. Correct? If so, this will indeed set server-level firewall rules, and so it will be flagged by VA2065. Database-level firewall rules are not accessible through the Azure portal. You can read more about this here.
Please note that the fact that VA2065 fails does not necessarily mean that you should close off the server level firewall entirely. Instead, you should evaluate the results (click on the failing check) and make sure that the rules in the firewall are correct. If they are - set that as your baseline. Now scan the database again and VA2065 will "pass per baseline". Only subsequent changes to the server's firewall rules (that do not match the set baseline) will result in a failure.
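If you do want database-level rules (which, as noted, the portal won't show), a minimal sketch along these lines manages them with T-SQL from Python; the connection details and rule name are placeholders:

```python
import pyodbc

# Hypothetical connection details; connect to the user database (not master),
# because database-level firewall rules live inside the database itself.
# autocommit avoids wrapping the firewall procedure in an explicit transaction.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=mydb;Uid=admin_user;Pwd=<password>;Encrypt=yes;",
    autocommit=True,
)
cursor = conn.cursor()

# Create (or update) a database-level rule for a single client address.
cursor.execute(
    "EXECUTE sp_set_database_firewall_rule N'ClientOffice', '203.0.113.10', '203.0.113.10'"
)

# List the database-level rules, which the portal does not show.
for row in cursor.execute(
    "SELECT name, start_ip_address, end_ip_address FROM sys.database_firewall_rules"
):
    print(row.name, row.start_ip_address, row.end_ip_address)
```

Database-level rules don't trip VA2065, since that check only looks at server-level rules.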
I have encountered the same issue. The reason you get this "Vulnerability Assessment" is because "Advanced data security" is ON on your sql server. The Assessment is quite strict and has a cost to it as well. The purpose of this assement if to make you think about security issue even "basic stuff" like should all azure instances be allowed to connect to your database?
That is what the "AllowAllWindowsAzureIps" is all about and would you as an database azure admin allowed for other instances in Azure to connect to this SQL Server and Database?
VA2065 - Server-level firewall rules should be tracked and maintained at a strict minimum
Firewall Rule Name: AllowAllWindowsAzureIps
Start Address: 0.0.0.0
End Address: 0.0.0.0
If the answer is yes... and yes is the most logical choice, then the way to resolve this is to approve "AllowAllWindowsAzureIps" for the given server by pressing the "Approve as Baseline" button, which tells the assessment that it is okay to have "AllowAllWindowsAzureIps" going forward.
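To see exactly which server-level rules the assessment is looking at, something like this (connection details are placeholders) lists them from the master database:

```python
import pyodbc

# Hypothetical connection details; server-level rules are listed in the
# master database's sys.firewall_rules catalog view.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=master;Uid=admin_user;Pwd=<password>;Encrypt=yes;"
)
cursor = conn.cursor()

for row in cursor.execute(
    "SELECT name, start_ip_address, end_ip_address FROM sys.firewall_rules"
):
    flag = "  <-- 'Allow Azure services' toggle" if row.name == "AllowAllWindowsAzureIps" else ""
    print(f"{row.name}: {row.start_ip_address} - {row.end_ip_address}{flag}")
```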

How do you update the NetworkConfiguration of an Azure Cloud Service deployment while it's running

How can we change the IP restriction rules while the Cloud Service is running? (A reboot is acceptable.)
For example, add / remove an IPAddress.
We don't really want to redeploy and we definitely don't want to repackage.
https://learn.microsoft.com/en-us/azure/cloud-services/schema-cscfg-networkconfiguration
By using Network Security Groups. This can be done at any time.
https://learn.microsoft.com/en-us/azure/virtual-network/security-overview
In the NSG overview, add either the service tag or IP range and the desired allow or deny rule.
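For reference, a minimal sketch of the same change done programmatically (Python with a recent azure-mgmt-network; the subscription, resource group, NSG, and rule names are placeholders). The rule takes effect without repackaging or redeploying the cloud service:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-cloudservice"   # hypothetical
NSG_NAME = "nsg-cloudservice"        # hypothetical

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Allow a single client address on 443; lower priority numbers win.
rule = SecurityRule(
    protocol="Tcp",
    access="Allow",
    direction="Inbound",
    priority=200,
    source_address_prefix="203.0.113.10",
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="443",
)

poller = client.security_rules.begin_create_or_update(
    RESOURCE_GROUP, NSG_NAME, "allow-office-https", rule
)
poller.result()  # completes without touching the deployed package
```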

A rule to represent ONLY Azure IPs for Azure Firewall

Can we create a firewall rule in Azure to allow connections from Azure data centres ONLY?
Strictly speaking, this can't be done.
Fundamentally, Azure doesn't reside within a single IP address range, nor does each Azure region. In fact, each region is broken into multiple address ranges which aren't necessarily contiguous. The ability to define a single firewall rule which covers the entirety of Azure's infrastructure would require some work in Azure to define, and maintain, a variable which holds all of these values.
It may be worth pointing out that Azure does already offer similar solutions for Internet and VirtualNetwork, which are applied in the default NSG rules. As the majority of infrastructure within Azure, but outside of your virtual network, is essentially the Internet, setting such a variable for all Azure IPs would give a user the option to, potentially unknowingly, open up their resources to any kind of malicious activity.
Depending upon what exactly it is you are attempting to achieve, Azure does offer workarounds in the form of Service Endpoints. This functionality has recently left the preview phase, and allows a user to create a security rule between certain PaaS resources and your virtual network. Currently, this functionality is restricted to Azure Storage, Azure SQL Database and Azure SQL Data Warehouse.
An extremely sloppy way of implementing firewall rules for all Azure IP ranges would be to manually enter the address ranges for the region(s) you require, which can be downloaded here. However, doing this is highly discouraged due to the security flaws previously mentioned; plus, these IP ranges are not entirely static, so it would be easy to get caught out if Microsoft were to edit certain address ranges.
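To get a feel for how often those ranges move, a quick sketch comparing two weekly snapshots of the download (the file names are placeholders, and the exact service tag name is an assumption; check the spelling used in the file for your region):

```python
import json


def prefixes(path: str, tag: str) -> set[str]:
    """Collect the address prefixes for one service tag from a downloaded file."""
    with open(path) as f:
        data = json.load(f)
    for entry in data["values"]:
        if entry["name"] == tag:
            return set(entry["properties"]["addressPrefixes"])
    return set()


# Hypothetical local copies of two consecutive weekly downloads.
old = prefixes("ServiceTags_Public_week1.json", "AzureCloud.westeurope")
new = prefixes("ServiceTags_Public_week2.json", "AzureCloud.westeurope")

print("Added:  ", sorted(new - old))
print("Removed:", sorted(old - new))
```

Any prefixes that appear or disappear between snapshots are rules you would have to add or retire by hand, which is why this approach doesn't scale.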

Is Azure Traffic Manager reliable for failover? What other problems should I be worried about?

I am planning to use Azure Traffic Manager to fail my app over from one Azure region to another.
I need some suggestions on whether that is the correct approach for failover. We have seen issues with Azure where most of the services in one region go down for a few hours. I understand that Azure Traffic Manager is not tied to a single region, but is it possible that Traffic Manager itself goes down, or that the Traffic Manager endpoint is not reachable even though my backend web app is reachable?
If I am planning to use Azure Traffic Manager, what other problems should I be worried about?
I've been working with TM for some time now, so here are a few issues I haven't seen mentioned before:
Keep-Alive
If your service allows Keep-Alive, then your DNS entry will be ignored as long as the connection remains open. I've seen some exceptionally odd behavior result from this, including users being stuck on a fallback page since they kept using the connection, causing it to remain open indefinitely. If you have access to IIS Manager, you can force Keep-Alive to be false.
Browser DNS Caching
Most browsers have their own DNS cache, and very few honor DNS Time To Live. In my experience Chrome is pretty responsive, with IE and Edge having significant delays if you need them to roll over quickly. I've heard that Opera is particularly bad.
Other DNS Caching
Even if you're not accessing your service through a browser, other components can have DNS caches, and some of them will allow you to manage the cache yourself. In theory this can even depend on your ISP's DNS caching, though reports on the magnitude of this vary significantly.
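If you want to see what TTL your resolvers are actually handing out, a small sketch with dnspython (the profile name is hypothetical):

```python
import dns.resolver

# Traffic Manager answers with a CNAME to the selected endpoint; the TTL on
# that record is what well-behaved caches should honour before re-querying.
answer = dns.resolver.resolve("myapp.trafficmanager.net", "CNAME")  # hypothetical name
for rr in answer:
    print("Points to:", rr.target)
print("TTL:", answer.rrset.ttl)
```

Comparing this against how long a client actually keeps hitting the old endpoint gives you a feel for how much caching is happening above the DNS layer.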
Traffic Manager works at the DNS level, which itself is replicated. However, even then, you should still build in redundancy into your solution.
Take a look at the Azure Architecture Center under "Make all things redundant" and you will see a recommendation for Traffic Manager:
consider adding another traffic management solution as a failback. If the Azure Traffic Manager service fails, change your CNAME records in DNS to point to the other traffic management service.
The Traffic Manager internal architecture is resilient to the failure of any single Azure region. So, even if a region fails, Traffic Manager should stay up. That applies to all Traffic Manager components: control plane, endpoint monitoring, and DNS name servers.
Since Traffic Manager works at the DNS level, it doesn't have an 'endpoint' that proxies your traffic--it uses DNS to direct clients to the appropriate endpoint, and clients then connect to those endpoints directly. Thus, an unreachable endpoint is an application problem, not a Traffic Manager problem.
That said, if the Traffic Manager DNS name servers are down, you have a serious problem. Your DNS resolution path will fail and your customers will be impacted. The only solution is to either accept the risk (small, but it can never be zero) or have a plan in place to use another DNS system, either in parallel or as a failover. This is not a limitation of Traffic Manager; you could say the same about any DNS-based traffic management system.
The earlier answer from DornaDigital is very good (other than the first point which suggests DNS caching will protect you through a name server outage--it won't). It covers some important points. In short, DNS-based failover works well for new sessions. Existing clients may have to refresh or even close their browser and reconnect.
I also agree with the details provided by dornadigital.
There are considerations for front end applications as well. The browsers all have different thresholds for how long they maintain persistent connections. Chromium, for example, currently maintains a connection unless there is inactivity for 300 seconds.
In our web applications, we detect the failover by the presence of a certain number of failed requests to the endpoint. After requests begin failing, we pause requests for 301 seconds to allow the connection to reset. This allows the DNS change from Traffic Manager to be applied to subsequent requests. We pop up a snackbar to indicate to the user that we are having an issue and display a countdown until requests resume, similar to Gmail when it has an issue connecting to its servers.
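A rough sketch of that pause-and-resume behaviour (Python with requests; the endpoint, threshold, and intervals are placeholders matching the description above):

```python
import time

import requests

ENDPOINT = "https://myapp.trafficmanager.net/api/health"  # hypothetical
FAILURE_THRESHOLD = 3   # consecutive failures before backing off
PAUSE_SECONDS = 301     # just past Chromium's 300 s idle keep-alive limit

failures = 0
while True:
    try:
        requests.get(ENDPOINT, timeout=5).raise_for_status()
        failures = 0                      # healthy again, reset the counter
    except requests.RequestException:
        failures += 1
        if failures >= FAILURE_THRESHOLD:
            # Stop hammering the failed endpoint; once the idle connection is
            # dropped, the next request triggers a fresh DNS lookup that can
            # pick up the Traffic Manager failover.
            print(f"Pausing requests for {PAUSE_SECONDS} seconds...")
            time.sleep(PAUSE_SECONDS)
            failures = 0
    time.sleep(10)                        # normal polling interval
```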
I hope that gives you one idea on how to build some redundancy into your web apps.
I disagree with Jonathan, as his understanding of the resiliency of the Traffic Manager service is at odds with Microsoft's own documentation on the subject.
When you provision Azure Traffic Manager, you select a region in which to deploy the service. I (correctly) inferred from this that if said region were to fail, the Traffic Manager service could also be impacted and, in turn, your application would not properly fail over to the secondary region.
According to Microsoft's Azure Application Architecture Guide, under "Make all things redundant", a customer should deploy Traffic Manager into more than one region:
Include redundancy for Traffic Manager. Traffic Manager is a possible failure point. Review the Traffic Manager SLA, and determine whether using Traffic Manager alone meets your business requirements for high availability. If not, consider adding another traffic management solution as a failback. If the Azure Traffic Manager service fails, change your CNAME records in DNS to point to the other traffic management service.
Azure Application Architecture Guide - Make all things redundant
My thought and intention is to not deploy Traffic Manager within the primary service region, but instead to deploy it into the secondary (failover region) and a tertiary (3rd) region as a backup.
