Do NSGs apply to Service Endpoints of the subnet - Azure

I created a subnet where I connect to a Cosmos DB via a service endpoint. Besides the IP firewall of the Cosmos DB, I want to control outbound traffic via NSG rules. However, if I create a rule that denies all outbound traffic (also tested with deny all inbound), it seems to have no effect when connecting to the DB via the Mongo client.
Is this expected behaviour?

Yes, it's expected behavior when accessing the Cosmos DB from a service-endpoint-enabled VNet.
There are two points to your question:
An NSG can be associated at the subnet or network interface level. When an NSG is associated with a subnet, its rules apply to all resources connected to that subnet. If a subnet NSG has a matching rule that denies traffic, packets are dropped, even if a VM/NIC NSG has a matching rule that allows the traffic.
When you enable Cosmos DB as a service endpoint in a VNet, it extends your virtual network's private address space and the identity of your VNet to the Azure service over a direct connection. Traffic from your VNet to the Azure service always remains on the Microsoft Azure backbone network.
Today, Azure service traffic from a virtual network uses public IP addresses as source IP addresses. With service endpoints, service traffic switches to use virtual network private addresses as the source IP addresses when accessing the Azure service from a virtual network. This switch allows you to access the services without the need for reserved, public IP addresses used in IP firewalls.
So, if you are accessing the Cosmos DB from the VNet, it will use the private IP addresses in that VNet to access the Azure Cosmos DB service. If you are accessing the Cosmos DB from outside Azure, you will be restricted by the Cosmos DB firewall's IP rules.
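As a concrete sketch of the setup being discussed, the service endpoint and the Cosmos DB VNet rule can be configured with the Azure CLI. The resource names (myRg, myVnet, mySubnet, myCosmos) are placeholders:

```shell
# Enable the Cosmos DB service endpoint on the subnet
# (resource names are placeholders)
az network vnet subnet update \
  --resource-group myRg \
  --vnet-name myVnet \
  --name mySubnet \
  --service-endpoints Microsoft.AzureCosmosDB

# Register the subnet in the Cosmos DB account's VNet rules,
# so only traffic originating from this subnet is accepted
az cosmosdb network-rule add \
  --resource-group myRg \
  --name myCosmos \
  --vnet-name myVnet \
  --subnet mySubnet
```

This restricts access on the Cosmos DB side; the subnet NSG behaviour described above is unchanged by it.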


Azure Firewall: How to translate Internet URL to Internal/Intranet URL?

I have created the following VNets:
vnet-hub-poc-hubspoke is the hub VNet, and both VNets are peered as per the hub-spoke model.
vnet-hub-poc-hubspoke, being the hub VNet, has Azure Firewall configured.
Both VNets are connected to Azure Private DNS.
Azure Private DNS has a record pointing to the VM deployed in the vnet-prod-poc-hubspoke VNet, and I can access the FQDN within the internal network.
After adding a rule in Azure Firewall, I can access the website using the firewall's public IP.
Now, instead of the firewall public IP, I want to use a domain name like
http://myfirstweb.private.landingzonedomain.com/ (for now, I have updated the hosts file on the client machine to point to the firewall public IP).
What should I do at the Azure Firewall level so that it translates the internet URL to an internal/intranet URL like
http://myfirstweb.private.landingzonedomain.local/
What you want is not possible, because you cannot assign a domain name to your Azure Firewall. What you could do is create a DNS record at a domain name provider that resolves a custom domain to your Azure Firewall public IP.
Although I have seen people routing inbound traffic through their VNet this way, Azure Firewall is mainly designed for controlling outbound traffic and traffic flowing between (peered) VNets. When you want to direct inbound traffic to a website or service inside your VNet, you can choose between:
Application Gateway
Front Door
Combination of both
All the options above allow you to add custom domains and certificates. On the other hand, when you want to access a virtual machine through RDP or SSH, your main options are:
Bastion host (i.e. jumpbox)
VPN
Cloud Shell
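For the DNS-record approach mentioned above, a minimal Azure CLI sketch, assuming the zone is hosted in Azure DNS (the zone name, record name, and IP address are placeholders):

```shell
# Create an A record that resolves the custom hostname to the
# firewall's public IP (zone, record name and IP are placeholders)
az network dns record-set a add-record \
  --resource-group myRg \
  --zone-name landingzonedomain.com \
  --record-set-name myfirstweb.private \
  --ipv4-address 20.50.60.70
```

A DNAT rule on the firewall then forwards traffic arriving at that public IP to the internal VM.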

How to route all traffic through Azure Firewall in Azure, even on-prem (connected with VPN)

In our Azure tenant we have an Azure Firewall and a VPN connection to our on-prem servers. I want to route all traffic through the Azure Firewall, whether it's incoming traffic from on-prem to Azure or outgoing traffic from Azure to on-prem.
For traffic inside Azure I have created a route table for each subnet and pointed it to the firewall. Is this correct? And what do I have to configure for the on-prem connection part? Further, how can I test it?
Thanks and best regards
To route traffic coming from the on-prem network through the Azure Firewall, you also need to specify a route on the GatewaySubnet.
This route table should contain the (Azure) subnets you want to reach from on-prem.
So if, for example, you have a subnet 10.5.5.0/24 in Azure and you want to reach it from on-prem:
Add a route table with a route to 10.5.5.0/24, next hop type "Virtual Appliance", and next hop IP the private IP of your Azure Firewall.
Add this route table to the GatewaySubnet. (Sometimes you cannot associate it from within the route table itself, but have to go through Virtual Network > Subnet and specify the route table there.)
(And allow the traffic in the Azure Firewall.)
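The steps above can be sketched with the Azure CLI; the resource names, the 10.5.5.0/24 prefix from the example, and the firewall private IP 10.0.1.4 are placeholders:

```shell
# Route table holding routes for the Azure subnets reachable from on-prem
az network route-table create \
  --resource-group myRg \
  --name rt-gateway

# Route 10.5.5.0/24 via the Azure Firewall's private IP
az network route-table route create \
  --resource-group myRg \
  --route-table-name rt-gateway \
  --name to-workload-subnet \
  --address-prefix 10.5.5.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# Associate the route table with the GatewaySubnet
az network vnet subnet update \
  --resource-group myRg \
  --vnet-name myVnet \
  --name GatewaySubnet \
  --route-table rt-gateway
```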

Azure virtual network not routing traffic between subnets for app services

We deploy to Azure App Services, and to ensure secure traffic between each service they are configured with outbound traffic on a virtual network subnet. Each app service must have its own subnet, which is understandable, but to allow the app services to communicate we are having to add inbound IP restrictions for each subnet, on each app service.
As all the subnets for all the app services within an environment are on the same virtual network, we were expecting the traffic to route between the subnets. That way, by connecting each app service to an outbound subnet and allowing traffic back in on that subnet, it would also allow traffic from the other subnets of the virtual network.
I've read here Azure: Routing between subnets a response that states "Azure routes traffic between all subnets within a virtual network, by default. You can create your own routes to override Azure's default routing.", but that does not appear to be happening for us.
Is there a setting we need to change, or a route that must be added, to allow us to have a single inbound rule that allows all traffic from all subnets of the virtual network?
We are splitting our process into micro app services, but this is making security of inter-app traffic complex: each time we add an app service, we must update all the others with an additional inbound rule before it can communicate.
We also have a similar issue with managing access to the Azure SQL database: we have connected the SQL database to a subnet in the virtual network, but traffic from the app services cannot access it over that subnet.
Any advice please?
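For reference, the per-subnet inbound restrictions described above are typically added with one rule per subnet, roughly like this Azure CLI sketch (resource and rule names are placeholders):

```shell
# Allow inbound traffic to the app service from one VNet subnet;
# this has to be repeated for every app service / subnet pair
az webapp config access-restriction add \
  --resource-group myRg \
  --name myAppService \
  --rule-name allow-subnet-a \
  --action Allow \
  --vnet-name myVnet \
  --subnet outbound-subnet-a \
  --priority 100
```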

Is a service endpoint always the most secure way to access a resource on Azure?

I built an architecture where you can trigger an Azure Function to push data into a Cosmos DB, which lies behind my DMZ. Some implementation guidelines state that a service endpoint should always be enabled if possible. However, if I do so, the Cosmos DB is potentially exposed to the internet (although I would not allow any IPs in the Cosmos DB firewall). By exposure I mean the order in which services are handled in Azure (https://msdnshared.blob.core.windows.net/media/2016/05/1.bmp). Thus, the Cosmos DB would by default have a public endpoint.
Can I restrict any public access from the internet, except blocking all IP addresses?
Can I restrict any public access from the internet, except blocking all IP addresses?
Actually, by enabling the service endpoint, you have ensured that only requests originating from that subnet can access the Azure Cosmos DB. Traffic from your VNet to the Azure service always remains on the Microsoft Azure backbone network. So, it's a secure way to access resources in Azure.
After enabling a service endpoint, the source IP addresses of virtual machines in the subnet switch from public IPv4 addresses to their private IPv4 addresses when communicating with the service from that subnet. Also, the default NSG associated with that subnet continues to work with service endpoints. If you want to deny all outbound internet traffic and only allow access to Cosmos DB from that subnet, you could add a service tag as the destination in the NSG's outbound rules.
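A sketch of such an NSG rule pair with the Azure CLI, assuming placeholder resource names and the AzureCosmosDB service tag; the port list reflects ports commonly used by Cosmos DB (443 for HTTPS, 10255 for the Mongo API) and may need adjusting:

```shell
# Allow outbound traffic to Cosmos DB via its service tag
az network nsg rule create \
  --resource-group myRg \
  --nsg-name myNsg \
  --name allow-cosmosdb-out \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureCosmosDB \
  --destination-port-ranges 443 10255

# Deny all other outbound internet traffic (lower priority number is
# evaluated first, so the allow rule above wins for Cosmos DB traffic)
az network nsg rule create \
  --resource-group myRg \
  --nsg-name myNsg \
  --name deny-internet-out \
  --priority 200 \
  --direction Outbound \
  --access Deny \
  --protocol '*' \
  --destination-address-prefixes Internet \
  --destination-port-ranges '*'
```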
Edit:
You could have a look at Azure Private Link (preview), but it seems it's not available for Azure Cosmos DB accounts yet.
Azure Private Link enables you to access Azure PaaS services (for example, Azure Storage and SQL Database) and Azure-hosted customer/partner services over a private endpoint in your virtual network. Traffic between your virtual network and the service traverses the Microsoft backbone network, eliminating exposure to the public internet.

Control outbound IP address of internal VMSS in Azure

I have a VMSS/Service Fabric cluster on an internal VNet (not public). The only inbound connections to the VMSS are from on-prem through an Azure VPN Gateway.
How do I control the outbound IP address the VMSS goes through when accessing the internet? In this case I do not want this traffic routed through a random IP address or through the VPN connection.
Basically, I want to secure my Azure SQL so that the outbound internet IPs of the VMSS are whitelisted. And I don't want to add all Azure datacenter IPs.
You could look at using forced tunneling, which would ensure that you control where the data egress occurs in your on-premises environment; however, this would force all internet-bound traffic in your virtual network back over your VPN connection, which may not be desirable (or helpful if you don't control egress from there).
Failing this, you could add a software-based firewall running on an Azure VM with a public IP onto the same VNet, then use user-defined routes (UDRs) to force all traffic bound for the internet to go via that VM, and use its public IP address in your SQL firewall.
Longer term you will be able to connect Azure SQL DB to VNets (or at least restrict access to it from one) - see the UserVoice site (and add your vote!)
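The UDR part of the NVA option above can be sketched with the Azure CLI (the resource names and the NVA private IP 10.0.2.4 are placeholders):

```shell
# Route table forcing internet-bound traffic through the firewall VM
az network route-table create \
  --resource-group myRg \
  --name rt-vmss

# Default route (0.0.0.0/0) via the NVA's private IP
az network route-table route create \
  --resource-group myRg \
  --route-table-name rt-vmss \
  --name default-via-nva \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.4

# Attach the route table to the VMSS subnet, so its outbound traffic
# egresses through the NVA's public IP
az network vnet subnet update \
  --resource-group myRg \
  --vnet-name myVnet \
  --name vmss-subnet \
  --route-table rt-vmss
```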
