I have a resource (specifically, a Kubernetes service deployed to my AKS cluster) to which I want to limit access. I've looked through the Microsoft documentation on What is Azure Virtual Network?, VPN Gateway design, and more, but I don't see a clear way that I can either:
Require AAD authentication before a specific IP/Port is accessed, or
Whitelist access coming from a specific IP/subnet (e.g., specifying a CIDR range www.xxx.yyy.zzz/nn that should get access).
There seem to be ways to restrict access that require installing a RADIUS VPN client, but I don't want to require this. It seems like there are a ton of hoops to jump through -- is there a way I can block all incoming traffic to my AKS cluster except from specific AAD roles or from specific IP ranges?
It would be helpful to understand what you intend to use AKS for (web site, batch computing, etc.)
First, fully explore the networking options offered by the service itself. Start by locking access down to your personal IP address; based on the Azure docs, the service will then append a deny-all to the end of your networking rules. To find the IP address it sees you coming from, try IP Chicken.
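For example, if the workload is exposed through a Kubernetes LoadBalancer Service, you can restrict which source ranges may reach it directly on the Service object. A minimal sketch, assuming a Deployment labelled app=my-app; the name, ports, and CIDRs are placeholders:

# Restrict an AKS LoadBalancer Service to specific source CIDRs.
# AKS translates loadBalancerSourceRanges into rules on the Azure
# load balancer / NSG for you. All names and ranges are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  loadBalancerSourceRanges:
    - 203.0.113.0/24    # e.g. office subnet
    - 198.51.100.17/32  # e.g. your personal IP
EOF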
I offer two additional options: Application Gateway or API Management.
One way to lock this down, based on the information you shared, is Application Gateway.
Application Gateway (Product Page)
Ingress Controller for AKS
"Application Gateway Ingress Controller (AGIC) allows you to use Application Gateway as the ingress for an Azure Kubernetes Service (AKS) cluster." - from Azure Docs
API Management
You also have API Management, paired with policies on that resource. It can restrict access by AAD (check the pricing tier for details) and by IP address (on any pricing tier).
If the built-in networking options of AKS don't cover your use case, I would choose API Management; its price and options are a better fit for what you seem to be aiming for.
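As a rough illustration of such policies, an inbound policy on the API (or a single operation) can combine IP filtering with AAD token validation. This is only a sketch to paste into the policy editor; the tenant ID, audience, and address range are placeholders:

# Sketch of an API Management inbound policy: allow only one IP range
# and require a valid AAD-issued JWT. Values in curly braces are placeholders.
cat > apim-inbound-policy.xml <<'EOF'
<policies>
  <inbound>
    <base />
    <ip-filter action="allow">
      <address-range from="203.0.113.0" to="203.0.113.255" />
    </ip-filter>
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401">
      <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
      <audiences>
        <audience>api://{backend-api-app-id}</audience>
      </audiences>
    </validate-jwt>
  </inbound>
  <backend><base /></backend>
  <outbound><base /></outbound>
  <on-error><base /></on-error>
</policies>
EOF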
Related
I am setting up an Azure SQL Database in a VNet that an Azure App Service and an Azure Function will access. Is using both subnet delegation and service endpoints the right way to go? I didn't fully understand the documentation.
Regarding subnet delegation, I read this Microsoft article and this Stack Overflow post, which stated:
When you delegate a subnet to an Azure service, you allow that service to establish some basic network configuration rules for that subnet, which help the Azure service operate their instances in a stable manner.
That sounds like a good thing but makes me wonder how it worked efficiently w/o subnet delegation.
As for Service Endpoints, I read this Microsoft article, which states:
Virtual Network (VNet) service endpoint provides secure and direct connectivity to Azure services over an optimized route over the Azure backbone network. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Service Endpoints enables private IP addresses in the VNet to reach the endpoint of an Azure service without needing a public IP address on the VNet.
Does that mean I cannot reach the Azure SQL Database from my home machine with a firewall rule?
They both sound like they have the same benefits and I'm struggling to understand the difference. I suppose the larger question is should I enable both for the simple architecture outlined above.
In the Microsoft service endpoints documentation they also mention:
Microsoft recommends use of Azure Private Link for secure and private access to services hosted on Azure platform. For more information, see Azure Private Link.
For some reason that seems like an Azure-to-on-premises thing.
• You cannot use a ‘subnet delegation’ along with a ‘private endpoint’, since that subnet is delegated to the service in question, in your case the Azure SQL Database. Through a subnet delegation you can define the NSG association for the subnet, associate multiple delegated subnets with a common NSG, define the IP address space for the delegated subnet, its route table association, and its custom DNS entry configuration in Azure DNS, and set the minimum number of IP addresses available in that delegated subnet. With a service endpoint, these functions are not available.
• With a service endpoint, you do not have control over the routing mechanism or over IP address allotment, reservation, or configuration. Also, managing DNS entries for the resources reached through it, or controlling them through a firewall or NAT gateway, isn't required, unlike with a subnet delegation, because all of this is handled by Microsoft Azure's backbone network on your behalf.
Thus, each has its own features and capabilities, and you can enable them according to your own requirements.
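For reference, both are configured at the subnet level. A hedged CLI sketch (all resource names are placeholders, and which service you delegate to or enable an endpoint for depends on your design):

# Enable a Microsoft.Sql service endpoint on the subnet the apps use:
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name app-subnet \
  --service-endpoints Microsoft.Sql

# Delegate a separate subnet to App Service for regional VNet integration:
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name integration-subnet \
  --delegations Microsoft.Web/serverFarms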
Does that mean I cannot reach the Azure SQL Database from my home machine with a firewall rule?
Yes, you will have to create a firewall rule to allow access from your on-premises system to the Azure SQL server/database, and configure the service endpoint accordingly so that the VPN/client IP addresses can access it over the public internet.
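A server-level firewall rule for a single home IP looks roughly like this (a sketch; the names and the address are placeholders):

# Allow one public IP through the logical SQL server's firewall.
az sql server firewall-rule create \
  --resource-group my-rg \
  --server my-sql-server \
  --name AllowHomeIP \
  --start-ip-address 198.51.100.17 \
  --end-ip-address 198.51.100.17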
Also, through Azure Private Link, you won't be able to connect from on-premises to Azure, as it uses a private IP address and a related private DNS zone entry to connect to Azure resources in the same virtual network.
To learn more about configuring access to Azure services from an on-premises network, refer to the following link:
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoints-overview#secure-azure-service-access-from-on-premises
Also, the service endpoint for a particular subnet is configured and selected under that subnet's settings in the virtual network.
I have an App Service that is a classic web app written in Node.js. Let's say my app has two endpoints: /SecuredEndpoint and /ClassicEndpoint. /SecuredEndpoint should be secured, meaning only certain IP addresses are allowed to access it. /ClassicEndpoint, on the other hand, is public to the whole internet.
I've found that in Azure I can apply Access Restrictions to the whole service (blocking/allowing access based on IP address), but I would like to secure only certain endpoints, not the whole app.
Can someone tell me how I can achieve that in Azure?
Restricting certain IP addresses means applying an ACL at the networking layer, and Access Restrictions are effectively network ACLs. However, they are implemented in the App Service front-end roles, which are upstream of the worker hosts where your code runs, so they apply to the whole app. In this case, you could consider using two app services, one for each endpoint. You can read about the supported security features in the Azure App Service documentation.
Alternatively, you can allow certain IP addresses in your application code; there are samples for such a feature, for example this SO thread. For App Service on Windows, you can also restrict IP addresses by configuring web.config, as sketched below. For more information, see Dynamic IP Security.
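A minimal sketch of the web.config approach, restricting only the /SecuredEndpoint path (the allowed address/subnet is a placeholder, and you should verify that your App Service worker sees the original client IP before relying on this):

# web.config fragment for App Service on Windows (IIS): IP-restrict only
# the SecuredEndpoint path and leave the rest of the site public.
cat > web.config <<'EOF'
<configuration>
  <location path="SecuredEndpoint">
    <system.webServer>
      <security>
        <ipSecurity allowUnlisted="false">
          <add ipAddress="203.0.113.0" subnetMask="255.255.255.0" allowed="true" />
        </ipSecurity>
      </security>
    </system.webServer>
  </location>
</configuration>
EOF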
In addition, if you are interested in securing back-end App Service web apps with VNets and Service Endpoints, have a look at this blog.
Given that I create an Azure 'App Service'
How do I ensure that this service is only callable from ...
A.> 2 existing external servers (whose IP addresses will be known)
B.> 3 other App Services which I will be creating, but whose IP addresses may not be known, since I may need to scale those out (over multiple additional instances)
To clarify... Is there some Azure service that will allow me to treat this collective of machines (both real and virtual) as a single group, such that I can apply some test on incoming requests to see if they originate from this group?
On Azure Web Apps, you may wish to know that IP Restrictions (https://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions) allow you to define a list of IP addresses that are allowed to access your app. The allow list can include individual IP addresses or ranges defined by a subnet mask. When a request reaches the app, the client's IP address is evaluated against the allow list; if it is not in the list, the app replies with an HTTP 403 status code.
You can use IP and Domain Restrictions to control the set of IP addresses and address ranges that are either allowed or denied access to your websites. With Azure Web Apps you can enable/disable the feature, as well as customize its behavior, using web.config files located in your website.
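For example, an allow rule can be added from the CLI like this (a sketch; resource names and the address are placeholders, and once any allow rule exists everything else is implicitly denied):

# Allow one of the known external server IPs; repeat for the second one.
az webapp config access-restriction add \
  --resource-group my-rg \
  --name my-webapp \
  --rule-name allow-server-1 \
  --action Allow \
  --ip-address 203.0.113.10/32 \
  --priority 100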
Additionally, VNET Integration gives your web app access to resources in your virtual network but does not grant private access to your web app from the virtual network. Private site access is only available with an ASE configured with an Internal Load Balancer (ILB).
If you haven't already, check out Integrate your app with an Azure Virtual Network for more details on VNet Integration (https://learn.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet)
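If you do go down the VNet integration route, the regional integration can be added roughly like this (a sketch; all names are placeholders):

# Connect the web app to a subnet so its outbound calls can reach VNet resources.
az webapp vnet-integration add \
  --resource-group my-rg \
  --name my-webapp \
  --vnet my-vnet \
  --subnet integration-subnet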
I strongly suggest dropping the whole what's my IP approach and throwing in OAuth. Azure AD gives you access tokens with moderate effort —
Service to service calls using client credentials (shared secret or certificate)
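That flow boils down to a token request like the following (a sketch; the tenant, client, secret, and scope values are placeholders), after which the caller presents the returned access token as a Bearer token to your App Service:

# Request an app-only access token from Azure AD (client credentials flow).
curl -s -X POST \
  "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id={caller-app-id}" \
  -d "client_secret={caller-secret}" \
  -d "scope=api://{target-app-id}/.default"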
Otherwise, TLS client authentication would be next on my list, although that tends to really suck if you have to deal with several programming stacks, TLS offloaders, and whatnot.
I would like to create a simple architecture on Azure. My high level design is very similar to the picture below (source: https://www.import.io/post/using-amazon-lambda-and-api-gateway/)
I do want to access the internal services via Azure API Management. What I can see on Microsoft's documentation page is that this simple and secure architecture is not mentioned as a reference:
https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough
I have the following issues:
API Management cannot be assigned to a Virtual Network if at least one NIC is using the same network (why?)
Even with peered Virtual Networks I cannot access the 10.244.X.0/24 network (the pods' network), because only 10.240.0.0/16 is owned by the k8s Virtual Network. How can I access cluster IPs (10.0.0.0/16) and pod IPs (10.244.0.0/16)?
Well, you don't need an extra VNet, just an extra subnet. That subnet can lie within your existing VNet, and it can be as small as /29, the smallest subnet size Azure supports.
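Creating such a subnet in the existing VNet is a one-liner, assuming there is free address space (a sketch; names and the prefix are placeholders):

# Carve a /29 subnet out of the existing VNet for API Management.
az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name aks-vnet \
  --name apim-subnet \
  --address-prefixes 10.240.255.0/29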
The extra-subnet requirement for API Management comes from the fact that it is built on PaaS V1 (Classic) technology. While it can be deployed into a Resource Manager VNet (the V2 layer), there are consequences to that: the Classic deployment model in Azure is not tightly coupled with the Resource Manager model, so if you create a resource on the V2 side, the V1 side doesn't know about it, and problems can happen, such as API Management trying to use an IP that is already allocated to a NIC (built on V2).
To learn more about the difference between the Classic and Resource Manager models in Azure, refer to the blog post on the difference between Classic and Resource Manager models.
The answer is basically YES although the setup is not trivial.
You need:
One extra VNet for the API Management (EDIT: an extra subnet is enough)
One Service (in Kubernetes terminology)
Steps:
Peer the Kubernetes VNet and the extra VNet you have created, and test it (CLI commands for the peering are sketched after these steps)
API Management -> Virtual network: change to External
Choose the extra VNet as the Virtual Network (let's call it 'apimgmntvnet') and a subnet
Save it! Drink a beer because it took me 1h!
Meanwhile expose your deployment internally:
kubectl expose deployment app --port=<serviceport> --name=app --target-port=<containerport> --type=NodePort (NodePort is important! The LoadBalancer type triggers Kubernetes to dynamically configure the external Azure load balancer created by the Kubernetes install.)
Check the node IP:PORT in the Kubernetes dashboard UI (via kubectl proxy)
API Management -> Publisher portal: point your API at the node IP address and NodePort (AgentIP:30361)
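The peering in step 1 can be created from the CLI roughly as follows (a sketch; names are placeholders, and the peering has to be created in both directions):

# Peer the AKS VNet with the API Management VNet (both directions).
az network vnet peering create \
  --resource-group my-rg \
  --vnet-name aks-vnet \
  --name aks-to-apim \
  --remote-vnet apimgmntvnet \
  --allow-vnet-access
az network vnet peering create \
  --resource-group my-rg \
  --vnet-name apimgmntvnet \
  --name apim-to-aks \
  --remote-vnet aks-vnet \
  --allow-vnet-access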
Theoretically it should work. It is advisable to start with a VM in the apimgmntvnet, test the peering from that VM first, and then delete it (API Management cannot be part of a VNet where at least one NIC is present (?!)).
So we are starting to move to the cloud, and our biggest concern is security, as it should be. The thing I am not sure about is how to secure the endpoints from public (internet) access. Is this even possible, or is there something else we can do to keep the environments in Azure out of the public eye?
This question is likely better suited to Server Fault. But until then...
In Azure IaaS V1, you can specify IP-based ACLs (access control lists) to restrict inbound traffic.
In IaaS V2, you can leverage NSGs (network security groups) to help restrict traffic into and out of specific VMs or virtual network subnets.
If you are using Azure ExpressRoute (a leased line into an Azure facility), the VMs can be addressed directly over the virtual network connection and don't need publicly exposed endpoints.
Then there's also all the usual options such as securing the connections on the VMs themselves. :)
If you are using Azure Resource Manager VMs (available in the new portal), you cannot use classic endpoints because they are not available there, so you should do the following to open up the HTTP port or any other port (a CLI equivalent is sketched after these steps):
(Sign in to your account on the new portal)
1- Select the VM that you want to manage ports on.
2- In Settings, click Network Interfaces and select your network interface.
3- Go to Network Security Group and select your group.
4- Add Inbound or Outbound security rules depending on what you need.
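The same rule can be created from the CLI (a sketch; the names, source range, and port are placeholders):

# Allow inbound HTTP from one source range; internet traffic that matches
# no allow rule is denied by the default DenyAllInbound rule.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name my-vm-nsg \
  --name allow-http-from-office \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.0/24 \
  --destination-port-ranges 80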