Azure Container Instance IP

I've created an Azure container instance with a private IP. It is connected to a VNet so my Web Apps can communicate with it securely. (The API also uses Bearer tokens, but I don't want to make it public.)
However, when the container restarts it gets a new IP, so I have to update the environment variables and restart my apps.
Is there a way to implement service discovery within Azure, so my Web Apps (and other services) know where this Container Instance is, especially when the container gets a new IP?
I am used to dealing with Pivotal and Consul but under Azure I don't have these tools.
In Pivotal I was able to fire up multiple instances and the platform would auto discover and load balance. At the moment, Azure feels very manual :(
Does Azure have the ability to register a service under a host name that can then auto resolve?
Does Azure support load balancing when multiple instances are started with the same name?

For this kind of work, maybe the best solution is to create an Azure App Service web app. The web app has public static IPs tied to your App Service plan, which do not change:
Inbound and outbound IP addresses in Azure App Service
With this solution your IP never changes.
According to the documentation linked, you can find your inbound IP like this:
nslookup <app-name>.azurewebsites.net
and the outbound IPs in PowerShell:
(Get-AzWebApp -ResourceGroupName <group_name> -Name <app_name>).OutboundIpAddresses

Related

Creating Containerized Applications on Azure with a VNet

I have a set of containerized HTTP services that I wrote. The services are configured using a docker-compose.yml and a collection of Dockerfiles to build the service images. I would like to host my docker-compose.yml setup on Azure; specifically, one of my HTTP services needs the ability to rotate the outbound public IP address it makes requests from (similar to a proxy).
I have looked at the following resources:
Azure App Service
Container App
Container Instances
Virtual Machine
I have been able to deploy my app successfully and test it on all of these solutions; however, my issue is that one of my services needs to rotate the outbound IP address it makes requests from (kind of like using a proxy to make requests).
I can accomplish this with a virtual machine by adding a VNet with multiple IP address resources associated. This works perfectly fine, but using a virtual machine cuts me off from the benefits of Azure's other managed container services.
I have read the docs for App Service, Container Apps, and Container Instances, and it doesn't seem like it's possible to assign these resources to a VNet with my current configuration.
Any advice on how I could go about solving my problem is appreciated.

Azure Container App: Only allow access over Api Management

I want to restrict access to my Azure Container App with an API Management instance in Azure.
I successfully linked the API Management instance with the Container App, and I have activated a subscription with an API key that prevents public access over the API Management service URL. The problem, however, is that the Container App can still be accessed over its own public URL.
There is still the option to set the ingress traffic in the Container App to Limited to Container Apps Environment, but then API Management will not have access to the Container App either.
What is the correct way to properly secure the Container App behind an Api Management Service?
For Azure Container Instances, you don't have the option to configure IP restrictions similar to Azure App Services. Instead you will have to first create a virtual network and configure a Network Security Group to Deny all traffic from the internet and allow only from APIM, and then deploy your Azure Container Instance to this virtual network.
See here for deploying an Azure container instance to a virtual network: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-vnet
For configuring network security groups in your virtual network, see: https://learn.microsoft.com/en-us/azure/virtual-network/manage-network-security-group#work-with-security-rules
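To make that concrete, a rough CLI sketch (the resource names, the APIM IP placeholder, and the HTTPS port below are assumptions on my part, not from the original answer):
# Deploy the container instance into an existing VNet/subnet so it gets a private IP
az container create -g <rg> -n <aci-name> --image <image> --vnet <vnet-name> --subnet <subnet-name>
# In the NSG attached to that subnet: allow HTTPS only from the APIM public IP...
az network nsg rule create -g <rg> --nsg-name <nsg-name> -n allow-apim --priority 100 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes <apim-public-ip> --destination-port-ranges 443
# ...and deny everything else coming from the internet
az network nsg rule create -g <rg> --nsg-name <nsg-name> -n deny-internet --priority 200 --direction Inbound --access Deny --protocol '*' --source-address-prefixes Internet --destination-port-ranges '*'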
Your app service is still accessible over the public internet because you haven't configured access restrictions in your App Service's networking settings.
Go to your App Service, select Networking from the left menu, and turn on access restrictions for inbound traffic.
Create an access restriction rule to deny traffic from the internet.
Next, create a second access rule to allow access from the APIM instance. Make sure the allow rule takes precedence (a lower priority number) so it is evaluated before the deny rule.
Read the Microsoft Docs on how to set App Service IP restrictions here: https://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions
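For reference, the same rule can be scripted with the Azure CLI; a minimal sketch (resource names and the APIM IP are placeholders I've assumed):
# Allow inbound traffic only from the APIM public IP; once an allow rule exists,
# App Service adds an implicit deny-all rule for all other traffic.
az webapp config access-restriction add -g <rg> -n <app-name> --rule-name allow-apim --action Allow --ip-address <apim-public-ip>/32 --priority 100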
Assuming your API Management service has a static IP (i.e. it is not on the consumption plan), you would need to use your own VNet:
Networking architecture in Azure Container Apps
Then using NSG, you could add an inbound rule to only allow traffic from the APIM service IP on HTTPS (TCP 443).
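A brief sketch of that rule (placeholder names; similar to the NSG commands shown earlier, but applied to the NSG on the Container Apps infrastructure subnet):
# Look up the APIM instance's public IP, then allow only that IP on 443
az apim show -g <rg> -n <apim-name> --query publicIpAddresses
az network nsg rule create -g <rg> --nsg-name <aca-subnet-nsg> -n allow-apim-https --priority 100 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes <apim-public-ip> --destination-port-ranges 443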
Azure Container Apps now also seem to have the ability to restrict inbound IP addresses:
https://azure.microsoft.com/en-gb/updates/public-preview-inbound-ip-restrictions-support-in-azure-container-apps/
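At the time of writing this appears to be surfaced through the containerapp CLI extension; a tentative sketch (the feature is in preview and the command surface may change; names and the IP are placeholders):
# Allow only the APIM public IP through the Container App's public ingress
az containerapp ingress access-restriction set -g <rg> -n <containerapp-name> --rule-name allow-apim --ip-address <apim-public-ip>/32 --action Allow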
We are looking at a similar architecture with a similar dilemma. Everything we have is secured with Azure B2C, but if I want to make an internal container/microservice accessible to Azure API Management, I think I'd have to drop B2C (API Management has no UI to log into B2C) and make it publicly accessible via the ingress. If the inbound IP addresses are restricted to API Management, maybe that is OK. It does worry me that IP addresses can be spoofed, although you'd hope Microsoft have thought of that.
Another alternative, which I've not investigated but which does work for Azure Functions, is managed identities. This might not work at all with Container Apps though:
https://www.svenmalvik.com/azure-apim-function-msi/
First, I think it is good to explain the networking architecture of Azure Container Apps.
Azure Container Apps run in the context of an environment, which is supported by a virtual network (VNET). When you create an environment, you can provide a custom VNET, otherwise a VNET is automatically generated for you.
There are two ways to deploy Container Apps environments:
External - Container Apps environments deployed as external resources are available for public requests. External environments are deployed with a virtual IP on an external, public facing IP address.
Internal - When set to internal, the environment has no public endpoint. Internal environments are deployed with a virtual IP (VIP) mapped to an internal IP address. The internal endpoint is an Azure internal load balancer (ILB) and IP addresses are issued from the custom VNET's list of private IP addresses.
These two options are shown in the Azure portal when you create the environment.
Now going further, if you want your container app to restrict all outside access, create an internal Container Apps environment.
Now, when it comes to deploying Container Apps to the Container Apps environment, the accessibility level you selected for the environment determines the available ingress options for your container app deployments.
If you are deploying to an external environment, you have two options for configuring ingress traffic to your container app:
Limited to Container Apps Environment - to allow only traffic from other container apps deployed within the shared Container Apps environment.
Accepting traffic from anywhere - to allow the application to be accessible from the public internet.
If you are deploying to an internal environment, you also have two options for configuring ingress traffic to your container app:
Limited to Container Apps Environment - to allow only traffic from other container apps deployed within the shared Container Apps environment.
Limited to VNet (Virtual Network) - to allow traffic from the VNet, so the container app is accessible to other Azure resources or applications within the virtual network, or connected to it through peering or some type of VPN connectivity.
Now, in your case, what you are looking for is an architecture where access to Azure Container Apps is enabled only through Azure API Management. In this case you have to deploy the Azure Container Apps environment in internal mode and set ingress traffic to Limited to VNet (Virtual Network), as sketched below.
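A minimal CLI sketch of that combination (the subnet ID, names, image, and port below are placeholders I've assumed):
# Internal Container Apps environment: no public endpoint, only an internal load balancer in your VNet
az containerapp env create -g <rg> -n <env-name> -l <location> --infrastructure-subnet-resource-id <subnet-id> --internal-only true
# On an internal-only environment, "external" ingress corresponds to the portal's
# "Limited to VNet" option (reachable from the VNet via the internal load balancer),
# while "internal" ingress corresponds to "Limited to Container Apps Environment".
az containerapp create -g <rg> -n <app-name> --environment <env-name> --image <image> --target-port 8080 --ingress external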
I assume that Azure API Management should be accessible from the internet. In this case you have to deploy Azure API Management inside an Azure virtual network. There are two possible modes: internal and external. In your scenario, you can use external mode. More details can be found here. When the API Management instance is in external mode, the developer portal, API gateway, and other API Management endpoints are accessible from the public internet, while backend services are located in the Azure virtual network.
Here I also attach the solution architecture to show how all these components are connected together. I also have Azure Front Door here, but API Management is deployed in external mode. Please remember that you will also need a private DNS zone for your Azure Container Apps environment domain, to make it possible to refer to specific APIs from Azure API Management using URLs, for example:
https://ca-tmf-mip-vc-api--v-01.blacklacier-cf61414b.westeurope.azurecontainerapps.io
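A rough CLI sketch of that private DNS setup (assuming the environment already exists; the link name is a placeholder):
# The zone name must match the environment's default domain, and a wildcard A record
# must point at the environment's static (internal load balancer) IP.
domain=$(az containerapp env show -g <rg> -n <env-name> --query properties.defaultDomain -o tsv)
ip=$(az containerapp env show -g <rg> -n <env-name> --query properties.staticIp -o tsv)
az network private-dns zone create -g <rg> -n "$domain"
az network private-dns link vnet create -g <rg> -n aca-dns-link -z "$domain" -v <vnet-name> -e false
az network private-dns record-set a add-record -g <rg> -z "$domain" -n '*' -a "$ip"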
Helpful links:
Repo with Bicep files to deploy Azure Container App with internal mode
Azure Container Apps Virtual Network Integration

Azure WebApp Static Outbound IP

I am trying to solve a problem: I have to access APIs hosted on my on-premises (on-prem) server from an Azure-hosted Web API.
The problem is that my on-prem server only allows whitelisted IPs. I know we can get the outbound IPs from our Web App (Azure-hosted), but I am not sure whether they are static or will change based on scaling.
Another solution is to create a VNet and add the Web App to it, but I would like someone to suggest better solutions.
There are a couple of choices you have.
First, you can have a look at the possibleOutboundIpAddresses of your App Service and whitelist these IPs. This, however, also opens the door for IPs that are not actually in use by your App Service.
az webapp show --resource-group <group_name> --name <app_name> --query possibleOutboundIpAddresses --output tsv
Secondly, you can put a NAT gateway in front of your App Service. This, however, requires an App Service plan that supports virtual network integration. The steps are:
1. Configure regional virtual network integration from within your App Service.
2. Force all outbound traffic originating from that app to travel through the virtual network. This is done by setting the WEBSITE_VNET_ROUTE_ALL=1 property in your web app configuration.
3. Create a public IP address.
4. Add a NAT gateway, attach it to the subnet that contains the App Service, and make it use the public IP created in step 3.
A CLI sketch of these steps follows below.
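A rough CLI sketch of those four steps (resource names are placeholders, and the integration subnet must be delegated to App Service):
# 1. Regional VNet integration for the web app
az webapp vnet-integration add -g <rg> -n <app-name> --vnet <vnet-name> --subnet <integration-subnet>
# 2. Route all outbound traffic through the VNet
az webapp config appsettings set -g <rg> -n <app-name> --settings WEBSITE_VNET_ROUTE_ALL=1
# 3. Static public IP for the outbound traffic
az network public-ip create -g <rg> -n <pip-name> --sku Standard --allocation-method Static
# 4. NAT gateway using that public IP, attached to the integration subnet
az network nat gateway create -g <rg> -n <natgw-name> --public-ip-addresses <pip-name>
az network vnet subnet update -g <rg> --vnet-name <vnet-name> -n <integration-subnet> --nat-gateway <natgw-name>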
If you would also like to use a static inbound IP, you can find more information here.
The outbound IPs for Azure App Service are generally static and will not change on scaling. There are normally 4 outbound IPs, and they only change if you change the SKU or if Microsoft needs to increase the capacity of their data center (rare, and may never happen in the near future).
I would recommend creating a VNet, as that is more secure than whitelisting IPs at your on-prem service. But if you do want to whitelist the outbound IPs, I would recommend creating a wrapper for your on-prem APIs in Azure and whitelisting the wrapper's IPs in your on-prem firewall. This will ensure that you don't have to update the whitelist every time you want to create an API in Azure that needs to access on-prem APIs.
Unfortunately, there is no straightforward way to do this in Azure for App Services; I also ran into this issue recently.
The only solution (for now anyway) is to add the list of outbound IPs of the App Service to your allow rules.
Just be careful with scaling between the tiers because it will change the outbound IP addresses. (https://learn.microsoft.com/en-us/azure/app-service/overview-inbound-outbound-ips#when-outbound-ips-change)
The simplest way would be to use an Azure VM with a static public IP which is used for both inbound and outbound.
Sam Cogan has a good blog post where he does go through a couple of options.
(https://samcogan.com/obtaining-a-static-outbound-ip-from-an-azure-virtual-network/)
A hybrid connection might be a solution: https://learn.microsoft.com/en-us/azure/app-service/app-service-hybrid-connections. I think they are designed for accessing on-premises services.

Azure Container Services (AKS) - Exposing containers to other VNET resources

I am using Azure Container Services (AKS, not ACS) to stand up some APIs, some of which are for public consumption and some of which are not.
For the public access route everything is as you might expect: a load-balancer service bound to a public IP is created, the DNS zone contains our A record pointing to the public IP, and traffic is routed through to an NGINX controller and then onwards to the correct internal service endpoints.
Currently the preview version assigns a new VNet in which to place the AKS resource group; moving forward, I will place the AKS instance inside an already existing VNet which houses other components (App Services, on an App Service Environment).
My question is: how do I grant access to the private APIs for other components inside the same VNet, as well as for components in other VNets?
I believe AKS supports an ILB-type load balancer, which I think might be what is required for routing traffic from other VNets. But what about components that already reside inside the same VNet?
Thank you in advance!
If you need to access these services from other services outside the AKS cluster, you still need an internal load balancer (ILB) to load balance across your service on the different nodes in your cluster. You can either use the ILB that is created by adding the internal load balancer annotation to your service, or use NodePort and then string up your own way to spread the traffic across all the nodes that host the endpoints.
I would use an ILB instead of trying to build your own using NodePort service types. The only other option would perhaps be some type of API gateway VM inside your VNet where you can define the backend pool; that may be a solution if you are hosting APIs through a third-party API gateway hosted on an Azure VM in the same VNet.
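For reference, a minimal sketch of such an internal load balancer service on AKS (the service name, ports, and selector are hypothetical; the annotation is the standard Azure ILB annotation):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: internal-api                   # hypothetical name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer                   # provisioned as an Azure internal load balancer
  ports:
  - port: 80
    targetPort: 8080                   # adjust to your container port
  selector:
    app: internal-api                  # must match your pod labels
EOF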
Eddie Villalba
MCSD: Azure Solutions Architect | CKA: Certified Kubernetes Administrator

How to expose a website on the new VMs in Azure?

I used to use the "classic" VMs to host test web sites on Azure, but now when I create a VM using the new Azure portal I don't get the option to create a cloud service, and I can't seem to figure out how to hook the VM up to a cloud service.
My requirement is simple, I have a third party BA web site that I need to install and test. The site gets installed and runs fine locally in IIS but I can't seem to expose it as a ".cloudapp.net" site.
Thanks in advance,
Vince
Cloud services do not exist in the new portal (known as ARM). Instead, you should create a load balancer or Traffic Manager resource in front of the VM and then configure access that way.
A number of considerations are needed before picking which method to go with; for example, Traffic Manager profiles can only be pointed at Web Apps, classic cloud services, or ARM-based network adapters on VMs with public IP addresses. Your public IP address must have a DNS name associated with it as well.
For load balancers, you have more options in some areas than with Traffic Manager profiles, but it really does depend on your requirements.
Personally I use traffic manager profiles for providing access to IaaS VMs.
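For the simplest case (a single test VM rather than a load balancer or Traffic Manager profile), a hedged CLI sketch, assuming the VM already has a public IP resource (names are placeholders). Note that ARM DNS names end in .cloudapp.azure.com rather than .cloudapp.net:
# Give the VM's public IP a DNS label -> <myapp>.<region>.cloudapp.azure.com
az network public-ip update -g <rg> -n <vm-public-ip-name> --dns-name <myapp>
# Open HTTP in the VM's network security group
az vm open-port -g <rg> -n <vm-name> --port 80 --priority 900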
