Creating Containerized Applications on Azure with a VNet

I have a set of containerized HTTP services that I wrote. The services are configured using a docker-compose.yml and a collection of Dockerfiles that build the service images. I would like to be able to host my docker-compose.yml setup on Azure; specifically, one of my HTTP services requires the ability to rotate which outbound public IP address it makes requests from (similar to a proxy).
I have looked at the following resources:
Azure App Service
Container App
Container Instances
Virtual Machine
I have been able to deploy my app successfully and test it on all of these solutions. However, my issue is that one of my services needs to rotate the outbound IP address that it makes requests from (much like using a proxy).
I can accomplish this with a virtual machine by attaching it to a VNet and associating multiple public IP address resources with its network interface. This works perfectly well, but using a virtual machine cuts me off from the benefits of Azure's other managed container services.
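For context, a minimal Azure CLI sketch of that VM setup might look like the following; all resource names are hypothetical, and it assumes the VM's NIC already exists:

# Create two static public IPs (hypothetical names throughout).
az network public-ip create -g myRG -n pip1 --sku Standard --allocation-method Static
az network public-ip create -g myRG -n pip2 --sku Standard --allocation-method Static
# Attach a secondary IP configuration to the VM's NIC and bind the second
# public IP to it (the primary ipconfig would hold pip1).
az network nic ip-config create -g myRG --nic-name myVMNic -n ipconfig2 --public-ip-address pip2

The service can then rotate its source IP by binding outbound sockets to the private address of the desired IP configuration.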
I have read the docs for App Service, Container Apps, and Container Instances, and it doesn't seem to be possible to assign these resources to a VNet with my current configuration.
Any advice on how I could go about solving my problem is appreciated.

Related

Azure Container App: only allow access over API Management

I want to restrict access to my Azure Container App with an API Management instance in Azure.
I successfully linked the API Management instance with the Container App, and I have activated a subscription with an API key that prevents public access through the API Management service URL. The problem, however, is that the Container App can still be accessed via its own public URL.
There is still the option to set the ingress traffic in the Container App to Limited to Container Apps Environment, but then API Management will not have access to the Container App either.
What is the correct way to properly secure the Container App behind an API Management service?
For Azure Container Instances, you don't have the option to configure IP restrictions the way you can with Azure App Service. Instead, you first have to create a virtual network, configure a network security group that denies all traffic from the internet and allows traffic only from APIM, and then deploy your Azure Container Instance into this virtual network.
See here for deploying an Azure Container Instance to a virtual network: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-vnet
For configuring network security groups in your virtual network, see: https://learn.microsoft.com/en-us/azure/virtual-network/manage-network-security-group#work-with-security-rules
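As a rough illustration, a minimal Azure CLI sketch of that setup might look like this. All names are hypothetical, and 203.0.113.10 stands in for your APIM instance's public IP:

# Create the VNet and a subnet for the container group (hypothetical names).
az network vnet create -g myRG -n myVnet --address-prefix 10.0.0.0/16 \
  --subnet-name aci-subnet --subnet-prefix 10.0.1.0/24
# NSG: allow HTTPS from APIM only, deny everything else from the internet.
az network nsg create -g myRG -n aci-nsg
az network nsg rule create -g myRG --nsg-name aci-nsg -n AllowApim --priority 100 \
  --access Allow --protocol Tcp --source-address-prefixes 203.0.113.10 --destination-port-ranges 443
az network nsg rule create -g myRG --nsg-name aci-nsg -n DenyInternet --priority 200 \
  --access Deny --source-address-prefixes Internet --destination-port-ranges '*'
az network vnet subnet update -g myRG --vnet-name myVnet -n aci-subnet --network-security-group aci-nsg
# Deploy the container instance into the subnet.
az container create -g myRG -n myaci --image myregistry.azurecr.io/myapi:latest \
  --vnet myVnet --subnet aci-subnet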
Your App Service is still accessible over the public internet because you haven't configured access restrictions in your App Service's networking settings.
What you need to do is go to your App Service, select Networking from the left menu, and turn on access restrictions for inbound traffic.
Create an access restriction rule to deny traffic from the internet.
Next, create a second access rule to allow access from the APIM. Make sure this rule has a lower priority number than the deny rule, so that it is evaluated first.
Read the Microsoft docs on how to set App Service IP restrictions here: https://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions
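If you prefer the CLI, a minimal sketch could look like this (hypothetical names, with 203.0.113.10 standing in for the APIM public IP); note that as soon as any allow rule exists, unmatched traffic is implicitly denied:

# Allow only APIM; everything else is then caught by the implicit deny-all rule.
az webapp config access-restriction add -g myRG -n myWebApp \
  --rule-name AllowApim --action Allow --ip-address 203.0.113.10/32 --priority 100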
Assuming your API Management service has a static IP (i.e., it is not on the consumption tier), you would need to use your own VNet:
Networking architecture in Azure Container Apps
Then, using an NSG, you could add an inbound rule to only allow traffic from the APIM service IP on HTTPS (TCP 443).
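That rule might look like the following, applied to the NSG on the Container Apps environment's infrastructure subnet (names hypothetical, 203.0.113.10 standing in for the APIM IP):

az network nsg rule create -g myRG --nsg-name capp-env-nsg -n AllowApimHttps --priority 100 \
  --access Allow --protocol Tcp --source-address-prefixes 203.0.113.10 --destination-port-ranges 443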
Azure Container Apps now appears to have the ability to restrict inbound IP addresses:
https://azure.microsoft.com/en-gb/updates/public-preview-inbound-ip-restrictions-support-in-azure-container-apps/
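At the time of writing this appears to be surfaced through the containerapp CLI extension; a hedged sketch, assuming hypothetical names and an APIM IP of 203.0.113.10:

# Restrict ingress so that only APIM's IP is allowed.
az containerapp ingress access-restriction set -g myRG -n myapp \
  --rule-name allow-apim --ip-address 203.0.113.10/32 --action Allow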
We are looking at a similar architecture with a similar dilemma. Everything we have is secured with Azure AD B2C, but if I want to make an internal container/microservice accessible to Azure API Management, I think I'd have to drop B2C (API Management has no UI to log into B2C) and make it publicly accessible via the ingress. If the inbound IP addresses are restricted to API Management, maybe that is OK. It does worry me that IP addresses can be spoofed, although you'd hope Microsoft has thought of that.
Another alternative, which I've not investigated but which does work for Azure Functions, is managed identities. This might not work at all with Container Apps, though:
https://www.svenmalvik.com/azure-apim-function-msi/
First, I think it is good to explain the networking architecture of Azure Container Apps.
Azure Container Apps run in the context of an environment, which is backed by a virtual network (VNet). When you create an environment, you can provide a custom VNet; otherwise, a VNet is generated for you automatically.
There are two ways to deploy Container Apps environments:
External - Container Apps environments deployed as external resources are available for public requests. External environments are deployed with a virtual IP on an external, public-facing IP address.
Internal - When set to internal, the environment has no public endpoint. Internal environments are deployed with a virtual IP (VIP) mapped to an internal IP address. The internal endpoint is an Azure internal load balancer (ILB) and IP addresses are issued from the custom VNET's list of private IP addresses.
I attach an image from the Azure portal to show the two options above:
Going further: if you want your container app to refuse all outside access, create an internal Container Apps environment.
When it comes to deploying Container Apps into the Container Apps environment, the accessibility level you selected for the environment determines the ingress options available to your container app deployments.
If you are deploying to an external environment, you have two options for configuring ingress traffic to your container app:
Limited to Container Apps Environment - to allow only traffic from other container apps deployed within the shared Container Apps environment.
Accepting traffic from anywhere - to allow the application to be accessible from the public internet.
If you are deploying to an internal environment, you also have two options for configuring ingress traffic to your container app:
Limited to Container Apps Environment - to allow only traffic from other container apps deployed within the shared Container Apps environment.
Limited to VNet (Virtual Network) - to allow traffic from the VNet, making the container app accessible to other Azure resources or applications within the virtual network, or connected to it through peering or some form of VPN connectivity.
Now, in your case, what you are looking for is an architecture where access to Azure Container Apps is possible only through Azure API Management. In this case you have to deploy the Azure Container Apps environment in internal mode and set ingress traffic to Limited to VNet (Virtual Network).
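A rough Azure CLI sketch of that deployment, assuming hypothetical names and an existing subnet dedicated to the environment:

# Create the environment in internal mode, attached to an existing subnet.
az containerapp env create -g myRG -n my-internal-env --location westeurope \
  --infrastructure-subnet-resource-id /subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/capp-subnet \
  --internal-only true
# Deploy the app; on an internal-only environment, "external" ingress means
# exposed on the internal load balancer, i.e. limited to the VNet.
az containerapp create -g myRG -n myapi --environment my-internal-env \
  --image myregistry.azurecr.io/myapi:latest --ingress external --target-port 8080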
I assume that Azure API Management should be accessible from the internet. In this case you have to deploy Azure API Management inside an Azure virtual network. There are two possible modes: internal and external. In your scenario, you can use external mode. More details can be found here. When the API Management instance is in external mode, the developer portal, API gateway, and other API Management endpoints are accessible from the public internet, while the backend services are located in the Azure virtual network.
Here I also attach the solution architecture to show how all these components are connected. I also have Azure Front Door here, but API Management is deployed in external mode. Please remember that you will also need a private DNS zone for your Azure Container Apps environment domain, to make it possible to refer to specific APIs from Azure API Management using URLs, for example:
https://ca-tmf-mip-vc-api--v-01.blacklacier-cf61414b.westeurope.azurecontainerapps.io
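A hedged sketch of that DNS setup, where <env-default-domain> stands for the environment's default domain (blacklacier-cf61414b.westeurope.azurecontainerapps.io in the example URL above) and <env-static-ip> for the environment's static internal IP:

az network private-dns zone create -g myRG -n <env-default-domain>
az network private-dns link vnet create -g myRG -n capp-dns-link \
  --zone-name <env-default-domain> --virtual-network myVnet --registration-enabled false
# Wildcard A record so every container app FQDN in the environment resolves
# to the environment's internal load balancer.
az network private-dns record-set a add-record -g myRG --zone-name <env-default-domain> \
  --record-set-name '*' --ipv4-address <env-static-ip>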
Helpful links:
Repo with Bicep files to deploy Azure Container App with internal mode
Azure Container Apps Virtual Network Integration

Exposing a non-HTTP endpoint in Azure

I have an OPC UA server in a Docker container. The server exposes a TCP endpoint using the binary opc.tcp protocol. What are possible methods I can use to expose non-HTTP endpoints in Azure? Thank you.
This question suggested a WCF workaround, but the server is not a WCF application:
How can I host a TCP Listener in Azure?
If it is Docker-based but not HTTP, then Microsoft suggests two possible solutions:
Azure Container Instances - deploy a single Docker instance via the Azure portal, or deploy a multi-container instance as a container group via the Azure CLI. For multi-container instances you have limits on CPU and memory, since everything runs on the same "server", so scaling could be an issue. Adding a static IP is possible and is described in Configure a single public IP address for outbound and inbound traffic to a container group; see the sketch after this list.
Use an AKS/Kubernetes cluster in Azure.
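For the Container Instances option, a minimal sketch of exposing a raw TCP port looks like this (hypothetical names; 4840 is the conventional OPC UA port):

# Expose the binary opc.tcp endpoint directly on a public IP.
az container create -g myRG -n opcua-server \
  --image myregistry.azurecr.io/opcua-server:latest \
  --ip-address Public --ports 4840 --protocol TCP \
  --dns-name-label my-opcua
# The server is then reachable at my-opcua.<region>.azurecontainer.io:4840.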

Azure Container Instance IP

I've created an Azure Container Instance with a private IP. This is connected to a VNet so my web apps can communicate with it in a secure way. (This API also uses bearer tokens, but I don't want to make it public.)
However, when restarting the container I get a new IP. Therefore I have to update the environment variables and restart my apps.
Is there a way to implement service discovery within Azure, so my web apps (and other services) know where this container instance is, especially when the container gets a new IP?
I am used to dealing with Pivotal and Consul but under Azure I don't have these tools.
In Pivotal I was able to fire up multiple instances and the platform would auto discover and load balance. At the moment, Azure feels very manual :(
Does Azure have the ability to register a service under a host name that can then auto-resolve?
Does Azure support load balancing when multiple instances are started with the same name?
For this kind of work, maybe the best solution is to create an Azure App Service web app. The web app has public static IPs linked to your App Service plan, and these do not change.
Inbound and outbound IP addresses in Azure App Service
With this solution, your IP never changes.
According to the documentation linked, you can find your inbound IP like this:
nslookup <app-name>.azurewebsites.net
and the outbound IPs in PowerShell:
(Get-AzWebApp -ResourceGroupName <group_name> -Name <app_name>).OutboundIpAddresses

How can I reach the private IP address of an Azure VM from the container which is inside a Web App (both of the resources are on the same VNet)?

In Azure I have the following resources: a VM and a Linux web app for containers. After putting them on the same VNet, the container started within the App Service can't communicate with the VM over its private IP.
I wanted to include sshd in the container for debugging purposes, but I couldn't connect to the container after connecting the web app to the VNet (which already has the VM on it).
It sounds to me like you still need to configure VNet integration for your web app:
https://learn.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet
And also read here a bit:
Azure Web App for Containers networking VNET
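For reference, regional VNet integration can also be enabled from the CLI; a minimal sketch with hypothetical names (the subnet must be empty and delegated to the web app):

az webapp vnet-integration add -g myRG -n myWebApp --vnet myVnet --subnet integration-subnet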

Network setup for accessing Azure Redis service from Azure AKS

We have an application that runs on an Ubuntu VM. This application connects to the Azure Redis, Azure Postgres, and Azure Cosmos DB (MongoDB API) services.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster. The services will continue to be external and will not reside inside the cluster.
I am trying to understand how the network/firewall of both the services and aks should be configured so that pods inside the cluster can access the above services or any Azure service in general.
I tried the following:
Created a ConfigMap containing the connection params (public IP/address, username/pwd, port, etc.) of all the services and used this ConfigMap in the Deployment resource.
Hardcoded the connection params of all the services as env vars inside the container image.
In the firewall/inbound rules of the services, I added the AKS API IP and the individual node IPs.
None of the above worked. Did I miss anything? What else should be configured?
I tested the setup locally on minikube with all the services running on my local machine and it worked fine.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster.
I assume that you would like all the services to be able to access each other, and that all the services are in the AKS cluster? If so, I advise you to configure an internal load balancer in the AKS cluster.
Internal load balancing makes a Kubernetes service accessible to applications running in the same virtual network as the Kubernetes cluster.
You can try following this document: Use an internal load balancer with Azure Kubernetes Service (AKS). In the end, good luck to you!
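Per that document, an internal load balancer is requested with an annotation on a LoadBalancer-type service; a minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: internal-app                     # hypothetical service name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer                     # provisioned on an internal Azure load balancer
  ports:
  - port: 80
  selector:
    app: internal-app                    # assumes pods labeled app=internal-app
EOF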
Outbound traffic in Azure is SNAT-translated, as stated in this article. If you already have a LoadBalancer-type service in your AKS cluster, outbound connections from all pods in the cluster will go out through the first LoadBalancer service IP; I strongly suggest you create one for the sole purpose of having a consistent outbound IP. You can also pre-create a public IP and use it, as stated in this article, via the loadBalancerIP spec.
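A hedged sketch of that approach (names and the example address are hypothetical; the public IP must live in the cluster's node resource group, usually named MC_..., or the cluster identity needs permissions on its resource group):

# Pre-create a static public IP in the node resource group.
az network public-ip create -g MC_myRG_myAKS_westeurope -n egress-ip \
  --sku Standard --allocation-method Static
# Pin a LoadBalancer service to that IP so outbound SNAT uses it consistently.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: egress-anchor          # exists mainly to pin the outbound IP
spec:
  type: LoadBalancer
  loadBalancerIP: 20.0.0.10    # the address of the egress-ip created above
  ports:
  - port: 80
  selector:
    app: myapp                 # hypothetical label
EOF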
On a side note: rather than a ConfigMap, given the sensitivity of the connection strings, I'd suggest you create a Secret and pass that down to your Deployment, to be mounted or exposed as environment variables.
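For example, a minimal sketch with hypothetical names and values:

# Store the connection string in a Secret rather than a ConfigMap.
kubectl create secret generic svc-conn \
  --from-literal=REDIS_URL='rediss://myredis.redis.cache.windows.net:6380'
# Then, in the Deployment's container spec, expose it as environment variables:
#   envFrom:
#   - secretRef:
#       name: svc-conn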
