I have an OPC UA server in a Docker container. The server exposes a TCP endpoint using the binary opc.tcp protocol. What are possible methods I can use to expose non-HTTP endpoints in Azure? Thank you.
The related question "How can I host a TCP Listener in Azure?" suggested a WCF workaround, but the server is not a WCF application.
If it is Docker-based but not HTTP, then Microsoft suggests two possible solutions (sketched after the list):
Azure Container Instances - deploy a single Docker instance via the Azure portal, or deploy a multi-container group via the Azure CLI. For multi-container groups, the CPU and memory limits apply to the group as a whole, since everything runs on the same "server", so scaling could be an issue. Adding a static IP is possible and described here: Configure a single public IP address for outbound and inbound traffic to a container group
Use an AKS/Kubernetes cluster in Azure.
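For the Container Instances route, here is a minimal sketch with the Azure CLI; the resource group, name, and image are placeholders, and 4840 is the standard OPC UA port (adjust it to whatever your server actually listens on):

# Expose a raw TCP port publicly from a single container instance
az container create \
  --resource-group my-rg \
  --name opcua-server \
  --image myregistry.azurecr.io/opcua-server:latest \
  --ports 4840 \
  --protocol TCP \
  --ip-address Public \
  --dns-name-label my-opcua-server

Clients can then connect to opc.tcp://my-opcua-server.<region>.azurecontainer.io:4840. On AKS, the equivalent is a Kubernetes Service of type LoadBalancer exposing a TCP port.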
I have a set of containerized HTTP services that I wrote. The services are configured using a docker-compose.yml and a collection of Dockerfiles to build the service images. I would like to host my docker-compose.yml setup on Azure; specifically, one of my HTTP services requires the ability to rotate which outbound public IP address it makes requests from (similar to a proxy).
I have looked at the following resources:
Azure App Service
Container App
Container Instances
Virtual Machine
I have been able to deploy my app successfully and test it on all of these solutions. However, my issue is that one of my services needs to rotate the outbound IP address that it makes requests from (kind of like using a proxy to make requests).
I can accomplish this with a virtual machine by adding a VNet with multiple IP address resources associated. This works perfectly fine, but using the virtual machine cuts me off from the benefits of Azure's other managed container services.
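For reference, this is roughly how the VM approach is wired up with the Azure CLI; the resource names below are placeholders, and each additional public IP becomes a secondary IP configuration on the VM's NIC:

# Create an additional public IP (repeat for each address you want to rotate through)
az network public-ip create --resource-group my-rg --name vm-ip-2 --sku Standard

# Attach it to the VM's NIC as a secondary IP configuration
az network nic ip-config create \
  --resource-group my-rg \
  --nic-name my-vm-nic \
  --name ipconfig2 \
  --public-ip-address vm-ip-2

The application can then bind its outbound sockets to a specific private IP to choose which public IP its requests come from.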
I have read the docs for App Service, Container App, and Container Instances, and it doesn't seem like it's possible to assign these resources to a VNet with my current configuration.
Any advice on how I could go about solving my problem is appreciated.
I want to restrict access to my Azure Container App with an API Management instance in Azure.
I successfully linked the API Management instance with the Container App, and I have activated a subscription with an API key that prevents public access via the API Management service URL. The problem, however, is that the Container App can still be accessed via its own public URL.
There is still the option to set the ingress traffic in the Container App to Limited to Container Apps Environment, but then the API Management instance will not have access to the Container App either.
What is the correct way to properly secure the Container App behind an API Management service?
For Azure Container Instances, you don't have the option to configure IP restrictions like you can on Azure App Service. Instead, you first have to create a virtual network, configure a Network Security Group to deny all traffic from the internet and allow traffic only from APIM, and then deploy your Azure Container Instance into this virtual network.
See here for deploying an Azure container instance to a virtual network: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-vnet
For configuring network security groups in your virtual network, see: https://learn.microsoft.com/en-us/azure/virtual-network/manage-network-security-group#work-with-security-rules
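A minimal sketch of those NSG rules with the Azure CLI; the resource names and the APIM public IP (20.0.0.4 here) are placeholders. NSG rules are evaluated from the lowest priority number up, so the allow rule must have a smaller number than the deny rule:

# Allow inbound HTTPS only from the APIM public IP
az network nsg rule create \
  --resource-group my-rg --nsg-name aci-nsg --name allow-apim \
  --priority 100 --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes 20.0.0.4 --destination-port-ranges 443

# Deny all other inbound traffic from the internet
az network nsg rule create \
  --resource-group my-rg --nsg-name aci-nsg --name deny-internet \
  --priority 200 --direction Inbound --access Deny --protocol '*' \
  --source-address-prefixes Internet --destination-port-ranges '*'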
Your App Service is still accessible over the public internet because you haven't configured Access Restrictions under your App Service's networking settings.
What you need to do is go to your App Service, select Networking from the left menu, and turn on Access Restrictions for inbound traffic.
Create an access restriction rule to deny traffic from the internet.
Next, create a second access rule to allow access from APIM. Ensure this rule takes precedence, i.e. give it a lower priority number so it is evaluated before the deny rule.
Read the Microsoft docs on how to set App Service IP restrictions here: https://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions
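The same thing can be done from the Azure CLI; a sketch with a placeholder app name and APIM IP (note that as soon as one allow rule exists, App Service implicitly denies all other traffic):

# Allow only the APIM public IP; everything else is then implicitly denied
az webapp config access-restriction add \
  --resource-group my-rg --name my-app \
  --rule-name allow-apim --action Allow \
  --ip-address 20.0.0.4/32 --priority 100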
Assuming your API Management service has a static IP (i.e. it is not on the consumption tier), you would need to use your own VNet:
Networking architecture in Azure Container Apps
Then, using an NSG, you can add an inbound rule to allow traffic only from the APIM service IP on HTTPS (TCP 443), along the lines of the NSG rules sketched earlier.
Azure Container Apps now does seem to have the ability to restrict inbound IP addresses:
https://azure.microsoft.com/en-gb/updates/public-preview-inbound-ip-restrictions-support-in-azure-container-apps/
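Assuming the containerapp Azure CLI extension is installed and this preview feature is available to you, the restriction can be set along these lines (app name and IP are placeholders):

# Allow ingress only from the APIM public IP
az containerapp ingress access-restriction set \
  --resource-group my-rg --name my-container-app \
  --rule-name allow-apim --action Allow \
  --ip-address 20.0.0.4/32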
We are looking at a similar architecture with a similar dilemma. Everything we have is secured with Azure B2C, but if I want to make an internal container/microservice accessible to Azure API Management, I think I'd have to drop B2C (API Management has no UI to log into B2C) and make it publicly accessible via the ingress. If the inbound IP addresses are restricted to API Management, maybe that is OK. It does worry me that IP addresses can be spoofed, although you'd hope Microsoft has thought of that.
Another alternative, which I've not investigated but which does work for Azure Functions, is managed identities. This might not work at all with Container Apps, though:
https://www.svenmalvik.com/azure-apim-function-msi/
First, I think it is good to explain the networking architecture of Azure Container Apps.
Azure Container Apps run in the context of an environment, which is supported by a virtual network (VNET). When you create an environment, you can provide a custom VNET, otherwise a VNET is automatically generated for you.
There are two ways to deploy Container Apps environments:
External - Container Apps environments deployed as external resources are available for public requests. External environments are deployed with a virtual IP on an external, public-facing IP address.
Internal - When set to internal, the environment has no public endpoint. Internal environments are deployed with a virtual IP (VIP) mapped to an internal IP address. The internal endpoint is an Azure internal load balancer (ILB) and IP addresses are issued from the custom VNET's list of private IP addresses.
(These are the two options you see in the Azure portal when creating an environment.)
Now, going further: if you want to restrict all outside access to your container app, create an internal Container Apps environment.
Now, when it comes to deploying Container Apps into the Container Apps environment, the accessibility level you selected for the environment affects the available ingress options for your container app deployments.
If you are deploying to an external environment, you have two options for configuring ingress traffic to your container app:
Limited to Container Apps Environment - to allow only traffic from other container apps deployed within the shared Container Apps environment.
Accepting traffic from anywhere - to allow the application to be accessible from the public internet.
If you are deploying to an internal environment, you also have two options for configuring ingress traffic to your container app:
Limited to Container Apps Environment - to allow only traffic from other container apps deployed within the shared Container Apps environment.
Limited to VNet (Virtual Network) - to allow traffic from the VNet, making the container app accessible to other Azure resources or applications within the virtual network, or connected to it through peering or some type of VPN connectivity.
Now, in your case, what you are looking for is an architecture where access to the Azure Container Apps is possible only through Azure API Management. In this case you have to deploy the Azure Container Apps environment in internal mode and set ingress traffic to Limited to VNet (Virtual Network).
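A sketch of creating such an internal environment with the Azure CLI (assuming the containerapp extension; the subscription ID, names, and region are placeholders):

# Create an internal-only Container Apps environment inside your own VNet subnet
az containerapp env create \
  --resource-group my-rg --name my-aca-env \
  --location westeurope \
  --infrastructure-subnet-resource-id "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/aca-subnet" \
  --internal-only true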
I assume that the Azure API Management instance has to be accessible from the internet. In this case you have to deploy Azure API Management inside an Azure virtual network. There are two possible modes: internal and external. In your scenario, you can use external mode; more details can be found here. When the API Management instance is in external mode, the developer portal, API gateway, and other API Management endpoints are accessible from the public internet, while backend services are located in the Azure virtual network.
In my solution architecture, these components are connected together behind Azure Front Door, with API Management deployed in external mode. Please remember that you will also need a private DNS zone for your Azure Container Apps environment domain, to make it possible to refer to specific APIs from Azure API Management using URLs, for example:
https://ca-tmf-mip-vc-api--v-01.blacklacier-cf61414b.westeurope.azurecontainerapps.io
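A sketch of that private DNS zone wiring with the Azure CLI; the zone name must match the environment's default domain, and the wildcard A record points at the environment's static internal load balancer IP (all values here are placeholders):

# Private DNS zone named after the Container Apps environment's default domain
az network private-dns zone create \
  --resource-group my-rg --name blacklacier-cf61414b.westeurope.azurecontainerapps.io

# Link the zone to the VNet that hosts APIM and the environment
az network private-dns link vnet create \
  --resource-group my-rg --zone-name blacklacier-cf61414b.westeurope.azurecontainerapps.io \
  --name aca-dns-link --virtual-network my-vnet --registration-enabled false

# Wildcard record resolving every container app to the environment's ILB IP
az network private-dns record-set a add-record \
  --resource-group my-rg --zone-name blacklacier-cf61414b.westeurope.azurecontainerapps.io \
  --record-set-name '*' --ipv4-address 10.0.0.100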
Helpful links:
Repo with Bicep files to deploy Azure Container App with internal mode
Azure Container Apps Virtual Network Integration
I am having a hard time finding a solution for this.
I have an Azure internal Load Balancer (layer 4), and I have ONLY one virtual machine acting as the backend pool for the said Load Balancer.
And the fun part starts here: I have multiple Docker containers running on that virtual machine, running Nginx web servers on ports 8080 and 8081.
And now I want to balance the load between these two ports. Literally what I want is this: a request to abc.xyz.com hits the Load Balancer, which should then route the traffic to the single VM running multiple Docker containers on multiple ports.
How can I achieve this behavior?
I have already set up a frontend configuration with a private IP, a rule, and a backend pool.
As per this article (https://learn.microsoft.com/en-us/azure/container-instances/container-instances-virtual-network-concepts#unsupported-networking-scenarios), placing an Azure Load Balancer in front of container instances in a networked container group is not supported, and it is likewise not possible to route traffic to the specific container ports on a single virtual machine. Azure Load Balancer works at the VM level, not the container level.
The only workaround for this scenario would be to use Azure Application Gateway, since a microservice architecture is supported on Application Gateway. To probe different ports, you need to configure multiple HTTP settings (sketched below). Reference:
https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq#can-one-backend-pool-serve-many-applications-on-different-ports
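A minimal sketch of the multiple-HTTP-settings part with the Azure CLI (the gateway and setting names are placeholders); each settings object is then referenced from its own routing rule or URL path map:

# One backend HTTP settings object per container port
az network application-gateway http-settings create \
  --resource-group my-rg --gateway-name my-appgw \
  --name nginx-8080 --port 8080 --protocol Http

az network application-gateway http-settings create \
  --resource-group my-rg --gateway-name my-appgw \
  --name nginx-8081 --port 8081 --protocol Http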
Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications, and you can create an internal Application Gateway. To do that, create an Application Gateway with both public and private frontend IP addresses and do not create any listeners for the public frontend IP address; Application Gateway will not listen to any traffic on the public IP address if no listeners are created for it.
Reference: https://learn.microsoft.com/en-us/azure/application-gateway/configuration-front-end-ip ,
https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq#how-do-i-use-application-gateway-v2-with-only-private-frontend-ip-address
I've created an Azure Container Instance with a private IP. This is connected to a VNet so my web apps can communicate with it in a secure way. (This API uses Bearer tokens as well, but I don't want to make it public.)
However, when restarting the container I get a new IP, so I have to update the environment variables and restart my apps.
Is there a way to implement service discovery within Azure, so my web apps (and other services) know where this container instance is, especially when the container gets a new IP?
I am used to dealing with Pivotal and Consul but under Azure I don't have these tools.
In Pivotal I was able to fire up multiple instances and the platform would auto-discover and load balance. At the moment, Azure feels very manual :(
Does Azure have the ability to register a service under a hostname that can then auto-resolve?
Does Azure support load balancing when multiple instances are started with the same name?
For this kind of requirement, maybe the best solution is to create an Azure App Service web app. The web app has a public static inbound IP linked to your App Service plan, which does not change.
Inbound and outbound IP addresses in Azure App Service
With this solution your IP never changes.
According to the documentation linked, you can find your inbound IP like this:
nslookup <app-name>.azurewebsites.net
and the outbound IPs in PowerShell:
(Get-AzWebApp -ResourceGroupName <group_name> -Name <app_name>).OutboundIpAddresses
I have a container (Linux, .NET Core) running in Azure. This application reads from Azure Service Bus and writes information to a database on-premises.
The connection to ASB is working fine, but when the application tries to connect to SQL Server, I get a timeout. Initially I was running the container with no network setup (the 'None' option); then I switched to public and it now gives me an IP address.
My infrastructure team added this IP to our firewall, but either Azure is making the connection from a different IP address, or the connection never leaves the Docker environment.
P.S.: I have an App Service running (a .NET Core API) and it does connect to the same SQL Server (same IP address) correctly.
Suggestions?
Since the outbound IP address of an Azure container group is picked at random from the Azure cloud IP ranges, you cannot simply add its IP to the firewall. You can vote up this feature request asking for the same exposed public IP to be used for outbound traffic from the container group.
Currently, you can deploy container instances into an Azure virtual network, and the container can then communicate with on-premises resources through a VPN gateway or ExpressRoute. For more details, see enable containers to use Azure Virtual Network capabilities.
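A sketch of the VNet deployment with the Azure CLI (all names and the image are placeholders); once the container group is in the subnet, its traffic to on-premises flows through whatever VPN gateway or ExpressRoute connection the VNet has:

# Deploy the container group into an existing VNet subnet (no public IP)
az container create \
  --resource-group my-rg --name sb-to-sql-worker \
  --image myregistry.azurecr.io/worker:latest \
  --vnet my-vnet --subnet aci-subnet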