Multiple egress IPs for Azure Container Group

I would like to use multiple egress IPs for an Azure Container Group where each container instance in that group uses one of those IPs.
Something like a NAT gateway might work, but apparently that is not supported for container groups.
I also read there may be a workaround with Azure Firewall, but I can't find resources that describe it.
What is the recommended solution for my use case?
Edit: Also noting that our containers are Windows-based, which may add further limitations to our options.

Since Windows containers are not yet supported for deployment in a VNet, routing egress traffic manually is not an option.
References:
Virtual networking for container groups - limitations.
Container groups and the types of scenarios they enable.
Steps to configure a container group in a virtual network integrated with Azure Firewall.
Configure IP addresses for Azure Firewall.
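For reference, here is roughly what the Azure Firewall workaround from the references above looks like for Linux container groups (it cannot apply to Windows container groups, which cannot join a VNet). This is a minimal Azure CLI sketch under assumed placeholder names (myRg, myVnet, aci-subnet, the firewall private IP); note that the firewall SNATs all egress through its public IP(s), it does not pin one fixed IP per container instance:

# Requires the azure-firewall CLI extension and an 'AzureFirewallSubnet'
# subnet in myVnet; all names below are placeholders.
az extension add --name azure-firewall
az network public-ip create -g myRg -n fw-pip --sku Standard
az network firewall create -g myRg -n egress-fw
az network firewall ip-config create -g myRg -f egress-fw -n fw-config \
  --public-ip-address fw-pip --vnet-name myVnet

# Route all outbound traffic from the container subnet through the firewall,
# which SNATs it to the firewall's public IP(s):
az network route-table create -g myRg -n aci-routes
az network route-table route create -g myRg --route-table-name aci-routes \
  -n egress --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance \
  --next-hop-ip-address <firewall-private-ip>
az network vnet subnet update -g myRg --vnet-name myVnet -n aci-subnet \
  --route-table aci-routes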

Related

How to whitelist multiple ip addresses to multiple different azure services?

Right now we have multiple resources, like storage accounts and key vaults, where the team is using the firewall setting within the networking tab on the individual services. This means that when their IP changes after a disconnect/reconnect to the company VPN, they have to go into each service and add their new IP address.
Not being well versed in Azure networking possibilities, what are some options we have to allow a group of incoming IP addresses to access all these services without having to individually touch each service to add each new IP address?
All services are also on the same virtual network.
Thank you.
I used to work on Azure cloud services as a DevOps engineer.
There are multiple ways to control incoming network traffic to your landing zone or Azure resources, but you should make sure the solution meets your requirements.
Here are a few I have used that you could take a look at (a sketch of the first option follows the list):
Virtual network service endpoints
Azure Firewall
Network Security Groups
ExpressRoute
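As a concrete illustration of the first option, a minimal Azure CLI sketch (all names are placeholders, and it assumes clients reach the services through a subnet of the shared VNet, e.g. via a VPN gateway):

# Enable the service endpoints once on the clients' subnet:
az network vnet subnet update -g myRg --vnet-name myVnet -n clients-subnet \
  --service-endpoints Microsoft.Storage Microsoft.KeyVault

# Then allow that subnet on each service instead of individual client IPs:
az storage account network-rule add -g myRg --account-name mystorage \
  --vnet-name myVnet --subnet clients-subnet
az keyvault network-rule add -g myRg -n mykeyvault \
  --vnet-name myVnet --subnet clients-subnet

With this in place, an IP change behind the VPN no longer matters, because access is granted to the subnet rather than to individual addresses.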

How to block traffic between VMs in the same subnet in Azure other than NSG

I know an NSG can easily do that, but for some reason I cannot use an NSG. Is there any other alternative that can do the same? A firewall within the VM might also work, but it's better to control this at the Azure level so I don't have to log in to the VM to configure it.
You can use Application Security Groups (ASGs). ASGs are used within an NSG to apply a network security rule to a specific VM or a group of VMs.
You can start here: https://learn.microsoft.com/en-us/azure/virtual-network/application-security-groups
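A minimal Azure CLI sketch of that pattern (placeholder names; note that the rules themselves still live in an NSG, with the ASGs only replacing explicit IP addresses):

# Group the VMs logically:
az network asg create -g myRg -n web-asg
az network asg create -g myRg -n db-asg

# Attach each VM's NIC to its group:
az network nic ip-config update -g myRg --nic-name web-vm-nic -n ipconfig1 \
  --application-security-groups web-asg

# A rule scoped to groups of VMs rather than to IP addresses:
az network nsg rule create -g myRg --nsg-name subnet-nsg -n deny-web-to-db \
  --priority 200 --direction Inbound --access Deny --protocol '*' \
  --source-asgs web-asg --destination-asgs db-asg \
  --destination-port-ranges '*'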

GitHub self hosted runner have access to Azure resources behind a separate virtual network

I created some GitHub self-hosted runners and would like them to have access to my resources behind a separate virtual network. I know that whitelisting the IP address of a machine will give it access, but I will end up with any number of virtual machines that could be self-hosted runners, so adding and removing whitelisted IP addresses on each of my resources seems like a lot of manual work, or would require automation to whitelist IP addresses on each resource whenever a self-hosted runner is created.
I tried to peer the virtual network my self-hosted runners are connected to with the virtual network of the rest of my resources, thinking it would grant the self-hosted runners access to those resources, but I get a 403 firewall error when I attempt to read or change a resource. Am I missing something here? Reading through the Microsoft documentation makes it seem like peering the virtual networks should work.
I have bidirectional peering between both VNets, with traffic forwarding enabled in the peering settings. The NSGs on both VNets' subnets are just the basic ones that allow inbound and outbound VNet traffic.
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview
Is there a recommended way of going at this?
If you just want to access the resources in one VNet from another VNet, VNet peering is enough. There is one limitation: there must be no NSG associated with the resources or the VNet; if NSGs exist, you need to add rules to allow the traffic. For example, if the resources are VMs, then it should work as needed. Of course, the firewall inside the VMs must also allow the traffic.
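For completeness, a minimal Azure CLI sketch of the peering (placeholder names). If the 403 comes from a PaaS resource's own firewall (e.g. a storage account's networking tab) rather than from an NSG, you would additionally have to allow the runners' subnet on that resource:

# Bidirectional peering (one direction shown; repeat with the VNets swapped):
az network vnet peering create -g myRg -n runners-to-resources \
  --vnet-name runners-vnet --remote-vnet resources-vnet \
  --allow-vnet-access --allow-forwarded-traffic

# If a PaaS firewall is returning the 403, allow the runners' subnet on it:
az network vnet subnet update -g myRg --vnet-name runners-vnet \
  -n runners-subnet --service-endpoints Microsoft.Storage
az storage account network-rule add -g myRg --account-name mystorage \
  --vnet-name runners-vnet --subnet runners-subnet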

Segmentation of Azure Subnet for applications

We manage big environments inside Azure with multiple customers. We are redesigning them, and as part of that we want to manage traffic within multiple common subnets, like app, web, and db subnets.
Essentially, no two different applications inside any common subnet (like db) should be able to communicate with each other.
By default, resources in different subnets of the same VNet can communicate with each other. So you need to use an Azure network security group to filter network traffic to and from Azure resources in an Azure virtual network or subnet.
Application security groups enable you to configure network security as a natural extension of an application's structure, allowing you to group virtual machines and define network security policies based on those groups. You can reuse your security policy at scale without manual maintenance of explicit IP addresses. To learn more, see Application security groups.
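For the specific requirement that applications sharing the db subnet cannot talk to each other, a minimal NSG sketch (placeholder names and address prefix; per-application allow rules, e.g. scoped with ASGs as above, would go in at lower priority numbers so they are evaluated first):

# Deny east-west traffic inside the shared db subnet by default:
az network nsg rule create -g sharedRg --nsg-name db-subnet-nsg \
  -n deny-intra-db-subnet --priority 4000 --direction Inbound --access Deny \
  --protocol '*' --source-address-prefixes 10.0.3.0/24 \
  --destination-address-prefixes 10.0.3.0/24 --destination-port-ranges '*'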
For PaaS services like Azure App Service or Azure SQL Database, you could use VNet Integration to access VNet resources in a private network, or use virtual network service endpoints and rules for servers in Azure SQL Database.
For more information, see:
https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/dmz/secure-vnet-dmz
https://learn.microsoft.com/en-us/azure/networking/networking-overview

Is it possible to use Azure API Management and Azure ACS (kubernetes) as frontend and backend?

I would like to create a simple architecture on Azure. My high-level design is very similar to the picture below (source: https://www.import.io/post/using-amazon-lambda-and-api-gateway/).
I want to access the internal services via Azure API Management. From what I can see on Microsoft's documentation page, this simple and secure architecture is not mentioned as a reference:
https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough
I have the following issues:
API Management cannot be assigned to a Virtual Network if at least one NIC is using the same network (why?)
Even with peered Virtual Networks I cannot access the 10.244.X.0/24 network (the pods' network), because only 10.240.0.0/16 is owned by the k8s Virtual Network. How can I access cluster IPs (10.0.0.0/16) and pod IPs (10.244.0.0/16)?
Well, you don't need an extra VNet, just an extra subnet. That subnet can lie within your existing VNet, and its size can be as small as /29, the smallest Azure supports.
The extra-subnet requirement for API Management comes from the fact that it is built on PaaS v1 (Classic) technology. While it can be deployed into a Resource Manager VNet (the v2 layer), there are consequences: the Classic deployment model is not tightly coupled with the Resource Manager model, so if you create a resource on the v2 side, the v1 side doesn't know about it, and problems can happen, such as API Management trying to use an IP that is already allocated to a NIC (built on v2).
To learn more about the differences between the Classic and Resource Manager models in Azure, refer to the blog post on the difference between Classic and Resource Manager models.
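A minimal sketch of carving out that dedicated /29 in the existing VNet (placeholder names and address range):

# Dedicated subnet for API Management inside the existing VNet:
az network vnet subnet create -g myRg --vnet-name existing-vnet \
  -n apim-subnet --address-prefixes 10.0.5.0/29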
The answer is basically YES although the setup is not trivial.
You need:
One extra VNet for the API Management (EDIT: an extra subnet is enough)
One service (kubernetes terminology)
Steps:
Peer the Kubernetes VNet and the extra VNet you have created (test it)
API Management -> Virtual network: change to External
Choose as the Virtual Network the one extra VNet (let's call it 'apimgmntvnet') and a subnet
Save it! Drink a beer because it took me 1h!
Meanwhile expose your deployment internally:
kubectl expose deployment app --port=<serviceport> --name=app --target-port=<containerport> --type=NodePort
(NodePort is important! The LoadBalancer type triggers Kubernetes to dynamically configure the Azure external load balancer for the Kubernetes install.)
Check the node IP:PORT in the Kubernetes web UI (kubectl proxy).
API Management -> Publisher portal: point your API at that IP address (AgentIP:30361).
Theoretically it should work. It is advisable to start with a VM in the apimgmntvnet, test the peering from that VM first (see the sketch below), and then delete the VM (API Management cannot be part of a VNet where at least one NIC is present (?!)).
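A sketch of the peering step from the list above (placeholder names; the NodePort trick exists precisely because peering only routes the VNet's own 10.240.0.0/16 address space, not the pod network):

# Peer the k8s VNet and the API Management VNet in both directions:
az network vnet peering create -g myRg -n apim-to-k8s \
  --vnet-name apimgmntvnet --remote-vnet k8s-vnet --allow-vnet-access
az network vnet peering create -g myRg -n k8s-to-apim \
  --vnet-name k8s-vnet --remote-vnet apimgmntvnet --allow-vnet-access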
