How to set up an FQDN for Azure Container Instance - azure

I am using Azure Container Registry to store my private docker image and Azure Container Instance to deploy it.
I get a public IP address which is OK for verification and a simple preview, but not usable (or shareable with customers) since the IP address is dynamic.
Is there a way to set up a fully qualified domain name that I can use instead of a changing IP address on every container restart?
Browsing through the documentation does not reveal anything about that.

You can now set a dns-name-label as a property for your container group. Details are shared in this answer - hope this helps and thanks for being a user!
Azure Container Group IP Address disappeared

Is there a way to set up a fully qualified domain name that I can use
instead of a changing IP address on every container restart?
Unfortunately, for now, Azure does not support setting a static public IP address for a container instance; Azure Container Instances is still in preview.
In the future, we will expand our networking capabilities to include
integration with virtual networks, load balancers, and other core
parts of the Azure networking infrastructure.
For more information about Azure Container Instances networking, please refer to this link.
As a workaround, we can deploy a VM, run Docker on it, and set a static public IP address for that VM; then, when Docker restarts, we will not lose this public IP address.
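A minimal sketch of that workaround with the Azure CLI (the resource group, VM name, and image alias are placeholders; --public-ip-address-allocation static is what keeps the IP across restarts):
az vm create \
  --resource-group myResourceGroup \
  --name docker-host \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys \
  --public-ip-address-allocation static
Then install Docker on the VM and run your container there; the VM's public IP stays the same across Docker and container restarts.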

When using a docker-compose.yml file on ACI (Azure Container Instances), the domainname property is used for the FQDN.
services:
  service-name:
    image: ****
    domainname: **FQDN**
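As a hedged usage sketch: with the Docker ACI integration (which has since been retired, so this only applies to older Docker Desktop versions) such a compose file would be deployed roughly like this, where myacicontext is a hypothetical context name:
docker login azure
docker context create aci myacicontext
docker --context myacicontext compose up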

Looks like you can do it via Azure CLI using the dns-name-label flag:
az container create --resource-group myResourceGroup --name mycontainer --image mcr.microsoft.com/azuredocs/aci-helloworld --dns-name-label aci-demo --ports 80
src here
This will result in the following FQDN: aci-demo.westeurope.azurecontainer.io (westeurope being your location)
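To confirm the name that was assigned, you can query it afterwards (a small sketch assuming the same resource group and container name as above):
az container show --resource-group myResourceGroup --name mycontainer --query ipAddress.fqdn --output tsv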

Related

cannot access ACI restful endpoint deployed to VN

I deployed a docker image to an ACR and then to an ACI with a command like this:
az container create \
  --resource-group myrg \
  --name myamazingacr \
  --image myamazingacr.azurecr.io/test3:v1 \
  --cpu 1 \
  --memory 1 \
  --vnet myrg-vnet \
  --vnet-address-prefix 10.0.0.0/16 \
  --subnet default \
  --subnet-address-prefix 10.0.0.0/24 \
  --registry-login-server myamazingacr.azurecr.io \
  --registry-username xxx \
  --registry-password xxx \
  --ports 80
This all works without error; the IP of the ACI is 10.0.0.5 and there is no FQDN since it sits in a virtual network. I think this makes sense.
When I run the image outside Azure (i.e. on my local machine where I created the image) I can successfully access an endpoint like this:
http://127.0.0.1/plot
http://127.0.0.1/predict_petal_length?petal_width=3
[127.0.0.1] indicates that I run the image on the local machine.
However, this does not work:
http://10.0.0.5/plot
http://10.0.0.5/predict_petal_length?petal_width=3
I get:
This site can’t be reached. 10.0.0.5 took too long to respond.
What could be wrong please?
PS:
Maybe it is related to this:
https://learn.microsoft.com/en-us/answers/questions/299493/azure-container-instance-does-not-resolve-name-wit.html
I have to say I find Azure really frustrating to work with. Nothing really seems to work. Starting with Azure ML to ACIs ...
PPS:
This is what our IT says - tbh I do not fully understand ...
• Private endpoints are not supported, so we need to create a vnet in the resource group, peer it to the current dev vnet, and we should be good
• We basically need to know how we can create an ACR with the network in an existing vnet in a different resource group. I am struggling to find the correct way to do this.
Since you have deployed your ACI into an Azure virtual network, your containers can communicate securely with other resources in the virtual network. So you could access the ACI endpoint in the Azure vNet.
For example, you can try to deploy a VM in the vNet, but in a different subnet than your ACI; then try to access the ACI endpoint from that Azure VM.
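A rough sketch of that test with the Azure CLI (the jump VM name and subnet are assumptions; the subnet must be a different, non-delegated subnet in the same VNet):
az vm create \
  --resource-group myrg \
  --name jump-vm \
  --image Ubuntu2204 \
  --vnet-name myrg-vnet \
  --subnet jump-subnet \
  --admin-username azureuser \
  --generate-ssh-keys
# then, from an SSH session on that VM:
curl http://10.0.0.5/plot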
Alternatively, you can expose a static IP address for a container group by using an application gateway with a public frontend IP address.
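If you go the Application Gateway route, a hedged sketch could look like the following (all names are hypothetical, the gateway needs its own subnet, and required flags such as --sku and --priority vary with the CLI version):
az network public-ip create --resource-group myrg --name appgw-pip --sku Standard --allocation-method Static
az network application-gateway create \
  --resource-group myrg \
  --name aci-appgw \
  --sku Standard_v2 \
  --capacity 1 \
  --vnet-name myrg-vnet \
  --subnet appgw-subnet \
  --public-ip-address appgw-pip \
  --servers 10.0.0.5 \
  --frontend-port 80 \
  --http-settings-port 80 \
  --http-settings-protocol Http \
  --priority 100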
The possible reason for your issue is that your application listens on the wrong IP address. The IP address 127.0.0.1 is the localhost or loopback IP and can only be reached from inside the machine itself. Take a look here. So try changing the listen address to 0.0.0.0, which is reachable from outside.
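For example, if the API happens to be served with Flask or gunicorn (an assumption; adapt this to whatever framework your image actually uses), the change is roughly:
# instead of listening on 127.0.0.1 inside the container:
flask run --host=0.0.0.0 --port=80
# or, with gunicorn:
gunicorn --bind 0.0.0.0:80 app:app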

docker - Azure Container Instance - how to make my container accessible and recognized from outside?

I have a Windows container which should access an external VM database (that is not in a container; let's say VM1), so I would define the l2bridge network driver for it in order to use the same virtual network.
docker network create -d "l2bridge" --subnet 10.244.0.0/24 --gateway 10.244.0.1
-o com.docker.network.windowsshim.vlanid=7
-o com.docker.network.windowsshim.dnsservers="10.244.0.7" my_transparent
So I suppose we definitely need to stick with this.
But now I also need to make my container accessible from outside on port 9000, from other containers as well as from other VMs. I suppose this has to be done based on its name (hostname), since the IP will change after each restart.
How should I make my container accessible from some other virtual machine VM2 - should I make any modifications to the network configuration? Or do I just need to make sure they are both using the same DNS server?
Of course I will expose the port, but should I do any additional network configuration to allow traffic on that specific port? I've read that by default network traffic is not allowed and that Windows may block some things.
I will appreciate help on this.
Thanks
It seems you are not using Azure Container Instances; you just run the container on a Windows VM. If I am right, the best way to make the container accessible from outside is to run it without a custom network and just map the container port to a host port. Then the container is reachable from outside on that exposed port. Here is an example command:
docker run --rm -d -p host_port:container_port --name container_name image
Then you can access the container outside through the public IP and the host port of the VM.
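Applied to port 9000 from your question, a hedged example (image and names are placeholders) would be:
docker run --rm -d -p 9000:9000 --name myservice my-windows-image
# allow the port through the Windows firewall on the host:
netsh advfirewall firewall add rule name="Allow 9000" dir=in action=allow protocol=TCP localport=9000
# from VM2 or any machine that can reach the host:
curl http://<docker-host-ip>:9000
If the host is an Azure VM, you also need to allow port 9000 in its network security group.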
Update:
OK, if you use ACI to deploy your containers via docker-compose, then you need to know that ACI does not support Windows containers in a VNet. This means Windows containers cannot be created with a private IP in a VNet.
If you use Linux containers, you can deploy them in a VNet, but you need to follow the steps here. The containers then have private IPs and are accessible within the VNet through those private IPs.

Azure Container Instance - dns and subnet in the same container

I have an Azure Container Instance with a subnet configuration (I need to access an internal service). But I also need to configure DNS.
I try to create the container, but it returns this message: The IP address type can not be public when the network profile is set.
Is it possible to configure DNS and the subnet in the same container?
Unfortunately, if you deploy Azure Container Instances into a subnet of a VNet, then you cannot set a public IP or DNS name label for it. Azure does not support it, at least for now; maybe it will be supported in the future. For more details, see Virtual network deployment limitations.
Container groups deployed to a virtual network do not currently
support public IP addresses or DNS name labels.
Hope this will help you.
The error with the network profile looks like a bug in the az command tool. If you just specify your VNET name and subnet name then it will create a network profile name.
If you want to use DNS to resolve these names you'll need to set up DNS separately, and call an additional az command to configure the DNS after you create the container instance.
az network dns record-set a add-record ...
See this doc for using Azure DNS with private IP addresses.
Use Azure DNS for private domains
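For completeness, a hedged sketch of that record-set command with hypothetical zone, record, and resource names (newer CLI versions use az network private-dns for private zones instead):
az network dns record-set a add-record \
  --resource-group myrg \
  --zone-name internal.contoso.com \
  --record-set-name myaci \
  --ipv4-address 10.0.0.5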

Azure VMSS: Retrieve FQDN

I have created a virtual machine scale set in Azure and now need to access the FQDN of an instance from inside the VM. Tried these:
1. Using the Azure metadata service. It surprisingly does not have an FQDN field.
2. Used hostname -f. It gave an FQDN, but I think it is only meant for internal use within Azure, as it is not reachable from outside.
3. Tried listing the public IPs of the VMSS, but how to filter them to show the public IP related to my VM escapes me.
Update: In AWS, the command curl http://169.254.169.254/latest/meta-data/public-hostname gives the intended output. I am looking for its equivalent.
For now, it's not possible to get the FQDN from the metadata server.
More information about the data categories available through the Instance Metadata Service can be found at this link.
Are your VMSS instances created with public IP addresses? If yes, you can use PowerShell or the Azure portal to find the FQDN.
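If you prefer the Azure CLI, here are two hedged checks (resource names are hypothetical): the instance metadata endpoint shows what is exposed (there is no public FQDN field), and the VMSS public IP listing includes the FQDN when a DNS label is configured on the scale set:
# from inside a VMSS instance:
curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2021-02-01"
# from your workstation:
az vmss list-instance-public-ips --resource-group myrg --name myvmss --query "[].{ip:ipAddress, fqdn:dnsSettings.fqdn}" --output table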

SSH to Azure's Kubernetes managed master node

I just deployed a managed Kubernetes cluster with Azure Container Service. My deployment includes a single agent machine in the managed cluster and an Azure disk attached to it for persistent storage.
The problem I am facing is that I don't know how to SSH into this agent server. I read that you should be able to SSH into the master node and connect to the agent from there, but as I am using a managed Kubernetes master I can't find a way of doing this.
Any idea? Thank you in advance.
The problem I am facing is that I don't know how to SSH into this
agent server.
Do you mean you created an AKS cluster and can't find the master VM?
If I understand it correctly, that is by-design behavior: AKS does not provide direct access (such as with SSH) to the cluster.
If you want to SSH to the agent node, as a workaround, we can create a public IP address and associate it with the agent's NIC; then we can SSH to this agent.
Here are my steps (a CLI sketch of steps 1 and 2 follows after the note below):
1. Create a public IP address via the Azure portal.
2. Associate the public IP address with the agent VM's NIC.
3. SSH to the VM with this public IP address.
Note:
By default, the SSH key is the one specified when the AKS cluster was created.
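If you prefer the CLI over the portal for steps 1 and 2, a rough sketch (the node resource group, NIC, and ipconfig names are assumptions; look them up first with az network nic list):
az network public-ip create --resource-group MC_myrg_myaks_westeurope --name node-pip
az network nic ip-config update \
  --resource-group MC_myrg_myaks_westeurope \
  --nic-name aks-nodepool1-12345678-nic-0 \
  --name ipconfig1 \
  --public-ip-address node-pip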
Basically, you don't even have to create a public IP for that node. Simply add a public SSH key to the desired node with the Azure CLI:
az vm user update --resource-group <NODE_RG> --name <NODE_NAME> --username azureuser --ssh-key-value ~/.ssh/id_rsa.pub
Then run a temporary pod (don't forget to switch to the desired namespace in your kubernetes config):
kubectl run -it --rm aks-ssh --image=debian
Copy private ssh key to that pod:
kubectl cp ~/.ssh/id_rsa <POD_NAME>:/id_rsa
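Note that the plain debian image does not ship an SSH client, so inside the pod you may first need something like:
apt-get update && apt-get install -y openssh-client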
Finally, from inside the pod, connect to the AKS node over its private IP:
ssh -i id_rsa azureuser@<NODE_PRIVATE_IP>
This way, you don't have to pay for a public IP and, in addition, it is better from a security perspective.
The easiest way is to use the tool below; it creates a tiny privileged pod on the node and accesses the node using nsenter.
https://github.com/mohatb/kubectl-wls
