I'm deploying to Azure Container Instances from the Azure Container Registry (via the Azure CLI and/or the portal). Azure blobfuse (on Ubuntu 18) is giving me the following error:
device not found, try 'modprobe fuse' first.
The solution to this would be to use the --cap-add=SYS_ADMIN --device /dev/fuse flags when starting the container (docker run):
can't open fuse device in a docker container when mounting a davfs2 volume
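For example, a plain docker run with those flags would look something like this (the image name is a placeholder):
docker run --cap-add=SYS_ADMIN --device /dev/fuse myimage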
However, the --cap-add flag is not supported by ACI:
https://social.msdn.microsoft.com/Forums/azure/en-US/20b5c3f8-f849-4da2-92d9-374a37e6f446/addremove-linux-capabilities-for-docker-container?forum=WAVirtualMachinesforWindows
Azure Files is too expensive for our scenario.
Any suggestions on how to use blobfuse or Azure Blob Storage (quasi-natively from Node.js) from a Linux Docker container in ACI?
Unfortunately, it seems it's impossible to mount blobfuse or Azure Blob Storage to an Azure Container Instance. There are just four volume types that can be mounted (azureFile, emptyDir, gitRepo, and secret). You can take a look at the Azure template for Azure Container Instances, which shows all the properties of ACI, and you can see all the supported volume objects there.
Maybe other volume types that we can mount to a Docker container will be supported in the future. Hope this helps.
I currently use k3s ctr to import my image files into the containerd registry for my local VM k3s setup.
sudo /usr/local/bin/k3s ctr image import file.image
However, I am now moving to Azure Kubernetes Service (AKS) going forward, but I still don't want to use Docker. Most documentation I found online on pushing images to AKS involves using a Docker registry, which is not an option for me for now.
My question is:
How can I use ctr or any other containerd operation to push file.image to the Azure Container Registry without involving Docker or Docker Hub? Is there a way to transfer images from my local containerd registry to the Azure Container Registry?
You can still use ctr to manage your container images; it includes the push and tag actions that you are looking for.
Please have a look at the documentation:
https://www.mankier.com/8/ctr#images,_image,_i
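A minimal sketch of that flow (the registry name, credentials, and image names are placeholder assumptions; on k3s, prefix ctr with k3s as above):
# import the image file into containerd
sudo k3s ctr image import file.image
# tag it for your ACR registry (the source name must match what the import reported)
sudo k3s ctr image tag docker.io/library/myapp:latest myregistry.azurecr.io/myapp:latest
# push it to ACR, authenticating with a username/password or token
sudo k3s ctr image push --user myuser:mypassword myregistry.azurecr.io/myapp:latest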
I mounted Azure File shares in AKS deployments using the cluster UAMI with the Reader & Storage Account Key Operator Service roles. The share was successfully mounted in all the pod replicas, and I was able to create files on and list all the files of the Azure file share from a pod. But it stopped working after key rotation. I also tried creating a new deployment, storage class, and PVC, and I am still facing permission issues while the pods are getting created.
Stage 1: (First Time Process)
Created the AKS cluster, the storage file share, and the user-assigned managed identity.
Assigned the UAMI to the cluster and granted it the Reader & Storage Account Key Operator Service roles on the new storage account scope.
Created a new custom storage class, PVC, and deployments.
Result: All functionalities were working as expected.
Stage 2: (Failure Process)
Created a new deployment after key rotation, as the existing pods were unable to access the Azure file share (permission issue).
Then created a new storage class/PVC/deployment - still the same permission issue.
Error:
default 13s Warning FailedMount pod/myapp-deploymentkey1-67465fb9df-9xcrz MountVolume.SetUp failed for volume "xx" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o file_mode=0777,dir_mode=0777,vers=3.0,actimeo=30,mfsymlinks,<masked> //{StorageName}.file.core.windows.net/sample1 /var/lib/kubelet/pods/xx8/volumes/kubernetes.io~azure-file/pvc-cxx
Output: mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
default 13s Warning FailedMount pod/myapp-deploymentkey1-67465fb9df-jwmcc MountVolume.SetUp failed for volume "xx" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o file_mode=0777,dir_mode=0777,vers=3.0,actimeo=30,mfsymlinks,<masked> //{StorageName}.file.core.windows.net/sample1 /var/lib/kubelet/pods/xxx/volumes/kubernetes.io~azure-file/pvc-xx
Output: mount error(13): Permission denied
• The error you are encountering while mounting the file share on the Kubernetes pod indicates a communication protocol issue, i.e., the channel used to connect to the Azure file share and mount it on the pod after key rotation is unencrypted, and the connection attempt was made from a different Azure datacenter location than the one where the file share resides.
• Also, please check whether the 'Secure transfer required' property is enabled on the storage account, because if it is, any request originating from an insecure connection is rejected. Microsoft recommends that you always require secure transfer for all your storage accounts.
• So, for this issue, you can try disabling the 'secure transfer' property on the file share's storage account (see the CLI sketch after the links below), as the file share is shared across all the existing pods; if a new pod deployment with a newly rotated key tied to the user-assigned managed identity is detected, the existing pods might not be compatible with the newly assigned keys or may not be updated with them.
• You can also check the version of SMB encryption used by the existing pods and the newly deployed ones. Please refer to the links below for more information:
https://learn.microsoft.com/en-us/answers/questions/560362/aks-file-share-persistent-mounting-with-managed-id.html
https://learn.microsoft.com/en-us/azure/storage/files/storage-troubleshoot-linux-file-connection-problems#mount-error13-permission-denied-when-you-mount-an-azure-file-share
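If you want to try disabling the 'secure transfer' property as suggested above, a minimal Azure CLI sketch (the storage account and resource group names are placeholders; the property is exposed as --https-only):
az storage account update --name mystorageacct --resource-group my-rg --https-only false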
I've trained a model and deployed it to ACI using Azure ML Studio. It works as expected. Now I want to download the Docker image and use it in my local environment. Is it possible to download the image using the CLI?
Azure ML Studio must have pushed a container image somewhere before spinning up a container instance on ACI. You might be able to find out the image name by using Docker's ACI integration. For instance, you could run...
$ docker login azure
$ docker context create aci myacicontext
$ docker context use myacicontext
$ docker ps
... and check the IMAGE value of your running container, and see if you can pull that image to your local machine. If not, you might be able to create a new one using docker commit.
Now I want to download the docker image and use it in my local environment. Is it possible to download the image using CLI?
It's possible to download the Docker image via the CLI. When you trained the model and deployed it to ACI using Azure ML Studio, the image must have been stored somewhere: a private registry or a public one. As the tutorial shows, you can use a private registry such as ACR, or another private registry. You can also use the Azure Machine Learning base images stored in the Microsoft registry, which works much like Docker Hub.
Once you know where the Docker image is stored, you can download it to your local environment.
From a public registry such as Docker Hub, you can pull the image directly:
docker pull image:tag
If it's a private registry, you need to log in with your credentials first; for example, with Azure Container Registry:
docker login myacr.azurecr.io -u username -p password
docker pull myacr.azurecr.io/image:tag
Of course, you need to have the Docker engine installed in your local environment first.
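If you don't know the exact repository and tag, the Azure CLI can list what's in your ACR registry; a minimal sketch, assuming a registry named myacr and a repository named myimage:
az acr repository list --name myacr --output table
az acr repository show-tags --name myacr --repository myimage --output table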
How to configure an Azure Blob Storage container in YAML
- name: scripts-file-share
  azureFile:
    secretName: dev-blobstorage-secret
    shareName: logs
    readOnly: false
The above configures the logs file share in YAML.
But what if I need to mount a blob container? How do I configure that?
Instead of azureFile, do I need to use azureBlob?
And what configuration do I need under azureBlob? Please help.
After the responses I got to the above post, and after going through articles online, I see there is no option to mount Azure Blob storage on Azure AKS for my problem, considering the limitations I have in my environment, except to use azcopy or REST API integration.
So, after a little research, and taking the articles below as references, I was able to create a Docker image.
1.) Created the Docker image following the reference article. But again, I also needed support for running a bash script, as I run the azcopy command from a bash file. So, I copied the azcopy tool to /usr/bin.
2.) Created SAS tokens for the Azure file share & the Azure blob container. (Make sure you grant only the required access permissions.)
3.) Created a bash file that runs the below command.
azcopy copy <FileShareSASTokenConnectionUrl> <BlobSASTokenConnectionUrl> --recursive=true
4.) Created a deployment YAML that runs on AKS and added the command to run the bash file in it.
This gave me the ability to copy files from Azure file share folders to an Azure blob container; a sketch of the bash file is shown below.
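For illustration, a minimal sketch of the bash file from step 3, using azcopy v10 syntax (the storage account, share, and container names are placeholders standing in for the SAS connection URLs above):
#!/bin/bash
# Copy everything from the Azure file share to the blob container.
azcopy copy \
  "https://mystorageacct.file.core.windows.net/myshare?<FileShareSASToken>" \
  "https://mystorageacct.blob.core.windows.net/mycontainer?<BlobSASToken>" \
  --recursive=true
The deployment YAML from step 4 would then invoke this script through the container's command/args.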
References:
1.) https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#obtain-a-static-download-link
2.) https://github.com/Azure/azure-storage-azcopy/issues/423
I've been struggling for a couple of days now with how to set up persistent storage in a custom Docker container deployed on Azure.
Just for ease, I've used the official WordPress image in my container and provided the database credentials through environment variables; so far so good. The application is stateless and the data is stored in a separate MySQL service in Azure.
But how do I handle content files like server logs or uploaded images? Those are placed in /var/www/html/wp-content/uploads and will be removed if the container gets removed or if a backup snapshot is restored. Is it possible to mount this directory to a host location? Is it possible to mount this directory so it will be accessible through FTP to the App Service?
OK, I realized that it's not possible to mount volumes to a single-container app. To mount a volume you must use Docker Compose and mount the volume as in the example below.
Also, make sure you set the application setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to TRUE (see the CLI sketch after the compose example).
version: '3.3'
services:
  wordpress:
    image: wordpress
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
    ports:
      - "8000:80"
    restart: always
With this, your uploaded files will be persisted and also included in the snapshot backups.
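For the application setting mentioned above, a minimal Azure CLI sketch (the app and resource group names are placeholder assumptions):
az webapp config appsettings set --name mywebapp --resource-group my-rg --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE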
Yes, you can do this, and you should read about PVs (persistent volumes) and PVCs (persistent volume claims), which allow mounting volumes onto your cluster.
In your case, you can mount:
Azure Files - basically a managed file share (SMB) endpoint mounted on the k8s cluster
Azure Disks - basically managed disk volumes mounted on the k8s cluster
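As a minimal sketch (the claim name and size are placeholder assumptions), a PVC using the built-in azurefile storage class that AKS ships with:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
A pod then mounts it by referencing the claim name (my-azurefile-pvc) in a persistentVolumeClaim entry under its volumes section.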