Persisting RabbitMQ Queues and Data on Azure Container Instance

I've successfully deployed the Bitnami RabbitMQ Docker image as a container instance in my Azure subscription, and I'm able to send messages to a queue I defined via the RabbitMQ management web UI.
The thing is, once I restart the container instance, all messages and queues are gone.
I defined a file share under the same resource group and invoked the following Azure CLI command to create the instance and bind it to the file share:
az container create -g learning1 --name rabbitmq-instance1 \
--image dbmregistry1.azurecr.io/bitnami/rabbitmq:latest \
--cpu 1 --memory 1 --ports 80 5672 15672 \
--dns-name-label db-rabbit1 \
--azure-file-volume-share-name dbshare1 \
--azure-file-volume-account-name {STORAGE-NAME} \
--azure-file-volume-account-key {KEY} \
--azure-file-volume-mount-path /data
But it seems not to be sufficient. In the management web UI it looks like this:
[image: RabbitMQ node view in the management UI]
I'd appreciate any advice on what might be missing.
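One way to check what actually lands on the mount is to exec into the running instance (a sketch, reusing the resource names from the command above):

az container exec -g learning1 --name rabbitmq-instance1 --exec-command "ls -la /data"

If the share stays empty while RabbitMQ is running, the image is presumably writing its data somewhere other than /data.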

Related

Running Superset on Azure Container Instance

I am trying to run Superset on Azure Container Instance with a volume mapped to Azure File Share storage. When I build the container with the command below, the container instance gets to the running state, but I can't launch the Superset URL.
az container create --resource-group $ACI_PERS_RESOURCE_GROUP --name superset01 \
--image superset-image:v1 --dns-name-label superset01 --ports 8088 \
--azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
--azure-file-volume-account-key $STORAGE_KEY \
--azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
--azure-file-volume-mount-path "/app/superset_home/"
Also, I can see the volume-mapped files are created on the file share, but they stay at zero bytes instead of growing to the initial size of superset.db.
Name                 Content Length  Type  Last Modified
-------------------  --------------  ----  -------------
cache/                               dir
superset.db          0               file
superset.db-journal  0               file
Any inputs on this please?
I tested in my environment, and it is working fine for me. I am using the Superset image from Docker Hub, pushed to a container registry.
Use the command below to pull the Superset image from Docker Hub:
docker pull apache/superset
Tag the Superset image to push it to the container registry:
docker tag 15e66259003c testmyacr90.azurecr.io/superset
Now log in to the container registry:
docker login testmyacr90.azurecr.io
And then push the image to the container registry:
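docker push testmyacr90.azurecr.io/superset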
You can use the command below to create the container instance and mount a file share as well:
az container create --resource-group $ACI_PERS_RESOURCE_GROUP --name mycontainerman \
--image $ACR_LOGIN_SERVER/superset:latest \
--registry-login-server $ACR_LOGIN_SERVER \
--registry-username $ACR_USERNAME \
--registry-password $ACR_PASSWORD \
--dns-name-label aci-demo-7oi --ports 80 \
--azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
--azure-file-volume-account-key $STORAGE_KEY \
--azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
--azure-file-volume-mount-path /aci/logs/
I would suggest using the FQDN or the public IP address of the container instance in your browser; you should then see the response from the container instance.
Update:
I have mounted the file share at /app/superset_home/ as you did and am getting the same output as you.
Based on the output above, I would suggest not mounting the file share at /app/superset_home/, because superset.db and superset.db-journal reside there; mounting at this location causes a conflict. It is better to mount at /aci/logs/ or any other location rather than /app/superset_home/.
Update 2:
When I used the same storage account to mount the volume at /aci/logs/ that had previously been mounted at /app/superset_home/, I got the same error as you.
Solution: to avoid this kind of issue, create a new storage account with a file share in it, and mount the volume at /aci/logs/.
Once I did that, my issue was sorted.
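For completeness, a minimal sketch of creating the fresh storage account and file share (the names newsupersetstore and supersetshare are placeholders, not from my setup):

az storage account create --resource-group $ACI_PERS_RESOURCE_GROUP --name newsupersetstore --sku Standard_LRS
az storage share create --account-name newsupersetstore --name supersetshare
STORAGE_KEY=$(az storage account keys list --resource-group $ACI_PERS_RESOURCE_GROUP --account-name newsupersetstore --query "[0].value" --output tsv)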

Why can't an Azure container access a mounted volume on startup?

I am having problems using a volume mount in my container on Azure Container Instances.
I can mount a volume to my container no problem. Here is the az cli command I am using, and it works well. I followed this tutorial.
az container create --resource-group mydemo --name paulwx \
--image containerregistry.azurecr.io/container:master \
--registry-username username --registry-password password \
--dns-name-label paulwx --ports 8080 --assign-identity \
--azure-file-volume-account-name accountname \
--azure-file-volume-account-key secretkey \
--azure-file-volume-share-name myshare \
--azure-file-volume-mount-path /opt/application/config
This works great and I can attach to the container console and access the shared volume. I can touch files and read files.
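For reference, attaching to the console can be done with az container exec (a sketch, assuming the image ships /bin/sh):

az container exec --resource-group mydemo --name paulwx --exec-command "/bin/sh"

From there, ls and touch inside /opt/application/config behave as expected.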
The problem comes when I try to have the application read this folder on startup (to get its application configuration). Again, the command is almost identical, except the application is told to read its configs from the volume mount via an environment variable called SPRING_CONFIG_LOCATION.
az container create --resource-group mydemo --name paulwx \
--image containerregistry.azurecr.io/container:master \
--registry-username username --registry-password password \
--dns-name-label paulwx --ports 8080 --assign-identity \
--azure-file-volume-account-name accountname \
--azure-file-volume-account-key secretkey \
--azure-file-volume-share-name myshare \
--azure-file-volume-mount-path /opt/application/config \
--environment-variables SPRING_CONFIG_LOCATION=file:/opt/application/config/
The container now terminates with the following error.
Error: Failed to start container paulwx, Error response: to create containerd task: failed to mount container storage: guest modify: guest RPC failure: failed to mount container root filesystem using overlayfs /run/gcs/c/e51f86c414ae83c7c279a4252864a381399069d358f5d2303c97c630e17b049f/rootfs: no such file or directory: unknown
So I can mount a volume as long as I don't access it on startup. Am I missing something very fundamental here? Surely having a volume mount means the mount point will be available when the container starts.
The volume mount type is Samba (Azure Files) and the storage account is Standard_LRS.
So, to cut a long story short, the issue here was that the application on startup was configured to read the property file from a folder that was not where I expected it to be.
The application failed to start because it did not find application.properties where it expected. The Azure error message was a red herring, not at all reflective of the problem. The application logger was not logging to standard out properly, so I actually missed the application log, which clearly stated that application.properties was not found.
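For anyone hitting something similar: pulling the container logs directly would have surfaced the real error (a sketch, using the names from the question):

az container logs --resource-group mydemo --name paulwx

or, to stream output while the container starts:

az container attach --resource-group mydemo --name paulwx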

Can't pull image from private Azure Container Registry when using Az CLI to create a new Azure Container Instance

I've created a service principal with push and pull access to/from my private Azure Container Registry. Pushing to ACR works perfectly fine with the following command:
az login --service-principal -u "someSpID" -p "someSpSecret" --tenant "someTenantID"
az acr login --name "someRegistry"
docker push "someRegistry.azurecr.io/my-image:0.0.1"
And I am also able to pull the image directly with the following command:
docker pull "someRegistry.azurecr.io/my-image:0.0.1"
I want to deploy a container instance into a private subnet, and I've configured the network security to allow access from that subnet.
However, when I attempt to deploy a container instance into my private subnet with the following command, specifying the same service principal I had previously logged in with, I get an error response.
az container create \
--name myContainerGroup \
--resource-group myResourceGroup \
--image "someRegistry.azurecr.io/my-image:0.0.1" \
--os-type Linux \
--protocol TCP \
--registry-login-server someRegistry.azurecr.io \
--registry-password someSpSecret \
--registry-username someSpID \
--vnet someVNET \
--subnet someSubnet \
--location someLocation \
--ip-address Private
Error:
urllib3.connectionpool : Starting new HTTPS connection (1): management.azure.com:443
urllib3.connectionpool : https://management.azure.com:443 "PUT /subscriptions/mySubscription/resourceGroups/myResourceGroup/providers/Microsoft.ContainerInstance/containerGroups/myContainerGroup?api-version=2018-10-01 HTTP/1.1" 400
msrest.http_logger : Response status: 400
The image 'someRegistry.azurecr.io/my-image:0.0.1' in container group 'myContainerGroup' is not accessible. Please check the image and registry credential.
The same error ensues when I try and deploy the container instance through Azure Portal.
When I tried deploying a public image into the same subnet, it succeeded, so it isn't a deployment permission issue; nor does it seem to be wrong service principal credentials, since docker pull "someRegistry.azurecr.io/my-image:0.0.1" works just fine. I can't quite wrap my head around this inconsistent behavior. Ideas, anyone?
For your issue, here is a possible reason for the error you got. Let's look at the limitation described here:
Only an Azure Kubernetes Service cluster or Azure virtual machine can
be used as a host to access a container registry in a virtual network.
Other Azure services including Azure Container Instances aren't
currently supported.
This limitation means that the Azure Container Registry firewall does not currently support Azure Container Instances; it only supports pulling and pushing images from an Azure VM or an AKS cluster.
So the solution is to change the registry's network rules to allow all networks and then try again, or to use an AKS cluster, though that will also cost more.
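For example (a sketch, assuming the registry firewall is what's blocking you): inspect the current rules, then open the registry to all networks:

az acr show --name someRegistry --query networkRuleSet
az acr update --name someRegistry --default-action Allow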

Azure Container Group Instance

I am working with an Azure container instance group, and one of my containers is constantly restarting: it goes to a terminated state and restarts. Everything looks good in the logs. The container is running a Spring Framework + React application. When I run the containers locally, everything works perfectly.
I'm not sure what is happening behind the scenes.
You could use the Azure CLI to set a restart policy of OnFailure or Never:
az container create \
--resource-group myResourceGroup \
--name mycontainer \
--image mycontainerimage \
--restart-policy OnFailure
If you specify the restart policy and the issue still persists, there might be a problem with the application or script executed in your container. You could use the az container show command to check the restartCount property:
az container show --name mycontainer --resource-group myResourceGroup
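For example, to read just the restart count (a sketch; the JMESPath path assumes the standard ACI schema):

az container show --name mycontainer --resource-group myResourceGroup --query containers[0].instanceView.restartCount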
For more details, refer to this article.

I can't deploy a community Ubuntu Docker image to Azure Container Instance

I run the following Azure CLI command:
az container create --resource-group Experimental --name my-sage \
--image sagemath/sagemath-jupyter --ip-address public --ports 8888
and get the following error:
The OS type 'null' of image 'sagemath/sagemath-jupyter' does not
match the OS type 'Linux' of container group 'my-sage'.
Even though the sagemath image is built on the Ubuntu Xenial image: https://github.com/sagemath/docker-images
How can I fix this?
Currently, Azure Container Instances does not support this image. If you try to create this image in the Azure portal, you will get the same error log.
Please check this official document.
Azure Container Instances is a great solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs.
For your scenario, I suggest you use Azure Kubernetes Service (AKS).
The --os-type should default to Linux; if for some reason yours does not, you can set the OS type in the command:
az container create --resource-group Experimental --name my-sage \
--image sagemath/sagemath-jupyter --ip-address public --ports 8888 \
--os-type Linux
Hope this helps.
