I have a dockerized application deployed to Azure as an App Service. After several deployments I got the following error:
The Linux host that pulls the Docker image to run it seems to be full, but I'm not sure where these images are pulled to, because the File system storage tab says that only 4% of the storage is used.
I have tried using the image both from the ACR registry and from our private repository; the issue was the same.
I have also tried connecting via SSH, but nothing there seems to be full either.
Changing the service plan from S1 to S2 solved the problem, but I would prefer a solution where I can clean up the old images/resources or something similar.
Can anybody help me with this issue?
EDIT:
Checking the logs, the Docker container basically failed to start:
InnerException: Docker.DotNet.DockerApiException, Docker API responded with status code=InternalServerError, response={"message":"OCI runtime create failed: container_linux.go:349: starting container process caused \"process_linux.go:449: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/LWASFiles/Sites/<site>/appsvctmp\\\\\\\" to rootfs \\\\\\\"/mnt/data/docker/images/231072.231072/aufs/mnt/a09dddec5e34cf18d12715faf148185e0fd74ae31e634f5f58f7a5525c89571a\\\\\\\" at \\\\\\\"/mnt/data/docker/images/231072.231072/aufs/mnt/a09dddec5e34cf18d12715faf148185e0fd74ae31e634f5f58f7a5525c89571a/appsvctmp\\\\\\\" caused \\\\\\\"mkdir /mnt/data/docker/images/231072.231072/aufs/mnt/a09dddec5e34cf18d12715faf148185e0fd74ae31e634f5f58f7a5525c89571a/appsvctmp: no space left on device\\\\\\\"\\\"\": unknown"}
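For anyone else hitting this: the paths in the error (/mnt/data/docker/...) are on the platform host that runs your container, not inside it, so the SSH console won't show the image cache filling up. A minimal sketch of what you can still check from the App Service SSH console (standard Linux commands):
# Overall filesystem usage as seen from inside the app container
$ df -h
# What is taking space under the persistent /home share
$ du -sh /home/* 2>/dev/null | sort -h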
Related
I have installed Docker and Portainer on my Asustor home NAS. This issue appears to be specific to the implementation of Docker/Portainer provided in their app store. I have been working directly with the Portainer staff and they have not seen this issue before.
I have been following instructions from Portainer (https://www.youtube.com/watch?v=V0OvPyJZOAI) to deploy an agent in the program and found where Docker stores volumes (a non-standard Linux location). However, I am now getting an error that I believe is also caused by the non-standard Linux implementation used by the NAS OS. The error happens when I go to start the service while following the steps in the video linked above. The error I am getting is "starting container failed: error creating external connectivity network: cannot restrict inter-container communication: please ensure that br_netfilter kernel module is loaded"
The response I got from Asustor Support is:
The kernel module is in the [NAS OS]. So if you want it, you need to manually insert the module to have it working. But please note that we have not yet tested it, so there might be a risk to the stability of the system.
I have located the file path of the kernel module by logging in via SSH, but I do not know what I need to do to insert the module as the Asustor support team told me to.
Screenshot of Portainer error
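For what it's worth, on a stock Linux box inserting the module usually looks like the sketch below; the .ko path is just a placeholder, use the path you found over SSH:
# Try loading by module name first (works if the module sits in the standard module path)
$ sudo modprobe br_netfilter
# Otherwise load the .ko file directly (placeholder path -- use the one you located)
$ sudo insmod /path/to/br_netfilter.ko
# Confirm it is loaded
$ lsmod | grep br_netfilter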
We have used the Azure Container Instances service to start an application which is expected to run at all times. It pulls the Docker image from ACR. The container instance was created successfully, and the ACI instance status shows Running, but when we browse to Settings -> Containers, the container is shown in a Terminated state.
Can someone please help me with the issue?
It depends on your images. If you want to run container groups with long-running processes, you could set a restart policy of Always when you create the ACI, so containers in the container group always restart after they run to completion. You may need to change this to OnFailure or Never if you intend to run task-based containers. For example, when Azure Container Instances stops a container whose restart policy is Never or OnFailure, the container's status is set to Terminated.
When running container groups without long-running processes you may see repeated exits and restarts with images such as Ubuntu or Alpine. Connecting via EXEC will not work as the container has no process keeping it alive. To resolve this problem, include a start command like the following with your container group deployment to keep the container running.
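Roughly, such a deployment with the az CLI could look like the sketch below (resource names and image are placeholders), where the start command just keeps a foreground process alive:
$ az container create \
    --resource-group myResourceGroup \
    --name mycontainer \
    --image alpine \
    --restart-policy Always \
    --command-line "tail -f /dev/null"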
Read this common issue for more details.
I had a similar issue, but it may not be the same for you. I was building my image for the linux/arm64 platform, but Azure expects linux/amd64. In my case, this started happening when I moved to an Apple Silicon chip.
The way I found out about it can help you validate your case as well, if you are still facing the issue. I created a deployment of an Azure Container App, and there the error was more explicit:
{
  "status": "Failed",
  "error": {
    "code": "WebhookInvalidParameterValue",
    "message": "The following field(s) are either invalid or missing. Invalid value: \"<my-docker-image>\": image OS/Arc must be linux/amd64 but found linux/arm64: <my-docker-image>."
  }
}
Hope you already got this fixed, or that this will help someone get unblocked.
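If you are also on Apple Silicon, one way to force an amd64 build (image and registry names are placeholders) is roughly:
# Build and push a linux/amd64 image from an arm64 (Apple Silicon) machine
$ docker buildx build --platform linux/amd64 -t <registry>/<my-docker-image>:latest --push .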
I'm trying to deploy the current version of Elasticsearch in an Azure Container Instance using the Docker image; however, I need to set vm.max_map_count=262144. Since the container continually tries to restart with the error "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]", I can't hit the instance with any commands. Trying to disable restarts or to continue on errors causes the container instance to fail.
From the comments it sounds like you may have resolved the issue. In general, for future readers, a possible troubleshooting guide is:
If the container exits unsuccessfully
Try using EXEC for interactive debugging while the container is running. This can be found in the Azure portal on the "Containers" tab as well.
If EXEC did not help, attempt to get a successful run with local Docker.
Once you have a local success, upload the new container version to your registry and try to redeploy to ACI.
If the container exits successfully and repeatedly restarts
Verify you have a long-running command for the container.
Update the restart policy to Never so that upon exit you can debug the terminated container group (a rough sketch follows this list).
If you cannot find the issue, follow the local steps above and get a successful run with local Docker.
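A rough sketch of that restart-policy path with the az CLI (names are placeholders): redeploy with restart policy Never, then inspect the terminated container:
$ az container create \
    --resource-group myResourceGroup \
    --name mycontainer \
    --image <registry>/<image>:<tag> \
    --restart-policy Never
# Pull the logs and state of the terminated container
$ az container logs --resource-group myResourceGroup --name mycontainer
$ az container show --resource-group myResourceGroup --name mycontainer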
Hope this helps.
I'm trying to run a ghost docker image on Azure within a Linux Docker container. This is incredibly easy to get up and running using a custom Docker image for Azure Web App on Linux and pointing it at the official docker hub image for ghost.
Unfortunately the official Docker image stores all data on the /var/lib/ghost path, which isn't persisted across restarts, so whenever the container is restarted all my content gets deleted and I end up back at a default Ghost install.
Azure won't let me execute arbitrary commands; you basically point it at a Docker image and it fires off from there, so I can't use the -v command-line param to map a volume. The Docker image does have an entry point configured, if that would help.
Any suggestions would be great. Thanks!
Set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings and the home directory will be mapped from your outer Kudu instance:
https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq
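For example, with the az CLI this is roughly (app and resource group names are placeholders):
# Enable the persistent /home mount for a Linux Web App running a custom container
$ az webapp config appsettings set \
    --name <app-name> \
    --resource-group <resource-group> \
    --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true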
You have a few options:
You could mount a file share inside the Docker container by creating a custom image, then storing data there. See these docs for more details.
You could switch to the new Azure Container Instances, as they provide volume support (see the sketch after this list).
You could switch to the Azure Container Service. This requires an orchestrator, like Kubernetes, and might be more work than you're looking for, but it also offers more flexibility, provides better reliability and scaling, and other benefits.
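To illustrate the second option, a sketch of running the Ghost image in ACI with an Azure Files share mounted at the content path (storage account, key, and share name are placeholders):
$ az container create \
    --resource-group <resource-group> \
    --name ghost-blog \
    --image ghost:1-alpine \
    --ports 2368 \
    --azure-file-volume-account-name <storage-account> \
    --azure-file-volume-account-key <storage-key> \
    --azure-file-volume-share-name ghost-content \
    --azure-file-volume-mount-path /var/lib/ghost/content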
You have to use a shared volume that maps the container's /var/lib/ghost directory to a host directory. This way, your data will persist in the host directory.
To do that, use the following command.
$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine
I never worked with Azure, so I'm not 100 percent sure the following applies, but if you interface with Docker via the CLI there is a good chance it does.
Persistence in Docker is handled with volumes. They are basically mounts that map a path inside the container's file system tree to a directory on the outside. From your text I understand that you want to store the content of the container's /var/lib/ghost path in /home/site/wwwroot on the outside. To do this you would call Docker like this (note that -v takes the host path first, then the container path):
$ docker run [...] -v /home/site/wwwroot:/var/lib/ghost ghost
Unfortunately, setting the persistent storage (or bring-your-own storage) to a specific path is currently not supported in Azure Web Apps on Linux.
That said, you can play with SSH and try to configure Ghost to point to /home/ instead of /var/lib/.
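As a sketch of what that could look like (assuming Ghost's standard double-underscore environment-variable config, which I have not verified on this particular image), the app settings might be:
# Hypothetical app settings; Ghost reads nested config keys from env vars with double underscores,
# and /home is the App Service path that survives restarts
WEBSITES_ENABLE_APP_SERVICE_STORAGE=true
paths__contentPath=/home/site/wwwroot/content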
I have prepared a Docker image here: https://hub.docker.com/r/elnably/ghost-on-azure that adds the SSH capability; the Dockerfile and code can be found here: https://github.com/ahmedelnably/ghost-on-azure/tree/master/1/alpine.
Try it out by configuring your web app to use elnably/ghost-on-azure:latest, browsing to the site (to start the container), and going to the SSH page on your app's .scm.azurewebsites.net (Kudu) site. To learn more about SSH, check this link: https://aka.ms/linux-ssh.
I made a VM in Azure in order to create an image.
After I made the Linux VM (Red Hat), I stopped the VM and made an image.
But I failed to make a VM from that image.
Both cases have the same problem:
- 1st case: I didn't install anything.
- 2nd case: I installed some software and made an SSH key (RSA).
If I execute the command 'sudo waagent -deprovision+user', there is no error.
But my SSH key disappears, so the VMs made from the image cannot connect to each other, which means that I cannot create a cluster using Ambari.
Is there any way to solve this problem?
This is the error I got when I failed to make a VM from the image:
--------error----
Provisioning failed. OS Provisioning for VM 'master0' did not finish in the allotted time. However, the VM guest agent was detected running. This suggests the guest OS has not been properly prepared to be used as a VM image (with CreateOption=FromImage). To resolve this issue, either use the VHD as is with CreateOption=Attach or prepare it properly for use as an image:
* Instructions for Windows: https://azure.microsoft.com/documentation/articles/virtual-machines-windows-upload-image/
* Instructions for Linux: https://azure.microsoft.com/documentation/articles/virtual-machines-linux-capture-image/
OSProvisioningTimedOut
Before you create an image, you should execute sudo waagent -deprovision+user. If you don't do it, you will get this error.
For your scenario, you could set Provisioning.RegenerateSshHostKeyPair=n in /etc/waagent.conf. According to the official documentation:
deprovision: Attempt to clean the system and make it suitable for re-provisioning. This operation deletes the following:
All SSH host keys (if Provisioning.RegenerateSshHostKeyPair is 'y' in the configuration file)
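As a sketch, the flow before capturing the image could then look like this (double-check the option name against your waagent version; resource names are placeholders):
# Keep the SSH host keys when deprovisioning, so VMs created from the image keep them
$ sudo sed -i 's/^Provisioning.RegenerateSshHostKeyPair=y/Provisioning.RegenerateSshHostKeyPair=n/' /etc/waagent.conf
# Clean the VM for capture; note that +user also removes the last provisioned user account
$ sudo waagent -deprovision+user -force
# Then, from a management machine, deallocate and generalize the VM and capture the image
$ az vm deallocate --resource-group <resource-group> --name <vm-name>
$ az vm generalize --resource-group <resource-group> --name <vm-name>
$ az image create --resource-group <resource-group> --name <image-name> --source <vm-name>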
If that does not work for you, I suggest adding your public key to the VMs using the Azure portal.