We have used the Azure Container Instances service to start an application that is expected to run at all times; it pulls its Docker image from ACR. The container instance was created successfully, and the instance status shows Running, but when we browse to Settings -> Containers, the container is shown in a Terminated state.
Can someone please help me with the issue?
It depends on your images. If you want to run container groups with long-running processes, set a restart policy of Always when you create the ACI, so that containers in the container group are always restarted after they run to completion. You may need to change this to OnFailure or Never if you intend to run task-based containers. For example, when Azure Container Instances stops a container whose restart policy is Never or OnFailure, the container's status is set to Terminated.
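As a minimal sketch of setting the restart policy at creation time with the az CLI (the resource group, container name, and image below are placeholders, not from the original post):

# Hypothetical names; replace with your own resource group, container name, and ACR image.
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image myregistry.azurecr.io/myapp:latest \
  --restart-policy Always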
When running container groups without long-running processes you may see repeated exits and restarts with images such as Ubuntu or Alpine. Connecting via EXEC will not work, as the container has no process keeping it alive. To resolve this problem, include a start command like the following with your container group deployment to keep the container running.
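The command referenced above was not included in the original; a sketch in the spirit of the Azure docs pattern (resource names are placeholders) would be:

# "tail -f /dev/null" never exits, so it keeps the container alive for EXEC sessions.
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image ubuntu \
  --command-line "tail -f /dev/null"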
Read this common issue for more details.
I had a similar issue, but it may not be the same for you. I was building my image for the linux/arm64 platform, but Azure expects linux/amd64. In my case, this started happening when I moved to an Apple silicon chip.
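If that turns out to be your case too, a likely fix (a sketch; the image tag is a placeholder) is to build explicitly for linux/amd64 with Docker Buildx:

# Build for linux/amd64 even on an Apple silicon (arm64) host, then push to the registry.
docker buildx build --platform linux/amd64 -t myregistry.azurecr.io/my-docker-image:latest --push .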
The way I found out about it can help you validate your case as well, if you are still facing the issue. I created a deployment of an Azure Container App, and there the error was more explicit:
{
  "status": "Failed",
  "error": {
    "code": "WebhookInvalidParameterValue",
    "message": "The following field(s) are either invalid or missing. Invalid value: \"<my-docker-image>\": image OS/Arc must be linux/amd64 but found linux/arm64: <my-docker-image>."
  }
}
Hope you already got this fixed, or that this helps someone else get unblocked.
I am working on a Debian-based container which runs a C++ compiled application as a child process of a Node.js script. For testing, I am attempting to define a swarm that instantiates this container as a service. When I run "docker service ls", it reports that the service is not running. If I issue "docker ps", however, I can see the identifier for the container, and I can use "docker exec" to connect to it.
So far as I can see, the two target processes are running within the container, yet when other containers within the same swarm, using the same networks, attempt to connect to this server, I see a name resolution error.
Here is my question: if the intended processes are actively and verifiably running within a container, why would Docker consider a service based on that container to not be running? What are Docker's criteria for considering a container to be running as a service?
The status of starting means that it's waiting for the HEALTHCHECK condition to be satisfied.
Examine your Dockerfile's (or compose file's) HEALTHCHECK conditions and investigate why they are not being satisfied.
NOTE: Do not confuse starting with restarting, which means that the program is being launched again (usually after a crash) due to the restart policy.
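To see what the health check is actually reporting, one option (a sketch; the container name is a placeholder) is to inspect the container's health state directly:

# Show the current health status and the output of the most recent health probes.
docker inspect --format '{{json .State.Health}}' my_container
# The STATUS column of docker ps also shows "(health: starting)", "(healthy)" or "(unhealthy)".
docker ps --filter name=my_container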
I have a dockerized application deployed to Azure as an App Service. After several deployments I got the following error:
The disk of the Linux service that pulls the Docker image to run it seems to be full, but I'm not sure where these images are pulled to, because the File system storage tab says that only 4% of the storage is used.
I have tried using the image both from the ACR registry and from our private repository; the issue was the same.
I have also tried to connect via SSH, but nothing there seems to be full either.
Changing the Service plan from S1 to S2 solved the problem, but I would prefer a solution where I can clean up the old images/resources or something similar.
Can anybody help me with this issue?
EDIT:
Checking the logs, the Docker container basically failed to start:
InnerException: Docker.DotNet.DockerApiException, Docker API responded with status code=InternalServerError, response={"message":"OCI runtime create failed: container_linux.go:349: starting container process caused \"process_linux.go:449: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/LWASFiles/Sites/<site>/appsvctmp\\\\\\\" to rootfs \\\\\\\"/mnt/data/docker/images/231072.231072/aufs/mnt/a09dddec5e34cf18d12715faf148185e0fd74ae31e634f5f58f7a5525c89571a\\\\\\\" at \\\\\\\"/mnt/data/docker/images/231072.231072/aufs/mnt/a09dddec5e34cf18d12715faf148185e0fd74ae31e634f5f58f7a5525c89571a/appsvctmp\\\\\\\" caused \\\\\\\"mkdir /mnt/data/docker/images/231072.231072/aufs/mnt/a09dddec5e34cf18d12715faf148185e0fd74ae31e634f5f58f7a5525c89571a/appsvctmp: no space left on device\\\\\\\"\\\"\": unknown"}
I'm trying to deploy the current version of Elasticsearch in an Azure Container Instance using the Docker image; however, I need to set vm.max_map_count=262144. Since the container continually tries to restart with "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]", I can't hit the instance with any commands. Trying to disable restarts, or to continue on errors, causes the container instance to fail.
From the comments it sounds like you may have resolved the issue. In general, for future readers, a possible troubleshooting guide is:
If container exits unsuccessfully
Try using EXEC for interactive debugging while the container is running. This can also be found in the Azure portal, on the "Containers" tab.
If EXEC did not help, attempt to get a successful run on local Docker.
Once you have a successful local run, push the new container version to your registry and redeploy to ACI.
If container exits successfully and repeatedly restarts
Verify you have a long-running command for the container.
Update the restart policy to Never so that upon exit you can debug the terminated container group (see the command sketch after this list).
If you cannot find issues, follow the local steps and get a successful run with local Docker.
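A rough sketch of the CLI side of these steps (the resource group and container names are placeholders):

# Pull the container's stdout/stderr, including output from around the last exit.
az container logs --resource-group myResourceGroup --name mycontainer
# Open an interactive shell inside a running container (EXEC).
az container exec --resource-group myResourceGroup --name mycontainer --exec-command "/bin/sh"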
Hope this helps.
I'll keep it simple.
I have multiple instances of the same microservice (using Docker), and this microservice is also responsible for syncing a cache.
Every X amount of time, it pulls data from some repository and stores it in the cache.
The problem is that I need only one instance of this microservice to do this job, and if it fails, I need another one to take its place.
Any suggestions on how to do this simply?
Btw, is there an option to tag a specific microservice Docker instance and make it do some extra work?
Thanks!
The responsibility of restarting a failed service or scaling up/down is that of an orchestrator. For example, in my latest project, I used Docker Swarm.
Currently, Docker's restart policies are:
no: Do not automatically restart the container when it exits. This is the default.
on-failure[:max-retries]: Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.
always: Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.
unless-stopped: Always restart the container regardless of the exit status, but do not start it on daemon startup if the container has been put into a stopped state before.
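For the single-instance-with-failover requirement, a minimal sketch with Docker Swarm (the service and image names are hypothetical): Swarm keeps exactly one replica of the cache-sync service alive and reschedules it, possibly on another node, if it fails.

# Plain docker run: restart on failure, with at most 5 retry attempts.
docker run -d --restart=on-failure:5 myorg/cache-sync:latest
# Docker Swarm: exactly one replica, rescheduled on failure.
docker service create --name cache-sync --replicas 1 --restart-condition on-failure myorg/cache-sync:latest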
We have Node.js applications running inside Docker containers. Sometimes, if a process gets locked up, or due to some other issue, the app goes down and we have to manually log in to each container and restart the application. I was wondering
if there is any sort of control panel that would allow us to easily and quickly restart those apps and see the overall health of the system.
Please note: we can't use the --restart flag, because the application doesn't actually exit with an exit code. It runs into problems such as a process getting blocked or things getting bogged down, rather than crashing with an exit code. That's why I don't think a restart policy will help in this scenario.
I suggest you consider using the new HEALTHCHECK directive in Docker 1.12 to define a custom check for your locking condition. This feature can be combined with the new Docker swarm service feature to specify how many copies of your container you want to have running.
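As a sketch of the idea (the health endpoint, port, and image name are assumptions, not from the original): the health check probes an HTTP endpoint that only responds while the app isn't wedged, and Swarm replaces tasks whose containers turn unhealthy.

# Dockerfile: mark the container unhealthy if the app stops answering within 10s.
# Assumes curl is installed in the image and the app serves /healthz on port 3000.
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD curl -f http://localhost:3000/healthz || exit 1

# Run 3 replicas as a swarm service; unhealthy tasks are shut down and replaced.
docker service create --name myapp --replicas 3 myorg/myapp:latest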