How do I start a container in an ACI Container Group that is already deployed and in a terminated state? Can it be done either through some automation or from a Logic App? The CLI shows az container restart but not start. The Logic App connectors seem to pull the image every time and start it. Is there no means to just start an existing terminated container?
Once a container is terminated there is no start action you can perform. You can deploy a new container, which will pull the image you have set up and run whatever app is configured.
If the container is in a failed state you could issue a restart command, but the result is basically the same: it re-initializes the container, pulls the image, and deploys the app.
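For reference, the restart looks like this from the CLI (the resource group and container group names are placeholders):

# re-initializes the group: pulls the image again and redeploys the app
az container restart --resource-group my-rg --name my-container-group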
This is how containers work. They are not persistent items like a virtual machine; they are designed to be removed, added, and upgraded as needed.
This is a pain point on Azure. While container instances are not meant to be restarted after being terminated, the container groups they sit inside are meant to be flexible enough to be restarted. And they have that capability - Microsoft Azure's docs just don't really talk about it. It's left for you to figure out.
The main problem is that when you deploy an updated container instance to an existing container group in Azure, and the matching container instance in Azure is terminated, the deployment will not automatically trigger the new version of that container instance to run again.
In my case, I'm scheduling a container deployment from inside a Logic App using the "Create or update a container group" connector (because Microsoft removed the cloud scheduler earlier this year, forcing people to use Logic Apps for simple tasks like this). If I delete the existing container group and run the Logic App, it automatically starts the Docker container instance as expected. When I re-run it subsequent times, it fails to start the container, because the previous instance inside the container group is in a terminated state. The solution is to add an additional "Start containers in a container group" action, which starts the container instances anew and allows them to run to completion (at which point my containers terminate again, because they're one-off ETL jobs).
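If you're scripting rather than using Logic Apps, newer versions of the Azure CLI appear to expose the same container-group start operation; a sketch with placeholder names:

# start all containers in an existing (terminated) container group without redeploying it
az container start --resource-group my-rg --name my-container-group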
We are utilizing Azure App Services with Linux OS, and we deploy a containerized .NET Core application to the service.
The deployment runs using the Set-AzWebApp function from the Az.Websites PS module. One of the parameters passed to the function is ContainerImageName, which provides a new container version to the service. There is a separate CI process that builds the Docker image and pushes it into ACR.
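For context, the call looks roughly like this (the resource group, app name, and image tag are placeholders):

# point the web app at the new image version pushed by CI
Set-AzWebApp -ResourceGroupName "my-rg" -Name "my-webapp" -ContainerImageName "myregistry.azurecr.io/myapp:1.2.3"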
When Set-AzWebApp runs as part of the release pipeline and the new container is deployed, I couldn't see any downtime in the service: hitting the health check endpoint returned 200 in the browser, and the service seemed to be available the whole time. But my test is unreliable in the sense that I am just pinging the health check, which is a very simple endpoint that exercises the service middleware only, without running a database request or any other logic.
According to my understanding, the service needs to recycle itself to accept a new version of the image. The question is: would the consumer of the service experience any downtime during the recycle, and what happens to requests that run during the recycle process?
The continuous deployment feature of the Azure App Service will help you avoid downtime when you update the image. Here you can see the details:
"We'll pull the image and start the container, and we'll wait until that new container is running and ready for HTTP requests before we switch over to it. During that time, your old image will continue to serve requests into your app."
So you may want to enable this feature.
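If it's easier to do from a pipeline, continuous deployment for the container can also be toggled via the CLI (the app and resource group names are placeholders):

# turn on continuous deployment for the container-based web app
az webapp deployment container config --enable-cd true --name my-webapp --resource-group my-rg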
I created a simple 'Hello World' Web Job and placed that Web Job inside a Docker Windows container (Nanoserver).
I pushed that Docker Windows container to an Azure Container Registry.
I followed this article and successfully created Virtual Kubelet pods/nodes.
When I run 'get pods' I do see the pods created and running.
I see the IPs reflected in the 'get pods' output.
My question is: how do I run the container inside these pods/nodes?
I tried to reference those IPs, but they don't load anything.
How can I run the containers that I successfully placed into the Virtual Kubelet pods/nodes?
If the containers get fired up by themselves, do they fire up/get invoked only once, or every n minutes?
Is there a way to check how the last run went, such as log files?
Thank you very much for your help
First of all, I see you created your Web Job inside a Windows-based Docker image. If so, you cannot run the container in AKS, because it does not support Windows nodes, at least currently. For Windows containers, I suggest you use Azure Container Instances or Web App for Containers.
For Linux containers, a pod in AKS is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers. If you already have the Docker image, you can create the container inside a pod by following the steps in Running the application in AKS.
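As a rough sketch of that flow (the deployment name and image reference are placeholders, and this assumes AKS can pull from your registry):

# create a deployment from your ACR image, then check that its pod is running
kubectl create deployment hello-webjob --image=myregistry.azurecr.io/hello-webjob:v1
kubectl get pods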
In addition, you can set the restart policy for your container; see Restart Policy for the container in Kubernetes. For the logs, I suggest you use persistent volumes; if not, the files will be lost if the container crashes.
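To answer the log question directly: even without persistent volumes, you can read a container's stdout/stderr and its recent events (the pod name below is a placeholder, taken from kubectl get pods):

# stdout/stderr of the container in the pod
kubectl logs hello-webjob-6d4b75cb6d-abcde
# scheduling/restart events and the state of the last run
kubectl describe pod hello-webjob-6d4b75cb6d-abcde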
Update
If you really want to run Windows containers in an AKS cluster, there is also a way. You can use the Virtual Kubelet and Azure Container Instances with AKS. Follow the steps in Run Windows container in AKS.
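In some CLI versions that connector could be installed with a single command; a sketch with placeholder names (check your CLI version, as this tooling has changed over time):

# deploy the Virtual Kubelet ACI connector for Windows containers into the cluster
az aks install-connector --resource-group my-rg --name my-aks-cluster --connector-name virtual-kubelet --os-type Windows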
We have an Azure Container Instance that seems to freeze under heavy load, and we are able to reproduce this with a load test. I am not looking for the exact solution to that right now, but what I am confused about is that I can't seem to get any logs from the container instance when this happens that would tell me exactly what is going on.
My instance is a Docker container that runs a Node.js application. I added Application Insights to the application, and have been successful in capturing any exceptions that arise from the application itself. But when we experience the freezing behavior, requests are not actually reaching the application inside the container, so Application Insights doesn't help me in this case.
Also, if I go to my Container Instance in Azure, and look under the Events tab, I am not seeing any kind of error, or anything really that would tell me my container instance is in a "not-working" state, even though we are not able to reach it.
What do you see in the "Logs" and "Connect" tab in the Azure portal?
Can you also check the overview page in the Azure portal to see the CPU/memory/network usage?
You can use the Azure CLI command az container attach to check the container instance state and also see the logs. There are three ways to get the different logs; see Retrieve container logs and events in Azure Container Instances. The restart policy will also help when the container instance runs into problems.
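For example (the resource group and container group names are placeholders):

# application stdout/stderr
az container logs --resource-group my-rg --name my-container-group
# attach to the container to stream output and startup events as they happen
az container attach --resource-group my-rg --name my-container-group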
I'm trying to get some Docker images running on Azure. To be concrete, I have a Redis service, a MongoDB server (not CosmosDB) from Bitnami, and the Coral Project Talk app. To start the Docker containers locally, I have to set some environment variables, like
docker run -e key1=value1 -e key2=value2 -p 80:3000 ...
Now I am trying to get the app running in Azure. Searching for how to start a Docker container in Azure, I found several options:
Container Instances
App Services
Virtual Machine
Managed Kubernetes (Preview state)
Container Services (more or less deprecated; will be replaced by Managed Kubernetes in the future)
Running a VM for one Docker instance doesn't sound economical. Managed Kubernetes or Container Services is maybe a bit too much for now, and I cannot select any version even with "Managed Kubernetes"; I guess this is related to its current preview state. I also tried App Services, but without success, e.g. there are no proper settings for environment variables. I saw that in App Services you can set a Start File, but Microsoft doesn't explain what that should be. So I tried option one, Container Instances.
Unfortunately, I cannot find a way to pass multiple environment variables when starting the container. In the setup wizard you can set one environment variable, plus another two if you like:
First, it is limited to three environment variables; I need more. Second, the value needs to be alphanumeric, so setting a domain is not possible.
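For what it's worth, the CLI does not appear to share the portal wizard's limits; a sketch with placeholder names and values:

# --environment-variables takes a space-separated list of key=value pairs
az container create --resource-group my-rg --name talk --image coralproject/talk --ports 3000 --environment-variables key1=value1 key2=value2 key3=value3 key4=value4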
Does anyone here have experience running Docker instances on Azure? What is the best setup for you?
Thanks in advance.
We have an app we deploy to Azure. It involves deploying several cloud services with a mix of web roles and worker roles. Some of the worker roles pick items up off a queue and process them. We also have some scheduled jobs that run periodically (backing up Azure table storage, etc).
Every time we deploy, we have to watch the Staging environment in the portal and manually stop the roles from starting. We have to do this because we don't want the Staging and Production slots both processing information at the same time (e.g. pulling from the same queue but processing it differently, or both running the same scheduled job simultaneously, etc).
The only way I've found to have a deployment go into Staging in a stopped state is to leave the last deployment there also stopped. Downside is you're charged for those instances, even when they're not running.
So, how do you deploy to an empty staging slot in Azure without the deployment starting up?
EDIT: We kick off the builds through Visual Studio or Visual Studio Online (i.e. TFS). We don't usually use PowerShell.
There is no way to create a deployment but not have it start. What you can do instead is have a setting in your csdef/cscfg that your code would read during OnStart.
For example, you would have a setting called "ShouldRun" set to False. In OnStart you would have a loop that checks that setting and exits the loop once ShouldRun==True. After you deploy, you would then go to the portal and change that setting to True whenever you are ready for it to start processing. Once the loop exits, the OnStart method will finish, which will cause Azure to call your Run method and bring your instances to the Ready state.
In addition, you could add a Changed event handler to stop processing messages when the setting is changed back to False. This would let you first stop your production deployment and then start your staging deployment.
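If you'd rather flip the setting from a script than from the portal, the classic service management cmdlets can push an updated .cscfg (the service name and file path are placeholders):

# upload a configuration file with ShouldRun=True to the staging slot
Set-AzureDeployment -Config -ServiceName "my-cloud-service" -Slot "Staging" -Configuration "C:\deploy\ShouldRun-True.cscfg"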
For me, you need to separate even your queues and configs. Another option: you can create a PowerShell script to stop your cloud service after publishing to it.
http://msdn.microsoft.com/en-us/library/dn495211.aspx
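A minimal sketch using the classic Azure Service Management cmdlets (the service name is a placeholder):

# stop the freshly published staging deployment so it doesn't start processing
Stop-AzureService -ServiceName "my-cloud-service" -Slot "Staging"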