How to stop an Azure Function App container started from Azure Cloud Shell?

I'm using Azure Cloud Shell to make changes and test locally. After making changes, I start the function app container with func start --verbose. Before making further changes and testing again, I need to stop the container first. What is the recommended way to do this? I tried Ctrl+C and Ctrl+Z; it takes about 5 to 12 minutes every time before control returns to the prompt.
It gets stuck terminating after printing the following logs:
[2022-08-11T07:28:16.777Z] Language Worker Process exited. Pid=515.
[2022-08-11T07:28:16.777Z] python3 exited with code 1 (0x1). .
[2022-08-11T07:28:16.778Z] Exceeded language worker restart retry count for runtime:python. Shutting down and proactively recycling the Functions Host to recover

The func start command runs the function locally. In the background it brings up the components the function requires: configuration, host, port, and so on.
Whenever you change any configuration, the function and the container restart.
When the function runs, it allocates specific resources and loads the required packages and files. If you stop it partway through, it has to release those resources and file handles, so it takes some time before control returns to the prompt.
"Before making further changes and test again, need to stop the container first. What is the recommended way to do it?"
You can build a container image to test the function locally. Keep your Dockerfile in the project root; it provides the environment required to run the Function App in a container.
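As a sketch, this local Docker loop avoids the slow func start shutdown entirely; the image tag, container name, and port mapping are hypothetical:

```shell
# build the image from the Dockerfile in the project root
docker build -t myfuncapp:local .

# run it detached; the Azure Functions base images listen on port 80 inside the container
docker run -d --name myfuncapp -p 8080:80 myfuncapp:local

# test against http://localhost:8080, then stop and remove it immediately
docker stop myfuncapp && docker rm myfuncapp
```

docker stop sends SIGTERM and then SIGKILL after a grace period, so control returns to the prompt in seconds rather than minutes.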

Related

Is it possible to restart a process in Google Cloud Run?

We have multiple Google Cloud Run services running for an API. There is one parent service and multiple child services. When the parent service starts, it loads a schema from all the children.
Currently there isn't a way to tell the parent process to reload the schema, so when a new child is deployed, the parent service needs to be restarted to reload it.
We understand there are one or more instances of Google Cloud Run running and have ideas for dealing with that, but we are wondering if there is a way to restart the parent process at all. Without a way to achieve it, the instance count is irrelevant for now. The only way we have found is redeploying the parent, which seems like overkill.
The containers running in Google Cloud Run are Alpine Linux with Node.js, running an Express application/middleware. I can stop the Node application, but not restart it. If I stop the service, Google Cloud Run may still route traffic to that instance, causing errors.
Perhaps I can stop the Express service so Google Cloud Run will replace that instance? Is this a possibility? Is there a graceful way to do it so it tries to complete any current requests first (not simply kill Express)?
Looking for any approaches to force Google Cloud Run to restart or start new instances. Thoughts?
Your design is, at a high level, a cache system: the parent service gets data from the child services and caches it.
Therefore, you have all the difficulties of cache management, especially cache invalidation. There is no easy solution for that, but my recommendation would be to use Memorystore: each child service publishes the latest version number of its schema (at container startup, for example), and the parent service checks in Memorystore (single-digit-millisecond latency) whether a new version is available, at each request, for example. If one is, the parent requests the child service and updates its schema cache.
If applicable, you can also set a TTL on your cache and reload it every minute, for example.
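The version-check pattern can be sketched with redis-cli against a Memorystore (Redis-compatible) instance; the host variable and key names are hypothetical:

```shell
# child service, at container startup: publish its current schema version
redis-cli -h "$REDIS_HOST" SET schema:child-a:version 42

# parent service, on each request (or on a timer): compare against its cached version
REMOTE=$(redis-cli -h "$REDIS_HOST" GET schema:child-a:version)
if [ "$REMOTE" != "$CACHED_VERSION" ]; then
    echo "schema changed: refetch from child-a and update the local cache"
fi
```

The same GET/compare logic would normally live inside the parent's request handler rather than in a shell script; the commands just show the shape of the exchange.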
EDIT 1
EDIT: If I focus only on Cloud Run, there is exactly one condition under which you can restart your container without deploying a new version: set the max-instances parameter to 1 and implement an exit endpoint (simply call os.exit() or similar in your code).
You lose all scale-up capacity, but it's the only case where, with a special exit endpoint, you can exit the container and force Cloud Run to reload it on the next request.
If you have more than one instance, you won't be able to restart all the running instances, only the one that handles the "exit" request.
Therefore, the only general solution is to deploy a new revision (simply redeploy, without any code or config change).
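Both options from the edit can be sketched with the gcloud CLI; the service and image names are hypothetical:

```shell
# option 1: cap the service at a single instance, so an exit endpoint
# effectively restarts the whole (only) instance
gcloud run services update my-parent-service --max-instances 1

# option 2: restart every instance by deploying a new revision with no changes
gcloud run deploy my-parent-service --image gcr.io/my-project/parent:latest
```

Deploying a new revision also drains the old instances gracefully, which addresses the "complete current requests first" concern from the question.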

Google VM - process persistence

I have a Google VM, and I can start a web server. The command I issue is python server.py.
My plan is to keep the application running.
Since I will eventually shut down my PC (and thus the terminal), will the application continue running normally?
Or do I have to start the server and then use disown to make the app run in the background?
NOTE: If the second option is the case, does this mean that when I log back in and want to shut down the server, the only way to do it is with pkill?
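The mechanics being asked about can be sketched as follows, using a sleep process as a stand-in for python server.py (assuming bash):

```shell
# start the process immune to hangup (SIGHUP), detached from the terminal's output
nohup sleep 300 > /dev/null 2>&1 &
PID=$!
disown "$PID"      # remove it from the shell's job table so logging out won't signal it
kill -0 "$PID"     # exit status 0 means the process is alive

# after logging back in, the PID is no longer at hand, so match by command line
pkill -f "sleep 300"
```

With nohup plus disown (or a terminal multiplexer such as tmux or screen, or a systemd unit), the process survives the terminal closing, and pkill by command-line pattern is indeed a common way to stop it later.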

Deploying Elasticsearch 6 in Azure Container Instances

I'm trying to deploy the current version of Elasticsearch in an Azure Container Instance using the Docker image. However, I need to set vm.max_map_count=262144, and since the container continually restarts with the error max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144], I can't hit the instance with any commands. Disabling restarts or continuing on errors causes the container instance to fail.
From the comments it sounds like you may have resolved the issue. In general, for future readers, a possible troubleshooting guide is:
If the container exits unsuccessfully:
Try using EXEC for interactive debugging while the container is running. It is available in the Azure portal on the "Containers" tab.
If EXEC does not help, attempt to get a successful run on local Docker.
Once it runs successfully locally, upload the new container version to your registry and redeploy to ACI.
If the container exits successfully and repeatedly restarts:
Verify that the container has a long-running command.
Update the restart policy to Never, so that upon exit you can debug the terminated container group.
If you cannot find the issue, follow the local steps above and get a successful run with local Docker.
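The EXEC and restart-policy steps can also be driven from the Azure CLI; the resource group, container group name, and image tag below are hypothetical:

```shell
# open an interactive shell in the running container for debugging
az container exec --resource-group my-rg --name my-es-group --exec-command "/bin/bash"

# redeploy with restart policy Never, so a failed container can be inspected post-mortem
az container create --resource-group my-rg --name my-es-group \
  --image docker.elastic.co/elasticsearch/elasticsearch:6.8.23 \
  --restart-policy Never
```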
Hope this helps.

Starting node server in azure batch startup

I am new to Azure Batch. I am working in a Windows environment.
My requirement is that a Node.js server should be running before any batch task runs on the machine.
I have tried to start the node server in a job preparation task as well as in a pool start task with the following task command line:
cmd /c start node.exe my_js_file.js
But as soon as the start task completes, the Node server running on the machine dies.
If I do not use start in the above command, the node server starts and keeps running, but the start task also keeps running and never completes.
What can I do to start the Node.js server in the background in Azure Batch?
I have also tried to start the node server when a new task (a command line application) executes, but as soon as the task completes, the node process also gets killed.
To create a detached process that runs forever, you have two options. Either option can be done from a job preparation task or a start task, but be warned that if you have multiple jobs requiring the same Node.js server context, you may encounter errors. If you use this at the job level, make sure you specify a job release task that correctly kills the long-running process. Also be aware that if you allow multiple tasks to be co-scheduled on the same node, there can be conflicts if they require the same long-lived process.
The recommended way is to install a Windows service that runs your command. There are various ways to bootstrap a service, including the command-line sc program or the myriad of helper programs that do this on your behalf.
If you do not want to (or cannot) install a Windows service, you can create a C++ program that invokes your command as a "breakaway process." Consult the MSDN CreateProcess documentation and specify the CREATE_BREAKAWAY_FROM_JOB flag in dwCreationFlags. This task must run with elevated (administrator) privileges. It is also recommended that you start your process in a folder outside the default start task working directory, so that compute node restarts don't affect files you may generate in the current working directory.
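A sketch of the Windows-service option using NSSM, one of the commonly used service-wrapper helpers mentioned above; the service name and paths are hypothetical, and the commands must run from an elevated prompt:

```shell
:: wrap node.exe and the script as a Windows service that survives task completion
nssm install MyNodeServer "C:\Program Files\nodejs\node.exe" "C:\app\my_js_file.js"
nssm set MyNodeServer AppDirectory "C:\app"
nssm start MyNodeServer
```

A wrapper is needed because node.exe is not service-aware on its own, so sc create cannot register it directly.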

How to make one micro service instance at a time run a script (using dockers)

I'll keep it simple.
I have multiple instances of the same microservice (using Docker), and this microservice is also responsible for syncing a cache.
Every X amount of time it pulls data from some repository and stores it in the cache.
The problem is that I need only one instance of this microservice to do this job, and if it fails, I need another one to take its place.
Any suggestions on how to do this simply?
By the way, is there an option to tag a particular microservice Docker instance and make it do some extra work?
Thanks!
The responsibility for restarting a failed service, or scaling up and down, belongs to an orchestrator. For example, in my latest project, I used Docker Swarm.
Currently, Docker's restart policies are:
no: Do not automatically restart the container when it exits. This is the default.
on-failure[:max-retries]: Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.
always: Always restart the container regardless of the exit status. The Docker daemon will try to restart the container indefinitely, and the container will also always start on daemon startup, regardless of its current state.
unless-stopped: Always restart the container regardless of the exit status, but do not start it on daemon startup if the container was put into a stopped state before.
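For the "only one instance does the sync job" requirement, one sketch is to split the sync work into its own single-replica Swarm service; the image name, service names, and script are hypothetical:

```shell
# the regular API service, scaled out across the cluster
docker service create --name api --replicas 4 myorg/api:latest

# the cache-sync worker as a singleton; Swarm reschedules it on another node if it fails
docker service create --name cache-sync --replicas 1 \
  --restart-condition on-failure myorg/api:latest node sync-cache.js

# outside Swarm, a single container can rely on a restart policy instead
docker run -d --name cache-sync --restart on-failure:5 myorg/api:latest node sync-cache.js
```

Running the singleton as a separate service also answers the tagging question: rather than tagging one instance of the scaled service, give the extra work its own service definition.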
