Starting a Node server in Azure Batch startup - node.js

I am new to Azure Batch, working in a Windows environment.
My requirement is that a Node.js server should be running before any Batch task runs on the machine.
I have tried to start the Node server in a job preparation task as well as in a pool start task, with the following task command line:
cmd /c start node.exe my_js_file.js
But as soon as the start task completes, the Node server running on the machine dies.
If I do not use start in the above command, the Node server starts and keeps running, but the start task also keeps running and never completes.
What can I do to start a Node.js server in the background in Azure Batch?
I have also tried to start the Node server when a new task executes (a command-line application), but as soon as the task completes, the node process also gets killed.

In order to create a detached process that runs forever, you have two options. Either can be done from a job preparation task or a start task, but be warned that if you have multiple jobs requiring the same Node.js server context, you may encounter errors. If you use this at the job level, make sure you specify a job release task that correctly kills the long-running process. Also be aware that if you allow multiple tasks to be co-scheduled on the same node, there can be interaction conflicts if they require the same long-lived process.
The recommended way is to install a Windows service that runs your command. There are various ways to bootstrap a service, including the command-line sc program or any of the myriad helper programs that will do this on your behalf.
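To make the service route concrete, here is a hedged sketch. Note that node.exe does not itself implement the Windows service control protocol, so registering it directly with sc create will typically fail to start; a wrapper such as NSSM is a common choice. The service name and paths below are placeholder assumptions:

```
:: Windows cmd, run elevated. "MyNodeServer" and both paths are placeholders.
nssm install MyNodeServer "C:\node\node.exe" "C:\app\my_js_file.js"
nssm set MyNodeServer AppDirectory "C:\app"
nssm start MyNodeServer
:: later, to remove the service:
:: nssm stop MyNodeServer
:: nssm remove MyNodeServer confirm
```

A service installed this way survives start task completion and compute node restarts, which is exactly what the start/job prep task approach cannot give you.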
If you do not want to (or cannot) install a Windows service, you can create a C++ program that invokes your command as a "breakaway process." Consult the MSDN CreateProcess documentation and make sure you specify the CREATE_BREAKAWAY_FROM_JOB flag in dwCreationFlags. This task must be run with elevated (administrator) privileges. It is also recommended that you start your process in a folder outside the default start task working directory, so that compute node restarts don't affect files you may generate in the current working directory.

Related

How to stop Azure function app container started from Azure cloud shell?

I am using Azure Cloud Shell to make changes and test locally. After changes are made, I start the function app container with func start --verbose. Before making further changes and testing again, I need to stop the container first. What is the recommended way to do it? I have tried Ctrl+C and Ctrl+Z; each time it takes roughly 5 to 12 minutes before control returns to the prompt.
It gets stuck terminating after printing the following logs:
[2022-08-11T07:28:16.777Z] Language Worker Process exited. Pid=515.
[2022-08-11T07:28:16.777Z] python3 exited with code 1 (0x1). .
[2022-08-11T07:28:16.778Z] Exceeded language worker restart retry count for runtime:python. Shutting down and proactively recycling the Functions Host to recover
The func start command runs the function locally. In the background it brings up the components the function requires: configuration, host, port, and so on.
Whenever you change any configuration, the function and its container restart.
When the function runs, it allocates specific resources and loads the required packages and files. If you stop it mid-run, those resources and file handles have to be released, so it takes some time before control returns to the prompt.
Before making further changes and test again, need to stop the container first. What is the recommended way to do it?
You can build a container image environment to test locally. Keep your Dockerfile in the project root; it provides the environment required to run the Function App in a container.
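Another way to avoid the slow Ctrl+C shutdown is to background the host yourself and kill it by PID when you want it gone. This is a hedged sketch of the general pattern, not a Functions-specific recipe; sleep 300 stands in for func start --verbose so the snippet is runnable anywhere:

```shell
#!/usr/bin/env bash
# Run the long-lived command in the background, remembering its PID.
long_cmd="sleep 300"              # stand-in; substitute: func start --verbose
$long_cmd > func.log 2>&1 &
echo $! > func.pid

# ...edit code, then stop the container before the next test run:
kill "$(cat func.pid)"            # SIGTERM; escalate to kill -9 if it hangs
rm -f func.pid
```

With the real func host this should stop it immediately rather than waiting out the language-worker restart retries shown in the logs above.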

Run Python script in Task Scheduler as normal user but with admin privileges

I have an odd set of constraints and I'm not sure if what I want to do is possible. I'm writing a Python script that can restart programs/services for me via an Uvicorn/FastAPI server. I need the following:
For the script to always be running and to restart if it stops
To be constantly logged on as the standard (non-admin) user
To stop/start a Windows service that requires admin privileges
To start a program as the current (non-admin) user that displays a GUI
I've set up Task Scheduler to run this script as admin, whether logged in or not. This was the only way I found to be able to stop/start Windows services. With this, I'm able to do everything I need except for running a program as the current user. If I set the task to run as the current user, I can do everything except the services.
Within Python, I've tried running the program with os.startfile(), subprocess.Popen(), and subprocess.run(), but it always runs with no GUI, and seemingly as the admin since I can't kill the process without running Task Manager as admin. I'm aware of the 'user' flag in subprocess, but as I'm on Windows 8, the latest Python version I can use is 3.8.10, and 'user' wasn't introduced until Python 3.9.
I've tried the 'runas' cmd command (run through os.system() as well as a separate batch script), but this doesn't work as it prompts for the user's password. I've tried the /savecred flag and I've run the script manually both as a user and as admin just fine, but if I run this through Task Scheduler, either nothing happens, or there is a perpetual 'RunAs' process that halts my script.
I've tried PsExec, but again that doesn't work in Task Scheduler. If I run even a basic one-line batch file with PsExec as a task, I get error 0xC0000142, which from what I can tell is some DLL error: NT_STATUS_DLL_INIT_FAILED.
The only solution I can think of is running two different Python scripts in Task Scheduler (one as admin, one as non-admin), but this is not ideal as I want only one Uvicorn/FastAPI server running with one single port.
EDIT -
I figured out a way to grant service perms to the user account with ServiceSecurityEditor, but I'm still open to any suggestions that may be better. I want the setup process for a new machine to be as simple as possible.

Google VM - process persistence

I have a Google VM, and I can start a web server on it. The command I issue is: python server.py.
My plan is to keep the application running.
Since I will eventually close my PC (and thus the terminal), will the application continue running normally?
Or do I have to start the server and then use disown to make the app run in the background?
NOTE: if the second option is the case, does this mean that when I log back in and want to shut down the server, the only way to do it is with pkill?
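For reference, a plain foreground python server.py dies with the SSH session. A hedged sketch of the background pattern being asked about, with sleep 300 standing in for python server.py so it runs anywhere, and a PID file so pkill is not the only way to stop it later:

```shell
#!/usr/bin/env bash
# nohup detaches the process from the terminal's hangup (SIGHUP) signal;
# the PID file gives a clean way to stop it after re-logging in.
nohup sleep 300 > server.log 2>&1 &   # stand-in for: nohup python server.py
echo $! > server.pid
disown                                # drop it from the shell's job table

# after logging back in, stop it without pkill:
kill "$(cat server.pid)"
rm -f server.pid
```

For anything long-lived, a systemd unit (or tmux/screen session) is a sturdier option than nohup, since it also restarts the server if the VM reboots.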

PM2 - Is this considered good practice?

In my project I have several servers which run Node.js applications using PM2; those were not created by me, and I am not that familiar with PM2. Now I need to start a new server, which is simply a CRON process that queries an Elasticsearch instance.
There are no routes or anything in it, just a CRON with some logging.
Here is my dilemma. I have played with PM2 and have become somewhat familiar with what it is and what it does. But the question is: how shall I run it?
The previous projects have a PM2 config.json with many parameters, and they are started in cluster mode (fronted by Nginx); when I start them, I see all processes becoming daemons. But in my case I don't need that. I just need it to run as a single service.
In other words, if I use the configuration file to run PM2, I see it spawned in cluster mode, and it creates chaos as my CRON is fired many times. I don't need that. If I start it in fork mode, it also spawns instances, but all of them die except one (because they use the same port). I also don't need that.
I just need single service.
I managed to run my CRON app.js with a single line, as simple as:
pm2 start app.js. It runs as a single process, and I can see its info with pm2 status. All fine.
If I run it with a single line (as in my case), is that considered OK? Based on my knowledge, if I use config.json it will always run in fork or cluster mode.
Is it OK to run it with a single line, or do I still need to use a config.json file?
If you only need one process to run, as is the case here, you're doing the right thing.
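For the record, a config file does not force cluster mode; you can pin a single fork-mode process explicitly if you want the choice documented in the repo. A hedged sketch of such a config (the app name is a placeholder):

```json
{
  "apps": [{
    "name": "es-cron",
    "script": "app.js",
    "exec_mode": "fork",
    "instances": 1,
    "autorestart": true
  }]
}
```

pm2 start config.json with this file behaves the same as pm2 start app.js, so either form is fine for a single CRON process.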

How to achieve zero downtime redeployment in Node.js

What is the easiest way to achieve zero downtime for my Node.js application?
I have an app that requires the following steps for redeployment:
npm install
node_modules/.bin/bower install
node_modules/.bin/gulp
The result of these operations is the ready-to-run application in the build directory generated by gulpfile.js. A currently-running instance of the same application runs from this directory (launched via forever like this: forever start server.js).
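One common way to keep the build from clobbering the running instance is to build each release into its own directory and repoint a symlink. This is a hedged sketch under assumed directory names, with a stand-in for the real build step so it is runnable as-is:

```shell
#!/usr/bin/env bash
# Build each release into its own directory, then swap a symlink so the
# running instance's files are never overwritten in place.
set -e
release="releases/$(date +%s)"
mkdir -p "$release"
# In a real deploy, run the build here and copy the gulp output in:
#   npm install && node_modules/.bin/bower install && node_modules/.bin/gulp
#   cp -r build/. "$release"/
echo "// built app" > "$release/server.js"   # stand-in for the build output
ln -sfn "$release" current                   # atomically repoint "current"
# forever restart current/server.js          # then restart against the new code
```

The restart at the end is still needed for Node to pick up the new files, but the old process keeps serving from its own release directory until that moment.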
As far as I know, it is not possible to achieve zero downtime via the forever module, so I decided to find another way to do it.
I saw pm2 but I found it very complex tbh (prove me wrong if you don't feel the same).
I also saw naught but I can't even start my application via naught start server.js -- it doesn't even print anything in stdout / stderr.
I also saw up-time, but I didn't get the idea: how will it handle the situation when I run gulp, which replaces files in the directory the currently-running instance is working from at that moment?
Regarding the handling of replaced files during a build: if these files are used by the Node.js app, then all changes are applied upon process restart (since the files are loaded into memory). Browser frontend files can also be cached in application memory to achieve similar behavior (changes applied only upon restart and/or cache invalidation).
We're using pm2 in cluster mode.
pm2 start app.js -i max
The above command starts app.js in cluster mode with one process per available CPU core (-i 0 has the same effect, or you can pass an explicit number).
Zero-downtime restart:
pm2 gracefulReload all
This command restarts all processes sequentially, so if you have more than one process up and running, there is always at least one process serving requests during the restart.
If you have only one process of app.js, you can start it in cluster mode and run pm2 scale app.js 2 (which starts one more process), then pm2 gracefulReload all, and then pm2 scale app.js 1 (which removes the previously started process).
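Put together, the single-process sequence described above looks like this (assuming pm2 is installed and app.js is your entry point):

```
pm2 start app.js -i 1     # one worker, but in cluster mode so it can reload
pm2 scale app.js 2        # temporarily add a second worker
pm2 gracefulReload all    # sequential restart; one worker keeps serving
pm2 scale app.js 1        # drop back to a single worker
```

Starting in cluster mode even for one worker matters: fork-mode processes cannot share a port during the handover, so gracefulReload cannot keep them serving.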
Though I think app restarting is not the main problem of zero-downtime deployment: we've not managed to handle DB migrations, so a full app shutdown is needed to apply DB changes. There can also be an issue with browser frontend files when, during a deploy, a user has obtained the new version of them but an AJAX request is processed by an old version of the server process; in this case sticky sessions and API versioning come to the rescue.