Is it possible to restart a process in Google Cloud Run - Node.js

We have multiple Google Cloud Run services running for an API. There is one parent service and multiple child services. When the parent service starts it loads a schema from all the children.
Currently there isn't a way to tell the parent process to reload the schema, so when a new child is deployed the parent service needs to be restarted to reload the schema.
We understand there are one or more instances of Google Cloud Run running and have ideas for dealing with this, but are wondering whether there is a way to restart the parent process at all. Without a way to achieve that, the number of instances is irrelevant for now. The only way we have found is to redeploy the parent, which seems like overkill.
The containers running in Google Cloud are Alpine Linux with Node.js, running an Express application/middleware. I can stop the Node application, but not restart it. If I stop the service, Google Cloud Run may still route traffic to that instance, causing errors.
Perhaps I can stop the Express server so that Google Cloud Run will replace that instance? Is this a possibility? Is there a graceful way to do it, so that it finishes any in-flight requests first (not simply kill Express)?
Looking for any approaches to force Google Cloud Run to restart or start new instances. Thoughts?

Your design is, at a high level, a cache system: the parent service gets the data from the child services and caches it.
Therefore, you have all the difficulties of cache management, especially cache invalidation. There is no easy solution for that, but my recommendation would be to use Memorystore, where every child service publishes the latest version number of its schema (at container startup, for example). The parent service then checks the version in Memorystore (single-digit-ms latency), at each request for example, to see whether a new version is available. If there is, it requests the schema from the child service and updates its own schema cache.
If applicable, you can also set a TTL on your cache and reload it every minute, for example.
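As a minimal sketch of that version-check pattern, assuming Memorystore for Redis and the node redis client; the key name schema:version and the fetchSchemaFromChildren() helper are illustrative, not part of the question:

    // Sketch: the parent checks a schema version key in Memorystore (Redis)
    // on each request and reloads its cached schema only when the version
    // published by the children has changed.
    const { createClient } = require('redis');

    const redis = createClient({ url: process.env.REDIS_URL }); // Memorystore endpoint

    let cachedSchema = null;
    let cachedVersion = null;

    async function getSchema() {
      if (!redis.isOpen) await redis.connect();
      const version = await redis.get('schema:version'); // children SET this at startup
      if (version !== cachedVersion) {
        cachedSchema = await fetchSchemaFromChildren(); // hypothetical: re-query each child
        cachedVersion = version;
      }
      return cachedSchema;
    }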
EDIT 1
If I focus only on Cloud Run, there is exactly one condition under which you can restart your container without deploying a new version: set the max-instances parameter to 1 and implement an exit endpoint (which simply calls process.exit() or similar in your code).
OK, you lose all the scale-up capacity, but it's the only case where, with a special exit endpoint, you can exit the container and force Cloud Run to reload it at the next request.
If you have more than one instance, you won't be able to restart all the running instances, only the one that handles the "exit" request.
Therefore, the only solution is to deploy a new revision (simply deploy again, without any code/config change).
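For the single-instance case, a minimal sketch of such an exit endpoint with Express, draining in-flight requests before exiting (the /exit path is illustrative, and a real service should of course protect it):

    const express = require('express');
    const app = express();

    const server = app.listen(process.env.PORT || 8080);

    // Exit endpoint: stop accepting new connections, let current requests
    // finish, then exit so Cloud Run cold-starts a fresh instance on the
    // next request (only reliable with max-instances=1).
    app.post('/exit', (req, res) => {
      res.status(202).send('Restarting');
      server.close(() => process.exit(0));
    });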

Related

Prevent duplicate schedule job on Azure App service when scale out

I have a Nuxt app deployed on Azure App Service, using the cron library to run scheduled jobs. However, I found that if there is more than one instance running, the scheduled jobs are duplicated. What is the proper way to handle this? Thanks!
If you have more than one instance of your app running, then you have more than one instance of cron running (I'm assuming you are referring to the npm module in your case). Both are going to fire on the same schedule, since it's coded into both apps. And of course, if you scale beyond that, you would have three, four, five jobs running.
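To illustrate with the cron npm module (the schedule here is arbitrary): every instance that boots runs this same code, so N instances means N independent timers firing together.

    const { CronJob } = require('cron');

    // Each running instance of the app creates its own copy of this job,
    // so scaling to N instances fires it N times per schedule.
    new CronJob('0 0 * * *', () => console.log('nightly job ran'), null, true);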
There are a few options that let you run singleton jobs on a timer, such as adding a WebJob to your App Service, creating a Logic App, or running an Azure Function. For a basic JS script I would recommend the Function. You create a JSON file that defines the schedule you want it to run on (just like cron); it's JS, so you can probably just copy your code over along with any npm modules you need; and you can set Configuration just like web apps, so if your job needs to connect to storage or a database you can keep connection strings and other info there, just like you do for your existing web app.
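As a sketch of such a timer-triggered Function (Node.js, classic function.json programming model; the NCRONTAB schedule shown is an assumption):

    // index.js of a timer-triggered Azure Function. The schedule lives in
    // the accompanying function.json binding, e.g.:
    //   { "bindings": [{ "name": "myTimer", "type": "timerTrigger",
    //                    "direction": "in", "schedule": "0 0 3 * * *" }] }
    // which fires daily at 03:00 UTC.
    module.exports = async function (context, myTimer) {
      context.log('Singleton scheduled job ran at', new Date().toISOString());
      // ...your existing cron job logic here...
    };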

Shutdown system or stop AWS EC2 instance from NodeJS

I have AWS EC2 instances running Debian with systemd running Node as a service. (Hereinafter these instances are called the "Node servers".)
The Node servers are started by another instance (hereinafter called "the manager instance") that is permanently on.
When a Node server experiences some predefined period of inactivity, I want it to shut down automatically.
I am considering the following options:
1. (After sensing a period of inactivity in Node) execute a child_process in Node that runs the shutdown now command.
2. (After sensing a period of inactivity in Node) call the AWS SDK's stopInstances with the instance's own resource ID.
3. Expose an HTTP GET endpoint called last-request-time on each Node server, which is periodically polled by the "manager instance", which then decides whether/when to call the AWS SDK's stopInstances.
I am unsure which of these approaches to take and would appreciate any advice. Explicitly shutting down a machine from Node running on that same machine feels somehow inappropriate. But option 3 requires periodic HTTP polling, not to mention that it feels riskier to rely on another instance for auto-shutdown (if the manager is down, all the instances keep running).
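For what it's worth, a sketch of option 2 with the AWS SDK for JavaScript v3 (the metadata lookup assumes IMDSv1 is enabled, production code may need an IMDSv2 token, and the instance needs IAM permission for ec2:StopInstances):

    const { EC2Client, StopInstancesCommand } = require('@aws-sdk/client-ec2');

    // After the inactivity timeout fires, the instance stops itself.
    async function stopSelf() {
      // Ask the instance metadata service for our own instance ID
      // (global fetch requires Node 18+).
      const res = await fetch('http://169.254.169.254/latest/meta-data/instance-id');
      const instanceId = await res.text();
      await new EC2Client({}).send(
        new StopInstancesCommand({ InstanceIds: [instanceId] })
      );
    }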
Or perhaps it is possible to get systemd to shut down the machine when a particular service exits with a particular code? This, if possible, would feel like the best solution as the Node process would only need to abort itself after the period of inactivity with a particular exit code.
You could create a Lambda function that acts as an API and uses the SDK's stopInstances functionality.
That would also allow you to give it the full functionality of a "manager instance" and save even more on instances, since it will only run when needed.
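A sketch of that manager Lambda (SDK v3; the event payload shape and response are assumptions):

    const { EC2Client, StopInstancesCommand } = require('@aws-sdk/client-ec2');
    const ec2 = new EC2Client({});

    // API-triggered handler: receives an instance ID and stops that instance.
    exports.handler = async (event) => {
      const { instanceId } = JSON.parse(event.body); // payload shape is illustrative
      await ec2.send(new StopInstancesCommand({ InstanceIds: [instanceId] }));
      return { statusCode: 200, body: `stopping ${instanceId}` };
    };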
Or you could cut out the middle-man and migrate the "Node servers" to lambda.
(lambda documentation)

Cloud-based node.js console app needs to run once a day

I'm looking for what I would assume is quite a standard solution: I have a node app that doesn't do any web-work - simply runs and outputs to a console, and ends. I want to host it, preferably on Azure, and have it run once a day - ideally also logging output or sending me the output.
The only solution I can find is to create a VM on Azure and set up a cron job - then I need to either go fetch the debug logs daily or write Node code to email me the output. Is anything more efficient available?
Azure Functions would be worth investigating. It can be timer triggered and would avoid the overhead of a VM.
Also, I would investigate Azure Container Instances; this is a good match for your use case. You can have a container image with your Node app that you run on an ACI instance. https://learn.microsoft.com/en-us/azure/container-instances/container-instances-tutorial-deploy-app

What's the process of updating a NodeJS app running in production?

I am working on a webapp that will be published to production and then updated on a regular basis as features and bug fixes come in.
I run it with node app.js, which loads the config, connects to the database, and starts the web server.
I wonder: what's the process of updating the app when I have the next version?
I suppose I have to kill the process and start it again after updating and deploying? That means there will be some downtime?
Should I collect stats on the periods of least use during the week/month and apply the update then? Or should I start the current version on another machine, redirect all requests to it, update the main one, and then switch back?
I think the second approach is better.
The first one won't prevent downtime; it will just make sure it impacts the least number of users, while the second one creates no downtime at all.
Moreover, I think you should keep the old version running on the other machine for some time, in case you find out the new version must be reverted for whatever reason. In that case you will just have to redirect traffic back to the old node, without any downtime.
Also, if you're setting up a production environment, I would recommend that instead of just running your process with the "node" command, you use something like forever or pm2 in order to get automatic restarts and some other advanced features.
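For example, with pm2 a minimal ecosystem.config.js might look like the sketch below (names are illustrative); running the app in cluster mode is what allows pm2 reload to restart workers one at a time with no downtime:

    // ecosystem.config.js - start with: pm2 start ecosystem.config.js
    module.exports = {
      apps: [{
        name: 'webapp',
        script: 'app.js',
        instances: 'max',      // one worker per CPU core
        exec_mode: 'cluster',  // required for zero-downtime "pm2 reload webapp"
      }],
    };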

Docker container management solution

We have NodeJS applications running inside Docker containers. Sometimes, if a process gets locked up, or due to some other issue, the app goes down and we have to manually log in to each container and restart the application. I was wondering
if there is any sort of control panel that allows us to easily and quickly restart those apps and see the overall health of the system.
Please note: we can't use the --restart flag, because the application doesn't actually exit with an exit code. It runs into problems like some process getting blocked or things getting bogged down, rather than crashing with an exit code. That's why I don't think a restart policy will help in this scenario.
I suggest you consider using the new HEALTHCHECK directive in Docker 1.12 to define a custom check for your locking condition. This feature can be combined with the new Docker swarm service feature to specify how many copies of your container you want to have running.
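As a sketch, the health check command in the Dockerfile (e.g. HEALTHCHECK --interval=30s --timeout=10s CMD node healthcheck.js) can run a small Node script like the one below; the port and path are assumptions about your app:

    // healthcheck.js - exits 0 if the app still answers, 1 if it is bogged
    // down, letting Docker mark the container unhealthy so swarm replaces it.
    const http = require('http');

    const req = http.get('http://localhost:8080/healthz', (res) => {
      process.exit(res.statusCode === 200 ? 0 : 1);
    });
    req.on('error', () => process.exit(1));
    req.setTimeout(5000, () => { req.destroy(); process.exit(1); });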
