Multiple NodeJS Services/Modules on Google App Engine Flexible Environment - node.js

I'm struggling to figure out how to deploy multiple Node.js services on the Google App Engine flexible environment.
I'm using multiple Node.js classes with firebase-queue to process my tasks.
Right now, I'm using my package.json to trigger starting everything at once.
However, this has become problematic. I would like to be able to push a change to one particular service/script without having to stop every other script.
My package.json currently looks something like this:
"scripts": {
"task1": "node ./src/task1.js",
"task2": "node ./src/task2.js",
"start": "npm-run-all -p task1 task2"
}
I'm using different .yaml files to determine which build variant I want to push (Debug or Release), but am finding it hard to deploy each task individually. I found documentation on how to do so in Python, but nothing on Node.js. Does anyone have any suggestions?

(Answering my own question, big thanks to Justin for helping)
I was specifically having issues dynamically changing the script to start in my package.json.
I found that package.json scripts can read environment variables using '$'.
package.json:
"scripts": {
"start": "node $SCRIPT_TO_RUN"
}
myService.yaml
runtime: nodejs
vm: true
api_version: 1
instance_class: B4
manual_scaling:
  instances: 1
service: cart-monitor-dev
env_variables:
  SCRIPT_TO_RUN: './src/mytask.js'
Then deploy using:
gcloud app deploy myService.yaml
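If you give every task its own .yaml with its own service name and SCRIPT_TO_RUN, each one becomes a separate App Engine service that can be deployed on its own. A minimal sketch for a second task; the file name, service name, and script path here are just placeholders:
task2Service.yaml
runtime: nodejs
vm: true
api_version: 1
instance_class: B4
manual_scaling:
  instances: 1
service: cart-monitor-task2-dev
env_variables:
  SCRIPT_TO_RUN: './src/task2.js'
Then each service deploys independently:
gcloud app deploy myService.yaml
gcloud app deploy task2Service.yaml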

This is exactly why App Engine services exist :) You can create a {serviceName}.yaml for each service you want to deploy. Then, call gcloud app deploy service.yaml for each one. That creates multiple services in the same app. For an example see:
https://github.com/JustinBeckwith/cloudcats
Hope this helps!

Related

AWS Elastic Beanstalk Amazon Linux 2 - How to set a custom NodeCommand

Migrating your Elastic Beanstalk Linux application to Amazon Linux 2 - AWS Elastic Beanstalk
According to the docs, the aws:elasticbeanstalk:container:nodejs namespace is no longer supported and the new way to set NodeCommand is to "Use a Procfile or the scripts keyword in a package.json file to specify the start script.".
I've never dealt with Procfiles, and the part about "the scripts keyword in a package.json file" isn't very clear. Are they going to execute the scripts in order until something sticks, or what?
Did anyone figure out how exactly to set a custom NodeCommand in Amazon Linux 2?
You can use the scripts option in your package.json. For example, if you start from the sample Node.js application that EB provides, the file is:
package.json
{
  "name": "Elastic-Beanstalk-Sample-App",
  "version": "0.0.1",
  "private": true,
  "dependencies": {},
  "scripts": {
    "start": "node app.js"
  }
}
Update
The docs have an example of how to use a Procfile:
You can add a Procfile to your source bundle to specify the command that starts your application, as the following example shows. This feature replaces the legacy NodeCommand option in the aws:elasticbeanstalk:container:nodejs namespace.
web: node index.js
When you don't provide a Procfile, Elastic Beanstalk runs npm start if you provide a package.json file. If you don't provide that either, Elastic Beanstalk looks for the file app.js or server.js, in this order, and runs it.
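In other words, the scripts are not tried in order: with no Procfile, Elastic Beanstalk only runs npm start, which maps to the single start entry. A minimal sketch, with placeholder names, showing that the other entries are ignored at launch:
{
  "name": "my-al2-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js",
    "test": "mocha",
    "build": "webpack"
  }
}
Only "start" is executed when the instance boots; "test" and "build" run only if you invoke them yourself.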

Integrating PM2 on Google Cloud app engine

I am trying to integrate PM2 into Google Cloud App Engine, but I just couldn't get it to work. I am using PM2 for my site's staging environment and I am very impressed with it. I use a DigitalOcean droplet for staging. I realized that Google Cloud App Engine is not that flexible.
This is my package.json:
"main": "server.js",
"scripts": {
"start": "NODE_ENV=production npm run server:prod",
"server:prod": "node server.js",
"server:stage": "NODE_ENV=stage pm2 start server.js --exp-backoff-restart-delay=100 -i max",
"dev": "nodemon server.js",
"gcp-deploy-stage": "gcloud app deploy app.backend.stage.yaml --project=xxxxx",
"gcp-deploy-prod": "gcloud app deploy app.backend.prod.yaml --project=xxxx -v=alpha-16"
},
When I set the production start script the same way as staging, like this:
"server:prod":"pm2 start server.js --exp-backoff-restart-delay=100 -i max"
and deploy, Google Cloud App Engine crashes because there is no globally installed PM2 (via npm) to start it with.
Has anybody gone through this and made it work? Or is there any piece of code or documentation that could lead me to the right solution?
Or is the only option migrating this to Google Cloud Compute Engine?
Thank you for reading this and your help.
If you want to use any module, you're going to have to include it in your package.json. Have you tried running npm install --save pm2, and then redeploying your site? My guess is, that's going to get you where you want to go.
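A minimal sketch of what that could look like, assuming PM2 is listed as a regular dependency (so it gets installed in production) and using pm2-runtime, PM2's foreground launcher intended for containers, instead of the daemonizing pm2 start; the version number is only an example:
"dependencies": {
  "pm2": "^5.3.0"
},
"scripts": {
  "start": "pm2-runtime start server.js -i max"
}
Because npm run puts node_modules/.bin on the PATH, no global PM2 install is needed.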
All of that aside - this probably isn't a good idea :) PM2 does a lot to manage processes on the machine, specifically dealing with crashes. App Engine Flexible does a lot of this at the infrastructure layer, automatically looking at instance health. It uses Docker under the hood, which has its own restart strategy. And then on top of that, if the Docker retry strategy doesn't work, the Google load balancer kicks in and will start a new compute instance for you. It's entirely possible that doing process-level monitoring and restarting like this will work; I just want to make sure you understand everything else that's happening under the hood.
Curiosity killed the cat - why did you end up going with App Engine Flexible over App Engine Standard?

Meteor react on Gcloud

I am trying to deploy my Meteor React app on Google Cloud, but when I try to deploy it, I get an error saying that MONGO_URL needs to be specified. I build my Meteor app and cd to my bundle folder, where I run gcloud app deploy. Here is my package.json:
{
  "private": true,
  "scripts": {
    "start": "node main.js",
    "install": "(cd programs/server && npm install)"
  },
  "engines": {
    "node": "6.6.0"
  }
}
How can I find out my Meteor Mongo username and password? Running regular meteor did not ask me for a username and password. And here is my app.yaml:
runtime: nodejs
env: flex
threadsafe: true
automatic_scaling:
  max_num_instances: 1
env_variables:
  MONGO_URL: 'mongodb://[user]:[pass]@[host]:[port]/[db]'
  ROOT_URL: 'https://...'
  METEOR_SETTINGS: '{}'
I don't know what to put for MONGO_URL and ROOT_URL if I am deploying on gcloud. Also, I have a settings file for my project. Should it go under METEOR_SETTINGS in app.yaml? I apologize for asking too many questions, but this is my first time dealing with gcloud :)
This question is a little old, but it's still getting some views from Google, so let's answer it in parts. First, you need to understand how Meteor interacts with MongoDB in development and production. When you're coding your app, just executing meteor run does all the magic, because Meteor spins up an internal MongoDB. This is not recommended for real production usage and won't work well under any container-based architecture (such as Docker, Google App Engine, Heroku, etc.).
Given that, you'll need to deploy a separate MongoDB instance on Google Compute Engine. Google has these ready to launch in Cloud Launcher; just search for "MongoDB".
I recommend the Bitnami one, which is easier to configure if you're just beginning.
Google will create an instance automatically and you'll be given a root username and password, alongside a public IP address to connect to the instance.
Run the command below to access Mongo from a terminal:
# Use this template for the command
mongo "mongodb://root:PASSWORD@IP_ADDRESS/" --authenticationDatabase admin
# For example, with sample values
mongo "mongodb://root:8sdjkfh8876@127.0.0.1/" --authenticationDatabase admin
Now, create a new user for Meteor to connect to your newly created database. Never give it the root credentials; it won't work and it's not safe. For example, naming the database myapp:
use myapp;
db.createUser({
  user: "meteor_app",
  pwd: "A_SECURE_PASSWORD",
  roles: [ "readWrite", "dbAdmin" ]
})
Now, exit this connection and test your new user:
mongo "mongodb://meteor_app:A_SECURE_PASSWORD#IP_ADDRESS/myapp"
If everything is OK, you now have your MONGO_URL.
# Put this in the app.yaml file, env_variables section
MONGO_URL: "mongodb://meteor_app:A_SECURE_PASSWORD@IP_ADDRESS/myapp"

Running forever script from Google Cloud Platform App Engine startup script

I have edited the startup-script variable for one of my instances running on the Google Cloud Platform App Engine. I'd like it to call a forever script to make sure my node app is running. So I added:
cd /opt/bitnami/apps/myapp
forever start --workingDir /opt/bitnami/apps/myapp/ --sourceDir /opt/bitnami/apps/myapp/ app.js
after the #!/bin/bash line (I also tried without the cd, as it's not really necessary given my command). But once the VM is started, running forever list doesn't show my forever task as having ever started. If I copy and paste that forever command into a gcloud terminal and run it, the task shows up fine and my app starts no problem.
Am I not calling this correctly somehow within the bash script?
The simple answer is that GAE does this by default. No need for forever or PM2. There are certain health checks that GAE performs on the Docker container holding your app, and if they do not pass, the instance is automatically restarted.
If you want granular control over these checks (called Legacy Health Checks), you can add this to your app.yaml file:
health_check:
  enable_health_check: True
  check_interval_sec: 5
  timeout_sec: 4
  unhealthy_threshold: 2
  healthy_threshold: 2
There are also updated mechanisms (called Updated Health Checks) that are still in beta but can be used instead.
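For reference, a sketch of what the updated (split) checks can look like in app.yaml; the fields and values below are illustrative, so confirm them against the current App Engine flexible docs before relying on them:
liveness_check:
  check_interval_sec: 30
  timeout_sec: 4
  failure_threshold: 4
  success_threshold: 2
readiness_check:
  path: '/readiness_check'
  check_interval_sec: 5
  timeout_sec: 4
  app_start_timeout_sec: 300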
The proper way to start your Node.js app on App Engine is to specify the "scripts" field in your package.json, as described in the documentation.
Below is an example borrowed from this sample:
"scripts": {
"start": "node ./bin/www",
"test": "cd ..; npm run t -- appengine/analytics/test/*.test.js"
},
If, however, you are only interested in running a Node script and not in the features that come with Google App Engine, then you may simply run it on a Google Compute Engine instance.

Running blockchain-wallet-service on a Heroku worker

I'm trying to deploy my Django app on Heroku. It makes use of the Blockchain.info API V2 (https://github.com/blockchain/service-my-wallet-v3) and thus needs to run blockchain-wallet-service in the background, which in turn needs Node.js and npm installed.
On localhost, I have used this API successfully by running the service on my own machine, but I'm having trouble deploying to Heroku. Firstly, I assume I will need to run the service on a separate dyno, and that I will need node and npm installed on my instance.
Can someone tell me how to achieve this? I'm new to the more advanced features of Heroku. I've tried to use the Node.js buildpack, but I doubt this is the correct way. There is also this: https://elements.heroku.com/buttons/kmhouk/service-my-wallet-v3, which I've deployed as a separate app, but I've failed to merge it in some way with my Django app.
Any help is much appreciated!
I had this exact same issue, bro, and I finally saw some light at the end of the tunnel.
I've cloned the https://github.com/blockchain/service-my-wallet-v3 repository, deployed it to Heroku, and made some changes to the "package.json" file. The problem is that (in Heroku) you need to declare the dependencies in the package file. I've added these lines:
"dependencies": {
"blockchain-wallet-service": "~0.22.4",
}
and a script to test it during the deploy:
"scripts": {
"postinstall": "blockchain-wallet-service -V"
}
Also, because I cloned this repository, I needed to add this line too:
"license" : "(ISC OR GPL-3.0)",
Hope it works for you!
