Using pm2 with rabbitmq nodejs

I need to set up a Node.js cluster managed by pm2.
For communication and message passing between the workers I am using RabbitMQ.
I have gone through many articles but I am still confused about the basic flow.
These are the requirements:
When an order is created, also create a booking for the ordered services. Here I am thinking of handing the booking creation off to a worker process.
When a booking is created, notify the user, the delivery body, and also the admin.
This is the picture I have in my head for now.
I will start a Node.js cluster using pm2 as below.
// ecosystem.js
{
  "apps": [{
    "name": "API",
    "script": "server.js",   // name of the startup file
    "instances": 4,          // number of workers you want to run
    "exec_mode": "cluster",  // to turn on cluster mode; defaults to 'fork' mode
    "env": {
      "PORT": "9090"         // the port on which the app should listen
    }
  }]
}
This will start 4 workers.
Now, how would I pass tasks to these workers through RabbitMQ?
Or should I add dedicated workers for each kind of task, like NotificationWorker and BookingCreationWorker?
Would these two workers then wait for tasks on their queues and process them?
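The order-to-booking handoff described above could be sketched like this on the publishing side. This is only a sketch, assuming the amqplib client and a queue named booking_tasks (both are my assumptions, not part of the question):

```javascript
// Hypothetical publisher sketch (amqplib client and queue name are assumptions)
const amqp = require('amqplib');

async function publishBookingTask(order) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  // durable queue so pending tasks survive a broker restart
  await ch.assertQueue('booking_tasks', { durable: true });
  ch.sendToQueue('booking_tasks', Buffer.from(JSON.stringify(order)), {
    persistent: true, // write the message to disk as well
  });
  await ch.close();
  await conn.close();
}
```

In practice you would keep one connection open per process rather than reconnecting on every publish.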

I'd suggest you add a worker for the booking-creation task and a consumer for notifications (NotificationWorker.js), declared alongside the API in the ecosystem file:
{
  apps: [
    {
      name: 'API',
      script: 'server.js',
      instances: 2,
      watch: true,
      exec_mode: "cluster",
      max_memory_restart: '1G',
      env: {
        "PORT": "9090"
      }
    },
    {
      name: 'CreateBookWorker',
      watch: true,
      script: 'worker/BookingCreationWorker.js',
      instances: 2
    }
  ]
};
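worker/BookingCreationWorker.js would then block on its queue and process tasks as they arrive. A minimal sketch, again assuming amqplib, a queue named booking_tasks, and a createBooking function of your own (all assumptions):

```javascript
// Hypothetical consumer sketch for worker/BookingCreationWorker.js
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('booking_tasks', { durable: true });
  ch.prefetch(1); // hand each worker instance one task at a time
  ch.consume('booking_tasks', async (msg) => {
    const order = JSON.parse(msg.content.toString());
    await createBooking(order); // your booking logic (assumed)
    ch.ack(msg);                // acknowledge only after success
  });
}

main().catch(console.error);
```

With instances: 2 in the ecosystem file, pm2 runs two such consumers and RabbitMQ load-balances the queue between them.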

Related

IBM Analytics Engine - Cluster creation fails if i pass Ambari configuration as part of advance options

I am using Analytics Engine on IBM Cloud and trying to pass Ambari configuration like below in the Advanced provisioning options.
{
  "ambari_config": {
    "hardware_config": "default",
    "software_package": "ae-1.2-hive-spark",
    "num_compute_nodes": 1,
    "advanced_options": {
      "ambari_config": {
        "spark2-defaults": {
          "spark.dynamicAllocation.minExecutors": 1,
          "spark.shuffle.service.enabled": true,
          "spark.dynamicAllocation.maxExecutors": 2,
          "spark.dynamicAllocation.enabled": true
        }
      }
    }
  }
}
I am following this documentation to pass the above configuration
https://cloud.ibm.com/docs/services/AnalyticsEngine?topic=AnalyticsEngine-advanced-provisioning-options
After multiple retries I saw that my cluster request failed each time.
After reviewing my request, I figured out that I was passing the ambari_config attribute twice, which is not accepted.
The valid JSON which worked for me looks like this:
{
  "hardware_config": "default",
  "software_package": "ae-1.2-hive-spark",
  "num_compute_nodes": 1,
  "advanced_options": {
    "ambari_config": {
      "spark2-defaults": {
        "spark.dynamicAllocation.minExecutors": 1,
        "spark.shuffle.service.enabled": true,
        "spark.dynamicAllocation.maxExecutors": 2,
        "spark.dynamicAllocation.enabled": true
      }
    }
  }
}
One more scenario where cluster creation can fail is an error like InvalidTopologyException: The following config types are not defined in the stack: [spar2-hive-site-override]
That issue was caused by a typo in the name of the config property file where the user wants to add or modify properties (spar2 instead of spark2).

How to access ecs task definition environment variables in nodejs?

I am trying to access the environment variables set in a task definition, inside my nodejs app, with process.env.
I use a Dockerfile to create an image of the project, upload it to ECR, then use this image in the task definition.
I set environment variables for the nodejs app inside the Dockerfile, like this:
# Dockerfile
...
RUN ROOT_DIR='/'
RUN PUBLIC_DIR='/public'
...
I have this task definition:
# task_definition.json
...
"environment": [
  { "name": "KeyOne", "value": "KeyOneValue" },
  { "name": "KeyTwo", "value": "KeyTwoValue" }
]
...
I am not able to access process.env.KeyOne / process.env.KeyTwo (they are undefined).
I would like to be able to set those environment variables from the task definition and then reference them inside nodejs app with process.env instead of setting them inside the Dockerfile.
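As an aside, RUN ROOT_DIR='/' only sets the variable for that single build step and does not persist it into the running container; to bake a default environment variable into the image you would use the ENV instruction instead (the task definition's environment entries then override these at run time). A corrected sketch of the Dockerfile lines above:

```dockerfile
# ENV persists into the running container; RUN does not
ENV ROOT_DIR='/'
ENV PUBLIC_DIR='/public'
```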
Here is a test I just made on my account, using ECS Fargate. All env variables from the task definition are accessible from NodeJS code.
Source Code is at
https://github.com/sebsto/ecs-demo/tree/master/so
TaskDefinition excerpt :
"environment": [
  {
    "name": "KEY1",
    "value": "VALUE1"
  }
],
Code extract :
app.get('/', (req, res) => {
  res.send(`Hello world<br/>${JSON.stringify(process.env, null, 2)}`);
});
Output:
Hello world
{
  "KEY1": "VALUE1",
  "NODE_VERSION": "10.16.0",
  "HOSTNAME": "ip-10-0-0-83.eu-west-1.compute.internal",
  "YARN_VERSION": "1.16.0",
  "HOME": "/root",
  "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI": "/v2/credentials/b630982f-dffb-4ccc-9c8b-8311e42b57ab",
  "AWS_EXECUTION_ENV": "AWS_ECS_FARGATE",
  "AWS_DEFAULT_REGION": "eu-west-1",
  "ECS_CONTAINER_METADATA_URI": "http://169.254.170.2/v3/12186f93-de7b-47e3-a096-b0f23d7e0e81",
  "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  "AWS_REGION": "eu-west-1",
  "PWD": "/usr/src/app"
}
I'll keep the container up and running for a few months; you can test it yourself at http://52.18.232.75:8080/

Running several scripts with forever

I have several scripts in a directory; each script is called bot plus its number, from 1 up to the number of scripts.
What I would like to do is run all of the scripts with a single command in the terminal (using Ubuntu). I've used the forever command to keep a script running without stopping, etc.
Could I do it through the terminal or with a Node.js script?
Are there any other commands like forever that would do it for me?
You can do it through the command line with the forever command.
You'll need to create a JSON file listing the scripts you need.
Example:
[
  {
    // App1
    "uid": "app1",        // ID of the script
    "append": true,
    "watch": true,
    "script": "bot1.js",  // name of the script
    "sourceDir": ""       // where the script is located; if it's in the
                          // same location as the JSON file, leave it ""
  },
  {
    // App2 => same as app1, just a different script name
    "uid": "app2",
    "append": true,
    "watch": true,
    "script": "bot2.js",
    "sourceDir": ""
  }
]
Then you just need to start the JSON file through the forever command.
Example:
forever start apps.json
You can see more information about forever here.
My answer is the same as the answer by @Nikita Ivanov, but with pm2. I personally like pm2, which also uses a config file just like forever, but it can be a js, json or yaml file.
// JS File
module.exports = {
  apps: [{
    name: "bot1",
    script: "./bot1.js",
    watch: true, // some optional param just for example
    env: {
      "NODE_ENV": "development"
    }, // some optional param just for example
    env_production: {
      "NODE_ENV": "production"
    } // some optional param just for example
  }, {
    name: "bot2",
    script: "./bot2.js",
    instances: 4, // some optional param just for example
    exec_mode: "cluster" // some optional param just for example
  }]
}
Now, if you do not know how many scripts there are, that's OK. Since the config is JS, you can write a script that gets the names of all the files in the directory, builds an 'apps' array like the one above, and uses that as the pm2 config.
module.exports = (function () {
  // Build the 'apps' array from every botN.js file in this directory
  const fs = require('fs');
  const apps = fs.readdirSync(__dirname)
    .filter((file) => /^bot\d+\.js$/.test(file))
    .map((file) => ({ name: file.replace('.js', ''), script: './' + file }));
  return {
    apps: apps
  };
})();
Furthermore, you can also install the pm2 npm module and drive pm2 programmatically from a JS script.
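A minimal sketch of that programmatic use, assuming pm2 is installed as a local dependency (the script name is a placeholder):

```javascript
// Hypothetical sketch: starting a bot through pm2's programmatic API
const pm2 = require('pm2');

pm2.connect((err) => {
  if (err) {
    console.error(err);
    process.exit(2);
  }
  pm2.start({ script: './bot1.js', name: 'bot1' }, (err) => {
    pm2.disconnect(); // detach from the pm2 daemon so the script can exit
    if (err) throw err;
  });
});
```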
See PM2 DOCS for more info.

Correct way to connect node.js to a sharded replica cluster in MongoDB using mongoose

So recently we redesigned our MongoDB database cluster to use SSL and replica sets in addition to the sharding we had already implemented. SSL wasn't too difficult to get working, we just needed to split up the private key and certificate and then everything worked fine. However, getting my Node.js app to connect to both mongos instances is proving to be more difficult than I anticipated.
Before we implemented replica sets, we just had two shards, each of them running a mongos router, and in mongoose I gave it the following connection string:
mongodb://Host1:27017,Host2:27017/DatabaseName
Then, in the options object to the connection, I passed in the following:
{mongos: true}
This seems to work just fine. However, after the replica sets were implemented, whenever I pass the mongos option, the application never connects. Our cluster is now set up so that there are 4 MongoDB servers in 2 replica sets of 2 servers each. The master in each replica set is also running a mongos router instance. I assumed I should be able to connect the same way as before; however, it never connects. If I create the connection using just 1 shard with no options, the application connects just fine. However, this is not ideal, as the whole point is to have redundancy among the router instances. Can anyone offer some insight here?
Here is the output of sh.status():
--- Sharding Status ---
sharding version: {
  "_id" : 1,
  "minCompatibleVersion" : 5,
  "currentVersion" : 6,
  "clusterId" : ObjectId("57571fc5bfe098f05bbbe370")
}
shards:
  { "_id" : "rs0", "host" : "rs0/mongodb-2:27018,mongodb-3:27018" }
  { "_id" : "rs1", "host" : "rs1/mongodb-4:27018,mongodb-5:27018" }
active mongoses:
  "3.2.7" : 4
balancer:
  Currently enabled: yes
  Currently running: no
  Failed balancer rounds in last 5 attempts: 0
  Migration Results for the last 24 hours:
    No recent migrations
databases:
  { "_id" : "Demo", "primary" : "rs0", "partitioned" : true }
I was asked to output rs.config(); here it is from the 1st master node:
{
  "_id" : "rs0",
  "version" : 1,
  "protocolVersion" : NumberLong(1),
  "members" : [
    {
      "_id" : 0,
      "host" : "mongodb-2:27018",
      "arbiterOnly" : false,
      "buildIndexes" : true,
      "hidden" : false,
      "priority" : 1,
      "tags" : {
      },
      "slaveDelay" : NumberLong(0),
      "votes" : 1
    },
    {
      "_id" : 1,
      "host" : "mongodb-3:27018",
      "arbiterOnly" : false,
      "buildIndexes" : true,
      "hidden" : false,
      "priority" : 1,
      "tags" : {
      },
      "slaveDelay" : NumberLong(0),
      "votes" : 1
    }
  ],
  "settings" : {
    "chainingAllowed" : true,
    "heartbeatIntervalMillis" : 2000,
    "heartbeatTimeoutSecs" : 10,
    "electionTimeoutMillis" : 10000,
    "getLastErrorModes" : {
    },
    "getLastErrorDefaults" : {
      "w" : 1,
      "wtimeout" : 0
    },
    "replicaSetId" : ObjectId("57571692c490a699f61e3784")
  }
}
Alright, so I finally figured it out. I went through the logs on the server and saw that the client was trying to connect without SSL, so it kept getting booted by the server. This was confusing to me because I had set SSL in the server options and had the correct keys and cert bundle, and I was able to connect to a single instance just fine. Then I looked through the mongo driver options here. It shows that there are options you need to set for mongos itself regarding SSL. After setting these explicitly I was able to connect.
In summary, this options object allowed me to connect:
var options = {
  "server": {
    "ssl": true,
    "sslCA": sslCAbuffer,
    "sslCert": sslCertbuffer,
    "sslKey": sslKeybuffer
  },
  "mongos": {
    "ssl": true,
    "sslCA": sslCAbuffer,
    "sslCert": sslCertbuffer,
    "sslKey": sslKeybuffer
  }
}
while this options object did not:
var options = {
  "server": {
    "ssl": true,
    "sslCA": sslCAbuffer,
    "sslCert": sslCertbuffer,
    "sslKey": sslKeybuffer
  },
  "mongos": true
}
I think the server object is probably redundant, but I left it in.
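For reference, the surrounding connection call would look something like this. The hosts come from the question; the certificate paths are placeholders I've invented for illustration:

```javascript
// Hypothetical usage of the working options object; cert paths are placeholders
const fs = require('fs');
const mongoose = require('mongoose');

const sslCAbuffer = fs.readFileSync('/path/to/ca.pem');
const sslCertbuffer = fs.readFileSync('/path/to/cert.pem');
const sslKeybuffer = fs.readFileSync('/path/to/key.pem');

const options = {
  server: { ssl: true, sslCA: sslCAbuffer, sslCert: sslCertbuffer, sslKey: sslKeybuffer },
  mongos: { ssl: true, sslCA: sslCAbuffer, sslCert: sslCertbuffer, sslKey: sslKeybuffer }
};

// list both mongos routers so either can serve the connection
mongoose.connect('mongodb://Host1:27017,Host2:27017/DatabaseName', options);
```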

Can we access the kraken config object in dust template?

My Kraken config in config.json:
envConfig: {
  prod: {
    host: "....",
    desc: "..."
  },
  qa: {
    host: "....",
    desc: "..."
  }
}
Can I access this in my dust template, as I want to dynamically populate my list, or would I have to add it again to my context object for the template?
I used the ContextDump helper and found that the config is not accessible. I think that makes sense too, as we should not expose configuration information to the client side via dust.
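That said, you can always copy just the values you need into the template context yourself. A sketch, assuming kraken-js attaches its resolved config to the Express app as app.kraken (the route and template names are placeholders):

```javascript
// Hypothetical route: expose only the needed config values to the dust template
module.exports = function (app) {
  app.get('/hosts', function (req, res) {
    // kraken-js exposes the merged config through app.kraken.get()
    var envConfig = req.app.kraken.get('envConfig');
    res.render('hosts', { envConfig: envConfig });
  });
};
```

This keeps the config server-side by default and makes the exposure of each value an explicit choice.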
