Crossbar.io External Worker Configuration

I have been using Crossbar.io for a while and I love it. I have a question about the best way to run workers that connect to an external router. I was using "crossbar start" with a config file that connected to the router, and this worked great.
Recently my requirements have changed: I would like to pass the router URL and realm into the config file via environment variables. After trial and error I concluded that this is not possible with the current Crossbar.io implementation.
I then looked at creating an ApplicationRunner (the Twisted flavor here), retrieving the realm and the URL from environment variables:
from autobahn.twisted.wamp import ApplicationRunner
from src.app_session import AppSession
runner = ApplicationRunner(url=url, realm=realm)  # url/realm read from the environment
runner.run(AppSession)
This works, but I then noticed my server would go down periodically. After root-causing it, I realized that the reverse proxy was timing out the connection after one hour of inactivity; in the server logs I could see the "onDisconnect" callback firing. The ApplicationRunner documentation states the following:
This class is a convenience tool mainly for development and quick hosting
of WAMP application components.
I have my service running as a daemon via a "runit" script. Some quick fixes I came up with are:
Kill the runner and let the daemon restart the service
Explicitly perform the join process on any disconnect
Both of these were starting to feel really hacky, given that the Crossbar.io folks explicitly state that the ApplicationRunner is a development tool. Does anyone know of something I can use other than an ApplicationRunner, OR some way to get environment variables into the config.json file?
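For reference, a minimal sketch of the reconnect route, assuming a recent Autobahn where the Twisted ApplicationRunner.run() accepts an auto_reconnect flag (the environment variable names here are illustrative, not from the original setup):
import os

from autobahn.twisted.wamp import ApplicationRunner

from src.app_session import AppSession

url = os.environ["ROUTER_URL"]      # illustrative variable names
realm = os.environ["ROUTER_REALM"]

runner = ApplicationRunner(url=url, realm=realm)
# auto_reconnect makes the underlying client factory reconnect with backoff
# whenever the transport drops, e.g. after a proxy idle timeout.
runner.run(AppSession, auto_reconnect=True)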
As a temporary workaround I am using sed. Here is my config file:
{
  "controller": {},
  "workers": [
    {
      "type": "container",
      "options": {
        "pythonpath": [".."]
      },
      "components": [
        {
          "type": "class",
          "classname": "src.app_session.AppSession",
          "realm": "%%%ROUTER_REALM%%%",
          "transport": {
            "type": "websocket",
            "endpoint": {
              "type": "tcp",
              "host": "%%%ROUTER_HOST%%%",
              "port": %%%ROUTER_PORT%%%
            },
            "url": "%%%ROUTER_PROTOCOL%%%://%%%ROUTER_HOST%%%/ws"
          }
        }
      ]
    }
  ]
}
And my runit script is:
#!/bin/bash
# Update the ROUTER config parameters
sed -i -e "s/%%%ROUTER_HOST%%%/${ROUTER_HOST}/g" /app/.crossbar/config.json
sed -i -e "s/%%%ROUTER_PORT%%%/${ROUTER_PORT}/g" /app/.crossbar/config.json
sed -i -e "s/%%%ROUTER_REALM%%%/${ROUTER_REALM}/g" /app/.crossbar/config.json
sed -i -e "s/%%%ROUTER_PROTOCOL%%%/${ROUTER_PROTOCOL}/g" /app/.crossbar/config.json
cat /app/.crossbar/config.json
cd /app/
exec crossbar start
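A note on the sed approach: if the image ships gettext's envsubst (an assumption, it is not part of the setup above), the four sed passes can be collapsed into a single substitution over a template file that uses ${ROUTER_HOST}-style placeholders:
#!/bin/bash
# Render config.json from a template, substituting only the listed variables.
envsubst '${ROUTER_HOST} ${ROUTER_PORT} ${ROUTER_REALM} ${ROUTER_PROTOCOL}' \
  < /app/.crossbar/config.json.template > /app/.crossbar/config.json
cd /app/
exec crossbar start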

There is indeed no mechanism in Crossbar.io to do what you want, and on the project side there are no plans to implement this as a feature. We want to concentrate on the management API, which will make it possible to manage Crossbar.io dynamically.

Related

pm2+ not showing versioning when using ecosystem file

I'm using PM2+ to manage my NodeJS deployments.
Usually, when I deploy an application with pm2 start src/app.js, I get details about versioning. However, when I deploy using an ecosystem file, the Versioning field only shows N/A.
PM2 normally extracts this information directly using vizion.
But since that didn't work with the ecosystem file, I specified the GitHub repository directly, just as the documentation states.
This is my current pm2-services.json ecosystem file:
{
  "apps": [
    {
      "name": "my-node-app",
      "cwd": "./my-node-app-repo-folder",
      "script": "src/app.js",
      "env": {
        "NODE_ENV": "production"
      },
      "repo": "https://github.com/MyUserName/MyNodeAppRepo.git",
      "ref": "origin/master"
    }
  ]
}
For the ref field, I also tried putting refs/remotes/origin/master, remotes/origin/master and master.
Sadly none of them worked (I made sure they are correct using git show-ref).
Additional info:
NodeJS Version: v15.11.0
NPM Version: 7.6.3
PM2 Version: 4.5.6 (latest, by the time of writing this)
So, how do I get the Versioning field to display correctly?
Note: This isn't really an issue but rather a minor inconvenience. I just want to know what I'm doing wrong.

Using NodeJS development dependencies in Heroku review-app post-deploy step

I have a (demo) application hosted on Heroku. I've enabled Heroku's "review app" feature to spin up new instances for pull request reviews. These review instances all get a new MongoDB (on mLab) provisioned for them through Heroku's add-on system. This works great.
In my repository, I've defined some seeder scripts to quickly get a test database up and running. Running yarn seed (or npm run seed) will fill the database with test data. This works great during development, and it would be perfect for review apps as well. I want to execute the seeder command in the postdeploy hook of the Heroku review app, which can be done by specifying it under the environments.review section of the app.json file. Like so:
{
  "name": "...",
  "addons": [
    "mongolab:sandbox"
  ],
  "environments": {
    "review": {
      "addons": [
        "mongolab"
      ],
      "scripts": {
        "postdeploy": "npm run seed"
      }
    }
  }
}
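For context, the relevant part of the project's package.json is assumed to look roughly like this (hypothetical paths and version ranges; the actual seeder lives in the repository):
{
  "scripts": {
    "seed": "ts-node seed/index.ts"
  },
  "devDependencies": {
    "faker": "^5.0.0",
    "mongo-seeding": "^3.0.0",
    "ts-node": "^9.0.0"
  }
}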
The problem is, the seeder script relies on some development-only dependencies (faker, ts-node [this is a TypeScript project], and mongo-seeding) to execute, and these dependencies are not available in the postdeploy phase of a Heroku app.
I also don't think that "compiling" the TypeScript in the regular build step is the best idea. This seeder script is only used in development (and review apps). Besides, I'm not sure that would resolve the issue with missing dependencies like faker.
How would one go about this? Any tricks I'm missing?
Can I maybe skip Heroku's step where it actively deletes development dependencies? But only for review apps? Or even better, can I "exclude" just the couple of dependencies I need, and only for review apps?
The Heroku docs indicate that when the NODE_ENV variable contains anything but "production", the devDependencies will not be removed after the build step.
To make sure this only happens for Heroku review apps, you can set the NODE_ENV variable under the environments.review section of the app.json file. The following config should do the trick:
{
  "name": "...",
  "addons": [
    "mongolab"
  ],
  "environments": {
    "review": {
      "addons": [
        "mongolab:sandbox"
      ],
      "env": {
        "NODE_ENV": "development"
      },
      "scripts": {
        "postdeploy": "npm run seed"
      }
    }
  }
}

Yoga server deployment to now.sh shows directory listing instead of the application

I can run the app locally without any issue with the yarn start command, but the now.sh deployment shows a directory listing instead of the application. I googled and noticed several people face the same problem, but their context is different.
By default, Now publishes your files as a static directory. You can add a builder to your now.json file to tell Now how to build and deploy your site.
In a case where app.js contains a web server application, your now.json might look like this:
{
  "version": 2,
  "name": "my-project",
  "builds": [
    { "src": "app.js", "use": "@now/node" }
  ]
}
This tells Now to use the @now/node builder to generate a lambda that runs app.js to respond to requests.
If your app is purely JS+HTML to be run on the client machine, you don't need the lambda, but you can still build the source before deploying it as static files with @now/static-build.
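A static-build config might look roughly like this (a sketch: it assumes a now-build script in package.json and a dist output directory, per the builder's documented defaults):
{
  "version": 2,
  "name": "my-project",
  "builds": [
    { "src": "package.json", "use": "@now/static-build", "config": { "distDir": "dist" } }
  ]
}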
Check out the Now docs for more info: https://zeit.co/docs/v2/deployments/basics/#introducing-a-build-step

NodeJS example for Pachyderm

I am new to Pachyderm.
I have a pipeline to extract, transform, and then save to the db.
Everything is already written in NodeJS and dockerized.
Now I would like to move over to Pachyderm.
I tried following the python examples they provided, but creating this new pipeline always fails and the job never starts.
All my code does is take the /pfs/data and copy it to /pfs/out.
Here is my pipeline definition:
{
  "pipeline": {
    "name": "copy"
  },
  "transform": {
    "cmd": ["npm", "start"],
    "image": "simple-node-docker"
  },
  "input": {
    "pfs": {
      "repo": "data",
      "glob": "/*"
    }
  }
}
All that happens is that the pipeline fails and the job never starts.
Is there a way to debug on why the pipeline is failing?
Is there something special about my docker image that needs to happen?
Offhand I see two possible issues:
The image name doesn't have a prefix. By default, images are pulled from Docker Hub, and Docker Hub images are prefixed with the user who owns the image (e.g. maths/simple-node-docker).
The cmd doesn't seem to include a command for copying anything. I'm not familiar with node, but it looks like this starts npm and then does nothing else. Perhaps npm loads and runs your script by default? If so, it might help to post your script as well.
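Putting both points together, a corrected spec might look like this (a sketch: the maths/ user prefix and the shell copy command are assumptions, and the image must contain sh):
{
  "pipeline": {
    "name": "copy"
  },
  "transform": {
    "cmd": ["sh", "-c", "cp -r /pfs/data/* /pfs/out/"],
    "image": "maths/simple-node-docker"
  },
  "input": {
    "pfs": {
      "repo": "data",
      "glob": "/*"
    }
  }
}
As for debugging, pachctl can show pipeline and job logs (pachctl logs --pipeline=copy in recent versions), which is a good first place to look when a pipeline fails.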

Rollback pm2 deploy to specific commit

I would like to know how to use pm2 to "rollback" a recent code change.
Our team's change process requires us to have a "rollback" plan in the case of a problem with a deploy. We normally just document that the rollback plan will be to git checkout CHANGESET to go back to the previous version of the code while we fix the issue and test it in dev.
How can I achieve a similar rollback using pm2? Our new-ish deploy process is pm2 deploy production and my ecosystem.json is included below. Currently both UAT and production use origin/master as the "ref". I was thinking maybe to use a tag or something, but not sure the best approach. Perhaps I should just continue to do a git checkout COMMIT in these rare rollback cases, but looking for other ideas.
{
  "apps": [{ blah... }],
  "deploy": {
    "UAT": {
      "user": "USER_HERE",
      "host": ["IP_HERE", "IP_HERE"],
      "ref": "origin/master",
      "repo": "git@github.com:USER/REPO.git",
      "path": "/home/USER/node",
      "post-deploy": "bash ./update.sh"
    },
    "production": {
      "user": "USER_HERE",
      "host": ["IP_HERE", "IP_HERE"],
      "ref": "origin/master",
      "repo": "git@github.com:USER/REPO.git",
      "path": "/home/USER/node",
      "post-deploy": "bash ./update.sh"
    }
  }
}
The pm2 deploy tool provides a revert command, so you can do:
pm2 deploy ecosystem.json production revert
This rolls your app back to the previous deploy. You can also specify how many deployments to go back.
See also the ref option combined with the list, curr and prev commands:
https://github.com/Unitech/PM2/blob/0.14.7/ADVANCED_README.md#deployment-options
