I am trying to get the PM2 logs in my backend server using the PM2 API. I went through the docs, but found almost nothing related to logs.
I tried pm2.launchBus, but that only streams incoming log lines, not the old ones.
Usually, to check the pm2 logs:
run pm2 logs <processId>
run pm2 show <processId>, which will tell you the log location
It may not use the PM2 API, but a workable method is to read the log file using fs and return it to your client app.
There isn't an API method to get the log data; you can only read it from disk. If you'd like more structured information in the log data, you can configure the ecosystem (or process) JSON to either:
Add a timestamp
{
  "apps": [
    {
      "name": "app",
      "script": "main.js",
      "log_date_format": "YYYY-MM-DD hh:mm:ss"
    }
  ]
}
Which results in:
2021-07-30 06:34:22: Hello World!
Write log entries as JSON
{
  "apps": [
    {
      "name": "app",
      "script": "main.js",
      "log_type": "json"
    }
  ]
}
Which will get you:
{
  "message": "Hello World!\n",
  "timestamp": "2017-02-06T14:51:38.896Z",
  "type": "out",
  "process_id": 0,
  "app_name": "app"
}
See Log Management docs
I have a GitHub Action to deploy the app to the server. I deploy the app and run a few commands over ssh to restart it. Then I start the process using pm2. The problem is that it streams the log to stdout, and the GitHub Action hangs up with a timeout error. What I want is to spawn the application instance without streaming the log to the console. Here is the command I'm using to start the app:
NODE_ENV=production && pm2 start ecosystem.config.json
and here is the ecosystem configuration file:
{
  "apps": [
    {
      "name": "apitoid",
      "script": "src/index.js",
      "instances": 1,
      "autorestart": true,
      "watch": false,
      "time": true,
      "env": {
        "NODE_ENV": "production"
      }
    }
  ]
}
Is there any way I can prevent the log stream to stdout, or suppress stdout in the GitHub Actions workflow?
I am currently having trouble debugging my Azure Functions Core Tools in VS Code.
I am using the npm package azure-functions-core-tools#2.
As I read in many resources, func host start / func start should always start the node process with --inspect=9229. As you can see, this is not the case in my setup:
[4/30/19 4:51:25 AM] Starting language worker process:node "/usr/lib/node_modules/azure-functions-core-tools/bin/workers/node/dist/src/nodejsWorker.js" --host 127.0.0.1 --port 50426 --workerId 3e909143-72a3-4779-99c7-376ab3aba92b --requestId 656a9413-e705-4db8-b09f-da44fb1f991d --grpcMaxMessageLength 134217728
[4/30/19 4:51:25 AM] node process with Id=92 started
[4/30/19 4:51:25 AM] Generating 1 job function(s)
[...]
[4/30/19 4:51:25 AM] Job host started
Hosting environment: Production
Also, all attempts to change the hosting environment failed. I tried to add FUNCTIONS_CORETOOLS_ENVIRONMENT to my local configuration, resulting in an error:
An item with the same key has already been added. Key: FUNCTIONS_CORETOOLS_ENVIRONMENT
I tried adding several environment variables in my launch and task settings, using --debug, and changing project settings. Nothing works.
I am currently running this on the Windows Subsystem for Linux (WSL), and apart from this it works really well.
Does anyone have a clue about what I am doing wrong here?
I don't think debug is enabled by default. You will have to set the language worker arguments for this to work as documented.
In local.settings.json
To debug locally, add "languageWorkers:node:arguments": "--inspect=5858" under Values in your local.settings.json file and attach a debugger to port 5858.
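For example, a local.settings.json along these lines (the runtime value is a placeholder for whatever your project already uses):

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "languageWorkers:node:arguments": "--inspect=5858"
  }
}
```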
With func CLI
You could set this by using the --language-worker argument
func host start --language-worker -- --inspect=5858
In VS Code
If you are developing with VS Code and the Azure Functions extension, --inspect is added automatically, since the corresponding environment variable is set in .vscode/tasks.json:
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "runFunctionsHost",
      "type": "shell",
      "command": "func host start",
      "isBackground": true,
      "presentation": {
        "reveal": "always"
      },
      "problemMatcher": "$func-watch",
      "options": {
        "env": {
          "languageWorkers__node__arguments": "--inspect=5858"
        }
      },
      "dependsOn": "installExtensions"
    },
    {
      "label": "installExtensions",
      "command": "func extensions install",
      "type": "shell",
      "presentation": {
        "reveal": "always"
      }
    }
  ]
}
You could also set this environment variable directly instead of adding it to local.settings.json.
I am new to Node. I am trying to start a Node service using the following command:
export NODE_CONFIG_DIR=/var/www/api/config NODE_ENV=development PORT=3018; node /var/www/api/app.js
Inside the config directory I have the following file, development.json:
{
  "general": {
    "port": 3018
  },
  "database": {
    "connectionString": "mongodb://***:***#url:port/api?replicaSet=set-578f3ed7000587"
  },
  "environmentName": "Development"
}
And I am getting the following lines after starting Node:
The $NODE_CONFIG environment variable is malformed JSON
API listening on port 3018!
When I try accessing the API in a browser, I get 502 Bad Gateway. The Node process is running, though.
Any help is highly appreciated.
I am using pm2 for my Node server. My pm2 config JSON is as follows:
{
  "apps": [
    {
      "name": "hello",
      "script": "server/app.js",
      "exec_mode": "fork",
      "env": {
        "NODE_ENV": "development",
        "PORT": 9000
      }
    }
  ]
}
I am starting the application using pm2 start pm2Config.json.
My application shows as online but does not load. When I try to access it, I get a 502 Bad Gateway error, but nothing appears in the logs.
I don't know what went wrong. Please help.
Thanks in advance.
I have been using Crossbar for a while and I love it. I have a question about the best way to run workers that connect to an external router. I was using crossbar start with a config file that connected to the router, and this worked great.
Recently my requirements changed: I would like to pass the router URL and realm into the config file via environment variables. After trial and error, I concluded that this is not possible with the current Crossbar implementation.
I then looked at creating an application runner using the following, where I retrieve the realm and the URL from config vars:
runner = ApplicationRunner(url=url, realm=realm)
runner.run(AppSession)
This works, but I then noticed my server would go down periodically. After root-causing it, I realized that the reverse proxy was timing out the connection after one hour of inactivity. Looking at the server logs, I got the onDisconnect callback. The Crossbar ApplicationRunner documentation states the following:
This class is a convenience tool mainly for development and quick hosting
of WAMP application components.
I have my service running as a daemon under a runit script. Some quick fixes I came up with are:
Kill the runner and let the daemon restart the service
Explicitly perform the join process on any disconnects
Both of these were starting to feel really hacky, given that the Crossbar folks explicitly state that the ApplicationRunner is a development tool. Does anyone know if there is something I can use other than an ApplicationRunner, or some way I can get environment variables into the config.json file?
As a temporary workaround I am using sed. Here is my config file:
{
  "controller": {},
  "workers": [
    {
      "type": "container",
      "options": {
        "pythonpath": [".."]
      },
      "components": [
        {
          "type": "class",
          "classname": "src.app_session.AppSession",
          "realm": "%%%ROUTER_REALM%%%",
          "transport": {
            "type": "websocket",
            "endpoint": {
              "type": "tcp",
              "host": "%%%ROUTER_HOST%%%",
              "port": %%%ROUTER_PORT%%%
            },
            "url": "%%%ROUTER_PROTOCOL%%%://%%%ROUTER_HOST%%%/ws"
          }
        }
      ]
    }
  ]
}
And my runit script is
#!/bin/bash
# Update the ROUTER config parameters
sed -i -e "s/%%%ROUTER_HOST%%%/${ROUTER_HOST}/g" /app/.crossbar/config.json
sed -i -e "s/%%%ROUTER_PORT%%%/${ROUTER_PORT}/g" /app/.crossbar/config.json
sed -i -e "s/%%%ROUTER_REALM%%%/${ROUTER_REALM}/g" /app/.crossbar/config.json
sed -i -e "s/%%%ROUTER_PROTOCOL%%%/${ROUTER_PROTOCOL}/g" /app/.crossbar/config.json
cat /app/.crossbar/config.json
cd /app/
exec crossbar start
There is indeed no mechanism in Crossbar.io to do what you want, and on the project's side there are no plans to implement this as a feature. We want to concentrate on the management API, which will enable dynamic management of Crossbar.io.