Write PM2 logs to custom path - node.js

I use PM2 to execute my Node.js app.
In order to do that, I have defined the following ecosystem config file:
apps:
  - script: app.js
    name: "myApp"
    exec_mode: cluster
    cwd: "/etc/myService/myApp"
Everything is working. Now I want to specify a custom location for PM2's logs, so I added the following to the ecosystem config file:
log: "/etc/myService/myApp/logs/myApp.log"
It works, but I noticed that after running pm2 start ecosystem, PM2 writes the logs to both locations at the same time:
/etc/myService/myApp/logs/myApp.log (as expected)
/home/%$user%/.pm2/logs/ (default logs destination)
How can I specify a single location for PM2's logs and avoid the duplicate log generation?

Based on robertklep's comment, to solve the issue we have to use the out_file and err_file fields for the output and error log paths respectively.
Syntax sample in YAML format:
out_file: "/etc/myService/myApp/logs/myApp_L.log"
err_file: "/etc/myService/myApp/logs/myApp_E.log"
P.S. The field log can be removed from the config file:
log: "/etc/myService/myApp/logs/myApp.log"
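For reference, a complete ecosystem file in YAML with both fields set might look like this (paths taken from the question; an untested sketch):
apps:
  - script: app.js
    name: "myApp"
    exec_mode: cluster
    cwd: "/etc/myService/myApp"
    out_file: "/etc/myService/myApp/logs/myApp_L.log"
    err_file: "/etc/myService/myApp/logs/myApp_E.log"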

Related

pm2 logging pm2.log to JSON

I searched but didn't find an answer: is it possible to change pm2's own logs (specifically the pm2.log file, not the apps' logs) to JSON format?
With app logs it's possible via the ecosystem file, but what about pm2 itself?
No. But you can redirect the output of pm2 start into a .log file:
pm2 start server.js --output="/path/to/file.log" --error="/path/to/file.log"
# or
pm2 start server.js --log="/path/to/file.log"
Be aware that writing output and error logs to the same file may make it harder to search for error entries within the log file.
More detailed information about pm2 logs is available on the official site.
Example: converting a .log file to JSON (a sketch follows).
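A minimal Node.js sketch for that conversion could read the plain-text log line by line and emit one JSON object per line; the file paths and the message field name are illustrative assumptions, not pm2 conventions:
const fs = require('fs');
const readline = require('readline');

// Read the plain-text log one line at a time.
const rl = readline.createInterface({
  input: fs.createReadStream('/path/to/file.log'),
  crlfDelay: Infinity
});
const out = fs.createWriteStream('/path/to/file.json');

rl.on('line', (line) => {
  // One JSON object per log line; add fields (level, app name, ...) as needed.
  out.write(JSON.stringify({ message: line }) + '\n');
});
rl.on('close', () => out.end());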

How can I start a java application with parameters using a pm2 config file?

I'm trying to start graphhopper using pm2. Graphhopper is a Java application, and the way I start it in the terminal is by going to its folder and entering the following command:
java -jar matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar server config.yml
This application works fine running from the command line, but I haven't succeeded in running it as a service with pm2. The config file I'm using is this one (pm2 start config.json):
{
  "apps": [
    {
      "name": "graphhopper",
      "cwd": ".",
      "script": "/usr/bin/java",
      "args": [
        "-jar",
        "/home/myyser/graphhopper/map-matching/matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar",
        "server",
        "config.yml"
      ],
      "log_date_format": "YYYY-MM-DD HH:mm Z",
      "exec_interpreter": "",
      "exec_mode": "fork"
    }
  ]
}
I'm 100% sure that what I'm getting wrong here is the way I'm writing the "server" and "config.yml" parameters. Looking at pm2 logs graphhopper, I can see that those parameters are not being recognized at all. I've tried to tweak the way it's done, but I didn't manage to figure out the right solution. I know how to start a Java application using pm2 with no parameters, but how can I do it with a Java application that takes parameters, as in the case of graphhopper?
As stated in the comments, this issue can be solved by creating a bash script and running it with pm2 instead of running the Java application directly. The bash script used was the file graphhopper.sh, with the following contents:
#!/bin/bash
java -jar matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar server config.yml
And to start it as a service with pm2:
pm2 start graphhopper.sh --name=graphhopper
You can also run a fat jar directly in pm2 - use two dashes to separate the command args:
pm2 start java -- -jar matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar server config.yml
My Java app worked fine too; just pass args as a single string, not an array:
apps:
  - name: 'admin'
    script: '/opt/homebrew/opt/openjdk@11/bin/java'
    args: '-jar ./wweevvAdmin/target/wweevvAdmin-1.0-SNAPSHOT.jar'
    instances: '1'
    autorestart: true
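Applied to the original graphhopper config, the same idea would look like this in JSON. This is an untested sketch; the cwd is set to the project folder (an assumption, so that the relative config.yml resolves the same way as when starting from the terminal):
{
  "apps": [
    {
      "name": "graphhopper",
      "cwd": "/home/myyser/graphhopper/map-matching",
      "script": "/usr/bin/java",
      "args": "-jar matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar server config.yml",
      "exec_mode": "fork"
    }
  ]
}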

No logs appear in CloudWatch log group for Elastic Beanstalk environment

I have an Elastic Beanstalk environment running a Docker container with a Node.js API. On the AWS console, if I select my environment and then go to Configuration/Software, I have the following:
Log groups: /aws/elasticbeanstalk/my-environment
Log streaming: Enabled
Retention: 3 days
Lifecycle: Keep after termination.
However, if I click on that log group in the CloudWatch console, I see a Last Event Time from some weeks ago (which I believe corresponds to when the environment was created) and no content in the logs.
Since this is a dockerized application, logs for the server itself should be at /aws/elasticbeanstalk/my-environment/var/log/eb-docker/containers/eb-current-app/stdouterr.log.
If I instead get the logs directly from the instances, by going once again to my EB environment, clicking "Logs" and then "Request last 100 Lines", the logging is happening correctly. I just can't see a thing when using CloudWatch.
Any help is gladly appreciated
I was able to get around this problem.
So CloudWatch makes a hash based on the first line of your log file and the log stream key, and the problem was that the first line of my stdouterr.log file was actually an empty line!
After a couple of days of playing around and getting help from the AWS support team, I connected via SSH to the EC2 instance associated with the EB environment. You need to add the following line to the /etc/awslogs/config/beanstalklogs.conf file, right after the "file=/var/log/eb-docker/containers/eb-current-app/stdouterr.log" line:
file_fingerprint_lines=1-20
With this, you tell the awslogs service to calculate the hash using lines 1 through 20 of the log file. You could change 20 to a larger or smaller number depending on your logging content; however, I don't know if there is an upper limit for the value.
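For orientation, after this edit the relevant stanza of beanstalklogs.conf should look roughly like this (an assumption about the generated layout; other settings omitted):
[/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
log_stream_name={instance_id}
file=/var/log/eb-docker/containers/eb-current-app/stdouterr.log
file_fingerprint_lines=1-20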
After doing so, you need to restart the AWS Logs Service on the instance.
For this you would execute:
sudo service awslogs stop
sudo service awslogs start
or, more simply:
sudo service awslogs restart
After these steps I started using my environment and the logging was now being properly streamed to the CloudWatch console!
However, this would not survive a new deployment, the EC2 instance being replaced, or the auto scaling group spawning another instance.
To fix this permanently, you can add the log config via the .ebextensions directory at the root of your application before deploying.
I added a file called logs.config to the newly created .ebextensions directory and placed the following content in it:
files:
  "/etc/awslogs/config/beanstalklogs.conf":
    mode: "000644"
    user: root
    group: root
    content: |
      [/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
      log_group_name=/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*stdouterr.log
      file_fingerprint_lines=1-20
commands:
  01_remove_eb_stream_config:
    command: 'rm /etc/awslogs/config/beanstalklogs.conf.bak'
  02_restart_log_agent:
    command: 'service awslogs restart'
Of course, change EB-ENV-NAME to your environment's name on EB.
Hope it can help someone else!
For 64-bit Amazon Linux 2 the setup is slightly different.
For log delivery, the AWS CloudWatch agent is installed in /opt/aws/amazon-cloudwatch-agent, and the Elastic Beanstalk configuration is in /opt/aws/amazon-cloudwatch-agent/etc/beanstalk.json. It is set to log the output of the container, assuming there's a file called stdouterr.log; here's a snippet of the config:
{
  "file_path": "/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_group_name": "/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_stream_name": "{instance_id}"
}
However, when I look for that file_path it doesn't exist; instead, there is a file path that encodes the current Docker container ID: /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log.
This logfile is created by the script /opt/elasticbeanstalk/config/private/eb-docker-log-start, which is started by the eb-docker-log service. The default contents of this script are:
EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
mkdir -p /var/log/eb-docker/containers/eb-current-app/
docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
To temporarily fix the logging, you can manually run the following (replacing the Docker container ID), and logs will then start to appear in CloudWatch:
ln -sf /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
To make this permanent, I added an .ebextensions config that fixes the eb-docker-log service so it re-creates this link. Create a file in your source code in .ebextensions called fix-cloudwatch-logging.config and set its contents to:
files:
  "/opt/elasticbeanstalk/config/private/eb-docker-log-start":
    mode: "000755"
    owner: root
    group: root
    content: |
      EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
      mkdir -p /var/log/eb-docker/containers/eb-current-app/
      ln -sf /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
      docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
commands:
  fix_logging:
    command: systemctl restart eb-docker-log.service
    cwd: /home/ec2-user
    test: "[ ! -L /var/log/eb-docker/containers/eb-current-app/stdouterr.log ] && systemctl is-active --quiet eb-docker-log"

npm start fails to run with AWS CodeDeploy on AWS Windows instances

I am trying to deploy a Node.js application on Windows EC2 instances. Deployment finishes successfully, but the node server is not started automatically on those instances. I have to log in to each instance and run node app.js to start the node server. I tried to include this command in the appspec.yml file, but then I got the error below:
LifecycleEvent - ApplicationStart
Script - node_start.bat
[stdout]
[stdout]C:\Windows\system32>cd C:/host/
[stdout]
[stdout]C:\host>npm start
[stderr]'npm' is not recognized as an internal or external command,
[stderr]operable program or batch file.
My appspec.yml file is as below:
version: 0.0
os: windows
files:
  - source: app.js
    destination: c:\host
  - source: package.json
    destination: c:\host
  - source: \node_modules
    destination: c:\host\node_modules
  - source: node_start.bat
    destination: c:\host
  - source: before_install.bat
    destination: c:\host
hooks:
  AfterInstall:
    - location: before_install.bat
      timeout: 300
  ApplicationStart:
    - location: node_start.bat
      timeout: 300
Node is already installed on those two instances, and the Path variable is also properly set. Logging in to those servers manually and running npm start works perfectly fine. It fails only through AWS CodeDeploy.
I want to introduce AWS Lambda function also after this step to do health check.
Any help to resolve this issue would be greatly appreciated.
The issue isn't really related to CodeDeploy. CodeDeploy simply runs the script that you give it, in your case node_start.bat, and it has no knowledge of your application unless you tell it. Perhaps this question answers the issue that you're really having.
You will likely need to edit either node_start.bat or your environment so that npm is a valid command.
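For example, a node_start.bat that doesn't rely on the service user's PATH could call npm by its full path. This is a sketch; the install directory below is the default Node.js location on Windows and is an assumption:
@echo off
cd /d C:\host
rem The CodeDeploy agent runs as a Windows service whose PATH may not include npm,
rem so call npm.cmd by its full (assumed default) install path.
call "C:\Program Files\nodejs\npm.cmd" start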
Here are a couple of suggestions to help your case:
Test your appspec and scripts
You can test your appspec.yml and related scripts using the CodeDeploy local CLI, which comes with the agent.
Validate that your service is running
Obviously, it's not great if your deployment succeeds but your application is not actually running. You can use the ValidateService lifecycle hook to confirm that your application is actually running, or to perform any other validation. You can include a script that checks that the process is running, confirms that no errors are getting logged, runs tests, or whatever else you might want.
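As a sketch, the hook is declared in appspec.yml like any other lifecycle event; validate.bat here is a hypothetical script that should exit non-zero if the app isn't healthy:
hooks:
  ValidateService:
    - location: validate.bat
      timeout: 300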

Pm2 changing log file location

I have a couple of questions regarding pm2:
How can I change the location of the server-error-0.log and
server-out-0.log files from c:\users\user\.pm2\logs to another drive, due to restricted access to the server's C drive?
Can I log errors and info to a database instead of a log file? Do I need to write a separate module for that, or is there a built-in way to achieve this?
How can I change the location of the log files?
To change pm2's log file location, there are two solutions: define the log paths as parameters when the pm2 command is executed (-l, -o, -e), or start pm2 from a configuration file.
For the parameter solution, here is an example:
pm2 start app.js -o ./out.log -e ./err.log
If you don't want to define the log paths every time pm2 is executed, you can generate a configuration file, define error_file and out_file in it, and start pm2 from that:
Generate a configuration file: pm2 ecosystem simple. This generates a file ecosystem.config.js with the following content:
module.exports = {
  apps: [{
    name: "app1",
    script: "./app.js"
  }]
}
Define error_file (for the error log) and out_file (for the info log) in the file, such as:
module.exports = {
  apps: [{
    name: "app1",
    script: "./app.js",
    error_file: "./err.log",
    out_file: "./out.log"
  }]
}
Delete existing processes in pm2:
pm2 delete <pid>
You can get pid by doing:
pm2 status
Start the process from the configuration file:
pm2 start ecosystem.config.js
In this way, the logs are saved to ./err.log and ./out.log.
Please refer to the documentation for detailed information.
Can I log errors and info to a database instead of a log file?
I didn't find anything about this in the official documentation. It seems you need to write code to save logs to the database yourself.
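As a rough sketch of that approach, you could follow pm2's out_file and persist each new line yourself; saveToDb below is a hypothetical placeholder for your actual database client:
const fs = require('fs');

let position = 0;
// Poll pm2's out_file (path from the config above) for newly appended bytes.
fs.watchFile('./out.log', { interval: 1000 }, (curr) => {
  if (curr.size <= position) { position = curr.size; return; } // truncated or rotated
  const stream = fs.createReadStream('./out.log', { start: position, end: curr.size - 1 });
  let chunk = '';
  stream.on('data', (d) => { chunk += d; });
  stream.on('end', () => {
    position = curr.size;
    chunk.split('\n').filter(Boolean).forEach(saveToDb);
  });
});

// Hypothetical: replace with an INSERT using your database driver of choice.
function saveToDb(line) {
  console.log('would save:', line);
}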
Just wanted to add to @shaochuancs' answer that before doing step 3, make sure you delete the old process. If you don't delete the old process, the changes that you made to your process file will not take effect after you start your app.
You will need to issue this command before doing step 3 above:
pm2 delete <pid>
In case you want pm2 to start on boot with the changed log paths:
pm2 delete all
pm2 start ecosystem.js
pm2 save
pm2 startup
If you want to write both the error log and the console log to the same file (a possible use case: having everything in one file to push to ELK), you can use -l:
-l --log [path] specify filepath to output both out and error logs
Here is the example
pm2 start server.js -l /app/logs/server.log
After making changes, do not forget to delete the old process, as mentioned in the answer above:
pm2 delete <pid>
