Output Node.js logs to filesystem

I have a Splunk forwarder managing logs on my production servers, so I really just need to get the output of my node app into a file that Splunk is watching. What is the downside of simply doing the following in production:
node server.js &> output.log
As opposed to handling the log output inside the node process with some sort of logging module...

Check out supervisord, a logging and babysitting tool that becomes the parent of processes such as a node server. It can redirect both standard out and standard error to files of your choosing, and it will also detect abnormal ends (abends) and restart the child process when needed.
Here is a typical config file: /etc/supervisor/conf.d/supervisord.conf
[supervisord]
nodaemon=true
logfile=GKE_MASTER_LOGDIR/supervisord_nodejs_GKE_FLAVOR_USER.log
pidfile=GKE_MASTER_LOGDIR/supervisord_nodejs_GKE_FLAVOR_USER.pid
stdout_logfile_maxbytes = 1MB
stderr_logfile_maxbytes = 1MB
logfile_backups = 50
# loglevel = debug
[program:nodejs]
command=/tmp/boot_nodejs.sh %(ENV_MONGO_SERVICE_HOST)s %(ENV_MONGO_SERVICE_PORT)s
stdout_logfile = GKE_MASTER_LOGDIR/nodejs_GKE_FLAVOR_USER_stdout.log
stderr_logfile = GKE_MASTER_LOGDIR/nodejs_GKE_FLAVOR_USER_stderr.log
stdout_logfile_maxbytes = 1MB
stderr_logfile_maxbytes = 1MB
logfile_backups = 50
autostart = True
autorestart = True
# user = GKE_NON_ROOT_USER
In my case this all happens inside a Docker container, so here is a snippet of my Dockerfile. It launches supervisord, which in turn launches nodejs and redirects stdout/stderr to log files that supervisord rotates based on space and/or time. Use of Docker is orthogonal to using supervisord, so YMMV.
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf" ]
For completeness, below is the boot_nodejs.sh referenced above:
#!/bin/bash
given_mongo_service_host=$1
given_mongo_service_port=$2
current_dir=$(dirname "${BASH_SOURCE}")
current_timestamp="timestamp "$(date '+%Y%m%d_%H%M%S_%Z')
echo
echo "______________ fresh nodejs server bounce ______________ $current_timestamp"
echo
# ............... now output same to standard error so its log gets the hat tip
(>&2 echo )
(>&2 echo "______________ fresh nodejs server bounce ______________ $current_timestamp" )
(>&2 echo )
# ................
export MONGO_URL=mongodb://$given_mongo_service_host:$given_mongo_service_port
type node
node main.js

There's no problem with redirecting your output to a log file. In a lot of ways, this is preferable.
Having your application write logs directly is more useful when your application is complicated and needs a lot of log configuration, possibly writing to several log files. What I do is use Winston for logging. Normally the only log transport enabled is the console, and I can redirect that to a file if I want. But, I also have in my app config a way to specify other transports and config. I use that for writing directly to Logstash and such.
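As a minimal sketch of that setup (winston 3.x is assumed here, and the LOG_FILE environment variable is just an illustrative stand-in for whatever config mechanism you use to enable extra transports):

// logger.js - console-only by default, optional file transport via config.
// winston 3.x API assumed; adapt to your own config mechanism.
const winston = require('winston');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  // Console is the only transport enabled by default; its output can still
  // be redirected to a file by whatever launches the process.
  transports: [new winston.transports.Console()]
});

// Extra transports (files, Logstash, etc.) get enabled from app config.
if (process.env.LOG_FILE) {
  logger.add(new winston.transports.File({ filename: process.env.LOG_FILE }));
}

module.exports = logger;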

Related

No logs appear in CloudWatch log group for Elastic Beanstalk environment

I have an Elastic Beanstalk environment which is running a Docker container that has a Node.js API. On the AWS Console, if I select my environment and then go to Configuration/Software, I have the following:
Log groups: /aws/elasticbeanstalk/my-environment
Log streaming: Enabled
Retention: 3 days
Lifecycle: Keep after termination.
However, if I click on that log group in the CloudWatch console, I see a Last Event Time of some weeks ago (which I believe corresponds to when the environment was created) and no content in the logs.
Since this is a dockerized application, logs for the server itself should be at /aws/elasticbeanstalk/my-environment/var/log/eb-docker/containers/eb-current-app/stdouterr.log.
If I instead get the logs directly from the instances by going once again to my EB environment, clicking "Logs" and then "Request last 100 Lines", the logging is happening correctly. I just can't see a thing when using CloudWatch.
Any help is gladly appreciated
I was able to get around this problem.
So CloudWatch makes a hash based on the first line of your log file and the log stream key, and the problem is that my first line on the stdouterr.log file was actually an empty line!
After a couple of days of playing around and getting help from the AWS support team, I connected via SSH to the EC2 instance associated with the EB environment. You need to add the following line to the /etc/awslogs/config/beanstalklogs.conf file, right after the "file=/var/log/eb-docker/containers/eb-current-app/stdouterr.log" line:
file_fingerprint_lines=1-20
With this, you tell the AWS Logs service to calculate the hash using lines 1 through 20 of the log file. You can change 20 to a larger or smaller number depending on your logging content; however, I don't know if there is an upper limit for the value.
After doing so, you need to restart the AWS Logs Service on the instance.
For this you would execute:
sudo service awslogs stop
sudo service awslogs start
or simpler:
sudo service awslogs restart
After these steps I started using my environment and the logging was now being properly streamed to the CloudWatch console!
However, this would not survive a new deployment, the EC2 instance being replaced, or the auto scaling group spawning another instance.
To fix this permanently, you can add the log config via the .ebextensions directory at the root of your application before deploying.
I added a file called logs.config to the newly created .ebextensions directory and placed the following content:
files:
  "/etc/awslogs/config/beanstalklogs.conf":
    mode: "000644"
    user: root
    group: root
    content: |
      [/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
      log_group_name=/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*stdouterr.log
      file_fingerprint_lines=1-20
commands:
  01_remove_eb_stream_config:
    command: 'rm /etc/awslogs/config/beanstalklogs.conf.bak'
  02_restart_log_agent:
    command: 'service awslogs restart'
Of course, change EB-ENV-NAME to your environment's name on EB.
Hope it can help someone else!
For 64-bit Amazon Linux 2 the setup is slightly different.
For log delivery, the AWS CloudWatch Agent is installed in /opt/aws/amazon-cloudwatch-agent and the Elastic Beanstalk configuration is in /opt/aws/amazon-cloudwatch-agent/etc/beanstalk.json. It is set to log the output of the container, assuming there's a file called stdouterr.log; here's a snippet of the config:
{
  "file_path": "/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_group_name": "/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_stream_name": "{instance_id}"
}
However, when I look for that file_path it doesn't exist; instead there is a file path that encodes the current Docker container ID: /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log.
This logfile is created by the script /opt/elasticbeanstalk/config/private/eb-docker-log-start, which is started by the eb-docker-log service. The default contents of this file are:
EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
mkdir -p /var/log/eb-docker/containers/eb-current-app/
docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
To temporarily fix the logging you can manually run the following (replacing the Docker ID), and logs will start to appear in CloudWatch:
ln -sf /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
To make this permanent, I added an .ebextension that fixes the eb-docker-log service so it re-creates this link. Create a file in your source code under .ebextensions called fix-cloudwatch-logging.config and set its contents to:
files:
  "/opt/elasticbeanstalk/config/private/eb-docker-log-start":
    mode: "000755"
    owner: root
    group: root
    content: |
      EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
      mkdir -p /var/log/eb-docker/containers/eb-current-app/
      ln -sf /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
      docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
commands:
  fix_logging:
    command: systemctl restart eb-docker-log.service
    cwd: /home/ec2-user
    test: "[ ! -L /var/log/eb-docker/containers/eb-current-app/stdouterr.log ] && systemctl is-active --quiet eb-docker-log"

Stopping nodejs process started by npm?

I am in the process of creating some scripts to deploy my node.js-based application via continuous integration, and I am having trouble seeing the right way to stop the node process.
I start the application via a start-dev.sh script:
#!/bin/sh
scripts_dir=`dirname $0`
cd "${scripts_dir}/"..
npm start &
echo $! > app.pid
And then I was hoping to stop it via:
#!/bin/sh
scripts_dir=`dirname $0`
cd "${scripts_dir}/"..
echo killing pid `cat app.pid`
kill -9 `cat app.pid`
The issue I am finding is that npm is no longer running at this point, so the pid isn't useful to stop the process tree. The only workaround I can think of at this point is to skip npm completely for launch and simply call node directly?
Can anyone suggest an appropriate way to deal with this? Is foregoing npm for launching a good approach, in this context?
Forever can do the process management stuff for you.
forever start app.js
forever stop app.js
Try to avoid relying on npm start outside of development; it just adds an additional layer between you and node.
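For example, here is a sketch of the question's start/stop scripts reworked to launch node directly, so the recorded pid belongs to the actual node process (server.js is assumed as the entry point; adjust to your package's main file):

#!/bin/sh
# start-dev.sh (sketch): launch node directly instead of npm start,
# so $! is the pid of node itself.
scripts_dir=`dirname $0`
cd "${scripts_dir}/"..
node server.js &
echo $! > app.pid

#!/bin/sh
# stop-dev.sh (sketch): the pid now points at node, so a plain kill works.
scripts_dir=`dirname $0`
cd "${scripts_dir}/"..
echo killing pid `cat app.pid`
kill `cat app.pid` && rm -f app.pid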
Just use supervisor. An example conf looks like this:
[program:long_script]
command=/usr/bin/node SOURCE_FOLDER/EXECUTABLE_JAVASCRIPT_FILE.js
autostart=true
autorestart=true
stderr_logfile=/var/log/long.err.log
stdout_logfile=/var/log/long.out.log
where
SOURCE_FOLDER is the folder for your project, and
EXECUTABLE_JAVASCRIPT_FILE is the file to be run.

Node.js OpsWorks Layer console logs

I have an OpsWorks stack with a Node.js layer and Node.js application. I'm wondering if anyone knows where on an Ubuntu 14.04 LTS instance the console logs from my application are being printed to. I know OpsWorks uses Monit to run my application, but I'm not sure where it's outputting the logs.
Thanks!
Annoyingly enough, the Monit configuration rendered for Node.js apps on OpsWorks doesn't send the output anywhere. (This surprised me when I learned about it!)
What I recommend doing is overriding that template (see the OpsWorks documentation on overriding templates): essentially all you need to do is copy and paste the Monit config from Amazon, but change line 2 to redirect the output to a file, as I do below:
start program = "/bin/bash -c 'cd <%= @deploy[:deploy_to] %>/current ; source <%= @deploy[:deploy_to] %>/shared/app.env ; /usr/bin/env PATH=$PATH:/usr/local/bin PORT=<%= @deploy[:nodejs][:port] %> NODE_PATH=<%= @deploy[:deploy_to] %>/current/node_modules:<%= @deploy[:deploy_to] %>/current /usr/local/bin/node <%= @monitored_script %> &> <%= @deploy[:deploy_to] %>/current/log/production.log'"
You can find it in the app directory, under the path shared/log.
For example: /srv/www/my_app/shared/log
I found the logs by doing the following (similar to @RyanWilcox's comment):
I found my "current" deployment in /srv/www/{APP NAME}/current/
Listing files, I could see the log directory symlink (log should be symlinked to something like /srv/www/{APP NAME}/shared/log).
I ran into a permissions issue trying to cd to this directory, so I switched to the super user without a password using the command "sudo su", and then I could access the directory.
Finally, in the logs directory, I could see the nodejs console logs for STDERR and STDOUT.
... and Bob's father is your father's father.
Unless the behavior of console.log was altered, node will output console.log messages to the application's standard output (stdout).
If you are running your node application in a console or using ssh, you should see the logs there.
Otherwise, try redirecting the stdout of your application to a file so you can see it even if you run it without a console, in this way:
node myapp.js > logfile
A preferred way would be to use Forever to make sure your application is constantly on; there you can redirect your output (both stdout and stderr) as follows:
forever -o forever.out -e forever.err myapp.js

Nodejs/Strongloop: working upstart config example

After updating strongloop to v2.10, slc stopped writing logs.
Also, I couldn't make the app start in production mode.
/etc/init/app.conf
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
env NODE_ENV=production
script
exec slc run /home/ubuntu/app/ \
-l /home/ubuntu/app/app.log \
-p /var/run/app.pid
end script
Can anybody check my upstart config or provide another working copy?
Were you writing the pid to a file so that you can use it to send SIGUSR2 to the process to trigger log re-opening from logrotate?
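If that was the intent, the companion piece is a logrotate rule whose postrotate step sends SIGUSR2 using that pid file; a sketch, with the log and pid paths taken from the upstart job above (adjust them to your layout):

# /etc/logrotate.d/app (sketch)
/home/ubuntu/app/app.log {
    daily
    rotate 14
    compress
    missingok
    postrotate
        kill -USR2 `cat /var/run/app.pid` 2>/dev/null || true
    endscript
}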
Assuming you are using Upstart 1.4+ (Ubuntu 12.04 or newer), then you would be better off letting slc run log to its stdout and let Upstart take care of writing it to a file so that log rotation is done for you:
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
# assuming this is /etc/init/app.conf,
# stdout+stderr logged to: /var/log/upstart/app.log
console log
env NODE_ENV=production
exec /usr/local/bin/slc run --cluster=CPUs /home/ubuntu/app
The log rotation for "free" is nice, but the biggest benefit to this approach is Upstart can log errors that slc run reports even if they are a crash while trying to set up its internal logging, which makes debugging a lot easier.
Aside from what it means to your actual application, the only effect NODE_ENV has on slc run is to set the default number of cluster workers to the number of detected CPU cores, which literally translates to --cluster=CPUs.
Another problem I find is the node/npm path prefix not being in the $PATH as used by Upstart, so I normally put the full paths for executables in my Upstart jobs.
Service Installer
You could also try using strong-service-install, which is a module used by slc pm-install to install strong-pm as an OS service:
$ npm install -g strong-service-install
$ sudo sl-svc-install --name app --user ubuntu --cwd /home/ubuntu/app -- slc run --cluster=CPUs .
Note the spaces around the -- before slc run

Can anybody provide me a node.js gc log file?

Can anybody tell me how to redirect the trace_gc output to a log file in Node.js instead of the console? I gave a command like this:
node -trace_gc -trace_gc_verbose example.js
and also give me one example gc log file for Node.js.
If you do not need to separate the output of the application itself from the gc trace, you can simply redirect the output using:
node -trace_gc -trace_gc_verbose example.js > output.log
Otherwise, V8 has a --log_gc option which outputs to a log file (the file can be set with --logfile), but this does not seem to output as much information as --trace_gc.
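For example, a sketch combining the two (this assumes the --log_gc and --logfile flags are available on the V8 bundled with your node version):

node --log_gc --logfile=gc.log example.js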
See node --v8-options for all options
