Node.js OpsWorks Layer console logs

I have an OpsWorks stack with a Node.js layer and a Node.js application. Does anyone know where, on an Ubuntu 14.04 LTS instance, the console logs from my application are being written? I know OpsWorks uses Monit to run my application, but I'm not sure where it's outputting the logs.
Thanks!

Annoyingly enough, the Monit configuration rendered for Node.js apps on OpsWorks doesn't send the output anywhere! (This surprised me when I learned about it.)
What I recommend is overriding that template (see the OpsWorks documentation on overriding templates): essentially, all you need to do is copy the Monit config from Amazon, but change line 2 to redirect the output to a file, as I do below:
start program = "/bin/bash -c 'cd <%= @deploy[:deploy_to] %>/current ; source <%= @deploy[:deploy_to] %>/shared/app.env ; /usr/bin/env PATH=$PATH:/usr/local/bin PORT=<%= @deploy[:nodejs][:port] %> NODE_PATH=<%= @deploy[:deploy_to] %>/current/node_modules:<%= @deploy[:deploy_to] %>/current /usr/local/bin/node <%= @monitored_script %> &> <%= @deploy[:deploy_to] %>/current/log/production.log'"

The logs are under the app directory, in the shared/log path.
For example: /srv/www/my_app/shared/log

I found the logs by doing the following (similar to @RyanWilcox's comment):
1. I found my "current" deployment in /srv/www/{APP NAME}/current/.
2. Listing files, I could see the log directory symlink (log should be symlinked to something like /srv/www/{APP NAME}/shared/log).
3. I ran into a permissions issue trying to cd into that directory, so I switched to the superuser with "sudo su"; then I could access it.
4. Finally, in the log directory I could see the Node.js console logs for STDERR and STDOUT.
... and Bob's your uncle.

Unless the behavior of console.log has been altered, node will write console.log output to the application's standard output (stdout).
If you are running your node application in a console or using ssh, you should see the logs there.
Otherwise, try redirecting your application's stdout to a file so you can see the logs even when the app runs without a console:
node myapp.js > logfile
A preferred way would be to use Forever to make sure your application is constantly up; it also lets you redirect the output (both stdout and stderr) as follows:
forever -o forever.out -e forever.err myapp.js

Related

npm adds whitespace when setting env variable in package.json

I have a pre-written package.json file for an app which I need to modify. More specifically, I want to change the NODE_PORT environment variable through the package.json file and I'm working on a Windows machine.
In the package.json I have several scripts that I run through npm when I want to spin up an instance of the app.
For example:
set NODE_PORT=80&& set NODE_ENV=test&& pm2 install pm2-logrotate&& pm2 start app.js -i max -o ./logs/access.log -e ./logs/err.log --time --name Test
This script for example works fine.
However, when I'm trying to set the NODE_PORT variable to 8080 (that's the port I need) like so:
set NODE_PORT=8080&& set NODE_ENV=parallel_test&& pm2 install pm2-logrotate&& pm2 start app.js -i max -o ./logs/parallel_access.log -e ./logs/parallel_err.log --time --name Parallel_Test
whitespace gets added at the end of the variable.
I verified this by printing the number of characters of process.env.NODE_PORT in the log file, which prints 5. Moreover, login to the app via Google crashes because the app's redirect link no longer matches the one in the Google Cloud Platform. That is:
app: http://localhost:8080 /auth/check-google vs. Google Cloud Platform: http://localhost:8080/auth/check-google
Any idea why this is happening?
I faced a similar issue recently and handled it with .trimEnd() while adding variables with dotenv. But I think using cross-env can solve your problem.
Most Windows command prompts will choke when you set environment variables with NODE_ENV=production like that. (The exception is Bash on Windows, which uses native Bash.) Similarly, there's a difference in how Windows and POSIX commands utilize environment variables: with POSIX, you use $ENV_VAR, and on Windows you use %ENV_VAR%.
Add this inside your script: "cross-env NODE_PORT=8080 ..."
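A minimal sketch of the rewritten script line using cross-env (flags copied from the question; note that cross-env sets the variables only for the single command it launches, so the pm2-logrotate install can stay as a separate chained step):
cross-env NODE_PORT=8080 NODE_ENV=parallel_test pm2 start app.js -i max -o ./logs/parallel_access.log -e ./logs/parallel_err.log --time --name Parallel_Test
With no "set" involved, no trailing whitespace can sneak into the value.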

No logs appear on Cloudwatch log group for elastic beanstalk environment

I have an Elastic Beanstalk environment running a Docker container that has a Node.js API. On the AWS Console, if I select my environment and go to Configuration/Software, I have the following:
Log groups: /aws/elasticbeanstalk/my-environment
Log streaming: Enabled
Retention: 3 days
Lifecycle: Keep after termination.
However, if I click on that log group in the CloudWatch console, I see a Last Event Time of some weeks ago (which I believe corresponds to when the environment was created) and no content in the logs.
Since this is a dockerized application, logs for the server itself should be at /aws/elasticbeanstalk/my-environment/var/log/eb-docker/containers/eb-current-app/stdouterr.log.
If I instead get the logs directly from the instances, by going once again to my EB environment, clicking "Logs" and then "Request last 100 Lines", the logging is happening correctly. I just can't see a thing when using CloudWatch.
Any help is gladly appreciated
I was able to get around this problem.
CloudWatch computes a hash based on the first line of your log file and the log stream key, and the problem was that the first line of my stdouterr.log file was actually an empty line!
After a couple of days of playing around and getting help from the good AWS support team, I connected via SSH to the EC2 instance associated with the EB environment. You need to add the following line to the /etc/awslogs/config/beanstalklogs.conf file, right after the "file=/var/log/eb-docker/containers/eb-current-app/stdouterr.log" line:
file_fingerprint_lines=1-20
With this, you tell the awslogs service to calculate the hash using lines 1 through 20 of the log file. You could use a number larger or smaller than 20 depending on your logging content; however, I don't know if there is an upper limit for the value.
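With the added line, the relevant stanza of beanstalklogs.conf ends up looking roughly like this (it mirrors the .ebextensions content shown further below; EB-ENV-NAME stands in for the rendered environment name):
[/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
log_group_name=/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log
log_stream_name={instance_id}
file=/var/log/eb-docker/containers/eb-current-app/stdouterr.log
file_fingerprint_lines=1-20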
After doing so, you need to restart the AWS Logs Service on the instance.
For this you would execute:
sudo service awslogs stop
sudo service awslogs start
or simpler:
sudo service awslogs restart
After these steps I started using my environment and the logging was now being properly streamed to the CloudWatch console!
However, this would not survive a new deployment, the EC2 instance being replaced, or the Auto Scaling group spawning another instance.
To fix this properly, you can add the log config via the .ebextensions directory at the root of your application before deploying.
I added a file called logs.config to the newly created .ebextensions directory and placed the following content in it:
files:
  "/etc/awslogs/config/beanstalklogs.conf":
    mode: "000644"
    user: root
    group: root
    content: |
      [/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
      log_group_name=/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*stdouterr.log
      file_fingerprint_lines=1-20
commands:
  01_remove_eb_stream_config:
    command: 'rm /etc/awslogs/config/beanstalklogs.conf.bak'
  02_restart_log_agent:
    command: 'service awslogs restart'
Of course, replace EB-ENV-NAME with your environment's name on EB.
Hope it can help someone else!
For 64-bit Amazon Linux 2 the setup is slightly different.
For log delivery, the AWS CloudWatch agent is installed in /opt/aws/amazon-cloudwatch-agent, and the Elastic Beanstalk configuration is in /opt/aws/amazon-cloudwatch-agent/etc/beanstalk.json. It is set to log the output of the container, assuming there's a file called stdouterr.log; here's a snippet of the config:
{
  "file_path": "/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_group_name": "/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_stream_name": "{instance_id}"
}
However, when I look for the file_path, it doesn't exist; instead I have a file path that encodes the current Docker container ID: /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log.
This log file is created by the script /opt/elasticbeanstalk/config/private/eb-docker-log-start, which is started by the eb-docker-log service. The default contents of this file are:
EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
mkdir -p /var/log/eb-docker/containers/eb-current-app/
docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
To temporarily fix the logging, you can manually run the following (replacing the Docker ID), and logs will start to appear in CloudWatch:
ln -sf /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
To make this permanent, I added an .ebextension that fixes the eb-docker-log service so it re-creates this link. Create a file in your source code in .ebextensions called fix-cloudwatch-logging.config and set its contents to:
files:
  "/opt/elasticbeanstalk/config/private/eb-docker-log-start":
    mode: "000755"
    owner: root
    group: root
    content: |
      EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
      mkdir -p /var/log/eb-docker/containers/eb-current-app/
      ln -sf /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
      docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
commands:
  fix_logging:
    command: systemctl restart eb-docker-log.service
    cwd: /home/ec2-user
    test: "[ ! -L /var/log/eb-docker/containers/eb-current-app/stdouterr.log ] && systemctl is-active --quiet eb-docker-log"

Output debug package logs inside a file

I'm using the debug npm module to log stuff; is there a way to log into a file programmatically?
Right now I'm doing DEBUG=* node myApp.js > abc.log. How can I log into abc.log by simply running DEBUG=* node myApp.js, while also outputting to stderr?
I didn't find any package that does this.
The package doesn't seem to provide a built-in feature for this, but it gives you a hook to customise how logs are emitted.
There is an example in the Readme.
Note: the example is a bit confusing, because it shows you how to replace writing on stdout with ... writing on stdout using the console!
So, at application startup, you should:
1. Open a stream that writes to a file (Node's fs.createWriteStream will do).
2. Override log.log() as explained in the doc, to write to your file instead of using console.log(); see the sketch after this list.
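Putting both steps together, here is a minimal sketch, assuming the output should go to the question's abc.log while still being echoed to stderr (debug's default destination):
var fs = require('fs');
var util = require('util');
var debug = require('debug');

// step 1: open a stream that appends to the log file
var logFile = fs.createWriteStream('abc.log', { flags: 'a' });

// step 2: override the hook debug uses to emit each formatted line
debug.log = function () {
  var line = util.format.apply(util, arguments) + '\n';
  logFile.write(line);        // programmatic copy in abc.log
  process.stderr.write(line); // keep the usual console output
};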

Output Node.js logs to filesystem

I have a Splunk forwarder managing logs in my production servers, so I really just need to get the output of my node app into a file that Splunk is watching. What is the downside of simply doing the following in production:
node server.js &> output.log
As opposed to handling the log output inside the node process with some sort of logging module...
Check out supervisord, which is a logging and babysitting tool that becomes the parent of processes like a node server; it can redirect both standard out and standard error to files of your choosing. Besides, it will sniff for abends and relaunch the child process when needed.
Here is a typical config file: /etc/supervisor/conf.d/supervisord.conf
[supervisord]
nodaemon=true
logfile=GKE_MASTER_LOGDIR/supervisord_nodejs_GKE_FLAVOR_USER.log
pidfile=GKE_MASTER_LOGDIR/supervisord_nodejs_GKE_FLAVOR_USER.pid
stdout_logfile_maxbytes = 1MB
stderr_logfile_maxbytes = 1MB
logfile_backups = 50
# loglevel = debug
[program:nodejs]
command=/tmp/boot_nodejs.sh %(ENV_MONGO_SERVICE_HOST)s %(ENV_MONGO_SERVICE_PORT)s
stdout_logfile = GKE_MASTER_LOGDIR/nodejs_GKE_FLAVOR_USER_stdout.log
stderr_logfile = GKE_MASTER_LOGDIR/nodejs_GKE_FLAVOR_USER_stderr.log
stdout_logfile_maxbytes = 1MB
stderr_logfile_maxbytes = 1MB
logfile_backups = 50
autostart = True
autorestart = True
# user = GKE_NON_ROOT_USER
In my case this all happens inside a Docker container, so here is a snippet of my Dockerfile, which launches supervisord, which in turn launches nodejs and in doing so redirects stdout/stderr to log files that supervisord rotates based on space and/or time. Use of Docker is orthogonal to using supervisord, so YMMV.
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf" ]
For completeness, below is the boot_nodejs.sh referenced above:
#!/bin/bash
given_mongo_service_host=$1
given_mongo_service_port=$2
current_dir=$(dirname "${BASH_SOURCE}")
current_timestamp="timestamp "$(date '+%Y%m%d_%H%M%S_%Z')
echo
echo "______________ fresh nodejs server bounce ______________ $current_timestamp"
echo
# ............... now output same to standard error so its log gets the hat tip
(>&2 echo )
(>&2 echo "______________ fresh nodejs server bounce ______________ $current_timestamp" )
(>&2 echo )
# ................
export MONGO_URL=mongodb://$given_mongo_service_host:$given_mongo_service_port
type node
node main.js
There's no problem with redirecting your output to a log file. In a lot of ways, this is preferable.
Having your application write logs directly is more useful when the application is complicated and needs a lot of log configuration, possibly writing to several log files. What I do is use Winston for logging. Normally the only log transport enabled is the console, and I can redirect that to a file if I want. But I also have, in my app config, a way to specify other transports and their configuration; I use that for writing directly to Logstash and the like.
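As a rough sketch of that arrangement (winston 3.x API; the LOG_FILE variable and the log level are placeholders of mine, not the actual config described above):
var winston = require('winston');

// the console is the only transport enabled by default; the shell can
// still redirect it to a file, e.g. node server.js &> output.log
var logger = winston.createLogger({
  level: 'info',
  transports: [new winston.transports.Console()]
});

// optionally enable extra transports from app config
if (process.env.LOG_FILE) {
  logger.add(new winston.transports.File({ filename: process.env.LOG_FILE }));
}

logger.info('server started');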

sqlite3 module cannot open database when started from ubuntu upstart

Working on Ubuntu Server 14.04.
I have an upstart .conf file in /etc/init for starting my node server. I am using forever. Here is what my script looks like:
start on filesystem or runlevel [2345]
expect fork
setuid myUserId
env HOME=/home/myUserId/
env NODE_BIN_DIR=/usr/bin
env NODE_PATH=/usr/lib/nodejs:/usr/lib/node_modules:/usr/share/javascript
script
PATH=$NODE_BIN_DIR:$NODE_PATH:$PATH
echo $PATH
exec forever start -o /home/myUserId/nodeServ/lServer/logs/out.log /home/myUserId/nodeServ/lServer/server.js 1337
end script
But I keep getting this error
Error: SQLITE_CANTOPEN: unable to open database file
error: Forever detected script exited with code: 8
If I run the script from the command line exactly as it is in the conf file, it works just fine. No problems. So I think it is a permissions issue. I have set read/write/execute permissions on the database directory and the database itself, and still I am unable to read from the file.
I have tried so many different things and I cannot figure out why this is happening.
UPDATE: This problem appears not to be isolated to upstart. I tried starting forever from a shell script as well and I get the same errors.
I resolved my issue via a workaround: not using forever, and starting node directly from the upstart file (allowing respawn). No issues. This appears to be either a sqlite3 issue or a forever issue.
Run sudo chown www-data: . in the directory containing the db file.
Another solution is to check whether the file exists or not, e.g.:
var fs = require("fs");
var exists = fs.existsSync(dbfilepath);
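Another thing worth checking (an assumption on my part, but consistent with "works from the command line, fails under upstart"): a relative database path. Upstart starts the process with a different working directory, so sqlite3 may fail to resolve the file and report SQLITE_CANTOPEN. A minimal sketch of resolving the path against the script's own directory, with a hypothetical file name:
var path = require('path');
var sqlite3 = require('sqlite3');

// resolve relative to this script, not the process working directory
var dbfilepath = path.join(__dirname, 'mydb.sqlite'); // hypothetical name
var db = new sqlite3.Database(dbfilepath, function (err) {
  if (err) console.error('Failed to open database:', err.message);
});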
