Configure monitrc to monitor a Node.js app process: can't find the node app pidfile (running with screen on an EC2 Linux instance)

I run my app.js (a Node.js application) via screen on my EC2 Linux instance.
I'm trying to configure my monitrc file and I need the app's pidfile.
It's not in:
/var/run
(and there isn't a /var/www)
I would really appreciate it if someone has any idea where the pidfile is, or how I can find it out.
Thank you!

In your app you can get the current pid with process.pid, so:
var fs = require('fs');
// process.pid is a number; write it as a string (writeFile also needs a callback)
fs.writeFile('/tmp/pidfile', String(process.pid), function (err) {
  if (err) throw err;
});
and you get a pidfile in /tmp.
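With that file in place, a rough monitrc entry could look like the sketch below (my addition, not from the original answer; the start/stop commands and the port are assumptions to adapt to your setup):
check process nodeapp with pidfile /tmp/pidfile
  start program = "/usr/bin/node /home/ec2-user/nodejs/app.js"
  stop program  = "/usr/bin/pkill -F /tmp/pidfile"
  if failed port 8383 protocol http then restart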

It seems like there isn't a pid file created, so I used forever-monitor in order to restart my app.js script in case of an error.
Looks like it is working.
What you need to do is npm install forever
and write server.js:
var forever = require('forever'),
    child = new (forever.Monitor)('app.js', {
      'silent': false,
      'pidFile': '/var/run/app.pid',
      'watch': false,
      'options': ['8383'],            // Additional arguments to pass to the script
      'sourceDir': '.',               // Directory that the source script is in
      'watchDirectory': '.',          // Top-level directory to watch from
      'watchIgnoreDotFiles': true,    // Whether to ignore dot files
      'watchIgnorePatterns': [],      // Array of glob patterns to ignore, merged with the contents of watchDirectory + '/.foreverignore'
      'logFile': 'logs/forever.log',  // Path to log output from the forever process (when daemonized)
      'outFile': 'logs/forever.out',  // Path to log output from the child's stdout
      'errFile': 'logs/forever.err'
    });
child.start();
forever.startServer(child);
and then run it with node server.js (I run it from the ~/nodejs directory).
Still, the pid file that is supposed to be in /var/run isn't there. Weird, but I don't need monit anymore.
I still don't understand why I should additionally use upstart (like all the related posts suggested); anyhow, when I tried to run upstart it didn't work.
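As an addition (not part of the original setup): forever-monitor's Monitor is an event emitter, so server.js can also log restarts. The event names and the times property below are my assumption from the forever-monitor docs, so verify them against your installed version.
// Assumed forever-monitor events; verify against your installed version.
child.on('restart', function () {
  console.error('app.js restarted, restart count: ' + child.times);
});
child.on('exit', function () {
  console.error('app.js exited permanently');
});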

Related

Node.js: load dotenv with forever

I would like to ask if anyone knows how to run forever so that it loads a .env file.
Currently, if we run forever start app.js, process.env.foo becomes undefined.
TL;DR: you need to add the --workingDir path to your cronjob line.
forever -c "node -r dotenv/config" --workingDir app-workdir-path start app.js
There are many previous answers, but none of them really solves this specific use case.
To run forever with dotenv you need to do two things.
First, we need to use dotenv's preload feature, meaning we need forever to pass a node parameter to the process. We can do that with the -c COMMAND flag forever has.
The second thing is related to how the dotenv package works. Here is a snippet from its source code:
let dotenvPath = path.resolve(process.cwd(), '.env')
What does process.cwd() do?
The process.cwd() method is a built-in API of the process module that returns the current working directory of the Node.js process.
Meaning the dotenv package wants to load the .env file from the working directory. So, to solve this issue, we can use forever's --workingDir flag to specify the actual working directory of the process.
The final command will look like this:
forever -c "node -r dotenv/config" --workingDir app-workdir-path start app.js
Where app-workdir-path is the absolute path to the project directory.
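If this is meant to be started from cron (as the TL;DR suggests), a hypothetical crontab entry might look like this; the paths are placeholders for your own setup:
# Start the app at boot with forever, preloading dotenv
@reboot /usr/local/bin/forever -c "node -r dotenv/config" --workingDir /home/ubuntu/myapp start /home/ubuntu/myapp/app.js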
What worked for me was to specify the full path:
require('dotenv').config({ path: '/opt/api/.env' });
You can use the dotenv package for this purpose. At your app's entry point, do this:
require('dotenv').config({ path: '.env' })
If you have added the .env file in the root directory of your project, then you can use it like this:
require('dotenv').config()
Or, if you created your .env file in a different location, then in your code use:
require('dotenv').config({ path: '/your/path/.env' })
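For illustration, here is how a hypothetical .env file maps onto process.env once dotenv has loaded it (the variable names below are made up):
// Assuming a .env in the project root containing, e.g.:
//   PORT=8383
//   DB_URL=postgres://localhost/mydb
require('dotenv').config();
console.log(process.env.PORT);   // "8383" (dotenv values are always strings)
console.log(process.env.DB_URL); // "postgres://localhost/mydb"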
I found your question and had the same issue. I don't think dotenv works with forever, at least not in a way I was able to get working. However, I think there's a workaround you could employ: I was able to specify environment variables on the command line preceding the forever command, and forever passed those environment variables to my node app.
~$ ENV=production forever start yourApp.js
For more information about specifying environment variables on the command line, check out this Stack Overflow question.
I've had this issue with a multi-server forever config.
You should include the --workingDir parameter pointing to the root of your project directory in case you've included a .env file in your root and are using dotenv.
Example:
Flexible config with a minimum of "hard-coded" values:
.env placed in the root directory
dotenv used in the form of dotenv.config()
Code for a multi-server config (here running on one server):
const fs = require('fs');
const path = require('path');

let foreverConfig = [
  {
    uid: 'scheduledJobsServer',
    append: true,
    watch: true,
    script: 'scheduledJobsServer.js',
    sourceDir: path.join(__dirname, '/server'),
    workingDir: path.join(__dirname)
  },
  {
    uid: 'mainServer',
    append: true,
    watch: true,
    script: 'server.js',
    sourceDir: path.join(__dirname, '/server'),
    workingDir: path.join(__dirname)
  }
];

try {
  fs.writeFileSync(
    path.join(__dirname, '/foreverConfig.json'),
    JSON.stringify(foreverConfig),
    { encoding: 'utf8' }
  );
  let consoleMessage = 'Init script success';
  console.log('\x1b[42m%s\x1b[0m', consoleMessage);
} catch (e) {
  console.log('Init script error:', e);
  process.exit(1);
}
Then run forever start foreverConfig.json
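For reference, the generated foreverConfig.json would contain roughly the following (pretty-printed here; /abs/path/to/project stands in for the resolved __dirname):
[
  {
    "uid": "scheduledJobsServer",
    "append": true,
    "watch": true,
    "script": "scheduledJobsServer.js",
    "sourceDir": "/abs/path/to/project/server",
    "workingDir": "/abs/path/to/project"
  },
  {
    "uid": "mainServer",
    "append": true,
    "watch": true,
    "script": "server.js",
    "sourceDir": "/abs/path/to/project/server",
    "workingDir": "/abs/path/to/project"
  }
]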
Sometimes you have to call the node script from another directory, for instance when running cron jobs. Here is what you can do:
cd /path/to/script/ && /usr/bin/forever start /usr/bin/node script.js
Now the .env file will load.
The easiest command for me is
dotenv -e .env forever start build/index.js
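Note that the dotenv executable used here is, as far as I can tell, provided by the separate dotenv-cli package, so the setup would be roughly:
npm install -g dotenv-cli   # provides the `dotenv` wrapper command
dotenv -e .env forever start build/index.js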

How to add a Node.js app's console output to the NGINX access log file?

I have a Node.js app set up with systemd. The app runs behind NGINX.
I would like to add the console output of my Node.js application to the NGINX access log file.
How can I do this?
Thanks in advance.
A simpler way is to hook console.log and then call console.log as usual.
var util = require('util');
var JFile = require('jfile');
var nxFile = new JFile('/var/log/nginx/access.log');
...
process.stdout.write = (function (write) {
  return function (text, encoding, fd) {
    write.apply(process.stdout, arguments);                              // write to the console
    nxFile.text += util.format.apply(process.stdout, arguments) + '\n';  // write to the nginx log
  };
})(process.stdout.write);
You can also hook console.error by changing stdout to stderr in the code above.
P.S. I don't have nginx available to verify the code, so it may contain errors :)
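If you'd rather avoid the JFile dependency, a variant of the same hook using only the built-in fs module looks like this (a sketch; the log path is the same assumption as above):
var fs = require('fs');
var util = require('util');
var logPath = '/var/log/nginx/access.log';

process.stdout.write = (function (write) {
  return function () {
    write.apply(process.stdout, arguments);                                            // keep writing to the console
    fs.appendFile(logPath, util.format.apply(null, arguments) + '\n', function () {}); // append to the nginx log
  };
})(process.stdout.write);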
Brief:
Using the JFile package, file logging can be as smooth as the following:
nxFile.text += '\n' + message;
Details:
Add a function that logs to both (the terminal and the nginx log), then use it instead of calling console.log directly:
var customLog = function (message) {
  console.log(message);
  logNginx(message);
};
Then, implement logNginx, which is called inside customLog:
var JFile = require('jfile'); // "npm install jfile --save" required
let nxFile = new JFile('/var/log/nginx/access.log'); // check that this path exists on your system; if not, change it to an available one

function logNginx(message) {
  nxFile.text += '\n' + message; // append a new line to the nginx log file
}
Don't forget to install JFile (npm install jfile), which makes handling files quick.
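Usage then stays the same as a normal log call, for example:
customLog('GET /api/users 200'); // printed to the terminal and appended to the nginx log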
If you're running Node as a systemd process, with console.log going to stdout (which I believe is the default), and your goal is just to see the logs (or get them on disk somewhere), there's an easier way than all this Node meddling and hooking.
You should already have access to the console log, without doing anything extra, through journalctl. For instance, my systemd unit file (at /etc/systemd/system/myapp.service in this example) looks something like this:
[Unit]
Description=My Webapp
[Service]
WorkingDirectory=/srv/webapp-dir
ExecStart=/usr/local/bin/node server.js
Restart=always
RestartSec=5
Environment=NODE_ENV=production PORT=1984
User=myuser
Group=myuser
[Install]
WantedBy=multi-user.target
And running journalctl -u myapp shows me the console logs from my app.
If you want, you can also send those logs to the syslog with some additional parameters. I've added the following to my [Service] section to do so:
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=myapp
This results in my logs going to the syslog tagged with myapp, where I could filter them into their own log file with rsyslog filtering if I wanted to.
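For completeness, a hypothetical rsyslog rule for that filtering could look like the following (the file name and destination path are assumptions, not taken from the answer):
# e.g. /etc/rsyslog.d/30-myapp.conf
if $programname == 'myapp' then /var/log/myapp.log
& stop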
You can add the following code in your nginx script. This should work:
env NODE_BIN=/usr/bin/node
env SCRIPT_FILE="server.js"
env LOG_FILE=/var/log/logfilename.log
env RUN_AS="root"
$RUN_AS -- $NODE_BIN $SCRIPT_FILE >> $LOG_FILE 2>&1

Local node process running in wrong directory

I am using PhpStorm 9 to run a node instance on my local dev environment, but the process seems to be running in server.js's parent directory.
My folder structure looks like:
app/
  app_server/
    server.js
    user_resources/
  user_resources/
When I write a file with my local instance, it writes to user_resources in app/, and when I run the same process on the live environment it writes to user_resources in app_server/.
pdf.create(html, options).toFile(path + filename, function (err, result) {
  callback();
});
Using fs writeFile, readFile, or readdir gives similar behavior.
The local node server is run with PhpStorm and the live server runs with forever.
Both local and live are Ubuntu systems.
Any suggestions as to why the local node process seems to be running in server.js's parent directory?
The node server is probably executed from the app directory by PhpStorm, while the live process runs from app/app_server.
If no other hint is provided in the server code on where exactly to put the user_resources, they will end up in the current working directory (which is the path from which the node process was invoked).
You may want to specify a path relative to the location of server.js; this can easily be done like this:
var userResourcePath = __dirname + '/user_resources';
Node always ensures that __dirname is set to the directory of the file it appears in.
I made the assumption that the user_resources path of your live environment (app/app_server/user_resources) is the one you want for local development.
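A small variation on the same idea (my addition), using path.join instead of string concatenation:
var path = require('path');
// Resolves relative to the directory containing server.js, not the current working directory.
var userResourcePath = path.join(__dirname, 'user_resources');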

node.js child process: change directory and run the process

I am trying to run an external application in node.js with child_process, like the following:
var cp = require("child_process");
cp.exec("cd " + path + " && ./run.sh", function (error, stdout, stderr) {
});
However, when I run it, it gets stuck without ever entering the callback.
run.sh starts a server. When I execute it with cp.exec I expect it to run asynchronously, so that my application doesn't wait until the server terminates. In the callback I want to work with the server.
Please help me to solve this.
cp.exec accepts the working directory in the options parameter:
http://nodejs.org/docs/latest/api/child_process.html#child_process_child_process_exec_command_options_callback
Use
var cp = require("child_process");
cp.exec("./run.sh", { cwd: path }, function (error, stdout, stderr) {
});
to run the script in the "path" directory.
The quotes are interpreted by the shell; you cannot see them if you just look at the ps output.
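As a side note (not part of the answer above): exec buffers the child's output and only invokes its callback after the child exits, so for a long-lived server the callback will never fire while the server is up. A spawn-based sketch avoids waiting for termination (the path variable is the same one used in the question):
var cp = require('child_process');
// spawn returns immediately; the caller keeps running while run.sh serves.
var server = cp.spawn('./run.sh', [], { cwd: path, stdio: 'inherit' });
server.on('error', function (err) {
  console.error('failed to start run.sh:', err);
});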

NodeJS: exit parent, leave child alive

I am writing a utility. One command of this utility runs an external application.
var child_process = require('child_process');
var fs = require('fs');

var out = fs.openSync('.../../log/out.log', 'a');
var err = fs.openSync('.../../log/err.log', 'a');

exports.Unref = function (app, argv) {
  var child = child_process.spawn(app, argv, {
    detached: true,
    stdio: ['ignore', out, err]
  });
  child.unref();
  //process.exit(0);
};
Currently:
$ utility run app --some-args // runs the external app
// can't enter the next command while the app is running
My problem is that if I run this command, the terminal is locked while the "external" application is running.
But the terminal window shouldn't be locked by the child_process.
I want to run:
$ utility run app --some-args
$ next-command
$ next-command
The external application (a desktop application) will close by itself.
Like this:
$ subl server.js // this runs Sublime Text and passes a file to the editor
$ npm start // the terminal is not locked - I can execute the next command while Sublime is still running
You know what I mean? ^^
Appending ['>>../../log/out.log', '2>>../../log/err.log'] to the end of argv instead of leaving two files open should work, since it's the open file handles that are keeping the process alive.
Passing opened file descriptors in stdio in addition to detached: true will not work the way you expect, because there is no way to unref() the file descriptors in the parent process and still have them work for the child process. Even if there were a way, I believe that when the parent process exited, the OS would clean up (close) the file descriptors it had open, which would cause problems for the detached child process.
The only possible way this might have worked would have been by passing file descriptors to child processes, but that functionality was dropped several stable branches ago because the same functionality did not exist on some other platforms (read: Windows).
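To make that suggestion concrete, here is a sketch of the approach (my addition, not from the original answer): with shell: true the >> redirections are interpreted by a shell, which requires a Node version whose spawn supports the shell option.
var child_process = require('child_process');

exports.Unref = function (app, argv) {
  // Build one shell command so the redirections are handled by the shell.
  var cmd = [app].concat(argv, ['>> ../../log/out.log', '2>> ../../log/err.log']).join(' ');
  var child = child_process.spawn(cmd, {
    detached: true,
    stdio: 'ignore', // no open handles kept in the parent
    shell: true
  });
  child.unref();
};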
