I've installed an npm package/script in a jail on FreeNAS 9.10 (which is FreeBSD based).
It works perfectly if I run "npm start" in the directory where the scripts are installed.
However, I need this to auto-start when the jail starts, and I don't know how to do that. Do I need to create an rc script?
Basically, all I need to do is run "npm start" in the correct directory at startup. How do I do that?
Thanks.
Yes, you can place an rc script within the jail and enable it using the jail's /etc/rc.conf file.
But, for a quick and dirty solution, you could create a /etc/rc.local script (also within the jail's environment) and put your startup commands in there.
See the rc(8) manual page for details.
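For example, a minimal /etc/rc.local inside the jail might look something like this (the paths are placeholders; point them at wherever your npm package actually lives):
#!/bin/sh
# /etc/rc.local inside the jail -- executed late in the boot sequence.
# Change into the app directory and launch it in the background,
# sending output to a log file. Adjust the paths to your setup.
cd /usr/local/www/myapp && /usr/local/bin/npm start > /var/log/myapp.log 2>&1 &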
I don't know about npm start, but for node.js I made the following rc script:
#!/bin/sh
# $FreeBSD: 340872 2014-01-24 00:14:07Z mat $
#
# PROVIDE: SERVICENAME
# REQUIRE: NETWORKING
# KEYWORD: shutdown
#
# Add the following line to /etc/rc.conf to enable SERVICENAME:
#
# SERVICENAME_enable="YES"
#
. /etc/rc.subr
name="SERVICENAME"
rcvar=SERVICENAME_enable
pidfile=${SERVICENAME_pidfile:-"/var/run/SERVICENAME.pid"}
command="/usr/sbin/daemon"
#command_args="-r -u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR" # cjayho: restart if crashed
command_args="-u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR"
load_rc_config $name
: ${SERVICENAME_enable:="NO"}
run_rc_command "$1"
Name this file something like SERVICENAME and put it in /usr/local/etc/rc.d.
To enable automatic startup, run this command as root:
sysrc SERVICENAME_enable="YES"
Do not forget to replace SERVICENAME, USERNAME and PROGDIR with your values, and add
process.chdir('/home/USERNAME/PROGDIR')
to your entry js file.
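If you want the rc script to run "npm start" itself (as in the original question) rather than calling node directly, one hedged variant is to have daemon(8) start a shell that changes into the project directory first; this assumes npm is installed at /usr/local/bin/npm and uses the same placeholder names:
command_args="-u USERNAME -P /var/run/SERVICENAME.pid /bin/sh -c 'cd /home/USERNAME/PROGDIR && /usr/local/bin/npm start'"
With that variant the process.chdir() call in the entry js file should no longer be necessary, since the working directory is set before npm runs.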
Related
I have a pre-written package.json file for an app which I need to modify. More specifically, I want to change the NODE_PORT environment variable through the package.json file and I'm working on a Windows machine.
In the package.json I have several scripts that I run through npm when I want to spin up an instance of the app.
For example:
set NODE_PORT=80&& set NODE_ENV=test&& pm2 install pm2-logrotate&& pm2 start app.js -i max -o ./logs/access.log -e ./logs/err.log --time --name Test
This script, for example, works fine.
However, when I'm trying to set the NODE_PORT variable to 8080 (that's the port I need) like so:
set NODE_PORT=8080&& set NODE_ENV=parallel_test&& pm2 install pm2-logrotate&& pm2 start app.js -i max -o ./logs/parallel_access.log -e ./logs/parallel_err.log --time --name Parallel_Test
a whitespace character gets added at the end of the variable.
I verified this by printing out the number of characters of process.env.NODE_PORT in the log file, which prints 5. Moreover, the login for the app via Google crashes because the app's redirect link doesn't match the one in the Google Cloud Platform. That is:
app: http://localhost:8080 /auth/check-google vs. Google Cloud Platform: http://localhost:8080/auth/check-google
Any idea why this is happening?
I faced a similar issue recently and handled it with .trimEnd() while loading variables with dotenv. But I think using cross-env can solve your problem.
Most Windows command prompts will choke when you set environment variables with NODE_ENV=production like that. (The exception is Bash on Windows, which uses native Bash.) Similarly, there's a difference in how Windows and POSIX commands utilize environment variables. With POSIX, you use $ENV_VAR and on Windows you use %ENV_VAR%.
Add this inside your script: cross-env NODE_PORT=8080 ...
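For example, the parallel test script from the question might look something like this in package.json (the script name is illustrative, and pm2-logrotate is installed first so that cross-env only wraps the pm2 start call that needs the variables):
"scripts": {
  "start:parallel": "pm2 install pm2-logrotate && cross-env NODE_PORT=8080 NODE_ENV=parallel_test pm2 start app.js -i max -o ./logs/parallel_access.log -e ./logs/parallel_err.log --time --name Parallel_Test"
}
Because cross-env sets the variables itself rather than relying on cmd's set VAR=value&& chaining, the trailing-whitespace problem goes away.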
What I'm doing
I am using AWS Batch to run a Docker container for a large compute job. I have configured ECR/ECS successfully, to the best of my knowledge, but am having issues running the required commands, for reasons that are beyond my level of understanding of Docker (I'm a newbie).
What I need to do is pass the below commands into my application and start my application to perform some heavy computing tasks; all commands listed below must be present.
The Issue(s)
The issue arises when I submit the job to AWS Batch; this service pulls the image from ECR (the Amazon container registry) and spins up a compute environment. The issue comes from when I try to run the command I pass in; I will go through it below.
"command": [
"mkdir -p logging",
"chmod 777 logging/",
"docker run -t -i -e my-application", # container name
"-e APIKEY",
"-e BASEURI",
"-e APIUSER",
"-v WORKSPACE /logging:/src/log",
"DOCKERIMAGE",
"python my_app.py",
"-t APP_USER",
"-e APP_ENVIRONMENT",
"-u APP_USERNAME",
"-p APP_PASSWORD",
"-i IN_PATH",
"-o OUT_PATH",
"-b tmp/"
]
The command above generates the following error(s)
container_linux.go:370: starting container process caused: exec: "mkdir -p log": executable file not found in $PATH
I tried to pass in a command to echo the env var $PATH but was unsuccessful in getting a response; it resulted in a similar error.
I have successfully run "ls" and was able to see the directory contents of my application inside.
I am not, however, able to run any of the commands that I have included in the command [] section. I have tried just running python and such in the hope of getting a more detailed error, but was unsuccessful.
Logic in plain English
Create a path called logging if it doesn't exist
Set the permissions for logging
Run the docker container and pass in the environment variables while doing so
Tell docker to run the python file my_app.py and pass in the expected runtime args
Execute and perform the required logic delegated to the python3 application
Questions
Why can I not create a directory called "logging" here? Where am I?
Am I running these properly as defined by AWS Batch, or by Docker?
What am I missing or where am I going wrong?
AWS Batch high level doc
AWS Batch link specific to what I'm doing
Assuming that you're following the syntax described in the Container Properties section of the AWS docs, you have several problems with the syntax of your command directive.
First
The command directive can only run a single command. You can't mash together a bunch of commands as you're trying to do in your example. If you need to run multiple commands you would need to embed them as an argument to a shell. For example, something like:
command: ["/bin/sh", "-c", "mkdir -p logging; chmod 777 logging; ..."]
Second
You must properly tokenize your command lines -- that is, when you type mkdir -p logging at the command prompt, the shell splits this into three parts (or "tokens"): ['mkdir', '-p', 'logging']. You need to do the same thing when building up the list of arguments to command.
This is invalid:
command: ["mkdir -p logging"]
That would look for a command named mkdir -p logging, and of course no such command exists. It would properly be written as:
command: ["mkdir", "-p", "logging"]
Third
I'm not very familiar with the AWS Batch environment, but it's unlikely you can run a docker command inside a docker container as you're trying to do. It's unclear why you're doing this, though: why not just configure your AWS Batch job with the appropriate image, environment variables, etc.?
Take a look at some of these example job definitions.
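Putting the first two points together, here is a hedged sketch of what the command directive could look like if the job definition already points at your image and the environment variables (APIKEY, BASEURI, APIUSER, and so on) are supplied through the job definition's environment section; the argument values are copied from the question and may need adjusting:
"command": [
  "/bin/sh", "-c",
  "mkdir -p /src/log && chmod 777 /src/log && python my_app.py -t APP_USER -e APP_ENVIRONMENT -u APP_USERNAME -p APP_PASSWORD -i IN_PATH -o OUT_PATH -b tmp/"
]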
I'm following a tutorial on PluralSight regarding vagrant and hubot slack setup.
The only difference is that I'm using hubot-slack.
If I start the hubot by invoking the hubot script from the terminal, everything works fine: the bot connects and responds to commands.
Unfortunately, when the hubot is started as a service by upstart, I get this logged to /var/log/upstart/myhubot.log: Cannot load adapter slack - Error: Cannot find module 'hubot-slack'
My /bin/hubot file looks like this (it works just fine when executed from the CLI):
#!/bin/sh
set -e
npm install
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:$PATH"
export HUBOT_SLACK_TOKEN={}
exec node_modules/.bin/hubot --name "hubot" --adapter slack "$@"
My .conf file that's executed as a service looks like this (this is the one that can't find the module):
description "My hubot"
author "Me bla#bla.com"
start on runlevel [2345]
stop on runlevel [016]
setuid vagrant
env HOME="/home/vagrant"
chdir /vagrant/my-awesome-hubot
console log
script
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:/usr/bin/coffee:/usr/bin/node:$PATH"
export HUBOT_SLACK_TOKEN={}
echo "DEBUG: `set`" >> /tmp/myhubot.log
exec node_modules/.bin/hubot --name "hubot" --adapter slack
end script
respawn
Keep in mind that the slack token is excluded from these scripts.
Debug reveals that chdir does the correct thing and the pwd is exactly the same as when I execute the script manually.
I've tried removing the entire nodejs project and generating it with yeoman from scratch, and I've also tried installing hubot-slack both globally and locally, but to no avail.
In the case of the .conf file there is no npm install, but in the provision.sh file I am cd-ing (as the vagrant user) into the root directory, doing npm install, and only then restarting the service. I am also making sure to clean up everything before another round of testing before I do vagrant provision:
cp /vagrant/upstart/myhubot.conf /etc/init/myhubot.conf
sudo -u vagrant -i sh -c 'cd /vagrant/my-awesome-hubot; npm install'
service myhubot restart
Do you have any suggestions?
I've just spent the day working through the same issue as this unanswered question, so I thought I would update with my solution.
The current hubot-generated app is started from the CLI with the command HUBOT_SLACK_TOKEN=xoxb-YOUR-TOKEN-HERE ./bin/hubot --adapter slack whilst in the folder where hubot was generated, so it utilises the default bin/hubot script.
Your conf file needs to pick this up and should therefore run the following:
description "My hubot"
author "Me bla#bla.com"
start on runlevel [2345]
stop on runlevel [016]
script
chdir /vagrant/my-awesome-hubot
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:/usr/bin/coffee:/usr/bin/node:$PATH"
HUBOT_SLACK_TOKEN=xoxb-YOUR-TOKEN-HERE ./bin/hubot --adapter slack --name "hubot" >> /tmp/myhubot.log
end script
respawn
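With the conf file in place, the provisioning steps from the question still apply: copy it into /etc/init/, make sure dependencies are installed, and restart the service:
cp /vagrant/upstart/myhubot.conf /etc/init/myhubot.conf
sudo -u vagrant -i sh -c 'cd /vagrant/my-awesome-hubot; npm install'
service myhubot restart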
I have an app that I want to run on a single self contained Docker image.
I had it running fine on an Ubuntu based image, but the same script now causes me trouble on Alpine.
Here is my Dockerfile:
FROM julienlengrand/alpine-node-rethinkdb
# Preparing
# RUN ln -snf /bin/bash /bin/sh
# # Define mountable directories.
VOLUME ["/data"]
# # Define working directory.
WORKDIR /data
# # Install app dependencies
COPY package.json /data
RUN npm install
# # Bundle app source
COPY . /data
# # Expose rethinkdb ports.
# - 8080: web UI
# - 28015: process
# - 29015: cluster
EXPOSE 8080
#EXPOSE 28015
#EXPOSE 29015
# Expose node app ports
EXPOSE 4567
CMD [ "/bin/sh", "/data/startApp.sh" ]
My startApp script is relatively simple:
#!/bin/sh
rethinkdb --bind all & sleep 1; node dbCreate.js; sleep 2; nohup node workers/worker.js & node app.js
But when I try to run it, I get the following error:
module.js:442
throw err;
^
'rror: Cannot find module '/data/app.js
at Function.Module._resolveFilename (module.js:440:15)
at Function.Module._load (module.js:388:25)
at Module.runMain (module.js:575:10)
at run (bootstrap_node.js:352:7)
at startup (bootstrap_node.js:144:9)
at bootstrap_node.js:467:3
This happens whether I run it automatically, or directly within the image using the shell.
I have checked and everything is correctly placed in the data folder.
Additionally, if I run all the commands one after the other directly in the sh shell everything runs as expected.
I have also tried to simplify my script as such:
#!/bin/sh
rethinkdb --bind all & sleep 1; node dbCreate.js; sleep 2; node app.js
but the same issue happens.
Any idea what could be going wrong? What could make my /data folder unavailable when running via the startApp script? Could it be something specific to Alpine?
Thanks,
The error message you are receiving looks like a classic carriage-return issue, as the quote after app.js has moved to the start of the line:
'rror: Cannot find module '/data/app.js
Node should normally be able to deal with both line endings but shell scripts aren't so kind.
I generally default all projects/files/editors/git to Unix \n line endings unless there are specific requirements not to.
You can convert existing files with dos2unix or one of the answers in the question jlengrand found. I like perl -pi -e 's/\r\n/\n/g', because pie.
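For example, converting the startup script from the question in place (either command rewrites CRLF line endings to plain LF):
dos2unix startApp.sh
perl -pi -e 's/\r\n/\n/g' startApp.sh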
After updating strongloop to v2.10, slc stopped writing logs.
Also, I couldn't make the app start in production mode.
/etc/init/app.conf
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
env NODE_ENV=production
script
exec slc run /home/ubuntu/app/ \
-l /home/ubuntu/app/app.log \
-p /var/run/app.pid
end script
Can anybody check my upstart config or provide another working copy?
Were you writing the pid to a file so that you can use it to send SIGUSR2 to the process to trigger log re-opening from logrotate?
Assuming you are using Upstart 1.4+ (Ubuntu 12.04 or newer), you would be better off letting slc run log to its stdout and letting Upstart take care of writing it to a file, so that log rotation is done for you:
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
# assuming this is /etc/init/app.conf,
# stdout+stderr logged to: /var/log/upstart/app.log
console log
env NODE_ENV=production
exec /usr/local/bin/slc run --cluster=CPUs /home/ubuntu/app
The log rotation for "free" is nice, but the biggest benefit of this approach is that Upstart can log errors that slc run reports even if it crashes while trying to set up its internal logging, which makes debugging a lot easier.
Aside from what it means to your actual application, the only effect NODE_ENV has on slc run is to set the default number of cluster workers to the number of detected CPU cores, which literally translates to --cluster=CPUs.
Another problem I find is the node/npm path prefix not being in the $PATH used by Upstart, so I normally put the full paths to executables in my Upstart jobs.
Service Installer
You could also try using strong-service-install, which is a module used by slc pm-install to install strong-pm as an OS service:
$ npm install -g strong-service-install
$ sudo sl-svc-install --name app --user ubuntu --cwd /home/ubuntu/app -- slc run --cluster=CPUs .
Note the spaces around the -- before slc run.