Working on Ubuntu Server 14.04.
I have an upstart .conf file in /etc/init for starting my node server. I am using forever. Here is what my script looks like:
start on filesystem or runlevel [2345]
expect fork
setuid myUserId
env HOME=/home/myUserId/
env NODE_BIN_DIR=/usr/bin
env NODE_PATH=/usr/lib/nodejs:/usr/lib/node_modules:/usr/share/javascript
script
PATH=$NODE_BIN_DIR:$NODE_PATH:$PATH
echo $PATH
exec forever start -o /home/myUserId/nodeServ/lServer/logs/out.log /home/myUserId/nodeServ/lServer/server.js 1337
end script
But I keep getting this error:
Error: SQLITE_CANTOPEN: unable to open database file
error: Forever detected script exited with code: 8
If I run the script from the command line exactly as it is in the .conf file, it works just fine with no problems. So I think it is a permissions issue. I have set read/write/execute permissions on the database directory and the database itself, and I am still unable to read from the file.
I have tried so many different things and I cannot figure out why this is happening.
UPDATE: This problem does not appear to be isolated to upstart. I tried starting forever from a shell script as well and got the same errors.
I resolved my issue with a workaround: not using forever and starting node directly from the upstart file (allowing respawn). No issues. This appears to be either a sqlite3 issue or a forever issue.
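For reference, a minimal sketch of what such a job can look like, using only stanzas that already appear elsewhere in this thread; the paths, port, and node location follow the question's values and are assumptions about your layout:
description "node server started directly, without forever"
start on filesystem or runlevel [2345]
stop on runlevel [016]
respawn
setuid myUserId
env HOME=/home/myUserId/
chdir /home/myUserId/nodeServ/lServer
console log
exec /usr/bin/node server.js 1337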
Run this command in the directory containing the database file: sudo chown www-data. .
Another solution is to check whether the file exists or not, for example:
var fs = require("fs");
var exists = fs.existsSync(dbfilepath);
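If you stay with forever, a related permissions sketch, assuming the job runs as myUserId as in the original question and the database lives in a db/ subdirectory (that path is a guess); note that sqlite also needs write access to the containing directory so it can create its journal file there:
# hypothetical database directory; adjust to where your .sqlite file actually lives
sudo chown -R myUserId:myUserId /home/myUserId/nodeServ/lServer/db
sudo chmod -R u+rwX /home/myUserId/nodeServ/lServer/db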
I have a simple Python 3 script running on Ubuntu Server 20.04 that uses the clamd library to call clamd (the clamav-daemon process) to scan a file. The ping() and version() functions both work correctly. However, when I actually do a test write and scan, I get the following error:
{'/filedrop/test.doc': ('ERROR', "Can't open file or directory")}
This is the code I used for the test write and scan; it is all standard sample code from the clamd website (cd is the clamd client instance, e.g. clamd.ClamdUnixSocket()):
open('/filedrop/test.doc','wb').write(clamd.EICAR)
print(cd.scan('/filedrop/test.doc'))
After the code runs, I find the following string in the test file, which indicates that the Python 3 script was able to write to the file successfully, yet I keep getting the error that the file can't be opened when I use the clamd scan function.
This is the string that was written to the file:
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
I am also able to run clamscan from the command line on the folder and it successfully scans the files.
I'm running as the root user, while the service runs as clamav:clamav.
I did give read/write permission on the folder and the files to "other users", which is also borne out by the fact that the Python script could write the file.
I believe the problem here is that AppArmor is blocking clamd for that particular directory. I would look at the AppArmor profile for clamd; it should be called something like /etc/apparmor.d/clamav or similar. You can adjust that profile or, alternatively, disable it (according to Ubuntu):
sudo ln -s /etc/apparmor.d/profile.name /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/profile.name
More complete instructions available here:
https://help.ubuntu.com/community/AppArmor
You can also disable AppArmor for the purposes of testing (I don't like to advise anyone to remove security features permanently) with:
sudo systemctl stop apparmor
sudo systemctl disable apparmor
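If you would rather keep AppArmor enabled, a rough sketch of a local override (the profile name usr.sbin.clamd and the /filedrop path are assumptions; check what actually exists under /etc/apparmor.d/):
# /etc/apparmor.d/local/usr.sbin.clamd  -- assumed profile name
/filedrop/ r,
/filedrop/** rw,
Then reload the profile:
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.clamd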
I'm following a tutorial on PluralSight regarding vagrant and hubot slack setup.
The only difference is that I'm using hubot-slack.
If I start Hubot by invoking the hubot script from the terminal, everything works fine: the bot connects and responds to commands.
Unfortunately, when Hubot is started as a service by upstart, I get this logged to /var/log/upstart/myhubot.log: `Cannot load adapter slack - Error: Cannot find module 'hubot-slack'`
My /bin/hubot file looks like this (this works just fine when executed from the CLI):
#!/bin/sh
set -e
npm install
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:$PATH"
export HUBOT_SLACK_TOKEN={}
exec node_modules/.bin/hubot --name "hubot" --adapter slack "$@"
My .conf file that's run as a service looks like this (it can't find the module):
description "My hubot"
author "Me bla#bla.com"
start on runlevel [2345]
stop on runlevel [016]
setuid vagrant
env HOME="/home/vagrant"
chdir /vagrant/my-awesome-hubot
console log
script
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:/usr/bin/coffee:/usr/bin/node:$PATH"
export HUBOT_SLACK_TOKEN={}
echo "DEBUG: `set`" >> /tmp/myhubot.log
exec node_modules/.bin/hubot --name "hubot" --adapter slack
end script
respawn
Keep in mind that the slack token is excluded from these scripts.
Debug output reveals that chdir does the correct thing and the pwd is exactly the same as when I execute the script manually.
I've tried removing the entire Node.js project and regenerating it with Yeoman from scratch, and also tried installing hubot-slack both globally and locally, but to no avail.
In the case of the .conf file there is no npm install, but in provision.sh I am cd-ing (as the vagrant user) into the project directory and running npm install, and only then restarting the service. I also make sure to clean everything up before another round of testing with vagrant provision:
cp /vagrant/upstart/myhubot.conf /etc/init/myhubot.conf
sudo -u vagrant -i sh -c 'cd /vagrant/my-awesome-hubot; npm install'
service myhubot restart
Do you have any suggestions?
I've just spent the day working through the same issue as this unanswered question, so I thought I would update it with my solution.
The current Hubot-generated app is started from the CLI with the command HUBOT_SLACK_TOKEN=xoxb-YOUR-TOKEN-HERE ./bin/hubot --adapter slack while in the folder where Hubot was generated, and therefore uses the default bin/hubot script.
Your conf file needs to pick this up and should therefore run the following:
description "My hubot"
author "Me bla#bla.com"
start on runlevel [2345]
stop on runlevel [016]
script
chdir /vagrant/my-awesome-hubot
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:/usr/bin/coffee:/usr/bin/node:$PATH"
HUBOT_SLACK_TOKEN=xoxb-YOUR-TOKEN-HERE ./bin/hubot --adapter slack --name "hubot" >> /tmp/myhubot.log
end script
respawn
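To pick the job up after copying the .conf into place, a sketch of the usual steps (the file names follow the question; initctl reload-configuration is only needed after the .conf itself changes):
sudo cp /vagrant/upstart/myhubot.conf /etc/init/myhubot.conf
sudo initctl reload-configuration
sudo service myhubot restart
tail -f /var/log/upstart/myhubot.log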
I've installed an npm package/script in a jail on FreeNAS 9.10 (FreeBSD based).
It works perfectly if I run "npm start" in the directory where the scripts are installed.
However, I need this to auto-start when the jail starts, and I don't know how to do that. Do I need to create an rc script?
Basically, all I need to do is run "npm start" in the correct directory on startup. How do I do that?
thanks
Yes, you can place an rc script within the jail and enable it using the jail's /etc/rc.conf file.
But, for a quick and dirty solution, you could create a /etc/rc.local script (also within the jail's environment) and put your startup commands in there.
See the manual page here.
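For the rc.local route, a minimal sketch (the application path, user name, and npm location are placeholders, and daemon(8) is used so the process is detached from the boot sequence):
#!/bin/sh
# /etc/rc.local inside the jail: quick-and-dirty startup
cd /usr/local/myapp && /usr/sbin/daemon -u myuser /usr/local/bin/npm start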
I don't know about npm start, but for node.js I made an rc script like this:
#!/bin/sh
# $FreeBSD: 340872 2014-01-24 00:14:07Z mat $
#
# PROVIDE: SERVICENAME
# REQUIRE: NETWORKING
# KEYWORD: shutdown
#
# Add the following line to /etc/rc.conf to enable SERVICENAME:
#
# SERVICENAME_enable="YES"
#
. /etc/rc.subr
name="SERVICENAME"
rcvar=SERVICENAME_enable
pidfile=${SERVICENAME_pidfile:-"/var/run/SERVICENAME.pid"}
command="/usr/sbin/daemon"
#command_args="-r -u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR" # cjayho: restart if crashed
command_args="-u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR"
load_rc_config $name
: ${SERVICENAME_enable:="NO"}
run_rc_command "$1"
Name this file something like SERVICENAME and put it in /usr/local/etc/rc.d.
To enable automatic startup, run this command as root:
sysrc SERVICENAME_enable="YES"
Do not forget to replace SERVICENAME, USERNAME and PROGDIR with your values, and add
process.chdir('/home/USERNAME/PROGDIR')
to your entry js file.
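Once the script is in place, a quick sketch of enabling and starting it (SERVICENAME is the placeholder from above):
chmod +x /usr/local/etc/rc.d/SERVICENAME
sysrc SERVICENAME_enable="YES"
service SERVICENAME start
service SERVICENAME status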
After updating StrongLoop to v2.10, slc stopped writing logs.
Also, I couldn't make the app start in production mode.
/etc/init/app.conf
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
env NODE_ENV=production
script
exec slc run /home/ubuntu/app/ \
-l /home/ubuntu/app/app.log \
-p /var/run/app.pid
end script
Can anybody check my upstart config or provide another working copy?
Were you writing the pid to a file so that you can use it to send SIGUSR2 to the process to trigger log re-opening from logrotate?
Assuming you are using Upstart 1.4+ (Ubuntu 12.04 or newer), you would be better off letting slc run log to its stdout and letting Upstart take care of writing it to a file so that log rotation is done for you:
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
# assuming this is /etc/init/app.conf,
# stdout+stderr logged to: /var/log/upstart/app.log
console log
env NODE_ENV=production
exec /usr/local/bin/slc run --cluster=CPUs /home/ubuntu/app
The log rotation for "free" is nice, but the biggest benefit to this approach is Upstart can log errors that slc run reports even if they are a crash while trying to set up its internal logging, which makes debugging a lot easier.
Aside from what it means to your actual application, the only effect NODE_ENV has on slc run is to set the default number of cluster workers to the number of detected CPU cores, which literally translates to --cluster=CPUs.
Another problem I find is the node/npm path prefix not being in the $PATH as used by Upstart, so I normally put the full paths for executables in my Upstart jobs.
Service Installer
You could also try using strong-service-install, which is a module used by slc pm-install to install strong-pm as an OS service:
$ npm install -g strong-service-install
$ sudo sl-svc-install --name app --user ubuntu --cwd /home/ubuntu/app -- slc run --cluster=CPUs .
Note the spaces around the -- before slc run
Per various tutorials I've done the following:
created a file called ftpserver.py in /home/root/
created a file in /etc/init.d/ called ftpserver that looks like this:
#!/bin/sh
python /home/root/ftpserver.py
Upon creation, I ran the following (to make it executable, apparently):
root@beaglebone1:/etc/init.d# chmod +x ftpserver
But it doesn't appear to be running on startup. However, if I run the following command:
root@beaglebone1:/etc/init.d# /etc/init.d/ftpserver
Then the script runs, executing ftpserver.py.
Interestingly, if I try to run ftpserver from within its directory in the following manner (not sure if this is relevant):
root@beaglebone1:/etc/init.d# ftpserver
It returns:
-sh: ftpserver: command not found
So I'm not certain why my script isn't running on startup.
For reference, ftpserver.py looks like this:
from pyftpdlib import ftpserver
authorizer = ftpserver.DummyAuthorizer()
authorizer.add_user("root", "12345", "/home/root", perm="elradfmw")
handler = ftpserver.FTPHandler
handler.authorizer = authorizer
address = ("", 21)
ftpd = ftpserver.FTPServer(address, handler)
ftpd.serve_forever()
Try running it with ./ftpserver
Also, check whether your script is configured to run in the current runlevel; on some systems that's in /etc/rc.conf, in a DAEMONS array or something like that.
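If the board is running a Debian-style image with SysV init (an assumption; some BeagleBone images use systemd instead), a rough sketch of registering the script would be to add LSB headers to /etc/init.d/ftpserver and enable it with update-rc.d:
### BEGIN INIT INFO
# Provides:          ftpserver
# Required-Start:    $network
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start ftpserver.py at boot
### END INIT INFO
Then:
update-rc.d ftpserver defaults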