Why is my startup script not running - Linux

Per various tutorials I've done the following:
created a file called ftpserver.py in /home/root/
created a file in /etc/init.d/ called ftpserver that looks like this:
#!/bin/sh
python /home/root/ftpserver.py
Upon creation, I ran the following (to make it executable, apparently):
root@beaglebone1:/etc/init.d# chmod +x ftpserver
But it doesn't appear to be running on startup. However, if I run the following command:
root@beaglebone1:/etc/init.d# /etc/init.d/ftpserver
then the script runs, executing ftpserver.py.
Interestingly, if I try to run ftpserver from within its directory in the following manner (not sure if this is relevant):
root@beaglebone1:/etc/init.d# ftpserver
It returns:
-sh: ftpserver: command not found
So I'm not certain why my script isn't running on startup.
For reference, ftpserver.py looks like this:
from pyftpdlib import ftpserver
authorizer = ftpserver.DummyAuthorizer()
authorizer.add_user("root", "12345", "/home/root", perm="elradfmw")
handler = ftpserver.FTPHandler
handler.authorizer = authorizer
address = ("", 21)
ftpd = ftpserver.FTPServer(address, handler)
ftpd.serve_forever()

Try running it with ./ftpserver - the shell does not search the current directory by default, which is why plain ftpserver returns "command not found".
Also, check whether your script is configured to run in the current runlevel - depending on the distribution that might be /etc/rc.conf with a DAEMONS entry, or runlevel symlinks, or something like that.
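On Debian-derived images (which most BeagleBone distributions are), a script in /etc/init.d/ is not run at boot until it is linked into the runlevels, and update-rc.d wants an LSB header to do that. A hedged sketch, reusing the names from the question; note the trailing &, because ftpd.serve_forever() never returns and would otherwise stall the rest of the boot sequence:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          ftpserver
# Required-Start:    $network $remote_fs
# Required-Stop:     $network $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start the pyftpdlib FTP server
### END INIT INFO
python /home/root/ftpserver.py &
Then register it so it runs at boot:
root@beaglebone1:/etc/init.d# update-rc.d ftpserver defaults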

Databricks init scripts not working sometimes

OK, it is very strange. I have some init scripts that I would like to run when a cluster starts.
The cluster is configured with an init script, which lives in a file in DBFS:
dbfs:/databricks/init-scripts/custom-cert.sh
Now, when I create the init script like this, it works (no SSL errors for my endpoints), and the event log for the cluster shows a duration of 1 second for the init script:
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
However, if I instead put the init script in a bash file and upload it to DBFS through a pipeline, the init script does not do anything. It executes, as per the event log, but the execution duration is 0 sec.
I have the shell script in a file named
custom-cert.sh
with the same contents as above, i.e.
#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt"
but when I check /usr/local/share/ca-certificates/, it does not contain the copied orgcerts.crt, even though the cluster init script has run.
Also, I have compared the contents of the init script in both cases and, at least to the naked eye, I can't figure out any difference,
i.e.
%sh
cat /dbfs/databricks/init-scripts/custom-cert.sh
shows the same contents in both scenarios. What is the problem in the second case?
EDIT: I read a bit more about init scripts and found that the logs of init scripts are written here:
%sh
ls /databricks/init_scripts/
Looking at the err file in that location, it seems there is an error:
sudo: update-ca-certificates
: command not found
Why is update-ca-certificates found in the first case, but not when I put the same script in a sh file and upload it to DBFS (instead of executing the dbutils.fs.put within a notebook)?
EDIT 2: In response to the first answer: after running the command
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
the output is the file custom-cert.sh; I then restart the cluster with the init script location set to dbfs:/databricks/init-scripts/custom-cert.sh, and it works. So it is essentially the same content that the init script is reading (the generated sh file). Why can't it read the file if I do not use dbutils.fs.put, but instead just put the contents in a bash file and upload it during the CI/CD process?
As we know, an init script is a shell script that runs during startup of each cluster node, before the Apache Spark driver or worker JVM starts. In case 2, when you run a bash command via the %sh magic command, you are executing it on the local driver node only, so the worker nodes are not able to access the result. In case 1, by using dbutils.fs.put you are writing the file from the DBFS root, so the worker nodes can access the path along with the driver node.
Ref: https://docs.databricks.com/data/databricks-file-system.html#summary-table-and-diagram
It seems that the observation I made in the comments section of my question is the way to go.
I now create the init script using a Databricks job that I run during the CI/CD pipeline from Azure DevOps.
The notebook has the commands:
dbutils.fs.rm("/databricks/init-scripts/custom-cert.sh")
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/internal-certificates/certs.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
I then create a Databricks job pointing to this notebook; the cluster is a job cluster, which is just temporary. Of course, in my case, even the job creation is automated using a PowerShell script.
I then call this Databricks job in the release pipeline, again using a PowerShell script.
This creates the file
/databricks/init-scripts/custom-cert.sh
I then use this file in any other cluster that accesses my org's endpoints (without certificate errors).
I do not know (or still do not understand) why the same script file cannot simply be part of a repo and be uploaded during the release process, instead of a Databricks job calling a notebook; I would love to know the reason. The other answer on this question does not hold true, as you can see: the init script is created by a job cluster and then used from another cluster as part of its init script.
It simply boils down to how the init script gets created.
But it gets my job done, and maybe it helps someone else get theirs done too.
I have raised a support case, though, to understand the reason.
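A hedged aside, not confirmed in this thread: the error output above, where ": command not found" appears on a line of its own right after "sudo: update-ca-certificates", is the classic symptom of Windows-style CRLF line endings. A script kept in a repo and uploaded by an Azure DevOps pipeline can easily pick those up, whereas dbutils.fs.put writes plain LF, which would explain why only the uploaded copy fails. A quick check from a notebook:
%sh
# a trailing ^M on each line means CRLF endings
cat -v /dbfs/databricks/init-scripts/custom-cert.sh | head
If you see them, strip the carriage returns at the source (in the repo or in the pipeline), for example with tr -d '\r' < custom-cert.sh > custom-cert-unix.sh.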

Strange Behavior with clamd scan function

I have a simple Python 3 script running on Ubuntu Server 20.04 that calls the clamd library (talking to the clamav-daemon process) to scan a file. The ping() and version() functions both work correctly. However, when I actually do a test write and scan, I get the following error:
{'/filedrop/test.doc': ('ERROR', "Can't open file or directory")}
This is the code I used for the test write and scan (with the clamd import and connection shown for completeness); it is all standard sample code from the clamd website:
import clamd
cd = clamd.ClamdUnixSocket()  # connect to the clamav-daemon socket
open('/filedrop/test.doc', 'wb').write(clamd.EICAR)
print(cd.scan('/filedrop/test.doc'))
After the code runs, I get the following string in the test file, which indicates that the Python script was able to write the file successfully; yet I keep getting the error that the file can't be opened when I use the clamd scan function.
This is the string that was written to the file:
X5O!P%#AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
I am also able to run clamscan from the command line on the folder, and it successfully scans the files.
I'm running as the root user, while the service runs as clamav:clamav.
I did give read/write permission on the folder and the files to "other users", which is also indicated by the fact that the file could be written by the Python script.
I believe the cause of the problem here is that AppArmor is blocking clamd for that particular directory. I would look at the AppArmor profile for clamd; it should be called something like /etc/apparmor.d/clamav or similar. You can adjust that profile, or alternatively disable it (according to Ubuntu):
sudo ln -s /etc/apparmor.d/profile.name /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/profile.name
More complete instructions available here:
https://help.ubuntu.com/community/AppArmor
You can also disable AppArmor for the purposes of testing (I don't like to advise anyone to remove security features permanently) with:
sudo systemctl stop apparmor
sudo systemctl disable apparmor
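If disabling AppArmor makes the scan work, the narrower fix is to extend the profile rather than remove it. A hedged sketch, assuming the profile is /etc/apparmor.d/usr.sbin.clamd and that /filedrop is the directory being scanned (both names may differ on your system):
# grant clamd read access via the local override, then reload the profile
echo '/filedrop/ r,'   | sudo tee -a /etc/apparmor.d/local/usr.sbin.clamd
echo '/filedrop/** r,' | sudo tee -a /etc/apparmor.d/local/usr.sbin.clamd
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.clamd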

How to run two shell scripts at startup?

I am working with Ubuntu 16.04 and I have two shell scripts:
run_roscore.sh : This one fires up a roscore in one terminal.
run_detection_node.sh : This one starts an object detection node in another terminal and should start up once run_roscore.sh has initialized the roscore.
I need both the scripts to execute as soon as the system boots up.
I made both scripts executable and then added the following command to cron:
@reboot /path/to/run_roscore.sh; /path/to/run_detection_node.sh, but it is not running.
I have also tried adding both scripts to Startup Applications, using the command sh /path/to/run_roscore.sh for roscore and sh /path/to/run_detection_node.sh for the detection node. It still does not work.
How do I get these scripts to run?
EDIT: I used the following command to see the system log for the CRON process: grep CRON /var/log/syslog and got the following output:
CRON[570]: (CRON) info (No MTA installed, discarding output).
So I installed an MTA, and then the syslog shows:
CRON[597]: (nvidia) CMD (/path/to/run_roscore.sh; /path/to/run_detection_node.sh)
I am still not able to see the output (which should be a camera stream with detections, as I see when I run the scripts directly in a terminal). How should I proceed?
Since I got this working eventually, I am going to answer my own question here.
I did the following steps to get the script running from startup:
Changed the type of the script from shell to bash (extension .bash).
Changed the shebang statement to be #!/bin/bash.
In Startup Applications, give the command bash path/to/script to run the script.
Basically, when I changed the shell type from sh to bash, the script started running as soon as the system boots up.
Note, in case this helps someone: my intention in having run_roscore.bash as a separate script was to run roscore as a background process. One can run it directly from a single script (which also runs the detection node) by putting roscore& before the rosnode starts. This fires up the master as a background process and leaves the same terminal open for the following commands, as shown in the sketch below.
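A minimal sketch of that single-script approach; the package and node names are placeholders, and the sleep is a crude stand-in for properly waiting until the master is up:
#!/bin/bash
# start the ROS master in the background
roscore &
# give the master time to come up (assumption: 5 s is enough on this machine)
sleep 5
# then start the detection node from the same shell
rosrun my_detection_pkg detection_node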
If you can install immortal, you can use the require option to start your services in sequence. For example, this could be the run config for /etc/immortal/script1.yml:
cmd: /path/to/script1
log:
  file: /var/log/script1.log
wait: 1
require:
  - script2
And for /etc/immortal/script2.yml:
cmd: /path/to/script2
log:
  file: /var/log/script2.log
What this will do is try to start both scripts at boot time: script1 will wait 1 second before starting and will also wait for script2 to be up and running. See more about the wait and require options here: https://immortal.run/post/immortal/
Depending on your operating system you will need to configure/set up immortaldir; here is how to do it for Linux: https://immortal.run/post/how-to-install/
Going deeper into the topic of supervisors, there are more alternatives; here you can find some: https://en.wikipedia.org/wiki/Process_supervision
If you want to make sure that roscore (whatever it is) gets started when your Ubuntu system starts up, then you should start it as a service (not via cron).
See this question/answer.
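On Ubuntu 16.04 that means a systemd unit. A minimal sketch, reusing the script path from the question (the unit name is a placeholder):
# /etc/systemd/system/roscore.service
[Unit]
Description=Start roscore at boot
After=network.target

[Service]
Type=simple
ExecStart=/path/to/run_roscore.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable it with:
sudo systemctl daemon-reload
sudo systemctl enable --now roscore.service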

Making an npm script auto start in a FreeBSD Jail

I've installed an npm package / script in a jail on FreeNAS 9.10 (FreeBSD-based).
It works perfectly if I run "npm start" in the directory where the scripts are installed.
However, I need this to be auto-starting when the jail starts, and I don't know how to do that. Do I need to create an rc script?
Basically, all I need to do is run "npm start" in the correct directory on startup. How do I do that?
Thanks.
Yes, you can place an rc script within the jail and enable it using the jail's /etc/rc.conf file.
But for a quick and dirty solution, you could create an /etc/rc.local script (also within the jail's environment) and put your startup commands in there.
See the manual page here.
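A hedged example of that quick and dirty route; the application directory and the npm path are assumptions:
#!/bin/sh
# /etc/rc.local inside the jail: run "npm start" in the app directory at boot
cd /usr/local/www/myapp && /usr/local/bin/npm start &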
I don't know about npm start, but for Node.js I made an rc script like this:
#!/bin/sh
# $FreeBSD: 340872 2014-01-24 00:14:07Z mat $
#
# PROVIDE: SERVICENAME
# REQUIRE: NETWORKING
# KEYWORD: shutdown
#
# Add the following line to /etc/rc.conf to enable SERVICENAME:
#
# SERVICENAME_enable="YES"
#
. /etc/rc.subr
name="SERVICENAME"
rcvar=SERVICENAME_enable
pidfile=${SERVICENAME_pidfile:-"/var/run/SERVICENAME.pid"}
command="/usr/sbin/daemon"
#command_args="-r -u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR" # cjayho: restart if crashed
command_args="-u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR"
load_rc_config $name
: ${SERVICENAME_enable:="NO"}
run_rc_command "$1"
Name this file something like SERVICENAME and put it in /usr/local/etc/rc.d.
To enable automatic startup, run this command as root:
sysrc SERVICENAME_enable="YES"
Do not forget to replace SERVICENAME, USERNAME and PROGDIR with your values, and add
process.chdir('/home/USERNAME/PROGDIR')
to your entry js file.
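Before rebooting, you can test the script by hand from inside the jail:
service SERVICENAME start
service SERVICENAME status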

sqlite3 module cannot open database when started from ubuntu upstart

Working on Ubuntu Server 14.04.
I have an upstart .conf file in /etc/init for starting my node server. I am using forever. Here is what my script looks like:
start on filesystem or runlevel [2345]
expect fork
setuid myUserId
env HOME=/home/myUserId/
env NODE_BIN_DIR=/usr/bin
env NODE_PATH=/usr/lib/nodejs:/usr/lib/node_modules:/usr/share/javascript
script
  PATH=$NODE_BIN_DIR:$NODE_PATH:$PATH
  echo $PATH
  exec forever start -o /home/myUserId/nodeServ/lServer/logs/out.log /home/myUserId/nodeServ/lServer/server.js 1337
end script
But I keep getting this error:
Error: SQLITE_CANTOPEN: unable to open database file
error: Forever detected script exited with code: 8
If I run the command from the command line exactly as it is in the conf file, it works just fine, no problems. So I think it is a permissions issue. I have set read/write/execute permissions on the database directory and the database itself, and still I am unable to read from the file.
I have tried so many different things and I cannot figure out why this is happening.
UPDATE: This problem appears not to be isolated to upstart. I tried starting forever from a shell script as well and I get the same errors.
I resolved my issue via a workaround: not using forever, and starting node directly from the upstart file (allowing respawn). No issues. This appears to be either a sqlite3 issue or a forever issue.
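A hedged sketch of that workaround, reusing the paths from the question; note the chdir stanza - a relative SQLite path is resolved against the process's working directory, which is one plausible reason the same command behaved differently under upstart than from an interactive shell:
# /etc/init/lserver.conf (the job name is a placeholder)
start on filesystem or runlevel [2345]
respawn
setuid myUserId
env HOME=/home/myUserId/
chdir /home/myUserId/nodeServ/lServer
exec /usr/bin/node server.js 1337 >> logs/out.log 2>&1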
Run this command in the directory containing the db file: sudo chown www-data. .
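Related to that: SQLite needs write access not just to the database file but also to the directory containing it, because it creates a journal file next to the database. A quick hedged check, with the directory path guessed from the question:
# the directory must be writable by the uid the server runs as;
# sqlite creates <name>-journal alongside the database file
ls -ld /home/myUserId/nodeServ/lServer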
Another thing to check is whether the file exists at the expected path, e.g.:
var fs = require("fs");
var exists = fs.existsSync(dbfilepath);
