Issue running a .sh file on server reboot - Ubuntu 12.04 LTS - node.js

I am having an issue getting a .sh file to run at boot. I can run it manually with sudo ./starter.sh from inside my app directory, but relying on it to run at reboot isn't working.
I am using an Ubuntu 12.04 VM on Windows 7. My files live on Windows and are shared with the VM, so I access them via /mnt/hgfs/nodejs-test.
I am running a node.js server with Nginx on the local VM.
Right now I can go to http://node.dev and it will properly load my server.js located in nodejs-test (/mnt/hgfs/nodejs-test) and output "hello world" to the screen.
So running the site isn't a problem, but getting forever (installed globally) to kick in on reboot isn't working. I suspect the .sh file simply isn't being executed.
Here is my starter.sh
#!/bin/sh
# Start the app with forever, but only if no node process is already running for this user.
if [ $(ps aux | grep $USER | grep node | grep -v grep | wc -l | tr -s "\n") -eq 0 ]
then
    export PATH=/usr/local/bin:$PATH
    forever start --sourceDir /mnt/hgfs/nodejs-test/server.js >> /mnt/hgfs/nodejs-test/serverlog.txt 2>&1
fi
Now I have tried sudo crontab -e (adding the entry to root's crontab) as well as plain crontab -e with the same entry. Upon reboot... nothing.
@reboot /mnt/hgfs/nodejs-test/starter.sh
I tried editing that cron job to this:
@reboot /var/www/nodejs-test/starter.sh
because I created a symlink at /var/www/nodejs-test pointing to /mnt/hgfs/nodejs-test.
Where can I check to see if an error fires on reboot, or is it possible my @reboot cron isn't running at all? I know running starter.sh manually DOES work, though.
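For reference: on Ubuntu 12.04 cron logs to syslog by default, so one way to see whether the @reboot job fired at all (assuming the stock rsyslog configuration) is to grep the syslog after a reboot:
# show cron activity, including @reboot jobs, since the last boot
grep CRON /var/log/syslog
The EDIT #2 output below is this kind of log.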
EDIT: The /mnt/hgfs/nodejs-test directory is owned by root (which might be a Windows thing, given the files live on my Windows 7 OS). My Ubuntu user is "bkohlmeier", which I created when installing the VM.
EDIT #2
Nov 10 13:05:01 ubuntu cron[799]: (CRON) INFO (pidfile fd = 3)
Nov 10 13:05:01 ubuntu cron[875]: (CRON) STARTUP (fork ok)
Nov 10 13:05:01 ubuntu cron[875]: (CRON) INFO (Running @reboot jobs)
Nov 10 13:05:02 ubuntu CRON[887]: (bkohlmeier) CMD (/mnt/hgfs/nodejs-test/starter.sh)
Nov 10 13:05:02 ubuntu CRON[888]: (root) CMD (/var/www/nodejs-test/starter.sh >/dev/null 2>&1)
Nov 10 13:05:02 ubuntu CRON[877]: (CRON) info (No MTA installed, discarding output)
Nov 10 13:05:02 ubuntu CRON[878]: (CRON) info (No MTA installed, discarding output)

OK, I found a solution. Whether it is the "right" solution I don't know. Because my setup shares files between host and guest (Windows 7 and the Ubuntu VM), /mnt/hgfs/ has to get mounted at boot, and the @reboot job runs quickly enough that the mount isn't available yet.
So I added this via crontab -e
@reboot /bin/sleep 15; /var/www/nodejs-test/starter.sh
and it worked like a charm.
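The fixed sleep works, but it is a guess at how long the hgfs mount takes. A hedged alternative sketch (wait-for-mount.sh is a made-up name; same paths as above) is to poll for the mount point before starting:
#!/bin/sh
# wait-for-mount.sh: wait up to ~30 seconds for the hgfs share to appear, then start the app
for i in $(seq 1 30); do
    [ -d /mnt/hgfs/nodejs-test ] && break
    sleep 1
done
/var/www/nodejs-test/starter.sh
The crontab entry would then be @reboot /var/www/nodejs-test/wait-for-mount.sh.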

Related

WSL: running Linux commands with "wsl --exec <cmd>" or "wsl -- <cmd>"

wsl -h shows the following:
--exec, -e <CommandLine> Execute the specified command without using the default Linux shell.
-- Pass the remaining command line as is.
What does "without using the default Linux shell" mean (i.e. what else is it going to use, if not the default shell!?)?
Additionally, by way of an example, I now have three possible ways to run Linux ls from my PowerShell prompt (i.e. this will not be Get-ChildItem aliased to ls, but instead a Linux command via WSL):
PS C:\> wsl -e ls # Alternatively, wsl --exec ls
PS C:\> wsl -- ls
PS C:\> wsl ls
But the output of all three appears to be the same. How would you explain the differences between these three ways of running a WSL Linux command from a PowerShell prompt?
I think it means wsl runs the command directly, instead of spawning a shell process to run the command.
For example, if I run:
wsl -e sleep 10
From another terminal, I see:
root 1482 1 0 11:32 tty3 00:00:00 /init
ubuntu 1483 1482 0 11:32 tty3 00:00:00 sleep 10
We can see /init is the parent of sleep 10, without a shell in between.
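The shell-versus-no-shell difference also becomes visible when the command relies on shell features such as globbing. A small illustrative sketch (assuming the distro has .conf files under /etc; exact output may vary):
PS C:\> wsl -- ls /etc/*.conf   # goes through the default shell, so the glob expands
PS C:\> wsl -e ls /etc/*.conf   # no shell, so ls receives the literal string "/etc/*.conf"
ls: cannot access '/etc/*.conf': No such file or directory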
A cool trick is to use this to set the X11 $DISPLAY variable, letting you use Windows Terminal to get remote shells with X forwarding via WSLg.
# in microsoft terminal or powershell use this command line
wsl.exe -- ssh -a -X -Y $hostname
then on the remote system
# DISPLAY will show something like localhost:10.0 on the remote system
echo $DISPLAY
# use a program like xeyes to test
xeyes

Deleting a cron job that shouldn't be there

I installed Froxlor a while back and uninstalled it again because it didn't fit my needs. The server I'm running is a Debian web server. After inspecting the system log file using
grep CRON /var/log/syslog
I noticed that there are still some Froxlor jobs going on.
Most noticeable are log entries like:
Jun 25 10:55:01 v220200220072109810 CRON[5633]: (root) CMD (/usr/bin/nice -n 5 /usr/bin/php -q /var/www/froxlor/scripts/froxlor_master_cronjob.php --tasks 1> /dev/null)
Jun 25 11:00:01 v220200220072109810 CRON[5727]: (root) CMD (/usr/bin/nice -n 5 /usr/bin/php -q /var/www/froxlor/scripts/froxlor_master_cronjob.php --tasks 1> /dev/null)
However, when I inspect the crontab for the root user, there are no active entries. Any ideas on how to fix this issue?
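Per-user crontabs (crontab -l) are not the only place cron reads from; packages commonly drop entries into system-wide locations instead. A hedged sketch of where to look (the exact file Froxlor used is an assumption):
# system-wide crontab, package drop-in directory, and the cron.hourly/daily/... dirs
grep -ri froxlor /etc/crontab /etc/cron.d/ /etc/cron.hourly/ /etc/cron.daily/ 2>/dev/null
# other users' crontabs on Debian live here
sudo grep -rli froxlor /var/spool/cron/crontabs/ 2>/dev/null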

bash redirect to /dev/stdout: Not a directory

I recently upgraded from CentOS 5.8 (with GNU bash 3.2.25) to CentOS 6.5 (with GNU bash 4.1.2). A command that used to work with CentOS 5.8 no longer works with CentOS 6.5. It is a silly example with an easy workaround, but I am trying to understand what is going on underneath the bash hood that is causing the different behavior. Maybe it is a new bug in bash 4.1.2 or an old bug that was fixed and the new behavior is expected?
CentOS 5.8:
(echo "hi" > /dev/stdout) > test.txt
echo $?
0
cat test.txt
hi
CentOS 6.5:
(echo "hi" > /dev/stdout) > test.txt
-bash: /dev/stdout: Not a directory
echo $?
1
Update: It doesn't look like this problem is related to the CentOS version. I have another CentOS 6.5 machine where the command works. I have eliminated environment variables as the culprit. Any ideas?
On all the machines these commands give the same output:
ls -ld /dev/stdout
lrwxrwxrwx 1 root root 15 Apr 30 13:30 /dev/stdout -> /proc/self/fd/1
ls -lL /dev/stdout
crw--w---- 1 user1 tty 136, 0 Oct 28 23:21 /dev/stdout
Another update: It seems the sub-shell is inheriting the redirected stdout of the parent shell. That is not too surprising I guess, but then why does it work on one machine and fail on the other when both are running the same bash version?
On the working machine:
((ls -la /dev/stdout; ls -la /proc/self/fd/1) >/dev/stdout) > test.txt
cat test.txt
lrwxrwxrwx 1 root root 15 Aug 13 08:14 /dev/stdout -> /proc/self/fd/1
l-wx------ 1 user1 aladdin 64 Oct 29 06:54 /proc/self/fd/1 -> /home/user1/test.txt
I think Yu Huang is right: redirecting to /tmp works on both machines. Both machines use an Isilon NAS for the /home mount, but one probably has a slightly different filesystem version or configuration that caused the error. In conclusion, redirecting to /dev/stdout should be avoided unless you know the parent process will not redirect it.
UPDATE: This problem arose after upgrade to NFS v4 from v3. After downgrading back to v3 this behavior went away.
Good morning, user1999165 :)
I suspect it's related to the underlying filesystem. On the same machine, try:
(echo "hi" > /dev/stdout) > /tmp/test.txt
/tmp/ should be a Linux-native (ext3 or similar) filesystem.
On many Linux systems, /dev/stdout is an alias (a link or similar) for file descriptor 1 of the current process. Seen from C, the global stdout is connected to file descriptor 1.
That means echo foo > /dev/stdout is the same as echo foo 1>&1 or a redirect of a file descriptor to itself. I wouldn't expect this to work since the semantics are "close descriptor to redirect and then clone the new target". So to make it work, there must be special code which notices that the two file descriptors are actually the same and which skips the "close" step.
My guess is that on the system where it fails, BASH isn't able to figure out /dev/stdout == fd1 and actually closes it. The error message is weird, though. OTOH, I don't know any other common error which would fit better.
Note: I tried to replicate your problem on Kubuntu 14.04 with BASH 4.3.11 and here, the redirect works (i.e. I don't get an error). Maybe it's a bug in BASH 4.1 which has since been fixed.
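One way to see what bash actually does with the descriptor (a diagnostic sketch, not from the original thread) is to trace the relevant system calls and look at how /dev/stdout is opened on the failing machine:
# trace which files bash opens and how it duplicates descriptors
strace -f -e trace=open,dup2,close bash -c '(echo "hi" > /dev/stdout) > test.txt'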
I was seeing issues writing piped stdin input to AWS EFS (NFSv4) that paralleled this issue. (I'm using CentOS 6.8, so unfortunately I cannot upgrade bash to 4.2.)
I asked AWS support about this; here's their response:
This problem is not related to EFS itself, the problem here is with bash. This issue was fixed in bash 4.2 or later in RHEL.
To avoid this problem, please try to create a file handle before running the echo command within a subshell; after that, the same file handle can be used as a redirect, like the example below:
exec 5> test.txt; (echo "hi" >&5); cat test.txt
hi
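The workaround sidesteps the problem because test.txt is opened once, in the parent shell, on a fresh descriptor (5); the subshell then writes through that descriptor and never has to re-open /dev/stdout. Closing it afterwards with exec 5>&- would be tidy.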

Two instances of node started on Linux

I have a node.js server app which is being started twice for some reason. I have a cronjob that runs every minute, checking for a node main.js process and if not found, starting it. The cron looks like this:
* * * * * ~/startmain.sh >> startmain.log 2>&1
And the startmain.sh file looks like this:
if ps -ef | grep -v grep | grep "node main.js" > /dev/null
then
    echo "`date` Server is running."
else
    echo "`date` Server is not running! Starting..."
    sudo node main.js > main.log
fi
The log file storing the output of startmain.sh shows this:
Fri Aug 8 19:22:00 UTC 2014 Server is running.
Fri Aug 8 19:23:00 UTC 2014 Server is running.
Fri Aug 8 19:24:00 UTC 2014 Server is not running! Starting...
Fri Aug 8 19:25:00 UTC 2014 Server is running.
Fri Aug 8 19:26:00 UTC 2014 Server is running.
Fri Aug 8 19:27:00 UTC 2014 Server is running.
That is what I expect, but when I look at the processes, it seems that two are running: one under sudo and one without. Check out the top two processes:
$ ps -ef | grep node
root 99240 99232 0 19:24:01 ? 0:01 node main.js
root 99232 5664 0 19:24:01 ? 0:00 sudo node main.js
admin 2777 87580 0 19:37:41 pts/1 0:00 grep node
Indeed, when I look at the application logs, I see startup entries happening in duplicate. To kill these processes, I have to use sudo, even for the process that does not start with sudo. When I kill one of these, the other one dies too.
Any idea why I am kicking off two processes?
First, you are starting your node main.js application with sudo in the script startmain.sh. According to the sudo man page:
When sudo runs a command, it calls fork(2), sets up the execution environment as described above, and calls the execve system call in the child process. The main sudo process waits until the command has completed, then passes the command's exit status to the security policy's close method and exits.
So, in your case the process with name sudo node main.js is the sudo command itself and the process node main.js is the node.js app. You can easily verify this - run ps auxfw and you will see that the sudo node main.js process is the parent process for node main.js.
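For example, with the PIDs from the listing above (99232 for sudo, 99240 for node), something like this would show the relationship directly; the chosen columns are just a suggestion:
# print pid, parent pid, user and command for the two processes
ps -o pid,ppid,user,cmd -p 99232,99240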
Another way to verify this is to run lsof -p [process id] and see that the txt part for the process sudo node main.js states /usr/bin/sudo while the txt part of the process node main.js will display the path to your node binary.
The bottom line is that you should not worry that your node.js app starts twice.

cPanel does not run my cron jobs

I have cron jobs in cPanel that are scheduled every night. Yesterday, I noticed that these cron jobs haven't run since 2 days ago. I checked the cron log in /var/log/cron, and it shows me errors when trying to access the file.
Errors:
Nov 6 11:25:01 web2 crond[17439]: (laptoplc) ERROR (failed to change user)
Nov 6 11:25:01 web2 crond[17447]: (projecto) ERROR (failed to change user)
Nov 6 11:25:01 web2 crond[17446]: (CRON) ERROR (setreuid failed): Resource temporarily unavailable
Nov 6 11:25:01 web2 crond[17446]: (laptoppa) ERROR (failed to change user)
What could be the problem?
Several things could cause this. Here are some ways to debug your cron jobs:
Run it manually from shell:
php yourcron.php
Add logging from your cron file, maybe by adding error_log('check if running'); to see if it is indeed running.
As suggested above, it could also be a permission issue. Add execute permission to your cron script:
chmod 755 yourcron.php
Check whether any zombie processes exist for these users using the command below.
ps -eLF | grep -i username
Try killing those processes and check whether the cron jobs run after that.
sudo ps -eLF | grep username | awk '{print $2}' | xargs sudo kill -9
Don't kill any important running processes!
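The setreuid failed: Resource temporarily unavailable error usually means the per-user process limit (nproc) has been reached, which is why clearing stuck processes helps. A hedged way to check how close a user is to that limit:
# rough count of processes/threads currently owned by the user
ps -eLF | grep -c username
# the per-user limit (often set in /etc/security/limits.conf or limits.d/)
su - username -c 'ulimit -u'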
I had a similar problem today. The crontab in /var/spool/cron/userXXX referenced a script under /home/userYYY (another user), and that is why this error occurred. I removed the line that referenced userYYY and this was resolved.
