RedHat Enterprise 7.2 init.d runlevel 0 script cannot find file in /home directory - linux

I have an /etc/init.d script that calls shell scripts in a /home/myuser directory: one shell script for startup, another for shutdown.
The start script gets called just fine (from rc3.d/S99cslink and rc5.d/S99cslink), but when I try to call the stop script (from rc0.d/K01cslink) I get the message that /home/myuser/bin/stop_service.sh cannot be found.
/etc/rc.d/init.d/cslink: line 42: /home/myuser/bin/stop_service.sh: No such file or directory
I verified that at the point in time when I try to run /home/myuser/bin/stop_service.sh, /home/myuser is unavailable; an ls -l /home/myuser >/tmp/mylog.log 2>&1 from inside the init.d script shows the error:
ls: cannot access /home/myuser: No such file or directory
Both the start) script and the stop) script in init.d are run with runuser -l myuser, so I don't think it's a permissions problem.
Why would /home/myuser be unavailable, and can I run my script at a different point in time when /home/myuser is still available?
All the answers I see through searching are saying that I should check for Windows-style carriage returns in my stop_service.sh script, but I have checked and that's not the issue here.

Related

wget in script not working when called from cron

I've read a load of similar cases and can't for the life of me figure this one out...
I'm running a wget command inside a .sh script which is called from cron on reboot as follows:
@reboot /home/user/reboot_script.sh
The .sh script starts with
#!/bin/bash
And I have done chmod +x reboot_script.sh
The line that fails is :
Either
mac=$(</home/user/mac.txt)
Which may not be providing the content to the variable in the wget
OR
/usr/bin/wget "http://my.domain.com/$mac/line.txt" -O /home/user/line.txt
If I run the script from command line, it works absolutely fine but if it runs from the cron on reboot, the script runs, but line.txt is saved as an empty file (0 bytes). Again, if run directly from command line, it works fine.
I've looked at file permissions, absolute paths, everything I can think of, but I've been staring at this for hours now.
Any help would be appreciated. Thanks.
@reboot is too early in the boot process IMHO. You should create a systemd unit that waits for the network.
As a workaround, you can add a
sleep 30
or better:
until ping -c1 domain.com &>/dev/null; do
sleep 5
done
before your wget
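Putting it together, a minimal sketch of what reboot_script.sh could look like with the wait added (the paths and domain are the ones from the question; adjust to taste):
#!/bin/bash
# wait until the network is actually reachable before using it
until ping -c1 my.domain.com &>/dev/null; do
    sleep 5
done
mac=$(</home/user/mac.txt)
/usr/bin/wget "http://my.domain.com/$mac/line.txt" -O /home/user/line.txt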

How to run a Cron Job so it creates files as a user file instead of a root file

Why is the output file from this owned by root and not w3svcsadm?
sudo -u w3svcsadm echo "TEST ran" > /home/your/emaildigest/TEST_$( date +%Y%m%d%H%M%S ).output
I'm running into some issues with cron, and I believe this is the key to my problems.
Using the -u flag with sudo executes the command echo "TEST ran" as the user w3svcsadm, but that command isn't what writes the file; the redirection done by the > operator is, and that redirection is performed by the shell that invoked sudo, which is still running as the original user. If that user is root, the file will be created under root. In your script you could use su w3svcsadm to switch the shell user before executing that command, and then you wouldn't have to use the -u flag at all.
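To make the redirection itself run as the target user, one common approach is to hand the whole command line to a shell started by sudo (same user as the question, output path simplified for illustration):
# redirection here is done by the invoking shell, so the file ends up owned by root:
sudo -u w3svcsadm echo "TEST ran" > /home/your/emaildigest/TEST.output
# redirection here happens inside the shell that sudo starts as w3svcsadm:
sudo -u w3svcsadm sh -c 'echo "TEST ran" > /home/your/emaildigest/TEST.output'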

Execute shell script within another script prompts: No such file or directory

(I'm new in shell script.)
I've been stuck with this issue for a while. I've tried different methods but without luck.
Description:
When my script attempts to run another script (SiebelMessageCreator.sh, which I don't own) it prompts:
-bash: ./SiebelMessageCreator.sh: No such file or directory
But the file exists and has execute permissions:
-rwxr-xr-x 1 owner ownergrp 322 Jun 11 2015 SiebelMessageCreator.sh
The code that is performing the script execution is:
(cd $ScriptPath; su -c './SiebelMessageCreator.sh' - owner; su -c 'nohup sh SiebelMessageSender.sh &' - owner;)
It's within a subshell because I first thought it was throwing that message because my script was running from my home directory (when I run the script I'm root, and I've moved to my non-root home directory to run it, because I can't move my script [ policies ] to the directory where the other script resides).
I've also tried sh SCRIPT.sh and ./SCRIPT.sh, and changing the shebang from bash to ksh because SiebelMessageCreator.sh uses that shell.
The su -c 'sh SCRIPT.sh' - owner is necessary: if the script runs as root instead of owner it breaks something (?) (that's what my partners told me from their experience running it as root), so I execute it as the owner.
Another thing I've found in my research is that it can throw that message if it's a symbolic link. I'm really not sure whether the script is a symbolic link. Here is its content:
#!/bin/ksh
BASEDIRROOT=/path/to/file/cpp-plwsutil-core-runtime.jar # (path changed on purpose for this question)
java -classpath $BASEDIRROOT com.hp.cpp.plwsutil.SiebelMessageCreator
exitCode=$?
echo "`date -u '+%Y-%m-%d %H:%M:%S %Z'` - Script execution finished with exit code $exitCode."
exit $exitCode
As you can see it's a very simple script that just calls a .jar, but I also can't add it to my script [ policies ].
If I run ./SiebelMessageCreator.sh manually it works just fine, but not from my script. I suppose that rules out the 64-bit vs 32-bit issue that I've also found while googling?
By the way, I'm automating some tasks, the ./SiebelMessageCreator.sh and nohup sh SiebelMessageSender.sh & are just the last steps.
Any ideas :( ?
Did you try:
. ./SiebelMessageCreator.sh
You can also run which sh or which ksh, then adjust the first line (#!/bin/ksh) to match.
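As a quick check (the interpreter path is just an example, it may differ on your box):
which ksh                          # confirm the interpreter exists, e.g. /usr/bin/ksh
head -1 SiebelMessageCreator.sh    # the shebang should point at a path that actually exists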

Bash Script works but not when executed from crontab

I am new to linux and the script below is just an example of my issue:
I have a script which works as expected when I execute it; however, when I set it to run via crontab it doesn't work as expected, because it doesn't read the file content into the variable.
I have a file 'test.txt' which has 'abc' in it. My script puts the text into a variable 'var' and then I echo it out to a log file:
var=$(</home/pi/MyScripts/test.txt)
echo "$var" >/home/pi/MyScripts/log.log
This works perfectly fine when I execute it, and it echoes into the log file, but not when I set it via crontab:
* * * * * /home/pi/MyScripts/test.sh
The cron job runs, and it sent me the following error message:
/bin/sh: 1: /home/pi/MyScripts/test.sh: Permission denied.
But I have given it 777 permissions:
-rwxrwxrwx 1 pi pi 25 Jun 10 15:31 test.txt
-rwxrwxrwx 1 pi pi 77 Jun 10 15:34 test.sh
Any ideas?
This happens when you run the script with a different shell. It's especially relevant for systems where /bin/sh is dash:
$ cat myscript
echo "$(< file)"
$ bash myscript
hello world
$ sh myscript
$
To fix it, add #!/bin/bash as the first line in your script.
Others have provided answers, but I will give you a big clue from your error message; emphasis mine:
/bin/sh: 1: /home/pi/MyScripts/test.sh: Permission denied.
Note how the cron job was trying to use /bin/sh to run the script. That’s solved by always indicating which shell you want to use at the top of your script like this.
#!/bin/bash
var=$(</home/pi/MyScripts/test.txt)
echo "$var" >/home/pi/MyScripts/log.log
If your script is using bash, then you must explicitly set /bin/bash in some way.
Also, regarding permissions you say this:
But I have given it 777 permissions:
First, 777 permissions is a massive security risk. If you do that it means that anyone or anything on the system can read, write & execute the file. Don’t do that. In the case of a cron job the only entity that needs 7 permissions on a file is the owner of the crontab running that file.
Meaning if this is your crontab, just change the permissions to 755 which allows others to read & execute but not write. Or maybe better yet change it to 700 so only you—as the owner of the file—can do anything to the file. But avoid 777 permissions if you want to keep your system safe, stable & sane.
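For example, to restrict the script to its owner (path taken from the question):
chmod 700 /home/pi/MyScripts/test.sh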
You have two options. In the first line of your file, tell what program you want to interpret the script
#!/bin/bash
...more code...
Or in your crontab, tell what program you want to interpret the script
* * * * * bash /home/pi/MyScripts/test.sh
In this option, you do not need to make the script executable

Run script with rc.local: script works, but not at boot

I have a node.js script which needs to start at boot and run under the www-data user. During development I always started the script with:
su www-data -c 'node /var/www/php-jobs/manager.js'
I could see exactly what happened, and manager.js now works great. Searching SO I found I had to place this in my /etc/rc.local. I also learned to send the output to a log file, to append 2>&1 to "redirect stderr to stdout", and that it should run as a daemon, so the last character is a &.
Finally, my /etc/rc.local looks like this:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
su www-data -c 'node /var/www/php-jobs/manager.js >> /var/log/php-jobs.log 2>&1 &'
exit 0
If I run this myself (sudo /etc/rc.local): yes, it works! However, if I perform a reboot, no node process is running, /var/log/php-jobs.log does not exist, and thus manager.js does not work. What is happening?
In this example of an rc.local script I use I/O redirection at the very first line of execution to my own log file:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
exec 1>/tmp/rc.local.log 2>&1 # send stdout and stderr from rc.local to a log file
set -x # tell sh to display commands before execution
/opt/stuff/somefancy.error.script.sh
exit 0
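After the next boot you can then read that log to see exactly which command failed, for example:
cat /tmp/rc.local.log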
On some linux's (Centos & RH, e.g.), /etc/rc.local is initially just a symbolic link to /etc/rc.d/rc.local. On those systems, if the symbolic link is broken, and /etc/rc.local is a separate file, then changes to /etc/rc.local won't get seen at bootup -- the boot process will run the version in /etc/rc.d. (They'll work if one runs /etc/rc.local manually, but won't be run at bootup.)
Sounds like on dimadima's system, they are separate files, but /etc/rc.d/rc.local calls /etc/rc.local
The symbolic link from /etc/rc.local to the 'real' one in /etc/rc.d can get lost if one moves rc.local to a backup directory and copies it back or creates it from scratch, not realizing the original one in /etc was just a symbolic link.
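On such systems, a quick way to check (and, if needed, restore) that link might be:
ls -l /etc/rc.local                           # should show: /etc/rc.local -> /etc/rc.d/rc.local
sudo ln -sf /etc/rc.d/rc.local /etc/rc.local  # recreate the link if it has become a plain file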
I ended up with upstart, which works fine.
In Ubuntu I noticed there are 2 files. The real one is /etc/init.d/rc.local; it seems the other /etc/rc.local is bogus?
Once I modified the correct one (/etc/init.d/rc.local) it did execute just as expected.
You might also have made it work by specifying the full path to node. Furthermore, when you want to run a shell command as a daemon you should close stdin by adding 0<&- (or simply <&-) before the &.
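Putting that advice together, the rc.local line might look something like this (the node path is an assumption; check it with which node):
su www-data -c '/usr/bin/node /var/www/php-jobs/manager.js >> /var/log/php-jobs.log 2>&1 0<&- &'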
I had the same problem (on CentOS 7) and I fixed it by giving execute permissions to /etc/rc.local:
chmod +x /etc/rc.local
If you are using Linux in the cloud, you usually have no chance to touch the real hardware, so you never see the configuration interface on first boot and of course cannot complete it. As a result, the firstboot service will always stand in the way of rc.local. The solution is to disable firstboot by doing:
sudo chkconfig firstboot off
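You can check whether firstboot is still enabled with:
chkconfig --list firstboot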
If you are not sure why your rc.local does not run, you can always check the /etc/rc.d/rc file, because that file always runs and calls the other subsystems (e.g. rc.local).
I got my script to work by editing /etc/rc.local then issuing the following 3 commands.
sudo mv /filename /etc/init.d/
sudo chmod +x /etc/init.d/filename
sudo update-rc.d filename defaults
Now the script works at boot.
I am using CentOS 7.
$ cd /etc/profile.d
$ vim yourstuffs.sh
Type the following into the yourstuffs.sh script.
type whatever you want here to execute
export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH
Save and reboot the OS.
I have used rc.local in the past, but I have learned from experience that the most reliable way to run your script at system boot time is to use the @reboot directive in crontab. For example:
@reboot path_to_the_start_up_script.sh
This is most probably caused by a missing or incomplete PATH environment variable.
If you provide full absolute paths to your executables (su and node) it will work.
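Alternatively, setting PATH near the top of rc.local achieves the same thing; a minimal sketch (the exact directories depend on where node is installed on your system):
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin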
It is my understanding that if you place your script in a certain runlevel, you should use ln -s to link the script into the level you want it to run in.
First make the script executable:
sudo chmod 755 /path/of/the/file.sh
Now add the script to rc.local, before the exit 0 line:
sh /path/of/the/file.sh
Next make rc.local executable:
sudo chmod 755 /etc/rc.local
Then start rc.local once to initialize it:
sudo /etc/init.d/rc.local start
Now reboot the system.
Done.
I found that because I was using a network-oriented command in my rc.local, it would sometimes fail. I fixed this by putting sleep 3 at the top of my script. I don't know why, but it seems the network interfaces aren't fully configured yet when the script runs, and this just allows some time for the DHCP server or whatever to finish. I don't fully understand it, but I suppose you could give it a try.
I had exactly the same issue: the script ran fine when started locally but not on reboot/power-on.
I resolved the issue by changing the file paths. Basically, you need to give the complete path to every file in the script. When you run it locally, relative paths can be resolved, but when it runs at reboot they will not be.
1. I do not recommend using root to run apps such as a node app. You can do it, but you may catch more exceptions.
2. rc.local normally runs as the root user, so if your script should run as another user such as www, you should make sure PATH and the rest of the environment are OK for that user.
3. I find an easy way to run a service as a specific user is:
sudo -u www -i /the/path/of/your/script
Please refer to the sudo manual:
-i [command]
The -i (simulate initial login) option runs the shell specified by the password database entry of the target user as a login shell...
rc.local only runs on startup. If you reboot and want the script to execute, it needs to go into the rc0.d directory as a script starting with the K99 prefix.
