Condor job - running shell script as executable - linux

I’m trying to run a Condor job where the executable is a shell script which invokes certain Java classes. Here’s my submit file:
Universe = vanilla
Executable = /script/testingNew.sh
requirements = (OpSys == "LINUX")
Output = /locfiles/myfile.out
Log = /locfiles/myfile.log
Error = /locfiles/myfile.err
when_to_transfer_output = ON_EXIT
Notification = Error
Queue
Here’s the content of the /script/testingNew.sh file –
(Just because I’m getting this error, I have removed the Java commands for now.)
#!/bin/sh
inputfolder=/n/test_avp/test-modules/data/json
srcFolder=/n/test_avp/test-modules
logsFolder=/n/test_avp/test-modules/log
libFolder=/n/test_avp/test-modules/lib
confFolder=/n/test_avp/test-modules/conf
twpath=/n/test_avp/test-modules/normsrc
dataFolder=/n/test_avp/test-modules/data
scriptFolder=/n/test_avp/test-modules/script
locFolder=/n/test_avp/test-modules/locfiles
bakUpFldr=/n/test_avp/test-modules/backupCurrent
cd $inputfolder
filename=`date -u +"%Y%m%d%H%M"`.txt
echo $filename $(date -u)
mkdir $bakUpFldr/`date -u +"%Y%m%d"`
dirname=`date -u +"%Y%m%d"`
flnme=current_json_`date -u +"%Y%m%d%H%M%S"`.txt
echo DIRNameis $dirname Filenameis $flnme
cp $dataFolder/current_json.txt $bakUpFldr/`date -u +"%Y%m%d"`/current_json_$filename
cp $dataFolder/current_json.txt $filename
mkdir $inputfolder/`date -u +"%Y%m%d"`
echo Creating Directory $(date -u)
mv $filename $filename.inprocess
echo Created Inprocess file $(date -u)
Also, here’s the error log from Condor –
000 (424639.000.000) 09/09 16:08:18 Job submitted from host: <135.207.178.237:9582>
...
001 (424639.000.000) 09/09 16:08:35 Job executing on host: <135.207.179.68:9314>
...
007 (424639.000.000) 09/09 16:08:35 Shadow exception!
Error from slot1#marcus-8: Failed to execute '/n/test_avp/test-modules/script/testingNew.sh': (errno=8: 'Exec format error')
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
...
012 (424639.000.000) 09/09 16:08:35 Job was held.
Error from slot1#marcus-8: Failed to execute '/n/test_avp/test-modules/script/testingNew.sh': (errno=8: 'Exec format error')
Code 6 Subcode 8
...
Can anyone explain what’s causing this error, and how to resolve it?
The testingNew.sh script runs fine on the Linux box if executed separately on a network machine.
Thanks a lot!! - GR

The cause, in our case, was the shell script using DOS line endings instead of Unix ones.
The Linux kernel will happily try to feed the script not to /bin/sh (as you intend) but to /bin/sh\r. (Do you see that trailing carriage-return character? Neither do I, but the Linux kernel does.) That file doesn't exist, so then, as a last resort, it will try to execute the script as a binary executable, which fails with the given error.
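A quick way to check for and strip those carriage returns (a sketch using a demo file in /tmp; dos2unix, where installed, does the same job):

```shell
# Make a demo script with DOS (CRLF) line endings
printf '#!/bin/sh\r\necho hello\r\n' > /tmp/dos_script.sh

# od -c makes the hidden \r characters visible
od -c /tmp/dos_script.sh | head -n 2

# Strip the carriage returns and make the result executable
tr -d '\r' < /tmp/dos_script.sh > /tmp/unix_script.sh
chmod +x /tmp/unix_script.sh
/tmp/unix_script.sh   # prints "hello"
```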

You need to specify input as:
input = /dev/null
Source: Submitting a job to Condor

Related

return code 999 error while executing the shell script through crontab

I have created a shell script which connects to Teradata and executes the given condition, let's say test.sh. I have created a wrapper script, wrapper.sh, to call test.sh only if there is an input file for test.sh.
Wrapper.sh:
cd ${FILELOC}
COUNT=$(ls -l 1001_*.txt | wc -l)
if [ "$COUNT" -ne 0 ]
then
/u/w/us/bin/test.sh.sh
fi
When I execute wrapper.sh manually, test.sh is called and gets executed without an error. But when I schedule it in cron, it throws an error as shown below:
EXIT ERRORCODE;
*** Exiting BTEQ...
*** RC (return code) = 999
Please help me to understand the issue.
The shell script name seems to be invalid.
Can you share the content of the test.sh.sh file so that it will be easy to debug?
There might be a chance that the file permissions are not proper, due to which it failed when executing from the cron scheduler.
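For what it's worth, here is a sketch of the wrapper with the count check done without parsing ls output (the glob and the test.sh.sh path are taken from the question; FILELOC is assumed to be set by the caller):

```shell
#!/bin/sh
# Assumption: FILELOC is exported by the caller; fall back to /tmp for the demo
cd "${FILELOC:-/tmp}" || exit 1

# Count matching input files via the glob itself instead of ls | wc,
# which avoids problems with unusual filenames
count=0
for f in 1001_*.txt; do
    [ -e "$f" ] && count=$((count + 1))
done
echo "count=$count"

if [ "$count" -ne 0 ]; then
    echo "would run /u/w/us/bin/test.sh.sh"   # the call from the question
fi
```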

Cron not executing the shell script + Linux [duplicate]

I have a script that checks if the PPTP VPN is running, and if not it reconnects the PPTP VPN. When I run the script manually it executes fine, but when I make a cron job, it's not running.
* * * * * /bin/bash /var/scripts/vpn-check.sh
Here is the script:
#!/bin/sh
/bin/ping -c3 192.168.17.27 > /tmp/pingreport
result=`grep "0 received" /tmp/pingreport`
truncresult="`echo "$result" | sed 's/^\(.................................\).*$$'`"
if [[ $truncresult == "3 packets transmitted, 0 received" ]]; then
/usr/sbin/pppd call home
fi
Finally I found a solution... instead of entering the cron job with
crontab -e
I needed to edit the crontab file directly:
nano /etc/crontab
adding e.g. something like
*/5 * * * * root /bin/bash /var/scripts/vpn-check.sh
and it's fine now!
Thank you all for your help... hope my solution will help other people as well.
After a long time getting errors, I just did this:
SHELL=/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
* * * * * /bin/bash /home/joaovitordeon/Documentos/test.sh
Where test.sh contains:
#!/bin/bash
/usr/bin/python3 /home/joaovitordeon/Documentos/test.py;
In my case, the issue was that the script wasn't marked as executable. To make sure it is, run the following command:
chmod +x your_script.sh
If you're positive the script runs outside of cron, execute
printf "SHELL=$SHELL\nPATH=$PATH\n* * * * * /bin/bash /var/scripts/vpn-check.sh\n"
Do crontab -e for whichever crontab you're using and replace it with the output of the above command. This should mirror most of your environment, in case there is some missing-path issue or something else. Also check the logs for any errors it's getting.
Though it definitely looks like the script has an error, or you messed something up when copying it here:
sed: -e expression #1, char 44: unterminated `s' command
./bad.sh: 5: ./bad.sh: [[: not found
Simple alternate script
#!/bin/bash
if [[ $(ping -c3 192.168.17.27) == *"0 received"* ]]; then
/usr/sbin/pppd call home
fi
Your script can be corrected and simplified like this:
#!/bin/sh
log=/tmp/vpn-check.log
{ date; ping -c3 192.168.17.27; } > $log
if grep -q '0 received' $log; then
/usr/sbin/pppd call home >>$log 2>&1
fi
Through our discussion in the comments we confirmed that the script itself works, but pppd doesn't when running from cron. This is because something must be different between an interactive shell, like your terminal window, and cron. This kind of problem is very common, by the way.
The first thing to do is try to remember what configuration is necessary for pppd. I don't use it, so I don't know. Maybe you need to set some environment variables? In that case most probably you set something in a startup file, like .bashrc, which is usually not used in a non-interactive shell; that would explain why pppd doesn't work.
The second thing is to check the logs of pppd. If you cannot find the logs easily, look into its man page and its configuration files, and try to find the logs, or how to make it log. Based on the logs, you should be able to find what is missing when running in cron and resolve your problem.
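One way to see what the interactive shell has that cron doesn't is to compare against a stripped-down environment (a sketch; env -i only approximates cron's near-empty environment, and the file paths are illustrative):

```shell
# env -i starts a shell with an almost empty environment, much like cron does
env -i /bin/sh -c 'env | sort' > /tmp/cron_like_env.txt
env | sort > /tmp/shell_env.txt

# Lines present only in the interactive shell are candidates for what
# your script is missing under cron (PATH entries, pppd settings, etc.)
comm -13 /tmp/cron_like_env.txt /tmp/shell_env.txt
```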
Was having a similar problem that was resolved when sh was put before the command in crontab.
This did not work:
@reboot ~myhome/mycommand >/tmp/logfile 2>&1
This did:
@reboot sh ~myhome/mycommand >/tmp/logfile 2>&1
In my case:
crontab -e
then adding the line:
* * * * * ( cd /directory/of/script/ && /bin/sh /directory/of/script/scriptItself.sh )
In fact, if I added "root" as the user, cron thought "root" was a command, and it didn't work.
As a complement to the other answers: didn't you forget the username in your crontab entry?
Try this :
* * * * * root /bin/bash /var/scripts/vpn-check.sh
EDIT
Here is a patch of your code
#!/bin/sh
/bin/ping -c3 192.168.17.27 > /tmp/pingreport
result=`grep "0 received" /tmp/pingreport`
truncresult=`echo "$result" | /bin/sed 's/^\(.................................\).*$/\1/'`
if [[ $truncresult == "3 packets transmitted, 0 received" ]]; then
/usr/sbin/pppd call home
fi
In my case, it could be solved by using this:
* * * * * root ( cd /directory/of/script/ && /directory/of/script/scriptItself.sh )
I used some ./folder/-references in the script, which didn't work.
The problem statement: the script executes fine when run manually in the shell, but when run through cron it gives a "java: command not found" error.
Please try the below 2 options; they should fix the issue:
Ensure the script is executable. If it's not, execute:
chmod a+x your_script_name.sh
The cron job doesn't run with the same user with which you execute the script manually, so it doesn't have access to the same $PATH variable as your user, which means it can't locate the java executable to run the commands in the script. We should first fetch the value of the PATH variable as below and then set it (export) in the script.
echo $PATH can be used to fetch the value of the PATH variable,
and your script can be modified as below. Please see the second line, starting with export:
#!/bin/sh
export PATH=<provide the value of echo $PATH>
/bin/ping -c3 192.168.17.27 > /tmp/pingreport
result=`grep "0 received" /tmp/pingreport`
truncresult=`echo "$result" | sed 's/^\(.................................\).*$/\1/'`
if [[ $truncresult == "3 packets transmitted, 0 received" ]]; then
/usr/sbin/pppd call home
fi
First of all, check if cron service is running. You know the first question of the IT helpdesk: "Is the PC plugged in?".
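A sketch of that first check (the daemon is named cron or crond depending on the distro, which is an assumption you'll need to verify for your system):

```shell
# pgrep prints the PID(s) of a matching process, if any;
# try both common daemon names before giving up
pgrep -x cron || pgrep -x crond || echo "no cron daemon found"
```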
For me, this was happening because the cron job was executing from the /root directory, but my shell script (a script to pull the latest code from GitHub and run the tests) was in a different directory. So I had to edit my script to cd to my scripts folder. My debug steps were:
Verified that my script run independent of cron job
Checked /var/log/cron to see if the cron jobs are running. Verified that the job is running at the intended time
Added an echo command to the script to log the start and end times to a file. Verified that those were getting logged but not the actual commands
Logged the result of pwd to the same file and saw that the commands were getting executed from /root
Tried adding a cd to my script directory as the first line in the script. When the cron job kicked off this time, the script got executed just like in step 1.
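The logging steps above can be sketched like this (the log path is illustrative):

```shell
#!/bin/sh
# Log start time, working directory, and end time so you can see
# exactly where cron runs the script from
log=/tmp/cronjob-debug.log
echo "start: $(date)" >> "$log"
echo "pwd:   $(pwd)"  >> "$log"

# cd to the script's own directory so relative paths keep working,
# no matter which directory cron starts the job from
cd "$(dirname "$0")" || exit 1

# ... actual job commands go here ...

echo "end:   $(date)" >> "$log"
```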
It was the timezone in my case. I scheduled the cron job with my local time, but the server has a different timezone, so the job did not run at all. Make sure your server has the time you expect by checking with the date command.
First run the command env > env.tmp,
then run cat env.tmp.
Copy the full PATH=.................. line and paste it into crontab -e, on a line before your cron jobs.
Try setting the full path in cron, as above, e.g.:
/home/your site folder name/public_html/gistfile1.sh

Missing File Output When Script Command Runs as su -c command -m user

I have a script that needs to check if a lockfile exists that only root can access and then the script runs the actual test script that needs to be run as a different user.
The problem is that the test script is supposed to generate XML files, and those files do not exist (i.e. I can't find them).
Relevant part of the script
if (mkdir ${lockdir} ) 2> /dev/null; then
echo $$ > $pidfile
trap 'rm -rf "$lockdir"; exit $?' INT TERM EXIT
if [ -f "$puppetlock" ]
then
su -c "/opt/qa-scripts/start-qa-test.sh > /var/log/qaTests/test-$(date +\"%m-%d-%T\").log" -m "qaUser"
lockdir is what gets created when the test is run to signify that the test process has begun.
puppetlock checks if puppet is running by looking for the lock file puppet creates.
qaUser does not have the rights to check if puppetlock exists.
start-qa-test.sh ends up calling java to execute an automated test. My test-date.log file displays what the console would show if the test were run.
However the test is supposed to produce some xml files in a directory called target. Those files are missing.
In case it's relevant start-qa-test.sh is trying to run something like this
nohup=true
/usr/bin/java -cp .:/folderStuff/$jarFile:/opt/folderResources org.junit.runner.JUnitCore org.some.other.stuff.Here
Running start-qa-test.sh produces the xml output in the target folder. But running it through su -c it does not.
Edit
I figured out the answer to this issue.
I changed the line to
su - qaUser -c "/opt/qa-scripts/start-qa-test.sh > /var/log/qaTests/test-$(date +\"%m-%d-%T\").log"
That allowed the output to show up in /home/qaUser.
Try redirecting stdout and stderr in the line:
su -c "/opt/qa-scripts/start-qa-test.sh > /var/log/qaTests/test-$(date +\"%m-%d-%T\").log" -m "qaUser" 2>&1

ssh does not return even after execution

The following ssh command does not return to the terminal. It hangs even though the execution has completed. The execution hangs after the echo hi command.
ssh user@testserver "echo hello;source .profile;source .bash_profile;/apps/myapp/deploytools/ciInstallAndRun.sh; echo hi"
Output
hello
<output from remote script>
hi
ciInstallAndRun.sh
echo 'starting'
cd /apps/myapp/current
./tctl kill
cd /apps/myapp
mv myapp_v1.0 "myapp_v1.0_`date '+%Y%m%d%H%M'`"
unzip -o /apps/myapp/myappdist-bin.zip
java -classpath .:/apps/myapp/deploytools/cleanup.jar se.telenor.project.cleanup.Cleanup /apps/myapp myapp_v1.0_ 3
cd /apps/myapp/myapp_v1.0
echo 'Done with deploy'
chmod -R 775 *
echo 'Done'
./tctl start test
Source OS: Redhat
Dest Os: Solaris 10 8/07
Any idea how to fix this?
Your installation script has spawned a child process.
Add a ps -f or ptree $$ command before echo hi. You'll see a child process or multiple child processes spawned by your install script.
To stop the SSH command from hanging, you need to detach such child process(es) from your terminal's input/output. You can redirect your script's output to a file - both stdout and stderr - with > /some/output/file 2>&1, and also redirect its input with < /dev/null.
Or you can use the nohup command.
You haven't provided an MCVE, as others have noted, but this is likely the problem command in your install script, since your question implies that you see the expected output from your install script:
./tctl start test
You probably would do better to replace it with something like:
./tctl start test </dev/null >/some/log/file/path.log 2>&1
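In general, detaching a long-running child so an SSH session can return looks like this (sleep stands in for the real command; the log path is illustrative):

```shell
# stdin from /dev/null, stdout/stderr to a log file, then background it;
# the SSH session no longer has to wait on the child's terminal I/O
nohup sleep 30 </dev/null >/tmp/detached.log 2>&1 &
echo "detached child pid: $!"
```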

What script/user/process is writing to a file?

I'm trying to find out what script/user/process is writing to a file.
I have 4 hosts that have the same NFS mount.
I have made a script and put it on all of the hosts, with no success.
Can somebody please help with this?
The script runs from 5:50 to 6:10; this is the period when my file gets written to.
This is the script that I made:
#!/bin/sh
log=~/file-access.log
check_time_to_run() {
tempTime=$1
if [ $tempTime -gt 555 -a $tempTime -lt 610 ]; then
# Between 5:55 and 6:10
lsof /cdpool/Xprint/Liste_Drucker >> $log
else
# Outside the interval
exit 1
fi
}
while true; do
currTime=`date +%k%M`
check_time_to_run $currTime
sleep 0.1s
done
Don't use a shell script for this at all. Instead, install sysdig, and run:
sysdig 'fd.filename=/cdpool/Xprint/Liste_Drucker'
...leave that open, and whenever anything writes to or reads from that file, an appropriate log message will be printed.
If you want to print both the username and the process name (with arguments) for the job printing to the file, the following format string will do so:
sysdig \
-p '%user.name %proc.name - %evt.dir %evt.type %evt.args' \
'fd.filename=/cdpool/Xprint/Liste_Drucker'