Executed Bash script alters command - Linux

I'm using a Bash script for automated backups of my system and my most important config files.
The strange thing is: at the end of the run, the script should send me an email with the content of a log file. But debugging the script reveals that it runs an altered command, not the one written in my script.
Script:
#!/bin/bash
###########################
### ###
### BACKUP SCRIPT ###
### ###
###########################
......
LOG="backup.log"
exec > >(tee -i >( sed 's/\x1B\[[0-9;]*[JKmsu]//g' > ${LOG}))
exec 2>&1
# commands etc
mailx -s "Backup | subject" mail#mail.tld < $LOG
So my log contains all necessary output and isn't empty.
Debugging the script reveals this
echo -e '\033[1;32mExiting prune script\033[0m'
Exiting prune script
+ mailx -s 'Backup | subject' mail@mail.tld # Missing < $LOG!!!
I really don't know why it's missing my logfile. The mail I receive is just an empty mail with the correct subject.
Some ideas why?
PS: using Ubuntu 16.04 LTS but this shouldn't matter
Backup.log: https://pastebin.com/cCVSLV0h (nothing special).
If I run the mail command directly from my shell, it sends this log as I expect.
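(A note for later readers: set -x does not print redirections, so the "missing" < $LOG in the trace is normal. The empty mail is more likely a race: the sed inside the process substitution may not have flushed backup.log yet when mailx reads it. A minimal sketch of a workaround, assuming that race is the cause:)
#!/bin/bash
LOG="backup.log"
exec 3>&1                         # save the original stdout on fd 3
exec > >(tee -i >(sed 's/\x1B\[[0-9;]*[JKmsu]//g' > "$LOG")) 2>&1
# ... backup commands ...
exec 1>&3 2>&3 3>&-               # detach from the tee pipeline so it sees EOF
sleep 1                           # crude, but lets the background sed finish writing
mailx -s "Backup | subject" mail@mail.tld < "$LOG"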

Related

Adding logfile command puts script in a loop?

I'm trying to create a logfile for the output/terminal messages of a script.
Why is the script going into a loop?
The script is used for remotely changing passwords over SSH, using an imported .txt file with a list of addresses. The script works fine until I add a line for logging at the end, after DONE:
#!/bin/bash
echo "Enter the username for which you want to change the password"
read USER
sleep 2
echo "Enter the password that you would like to set for $USER"
read PASSWORD
sleep 2
echo "Enter the file name that contains a list of servers. Ex: ip.txt"
read FILE
sleep 2
for HOST in $(cat $FILE)
do
ssh root@$HOST "echo $'$PASSWORD\n$PASSWORD' | passwd $USER"
done
I tried adding the following log-creating commands:
/root/passwordchange.sh | tee -a /root/output.log
logsave -a /root/output.log /root/passwordchange.sh
/root/passwordchange.sh >> /root/output.log
Adding the logging line creates a loop for the entire program instead of closing it.
I need to SIGKILL the script to end the process.
The output file is created as normal, with all the information provided.
It is my first question here on Stack, thank you in advance for all answers.
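(A sketch of one way to get the log without piping the script from outside, in case the external pipe is what leaves the process hanging: log from inside the script, and keep ssh away from the loop's stdin with -n:)
#!/bin/bash
# Log everything the script prints while keeping the terminal as
# stdin for the interactive read prompts.
exec > >(tee -a /root/output.log) 2>&1
read -p "Username: " USER
read -p "New password for $USER: " PASSWORD
read -p "Server list file (e.g. ip.txt): " FILE
while read -r HOST; do
    # -n stops ssh from swallowing the rest of the host list on stdin
    ssh -n root@"$HOST" "printf '%s\n%s\n' '$PASSWORD' '$PASSWORD' | passwd $USER"
done < "$FILE"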

Why does my named pipe input command line just hang when it is called?

Based on the answers:
Writing to stdin of background process
Accessing bash command line args $@ vs $*
Send command to a background process
Can I redirect output to a log file and background a process at the same time?
I wrote two shell scripts to communicate with my game server, and they worked the first time I did it. Since then, they do not work anymore. Every time I do ./send.sh commands, the command line hangs until I hit Ctrl+C.
It also hangs and does nothing when I directly do echo command > /tmp/srv-input.
The scripts
This script starts the server and configures it to read/receive my commands while it runs in the background:
start_czero_server.sh
#!/bin/sh
# Go to the game server application folder where the game application `hlds_run` is
cd /home/user/Half-Life
pkill -f hlds
# Set up a pipe named `/tmp/srv-input`
rm /tmp/srv-input
mkfifo /tmp/srv-input
cat > /tmp/srv-input &
echo $! > /tmp/srv-input-cat-pid
# Start the server reading from the pipe named `/tmp/srv-input`
# And also output all its console to the file `/home/user/Half-Life/my_logs.txt`
cat /tmp/srv-input | ./hlds_run -console -game czero +port 27015 > my_logs.txt 2>&1 &
# Successful execution
exit 0
This second script is just a wrapper which allows me to easily send commands to my server:
send.sh
#!/bin/sh
echo "$#" > /tmp/srv-input
# Successful execution
exit 0
Now every time I want to send a command to my server, I just run this in the terminal:
./send.sh mp_timelimit 30
I always keep another terminal open just to watch my server console. To do it, I just use the tail command with the -f flag to follow my server's console output:
tail -f /home/user/Half-Life/my_logs.txt
You would be better off just having hlds_run read directly from the pipe instead of having cat pipe it in.
Try
./hlds_run … > my_logs.txt 2>&1 < /tmp/srv-input &
Instead of
cat /tmp/srv-input | ./hlds_run …
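One caveat worth adding (this follows from FIFO semantics, not from the answer above): with hlds_run reading the pipe directly, it will see EOF as soon as the last writer closes it, i.e. after every send.sh run, unless some process holds a writer open. A sketch:
# Hold a writer open so the server never sees EOF on the FIFO
# between send.sh invocations ("sleep infinity" needs GNU coreutils).
sleep infinity > /tmp/srv-input &
echo $! > /tmp/srv-input-writer-pid
./hlds_run -console -game czero +port 27015 < /tmp/srv-input > my_logs.txt 2>&1 &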

Read command in bash script not waiting for user input when piped to bash?

Here is what I'm entering in Terminal:
curl --silent https://raw.githubusercontent.com/githubUser/repoName/master/installer.sh | bash
The WordPress-installing bash script contains a "read password" command that is supposed to wait for the user to input their MySQL password. But for some reason, that doesn't happen when I run it with the "curl githubURL | bash" command. When I download the script via wget and run it via "sh installer.sh", it works fine.
What could be the cause of this? Any help is appreciated!
If you want to run a script on a remote server without saving it locally, you can try this.
#!/bin/bash
RunThis=$(lynx -dump http://127.0.0.1/example.sh)
if [ $? = 0 ]; then
    bash -c "$RunThis"
else
    echo "There was a problem downloading the script"
    exit 1
fi
In order to test it, I wrote an example.sh:
#!/bin/bash
# File /var/www/example.sh
echo "Example read:"
read line
echo "You typed: $line"
When I run Script.sh, the output looks like this.
$ ./Script.sh
Example read:
Hello World!
You typed: Hello World!
Unless you absolutely trust the remote script, I would avoid doing this without examining it before executing.
It wouldn't stop for read: when you pipe the script into bash, bash's stdin is the pipe that carries the script itself, so read consumes the next lines of the script instead of waiting for your terminal. You are forking a child which has been given its input by the parent shell; you cannot give values back to the parent (modify the parent's env) from the child, and throughout this process you are always in the parent process.
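(If you can edit the script itself, a common fix, sketched here, is to read from the controlling terminal explicitly, so the prompt still works even when the script arrives on stdin:)
# Prompt and read from the terminal, not from stdin;
# stdin is the piped script when run via `curl ... | bash`.
printf "MySQL password: " > /dev/tty
read -r password < /dev/tty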

Condor job - running shell script as executable

I’m trying to run a Condor job where the executable is a shell script which invokes certain Java classes.
Universe = vanilla
Executable = /script/testingNew.sh
requirements = (OpSys == "LINUX")
Output = /locfiles/myfile.out
Log = /locfiles/myfile.log
Error = /locfiles/myfile.err
when_to_transfer_output = ON_EXIT
Notification = Error
Queue
Here's the content of the /script/testingNew.sh file (just because I'm getting the error, I have removed the Java commands for now):
#!/bin/sh
inputfolder=/n/test_avp/test-modules/data/json
srcFolder=/n/test_avp/test-modules
logsFolder=/n/test_avp/test-modules/log
libFolder=/n/test_avp/test-modules/lib
confFolder=/n/test_avp/test-modules/conf
twpath=/n/test_avp/test-modules/normsrc
dataFolder=/n/test_avp/test-modules/data
scriptFolder=/n/test_avp/test-modules/script
locFolder=/n/test_avp/test-modules/locfiles
bakUpFldr=/n/test_avp/test-modules/backupCurrent
cd $inputfolder
filename=`date -u +"%Y%m%d%H%M"`.txt
echo $filename $(date -u)
mkdir $bakUpFldr/`date -u +"%Y%m%d"`
dirname=`date -u +"%Y%m%d"`
flnme=current_json_`date -u +"%Y%m%d%H%M%S"`.txt
echo DIRNameis $dirname Filenameis $flnme
cp $dataFolder/current_json.txt $bakUpFldr/`date -u +"%Y%m%d"`/current_json_$filename
cp $dataFolder/current_json.txt $filename
mkdir $inputfolder/`date -u +"%Y%m%d"`
echo Creating Directory $(date -u)
mv $filename $filename.inprocess
echo Created Inprocess file $(date -u)
Also, here's the error log from Condor:
000 (424639.000.000) 09/09 16:08:18 Job submitted from host: <135.207.178.237:9582>
...
001 (424639.000.000) 09/09 16:08:35 Job executing on host: <135.207.179.68:9314>
...
007 (424639.000.000) 09/09 16:08:35 Shadow exception!
Error from slot1@marcus-8: Failed to execute '/n/test_avp/test-modules/script/testingNew.sh': (errno=8: 'Exec format error')
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
...
012 (424639.000.000) 09/09 16:08:35 Job was held.
Error from slot1@marcus-8: Failed to execute '/n/test_avp/test-modules/script/testingNew.sh': (errno=8: 'Exec format error')
Code 6 Subcode 8
...
Can anyone explain what's causing this error, and how to resolve it?
The testingNew.sh script runs fine on the Linux box if executed separately on a network machine.
Thanks a lot!! - GR
The cause, in our case, was the shell script using DOS line endings instead of Unix ones.
The Linux kernel will happily try to feed the script not to /bin/sh (as you intend) but to /bin/sh followed by a trailing carriage return. (Do you see that trailing carriage return character? Neither do I, but the Linux kernel does.) That file doesn't exist, so then, as a last resort, it will try to execute the script as a binary executable, which fails with the given error.
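(One way to confirm and fix this, assuming GNU tools are installed:)
file testingNew.sh                # reports "with CRLF line terminators" if affected
sed -i 's/\r$//' testingNew.sh    # strip the carriage returns; dos2unix also works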
You need to specify input as:
input = /dev/null
Source: Submitting a job to Condor
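For reference, this is the submit description from the question with that line added (a sketch; everything else unchanged):
Universe = vanilla
Executable = /script/testingNew.sh
input = /dev/null
requirements = (OpSys == "LINUX")
Output = /locfiles/myfile.out
Log = /locfiles/myfile.log
Error = /locfiles/myfile.err
when_to_transfer_output = ON_EXIT
Notification = Error
Queue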

Redirecting Output of Bash Child Scripts

I have a basic script that outputs various status messages. e.g.
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script, in order to run a sequence of child scripts and send an email upon overall completion, e.g. topscript.sh:
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$?
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group@mydomain.com
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log I see nothing, even though running top shows myscript.sh is currently being executed, and therefore, presumably outputting status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running these on a remote machine, connected via ssh using pseudo-tty allocation, e.g. ssh -t user@host. Could the pseudo-tty be interfering?
I just tried the following: I have three files t1.sh, t2.sh, and t3.sh, all with the same content:
#!/bin/bash
for ((i=0; i<10; i++)); do
    echo $i of 9
    sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then in another terminal run tail -f topscript.log I see the lines being output just fine in the log file.
Perhaps the things being run in your subscripts use a large output buffer? I know when I've run Python scripts before, they have a pretty big output buffer, so you don't see any output for a while. Do you actually see the entire output in the email that gets sent at the end of topscript.sh? Is it just that you don't see the output while the processes run?
try
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer is not always available as a standard binary on older Unix platforms and may require finding and installing a package that provides it.
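If unbuffer (it ships with the expect package) is not installed, stdbuf from GNU coreutils is a common alternative; a sketch:
# Line-buffer stdout for the script and any children that use C stdio
# (stdbuf works by preloading a library, which children inherit).
stdbuf -oL topscript.sh > /var/log/topscript.log 2>&1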
I hope this helps.
