I'm creating a bash script that will run a process in the background, which creates a socket file. The socket file then needs to be chmod'd. The problem I'm having is that the socket file isn't being created before trying to chmod the file.
Example source:
#!/bin/bash
# first create folder that will hold socket file
mkdir /tmp/myproc
# now run process in background that generates the socket file
node ../main.js &
# finally chmod the thing
chmod 777 /tmp/myproc/*.sock
How do I delay the execution of the chmod until after the socket file has been created?
The easiest way I know to do this is to busy-wait for the file to appear. Conveniently, ls returns non-zero when the file it is asked to list doesn't exist, so just loop on ls until it returns 0; when it does, you know you have at least one *.sock file to chmod.
#!/bin/sh
echo -n "Waiting for socket to open.."
while ! ls /tmp/myproc/*.sock >/dev/null 2>&1; do
echo -n "."
sleep 2
done
echo ". Found"
If this is something you need to do more than once, wrap it in a function (a sketch follows the corrected script below); otherwise it should do what you need as is.
EDIT:
As pointed out in the comments, using ls like this is inferior to -e in the test, so the rewritten script below is to be preferred. (I have also corrected the shell invocation, as echo -n is not supported on all platforms in sh emulation mode.)
#!/bin/bash
echo -n "Waiting for socket to open.."
while [ ! -e /tmp/myproc/*.sock ]; do
echo -n "."
sleep 2
done
echo ". Found"
Test to see if the file exists before proceeding:
while [[ ! -e filename ]]
do
sleep 1
done
If you set your umask (try umask 0) you may not have to chmod at all. If you still don't get the right permissions, check whether node has options to change that.
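For example (a sketch; umask 0 makes the socket world-accessible, which may be more permissive than you want):
# run node under a permissive umask in a subshell, so the parent
# shell's umask is left untouched
( umask 0; node ../main.js ) &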
Related
I have the below sample script, sample.sh:
#!/bin/bash
if ps aux | grep -o "sample.sh" >/dev/null
then
echo "Already script running"
exit 0
fi
echo "start script"
while true
do
echo "script running"
sleep 5
done
In the above script I want to check whether this script is already running; if it is, it should not run again.
The problem is that the check always evaluates to true (running the script is itself what satisfies the condition), so it always shows the "Already script running" message.
Any idea how to solve it?
You need a proper lock. I'd do it using flock, like this:
exec 201> "/tmp/lock.$(basename "$0").file"
if ! flock -n 201; then
echo "another instance of $0 is running"
exit 1
fi
# cmds
exec 201>&-
rm -f "/tmp/lock.$(basename "$0").file"
This basically creates a lock for the script using a temporary file. The temporary file has no particular significance other than being used to tell whether your script has acquired the lock.
When there's an instance of this program running, the next run of the same program can't proceed, because the lock will prevent it.
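A variant that avoids the temporary file entirely is to take the lock on the script file itself (a sketch):
# open the script itself read-only on fd 200 and lock that fd;
# nothing is left behind to clean up when the script exits
exec 200< "$0"
if ! flock -n 200; then
    echo "another instance of $0 is running"
    exit 1
fi
# cmds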
For me it is safer to use a lock file: create it when the process starts and delete it after completion.
Let the script record its own PID in a file. Before doing so, it first checks whether that file already contains the PID of a live process, in which case it exits.
pid=$(< "${PID_FILE:?}") || exit
kill -0 "$pid" 2>/dev/null && exit
The next exercise is to prevent race conditions when writing the file.
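One way to close that race is the shell's noclobber option, which makes the > redirection itself the atomic check. A sketch; the pidfile path is an assumption:
#!/bin/bash
PID_FILE=/tmp/myscript.pid
# with noclobber set, the redirection fails if the pidfile already
# exists, so the existence check and the write are a single step
if ( set -o noclobber; echo $$ > "$PID_FILE" ) 2>/dev/null; then
    trap 'rm -f "$PID_FILE"' EXIT
else
    echo "already running (pidfile $PID_FILE exists)"
    exit 1
fi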
Try this; it gives the number of sample.sh processes run by the user:
ps aux | awk -v app='sample.sh' '$0 ~ app { print $1 }' | grep -c "$USER"
Write a tmp file to the /tmp directory.
Have your script check to see if the file exists; if it does, don't run.
#!/bin/bash
# our tmpfile
tmpfile="/tmp/mytmpfile"
# check to see if it exists.
# if it does then exit script
if [[ -f ${tmpfile} ]]; then
echo script already running.
exit
fi
# it doesn't exist at this point so lets make one
touch ${tmpfile}
# do whatever now.
# end of script
rm ${tmpfile}
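One weakness of this approach is that a crash leaves the tmp file behind and blocks all future runs. A trap placed right after the touch mitigates that (a sketch):
# remove the tmp file on any exit, including Ctrl-C or a kill
trap 'rm -f "${tmpfile}"' EXIT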
I'm using the scp command to copy a file from one Linux host to another.
I run the scp command on host1 to copy a file from host1 to host2. The file is quite big and it takes some time to copy.
On host2 the file appears immediately, as soon as copying starts. I can do anything with this file even while copying is still in progress.
Is there any reliable way to find out on host2 whether copying has finished?
Off the top of my head, you could do something like:
touch tinyfile
scp bigfile tinyfile user@host:
Then when tinyfile appears you know that the transfer of bigfile is complete.
As pointed out in the comments, this assumes that scp will copy the files one by one, in the order specified. If you don't trust it, you could do them one by one explicitly:
scp bigfile user@host:
scp tinyfile user@host:
The disadvantage of this approach is that you would potentially have to authenticate twice. If this were an issue you could use something like ssh-agent.
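Another option is SSH connection sharing, which authenticates once and reuses the connection for both copies. A sketch; the control-socket path and user@host are placeholders:
ctl=/tmp/scp-ctl-$$
# open a background master connection; only this step authenticates
ssh -fNM -S "$ctl" user@host
scp -o ControlPath="$ctl" bigfile user@host:
scp -o ControlPath="$ctl" tinyfile user@host:
# tear the master connection down
ssh -S "$ctl" -O exit user@host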
On the sending side (host1), use a script like this:
#!/bin/bash
echo 'starting transfer'
scp FILE USER@DST_SERVER:DST_PATH
OUT=$?
if [ $OUT -eq 0 ]; then
echo 'transfer successful'
touch successful
scp successful USER@DST_SERVER:DST_PATH
else
echo 'transfer failed'
fi
On the receiving side (host2), use a script like this:
#!/bin/bash
SLEEP_TIME=30
MAX_CNT=10
CNT=0
while [[ ! -e successful && $CNT -lt $MAX_CNT ]]; do
((CNT++))
sleep $SLEEP_TIME
done
if [[ -e successful ]]; then
echo 'successful'
rm successful
# do something with FILE
fi
With CNT and MAX_CNT you avoid an endless loop (in case the successful file is never transferred).
The product of MAX_CNT and SLEEP_TIME should be equal to or greater than the expected transfer time. In my example the expected transfer time is less than 300 seconds.
A checksum (md5sum, sha256sum, sha512sum) of the local and remote files would tell you whether they're identical.
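When you do have SSH access, the comparison is a short sketch (user@host and the variables are placeholders):
local_sum=$(md5sum "$LOCALFILE" | cut -c 1-32)
remote_sum=$(ssh user@host md5sum "$REMOTEFILE" | cut -c 1-32)
if [ "$local_sum" = "$remote_sum" ]; then
    echo "local and remote files match"
else
    echo "files differ (or the transfer is still in progress)"
fi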
For the situation where you don't have SSH access to the remote system - like an FTP server - you can download the file after it's uploaded and compare the checksums. I do this for files I send from production scripts at work. Below is a snippet from the script in which I do this.
MD5SRC=$(md5sum "$LOCALFILE" | cut -c 1-32)
MD5TESTFILE=$(mktemp -p /ramdisk)
curl \
-o "$MD5TESTFILE" \
-sS \
-u "$FTPUSER:$FTPPASS" \
"ftp://$FTPHOST/$REMOTEFILE"
MD5DST=$(md5sum "$MD5TESTFILE" | cut -c 1-32)
if [ "$MD5SRC" == "$MD5DST" ]
then
echo "+Local and Remote files match!"
else
echo "-Local and Remote files don't match"
fi
If you use inotify-tools, the solution will look like this:
while ! inotifywait -e close $(dirname ${bigfile_fullname}) 2>/dev/null | \
grep -Eo "CLOSE $(basename ${bigfile_fullname})$">/dev/null
do true
done
echo "File ${bigfile_fullname} closed"
After some investigation, and discussion of the problem on other forums, I have found one more solution. Maybe it can help somebody.
There is a command lsof. It lists open files. While the file is being copied it will be open, so the command
lsof | grep filename
will return a non-empty result.
So you might want to make a while loop that waits until lsof returns nothing, and then proceed with your task.
Example:
# provide your file name here
f=<nameOfYourFile>
lsofresult=`lsof | grep $f | wc -l`
while [ $lsofresult -ne 0 ]; do
echo still copying file $f...
sleep 5
lsofresult=`lsof | grep $f | wc -l`
done; echo copying file $f is finished: `ls $f`
For the duplicate question How to check if file has been scp 100% to the remote location, which was for an expect script: to know whether a file has been transferred completely, we can add expect 100%, i.e. something like this...
expect -c "
set timeout 1
spawn scp user@$REMOTE_IP:/tmp/my.file user@$HOST_IP:/home/.
expect yes/no { send yes\r ; exp_continue }
expect password: { send $SCP_PASSWORD\r }
expect 100%
sleep 1
exit
"
if [ -f "/home/my.file" ]; then
echo "Success"
fi
If avoiding a second SSH handshake is important, you can use something like the following:
ssh host cat \> bigfile \&\& touch complete < bigfile
Then wait for the "complete" file to get created on the remote end.
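On the remote end that wait can be a simple poll (a sketch; the interval is arbitrary):
# block until the marker file shows up; after that, bigfile is complete
while [ ! -e complete ]; do
    sleep 2
done
echo "bigfile transferred completely"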
I started using "sudo rm -r" to delete files/directories. I even made it an alias for rm.
I normally know what I am doing, and I am quite an experienced Linux user.
However, I would like that when I press ENTER, before rm executes, a list of the files shows up on the screen together with a prompt at the end to OK the deletion.
The options -i, -I and -v do not do what I want: I want only one prompt for all the files printed on screen.
Thank you.
##
# Double-check files to delete.
delcheck() {
printf 'Here are the %d files you said you wanted to delete:\n' "$#"
printf '"%s"\n' "$#"
read -p 'Do you want to delete them? [y/N] ' doit
case "$doit" in
[yY]) rm "$@";;
*) printf 'No files deleted\n';;
esac
}
This is a shell function that (when used properly) will do what you want. However, if you load the function in your current shell and then try to use it with sudo, it won't do what you expect, because sudo starts a separate shell. So you'd need to make this a shell script…
#!/bin/bash
… same code as above …
# All this script does is create the function and then execute it.
# It's lazy, but functions are nice.
delcheck "$#"
…then make sure sudo can access it: put it somewhere in sudo's execution PATH (this depends on the sudo configuration). If you really want to execute it precisely as sudo rm -r *, you will still need to name the script rm (which in my opinion is dangerous) and make sure its directory comes before /bin in your PATH (also dangerous). But there you go.
Here's a nice option
Alias rm to echo | xargs -p rm
The -p option means "interactive": it will display the entire command (including any expanded file lists) and ask you to confirm.
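Concretely, the alias would be something like (a sketch):
# typing "rm file1 file2" now runs "echo | xargs -p rm file1 file2";
# xargs -p prints the full, glob-expanded command and asks y/n first
alias rm='echo | xargs -p rm'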
It will NOT ask about files that are removed recursively. But it will expand a mistyped rm -rf * .o into the full file list before asking:
rm -rf * .o
rm -rf program.cc program.cc~ program program.o backup?... # NO NO NO NO NO!
Which is much nicer than only discovering the mistake from the error
rm: cannot remove '.o': No such file or directory
Edit: corrected the solution based on chepner's comment. My previous solution had a bug :(
This simple script prompts for a y response before deleting the files specified.
rmc script file:
read -p "ok to delete? " ans
case $ans in
[yY]*) sudo rm "$#" ;;
*) echo "Nothing deleted";;
esac
Invoke thus
./rmc *.tmp
I created a script to do this. The solution is similar to @kojiro's.
Save the script with the filename del. Run sudo chmod a+rx del to make the script executable. Then add the directory containing del to your PATH by putting export PATH=$PATH:/path/to/the/del/executable in your ~/.bashrc and running source ~/.bashrc.
The syntax of rm is preserved; instead of typing rm ..., you type del ..., where del is the name of the bash script below.
#! /bin/bash
# Safely delete files
args=("$#") # store all arguments passed to shell
N=$# # number of arguments passed to shell
#echo $#
#echo $#
#echo ${args[#]:0}
echo "Files to delete:"
echo
n=`expr $N - 1`
for i in `seq 0 $n`
do
str=${args[i]}
if [ "${str:0:1}" != "-" ]; then
echo "$str"
fi
done
echo
read -r -p "Delete these files? [y/n] " response
case $response in
[yY][eE][sS]|[yY])
rm "${args[@]:0}"
esac
I am running this command in a script
while [ 1 ]
do
if [ -e $LOG ]
then
grep -A 5 -B 5 -f $PATTERNS $LOG >> $FOREMAIL
break
fi
done
The $LOG file is scp'ed from another machine, so as soon as it appears in the current directory, the while loop detects it and does the grep. The problem is that the $FOREMAIL file turns out to be empty. But if I run the same grep outside the script, as a standalone command with the same files and parameters, I can see that it generates output.
I am baffled as to why this command generates no output in the script.
The -e test triggers as soon as scp creates the file, while it still has no data in it, so grep operates on an empty file. You need to wait until the file has finished transferring.
You could accomplish this by transferring to a temporary filename, then running mv over ssh from the machine which is pushing the file up.
Edit: the code for the machine copying the log file up...
scp $log 192.168.0.1:/logfiles/${log}.tmp
ssh 192.168.0.1 mv /logfiles/${log}.tmp /logfiles/${log}
Before you can grep, you need to wait for two things: 1) the download has started (the file has come into existence) and 2) the download has finished (nobody has the file open anymore). I have a script called waitfor.sh which does this:
#!/bin/bash
# waitfor.sh - wait for a file fully downloaded (via Firefox, scp, ...)
# Syntax:
# waitfor.sh filename
FILENAME=$1 # Name of file to wait for
INTERVAL=10 # Wait interval of N seconds
# Wait for download started
while [ ! -f "$FILENAME" ]
do
sleep $INTERVAL
done
# Wait for download finished
while lsof "$FILENAME" >/dev/null
do
sleep $INTERVAL
done
To use it:
waitfor.sh $LOG
grep ...
Could it be that the while [ 1 ] loop spins very fast, so when the file starts copying it first shows up as an empty file before copying is complete? Depending on the size of the file, try a sleep delay inside the then branch. Figuring out when an external process has finished copying a file is probably a separate question; e.g. googling for something like "how to tell when scp has finished copying a file" turns up links like: https://superuser.com/questions/45224/is-there-a-way-to-tell-if-a-file-is-done-copying
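Along those lines, a sketch (the delay is a guess and must exceed the remaining transfer time):
while [ 1 ]
do
    if [ -e "$LOG" ]
    then
        sleep 30 # crude: assume the transfer finishes within 30 seconds
        grep -A 5 -B 5 -f "$PATTERNS" "$LOG" >> "$FOREMAIL"
        break
    fi
done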
Better to use:
if [ -f $LOG ]
instead of:
if [ -e $LOG ]
-f checks for a regular file
-e checks for any type of file
Here's what I ended up doing:
scp $LOGFILE
then
scp $SCPDONE # empty file
And modified the if clause like this:
while [ 1 ]
do
if [ -e $SCPDONE ]
then
grep -A 5 -B 5 -f $PATTERNS $LOG >> $FOREMAIL
break
fi
done
I have a bash program that will write to an output file. This file may or may not exist, but the script must check permissions and fail early. I can't find an elegant way to make this happen. Here's what I have tried.
set +e
touch $file
set -e
if [ $? -ne 0 ]; then exit;fi
I keep set -e on for this script so it fails if there is ever an error on any line. Is there an easier way to do the above script?
Why complicate things?
file=exists_and_writeable
if [ ! -e "$file" ] ; then
touch "$file"
fi
if [ ! -w "$file" ] ; then
echo "cannot write to $file"
exit 1
fi
Or, more concisely,
{ [ -e "$file" ] || touch "$file"; } && [ -w "$file" ] || { echo "cannot write to $file"; exit 1; }
Rather than check $? on a different line, check the return value immediately like this:
touch file || exit
As long as your umask doesn't restrict the write bit from being set, you can just rely on the return value of touch
You can use -w to check if a file is writable (search for it in the bash man page).
if [[ ! -w $file ]]; then exit; fi
Why must the script fail early? Separating the writability test from the actual open() of the file introduces a race condition. Instead, why not try to open (truncate/append) the file for writing and deal with the error if it occurs? Something like:
$ echo foo > output.txt
$ if [ $? -ne 0 ]; then echo "Couldn't write to output.txt" >&2; exit 1; fi
As others mention, the "noclobber" option might be useful if you want to avoid overwriting existing files.
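A sketch of the noclobber idea:
# with noclobber set, > refuses to overwrite an existing file, and
# the failed redirection's exit status is what we branch on
set -o noclobber
if ! echo foo > output.txt 2>/dev/null; then
    echo "output.txt already exists (or is not writable)" >&2
    exit 1
fi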
Open the file for writing. In the shell, this is done with an output redirection. You can redirect the shell's standard output by putting the redirection on the exec built-in with no argument.
set -e
exec >shell.out # exit if shell.out can't be opened
echo "This will appear in shell.out"
Make sure you haven't set the noclobber option (which is useful interactively but often unusable in scripts). Use > if you want to truncate the file if it exists, and >> if you want to append instead.
If you only want to test permissions, you can run : >foo.out to create the file (or truncate it if it exists).
If you only want some commands to write to the file, open it on some other descriptor, then redirect as needed.
set -e
exec 3>foo.out
echo "This will appear on the standard output"
echo >&3 "This will appear in foo.out"
echo "This will appear both on standard output and in foo.out" | tee /dev/fd/3
(/dev/fd is not supported everywhere; it's available at least on Linux, *BSD, Solaris and Cygwin.)