Which command is used for checking if file is not open in linux? [duplicate] - linux

This question already has answers here:
How find out which process is using a file in Linux?
(4 answers)
Closed 3 years ago.
The lsof command lists open files in Linux. Which command is used for checking that a file is not open? I want to use it in my script.
my condition is
do
    if [[ `lsof | grep $r_error_file` ]]
    then
        error_text=$error_text$(tail -n +1 $r_error_file | grep 'Error\')
        mv $r_error_file $(dirname ${r_error_file})/BkError/$(filename ${r_error_file})
    fi
done

Use the fuser command. It exits with a non-zero status when no process is using the file:
fuser $filename
if [ $? -ne 0 ]
then
    # file is not open; add your code here
fi

You need the case where the lsof statement is false. There's a useful list of methods for manipulating truth in if statements here; but to spoil your fun searching, you're looking for:
if [[ ! `lsof | grep $r_error_file` ]]
then
    ...
fi
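Putting the pieces together, a sketch of the full check-and-move logic from the question, using fuser (from the other answer) because its exit status directly answers "is this file open?". The path is a placeholder, and the BkError directory is assumed to exist:

```shell
#!/bin/bash
# Sketch: archive an error file only when no process has it open.
# fuser -s is silent and exits non-zero when the file is not in use.
r_error_file="/path/to/errors.log"   # placeholder path

if ! fuser -s "$r_error_file" 2>/dev/null; then
    # File is not open: collect the error lines and archive the file.
    error_text="$error_text$(grep 'Error' "$r_error_file")"
    mv "$r_error_file" "$(dirname "$r_error_file")/BkError/$(basename "$r_error_file")"
fi
```

Quoting the variables guards against paths containing spaces.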


Shell script which prints error message when package not found [duplicate]

This question already has answers here:
How do I suppress shell script error messages?
(6 answers)
Detect if executable file is on user's PATH [duplicate]
(7 answers)
Closed 1 year ago.
I'm writing a shell script, and I need to check for some dependencies being installed before executing anything. I found I can use which <package> to see if it is installed or not. The problem is that when that dependency is not found, it throws the following error into console's output:
which: no abc in (/home/pace/.emacs.d/bin:/usr/local/bin:/home/pace/.emacs.d/bin:/usr/local/bin:/home/pace/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:...)
I want to avoid such output, as I already show error messages when something fails. How can I prevent which from writing anything?
function is_installed() {
    if [[ ! $(which $1) ]]
    then
        echo "[ERROR]: $1 $2"
        exit 1
    fi
}
Well, there might be better ways to do what you're trying to do (I'm not certain of the "best" way), but you can redirect stderr and stdout to hide the results from the output:
function is_installed() {
    if [[ ! $(which $1 > /dev/null 2>&1 ) ]]
    then
        echo "[ERROR]: $1 $2"
        exit 1
    fi
}
(recent versions of bash support using >& /dev/null too to do both at once, but the above is slightly more portable)
EDIT -- try this instead
function is_installed() {
    which $1 > /dev/null 2>&1
    if [ $? = 1 ] ; then
        echo "[ERROR]: $1 $2"
        exit 1
    fi
}
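As a side note (not from the answers above): POSIX specifies the shell builtin `command -v`, which exits non-zero and prints nothing when the name is not found, so no redirection is needed at all:

```shell
#!/bin/sh
# command -v is a POSIX builtin: it exits non-zero and stays silent
# (no stderr noise) when its argument is not an available command.
is_installed() {
    if ! command -v "$1" > /dev/null; then
        echo "[ERROR]: $1 $2"
        exit 1
    fi
}

is_installed ls "is required"   # ls exists, so this prints nothing
```

Unlike which, command -v is a builtin, so it also finds shell functions and aliases; whether that counts as "installed" depends on your use case.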

Is it possible to do watch logfile with tail -f and pipe updates/changes over netcat to another local system? [duplicate]

This question already has answers here:
Piping tail output though grep twice
(2 answers)
Closed 4 years ago.
There is a file located at $filepath, which grows gradually. I want to print every line that starts with an exclamation mark:
while read -r line; do
    if [ -n "$(grep ^! <<< "$line")" ]; then
        echo "$line"
    fi
done < <(tail -F -n +1 "$filepath")
Then, I rearranged the code by moving the comparison expression into the process substitution to make the code more concise:
while read -r line; do
    echo "$line"
done < <(tail -F -n +1 "$filepath" | grep '^!')
Sadly, it doesn't work as expected; nothing is printed to the terminal (stdout).
I prefer to write grep ^\! after tail. Why doesn't the second snippet work? Why does moving the pipe into the process substitution make a difference?
PS1. This is how I manually produce the gradually growing file by randomly executing one of the following commands:
echo ' something' >> "$filepath"
echo '!something' >> "$filepath"
PS2. Test under GNU bash, version 4.3.48(1)-release and tail (GNU coreutils) 8.25.
grep is not line-buffered when its stdout isn't connected to a tty. So it's trying to process a block (usually 4 KiB or 8 KiB or so) before generating some output.
You need to tell grep to buffer its output by line. If you're using GNU grep, this works:
done < <(tail -F -n +1 "$filepath" | grep --line-buffered '^!')
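`--line-buffered` is a GNU grep extension. If your grep lacks it, `stdbuf` from GNU coreutils can impose line buffering from the outside; a small demonstration of the effect, assuming stdbuf is available:

```shell
#!/bin/bash
# stdbuf -oL sets grep's stdout to line buffering even in a pipeline,
# so each matching line is emitted as soon as it is read rather than
# after a 4-8 KiB block fills up.
printf ' something\n!something\n' | stdbuf -oL grep '^!'
# prints: !something
```

In the process substitution above this becomes `tail -F -n +1 "$filepath" | stdbuf -oL grep '^!'`.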

How to check whether a directory is empty or not in Shell Scripting? [duplicate]

This question already has answers here:
Checking from shell script if a directory contains files
(30 answers)
How do I check if a folder has contents? [duplicate]
(3 answers)
Closed 6 years ago.
I have a directory. It is empty: if I run ls -lrt, it shows total 0.
How do I write an if condition that performs something only if the directory is empty?
I mean, how do I capture that 0 value?
From here. This should help you run your statements within the if-else block. I saved the directory path in the DIR variable:
#!/bin/bash
FILE=""
DIR="/empty_dir"
# init
# look for empty dir
if [ "$(ls -A $DIR)" ]; then
    echo "Take action: $DIR is not empty"
else
    echo "$DIR is empty"
fi
# rest of the logic
Remove the -A option:
$ mkdir /tmp/aaa
$ ls /tmp/aaa
$ a=`ls /tmp/aaa`
$ [[ -z $a ]]
$ echo $?
0
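Parsing ls output is fragile (aliases, unusual filenames). An alternative sketch uses find, which prints nothing for an empty directory; this assumes GNU or BSD find (`-mindepth`, `-maxdepth`, and `-quit` are not in POSIX), and the directory here is a throwaway created just for the demonstration:

```shell
#!/bin/sh
# find prints the first entry (if any) and quits immediately; empty
# output means the directory contains nothing, hidden files included.
dir="$(mktemp -d)"   # demonstration: a fresh, empty directory

if [ -z "$(find "$dir" -mindepth 1 -maxdepth 1 -print -quit)" ]; then
    echo "$dir is empty"
fi

rmdir "$dir"
```

Because it checks for the first entry only, this stays fast even on huge directories.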

Trying to call a script while passing two arguments [duplicate]

This question already has answers here:
Attempting to pass two arguments to a called script for a pattern search
(2 answers)
Closed 9 years ago.
I have a script that greps with $1 and $2, first argument being a pattern and second being a file.
I need to create another script that calls this first one, passes the two arguments to it, and if the second is a directory, loops it on all the files in the directory.
Does anyone know how I'd go about this? I keep coming close but failing miserably.
EDIT
I thought the other post I made didn't go through; somehow it got lost. I apologize to everyone, so sorry.
Please forgive me. :(
if [[ -d $2 ]]; then
    find "$2" -type f -exec ./script "$1" {} \;
else
    ./script "$1" "$2"
fi
If $2 is a directory then the find command finds all of the files in it and calls ./script once for each file. The curly braces {} are a placeholder for these file names.
Something like:
[[ -d "$2" ]] && grep -e "$1" -r "$2" || grep -e "$1" "$2"
It tests whether arg 2 is a directory (bash syntax); if so, it invokes grep in recursive mode, otherwise in non-recursive mode.
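One caveat with the `a && b || c` idiom: `c` also runs when `b` itself fails (for example, when grep exits non-zero because nothing matched), so it is not a strict if/else. An explicit conditional avoids that edge case; a sketch of the same logic:

```shell
#!/bin/sh
# If $2 is a directory, search it recursively; otherwise search
# the single file. grep's exit status no longer triggers a retry.
if [ -d "$2" ]; then
    grep -e "$1" -r "$2"
else
    grep -e "$1" "$2"
fi
```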

How to tail -f the latest log file with a given pattern

I work with some log system which creates a log file every hour, like follows:
SoftwareLog.2010-08-01-08
SoftwareLog.2010-08-01-09
SoftwareLog.2010-08-01-10
I'm trying to follow the latest log file matching a pattern (e.g. SoftwareLog*), and I realize there's:
tail -F (tail --follow=name --retry)
but that only follows one specific name, and these files have different names by date and hour. I tried something like:
tail --follow=name --retry SoftwareLog*(.om[1])
but the wildcard is resolved before it gets passed to tail and isn't re-evaluated each time tail retries.
Any suggestions?
I believe the simplest solution is as follows:
tail -f `ls -tr | tail -n 1`
Now, if your directory contains other log files like "SystemLog" and you only want the latest "SoftwareLog" file, then you would simply include a grep as follows:
tail -f `ls -tr | grep SoftwareLog | tail -n 1`
[Edit: after a quick googling for a tool]
You might want to try out multitail - http://www.vanheusden.com/multitail/
If you want to stick with Dennis Williamson's answer (and I've +1'ed him accordingly) here are the blanks filled in for you.
In your shell, run the following script (or its zsh equivalent; I whipped this up in bash before I saw the zsh tag):
#!/bin/bash
TARGET_DIR="some/logfiles/"
SYMLINK_FILE="SoftwareLog.latest"
SYMLINK_PATH="$TARGET_DIR/$SYMLINK_FILE"

function getLastModifiedFile {
    echo $(ls -t "$TARGET_DIR" | grep -v "$SYMLINK_FILE" | head -1)
}

function getCurrentlySymlinkedFile {
    if [[ -h $SYMLINK_PATH ]]
    then
        echo $(ls -l $SYMLINK_PATH | awk '{print $NF}')
    else
        echo ""
    fi
}

symlinkedFile=$(getCurrentlySymlinkedFile)
while true
do
    sleep 10
    lastModified=$(getLastModifiedFile)
    if [[ $symlinkedFile != $lastModified ]]
    then
        ln -nsf $lastModified $SYMLINK_PATH
        symlinkedFile=$lastModified
    fi
done
Background that process using the normal method (again, I don't know zsh, so it might be different)...
./updateSymlink.sh > /dev/null 2>&1 &
Then tail -F $SYMLINK_PATH so that tail handles the changing of the symbolic link or a rotation of the file.
This is slightly convoluted, but I don't know of another way to do this with tail. If anyone else knows of a utility that handles this, then let them step forward because I'd love to see it myself too - applications like Jetty by default do logs this way and I always script up a symlinking script run on a cron to compensate for it.
[Edit: Removed an erroneous 'j' from the end of one of the lines. You also had a bad variable name: "lastModifiedFile" didn't exist; the proper name you set is "lastModified".]
I haven't tested this, but an approach that may work would be to run a background process that creates and updates a symlink to the latest log file and then you would tail -f (or tail -F) the symlink.
#!/bin/bash
PATTERN="$1"

# Try to make sure sub-shells exit when we do.
trap "kill -9 -- -$BASHPID" SIGINT SIGTERM EXIT

PID=0
OLD_FILES=""
while true; do
    FILES="$(echo $PATTERN)"
    if test "$FILES" != "$OLD_FILES"; then
        if test "$PID" != "0"; then
            kill $PID
            PID=0
        fi
        if test "$FILES" != "$PATTERN" || test -f "$PATTERN"; then
            tail --pid=$$ -n 0 -F $PATTERN &
            PID=$!
        fi
    fi
    OLD_FILES="$FILES"
    sleep 1
done
Then run it as: tail.sh 'SoftwareLog*'
The script will lose some log lines if the logs are written to between checks. But at least it's a single script, with no symlinks required.
We have daily rotating log files as: /var/log/grails/customer-2020-01-03.log. To tail the latest one, the following command worked fine for me:
tail -f /var/log/grails/customer-`date +'%Y-%m-%d'`.log
(NOTE: no space after the + sign in the expression)
So, for you, the following should work (if you are in the same directory of the logs):
tail -f SoftwareLog.`date +'%Y-%m-%d-%H'`
I believe the easiest way is to use tail with ls and head. Try something like this:
tail -f `ls -t SoftwareLog* | head -1`
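Note that parsing ls output breaks on filenames containing whitespace or newlines. If that matters, a glob loop with the `-nt` (newer-than) test picks the newest match without parsing ls at all; a bash sketch using the pattern from the question:

```shell
#!/bin/bash
# Pick the newest file matching the pattern by comparing mtimes.
newest=
for f in SoftwareLog.*; do
    [ -e "$f" ] || continue            # skip if the glob matched nothing
    if [ -z "$newest" ] || [ "$f" -nt "$newest" ]; then
        newest="$f"
    fi
done
# tail -f "$newest"                    # then follow the winner
```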
