Add a line to a file section unless it exists - linux

I have a file that looks like this:
...
%ldirs
(list of line-separated directories)
...
With a shell script, I need to add a directory to the list in that file, but only if that directory is not already in the list. Here's the catch: The directory in question must come from a variable $SOME_PATH.
I thought about using the patch utility, but to do that I would have to generate the patch file dynamically to add "+$SOME_PATH". The other problem is that I do not know the "after context" or the line number of "%ldirs", so generating the patch file is problematic.
Is there another option?
Tweaked answer - Thanks to Rob:
line=$(grep "$SOME_PATH" /path/to/file)
if [ $? -eq 1 ]
then
sed -i "/%ldirs/ a\\$SOME_PATH" /path/to/file
fi
Final answer - Thanks to tripleee:
fgrep -xq "$SOME_PATH" /path/to/file || sed -i "/%ldirs/ a\\$SOME_PATH" /path/to/file
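For illustration, a minimal demo of that one-liner (the sample file contents and paths are made up; grep -Fxq is the modern spelling of fgrep -xq, and -i assumes GNU sed):
printf '%s\n' 'before' '%ldirs' '/usr/local/lib' 'after' > /tmp/demo
SOME_PATH=/opt/new/dir
grep -Fxq "$SOME_PATH" /tmp/demo || sed -i "/%ldirs/ a\\$SOME_PATH" /tmp/demo
cat /tmp/demo   # /opt/new/dir now sits directly under %ldirs; rerunning changes nothing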

line=$(grep "$SOME_PATH" /path/to/file)
if [ $? -eq 1 ]
then
    echo "$SOME_PATH" >> /path/to/file
fi
Something like this should work; it worked fine for me. I'm sure there are other ways to write it, too. (Note that this appends to the end of the file rather than inside the %ldirs section, which is why the sed-based tweak above inserts after the %ldirs marker instead.)
line=$(grep "$SOME_PATH" /path/to/file)
if [ $? -eq 1 ]
then
    sed -i "s|%ldirs|%ldirs\n$SOME_PATH|" /path/to/file
fi
should work. It'll find %ldirs and replace it with %ldirs(newline)$SOME_PATH. Note that the sed expression must be in double quotes, not single quotes, or the shell will never expand $SOME_PATH; and since $SOME_PATH contains slashes, a different delimiter such as | keeps the s command from breaking.

Silent while loop in bash

I am looking to create a bash script that keeps checking for a file in a directory and performs a certain operation on it. I am using a while loop; if the file does not exist, I want the while loop to stay quiet and keep checking the condition. Here is what I created, but it keeps throwing an error that the file is not found when the file is not there.
while [ ! -f /home/master/applications/tmp/mydata.txt ]
do
cat mydata.txt;
rm mydata.txt;
sleep 1; done
There are two issues in your implementation:
1. You should use the same (absolute or relative) path in your while loop test statement [ ! -f $file ] and in your cat and rm commands. As written, the cat command looks for the file in the current working directory (pwd) while your while statement checks an absolute path, so the implementation is buggy and won't work as expected unless your pwd is /home/master/applications/tmp.
2. You need to move your cat and rm commands after the while block. It doesn't make sense to cat a file that doesn't exist. I think you misplaced those commands.
Try this:
file="/home/master/applications/tmp/mydata.txt"
while [ ! -f "$file" ]
do
sleep 1
done
cat "$file"
rm "$file"
EDIT
As per the suggestion from @Ivan, you could use until instead of while, as it suits your requirements better.
file="/home/master/applications/tmp/mydata.txt"
until [ -f "$file" ]; do sleep 1; done
cat "$file"
rm "$file"
Making a different assumption than abhiarora, I'll guess maybe you meant for the file to reappear, and you want it shown each time.
file=/home/master/applications/tmp/mydata.txt
while :
do
    if [[ -f "$file" ]]
    then
        echo "$(<"$file")"
        rm "$file"
    fi
    sleep 1
done
This creates an infinite loop. If that's NOT what you wanted, use abhiarora's solution.

Extracting files that don't have a dir with the same name

Sorry for the odd title. I didn't know how to word it the right way.
I'm trying to write a script to sort my wiki files into those that have directories with the same name and those that don't. I'll elaborate further.
Here is my file system (directory tree image not included):
What I need to do is print a list of those files which have directories with the same name, and another list of those without.
So my ultimate goal is getting:
with dirs:
Docs
Eng
Python
RHEL
To_do_list
articals
without dirs:
orphan.txt
orphan2.txt
orphan3.txt
I managed to get those files with dirs. Here is my code:
getname () {
file=$( basename "$1" )
file2=${file%%.*}
echo $file2
}
for d in Mywiki/* ; do
if [[ -f $d ]]; then
file=$(getname $d)
for x in Mywiki/* ; do
dir=$(getname $x)
if [[ -d $x ]] && [ $dir == $file ]; then
echo $dir
fi
done
fi
done
but I'm stuck on getting the ones without. If this is the wrong way of doing this, please point out the right one.
Any help appreciated. Thanks.
Here's a quick attempt.
for file in Mywiki/*.txt; do
nodir=${file##*/}
test -d "${file%.txt}" && printf "%s\n" "$nodir" || printf "%s\n" "$nodir" >&3
done >with 3>without
This shamelessly uses standard output for the non-orphans. More robustly, you could open a separate file descriptor for each list.
Also notice how everything needs to be quoted unless you specifically require the shell to do whitespace tokenization and wildcard expansion on the value of a token.
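A sketch of that more robust variant, with a dedicated descriptor for each list instead of borrowing standard output:
exec 3>without 4>with
for file in Mywiki/*.txt; do
    nodir=${file##*/}
    if test -d "${file%.txt}"; then
        printf "%s\n" "$nodir" >&4   # has a matching directory
    else
        printf "%s\n" "$nodir" >&3   # orphan
    fi
done
exec 3>&- 4>&-   # close both descriptors when done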
That may not be the most efficient way of doing it, but you could take all the files, remove the extension, and then check whether there isn't a directory with that name.
Like this (untested code):
for file in Mywiki/* ; do
    if [ -f "$file" ]; then
        dirname=$(getname "$file")   # getname is the helper function from the question
        if [ ! -d "Mywiki/$dirname" ]; then
            echo "$file"
        fi
    fi
done
To list all the files in the current dir:
list1=`ls -p | grep -v /`
To list all the files in the current dir without an extension:
list2=`ls -p | grep -v / | sed 's/\.[^.]*$//'`
To list all the directories in the current dir:
list3=`ls -d */ | sed -e "s/\///g"`
Now you can get the desired directory listing by taking the intersection of list2 and list3 (see the question "Intersection of two lists in Bash").
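One way to take that intersection is with comm (a sketch; it assumes the names contain no spaces or newlines, and the unquoted $list2/$list3 is intentional so each word becomes its own line):
comm -12 <(printf '%s\n' $list2 | sort) <(printf '%s\n' $list3 | sort)
comm -12 prints only the lines common to both sorted inputs, i.e. the files that have a directory of the same name.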

Copy multiple files with bash script from command line arguments?

I want to create a script that allows me to enter multiple filenames from the command line, and have the script copy those files to another directory. This is what I am trying but I keep getting an error of
line 10: binary operator expected
#!/bin/bash
DIRECTORY=/.test_files
FILE=$*
if [ -e $DIRECTORY/$FILE ]; then
echo "File already exists"
else
cp $FILE $DIRECTORY
fi
So if the script was named copyfiles.sh, I am writing...
./copyfiles.sh doc1.txt doc2.txt
It will copy the files, but if they already exist, it won't print the error message.
Also I get the "line 10: binary operator expected" error regardless of whether the files are there or not. Can anyone tell me what I am doing wrong?
As a possible problem: if you had a filename with a space, or multiple arguments, $* would have spaces in it, so [ -e $DIRECTORY/$FILE ] would expand to many words, like [ -e /.test_files/First word and more ], and -e expects just one word after it. Try putting it in quotes, like
if [ -e "$DIRECTORY/$FILE" ]
Of course, you may only want to store $1 in $FILE to get just the first argument.
To test all the arguments, loop over them and test each one with something like:
for FILE in "$@"; do
    if [ -e "$DIRECTORY/$FILE" ]; then
        echo "$FILE already exists"
    else
        cp "$FILE" "$DIRECTORY"
    fi
done
Using quotes around $@ preserves the spaces in the original arguments as well.
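For example, invoked with a hypothetical filename containing a space:
./copyfiles.sh "my notes.txt" doc2.txt
Each argument is then tested and copied individually, spaces intact.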

Bash: Create a file if it does not exist, otherwise check to see if it is writeable

I have a bash program that will write to an output file. This file may or may not exist, but the script must check permissions and fail early. I can't find an elegant way to make this happen. Here's what I have tried.
set +e
touch $file
set -e
if [ $? -ne 0 ]; then exit;fi
I keep set -e on for this script so it fails if there is ever an error on any line. Is there an easier way to do the above script?
Why complicate things?
file=exists_and_writeable
if [ ! -e "$file" ] ; then
touch "$file"
fi
if [ ! -w "$file" ] ; then
echo cannot write to $file
exit 1
fi
Or, more concisely,
( [ -e "$file" ] || touch "$file" ) && [ ! -w "$file" ] && echo cannot write to $file && exit 1
Rather than check $? on a different line, check the return value immediately like this:
touch file || exit
As long as your umask doesn't restrict the write bit from being set, you can just rely on the return value of touch
You can use -w to check if a file is writable (search for it in the bash man page).
if [[ ! -w $file ]]; then exit; fi
Note that -w is false for a file that doesn't exist, so on its own this only covers the case where the file is already there; combine it with a touch (or a writability check on the directory) for the may-not-exist case.
Why must the script fail early? By separating the writable test and the file open() you introduce a race condition. Instead, why not try to open (truncate/append) the file for writing, and deal with the error if it occurs? Something like:
echo foo > output.txt || { echo "Couldn't echo foo" >&2; exit 1; }
As others mention, the "noclobber" option might be useful if you want to avoid overwriting existing files.
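A minimal sketch of that noclobber idea (the filename is made up):
set -o noclobber
echo foo > output.txt || { echo "output.txt already exists" >&2; exit 1; }
With noclobber set, > refuses to overwrite an existing file, so the redirection itself fails and the error branch runs.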
Open the file for writing. In the shell, this is done with an output redirection. You can redirect the shell's standard output by putting the redirection on the exec built-in with no argument.
set -e
exec >shell.out # exit if shell.out can't be opened
echo "This will appear in shell.out"
Make sure you haven't set the noclobber option (which is useful interactively but often unusable in scripts). Use > if you want to truncate the file if it exists, and >> if you want to append instead.
If you only want to test permissions, you can run : >foo.out to create the file (or truncate it if it exists).
If you only want some commands to write to the file, open it on some other descriptor, then redirect as needed.
set -e
exec 3>foo.out
echo "This will appear on the standard output"
echo >&3 "This will appear in foo.out"
echo "This will appear both on standard output and in foo.out" | tee /dev/fd/3
(/dev/fd is not supported everywhere; it's available at least on Linux, *BSD, Solaris and Cygwin.)

Renaming a set of files to 001, 002, ...

I originally had a set of images of the form image_001.jpg, image_002.jpg, ...
I went through them and removed several. Now I'd like to rename the leftover files back to image_001.jpg, image_002.jpg, ...
Is there a Linux command that will do this neatly? I'm familiar with rename but can't see anything to order file names like this. I'm thinking that since ls *.jpg lists the files in order (with gaps), the solution would be to pass the output of that into a bash loop or something?
If I understand right, you have e.g. image_001.jpg, image_003.jpg, image_005.jpg, and you want to rename to image_001.jpg, image_002.jpg, image_003.jpg.
EDIT: This is modified to put the temp file in the current directory. As Stephan202 noted, this can make a significant difference if the temp directory is on a different filesystem. To avoid hitting the temp file in the loop, it now goes through image* instead of *.
i=1; temp=$(mktemp -p .); for file in image*
do
    mv "$file" "$temp"
    mv "$temp" "$(printf 'image_%03d.jpg' "$i")"
    i=$((i + 1))
done
A simple loop (test with echo, execute with mv):
I=1
for F in *; do
    echo "$F" `printf image_%03d.jpg $I`
    #mv "$F" `printf image_%03d.jpg $I` 2>/dev/null || true
    I=$((I + 1))
done
(I added 2>/dev/null || true to suppress warnings about identical source and target files. If this is not to your liking, go with Matthew Flaschen's answer.)
Some good answers here already, but some rely on hiding errors, which is not a good idea (that assumes mv will only error because of a condition that is expected; what about all the other reasons mv might error?).
Moreover, it can be done a little shorter and should be better quoted:
for file in *; do
    printf -v sequenceImage 'image_%03d.jpg' "$((++i))"
    [[ -e $sequenceImage ]] || \
        mv "$file" "$sequenceImage"
done
Also note that you shouldn't capitalize your variables in bash scripts.
Try the following script:
numerate.sh
This code snippet should do the job:
./numerate.sh -d <your image folder> -b <start number> -L 3 -p image_ -s .jpg -o numerically -r
This does the reverse of what you are asking (taking files of the form *.jpg.001 and converting them to *.001.jpg), but can easily be modified for your purpose:
for file in *
do
    # In bash >= 3.2 the regex must be unquoted (or kept in a variable, as here)
    # to be treated as a pattern rather than a literal string.
    re='(.*)\.([[:alpha:]]+)\.([[:digit:]]{3,})$'
    if [[ "$file" =~ $re ]]
    then
        mv "${BASH_REMATCH[0]}" "${BASH_REMATCH[1]}.${BASH_REMATCH[3]}.${BASH_REMATCH[2]}"
    fi
done
I was going to suggest something like the above using a for loop, an iterator, cut -f1 -d "_", and then mv. It looks like it's already covered in other ways, though.
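Roughly what that would have looked like (an untested sketch; it assumes the leftover files still match the image_NNN.jpg shape):
i=1
for f in *_*.jpg; do
    base=$(printf '%s' "$f" | cut -f1 -d "_")   # e.g. "image" from "image_007.jpg"
    new=$(printf '%s_%03d.jpg' "$base" "$i")
    [ "$f" = "$new" ] || mv "$f" "$new"         # skip names that are already correct
    i=$((i + 1))
done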
