Change file's name using command line arguments Bash [duplicate]

This question already has answers here:
Change file's numbers Bash
(2 answers)
Closed 2 years ago.
I need to implement a script (duplq.sh) that would rename all the text files existing in the current directory using the command line arguments. So if the command duplq.sh pic 0 3 was executed, it would do the following transformation:
pic0.txt will have to be renamed pic3.txt
pic1.txt to pic4.txt
pic2.txt to pic5.txt
pic3.txt to pic6.txt
etc…
So the first argument is always the name of a file; the second and the third are always a positive number.
I also need to make sure that when I execute my script, the first renaming (pic0.txt to pic3.txt) does not erase the existing pic3.txt file in the current directory.
Here's what I did so far:
#!/bin/bash
name="$1"
i="$2"
j="$3"
for file in $name*
do
    echo $file
    find /var/log -name 'name[$i]' | sed -e 's/$i/$j/g'
    i=$(($i+1))
    j=$(($j+1))
done
But the find command does not seem to work. Do you have other solutions?

The problem you're trying to solve is actually somewhat tricky, and I don't think you've fully thought it through. For instance, what's the difference between duplq.sh pic 0 3 and duplq.sh pic 2 5 -- it looks like both should just add 3 to the number, or would the second skip "pic0.txt" and "pic1.txt"? And what effect would either one have on files named "pic", "pic.txt", "picture.txt", "picture2.txt", "pic2-2.txt", or "pic999.txt"?
There are also a bunch of basic mistakes in the script you have so far:
You should (almost) always put variable references in double-quotes, to avoid unexpected word-splitting and wildcard expansion. So, for example, use echo "$file" instead of echo $file. In for file in $name*, you should put double-quotes around the variable but not the *, because you want that to be treated as a wildcard. Hence, the correct version is for file in "$name"*
Don't put variable references in single-quotes, they aren't expanded there. So in the find and sed commands, you aren't passing the variables' values, you're passing literal dollar signs followed by letters. Again, use double-quotes. Also, you don't have a "$" before "name", so it won't be treated as a variable even in double-quotes.
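The two quoting rules above can be seen side by side in a few lines (the filename here is just an example):

```shell
#!/bin/bash
# Unquoted expansion splits on whitespace; single quotes
# suppress variable expansion entirely.
file="two words.txt"
printf '<%s>\n' $file      # two arguments: <two> then <words.txt>
printf '<%s>\n' "$file"    # one argument: <two words.txt>

i=5
echo 'name$i'              # prints the literal string name$i
echo "name$i"              # prints name5
```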
But the find and sed commands don't do what you want anyway. Consider find /var/log -name "name[1]" -- that looks for files named exactly "name1", not "name1" + some extension. And it looks in /var/log and all its subdirectories rather than the current directory, which I'm pretty sure you don't want. And the "1" ("$i") may not be the number in the current filename. Suppose there are files named "pic0.jpg", "pic0.png", and "pic0.txt" -- on the first iteration, the loop might find all three with a pattern like "pic0*", then on the second and third iterations try to find "pic1*" and "pic2*", which don't exist. On the other hand, suppose there are files named "pic0.txt", "pic5.txt", and "pic8.txt" -- again, it might look for "pic0*" (ok), then "pic1*" (not found), and then "pic2*" (ditto).
Also, if you get to multi-digit numbers, the pattern "name[10]" will match "name0" and "name1", but not "name10" -- brackets define a set of single characters, not a multi-digit number. I don't know why you added the brackets there, but they don't do anything you'd want.
You already have the files being listed one at a time in the $file variable, searching again with different criteria just adds confusion.
Also, at no point in the script do you actually rename anything. The find | sed line will (if it works) print the new name for the file, but not actually rename it.
BTW, when you do use the mv command, use either mv -n or mv -i to keep it from silently and irretrievably overwriting files if/when a name conflict occurs.
To prevent overwriting when incrementing file numbers, you need to do the renames in reverse numeric order (i.e. rename "pic3.txt" to "pic6.txt" before renaming "pic0.txt" to "pic3.txt"). This is especially tricky because if you just sort filenames in reverse alphabetic order, you'll get "pic7.txt" before "pic10.txt". But you can't do a numeric sort without removing the "pic" and ".txt" parts first.
IMO this is actually the trickiest problem to be solved in order to get this script to work right. It might be simplest to specify the largest index number as one of the arguments, and have it start there and count down to 0 (looping over numbers rather than files), and then for each number iterate over matching files (e.g. "pic0.jpg", "pic0.png", and "pic0.txt").
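A sketch of that count-down approach (the throwaway demo directory, the sample files, and the fixed delta of 3 are my own additions for illustration; a real duplq.sh would take them from the arguments):

```shell
#!/bin/bash
# Demo: shift picN.txt up by 3, renaming in reverse numeric order
# so pic3 -> pic6 happens before pic0 -> pic3.
demo=$(mktemp -d)
cd "$demo"
for i in 0 1 2 3; do echo "content $i" > "pic$i.txt"; done

name=pic; start=0; delta=3

# Find the highest existing index for this prefix.
max=-1
for f in "$name"[0-9]*.txt; do
    n=${f#"$name"}; n=${n%.txt}
    if [ "$n" -gt "$max" ]; then max=$n; fi
done

# Count down from the highest index; mv -n refuses to
# overwrite in case of an unexpected name collision.
for (( i=max; i>=start; i-- )); do
    src="$name$i.txt"
    [ -e "$src" ] && mv -n "$src" "$name$((i+delta)).txt"
done

ls     # pic3.txt pic4.txt pic5.txt pic6.txt
```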

So I assume that 0 3 just specifies the difference between the old number and the new number, and is equivalent to 1 4 or 100 103.
To avoid overwriting existing files, create a new temp dir, move all affected files there, and move all of them back in the end.
#!/bin/bash
#
# duplq.sh pic 0 3
base="$1"
delta=$(( $3 - $2 ))
# echo delta $delta
target=$(mktemp -d)
echo $target
# /tmp/tmp.7uXD2GzqAb
add () {
    f="$1"
    b="$2"
    d=$3
    num=${f#./${b}}
    # echo -e "file: $f \tnum: $num \tnum + d: $((num + d))"
    echo -e "$((num + d))"
}
for f in $(find -maxdepth 1 -type f -regex ".*/${base}[0-9]+")
do
    newnum=$(add "$f" "${base}" $delta)
    echo mv "$f" "$target/${base}$newnum"
done
# exit
echo mv $target/${base}* .
First I tried to use just bash syntax, to check whether removing the prefix (pic) leaves only digits remaining. I also didn't handle the extension .txt -- that is left as an exercise for the reader. The question never explicitly says that all files share the same extension, although all files in the example do.
With -regex ".*/${base}[0-9]+" in find, the matched suffixes are guaranteed to be just digits.
num=${f#./${b}}
removes the base ("pic") from file f. Delta d is then added.
Instead of really moving, I just echoed the mv command.
#TODO: Implement the file name extension conservation.
Two other pitfalls came to my mind: if you have the three files pic0, pic00 and pic000, they will all be renamed to pic3. And pic08 will be split into pic and 08; 08 will then be interpreted as an octal number (likewise 09 or 012129 and so on) and lead to an error.
One way to solve this is to prepend the extracted number (001 or 018) with a "1", then add 3, and remove the leading 1 again:
001 -> 1001 -> 1004 -> 004
018 -> 1018 -> 1021 -> 021
but this clever solution leads to new problems:
999 -> 1999 -> 2002 -> 002?
So a leading 1 has to be cut off, and a leading 2 has to be reduced by 1. But now, if the delta is bigger, let's say 300:
018 -> 1018 -> 1318 -> 318
918 -> 1918 -> 2218 -> 1218
Well -- that seems to be working.
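The whole trick fits in a small helper (the function name shift_padded is mine; the arithmetic is exactly the prepend-add-strip sequence above, with the "cut off the leading 1 / reduce a leading 2" step done generically by subtracting the prepended 10^width):

```shell
#!/bin/bash
# shift_padded NUM DELTA: prepend a 1 so "018" is never parsed as
# octal, add the delta, then subtract the prepended 10^width again.
# Subtracting (instead of chopping the first character) also keeps
# the carry case correct: 999 + 3 becomes 1002, not 002.
shift_padded () {
    local num=$1 delta=$2
    local width=${#num}
    local sum=$(( 1$num + delta ))        # 018 -> 1018 -> 1021
    printf '%0*d\n' "$width" $(( sum - 10**width ))
}
shift_padded 018 3     # 021
shift_padded 999 3     # 1002
shift_padded 918 300   # 1218
```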

Related

How can I stop my script from overwriting existing files

I have been learning bash for 6 days and I think I've got some of the basics.
Anyway, I've written two scripts for the wallpapers downloaded from Variety. One of them moves downloaded photos older than 12 days to a folder and renames them all as "Aday 1,2,3...", and the other lets me select some of these, moves the selected ones to another folder, and removes the photos I didn't select. The 1st script works just as I intended; my question is about the other.
I think I should write the script down to better explain my problem
Script:
#!/bin/bash
#Move victors of 'Seçme-Eleme' to 'Kazananlar'
cd /home/eurydice/Bulunur\ Bir\ Şeyler/Dosyamsılar/Seçme-Eleme
echo "Select victors"
read vct
for i in $vct; do
    mv -i "Aday $i.png" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $RANDOM.png" ;
    mv -i "Aday $i.jpg" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $RANDOM.jpg" ;
done
#Now let's remove the rest
rm /home/eurydice/Bulunur\ Bir\ Şeyler/Dosyamsılar/Seçme-Eleme/*
In this script I originally intended to define another variable (let's call it "n"), which I did by copying and adapting the counter from the first script. It was something like this:
n=1
for i in $vct; do
    mv "Aday $i.png" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $n.png" ;
    mv "Aday $i.jpg" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $n.jpg" ;
    n=$((n+1))
done
When I ran that for the first time, the script worked just as I intended. However, in my 2nd test run the script overwrote the files that already existed. For example, in the 1st run I had 5 files named "Bahar 1,2,3,4,5", and the 2nd time I chose 3 files to add. I wanted their names to be "Bahar 6,7,8", but instead my script made them the new 1, 2 and 3. I tried many solutions, and when I couldn't fix it I just assigned random numbers to them.
Is there a way to make this script work as I intended?
This command finds the biggest number among the file names in the current directory. If no numbered file is found, the biggest number is set to 0.
biggest_number=$(ls -1 | sed -n 's/^[^0-9]*\([0-9]\+\)\(\.[a-zA-Z]\+\)\?$/\1/p' | sort -r -g | head -n 1)
[[ ! -z "$biggest_number" ]] || biggest_number=0
The regex in sed command assumes that there is no digit in filenames before the trailing number intended for increment.
As soon as you have found the biggest number, you can use it to start your loop to prevent overwrites.
n=$((biggest_number+1))
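Putting the two pieces together (the throwaway destination directory, the "Bahar" prefix, and the pre-existing files are stand-ins for illustration; the real script would loop over the selected "Aday" files instead of the placeholder loop):

```shell
#!/bin/bash
# Continue numbering from the biggest number already in the folder.
dest=$(mktemp -d)
touch "$dest/Bahar 4.png" "$dest/Bahar 5.jpg"    # left over from earlier runs

biggest_number=$(ls -1 "$dest" | sed -n 's/^[^0-9]*\([0-9]\+\)\(\.[a-zA-Z]\+\)\?$/\1/p' | sort -r -g | head -n 1)
[[ ! -z "$biggest_number" ]] || biggest_number=0

n=$((biggest_number + 1))
for i in 1 2; do            # stand-in for the user's selection loop
    touch "$dest/Bahar $n.png"
    n=$((n + 1))
done
ls "$dest"
```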

Removing changing pattern from filenames in directory in Linux

I have a directory containing files following the following naming convention:
Label_0000_AA.gz
Label_0001_BB.gz
Label_0002_CC.gz
...
All I want to do is to rename these files so that the _#### number pattern is removed, resulting in:
Label_AA.gz
Label_BB.gz
Label_CC.gz
...
but only up to a certain number. E.g.: I may have 10000 files but might only want to remove the pattern in the first 3000. Would this be possible using something like bash?
If you don't have prename or rename -
(assuming the names are consistent)
for f in Label_[0-9][0-9][0-9][0-9]_[A-Z][A-Z].gz
do mv "$f" "${f//_[0-9][0-9][0-9][0-9]/}"
done
To just do a certain range -
for n in {0000..2999}
do for f in Label_${n}_??.gz
   do mv "$f" "${f//_$n/}"
   done
done
Are you sure there are no collisions?
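The ${f//pattern/} expansion doing the work above, shown in isolation (the sample filename is just an example):

```shell
#!/bin/bash
# ${var//pattern/replacement} replaces every match of a glob
# pattern; with an empty replacement it deletes the match.
f="Label_0042_AA.gz"
echo "${f//_[0-9][0-9][0-9][0-9]/}"   # Label_AA.gz
```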
If you can name the pattern you want to change/remove in a regex you can use the command prename:
prename 's/_[0-3][[:digit:]]{3}_/_/g' Label_*.gz
This regex would only remove numbers 0000-3999.
Using the flag -n does a "dry-run" and shows what it would do.
Edit: Thanks @KamilCuk for reminding me that there are two different renames. I made that clear and changed the name to prename.

grep empty output file

I made a shell script the purpose of which is to find files that don't contain a particular string, then display the first line that isn't empty or otherwise useless. My script works well in the console, but for some reason when I try to direct the output to a .txt file, it comes out empty.
Here's my script:
#!/bin/bash
# takes user input.
echo "Input substance:"
read substance
echo "Listing media without $substance:"
cd media
# finds names of files that don't feature the substance given, then puts them inside an array.
searchresult=($(grep -L "$substance" *))
# iterates the array and prints the first line of each - contains both the number and the medium name.
# however, some files start with "Microorganisms" and the actual number and name feature after several empty lines
# the script checks for that occurence - and prints the first line that doesnt match these criteria.
for i in "${searchresult[@]}"
do
    grep -m 1 -v "Microorganisms\|^$" $i
done >> output.txt
I've tried moving the >>output.txt to right after the grep line inside the loop, tried switching >> to > and adding 2>&1, and tried using tee. No go.
I'm honestly feeling utterly stuck as to what the issue could be. I'm sure there's something I'm missing, but I'm nowhere near good enough with this to notice. I would very much appreciate any help.
EDIT: Added files to better illustrate what I'm working with. Sample inputs I tried: Glucose, Yeast extract, Agar. Link to files [140kB] - the folder was unzipped beforehand.
The script was given full permissions to execute. I don't think the output is being rewritten because even if I don't iterate and just run a single line of the loop, the file is empty.

"read" command not executing in "while read line" loop [duplicate]

This question already has answers here:
Read user input inside a loop
(6 answers)
Closed 5 years ago.
First post here! I really need help on this one. I looked the issue up on Google, but can't manage to find a useful answer. So here's the problem.
I'm having fun coding some like of a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments the script require, I created an "args.conf" file that must be in every module, that kinda looks like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines if it's required or not, the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line to ask the user a value for every argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
    echo $line > line.tmp
    arg=`cut -d ";" -f 1 line.tmp`
    requ=`cut -d ";" -f 2 line.tmp`
    if [ $requ = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't want to use it]"
    fi
    read -p " $arg=" answer
    echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user a value for every argument... But:
1) The read command seems not to be executing. It just gets skipped, and the argument has no value.
2) Despite the fact that the args.conf file contains 3 lines, the loop seems to execute just a single time. All I see on the screen is "[This argument is required]" one time, and the module just launches (and crashes, because it doesn't have the required arguments...).
I really don't know what to do here... I hope someone has an answer ^^'.
Thanks in advance!
(and sorry for eventual mistakes, I'm french)
Alpha.
As @that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not from the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
    ...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
You're using relative file paths everywhere, and cding in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, the entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf").
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
    echo 'command1 failed!' >&2
    exit 1
}
if command2; then
    echo 'command2 succeeded!' >&2
else
    echo 'command2 failed!' >&2
    exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system)" || {
    echo "Error creating temp directory" >&2
    exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
Speaking of which, you're overusing temp files; a lot of what you're doing can be done just fine with shell variables and built-in shell features. For example, rather than reading lines from the config file, storing them in a temp file, and using cut to split them into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
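A quick illustration of that scoping (the sample line mirrors the args.conf format):

```shell
#!/bin/bash
# IFS set as a prefix only affects that one read command.
line='LHOST;true;The IP the remote payload will connect to.'
IFS=";" read -r arg requ description <<< "$line"
echo "$arg"      # LHOST
echo "$requ"     # true

# IFS is back to normal afterwards: this still splits on spaces.
read -r first rest <<< "hello world"
echo "$first"    # hello
```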
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
    arglist+=("$answer") # or ("$arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[@]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
    arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.
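For reference, here is how the pieces of advice above might fit together: fd #3 for the conf file, IFS=";" splitting, an array for the collected arguments, and no temp files except one stand-in conf file. The canned answers replace the interactive read purely so this sketch is self-contained; a real module system would prompt the user.

```shell
#!/bin/bash
# Condensed sketch: read args.conf on fd 3, split each line on ";",
# collect ARG=value pairs in an array.
conf=$(mktemp)
printf '%s\n' \
    'LHOST;true;The IP the remote payload will connect to.' \
    'LPORT;true;The port the remote payload will connect to.' > "$conf"

arglist=()
while IFS=";" read -u3 arg requ description; do
    # A real framework would prompt here: read -p " $arg=" answer
    case $arg in
        LHOST) answer=10.0.0.1 ;;
        LPORT) answer=4444 ;;
    esac
    arglist+=("$arg=$answer")
done 3< "$conf"

printf '%s\n' "${arglist[@]}"
```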

script or command to change the starting index number of a set of sequentially numbered files

I have a set of files named img1.png, img2.png, ... img10.png, ... and so on. What I want to achieve is renaming these files so that the starting index is increased by 30, so that the files become img31.png, img32.png, ... img40.png, ... and so on. Is this possible using the "rename" command, or is a script required? In either case, how do I do this?
Related: for this to work, do I have to first rename the files to img001.png, img002.png, ... img010.png, and so on? How is this to be done, if required?
add 30 to the numbers in each filename
rename 's/(\d+)/$1+30/e' *png
rename to be 3 digits long
rename 's/(\d+)/sprintf("%03d",$1)/e' *png
See perldoc perlre http://perldoc.perl.org/perlre.html for details of how this works; rename is a perl program.
LOCATION=/my/image/directory   # change this to your location
for file in $(ls -1 ${LOCATION})
do
    ind=$(echo ${file}|cut -c 4-|cut -d"." -f1)
    (( newind=${ind}+30 ))
    mv ${LOCATION}/${file} ${LOCATION}/img${newind}.png
done
I am sure there is a much more elegant way of doing this in one line using the likes of awk/sed/perl etc., but this shows you the logic behind it.
Hope it helps
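If the perl rename isn't available, the same +30 shift can be sketched in plain bash. Renaming the highest numbers first avoids the collision risk the loop above has (e.g. img1 -> img31 clobbering an existing img31). The demo directory and file set here are my own additions:

```shell
#!/bin/bash
# Demo: imgN.png -> img(N+30).png, highest N first so a renamed
# file can never land on a name that hasn't been renamed yet.
dir=$(mktemp -d)
for i in 1 2 10; do touch "$dir/img$i.png"; done

# Extract the numbers and sort them in descending numeric order.
nums=$(for f in "$dir"/img*.png; do n=${f##*img}; echo "${n%.png}"; done | sort -rn)

for i in $nums; do
    mv "$dir/img$i.png" "$dir/img$((i+30)).png"
done
ls "$dir"    # img31.png img32.png img40.png
```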
