Bulk change file names in a directory - Shell - Linux

I tried to find something close to what I need, but ended up with bits and pieces from many questions here, and, obviously, my code doesn't work.
I've never programmed anything myself and have close to zero programming knowledge.
What I'm trying to do is rename a bunch of files in two different directories so that the files in both end up with the same names, with no space characters.
For example:
~/Documents/Dir1/1.pdf instead of: ~/Documents/Dir1/file A.pdf
~/Documents/Dir2/1.pdf instead of: ~/Documents/Dir2/file A.pdf
This was the extent of my abilities:
#!/bin/bash
b4file=$1
c=0
for i in $b4file do
c=$((c+1))
pref=$(printf "%03d" $c)
mv "$i" "${pref}|$i"
done
The error I get is
mv.sh: line 7: syntax error near unexpected token `c=$((c+1))'
mv.sh: line 7: ` c=$((c+1))'

A for loop reads its items one by one, but the do keyword must be separated from the list by a newline or a semicolon, and you need a glob to iterate over the directory's contents. The code below will remove your error:
#!/bin/bash
b4file=$1
c=0
for i in "$b4file"/*; do
    c=$((c+1))
    pref=$(printf "%03d" $c)
    # prefix the basename so the renamed file stays in the same directory
    mv "$i" "$(dirname "$i")/${pref}|$(basename "$i")"
done
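For the question's original goal of stripping the spaces out of the names themselves, a minimal sketch using bash parameter expansion could look like this (the script name despace.sh and the underscore replacement are my assumptions, not part of the original answers):
#!/bin/bash
# Usage (hypothetical): ./despace.sh ~/Documents/Dir1 ~/Documents/Dir2
for dir in "$@"; do
    for f in "$dir"/*; do
        base=$(basename "$f")
        new="${base// /_}"          # replace every space with an underscore
        if [ "$base" != "$new" ]; then
            mv -n "$f" "$dir/$new"  # -n: never overwrite an existing file
        fi
    done
done
Running it on both directories gives both copies of each file the same space-free name, which was the stated goal.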

Related

Batch renaming files using variable extracted from file text

Apologies if this has been answered, but I've spent several hours experimenting and searching online for the solution.
I have a folder with several thousand text files named e.g. '1.dat', '2.dat', '3.dat' etc. I would like to rename all of the files by extracting an 8-digit numerical ID from within the file text (the ID is always on the last line in columns 65-73), so that '1.dat' becomes '60741308.dat' etc.
I have made it as far as extracting the ID (using tail and cut) from the text file and assigning it to a variable, which I can then use to rename the file, on a single file, but I am unable to make it work as a batch process in a 'for' loop.
Here is what I have tried:
for i in *.dat
tmpname=$(tail -1 $i| cut -c 65-73)
mv $i $tmpname.dat
done
I get the following error: bash: syntax error near unexpected token `tmpname=$(tail -1 $i| cut -c 65-73)'
Any help much appreciated.
The syntax of a for loop in Bash is:
for i in {1..10}
do
echo $i
done
I can see that you are missing the do keyword in your example, so the correct version would be:
for i in *.dat
do
tmpname=$(tail -1 "$i" | cut -c 65-73)
mv "$i" "$tmpname.dat"
done
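If some files might not contain an ID, or two files might contain the same one, a slightly more defensive sketch (my extension, not part of the original answer) validates the extracted value and refuses to overwrite an existing target:
for i in *.dat; do
    tmpname=$(tail -1 "$i" | cut -c 65-73)
    case "$tmpname" in
        ""|*[!0-9]*) echo "skipping $i: no numeric ID on the last line" >&2 ;;
        *)           mv -n "$i" "$tmpname.dat" ;;  # -n: never overwrite an existing file
    esac
done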

Issue with reading file content using Linux

I want to read contents of a file using a linux shell script. The contents of file_list.txt is:
abc
def
ghi
And the script to read this is read_file_content.sh:
#!/bin/bash
for file in $(cat file_list.txt)
do
"processing "$file
done
When I run the command as ./read_file_content.sh, I get the following error:
./read_file_content.sh: line 6: processing abc: command not found
./read_file_content.sh: line 6: processing def: command not found
./read_file_content.sh: line 6: processing ghi: command not found
Why does this print 'command not found'?
You wrote "processing "$file without any command in front of it.
Bash will take this literally and try to execute it as a command.
To print the text on the screen, you can use echo or printf.
Echo example
echo "processing" "$file"
Printf example
printf "processing %s\n" "$file"
(This is the recommended way if you're going to process weird filenames that contain - and space characters. See Why is printf better than echo?)
Notice the way I did the quotes: this prevents problems with filenames that contain special characters like stars and spaces.
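Putting it together, a corrected read_file_content.sh might look like the sketch below. A while read loop is the usual way to read a file line by line, since for $(cat ...) splits on whitespace rather than on lines:
#!/bin/bash
# IFS= and -r preserve leading whitespace and backslashes in each line
while IFS= read -r file; do
    printf 'processing %s\n' "$file"
done < file_list.txt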

How not to overwrite an existing file when both are named by the RANDOM special variable

I have written a basic script for saving the non-empty error files from the different parts of a big program, because they are deleted after a while.
I thought I had solved the problem by saving them under different names using the bash special variable $RANDOM. It worked well, but now I have just realized that I lost some error files, probably because they were overwritten by the random naming procedure. How can I save the new (non-empty) error files as well without overwriting the older ones?
My script is:
while [ ! -e ${myfile} ]; do
for FILE in $( find dirnames -name job.err )
do
if [[ -s ${FILE} ]] ; then
echo ${FILE} >> LIST
cp ${FILE} COLLECT/job_${RANDOM}.err
fi
done
sleep 3600
done
You could use mktemp to create a file name that's guaranteed to be unique. Note that the target directory has to be part of the template; otherwise mktemp creates (and checks uniqueness of) the file in the current directory instead. For example:
cp "${FILE}" "$(mktemp COLLECT/job_XXXXXXXXX)"
You lose the ".err" suffix in the above case, but you could work around that with some additional code if you really want it.
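With GNU mktemp, for instance, the --suffix option does exactly that; note this is a GNU extension and may not exist in other mktemp implementations:
cp "${FILE}" "$(mktemp --suffix=.err COLLECT/job_XXXXXXXXX)"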
Well, for a simple solution, just add the time since the epoch in nanoseconds to the filename. While it won't guarantee an absence of collisions, it will at least make them very unlikely, especially when combined with the random value. E.g.
cp "${FILE}" "COLLECT/job_${RANDOM}-$(date +%s.%N).err"
$RANDOM only takes 32768 distinct values, so with a growing number of files a collision quickly becomes likely: by the birthday problem, the odds of a repeated name pass 50% after only a couple of hundred files.
You already know how to check whether a file exists; use the same method for your file copy:
copy="COLLECT/job_${RANDOM}.err"
while [[ -e "$copy" ]] ; do
    copy="COLLECT/job_${RANDOM}.err"
done
cp "${FILE}" "$copy"
Of course, the closer you get to 32768 files, the longer the loop will take to find a free destination. It would be better to use a scheme that doesn't depend on randomness.
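For instance, a minimal counter-based sketch (assuming only one instance of the script writes into COLLECT at a time; concurrent runs could race for the same name):
n=0
while [[ -e "COLLECT/job_$(printf '%05d' "$n").err" ]]; do
    n=$((n+1))
done
cp "${FILE}" "COLLECT/job_$(printf '%05d' "$n").err"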

awk system syntax throwing error

I'm trying to move each file in my directory into its own new directory phonex, renaming it at the same time to phonex.txt.
so e.g.
1.txt, 2.txt, jim.txt
become:
phone1.txt in directory phone1
phone2.txt in directory phone2
phone3.txt in directory phone3.
I'm a newbie to awk. I have managed to create the directories, but I cannot get the rename right.
I have tried:
ls|xargs -n1|awk ' {i++;system("mkdir phone"i);system("mv "$0" phone”i ”.txt -t phone"i)}'
which errors with lots of:
mv: cannot stat `phone”i': No such file or directory
and:
ls|xargs -n1|awk ' {i++;system("mkdir phone"i);system("mv "$0" phone”i ”/phone"i”.txt”)}'
error:
awk: 1: unexpected character 0xe2
xargs: /bin/echo: terminated by signal 13
Can anyone help me finish it off? TIA!
Piping ls into xargs into awk is completely unnecessary in this scenario. What you are trying to do can be accomplished using a simple loop:
for f in *.txt; do
    i=$((i+1))
    dir="phone$i"
    mkdir "$dir" && mv "$f" "$dir/$dir.txt"
done
Depending on your shell, the increment of $i can be done in a different way (like ((++i)) in bash) but this way is POSIX-compliant so should work on any modern shell.
By the way, the reason for your original error is that you are using curly quotes (”), which the shell does not understand. You should only ever use straight single (') and double (") quotes in the shell.
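For completeness, the awk pipeline with straight quotes would look something like the line below (my reconstruction, not from the original answer). It is still fragile, since it parses ls output and breaks on filenames containing spaces or quotes, so the loop above is preferable; the xargs -n1 stage is also unnecessary, because ls already prints one name per line when piped:
ls | awk '{i++; system("mkdir phone" i); system("mv \"" $0 "\" phone" i "/phone" i ".txt")}'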
If you want a one-liner with incrementing numbers, you can use the following; otherwise Tom Fenech's way is the way to go!
for i in *.txt; do d=phone$((++X)); mkdir "$d"; mv "$i" "$d/$d.txt"; done
Also, you may want to set X to zero (X=0) before running this, in case it is already set to something else.

Bash Script to replicate files

I have 25 files in a directory. I need to amass 25000 files for testing purposes. I thought I could just replicate these files over and over until I get 25000 files. I could manually copy-paste 1000 times, but that seemed tedious, so I thought I could write a script to do it for me. I tried
cp * .
as a trial, but I got an error saying the source and destination file are the same. If I were to automate it, how would I do it so that each copy gets a unique name?
As discussed in the comments, you can do something like this:
for file in *
do
    filename="${file%.*}"    # get everything up to the last dot
    extension="${file##*.}"  # get the extension (text after the last dot)
    for i in {00001..10000}
    do
        cp "$file" "${filename}${i}.${extension}"
    done
done
The brace expansion {00001..10000} loops from 1 to 10000, producing each number with leading zeros.
Writing ${filename}${i}.${extension} is the same as $filename$i.$extension but makes it clearer what is a variable name and what is literal text. This way you can also write ${filename}_${i}.${extension} to get files like a_23.txt, etc. (Note that with 25 source files, {1..1000} already yields the 25000 files asked for in the question.)
In case your current files match a specific pattern, you can always do for file in a* (if they all have the a + something format).
If you want to keep the extension of the files, you can use this (assuming you want to copy all .txt files):
#!/bin/bash
for f in *.txt
do
    for i in {1..10000}
    do
        cp "$f" "${f%.*}_${i}.${f##*.}"
    done
done
You could try this:
for file in *; do for i in {1..1000}; do cp "$file" "$file-$i"; done; done
It will append a number to the name of each existing file.
The following script
for file in *.*
do
    eval $(sed 's/\(.*\)\.\([^\.]*\)$/base="\1";ext="\2";/' <<< "$file")
    for n in {1..1000}
    do
        echo cp "$file" "$base-$n.$ext"
    done
done
will:
take all files with an extension (*.*)
split each name into basename and extension (via sed)
copy each original file to base-number.extension, 1000 times
do a dry run only; remove the echo once you are satisfied
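As a design note, eval on generated shell code is fragile if a filename contains quotes or semicolons. The same split can be done safely with plain parameter expansion, as the earlier answers do:
base="${file%.*}"   # everything before the last dot
ext="${file##*.}"   # everything after the last dot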
