I am on a Linux system and I am trying to rename all .jpg files in many subdirectories to sequential filenames, so all the jpeg files in each subdirectory are renamed 0001.jpg, 0002.jpg, etc. I have a 'rename' command that works in a single directory:
rename -n 's/.*/sprintf("%04d",$::iter++ +1).".jpg"/e' *.jpg
I am trying to use it like this:
for i in ls -D; do rename -n 's/.*/sprintf("%04d",$::iter++ +1).".jpg"/e' *.jpg; done
but for output I get this:
*.jpg renamed as 0001.jpg
for each subdirectory. What am I doing wrong?
You need to put the command in backticks (or use the $( ... ) bash syntax) in order
to iterate over its output. Also use the $i variable together with the *.jpg file
name pattern, e.g.
for i in `ls -D`
do
rename -n 's/.*/sprintf("%04d",$::iter++ +1).".jpg"/e' $i/*.jpg
done
However, for this scenario you want to iterate over all the subdirectories, and you are better off using the find command:
for i in `find . -type d`; do rename ...
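For example, a minimal sketch of that approach (assuming GNU find and the same Perl-based rename as above; the subshell cd keeps the substitution operating on bare filenames rather than full paths, and -print0/read -d '' copes with directory names containing spaces):
find . -type d -print0 | while IFS= read -r -d '' dir; do
    # -n keeps this a dry run; drop it once the output looks right
    ( cd "$dir" && rename -n 's/.*/sprintf("%04d",$::iter++ +1).".jpg"/e' *.jpg )
done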
It seems to me you've forgotten to change the current working directory, so it should look like
for i in *; do
[ -d "$i" ] || continue
pushd "$i"
# rename is here
popd
done
I am trying to rename files and directories using a space-separated text file.
The text file looks like this:
dir1-1 dir1_1
dir2-1 dir223_1
My command is as follows:
xargs -r -a files.txt -L1 mv
This command renames only the folders, from dir1-1 to dir1_1, dir2-1 to dir223_1, and so on, but it doesn't rename the files in the subdirectories. The files in the corresponding directories also carry the prefix of those directories.
Looking forward to your assistance.
Assuming you don't have special characters (space, tab, ...) in your file/dir names, try
perl_script=$(
  echo 'chop($_); $orig=$_;'
  while read -r src tgt; do
    echo 'if (s{(.*)/'"$src"'([^/]*)}{$1/'"$tgt"'\2}) { print "$orig $_\n";next;}'
  done < files.txt)
find . -depth | perl -ne "$perl_script" | xargs -r -L1 echo mv
Remove the echo once you see it does what you want.
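For illustration, with the two example lines in files.txt above, the generated $perl_script should look roughly like this (a sketch of the constructed code, not captured output):
chop($_); $orig=$_;
if (s{(.*)/dir1-1([^/]*)}{$1/dir1_1\2}) { print "$orig $_\n";next;}
if (s{(.*)/dir2-1([^/]*)}{$1/dir223_1\2}) { print "$orig $_\n";next;}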
I have the following structure:
FolderA
    Sub1
    Sub2
    filexx.csv
    filexx.doc
FolderB
    Sub1
    Sub2
    fileyy.csv
    fileyy.doc
I want to write a script that will move the .csv files into the folder Sub1 for each parent directory (FolderA, FolderB, and so on), giving me the following structure:
FolderA
    Sub1
        filexx.csv
    Sub2
        filexx.doc
FolderB
    Sub1
        fileyy.csv
    Sub2
        fileyy.doc
This is what I have so far, but I get the error mv: cannot stat '*.csv': No such file or directory:
for f in */*/*.csv; do
mv -v "$f" */*/Sub1;
done
for f in */*/*.doc; do
mv -v "$f" */*/Sub2;
done
I am new to bash scripting, so please forgive me if I have made a very obvious mistake. I know I can do this in Python as well, but it would be lengthier, which is why I would like a solution using Linux commands.
find . -name "*.csv" -type f -execdir mv '{}' Sub1/ \;
Using find, search for all files with the extension .csv and, when we find them, execute a move command from within the directory containing the files, moving them to the directory Sub1.
find . -name "*.doc" -type f -execdir mv '{}' Sub2/ \;
Follow the same principle for files with the extension .doc, but this time move the files to Sub2.
I believe you are getting this error because no file matched your wildcard. When that happens, the for loop gives $f the value of the wildcard itself, so you are basically trying to move a file literally named *.csv, which does not exist.
To prevent this behavior, you can add shopt -s nullglob at the top of your script. When using this, if no file is found, your script won't enter the loop.
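A quick way to see the difference, in a hypothetical directory containing no csv files:
for f in *.csv; do echo "$f"; done    # prints the literal string *.csv
shopt -s nullglob
for f in *.csv; do echo "$f"; done    # prints nothing; the loop body never runs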
My advice is: make sure you run your script from the correct location when using wildcards like this. But maybe what you meant to do by writing */*/*.csv is to recursively match all the csv files. If that's what you intended, this is not the right way to do it.
To recursively match all csv/doc/etc. files using native bash, you can add shopt -s globstar to the top of your script and use **/*.csv as the wildcard:
#!/bin/bash
shopt -s globstar nullglob
for f in **/*.csv; do
mv "$f" Destination/ # Note that $f is surrounded by "" to handle whitespaces in filenames
done
You could also use the find(1) utility to achieve that. But if you're planning to do more processing on the files than just moving them, a for loop might be cleaner, as you won't have to inline everything in the same command.
Side note: what you call "Linux commands" are actually not Linux commands; they are part of the GNU utilities (https://www.gnu.org/gnu/linux-and-gnu.en.html).
If the csv files you want to move are in the top-level directories (from the point of view of the current directory), but not in their subdirectories, then simply:
#!/bin/bash
for dir in */; do
mv -v "$dir"*.csv "${dir}Sub1/"
mv -v "$dir"*.doc "${dir}Sub2/"
done
If the files in all subdirectories are to be moved similarly, then:
shopt -s globstar
for file in **/*.csv; do
mv -v "$file" "${file%/*}/Sub1/"
done
for file in **/*.doc; do
mv -v "$file" "${file%/*}/Sub2/"
done
Note that the directories Sub1 and Sub2 are relative to the directory where the csv and doc files reside.
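To see what the ${file%/*} expansion does here, a quick illustration with a hypothetical path:
file="FolderA/filexx.csv"
echo "${file%/*}"          # FolderA (everything up to the last /)
echo "${file%/*}/Sub1/"    # FolderA/Sub1/ (the mv target used above)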
I have a directory structure like this.
From this I want to create different zip files such as
data-A-A_1-A_11.zip
data-A-A_1-A_12.zip
data-B-B_1-B_11.zip
data-C-C_1-C_11.zip
while read -r line
do
  echo "zip -r ${line//\//-}.zip $line"
  # zip -r "${line//\//-}.zip" "$line"
done <<< "$(find data -maxdepth 3 -mindepth 2 -type d)"
Redirect the result of a find command into a while loop. The find command searches the directory data for directories only, between 2 and 3 levels deep. In the while loop we use bash parameter expansion to convert all forward slashes to "-" and append ".zip", so that we can build a zip command for each directory. Once you are happy that the echoed zip command looks right for each directory, uncomment the actual zip command.
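For one of the directories find might emit, the expansion works like this (illustrative value):
line="data/A/A_1/A_11"
echo "${line//\//-}.zip"    # data-A-A_1-A_11.zip (every / replaced with -)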
I have multiple folders with multiple files. I need to rename those files to the same name as the folder where they are stored, with a "_partN" suffix.
As an example:
I have a folder named "new_folder_for_upload" which has 2 files. I need to rename these 2 files like this:
new_folder_for_upload_part1
new_folder_for_upload_part2
I have many folders like the one above, each with multiple files. I need to rename all the files as described above.
Can anybody help me find a single Linux command or script to do this automatically?
Assuming a bash shell, and assuming you want the file numbering to restart for each subdirectory, this moves all files to the top directory (leaving empty subdirectories behind). Formatted as a script for easier reading:
find . -type f -print0 | while IFS= read -r -d '' file
do
  myfile=$(echo "$file" | sed "s#^\./##")
  mydir=$(dirname "$myfile")
  if [[ $mydir != $lastdir ]]
  then
    NR=1
  fi
  lastdir=${mydir}
  mv "$myfile" "$(dirname "$myfile")_part${NR}"
  ((NR++))
done
Or as one-line command:
find . -type f -print0 | while IFS= read -r -d '' file; do myfile=$(echo "$file" | sed "s#^\./##"); mydir=$(dirname "$myfile"); if [[ $mydir != $lastdir ]]; then NR=1; fi; lastdir=${mydir}; mv "$myfile" "$(dirname "$myfile")_part${NR}"; ((NR++)); done
Beware. This is armed, and will do a bulk renaming / moving of every file in or below your current work directory. Use at your own risk.
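For the example folder from the question, and assuming it contains two hypothetical files a.doc and b.doc, the script would issue roughly:
mv "new_folder_for_upload/a.doc" "new_folder_for_upload_part1"
mv "new_folder_for_upload/b.doc" "new_folder_for_upload_part2"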
To delete the empty subdirs:
find . -depth -empty -type d -delete
I have an entire directory structure with zip files. I would like to:
Traverse the entire directory structure recursively grabbing all the zip files
I would like to find a specific file "*myLostFile.ext" within one of these zip files.
What I have tried
1. I know that I can list files recursively pretty easily:
find myLostfile -type f
2. I know that I can list files inside zip archives:
unzip -ls myfilename.zip
How do I find a specific file within a directory structure of zip files?
You can avoid using find for single-level (or, in bash 4 with globstar, recursive) searches of .zip files by using a for loop approach:
for i in *.zip; do grep -iq "mylostfile" < <( unzip -l "$i" ) && echo "$i"; done
For recursive searching in bash 4:
shopt -s globstar
for i in **/*.zip; do grep -iq "mylostfile" < <( unzip -l "$i" ) && echo "$i"; done
You can use xargs to process the output of find or you can do something like the following:
find . -type f -name '*zip' -exec sh -c 'unzip -l "{}" | grep -q myLostfile' \; -print
which will start searching in . for files that match *zip, then run unzip -l on each and search its listing for your filename. If that filename is found, it will print the name of the zip file that contained it.
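The xargs route mentioned above could look something like this (a sketch, assuming GNU find/xargs and unzip on PATH):
find . -type f -name '*zip' -print0 |
  xargs -0 -n1 sh -c 'if unzip -l "$1" | grep -q myLostfile; then echo "$1"; fi' sh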
Some have suggested using ugrep to search zip files and tarballs. To find the zip files that contain a myLostfile file, specify it as a -g glob pattern like so:
ugrep -z -l -g'myLostfile' ''
With the empty regex pattern '', this recursively searches all files below the working directory, including any zip, tar, and cpio/pax archives, for myLostfile. If you only want to search the zip files located in the working directory:
ugrep -z -l -g'myLostfile' '' *.zip