Linux rename files as dirname

I have lots of files like this:
./1/wwuhw.mp3
./2/nweiewe.mp3
./3/iwqjoiw.mp3
./4/ncionw.MP3
./5/joiwqfm.wmv
./6/jqoifiew.WMV
How can I rename them like this in Linux bash:
./1/1.mp3
./2/2.mp3
./3/3.mp3
./4/4.MP3
./5/5.wmv
./6/6.WMV

Try this:
for i in */*; do mv "$i" "$(dirname "$i")/$(dirname "$i").${i##*.}"; done
The for loop iterates over each file matched by */*, and the mv command renames each file after its parent directory, keeping the original extension.

Something like this should do the job:
for i in */*; do
echo mv "${i}" "${i%/*}/${i%/*}.${i##*.}"
done
See e.g. here for what these cryptic parameter expansions (like ${i%/*}) mean in bash.
The script above will only print the commands in the console, without invoking them. Once you are sure you want to proceed, you can remove the echo statement and let it run.
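With the sample layout from the question, the dry run would print something like:
mv 1/wwuhw.mp3 1/1.mp3
mv 2/nweiewe.mp3 2/2.mp3
mv 3/iwqjoiw.mp3 3/3.mp3
mv 4/ncionw.MP3 4/4.MP3
mv 5/joiwqfm.wmv 5/5.wmv
mv 6/jqoifiew.WMV 6/6.WMV
Here ${i%/*} strips everything from the last slash onward (leaving the directory, e.g. 1), and ${i##*.} strips everything up to the last dot (leaving the extension, e.g. mp3).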

If you don't mind using an external tool, then rnm can do this pretty easily:
rnm -ns '/pd0/./e/' */*
/pd0/ is the immediate parent directory, /pd1/ is the directory before that and so forth.
-ns means name string, and /pd/ and /e/ are name string rules which expand to the parent directory and file extension, respectively.
The general format of the /pd/ rule is /pd<digit>-<digit>-<delim>/; for example, a rule like /pd0-2-_/ will construct dir0_dir1_dir2 from a directory structure of dir2/dir1/dir0.
More examples can be found here.

The for loop method, as outlined in some of the other answers, will suffice and work great for most cases where you need to rename every file in a directory after its first parent directory's name. My particular case called for a bit more granularity: I only wanted to rename a subset of the files in a directory, and to assert that each operand was, in fact, an actual file, not an empty directory, a symbolic link, etc. Using find achieves exactly that, with the added ability to filter and process the file inputs and outputs.
#####################################
# Same effect as using a `for` loop #
#####################################
#
# -mindepth 2 : ensures that the file has a parent directory.
# -type f : ensures that we are working with a `regular file` (not directory, symlink, etc.).
# The matched path is passed to bash as $1; this is safer than embedding {} inside the script.
find . -mindepth 2 -type f -exec bash -c 'file="$1"; dir="$(dirname "$file")"; mv "$file" "$dir/${dir##*/}.${file##*.}"' _ {} \;
#########################
# Additional filtration #
#########################
# mp3 ONLY (case insensitive)
find . -mindepth 2 -type f -iname "*.mp3" -exec bash -c 'file="$1"; dir="$(dirname "$file")"; mv "$file" "$dir/${dir##*/}.${file##*.}"' _ {} \;
# mp3 OR mp4 ONLY (case insensitive)
find . -mindepth 2 -type f \( -iname "*.mp3" -or -iname "*.mp4" \) -exec bash -c 'file="$1"; dir="$(dirname "$file")"; mv "$file" "$dir/${dir##*/}.${file##*.}"' _ {} \;
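As with the loop-based answers, you can preview before committing by prefixing the mv inside the inline script with echo:
find . -mindepth 2 -type f -exec bash -c 'file="$1"; dir="$(dirname "$file")"; echo mv "$file" "$dir/${dir##*/}.${file##*.}"' _ {} \;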

Related

Using find to rename files recursively with random chars

I have an IP camera that takes snapshots and nests those snapshots into multiple directories. The subdirectory structure looks something like this:
/cam_folder
|--Date
|----Hour
|------Minute
|-------->file1
|-------->file2...etc
|------Minute
|-------->file1...etc
There are a ton of subdirectories because of the way it stores files, since it places those snapshots within a Minute directory under the Date/Hour directories.
At any rate, there are other types of files mixed in, but I know how to use find to find all the .jpgs I need:
find /cam_folder/ -type f -name '*.jpg'
But what I need to do is rename all the .jpg files to random characters. I was able to find this, which works on a single directory in a bash script:
for file in *.jpg; do
new_file="$(mktemp XXXXXXXX.jpg)"
mv -f -- "$file" "$new_file"
done
My problem is how to tie these together. I guess I need to use find to feed the files into a bash script?
Is there an easier way to just walk a directory recursively, renaming as I go?
find /cam_folder/ -type f -name '*.jpg' -exec sh -c '
for f; do
mv -f -- "$f" "${f%/*}/$(mktemp -u XXXXXXXX.jpg)"
done' _ {} +
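For readers unfamiliar with the idiom: find substitutes the matched names for {} +, sh -c receives them as the positional parameters ($1, $2, ...) of the inline script, and the leading _ merely fills $0. A minimal sketch to see the mechanics (file names are hypothetical):
find . -name '*.jpg' -exec sh -c 'echo "received $# files:" "$@"' _ {} +
Note also that mktemp -u only generates a name without creating the file, which is why it can be prefixed with the original directory ${f%/*}.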
References: find, Shell Command Language § The for Loop, Shell Command Language § Parameter Expansions.
I think this meets your demand:
while IFS= read -r file ; do
new_file="$(mktemp XXXXXXXX.jpg)"
mv -f -- "$file" "$new_file"
done < <(find /cam_folder/ -type f -name '*.jpg')
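One caveat with this variant: without -u, mktemp actually creates the temporary file, and it does so relative to the current working directory, so the renamed snapshots are collected where the script runs instead of staying in their Minute directories. Roughly (the random names below are made up):
mktemp XXXXXXXX.jpg       # creates and prints e.g. aB3dEf9k.jpg in the current directory
mktemp -u XXXXXXXX.jpg    # only prints a name, e.g. Zx81Qw2r.jpg, without creating it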

Best way to tar and zip files meeting specific name criteria?

I'm writing a shell script on a Linux machine to be run via a crontab which is meant to move all files older than the current day to a new folder, and then tar and zip the entire folder. Seems like a simple task but for some reason, I'm running into all kinds of roadblocks. I'm new to this and self-taught so any help or redirection would be greatly appreciated.
Specific criteria for which files to archive:
All log files are in /home/tech/logs/ and all pdfs are in /home/tech/logs/pdf
All files are over a day old as indicated by the file name (file name does not include $CURRENT_DATE)
All files must be *.log or *.pdf (i.e. don't archive files that don't include $CURRENT_DATE if they aren't log or pdf files).
Filename formatting specifics:
All the log file names are in /home/tech/logs in the format NAME 00_20180510.log, and all the pdf files are in a "pdf" subdirectory (/home/tech/logs/pdf) with the format NAME 00_20180510_00000000.pdf ("20180510" would be whenever the file was created, and the 0's could be any number). I need to use the name rather than the file metadata for the creation date, and all files (pdf/log) whose names do not include the current date are "old". I also can't just move all files that don't contain $CURRENT_DATE in the name, because that would take any non-*.pdf or *.log files with it.
Right now the script creates a new folder with a new pdf subdir for the old files (mkdir -p /home/tech/logs/$ARCHIVE_NAME/pdf). I then want to move the old logs into $ARCHIVE_NAME, and move all old pdfs from the original pdf subdirectory into $ARCHIVE_NAME/pdf.
Current code:
find /home/tech/logs -maxdepth 1 -name ( "*[^$CURRENT_DATE].log" "*.log" ) -exec mv -t "$ARCHIVE_NAME" '{}' ';'
find /home/tech/logs/pdf -maxdepth 1 -name ( "*[^$CURRENT_DATE]*.pdf" "*.pdf" ) -exec mv -t "$ARCHIVE_NAME/pdf" '{}' ';'
This hasn't been working because [^$CURRENT_DATE] is a bracket expression: it treats the digits of $CURRENT_DATE as a set of single characters to exclude rather than as a literal string.
I've considered just using tar's exclude options like this:
tar -cvzPf "$ARCHIVE_NAME.tgz" --directory /home/tech/logs --exclude="$CURRENT_DATE" --no-unquote --recursion --remove-files --files-from="/home/tech/logs/"
But a) it doesn't work, and b) it would theoretically include all files that weren't *.pdf or *.log files, which would be a problem.
Am I overcomplicating this? Is there a better way to go about this?
I would go about this using bash's extended glob features, which allow you to negate a pattern:
#!/bin/bash
shopt -s extglob
mv /home/tech/logs/!(*"$CURRENT_DATE"*).log "$ARCHIVE_NAME"
mv /home/tech/logs/pdf/!(*"$CURRENT_DATE"*).pdf "$ARCHIVE_NAME"/pdf
With extglob enabled, !(pattern) expands to everything that doesn't match the pattern (or list of pipe-separated patterns).
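A quick sanity check of the pattern, with a hypothetical date and file names:
shopt -s extglob
CURRENT_DATE=20180510
touch "NAME 00_20180509.log" "NAME 00_20180510.log"
echo !(*"$CURRENT_DATE"*).log    # prints only: NAME 00_20180509.log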
Using find it should also be possible:
find /home/tech/logs -name '*.log' -not -name "*$CURRENT_DATE*" -exec mv -t "$ARCHIVE_NAME" {} +
Building on @tom-fenech's answer, optimized to avoid many mv invocations (and using -print0/-0, since these file names contain spaces):
find /home/tech/logs -maxdepth 1 -name '*.log' -not -name "*_${CURRENT_DATE?}.log" -print0 | \
xargs -0 mv -t "${ARCHIVE_NAME?}"
An interesting feature of processing the files through pipes is the ability to filter them with extra tools (e.g. grep; the -z flag of GNU grep keeps the NUL delimiters intact), which can arguably be more readable:
find /home/tech/logs -maxdepth 1 -name '*.log' -print0 | grep -zvF "_${CURRENT_DATE?}" | \
xargs -0 mv -t "${ARCHIVE_NAME?}"
The same approach works for the pdf files. BTW, you can "dry-run" the above by just replacing mv with echo mv.
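For instance, the dry-run form of the first pipeline would be:
find /home/tech/logs -maxdepth 1 -name '*.log' -not -name "*_${CURRENT_DATE?}.log" -print0 | \
xargs -0 echo mv -t "${ARCHIVE_NAME?}"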
--jjo

Creating a file in a directory other than root using bash

I am currently working on an auto-grading script for a class project. It has to be able to search any number of given directories, let's say for example:
usr/autograder/jdoe/
jdoe contains two files, house.c and readme.txt.
I need to create a file in jdoe called jdoe.pdf.
Currently I'm using the line of code below to get the path where I need to create the file, where $1 is the user-supplied path containing the projects the autograder will grade.
find $1 -name "*.txt" -exec sh -c "dirname {}" \;
When I try adding /somename.pdf to the end of this statement, I get readme.txt/somename.pdf.
I have also tried chaining on another -exec to get the name for the file:
\; -exec sh -c "dirname {} | xargs -n 1 basename" \;
I'm having problems combining these two into one working statement.
I'm new to Unix programming and would appreciate any advice or help, even if it means rewriting the code using different Unix tools.
The main question here is: how do I create files in a path other than the directory I call my script from? Thanks in advance.
How about this?
find "$1" -name "*.txt" -exec bash -c 'd=$(dirname "$1"); touch $d"/"$(basename "$d").pdf' - {} \;
You can create files in another path using the change directory command (cd).
If you start your script in usr/autograder/script and want to change to usr/autograder/jdoe, you can change directory with cd ../jdoe (relative to the current directory) or cd /usr/autograder/jdoe (absolute).
Now you are in usr/autograder/jdoe and can create files in this directory; for example, gedit readme.txt will open gedit and create the file in usr/autograder/jdoe.
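Note that most commands also accept a path argument directly, so you do not strictly have to change directory at all; with the question's layout, something like this creates the file in place:
touch usr/autograder/jdoe/jdoe.pdf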
The simplest way is to loop over the files returned by find and then do whatever you need to do.
For example:
find "$1" -type f -name "*.txt" -print0 | while IFS= read -r -d $'\0' filename; do
dir=$(dirname "$filename")
# create pdf file
touch "$dir/${dir##*/}.pdf"
done
(Note the use of find -print0 to correctly handle filenames containing whitespace and newline characters.)
Is this what you are looking for?
function process_file {
dir=$(dirname "$1")
name=$(basename "$1")
echo "name is $name and dir is $dir"
cd "$dir"
touch "${dir##*/}.pdf" # or anything else
}
# export the function, so that it is known in the child processes
export -f process_file
find . -name '*.txt' -exec bash -c 'process_file "$1"' _ {} \;

Linux recursive copy files to its parent folder

I want to recursively copy files with a specific file extension to their parent folders. For example:
./folderA/folder1/*.txt to ./folderA/*.txt
./folderB/folder2/*.txt to ./folderB/*.txt
etc.
I checked the cp and find commands but couldn't get this working.
I suspect that while you say copy, you actually mean to move the files up to their respective parent directories. It can be done easily using find:
$ find . -name '*.txt' -type f -execdir mv -n '{}' ../ \;
The above command recurses into the current directory . and then applies the following cascade of conditionals to each item found:
-name '*.txt' will filter out only files that have the .txt extension
-type f will filter out only regular files (e.g. not directories that – for whatever reason – happen to have a name ending in .txt)
-execdir mv -n '{}' ../ \; executes the command mv -n '{}' ../ in the containing directory where the {} is a placeholder for the matched file's name and the single quotes are needed to stop the shell from interpreting the curly braces. The ; terminates the command and again has to be escaped from the shell interpreting it.
I have passed the -n flag to the mv program to avoid accidentally overwriting an existing file.
The above command will transform the following file system tree
dir1/
    dir11/
        file3.txt
        file4.txt
    dir12/
    file2.txt
dir2/
    dir21/
        file6.dat
    dir22/
        dir221/
            file8.txt
        file7.txt
    file5.txt
dir3/
    file9.dat
file1.txt
into this one:
dir1/
    dir11/
    dir12/
    file3.txt
    file4.txt
dir2/
    dir21/
        file6.dat
    dir22/
        dir221/
        file8.txt
    file7.txt
dir3/
    file9.dat
file2.txt
file5.txt
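Note that the top-level file1.txt was moved to ../, i.e. out of the shown tree, because its containing directory is . itself. To preview the moves before running them for real, you can substitute echo mv for mv:
$ find . -name '*.txt' -type f -execdir echo mv -n '{}' ../ \;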
To get rid of the empty directories, run
$ find . -type d -empty -delete
Again, this command will traverse the current directory . and then apply the following:
-type d this time filters out only directories
-empty filters out only those that are empty
-delete deletes them.
Fine print: -execdir is not specified by POSIX, though major implementations (at least the GNU and BSD one) support it. If you need strict POSIX compliance, you'll have to make do with the less safe -exec which would need additional thought to be applied correctly in this case.
Finally, please try your commands in a test directory with dummy files, not your actual data. Especially with the -delete option of find, you can lose all your data quicker than you might imagine. Read the man page and, if that is not enough, the reference manual of find. Never blindly copy shell commands from random strangers posted on the internet if you don't understand them.
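For reference, a strictly POSIX variant might look like the following sketch (untested). The -path test stands in for -mindepth/-execdir by restricting matches to files at least one directory deep, and the inline sh moves each file into the parent of its containing directory. Since -n is omitted for portability, existing files can be overwritten, so test first:
$ find . -path './*/*' -name '*.txt' -type f -exec sh -c 'for f; do mv "$f" "${f%/*}/.."; done' sh {} +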
Try this command:
cp ./folderA/folder1/*.txt ./folderA
Run something like this from the root(ish) directory:
#!/bin/bash
new_dir() {
    local LOC_DIR
    LOC_DIR=$(pwd)
    for i in "${LOC_DIR}"/*; do
        # Copy regular files up to the parent directory.
        [[ -f "${i}" ]] && cp "${i}" ../
        # Recurse into subdirectories; the subshell keeps our own
        # working directory unchanged when the recursion returns.
        [[ -d "${i}" ]] && ( cd "${i}" && new_dir )
    done
    return 0
}
new_dir
This will search each directory. When a file is encountered, it copies the file up a directory. When a directory is found, it will move down into the directory and start the process over again. I think it'll work for you.
Good luck.

How to gzip all files in all sub-directories in bash

I want to iterate over the subdirectories of my current location and gzip each file separately. For zipping files in a directory, I use
for file in *; do gzip "$file"; done
but this only works on the current directory and not its subdirectories. How can I rewrite the above statement so that it also zips the files in all subdirectories?
I'd prefer gzip -r ./ which does the same thing but is shorter.
No need for loops or anything more than find and gzip:
find . -type f ! -name '*.gz' -exec gzip "{}" \;
This finds all regular files in and below the current directory whose names don't end with the .gz extension (that is, all files that are not already compressed). It invokes gzip on each file individually.
Edit, based on comment from user unknown:
The curly braces ({}) are replaced with the filename, which is passed directly, as a single word, to the command following -exec as you can see here:
$ touch foo
$ touch "bar baz"
$ touch xyzzy
$ find . -exec echo {} \;
./foo
./bar baz
./xyzzy
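With GNU or BSD findutils, you can also let xargs batch the file names instead of spawning one gzip per file, which is usually faster on large trees:
find . -type f ! -name '*.gz' -print0 | xargs -0 gzip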
find . -type f | while IFS= read -r file; do gzip "$file"; done
I can't comment on the top post (yet...), but I read in the man pages of find that -execdir is safer than -exec because the command is run in the subdirectory where the match is found, rather than the parent directory that find is run from.
If anyone would like to use a pattern to locate specific files in a subdirectory to zip, I'd recommend using
find ./ -type f -name 'addRegexHere' -execdir gzip -k "{}" \;
If you don't need a pattern, stick with the recursive gzip call above (or below, if I gain any traction, haha).
source
