Changing all files' extensions in a folder using the CLI in Linux

How can I change the extensions of all files in a folder with one command on the CLI in Linux?

Use rename:
rename 's/\.old$/.new/' *.old

If you have the Perl rename installed (there are several different rename implementations), you can do something like this:
$ ls -1
test1.foo
test2.foo
test3.foo
$ rename 's/\.foo$/.bar/' *.foo
$ ls -1
test1.bar
test2.bar
test3.bar

You could use a for loop on the command line:
for foo in *.old; do mv "$foo" "$(basename "$foo" .old).new"; done
This takes every file with the .old extension and renames it with a .new extension. The quotes keep filenames with spaces intact.
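A variant using bash parameter expansion does the same thing without spawning basename for every file (a sketch equivalent to the loop above; ${foo%.old} strips the suffix):
for foo in *.old; do mv -- "$foo" "${foo%.old}.new"; done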

This works on the current directory AND its sub-directories. It will rename every .oldExtension file under the directory structure with a new extension.
for f in `find . -iname '*.oldExtension' -type f -print`; do mv "$f" "${f%.oldExtension}.newExtension"; done
Beware that the backtick expansion is split on whitespace, so this version breaks on filenames that contain spaces; the next answer avoids that.

This will work recursively, and with files containing spaces.
Be sure to replace .old and .new with the proper extensions before running.
find . -iname '*.old' -type f -exec bash -c 'mv "$0" "${0%.old}.new"' {} \;
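A quick way to convince yourself this handles spaces (a hypothetical session with dummy names):
$ mkdir sub
$ touch 'my file.old' 'sub/another file.old'
$ find . -iname '*.old' -type f -exec bash -c 'mv "$0" "${0%.old}.new"' {} \;
$ find . -name '*.new'
./my file.new
./sub/another file.new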

Source: recursively add file extension to all files (not my answer).
find . -type f -exec mv '{}' '{}'.jpg \;
Explanation: this recursively finds all regular files (-type f) starting from the current directory (.) and applies the move command (mv) to each of them, appending .jpg to the name. Note also the quotes around {}, so that filenames with spaces (and even newlines...) are properly handled.
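On big trees, starting one mv process per file adds up; a sh -c wrapper with -exec ... + lets find batch many names per invocation (a variant sketch of the same rename, not from the quoted answer):
find . -type f -exec sh -c 'for f; do mv "$f" "$f.jpg"; done' sh {} +
Inside the wrapper, the batched filenames become the positional parameters that for f iterates over.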

Related

Linux: find common files from two directories with a single command

Dir1: [anyName]-test/target/surefire-reports/*.xml
Dir2: target/surefire-reports/*.xml
The Jenkins shell command I came up with:
sh "jar -cMvf Test.zip target/surefire-reports/*.xml *-test/target/surefire-reports/*.xml"
Only one of the directories exists (Dir1 or Dir2), so the shell step always fails with "no such file or directory".
Any better idea for collecting the XML files in a single command, without failing? (Maybe some regular expression?) Thanks!
With GNU find:
find . -type f -regex '\./\([^/]*-test/\)?target/surefire-reports/[^/]*\.xml' \
  -exec jar -cMvf Test.zip {} +
The -regex test matches the whole path of your regular (-type f) files. This will only add *.xml files from the surefire-reports directories themselves, not from their subdirectories. If you want to include subdirectories, replace [^/]*\.xml with .*\.xml.
Alternative using glob patterns:
find target/surefire-reports *-test/target/surefire-reports -maxdepth 1 -type f \
  -name '*.xml' -exec jar -cMvf Test.zip {} +
If you want to include subdirectories, remove -maxdepth 1.
Run both commands from the parent directory of target (your project dir).
Try:
cd <TOP DIRECTORY>
find . -type f -name "*unitTest.xml" -print | xargs jar -cMvf Test.zip
I put "*unitTest.xml" since in Dir2 the J is in caps.
This way it will capture the JUnitTest.xml files only if they exist.
The results of the find command are used as arguments to the jar command; that is what xargs does. find does not care whether any file is there or not, so no error.
Tested on bash.
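One caveat: GNU xargs runs the command once even when its input is empty, which can make the step fail anyway; -r suppresses the empty run, and -print0/-0 copes with spaces in names (a hedged variant of the same pipeline):
find . -type f -name "*unitTest.xml" -print0 | xargs -0 -r jar -cMvf Test.zip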

Loop through a directory with any level of depth

I want to execute a command on all files present at all levels of a directory. It may have any number of files and sub-directories, and even those sub-directories may contain any number of files and sub-directories. I want to do this using a shell script. As I am new to this field, can anyone suggest a way?
You can use the command "find" with "xargs" after a "|" (pipe).
Example: suppose I want to remove all files that have the ".txt" extension in the "Documents" directory:
find Documents -iname '*.txt' | xargs rm -f
The quotes around '*.txt' stop the shell from expanding the pattern before find sees it. Does that help?
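Filenames containing spaces or quotes will confuse a plain find | xargs pipeline; a NUL-separated variant (the -print0/-0 options of GNU and BSD find/xargs) is safer:
find Documents -iname '*.txt' -print0 | xargs -0 rm -f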
You can use a loop over a wildcard (*) expansion like so:
for dir in ~/dev/myproject/*; do (cd "$dir" && git status); done
If you want to apply commands on the individual files you should use the find command and execute commands on it like so:
find yourdirectory -type f -exec echo "File found: '{}'" \;
What this does:
finds all the items in the directory yourdirectory
that have the type f - so are a file
runs an exec on each file
Use find:
find . -type f -exec COMMAND {} \;
-type f applies the command only to files, not to directories. find is recursive by default.
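For example, to count the lines of every file under the current directory (wc -l is just a stand-in for your command):
find . -type f -exec wc -l {} \;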

Linux: recursively copy files to their parent folder

I want to recursively copy files with a specific extension to their parent folder. For example:
./folderA/folder1/*.txt to ./folderA/*.txt
./folderB/folder2/*.txt to ./folderB/*.txt
etc.
I checked cp and find commands but couldn't get it working.
I suspect that while you say copy, you actually mean to move the files up to their respective parent directories. It can be done easily using find:
$ find . -name '*.txt' -type f -execdir mv -n '{}' ../ \;
The above command recurses into the current directory . and then applies the following cascade of conditionals to each item found:
-name '*.txt' selects only files that have the .txt extension
-type f selects only regular files (e.g., not directories that – for whatever reason – happen to have a name ending in .txt)
-execdir mv -n '{}' ../ \; executes the command mv -n '{}' ../ in the containing directory where the {} is a placeholder for the matched file's name and the single quotes are needed to stop the shell from interpreting the curly braces. The ; terminates the command and again has to be escaped from the shell interpreting it.
I have passed the -n flag to the mv program to avoid accidentally overwriting an existing file.
The above command will transform the following file system tree
dir1/
  dir11/
    file3.txt
    file4.txt
  dir12/
  file2.txt
dir2/
  dir21/
    file6.dat
  dir22/
    dir221/
      file8.txt
    file7.txt
  file5.txt
dir3/
  file9.dat
file1.txt
into this one:
dir1/
  dir11/
  dir12/
  file3.txt
  file4.txt
dir2/
  dir21/
    file6.dat
  dir22/
    dir221/
    file8.txt
  file7.txt
dir3/
  file9.dat
file2.txt
file5.txt
To get rid of the empty directories, run
$ find . -type d -empty -delete
Again, this command will traverse the current directory . and then apply the following:
-type d this time selects only directories
-empty narrows these down to the ones that are empty
-delete deletes them.
Fine print: -execdir is not specified by POSIX, though major implementations (at least the GNU and BSD one) support it. If you need strict POSIX compliance, you'll have to make do with the less safe -exec which would need additional thought to be applied correctly in this case.
Finally, please try your commands in a test directory with dummy files, not your actual data. Especially with the -delete option of find, you can lose all your data quicker than you might imagine. Read the man page and, if that is not enough, the reference manual of find. Never blindly copy shell commands posted on the internet by random strangers if you don't understand them.
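In that spirit, a harmless dry run with -print first shows exactly what -delete would remove (same tests, no deletion):
find . -type d -empty -print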
Try this command:
$ cp ./folderA/folder1/*.txt ./folderA
Note that it handles just one folder at a time, not the whole tree.
Run something like this from the root(ish) directory:
#!/bin/bash

new_dir() {
    local LOC_DIR
    LOC_DIR=$(pwd)
    for i in "${LOC_DIR}"/*; do
        # copy plain files one directory up
        [[ -f "${i}" ]] && cp "${i}" ../
        # descend into directories, recurse, then come back up
        [[ -d "${i}" ]] && cd "${i}" && new_dir && cd ..
    done
    return 0
}

new_dir
This will search each directory. When a file is encountered, it copies the file up a directory. When a directory is found, it will move down into the directory and start the process over again. I think it'll work for you.
Good luck.

Search for text files in a directory and append a (static) line to each of them

I have a directory with many subdirectories, and files with suffixes in those subdirectories (e.g. FileA-SuffixA, FileB-SuffixB, FileC-SuffixC, FileD-SuffixA, etc.).
How can I recursively search for files with a certain suffix, and append a user-defined line of text to those files? I feel like this is a job for grep and sed, but I'm not sure how I would go about doing it. I'm fairly new to scripting, so please bear with me.
You can do it like this:
find /where/to/search -type f -iname '*.SUFFIX' -exec sh -c 'echo "USER DEFINED STRING" >> "$1"' sh {} \;
find searches in the supplied path
-type f finds only files
-iname '*.SUFFIX' finds the .SUFFIXed names, case ignored
The sh -c wrapper is needed because a bare >> would be taken as a redirection by your interactive shell before find ever runs.
find ./ -name "*suffix" -exec bash -c 'echo "line_to_add" >> "$1"' -- {} \;
Basically, you use find to get the list of matching files, then use bash to echo-append your line to each of them.
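A quick demonstration with throwaway files (hypothetical names, just to show the effect):
$ mkdir -p subdir
$ touch FileA-SuffixA subdir/FileD-SuffixA
$ find . -name '*SuffixA' -exec bash -c 'echo "line_to_add" >> "$1"' -- {} \;
$ tail -n 1 subdir/FileD-SuffixA
line_to_add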

How to gzip all files in all sub-directories in bash

I want to iterate over the sub-directories of my current location and gzip each file separately. To zip the files in a directory, I use
for file in *; do gzip "$file"; done
but this only works on the current directory, not its sub-directories. How can I rewrite the statement so that it also zips the files in all sub-directories?
I'd prefer gzip -r ./ which does the same thing but is shorter.
No need for loops or anything more than find and gzip:
find . -type f ! -name '*.gz' -exec gzip "{}" \;
This finds all regular files in and below the current directory whose names don't end with the .gz extension (that is, all files that are not already compressed). It invokes gzip on each file individually.
Edit, based on comment from user unknown:
The curly braces ({}) are replaced with the filename, which is passed directly, as a single word, to the command following -exec as you can see here:
$ touch foo
$ touch "bar baz"
$ touch xyzzy
$ find . -exec echo {} \;
./foo
./bar baz
./xyzzy
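On large trees, one gzip process per file is slow; -exec ... + hands gzip whole batches of names in a single invocation (same filter as above):
find . -type f ! -name '*.gz' -exec gzip {} +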
find . -type f | while IFS= read -r file; do gzip "$file"; done
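Since read splits on newlines, names containing a newline would still break this loop; a NUL-delimited pipeline (bash's read -d '' plus find's -print0, a sketch) is robust:
find . -type f -print0 | while IFS= read -r -d '' file; do gzip "$file"; done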
I can't comment on the top post (yet...), but I read in the man page of "find" that -execdir is safer than -exec because the command is run in the subdirectory where the match is found, rather than the parent directory that "find" is run from.
If anyone would like to use a pattern to locate specific files in a subdirectory to zip, I'd recommend using
find ./ -type f -name 'addRegexHere' -execdir gzip -k "{}" \;
(note that -name takes a shell glob, not a regex; use -regex if you need true regular expressions). If you don't need pattern matching, stick with the recursive gzip call above (or below, if I gain any traction haha).
