How to use sed to change file extensions? - linux

I have to write a sed one-liner (possibly using pipes in Linux) to change a file extension, so I can do some kind of mv *.1stextension *.2ndextension, like mv *.txt *.c. The catch is that I can't use a batch file or a for loop, so I have to do it all with pipes and the sed command.

You can use string manipulation:
filename="file.ext1"
mv "${filename}" "${filename/%ext1/ext2}"
Or, if your system supports it, you can use rename.
Update
You can also do something like this:
mv "${filename%ext1}"{ext1,ext2}
which is called brace expansion.
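Brace expansion happens before mv ever runs, so you can preview the generated command with echo; a minimal sketch, assuming a file named file.ext1:
filename="file.ext1"
echo mv "${filename%ext1}"{ext1,ext2}
# prints: mv file.ext1 file.ext2
# The shell rewrites a{b,c} into ab ac, so mv just sees two ordinary arguments.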

sed is for manipulating the contents of files, not the filename itself. My suggestion:
rename 's/\.ext/\.newext/' ./*.ext
Or, there's this existing question which should help.

This may work:
find . -name "*.txt" |
sed -e 's|^\./||' |
awk '{print "mv", $1, $1"c"}' |
sed -e 's|\.txtc|.c|g' > table;
chmod u+x table;
./table
I don't know why you can't use a loop. It makes life much easier:
newex="c"; # Give your new extension
for file in *.*; # You can replace with *.txt instead of *.*
do
ex="${file##*.}"; # This retrieves the file extension
ne=$(echo "$file" | sed -e "s|\.$ex\$|.$newex|"); # Replaces the old extension with the new one, anchored at the end of the name
echo "$ex"; echo "$ne";
mv "$file" "$ne";
done

You can use find to find all of the files and then pipe that into a while read loop:
$ find . -name "*.ext1" -print0 | while read -r -d $'\0' file
do
mv "$file" "${file%.*}.ext2"
done
The ${file%.*} is parameter expansion: the % removes the shortest match of the pattern from the right side of the value. The pattern here is .* (the last . and everything after it), so the expansion strips the old extension.
The -print0 will separate file names with the NUL character instead of \n. The -d $'\0' will read in file names separated by the NUL character. This way, file names with spaces, tabs, \n, or other wacky characters will be processed correctly.
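A quick sketch of how that expansion behaves, using a hypothetical name with several dots:
f="archive.backup.ext1"
echo "${f%.*}"        # archive.backup   (%, shortest match from the right)
echo "${f%%.*}"       # archive          (%%, longest match from the right)
echo "${f%.*}.ext2"   # archive.backup.ext2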

You may try the following options.
Option 1: find along with rename
find . -type f -name "*.ext1" -exec rename -f 's/\.ext1$/.ext2/' {} \;
Option 2: find along with mv
find . -type f -name "*.ext1" -exec sh -c 'mv -f "$0" "${0%.ext1}.ext2"' {} \;
Note: the rename command is not available (or behaves differently) on many systems, so check for it first.

Another solution using only sed and sh:
printf "%s\n" *.ext1 |
sed "s/'/'\\\\''/g;s/\(.*\)ext1/mv '\1ext1' '\1ext2'/g" |
sh
The first substitution escapes any single quotes in the file name; the second turns each name into an mv command. For better performance (only one process created):
perl -le '($e,$f)=@ARGV;map{$o=$_;s/$e$/$f/;rename$o,$_}<*.$e>' ext2 ext3
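For reference, here is roughly what the sed stage of the pipeline above hands to sh, assuming two hypothetical files a.ext1 and it's.ext1:
mv 'a.ext1' 'a.ext2'
mv 'it'\''s.ext1' 'it'\''s.ext2'
# The first substitution rewrote the apostrophe as '\'' so the
# single-quoted mv arguments stay valid shell.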

Well, this should work:
mv "$file" "$(echo "$file" | sed -E -e 's/\.xml\.bak.*/.xml/' -e 's/^\.\///')"
output
abc.xml.bak.foobar -> abc.xml

Related

Bash script to recursively find and replace in files [duplicate]

How do I find and replace every occurrence of:
subdomainA.example.com
with
subdomainB.example.com
in every text file under the /home/www/ directory tree recursively?
find /home/www \( -type d -name .git -prune \) -o -type f -print0 | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'
-print0 tells find to print each of the results separated by a null character, rather than a new line. In the unlikely event that your directory has files with newlines in the names, this still lets xargs work on the correct filenames.
\( -type d -name .git -prune \) is an expression which completely skips over all directories named .git. You could easily expand it, if you use SVN or have other folders you want to preserve -- just match against more names. It's roughly equivalent to -not -path .git, but more efficient, because rather than checking every file in the directory, it skips it entirely. The -o after it is required because of how -prune actually works.
For more information, see man find.
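If you also use SVN, the prune expression extends naturally; a sketch, assuming .git and .svn are the directories to skip:
find /home/www \( -type d \( -name .git -o -name .svn \) -prune \) -o -type f -print0 |
xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'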
The simplest way for me is
grep -rl oldtext . | xargs sed -i 's/oldtext/newtext/g'
Note: Do not run this command on a folder including a git repo - changes to .git could corrupt your git index.
find /home/www/ -type f -exec \
sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
Compared to other answers here, this is simpler than most and uses sed instead of perl, which is what the original question asked for.
All the tricks are almost the same, but I like this one:
find <mydir> -type f -exec sed -i 's/<string1>/<string2>/g' {} +
find <mydir>: look up in the directory.
-type f:
File is of type: regular file
-exec command {} +:
This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of `{}' is allowed within the command. The command is executed in the starting directory.
For me the easiest solution to remember is https://stackoverflow.com/a/2113224/565525, i.e.:
sed -i '' -e 's/subdomainA/subdomainB/g' $(find /home/www/ -type f)
NOTE: -i '' solves the OS X problem sed: 1: "...": invalid command code .
NOTE: If there are too many files to process you'll get Argument list too long. The workaround: use the find -exec or xargs solutions described above.
cd /home/www && find . -type f -print0 |
xargs -0 perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g'
For anyone using silver searcher (ag)
ag SearchString -l0 | xargs -0 sed -i 's/SearchString/Replacement/g'
Since ag ignores git/hg/svn file/folders by default, this is safe to run inside a repository.
This one is compatible with git repositories, and a bit simpler:
Linux:
git grep -l 'original_text' | xargs sed -i 's/original_text/new_text/g'
Mac:
git grep -l 'original_text' | xargs sed -i '' -e 's/original_text/new_text/g'
(Thanks to http://blog.jasonmeridth.com/posts/use-git-grep-to-replace-strings-in-files-in-your-git-repository/)
To cut down on files to recursively sed through, you could grep for your string instance:
grep -rl <oldstring> /path/to/folder | xargs sed -i s^<oldstring>^<newstring>^g
If you run man grep you'll notice you can also define an --exclude-dir="*.git" flag if you want to omit searching through .git directories, avoiding the git index issues others have politely pointed out.
Leading you to:
grep -rl --exclude-dir="*.git" <oldstring> /path/to/folder | xargs sed -i s^<oldstring>^<newstring>^g
A straightforward method if you need to exclude directories (--exclude-dir=.folder) and might also have file names with spaces (solved by using the NUL byte as separator, via grep -Z and xargs -0):
grep -rlZ oldtext . --exclude-dir=.folder | xargs -0 sed -i 's/oldtext/newtext/g'
One more nice one-liner as an extra, using git grep:
git grep -lz 'subdomainA.example.com' | xargs -0 perl -i'' -pE "s/subdomainA.example.com/subdomainB.example.com/g"
Simplest way to replace (all files, directory, recursive):
find . -type f -not -path '*/\.*' -exec sed -i 's/foo/bar/g' {} +
Note: sometimes you need to ignore hidden files and directories such as .git; the command above does that.
If you want to include hidden files, use:
find . -type f -exec sed -i 's/foo/bar/g' {} +
In both cases the string foo will be replaced with the new string bar.
find /home/www/ -type f -exec perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
find /home/www/ -type f will list all files in /home/www/ (and its subdirectories).
The "-exec" flag tells find to run the following command on each file found.
perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
is the command run on the files (many at a time). The {} gets replaced by file names.
The + at the end of the command tells find to build one command for many filenames.
Per the find man page:
"The command line is built in much the same way that
xargs builds its command lines."
Thus it's possible to achieve your goal (and handle filenames containing spaces) without using xargs -0, or -print0.
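You can see the difference between \; and + by letting find run echo instead of perl; a minimal sketch:
find /home/www/ -type f -exec echo {} \;   # one echo process per file: each name on its own line
find /home/www/ -type f -exec echo {} +    # one echo process for many files: names batched on one line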
I just needed this and was not happy with the speed of the available examples. So I came up with my own:
cd /var/www && ack-grep -l --print0 subdomainA.example.com | xargs -0 perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g'
Ack-grep is very efficient at finding relevant files. This command replaced ~145,000 files in a breeze, whereas the others took so long I couldn't wait for them to finish.
or use the blazing fast GNU Parallel:
grep -rl oldtext . | parallel sed -i 's/oldtext/newtext/g' {}
grep -lr 'subdomainA.example.com' | while read file; do sed -i "s/subdomainA.example.com/subdomainB.example.com/g" "$file"; done
I guess most people don't know that they can pipe something into a "while read file" loop, which avoids those nasty -print0 args while preserving spaces in filenames.
Further, adding an echo before the sed allows you to see which files will change before actually doing it.
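A sketch of that dry run, simply prefixing the sed with echo:
grep -lr 'subdomainA.example.com' | while read file; do echo sed -i "s/subdomainA.example.com/subdomainB.example.com/g" "$file"; done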
Try this:
sed -i 's/subdomainA/subdomainB/g' `grep -ril 'subdomainA' *`
According to this blog post:
find . -type f | xargs perl -pi -e 's/oldtext/newtext/g;'
#!/usr/local/bin/bash -x
find /home/www -type f | while read files
do
sedtest=$(sed -n '/subdomainA/p' "${files}")
if [ "${sedtest}" ]
then
sed 's/subdomainA/subdomainB/g' "${files}" > "${files}".tmp
mv "${files}".tmp "${files}"
fi
done
If you do not mind using vim together with grep or find tools, you could follow up the answer given by user Gert in this link --> How to do a text replacement in a big folder hierarchy?.
Here's the deal:
recursively grep for the string that you want to replace in a certain path, and take only the complete path of the matching file (that would be the $(grep 'string' 'pathname' -Rl) part).
(optional) if you want to make a pre-backup of those files on centralized directory maybe you can use this also: cp -iv $(grep 'string' 'pathname' -Rl) 'centralized-directory-pathname'
after that you can edit/replace at will in vim following a scheme similar to the one provided on the link given:
:bufdo %s#string#replacement#gc | update
You can use awk to solve this as below:
for file in $(find /home/www -type f)
do
awk '{gsub(/subdomainA.example.com/,"subdomainB.example.com"); print $0;}' "$file" > ./tempFile && mv ./tempFile "$file"
done
Hope this helps!
To replace all occurrences in a git repository you can use:
git ls-files -z | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'
See List files in local git repo? for other options to list all files in a repository. The -z option tells git to separate the file names with a zero byte, which assures that xargs (with the option -0) can separate filenames, even if they contain spaces or whatnot.
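git ls-files also accepts pathspecs, so you can limit the replacement to a subset of tracked files; a sketch, assuming you only want to touch *.html files:
git ls-files -z -- '*.html' | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'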
A bit old school but this worked on OS X.
There are a few tricks here:
• Will only edit files with extension .sls under the current directory
• . must be escaped to ensure sed does not evaluate them as "any character"
• , is used as the sed delimiter instead of the usual /
Also note this is to edit a Jinja template to pass a variable in the path of an import (but this is off topic).
First, verify your sed command does what you want (this will only print the changes to stdout, it will not change the files):
for file in $(find . -name '*.sls' -type f); do echo -e "\n$file: "; sed 's,foo\.bar,foo/bar/\"+baz+\"/,g' "$file"; done
Edit the sed command as needed, once you are ready to make changes:
for file in $(find . -name '*.sls' -type f); do echo -e "\n$file: "; sed -i '' 's,foo\.bar,foo/bar/\"+baz+\"/,g' "$file"; done
Note the -i '' in the sed command, I did not want to create a backup of the original files (as explained in In-place edits with sed on OS X or in Robert Lujo's comment in this page).
Happy seding folks!
Just to avoid also changing
NearlysubdomainA.example.com
subdomainA.example.comp.other
while still changing
subdomainA.example.com.IsIt.good
(maybe not good given the idea behind a domain root):
find /home/www/ -type f -exec sed -i 's/\bsubdomainA\.example\.com\b/subdomainB.example.com/g' {} \;
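You can sanity-check those \b word boundaries against the sample names before touching any files:
printf '%s\n' NearlysubdomainA.example.com subdomainA.example.comp.other subdomainA.example.com.IsIt.good |
sed 's/\bsubdomainA\.example\.com\b/MATCHED/g'
# NearlysubdomainA.example.com     (unchanged: no boundary after "Nearly")
# subdomainA.example.comp.other    (unchanged: "com" runs straight into "p")
# MATCHED.IsIt.good                (replaced: boundaries on both sides)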
Here's a version that should be more general than most; it doesn't require find (using du instead), for instance. It does require xargs, which is only found in some versions of Plan 9 (like 9front).
du -a | awk -F' ' '{ print $2 }' | xargs sed -i -e 's/subdomainA\.example\.com/subdomainB.example.com/g'
If you want to add filters like file extensions use grep:
du -a | grep "\.scala$" | awk -F' ' '{ print $2 }' | xargs sed -i -e 's/subdomainA\.example\.com/subdomainB.example.com/g'
For Qshell (qsh) on IBMi, not bash as tagged by OP.
Limitations of qsh commands:
find does not have the -print0 option
xargs does not have -0 option
sed does not have -i option
Thus the solution in qsh:
DIR='your/path/here'
SEARCH='subdomainA.example.com'
REPLACE='subdomainB.example.com'
for file in $( find ${DIR} -P -type f ); do
TEMP_FILE=${file}.${RANDOM}.temp_file
if [ ! -e ${TEMP_FILE} ]; then
touch -C 819 ${TEMP_FILE}
sed -e "s/$SEARCH/$REPLACE/g" \
< ${file} > ${TEMP_FILE}
mv ${TEMP_FILE} ${file}
fi
done
Caveats:
Solution excludes error handling
Not Bash as tagged by OP
If you wanted to use this without completely destroying your SVN repository, you can tell 'find' to ignore all hidden files by doing:
find . \( ! -regex '.*/\..*' \) -type f -print0 | xargs -0 sed -i 's/subdomainA.example.com/subdomainB.example.com/g'
Using a combination of grep and sed:
for pp in $(grep -Rl looking_for_string)
do
sed -i 's/looking_for_string/something_other/g' "${pp}"
done
perl -p -i -e 's/oldthing/new_thingy/g' `grep -ril oldthing *`
To change multiple files (saving a backup of each as *.bak):
perl -p -i.bak -e "s/\|/x/g" *
will take all files in the directory and replace | with x.
This is called a "Perl pie" (easy as pie).

How to loop over files of same format in linux shell [duplicate]

I want to apply a specific action to various *.dat files. What I want to do is use sed to remove a specific character using
sed 's/"//g' file.dat >file.dat
I've tried to use the above code in the following way
sed 's/"//g' *.dat > *.dat
but it doesn't seem to work for all the files in the directory.
Any idea how to loop over all those files in the Linux shell?
I would use the find command and sed -i (the -i is in-place). So, the complete command would be something like -
find . -name "*.dat" -exec sed -i 's/\"//g' {} \;
You can't read from a file and write to the same file in the same pipeline, so
sed … file > file
will fail. In fact, it will truncate the file. Many implementations of sed contain the nonstandard -i flag, which abstracts the work of writing to a temporary file:
sed -i … file
So you could do:
for dat in *.dat; do
sed -i 's/"//g' "$dat"
done
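If you want to convince yourself of the truncation problem first, try it on a throwaway file:
echo 'say "hello"' > demo.dat
sed 's/"//g' demo.dat > demo.dat   # the shell truncates demo.dat before sed ever reads it
cat demo.dat                       # prints nothing; the contents are gone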
If your sed doesn't have the -i, you can use tr to remove a single character from files very efficiently:
for dat in *.dat; do
tr -d '"' "$dat" > "$dat.tmp"
mv "$dat.tmp" "$dat"
done
If you want to do this recursively (that is, on file nested within directories within your initial target directory), use either bash's globstar setting, or find:
shopt -s globstar
for dat in **/*.dat; do … # the rest is the same as above
or
find . -name '*.dat' -exec sed -i 's/"//g' {} \;
Try this code:
find . -type f -name '*.dat' -exec sed -i 's/"//g' {} ';'

Linux rename files to uppercase

I have large number of files in the format x00000.jpg, X00000.jpg and xx00000.jpg.
How can I rename these files so they are all uppercase, ignoring the numeric part of the name?
for f in * ; do mv -- "$f" "$(tr '[:lower:]' '[:upper:]' <<< "$f")" ; done
You can't rename files from Bash only, because Bash doesn't have any built-in command for renaming files. You have to use at least one external command for that.
If Perl is allowed:
perl -e 'for(@ARGV){rename$_,uc}' *.jpg
If Python is allowed:
python -c 'import os, sys; [os.rename(a, a.upper()) for a in sys.argv[1:]]' *.jpg
If you have thousands or more files, the solutions above are fast, and the solutions below are noticeably slower.
If AWK, ls and mv are allowed:
# Insecure if the filenames contain an apostrophe or newline!
eval "$(ls -- *.jpg | awk '{print"mv -- \x27"$0"\x27 \x27"toupper($0)"\x27"}')"
If you have a huge number of files, the glob-based solutions above stop working, because *.jpg expands to an argument list that is too long (error: Argument list too long).
If tr and mv are allowed, then see damienfrancois' answer.
If mv is allowed:
for file in *; do mv -- "$file" "${file^^}"; done
Please note that these rename .jpg to .JPG at the end, but you can modify them to avoid that.
The bash shell has a syntax for translating a variable name to all-caps.
for file in * ; do # or *.jpg, or x*.jpg, or whatever
mv "$file" "${file^^}"
done
This feature was introduced in bash version 4.0, so first verify that your version of bash implements it. To avoid mistakes, try it once replacing mv by echo mv, just to make sure it's going to do what you want.
The documentation for this feature is here, or type info bash and search for "upper".
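A quick sketch of bash's case-modification expansions:
f="x00000.jpg"
echo "${f^^}"   # X00000.JPG  (uppercase everything)
echo "${f^}"    # X00000.jpg  (uppercase the first character only)
echo "${f,,}"   # x00000.jpg  (lowercase everything)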
You should probably decide what to do if the target file already exists (say, if both x00000.jpg and X00000.JPG already exists), unless you're certain it's not an issue. To detect such name collisions, you can try:
ls *.jpg | tr '[a-z]' '[A-Z]' | sort | uniq -c | sort -n
and look for any lines not starting with 1.
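If your mv supports the non-POSIX -n (no-clobber) flag, as GNU and BSD mv do, you can also make the rename refuse to overwrite on a collision:
for file in *; do mv -n -- "$file" "${file^^}"; done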
rename
Probably the easiest way for renaming multiple files is using Perl's rename. To translate lowercase names to upper, you'd:
rename 'y/a-z/A-Z/' *
If the files are also in subdirs you can use globstar or find:
find . -maxdepth 1 -type f -iname "*.jpg" -execdir rename "y/a-z/A-Z/" {} +
References
Howto: Linux Rename Multiple Files At a Shell Prompt – nixCraft
More info about y/, translate instead of substitute.
DistroTube - Tools For Renaming Files In Linux
Combining previous answers could yield:
for file in * ; do # or *.jpg, or x*.jpg, or whatever
basename=$(tr '[:lower:]' '[:upper:]' <<< "${file%.*}")
newname="$basename.${file#*.}"
mv "$file" "$newname"
done
Using tr:
f="x00000.jpg"
n="${f%.*}"
n=$(tr '[:lower:]' '[:upper:]' <<< "$n")
f="$n.${f#*.}"
echo "$f"
OUTPUT:
X00000.jpg
If you are using zsh like me:
for f in * ; do mv -- "$f" "${f:u}" ; done
If renaming files/dirs is all you want, then you can use rnm:
rnm -rs '/./\C/g' -fo -dp -1 *
Explanation:
-rs : replace string. /./\C/g replaces every match of . (any character, as a regex) with its uppercase form.
-fo : file only mode
-dp : depth of directory (-1 means unlimited).
More examples can be found here.
Rename all source files to uppercase and stage to git
#!/bin/bash
SOURCE_DIRS=('./src' './Public' './Private')
FILE_EXTENSION="*.php"
for dir in "${SOURCE_DIRS[@]}"; do
files="$(find "$dir" -name "$FILE_EXTENSION")"
for file in $files; do
base_name="$(basename "$file")"
new_name=$(sed "s/$base_name/${base_name^}/g" <<< "$file")
git mv "$file" "${new_name}" # or
# mv $file ${new_name}
done
done

Convert all EOL (dos->unix) of all files in a directory and sub-directories recursively without dos2unix

How do I convert all EOL (dos->unix) of all files in a directory and sub-directories recursively without dos2unix? (I do not have it and cannot install it.)
Is there a way to do it using tr -d '\r' and pipes? If so, how?
For all files in current directory you can do it with a Perl one-liner: perl -pi -e 's/\r\n/\n/g' * (stolen from here)
EDIT: And with a small modification you can do subdirectory recursion:
find | xargs perl -pi -e 's/\r\n/\n/g'
You can use sed's -i flag to change the files in-place:
find . -type f -exec sed -i 's/\x0d//g' {} \+
If I were you, I would keep the files around to make sure the operation went okay. Then you can delete the temporary files when you get done. This can be done like so:
find . -type f -exec sed -i'.OLD' 's/\x0d//g' {} \+
find . -type f -name '*.OLD' -delete
Do you have sane file names and directory names without spaces, etc in them?
If so, it is not too hard. If you've got to deal with arbitrary names containing newlines and spaces, etc, then you have to work harder than this.
tmp=${TMPDIR:-/tmp}/crlf.$$
trap "rm -f $tmp.?; exit 1" 0 1 2 3 13 15
find . -type f -print |
while read name
do
tr -d '\015' < $name > $tmp.1
mv $tmp.1 $name
done
rm -f $tmp.?
trap 0
exit 0
The trap stuff ensures you don't get temporary files left around. There are other tricks you can pull, with more random names for your temporary files. You don't normally need them unless you work in a hostile environment.
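For instance, mktemp (where available) picks a unique temporary name securely; a sketch of an alternative body for the loop above, creating one temporary file per iteration:
tmp=$(mktemp) || exit 1           # unique, securely created temporary file
tr -d '\015' < "$name" > "$tmp"   # strip carriage returns
mv "$tmp" "$name"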
You can also use the editor in batch mode.
find . -type f -exec bash -c 'echo -ne "%s/\\\r//\nx\n" | ex "{}" ' \;
If \r isn't followed by \n (which may be the case in the files Tim Pote mentioned):
deleting \r (using tr -d) would also remove the line breaks
replacing \r with \n would not produce double or triple newlines
Maybe Tim Pote could verify those points for the files he mentioned.
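For such old Mac-style files (bare \r line endings), translating rather than deleting is the safe operation; a sketch, with oldmac.txt as a hypothetical input file:
tr '\r' '\n' < oldmac.txt > unix.txt   # each bare \r becomes a proper \n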
This removes carriage returns from all files in the current directory and all subdirectories, and should work on most Unix-like OSs:
grep -rlIU $'\r' . | xargs sed -i 's/\r//'
If it's done on Windows:
try running the command in Git Bash:
$ find | xargs perl -pi -e 's/\r\n/\n/g'
It may show some "Can't do inplace edit" type messages; just ignore them.

Linux - Replacing spaces in the file names

I have a number of files in a folder, and I want to replace every space character in all file names with underscores. How can I achieve this?
This should do it:
for file in *; do mv "$file" "$(echo "$file" | tr ' ' '_')" ; done
I prefer to use the command 'rename', which takes Perl-style regexes:
rename "s/ /_/g" *
You can do a dry run with the -n flag:
rename -n "s/ /_/g" *
Use sh...
for i in *' '*; do mv "$i" "$(echo "$i" | sed -e 's/ /_/g')"; done
If you want to try this out before pulling the trigger just change mv to echo mv.
If you use bash:
for file in *; do mv "$file" "${file// /_}"; done
What if you want to apply the replace task recursively? How would you do that?
Well, I just found the answer myself. Not the most elegant solution (it also tries to rename files that don't need it), but it works. (BTW, in my case I needed to rename the files with '%20', not with an underscore.)
#!/bin/bash
find . -type d | while read N
do
(
cd "$N"
if test "$?" = "0"
then
for file in *; do mv "$file" "${file// /%20}"; done
fi
)
done
Here is another solution:
ls | awk '{printf("\"%s\"\n", $0)}' | sed 'p; s/\ /_/g' | xargs -n2 mv
uses awk to add quotes around the name of the file
uses sed to replace space with underscores; prints the original name with quotes(from awk); then the substituted name
xargs takes 2 lines at a time and passes it to mv
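A sketch of the intermediate data for a hypothetical file named my file.txt:
"my file.txt"    # awk output, quoted; sed's p flag reprints it
"my_file.txt"    # sed's substituted copy on the next line
# xargs -n2 then runs: mv "my file.txt" "my_file.txt"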
Try something like this, assuming all of your files were .txt's:
for files in *.txt; do mv "$files" "$(echo "$files" | tr ' ' '_')"; done
Quote your variables:
for file in *; do echo mv "'$file'" "${file// /_}"; done
Remove the "echo" to do the actual rename.
To rename all the files with a .py extension use,
find . -iname "*.py" -type f | xargs -I% rename "s/ /_/g" "%"
Sample output,
$ find . -iname "*.py" -type f
./Sample File.py
./Sample/Sample File.py
$ find . -iname "*.py" -type f | xargs -I% rename "s/ /_/g" "%"
$ find . -iname "*.py" -type f
./Sample/Sample_File.py
./Sample_File.py
This will replace ' ' with '_' in every folder and file name recursively in Linux, with Python >= 3.5. Change path_to_your_folder to your path.
Only list files and folders:
python -c "import glob;[print(x) for x in glob.glob('path_to_your_folder/**', recursive=True)]"
Replace ' ' with '_' in every folder and file name
python -c "import os;import glob;[os.rename(x,x.replace(' ','_')) for x in glob.glob('path_to_your_folder/**', recursive=True)]"
With Python < 3.5, you can install glob2
pip install glob2
python -c "import os;import glob2;[os.rename(x,x.replace(' ','_')) for x in glob2.glob('path_to_your_folder/**')]"
The easiest way to replace a string (the space character in your case) with another string in a file name is sed, combined with mv to actually rename the file (sed on its own edits file contents, not names). You can do it as follows:
for f in *' '*; do mv -- "$f" "$(printf '%s\n' "$f" | sed 's/ /_/g')"; done
Hope this helps.
