I need to insert text from 1 file at the top of a large number of files in a directory and its subdirectories. I have been able to do this successfully on a file by file basis using ed:
ed -s FileToAddTo.txt <<< $'0r TextToAdd.txt\nw'
However, when I replace FileToAddTo.txt with *.txt, nothing happens. How can I modify this, or use another Unix command such as sed, to add the contents of TextToAdd.txt recursively to all files in a directory ending with a specific extension? e.g.:
ed -rs *.txt <<< $'0r TextToAdd.txt\nw'
Please note that the code above this line does not work, it merely demonstrates what I would like to achieve.
Like this:
cat TextToAdd.txt FileToAddTo.txt > $$.tmp && mv $$.tmp FileToAddTo.txt
i.e. cat the new header file and the original file into a temporary file and then, if it was successful, rename the temporary file as the original.
And to run it recursively (PLEASE TEST ON BACKED-UP DATA FIRST!):
find . -type f -name "*.txt" -exec sh -c "cat TextToAdd.txt {} > $$.tmp && mv $$.tmp {}" \;
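A slightly more robust variant (a sketch, not the answer's exact form) passes each filename to sh as a positional argument instead of embedding {} inside the command string, and excludes TextToAdd.txt itself from the match; it assumes TextToAdd.txt sits in the directory you run the command from:
find . -type f -name "*.txt" ! -name "TextToAdd.txt" -exec sh -c '
  cat TextToAdd.txt "$1" > "$1.tmp" && mv "$1.tmp" "$1"   # prepend the header, then replace the original
' _ {} \;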
This works pretty well for me:
$ for fn in `find . -name '*.txt'`; do cat textToAdd.txt "$fn" > $$.tmp && mv $$.tmp "$fn"; done
Based on this, you can try:
for fn in $(find /folderName -type f); do cat "$fn" >> fileName; done
Trying to run a script that will fetch all directories, and the files contained in those directories, and log the data to a .CSV file.
So, if I were to have a structure like:
mainDir.dir -> [sub1.dir -> file01.png, sub2.dir -> file02.png]
I would get a CSV of:
dir; file
sub1; file01.png
sub2; file02.png
This is the script I currently have
for dir in */ .*/ ;
do
for entry in $dir
do
path="$entry"
empty=""
file="${$dir/$empty}"
echo -e "$dir;$file;" >> file.csv
done
done
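For reference, a minimal bash sketch of what the loop above seems to be aiming for (assuming a single level of subdirectories, as in the example) might be:
for dir in */ ; do
  for entry in "$dir"* ; do
    [ -f "$entry" ] || continue                       # skip anything that is not a regular file
    printf '%s;%s\n' "${dir%/}" "${entry#"$dir"}"     # directory;filename
  done
done >> file.csv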
find is useful for processing many files recursively.
Command
find . -type f -execdir sh -c "pwd | tr -d '\n' >> ~/my_file.csv; echo -n ';' >> ~/my_file.csv; echo {} | sed -e 's/^\.\///' >> ~/my_file.csv" \;
Note: make sure you do not give a relative path to the output CSV file. execdir changes the working directory (and that is what makes pwd work).
Breakdown
find . -type f: find all files recursively, starting here.
-execdir sh -c "pwd | tr -d '\n' >> ~/my_file.csv; echo -n ';' >> ~/my_file.csv;: for each file, execute pwd in its directory; strip the newline and append the directory name to the output, then append a semicolon, again with no newline.
echo {} | sed -e 's/^\.\///' >> ~/my_file.csv" \;: append the filename to the output. This time keep the newline, but by default find puts ./ in front of the filename; the sed removes it.
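If you are on GNU find, a shorter sketch is possible (assumptions: GNU find's -printf is available, and relative directory names are acceptable instead of the absolute ones pwd gives):
find . -type f -printf '%h;%f\n' | sed 's|^\./||' >> file.csv   # %h = directory part, %f = filename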
If you don't need more than one level deep, this seems to work
for i in **/*; do echo $i | tr / \; ; done >> file.csv
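Note that in bash, ** only recurses deeper when the globstar option is on; without it ** behaves like a single *, which is why the command above stays one level deep. A hedged sketch for bash 4+ if you do want full depth (be aware that every extra / in a deeper path becomes another ; column in the CSV):
shopt -s globstar   # bash 4+: make ** match files at any depth
for i in **/*; do [ -f "$i" ] && echo "$i" | tr / \; ; done >> file.csv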
I want duplicates of the files with different names.
I am currently trying out these commands before putting them into my bash script.
$ set dir = /somewhere/states
$ find $dir -name "total.txt" -type f | xargs ls -1
/somewhere/states/florida/fixed.fl_Asite_ttl/somewhere/total.txt
/somewhere/states/hawaii/fixed.hi_Bsite_ttl/somewhere/total.txt
/somewhere/states/kentucky/fixed.ky_Asite_ttl/somewhere/total.txt
/somewhere/states/michigan/fixed.mi_Csite_ttl/somewhere/total.txt
/somewhere/states/texas/fixed.tx_Vsite_ttl/somewhere/total.txt
I know I can rename a file using something like this, but it isn't exactly what I want:
$ find $dir -name "total.txt" -exec sh -c 'cp {} `dirname {}`/`basename {} `why.xls' \;
/somewhere/states/florida/fixed.fl_Asite_ttl/somewhere/total.txtwhy.xls
/somewhere/states/hawaii/fixed.hi_Bsite_ttl/somewhere/total.txtwhy.xls
/somewhere/states/kentucky/fixed.ky_Asite_ttl/somewhere/total.txtwhy.xls
/somewhere/states/michigan/fixed.mi_Csite_ttl/somewhere/total.txtwhy.xls
/somewhere/states/texas/fixed.tx_Vsite_ttl/somewhere/total.txtwhy.xls
May I know how to copy the files and have the new files in the same dir?
Below are the examples.
I want to name the new files using everything after "fixed." and before "/somewhere", and change the file extension as well:
/somewhere/states/florida/fixed.fl_Asite_ttl/somewhere/fl_Asite_ttl.xls
/somewhere/states/hawaii/fixed.hi_Bsite_ttl/somewhere/hi_Bsite_ttl.xls
/somewhere/states/kentucky/fixed.ky_Asite_ttl/somewhere/ky_Asite_ttl.xls
/somewhere/states/michigan/fixed.mi_Csite_ttl/somewhere/mi_Csite_ttl.xls
/somewhere/states/texas/fixed.tx_Vsite_ttl/somewhere/tx_Vsite_ttl.xls
Update:
/somewhere/states/florida_fixed_ttl/fixed.fl_Asite_ttl/somewhere/total.txt
Probably not the most elegant but this should work:
find . -name total.txt | while read F ; do [[ $F =~ fixed.[^/]* ]] ; N=$(echo $BASH_REMATCH | sed s/fixed\.//) ; echo "cp $F $(dirname $F)/$N.xls" ; done
If you are happy with the output, just remove the last echo, i.e. change this:
echo "cp $F $(dirname $F)/$N.xls"
to this:
cp "$F" "$(dirname $F)/$N.xls"
Note: if the .txt and .xls contents will always remain the same, you can use ln instead of cp -- one file, two names.
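For example, a minimal sketch combining both ideas (the regex capture group replaces the sed call, and ln replaces cp), under the same assumption that the .txt and .xls contents stay identical:
find . -name total.txt | while read F ; do
  [[ $F =~ fixed\.([^/]*) ]] && N=${BASH_REMATCH[1]}   # capture the text after "fixed."
  ln "$F" "$(dirname "$F")/$N.xls"                     # hard link: one file, two names
done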
I want to delete the email address "sudar.dudi#in.xyz.com" from all the files which lie in a particular directory. I used the command below after searching a lot.
The problem with this command is that it keeps searching outside the desired directory, and because of this my other files got corrupted.
Please help me to correct this:
rrcg0/utioi
(the utioi directory contains my files, which have specific names
like mpp.properties.JKH, mpp.properties.KJL, mpp.properties.KHF,
and so on)
yes n|for file in $(find . -type f)
do
sed 's/sudar.dudi#in.xyz.com//g' $file > $file.tmp
mv $file.tmp $file
done
You do not need find when you know the directory.
When you want to check that an entry matched by the glob is actually a file, you can use a test:
mydir=rrcg0/utioi
for file in ${mydir}/*
do
if [ -f "${file}" ]; then
sed 's/sudar.dudi#in.xyz.com//g' $file > $file.tmp && mv $file.tmp $file
fi
done
When you only want the properties files, you can use:
for file in ${mydir}/*properties*
I have used &&. This makes sure the mv command will only run when the sed command and its redirection succeeded.
I did not use the sed -i option, since that option is not always supported.
EDIT: What about spaces in the filename?
When you might have spaces in the filename, the above script must be corrected:
sed 's/sudar.dudi#in.xyz.com//g' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
To make it simpler, try using the -exec option of find together with GNU sed's option for in-place editing:
find <absolute dir path> -name "*" -type f -exec sed -i '/hello/d' {} \;
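The sed expression above ('/hello/d') just deletes lines containing hello, as an illustration. Applied to the directory, file names, and substitution from the question, a hedged sketch (assuming GNU find and GNU sed) would be:
find rrcg0/utioi -maxdepth 1 -type f -name "mpp.properties.*" \
  -exec sed -i 's/sudar.dudi#in.xyz.com//g' {} +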
You can also modify the existing files in place, so there is no need to move a temporary file afterwards: just include -i in the sed command.
Modifying Walter A's code:
mydir=rrcg0/utioi
for file in ${mydir}/*
do
if [ -f "${file}" ]; then
sed -i 's/sudar.dudi#in.xyz.com//g' "$file"
fi
done
I am trying to convert all files in a given directory with suffix ".foo" to files containing the same basename but with suffix modified to ".bar". I am able to do this with a shell script and a for loop, but I want to write a one-liner that will achieve the same goal.
Objective:
Input: *.foo
Output: *.bar
This is what I have tried:
find . -name "*.foo" | xargs -I {} mv {} `basename {} ".foo"`.bar
This is close but incorrect. Results:
Input: *.foo
Output: *.foo.bar
Any ideas on why the given suffix is not being recognized by basename? The quotes around ".foo" are dispensable and the results are the same if they are omitted.
Although basename can work on file extensions, using the shell parameter expansion features is easier:
for file in *.foo; do mv "$file" "${file%.foo}.bar"; done
Your code with basename doesn't work because the backticks are expanded by the shell before xargs runs: basename is run only once, on the literal {}, so xargs just sees {}.bar each time and substitutes the full filename into it.
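If you do want a find-based one-liner rather than a loop, one hedged sketch lets a tiny sh script do the suffix stripping once per file (assumes any POSIX sh; the parameter expansion then runs after the filename is known):
find . -name "*.foo" -exec sh -c 'mv "$1" "${1%.foo}.bar"' _ {} \;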
for file in *.foo ; do mv $file `echo $file | sed 's/\(.*\.\)foo/\1bar/'` ; done
Example:
$ ls
1.foo 2.foo
$ for file in *.foo ; do mv $file `echo $file | sed 's/\(.*\.\)foo/\1bar/'` ; done
$ ls
1.bar 2.bar
$
for x in $(find . -name "*.foo"); do mv $x ${x%%foo}bar; done
$ for f in *.foo; do echo mv $f ${f%foo}bar; done
mv a.foo a.bar
mv b.foo b.bar
Remove echo when ready.
If you have installed mmv, you can do
mmv \*.foo \#1.bar
Why don't you use "rename" instead of scripts or loops?
RHEL: rename foo bar *.foo
Debian: rename 's/foo/bar/' *.foo
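If a filename could contain "foo" somewhere other than the extension, anchoring the pattern is safer; a sketch for the Perl-style rename found on Debian-like systems:
rename 's/\.foo$/.bar/' *.foo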
What is the good way to add file extension ".jpg" to extension-less files with bash?
# Strip .jpg from all filenames
for f in *.jpg; do mv "$f" "${f%.jpg}"; done
# Add .jpg to all filenames (even those with .jpg already)
for f in *; do mv "$f" "$f.jpg"; done
# Add .jpg to all filenames...unless they are already .jpg
for f in *; do case "$f" in *.jpg) echo skipped $f;; *) mv "$f" "$f".jpg; esac; done
# Add .jpg to all filenames...unless they already have a . extension
for f in *; do case "$f" in *.*) echo skipped $f;; *) mv "$f" "$f".jpg; esac; done
You can use rename:
rename 's/(.*)/$1.jpg/' *
Another way - without loops
find . -type f -not -name "*.*" -print0 |\
xargs -0 file |\
grep 'JPEG image data' |\
sed 's/:.*//' |\
xargs -I % echo mv % %.jpg
Breakdown:
find all files without extension
check the file type
filter out only JPG files
delete filetype info
xargs run the "mv" for each file
The above command is a dry run; once you are satisfied with the output, remove the "echo" before mv.
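For completeness, the non-dry-run version is the same pipeline with the echo dropped from the last step:
find . -type f -not -name "*.*" -print0 |\
xargs -0 file |\
grep 'JPEG image data' |\
sed 's/:.*//' |\
xargs -I % mv % %.jpg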
EDIT
Some people have suggested that the path arguments should be wrapped in quotes to avoid argument splitting on paths with spaces.
Usually that recommendation is correct, but not in this case: here the % is replaced not by shell expansion but by xargs internally, so % is substituted correctly even for filenames with spaces.
Simple demo:
$ mkdir xargstest
$ cd xargstest
# create two files with spaces in names
$ touch 'a b' 'c d'
$ find . -type f -print
./c d
./a b
# notice the spaces in the above paths
# the actual xargs mv WITHOUT quotes
$ find . -type f -print | xargs -I % mv % %.ext
$ find . -type f -print
./a b.ext
./c d.ext
# the result is correct even in case with spaces in the filenames...
Simple:
cd to the directory where your files are and run:
for f in *; do mv "$f" "$f.jpg"; done
dry run:
rename -n s/$/.jpg/ *
actual renaming:
rename s/$/.jpg/ *
find . | while read FILE; do if [ $(file --mime-type -b "$FILE") == "image/jpeg" ]; then mv "$FILE" "$FILE".jpg; fi; done;
In my case I was not aware of the file type, so I used the mv command with the help of the file command to examine and, hopefully, detect the file type. This solution might not be perfect for all files, since the file command might not recognize the file type, but it worked well for me in most cases.
for f in *; do ext=$(file "$f" | awk '{print $2;}'); mv -n "$f" "$f.$ext"; done
awk is used to pick out the second word of the string returned by the file command, which is then used as the extension.
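A small hedged variation on the same idea: file's -b (brief) flag drops the filename prefix, and awk's tolower turns JPEG into jpeg, so the appended extension is lowercase (this still assumes the first word of file's output is a usable extension):
for f in *; do ext=$(file -b "$f" | awk '{print tolower($1)}'); mv -n "$f" "$f.$ext"; done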
rename --dry-run * -a ".jpg" # test
rename * -a ".jpg" # rename
You can use "move multiple files" (mmf). I am a maintainer of this project. The syntax is simple:
mmf files*
It will open all the file names in your $EDITOR, or vim by default; you can simply highlight the end of all the file names using Ctrl+v+G in vim, save the file, and quit. That's it: all your files are renamed.
The correct syntax for adding a file extension to multiple files within a directory which do not have a file extension is
find . | while read FILE; do if [[ -n `file --mime-type "$FILE" | grep 'message/rfc822'` ]]; then mv "$FILE" "$FILE".eml; fi; done;