How to rename files to param-case in Bash

How do I rename a bunch of files in a directory to param-case (hyphen-separated lowercase)?
Here's one that does it in JavaScript, but I'm not sure how to go about this in bash.

Below is an option in bash:
for file in ./* ; do mv "$file" "$(echo "$file" | sed 's/\(.\)\([A-Z]\)/\1-\2/g' | tr '[:upper:]' '[:lower:]')" ; done
An alternative with perl:
for file in ./* ; do mv "$file" "$(echo "$file" | perl -ne 'print lc(join("-", split(/(?=[A-Z])/)))')" ; done
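A dry-run sketch along the same lines (my own tweak, not from the answers above): the character class only inserts a hyphen after a lowercase letter or digit, so a leading capital does not pick up a stray hyphen, and the echo prints the planned rename so you can check it first:
for file in ./*; do
    new=$(printf '%s\n' "$file" | sed 's/\([a-z0-9]\)\([A-Z]\)/\1-\2/g' | tr '[:upper:]' '[:lower:]')
    [ "$file" != "$new" ] && echo mv -- "$file" "$new"   # drop "echo" to actually rename
done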

Related

Avoid collision, if copying files

I was trying to copy all files of a certain file type from all subfolders to one place. Unfortunately, this can cause collisions if two files in different subfolders have the same name.
I was using
find ./ -name '*.jpg' -exec mv -u '{}' . \;
How can I adjust this to automatically rename files (e.g. append "_1") to avoid collisions?
Or better: check beforehand whether the files are the same (e.g. same size). If yes, ignore (overwriting would be fine, too). If no, rename to avoid the collision.
Suggestions would be appreciated. Thanks!
You could check before moving each individual file. Here I've used cksum to compare, which returns both the filesize and a simple checksum.
find ./ -name '*.jpg' -print0 |
while read -d '' -r path; do
    file=$(basename "$path")
    if [[ -e $file ]]; then
        if [[ $(cksum "$file" | awk '{print $1 $2}') = $(cksum "$path" | awk '{print $1 $2}') ]]; then
            continue
        fi
        read -n 1 -p "File '$file' would be overwritten by '$path', continue? (y/N) " -r prompt </dev/tty
        if [[ $prompt != [Yy] ]]; then
            continue
        fi
    fi
    mv -f -v "$path" "$file"
done
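If you would rather rename automatically instead of being prompted, here is a rough sketch of the question's "append _1" idea (the numbering loop is my own addition, and comparing cksum output via stdin is an assumption, not part of the answer above):
find ./ -name '*.jpg' -print0 |
while read -d '' -r path; do
    file=$(basename "$path")
    if [[ -e $file ]]; then
        # identical content already present here: skip it
        if [[ $(cksum < "$path") == $(cksum < "$file") ]]; then
            continue
        fi
        # otherwise pick a free name: photo.jpg -> photo_1.jpg, photo_2.jpg, ...
        n=1
        while [[ -e "${file%.*}_$n.${file##*.}" ]]; do
            ((n++))
        done
        file="${file%.*}_$n.${file##*.}"
    fi
    mv -v "$path" "$file"
done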

Batch rename Dropbox conflict files

I have a large number of conflict files generated (incorrectly) by the Dropbox service. These files are on my local Linux file system.
Example file name = compile (master's conflicted copy 2013-12-21).sh
I would like to rename each file to its correct original name, in this case compile.sh, and remove any existing file with that name. Ideally this could be scripted or done in such a way as to be recursive.
EDIT
After looking over the solution provided, playing around, and doing further research, I cobbled together something that works well for me:
#!/bin/bash
folder=/path/to/dropbox
clear
echo "This script will climb through the $folder tree and repair conflict files"
echo "Press a key to continue..."
read -n 1
echo "------------------------------"
find $folder -type f -print0 | while read -d $'\0' file; do
    newname=$(echo "$file" | sed 's/ (.*conflicted copy.*)//')
    if [ "$file" != "$newname" ]; then
        echo "Found conflict file - $file"
        if test -f $newname
        then
            backupname=$newname.backup
            echo " "
            echo "File with original name already exists, backup as $backupname"
            mv "$newname" "$backupname"
        fi
        echo "moving $file to $newname"
        mv "$file" "$newname"
        echo
    fi
done
For all files in the current directory:
for file in *
do
    newname=$(echo "$file" | sed 's/ (.*)//')
    if [ "$file" != "$newname" ]; then
        echo moving "$file" to "$newname"
        # mv "$file" "$newname" #<--- remove the comment once you are sure your script does the right thing
    fi
done
Or, to recurse, put the following into a script that I'll call /tmp/myrename:
file="$1"
newname=$(echo "$file" | sed 's/ (.*)//')
if [ "$file" != "$newname" ]; then
echo moving "$file" to "$newname"
# mv "$file" "$newname" #<--- remove the comment once you are sure your script does the right thing
fi
Then run find . -type f -print0 | xargs -0 -n 1 /tmp/myrename (this is a bit hard to do directly on the command line without an extra script, because the file names contain blanks).
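Note that the helper script needs a shebang line and execute permission before xargs can run it directly, or you can invoke it through bash; a usage sketch:
chmod +x /tmp/myrename    # assumes the script starts with a #!/bin/bash line
find . -type f -print0 | xargs -0 -n 1 /tmp/myrename
# or, without making it executable:
find . -type f -print0 | xargs -0 -n 1 bash /tmp/myrename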
A small contribution: I had a problem with this script. Files with spaces in their names were not handled correctly. So I modified line 17:
-------cut-------------cut---------
if test -f "$newname"
-------cut-------------cut---------
The script displayed above is now outdated; the following works fine with the latest version of Dropbox running on Linux Mint at the time of writing:
#!/bin/bash
#modify this as needed
folder="./"
clear
echo "This script will climb through the $folder tree and repair conflict files"
echo "Press a key to continue..."
read -n 1
echo "------------------------------"
find "$folder" -type f -print0 | while read -d $'\0' file; do
newname=$(echo "$file" | sed 's/ (.*Case Conflict.*)//')
if [ "$file" != "$newname" ]; then
echo "Found conflict file - $file"
if test -f "$newname"
then
backupname=$newname.backup
echo " "
echo "File with original name already exists, backup as $backupname"
mv "$newname" "$backupname"
fi
echo "moving $file to $newname"
mv "$file" "$newname"
echo
fi
done
You can use the tool Dropbox Conflict Fix. It resolved all my conflicted copy files.
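For a read-only preview before running either script, a sketch like this (the Dropbox path is a placeholder you would adjust) lists each conflict file next to the name it would be given:
find /path/to/dropbox -type f -name '*conflicted copy*' | while IFS= read -r f; do
    echo "$f  ->  $(printf '%s\n' "$f" | sed 's/ (.*conflicted copy.*)//')"
done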

After extracting files with 7zip, how can you rename those files and save them

I am using the following command to extract the files with 7zip
7za x -p$passwd $file -o$outdir
Many files get extracted, and I want to rename these files after extraction. How can I do it with a ksh script?
files=`ls ABC_0722*.zip | xargs -r`
outdir="/abc/def/prq/xyz"
for file in $files; do
    passwd=`echo $file| awk '{print substr($0,11,2)}'``echo ABC``echo $file| awk '{print substr($0,5,2)}'`
    7za x -p$passwd $file -o$outdir
done
After the extraction I need to rename the files to abcdef.
After decompressing all the files, use a for loop to look for all regular files that are not symbolic links, strip the basename from each path, and replace it with abcde plus a counter:
for f in $outdir/*; do
    [[ -f $f && ! -L $f ]] && { ((++i)); mv -- "$f" "${f%/*}/abcde$i"; };
done
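If the renamed files should keep their original extensions (the question only says "abcdef", so this is an assumption), the same loop can carry the extension over, at least for files that actually have one:
i=0
for f in "$outdir"/*; do
    # abcde1.zip, abcde2.zip, ... keeping each file's original suffix
    [[ -f $f && ! -L $f ]] && { ((++i)); mv -- "$f" "${f%/*}/abcde$i.${f##*.}"; }
done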

How to grep for a pattern in the files in tar archive without filling up disk space

I have a tar archive which is very big, ~5 GB.
I want to grep for a pattern in all the files in the archive (and also print the name of the file that has the pattern) without filling up my disk space by extracting the archive.
Is there any way I can do that?
I tried these, but they do not give me the file names that contain the pattern, just the matching lines:
tar -O -xf test.tar.gz | grep 'this'
tar -xf test.tar.gz --to-command='grep awesome'
Also, where is this feature of tar documented? tar xf test.tar $FILE
Seems like nobody posted this simple solution that processes the archive only once:
tar xzf archive.tgz --to-command \
'grep --label="$TAR_FILENAME" -H PATTERN ; true'
Here tar passes the name of each file to the command in the $TAR_FILENAME environment variable (see the docs), and grep uses it to label each match. The trailing true is added so that tar doesn't complain when grep exits non-zero for files without a match.
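If you only want the names of the files that contain the pattern rather than the matching lines, the same construction should also work with grep -l (a sketch; PATTERN is still a placeholder):
tar xzf archive.tgz --to-command \
    'grep -l --label="$TAR_FILENAME" PATTERN ; true'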
Here's my take on this:
while read filename; do tar -xOf file.tar "$filename" | grep 'pattern' | sed "s|^|$filename:|"; done < <(tar -tf file.tar | grep -v '/$')
Broken out for explanation:
while read filename; do -- it's a loop...
tar -xOf file.tar "$filename" -- this extracts each file...
| grep 'pattern' -- here's where you put your pattern...
| sed "s|^|$filename:|"; - prepend the filename, so this looks like grep. Salt to taste.
done < <(tar -tf file.tar | grep -v '/$') -- end the loop, get the list of files as to fead to your while read.
One proviso: this breaks if you have vertical bars (|) in your filenames, since | is used as the sed delimiter.
Hmm. In fact, this makes a nice little bash function, which you can append to your .bashrc file:
targrep() {
    local taropt=""
    local pattern="$1"
    if [[ $# -lt 2 ]]; then
        echo "Usage: targrep pattern file ..." >&2
        return 1
    fi
    while [[ -n "$2" ]]; do
        if [[ ! -f "$2" ]]; then
            echo "targrep: $2: No such file" >&2
        fi
        case "$2" in
            *.tar.gz) taropt="-z" ;;
            *) taropt="" ;;
        esac
        # list the archive, skip directories, then grep each member in turn
        while read -r filename; do
            tar $taropt -xOf "$2" "$filename" \
                | grep "$pattern" \
                | sed "s|^|$filename:|"
        done < <(tar $taropt -tf "$2" | grep -v '/$')
        shift
    done
}
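Usage would then be, for example:
targrep 'pattern' archive1.tar archive2.tar.gz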
Here's a bash function that may work for you. Add the following to your ~/.bashrc
targrep () {
    for i in $(tar -tzf "$1"); do
        results=$(tar -Oxzf "$1" "$i" | grep --label="$i" -H "$2")
        echo "$results"
    done
}
Usage:
targrep archive.tar.gz "pattern"
It's incredibly hacky, but you could abuse tar's -v option to process and delete each file as it is extracted.
grep_and_delete() {
    if [ -n "$1" -a -f "$1" ]; then
        grep -H 'this' -- "$1" </dev/null
        rm -f -- "$1" </dev/null
    fi
}
mkdir tmp; cd tmp
tar -xvzf ../test.tar.gz | (
    prev=''
    while read pathname; do
        # once the next name is printed, the previous file is fully extracted
        grep_and_delete "$prev"
        prev="$pathname"
    done
    grep_and_delete "$prev"
)
tar -tf test.tar.gz | grep -v '/$'| \
xargs -n 1 -I _ \
sh -c 'tar -xOf test.tar.gz _|grep -q <YOUR SEARCH PATTERN> && echo _'
Try:
tar tvf name_of_file | grep --regex="pattern"
The t option lists the contents of the tar file without extracting them, v is verbose, and f names the archive file. Note that this only searches the archive listing (file names), not the file contents. It should save you considerable hard disk space.
These may help:
zcat log.tar.gz | grep -a -i "string"
zgrep -i "string" log.tar.gz
http://www.commandlinefu.com/commands/view/9261/grep-compressed-log-files-without-extracting

Change extension of file using shell script

How do I change the extension of all *.dat files in a directory to *.txt? The shell
script should take the directory name as an argument and can take multiple
directories as arguments. It should print a log of the command results in
append mode with a date and timestamp.
Bash can do all of the heavy lifting such as extracting the extension and tagging on a new one. For example:
for file in "$1"/*.dat ; do mv "$file" "${file%.*}.txt" ; done
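The question also asks for a log appended with a date and timestamp; one way to bolt that onto the same loop (the log file name rename.log is just an example):
for file in "$1"/*.dat ; do
    # append "date: old -> new" for every rename
    echo "$(date) $(mv -v -- "$file" "${file%.*}.txt")" >> rename.log
done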
Batch File Rename By File Extension in Unix
# change .htm files to .html
for file in *.htm ; do mv $file `echo $file | sed 's/\(.*\.\)htm/\1html/'` ; done
# change .html files to .htm
for file in *.html ; do mv $file `echo $file | sed 's/\(.*\.\)html/\1htm/'` ; done
#change .html files to .shtml
for file in *.html ; do mv $file `echo $file | sed 's/\(.*\.\)html/\1shtml/'` ; done
#change .html files to php
for file in *.html ; do mv $file `echo $file | sed 's/\(.*\.\)html/\1php/'` ; done
so ==>
# change .dat files to .txt
for file in *.dat ; do mv $file `echo $file | sed 's/\(.*\.\)dat$/\1txt/'` ; done
#!/bin/bash
for d in "$@"; do
    for f in "$d"/*.dat; do
        echo $(date) $(mv -v "$f" "${f%.dat}.txt")
    done
done
Output redirection should be done by the shell when running the script
Leaving out argument validity checks
Simple script:
#!/bin/bash
if [ $# -lt 1 ]; then
    echo "Usage: `basename $0` <any number of directories space separated>"
    exit 85 # exit status for wrong number of arguments.
fi
for directories
do
    for files in "$directories"/*.dat; do
        echo $(date) $(mv -v "$files" "${files%.dat}.txt")
    done
done
The first for loop, given no explicit list, iterates over the positional parameters ("$@"), i.e. the command-line arguments passed.
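An example invocation, with the shell doing the log redirection as noted above (the script name rename_dat.sh is hypothetical):
./rename_dat.sh dir1 dir2 >> rename.log 2>&1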
Following Pben's solution: if your filename contains blank spaces, you should put double quotation marks around the variable, like the following:
#remove the space in file name
#example file name:19-014-0100.mp3 .mp3
#result file name:19-014-0100.mp3
$ for file in *.mp3 ;
do target=`echo "$file" | sed 's/ //g'`;
echo "$target";
mv "$file" "$target";
done;
#remove the duplicate file extension in file name
#example file name:19-014-0100.mp3.mp3
#result file name:19-014-0100.mp3
$ for file in *.mp3 ;
do target=`echo "$file" | sed 's/\.mp3\.mp3$/.mp3/g'`;
echo "$target";
mv "$file" "$target";
done;
To rename (change the extension of) all my .html files to .epub files, I use this command line:
find . -name "*.html*" -exec rename -v 's/\.html$/\.epub/i' {} \;
The script first finds the names with the given extension, removes the extension from the names, and then adds a backslash (\) before whitespace so the shell handles the names correctly.
Then the mv command is executed.
The '.temp' folder is used to hide the intermediate files from the user in a GUI.
#!/bin/sh
if [ $# -ne 3 ]
then
    echo "Usage: ./script folder current_extension modify_extension"
    exit
fi
mkdir .temp
find $1 -name "*.$2" > .temp/output_1 && sed "s/$2//" .temp/output_1 > .temp/output_2 && sed -e "s/[ \t]/\\\ /g" .temp/output_2 > .temp/output_3
while read line
do
    mv -v "$line""$2" "$line""$3"
done < .temp/output_3
rm -rf .temp
The output files are saved inside the '.temp' folder,later the '.temp' folder is removed.
The top-voted answer didn't really work for me. I may have been doing something wrong. My scenario was trying to create a file with the original name but with the date appended to it, along with changing the extension from .xlsx to .csv. This is what worked for me:
csvname=`echo $xlsx |sed 's/\.xlsx//'`"-$now"`echo $xlsx | sed 's/\(.*\.\)xlsx/\.csv/'`
So, for all the .dat files in a directory (without the date addition), you could run something like this:
for i in *.dat
do mv $i `echo $i |sed 's/\.dat//'``echo $i | sed 's/\(.*\.\)dat/\.txt/'`
done
From the above, this section of code just removed the extension:
echo $i |sed 's/\.dat//'
And this section changes the .dat to .txt:
echo $i | sed 's/\(.*\.\)dat/\.txt/'
And by placing them next to each other, the two outputs are concatenated into the new filename. It's like doing this:
mv [filename][.dat] [filename]+[.txt]
Though in my own case I used the result on STDOUT instead of passing it to the mv command.
The following command changes the file extension from .c to .h:
find . -depth -name "*.c" -exec sh -c 'dname=$(dirname {}) && fname=$(basename {} .c) && mv {} $dname/$fname.h' ";"
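Embedding {} inside the sh -c string is fragile with unusual file names; a variant that passes the name as a positional parameter instead does the same job (my adjustment, not the original answer):
find . -depth -name "*.c" -exec sh -c 'mv -- "$1" "${1%.c}.h"' _ {} \;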
Change .js files to the .cjs extension recursively:
cd dist # where you place your .js
for file in $(find . -type f -name "*.js"); do mv "$file" "${file%.*}.cjs"; done
