Change the name of files in nested folders - Linux

I have been trying to think of a way to rename files that sit in nested folders and am having trouble resolving this. As a test I have been able to cut out the part of the name I would like to alter, but I can't think of how to put that into a variable and chain the name back together. The file format looks like this:
XXX_XXXX_YYYYYYYYYY_100426151653-all.mp3
I have been testing this to cut out the part I am looking to change, but I am not sure it is the best way of doing it:
echo XXX_XXXX_YYYYYYYYYY_100426095135-all.mp3 |awk -F_ '{print $4}' | cut -c 1-6
I would like to change the 100426151653 part of the name to the 20100426-151653 format.
I have tried to rename the file with rename using the format 's/ //g', but that did not work; I had to resort to rename ' ' '' filename to remove a blank space.
So the file would start as this:
XXX_XXXX_YYYYYYYYYY_100426151653-all.mp3
and end like this:
XXX_XXXX_YYYYYYYYYY_20100426-151653-all.mp3

How about using find and a bash function?
#!/bin/bash
modfn () {
    # $1 = full path to the file, $2 = suffix to strip (e.g. -all.mp3)
    suffix=$2
    fn=$(basename "$1")
    path=$(dirname "$1")
    fld1=$(echo "$fn" | cut -d '_' -f1)
    fld2=$(echo "$fn" | cut -d '_' -f2)
    fld3=$(echo "$fn" | cut -d '_' -f3)
    fld4=$(echo "$fn" | cut -d '_' -f4)
    fld5=${fld4%$suffix}                  # timestamp, e.g. 100426151653
    l5=${#fld5}
    fld6=${fld5:0:$(($l5 - 6))}           # date part, e.g. 100426
    fld7=${fld5:$(($l5 - 6)):6}           # time part, e.g. 151653
    newfn="${fld1}_${fld2}_${fld3}_20${fld6}-${fld7}${suffix}"
    echo "moving ${path}/${fn} to ${path}/${newfn}"
    mv "${path}/${fn}" "${path}/${newfn}"
}
export -f modfn
suffix="-all.mp3"
export suffix
find . -type f -name "*${suffix}" ! -name "*-*${suffix}" -exec bash -c 'modfn "$0" "$suffix"' {} \;
The above bash script uses find to search the current folder and its contents for files like WWW_XXXX_YYYYYYYYYY_AAAAAABBBBBB-all.mp3, while excluding ones that have already been renamed and look like WWW_XXXX_YYYYYYYYYY_20AAAAAA-BBBBBB-all.mp3.
W, X, Y, A and B can be any character other than underscore or dash.
All the files found are renamed.
NOTE: There are ways to shrink the above script, but doing that makes the operation less obvious.
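For instance, a more compact sketch of the same rename using bash parameter expansion (untested, and it makes the same assumptions about the -all.mp3 suffix and the 12-digit timestamp):
suffix="-all.mp3"
find . -type f -name "*${suffix}" ! -name "*-*${suffix}" -print0 |
while IFS= read -r -d '' f; do
    stem=${f%$suffix}                 # drop the -all.mp3 suffix
    ts=${stem: -12}                   # last 12 characters: YYMMDDhhmmss
    mv "$f" "${stem%$ts}20${ts:0:6}-${ts:6:6}${suffix}"
done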

This Perl one-liner does the job:
find . -name "XXX_XXXX_YYYYYYYYYY_*-all.mp3" -printf '%P\n' 2>/dev/null | perl -nle '$o=$_; s/_(\d{6})(\d{6})-/_20$1-$2-/; $n=$_; rename($o,$n) if !-e $n'
Note: I only came up with the find command and the regex part. Credit for the Perl one-liner goes to a PerlMonks user at http://www.perlmonks.org/?node=823355


Open header-files in editor based on content in corresponding source

I have several files that have the same name, but a different extension. For example
echo "array" > A.hpp
echo "..." > A.h
echo "content" > B.hpp
echo "..." > B.h
echo "content" > C.hpp
echo "..." > C.h
I want to get a list of *.h files based on some content in the corresponding *.hpp file. In particular I am looking for a one-liner to open them in my editor.
It is fair to assume that for each *.hpp file the corresponding *.h file exists. Also, since they are source files, it may be assumed that the file names do not contain whitespace.
Current approach
I know how to get a list of *.hpp files based on their content. One approach (surely not the only or best one) is
find . -type f -iname '*.hpp' -print | xargs grep -i 'content' | cut -d":" -f1
which gives
./B.hpp
./C.hpp
Opening in my editor is then done by
st `find . -type f -iname '*.hpp' -print | xargs grep -i 'content' | cut -d":" -f1`
But how can I get/open the corresponding *.h files?
You say you want to get a list of *.h files based on some content in the corresponding *.hpp file.
while read -r line ; do
    echo "${line%.hpp}.h"
done < <(grep -i 'content' *.hpp | cut -d":" -f1)
BashFAQ 001 recommends using a while loop with the read command to read a data stream.
One-liner as requested
st `while IFS= read -r line ; do echo "${line%.hpp}.h"; done < <(grep -i 'content' *.hpp| cut -d":" -f1)`
If you are dealing with filenames containing whitespace, you need to use printf instead of echo.
st `while IFS= read -r line ; do printf '%q ' "${line%.hpp}.h"; done < <(grep -i 'content' *.hpp| cut -d":" -f1)`
The %q lets printf format the output so that it can be reused as shell input.
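For example (the file name with a space is purely illustrative):
printf '%q\n' "my file.h"
# prints: my\ file.h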
Explanation
You have to read it from the end. First we grep all files ending in .hpp in the current directory for the string 'content' and cut away everything but the file name.
The while loop will read the output of grep and assign the basename to the variable line.
Inside the while loop we use bash's parameter substitution to change the file extension from .hpp to .h.
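For example, with an illustrative value of line:
line=./B.hpp
echo "${line%.hpp}.h"   # prints ./B.h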
Your question still isn't clear, but is this all you're trying to do (using GNU awk for gensub())?
$ awk '/content/{print gensub(/[^.]+$/,"h",1,FILENAME)}' *.hpp
B.h
C.h

how to remove the extension of multiple files using cut in a shell script?

I'm studying how to use 'cut'.
#!/bin/bash
for file in *.c; do
name = `$file | cut -d'.' -f1`
gcc $file -o $name
done
What's wrong with the above code?
There are a number of problems on this line:
name = `$file | cut -d'.' -f1`
First, to assign a shell variable, there should be no whitespace around the assignment operator:
name=`$file | cut -d'.' -f1`
Secondly, you want to pass $file to cut as a string, but what you're actually doing is trying to run $file as if it were an executable program (which it may very well be, but that's beside the point). Instead, use echo or shell redirection to pass it:
name=`echo $file | cut -d. -f1`
Or:
name=`cut -d. -f1 <<< $file`
I would actually recommend that you approach it slightly differently. Your solution will break if you get a file like foo.bar.c. Instead, you can use shell expansion to strip off the trailing extension:
name=${file%.c}
Or you can use the basename utility:
name=`basename $file .c`
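To see the difference on a hypothetical file name like foo.bar.c:
file=foo.bar.c
echo "$file" | cut -d. -f1   # foo      (everything after the first dot is lost)
echo "${file%.c}"            # foo.bar
basename "$file" .c          # foo.bar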
You should use command substitution (https://www.gnu.org/software/bash/manual/bashref.html#Command-Substitution) to execute a command in a script.
With that, the code will look like this:
#!/bin/bash
for file in *.c; do
    name=$(echo "$file" | cut -f 1 -d '.')
    gcc "$file" -o "$name"
done
echo sends $file to standard output.
The pipe then feeds that output to the next command.
cut with the . delimiter splits the file name and keeps the first part.
That result is assigned to the name variable.
Hope this answer helps.

Bash: Variable contains executable path -> Convert to string

I have this problem and have not found a sufficient solution yet; maybe you guys can help me.
I need to do this:
find -name some.log
It gives back a lot of hits. So now I would like to go through them with a "for" like this:
for a in $(find -name vmware.log)
do
XXXXXXX
done
After that, I would like to cut the path in variable $a. Let's assume $a has the following content:
./this/is/a/path/some.log
I'll cut this variable with
cut -d/ -f2 $a
The finished code is like this:
for a in $(find -name vmware.log)
do
cutpath=cut -d/ -f2 $a
done
When I do this, bash uses the content of $a as a path in the filesystem and not as a string. So "cut" tries to access the file directly, but it should only cut the string path in $a. The error I get on VMware ESXi is:
-sh: ./this/is/a/path/some.log: Device or resource busy
What am I doing wrong? Can anybody help me out?
Firstly, looping through the output of find with for is discouraged, since it's not going to work for filenames containing spaces or glob metacharacters (such as *).
This achieves what you want, using the -exec switch. The name of the file {} is passed to the script as $0.
find -name 'vmware.log' -exec sh -c 'echo "$0" | cut -d/ -f2' {} \;
# or with bash
find -name 'vmware.log' -exec bash -c 'cut -d/ -f2 <<<"$0"' {} \;
It looks like you want to use cut on the filename, rather than the file's contents, so you need to pass the name to cut on standard input, not as an argument. This can be done using a pipe | or using Bash's <<< herestring syntax.
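If you want the result in a variable inside a loop instead, a common alternative (a sketch, assuming bash is available) is to read find's output in a while loop:
while IFS= read -r -d '' a; do
    cutpath=$(cut -d/ -f2 <<<"$a")   # e.g. "this" for ./this/is/a/path/some.log
    echo "$cutpath"
done < <(find . -name 'vmware.log' -print0)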
You should try to use something like this:
#!/bin/bash
VAR1="$1"
VAR2="$2"
MOREF=$(sudo run command against "$VAR1" | grep name | cut -c7-)
echo "$MOREF"
Command substitution ($(...) or backticks) executes the command and stores its output in the variable.

execute cut command inside bash script

Every line printed with the echo includes the forward slashes for the directories that the given files are in. I am trying to cut off the directory part (the forward slashes) using the cut command, but it is not working. The files are gzipped, so they have the .gz extension.
#!/bin/bash
for filename in /data/logs/2017/month_01/201701*
do
echo $filename
cut $filename -d '/' -f1
done
Thanks in advance.
The order of the commands is wrong. You need to stream the string input to the cut command via a pipe (|) or a here-string (<<<).
echo "$filename" | cut -d '/' -f1
(or)
cut -d '/' -f1 <<<"$filename"
(or) using here-docs
cut -d '/' -f1 <<EOF
$filename
EOF
And don't forget to double-quote variables to avoid word splitting by the shell.
Assuming filename is /a/b/c.gz, you just want c.gz?
Well, there are two very easy answers:
basename $filename
The other is:
echo ${filename##*/}
The latter makes use of bash's built-in parameter expansion, which here deletes the longest prefix matching */.
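For example, with the sample path from above:
filename=/a/b/c.gz
basename "$filename"     # c.gz
echo "${filename##*/}"   # c.gz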
Another way of solving your problem is to change directory first, i.e.
#!/bin/bash
pushd /data/logs/2017/month_01
for filename in 201701*
do
    echo $filename
done
popd
Reference:
http://wiki.bash-hackers.org/syntax/pe
(EDIT: Fixed typo identified by #123)
As suggested by @Inian, the cut command should be used with the echo command via a pipe.
To get only the file name with your script, you need the following.
cut with the -f1 option takes the first field before the first /, which here is blank, so you need the last field of the path instead.
#!/bin/bash
for filename in /data/logs/2017/month_01/201701*
do
    echo "$filename" | rev | cut -d '/' -f1 | rev
done
rev reverses the string, so cut -d '/' -f1 picks what was the last path component; the second rev restores the original order, leaving just the file name.
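For example, with an illustrative path:
echo "/data/logs/2017/month_01/20170101.gz" | rev | cut -d '/' -f1 | rev
# prints: 20170101.gz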

Rename directories from abc.folder.xyz to folder.xyz

Say I have a directory with a bunch of site names in it.
e.g.
dev.domain.com
dev.domain2.com
dev.domain3.com
How can I rename those to <domain>.com on the Linux command line using bash with piping and/or redirection?
I get to a point and then am stuck.
find . -maxdepth 1 | grep -v "all" | cut --delimiter="." -f3 | awk '{ print $0 }'
That gives me the domain part, but I can't get past that. I'm not sure awk is the answer either. Any advice is appreciated.
To strip the leading 'dev.' from the names, it can be done like this:
for i in $(find * -maxdepth 1 -type d); do mv "$i" "$(echo "$i" | sed 's/^dev\.\(.*\)/\1/')"; done
for i in *; do mv "$i" "$(echo "$i" | sed 's/\([^.]*\)\.\([^.]*\)\.\([^.]*\)/\2.\3/')"; done
Explained:
for i in *; do ....; done
do it for every file
echo $i | sed 's/\([^.]*\)\.\([^.]*\)\.\([^.]*\)/\2.\3/'
captures three groups of "every character except ." and keeps only the last two
\2.\3 means: print the second group, a dot, then the third group
the $( ... ) takes the output of sed and "pastes" it after mv $i; this is called "command substitution": http://www.gnu.org/software/bash/manual/bashref.html#Command-Substitution
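For example, on one of the names from the question:
echo dev.domain.com | sed 's/\([^.]*\)\.\([^.]*\)\.\([^.]*\)/\2.\3/'
# prints: domain.com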
Try the rename command. It can take a regular expression like this:
rename 's/^dev\.//' *.com
Under the directory you want to work with, try:
ls -dF *|grep "/$"|awk 'BEGIN{FS=OFS="."} {print "mv "$0" "$2,$3}'
This will print the mv commands. If you want to actually do the rename, add "|sh" at the end:
ls -dF *|grep "/$"|awk 'BEGIN{FS=OFS="."} {print "mv "$0" "$2,$3}'|sh
