I have a file named "01 - Welcome To The Jungle.mp3", and I want to run eyeD3 -t "Welcome To The Jungle" "01 - Welcome To The Jungle.mp3" to modify the tag, and to do the same for all the files in the folder. I've extracted the title "Welcome To The Jungle" from the filename with awk, doing:
#!/bin/bash
for i in *.mp3
do
eyeD3 -t $(echo ${i} | awk -F' - ' '{print $2}' | awk -F'.' '{print $1}') ${i}
done
It doesn't work. Neither the whole "$(echo ${i}...)" nor the "${i}" seems to work for supplying the names of the respective files.
You need to prevent word splitting on IFS (default: space, tab, newline) by the shell, as your input filename contains space(s). The typical workaround is to put double quotes around the variable expansion.
Do:
for i in *.mp3; do eyeD3 -t "$(echo "$i" | awk -F' - ' '{print $2}' | awk -F'.' '{print $1}')" "$i"; done
You can leverage a here string, <<<, to avoid the echo-ing:
for i in *.mp3; do eyeD3 -t "$(awk -F' - ' '{print $2}' <<<"$i" | awk -F'.' '{print $1}')" "$i"; done
You need to double-quote variables that may contain spaces, as @heemayl already pointed out.
Also, in this example, instead of using awk,
it would be better to use native Bash pattern substitution to extract the title, for example:
for file in *.mp3; do
title=${file%.mp3}
title=${title#?????}
eyeD3 -t "$title" "$file"
done
That is:
Remove .mp3 at the end
Remove the first 5 characters (the track-number prefix "NN - ")
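If the track-number prefix is not always exactly two digits, a pattern match on the separator is more robust than a fixed character count. A minimal sketch, assuming every filename has the form "NN - Title.mp3" with a literal " - " separator:
for file in *.mp3; do
    title=${file%.mp3}     # strip the .mp3 extension
    title=${title#* - }    # strip everything up to and including the first " - "
    eyeD3 -t "$title" "$file"
done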
I'm passing two positional args to a script; both args are paths. While analyzing the paths in this scenario, the problem is that sometimes a path looks like: m i sc . . . . .. . . (it has dots and spaces), and sometimes we even have a backslash in dir names.
So I tried to get the arguments via two procedures: directly, and via the at sign ($@).
SOURCE_ARG=$1
DESTINATION_ARG=$2
and
ARG_COUNT=0
for POSITIONAL_ARGUMENTS in "${@}"
do
((ARG_COUNT++))
ARGUMENT_ARRAY[$ARG_COUNT]=$POSITIONAL_ARGUMENTS
done
In the loop, I iterate through the output of the commands fed to it:
while IFS= read -r dir
do
echo "${ARGUMENT_ARRAY[1]}"
echo "${dir}"
while IFS= read -r item
do
# do some stuff
done < <(ls -A "$dir"/)
done < <(du -hP "$SOURCE_ARG" | awk '{$1=""; print $0}' | grep -v "^.$" | sed "s/^ //g")
When I use echo "${ARGUMENT_ARRAY[1]}" I get the path exactly as I need to check it, but when I use the loop iteration variable dir, as in echo "${dir}", I get all the spaces escaped, so the other commands cannot do their jobs on that path.
What I'm asking is: how can I get the value of $dir within the loop to come out like echo "${ARGUMENT_ARRAY[1]}" above, i.e. the input with all its spaces and backslashes?
Thanks to @Barmar in the comments.
The only reason the filenames come out escaped (i.e. you see directories with no special characters, or with the special characters escaped) is that du prints the filenames with escapes, so the $dir variable is already escaped and the special characters are no longer available to the rest of the loop in my problem.
Now that we know the problem was caused by using du in my script:
while IFS= read -r dir
do
    # do sth
done < <(du -hP "$SOURCE_ARG" | awk '{$1=""; print $0}' | grep -v "^.$" | sed "s/^ //g")
We can change the du to find and the problem is solved:
while IFS= read -r dir
do
    # do sth
done < <(find "$SOURCE_ARG" -type d)
PS 1:
Another problem, which arose when I wanted to print the lines to check whether they were OK (i.e. while debugging the application), was with echo.
So be sure to try printf "%s\n" "$dir" instead of echo, as some versions of echo process escape sequences.
echo "${dir}"
printf "%s\n" "$dir"
PS 2:
Also, if a filename has more than one space in a row, the way I used awk collapses them into a single space:
awk '{$1=""; print $0}' | grep -v "^.$" | sed "s/^ //g"
I am currently using the following command:
grep -l -Z -E '.*?FindMyRegex' /home/user/folder/*.csv | xargs -0 -I{} mv {} /home/destination/folder
This works fine. The problem is it uses grep on the entire file.
I would like to use the grep command on the FIRST line of the file only.
I have tried to use head -1 file | at the beginning, but it did not work.
A change I would make to your script: when grep reads from a pipe there is no filename for -l to print (it would report "(standard input)"), so test the first line with grep -q and move the file on success -
for file in *.csv; do
    head -1 "$file" | grep -q -E '.*?FindMyRegex' && mv "$file" /home/destination/folder
done
You can maybe try sed '1q' file.csv | grep ... to search the regexp in the first line only.
You don't need grep or find, as long as your files don't have embedded newlines.
I don't know an easy way off the top of my head to get sed to delimit with nulls.
mv $( for f in /home/user/folder/*.csv;
do sed -ns '1 { /yourPattern/F; q; }' "$f";
done ) /home/destination/folder/
EDIT
Rewrote with a loop. This will run a separate instance of sed to check each file, but at least it shouldn't read beyond the first line. It will fail syntactically if there are no hits.
You might need -E depending on your regex.
-n says don't print records from the files.
-s says treat each file as a distinct input - this is so the filenames aren't always the first one.
This does require GNU sed for the F.
gawk 'FNR==1 {if ($0 ~ /PATTERN/) printf "mv %s %s\n", FILENAME, "/target"; nextfile}' /path/*.csv
First of all, in your regex .*?FindMyRegex, the .*? doesn't make any sense and can be removed.
The above awk (gawk) one-liner will build up mv file target command lines for you. You can check them, and if you are satisfied with them, pipe the output to sh and the commands will be executed.
Replace PATTERN with your regex pattern, and /target with the real target dir.
The one-liner assumes that the filenames don't contain special characters (e.g. spaces); if they do, add quotes around the filenames in the mv command.
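Putting that together with the paths and pattern from the question, the full pipeline might look like this (a sketch; the caveat above about special characters in filenames still applies):
gawk 'FNR==1 {if ($0 ~ /FindMyRegex/) printf "mv %s %s\n", FILENAME, "/home/destination/folder"; nextfile}' /home/user/folder/*.csv | sh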
Using GNU awk to find the filenames and piping them into xargs:
gawk -v pattern="myRegex" '
FNR == 1 {if ($0 ~ pattern) printf "%s\0", FILENAME; nextfile}
' *.csv | xargs -0 echo mv -t destination
If it looks OK, remove "echo"
Try this Shellcheck-clean Bash code:
#! /bin/bash
shopt -s nullglob # Globs that match nothing expand to nothing
shopt -s dotglob # Globs match files whose names start with '.'
dest=/home/destination/folder
for file in *.csv ; do
head -n 1 -- "$file" | grep -qE '.*?FindMyRegex' && mv -- "$file" "$dest"
done
shopt -s nullglob prevents an error if there are no .csv files in the directory.
shopt -s dotglob ensures that files whose name starts with '.' are handled.
The -- in the options for head and mv ensures that files whose names begin with - are handled correctly.
The quotes in "$file" and "$dest" ensure that names that contain whitespace (actually $IFS) characters (including newlines) or glob metacharacters are handled correctly.
Note that the .*? in the regular expression is probably redundant, and may not do what you think it does (grep -E doesn't do non-greedy matching).
I am attempting to use sed to delete a line, read from user input, from a file whose name is stored in a variable. Right now all sed does is print the line and nothing else.
This is a code snippet of the command I am using:
FILE="/home/devosion/scripts/files/todo.db"
read DELETELINE
sed -e "$DELETELINE"'d' "$FILE"
Is there something I am missing here?
Edit: Switching out the -e option with -i fixed my woes!
You need to delimit the search.
#!/bin/bash
read -r Line
sed "/$Line/d" file
Will delete any line containing the typed input.
Bear in mind that sed matches on regex though and any special characters will be seen as such.
For example searching for 1* will actually delete lines containing any number of 1's not an actual 1 and a star.
Also bear in mind that when the variable expands, it must not contain the delimiter, or the command will break or have unexpected results.
For example if "$Line" contained "/hello" then the sed command will fail with
sed: -e expression #1, char 4: extra characters after command.
You can either escape the / in this case or use different delimiters.
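For example, using | as the delimiter (introduced with a backslash when used in an address), a value like /hello no longer breaks the command; a minimal sketch:
read -r Line
sed "\|$Line|d" file   # | is now the delimiter, so unescaped slashes in $Line are fine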
Personally I would use awk for this:
awk -vLine="$Line" '!index($0,Line)' file
Which searches for an exact string and has none of the drawbacks of the sed command.
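To actually remove the matching line from the file rather than just print the filtered result, redirect to a temporary file and move it back; a minimal sketch:
tmp=$(mktemp)
awk -v Line="$Line" '!index($0, Line)' file > "$tmp" && mv "$tmp" file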
You might have success with grep instead of sed
read -p "Enter a regex to remove lines: " filter
grep -v "$filter" "$file"
Storing in-place is a little more work:
tmp=$(mktemp)
grep -v "$filter" "$file" > "$tmp" && mv "$tmp" "$file"
or, with sponge (apt install moreutils)
grep -v "$filter" "$file" | sponge "$file"
Note: try to get out of the habit of using ALLCAPSVARS: one day you'll accidentally use PATH=... and then wonder why your script is broken.
I found this, it allows for a range deletion with variables:
#!/bin/bash
lastline=$(whatever you need to do to find the last line) # or any variation
lines="1,$lastline"
sed -i "$lines"'d' yourfile
It keeps it all in one util.
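For instance, if "the last line" means the line number of the last match of some pattern, a hypothetical way to compute it (the pattern and file name are placeholders):
lastline=$(grep -n 'pattern' yourfile | tail -n 1 | cut -d: -f1)
lines="1,$lastline"
sed -i "${lines}d" yourfile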
Please try this:
sed -i "${DELETELINE}d" "$FILE"
I have a (probably obvious/stupid) problem:
I want to loop over a list of paths, cut them and use the strings to grep in log files.
While every step works fine on its own, and processing it manually results in hits, grep does not find anything when run in the loop.
for FILE in `awk -F "/" '{print $13}' /tmp/files_not_visible.uniq`; do
echo -e "\n\n$FILE\n";
grep "$FILE" /var/log/PATH/FILENAME-2015.12.*;
done
I also tried a while loop as the reverse exercise, but it fails with the same non-result:
while read FILE; do
echo $FILE;
echo $FILE | awk -F "/" '{print $13}' | grep -f - /var/log/PATH/FILENAME-2015.12.* ;
done < /tmp/files_not_visible.uniq
So, I guess there is some systematic issue, how I handle the search string with grep?
Found it: the list of files contained an invisible character as the last character of each line! Probably the user who sent me the list of files created it on some other OS. And when testing by hand, I had of course copied only the visible characters!
Fixed the loop by cutting the last character of each line with:
sed -e 's/.$//'
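If the stray character is a Windows-style carriage return (a common culprit when a list is created on another OS), a more targeted fix, assuming that is the case, is to delete \r characters specifically:
tr -d '\r' < /tmp/files_not_visible.uniq > /tmp/files_not_visible.clean   # strip CR bytes instead of blindly dropping the last character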
I need some assistance trying to build up a variable using a list of exclusions in a file.
So I have a exclude file I am using for rsync that looks like this:
*.log
*.out
*.csv
logs
shared
tracing
jdk*
8.6_Code
rpsupport
dbarchive
inarchive
comms
PR116PICL
**/lost+found*/
dlxwhsr*
regression
tmp
working
investigation
Investigation
dcsserver_weblogic_
dcswebrdtEAR_weblogic_
I need to build up a string to be used as a variable to feed into egrep -v, so that I can use the same exclusion list for rsync as for filtering the output of a find -ls with egrep -v.
So far I have created this to remove all "*" and "/", and then escape certain special characters when it sees them:
cat exclude-list.supt | while read line
do
echo $line | sed 's/\*//g' | sed 's/\///g' | sed 's/\([.-+_]\)/\\\1/g'
done
What I need the output to look like is this, which I would then export as a variable:
SEXCLUDE_supt="\.log|\.out|\.csv|logs|shared|PR116PICL|tracing|lost\+found|jdk|8\.6\_Code|rpsupport|dbarchive|inarchive|comms|dlxwhsr|regression|tmp|working|investigation|Investigation|dcsserver\_weblogic\_|dcswebrdtEAR\_weblogic\_"
Can anyone help?
A few issues with the following:
cat exclude-list.supt | while read line
do
echo $line | sed 's/\*//g' | sed 's/\///g' | sed 's/\([.-+_]\)/\\\1/g'
Sed reads files line by line, so cat | while read line; do echo $line | sed is completely redundant. Also, sed can do multiple substitutions in a single invocation, either as a semicolon-separated list of commands or via multiple -e options, so piping to sed three times is two too many. A problem with '[.-+_]' is that the - sits between . and +, so it's interpreted as the range .-+; when using - inside a character class, put it at the beginning or the end to lose this special meaning, like [._+-].
A much better way:
$ sed -e 's/[*/]//g' -e 's/\([._+-]\)/\\\1/g' file
\.log
\.out
\.csv
logs
shared
tracing
jdk
8\.6\_Code
rpsupport
dbarchive
inarchive
comms
PR116PICL
lost\+found
dlxwhsr
regression
tmp
working
investigation
Investigation
dcsserver\_weblogic\_
dcswebrdtEAR\_weblogic\_
Now we can pipe through tr '\n' '|' to replace the newlines with pipes for the alternation ready for egrep:
$ sed -e 's/[*/]//g' -e 's/\([._+-]\)/\\\1/g' file | tr "\n" "|"
\.log|\.out|\.csv|logs|shared|tracing|jdk|8\.6\_Code|rpsupport|dbarchive|...
$ EXCLUDE=$(sed -e 's/[*/]//g' -e 's/\([._+-]\)/\\\1/g' file | tr "\n" "|")
$ echo $EXCLUDE
\.log|\.out|\.csv|logs|shared|tracing|jdk|8\.6\_Code|rpsupport|dbarchive|...
Note: if your file ends with a newline character, you will want to remove the final trailing |; try sed 's/\(.*\)|/\1/'.
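With the trailing | stripped, the variable can be used the way the question intends, filtering a find -ls listing through egrep -v; a sketch, assuming the exclude file from the question:
EXCLUDE=$(sed -e 's/[*/]//g' -e 's/\([._+-]\)/\\\1/g' exclude-list.supt | tr '\n' '|' | sed 's/|$//')
find . -ls | egrep -v "$EXCLUDE"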
This might work for you (GNU sed):
SEXCLUDE_supt=$(sed '1h;1!H;$!d;g;s/[*\/]//g;s/\([._+-]\)/\\\1/g;s/\n/|/g' file)
This should work, but I guess there are better solutions. First store everything in a variable:
SEXCLUDE_supt=$( sed -e 's/\*//g' -e 's/\///g' -e 's/\([._+-]\)/\\\1/g' exclude-list.supt)
and then process it again to substitute white space:
SEXCLUDE_supt=$(echo $SEXCLUDE_supt |sed 's/\s/|/g')