Link separated words [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 6 years ago.
I would know if there is a program or a script that can link separated words in a file name with a dot (or other punctuation marks).
Example:
How are you.pdf >>> How.are.you.pdf
I didn't explain myself well: I'm looking for a bash script that can actually rename the files, not just transform a string.

Assuming you have How are you.pdf in a variable you can use parameter expansions:
% a="How are you.pdf"
% echo "${a// /.}"
How.are.you.pdf
The above is a bash expansion and doesn't work in a plain POSIX shell. In that case sed or similar would be needed.
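A POSIX-portable sketch of the same substitution, piping the value through sed instead of using the bash-only expansion:

```shell
a="How are you.pdf"
# sed replaces every space with a dot; printf avoids echo's portability quirks
printf '%s\n' "$a" | sed 's/ /./g'
# How.are.you.pdf
```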
To rename all files in the current directory:
for a in *; do
[ -f "$a" ] || continue
mv -- "$a" "${a// /.}"
done

sed is your help here.
For example:
echo "How are you.pdf" | sed 's/ /./g'
The sed expression is divided into four parts: command/pattern/replacement/flags.
In our example:
s - substitute
[space] - the pattern: a single space character
. - the replacement: a dot
g - global: replace every occurrence, not only the first
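For a one-character-to-one-character swap like this, tr is an equally simple alternative to sed:

```shell
# tr translates each space into a dot
echo "How are you.pdf" | tr ' ' '.'
# How.are.you.pdf
```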

There is a tool called perl-rename, sometimes called rename - not to be confused with rename from util-linux.
This tool takes a Perl expression and renames accordingly:
perl-rename 's/ /./g' *
The above will rename all files and directories in the current directory, replacing spaces with periods:
How are you.pdf -> How.are.you.pdf
This is another.file -> This.is.another.file


How to print output twice in Linux? [closed]

Which command is used to print a file name twice in the output?
I want to write a pipe that lists all files beginning with the character ‘P’ on the screen, each name printed twice in succession.
Something like:
ls -1 | while read -r i ; do echo "$i" "$i" ; done
… should do the trick.
ls | sed -E 's/^(P.*)/\1 \1/'
ls, when its output goes to a pipe, prints one file name per line.
We use sed with extended RE support, -E.
We capture any name beginning with P: ^(P.*)
and replace it with itself, a space, then itself again; \1 is a back-reference to what was captured in the parentheses ( ... ).
I suggest to use the find utility:
find . -maxdepth 1 -type f -name 'P*' -print -print
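To see the back-reference substitution in isolation, feed it a few sample names (no ls needed; the names are illustrative):

```shell
printf '%s\n' Paper.txt Photo.jpg notes.md | sed -E 's/^(P.*)/\1 \1/'
# Paper.txt Paper.txt
# Photo.jpg Photo.jpg
# notes.md
```

Names that don't start with P pass through unchanged, since the pattern simply doesn't match.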

Get a subset of several alphabetically ordered files in Bash [closed]

Let's assume I have a group of files "aab.md", "aac.md", "aad.md" ... "csdw.md". The contents and filenames are actually in (non-Latin) UTF-8, but they can be sorted alphabetically.
How can I get, in Bash, a subset of those files starting from e.g. "aad.md" and upwards?
declare -a files
while IFS= read -r -d $'\0' file; do
filename=${file##*/}
if [[ ! "$filename" < "aad.md" ]]
then
files=("${files[@]}" "$file")
fi
done < <(find . -name "*.md" -print0)
The array "${files[@]}" should now contain the paths of all files whose basename is greater than or equal to aad.md.
This uses a number of less well-known techniques in bash: arrays, prefix substitution, zero-terminated records (and their reading), and process substitution; so don't hesitate to ask if something is unclear.
Note that bash's [[ ... ]] construct has no >= string operator, so we improvise with ! ... < ....
This is almost pure bash, no external commands except find. If you accept external commands, $(basename "$file") is more obvious than ${file##*/}, but at that point you might as well use awk... and if you can use awk, why not Ruby?
ruby -e "puts Dir['**/*.md'].select{|x| File.basename(x) >= 'aad.md'}"
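If the filenames contain no newlines, the same lexical cutoff can be sketched with awk comparing the basename (the sample files and the "aad.md" cutoff below are illustrative):

```shell
dir=$(mktemp -d)
touch "$dir/aab.md" "$dir/aac.md" "$dir/aad.md" "$dir/abz.md"
# one path per line; with -F/ the last field ($NF) is the basename,
# and awk's >= compares strings lexically
printf '%s\n' "$dir"/*.md | awk -F/ '$NF >= "aad.md"'
# prints the paths of aad.md and abz.md
rm -r "$dir"
```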

Linux - How to add incremental numbers to the middle of a filename? [closed]

I've been seeing a ton of answers that show how to add to the beginning or end of a file, but I'm looking to replace numbers in the middle of a file. I have, for example:
ShowName - S00E01 - Episode Name.mkv
ShowName - S00E02 - Episode Name.mkv
ShowName - S00E03 - Episode Name.mkv
I want to change the E01-E03 part to be something like E20 to E22, or similar, without modifying the rest of the filename.
Couldn't figure out how to do this using linux's "rename" call, any other suggestions?
The Linux utility rename is just a simple tool. A more advanced tool that takes a regular expression is perl-rename, which is usually installed separately. However, it still won't solve your problem on its own.
For anything more complicated, I usually resort to writing a small bash for loop.
E.g. this script should work for your problem:
# for every file ending with .mkv
for f in *.mkv; do
# transform the filename using sed so that the '|' character separates the episode number from the rest of the filename (so it can be extracted)
# e.g.
# 'ShowName - S00E01 - Episode Name.mkv' will be
# 'ShowName - S00E|01| - Episode Name.mkv'
# Then read such string to three variables:
# prefix enum and suffix splitting on '|' character
IFS='|' read -r prefix enum suffix < <(sed 's/\(.*S[0-9][0-9]E\)\([0-9][0-9]\)\(.*\)/\1|\2|\3/' <<<"$f");
# newfilename consist of prefix, calculated episode number and the rest of the filename
# I assumed you want to add 19 to the episode number
# it may be also a good idea to move files to another directory, to avoid unintentional overwriting of existing files
# you may also consider using -n/--no-clobber or --backup options to mv
# 10# forces decimal, so numbers like 08 and 09 aren't misparsed as octal
newf="another_directory/${prefix}$(printf "%02d" "$((10#$enum + 19))")${suffix}"
# move "$f" to "$newf"
# the filenames contain special characters (spaces), so remember the quotes
echo "'$f' -> '$newf'"
mv -v "$f" "$newf"
done
Use some other tools like grep to help you:
for f in *.mkv; do
NUM=$(echo "$f" | grep -Po '(?<=E)[0-9]{2}')
NEWNUM=$((10#$NUM + 20))   # 10# forces decimal, so 08/09 aren't misparsed as octal
fn=${f/E${NUM}/E${NEWNUM}}
mv "$f" "$fn"
done
The rest should be easily done with shell's builtin string replacement functionality.
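A pure-bash variant of the same extraction, using [[ =~ ]] and BASH_REMATCH instead of grep (a sketch; the +19 offset mirrors the E01 -> E20 example above, and 10# guards against 08/09 being read as octal):

```shell
f='ShowName - S00E01 - Episode Name.mkv'
# capture prefix, two-digit episode number, and suffix in a single match
if [[ $f =~ (.*S[0-9][0-9]E)([0-9][0-9])(.*) ]]; then
  new=$(printf '%s%02d%s' "${BASH_REMATCH[1]}" \
        "$((10#${BASH_REMATCH[2]} + 19))" "${BASH_REMATCH[3]}")
  echo "$new"   # ShowName - S00E20 - Episode Name.mkv
fi
```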

How to loop over a list containing two different pattern files in linux (bash)? [closed]

I got a list containing filenames that match the following two patterns:
one is like XXX_01.fastq
another is XXX_01_001.fastq
I am going to write a for loop (in bash) to loop over all the filenames, and I need to determine which ones match the patterns above. Any help would be appreciated.
Contents of list.txt:
$ cat list.txt
AAA_01.fastq
AA_01_001.fastq
BBB_01_002.fastq
BBB_02.fastq
Example using bash pattern matching:
while IFS= read -r file; do
  if [[ $file =~ ^[A-Z]{3}_[0-9]{2}\.fastq$ || $file =~ ^[A-Z]{3}_[0-9]{2}_[0-9]{3}\.fastq$ ]]; then
    echo "MATCH: $file"
  fi
done < list.txt
Output:
MATCH: AAA_01.fastq
MATCH: BBB_01_002.fastq
MATCH: BBB_02.fastq
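The same check can be written POSIX-portably with case and plain shell globs, no regex needed (a sketch reading the list from a here-document instead of list.txt):

```shell
while IFS= read -r file; do
  case $file in
    # three letters, underscore, two digits, then optionally _ and three digits
    [A-Z][A-Z][A-Z]_[0-9][0-9].fastq|[A-Z][A-Z][A-Z]_[0-9][0-9]_[0-9][0-9][0-9].fastq)
      echo "MATCH: $file" ;;
  esac
done <<'EOF'
AAA_01.fastq
AA_01_001.fastq
BBB_01_002.fastq
BBB_02.fastq
EOF
```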

how to delete a line that contains a word in all text files of a folder? [closed]

So, in linux, I have a folder with lots of big text files.
I want to delete all the lines of these files that contain a specific keyword.
Is there any easy way to do that across all files?
There are already many similar answers. I'd like to add that if you want to match this is a line containing a keyword but not this is a line containing someoneelseskeyword, then you should add word-boundary markers around the word:
sed -i '/\<keyword\>/d' *.txt
I cannot test this right now, but it should get you started
find /path/to/folder -type f -exec sed -i '/foo/d' {} ';'
find files in the directory /path/to/folder
find lines in these files containing foo
delete those lines from those files
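To avoid rewriting files that don't contain the keyword at all, grep -l can pre-filter the list before sed touches anything (a self-contained sketch in a temp dir; GNU sed's in-place -i is assumed):

```shell
dir=$(mktemp -d)
printf 'keep\ndrop keyword\nkeep too\n' > "$dir/a.txt"
printf 'nothing here\n' > "$dir/b.txt"
# grep -rl prints only the files that actually contain a match
grep -rl 'keyword' "$dir" | while IFS= read -r f; do
  sed -i '/keyword/d' "$f"   # delete the matching lines in place
done
cat "$dir/a.txt"   # only the lines without 'keyword' remain
rm -r "$dir"
```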
Sure:
for x in files*
do
grep -v your_pattern "$x" > "$x.tmp" && mv "$x.tmp" "$x"
done
try this (with -print0 and xargs -0 so filenames containing spaces survive the pipe):
find your_path -type f -print0 | xargs -0 sed -i '/key_word/d'
sed -i '/keyword/d' *.txt -- run this in your directory.
sed - stream editor, used here to delete lines in individual files
-i option: makes the changes permanent in the input files
'/keyword/': the pattern or key to be searched for in the files
option d: tells sed that matching lines need to be deleted
*.txt: tells sed to use all the text files in the directory as input for processing; you can name an individual file or use a glob like *.txt the way I did.
