print multiple words in specific pattern [closed] - linux

I have a file word.txt
$ cat word.txt
cat
dog
rat
bird
I have a URL like this
https://example.com/?word=
I want to generate a URL list like this
https://example.com/?word=cat
https://example.com/?word=cat,dog
https://example.com/?word=cat,dog,rat
https://example.com/?word=cat,dog,rat,bird
I have 177 words, so how can I automate this process with Bash or any other straightforward programming language?

Read the input line by line, add the line to the URL. Don't include the comma for the first line.
#!/bin/bash
url='https://example.com/?word='
while read -r line; do
    url+=$comma$line
    comma=,
    echo "$url"
done < word.txt

This task can be accomplished with a single GNU sed command:
sed -n 's|^|https://example.com/?word=|; :a; p; N; s/\n/,/; ba' word.txt
That should be more efficient than the plain Bash loop.
Explanation:
-n   With this option, sed only produces output when explicitly told to via the p command.
s|^|https://example.com/?word=|   Replaces the beginning of the line with the string https://example.com/?word=. This command effectively prepends that string to the pattern space.
:a   Label a for branch command (b). Used when looping through the lines.
p   Prints the pattern space.
N   Adds a newline to the pattern space, then appends the next line of input to the pattern space.
s/\n/,/   Replaces the newline with the comma (,).
ba   Jumps to the label a. This effectively creates a loop for all input lines except the first line.
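As a quick sketch, run against the word.txt above, each p emits one cumulative URL:
$ sed -n 's|^|https://example.com/?word=|; :a; p; N; s/\n/,/; ba' word.txt
https://example.com/?word=cat
https://example.com/?word=cat,dog
https://example.com/?word=cat,dog,rat
https://example.com/?word=cat,dog,rat,bird
When N finds no more input, sed exits, so the final URL is printed only once (by the p that precedes it).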

Another variant:
url='https://example.com/?word='
while read -r word; do
    words+=("$word")
    list="${words[*]}"
    printf '%s%s\n' "$url" "${list// /,}"
done < word.txt

You can use something like this if you want to do it in Python:
words = ["a", "b", "c"]
joinedWords = ",".join(words)      # "a,b,c"
url = "https://example.com/?word="
print(url, joinedWords)            # prints the two parts separated by a space
reqUrl = url + joinedWords
print(reqUrl)                      # https://example.com/?word=a,b,c

Related

Loop through every 'even' line in a file [closed]

I have a fasta file with the following structure. For context, a fasta file is simply a text file where a header line starts with '>' and the sequence text follows on the next line. I want to create a loop that iterates through every even line of this fasta file.
The name of the file is chicken_topmotifs.fasta
>gene8
ATGAATTATTATACACCTCAAATACTCTCCTCAATCTCTCCAACATTCCCCACCACAATTCTCGGTGACTTTACTACACTTCTACAATCATACACTTCT
>gene12
ATGGTAGATCTCTATTACGATTATCTTTCTTAGATCACATAATTATCACCCCCCCTTATAAATCTACACTTCTACAACCAATTACACTTCTACAAAACA
>gene18
ATGCTTTTACACTTCTACAACTACTTTTAACTCGATACTTCTACAATCTACACATATCACAATAACAAAAACAAAAAGCTACTAATATATATATATACA
>gene21
ATGTCTCAATTTCACCAATCTATAATTTACTACGCCGTACTCTTTATAACCTTACTTTCTTAAATAACATTACACTTCTACATTACATATTTTACATCA
for sequence in chicken_topmotifs.fasta;
do
    echo $sequence
done
Just do two reads each time through the loop. The first read gets the odd line, the second one gets the even line after it.
while read -r gene; do
    read -r sequence
    # do stuff with $sequence
done < chicken_topmotifs.fasta
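For example, the placeholder could be filled in to print each header next to its sequence (a minimal sketch; ${gene#>} just strips the leading > from the header):
while read -r gene; do
    read -r sequence
    echo "${gene#>}: $sequence"
done < chicken_topmotifs.fasta
# gene8: ATGAATTATTATACACCTCAAATACTC...
# gene12: ATGGTAGATCTCTATTACGATTATCTT...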
Assumptions:
ignore header (>) lines
ignore blank lines
One bash idea:
while read -r sequence
do
    echo "$sequence"
done < <(grep '^[ATGC]' chicken_topmotifs.fasta)
If we don't have to worry about blank lines:
while read -r sequence
do
    echo "$sequence"
done < <(grep -v '^>' chicken_topmotifs.fasta)
Both of these generate:
ATGAATTATTATACACCTCAAATACTCTCCTCAATCTCTCCAACATTCCCCACCACAATTCTCGGTGACTTTACTACACTTCTACAATCATACACTTCT
ATGGTAGATCTCTATTACGATTATCTTTCTTAGATCACATAATTATCACCCCCCCTTATAAATCTACACTTCTACAACCAATTACACTTCTACAAAACA
ATGCTTTTACACTTCTACAACTACTTTTAACTCGATACTTCTACAATCTACACATATCACAATAACAAAAACAAAAAGCTACTAATATATATATATACA
ATGTCTCAATTTCACCAATCTATAATTTACTACGCCGTACTCTTTATAACCTTACTTTCTTAAATAACATTACACTTCTACATTACATATTTTACATCA

How can I search for hexadecimal content in a file in a linux/unix/bash script? [closed]

I have a hexadecimal string s and a file f. I need to search for the first occurrence of that string in the file and save it in a variable together with its offset. I thought the right way to do that is to convert the file to hex and search it with grep. The main problem is that I have seen a lot of commands (hexdump, xxd, etc.) for the conversion, but none of them actually worked for me. Any suggestions?
My attempt was like this:
xxd -plain $f > $f
grep "$s" .
output should be like:
> offset:filename
A first approach without any error handling could look like this:
#!/bin/bash
BINFILE=$1
SEARCHSTRING=$2
HEXSTRING=$(xxd -p ${BINFILE} | tr -d "\n")
echo "${HEXSTRING}"
echo "Searching ${SEARCHSTRING}"
OFFSET=$(grep -aob ${SEARCHSTRING} <<< ${HEXSTRING} | cut -d ":" -f 1)
echo ${OFFSET}:${BINFILE}
I've used xxd here because of Does hexdump respect the endianness of its system?. Please also note that, according to How to find a position of a character using grep?, grep will return all matches, not only the first one. The offset grep reports is a 0-based character position within the hex dump, not a byte position in the original file; see the conversion sketch below.
E.g. searching for the "string" ELF (hex 454c46) in a system binary looks like:
./searchHEX.sh /bin/yes 454c46
7f454c460201010000000000000000000...01000000000000000000000000000000
Searching 454c46
2:/bin/yes
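If you want the position as a byte offset in the original binary rather than a character offset in the hex dump, each byte corresponds to two hex characters, so one sketch (reusing the OFFSET and BINFILE variables from the script above) is:
BYTE_OFFSET=$((OFFSET / 2))          # 2/2 = 1: the ELF magic starts at byte 1, right after the leading 0x7f
echo "${BYTE_OFFSET}:${BINFILE}"     # 1:/bin/yes for the example above
This only makes sense when the match starts at an even character offset, i.e. on a byte boundary.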
I would use regex for this as well:
The text file:
$ cat tst.txt
1234567890x1fgg0x1cfffrr
A script you can easily change/extend yourself.
#! /bin/bash
part="$(perl -0pe 's/^((?:(?!0(x|X)[0-9a-fA-F]+).)*)(0(x|X)[0-9a-fA-F]+)(.|\n)*/\1:\3\n/g;' tst.txt)"
tmp=${part/:0x*/}
tmp=${#tmp}
echo ${part/*:0x/$tmp:0x} # Echoes 9:0x1f (offset 9, then the hex number that was found)
Regex:
^((?:(?!0x[0-9a-fA-F]+).)*) = Search for the first entry that's a hexadecimal number and create a group of it (\1).
(0x[0-9a-fA-F]+) = Make a group of the hexadecimal number (\3).
(.|\n)* = Whatever follows.
Please note that tmp=${part/:0x*/} could cause problems if the text contains something like :0x before the hexadecimal number that is captured.

SED Command Replacement [closed]

Suppose I have a file of warnings, one per line. Each warning contains an ID made of exactly 3 capital letters followed by 3 digits, and the whole line should be replaced by that ID.
Example:
SIM_WARNING[ANA397]: Node q<159> for vector output signal does not exist
The output should be ANA397, and the rest of the line is deleted.
How to do so using sed?
I don't think you need sed for that. A simple grep with --only-matching could do, as in:
grep -E 'SIM_WARNING\[[A-Z]{3}[0-9]{3}\]' --only-matching
Where:
-E enables "extended" regular expressions, which provide the {3} repetition syntax used here
then follows the pattern, which consists of the fixed SIM_WARNING, followed by three capital letters and three digits inside the square brackets
--only-matching simply makes grep print only the matching content, not the whole line
In other words: grep prints the entire match (SIM_WARNING[ANA397]), so if you want just the bare ID you still need to strip the wrapper; see the sketch below.
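If your grep has PCRE support (GNU grep's -P option), a sketch that prints only the bare ID, assuming the warnings are in a file called warnings.txt (the name is just a placeholder):
grep -oP 'SIM_WARNING\[\K[A-Z]{3}[0-9]{3}(?=\])' warnings.txt
# ANA397
Here \K discards everything matched so far from the output, and the (?=\]) lookahead requires the closing bracket without printing it.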
for id in $(grep -o "^SIM_WARNING\[[A-Z][A-Z][A-Z][0-9][0-9][0-9]\]" test1.bla | grep -o "[A-Z][A-Z][A-Z][0-9][0-9][0-9]"); do echo $id; done
This finds ANA397 in the line below.
SIM_WARNING[ANA397]: Node q<159> for vector output signal does not exist
First of all, you have to choose how you want to use the IDs, for example whether you need to loop over the file first or over the extracted IDs later...
E.g. (loop over the file first):
exec 3<file
while read -r line <&3; do
    id="$(printf "%s" "${line}" | sed -e "s/.*\[\([[:alnum:]]\+\)\].*/\1/")"
    ### Do something with id
done
exec 3>&-
Otherwise you can decide to loop over the output of sed...
E.g.:
for id in $(sed -e "s/.*\[\([[:alnum:]]\+\)\].*/\1/" file); do
    ### Do something with id
done
Both of the examples should work with a POSIX shell (if I am not missing something...), but a shell like posh may not support character classes such as [[:alnum:]]; you can substitute the equivalent [a-zA-Z0-9].
Note that the check is not on 3 letters and 3 digits, but for any letter and digit between brackets ([ and ]).
EDIT:
If your lines start with SIM_WARNING you can select only those lines with -e "/^SIM_WARNING/! d"
For a strict check on 3 letters and 3 digits you can use -e "s/.*\[\([a-zA-Z][a-zA-Z][a-zA-Z][0-9][0-9][0-9]\)\].*/\1/"
So taking the example above you can do something like:
for id in $(sed -e "/^SIM_WARNING/! d" -e "s/.*\[\([a-zA-Z][a-zA-Z][a-zA-Z][0-9][0-9][0-9]\)\].*/\1/" file); do
    ### Do something with id
done
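As a quick check, running the strict version against the example line from the question (assuming it sits in a file named file, as in the snippets above):
$ cat file
SIM_WARNING[ANA397]: Node q<159> for vector output signal does not exist
$ sed -e "/^SIM_WARNING/! d" -e "s/.*\[\([a-zA-Z][a-zA-Z][a-zA-Z][0-9][0-9][0-9]\)\].*/\1/" file
ANA397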

Check if an array element is in a file [closed]

I am writing a bash script to check if an array element is in a file.
For example:
I have an array of errors errors=("1234" "5678" "9999")
I have a file that contains patterns of strings
123400 452612 9999A0 1010EB
I am looking to loop over the array and check whether any of its elements matches any string pattern in the file. If one does, then give me back the exact array element that matched in the file for further processing.
Any ideas on how I can do this?
Here's a way where you only need to invoke grep once:
$ grep -oFf <(printf "%s\n" "${errors[@]}") file
1234
9999
The -f option is to specify a file that contains the patterns. I use a process substitution to "contain" the patterns, one per line.
The -F option specifies plain-text matching: I assume your "errors" array won't contain regular expressions.
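For reference, the process substitution simply feeds grep the array elements one per line:
$ printf "%s\n" "${errors[@]}"
1234
5678
9999
so grep -oFf treats each of those lines as one fixed-string pattern.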
Sounds like you just want a loop:
for error in "${errors[@]}"; do
    if grep -qE "(^| )$error( |\$)" file; then
        :   # $error was found in the file; handle it here
    fi
done
This matches the error preceded by the start of the line or a space, and followed by a space or the end of the line.
I made an effort to not match appearances of the errors within substrings but if you don't care, then you could change the grep command to this:
grep -qF "$error" file
This will return success if the error string occurs anywhere on the line.
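For example, with the relaxed variant the loop could report every error that occurs anywhere in the sample file from the question (a minimal sketch):
for error in "${errors[@]}"; do
    if grep -qF "$error" file; then
        echo "matched: $error"
    fi
done
# matched: 1234
# matched: 9999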
The script goes like this:
#!/bin/bash
errors=("1234" "5678" "9999")
for error in "${errors[@]}"
do
    grep -o "$error" file
done
For a sample file,
$ cat file
123400 452612 9999A0 1010EB
The script produces an output
$ ./script.sh
1234
9999
meaning the above two keys from the array have matched in the file. The -o flag makes grep print only the matching parts of the matching lines. An excerpt from the man grep page:
-o, --only-matching
Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line.

Change the path address in a text file by shell scripting [closed]

In my Bash script, I have to change a placeholder name in a text file to a path (the new address):
(MYADDRESS) should change to ( /home/run1/c1 ), and the result should be saved as a new file.
I did it like this: I defined a new variable holding the new address and tried to substitute it for the placeholder in the text file.
I used sed, but it has a problem.
My script was:
#!/bin/bash
# To debug
set -x
x=`pwd`
echo $x
sed "s/MYADDRESS/$x/g" < sample1.txt > new.txt
exit
The output of pwd is likely to contain / characters, making your sed expression look something like s/MYADDRESS//home/user/somewhere/. This makes it impossible for sed to sort out what should be replaced with what. There are two solutions:
Use a different delimiter for sed:
sed "s,MYADDRESS,$x,g" < sample1.txt > new.txt
...although this will have the same problem if the current path contains a comma character or something else that is a special character for sed, so the more robust approach is to use awk instead:
awk -v curdir="$(pwd)" '{ gsub("MYADDRESS", curdir); print }' < sample1.txt > new.txt
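As a quick sanity check, with a hypothetical sample1.txt containing one reference to MYADDRESS and the script run from /home/run1/c1, the awk version produces:
$ cat sample1.txt
input_dir = MYADDRESS/data
$ awk -v curdir="$(pwd)" '{ gsub("MYADDRESS", curdir); print }' < sample1.txt > new.txt
$ cat new.txt
input_dir = /home/run1/c1/data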
