I have a shell script that grabs a list of names from a location, and each name is separated by a comma. Is there anything I can write so that the list of names stored in the text file breaks to a new line after each comma?
For example, the list of names stored in the text file looks like this:
"Red", "Blue", "Green"
And I want them to look like this:
Red
Blue
Green
The data gets pulled from HTML code off a website, so the names have quotation marks and commas around them. If it's possible to at least get each one onto a new line, that would be great. Thanks for any help.
Assuming the comma-separated data is in the variable $data, you can tell bash to split it by setting $IFS (the internal field separator) to ', ' and using a for loop. Note that $IFS is treated as a set of single characters, so this splits on each comma and each space:
TMPIFS=$IFS #Stores the original value to be reset later
IFS=', '
echo '' > new_file #Ensures the new file is empty, omit if the file already has contents
for item in $data; do
item=${item//'"'/} #Remove double quotes from entries, use item=${item//"'"/} to remove single quotes
echo "$item" >> new_file #Appends each item to the new file, automatically starts on a new line
done
IFS=$TMPIFS #Reset $IFS in case other programs rely on the default value
This will give you the output in the desired format, albeit with a leading blank line.
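To avoid that leading blank line, you can truncate the file without writing anything, e.g. with : > new_file. A sketch of the same loop under the same assumptions (the $data content here is a made-up sample):

```shell
# Same approach as above, but ": > new_file" truncates without adding a blank line.
data='"Red", "Blue", "Green"'   # made-up sample data
TMPIFS=$IFS
IFS=', '
: > new_file                    # empty the file without writing a newline
for item in $data; do
    item=${item//'"'/}          # remove double quotes
    echo "$item" >> new_file
done
IFS=$TMPIFS
cat new_file
```

The cat at the end then shows Red, Blue and Green on three lines with no leading blank.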
Just use sed.
% echo '"Red", "Blue", "Green"' | sed -e 's/\"//g' -e 's/, /\n/g'
Red
Blue
Green
awk -F, '{for(i=1;i<=NF;i++){ print $i;}}'
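As written this splits on the bare comma, so each field keeps its quotes and any leading space. Splitting on ', ' and adding a gsub inside the loop handles both (a sketch, with made-up sample input):

```shell
# Split on ", " and strip the double quotes from each field before printing.
echo '"Red", "Blue", "Green"' |
awk -F', ' '{ for(i=1;i<=NF;i++){ gsub(/"/,"",$i); print $i } }'
```

This prints Red, Blue and Green on separate lines.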
See the command line below:
kent$ echo '"Red", "Blue", "Green"'|sed 's/, /\n/g'
"Red"
"Blue"
"Green"
\n is a newline, so each ", " separator is replaced by a line break.
Newbie to unix/shell/bash. I have a file named CellSite whose 6th line is as below:
btsName = "RV74XC038",
I want to extract the string on the 6th line that is between double quotes (i.e. RV74XC038) and save it to a variable. Please note that the 6th line starts with 4 blank spaces, and the string varies from file to file. So I am looking for a solution that extracts the string between the double quotes on the 6th line.
I tried the below, but it does not work:
str2 = sed '6{ s/^btsName = \([^ ]*\) *$/\1/;q } ;d' CellSite;
Any help is much appreciated. TIA.
sed is a stream editor.
For just parsing files, you want to look into awk. Something like this:
awk -F \" '/btsName/ { print $2 }' CellSite
Where:
-F defines a "field separator", in your case the quotation marks "
the entire script consists of:
/btsName/ act only on lines that match the regex "btsName"
from that line, print the second field: the first field is everything before the first quotation mark, the second field is everything between the first and second quotation marks, and the third field is everything after the second quotation mark
parse through the file named "CellSite"
There are possibly better alternatives, but you would have to show the rest of your file.
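To see how the quote delimiter numbers the fields, you can print them with markers (a sketch using the sample line from the question; the bracket markers are just for illustration):

```shell
# Show the three fields produced by splitting the line on double quotes.
echo '    btsName = "RV74XC038",' |
awk -F\" '{ printf "1=[%s] 2=[%s] 3=[%s]\n", $1, $2, $3 }'
```

This prints 1=[    btsName = ] 2=[RV74XC038] 3=[,].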
Using sed
$ str2=$(sed -n '6s/[^"]*"\([^"]*\).*/\1/p' CellSite)
$ echo "$str2"
RV74XC038
You can use the following awk solution:
btsName=$(awk -F\" 'NR==6{print $2; exit}' CellSite)
Basically, get to the sixth line (NR==6), print the second field's value (the " character is used to split records, i.e. lines, into fields) and then exit.
See the online demo:
#!/bin/bash
CellSite='Line 1
Line 2
Line 3
btsName = "NO74NO038",
Line 5
btsName = "RV74XC038",
Line 7
btsName = "no11no000",
'
btsName=$(awk -F\" 'NR==6{print $2; exit}' <<< "$CellSite")
echo "$btsName" # => RV74XC038
This might work for you (GNU sed):
var=$(sed -En '6s/.*"(.*)".*/\1/p;6q' file)
Simplify regexes and turn off implicit printing.
Focus on the 6th line only and print the value between double quotes, then quit.
Bash interpolates the sed invocation by means of the $(...) and the value extracted defines the variable var.
I have a text file that contains numerous lines that have partially duplicated strings. I would like to remove lines where a string match occurs twice, such that I am left only with lines with a single match (or no match at all).
An example output:
g1: sample1_out|g2039.t1.faa sample1_out|g334.t1.faa sample1_out|g5678.t1.faa sample2_out|g361.t1.faa sample3_out|g1380.t1.faa sample4_out|g597.t1.faa
g2: sample1_out|g2134.t1.faa sample2_out|g1940.t1.faa sample2_out|g45.t1.faa sample4_out|g1246.t1.faa sample3_out|g2594.t1.faa
g3: sample1_out|g2198.t1.faa sample5_out|g1035.t1.faa sample3_out|g1504.t1.faa sample5_out|g441.t1.faa
g4: sample1_out|g2357.t1.faa sample2_out|g686.t1.faa sample3_out|g1251.t1.faa sample4_out|g2021.t1.faa
In this case I would like to remove lines 1, 2, and 3: sample1 is repeated multiple times on line 1, sample2 appears twice on line 2, and sample5 appears twice on line 3. Line 4 would pass because it contains only one instance of each sample.
I am okay with repeating this operation multiple times using different 'match' strings (e.g. sample1_out, sample2_out, etc. in the example above).
Here is one in GNU awk:
$ awk -F"[| ]" '{      # pipe or space is the field separator
  delete a             # delete the previous hash
  for(i=2;i<=NF;i+=2)  # iterate every other field, ie right side of space
    if($i in a)        # if it has been seen already
      next             # skip this record
    else               # well, else
      a[$i]            # hash this entry
  print                # output if you make it this far
}' file
Output:
g4: sample1_out|g2357.t1.faa sample2_out|g686.t1.faa sample3_out|g1251.t1.faa sample4_out|g2021.t1.faa
The following sed command will accomplish what you want.
sed -ne '/.* \(.*\)|.*\1.*/!p' file.txt
grep: grep -vE '(sample[0-9]).*\1' file
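Here (sample[0-9]) captures a sample name and \1 requires the same name to appear again later on the line; -v then drops such lines. A quick check with a shortened version of the sample data (f is a name picked for the demo):

```shell
# Lines where any sampleN occurs twice are removed; the g4 line survives.
cat > f <<'EOF'
g1: sample1_out|g2039.t1.faa sample1_out|g334.t1.faa sample2_out|g361.t1.faa
g4: sample1_out|g2357.t1.faa sample2_out|g686.t1.faa sample3_out|g1251.t1.faa
EOF
grep -vE '(sample[0-9]).*\1' f
```

Only the g4 line is printed.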
Inspired by Glenn's answer: use -i with sed to make the changes directly in the file.
sed -r '/(sample[0-9]).*\1/d' txt_file
I have already seen a few similar questions, but my situation here is different.
I need to replace multiple occurrences of the same string with different strings. I have used
sed -i 's/X/Y/g' file.txt
For repeated occurrences I have used line numbers, like
sed -i '3s/X/Y/g ; 4s/X/Z/g' file.txt
This is possible only if those strings are always on the same lines.
Ex : file.txt
This color is
...some more lines
This color is
...some more lines
This color is
...some more lines
This color is
...some more lines
I need to change them as
This color is blue
...some more lines
This color is red
...some more lines
This color is green
...some more lines
This color is yellow
...some more lines
How can I do this without using line numbers, since the line numbers of those strings can change anytime more info is added?
Can anyone please help? Thank you.
awk to the rescue!
This will cycle through the colors if there are more matching lines than colors:
$ awk -v colors='blue,red,green,yellow' 'BEGIN {n=split(colors,v,",")}
/color/ {$0=$0 OFS v[i++%n+1]}1' file
To embed this into a quoted string, it is easier to remove the double quotes altogether. Simply change it to:
$ awk -v colors='blue red green yellow' 'BEGIN {n=split(colors,v)}
/color/ {$0=$0 OFS v[i++%n+1]}1' file
If your colors are not single words you can't split on spaces, so go back to splitting on a comma (or any other delimiter):
$ awk -v colors='true blue,scarlet red,pistachio green,canary yellow' '
BEGIN {n=split(colors,v,",")}
/color/ {$0=$0 OFS v[i++%n+1]}1' file
Your question isn't clear but it SOUNDS like you're trying to do this:
awk 'NR==FNR{colors[NR]=$0;next} /This color is/{$0 = $0 OFS colors[++c]} 1' colors file
where colors is a file containing one color per line and file is the file you want the color values added to. If that's not what you want then edit your question to specify your requirements more clearly and come up with a better (and complete/testable) example.
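A sketch of how that might run, with a made-up colors file and input file; the first awk block stores each color with colors[NR]=$0 so it can be appended later:

```shell
# colors: one color per line; file: the text to annotate.
printf '%s\n' blue red green yellow > colors
cat > file <<'EOF'
This color is
...some more lines
This color is
...some more lines
EOF
awk 'NR==FNR{colors[NR]=$0;next} /This color is/{$0 = $0 OFS colors[++c]} 1' colors file
```

This appends blue to the first matching line and red to the second.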
I have a file with the following structure:
# #################################################################
# TEXT: MORE TEXT
# TEXT: MORE TEXT
# #################################################################
___________________________________________________________________
ITEM 1
___________________________________________________________________
PROPERTY1: VALUE1_1
PROPERTY222: VALUE2_1
PROPERTY33: VALUE3_1
PROPERTY4444: VALUE4_1
PROPERTY55: VALUE5_1
Description1: Some text goes here
Description2: Some text goes here
___________________________________________________________________
ITEM 2
___________________________________________________________________
PROPERTY1: VALUE1_2
PROPERTY222: VALUE2_2
PROPERTY33: VALUE3_2
PROPERTY4444: VALUE4_2
PROPERTY55: VALUE5_2
Description1: Some text goes here
Description2: Some text goes here
I want to add another item to the file, using sed or awk:
sed -i -r "\$a$PROPERTY1: VALUE1_3" file.txt
sed -i -r "\$a$PROPERTY2222: VALUE2_3" file.txt
etc. So my next item looks like this:
___________________________________________________________________
ITEM 3
___________________________________________________________________
PROPERTY1: VALUE1_3
PROPERTY222: VALUE2_3
PROPERTY33: VALUE3_3
PROPERTY4444: VALUE4_3
PROPERTY55: VALUE5_3
Description1: Some text goes here
Description2: Some text goes here
The column of values is jagged. How do I align my values to the left like for the previous items? I can see 2 solutions here:
To align the values while inserting them into the file.
To insert the values into the file the way I did and align them afterwards.
The command
sed -i -r "s|.*:.*|&|g" file.txt
catches the properties and values I want to align, but I haven't been able to align them properly, i.e.
awk '/^.*:.*$/{ printf "%-40s %-70s\n", $1, $2 }' file.txt
It prints out the file, but it includes the description values and tags, and cuts off values that include spaces or dashes. It's just a big mess.
I've tried more commands based on what I've found on Stack Overflow and some blogs, but nothing does what I need.
Note: the values of the description tags are not jagged; this is because I write them to the file in a separate way.
What is wrong with my commands? How do I achieve what I need?
If your file contains no tabs, try this:
sed -r 's/: +/:\t/' file.txt | expand -20
When this works, redirect the output to a tmpfile and move the tmpfile to file.txt.
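That redirect-and-move step might look like this (tmpfile is a name picked for the example; -t 20 is the long spelling of the tab-stop option):

```shell
# Rewrite file.txt in place via a temporary file.
sed -r 's/: +/:\t/' file.txt | expand -t 20 > tmpfile && mv tmpfile file.txt
```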
You can use gensub (GNU awk) and thoughtful field separators to take care of this:
for i in {1..5}; do
  echo $(( 10 ** i )): $i;
done | awk -F ':::' '/^[^:]+:.+/{
  $0 = gensub(/: +/, ":::", "g");
  key = $1 ":";
  printf "%-40s %s\n", key, $2;
}'
The relevant part is where we swap out ": +" for ":::" and then use printf to bring it back together.
You could use \t to insert tabs (rather than spaces, which are why you get 'jagged' values).
instead of
sed -i -r "\$a$PROPERTY1: VALUE1_3" file.txt
use
sed -i -r "\$a$PROPERTY1:\t\tVALUE1_3" file.txt
All you need to do is remember the existing indentation when inserting the new line, e.g.:
echo 'PROPERTY732: VALUE9_8_7' |
awk -v prop="PROPERTY1" -v val="VALUE1_3" '
match($0,/^PROPERTY[^[:space:]]+[[:space:]]+/) { wid=RLENGTH }
{ print }
END { printf "%-*s%s\n", wid, prop":", val }
'
PROPERTY732: VALUE9_8_7
PROPERTY1: VALUE1_3
but it's not clear that adding 1 line at a time makes sense or where all of the other text you're adding is coming from.
The above will work with any awk on any UNIX system.
If your "properties" don't actually start with the word PROPERTY then you just need to edit your question to show more realistic sample input/output and tell/show us how to distinguish a PROPERTY line from a Description line and, again, the solution will be trivial with awk.
What is the best way to remove all lines from a text file starting at first empty line in Bash? External tools (awk, sed...) can be used!
Example
1: ABC
2: DEF
3:
4: GHI
Line 3 and 4 should be removed and the remaining content should be saved in a new file.
With GNU sed:
sed '/^$/Q' "input_file.txt" > "output_file.txt"
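Q is a GNU extension; with a POSIX sed you can get the same effect by suppressing autoprint and quitting on the first empty line (a sketch, same file names as above):

```shell
# Print lines until the first empty line; q exits before p can print it.
sed -n '/^$/q;p' "input_file.txt" > "output_file.txt"
```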
With AWK:
$ awk '/^$/{exit} 1' test.txt > output.txt
Contents of output.txt
$ cat output.txt
ABC
DEF
Walkthrough: for lines that match ^$ (start-of-line, end-of-line), exit (the whole script). For all lines, print the whole line; of course, we won't get to this part once a line has made us exit.
Bet there are some more clever ways to do this, but here's one using bash's 'read' builtin. The question asks us to keep the lines before the blank in one file and send the lines after the blank to another. You could send some of standard out one place and some another if you are willing to use 'exec' and reroute stdout mid-script, but I'm going to take a simpler approach and use a command line argument to tell the script where the post-blank data should go:
#!/bin/bash
# script takes as argument the name of the file to send data to once a blank
# line is found
found_blank=0
: > "$1" # truncate the output file before appending
while read -r stuff; do
  if [ -z "$stuff" ] ; then
    found_blank=1
  fi
  if [ "$found_blank" -eq 1 ] ; then
    echo "$stuff" >> "$1"
  else
    echo "$stuff"
  fi
done
run it like this:
$ ./delete_from_empty.sh rest_of_stuff < demo
output is:
ABC
DEF
and 'rest_of_stuff' has
GHI
if you want the before-blank lines to go somewhere else besides stdout, simply redirect:
$ ./delete_from_empty.sh after_blank < input_file > before_blank
and you'll end up with two new files: after_blank and before_blank.
Perl version
perl -e '
open $fh, ">","stuff";
open $efh, ">", "rest_of_stuff";
while(<>){
if ($_ !~ /\w+/){
$fh=$efh;
}
print $fh $_;
}
' demo
This creates two output files and iterates over the demo data. When it hits a blank line, it flips the output from one file to the other.
Creates
stuff:
ABC
DEF
rest_of_stuff:
<blank line>
GHI
Another awk would be:
awk -vRS= '1;{exit}' file
By setting the record separator RS to an empty string, we define the records as paragraphs separated by sequences of empty lines. It is now easy to adapt this to select the nth block, here n=2:
awk -vRS= -v n=2 '(FNR==n){print;exit}' file
There is a problem with this method when processing files with DOS line endings (CRLF): there will be no empty lines, as every line will still contain a CR. This problem applies to all of the methods presented here, though.
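If CRLF input is a possibility, one workaround is to delete the carriage returns before the blank-line test, e.g. by piping through tr (a sketch; file and output.txt are placeholder names):

```shell
# Strip CRs so that CRLF "blank" lines really are empty, then cut at the first one.
tr -d '\r' < file | awk '/^$/{exit} 1' > output.txt
```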