For an assignment, I'm trying to make a shell script that will print a triangle that looks like the following:
+
| \
| \
| \
| \
+-----
Here is my code (written in Vim):
echo'+
| \
| \
| \
| \
+----- '
However, when I run the script, the output comes out merged onto fewer lines instead.
Can anybody tell me what I'm doing wrong?
Try this:
#!/bin/bash
echo '
+
| \
| \
| \
| \
+----- '
Just start the output on the next line, since you need the spaces before the "+".
How did your output get merged into 3 lines?
I think your original command had a space after echo and used double quotes:
echo "+
| \
| \
| \
| \
+----- "
Now pay attention to the last character of each line: inside double quotes, when a line ends with \, the backslash escapes the newline and the following line is joined onto the current one.
Make sure each line ends with a space rather than the \ (or just use single quotes).
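A minimal demo of the difference (any strings will do):

```shell
# In double quotes, a backslash right before the newline escapes it,
# so the next line is joined onto the current one:
echo "line1 \
line2"          # prints: line1 line2

# In single quotes the backslash is literal and the newline survives:
echo 'line1 \
line2'
```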
Related
I'm currently working on a bash script that applies a list of regexes to a list of links to clean up the file. At the moment I do it all manually in Kate with find/replace, but having it as a script would be more comfortable. Since I'm fairly new to bash scripting, I'm asking for your help.
Example list of urls:
0: "/suburl0"
1: "/suburl1"
2: "/suburl2"
3: "/suburl3"
4: "/suburl4"
The script I currently have:
#!/bin/bash
awk '[^\x00-\x7F]+' $1 #there are non-ascii chars in the file, so clean it out
awk 'NF' $1 # remove non-character lines
awk '^[0-900]{0,3}: ' $1 #delete all those number infront of the link
awk '"' $1 # remove those quotation marks
awk '!seen[$0]++' $1 #remove duplicate lines
awk '{print "http://example.com/" $0}' $1 #prepend the full url to the suburl
The goal is to apply all those regexes to the file so that it ends up cleaned.
My guess is that I'm not redirecting the output of awk correctly, but when I tried to redirect it into the file, the file contained only empty lines.
A more-or-less direct translation of what you wanted, without restricting it to awk:
cat $1 \
| tr -cd '[:print:][:space:]' \
| grep . \
| sed -r 's/^[0-9]{1,3}: //' \
| tr -d '"' \
| sort -u \
| awk '{print "http://example.com" $0}'
Note that sort will change the order; I'm assuming the order doesn't matter.
Also note that sed -r is a GNU extension.
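Note also that none of these commands modify the file in place; redirecting the pipeline straight to `$1` truncates the file before it is read, which would explain the empty result. A sketch of a safe write-back, using a temp file (with a POSIX-compatible sed interval instead of -r):

```shell
#!/bin/bash
# Write the cleaned output to a temp file first, then replace the
# original; redirecting directly to "$1" would truncate it first.
tmp=$(mktemp)
tr -cd '[:print:][:space:]' < "$1" \
| grep . \
| sed 's/^[0-9]\{1,3\}: //' \
| tr -d '"' \
| sort -u \
| awk '{print "http://example.com" $0}' > "$tmp" \
&& mv "$tmp" "$1"
```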
A slightly simplified and more portable version:
cat $1 \
| tr -cd '[:graph:]\n' \
| grep . \
| tr -d '"' \
| sort -u \
| sed 's,^[0-9]*:,http://example.com,'
Output:
http://example.com/suburl0
http://example.com/suburl1
http://example.com/suburl2
http://example.com/suburl3
http://example.com/suburl4
I am trying to make a file but I keep getting this error
tr: extra operand ' '
./homework: line 2: syntax error near unexpected token '|'
./homework: line 2: '| tr ' \t' '\n\n' \'
Here is my file contents
tr ['a-z'] ['A-Z'] < practice \
| tr ' \t' '\n\n' \
| sed '/^$/d' \
| sort \
| tee x-sor.out
The error is said to be on line 2, and I can't tell what the problem (or apparent syntax error) is.
But if anyone is curious: yes, it is tr ' \t' — that's what my textbook used.
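Those two messages together suggest there is a space after the trailing backslash on line 1: \ followed by a space escapes the space instead of the newline, so tr receives an extra ' ' operand, and the next line, starting with |, is parsed as a new (invalid) command. Making sure each \ is the very last character on its line should fix it; a cleaned-up sketch (the [ ] brackets around the tr sets are unnecessary, so they are dropped here):

```shell
# Each \ must be the very last character on its line.
tr 'a-z' 'A-Z' < practice \
| tr ' \t' '\n\n' \
| sed '/^$/d' \
| sort \
| tee x-sor.out
```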
I have read Select random lines from a file in bash and Random selection of columns using linux command; however, they don't deal specifically with a set of lines that need to stay in the same order. I also searched for a randomization option in the cut command, without luck.
My attempt:
I am trying to replace spaces with newlines, then sort randomly, and then use head to grab one random string per line.
cat file1.txt | while read line; do echo $line | sed 's/ /\n/g' | sort -R | head -1; done
While this does get the basic job done for one random string, is there a better, more efficient way to write this? That way, I could add options to get 1-2 random strings rather than just one.
Here's file1.txt:
#Sample #Example #StackOverflow #Question
#Easy #Simple #Code #Examples #Help
#Support #Really #Helps #Everyone #Learn
Here's my desired output (random values):
#Question
#Code #Examples
#Helps
If you know a better way to implement this code, I would really appreciate your positive input and support.
This is one solution:
while read -r line; do echo "$line" | grep -oP '(\S+)' | shuf -n $((RANDOM%2+1)) | paste -s -d' '; done < file1.txt
Using AWK:
$ awk 'BEGIN { srand() } { print $(1+int(rand()*NF)) }' data.txt
#Question
#Help
#Support
You can modify this to select 2 (or more) random words per line (with duplicates) by repeating the $(1+int(rand()*NF)) construct a corresponding number of times (or by defining a user function to do it).
Choosing N words from each line without duplicates (by position) is a bit trickier:
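For example, a sketch of the two-words-per-line case (the same field may be picked twice):

```shell
# Two random fields per line; duplicates are possible.
awk 'BEGIN { srand() } { print $(1+int(rand()*NF)), $(1+int(rand()*NF)) }' data.txt
```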
awk '
BEGIN { N=2; srand() }
{
    # Collect the fields into an array (w)
    delete w;
    for (i=1; i<=NF; i++) w[i] = $i;
    # Randomize the array (Fisher-Yates style)
    for (j=NF; j>=2; j--) {
        r = 1 + int(rand()*j);
        if (r != j) {
            x = w[j]; w[j] = w[r]; w[r] = x;
        }
    }
    # Take the first N items off the randomized array
    for (g=1; g<=(N<NF?N:NF); g++) {
        if (g > 1) printf " "
        printf "%s", w[g];
    }
    printf "\n"
}' data.txt
N is the (maximum) number of words to pick per line.
To pick a random number (at most N) of items per line, modify the code like this:
awk '
BEGIN { N=2; srand() }
{
    # Collect the fields into an array (w)
    delete w;
    for (i=1; i<=NF; i++) w[i] = $i;
    # Randomize the array (Fisher-Yates style)
    for (j=NF; j>=2; j--) {
        r = 1 + int(rand()*j);
        if (r != j) {
            x = w[j]; w[j] = w[r]; w[r] = x;
        }
    }
    # Take the first L (L <= N) items off the randomized array
    L = 1 + int(rand()*N);
    for (g=1; g<=(L<NF?L:NF); g++) {
        if (g > 1) printf " "
        printf "%s", w[g];
    }
    printf "\n"
}' data.txt
This will print 1 or 2 (up to N) randomly chosen words per line.
The code can still be optimized a bit (e.g. by shuffling only the first L elements of the array), yet it is already 2 or 3 orders of magnitude faster than a shell-based solution.
An attempt in bash:
cat file1 | xargs -n1 -I# bash -c "output_count=2; \
line=\$(echo \"#\"); \
words=\$(echo \${line} | wc -w); \
for i in \$(eval echo \"{1..\${output_count}}\"); do \
select=\$((1 + RANDOM % \${words})); \
echo \${line} | cut -d \" \" -f \${select} | tr '\n' ' '; \
done;
echo \" \" "
This assumes the file is called file1.
To change the number of randomly selected words, set output_count to a different number.
Prints
$ cat file1 | xargs -n1 -I# bash -c "output_count=2; \
line=\$(echo \"#\"); \
words=\$(echo \${line} | wc -w); \
for i in \$(eval echo \"{1..\${output_count}}\"); do \
select=\$((1 + RANDOM % \${words})); \
echo \${line} | cut -d \" \" -f \${select} | tr '\n' ' '; \
done;
echo \" \" "
#Example #Example
#Examples #Help
#Support #Learn
$ cat file1 | xargs -n1 -I# bash -c "output_count=2; \
line=\$(echo \"#\"); \
words=\$(echo \${line} | wc -w); \
for i in \$(eval echo \"{1..\${output_count}}\"); do \
select=\$((1 + RANDOM % \${words})); \
echo \${line} | cut -d \" \" -f \${select} | tr '\n' ' '; \
done;
echo \" \" "
#Question #StackOverflow
#Help #Help
#Everyone #Learn
This might work for you (GNU sed):
sed 'y/ /\n/;s/.*/echo "&"|shuf -n$((RANDOM%2+1))/e;y/\n/ /' file
Replace the spaces in each line with newlines, then, using sed's substitution e flag, pass each set of lines to the shuf -n command, and finally turn the newlines back into spaces.
Closely related to this question: Bash printf prefix
I have the following Bash script that is generating an RRDGraph with RRDTool.
#!/bin/bash
now=$(date +%s)
now_formatted=$(date +%s | awk '{printf "%s\n", strftime("%c",$1)}' | sed -e 's/:/\\:/g')
# create power graph for last week
/usr/bin/rrdtool graph /var/www/power-week.png \
--start end-7d --width 543 --height 267 --end $now-1min --slope-mode \
--vertical-label "Watts" --lower-limit 0 \
--alt-autoscale-max \
--title "Power: Last week vs. week before" \
--watermark "(©) $(date +%Y) Alyn R. Tiedtke" \
--font WATERMARK:8 \
DEF:Power=/root/currentcost/ccdata.rrd:Power:AVERAGE \
DEF:Power2=/root/currentcost/ccdata.rrd:Power:AVERAGE:end=$now-7d1min:start=end-7d \
VDEF:Last=Power,LAST \
VDEF:First=Power,FIRST \
VDEF:Min=Power,MINIMUM \
VDEF:Peak=Power,MAXIMUM \
VDEF:Average=Power,AVERAGE \
CDEF:kWh=Power,1000,/,168,* \
CDEF:Cost=kWh,.1029,* \
SHIFT:Power2:604800 \
LINE1:Power2#00CF00FF:"Last Week\\n" \
HRULE:Min#58FAF4:"Min " \
GPRINT:Power:MIN:"%6.2lf%sW" \
COMMENT:"\\n" \
LINE1:Power#005199FF:"Power " \
AREA:Power#00519933:"" \
GPRINT:Last:"%6.2lf%sW" \
COMMENT:"\\n" \
HRULE:Average#9595FF:"Average" \
GPRINT:Power:AVERAGE:"%6.2lf%sW" \
COMMENT:"\\n" \
HRULE:Peak#ff0000:"Peak " \
GPRINT:Power:MAX:"%6.2lf%sW" \
COMMENT:"\\n" \
GPRINT:kWh:AVERAGE:" total %6.2lfkWh\\n" \
GPRINT:Cost:AVERAGE:" cost %6.2lf £\\n" \
GPRINT:Cost:AVERAGE:"$(printf \\" cost %11s\\" £%.2lf | sed 's/\£/\£ /g')\\n" \
COMMENT:" \\n" \
GPRINT:First:"Showing from %c\\n":strftime \
GPRINT:Last:" to %c\\n":strftime \
COMMENT:" Created at $now_formatted"
Which produces a graph like this (notice the leading \ on the lower cost line in the legend):-
Concentrating specifically on the following line:-
GPRINT:Cost:AVERAGE:"$(printf \\" cost %11s\\" £%.2lf | sed 's/\£/\£ /g')\\n" \
This is the line that is printing out the lower cost line in the legend.
I am passing a GPRINT-formatted value of £4.54 to Bash's printf to be padded to 11 characters with a cost label prefixed to it. I am then piping this to sed to add a space between the £ and the actual value.
What I want to know is: why is the escaped \ coming through in the output? If I remove the \\ just after printf, bash complains that something is missing.
How can I stop this \ from coming through in the output?
Try this line:
GPRINT:Cost:AVERAGE:"$(printf ' cost %11s' £%.2lf | sed 's/\£/\£ /g')\\n" \
I changed the inner " marks to ' marks and removed the backslashes. Inside $( ... ) quoting starts over, so the \\" in your original became a literal backslash followed by an opening quote, and that literal backslash is what was showing up in the output.
I have a shell script that pulls the number of online players, but I need a little help.
The script:
#!/usr/bin/bash
wget --output-document=- http://runescape.com/title.ws 2>/dev/null \
| grep PlayerCount \
| head -1l \
| sed 's/^[^>]*>//' \
| sed "s/currently.*$/$(date '+%r %b %d %Y')/"
It outputs the following:
<p class="top"><span>69,215</span> people 06:31:37 PM Nov 22 2011
What I would like it to say is this:
69,215 people 06:31:37 PM Nov 22 2011
Can any of you help me? :)
This is one of many ways to do it, using cut and sed (cut -d">" -f 3,4 | sed 's/<\/span>//'):
[ 15:40 jon#hozbox.com ~ ]$ echo "<p class="top"><span>69,215</span> people 06:31:37 PM Nov 22 2011" | cut -d">" -f 3,4 | sed 's/<\/span>//'
69,215 people 06:31:37 PM Nov 22 2011
#!/usr/bin/bash
wget --output-document=- http://runescape.com/title.ws 2>/dev/null \
| grep PlayerCount \
| head -1l \
| sed 's/^[^>]*>//' \
| sed "s/currently.*$/$(date '+%r %b %d %Y')/" \
| cut -d">" -f 3,4 \
| sed 's/<\/span>//'
I think what you're after is code that removes any tags. Your sed 's/^[^>]*>//' only removes text up to and including the first >.
You may want to use sed 's/<[^>]*>//g' instead.
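For example, with the sample fragment:

```shell
# Strip every <...> tag rather than just the text up to the first '>':
echo '<p class="top"><span>69,215</span> people' | sed 's/<[^>]*>//g'
# prints: 69,215 people
```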
Pipe the output to:
sed 's%<p class="top"><span>\(.*\)</span>%\1%'
Or combine the two separate sed scripts you already have with this one, giving you:
sed -e 's/^[^>]*>//' \
-e "s/currently.*$/$(date '+%r %b %d %Y')/" \
-e 's%<p class="top"><span>\(.*\)</span>%\1%'
In fact, the grep and head commands are also superfluous; you could do the lot with a single sed command. Note that putting the | on the end of the line means you don't need a backslash.
#!/usr/bin/bash
wget --output-document=- http://runescape.com/title.ws 2>/dev/null |
sed -e '/PlayerCount/!d' \
-e 's/^[^>]*>//' \
-e "s/currently.*$/$(date '+%r %b %d %Y')/" \
-e 's%<p class="top"><span>\(.*\)</span>%\1%' \
-e 'q'
The /PlayerCount/! address deletes every input line that does not match 'PlayerCount'. The next three lines do what they always did. The last line implements head -1l by printing (implicitly) and quitting.
(As a matter of idle interest, the wget command produces some 790 lines of data if it runs to completion. I get a 'cannot write to "-" (Broken pipe)' error if I don't redirect standard error to /dev/null, plus some unwanted progress reporting. There are probably options to handle that; it also appears there's only one line containing 'PlayerCount', so you could omit the -e q command.)