I built a web server and I'm trying to test it, so I simulate many requests with a bash script:
i=0
while [ $i -lt 20 ]; do
echo ''
echo ''
echo ''
echo '============== current time ==============='
echo $i
echo '==========================================='
echo ''
curl -i http://www.example.com/index?key=abceefgefwe
i=$((i+1))
done
This works well, but I'd prefer to keep all of the echo output at the same position on the terminal.
I've read this: How to show and update echo on same line
So I added -ne to echo, but it doesn't seem to work as expected.
The output from curl still pushes the echo lines away.
This is what I need:
============== current time =============== ---\
1 <------ this number keeps updating ----> the 3 lines stay here
=========================================== ---/
The messages from `curl` appear here, scrolling in the normal way
There's another option: position the cursor before you write to stdout.
You can set x and y to suit your needs.
#!/bin/bash
y=10
x=0
i=0
while [ $i -lt 20 ]; do
tput cup $y $x
echo ''
echo ''
echo ''
echo '============== current time ==============='
echo $i
echo '==========================================='
echo ''
curl -i http://www.example.com/index?key=abceefgefwe
i=$((i+1))
done
You could add a clear command at the beginning of your while loop. That would keep the echo statements at the top of the screen during each iteration, if that's what you had in mind.
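For instance, a minimal sketch of that variant (the same loop as in the question, with clear repainting the screen on each pass):
#!/bin/bash
i=0
while [ $i -lt 20 ]; do
clear    # repaint from the top-left on every iteration
echo '============== current time ==============='
echo $i
echo '==========================================='
curl -i 'http://www.example.com/index?key=abceefgefwe'
i=$((i+1))
done
Note that curl's output still scrolls below the banner within a single iteration; clear only resets the screen between iterations.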
When I do this sort of thing, rather than using curses/ncurses or tput, I just restrict myself to a single line and hope it doesn't wrap. I re-draw the line every iteration.
For example:
i=0
while [ $i -lt 20 ]; do
curl -i -o "index$i" 'http://www.example.com/index?key=abceefgefwe'
printf "\r==== current time: %2d ====" $i
i=$((i+1))
done
If you're not displaying text of predictable length, you might need to reset the display first, since \r doesn't clear the old content: if you go from displaying "there" to "here", you'll end up with "heree", keeping the extra letter from the previous string. To solve that:
i=$((COLUMNS-1))
space=""
while [ $i -gt 0 ]; do
space="$space "
i=$((i-1))
done
while [ $i -lt 20 ]; do
curl -i -o "index$i" 'http://www.example.com/index?key=abceefgefwe'
output="$(head -c$((COLUMNS-28))) "index$i" |head -n1)"
printf "\r%s\r==== current time: %2d (%s) ====" "$space" $i "$output"
i=$((i+1))
done
This puts a full-width line of spaces to clear the previous text and then writes over the now-blank line with the new content. I've used a segment of the first line of the retrieved file, up to a maximum of the line's width (counting the extra text; I may be off by one somewhere). This would be cleaner if I could just use head -c$((COLUMNS-28)) -n1 (in which case the order of the options would matter!).
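Alternatively, if your terminal supports it, you can skip building the padding string and simply erase to the end of the line; a minimal sketch, assuming COLUMNS is available (it often isn't in non-interactive shells, hence the fallback):
#!/bin/bash
: "${COLUMNS:=80}"    # fall back to 80 columns when COLUMNS is unset
i=0
while [ $i -lt 20 ]; do
curl -s -i -o "index$i" 'http://www.example.com/index?key=abceefgefwe'    # -s keeps curl's progress meter off the display
printf '\r'    # return to column 0...
tput el        # ...and erase to the end of the line
printf '==== current time: %2d (%s) ====' "$i" "$(head -n1 "index$i" | cut -c1-$((COLUMNS-28)))"
i=$((i+1))
done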
Related
I use nohup to run a bash script that reads each line of a file (and extracts the info I need). I've used it on multiple files with different line counts, mostly between 50k and 100k. But no matter how many lines my file has, nohup always stops outputting info around 6k lines before the last line.
My script, called fetchStuff.sh:
#!/bin/bash
urlFile=$1
myHost='http://example.com'
useragent='me'
count=0
total_lines=$(wc -l < $urlFile)
while read url; do
if [[ "$url" == *html ]]; then continue; fi
reqURL=${myHost}${url}
stuffInfo=$(curl -s -XGET -A "$useragent" "$reqURL" | jq -r '.stuff')
[ "$stuffInfo" != "null" ] && echo ${stuffInfo/unwanted_garbage/} >> newversion-${urlFile}
((count++))
if [ $(( $count%20 )) -eq 0 ]
then
sleep 1
fi
if [ $(( $count%100 )) -eq 0 ]; then echo "$urlFile read ${count} of $total_lines"; fi
done < $urlFile
I call it like so: nohup ./fetchStuff.sh file1.txt &
I get count info in nohup.out, e.g. "file1 read 100 of 60000", "file1 read 200 of 60000", etc.
But it always stops around 6k before end of file.
When I do tail nohup.out after running the script on each file, I get these as the last lines in nohup.out:
file1.txt read 90000 of 96317
file2.txt read 68000 of 73376
file3.txt read 85000 of 91722
file4.txt read 93000 of 99757
I can't figure out why it always stops around 6k lines before the end of the file. (I put the sleep in to avoid flooding the API with a lot of requests.)
The loop skips lines that end with html, and they're not counted in $count. So I'll bet there are 6317 lines in file1.txt that end with html, 5376 in file2.txt, and so on.
If you want $count to include them, put ((count++)) before the if statement that checks the suffix.
while read url; do
((count++))
if [[ "$url" == *html ]]; then continue; fi
reqURL=${myHost}${url}
stuffInfo=$(curl -s -XGET -A "$useragent" "$reqURL" | jq -r '.stuff')
[ "$stuffInfo" != "null" ] && echo ${stuffInfo/unwanted_garbage/} >> newversion-${urlFile}
if [ $(( $count%20 )) -eq 0 ]
then
sleep 1
fi
if [ $(( $count%100 )) -eq 0 ]; then echo "$urlFile read ${count} of $total_lines"; fi
done < $urlFile
Alternatively you could leave them out of total_lines with:
total_lines=$(grep -c -v 'html$' "$urlFile")
And you could do away with the if statement by using
grep -v 'html$' "$urlFile" | while read url; do
...
done
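Putting both changes together, a sketch of the revised loop (same variables as in the question; count is then only used inside the pipeline's subshell, which is fine here because nothing reads it afterwards):
total_lines=$(grep -c -v 'html$' "$urlFile")
count=0
grep -v 'html$' "$urlFile" | while read -r url; do
reqURL=${myHost}${url}
stuffInfo=$(curl -s -XGET -A "$useragent" "$reqURL" | jq -r '.stuff')
[ "$stuffInfo" != "null" ] && echo "${stuffInfo/unwanted_garbage/}" >> "newversion-${urlFile}"
((count++))
if [ $(( count % 20 )) -eq 0 ]; then sleep 1; fi    # throttle the API
if [ $(( count % 100 )) -eq 0 ]; then echo "$urlFile read ${count} of $total_lines"; fi
done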
I'm writing a bash script to read a set of files line by line and perform some edits. To begin with, I'm simply trying to move the files to backup locations and write them out as-is, to test that the script is working. However, it fails to copy the last line of each file. Here is the snippet:
while IFS= read -r line
do
echo "Line is ***$line***"
echo "$line" >> $POM
done < $POM.backup
I obviously want to preserve whitespace when I copy the files, which is why I have set the IFS to null. I can see from the output that the last line of each file is being read, but it never appears in the output.
I've also tried an alternative variation, which does print the last line, but adds a newline to it:
while IFS= read -r line || [ -n "$line" ]
do
echo "Line is ***$line***"
echo "$line" >> $POM
done < $POM.backup
What is the best way to do this read-write operation, so as to write the files exactly as they are, with the correct whitespace and no newlines added?
The command that is adding the line feed (LF) is not the read command, but the echo command. read does not return the line with the delimiter still attached to it; rather, it strips the delimiter off (that is, it strips it off if it was present in the line, in other words, if it just read a complete line).
So, to solve the problem, you have to use echo -n to avoid adding back the delimiter, but only when you have read an incomplete line.
Secondly, I've found that when providing read with a NAME (in your case, line), it trims leading and trailing whitespace, which I don't think you want. But this can be solved by not providing a NAME at all and using the default return variable REPLY, which preserves all whitespace.
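To illustrate the trimming difference, a quick sketch (with the default IFS):
printf '   hello   \n' | { read -r line; echo "[$line]"; }    # prints [hello]
printf '   hello   \n' | { read -r; echo "[$REPLY]"; }        # prints [   hello   ]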
So, this should work:
#!/bin/bash
inFile=in;
outFile=out;
rm -f "$outFile";
rc=0;
while [[ $rc -eq 0 ]]; do
read -r;
rc=$?;
if [[ $rc -eq 0 ]]; then ## complete line
echo "complete=\"$REPLY\"";
echo "$REPLY" >>"$outFile";
elif [[ -n "$REPLY" ]]; then ## incomplete line
echo "incomplete=\"$REPLY\"";
echo -n "$REPLY" >>"$outFile";
fi;
done <"$inFile";
exit 0;
Edit: Wow! Three excellent suggestions from Charles Duffy, here's an updated script:
#!/bin/bash
inFile=in;
outFile=out;
while { read -r; rc=$?; [[ $rc -eq 0 || -n "$REPLY" ]]; }; do
if [[ $rc -eq 0 ]]; then ## complete line
echo "complete=\"$REPLY\"";
printf '%s\n' "$REPLY" >&3;
else ## incomplete line
echo "incomplete=\"$REPLY\"";
printf '%s' "$REPLY" >&3;
fi;
done <"$inFile" 3>"$outFile";
exit 0;
After review, I wonder if:
{
line=
while IFS= read -r line
do
echo "$line"
line=
done
echo -n "$line"
} < "$INFILE" > "$OUTFILE"
is just not enough: when read hits an incomplete last line, it returns non-zero but still fills $line, so the final echo -n writes that remainder out with no newline added.
Here is my initial proposal:
#!/bin/bash
INFILE=$1
if [[ -z $INFILE ]]
then
echo "[ERROR] missing input file" >&2
exit 2
fi
OUTFILE=$INFILE.processed
# a way to know if last line is complete or not :
lastline=$(tail -n 1 "$INFILE" | wc -l)
if [[ $lastline == 0 ]]
then
echo "[WARNING] last line is incomplete -" >&2
fi
# we add a newline ANYWAY; if the last line was complete, the end of the file will just be seen as an extra, empty line.
echo | cat "$INFILE" - | {
first=1
while IFS= read -r line
do
if [[ $first == 1 ]]
then
echo "First Line is ***$line***" >&2
first=0
else
echo "Next Line is ***$line***" >&2
echo
fi
echo -n "$line"
done
} > "$OUTFILE"
if diff "$OUTFILE" "$INFILE"
then
echo "[OK]"
exit 0
else
echo "[KO] processed file differs from input"
exit 1
fi
The idea is to always add a newline at the end of the file and to print newlines only BETWEEN the lines that are read.
This should work for nearly all text files, provided they do not contain a 0 byte (the \0 character), in which case that byte would be lost.
The initial test can be used to decide whether an incomplete text file is acceptable or not.
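To exercise the incomplete-last-line case, one might create a test file without a trailing newline (file and script names here are hypothetical):
printf 'first line\nsecond line' > sample.txt    # no trailing newline
./process.sh sample.txt                          # hypothetical name for the script above
# expected: the "[WARNING] last line is incomplete -" message on stderr, then "[OK]"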
Add the newline back only when read actually read a complete line (read strips the delimiter, so $line can never end in a newline; testing ${line: -1} against '\n' will not work). Like this:
incomplete=
while IFS= read -r line || { [ -n "$line" ] && incomplete=1; }
do
echo "Line is ***$line***"
printf '%s' "$line" >&3
# add the newline back unless this was the final, incomplete line
if [ -z "$incomplete" ]
then
printf '\n' >&3
fi
done < "$POM.backup" 3>"$POM"
So I keep messing this up, and I think where I was going wrong is that the code I'm writing needs to return only the file name and the number of lines from an argument.
So, using wc, I need something that accepts either 0 or 1 arguments and prints out something like "The file findlines.sh has 4 lines", or, if they give it ./findlines.sh Desktop/testfile, "the file testfile has 5 lines".
I have made a few attempts, and all of them have failed. I can't seem to figure out how to approach it at all.
Should I echo "The file", then toss in the argument name, and then add another echo for "has [number of lines] lines"?
Sample input from the terminal would be something like
>findlines.sh
Output:the file findlines.sh has 18 lines
Or maybe
>findlines.sh /home/directory/user/grocerylist
Output: the file grocerylist has 16 lines
#! /bin/sh -
file=${1-findlines.sh}
lines=$(wc -l < "$file") &&
printf 'The file "%s" has %d lines\n' "$file" "$lines"
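Assuming the script itself is four lines long, usage would look something like this (line counts are illustrative; note it prints the path exactly as given, not just the basename):
$ ./findlines.sh
The file "findlines.sh" has 4 lines
$ ./findlines.sh /home/directory/user/grocerylist
The file "/home/directory/user/grocerylist" has 16 lines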
This should work:
#!/bin/bash
file="findfiles.sh"
if [ $# -ge 1 ]
then
file=$1
fi
if [ -f "$file" ]
then
lines=$(wc -l "$file" | awk '{print $1}')
echo "The file $file has $lines lines"
else
echo "File not found"
fi
See sch's answer for a shorter example that doesn't use awk.
I'm trying to do a for loop with multiple conditions, and I didn't find any information about how to do it on the web.
I'm sorry for the stupid question, but I've just started programming on Linux.
What am I doing wrong here?
#!/bin/bash
j=0
for line in `cat .temp_` || j in `seq 0 19`
do
...
done
The error says the syntax is wrong and that I can't use ||.
Thanks a lot
for line in `cat .temp_`
do
if ! grep -Fxq "$line" "$dictionary" && [ $j -lt 20 ]
then
echo "$line" >> "$dictionary"
j=$((j+1))
fi
[ $j -ge 20 ] && break
done
You can't check an extra condition in the head of a for loop in the shell. You must do it in a separate statement; in this case:
[ $j -gt 20 ] && break
Another solution, without break:
j=0
while read -r line && [ $j -lt 20 ]
do
if ! grep -Fxq "$line" "$dictionary"
then
echo "$line" >> "$dictionary"
j=$((j+1))
fi
done < .temp_
You are trying to combine two different styles of for-loops into one. Instead, just break if the value of j becomes too large:
j=0
while read -r line; do
(( j >= 20 )) && break
...
done < .temp_
(This, by the way, is the preferred way to iterate over a file in bash. Using a for loop runs into problems if the file is too large, as you are essentially building a command line out of the entire contents of the file.)
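To see the word-splitting problem, a quick sketch with a hypothetical two-line file: the for loop iterates over every whitespace-separated word, while the while loop gets whole lines.
printf 'one line\nanother line\n' > demo.txt
for word in $(cat demo.txt); do echo "[$word]"; done      # 4 iterations: [one] [line] [another] [line]
while read -r line; do echo "[$line]"; done < demo.txt    # 2 iterations: [one line] [another line]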
[UPDATE: the following is based on my conjecture as to the purpose of the loop. See Calculate Word occurrences from file in bash for the real context.]
Actually, you can dispense with the loop. You are looking for at most 20 lines from .temp_ that do not already appear in the file named by $dictionary, right?
sort -u .temp_ | grep -f "$dictionary" -Fx -v -m 20 >> "$dictionary"
This will call grep just once, instead of once per line in .temp_ (-m 20 makes grep stop after it has output 20 non-matching lines).
A simple nested for-loop?
for line in `cat file`; do for j in {0..19}; do
your_commands
done; done
If I understand you correctly (you want to run a command on every line 20 times).
Once I needed to process all the files inside a directory, but only up to the nth file, or until all were processed:
count=1
nth=10
for f in folder/*; do
((count++))
# process $f
if [[ $count -gt $nth ]]
then
break
fi
done
I am looking for a command (or way of doing) the following:
echo -n 6 | doif -criteria "isgreaterthan 4" -command 'do some stuff'
The echo part would obviously come from a more complicated string of bash commands. Essentially, I am taking a piece of text from each line of a file, and if it appears in another set of files more than x times (say 100), then it will be appended to another file.
Is there a way to perform such trickery with awk somehow? Or is there another command? I'm hoping there is some sort of xargs-style command to do this, in the sense that the -I% portion would be the value with which to check the criteria, and whatever follows would be the command to execute.
Thanks for thy insight.
It's possible, though I don't see the reason why you would do that...
function doif
{
read val1
op=$1
val2="$2"
shift 2
if [ "$val1" "$op" "$val2" ]; then
"$@"
fi
}
echo -n 6 | doif -gt 3 ls /
Here, 6 is read from stdin into val1, -gt and 3 supply the test, and the remaining arguments (ls /) are run as the command only if the test succeeds.
if test 6 -gt 4; then
# do some stuff
fi
or
if test $( echo 6 ) -gt 4; then : ;fi
or
output=$( some cmds that generate text)
# this will be an error if $output is ill-formed
if test "$output" -gt 4; then : ; fi