Bash - Multiple conditions in a for loop - linux

I'm trying to write a for loop with multiple conditions, and I didn't find any information on how to do it on the web.
I'm sorry for the stupid question, but I just started programming on Linux.
What am I doing wrong here?
#!/bin/bash
j=0
for line in `cat .temp_` || j in `seq 0 19`
do
...
done
The error says the syntax is wrong and that I can't use ||.
Thanks a lot

j=0
for line in `cat .temp_`
do
    if ! grep -Fxq "$line" "$dictionary" && [ $j -lt 20 ]
    then
        echo "$line" >> "$dictionary"
        j=$((j+1))
    fi
    [ $j -ge 20 ] && break
done
You can't check an extra condition in the head of a for loop in shell. You must do it in a separate statement inside the loop.
In this case:
[ $j -ge 20 ] && break
Another solution, without break. The condition of a while loop can be any command list, so the loop keeps going only while both the read succeeds and the counter test passes:
j=0
while read -r line && [ $j -lt 20 ]
do
    if ! grep -Fxq "$line" "$dictionary"
    then
        echo "$line" >> "$dictionary"
        j=$((j+1))
    fi
done < .temp_

You are trying to combine two different styles of for-loop into one. Instead, just break out of the loop when the value of j becomes too large:
j=0
while read -r line; do
    (( j >= 20 )) && break
    ...   # rest of the loop body, which should increment j
done < .temp_
(This, by the way, is the preferred way to iterate over the lines of a file in bash. Using a for-loop runs into problems if the file is large, as you are essentially building a command line out of the entire contents of the file, and each whitespace-separated word becomes its own iteration.)
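To see the difference, here's a minimal self-contained sketch (the file path is made up for illustration):

printf 'one two\nthree\n' > /tmp/demo

# The for-loop word-splits the file's contents: three iterations.
for word in $(cat /tmp/demo); do echo "word: $word"; done

# The while-read loop gets each line intact: two iterations.
while IFS= read -r line; do echo "line: $line"; done < /tmp/demo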
[UPDATE: the following is based on my conjecture as to the purpose of the loop. See Calculate Word occurrences from file in bash for the real context.]
Actually, you can dispense with the loop. You are looking for at most 20 lines from .temp_ that do not already appear in the file whose name is in dictionary, right?
sort -u .temp_ | grep -f "$dictionary" -Fx -v -m 20 >> "$dictionary"
This will call grep just once, instead of once per line in .temp_.
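For instance, with some made-up data (the file names here are just for the demonstration):

printf 'apple\nbanana\napple\ncherry\n' > .temp_
printf 'banana\n' > dict

dictionary=dict
sort -u .temp_ | grep -f "$dictionary" -Fx -v -m 20 >> "$dictionary"

cat "$dictionary"   # banana, then apple and cherry appended; the duplicate and the existing entry are skipped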

A simple nested for-loop?
for line in `cat file`; do
    for j in {0..19}; do
        your_commands
    done
done
If I understand you correctly, you want to run a command 20 times for every line.

Once I needed to process all files inside a directory, but only up to the nth file, or until all of them were processed:
count=1
nth=10
for f in folder/*; do
    ((count++))
    # process $f
    if [[ $count -gt $nth ]]
    then
        break
    fi
done

Related

Why does my nohup bash script reading in file always stop outputting count around 6k before end of file?

I use nohup to run a bash script that reads each line of a file (and extracts the info I need). I've used it on multiple files with different line counts, mostly between 50k and 100k. But no matter how many lines my file has, nohup always stops outputting info around 6k lines before the last line.
My script, fetchStuff.sh:
#!/bin/bash
urlFile=$1
myHost='http://example.com'
useragent='me'
count=0
total_lines=$(wc -l < $urlFile)
while read url; do
    if [[ "$url" == *html ]]; then continue; fi
    reqURL=${myHost}${url}
    stuffInfo=$(curl -s -XGET -A "$useragent" "$reqURL" | jq -r '.stuff')
    [ "$stuffInfo" != "null" ] && echo ${stuffInfo/unwanted_garbage/} >> newversion-${urlFile}
    ((count++))
    if [ $(( $count%20 )) -eq 0 ]
    then
        sleep 1
    fi
    if [ $(( $count%100 )) -eq 0 ]; then echo "$urlFile read ${count} of $total_lines"; fi
done < $urlFile
I call it like so: nohup ./fetchStuff.sh file1.txt &
I get count info in nohup.out, e.g. "file1 read 100 of 60000", "file1 read 200 of 60000", etc.
But it always stops around 6k lines before the end of the file.
When I tail nohup.out after running the script on each file, these are the last lines in nohup.out:
file1.txt read 90000 of 96317
file2.txt read 68000 of 73376
file3.txt read 85000 of 91722
file4.txt read 93000 of 99757
I can't figure out why it always stops around 6k lines before the end of the file. (I put in the sleep timer to avoid flooding the API with a lot of requests.)
The loop skips lines that end with html, and they're not counted in $count. So I'll bet there are 6317 lines in file1.txt that end with html, 5376 in file2.txt, and so on.
If you want $count to include them, put ((count++)) before the if statement that checks the suffix.
while read url; do
    ((count++))
    if [[ "$url" == *html ]]; then continue; fi
    reqURL=${myHost}${url}
    stuffInfo=$(curl -s -XGET -A "$useragent" "$reqURL" | jq -r '.stuff')
    [ "$stuffInfo" != "null" ] && echo ${stuffInfo/unwanted_garbage/} >> newversion-${urlFile}
    if [ $(( $count%20 )) -eq 0 ]
    then
        sleep 1
    fi
    if [ $(( $count%100 )) -eq 0 ]; then echo "$urlFile read ${count} of $total_lines"; fi
done < $urlFile
Alternatively, you could leave them out of total_lines with:
total_lines=$(grep -c -v 'html$' "$urlFile")
And you could do away with the if statement by using
grep -v 'html$' "$urlFile" | while read url; do
...
done
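One caveat worth mentioning: piping grep into while runs the loop in a subshell in bash, so any change to $count would be lost after the loop ends. That doesn't matter in this script, since $count is only used inside the loop, but if you ever need it afterwards, process substitution keeps the loop in the current shell. A minimal sketch:

count=0
while read -r url; do
    ((count++))
    # ... process "$url" as before ...
done < <(grep -v 'html$' "$urlFile")
echo "processed $count non-html lines"   # $count survives the loop here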

Using/Filtering raw live data using the "grep" command with the "while-read" loop

Goal: filter raw live data using the grep command. I would like to provide the user with a filter flag "-f" that will allow them to filter the incoming data based on their own input at the terminal.
Issue: although the grep command seems to run in the script without issue, it doesn't seem to actively filter each element of the file line by line as the while loop reads in the data.
The code I have so far is as follows:
maxbar=60 # largest no. of char in the bar
MAX=60
while read dayst timst qty
do
    if ! [[ -z $FILTER ]] # FILTER flag starts process
    then
        grep "200510" tailcount.sh
        printf '%6.6s %6.6s %4d:' $dayst $timst $qty
        pr_bar $qty $MAX
    fi
    printf '%6.6s %6.6s %4d:' $dayst $timst $qty
    pr_bar $qty $MAX
done < <( if ! [[ -z $ON ]]; then bash killswitch.sh; elif [[ -z $ON ]]; then bash tailcount.sh; fi )

Unix - Replace column value inside while loop

I have a comma-separated (sometimes tab-separated) text file, as below:
parameters.txt:
STD,ORDER,ORDER_START.xml,/DML/SOL,Y
STD,INSTALL_BASE,INSTALL_START.xml,/DML/IB,Y
With the code below I try to loop through the file and do something:
while read line; do
    if [[ $1 = "$(echo "$line" | cut -f 1)" ]] && [[ "$(echo "$line" | cut -f 5)" = "Y" ]]; then
        # do something...
        if [[ $? -eq 0 ]]; then
            # code to replace the final flag
        fi
    fi
done < <text_file_path>
I wanted to update the last column of the file to N if the above operation succeeds; however, the approaches below are not working for me:
sed 's/$f5/N/'
'$5=="Y",$5=N;{print}'
$(echo "$line" | awk '$5=N')
Update: a few considerations to give more clarity, which I missed at first, apologies!
The parameters file may contain lines whose last flag field is "N" as well.
The final flag needs to be updated only if the "do something" code has executed successfully.
After looping through all lines, i.e. after exiting the while loop, the flags for all rows are to be set to "Y".
Perhaps invert the operations and do the processing in awk:
$ awk -v f1="$1" 'BEGIN {FS=OFS=","}
    f1==$1 && $5=="Y" { # do something
        $5="N" }1' file
Not sure what the "do something" operation is; if you need to call another command or script, that's possible as well.
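For example, here's a rough sketch of calling an external script per line from awk; system() returns the command's exit status, and myscript.sh is just a hypothetical stand-in for your "do something":
$ awk -v f1="$1" 'BEGIN {FS=OFS=","}
    f1==$1 && $5=="Y" {
        # run the external command with the third field as an argument;
        # exit status 0 means success (beware: $3 is passed unquoted to the shell)
        if (system("./myscript.sh " $3) == 0)
            $5="N"
    }1' file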
with bash:
(
    IFS=,
    while read -ra fields; do
        if [[ ${fields[0]} == "$1" ]] && [[ ${fields[4]} == "Y" ]]; then
            # do something
            fields[4]="N"
        fi
        echo "${fields[*]}"
    done < file | sponge file
)
I run that in a subshell so the effects of altering IFS are localized.
This uses sponge to write back to the same file. You need the moreutils package to use it; otherwise use
done < file > tmp && mv tmp file
Perhaps a bit simpler and less bash-specific:
while IFS= read -r line; do
    case $line in
        "$1",*,Y)
            # do something
            line="${line%Y}N"
            ;;
    esac
    echo "$line"
done < file
To replace ,N at the end of a line ($) with ,Y, e.g. to reset all the flags after the loop:
sed 's/,N$/,Y/' file
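If you want to update the file itself rather than print to stdout, GNU sed can edit in place (a sketch; the -i.bak form keeps a backup):

sed -i 's/,N$/,Y/' file        # in place
sed -i.bak 's/,N$/,Y/' file    # in place, keeping the original as file.bak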

bash: How to echo strings at the same position

I built my web server and I'm trying to run a test, so I simulate many requests with a bash script:
i=0
while [ $i -lt 20 ]; do
echo ''
echo ''
echo ''
echo '============== current time ==============='
echo $i
echo '==========================================='
echo ''
curl -i http://www.example.com/index?key=abceefgefwe
i=$((i+1))
done
This works well, but I'd prefer all of the echo output to stay at the same position on the terminal.
I've read this: How to show and update echo on same line
So I added -ne to echo, but it doesn't seem to work as expected.
The messages from curl still push the echoed lines away.
This is what I need:
============== current time =============== ---\
1 <------ this number keeps updating ----> the 3 lines stay here
=========================================== ---/
Here are the messages of `curl`, which display in the normal way
There's another option: position the cursor before you write to stdout.
You can set x and y to suit your needs.
#!/bin/bash
y=10
x=0
i=0
while [ $i -lt 20 ]; do
tput cup $y $x
echo ''
echo ''
echo ''
echo '============== current time ==============='
echo $i
echo '==========================================='
echo ''
curl -i http://www.example.com/index?key=abceefgefwe
i=$((i+1))
done
You could add a clear command at the beginning of your while loop. That would keep the echo statements at the top of the screen during each iteration, if that's what you had in mind.
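Something like this, to sketch the idea (the curl output will still scroll below the banner, and gets wiped at the start of each iteration):

i=0
while [ $i -lt 20 ]; do
    clear   # redraw from the top-left on every iteration
    echo '============== current time ==============='
    echo $i
    echo '==========================================='
    curl -i http://www.example.com/index?key=abceefgefwe
    i=$((i+1))
done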
When I do this sort of thing, rather than using curses/ncurses or tput, I just restrict myself to a single line and hope it doesn't wrap. I re-draw the line every iteration.
For example:
i=0
while [ $i -lt 20 ]; do
curl -i -o "index$i" 'http://www.example.com/index?key=abceefgefwe'
printf "\r==== current time: %2d ====" $i
i=$((i+1))
done
If you're not displaying text of predictable length, you might need to blank the line first, since \r doesn't erase the old content: if you go from there to here, you'll end up with heree, with the extra letter left over from the previous string. To solve that:
space=""
i=$((COLUMNS-1))
while [ $i -gt 0 ]; do
    space="$space "
    i=$((i-1))
done

i=0
while [ $i -lt 20 ]; do
    curl -i -o "index$i" 'http://www.example.com/index?key=abceefgefwe'
    output="$(head -c$((COLUMNS-28)) "index$i" | head -n1)"
    printf "\r%s\r==== current time: %2d (%s) ====" "$space" $i "$output"
    i=$((i+1))
done
This puts down a full-width line of spaces to clear the previous text, and then writes over the now-blank line with the new content. I've used a segment of the first line of the retrieved file, up to a maximum of the line's remaining width (counting the surrounding text; I may be off by one somewhere). This would be cleaner if I could just use head -c$((COLUMNS-28)) -n1 (which would have to care about the order of the two limits!).
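As an aside, if your printf supports the * field width (bash's builtin does), you can skip building the space string entirely: %-*s left-justifies the text in a field of the given width, padding with spaces and overwriting any leftovers from a longer previous line. A minimal sketch, assuming COLUMNS is set (in a non-interactive script you may need COLUMNS=$(tput cols)):

i=0
while [ $i -lt 20 ]; do
    # pad to the full terminal width, overwriting the previous line
    printf '\r%-*s' $((COLUMNS-1)) "==== current time: $i ===="
    sleep 1
    i=$((i+1))
done
echo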

bash: convert string to int & if int > #

I would like to have a bash script that checks whether a file has more than a given number of lines, but I have not yet got it working right and I'm not sure how to do it.
I've never used bash before.
Right now I use linesStr=$(cat log | wc -l) to get the number of lines in the file (I expect it to be a string). When echoing it, it gives me the number 30, which is correct.
But since it's most likely a string, the if statement doesn't work, so I think I need to convert linesStr into an int called linesInt. I also have the feeling the if statement itself is not written correctly.
I also have the feeling the if-statement itself is not done correctly either.
#!/bin/bash
linesStr=$(cat log | wc -l)
echo $linesStr
if [$linesStr > 29]
then echo "log file is bigger than 29 lines"
#sed -i 1d log
fi
I would appreciate if anyone can give me a simple beginners solution.
No need for cat.
Spaces are required around [ and ].
Use a numeric comparison operator instead of the redirection operator.
Here is a working script.
#!/bin/bash
linesStr=$( wc -l < log )
if [[ "$linesStr" -gt "29" ]]; then
echo Foo
fi
Your if block is wrong: in if [$linesStr > 29] there should be a space after [ and before ].
#!/bin/bash
linesStr=$(wc -l < log)
echo $linesStr
if [[ $linesStr -gt 29 ]]; then
    echo "log file is bigger than 29 lines"
fi
It is advisable to always use [[ ]] in an if statement rather than [ ]. Whenever you want to compare integers, don't use > or <; use -gt, -ge, -lt, -le. And if you want to do any form of mathematical comparison, it is advisable to use (( )).
(( linesStr > 29 )) && {
    # do stuff
}
You should also note that when using (( )) you don't need the bash comparison operators, nor the $ to get a variable's value.
There are no string or integer types to convert. The problem is that you're using the wrong comparison operator. For numeric comparison use if [ "$linesStr" -gt 29 ]. Read the CONDITIONAL EXPRESSIONS section of man bash for the available operators.
(( $(wc -l < log) > 29 )) && echo too long
