How can I write an infinite loop that echoes numbers from 1 to infinity in bash?
I am using a for loop, but it gets killed by bash when I use a value larger than 100000000.
#!/bin/bash
for a in {1..100000000..1}
do
    echo "$a"
done
Is there any alternative?
Have you tried a while loop?
#!/bin/bash
num=0
while :
do
    num=$((num+1))
    echo "$num"
done
This should work in all POSIX shells:
i=0; while :; do echo "$((i+=1))"; done
: is interchangeable with the true builtin (which you can use instead): it is a no-op that always succeeds, i.e. returns 0.
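For example, the same loop spelled with true:
i=0; while true; do echo "$((i+=1))"; done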
If integer overflow bothers you, and you want arbitrary precision with standard tools:
nocontinuation(){ sed ':x; /\\$/ { N; s/\\\n//; tx }'; }
i=99999999999999999999999999999999999999999999999999999999999999999999;
while : ; do i=`echo "$i + 1" | bc | nocontinuation`; echo "$i"; done
This would be quite slow, because it spawns a new bc (and sed) process in each iteration. (The nocontinuation helper joins the backslash-continued output lines that bc emits once a number grows longer than 70 characters.)
To avoid that, you could reuse one bc instance and communicate with it over pipes:
#!/usr/bin/bash
set -e
nocontinuation(){ sed -u ':x; /\\$/ { N; s/\\\n//; tx }'; }  # -u: unbuffered, so the read below is not starved
trap 'rm -rf "$tmpdir"' exit
tmpdir=`mktemp -d`
cd "$tmpdir"
mkfifo p n                # p: expressions to bc, n: results from bc
<p bc | nocontinuation >n &
exec 3>p                  # fd 3: write expressions to bc
exec 4<n                  # fd 4: read results back
i=99999999999999999999999999999999999999999999999999999999999999999999
while : ; do
    echo "$i + 1" >&3
    read i <&4
    echo "$i"
done
Can't you just use while true?
a=0
while true
do
    a=$((a+1))
    # a=$[a+1] also works, but the $[ ] syntax is deprecated.
    echo "$a"
done
#!/bin/ksh
if [ -n "$1" ]
then
    if grep -w -- "$1" codelist.lst
    then
        true
    else
        echo "Value not Found"
    fi
else
    echo "Please enter a valid input"
fi
This is my script, and it works exactly how I want at the moment. I want to extend it so that passing more arguments gives me multiple outputs. How can I do that?
So, for example, if I run ./test.sh apple, it greps apple in codelist.lst and gives me the output: Apple
I want to run ./test.sh apple orange and get:
Apple
Orange
You can do that with shift and a loop, something like (works in both bash and ksh):
for ((i = $#; i > 0 ; i--)) ; do
    echo "Processing '$1'"
    shift
done
You'll notice I've also opted not to use the [[ -n "$1" ]] method, as that would terminate the loop early on an empty string (such as ./script.sh a b "" c stopping without doing c).
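A quick illustration of that early termination (a hypothetical session):
$ set -- a b "" c
$ while [[ -n $1 ]]; do echo "got '$1'"; shift; done
got 'a'
got 'b'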
To iterate over the positional parameters:
for pattern in "$@"; do
    grep -w -- "$pattern" codelist.lst || echo "'$pattern' not Found"
done
For a more advanced usage, which only invokes grep once, use the -f option with a shell process substitution:
grep -w -f <(printf '%s\n' "$@") codelist.lst
This is my code:
#!/bin/sh
echo "ARGUMENTS COUNT : " $#
echo "ARGUMENTS LIST : " $*
dictionary=`awk '{ print $1 }'`
function()
{
    for i in dictionary
    do
        for j in $*
        do
            if [ $j = $i ]
            then
                ;
            else
                append
            fi
        done
    done
}
append()
{
    ls $j > dictionary1.txt
}
function
I need to make a "dictionary" using unix shell functions. For example, I pass a word as an argument, say hello. Then my function checks whether that word already exists in the file dictionary1. If not, it appends the word to the file; if it already exists, it does nothing.
For some reason, my script does not work. When I start it, it just waits for something and that's it.
What am I doing wrong? How can I fix it?
An implementation that tries to care about both performance and correctness might look like:
#!/usr/bin/env bash
# ^^^^- NOT sh; sh does not support [[ ]] or <(...)
addWords() {
    local tempFile dictFile
    tempFile=$(mktemp dictFile.XXXXXX) || return
    dictFile=$1; shift
    [[ -e "$dictFile" ]] || touch "$dictFile" || return
    sort -um "$dictFile" <(printf '%s\n' "$@" | sort -u) >"$tempFile"
    mv -- "$tempFile" "$dictFile"
}
addWords myDict beta charlie delta alpha
addWords myDict charlie zulu
cat myDict
...has a final dictionary state of:
alpha
beta
charlie
delta
zulu
...and it rereads the input file only once for each addWords call (no matter how many words are being added!), not once per word to add.
Don't name a function "function".
Don't read in and walk through the whole file - all you need is to know if the word is there or not. grep does that.
ls lists files. You want to send a word to the file, not a filename. Use echo or printf.
sh isn't bash. Use bash unless there's a clear reason not to, and the only good reason is that it isn't available.
Try this:
#!/usr/bin/env bash
checkWord() {
    grep -qm 1 "$1" dictionary1.txt ||
        echo "$1" >> dictionary1.txt
}

for wd
do checkWord "$wd"
done
If that works, you can add more structure and error checking.
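For instance, a slightly hardened version might look like this - just a sketch, where the -Fx flags make grep match a whole fixed line instead of a substring or regex:

#!/usr/bin/env bash
dict=dictionary1.txt

checkWord() {
    [[ -n $1 ]] || return 0                     # skip empty arguments
    grep -qFx -- "$1" "$dict" ||
        printf '%s\n' "$1" >> "$dict"
}

(( $# )) || { echo "usage: $0 word..." >&2; exit 1; }
[[ -e $dict ]] || touch "$dict" || exit 1       # make sure the dictionary exists

for wd; do
    checkWord "$wd"
done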
You can remove your dictionary=awk ... line (as mentioned, it's blocking while waiting for input) and simply grep your dictionary file for each argument, something like the below:
for i in "$@"
do
    if ! grep -qow "$i" dictionary1.txt
    then
        echo "$i" >> dictionary1.txt
    fi
done
With any awk in any shell on any UNIX box:
awk -v words="$*" '
BEGIN {
    while ( (getline word < "dictionary1.txt") > 0 ) {
        dict[word]++
    }
    close("dictionary1.txt")
    split(words,tmp)
    for (i in tmp) {
        word = tmp[i]
        if ( !dict[word]++ ) {
            newWords = newWords word ORS
        }
    }
    printf "%s", newWords >> "dictionary1.txt"
    exit
}'
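For example, saved as addwords.sh (a hypothetical name) and run against a dictionary that already contains hello:
$ cat dictionary1.txt
hello
$ ./addwords.sh hello world world
$ cat dictionary1.txt
hello
world
Only world is appended, and only once, because !dict[word]++ also deduplicates the arguments themselves.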
I'm trying to make a bash script that counts the newlines in an input. The first if statement (switch $0) works fine, but the problem I'm having is getting it to read the WC of a file given as a command-line argument.
e.g.
~$ ./script.sh
1
2
3
4
(User presses CTRL+D)
display word count here # answer is 5 - works fine
e.g.
~$ ./script1.sh < script1.sh
WC here -(5)
(successfully redirects the stdin from a file)
but
e.g.
~$ ./script1.sh script1.sh script2.sh
WC displayed here for script1.sh
WC displayed here for script2.sh
NOTHING
~$
The problem, I believe, is the second if statement: instead of processing the files given on the command line, it goes into the if statement, waits for user input, and never gets to the echo statement.
Any help would be greatly appreciated, since I cannot figure out why it won't work without the < redirection operator.
#!/bin/bash
#!/bin/sh
read filename ## read provided filename
USAGE="Usage: $0 $1 $2..." ## switch statement
if [ "$#" == "0" ]; then
    declare -i lines=0 words=0 chars=0
    while IFS= read -r line; do
        ((lines++))
        array=($line)
        ((words += ${#array[@]}))
        ((chars += ${#line} + 1)) # add 1 for the newline
    done < /dev/stdin
fi
echo "$lines $words $chars $filename" ## filename doesn't print, just filler

### problem if statement ####
if [ "$#" != "0" ]; then # space between [ ] IS VERY IMPORTANT
    declare -i lines=0 words=0 chars=0
    while IFS= read -r line; do
        lines=$( grep -c '\n'<"filename") ## should use grep -c to compare only new lines in the filename. assign to variable line
        words=$( grep -c '\n'<"filename")
        chars=$( grep -c '\n'<"filename")
        echo "$lines $words $chars"
        # lets user press CTRL+D to end script and count the WC
fi
#!/bin/sh
set -e

if [ -t 0 ]; then
    # We are *not* reading stdin from a pipe or a redirection.
    # Get the counts from the files specified on the cmdline.
    if [ "$#" -eq 0 ]; then
        echo "no files specified" >&2
        exit 1
    fi
    cat "$@" | wc
else
    # stdin is attached to a pipe or redirected from a file
    wc
fi | { read lines words chars; echo "lines=$lines words=$words chars=$chars"; }
The variables from the read command only exist within the braces, due to the way the shell (some shells, anyway) uses subshells for the commands in a pipeline. Typically, the solution for that is to redirect from a process substitution (bash/ksh).
This can be squashed down to
#!/bin/bash
[[ -t 0 ]] && files=true || files=false
read lines words chars < <({ ! $files && cat || cat "$@"; } | wc)
echo "lines=$lines words=$words chars=$chars"
A very quick demo of cmd | read x versus read x < <(cmd):
$ x=foo; echo bar | read x; echo $x
foo
$ x=foo; read x < <(echo bar); echo $x
bar
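(In the first case, read runs in a subshell created for the pipeline, so its assignment to x is discarded when the subshell exits; in the second, read runs in the current shell and only echo bar runs in a subshell.)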
Use wc.
Maybe the simplest fix is to replace the second if block with a for.
$: cat tst
#!/usr/bin/env bash
declare -i lines=0 words=0 chars=0
case "$#" in
    0)  wc ;;
    *)  for file in $*
        do  read lines words chars x <<< "$( wc $file )"
            echo "$lines $words $chars $file"
        done ;;
esac
$: cat foo
hello
world
and
goodbye cruel world!
$: tst < foo
6 6 40
$: tst foo tst
6 6 40 foo
9 38 206 tst
I want to process all *.bin files inside a given directory. Initially I was working with a for loop:
var=0
for i in `ls *.bin`
do
    # perform computations on "$i"
    var+=1
done
echo $var
However, in some directories there are too many files resulting in an error: Argument list too long
Therefore, I was trying it with a piped while-loop:
var=0
ls *.bin | while read i;
do
    # perform computations on "$i"
    var+=1
done
echo $var
The problem now is that, because of the pipe, subshells are created; thus, echo $var returns 0.
How can I deal with this problem?
The original code:
#!/bin/bash

function entropyImpl {
    if [[ -n "$1" ]]
    then
        if [[ -e "$1" ]]
        then
            echo "scale = 4; $(gzip -c ${1} | wc -c) / $(cat ${1} | wc -c)" | bc
        else
            echo "file ($1) not found"
        fi
    else
        datafile="$(mktemp entropy.XXXXX)"
        cat - > "$datafile"
        entropy "$datafile"
        rm "$datafile"
    fi
    return 1
}

declare acc_entropy=0
declare count=0

ls *.bin | while read i ;
do
    echo "Computing $i" | tee -a entropy.txt
    curr_entropy=`entropyImpl $i`
    curr_entropy=`echo $curr_entropy | bc`
    echo -e "\tEntropy: $curr_entropy" | tee -a entropy.txt
    acc_entropy=`echo $acc_entropy + $curr_entropy | bc`
    let count+=1
done

echo "Out of function: $count | $acc_entropy"
acc_entropy=`echo "scale=4; $acc_entropy / $count" | bc`
echo -e "===================================================\n" | tee -a entropy.txt
echo -e "Accumulated Entropy:\t$acc_entropy ($count files processed)\n" | tee -a entropy.txt
The problem is that the while loop is part of a pipeline. In a bash pipeline, every element of the pipeline is executed in its own subshell [ref]. So after the while loop terminates, the while loop subshell's copy of var is discarded, and the original var of the parent (whose value is unchanged) is echoed.
One way to fix this is by using Process Substitution as shown below:
var=0
while read i;
do
    # perform computations on "$i"
    ((var++))
done < <(find . -maxdepth 1 -type f -name "*.bin")
Take a look at BashFAQ/024 for other workarounds.
Notice that I have also replaced ls with find because it is not good practice to parse ls.
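One of those workarounds, for bash 4.2 and later, is the lastpipe option, which runs the last element of a pipeline in the current shell; a minimal sketch:

#!/bin/bash
shopt -s lastpipe   # bash 4.2+; effective while job control is off, the default in scripts

var=0
find . -maxdepth 1 -type f -name '*.bin' | while read -r i; do
    # perform computations on "$i"
    ((var++))
done
echo "$var"   # the while loop ran in the current shell, so the count survives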
A POSIX-compliant solution would be to use a named pipe (a p-type file, created with mkfifo). This solution is portable and POSIX, but it does leave a file behind on disk.
mkfifo mypipe
find . -maxdepth 1 -type f -name "*.bin" > mypipe &
while read line
do
    # action
done < mypipe
rm mypipe
Your pipe is a file on your hard disk. If you want to avoid having useless files, do not forget to remove it.
While researching the generic issue of passing variables from a subshelled while loop to the parent, I found one solution missing here: a here-string. As here-strings are bash-ish and I preferred a POSIX solution, I noted that a here-string is really just a shortcut for a here-document. With that knowledge at hand, I came up with the following, which avoids the subshell and thus allows variables to be set in the loop.
#!/bin/sh
set -eu

passwd="username,password,uid,gid
root,admin,0,0
john,appleseed,1,1
jane,doe,2,2"

main()
{
    while IFS="," read -r _user _pass _uid _gid; do
        if [ "${_user}" = "${1:-}" ]; then
            password="${_pass}"
        fi
    done <<-EOT
${passwd}
EOT

    if [ -z "${password:-}" ]; then
        echo "No password found."
        exit 1
    fi

    echo "The password is '${password}'."
}

main "${@}"
exit 0
One important note for copy-pasters: the here-document is set up using the hyphen (<<-), which tells the shell to ignore leading tabs. That allows the body and the EOT terminator to be indented with tabs so the function layout stays nice. Tab characters don't always survive rendering or copy-paste (they tend to get replaced with spaces, which <<- does not strip), so the listing above shows the here-document flush left; written that way, it works with or without the hyphen.
This could be done with a for loop, too:
var=0
for file in `find . -maxdepth 1 -type f -name "*.bin"`; do
    # perform computations on "$file"
    ((var++))
done
echo $var
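Note that iterating over find output like this word-splits filenames on whitespace. A sketch of a variant that avoids both the subshell and the splitting, using a plain glob (bash, with nullglob so the loop body is skipped when no *.bin files exist):

shopt -s nullglob
var=0
for file in ./*.bin; do
    # perform computations on "$file"
    ((var++))
done
echo "$var"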
I've implemented a way to have concurrent jobs in bash, as seen here.
I'm looping through a file with around 13000 lines. I'm just testing and printing each line, as such:
#!/bin/bash

max_bg_procs(){
    if [[ $# -eq 0 ]] ; then
        echo "Usage: max_bg_procs NUM_PROCS. Will wait until the number of background (&)"
        echo "       bash processes (as determined by 'jobs -pr') falls below NUM_PROCS"
        return
    fi
    local max_number=$((0 + ${1:-0}))
    while true; do
        local current_number=$(jobs -pr | wc -l)
        if [[ $current_number -lt $max_number ]]; then
            echo "success in if"
            break
        fi
        echo "has to wait"
        sleep 4
    done
}

download_data(){
    echo "link #" $2 "["$1"]"
}

mapfile -t myArray < $1

i=1
for url in "${myArray[@]}"
do
    max_bg_procs 6
    download_data $url $i &
    ((i++))
done
echo "finito!"
I've also tried other solutions such as this and this, but my issue is persistent:
At a "random" given step, usually between the 2000th and the 5000th iteration, it simply gets stuck. I've put those various echo in the middle of the code to see where it would get stuck but it the last thing it prints is the $url $i.
I've done the simple test to remove any parallelism and just loop the file contents: all went fine and it looped till the end.
So it makes me think I'm missing some limitation on the parallelism, and I wonder if anyone could help me out figuring it out.
Many thanks!
Here, we have up to 6 parallel bash processes calling download_data, each of which is passed up to 16 URLs per invocation. Adjust per your own tuning.
Note that this expects both bash (for exported function support) and GNU xargs.
#!/usr/bin/env bash
# ^^^^- not /bin/sh
download_data() {
    echo "link #$2 [$1]" # TODO: replace this with a job that actually takes some time
}
export -f download_data
<input.txt xargs -d $'\n' -P 6 -n 16 -- bash -c 'for arg; do download_data "$arg"; done' _
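The trailing _ becomes $0 for the inline bash -c script, so the URLs land in the positional parameters that for arg iterates over; -d $'\n' makes xargs split its input on newlines only, and -P 6 -n 16 cap the number of parallel processes and the batch size per invocation.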
Using GNU Parallel, it looks like this:
cat input.txt | parallel echo link '\#{#} [{}]'
{#} = the job number
{} = the argument
It will spawn one process per CPU. If you instead want 6 in parallel, use -j:
cat input.txt | parallel -j6 echo link '\#{#} [{}]'
If you prefer running a function:
download_data(){
    echo "link #" $2 "["$1"]"
}
export -f download_data
cat input.txt | parallel -j6 download_data {} {#}