How can I loop through a file in bash and echo each line along with its line number?
I have this, which has everything but the line number:
while read p;
do
echo "$p" "$LINE";
done < file.txt
Thanks for your help!
Edit: this will be run multi-threaded using xargs, so I don't want to use a counter.
I would just use cat -n file
But if you really want to use a bash loop:
i=0
while read -r; do
  printf '%d %s\n' $(( ++i )) "$REPLY"
done < file
Update: I now prefer nl to cat -n, as the former is standard. To get the same result as cat -n, use nl -b a "$file".
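A quick sanity check that the two commands agree, using a small sample file at a hypothetical /tmp path (on GNU coreutils both use a tab-separated, width-6 number column):

```shell
# Sample file at a hypothetical path
printf 'alpha\nbeta\ngamma\n' > /tmp/nl_demo.txt

# On GNU coreutils, cat -n and nl -b a number every line identically
cat -n /tmp/nl_demo.txt
nl -b a /tmp/nl_demo.txt
```

Note that plain `nl` (without `-b a`) skips numbering blank lines, which is why `-b a` ("number all body lines") is needed to match `cat -n`.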
You can use awk:
awk '{print NR,$0}' file.txt
You can use awk:
awk '{print NR "\t" $0}' file.txt
Or you can keep a count of the line number in your loop:
count=0
while IFS= read -r line
do
  ((count+=1))
  printf "%5d: %s\n" "$count" "$line"
done
Using printf allows you to format the number, so the lines in the file are all lined up.
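For example, %5d right-justifies the number in a five-character field, so short and long numbers line up in the same column (a standalone sketch, not tied to any particular file):

```shell
# %5d pads the number to width 5 with spaces, so columns align
printf '%5d: %s\n' 7 "short line"      # prints "    7: short line"
printf '%5d: %s\n' 12345 "wide number" # prints "12345: wide number"
```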
Just for fun, bash v4
mapfile lines < file.txt
for idx in "${!lines[@]}"; do printf "%5d %s" $((idx+1)) "${lines[idx]}"; done
Don't do this.
I have the following code in Linux shell file:
I need to replace the "..." with the number of letters in the corresponding line.
#!/bin/sh
echo "Filename is: $1\n"
nr_lines=$(wc -l <$1)
echo "Number of lines in files is: $nr_lines\n"
for line in $(seq 1 $nr_lines);
do
echo "Line $line has ... letters"
done
The general pattern for iterating over all lines of a file is something like:
i=0; while IFS= read -r line; do
  printf "Line $((++i)) has %d letters\n" \
    "$(echo "$line" | tr -dc a-zA-Z | wc -c)"
done < input
but using while read ...; do ... done < input in a shell is often better done with awk:
awk '{gsub("[^a-zA-Z]", ""); printf "Line %d has %d letters\n", NR, length}' input
I have the following data in two files:
domains.txt contains:
http://example1.com
urls.txt contains:
http://example1.com/url-example/
http://example5.com/url-example/
http://example2.com/url-example/
Using the following command (I'm using this structure because usually there is more in the files and this is just a minimal example):
cat domains.txt | while read LINE; do grep -m 1 "$LINE" urls.txt; done
This will give me the matching line.
http://example1.com/url-example/
However, I would like the desired output to be:
http://example1.com,http://example1.com/url-example/
I would like to add a pipe that would prepend the "LINE" variable before the matched output. I was thinking something with sed should be easy? Your input is highly appreciated.
Update:
Although this is easy with awk, if someone has an answer that pipes the output, I would prefer to use that in the script.
while IFS= read -r line; do echo -n "$line,"; grep -m 1 "$line" urls.txt; done < domains.txt
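A self-contained run on the sample data from the question (recreated in hypothetical /tmp files): echo -n prints the domain and the comma without a trailing newline, so grep's output completes the same line.

```shell
# Recreate the sample files from the question at hypothetical /tmp paths
printf 'http://example1.com\n' > /tmp/domains.txt
printf 'http://example1.com/url-example/\nhttp://example5.com/url-example/\n' > /tmp/urls.txt

# echo -n leaves the cursor on the same line; grep's match completes it
while IFS= read -r line; do
  echo -n "$line,"
  grep -m 1 "$line" /tmp/urls.txt
done < /tmp/domains.txt
# http://example1.com,http://example1.com/url-example/
```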
This is very easy to do using awk:
while read LINE; do awk -v pattern="$LINE" '$0 ~ pattern { print pattern "," $0 }' urls.txt; done < domains.txt
Or using sed:
while read LINE; do sed -ne "s?$LINE.*?$LINE,&?p" urls.txt; done < domains.txt
To answer your follow-up question, to limit the result to the first match using awk:
while read LINE; do awk -v pattern="$LINE" '$0 ~ pattern { print pattern "," $0; exit }' urls.txt; done < domains.txt
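To see the effect of exit, here is a sketch on hypothetical sample data where two URLs match the pattern; awk stops after printing the first one:

```shell
# Two URLs match the pattern; 'exit' stops awk after the first match
printf 'http://a.com/x/\nhttp://a.com/y/\n' > /tmp/urls_first.txt

awk -v pattern="http://a.com" '$0 ~ pattern { print pattern "," $0; exit }' /tmp/urls_first.txt
# http://a.com,http://a.com/x/
```

Note that `$0 ~ pattern` treats the pattern as a regular expression, so characters like `.` in a domain match any character; for exact-prefix matching you would need index() instead.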
I want to append a different random number to the end of each line of a file. I have to repeat the process a few times, and each file contains about 20k lines with about 500k characters per line.
The only solution I came up with so far is
file="example.txt"
for lineIndex in $(seq 1 "$(wc -l < "${file}")")
do
  lineContent=$(sed "${lineIndex}q;d" "${file}")
  echo "${lineContent} $RANDOM" >> tmp.txt
done
mv tmp.txt "${file}"
Is there a faster solution?
You can do it much more simply, without reopening the input and output files and spawning new processes for every line, like this:
while IFS= read -r line
do
  echo "$line $RANDOM"
done < "$file" > tmp.txt
You could use awk:
awk '{ print $0, int(32768 * rand()) }' "$file" > tmp && \
mv tmp "$file"
Using awk:
awk -v seed=$RANDOM 'BEGIN{srand(seed)} {print $0, int(rand() * 10^5+1)}' file
If you have gnu awk then you can use inplace saving of file:
awk -i inplace -v seed=$RANDOM 'BEGIN{srand(seed)} {print $0, int(rand() * 10^5+1)}' file
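A quick way to verify that every line gets a number appended (a sketch with an arbitrary fixed seed; the actual numbers depend on the awk implementation, so only the shape of the output is checked):

```shell
printf 'a\nb\nc\n' > /tmp/rand_demo.txt

# Each output line is the original line plus one random integer;
# srand(seed) makes the sequence reproducible for a given awk build
awk -v seed=42 'BEGIN{srand(seed)} {print $0, int(rand() * 10^5 + 1)}' /tmp/rand_demo.txt
```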
Your script could be rewritten as:
file="example.txt"
cat "${file}" | while IFS= read -r line; do
  echo "${line} $RANDOM"
done > tmp.txt
mv tmp.txt "${file}"
I have a file called input.txt:
A 1 2
B 3 4
Each line of this file means A=1*2=2 and B=3*4=12...
So I want to output such calculation to a file output.txt:
A=2
B=12
And I want to use shell script calculate.sh to finish this task:
#!/bin/bash
while read name; do
$var1=$(echo $name | cut -f1)
$var2=$(echo $name | cut -f2)
$var3=$(echo $name | cut -f3)
echo $var1=(expr $var2 * $var3)
done
and I type:
cat input.txt | ./calculate.sh > output.txt
But my approach doesn't work. How can I get this done correctly?
I would use awk.
$ awk '{print $1"="$2*$3}' file
A=2
B=12
Use the output redirection operator to store the output in another file:
awk '{print $1"="$2*$3}' file > outfile
In BASH you can do:
while read -r a m n; do printf "%s=%d\n" $a $((m*n)); done < input.txt > output.txt
cat output.txt
A=2
B=12
calculate.sh:
#!/bin/bash
while read a b c; do
echo "$a=$((b*c))"
done
bash calculate.sh < input.txt outputs:
A=2
B=12
In bash, doing math requires double parentheses:
echo "$((3+4))"
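A few more arithmetic-expansion forms for reference (variable names here are just placeholders):

```shell
echo "$((3 + 4))"         # prints 7

x=6 y=7
echo "$((x * y))"         # prints 42; no $ needed on variables inside (( ))

(( x > 5 )) && echo "big" # (( )) by itself also works as an arithmetic test
```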
I have a file named abc.csv which contains these 6 lines:
xxx,one
yyy,two
zzz,all
aaa,one
bbb,two
ccc,all
Now, whenever "all" appears in a line, that line should be replaced by two lines, one with "one" and one with "two", like this:
xxx,one
yyy,two
zzz,one
zzz,two
aaa,one
bbb,two
ccc,one
ccc,two
Can someone show how to do this?
$ awk -F, -v OFS=, '/all/ { print $1, "one"; print $1, "two"; next }1' foo.input
xxx,one
yyy,two
zzz,one
zzz,two
aaa,one
bbb,two
ccc,one
ccc,two
If you want to stick to a shell-only solution:
while IFS= read -r line; do
  if [[ "${line}" = *all* ]]; then
    echo "${line%,*},one"
    echo "${line%,*},two"
  else
    echo "${line}"
  fi
done < foo.input
In sed:
sed '/,all$/{ s/,all$/,one/p; s/,one$/,two/; }'
When a line ends with ,all, the first substitution changes all to one and the p flag prints that version; the second substitution then changes one to two, and the automatic print outputs that line. Lines that don't match are simply auto-printed once.
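Running it on a subset of the sample data confirms the expansion (temporary file path used for illustration):

```shell
# Only the ",all" line is duplicated; other lines pass through once
printf 'xxx,one\nzzz,all\n' > /tmp/abc_demo.csv

sed '/,all$/{ s/,all$/,one/p; s/,one$/,two/; }' /tmp/abc_demo.csv
# xxx,one
# zzz,one
# zzz,two
```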