Displaying 2 outputs as 2 separate columns | bash - linux

I have two outputs from 2 commands:
comm1=`ip a | grep ens | grep -v lo | cut -d' ' -f2`
output example:
>eth1
and command two
comm2=`ip a | grep inet| grep -v inet6 | grep -v 127 | cut -d' ' -f6`
output example:
>123.156.789
234.167.290
148.193.198
138.25.49
142.137.154
125.175.166
246.173.7
154.167.67
Desired output:
echo "$comm1 $comm2"
> eth1 123.156.789
234.167.290
148.193.198
138.25.49
142.137.154
125.175.166
246.173.7
154.167.67
If these were single-line outputs, then column -t works just fine:
echo "$comm1 $comm2" | column -t
but in this case, when one of the columns is multi-line, it does not work.
Looking for an efficient solution.

You can use the paste command with process substitution for this (note that comm1 and comm2 are variables, not commands, so they have to be echoed):
$ paste <(echo "$comm1") <(echo "$comm2")

You might want the paste command.
$ seq 1 3 > a.txt
$ seq 5 10 > b.txt
$ paste a.txt b.txt
1	5
2	6
3	7
	8
	9
	10
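Applied to the original question, a minimal sketch (the interface name and addresses are made-up stand-ins for the question's data):

```shell
# Recreate the two command outputs as variables (made-up values)
comm1='eth1'
comm2='123.156.789
234.167.290
148.193.198'

# paste pairs the two streams line by line; cells missing from the
# shorter column are left empty (columns are tab-separated)
paste <(printf '%s\n' "$comm1") <(printf '%s\n' "$comm2")
```

The first line reads "eth1" next to the first address; the remaining addresses line up under the second column.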

Related

Grepping inside a log with a threshold on the result

I have certain tags stored in a series of .log files and I would like grep to show me only the counts greater than 31, meaning different from 0 and higher than 31.
I have this code:
#! /bin/bash
find … -exec grep -cFH "tag" {} \; | grep -v ':[0-31]$' >> file.txt
cat file.txt
Output:
I have this result from the grep:
/opt/logs/folder2.log:31
I was expecting it to return nothing when the count is 31 or less, but it still shows the result 31.
I have also tried adding:
|tail -n+31
but that didn't work.
[0-31] is a character class: it means "0 or 1 or 2 or 3, or 1", i.e. it matches a single digit from 0 to 3, not the numbers 0 through 31.
To drop all lines with 0-9, 10-19, 20-29, 30, and 31, you could use the following:
... | grep -ve ':[0-9]$' -e ':[12][0-9]$' -e ':3[01]$'
or as single regex:
... | grep -v ':\([12]\?[0-9]\|3[01]\)$'
With extended grep:
... | grep -vE ':([12]?[0-9]|3[01])$'
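As a quick sanity check, a sketch with fabricated grep -cFH output lines (the paths and counts are made up):

```shell
# Fabricated sample counts: 5 and 31 should be dropped, 32 kept
printf '%s\n' '/opt/logs/a.log:5' '/opt/logs/b.log:31' '/opt/logs/c.log:32' |
  grep -vE ':([12]?[0-9]|3[01])$'
```

Only the line whose count is above 31 survives the filter.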

How to sort files by word count in linux?

I have to sort the files in the current directory by word count, necessarily using the wc command and a pipe |.
What command do I have to use?
I thought I had to use sort | wc -w, but I failed.
I think this can help.
ls -1 | xargs wc -w | sort
The ls -1 will list all files of the current directory, and then pass them to xargs, which uses the output of the previous command as arguments to wc -w. Finally we pipe the result to the sort command to order the files by the number of words each file contains.
You can learn more about xargs here.
The output:
[amirreza@localhost test]$ ls -1
four_words
three_words
two_words
[amirreza@localhost test]$ ls -1 | xargs wc -w
4 four_words
3 three_words
2 two_words
9 total
[amirreza@localhost test]$ ls -1 | xargs wc -w | sort
2 two_words
3 three_words
4 four_words
9 total
Edit
I just figured out that my answer was not correct, because sort by default sorts lexicographically (character by character), so the result of sorting 2, 10, 3 will be:
10, 2, 3
because it only compares the first character of 10, which is 1, so it sorts before 2 and 3.
To fix it we should use numerical sort, by using n flag. Here's how it works:
[amirreza@localhost test]$ ls -1 | xargs wc -w | sort
10 ten_words
19 total
2 two_words
3 three_words
4 four_words
[amirreza@localhost test]$ ls -1 | xargs wc -w | sort -n
2 two_words
3 three_words
4 four_words
10 ten_words
19 total
And just to make the output cleaner, we can remove the total line and show just the file names.
[amirreza@localhost test]$ ls -1 | xargs wc -w | sort -n | awk '{print $2}' | head -n -1
zero_word
two_words
three_words
four_words
ten_words
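As a side note, ls | xargs splits on whitespace, so filenames containing spaces break this pipeline. A sketch of a more robust variant using find instead of ls (the file names and contents here are made up):

```shell
# Scratch directory with sample files; one name contains a space
dir=$(mktemp -d)
printf 'one two three\n' > "$dir/three words.txt"
printf 'one\n'           > "$dir/one_word.txt"

# find passes each file name to wc as a separate argument,
# so embedded spaces are preserved
find "$dir" -type f -exec wc -w {} + | sort -n

rm -r "$dir"
```

The word counts come out in ascending order (1, 3, then the 4-word total line), with the spaced filename intact.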

Obtaining the number of matches of multiple patterns using the grep command

I have a file on Linux containing these strings:
CALLTMA
Starting
Starting
Ending
Starting
Ending
Ending
CALLTMA
Ending
I need the count of each string (e.g. #Ending, #Starting, #CALLTMA). In my example I need to obtain:
CALLTMA : 2
Starting: 3
Ending : 4
I can obtain this output when I execute 3 commands:
grep -i "Starting" "/myfile.txt" | wc -l
grep -i "Ending" "/myfile.txt" | wc -l
grep -i "CALLTMA" "/myfile.txt" | wc -l
I want to know if it is possible to obtain the same output using only one command.
I tried running this command:
grep -iE "CALLTMA|Starting|Ending" "/myfile.txt" | wc -l
but it returned the total count of all matches. I appreciate your help.
Use sort and uniq:
sort myfile.txt | uniq -c
The -c adds the counts to the unique lines. If you want to sort the output by frequency, add
| sort -n
to the end (and change to -nr if you want the descending order).
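A minimal sketch of the whole pipeline, feeding the question's data straight from printf instead of a file:

```shell
# The nine lines from the question, counted per unique value,
# then sorted by frequency in descending order
printf '%s\n' CALLTMA Starting Starting Ending Starting \
              Ending Ending CALLTMA Ending |
  sort | uniq -c | sort -nr
```

This prints Ending (4), Starting (3), and CALLTMA (2), with uniq -c's count padding in front of each line.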
A simple awk way to handle this:
awk '{counts[$1]++} END{for (c in counts) print c, counts[c]}' file
Starting 3
Ending 4
CALLTMA 2
grep -c will work. You can put it all together in a short script:
for i in Starting CALLTMA Ending; do
printf "%-8s : %d\n" "$i" $(grep -c "$i" file.txt)
done
(to enter the search terms as arguments, just use the arguments array for the loop list, e.g. for i in "$@"; do)
Output
Starting : 3
CALLTMA : 2
Ending : 4

How to sort a specific column in bash (Linux) without modifying other columns and without creating a temporary file?

Input file
0 1.0069770730517629 A
1 1.0068122761874614 A
2 1.0004297763706849 B
3 1.0069220626905635 C
4 1.0079998216945956 C
5 1.0006092898635817 D
6 1.0071274842017928 A
7 1.0083750686808803 A
8 1.0006868227863552 B
9 1.0073693844413083 C
10 1.0086546525825624 C
11 1.0007234442925264 D
Expected output:
0 1.0086546525825624 A
1 1.0083750686808803 A
2 1.0079998216945956 B
3 1.0073693844413083 C
4 1.0071274842017928 C
5 1.0069770730517629 D
6 1.0069220626905635 A
7 1.0068122761874614 A
8 1.0007234442925264 B
9 1.0006868227863552 C
10 1.0006092898635817 C
11 1.0004297763706849 D
My solution using a temporary file:
awk '{print $2}' input.txt | sort -gr > temp.txt
paste input.txt temp.txt | awk '{print $1,$4,$3}'
rm temp.txt
Question
Is it possible to sort a specific column in bash (Linux) without modifying the other columns and without creating a temporary file?
You can use - as a filename argument to paste to tell it to use standard input.
cut -d' ' -f2 input.txt | sort -gr | paste input.txt - | awk '{print $1,$4,$3}'
(cut cannot reorder fields, and paste emits tab-delimited output, so awk handles the final column selection)
And if it didn't support this, you could use process substitution.
paste input.txt <(cut -d' ' -f2 input.txt | sort -gr) | awk '{print $1,$4,$3}'
awk to the rescue
awk '{c1[NR]=$1; c2[NR]=$2; c3[NR]=$3} END {asort(c2); for(i=1;i<=NR;i++) print c1[i],c2[NR+1-i],c3[i]}' input.txt
(note: asort requires GNU awk)
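A portable end-to-end sketch of the paste-based approach, using a made-up three-line input file in place of the question's data:

```shell
# Made-up input: index, value, label
tmp=$(mktemp)
printf '%s\n' '0 1.2 A' '1 3.4 B' '2 2.3 C' > "$tmp"

# Sort column 2 in descending order and splice it back in,
# leaving columns 1 and 3 in their original rows
paste "$tmp" <(awk '{print $2}' "$tmp" | sort -gr) | awk '{print $1, $4, $3}'

rm "$tmp"
```

Column 2 comes out as 3.4, 2.3, 1.2 while the indices and labels stay put.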

wc -l is NOT counting last of the file if it does not have end of line character

I need to count all the lines of a Unix file. The file has 3 lines, but wc -l reports only 2.
I understand that it is not counting the last line because it does not end with a newline character.
Could anyone please tell me how to count that line as well?
grep -c returns the number of matching lines. Just use an empty string "" as your matching expression:
$ echo -n $'a\nb\nc' > 2or3.txt
$ cat 2or3.txt | wc -l
2
$ grep -c "" 2or3.txt
3
It is better to have all lines ending with EOL \n in Unix files. You can do:
{ cat file; echo ''; } | wc -l
Or this awk:
awk 'END{print NR}' file
This approach will give the correct line count regardless of whether the last line in the file ends with a newline or not.
awk will make sure that, in its output, each line it prints ends with a newline character. Thus, to be sure each line ends in a newline before sending it to wc, use:
awk '1' file | wc -l
Here, we use the trivial awk program that consists solely of the number 1. awk interprets this cryptic statement to mean "print the line" which it does, being assured that a trailing newline is present.
Examples
Let us create a file with three lines, each ending with a newline, and count the lines:
$ echo -n $'a\nb\nc\n' >file
$ awk '1' file | wc -l
3
The correct number is found.
Now, let's try again with the last new line missing:
$ echo -n $'a\nb\nc' >file
$ awk '1' file | wc -l
3
This still provides the right number. awk automatically corrects for a missing newline but leaves the file alone if the last newline is present.
Respect
I respect the answer from John1024 and would like to expand upon it.
Line Count function
I find myself comparing line counts a lot, especially from the clipboard, so I have defined a bash function. I'd like to modify it to show the filenames and, when passed more than one file, a total. However, it hasn't been important enough for me to do so far.
# semicolons used because this is condensed to 1 line in my ~/.bash_profile
function wcl(){
    if [[ -z "${1:-}" ]]; then
        set -- /dev/stdin "$@";
    fi;
    for f in "$@"; do
        awk 1 "$f" | wc -l;
    done;
}
Counting lines without the function
# Line count of the file
$ cat file_with_newline | wc -l
3
# Line count of the file
$ cat file_without_newline | wc -l
2
# Line count of the file unchanged by cat
$ cat file_without_newline | cat | wc -l
2
# Line count of the file changed by awk
$ cat file_without_newline | awk 1 | wc -l
3
# Line count of the file changed by only the first call to awk
$ cat file_without_newline | awk 1 | awk 1 | awk 1 | wc -l
3
# Line count of the file unchanged by awk because it ends with a newline character
$ cat file_with_newline | awk 1 | awk 1 | awk 1 | wc -l
3
Counting characters (why you don't want to put a wrapper around wc)
# Character count of the file
$ cat file_with_newline | wc -c
6
# Character count of the file unchanged by awk because it ends with a newline character
$ cat file_with_newline | awk 1 | awk 1 | awk 1 | wc -c
6
# Character count of the file
$ cat file_without_newline | wc -c
5
# Character count of the file changed by awk
$ cat file_without_newline | awk 1 | wc -c
6
Counting lines with the function
# Line count function used on stdin
$ cat file_with_newline | wcl
3
# Line count function used on stdin
$ cat file_without_newline | wcl
3
# Line count function used on filenames passed as arguments
$ wcl file_without_newline file_with_newline
3
3
