delete first line and 3 columns from txt file - linux

I have a file in the following format:
Received 4 packets, got 4 answers, remaining 252 packets
52:54:00:12:35:00 192.168.1.1
52:54:00:12:35:00 192.168.1.2
08:00:27:87:d3:08 192.168.1.3
08:00:27:3e:99:5c 192.168.1.23
I want to delete the first line and the MAC column from all the remaining lines. The output should be:
192.168.1.1
192.168.1.2
192.168.1.3
192.168.1.23
and the command should run in the background; I don't need the result shown in the terminal window.

awk 'NR != 1 {print $2}' file1
In action
$ cat file1
Received 4 packets, got 4 answers, remaining 252 packets
52:54:00:12:35:00 192.168.1.1
52:54:00:12:35:00 192.168.1.2
08:00:27:87:d3:08 192.168.1.3
08:00:27:3e:99:5c 192.168.1.23
$ awk 'NR != 1 {print $2}' file1
192.168.1.1
192.168.1.2
192.168.1.3
192.168.1.23
For silent output, you could direct the output to another file.
$ awk 'NR != 1 {print $2}' file1 > file2
$ cat file2
192.168.1.1
192.168.1.2
192.168.1.3
192.168.1.23
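To also satisfy the run-in-the-background part of the question, the redirected command can simply be backgrounded with `&`. A minimal sketch, assuming the same file1:

```shell
# Write the second column (skipping the header line) to file2,
# discard any error messages, and run the whole thing in the background
awk 'NR != 1 {print $2}' file1 > file2 2>/dev/null &
```

If a script later needs the result, `wait` blocks until the background job finishes.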

Related

Using a script to Organize a file

So I have two flat files in the following format:
File1.txt
Customer1 12345 12346 12347
Customer2 14444 14445
File2.txt
12345 aol.com
12347 gmail.com
12346 google.com
14444 yahoo.com
14445 outlook.com
I need to be able to translate the file into something like this:
Customer1 aol.com google.com gmail.com
Customer2 yahoo.com outlook.com
This is what I have so far
$ awk 'NR==FNR {a[$1]=$2; next} $2 in a {print $0, a[$2]}' OFS='\t' File2.txt File1.txt
However, this only looks at column 2 of File1.txt; how can I expand it to look at all columns in File1.txt?
awk can loop through fields. Try something like this -
$ awk 'NR==FNR {a[$1]=$2; next}
    {
      printf "%s ", $1
      for (i=2; i<=NF; i++) {
        printf "%s ", a[$i]
      }
      printf "\n"
    }' File2.txt File1.txt
Customer1 aol.com google.com gmail.com
Customer2 yahoo.com outlook.com
Alternatively, you could let awk read both files as one stream and separate the two-field lookup records (File2.txt) from the customer records (File1.txt) into two arrays, then join the arrays and print the output. You could also read the files twice and apply the same logic. I think this gives you a good start.
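The field loop above prints an empty string whenever an ID has no entry in File2.txt. A self-contained variant with a fallback (the `?:` fallback that keeps unknown IDs as-is is an addition, not part of the original answer):

```shell
# Build the id->domain map from File2.txt, then translate every field of File1.txt
awk 'NR==FNR {a[$1]=$2; next}
     {
       out = $1
       for (i = 2; i <= NF; i++)
         out = out " " (($i in a) ? a[$i] : $i)   # keep unknown ids as-is
       print out
     }' File2.txt File1.txt
```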

displaying 2 outputs as 2 separate columns | bash

I have two outputs from 2 commands:
comm1=`ip a | grep ens | grep -v lo | cut -d' ' -f2`
output example:
>eth1
and command two
comm2=`ip a | grep inet| grep -v inet6 | grep -v 127 | cut -d' ' -f6`
output example:
>123.156.789
234.167.290
148.193.198
138.25.49
142.137.154
125.175.166
246.173.7
154.167.67
Desired output:
echo "$comm1 $comm2"
> eth1 123.156.789
234.167.290
148.193.198
138.25.49
142.137.154
125.175.166
246.173.7
154.167.67
If those were single-line outputs, then column -t would work just fine,
echo "$comm1 $comm2" | column -t
but in this case, when one of the columns is multi-line, it does not work.
I'm looking for an efficient solution.
You can use the paste command with process substitution. Note that comm1 and comm2 are shell variables holding text, not commands, so echo their contents, e.g.:
$ paste <(echo "$comm1") <(echo "$comm2")
You might want the paste command.
$ seq 1 3 > a.txt
$ seq 5 10 > b.txt
$ paste a.txt b.txt
1 5
2 6
3 7
8
9
10
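Putting the two together for the question's variables, a sketch (assuming bash for process substitution; the sample values stand in for the real ip output):

```shell
comm1='eth1'
comm2='123.156.789
234.167.290'
# The variables hold text, not commands, so feed their contents
# to paste via printf and process substitution:
paste <(printf '%s\n' "$comm1") <(printf '%s\n' "$comm2")
# line 1 pairs eth1 with the first address; later lines carry only the address column
```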

print a file content side by side bash

I have a file with the contents below. I need to print its lines side by side.
hello
1223
man
2332
xyz
abc
Output desired:
hello 1223
man 2332
xyz abc
Is there any alternative to the paste command?
You can use this awk:
awk '{ORS = (NR%2 ? FS : RS)} 1' file
hello 1223
man 2332
xyz abc
This sets ORS (the output record separator) to the input field separator (FS) for odd-numbered lines; for even-numbered lines it is set to the input record separator (RS).
To get tabular data use column -t:
awk '{ORS = (NR%2 ? FS : RS)} 1' file | column -t
hello 1223
man 2332
xyz abc
awk/gawk solution:
$ gawk 'BEGIN{ OFS="\t"} { COL1=$1; getline; COL2=$1; print(COL1,COL2)}' file
hello 1223
man 2332
xyz abc
Bash solution (no paste command):
$ echo $(cat file) | while read col1 col2; do printf "%s\t%s\n" $col1 $col2; done
hello 1223
man 2332
xyz abc
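The echo $(cat file) trick relies on word splitting and printf's format recycling, which breaks if a line contains whitespace. A more robust pure-bash loop reads two lines per iteration straight from the file (a sketch; not from the original answers):

```shell
# No word splitting, no format-string recycling: pair line 1 with line 2,
# line 3 with line 4, and so on
while IFS= read -r a && IFS= read -r b; do
    printf '%s\t%s\n' "$a" "$b"
done < file
```

Note that an odd trailing line is silently dropped, since the second read fails before the body runs.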

Linux command to find similarity in two text files. If both have a matching string, push it to a text file

file1.txt
Message Id:4401811003996404
file2.txt
Message Id:4401811003996404 End Time
Message Id:4401811007868646 End Time
I tried to use cut to trim the End Time from file2.txt, but it cut digits too.
I also tried comm -12 file1.txt file2.txt; the output was blank.
I want output something like this
Message Id:4401811003996404
A simple example with two files, uno and due.
[root@srvrux33 ~]# cat uno
1 end
2 end
3 end
1 end
[root@srvrux33 ~]# cat due
1
3
[root@srvrux33 ~]#
You can achieve your result in the following way:
[root@srvrux33 ~]# for L in $(cat due); do grep "$L" uno | uniq | awk '{ print $1 }'; done
1
3
[root@srvrux33 ~]#
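Two notes on the question itself: comm requires both inputs to be sorted, which likely explains the blank output, and a fixed-string grep can do the matching in one step. A sketch (-o prints only the matched part; supported by GNU and BSD grep):

```shell
# -F: fixed strings, -f: read patterns from file1.txt, -o: print only the match
grep -oFf file1.txt file2.txt
```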

script to join 2 separate text files and also add specified text

Hi there, I want to create a bash script that does the following:
I have 2 text files: one is adf.ly links and the other is recipe names.
I want to create a bash script that takes each line from each text file and produces:
<li><a href="**adf.ly link line 1 of txt file**">**Recipe name line 1 of txt file**</a></li>
<li><a href="**adf.ly link line 2 of txt file**">**Recipe name line 2 of txt file**</a></li>
etc., and saves all the results to another text file called LINKS.txt.
Someone please help or point me in the direction of a Linux bash script.
this awk one-liner will do the job:
awk 'BEGIN{l="<li><a href=\"%s\">%s</a></li>\n"}NR==FNR{a[NR]=$0;next}{printf l, a[FNR],$0}' file1 file2
clearer version (same script):
awk 'BEGIN{l="<li><a href=\"%s\">%s</a></li>\n"}
NR==FNR{a[NR]=$0;next}
{printf l, a[FNR],$0}' file1 file2
example:
kent$ seq -f"%g from file1" 7 >file1
kent$ seq -f"%g from file2" 7 >file2
kent$ head file1 file2
==> file1 <==
1 from file1
2 from file1
3 from file1
4 from file1
5 from file1
6 from file1
7 from file1
==> file2 <==
1 from file2
2 from file2
3 from file2
4 from file2
5 from file2
6 from file2
7 from file2
kent$ awk 'BEGIN{l="<li><a href=\"%s\">%s</a></li>\n"};NR==FNR{a[NR]=$0;next}{printf l, a[FNR],$0}' file1 file2
<li><a href="1 from file1">1 from file2</a></li>
<li><a href="2 from file1">2 from file2</a></li>
<li><a href="3 from file1">3 from file2</a></li>
<li><a href="4 from file1">4 from file2</a></li>
<li><a href="5 from file1">5 from file2</a></li>
<li><a href="6 from file1">6 from file2</a></li>
<li><a href="7 from file1">7 from file2</a></li>
EDIT for the comment of OP:
if you have only one file (the foo here is just dummy link text):
awk 'BEGIN{l="<li><a href=\"%s\">foo</a></li>\n"}{printf l,$0}' file1
output from the same file1 example:
<li><a href="1 from file1">foo</a></li>
<li><a href="2 from file1">foo</a></li>
<li><a href="3 from file1">foo</a></li>
<li><a href="4 from file1">foo</a></li>
<li><a href="5 from file1">foo</a></li>
<li><a href="6 from file1">foo</a></li>
<li><a href="7 from file1">foo</a></li>
if you want to save the output to a file:
awk 'BEGIN{l="<li><a href=\"%s\">foo</a></li>\n"}{printf l,$0}' file1 > newfile
Try doing this:
$ cat file1
aaaa
bbb
ccc
$ cat file2
111
222
333
$ paste file1 file2 | while read a b; do
printf '<li><a href="%s">%s</a></li>\n' "$a" "$b"
done | tee newfile
Output
<li><a href="aaaa">111</a></li>
<li><a href="bbb">222</a></li>
<li><a href="ccc">333</a></li>
