I have a pcap file which contains many DNS requests and responses, and I want to find the maximum value of the TTL field across all of these packets. For example:
If my pcap packets are the following:
DNS response ttl 1045
DNS response ttl 202
DNS response ttl 45
DNS response ttl 162
DNS response ttl 398
I want to find out how to retrieve the value 1045, or even the packet itself.
This is all new to me, so please try to explain carefully.
Thanks in advance.
To find the maximum TTL among packets from your pcap file, you could add a new TTL column and sort by this column.
To do this, right-click on one of the column names (e.g., Source), go to Column Preferences..., click the + sign at the bottom of the new window, and complete the new row that appears with a title and dns.resp.ttl as the Fields option.
If you go back to the main Wireshark window, you should have a new column, which you can use to sort packets.
You can also accomplish this using command-line tools, which I find faster and simpler, and which, depending on your needs, can also be scripted. For example:
tshark -r file.pcap -Y dns.resp.ttl -T fields -e dns.resp.ttl -E aggregator=/s | sort -nr | head -1
This command:
Utilizes the Wireshark command-line companion capture tool tshark to read the given file, filtering only for those packets containing a dns.resp.ttl field and then writing only that field to stdout, which is then piped to sort
sort is then instructed to conduct a reverse numeric sort (so highest-to-lowest value instead of the default lowest-to-highest) and pipe that output to head
head -1 will then display only the 1st line of output (instead of the default 10 lines), which will be the largest value ... probably*.
Refer to the tshark man page for more details about the options I used, such as -Y and -e, and to the sort and head man pages for more details about those commands.
*You should know that it's possible for some DNS packets to contain more than one occurrence of the dns.resp.ttl field. With -E aggregator=/s, all occurrences in a packet are written on a single line, and sort only compares from the first one, so this command may not give you the largest overall value if the largest value sits in a later occurrence within such a packet. The same caveat applies to the Wireshark solution: when you sort the column from high to low, a row's position reflects only the first occurrence of the field, so the largest value may not necessarily come first.
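If you want to guard against that multi-occurrence caveat, one option (a minimal sketch, assuming a POSIX shell and the same file.pcap as above) is to split the space-aggregated values onto separate lines before sorting, so every occurrence is compared individually:
tshark -r file.pcap -Y dns.resp.ttl -T fields -e dns.resp.ttl -E aggregator=/s | tr ' ' '\n' | sort -nr | head -1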
I want to see just the total number of keys in the Azure Redis cache that match a given pattern. I tried the following command, but it shows the count only after displaying all the keys (which caused server load); I need only the count.
>SCAN 0 COUNT 10000000 MATCH "{UID}*"
Besides the command SCAN, the command KEYS pattern can return the same result as your current command SCAN 0 COUNT 10000000 MATCH "{UID}*".
However, for your real need, getting the number of keys matching a pattern, there is an issue ("add COUNT command") in the official Redis GitHub repo, which the author antirez answered as quoted below.
Hi, KEYS is only intended for debugging since it is O(N) and performs a full keyspace scan. COUNT has the same problem but without the excuse of being useful for debugging... (since you can simply use redis-cli keys ... | grep ...). So feature not accepted. Thanks for your interest.
So you cannot directly get the count of KEYS pattern, but there are some possible solutions for you.
1. Count the keys returned by the command KEYS pattern in your programming language when the number of keys matching the pattern is small, for example by doing redis-cli KEYS "{UID}*" | wc -l on the host server of Redis.
2. Use the command EVAL script numkeys key [key ...] arg [arg ...] to run a Lua script that counts the keys matching the pattern. There are two scripts you can try.
2.1. Script 1
return #redis.call("keys", "{UID}*")
2.2. Script 2
return table.getn(redis.call('keys', ARGV[1]))
The complete command in redis-cli is EVAL "return table.getn(redis.call('keys', ARGV[1]))" 0 {UID}*
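Since KEYS (and therefore both scripts above) scans the whole keyspace in one blocking call, a third option is to iterate SCAN client-side and count as you go. A minimal sketch, assuming a bash shell with redis-cli on the PATH (connection options omitted):
cursor=0 total=0
while :; do
  reply=$(redis-cli SCAN "$cursor" MATCH '{UID}*' COUNT 1000)
  cursor=$(printf '%s\n' "$reply" | head -n 1)             # first line is the next cursor
  keys=$(printf '%s\n' "$reply" | tail -n +2 | grep -c .)  # remaining lines are keys
  total=$((total + keys))
  [ "$cursor" = "0" ] && break                             # cursor 0 means the scan is complete
done
echo "$total"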
I have searched high and low for an answer to this, but I have been stuck for two days. I am attempting to read data into the Bro IDS from a file using:
Input::add_table([$source=sinkhole_list_location,
$name="sinkhole", $idx=Idx, $val=Val, $destination=sinkhole_list2, $mode=Input::REREAD]);
The file is formatted as stated by Bro documentation:
fields ip ipname
10.10.20.20 hi
8.8.8.8 hey
192.168.1.1 yo
Yet whenever I run this, or any of the other scripts out there, on my Bro IDS, I always get HEADERS ARE INCORRECT. What format should the file be in?
error: sinkhole_ip.dat/Input::READER_ASCII: Did not find requested field ip in input data file sinkhole_ip.dat.
1481713377.164791 error: sinkhole_ip.dat/Input::READER_ASCII: Init: cannot open sinkhole_ip.dat; headers are incorrect
I can answer my own question here: it's in the use of tab-separated files, which Bro uses by default. Every single field must be separated by a tab.
You can then output the table contents as a test within the Input::end_of_data event, since once this event has been received, all data from the input file is available in the table.
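To make the tab requirement concrete, here is a sketch of generating a correctly formatted input file from the shell (assumptions: the file name sinkhole_ip.dat from the error messages above; the header line starts with #fields as in the Bro input framework documentation, and every \t below is a literal tab):
printf '#fields\tip\tipname\n' > sinkhole_ip.dat
printf '10.10.20.20\thi\n' >> sinkhole_ip.dat
printf '8.8.8.8\they\n' >> sinkhole_ip.dat
printf '192.168.1.1\tyo\n' >> sinkhole_ip.dat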
Assume you have an unsorted file with the following content:
identifier,count=Number
identifier, extra information
identifier, extra information
...
I want to sort this file so that, for each id, the line with the count comes first, followed by the lines with extra info. I can only use the Unix sort command with the option -k1,1, but I am allowed to slightly change the lines to achieve this sort.
As an example, take
a,Count=1
a,giulio
aa,Count=44
aa,tango
aa,information
ee,Count=2
bb,que
f,Count=3
b,Count=23
bax,game
f,ee
c,Count=3
c,roma
b,italy
bax,Count=332
a,atlanta
bb,Count=78
c,Count=3
The output should be
a,Count=1
a,atlanta
a,giulio
aa,Count=44
aa,information
aa,tango
b,Count=23
b,italy
bax,Count=332
bax,game
bb,Count=78
bb,que
c,Count=3
c,roma
ee,Count=2
f,Count=3
f,ee
but I get:
aa,Count=44
aa,information
aa,tango
a,atlanta
a,Count=1
a,giulio
bax,Count=332
bax,game
bb,Count=78
bb,que
b,Count=23
b,italy
c,Count=3
c,Count=3
c,roma
ee,Count=2
f,Count=3
f,ee
I tried adding spaces at the end of the identifier and/or at the beginning of the count field, and other characters, but none of these approaches worked.
Any pointer on how to perform this sorting?
EDIT:
If you consider, for example, the products with an id starting with a, one of them has the info 'atlanta' and appears before Count (but I want Count to appear before any information). In addition, bb should come after b in the alphabetical order of the ids. To make my question clearer: how can I get the ids sorted in alphabetical order, such that for a given id the line with Count appears before the others? And how can I do this using sort -k1,1 (this is a group project I am working on and I am not free to change the sorting command), possibly with slight changes to the content (I tried, for example, adding a '~' to all the infos so that Count sorts before them)?
You need to tell sort that the comma is used as the field separator:
sort -t, -k1,1
For ASCII sorting, make sure LC_ALL=C is set and that LANG and LANGUAGE are unset.
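Putting it together (a sketch, assuming the sample data is in a file named input.txt): with the key limited to field 1, GNU sort falls back to a whole-line comparison to break ties between lines sharing an id, and in the C locale the uppercase C of Count= sorts before any lowercase info text, which yields exactly the desired output above:
LC_ALL=C sort -t, -k1,1 input.txt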
I have two files:
file1 has the format:
field1;field2;field3;field4
(file1 is initially unsorted)
file2 has the format:
field1
(file2 is sorted)
I run the 2 following commands:
sort -t\; -k1 file1 -o file1 # to sort file 1
join -t\; -1 1 -2 1 -o 1.1 1.2 1.3 1.4 file1 file2
I get the following message:
join: file1:27497: is not sorted: line_which_was_identified_as_out_of_order
Why is this happening?
(I also tried to sort file1 taking into consideration the entire line, not only the first field of the line, but with no success.)
sort -t\; -c file1 doesn't output anything. Around line 27497 the situation is indeed strange, which suggests that sort isn't doing its job correctly:
XYZ113017;...
line 27497--> XYZ11301;...
XYZ11301;...
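(A quick way to see the mismatch, sketched here: the -c check above validates whole-line order, which passes, but join needs the file ordered by field 1 only, and that stricter check does flag the line join complained about:)
sort -t\; -k1,1 -c file1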
To complement Wumpus Q. Wumbley's helpful answer with a broader perspective (since I found this post researching a slightly different problem).
When using join, the input files must be sorted by the join field ONLY, otherwise you may see the warning reported by the OP.
There are two common scenarios in which more than the field of interest is mistakenly included when sorting the input files:
If you do specify a field, it's easy to forget that you must also specify a stop field - even if you target only 1 field - because sort uses the remainder of the line if only a start field is specified; e.g.:
sort -t, -k1 ... # !! FROM field 1 THROUGH THE REST OF THE LINE
sort -t, -k1,1 ... # Field 1 only
If your sort field is the FIRST field in the input, it's tempting to not specify any field selector at all.
However, if field values can be prefix substrings of each other, sorting whole lines will NOT (necessarily) result in the same sort order as just sorting by the 1st field:
sort ... # NOT always the same as 'sort -k1,1'! see below for example
Pitfall example:
#!/usr/bin/env bash
# Input data: fields separated by '^'.
# Note that, when properly sorting by field 1, the order should
# be "nameA" before "nameAA" (followed by "nameZ").
# Note how "nameA" is a substring of "nameAA".
read -r -d '' input <<EOF
nameA^other1
nameAA^other2
nameZ^other3
EOF
# NOTE: "WRONG" below refers to deviation from the expected outcome
# of sorting by field 1 only, based on mistaken assumptions.
# The commands do work correctly in a technical sense.
echo '--- just sort'
sort <<<"$input" | head -1 # WRONG: 'nameAA' comes first
echo '--- sort FROM field 1'
sort -t^ -k1 <<<"$input" | head -1 # WRONG: 'nameAA' comes first
echo '--- sort with field 1 ONLY'
sort -t^ -k1,1 <<<"$input" | head -1 # ok, 'nameA' comes first
Explanation:
When NOT limiting sorting to the first field, it is the relative sort order of the characters ^ and A (at column index 6) that matters in this example. In other words, the field separator is compared to data, which is the source of the problem: ^ has a HIGHER ASCII value than A and therefore sorts after it, so the line starting with nameAA^ sorts BEFORE the one starting with nameA^.
Note: It is possible for problems to surface on one platform, but be masked on another, based on locale and character-set settings and/or the sort implementation used; e.g., with a locale of en_US.UTF-8 in effect, with , as the separator and - permissible inside fields:
sort as used on OSX 10.10.2 (which is an old GNU sort version, 5.93) sorts , before - (in line with ASCII values)
sort as used on Ubuntu 14.04 (GNU sort 8.21) does the opposite: sorts - before ,[1]
[1] I don't know why - if somebody knows, please tell me. Test with sort <<<$'-\n,'
sort -k1 uses all fields starting from field 1 as the key. You need to specify a stop field.
sort -t\; -k1,1
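Applied to the commands in the question (a sketch; join's -o list is written in the single-argument comma form here):
sort -t\; -k1,1 file1 -o file1
join -t\; -1 1 -2 1 -o 1.1,1.2,1.3,1.4 file1 file2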
... or GNU sort is just as buggy as every other GNU command.
Try to sort Gi1/0/11 vs Gi1/0/1 and you may never get a plain textual sort suitable for join input, because extra intelligence was added to sort that happily applies numeric or human-numeric ordering automagically in such cases, without so much as a flag to force the plain behavior.
What is suitable for humans is seldom suitable for scripting.
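For what it's worth, what is usually at play here is locale-dependent collation rather than hidden numeric detection; consistent with the locale discussion above, forcing the C locale restores plain byte-wise order (a quick sketch):
printf 'Gi1/0/11\nGi1/0/1\n' | sort           # order can vary with the active locale
printf 'Gi1/0/11\nGi1/0/1\n' | LC_ALL=C sort  # plain byte order: Gi1/0/1 first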
I am trying to write a script that will take in a BIND zone file and grab all of the A records in the format host ip. I've done that with grep -w 'A' "$A_ZONE" | awk '{print $1,$4}' | sort -V, to skip the IN A part. Now I need to extract the PTR records from all of the reverse zones that I have. Those are grouped by /24 subnets, so a PTR record for 10.0.0.1 would be in the 0.0.10.in-addr.arpa.zone file, as 10 IN PTR host.domain.tld. Seeing as that is a bit convoluted, I'm not sure how to extract the IP cleanly so that it ends up in the format of the first file I extracted, host ip.
Any suggestions?
You can use the following command:
egrep '^[0-9]+' 0.0.10.in-addr.arpa.zone | \
perl -p -e 's/^(\d+).*\s(\S+)\s*$/$2 10.0.0.$1/'
Output:
host.domain.tld. 10.0.0.10
It greps all the records that start with a number, captures the number and the hostname, and swaps their order; the IP address is constructed by appending the captured number to the subnet prefix.
Note that in the command I showed, the subnet is hardcoded in the regex, but you could apply a similar strategy to extract it from the zone's filename and plug it into the regex, as sketched below.
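A sketch of that filename-based approach (assumptions: bash, reverse zone files named like 0.0.10.in-addr.arpa.zone as in the question; the subnet is passed to perl via the environment to avoid quoting issues):
for f in *.in-addr.arpa.zone; do
  # "0.0.10" -> "10.0.0": reverse the three filename labels
  subnet=$(basename "$f" .in-addr.arpa.zone | awk -F. '{print $3"."$2"."$1}')
  egrep '^[0-9]+' "$f" | subnet="$subnet" perl -p -e 's/^(\d+).*\s(\S+)\s*$/$2 $ENV{subnet}.$1/'
done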
You may also want to consider running your zone files through named-compilezone so as to make sure that they are in a canonical format suitable for scripting.