Using name resolution in tshark - linux

I wanted to enable name resolution in tshark; the equivalent in Wireshark would be View -> Name Resolution -> Resolve Network Addresses.
I ran tshark with the -N n flag, but the output only shows the IP addresses.
When I open the same file with Wireshark, it shows the names.
How can I make tshark show the names?
For example:
❯ tshark -r test_capture.pcap -N n -Y "ip.src_host contains DESKTOP" # This line does not return anything
❯ tshark -r test_capture.pcap -Y "ip.src_host contains DESKTOP" # neither does this
But when I open the file in Wireshark, the names are resolved in the output.
Just in case it's relevant, here are the two ways I tried to capture the packets:
❯ tshark -w test_capture.pcap -c 100
❯ tshark -N n -w test_capture.pcap -c 100
I want name resolution to be active when using tshark.

Related

wget recursion and file extraction

I'm trying to use wget to elegantly & politely download all the pdfs from a website. The pdfs live in various sub-directories under the starting URL. It appears that the -A pdf option is conflicting with the -r option. But I'm not a wget expert! This command:
wget -nd -np -r site/path
faithfully traverses the entire site downloading everything downstream of path (not polite!). This command:
wget -nd -np -r -A pdf site/path
finishes immediately having downloaded nothing. Running that same command in debug mode:
wget -nd -np -r -A pdf -d site/path
reveals that the sub-directories are ignored with the debug message:
Deciding whether to enqueue "https://site/path/subdir1". https://site/path/subdir1 (subdir1) does not match acc/rej rules. Decided NOT to load it.
I think this means that the sub directories did not satisfy the "pdf" filter and were excluded. Is there a way to get wget to recurse into sub directories (of random depth) and only download pdfs (into a single local dir)? Or does wget need to download everything and then I need to manually filter for pdfs afterward?
UPDATE: thanks to everyone for their ideas. The solution was to use a two-step approach including a modified version of this: http://mindspill.net/computing/linux-notes/generate-list-of-urls-using-wget/
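For reference, such a two-step approach might look roughly like this (an untested sketch in the spirit of the linked page; site/path is the placeholder used above):
# Step 1: spider the site without saving pages and collect the PDF URLs
wget --spider -r -np -nd https://site/path 2>&1 \
    | grep -o 'https://[^ ]*\.pdf' | sort -u > pdflinks.txt
# Step 2: download just those files into a single local directory
wget -P pdfs/ -i pdflinks.txt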
Try this:
The -l switch tells wget to go one level down from the primary URL. You can obviously change that to however many levels of links you want to follow.
wget -r -l1 -A.pdf http://www.example.com/page-with-pdfs.htm
Refer to man wget for more details.
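If the PDFs sit deeper than one level, the same idea with a larger depth and a download directory might look like this (untested, just combining flags already used in this thread):
# Follow links up to 3 levels deep, stay below the start URL,
# keep only PDFs, and store them flat in ./pdfs
wget -r -l3 -np -nd -A pdf -P pdfs/ http://www.example.com/page-with-pdfs.htm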
If the above doesn't work, try this:
First verify that the ToS of the web site permit crawling it. Then one solution is:
mech-dump --links 'http://example.com' |
grep pdf$ |
sed 's/\s+/%20/g' |
xargs -I% wget http://example.com/%
The mech-dump command comes with Perl's WWW::Mechanize module (the libwww-mechanize-perl package on Debian and Debian-like distros).
To install mech-dump:
sudo apt-get update -y
sudo apt-get install -y libwww-mechanize-shell-perl
GitHub repo: https://github.com/libwww-perl/WWW-Mechanize
I haven't tested this, but you can still give it a try. I think you still need to find a way to get all URLs of the website and pipe them into one of the solutions I have given.
You will need to have wget and lynx installed:
sudo apt-get install wget lynx
Prepare a script and name it however you want; for this example, pdflinkextractor:
#!/bin/bash
WEBSITE="$1"
echo "Getting link list..."
lynx -cache=0 -dump -listonly "$WEBSITE" | grep ".*\.pdf$" | awk '{print $2}' | tee pdflinks.txt
echo "Downloading..."
wget -P pdflinkextractor_files/ -i pdflinks.txt
To run the file:
chmod 700 pdflinkextractor
$ ./pdflinkextractor http://www.pdfscripting.com/public/Free-Sample-PDF-Files-with-scripts.cfm

Tmux link-pane with format variables

I am trying to link a window from another session by specifying the target session with a format variable. That way I hope to get it linked right next to the currently active window.
The hard coded version of the working command:
:link-window -a -s 1:remote -t 0:2
in which case I specify the target literally. When I try any of:
:link-window -a -s 1:remote -F -t "#{session_name}":"#{window_index}"
:link-window -a -s 1:remote -F "#{session_name}":"#{window_index}"
:link-window -a -s 1:remote -t "#{session_name}":"#{window_index}"
I get an error. The notable part here is that when I use the -F flag, the usage for the link-window command is displayed; and when I omit it and use only -t, the error is can't find window #{session_name}
Does it mean that link-window command simply doesn't support format variables?
-t does not support format variables, and link-window does not support -F. run-shell will expand formats, so you can do it by running, for example:
run "tmux linkw -t '#{session_name}'"

How to store/save the output from a command in Bash? [duplicate]

This question already has answers here:
How do I set a variable to the output of a command in Bash?
(15 answers)
Closed 6 years ago.
I want to save a command's output to a variable in a bash script. I've tried the possibilities I've found on this forum, but it doesn't work in my script.
I use the command cangen vcan0 -g 4 -I 7E -L 8 -D r -v to generate CAN data; -g, -I, -L, -D, -v are parameters that define how the CAN data have to be generated.
Normally I get the data printed in the terminal.
I want to store this output in a variable:
#!/bin/bash
#We have to generate a virtual CAN bus Interface
sudo modprobe vcan
sudo ip link add dev vcan0 type vcan
sudo ip link set up vcan0
candata= `(cangen vcan0 -g 0.008 -I 7E -L 8 -D r -v)`
echo $candata
and when I run my script, I do not obtain the output from my cangen command. Instead I get the output:
RTNETLINK answers: File exists
I do not have much experience with Linux and bash script programming. Can someone help me?
I think your script works; the message is printed by the ip link add command because the vcan0 device already exists. (Have you maybe already run the script several times?)
Anyhow, I would suggest writing:
candata=$(cangen vcan0 -g 0.008 -I 7E -L 8 -D r -v)
or
candata=`cangen vcan0 -g 0.008 -I 7E -L 8 -D r -v`
As you have written it, you open an extra sub-shell, which adds complexity for nothing.
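A minimal sketch of the whole script along those lines; note that cangen runs forever by default, so the command substitution would never return. This assumes your can-utils build supports -n to stop after a fixed number of frames:
#!/bin/bash
# Create the virtual CAN interface only if it does not exist yet;
# this avoids the "RTNETLINK answers: File exists" message on re-runs.
sudo modprobe vcan
ip link show vcan0 >/dev/null 2>&1 || sudo ip link add dev vcan0 type vcan
sudo ip link set up vcan0
# -n 10 stops cangen after 10 frames so the substitution can finish
candata=$(cangen vcan0 -g 0.008 -I 7E -L 8 -D r -v -n 10)
echo "$candata"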

Unable to run nw.js application in ubuntu

I created a node-webkit hello world application based on the tutorial given in this link, and I tried to run it in Ubuntu using the command given in this link. But when I run the command nw /home/myUsername/Documents/myNodeWebkitApps/helloWorld/myApp.nw it prints the following in the terminal:
usage
nw [udp] <options> <host> <port>
Default TCP protocol can be changed to UDP by ``udp'' argument.
UDP options
currently none
TCP options
-f firewall mode, connection is initiated by netread.
Host specification is ignored and can be omited.
-c ignored. Transmission checksum is activated by
default.
-C algorithm use the specified algorithm for checksum. This
option also implies -c.
Supported algorithms (the first is default):
md5 none
general options
-i <file> read data from file instead of stdin.
-b print speed in b/s instead of B/s
-h <n> print `#' after each n KiB transferred (def. 10485.76).
-H <n> print `#' after each n MiB transferred (def. 10.24).
-q be quiet.
-v be verbose.
-vv be very verbose.
-V show version.
-vV show verbose version.
return values
0 no errors.
1 some error occured.
2 checksum validation failed.
How can I run my app as described in the first link?
The output here did not come from nw.js but from netrw, which is installed on your machine. You can fix it by removing netrw or by correcting the path to nw.js.
Finally, I managed to run the hello world application with the help of this link and this Stack Overflow answer, as follows:
Install nw-builder with the command npm install nw-builder -g
If you get an error like /usr/bin/env: node: No such file or directory, then, as given in the second link above, create a symlink for node: ln -s /usr/bin/nodejs /usr/bin/node
Now we can run our application with the command nwbuild -r ~/Desktop/webkit-example
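Collected as a script, those steps are (paths are the ones from the links above; the symlink is only needed if the node error appears):
npm install -g nw-builder                  # install nw-builder globally
sudo ln -s /usr/bin/nodejs /usr/bin/node   # only if "node: No such file or directory"
nwbuild -r ~/Desktop/webkit-example        # run the app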

List of loaded iptables modules

Is there any convenient way to show the list of loaded iptables modules? I can see the installed modules by listing the /lib/iptables/ (or /lib64/iptables/) directory, but I need the list of active modules.
Loaded iptables modules can be found in the /proc/net/ip_tables_matches proc filesystem entry:
cat /proc/net/ip_tables_matches
In PHP I can access the loaded iptables modules by loading and exploding the file contents:
$content = file_get_contents('/proc/net/ip_tables_matches');
$modules = explode("\n", $content);
Of course this requires the proc filesystem to be mounted (most GNU/Linux distros mount it by default).
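The same file can be consumed from a bash script as well; a small sketch:
# Read the match list into an array, one module name per line
mapfile -t modules < /proc/net/ip_tables_matches
printf '%s\n' "${modules[@]}"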
This is a really old post, but here we go:
# lsmod | grep ip
shows a list of loaded modules, most of which I think are related to iptables...
/proc/net/ip_tables_matches doesn't show modules (at least not in RHEL 6)
Take a look in the following directory (replace per your kernel version):
ls /lib/modules/2.6.32-504.8.1.el6.x86_64/kernel/net/netfilter/
You can load the module using (dropping the .ko as listed in the directory):
modprobe nf_conntrack_ftp
Alternatively, you can ensure it's loaded at boot by adding it to:
/etc/sysconfig/iptables-config (RHEL/CENTOS)
IPTABLES_MODULES="nf_conntrack_ftp"
This seems to be poorly documented.
Try this for a fast overview of the netfilter modules present on your system; here is a one-liner for pasting:
for i in /lib/modules/$(uname -r)/kernel/net/netfilter/*; do echo -e "\e[33;1m$(basename "$i")\e[0m"; strings "$i" | \grep -e description -e depends| sed -e 's/Xtables: //g' -e 's/=/: /g' -e 's/depends=/depends on: /g'; echo; done
Again for readability, with added newlines:
#!/bin/bash
for i in /lib/modules/$(uname -r)/kernel/net/netfilter/*
do
echo -e "\e[33;1m$(basename "$i")\e[0m"
strings "$i" | \grep -e description -e depends | sed -e 's/Xtables: //g' -e 's/=/: /g' -e 's/depends=/depends on: /g'
echo
done
The filename will appear in yellow, from which you can guess whether the module in question exists. The description and dependencies are on the next two lines below it.
This will not cover everything (because that would be too easy, of course). Only looking up the modules manually, to see if they exist, gives you 100% accurate information.
iptables -m <match/module name> --help
If a module exists on your system, at the end of the help text you will get some info on how to use it:
ctr-014# iptables -m limit --help
iptables v1.4.14
Usage: iptables -[ACD] chain rule-specification [options]
iptables -I chain [rulenum] rule-specification [options]
...
[!] --version -V print package version.
limit match options:
--limit avg max average match rate: default 3/hour
[Packets per second unless followed by
/sec /minute /hour /day postfixes]
--limit-burst number number to match in a burst, default 5
ctr-014#
If the module is not present on your system:
ctr-014# iptables -m iplimit --help
iptables v1.4.14: Couldn't load match `iplimit':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
ctr-014#
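To check many matches at once, here is an untested sketch that combines this probe with the module directory listed earlier (file names may end in .ko or a compressed variant such as .ko.xz, hence the trim):
for f in /lib/modules/$(uname -r)/kernel/net/netfilter/xt_*; do
    m=$(basename "$f"); m=${m%%.ko*}; m=${m#xt_}   # xt_limit.ko -> limit
    if iptables -m "$m" --help >/dev/null 2>&1; then
        echo "match available: $m"
    fi
done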
As Gonio suggested, lsmod lists all loaded kernel modules, but grepping for "ip" won't give you all iptables modules.
I would rather use
lsmod|grep -E "nf_|xt_|ip"
and still, I'm not sure the list will be complete.
As an alternative method, this can also be done with a Python script.
First make sure you have the iptc library.
sudo pip install --upgrade python-iptables
(Assuming Python3 is your version)
import iptc

table = iptc.Table(iptc.Table.FILTER)
for chain in table.chains:
    print("------------------------------------------")
    print("Chain ", chain.name)
    for rule in chain.rules:
        print("Rule ", "proto:", rule.protocol, "src:", rule.src, "dst:", rule.dst,
              "in:", rule.in_interface, "out:", rule.out_interface)
        print("Matches:")
        for match in rule.matches:
            print(match.name)
        print("Target:")
        print(rule.target.name)
    print("------------------------------------------")
