Is there any convenient way to show the list of loaded iptables modules? I can show the installed modules by listing the /lib/iptables/ (or /lib64/iptables/) directory, but I need the list of active modules.
Loaded iptables modules can be found in the /proc/net/ip_tables_matches proc filesystem entry.
cat /proc/net/ip_tables_matches
In PHP I can access the loaded iptables modules by reading and exploding the file contents:
$content = file_get_contents('/proc/net/ip_tables_matches');
$modules = array_filter(explode("\n", $content)); // drop the trailing empty entry
Of course this requires the proc filesystem to be mounted (most GNU/Linux distros mount it by default).
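If you also need the loaded target and table extensions, the kernel exposes those through sibling proc entries. A minimal sketch (entry names as found on typical kernels; what appears depends on which modules are loaded):
# match extensions (e.g. limit, state)
cat /proc/net/ip_tables_matches
# target extensions (e.g. REJECT, LOG)
cat /proc/net/ip_tables_targets
# tables currently registered (e.g. filter, nat, mangle)
cat /proc/net/ip_tables_names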
This is a really old post but here we go:
# lsmod | grep ip
shows a list of loaded modules, most of which I think are related to iptables...
/proc/net/ip_tables_matches doesn't show modules (at least not in RHEL 6)
Take a look in the following directory (replace per your kernel version):
ls /lib/modules/2.6.32-504.8.1.el6.x86_64/kernel/net/netfilter/
You can load a module using modprobe (dropping the .ko extension shown in the directory listing):
modprobe nf_conntrack_ftp
Alternatively, you can ensure it's loaded at boot by adding it to:
/etc/sysconfig/iptables-config (RHEL/CentOS):
IPTABLES_MODULES="nf_conntrack_ftp"
This seems to be poorly documented.
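A quick way to confirm that a module actually loaded (nf_conntrack_ftp is just the example module from above):
modprobe nf_conntrack_ftp
lsmod | grep nf_conntrack_ftp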
Try this for a fast overview of the netfilter modules present on your system; here is a one-liner for pasting:
for i in /lib/modules/$(uname -r)/kernel/net/netfilter/*; do echo -e "\e[33;1m$(basename "$i")\e[0m"; strings "$i" | \grep -e description -e depends| sed -e 's/Xtables: //g' -e 's/=/: /g' -e 's/depends=/depends on: /g'; echo; done
Again for readability, with added newlines:
#!/bin/bash
for i in /lib/modules/$(uname -r)/kernel/net/netfilter/*
do
    echo -e "\e[33;1m$(basename "$i")\e[0m"
    strings "$i" | \grep -e description -e depends | sed -e 's/Xtables: //g' -e 's/=/: /g' -e 's/depends=/depends on: /g'
    echo
done
The filename will appear in yellow, from which you can guess whether the module in question exists or not. The description and dependencies are on the next two lines below.
This will not cover everything (because that would be too easy, of course). Only looking up the modules manually, to see whether they exist, gives you 100% accurate information:
iptables -m <match/module name> --help
If a module exists on your system, at the end of the help text you will get some info on how to use it:
ctr-014# iptables -m limit --help
iptables v1.4.14
Usage: iptables -[ACD] chain rule-specification [options]
iptables -I chain [rulenum] rule-specification [options]
...
[!] --version -V print package version.
limit match options:
--limit avg max average match rate: default 3/hour
[Packets per second unless followed by
/sec /minute /hour /day postfixes]
--limit-burst number number to match in a burst, default 5
ctr-014#
If the module is not present on your system:
ctr-014# iptables -m iplimit --help
iptables v1.4.14: Couldn't load match `iplimit':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
ctr-014#
As Gonio has suggested, lsmod lists all loaded kernel modules, but grepping for "ip" won't give you all iptables modules.
I would rather use
lsmod | grep -E "nf_|xt_|ip"
and still, I'm not sure the list will be complete.
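One way to cross-check is to compare what the kernel itself reports with the loaded module list. A rough sketch (the name prefixes are a heuristic, not an exhaustive list):
# extensions the kernel currently knows about
cat /proc/net/ip_tables_matches
# loaded modules whose names suggest netfilter/iptables involvement
lsmod | awk '$1 ~ /^(nf_|xt_|ipt_|ip6t_|iptable_|ip6table_)/'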
As an alternative method, this can also be done with a Python script.
First make sure you have the iptc library:
sudo pip install --upgrade python-iptables
(assuming Python 3 is your version; use pip3 if needed)
import iptc

# Walk the filter table and print each chain's rules, their matches and targets
table = iptc.Table(iptc.Table.FILTER)
for chain in table.chains:
    print("------------------------------------------")
    print("Chain ", chain.name)
    for rule in chain.rules:
        print("Rule ", "proto:", rule.protocol, "src:", rule.src, "dst:", rule.dst, "in:", rule.in_interface, "out:", rule.out_interface)
        print("Matches:")
        for match in rule.matches:
            print(match.name)
        print("Target:")
        print(rule.target.name)
print("------------------------------------------")
Related
I wanted to enable name resolution in tshark, the equivalent in wireshark would be View->Name Resolution-> resolve network addresses.
I ran tshark with the -N n flag, and the output only shows the IP.
When opening the same file with wireshark it shows the names.
How can I make tshark show the names?
For example:
❯ tshark -r test_capture.pcap -N n -Y "ip.src_host contains DESKTOP"  # This line does not return anything
❯ tshark -r test_capture.pcap -Y "ip.src_host contains DESKTOP" # neither does this
But opening the same file with Wireshark, I get this output (a screenshot showing the resolved names).
Just in case it's relevant, I tried these two ways to capture the packets:
❯ tshark -w test_capture.pcap -c 100
❯ tshark -N n -w test_capture.pcap -c 100
I want to be able to have name resolution activated when using tshark.
I'm trying to use wget to elegantly & politely download all the pdfs from a website. The pdfs live in various sub-directories under the starting URL. It appears that the -A pdf option is conflicting with the -r option. But I'm not a wget expert! This command:
wget -nd -np -r site/path
faithfully traverses the entire site downloading everything downstream of path (not polite!). This command:
wget -nd -np -r -A pdf site/path
finishes immediately having downloaded nothing. Running that same command in debug mode:
wget -nd -np -r -A pdf -d site/path
reveals that the sub-directories are ignored with the debug message:
Deciding whether to enqueue "https://site/path/subdir1". https://site/path/subdir1 (subdir1) does not match acc/rej rules. Decided NOT to load it.
I think this means that the subdirectories did not satisfy the "pdf" filter and were excluded. Is there a way to get wget to recurse into subdirectories (of arbitrary depth) and only download PDFs (into a single local directory)? Or does wget need to download everything, leaving me to filter for the PDFs afterwards?
UPDATE: thanks to everyone for their ideas. The solution was to use a two-step approach, including a modified version of this: http://mindspill.net/computing/linux-notes/generate-list-of-urls-using-wget/
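For reference, a minimal sketch of that kind of two-step approach (URLs and filenames are placeholders, and the spider log format can vary between wget versions):
# step 1: crawl without downloading and log every URL wget considers
wget --spider -r -np -nd https://site/path -o spider.log
# step 2: extract the PDF URLs from the log and download only those
grep -o 'https\?://[^ ]*\.pdf' spider.log | sort -u > pdf-urls.txt
wget -nd -np -i pdf-urls.txt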
Try this:
The -l switch tells wget to go one level down from the primary URL specified. You can obviously change that to however many levels of links you want to follow.
wget -r -l1 -A.pdf http://www.example.com/page-with-pdfs.htm
Refer to man wget for more details.
If the above doesn't work, try this.
First verify that the terms of service of the website permit crawling it. Then one solution is:
mech-dump --links 'http://example.com' |
grep pdf$ |
sed -E 's/[[:space:]]+/%20/g' |
xargs -I% wget http://example.com/%
The mech-dump command comes with Perl's WWW::Mechanize module (the libwww-mechanize-perl package on Debian and Debian-like distros).
To install mech-dump:
sudo apt-get update -y
sudo apt-get install -y libwww-mechanize-shell-perl
GitHub repo: https://github.com/libwww-perl/WWW-Mechanize
I haven't tested this, but you can still give it a try. I think you still need to find a way to get all the URLs of the website and pipe them to one of the solutions I have given.
You will need to have wget and lynx installed:
sudo apt-get install wget lynx
Prepare a script; name it however you want. For this example, call it pdflinkextractor:
#!/bin/bash
WEBSITE="$1"
echo "Getting link list..."
lynx -cache=0 -dump -listonly "$WEBSITE" | grep ".*\.pdf$" | awk '{print $2}' | tee pdflinks.txt
echo "Downloading..."
wget -P pdflinkextractor_files/ -i pdflinks.txt
To run the file:
chmod 700 pdflinkextractor
./pdflinkextractor http://www.pdfscripting.com/public/Free-Sample-PDF-Files-with-scripts.cfm
I am trying to link a window from another session by specifying the target session using a format variable. That way I hope to get it always linked next to the currently active window.
The hard coded version of the working command:
:link-window -a -s 1:remote -t 0:2
in which case I specify the target pane literally. When I try any of:
:link-window -a -s 1:remote -F -t "#{session_name}":"#{window_index}"
:link-window -a -s 1:remote -F "#{session_name}":"#{window_index}"
:link-window -a -s 1:remote -t "#{session_name}":"#{window_index}"
I get an error. The notable part here is that when I do use the -F flag, the usage for the link-window command is displayed, and when I omit it and use only -t, the error is can't find window #{session_name}.
Does it mean that link-window command simply doesn't support format variables?
-t does not support format variables and link-window does not support -F. run-shell will expand formats, so you can do it by doing, for example:
run "tmux linkw -t '#{session_name}'"
I am currently having trouble running Linux perf, mostly because /proc/sys/kernel/kptr_restrict is currently set to 1.
However, if I try to change /proc/sys/kernel/kptr_restrict by echoing 0 to it as follows...
echo 0 > /proc/sys/kernel/kptr_restrict
I get a permission denied error. I don't think I can change permissions on it either.
Is there a way to set this directly somehow? I am a superuser. I don't think perf will function acceptably without this being set.
In your example, even when echo runs as root (via sudo), the output redirection is performed by your shell, which runs as you.
So please try this command:
sudo sh -c " echo 0 > /proc/sys/kernel/kptr_restrict"
All the files located in /proc/sys can only be modified by root (actually 99.9% of them; check with ls -l). Therefore you have to use sudo to modify those files (or your preferred way of executing commands as root).
The proper way to modify files under /proc/sys is to use the sysctl tool. Note that you should replace the slashes (/) with dots (.) and omit the /proc/sys/ prefix... read the fine manual.
Read the current value:
$ sysctl kernel.kptr_restrict
kernel.kptr_restrict = 1
Modify the value:
$ sudo sysctl -w kernel.kptr_restrict=0
kernel.kptr_restrict = 0
To make your modification persistent across reboots, edit /etc/sysctl.conf or create a file such as /etc/sysctl.d/50-mytest.conf (edit the file as root or using sudoedit), containing:
kernel.kptr_restrict=0
In that case you should execute this command to reload your configuration:
$ sudo sysctl -p /etc/sysctl.conf
P.S. It is possible to write directly to the virtual file. The command from cdyson37 (https://stackoverflow.com/users/321730/cdyson37) is quite elegant: echo 0 | sudo tee /proc/sys/kernel/kptr_restrict
I am writing a shell script. The tutorial that I am reading has the first line like this:
#!/usr/bin/env bash/
but it isn't working for me (error: no such file or directory).
How can I find out which bash I am using and where it is located?
I appreciate any advice and help.
Thanks a lot. It works now.
The solution is #!/usr/bin/env bash
Another problem: why can't it read the word 'restart'?
my code in the start.sh:
#!/usr/bin/env bash/
RESTART="apachectl restart"
$RESTART
It does not work. Instead I get:
Usage: /usr/local/apache2/bin/httpd [-D name] [-d directory] [-f file]
[-C "directive"] [-c "directive"]
[-k start|restart|graceful|graceful-stop|stop]
[-v] [-V] [-h] [-l] [-L] [-t] [-S]
Options:
-D name : define a name for use in <IfDefine name> directives
-d directory : specify an alternate initial ServerRoot
-f file : specify an alternate ServerConfigFile
-C "directive" : process directive before reading config files
-c "directive" : process directive after reading config files
-e level : show startup errors of level (see LogLevel)
-E file : log startup errors to file
-v : show version number
-V : show compile settings
-h : list available command line options (this page)
-l : list compiled in modules
-L : list available configuration directives
-t -D DUMP_VHOSTS : show parsed settings (currently only vhost settings)
-S : a synonym for -t -D DUMP_VHOSTS
-t -D DUMP_MODULES : show all loaded modules
-M : a synonym for -t -D DUMP_MODULES
-t : run syntax check for config files
Why is it like that? It seems that it can't read the word restart.
Thank you all! I have fixed it now.
Solution: edit the file on Unix (vim/nano or whatever, but not on Windows).
Thanks again :)
Yet another way: echo $SHELL (note that this prints your login shell, which is not necessarily bash).
If you remove the / from bash/, it should work.
You can try the following command
which bash
at a shell. Then put
#!<the output of which bash>
To find out where bash is, issue the command:
type bash
at your command prompt, and to make sure it is always found by your script use:
#!bash
This has the problem that some other bash may be found and used, which could be a security issue, but I have been doing this for years.
Remove the extra character(s) you have at the end of your lines. No slash is required, and dos2unix yourscript will remove the unwanted CRs. The first line should simply be:
#!/usr/bin/env bash
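A quick way to check whether the file really has Windows line endings (start.sh is the script name from the question):
file start.sh               # reports "with CRLF line terminators" for files saved on Windows
cat -A start.sh | head -3   # lines ending in ^M$ still carry carriage returns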
It would actually be better to open a new question for your restart problem.
Most probably you are not in the directory where the restart command is defined, or restart is not in your PATH. Try giving the whole path.