How can I use iptables as a per-user whitelist web filter on Linux? - linux

I'm trying to use iptables to create a web filter that whitelists a list of websites and blacklists everything else on a per-user basis. So one user would have full web access while another would be restricted only to the whitelist.
I am able to block all outgoing web traffic on a per-user basis, but I cannot seem to whitelist certain websites. My current filter table is set up as:
Chain INPUT (policy ACCEPT 778 packets, 95768 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 777 packets, 95647 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * * 176.32.98.166 0.0.0.0/0 owner UID match 1000
0 0 ACCEPT all -- * * 176.32.103.205 0.0.0.0/0 owner UID match 1000
0 0 ACCEPT all -- * * 205.251.242.103 0.0.0.0/0 owner UID match 1000
677 73766 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 owner UID match 1000 reject-with icmp-port-unreachable
This was created with the following commands:
sudo iptables -A OUTPUT -s amazon.com -m owner --uid-owner <USERNAME> -j ACCEPT
sudo iptables -A OUTPUT -m owner --uid-owner <USERNAME> -j REJECT
My understanding was that iptables would use the first rule that matches a packet, but that does not seem to be the case here. All web traffic is being blocked for the user while being allowed for all other users. Is there another way to set this up?
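A likely cause, though it is an inference from the listing above rather than something the output proves: on the OUTPUT chain the source address of every packet is the local machine itself, so a -s amazon.com match (which resolved to the 176.32.x.x/205.251.x.x addresses shown as sources) can never fire. Matching the remote site requires -d, the destination. A hedged sketch:

```shell
# Match the remote site as the DESTINATION of outgoing packets.
# iptables resolves "amazon.com" to its current A records once, at the
# moment the rule is inserted; the rule goes stale if those IPs change.
sudo iptables -A OUTPUT -d amazon.com -m owner --uid-owner <USERNAME> -j ACCEPT
sudo iptables -A OUTPUT -m owner --uid-owner <USERNAME> -j REJECT
```

For CDN-hosted sites whose addresses rotate frequently, a filtering proxy (e.g. Squid with per-user ACLs) tends to be more robust than IP-level rules.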

Related

Incoming traffic is not forwarded to correct docker container

There is incoming traffic on port 1111/UDP from Server-A to Server-B. Server-B has multiple containers up and running, and one of them (udp-listener) is listening on port 1111/udp with IP 172.17.0.2. The issue is:
Stop the container "udp-listener" with IP 172.17.0.2
Start a new container, such as Nginx, which now gets the IP 172.17.0.2
Start "udp-listener" again, which gets the next available IP, 172.17.0.3
Now the incoming traffic from Server-A is still trying to reach 172.17.0.2; here is the output:
$ tcpdump port 1111
17:30:09.875982 IP Server-A-IP.pvsw > 172.17.0.2.pvsw: UDP, length 49
If I give the "udp-listener" container the IP 172.17.0.2 again, it works.
Any hint where I can look? By the way, Server-A is not accessible; it is just set to send events to Server-B's public IP.
What is the best practice for debugging? Are there any tools or tutorials?
I also checked iptables for any rules, but I could not find anything. Here is the result:
Chain PREROUTING (policy ACCEPT 2178 packets, 155K bytes)
pkts bytes target prot opt in out source destination
12M 805M PREROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0
12M 805M PREROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
12M 805M PREROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
3408K 204M DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 780 packets, 46800 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 789 packets, 47332 bytes)
pkts bytes target prot opt in out source destination
6021K 361M OUTPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 807 packets, 48412 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE udp -- * * 172.17.0.1 172.17.0.1 udp dpt:8080
0 0 MASQUERADE udp -- * * 172.17.0.2 172.17.0.1 udp dpt:1111
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
3348K 201M RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- br-4a68f517a271 * 0.0.0.0/0 0.0.0.0/0
0 0 DNAT udp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:8080 to:172.17.0.1:8080
0 0 DNAT udp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:1111 to:172.17.0.2:1111
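Two things may be worth trying here; both are assumptions drawn from the NAT output above, not a confirmed diagnosis. First, UDP flows are tracked by conntrack, so an existing entry can keep translating packets to the old 172.17.0.2 even after the DNAT rule is updated. Second, the address churn itself can be avoided by pinning the container's IP on a user-defined network:

```shell
# Delete stale tracked UDP flows to port 1111 so the current DNAT rule
# is applied to the next packet (conntrack is in the conntrack-tools
# package).
sudo conntrack -D -p udp --orig-port-dst 1111

# Pin the listener's address on a user-defined bridge network so that a
# restart can never hand its IP to another container.
# "udpnet", the subnet, and "my-udp-image" are illustrative placeholders.
docker network create --subnet 172.20.0.0/16 udpnet
docker run -d --name udp-listener --network udpnet --ip 172.20.0.10 \
    -p 1111:1111/udp my-udp-image
```

As for debugging practice: tcpdump (as used above), conntrack -L to inspect tracked flows, and iptables -t nat -nvL to watch rule counters are the usual trio.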

Forwarding traffic to custom key chain in iptables

I need help regarding iptables. I have the following rules when I run the command iptables -L:
Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
Chain FORWARD (policy DROP)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain MYSSH (0 references)
target prot opt source destination
Now I want to add a rule to the INPUT chain of my filter table that will send all ssh traffic to the MYSSH chain. I have to make sure this new rule follows (not precedes) the RELATED,ESTABLISHED rule, so it doesn't apply to existing connections!
I tried:
iptables -I INPUT 1 -p tcp -m MYSSH --dport 22 -j ACCEPT
but this is not working. Can you please tell me how to do that?
This is kind of a question for Superuser, but okay. I have my admin hat on today. :P
The main thing is that you can use your chain as a target like ACCEPT, REJECT or DROP, so you want to pass it as -j option, i.e.
iptables -A INPUT -p tcp --dport 22 -j MYSSH
would append to the INPUT chain a rule that pipes all TCP traffic to port 22 through the MYSSH chain.
The other question is where to insert this rule. Generally, when I do this kind of stuff manually (these days I usually use shorewall because it's easier to maintain), I just work with iptables -A commands and run them in the right order. In your case, it looks as though you want to insert it as the second or third rule, before the catch-all
ACCEPT all -- anywhere anywhere
rule (although that might have some additional conditions that iptables -L will not show without -v; I can't know that). Then we're looking at
iptables -I INPUT 2 -p tcp --dport 22 -j MYSSH
or
iptables -I INPUT 3 -p tcp --dport 22 -j MYSSH
depending on where you want it.
Note, by the way, that if this catch-all rule doesn't have additional conditions that I'm not seeing, the rule below it will never be reached.
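Putting that together, one possible end-to-end sequence (the ACCEPT/DROP rules inside MYSSH and the 192.0.2.0/24 range are illustrative assumptions, not something the question specifies):

```shell
# Create the chain if it doesn't already exist, then send new SSH
# traffic through it right after the RELATED,ESTABLISHED rule.
iptables -N MYSSH 2>/dev/null || true
iptables -I INPUT 2 -p tcp --dport 22 -j MYSSH

# Whatever policy you want for new SSH connections lives in MYSSH:
iptables -A MYSSH -s 192.0.2.0/24 -j ACCEPT   # example trusted range
iptables -A MYSSH -j DROP
```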

iptables: Index of deletion too big BASH

I am having some difficulty getting a default iptables script to run. It shows the error: iptables: Index of deletion too big
I have tried re-ordering the rules, deleting everything first before adding, etc., but that doesn't seem to help. What am I doing wrong?
Here is the script:
#!/bin/bash
iptables -P FORWARD DROP
iptables -D FORWARD 1
iptables -P INPUT DROP
iptables -D INPUT 5
iptables -D INPUT 4
iptables -I INPUT -p tcp --dport 22 -j ACCEPT
iptables -D INPUT 3
iptables -I INPUT -p icmp -j ACCEPT
The original iptables ruleset looks like this:
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
119 13723 ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED
0 0 ACCEPT icmp -- any any anywhere anywhere
0 0 ACCEPT all -- lo any anywhere anywhere
1 60 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:ssh
0 0 REJECT all -- any any anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 REJECT all -- any any anywhere anywhere reject-with icmp-host-prohibited
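One way to read the error (an inference from the script, since the exact failing line isn't shown): delete-by-index is fragile, because the numbering shifts after every -D and -I, and re-running the script once some rules are already gone makes an index like 5 point past the end of the chain. A sketch of an idempotent rebuild that avoids indices entirely:

```shell
#!/bin/bash
# Set default-deny policies, flush the chains, then append the rules we
# want in order. Safe to re-run: no rule is ever addressed by index.
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -F INPUT
iptables -F FORWARD
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
```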

Removing port from TCP_IN does not close it from outside traffic on CSF

A few days ago I installed CSF on my Ubuntu host via SSH. Everything seemed to be working fine, and I had the chance to play with it for a few hours, figuring out how to close and open ports.
Today I tried to restrict my MySQL port 3306 and allow access only from a specific IP address. I did this by removing it from the TCP_IN and TCP_OUT lines in csf.conf and inserting it in csf.allow.
This seemed not to be working, as the port still appeared open when scanning it with nmap. After further debugging I figured out that no change I made to the csf.conf and csf.allow files had any effect on the availability of the ports.
I researched further and found that there can be conflicts between the ufw firewall, iptables, and CSF, so I stopped ufw, deleted all my iptables rules, and reset them to the default values.
:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
:~$ sudo service ufw status
ufw stop/waiting
And now I just flushed, stopped and started the csf firewall:
csf -f, csf -x, csf -e
After the restart, sudo iptables -L outputs a huge list of rules with source and destination as anywhere. I have no previous experience with this, so I am not really sure I am reading it correctly, but from what I've read this does not look right for my situation.
On the other hand, csf -L has a different output, with most source and destination IPs as 0.0.0.0/0. What I could extract from the csf -L output is that there is an INVALID chain.
Chain INVALID (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 INVDROP all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID
2 0 0 INVDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x3F/0x00
3 0 0 INVDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x3F/0x3F
4 0 0 INVDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x03/0x03
5 0 0 INVDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x06
6 0 0 INVDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x05/0x05
7 0 0 INVDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x11/0x01
8 0 0 INVDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x18/0x08
9 0 0 INVDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x30/0x20
10 0 0 INVDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:!0x17/0x02 ctstate NEW
and
Chain ALLOWIN (1 references)
num pkts bytes target prot opt in out source destination
1 210 10680 ACCEPT all -- !lo * [mysship] 0.0.0.0/0
Chain ALLOWOUT (1 references)
num pkts bytes target prot opt in out source destination
1 295 41404 ACCEPT all -- * !lo 0.0.0.0/0 [mysship]
[mysship] is the IP from which I connect using SSH; I've put it in csf.allow, and the SSH port is also in the csf.conf TCP_IN and TCP_OUT lists.
For me, I changed the policy to DROP and then allowed whatever I wanted. Take a look:
Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
Chain FORWARD (policy DROP)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
You can add the IP you want with -s (for source) or -d (for destination)!
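Applied to the original 3306 problem, the same idea can be sketched in plain iptables (203.0.113.7 stands in for the trusted address; note this sidesteps CSF entirely, so it is only an illustration of -s in practice):

```shell
# Allow MySQL only from one trusted source address; drop it otherwise.
iptables -A INPUT -p tcp --dport 3306 -s 203.0.113.7 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP
```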
I am not really sure what was causing the confusion, but I flushed all my previous configs from both iptables and CSF. I re-installed CSF, then wrote all the configs one by one, testing at every step with nmap. I also changed TESTING_INTERVAL to 15; I think my firewall settings were being cleared too quickly while I kept TESTING = 1.

iptables --gid-owner works only for user's main group

I am trying to disable access to IP 1.2.3.4 for all users except members of group "neta". This is a new group which I created only for this purpose.
iptables -I OUTPUT -o eth0 -p tcp -d 1.2.3.4 -m owner ! --gid-owner neta -j REJECT
This disables access to 1.2.3.4 for all users, even if they are member of group "neta".
I have a user xx who is a member of groups xx (his main group) and neta. If I change the rule to:
iptables -I OUTPUT -o eth0 -p tcp -d 1.2.3.4 -m owner \! --gid-owner xx -j REJECT
everyone except user xx is not able to access 1.2.3.4.
I added root to this group xx:
usermod -a -G xx root
but root was still not able to access this IP. If I add the user's main group (root, xx) to the rule, everything works as expected.
I tried splitting it into two rules just to be sure (and logging what is rejected):
iptables -A OUTPUT -o eth0 -p tcp -d 1.2.3.4 -m owner --gid-owner neta -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp -d 1.2.3.4 -m limit --limit 2/s --limit-burst 10 -j LOG
iptables -A OUTPUT -o eth0 -p tcp -d 1.2.3.4 -j REJECT
but there is no difference. Everything is being rejected.
There are no other iptables rules.
root@vm1:~# iptables -nvL
Chain INPUT (policy ACCEPT 19 packets, 1420 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 10 packets, 1720 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 1.2.3.4 owner GID match 1001
0 0 LOG tcp -- * eth0 0.0.0.0/0 1.2.3.4 limit: avg 2/sec burst 10 LOG flags 0 level 4
0 0 REJECT tcp -- * eth0 0.0.0.0/0 1.2.3.4 reject-with icmp-port-unreachable
I want to be able to (dis)allow access to this IP by adding/removing users from this "neta" group instead of adding iptables rules for every user.
OK, to be honest I know too little about Linux and iptables to be sure about my theory, but since I wanted to do the same for a VPN, here we go.
I assume that the match is done using the process the packets originate from, and that a Linux process doesn't get all of a user's groups assigned; instead, a process runs with one UID and one GID.
That means you have to execute the command explicitly under that specific group, or else the command/process runs with the user's default group.
Writing this I had an idea to see whether there is such possibility.
I restricted access to a certain IP range using the group VPN. This never worked. Now I tested with the following command and it works:
sg vpn -c "ssh user@10.15.1.1"
So I hope my theory was correct.
Old post, but chiming in since I have run into this exact problem in Ubuntu 16.04.3 LTS server.
Ubuntu's implementation of the iptables owner extension in netfilter examines the owner of the current network packet and queries only that user's primary group ID. It doesn't dig deeper and fetch all the group memberships; only the primary group is compared to the --gid-owner value.
What the OP was trying to accomplish would work if he/she changed the primary/default user group of all relevant usernames to "neta". Those users would then be captured by the rule.
To use a supplementary group, you need to add the --suppl-groups flag to your iptables command.
From man page:
--suppl-groups
Causes group(s) specified with --gid-owner to be also checked in the supplementary groups of a process.
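So, assuming a kernel and iptables version recent enough to support the flag (check man iptables-extensions on your system), the original rule could be rewritten as:

```shell
# Reject access to 1.2.3.4 for any process whose owner is not in group
# "neta", checking supplementary group membership as well as the
# primary GID.
iptables -I OUTPUT -o eth0 -p tcp -d 1.2.3.4 \
    -m owner ! --gid-owner neta --suppl-groups -j REJECT
```

Even then, a process's group list is fixed when it starts, so users newly added to neta must start a fresh session (or use sg/newgrp, as in the answer above) before the rule sees them.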
