Running a Node.js application on an OVH VPS - node.js

I just rented a VPS at OVH and I'm trying to run a Node.js application on port 1337.
The application runs correctly: when I try
curl 0.0.0.0:1337
it gives me the source of the page (it works with 127.0.0.1 too).
But when I try to connect to 176.31.184.111:1337, it doesn't work.
root# iptables -L -v -n
Chain INPUT (policy ACCEPT 8645 packets, 592K bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 7556 packets, 729K bytes)
pkts bytes target prot opt in out source destination
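Since the INPUT policy is ACCEPT and the chain is empty, the host firewall doesn't look like the culprit. A couple of checks that might help narrow it down (this is a sketch; the expected behaviour is an assumption, not taken from the question):
# confirm the app is bound to 0.0.0.0 (or the public IP), not only to 127.0.0.1
ss -nltp | grep 1337
# test the public IP from the VPS itself
curl -v 176.31.184.111:1337
If both work from the VPS but the connection still fails from outside, the OVH network-level firewall (configurable in the OVH control panel) is another place to look.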

Related

Docker listening inside the docker host for RabbitMQ but not from outside, why?

This is how I run the RabbitMQ image:
docker run -d --restart always --hostname host-rabbit --name cg-rabbit -p 5029:5672 -p 5020:15672 -e RABBITMQ_DEFAULT_VHOST=sample_vhost -e RABBITMQ_DEFAULT_USER=sampleuser -e RABBITMQ_DEFAULT_PASS=samplepass rabbitmq:3-management
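For reference, -p 5029:5672 publishes container port 5672 (AMQP) on host port 5029, and -p 5020:15672 publishes the management UI on host port 5020. A quick way to double-check what Docker actually published (a sketch; the sample output is an assumption, not from the original setup):
docker port cg-rabbit
# expected something like:
# 5672/tcp -> 0.0.0.0:5029
# 15672/tcp -> 0.0.0.0:5020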
Now in netstat -nltp:
ubuntu@infra:~$ netstat -nltp
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::5020 :::* LISTEN -
tcp6 0 0 :::5029 :::* LISTEN -
I'm not sure why I see tcp6 when Docker exposes ports to the host, and whether that causes any issues!
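On most Linux systems a socket bound to the IPv6 wildcard (:::5020, :::5029) also accepts IPv4 connections, so the tcp6 lines are normally not a problem by themselves. A quick way to verify (assuming default kernel settings):
# 0 means IPv6 sockets also accept IPv4 (the default on most distros)
sysctl net.ipv6.bindv6only
# force an IPv4 connection to the published management port
curl -4 -v http://127.0.0.1:5020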
Now, when I telnet from within the server, I can see that the port is open:
ubuntu@infra:~$ telnet MY-SERVER-IP-ADDRESS 5029
Trying MY-SERVER-IP-ADDRESS...
Connected to MY-SERVER-IP-ADDRESS.
Escape character is '^]'.
^]
telnet> Connection closed.
But when I try to telnet from my machine (or from another server):
$ telnet MY-SERVER-IP-ADDRESS 5020
Trying MY-SERVER-IP-ADDRESS...
^C
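To find out whether those connection attempts even reach the server, it may help to capture traffic on the public interface while retrying the telnet (eth0 is an assumption; substitute the actual NIC name):
sudo tcpdump -ni eth0 'tcp port 5020 or tcp port 5029'
# no SYNs here: traffic is dropped before the host (provider firewall / security group)
# SYNs but no replies: the problem is in the host's iptables / Docker forwarding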
iptables -L reports:
sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:5020
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:5029
ACCEPT tcp -- anywhere anywhere
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:amqp
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:15672
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
It is worth noting that I have installed a Redis server on the host (non-Docker) and I am able to telnet to it from outside.
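One way to narrow this down is to watch the packet counters on the Docker-related chains while a connection attempt from outside is running; if the counters in FORWARD/DOCKER don't move, the packets never enter the forwarding path (a diagnostic sketch, not part of the original post):
# zero the counters, retry the telnet from outside, then inspect them
sudo iptables -Z
sudo iptables -L FORWARD -n -v
sudo iptables -L DOCKER -n -v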
EDIT-1:
sudo iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere !localhost/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 anywhere
MASQUERADE tcp -- 172.17.0.2 172.17.0.2 tcp dpt:15672
MASQUERADE tcp -- 172.17.0.2 172.17.0.2 tcp dpt:amqp
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere
DNAT tcp -- anywhere anywhere tcp dpt:15672 to:172.17.0.2:15672
DNAT tcp -- anywhere anywhere tcp dpt:amqp to:172.17.0.2:5672
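One thing that stands out here (an observation, not a confirmed diagnosis): the DNAT rules match destination ports 15672 and amqp (5672), i.e. the container ports, while docker run published host ports 5020 and 5029. Docker normally creates DNAT rules that match the published host port and rewrite it to the container port, so if the live rules really look like this, connections to 5020/5029 would never be translated. Listing the chain numerically avoids service-name resolution and shows the real ports and counters:
sudo iptables -t nat -L DOCKER -n -v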
EDIT-2:
Docker configuration:
ubuntu@infra:~$ sudo cat /var/snap/docker/796/config/daemon.json
{
"log-level": "error",
"storage-driver": "overlay2"
}
This is really odd. After flushing the nat table in iptables, everything works as expected:
iptables -t nat -F
My nat table before flushing:
ubuntu@infra:~$ sudo iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere !localhost/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 anywhere
MASQUERADE tcp -- 172.17.0.2 172.17.0.2 tcp dpt:15672
MASQUERADE tcp -- 172.17.0.2 172.17.0.2 tcp dpt:amqp
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere
DNAT tcp -- anywhere anywhere tcp dpt:15672 to:172.17.0.2:15672
DNAT tcp -- anywhere anywhere tcp dpt:amqp to:172.17.0.2:5672
And after flushing, everything is gone:
ubuntu@infra:~$ sudo iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
Chain DOCKER (0 references)
target prot opt source destination
NOTE: after restarting Docker via sudo snap restart docker the NAT rules are back again and I had to flush them again!
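That is expected insofar as dockerd rebuilds its iptables/NAT rules on startup, so flushing the nat table is only a temporary workaround (and it also breaks outbound masquerading for the containers). To see exactly what changes between the broken and the working state, something like this might help (a sketch):
# snapshot the rules while broken, apply the flush workaround, snapshot again, diff
sudo iptables-save > /tmp/rules-broken.txt
sudo iptables -t nat -F
sudo iptables-save > /tmp/rules-flushed.txt
diff /tmp/rules-broken.txt /tmp/rules-flushed.txt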

How can I use iptables as a per-user whitelist web filter on Linux?

I'm trying to use iptables to create a web filter that whitelists a list of websites and blacklists everything else on a per-user basis. So one user would have full web access while another would be restricted only to the whitelist.
I am able to block all outgoing web traffic on a per-user basis, but I cannot seem to whitelist certain websites. My current filter table is set up as:
Chain INPUT (policy ACCEPT 778 packets, 95768 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 777 packets, 95647 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * * 176.32.98.166 0.0.0.0/0 owner UID match 1000
0 0 ACCEPT all -- * * 176.32.103.205 0.0.0.0/0 owner UID match 1000
0 0 ACCEPT all -- * * 205.251.242.103 0.0.0.0/0 owner UID match 1000
677 73766 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 owner UID match 1000 reject-with icmp-port-unreachable
This was created with the following commands:
sudo iptables -A OUTPUT -s amazon.com -m owner --uid-owner <USERNAME> -j ACCEPT
sudo iptables -A OUTPUT -m owner --uid-owner <USERNAME> -j REJECT
My understanding was that iptables would use the first rule that matches a packet, but that does not seem to be the case here. All web traffic is being blocked for the user, while it is allowed for all other users. Is there another way to set this up?
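Rules in a chain are indeed evaluated top to bottom with the first match winning. One detail worth double-checking (a sketch, assuming the goal is to match where the traffic is going): in the OUTPUT chain the packets leaving the box carry the local machine's address as source, so -s amazon.com (a source match) never hits; a destination match with -d is probably what was intended. Also note that a hostname in an iptables rule is expanded to whatever IPs it resolves to at the moment the rule is added, so CDN-hosted sites may need more addresses than that.
# allow this user's traffic to the addresses amazon.com resolves to right now
sudo iptables -A OUTPUT -d amazon.com -m owner --uid-owner <USERNAME> -j ACCEPT
# then reject everything else from that user
sudo iptables -A OUTPUT -m owner --uid-owner <USERNAME> -j REJECT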

Netcat server and intermittent UDP datagram loss

The client on enp4s0 (192.168.0.77) is continuously sending short text messages to 192.168.0.1:6060. The server on 192.168.0.1 listens on port 6060 via nc -ul4 --recv-only 6060
A ping (ping -s 1400 192.168.0.77) from server to client works fine. Wireshark is running on 192.168.0.1 (enp4s0) and shows that all datagrams are correct. There are no packets missing.
But netcat (and likewise a simple UDP server) receives datagrams only sporadically.
Any idea what's going wrong?
System configuration:
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
# ip route
default via 192.168.77.1 dev enp0s25 proto dhcp metric 100
default via 192.168.0.1 dev enp4s0 proto static metric 101
192.168.0.0/24 dev enp4s0 proto kernel scope link src 192.168.0.1 metric 101
192.168.77.0/24 dev enp0s25 proto kernel scope link src 192.168.77.25 metric 100
192.168.100.0/24 dev virbr0 proto kernel scope link src 192.168.100.1
# uname -a
Linux nadhh 5.1.20-300.fc30.x86_64 #1 SMP Fri Jul 26 15:03:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
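With an empty filter table ruling out iptables, two things that often cause this kind of sporadic UDP loss are reverse-path filtering (there are two default routes on this host) and receive-buffer overruns on the listening socket. A few checks that might narrow it down (a diagnostic sketch, not from the original post):
# rp_filter: 1 = strict mode, which can drop packets when routing is asymmetric
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.enp4s0.rp_filter
# UDP statistics: watch whether "packet receive errors" / "receive buffer errors" grow
netstat -su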

Server not accepting request on port 9999

I am trying to deploy my socket (phpws library) on an Amazon EC2 instance. For this I deployed the code and started the socket server. I selected port 9999 for the socket handshake, but it is not working.
I tried to capture requests on this server with the command:
sudo tcpdump -i any port 9999
Then I hit this port, but I did not receive any request. I thought this was because of iptables, so I checked it with:
sudo iptables -L
But it clearly allows all requests. Here is the output:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
At this point I am clueless about how to move forward. Any help is greatly appreciated.
You also have to open port 9999 in the AWS Security Group attached to the EC2 instance.
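The empty iptables ruleset only covers the OS-level firewall; on EC2 the Security Group filters traffic before it ever reaches the instance, which is why tcpdump sees nothing. If the console is not handy, the rule can also be added with the AWS CLI (the group ID below is a placeholder):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 9999 --cidr 0.0.0.0/0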

Strongswan RoadWarrior VPN-Config

I want to set up a VPN server for my local web traffic (iPhone/iPad/MacBook).
So far I have managed to set up a basic configuration with a CA & client certificates.
For the moment my client can connect to the server and access server resources, but it has no route to the internet.
The server is reachable directly via its public IP (no home installation...).
What do I need to change to route all my client traffic through the VPN server and enable internet access for my clients?
Thanks in advance
/etc/ipsec.conf
config setup

conn rw
    keyexchange=ikev1
    authby=xauthrsasig
    xauth=server
    auto=add
    #
    # LEFT (SERVER)
    left=%defaultroute
    leftsubnet=0.0.0.0/0
    leftfirewall=yes
    leftcert=serverCert.pem
    #
    # RIGHT (CLIENT)
    right=%any
    rightsubnet=10.0.0.0/24
    rightsourceip=10.0.0.0/24
    rightcert=clientCert.pem
iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 10.0.0.1 anywhere policy match dir in pol ipsec reqid 1 proto esp
ACCEPT all -- anywhere 10.0.0.1 policy match dir out pol ipsec reqid 1 proto esp
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Found the solution!
/etc/ipsec.conf
rightsubnet=10.0.0.0/24
iptables
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -m policy --dir out --pol ipsec -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
System
sysctl net.ipv4.ip_forward=1
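Two small caveats about this solution (assumptions about the environment, not part of the original answer): eth0 has to be the interface that holds the public IP, and the sysctl setting is lost on reboot unless it is made persistent, for example:
# make IP forwarding survive a reboot (file name is just an example)
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipsec-forward.conf
sudo sysctl --system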
