Host_A tries to send data to Host_B over TCP. Host_B is listening on port 8181. Both Host_A and Host_B are Linux boxes (Red Hat Enterprise Linux). The TCP layer is implemented using the Java NIO API.
Whatever Host_A sends, Host_B never receives. Sniffing the data on the wire with Wireshark produced the following log:
1) Host_A (33253) > Host_B (8181): [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=513413781 TSER=0 WS=7
2) Host_B (8181) > Host_A (33253): [RST, ACK] Seq=1 Ack=1 Win=0 Len=0
The log shows that Host_A sends a [SYN] to Host_B to establish a connection, but instead of a [SYN, ACK], Host_B responds with an [RST, ACK], which resets/closes the connection. This behavior is observed every time.
I am wondering: under what circumstances does a TCP listener send an [RST, ACK] in response to a [SYN]?
[RST, ACK] means the port is closed. Are you sure Host_B is listening on the right IP/interface?
Also check your firewall for a -j REJECT --reject-with tcp-reset rule.
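For illustration, a rule like the following on Host_B would produce exactly the trace above (a hypothetical rule using this question's port 8181; adjust to whatever your firewall actually contains):

iptables -A INPUT -p tcp --dport 8181 -j REJECT --reject-with tcp-reset

You can also confirm whether anything is listening on 8181 at all, and on which address:

ss -ltn | grep 8181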
This happened to me because I had not set sockaddr_in.sin_family to AF_INET in the server C++ program.
Related
I have a droplet on DigitalOcean with IPv4 and IPv6 enabled. The droplet is behind a DigitalOcean network firewall with the following rules:
Inbound:
SSH      TCP   22          All IPv4, All IPv6
HTTP     TCP   80          All IPv4, All IPv6
HTTPS    TCP   443         All IPv4, All IPv6
Outbound:
ICMP     ICMP              All IPv4, All IPv6
All TCP  TCP   All ports   All IPv4, All IPv6
All UDP  UDP   All ports   All IPv4
My understanding and expectation is that this will block all SSH attempts on ports other than 22. However, when checking the sshd unit in the systemd journal, I see the following entries:
2022-12-29 03:00:32 Disconnected from invalid user antonio 43.153.179.44 port 45614 [preauth]
2022-12-29 03:00:32 Received disconnect from 43.153.179.44 port 45614:11: Bye Bye [preauth]
2022-12-29 03:00:31 Invalid user antonio from 43.153.179.44 port 45614
2022-12-29 02:58:37 Disconnected from invalid user desliga 190.129.122.3 port 1199 [preauth]
2022-12-29 02:58:37 Received disconnect from 190.129.122.3 port 1199:11: Bye Bye [preauth]
2022-12-29 02:58:37 Invalid user desliga from 190.129.122.3 port 1199
and many more lines like these, which suggests the firewall is not blocking SSH connections on ports other than 22.
The following graph shows the number of SSH connections to ports other than 22 in the last hour. Enabling the network firewall reduced the number of connections, but they have not stopped.
Could it be that DigitalOcean's network firewall is broken?
What am I missing?
Is anyone seeing the same situation on their infrastructure?
The ports shown in the log are the remote source ports the connections are coming from on the remote IPs; they do not indicate that those ports are listening on your server or open through the firewall. As you describe it, the firewall is configured to allow any remote IP and source port to connect to your droplet on local ports 22, 80, and 443.
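You can see this from the droplet itself; a quick sketch, assuming ss is available (it is on most modern distributions): listing established connections whose local port is 22 shows exactly the kind of ephemeral remote ports (45614, 1199, ...) that appear in your journal.

ss -tn state established '( sport = :22 )'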
I am running a three-node k8s cluster using MicroK8s, and the application is deployed to the nodes at random. Applications running in Kubernetes pods communicate with a Solace message broker outside the k8s cluster. Suddenly, the node stopped forwarding the broker's packet to the pod and sent an [RST] to the broker (refer to frames 73105 and 73106 below). What could cause the node to not forward the packet (73105) and to send an [RST] (73106) to the message broker?
192.168.100.153 (k8s Node Address)
10.1.22.138 (k8s Pod Address)
192.168.99.100 (Solace Message Broker Address)
No. Time Source Destination Protocol Length Info
73097 2022-05-28 03:15:49.111502 192.168.99.100 192.168.100.153 AMQP 76 (empty)
73098 2022-05-28 03:15:49.111566 192.168.99.100 10.1.22.138 AMQP 76 (empty)
73099 2022-05-28 03:15:49.111587 10.1.22.138 192.168.99.100 TCP 68 48296 → 5672 [ACK] Seq=1 Ack=146201 Win=501 Len=0 TSval=2957547792 TSecr=1008294801
73100 2022-05-28 03:15:49.111631 192.168.100.153 192.168.99.100 TCP 68 16886 → 5672 [ACK] Seq=1 Ack=146201 Win=501 Len=0 TSval=2957547792 TSecr=1008294801
73101 2022-05-28 03:15:51.112852 192.168.99.100 192.168.100.153 AMQP 76 (empty)
73102 2022-05-28 03:15:51.112930 192.168.99.100 10.1.22.138 AMQP 76 (empty)
73103 2022-05-28 03:15:51.112988 10.1.22.138 192.168.99.100 TCP 68 48296 → 5672 [ACK] Seq=1 Ack=146209 Win=501 Len=0 TSval=2957549793 TSecr=1008296802
73104 2022-05-28 03:15:51.113029 192.168.100.153 192.168.99.100 TCP 68 16886 → 5672 [ACK] Seq=1 Ack=146209 Win=501 Len=0 TSval=2957549793 TSecr=1008296802
73105 2022-05-28 03:15:53.114733 192.168.99.100 192.168.100.153 AMQP 76 (empty)
73106 2022-05-28 03:15:53.115263 192.168.100.153 192.168.99.100 TCP 56 16886 → 5672 [RST] Seq=1 Win=0 Len=0
I wonder whether conntrack is the cause of the intermittent connection resets. Should I apply echo 1 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_be_liberal?
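One way to test that hypothesis before flipping the flag, sketched under the assumption that the conntrack tool is installed on the node: watch the conntrack statistics while the resets occur. Growing invalid or insert_failed counters would support the theory that conntrack lost or rejected the entry and the node then reset the connection.

conntrack -S

If that points at conntrack, the modern spelling of the sysctl you mention is:

sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=1

(The ipv4/netfilter path is a legacy alias. Note that be_liberal only relaxes TCP window checking, so it helps only if the packets are being marked invalid for that reason.)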
I've been struggling with a strange problem on Ubuntu 18.04: I cannot reach services exposed by containers. I will demonstrate with an nginx example.
Starting the container:
sudo docker run -it --rm -d -p 8080:80 --name web nginx
docker ps shows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f09c71db299a nginx "/docker-entrypoint.…" 10 minutes ago Up 10 minutes 0.0.0.0:8080->80/tcp web
So it is listening on 0.0.0.0:8080 as expected.
But curl throws "Connection reset by peer":
$ curl -4 -i -v localhost:8080
* Rebuilt URL to: localhost:8080/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.58.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* stopped the pause stream!
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
I used tshark to inspect the network traffic:
$ sudo tshark -i any
66 7.442606878 127.0.0.1 → 127.0.0.1 TCP 68 8080 → 47430 [ACK] Seq=1 Ack=79 Win=65408 Len=0 TSval=4125875840 TSecr=4125875840
67 7.442679088 172.16.100.2 → 172.16.100.1 TCP 56 80 → 37906 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0
68 7.442784223 127.0.0.1 → 127.0.0.1 TCP 68 8080 → 47430 [RST, ACK] Seq=1 Ack=79 Win=65536 Len=0 TSval=4125875840 TSecr=4125875840
I see an RST coming from inside the container(?). I have never had such an issue before and I'm a bit lost as to how to solve it. Can someone help me out?
UPDATE: I ran docker inspect f09c71db299a and it shows:
"Gateway": "172.16.100.1"
"IPAddress": "172.16.100.2"
172.16.100.1 is my docker0 IP address. It looks like it rejects traffic from the container, right?
UPDATE 2: Following NightsWatch's suggestion, I checked whether the host accepts connections on port 8080. Telnet says:
~$ telnet localhost 8080
Trying ::1...
Connected to localhost.
Escape character is '^]'.
So it looks like the port is open but the request is blocked :/
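Two diagnostics that would narrow this down, as a sketch (both are standard commands; nothing specific to your setup is assumed): check whether the DOCKER iptables chain actually has an ACCEPT rule for 172.16.100.2:80, and whether bridged traffic is being passed through iptables at all.

sudo iptables -L DOCKER -n -v
sysctl net.bridge.bridge-nf-call-iptables

Also note that your telnet test connected over ::1 (IPv6) while the failing curl used 127.0.0.1 (IPv4), so the two tests did not exercise the same path.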
I encountered a strange behavior today that I could not find a cause for. I am using macOS Sierra.
I have this code (Express):
app.server.listen(config.port, config.address, function () {
logger.info('app is listening on', config.address + ':' + config.port);
});
And it prints
app is listening on 127.0.0.1:5000
However, if I try to curl it, it fails.
$ curl http://localhost:5000/api/ping
curl: (56) Recv failure: Connection reset by peer
I checked my hosts file:
$ cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
So I ping localhost to make sure it resolves to 127.0.0.1:
$ ping localhost
PING localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.061 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.135 ms
^C
--- localhost ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.061/0.107/0.135/0.033 ms
I try again, but it fails:
$ curl http://localhost:5000/api/ping
curl: (56) Recv failure: Connection reset by peer
Now I try to use 127.0.0.1 instead and voila, it works?
$ curl http://127.0.0.1:5000/api/ping
pong
What's wrong?
cURL is trying to connect via IPv6: your /etc/hosts maps localhost to both 127.0.0.1 and ::1, and cURL may try the IPv6 address first. Your Express server is listening only on 127.0.0.1, which is IPv4, so the attempt on ::1 fails.
You can force cURL to connect via IPv4 with the -4 option.
curl -4 http://localhost:5000/api/ping
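You can confirm the one-sided binding yourself; a sketch using lsof, which ships with macOS:

lsof -nP -iTCP:5000 -sTCP:LISTEN

If the output shows the server bound only to 127.0.0.1, the IPv6 attempt on ::1 has nothing to answer it. Alternatively, omitting the address argument to listen() makes Node bind to all interfaces, IPv6 included, so both curl variants would work.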
I have a HAProxy + Node.js + Rails setup; I use the Node.js server for file uploads.
The problem I'm facing is that if I upload through HAProxy to Node.js and a TCP (fast) retransmission occurs because of a lost packet, the TX rate on the client drops to zero for about 5-10 seconds and the connection gets flooded with TCP retransmissions.
This does not occur if I upload to Node.js directly (a TCP retransmission happens too, but it doesn't get stuck with dozens of retransmission attempts).
My test setup is a simple HTML4 FORM (method POST) with a single file input field.
The NodeJS Server only reads the incoming data and does nothing else.
I've tested this on multiple machines, networks, browsers, always the same issue.
Here's a TCP Traffic Dump from the client while uploading a file:
.....
TCP 1506 [TCP segment of a reassembled PDU]
>> everything is uploading fine until:
TCP 1506 [TCP Fast Retransmission] [TCP segment of a reassembled PDU]
TCP 66 [TCP Dup ACK 7392#1] 63265 > http [ACK] Seq=4844161 Ack=1 Win=524280 Len=0 TSval=657047088 TSecr=79373730
TCP 1506 [TCP Retransmission] [TCP segment of a reassembled PDU]
>> the last message is repeated about 50 times for >>5-10 secs<< (TX drops to 0 on client, RX drops to 0 on server)
TCP 1506 [TCP segment of a reassembled PDU]
>> upload continues until the next TCP Fast Retransmission and the same thing happens again
The haproxy.conf (haproxy v1.4.18 stable) is the following:
global
    log 127.0.0.1 local1 debug
    maxconn 4096 # Total max connections. This is dependent on ulimit
    nbproc 2

defaults
    log global
    mode http
    option httplog
    option tcplog

frontend http-in
    bind *:80
    timeout client 6000
    acl is_websocket path_beg /node/
    use_backend node_backend if is_websocket
    default_backend app_backend

# Rails server (via nginx + Passenger)
backend app_backend
    option httpclose
    option forwardfor
    timeout server 30000
    timeout connect 4000
    server app1 127.0.0.1:3000

# node.js
backend node_backend
    reqrep ^([^\ ]*)\ /node/(.*) \1\ /\2
    option httpclose
    option forwardfor
    timeout queue 5000
    timeout server 6000
    timeout connect 5000
    server node1 127.0.0.1:3200 weight 1 maxconn 4096
Thanks for reading! :)
Simon
Try setting "timeout http-request" to 6 seconds globally. The default can be too low to ride out retransmissions, and while this won't explain the cause, it might solve your problem.
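In your posted config that would be one extra line in the defaults section; a sketch, using milliseconds to match the other timeouts in the file:

defaults
    timeout http-request 6000

(timeout http-request limits how long HAProxy waits for a complete request from the client, so raising it gives a stalled, retransmitting client more room before HAProxy gives up on the connection.)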
Try using https://github.com/nodejitsu/node-http-proxy. I am not sure whether it will fit your overall architecture requirements, but it would be worth a try.