My setup is nginx connecting directly to a Node.js server (both run on the same host, and nginx forwards requests to node at 127.0.0.1:8000). The symptom: occasionally there are 504 entries in the nginx log, while the Node.js log shows no sign of ever receiving those requests.
I then enabled TCP logging with iptables, logging all TCP packets to port 8000. Checking that log, it looks like nginx was trying to establish a TCP connection to the Node.js server but never succeeding: it kept retransmitting SYN packets until nginx timed the request out. Here's an example (TCP + nginx log):
13:44:44 sp:48103 dp:8000 SYN
13:44:45 sp:48103 dp:8000 SYN
13:44:47 sp:48103 dp:8000 SYN
13:44:51 sp:48103 dp:8000 SYN
13:44:59 sp:48103 dp:8000 SYN
13:45:15 sp:48103 dp:8000 SYN
13:45:44 nginx 504
During this period the CPU load was light, memory usage was below 50%, and incoming traffic was under 50 requests per minute. Other requests were processed normally.
Server is Ubuntu 14.04.2 LTS
Any idea what's going on? It seems like an OS-level issue. Thanks in advance.
Check whether something is actually listening on TCP port 8000 on your loopback interface. Try commands like:
lsof -P -n -i tcp:8000
fuser 8000/tcp
ss -4lnt
netstat -4lnt
These should give you a hint whether anything is listening at all, or whether it is only listening on a specific interface/address rather than on your loopback.
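If you want to watch this over time, here is a minimal probe sketch (host and port taken from the question; the 3-second timeout is an arbitrary choice): a refused connection means nothing is listening, while a timeout matches the dropped-SYN pattern in the iptables log.
import socket
import time

while True:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect(("127.0.0.1", 8000))
        print(time.strftime("%H:%M:%S"), "connected")
    except socket.timeout:
        # SYNs apparently being dropped, as in the iptables log above
        print(time.strftime("%H:%M:%S"), "timed out")
    except ConnectionRefusedError:
        # nothing listening on 127.0.0.1:8000
        print(time.strftime("%H:%M:%S"), "refused")
    finally:
        s.close()
    time.sleep(5)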
I have a simple program that creates a simple web server on localhost with a random port between 10000 and 65535 (the highest unsigned 16-bit integer). You can also specify a port, but if you don't know which port it runs on, it's hard to find out.
I have written a little helper program that should show every port that is being listened on.
The helper:
import requests

for port in range(10000, 65535):
    try:
        print(port, requests.get("http://localhost:{}".format(port)))
    except Exception as e:
        print("{}: {}".format(type(e).__name__, port), end="\r")
I expect it to show ConnectionError: 10000 and count up to 65535, showing any connections it finds. But it always hangs around port 25564/25565, the last message shown being for port 25564. And if I make a completely unrelated request to 'http://localhost:25564' or any higher port, it hangs too.
The script hangs on port 25565 when I start a server on 25564.
Normally, if no server is listening on a port, the connection is refused immediately and I get a ConnectionError. Above port 25564 it doesn't; it just waits until I stop the script.
This behaviour seems completely random, as port 25564 is unassigned according to speedguide.net.
Port 25565 is the standard MySQL and Minecraft Dedicated Server port (according to speedguide.net), neither of which is running on my machine, so the hang still seems random.
I'm using python3 on Ubuntu 20.04 LTS.
Interestingly it didn't fail on my laptop with Linux Mint 21...
As #root requested in the comments, here is the output of nmap localhost:
Starting Nmap 7.80 ( https://nmap.org ) at 2022-09-25 11:42 CEST
Host is up (0.00014s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
80/tcp open http
631/tcp open ipp
8080/tcp open http-proxy
9050/tcp open tor-socks
Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
Just a little note: port 80/tcp is apache2 serving the "You are an idiot" flash animation.
As per the comments, you can try something like this:
Note that I have added the timeout parameter to the request. Its unit is seconds. The default timeout is None, which means it will wait (hang) until the connection is closed.
import requests

for port in range(10_000, 65_535):
    try:
        r = requests.get(f'http://localhost:{port}', timeout=5)
        print(port)
    except Exception as e:
        print(f'{type(e).__name__}, {port}', end='\r')
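If the HTTP layer isn't actually needed for the check, a plain-socket sketch along these lines (same port range as in the question; the 1-second timeout is an arbitrary choice) avoids the overhead of a full HTTP request and treats silently dropped ports the same way:
import socket

# connect_ex returns 0 when something accepts the TCP connection;
# the timeout keeps a silently dropped port from hanging the scan.
for port in range(10_000, 65_536):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        if s.connect_ex(("localhost", port)) == 0:
            print("open:", port)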
I've hit something strange. Setup: VirtualBox + CentOS 7 + Python 3.6 + Scapy 2.4.0.
The network has only 4-5 active hosts: the gateway, the CentOS VM in VirtualBox, the PC running VirtualBox, and something else.
What I'm running:
ans, unans = sr(IP(dst='10.10.10.1-100')/ICMP(), iface = 'enp0s3', retry=0, timeout=1)
Begin emission: ...
Received 1822 packets, got 99 answers, remaining 1 packets
ans
Results: TCP:0 UDP:0 ICMP:99 Other:0
unans
Unanswered: TCP:0 UDP:0 ICMP:1 Other:0
The ans[x] entries look like legitimate ICMP reply packets.
unans[0] is the missing ICMP reply from the CentOS VM itself.
So it looks like every host is alive, instead of only the 4-5 hosts that actually are.
What could be the reason?
You want to know the possible reason, but scapy is not giving you enough details. So use tcpdump:
$ sudo tcpdump -e -c 200 icmp
Send the probe packets while tcpdump is running, in order to view address and timing details. It is possible you are seeing lots of perfectly normal ICMPs, for example port unreachable, or network unreachable. Tcpdump will tell you exactly what went over the network interface.
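To complement tcpdump, the same check can be done from the Scapy side. A small sketch (reusing the probe from the question): print the source address and ICMP type/code of each reply; a real echo reply has type 0, while unreachable errors have type 3 and usually come from the gateway.
from scapy.all import IP, ICMP, sr

ans, unans = sr(IP(dst='10.10.10.1-100') / ICMP(),
                iface='enp0s3', retry=0, timeout=1)

for sent, received in ans:
    # If the "reply" is really an ICMP error, the source is usually the
    # gateway and the type is not 0 (echo-reply).
    print(received[IP].src, received[ICMP].type, received[ICMP].code)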
I'm running a node/express app and it keeps dying randomly after a few hours.
The process is always still up, with nothing in the logs, no CPU/memory spikes, and not even much utilization, but it no longer serves any requests.
I suspect the cause is too many open UDP sockets: lsof -i -a -p X | wc -l counted 9k+ network connections when I ssh'd into the Docker container running the node process, 98% of them UDP entries like these:
node 6 root 223u IPv4 614173 0t0 UDP *:63025
node 6 root 224u IPv4 324249 0t0 UDP *:34622
node 6 root 225u IPv4 415898 0t0 UDP *:44176
The number of connections grows at a rate of exactly 10 new connections per minute.
The only UDP-related functionality in my app is https://github.com/sazze/winston-logstash-udp
Details:
Node v6.12.3 in Docker on AWS ec2 t2.medium
This behavior started happening after migrating from a debian:wheezy Docker base image to node:6.14.2-alpine.
Questions:
How can I further debug each UDP connection (e.g. target, duration, ...)? That would help find the underlying problem.
What is node's connection limit? I've read 4096.
This problem didn't occur under Debian; what differences might be related?
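For the first question, a sketch along these lines (assuming psutil can be installed in the container, and using PID 6 from the lsof output above) snapshots the process's UDP sockets over time; unconnected UDP sockets usually show no remote address, so the interesting part is how the count and the local ports grow:
import time
import psutil

proc = psutil.Process(6)  # PID taken from the lsof output above

while True:
    udp = proc.connections(kind="udp")
    print(time.strftime("%H:%M:%S"), len(udp), "UDP sockets")
    for conn in udp[-5:]:  # last few entries of the snapshot
        print("  local", conn.laddr, "remote", conn.raddr or "-")
    time.sleep(60)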
I am running a Node.js SSH server that spawns a child process to exec code (using require('child_process').spawn) after successful authentication.
The client-server connection works fine on port 22, and the connection is kept alive successfully through the spawned process.
I am now trying to set this up with HAProxy 1.6, forwarding port 22 to a non-privileged port on which the SSH server is listening.
However, when the child process is spawned, the server errors with either Error: write EPIPE or Error: read ECONNRESET.
This suggests to me that there is an issue with a prematurely closed stream or connection somewhere between client -> HAProxy -> server?
I have been looking at WebSocket and SSH configurations for HAProxy and various keep-alive options, but I cannot get the connection to work.
My configuration:
global
    daemon
    maxconn 10000
    log 127.0.0.1 local0

defaults
    log global
    option tcplog
    option logasap
    timeout connect 500s
    timeout client 5000s
    timeout server 2h
    timeout server-fin 5000s
    timeout client-fin 5000s
    timeout tunnel 1h
    option tcpka

frontend sshd
    bind *:22
    default_backend ssh
    timeout client 2h

backend ssh
    mode tcp
    server ssh2server 127.0.0.1:5000 check port 5000
Any pointers or help would be awesome. Thanks in advance.
EDIT
Running haproxy in debug mode, I get:
00000000:sshd.accept(0004)=0005 from [my ip]
00000000:ssh.srvcls[0005:0006]
00000000:ssh.clicls[0005:0006]
00000000:ssh.closed[0005:0006].
And in the tcplog:
Oct 15 15:15:38 localhost haproxy[16036]: 128.277.13.23:51146 [15/Oct/2016:15:15:38.804] sshd ssh/ssh2server 1/0/+0 +0 -- 1/1/1/1/0 0/0
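One way to narrow down where the stream breaks (a sketch with a placeholder host and credentials, assuming paramiko is available and that the server's exec path is what triggers the spawn) is to run the same exec once through HAProxy on port 22 and once directly against the backend on port 5000, then compare where the error appears:
import paramiko

for port in (22, 5000):  # 22 = via HAProxy, 5000 = backend directly
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("example.host", port=port, username="user", password="secret")
    stdin, stdout, stderr = client.exec_command("echo ok")
    print(port, stdout.read())
    client.close()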
In order to develop a cross-platform syslog client, I am trying to do it without using the syslog() call. I am developing this client in C++ and for now testing on Linux. The old syslog client that I am replacing worked perfectly fine with syslog().
For now, it simply doesn't work. The trace does not show up in /var/log/user.log like it should, nor anywhere else (grepped). But I do receive it when I listen on the right port with netcat. Shouldn't port 514 already be in use, by the way?
The trace is sent on UDP/514 as it should be. I tried to stick to RFC 3164, but something is still obviously wrong.
I'd really appreciate it if someone could give me a hint to solve this.
Trace: severity: 2 (Critical); facility: 23 (Local Use 7) ==> priority: 186
sh$> sudo nc -ul localhost -p 514
<186>Oct 18 10:36:03 hostname test_trace: | 10:36:03.242995 | CRIT | xxx-MAIN[5473-000] | 00000 | 0008 : main : user_msg
Thank you !
I think I found the problem in my own question: Rsyslog (my syslog server) doesn't listen on UDP/514 correctly.
/etc/rsyslog.conf
$ModLoad imudp
$UDPServerAddress 0.0.0.0
$UDPServerRun 514
If someone has any idea why it still doesn't listen on UDP/514, I'd be really thankful, because I really don't see why.
Thank you again.
The syslog() call writes to /dev/log and the system logger reads this unix domain socket to pick up the message. UDP/514 is for network transmission.
So it is not clear what you want.
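A minimal sketch of the /dev/log route described above (the message is copied from the question; on most Linux systems /dev/log is a datagram socket):
import socket

msg = b"<186>Oct 18 10:36:03 hostname test_trace: user_msg"

# The local logger (rsyslog/journald) reads this Unix domain socket;
# no UDP/514 is involved on this path.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sock.connect("/dev/log")
sock.send(msg)
sock.close()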