I am trying to bind HAProxy on a Fedora 21 instance to port 1883 (for the MQTT protocol).
I get
[ALERT] 215/130943 (926) : Starting proxy mqtt: cannot bind socket [0.0.0.0:1883]
As far as I can tell, nothing else seems to be listening on this port. Does anyone have an idea about what might be the problem?
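Two hedged places to start on Fedora: check whether another process already holds the port, and check whether SELinux is denying the bind (a frequent cause on Fedora when a daemon binds a non-standard port). The commands below assume the stock ss and semanage tools are installed:
ss -lntp | grep 1883
sudo semanage port -l | grep -w 1883
# If SELinux is the blocker, labeling the port so HAProxy may bind it is
# one possible fix (http_port_t is an assumption; adjust to your policy):
sudo semanage port -a -t http_port_t -p tcp 1883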
I have a droplet on DigitalOcean with IPv4 and IPv6 enabled. The droplet is behind a DigitalOcean network firewall with the following rules:
Inbound:
SSH TCP 22 All IPv4, All IPv6
HTTP TCP 80 All IPv4, All IPv6
HTTPS TCP 443 All IPv4, All IPv6
Outbound:
ICMP ICMP All IPv4 All IPv6
All TCP TCP All ports All IPv4 All IPv6
All UDP UDP All ports All IPv4
My understanding and expectation is that this will block all SSH attempts on ports other than 22. However, when checking the sshd unit in the systemd journal, I see the following entries:
2022-12-29 03:00:32 Disconnected from invalid user antonio 43.153.179.44 port 45614 [preauth]
2022-12-29 03:00:32 Received disconnect from 43.153.179.44 port 45614:11: Bye Bye [preauth]
2022-12-29 03:00:31 Invalid user antonio from 43.153.179.44 port 45614
2022-12-29 02:58:37 Disconnected from invalid user desliga 190.129.122.3 port 1199 [preauth]
2022-12-29 02:58:37 Received disconnect from 190.129.122.3 port 1199:11: Bye Bye [preauth]
2022-12-29 02:58:37 Invalid user desliga from 190.129.122.3 port 1199
and many more lines like these, which suggests the firewall is not blocking SSH connections on ports other than 22.
The following graph shows the number of SSH connections to ports other than 22 in the last hour. The connections decreased after enabling the network firewall, but they have not stopped.
Could it be that the Network Firewall of DigitalOcean is broken?
What am I missing?
Is anyone seeing the same situation on their infrastructure?
The ports shown in the log are the remote source ports the connections are coming from on the remote IPs; they do not indicate that those ports are listening on your server or open through the firewall. From your description, the firewall is configured to allow any remote IP and any remote port to connect to your droplet on local ports 22, 80, and 443.
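You can confirm this on the droplet itself. As a minimal sketch (assuming ss is available, as on most modern distros), the following lists established SSH sessions; the local port is always 22, while the peer address carries the ephemeral client port that ends up in the sshd log:
ss -tn state established '( sport = :22 )'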
I have a Windows Server 2012 machine that is live in production and running two .NET websites. Now I want to run my WordPress site. I had configured everything and the WordPress site was working fine before, but all of a sudden I am unable to connect to localhost, and even the wp-admin dashboard does not appear. So I deleted all of that and uninstalled the MySQL connector, MySQL, and the Web Platform Installer too. Even now I'm facing the same problem.
Whenever I try to connect to 127.0.0.1 or localhost I get "This site can't be reached", and if I try to connect with my public IP it says "HTTP Error 404. The requested resource is not found."
My netsh and netstat results are below:
C:\Users\Administrator>netsh http show iplisten
IP addresses present in the IP listen list:
173.208.205.34
173.208.205.35
173.208.205.36
C:\Users\Administrator>netstat -ano
Active Connections
  Proto  Local Address          Foreign Address         State         PID
  TCP    0.0.0.0:135            0.0.0.0:0               LISTENING     1192
  TCP    0.0.0.0:180            0.0.0.0:0               LISTENING     1388
  TCP    0.0.0.0:445            0.0.0.0:0               LISTENING     4
  TCP    0.0.0.0:1433           0.0.0.0:0               LISTENING     2812
  TCP    0.0.0.0:1443           0.0.0.0:0               LISTENING     1388
  TCP    173.208.205.34:80      0.0.0.0:0               LISTENING     4
  TCP    173.208.205.34:139     0.0.0.0:0               LISTENING     4
  TCP    173.208.205.34:443     0.0.0.0:0               LISTENING     4
  TCP    173.208.205.34:443     160.153.147.141:35160   TIME_WAIT     0
  TCP    173.208.205.34:1433    122.176.28.110:2048     ESTABLISHED   28
Additionally, I have checked the hosts file; it has 127.0.0.1 localhost uncommented there.
I have also disabled the firewall, which made no change.
Can anyone tell me what is wrong here?
I notice that there is no wildcard entry in the IP listen list, so HTTP.sys is only listening on the three public addresses. Does your site bind to localhost:80?
To listen on all addresses, the list would normally include 0.0.0.0 (IPv4) and [::] (IPv6).
Alternatively, I think you can add 127.0.0.1 to the IP listen list:
netsh http add iplisten ipaddress=127.0.0.1
Then check whether it is in the list:
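netsh http show iplisten
As a further suggestion rather than a guaranteed fix: changes to the listen list only apply to new bindings, so restarting the HTTP service afterwards (net stop http, then net start http) is worth trying.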
My server setup is nginx connecting directly to a Node.js server (nginx and Node.js run on the same host, and nginx forwards requests to node at 127.0.0.1:8000). The symptom is that sometimes there are 504 entries in the nginx log, and the Node.js log shows no sign of ever receiving the request.
I then enabled TCP logging using iptables, which logs all TCP packets to port 8000. Checking the TCP log, it appears nginx was trying to establish a TCP connection with the Node.js server but never succeeded: it kept retransmitting SYN packets until nginx timed the request out. Here's an example (TCP + nginx log):
13:44:44 sp:48103 dp:8000 SYN
13:44:45 sp:48103 dp:8000 SYN
13:44:47 sp:48103 dp:8000 SYN
13:44:51 sp:48103 dp:8000 SYN
13:44:59 sp:48103 dp:8000 SYN
13:45:15 sp:48103 dp:8000 SYN
13:45:44 nginx 504
During that period the CPU load was light, memory usage was below 50%, and incoming requests were fewer than 50 per minute; other requests were processed normally.
Server is Ubuntu 14.04.2 LTS
Any idea what's going on? Seems like an OS level issue? Thank you in advance.
Check whether something is actually running on TCP port 8000 on your loopback interface. Try some commands like:
lsof -P -n -i tcp:8000
fuser 8000/tcp
ss -4lnt
netstat -4lnt
These should give you some hints on whether something is listening at all. Or it may only be listening on a specific interface/address and not your loopback.
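If something does turn out to be listening, one more hedged possibility: SYNs to the loopback that never get a SYN-ACK can also mean the listener's accept queue is overflowing, in which case the kernel silently drops new connection attempts. Two quick checks:
# Cumulative counters; if these grow during an incident, the node
# process is not calling accept() fast enough
netstat -s | grep -i listen
# For a listening socket, Recv-Q is the current accept-queue depth
# and Send-Q is the configured backlog
ss -lnt 'sport = :8000'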
I am running a Node.js SSH server that spawns a child process to exec code (using require('child_process').spawn) after successful authentication.
The client-server connection works fine on port 22, and the connection is kept alive successfully through the spawned process.
I am now trying to set this up with HAProxy 1.6, forwarding port 22 to a non-privileged port on which the SSH server is listening.
However, when the child process is spawned, the server errors with either Error: write EPIPE or Error: read ECONNRESET.
This suggests to me that a stream or connection is being closed prematurely somewhere between client -> HAProxy -> server.
I have looked at WebSocket and SSH configurations for HAProxy and various keep-alive options, but I cannot get the connection to work.
My configuration:
global
    daemon
    maxconn 10000
    log 127.0.0.1 local0

defaults
    log global
    option tcplog
    option logasap
    timeout connect 500s
    timeout client 5000s
    timeout server 2h
    timeout server-fin 5000s
    timeout client-fin 5000s
    timeout tunnel 1h
    option tcpka

frontend sshd
    bind *:22
    default_backend ssh
    timeout client 2h

backend ssh
    mode tcp
    server ssh2server 127.0.0.1:5000 check port 5000
Any pointers or help would be awesome. Thanks in advance.
EDIT
Running haproxy in debug mode, I see:
00000000:sshd.accept(0004)=0005 from [my ip]
00000000:ssh.srvcls[0005:0006]
00000000:ssh.clicls[0005:0006]
00000000:ssh.closed[0005:0006]
And in the tcplog:
Oct 15 15:15:38 localhost haproxy[16036]: 128.277.13.23:51146 [15/Oct/2016:15:15:38.804] sshd ssh/ssh2server 1/0/+0 +0 -- 1/1/1/1/0 0/0
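One thing that stands out in the configuration above, offered as a hedged observation rather than a confirmed fix: neither defaults nor the frontend declares mode tcp, and HAProxy defaults to HTTP mode, which is a poor match for an SSH byte stream. A minimal TCP-mode sketch of the same setup, with the timeouts reduced to the ones that matter for long-lived sessions:
defaults
    mode tcp              # proxy raw TCP; SSH is not HTTP
    option tcplog
    timeout connect 5s
    timeout client 2h     # long-lived interactive sessions
    timeout server 2h

frontend sshd
    bind *:22
    default_backend ssh

backend ssh
    server ssh2server 127.0.0.1:5000 check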
I am running a Node.js server and using socket.io 1.3.5 for handling WebSocket connections. When the server receives a socket disconnect event with "ping timeout" or "transport close", it disconnects the socket but does not clear all the bindings. It prints the log below:
socket.io:client client close with reason ping timeout
socket.io:socket closing socket - reason ping timeout
socket.io:client ignoring remove for WX-M8GL6SvkQtxXMAAAA
The strange thing I have noticed is that when a socket disconnects due to a network error, the TCP connection from server to browser is not cleared and remains in the ESTABLISHED state forever. I can still see the connections below 12 hours after the disconnect due to ping timeout.
node 29881 user 14u IPv4 38563924 0t0 TCP 10.5.7.33:5100->10.5.6.50:49649 (ESTABLISHED)
node 29881 user 15u IPv4 38563929 0t0 TCP 10.5.7.33:5100->10.5.6.50:49653 (ESTABLISHED)
node 29881 user 16u IPv4 38563937 0t0 TCP 10.5.7.33:5100->10.5.6.60:49659 (ESTABLISHED)
What can I do to remove the stale connections on socket disconnect events ?
I found the solution! According to the issue below, there is a bug in Socket.IO which was fixed in the 1.4.x line:
https://github.com/socketio/socket.io/issues/2118
I have upgraded to v1.4.5 and the TCP connections are now correctly closed after the Socket.IO connection closes.
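For reference, the upgrade itself is just a package bump, pinned here to the version mentioned above:
npm install socket.io@1.4.5 --save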