Can't start HAProxy on Cygwin

I'm trying to start up HAProxy on Cygwin. When I do so, I get the following response:
$ /usr/local/sbin/haproxy -f /usr/local/sbin/haproxy.cfg
[ALERT] 313/180006 (4008) : cannot change UNIX socket ownership
(/tmp/haproxy.socket). Aborting.
[ALERT] 313/180006 (4008) : [/usr/local/sbin/haproxy.main()]
Some protocols failed to start
their listeners! Exiting.
It looks like it's due to the following line in my config file; when I remove this line, it starts up:
stats socket /tmp/haproxy.socket uid haproxy mode 770 level admin
The entire config:
global
    log 127.0.0.1 local0 info
    stats socket /tmp/haproxy.socket uid haproxy mode 770 level admin
    maxconn 1000
    daemon

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 1000
    timeout connect 5s
    timeout client 120s
    timeout server 120s

listen rabbitmq_local_cluster 127.0.0.1:5555
    mode tcp
    balance roundrobin
    server rabbit_0 127.0.0.1:5673 check inter 5000 rise 2 fall 3
    server rabbit_1 127.0.0.1:5674 check inter 5000 rise 2 fall 3

listen private_monitoring 127.0.0.1:8100
    mode http
    option httplog
    stats enable
    stats uri /stats
    stats refresh 5s
Any ideas would be appreciated, Thanks!

Simple answer, as I expected. My user "haproxy", which is referenced in the problematic line:
stats socket /tmp/haproxy.socket uid haproxy mode 770 level admin
did not have the necessary permissions on the local machine. Once this was set up, it started up fine.
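For anyone hitting the same alert, a minimal sketch of the check (assuming a typical Linux-style environment; Cygwin maps Windows accounts, so user creation there goes through the Windows account tools rather than useradd):
# verify that the account named in the 'uid' option of the stats socket line exists
id haproxy
# on a plain Linux host one would create it roughly like this (illustrative only)
useradd --system --no-create-home haproxy
# haproxy must also be started with enough privilege to chown the socket in /tmp,
# which usually means starting it as root and letting it drop privileges afterwards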

Nice to know that it still works on Cygwin; what version of HAProxy is this? I did not know that UNIX sockets were supported on Windows, BTW. Or maybe they're emulated via named pipes?

Related

python3 requests hangs when accessing port 25564 or higher on Ubuntu 20.04 LTS

I have a simple program which creates a simple web server at localhost with a random port between 10000 and 65535 (the highest unsigned 16-bit integer). You can also specify a port, but if you don't know which port it is running on, it's hard to find out.
I have written a little helper program that should show every port that is being listened on.
The helper:
import requests

for port in range(10000, 65535):
    try:
        print(port, requests.get("http://localhost:{}".format(port)))
    except Exception as e:
        print("{}: {}".format(type(e).__name__, port), end="\r")
I expect it to show ConnectionError: 10000, counting up to 65535 and showing any found connections. But it always hangs on port 25565, last showing the message for port 25564. And if I do a completely unrelated request to 'http://localhost:25564' or any higher port, it hangs as well.
The script hangs on port 25565 when I start a server on 25564.
Normally, if a port has no server listening, it will immediately refuse the connection and give a ConnectionError. Above port 25564 it doesn't; it just waits until I stop the script.
This behaviour seems completely random as port 25564 is unassigned according to speedguide.net.
Port 25565 is the standard MySQL and Minecraft Dedicated Server port (according to speedguide.net), neither of which I have running on my machine. Therefore the hang still seems random.
I'm using python3 on Ubuntu 20.04 LTS.
Interestingly it didn't fail on my laptop with Linux Mint 21...
As @root requested in the comments, here is the output of nmap localhost:
Starting Nmap 7.80 ( https://nmap.org ) at 2022-09-25 11:42 CEST
Host is up (0.00014s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
80/tcp open http
631/tcp open ipp
8080/tcp open http-proxy
9050/tcp open tor-socks
Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
Just a little note: port 80/tcp is listened on by apache2 with the "You are an idiot" flash animation.
As per the comments, you can try something like this:
You will note that I have added the timeout parameter to the request. Its unit is seconds. The default timeout is None, which means it will wait (hang) until the connection is closed.
import requests

for port in range(10_000, 65_535):
    try:
        r = requests.get(f'http://localhost:{port}', timeout=5)
        print(port)
    except Exception as e:
        print(f'{type(e).__name__}, {port}', end='\r')
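A follow-up note, hedged: an immediate ConnectionError corresponds to the OS actively refusing the connection (a TCP reset), while an indefinite hang usually means the SYN packets are being silently dropped, for example by a firewall rule, rather than rejected. Also note that the nmap run above only probes nmap's default top-1000 ports (hence "Not shown: 996 closed ports"), so it says nothing about the 25564+ range. A sketch of checks one could run on the Ubuntu box, assuming ufw/iptables are in play:
sudo ufw status verbose              # any DENY/DROP policy that covers high ports?
sudo iptables -L -n | grep -i drop   # raw rules that drop instead of reject
ss -tlnp                             # everything actually listening right now
nmap -p 25560-25570 localhost        # probe the suspicious range directly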

rsyslog doesn't open tcp listener

I am configuring rsyslog on a Linux server and want to configure it with TLS secure transport. I have followed a lot of documentation, including the official rsyslog guide (https://www.rsyslog.com/doc/v8-stable/tutorials/tls.html). The thing is that I can see the UDP port listening, but the TCP port is not, and I get no errors from the configuration validation, so I am blind as to why the TCP port is not listening. I have tried low and high ports and nothing. I am attaching the configuration file that I used last time and the configuration validation output. Thanks for any help!
module(load="imuxsock")
module(
    load="imtcp"
    StreamDriver.Name="gtls"
    StreamDriver.Mode="1"
    StreamDriver.Authmode="anon"
)
input(type="imtcp" port="11514")
module(load="imudp")
input(type="imudp" port="1514")
global(
    DefaultNetstreamDriver="gtls"
    DefaultNetstreamDriverCAFile="/var/ossec/agentless/rsyslog/ca.pem"
    DefaultNetstreamDriverCertFile="/var/ossec/agentless/rsyslog/server/cert.pem"
    DefaultNetstreamDriverKeyFile="/var/ossec/agentless/rsyslog/server-key.pem"
)
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$RepeatedMsgReduction on
$FileOwner syslog
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022
$PrivDropToUser syslog
$PrivDropToGroup syslog
$WorkDirectory /var/spool/rsyslog
$IncludeConfig /etc/rsyslog.d/*.conf
And validation:
# rsyslogd -N6
rsyslogd: version 8.16.0, config validation run (level 6), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.
Netstat output:
# netstat -na |grep 514
udp 0 0 0.0.0.0:1514 0.0.0.0:*
udp6 0 0 :::1514 :::*
Thanks for the answers. The problem apparently was not in the rsyslog configuration but in Wazuh, the software that was trying to receive the logs from rsyslog. What I did was change the configuration in Wazuh's ossec.conf and the open port, creating another remote section, one with the secure value and one with the syslog value, and it worked. Thanks for all the support, as always!!! Hugs and take care
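For anyone debugging the same symptom, a quick hedged check of whether rsyslog itself ever bound the TCP port, independent of whatever application consumes the logs, is to look at the socket table and at the daemon's own messages (a sketch for a systemd host, using the port from the config above):
ss -tlnp | grep 11514                                     # should list rsyslogd if imtcp bound the port
journalctl -u rsyslog -b | grep -iE 'imtcp|gtls|error'    # TLS driver problems are logged here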

What is etcd looking for in 127.0.0.1:4001?

I'm trying to set up a test cluster using etcd 2.3.7 installed from the CentOS RPM on CentOS 7.1. On Loader 1 I executed:
etcdctl member add loader2 http://10.11.51.231:2380
And received a response which confirmed the operation completed successfully.
Similarly:
etcdctl member add loader3 http://10.11.51.231:2380
with all default settings, and here's what I see:
Loader 1 10.11.51.166
systemctl status etcd -ln1
etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled)
Active: active (running) since Sun 2017-02-19 14:33:18 IST; 28min ago
Main PID: 19009 (etcd)
CGroup: /system.slice/etcd.service
└─19009 /usr/bin/etcd --name=default --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://localhost:2379
Feb 19 15:02:03 loader3 etcd[19009]: cannot get the version of member a4803061db803edc (Get http://10.11.51.166:2380/version: dial tcp 10.11.51.166:2380: getsockopt: connection refused)
Tried to see cluster health:
etcdctl --debug cluster-health
Cluster-Endpoints: http://127.0.0.1:4001, http://127.0.0.1:2379
cURL Command: curl -X GET http://127.0.0.1:4001/v2/members
cURL Command: curl -X GET http://127.0.0.1:2379/v2/members
member ce2a822cea30bfca is unhealthy: got unhealthy result from http://localhost:2379
member da05b63349d818dc is unreachable: no available published client urls
cluster is unhealthy
Note how this ignores the two nodes added previously, but sends requests to a random port on localhost...
Loader 2 10.11.51.174
At first this machine started OK, but after I saw there was something wrong with Loader 1, I tried adding Loader 1 as a member from this machine, and now I see the same picture here too: it tries to query port 4001, where nobody responds. On all machines:
netstat -tupln | grep etcd
tcp 0 0 127.0.0.1:7001 0.0.0.0:* LISTEN 4507/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 4507/etcd
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 4507/etcd
Nobody listens on 4001...
Loader 3 10.11.51.231
On this loader I didn't try to add new members. So it looks like this:
etcdctl --debug cluster-health
Cluster-Endpoints: http://127.0.0.1:4001, http://127.0.0.1:2379
cURL Command: curl -X GET http://127.0.0.1:4001/v2/members
cURL Command: curl -X GET http://127.0.0.1:2379/v2/members
member ce2a822cea30bfca is healthy: got healthy result from http://localhost:2379
cluster is healthy
In other words it still sends requests to the random port, but this time it isn't bothered by the fact that nobody replied there...
Below is the contents of the configuration files:
cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
And:
cat /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#
#[profiling]
#ETCD_ENABLE_PPROF="false"
So... what is going on? The error messages given by etcd are the typical mindless nonsense produced by Go built-ins, and the HTTP server that etcd uses is, again, the Go built-in junk that produces non-standard and absolutely worthless replies. So I cannot understand what (if anything) was misconfigured or missing.
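A hedged note for readers on the 4001 "random port": it is not random. 4001 was etcd's legacy client port, and when no endpoint is specified, etcdctl 2.x falls back to the default endpoint list http://127.0.0.1:4001, http://127.0.0.1:2379, which is exactly what the --debug output above prints as Cluster-Endpoints. Below is a sketch of pointing etcdctl at the configured client URL, and of the etcd.conf values that would have to move off localhost for the three Loaders to actually reach each other (addresses are the ones from the question, shown for Loader 1 only and purely illustrative):
etcdctl --endpoints http://127.0.0.1:2379 cluster-health
# /etc/etcd/etcd.conf on Loader 1 (repeat per node with that node's own address)
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.11.51.166:2379"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.11.51.166:2380"
With the current config listening only on localhost, the "connection refused" on 10.11.51.166:2380 in the log above is expected: no peer URL is reachable from the other machines.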

HAProxy 1.6 configuration Node.js ssh server child process

I am running a Node.js SSH server that spawns a child process to exec code (using require('child_process').spawn) after successful authentication.
The client-server connection works fine on port 22, and the connection is kept alive successfully through the spawned process.
I am now trying to set this up with HAProxy 1.6, forwarding port 22 to a non-privileged port on which the SSH server is listening.
However, when the child process is spawned the server either errors Error: write EPIPE or Error: read ECONNRESET.
This suggests to me there is an issue with a prematurely closed stream or connection somewhere between the client -> HAProxy -> server?
I am looking at websocket configurations and ssh configurations for HAProxy and various keep alive options. However I cannot get the connection to work.
My configuration:
global
    daemon
    maxconn 10000
    log 127.0.0.1 local0

defaults
    log global
    option tcplog
    option logasap
    timeout connect 500s
    timeout client 5000s
    timeout server 2h
    timeout server-fin 5000s
    timeout client-fin 5000s
    timeout tunnel 1h
    option tcpka

frontend sshd
    bind *:22
    default_backend ssh
    timeout client 2h

backend ssh
    mode tcp
    server ssh2server 127.0.0.1:5000 check port 5000
Any pointers or help would be awesome. Thanks in advance.
EDIT
Running haproxy in debug mode I get:
00000000:sshd.accept(0004)=0005 from [my ip]
00000000:ssh.srvcls[0005:0006]
00000000:ssh.clicls[0005:0006]
00000000:ssh.closed[0005:0006].
In the tcplog:
Oct 15 15:15:38 localhost haproxy[16036]: 128.277.13.23:51146 [15/Oct/2016:15:15:38.804] sshd ssh/ssh2server 1/0/+0 +0 -- 1/1/1/1/0 0/0
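A hedged observation rather than a confirmed fix: in the config above only the backend declares mode tcp, and neither defaults nor the frontend does, so it may be worth making the whole path explicitly TCP (SSH has to pass through as an opaque byte stream) and leaning on timeout tunnel for the long-lived session. A minimal passthrough sketch, keeping the addresses from the question:
frontend sshd
    mode tcp
    bind *:22
    timeout client 2h
    default_backend ssh

backend ssh
    mode tcp
    timeout tunnel 2h
    server ssh2server 127.0.0.1:5000 check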

AWS - EC2 - MongoDB replica set time sync issue - NTP - replication lag

We are encountering clock drift issues with our MongoDB replica set running on AWS. This only seemed to start happening recently, after we added additional data to the set; before then we did not really notice it unless the system was under heavy load. The following error is logged in the mongod.log file sporadically, even when the system is not under load.
To test this we have isolated a set of machines with the same dataset that are not in use by our web application, though the error is still occurring:
2014-12-12T13:33:51.333+0000 [rsBackgroundSync] changing sync target
because current sync target's most recent OpTime is Dec 12 13:32:42:c
which is more than 30 seconds behind member mongo1:27017 whose most
recent OpTime is 1418391230
From the above, the timestamps show that one of the MongoDB replica set members is over a minute behind. The worst we have seen is 12 minutes out of sync.
This error in turn causes replication lag, and we receive a notification about it from the Mongo Monitoring Service, although it does correct itself.
The setup is 3 x r3.xlarge AWS Linux instances, one in each availability zone of the eu-west-1 region. The machines have been set up using the Mongo recommended settings with a RAID array and the CloudFormation scripts provided by Mongo. The data is around 4GB in size.
We think the issue is related to NTP sync; by default on the Amazon Linux AMI, the ntpd service is configured to use a pool of Amazon NTP servers from pool.ntp.org.
To try and rule this out, we set up our own NTP server on AWS for the MongoDB servers to sync to. The issue still occurred, so we changed the maxpoll and minpoll for the ntpd service on the Mongo machines to sync the time every 16 seconds from the NTP server, but the error is still occurring.
We increased the MongoDB OpLog size as well to see if that would make any difference but it didn’t.
Does anyone else encounter this type of issue? Is there something we are missing?
Cheers,
Colin.
ps -ef |grep ntp;
mongodb1
ntp 5163 1 0 Dec11 ? 00:00:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
ec2-user 15865 15839 0 09:31 pts/2 00:00:00 grep ntp
mongodb2
ntp 4834 1 0 Dec11 ? 00:00:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
ec2-user 19056 19029 0 09:31 pts/0 00:00:00 grep ntp
mongodb3
ntp 5795 1 0 Dec11 ? 00:00:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
ec2-user 26199 26173 0 09:31 pts/0 00:00:00 grep ntp
cat /etc/ntp.conf;
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
driftfile /var/lib/ntp/drift
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict -6 ::1
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.amazon.pool.ntp.org iburst dynamic
#server 1.amazon.pool.ntp.org iburst dynamic
#server 2.amazon.pool.ntp.org iburst dynamic
#server 3.amazon.pool.ntp.org iburst dynamic
server time-server.domain.com iburst
#broadcast 192.168.1.255 autokey # broadcast server
#broadcastclient # broadcast client
#broadcast 224.0.1.1 autokey # multicast server
#multicastclient 224.0.1.1 # multicast client
#manycastserver 239.255.254.254 # manycast server
#manycastclient 239.255.254.254 autokey # manycast client
# Enable public key cryptography.
#crypto
includefile /etc/ntp/crypto/pw
# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys
# Specify the key identifiers which are trusted.
#trustedkey 4 8 42
# Specify the key identifier to use with the ntpdc utility.
#requestkey 8
# Specify the key identifier to use with the ntpq utility.
#controlkey 8
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
# Enable additional logging.
logconfig =clockall =peerall =sysall =syncall
# Listen only on the primary network interface.
interface listen eth0
interface ignore ipv6
ntpq -npcrv;
remote refid st t when poll reach delay offset jitter
==============================================================================
*172.31.14.137 91.*.*.* 3 u 557 1024 377 1.121 -0.264 0.161
associd=0 status=0615 leap_none, sync_ntp, 1 event, clock_sync,
version="ntpd 4.2.6p5#1.2349-o Sat Mar 23 00:37:31 UTC 2013 (1)",
processor="x86_64", system="Linux/3.14.23-22.44.amzn1.x86_64", leap=00,
stratum=4, precision=-23, rootdelay=23.597, rootdisp=109.962,
refid=172.31.14.137,
reftime=d83a757a.175b5fa1 Tue, Dec 16 2014 9:10:18.091,
clock=d83a77a7.82431efa Tue, Dec 16 2014 9:19:35.508, peer=27361,
tc=10, mintc=3, offset=-0.264, frequency=-13.994, sys_jitter=0.000,
clk_jitter=0.358, clk_wander=0.053
After upgrading to MongoDB 3 using the WiredTiger storage engine we do not see this issue any more.
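A small hedged addendum for readers who want to reproduce the 16-second polling mentioned above: ntpd poll intervals are powers of two seconds, so the relevant ntp.conf line and a quick way to watch the resulting offset on each replica set member look roughly like this (server name as in the config above):
server time-server.domain.com iburst minpoll 4 maxpoll 4   # 2^4 = 16 second polls
# then, on each member, check that the offset stays in the low milliseconds
ntpq -pn
ntpstat    # if installed, gives a one-line summary of sync state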

Resources