rsyslog doesn't open TCP listener - Linux

I am configuring rsyslog on a Linux server and want to set it up with TLS secure transport. I have followed several documents, including the official rsyslog guide (https://www.rsyslog.com/doc/v8-stable/tutorials/tls.html). The problem is that I can see the UDP port listening, but the TCP port is not, and I get no errors from configuration validation, so I am blind as to why the TCP port is not listening. I have tried both low and high ports with no luck. I am attaching the configuration file I used last time and the configuration validation output. Thanks for any help!
module(load="imuxsock")
module(
load="imtcp"
StreamDriver.Name="gtls"
StreamDriver.Mode="1"
StreamDriver.Authmode="anon"
)
input(type="imtcp" port="11514")
module(load="imudp")
input(type="imudp" port="1514")
global(
DefaultNetstreamDriver="gtls"
DefaultNetstreamDriverCAFile="/var/ossec/agentless/rsyslog/ca.pem"
DefaultNetstreamDriverCertFile="/var/ossec/agentless/rsyslog/server/cert.pem"
DefaultNetstreamDriverKeyFile="/var/ossec/agentless/rsyslog/server-key.pem"
)
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$RepeatedMsgReduction on
$FileOwner syslog
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022
$PrivDropToUser syslog
$PrivDropToGroup syslog
$WorkDirectory /var/spool/rsyslog
$IncludeConfig /etc/rsyslog.d/*.conf
And validation:
# rsyslogd -N6
rsyslogd: version 8.16.0, config validation run (level 6), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.
Netstat output:
# netstat -na |grep 514
udp 0 0 0.0.0.0:1514 0.0.0.0:*
udp6 0 0 :::1514 :::*
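Worth noting for anyone comparing: rsyslogd -N6 only validates syntax, so a listener that fails at runtime (for example because gtls cannot load the certificates) will not show up there. A quick runtime check looks roughly like this (a sketch only; the unit name and log location may differ by distribution):
journalctl -u rsyslog --no-pager | grep -iE 'imtcp|gtls|error'
ss -tlnp | grep 11514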

Thanks for the answers. The problem apparently was not in the rsyslog configuration but in Wazuh, the software that was receiving the logs from rsyslog. What I did was change Wazuh's ossec.conf and the open port, creating a second <remote> block so there is one with the secure connection value and one with the syslog value, and it worked. Thanks for all the support, as always!! Hugs and take care.
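For reference, the change described above would look roughly like this in Wazuh's /var/ossec/etc/ossec.conf (a sketch only; the ports and allowed-ips below are placeholders, not the actual values used):
<remote>
  <connection>secure</connection>
  <port>1514</port>
</remote>
<remote>
  <connection>syslog</connection>
  <port>11514</port>
  <protocol>tcp</protocol>
  <allowed-ips>192.168.0.0/24</allowed-ips>
</remote>
After editing, the Wazuh manager has to be restarted for the new <remote> block to take effect.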

Related

Apache can't start "could not bind to address [::]:443" though no process is using it, and netcat can open it

My version of Apache:
Server version: Apache/2.4.6 (CentOS)
Server built: Apr 20 2018 18:10:38
When I run the command lsof -i :443 it returns nothing, but if I try to run Apache (directly by running httpd; I verified with ps aux that there was no previous httpd/apache process already running) I get the error:
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:443
However, if I try to run a netcat process on 443 (nc 0.0.0.0 -l 443), it does open and I can send data.
I'm a bit lost on what the problem could be.
Found it:
Listen 443 was present twice among the different Apache configuration files.
It's a pity Apache does not have a more explicit error/warning message (e.g. "option defined twice").
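In case it helps anyone else, a quick way to spot duplicate Listen directives is a recursive grep over the configuration tree (the paths below assume the standard CentOS httpd layout):
grep -Rn '^[[:space:]]*Listen' /etc/httpd/conf /etc/httpd/conf.d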
It seems another process is using port 443 on your server. Check with:
netstat -anp | grep 443
The output will look like:
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN
Stop whatever is bound to port 443, then start Apache:
systemctl start httpd.service

Unable to tell what port Logstash is bound to or listening on when started normally

My logstash version is:
# /opt/logstash/bin/logstash --version
logstash 2.2.4
It is configured to receive input on port 5044 according to the Beats input file:
/etc/logstash/conf.d/02-beats-input.conf
input {
beats {
port => 5044
ssl => false
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
}
}
I have set ssl to false as I am not using it.
When I start the logstash service normally with systemctl, it starts, and checking the status confirms it is running:
systemctl status logstash
● logstash.service - LSB: Starts Logstash as a daemon.
Loaded: loaded (/etc/rc.d/init.d/logstash)
Active: active (exited) since Mon 2016-07-18 19:14:51 BST; 15h ago
Docs: man:systemd-sysv-generator(8)
Process: 19965 ExecStop=/etc/rc.d/init.d/logstash stop (code=exited, status=0/SUCCESS)
Process: 19970 ExecStart=/etc/rc.d/init.d/logstash start (code=exited, status=0/SUCCESS)
...
logstash started
The problem is that logstash does not seem to be receiving input on port 5044; hosts sending with Filebeat encounter:
single.go:126: INFO Connecting error publishing events (retrying): dial tcp 192.72.0.92:5044: getsockopt: connection refused
when I check the port
# netstat -an | grep 5044
I get nothing. So even though logstash is running, I can't tell what port it is bound to and listening on.
Also, the firewall has been stopped temporarily to investigate this.
The strange thing is that if I run logstash in debug mode like so:
# ./logstash --debug -f /etc/logstash/conf.d/02-beats-input.conf
I can see
# netstat -an | grep 5044
tcp6 0 0 :::5044 :::* LISTEN
tcp6 0 0 192.72.0.92:5044 192.168.36.70:53720 ESTABLISHED
tcp6 0 0 192.72.0.92:5044 192.72.0.90:45980 ESTABLISHED
tcp6 0 0 192.72.0.92:5044 192.72.0.90:45975 ESTABLISHED
tcp6 0 0 192.72.0.92:5044 192.72.0.90:45976 ESTABLISHED
or
# lsof -i :5044
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 15136 root 7u IPv6 7191510 0t0 TCP *:lxi-evntsvc (LISTEN)
java 15136 root 33u IPv6 7192379 0t0 TCP hostname:lxi-evntsvc->192.72.0.90:45975 (ESTABLISHED)
and the host sending with Filebeat can connect:
output.go:87: DBG output worker: publish 7 events
2016/07/19 10:02:08.017890 client.go:146: DBG Try to publish 7 events to logstash with window size 10
2016/07/19 10:02:08.038579 client.go:124: DBG 7 events out of 7 events sent to logstash. Continue sending ...
2016/07/19 10:02:08.038615 single.go:135: DBG send completed
Please help point out what I may be doing wrong with this configuration. Thanks
Based on the hint provided by @LiGhTx117, I think the issue was with the startup script used by logstash, /etc/init.d/logstash, which has the following variables among others:
LS_USER=logstash
LS_GROUP=logstash
LS_HOME=/var/lib/logstash
LS_LOG_DIR=/var/log/logstash
LS_LOG_FILE="${LS_LOG_DIR}/$name.log"
LS_CONF_DIR=/etc/logstash/conf.d
The ownership and permissions on these seem to have been the issue.
I ensured that the directories were recursively accessible to the user logstash as well as the group logstash, and then I also ensured that the log file logstash.log was writable by the user/group logstash.
Then I restarted logstash.
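In concrete terms the fix amounted to something like the following (a sketch only, using the paths from the init script above):
chown -R logstash:logstash /var/lib/logstash /var/log/logstash
chown logstash:logstash /var/log/logstash/logstash.log
systemctl restart logstash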

time_wait in logstash server

I have set up logstash, kibana, and elasticsearch on the logstash-server, and logstash-forwarder on the client-servers. I have set up five client-servers where logstash-forwarder is installed. It was working fine when there were two or three client-servers, but after adding more servers I was unable to see the logs in Kibana. Is this because the client-servers are sending too much data? I am using port 5000 for sending and receiving the logs. Because there were no logs, I used netstat -an to see what is happening. From the command I see results as follows:
xxx.xx.xxx.xx => logstash-server,
yyy.yy.yyy.yy => client-server
tcp 0 0 ::ffff:xxx.xx.xxx.xx:5000 ::ffff:yyy.yy.yyy.yy:44693 TIME_WAIT
tcp 0 0 ::ffff:xxx.xx.xxx.xx:5000 ::ffff:xxx.xx.xxx.xx:9300 TIME_WAIT
tcp 0 0 ::ffff:xxx.xx.xxx.xx:5000 ::ffff:yyy.yy.yyy.yy:48026 TIME_WAIT
tcp 0 0 ::ffff:xxx.xx.xxx.xx:5000 ::ffff:yyy.yy.yyy.yy:9300 TIME_WAIT
tcp 0 0 ::ffff:xxx.xx.xxx.xx:5000 ::ffff:yyy.yy.yyy.yy:49719 TIME_WAIT
I have already googled it and haven't found any solution so far. My question is: how do I remove these TIME_WAIT connections, or kill them and start accepting the logs from the servers again? Is there any way I can optimize this?
Well, I am running logstash-1.4.2 and elasticsearch-1.2.1. While debugging the problem, I ran the following command on a client-server: /opt/logstash-forwarder/bin/logstash-forwarder.sh -config /etc/logstash-forwarder (it may be different for you). The problem I see so far is that the SSL certificate has expired. I regenerated the SSL key, configured logstash again, and now see problems like:
Failure connecting to xxx.xxx.xxx.xxx: dial tcp xx.xxx.xxx.xxx:5000: i/o timeout,
and
Read error looking for ack: EOF
Why I am getting these may be an additional question; it may be a bug.
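For anyone hitting the same expired-certificate problem, regenerating the self-signed certificate looks roughly like this (a sketch only; the paths, validity period, and subject are placeholders, and if the forwarders connect by IP address the certificate generally needs that IP in a subjectAltName):
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout /etc/pki/tls/private/logstash-forwarder.key \
  -out /etc/pki/tls/certs/logstash-forwarder.crt \
  -subj '/CN=logstash-server'
The new certificate then has to be copied to every client-server before restarting logstash and logstash-forwarder.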

Can't start HAProxy on Cygwin

I'm trying to start up HAProxy on Cygwin. When I do so, I get the following response:
$ /usr/local/sbin/haproxy -f /usr/local/sbin/haproxy.cfg
[ALERT] 313/180006 (4008) : cannot change UNIX socket ownership
(/tmp/haproxy.socket). Aborting.
[ALERT] 313/180006 (4008) : [/usr/local/sbin/haproxy.main()]
Some protocols failed to start
their listeners! Exiting.
It looks like it's due to the following line in my config file; when I remove it, HAProxy starts up:
stats socket /tmp/haproxy.socket uid haproxy mode 770 level admin
The entire config:
global
log 127.0.0.1 local0 info
stats socket /tmp/haproxy.socket uid haproxy mode 770 level admin
maxconn 1000
daemon
defaults
log global
mode tcp
option tcplog
option dontlognull
retries 3
option redispatch
maxconn 1000
timeout connect 5s
timeout client 120s
timeout server 120s
listen rabbitmq_local_cluster 127.0.0.1:5555
mode tcp
balance roundrobin
server rabbit_0 127.0.0.1:5673 check inter 5000 rise 2 fall 3
server rabbit_1 127.0.0.1:5674 check inter 5000 rise 2 fall 3
listen private_monitoring 127.0.0.1:8100
mode http
option httplog
stats enable
stats uri /stats
stats refresh 5s
Any ideas would be appreciated, thanks!
Simple answer, as I expected. My user "haproxy", which is referenced in the problematic line:
stats socket /tmp/haproxy.socket uid haproxy mode 770 level admin
did not have the necessary permissions on the local machine. Once this was set up, it started up fine.
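For anyone hitting the same alert, a quick sanity check (a sketch, assuming a standard Cygwin shell) is to confirm the user exists and that the socket ends up owned by it after startup:
id haproxy
ls -ld /tmp
ls -l /tmp/haproxy.socket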
Nice to know that it still works on Cygwin; what version of HAProxy is this? I did not know that UNIX sockets were supported on Windows, BTW. Or maybe they're emulated via named pipes?

Linux - Syslog client

In order to develop a cross-platform syslog client, I am trying to do it without using the syslog() call. I am developing this client in C++ and, for now, testing on Linux. The old syslog client that I am replacing was working perfectly fine with the syslog() call.
For now, it simply doesn't work. The trace is not in /var/log/user.log like it should be, nor anywhere else (grepped). But I do receive it when I listen on the right port with netcat. Shouldn't port 514 already be in use, by the way?
The trace is sent, as it should be, on UDP/514. I tried to stick to RFC 3164 but something is still obviously wrong.
I'd really appreciate it if someone could give me a hint to solve this.
Trace: severity: 2 (Critical); facility: 23 (Local Use 7) ==> priority: 186
sh$> sudo nc -ul localhost -p 514
<186>Oct 18 10:36:03 hostname test_trace: | 10:36:03.242995 | CRIT | xxx-MAIN[5473-000] | 00000 | 0008 : main : user_msg
Thank you!
I think I found the problem in my own question: rsyslog (my syslog server) isn't listening on UDP/514 correctly.
/etc/rsyslog.conf
$ModLoad imudp
$UDPServerAddress 0.0.0.0
$UDPServerRun 514
If someone has any idea why it still doesn't listen on UDP/514, I'd be really thankful, because I really don't see why.
Thank you again.
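For what it's worth, the same listener in rsyslog's newer syntax looks like this (a sketch only; the legacy directives above should behave the same if imudp actually loads), and ss can confirm the socket after a restart:
module(load="imudp")
input(type="imudp" port="514" address="0.0.0.0")
ss -ulnp | grep ':514'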
The syslog() call writes to /dev/log, and the system logger reads this UNIX domain socket to pick up the message. UDP/514 is for network transmission.
So it is not clear what you want.
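To make the distinction concrete, both paths can be exercised from a shell (a sketch using the util-linux logger tool; option spellings vary between versions):
# local path: writes to /dev/log, read by the system logger (imuxsock)
logger -p local7.crit "test via /dev/log"
# network path: sends an RFC 3164 datagram to UDP/514 (needs an imudp listener)
logger -d -n 127.0.0.1 -P 514 -p local7.crit "test via UDP/514"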
