Writing log data to syslog using log4j - linux

I'm unable to write log messages into syslog. Any help would be great.
Here is my simple log4j program
import org.apache.log4j.Logger;

import java.io.*;
import java.sql.SQLException;
import java.util.*;

public class log4jExample
{
    /* Get actual class name to be printed on */
    static Logger log = Logger.getLogger(log4jExample.class.getName());

    public static void main(String[] args) throws IOException, SQLException
    {
        log.error("Hello this is an error message");
        log.info("Hello this is an info message");
        log.fatal("Fatal error message");
    }
}
My syslog properties file
# configure the root logger
log4j.rootLogger=INFO, SYSLOG
# configure Syslog facility LOCAL1 appender
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.threshold=WARN
log4j.appender.SYSLOG.syslogHost=localhost
log4j.appender.SYSLOG.facility=LOCAL4
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.conversionPattern=[%p] %c:%L - %m%n

Add the following lines to the rsyslog.conf file:
$ModLoad imudp
$UDPServerRun 514
It worked for me. You need to restart rsyslog after the modifications.

The answer from @Sandeep above is the correct one, but it's from 2012 so I wanted to expand a little bit for folks who are using more recent setups. For instance, on Ubuntu 18.04 the /etc/rsyslog.conf file now has data near the top of the file that looks like this:
#################
#### MODULES ####
#################
module(load="imuxsock") # provides support for local system logging
#module(load="immark") # provides --MARK-- message capability
# provides UDP syslog reception
#module(load="imudp")
#input(type="imudp" port="514")
# provides TCP syslog reception
#module(load="imtcp")
#input(type="imtcp" port="514")
Uncommenting the two UDP lines and then running sudo service rsyslog restart worked for me. The Java Log4J Syslog appender (https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/net/SyslogAppender.html) expects syslog to be listening on UDP port 514 on localhost.
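To see concretely what the appender does on the wire, here is a small self-contained Python sketch (my own addition, not part of the original answer) that plays both sides: it binds a UDP socket the way rsyslog's imudp does (on an ephemeral port rather than 514, so it runs without root) and sends itself an RFC 3164-style datagram like the one SyslogAppender emits.

```python
import socket

# "rsyslog" side: bind a UDP socket on loopback. Port 0 picks a free
# ephemeral port so this runs without root; real syslog uses 514.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]

# "SyslogAppender" side: a syslog datagram is just "<PRI>message".
# PRI = facility * 8 + severity; LOCAL4 is facility 20, WARNING is 4.
pri = 20 * 8 + 4
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(("<%d>[WARN] log4jExample: test message" % pri).encode(),
              ("127.0.0.1", port))

data, _ = server.recvfrom(1024)
print(data.decode())  # <164>[WARN] log4jExample: test message
server.close()
client.close()
```

If nothing is bound on UDP 514, the appender's datagrams are silently dropped (UDP has no delivery errors), which is why the appender seems to "do nothing" until rsyslog's imudp is enabled.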
As a potential further security improvement, you may also consider binding to the loopback address so port 514 isn't visible external to the host if you don't need it to be:
input(type="imudp" port="514" address="127.0.0.1")
It's also possible to make this update without having to touch the existing /etc/rsyslog.conf file; instead you can add a new conf file under the /etc/rsyslog.d/ directory, e.g. /etc/rsyslog.d/10-open-udp-port.conf, that only contains these lines:
module(load="imudp")
input(type="imudp" port="514" address="127.0.0.1")
And then restart the rsyslog daemon as described above.
To see whether or not the rsyslog daemon is actively listening on the UDP port 514, I found this command useful as well: sudo lsof -iUDP:514 -nP -c rsyslogd -a (show listeners on port UDP 514 whose command is "rsyslogd").

Related

log4J connection to syslog server causes startup errors when unavailable

I have a Tomcat application using log4j to log to a syslog server. If the syslog server is unavailable because of network issues, etc., the application will not start, logging an error. I suspect that when the syslog server is not available, the application will restart. Is there any way to have the application ignore the syslog error? The log configuration is:
Thanks in advance

How to use rsyslog to ship non-syslog files to remote server?

I've been following this rsyslog/logstash article to try to ship my applications' log files to a remote server, via rsyslog. From that page, here are the steps I've taken. Note that firewall and SELinux are off on both client (VM sending logs) and server (VM receiving logs). I have proven via netcat utility that I can send packets between client and server.
On my client side, I've configured my /etc/rsyslog.conf file like so:
# Load the imfile module
module(load="imfile" PollingInterval="10")
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
# Debugging
$DebugFile /var/log/rsyslog-debug.log
$DebugLevel 2
# General configuration
$RepeatedMsgReduction off
$WorkDirectory /var/spool/rsyslog
$ActionQueueFileName mainqueue
$ActionQueueMaxDiskSpace 500M
$ActionQueueSaveOnShutdown on
$ActionQueueType LinkedList
$ActionResumeRetryCount -1
# Template for non-JSON logs; just sends the message wholesale with
# extra furniture.
template(name="textLogTemplate"
         type="list") {
    constant(value="{ ")
    constant(value="\"type\":\"")
    property(name="programname")
    constant(value="\", ")
    constant(value="\"host\":\"")
    property(name="hostname")
    constant(value="\", ")
    constant(value="\"timestamp\":\"")
    property(name="timestamp" dateFormat="rfc3339")
    constant(value="\", ")
    constant(value="\"@version\":\"1\", ")
    constant(value="\"role\":\"app-server\", ")
    constant(value="\"sourcefile\":\"")
    property(name="$!metadata!filename")
    constant(value="\", ")
    constant(value="\"message\":\"")
    property(name="rawmsg" format="json")
    constant(value="\"}\n")
}
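To make the template's intent clearer, here is a rough Python rendering of the same line (my addition; the sample values are invented stand-ins for rsyslog properties, and `@version` follows the usual logstash convention). It also shows why the template asks for `format="json"` on `rawmsg`: the message must be escaped before being spliced into a JSON string.

```python
import json

# Invented sample values standing in for rsyslog properties.
programname = "trm-error-logs:"
hostname = "client-vm"
timestamp = "2016-06-13T07:49:24.000Z"
sourcefile = "/usr/share/tomcat/dist/logs/trm-error.log"
rawmsg = 'line with "quotes" in it'

# Same field order as the template above; json.dumps on the message
# (quotes included) does the escaping that format="json" asks rsyslog for.
line = (
    '{ "type":"%s", "host":"%s", "timestamp":"%s", '
    '"@version":"1", "role":"app-server", '
    '"sourcefile":"%s", "message":%s}'
    % (programname, hostname, timestamp, sourcefile, json.dumps(rawmsg))
)
print(line)
```

If the message were concatenated in raw, an embedded double quote would break the JSON on the receiving end, which is the usual symptom of forgetting `format="json"`.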
On client side, I have /etc/rsyslog.d/01-trm-error-logs.conf
input(type="imfile"
      File="/usr/share/tomcat/dist/logs/trm-error.log"
      Tag="trm-error-logs:"
      readMode="2"
      escapeLF="on"
)

if $programname == 'trm-error-logs:' then {
    action(
        type="omfwd"
        Target="my.remoteserver.com"
        Port="514"
        Protocol="tcp"
        template="textLogTemplate"
    )
    stop
}
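For intuition, imfile behaves roughly like a tail-and-forward loop: it remembers how far it has read and emits each newly appended line as a message. A toy Python sketch of the idea (my addition; the file name is a stand-in, and real imfile also persists its read state across restarts):

```python
import os
import tempfile

def read_new_lines(path, offset):
    """Return (new_lines, new_offset) for content appended since offset."""
    with open(path, "r") as f:
        f.seek(offset)
        chunk = f.read()
        return chunk.splitlines(), f.tell()

# Demo on a throwaway file standing in for trm-error.log.
path = os.path.join(tempfile.mkdtemp(), "trm-error.log")
with open(path, "w") as f:
    f.write("first error\n")

lines, offset = read_new_lines(path, 0)
print(lines)   # ['first error']

with open(path, "a") as f:
    f.write("second error\n")

lines, offset = read_new_lines(path, offset)
print(lines)   # ['second error']
```

Each returned line is what rsyslog then runs through the ruleset above (tagging, templating, forwarding via omfwd).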
On server side, I have in my /etc/rsyslog.conf
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
I've restarted the rsyslog service on both sides.
However, I don't see logs being shipped out. I do see the contents of /usr/share/tomcat/dist/logs/trm-error.log in /var/log/messages, though I do NOT want them to appear there. I do see the contents of /usr/share/tomcat/dist/logs/trm-error.log being read per the contents of the /var/log/rsyslog-debug.log file I generate.
When I run the following on the client machine, I see nothing:
tcpdump -i eth0 -n host my.remoteserver.com -P out -vvv
This turned out to be a firewall issue on the server. I did stop the firewall, but did NOT disable it, so when I restarted the server it was back on.

Unable to get snmptrap working with logstash

I am trying to get the snmptrap input working with logstash. I am starting logstash as root initially because I want to make sure this works before changing ports. I am also using the local computer for SNMP because I thought that would be easier to start with. When I use port 161 I get the "SNMP Trap listener died" error. If I change to port 162 I get no error, but no data. If I point to a server that does not exist I also get the "SNMP Trap listener died" error on any port. I believe it should be port 161, but I may be wrong.
Logstash works if I use a different input. I eventually want the output to go to graphite and that works too with different input.
Do I have something misconfigured? Is there some permission thing that could be causing a problem even though I am running as root and everything is on the same machine?
Thanks for any help.
This is my .conf file:
input {
  snmptrap {
    host => "127.0.0.1"
    community => "public"
    port => "161"
    type => "snmp_trap"
  }
}
output {
  stdout { codec => rubydebug }
}
This is the partial result of snmpwalk locally:
snmpwalk -mAll -v1 -cpublic 127.0.0.1:161
iso.3.6.1.2.1.1.2.0 = OID: iso.3.6.1.4.1.8072.3.2.10
iso.3.6.1.2.1.1.3.0 = Timeticks: (7218152) 20:03:01.52
This is netstat:
root@lab-graphite:~# netstat -lpn | grep snmp
udp 0 0 127.0.0.1:161 0.0.0.0:* 43559/snmpd
udp 0 0 0.0.0.0:54155 0.0.0.0:* 43559/snmpd
unix 2 [ ACC ] STREAM LISTENING 2593117 43559/snmpd /var/agentx/master
This is the full error message:
SNMP Trap listener died {:exception=>#<SocketError: bind: name or service not known>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:160:in `bind'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/snmp-1.2.0/lib/snmp/manager.rb:540:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/snmp-1.2.0/lib/snmp/manager.rb:585:in `create_transport'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/snmp-1.2.0/lib/snmp/manager.rb:618:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-snmptrap-2.0.4/lib/logstash/inputs/snmptrap.rb:74:in `build_trap_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-snmptrap-2.0.4/lib/logstash/inputs/snmptrap.rb:78:in `snmptrap_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-snmptrap-2.0.4/lib/logstash/inputs/snmptrap.rb:53:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:342:in `inputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:336:in `start_input'"], :level=>:warn}
In the .conf file, the "host" parameter stands for the IP address or hostname of the computer running logstash. If you are going to receive SNMP traps from external sources it should not be localhost (127.0.0.1); it is OK for local setup tests, though.
As already mentioned in a comment, the default snmptrap port is 162 (and there is no reason to change it in your setup).
Also, since netstat shows that there is an snmpd running and listening on UDP port 161, your logstash won't be allowed to bind to the same port 161.
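That bind failure is easy to reproduce without SNMP at all. Here is a minimal Python sketch (my addition; the ephemeral port is a stand-in for 161) showing that a second UDP bind to an address/port already owned by another socket fails, which is exactly what happens when logstash's snmptrap input tries to take snmpd's port:

```python
import errno
import socket

# First socket plays the role of snmpd holding UDP 161 (an ephemeral
# port is used so the demo needs no root and no real snmpd).
first = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
first.bind(("127.0.0.1", 0))
port = first.getsockname()[1]

# Second socket plays logstash's snmptrap listener trying the same port.
second = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    second.bind(("127.0.0.1", port))
    outcome = "bound"
except OSError as e:
    outcome = "EADDRINUSE" if e.errno == errno.EADDRINUSE else str(e)
print(outcome)  # EADDRINUSE
first.close()
second.close()
```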
`snmpwalk` is not the right way to test your setup (it actually polls the snmpd daemon on port 161); it is the `snmptrap` command that will send a trap to your logstash input. For example:
`snmptrap -v1 -c public 127.0.0.1 .1.3 i 0 123456780 127.0.0.1 0 .1.3.6 i 12345`
You can also run tcpdump port 162 as root to check that snmptrap is sending packets to the target at 127.0.0.1:162 (here 127.0.0.1 is the host address used in the logstash .conf below).
So, for a local test use:
input {
  snmptrap {
    host => "127.0.0.1"
    community => "public"
    port => "162"
    type => "snmp_trap"
  }
}
I was using the SNMPTRAP input, but expecting it to work like a regular SNMP get. It was actually working, but no traps were being sent.

Logstash doesn't start. Error: "Could not start TCP server: Address in use"

Logstash doesn't start. It says following:
:message=>"Could not start TCP server: Address in use", :host=>"0.0.0.0", :port=>1514, :level=>:error}The error reported is: \n Address already in use - bind - Address already in use"}
In the logstash configuration file, port 1514 is not specified, and when logstash is stopped no service is listening on this port. Yet when I start logstash, it starts listening on this port even though I don't specify it in the configuration file. If I do put this port in the logstash configuration file and start logstash, it gives me the error that the address is in use. I need to use port tcp/1514 because all my ESXi hypervisors are configured to send logs to this port.
Why does logstash start listening on this port even though I don't have it in the configuration file?
What can I do to successfully start the logstash service with this port in the configuration file?
The problem was that logstash was using two configuration files:
root@srv-syslog:~# locate central.conf
/etc/logstash/conf.d/central.conf
/etc/logstash/conf.d/central.conf.save
I deleted the second one and now everything is OK.
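The gotcha here is that when logstash is pointed at the conf.d directory it concatenates every file in it, not just files ending in .conf (at least on the versions I've used), so a leftover central.conf.save still contributes its inputs and the tcp/1514 input ends up declared twice. A small Python illustration of that directory-scan behaviour (file names mimic the ones above):

```python
import os
import tempfile

# Stand-in for /etc/logstash/conf.d with a stale editor backup in it.
confd = tempfile.mkdtemp()
for name in ("central.conf", "central.conf.save"):
    with open(os.path.join(confd, name), "w") as f:
        f.write('input { tcp { port => 1514 } }\n')

# A bare directory scan (what logstash effectively does when given the
# directory) picks up BOTH files, so the tcp/1514 input appears twice.
picked_up = sorted(os.listdir(confd))
print(picked_up)  # ['central.conf', 'central.conf.save']
```

The second declaration is what triggers the "Address in use" bind error on 1514.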

haproxy bind command to include cipher in haproxy.cfg file

I am configuring the haproxy.cfg file for HAProxy, and I need to add a cipher suite to it. For that I am using the bind directive. My bind line is as below:
bind 0.0.0.0:443 ssl crt /etc/ssl/certs/private1.pem nosslv3 prefer-server-ciphers ciphers TLSv1+HIGH:!SSLv2:!aNULL:!eNULL:!3DES:@STRENGTH
With a plain bind *:443 it works fine; once I add the other arguments it throws an error. After including this line in haproxy.cfg and restarting the haproxy service, I get the error:
[ALERT] 164/074924 (31084) : parsing [/etc/haproxy/haproxy.cfg:80] : 'bind' only supports the 'transparent', 'defer-accept', 'name', 'id', 'mss' and 'interface' options.
[ALERT] 164/074924 (31084) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 164/074924 (31084) : Fatal errors found in configuration.
Errors in configuration file, check with haproxy check.
To resolve this issue I tried to install the "libssl-dev" package, but I was not able to install that package either.
Please guide me on this. Also, I need to know whether it is necessary to give the pem file entry in bind, or whether I can directly include the cipher itself, like this:
bind *:8443 ciphers TLSv1+HIGH:!SSLv2:!aNULL:!eNULL:!3DES:@STRENGTH
Appending my haproxy.cfg file below.
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
frontend inbound
    mode http
    bind 0.0.0.0:443 ssl crt /etc/ssl/certs/private1.pem nosslv3 prefer-server-ciphers ciphers TLSv1+HIGH:!SSLv2:!aNULL:!eNULL:!3DES:@STRENGTH

    # static backend for serving up images, stylesheets and such
    #---------------------------------------------------------------------
backend postgresqlcluster1
    mode http
    balance roundrobin
    server  postgres1 192.44.9.101:8080 check
You need to be using 1.5-dev19+ (current is 1.5-dev26) to utilize any of the ssl functionality; based on the error and the config excerpt, it looks like you are running 1.4.
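Separately from the version problem, you can sanity-check an OpenSSL cipher string before putting it on the bind line. This little helper (my own addition, using Python's stdlib ssl module; the strings below are generic examples, not a recommendation) asks the local OpenSSL to parse it:

```python
import ssl

def cipher_string_ok(spec):
    """Return True if the local OpenSSL accepts this cipher string."""
    try:
        ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT).set_ciphers(spec)
        return True
    except ssl.SSLError:
        return False

print(cipher_string_ok("HIGH:!aNULL:!eNULL:@STRENGTH"))  # True
print(cipher_string_ok("DEFINITELY-NOT-A-CIPHER"))       # False
```

`openssl ciphers -v 'YOUR:STRING:HERE'` on the command line gives the same answer plus the expanded cipher list, which is handy for seeing exactly what a spec like TLSv1+HIGH selects.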
