convert rsyslog message format when forwarding messages with structured-data

My daemon uses the libc syslog() call, which logs messages in RSYSLOG_TraditionalFileFormat. The rsyslogd daemon running on the same host needs to forward all of these messages to a remote log collector in RSYSLOG_SyslogProtocol23Format.
Now I want to "piggyback" %STRUCTURED-DATA% onto RSYSLOG_TraditionalFileFormat (basically, when my daemon calls syslog() it will include the structured data in square brackets). How can I specify the incoming log message format (or template) in rsyslog.conf so that it understands the structured data?
I understand that one solution would be for my daemon to write the message directly to /dev/log and to change the default log message format in rsyslog.conf. However, this does not seem right, because I want to keep the local log format the same.

It seems impossible to solve this in an elegant way, because the libc syslog() call uses the /dev/log UNIX domain socket,
and rsyslog 8.8 and older use a hardcoded message parser for messages received over a UNIX domain socket. See the UseSpecialParser setting (http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html).
Another option, which also works on the latest Ubuntu 15.04 (it ships with rsyslog 7.4), would be to use a UDP socket, but then I can't use the libc syslog() call anymore.
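For example, a minimal rsyslog.conf sketch of the UDP approach (the port number and collector address are placeholders, and this assumes rsyslog 7.x or newer with imudp available) could look like this:

module(load="imudp")

ruleset(name="from_daemon") {
    # messages arriving here go through the regular parser chain, so an
    # RFC 5424 message keeps its STRUCTURED-DATA; forward it in that format
    action(type="omfwd" target="collector.example.com" port="514"
           protocol="tcp" template="RSYSLOG_SyslogProtocol23Format")
}

input(type="imudp" port="10514" ruleset="from_daemon")

The daemon would then have to format RFC 5424 messages (including the structured-data block) itself and send them to 127.0.0.1:10514, instead of calling syslog().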

Related

Logstash + Syslog Input Plugin VS Logstash + File Input Plugin + Syslog server

I have an existing system that sends log entries to my server via the Syslog protocol. The log entries are written to local files, and I then process these log files with Logstash using its File input plugin.
I like this because even if Logstash goes down (it happens sometimes), I do not lose any logs.
I have just realized today that Logstash also has a Syslog input plugin that is capable of reading log data over the Syslog protocol.
I am wondering: if I turn off my Syslog server and read the data via Logstash's Syslog input plugin, will I still have the same reliable system, or will I lose data during the downtime if Logstash goes down?
If Logstash goes down, you will lose data during the downtime.
Also, the syslog input only works if the messages in your logs comply with RFC 3164; anything different and you will need a grok pattern to parse the message.
If you don't want to use the file input anymore, you can create a rule on your syslog server to forward the messages to your Logstash input (see the sketch below); in that case, if Logstash goes down, you will still have the files to fill in the missing data.
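For instance, if the syslog server is rsyslog, a pair of legacy-format rules like these would keep the local files and relay a copy to Logstash at the same time (the file path, hostname and port are placeholders):

# keep writing the local file that the File input currently reads
*.*    /var/log/remote/all.log
# and also relay a copy over TCP to the Logstash syslog input
*.*    @@logstash.example.com:5514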

snmp proxy in python

I need a kind of SNMP v2c proxy (in Python) which:
- reacts to an snmpset command:
  - reads the value from the command and writes it to a YAML file
  - runs a custom action (preferably in a different thread, while still replying success to the snmpset command), which may be:
    - running another snmpset against a different machine, or
    - ssh-ing to user@host and running some command, or
    - running some local tool
- and reacts to an snmpget command:
  - checks the value for the requested OID in the YAML file
  - returns this value
I'm aware of pysnmp, but the documentation just confuses me. I can imagine that I need some kind of command responder (I need SNMP v2c) and some object to store the configuration/values from the YAML file, but I'm completely lost.
I think you should be able to implement all of that with pysnmp. Technically, it would not be an SNMP proxy but an SNMP command responder.
You can probably take this script as a prototype and implement your (single) cbFun, which (like the one in the prototype) receives the PDU and branches on its type (a GET or SET SNMP command in your case). Then you can implement reading the value from the .yaml file in the GetRequestPDU branch, and writing the .yaml file, along with sending an SNMP SET command elsewhere, in the SetRequestPDU branch. A rough sketch follows at the end of this answer.
The pysnmp API we are talking about here is the low-level one. With it you can't ask pysnmp to route messages to different callback functions -- it always calls the same callback for all message types.
However, you could also base your tool on the higher-level SNMP API which was introduced along with the SNMPv3 model. With it you can register your own SNMP applications (effectively, callbacks) based on the PDU type they support. But given that you only need SNMPv2c support, I am not sure the higher-level API would pay off in the end.
Keep in mind that SNMP is generally time sensitive. If running a local command or SSH-ing elsewhere is going to take more than a couple of seconds, a standard SNMP manager might start retrying and may eventually time out. If you look at how Net-SNMP's snmpd works, it runs external commands and caches the result for tens of seconds. That lets an otherwise timing-out SNMP manager eventually get a slightly outdated response.
Alternatively, you may consider writing a custom variation plugin for the SNMP simulator, which can largely do what you have described.
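Here is a rough, untested sketch of such a responder built on pysnmp's low-level v2c API. The YAML path, the listening port and the run_custom_action() hook are assumptions, and community-string checking, threading of the custom action and most error handling are left out:

# A sketch only -- assumes pysnmp 4.x and PyYAML; values live in a flat
# {oid-string: value} mapping inside YAML_PATH, and we listen on an
# unprivileged UDP port.
from pyasn1.codec.ber import decoder, encoder
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.carrier.asyncore.dispatch import AsyncoreDispatcher
from pysnmp.proto import api
import yaml

YAML_PATH = '/tmp/values.yaml'          # hypothetical value store


def load_values():
    with open(YAML_PATH) as f:
        return yaml.safe_load(f) or {}


def save_values(values):
    with open(YAML_PATH, 'w') as f:
        yaml.safe_dump(values, f)


def cbFun(dispatcher, transportDomain, transportAddress, wholeMsg):
    while wholeMsg:
        msgVer = api.decodeMessageVersion(wholeMsg)
        pMod = api.protoModules[msgVer]
        reqMsg, wholeMsg = decoder.decode(wholeMsg, asn1Spec=pMod.Message())
        rspMsg = pMod.apiMessage.getResponse(reqMsg)
        reqPDU = pMod.apiMessage.getPDU(reqMsg)
        rspPDU = pMod.apiMessage.getPDU(rspMsg)

        values = load_values()
        varBinds = []
        if reqPDU.isSameTypeWith(pMod.GetRequestPDU()):
            # GET branch: look the requested OIDs up in the YAML file;
            # everything is returned as an OctetString in this sketch
            for idx, (oid, val) in enumerate(pMod.apiPDU.getVarBinds(reqPDU)):
                if str(oid) in values:
                    varBinds.append((oid, pMod.OctetString(str(values[str(oid)]))))
                else:
                    pMod.apiPDU.setErrorStatus(rspPDU, 2)      # 2 == noSuchName
                    pMod.apiPDU.setErrorIndex(rspPDU, idx + 1)
                    varBinds.append((oid, val))
        elif reqPDU.isSameTypeWith(pMod.SetRequestPDU()):
            # SET branch: store the value, then trigger the custom action
            for oid, val in pMod.apiPDU.getVarBinds(reqPDU):
                values[str(oid)] = str(val)
                varBinds.append((oid, val))
            save_values(values)
            # run_custom_action(values)  # hypothetical hook; run it in a thread

        pMod.apiPDU.setVarBinds(rspPDU, varBinds)
        dispatcher.sendMessage(encoder.encode(rspMsg),
                               transportDomain, transportAddress)
    return wholeMsg


dispatcher = AsyncoreDispatcher()
dispatcher.registerRecvCbFun(cbFun)
dispatcher.registerTransport(
    udp.domainName,
    udp.UdpSocketTransport().openServerMode(('127.0.0.1', 1161)))
dispatcher.jobStarted(1)
try:
    dispatcher.runDispatcher()
finally:
    dispatcher.closeDispatcher()

You would seed the YAML file with the OIDs you plan to serve and could then test it with Net-SNMP, e.g. snmpget -v2c -c public udp:127.0.0.1:1161 <oid>.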

How to install logstash-forwarder for multiple logstash servers?

Currently we are working on forwarding logs to two different Logstash servers, but we cannot figure out a way to do this with logstash-forwarder installed on a single machine. Is it possible for logstash-forwarder to forward logs to multiple Logstash instances?
Otherwise, how can we do it with Filebeat?
In the logstash-forwarder (LSF) config, you can specify a list of hosts, but it will pick one at random and only switch to another in case of failure.
Filebeat (FB) has the same behaviour, but it also allows you to load balance across the list of hosts (sketched below).
AFAIK, neither allows you to send every event to multiple Logstash instances.
Logstash, on the other hand, will send events to all of its outputs, so you could have FB send to a single LS instance and have that LS output to your other LS instances. Note that if one output is unavailable, the system will block.
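As a sketch of the load-balancing option, the relevant part of filebeat.yml would look roughly like this (the hostnames and port are placeholders, and the exact layout depends on the Filebeat version):

output.logstash:
  hosts: ["logstash1.example.com:5044", "logstash2.example.com:5044"]
  # without loadbalance, one host is picked and the others are only failovers
  loadbalance: true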

Save mail message as a file on Linux using sendmail

I have an application running on several RHEL 5.8 systems which monitors and alerts (via email). I need to create a durable log of these alerts locally on each node.
I think the easiest way to do this would be to add a local email user to the alerts and then use mailbox settings or a script (if needed) to save each message on a local filesystem.
I would settle for the message body being dumped to a text file (one file per email).
It would be better if it could extract the time, host, subject, and body as separate fields for consumption by an open source log reader.
My systems are using sendmail 8.1, and I would prefer to stick with it, although I also have Postfix 2.3.3 available.
As you reported, your sendmail uses procmail as the local mailer, so create a special OS user account (e.g. log_user) and use ~log_user/.procmailrc to instruct procmail to deliver messages to a maildir folder.
~log_user/.procmailrc
# deliver ALL messages to ~/maillog/ maildir.
# see "man procmailex" for email sorting examples
:0
maillog/
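The trailing slash in maillog/ tells procmail to use maildir-style delivery, i.e. one file per message. If you then want the time, host, subject and body as separate fields for a log reader, a small script along these lines could walk the maildir and emit one JSON line per message. The maildir and output paths are assumptions, the host extraction from the Received header is best-effort, and it assumes a reasonably recent Python rather than the stock RHEL 5 interpreter:

# Sketch: flatten procmail-delivered maildir messages into JSON lines.
import json
import mailbox

MAILDIR = '/home/log_user/maillog'      # where procmail delivers (placeholder)
OUTFILE = '/home/log_user/alerts.jsonl' # hypothetical output for a log reader

with open(OUTFILE, 'a') as out:
    for key, msg in mailbox.Maildir(MAILDIR, factory=None).items():
        if msg.is_multipart():
            body = msg.get_payload(0).get_payload(decode=True)
        else:
            body = msg.get_payload(decode=True)
        received = msg.get('Received', '')
        record = {
            'time': msg.get('Date', ''),
            # "Received: from <host> ..." -- best-effort host extraction
            'host': received.split()[1] if received.startswith('from') else '',
            'subject': msg.get('Subject', ''),
            'body': (body or b'').decode('utf-8', 'replace').strip(),
        }
        out.write(json.dumps(record) + '\n')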

syslog question

I am looking into syslog. I understand that it is a centralized logging facility that collects logs from various sources.
I have heard that syslog can generate alerts on certain conditions, e.g. when the maximum size of a log file is reached. Is this true? I haven't found how this is done; most posts just refer to the logging itself.
How is the event generation done? I.e., if I have an app that acts as a log source (it redirects its logging to syslog), can my app receive an alert when the maximum file size has been reached? How is this configured?
Thank you!
From the application's perspective, syslog is primarily a receiver of information: the application writes messages to the syslog daemon, providing various bits of information along the way, including the severity of the message.
The syslog daemon can be configured to take different actions on receipt of different types of message.
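For example, classic /etc/syslog.conf (and rsyslog.conf) selectors pair a facility.severity pattern with an action; the destinations below are placeholders:

# write all mail-facility messages to a file
mail.*        /var/log/maillog
# broadcast emergency messages to every logged-in user
*.emerg       *
# relay serious messages from the local0 facility to another host
local0.err    @loghost.example.com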
No, your application cannot receive an alert when the maximum file size is reached - at least, not via syslog. You might get a SIGXFSZ signal, which you can trap. You might prefer to look at your resource limits and keep tabs on your file size yourself to avoid the problem.
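As an illustration of that last point (this is not something syslog does for you), an application could trap SIGXFSZ and compare its own log file size against RLIMIT_FSIZE; the log path below is a placeholder:

import os
import resource
import signal

LOG_PATH = '/var/log/myapp.log'   # hypothetical log file written by the app

def on_file_size_limit(signum, frame):
    # delivered when a write would push a file past RLIMIT_FSIZE
    print('file size limit hit -- rotate or trim the log')

signal.signal(signal.SIGXFSZ, on_file_size_limit)

soft, _hard = resource.getrlimit(resource.RLIMIT_FSIZE)
size = os.path.getsize(LOG_PATH) if os.path.exists(LOG_PATH) else 0
if soft != resource.RLIM_INFINITY and size > 0.9 * soft:
    print('log file is within 10% of the soft size limit')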

Resources