Fluentd-logstash integration

I am using the fluentd out-http plugin to integrate with logstash, but I am getting the following error:
failed to emit fluentd's log event tag="fluent.info"
event={"type"=>"syslog", "plugin_id"=>"object:3f923544ab40", "message"=>"shutting down input type=\"syslog\" plugin_id=\"object:3f923544ab40\""}
error_class=ArgumentError error=#
Can anyone help me solve this issue?

If you want to send data from fluentd to logstash, you can do it over TCP.
This open-source plugin will let you send TCP data (even secured with SSL/TLS) directly to a logstash TCP input (no need to use the fluent codec on the logstash side).
More details in this answer. Good luck.
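
For reference, the receiving end can be a plain TCP input on the logstash side. This is a minimal sketch, assuming the sender emits newline-delimited JSON and that port 5170 is an acceptable choice (both the port and the codec are assumptions, not taken from the answer above):

input {
  tcp {
    port  => 5170         # placeholder port, pick one your firewall allows
    codec => json_lines   # assumes the fluentd side sends newline-delimited JSON
  }
}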

Related

Logstash Gelf - Multiple sources

I am trying to use the gelf input plugin for some ESB logs with GELF layout (I have just started using the ELK stack for logging). I am successfully getting the logs through a configured port. As there are around 100 apps (and we will keep adding more), some of which are web services that emit logs constantly, is it OK to have all logs come through a single port? Would that create performance problems or drop some logs? The Logstash-Filebeat protocol is backpressure-sensitive; is there something like that in the gelf input plugin?
input {
  gelf {
    host => "testelkstacksrvr1"
    port => 9090
  }
}
You should use GELF over TCP. NXLog can also do this, and its flow control will prevent logs from being dropped.
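
If you stay with the Logstash gelf input, newer versions of the plugin can also listen on TCP. A minimal sketch, assuming your installed plugin version supports the use_tcp/use_udp flags (check the plugin docs for your version):

input {
  gelf {
    host    => "testelkstacksrvr1"
    port    => 9090
    use_tcp => true    # assumes a gelf input version with TCP support
    use_udp => false
  }
}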

In Logstash, how to get the public IP address of a client as a field

I am sending logs from my node to logstash using Filebeat.
There are multiple such nodes sending logs to logstash using Filebeat.
Basically, I want to know the public IP address of each node so that I can plot a visualization in Kibana with a geoip-based location.
As my node is behind NAT, I can't print the public IP address as part of my log entry (e.g., a syslog entry), because locally I only know the node's private IP.
Is there any way that logstash can automatically determine my node's IP address and insert it as an Elasticsearch field?
Theoretically it should be possible: when logstash receives data from Beats, it knows the IP address it is receiving from.
Thanks in advance.
It is not possible to configure the current Logstash Beats input to enrich incoming events with the remote IP from which they were received.
This feature was proposed in the past for the older Logstash Lumberjack input, but there isn't an open feature request for it in the Beats input. I suggest you file one.

Logs and ELK stack on separate servers, how to display the logs?

I have logs on one server and the ELK stack (Elasticsearch, Kibana, Logstash) on another. I want to display the logs from the first server in Kibana on the second.
How can I do that? Is logstash capable of retrieving logs over the internet from a remote server?
Your best bet is to use Filebeat on server one and send the logs to the logstash server.
https://www.elastic.co/products/beats/filebeat
Logstash can't access remote files, so you'll have to use a shipper, like Beaver, over UDP or TCP transport:
Beaver documentation.
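
To flesh out the Filebeat route: the logstash server needs a matching beats input, and filebeat.yml on the log server points its output.logstash hosts setting at that address. A minimal sketch of the logstash side, assuming the conventional port 5044 (the port choice is an assumption, not from the answers above):

input {
  beats {
    port => 5044    # Filebeat's output.logstash hosts setting should point here
  }
}

On the Filebeat side, output.logstash would list something like ["elk-server:5044"] in its hosts field, where elk-server is a placeholder for your second server.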

Securing the graylog2 HTTP port from spam logs?

I'm not really sure how to word the title, but this is what I'm trying to do. I'm fairly new to graylog2. I have graylog2 installed and listening on a specific port for logs sent over HTTP. How do I prevent someone from spamming my graylog server with fake log messages?
Currently graylog2 has no built-in capability to prevent that.
However, authenticated HTTP inputs have already been requested and we have an issue open to implement this: Support authenticated HTTP GELF input
For now the only option would be to restrict access to the host/port using standard firewall techniques.
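
As an example of the firewall route, with iptables you could allow only known sender subnets to reach the input port. A minimal sketch, assuming the HTTP input listens on TCP port 12201 and your legitimate senders live in 10.0.0.0/8 (both values are placeholders):

# allow GELF-over-HTTP traffic only from the trusted subnet, drop the rest
iptables -A INPUT -p tcp --dport 12201 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 12201 -j DROP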

Outputting UDP From Logstash

I have some logs that I want to use logstash to collate. It will be the logstash agent as a shipper on the source servers, sending to a central logging server. I want to use UDP from the shipper to the central logger so I am totally isolated should the logger fail (I don't want the 70-odd production servers affected in any way). The only way I can see of using UDP for the transport is to output using syslog format. Does anyone know of a way I can output UDP natively from logstash? (The docs hint at it, but I must be missing something on the TCP side?)
My understanding is that the typical approach would be to use something a bit more lightweight (rsyslog, or even Lumberjack) running on the remote machines, sending data to your central logstash server.
It sounds like you want logstash agents (forwarders) running on each server and sending the data to the broker. This blog post (also linked as an example below) gives a nice explanation of why they chose not to use logstash forwarders on their remote servers: they eat too much RAM.
I use a setup where rsyslog sends UDP data to logstash directly. I've also used a setup where rsyslog was the receiving log server, aggregating the logs into separate logfiles (based on server hostname) from which logstash then read.
For some example configs see the following:
Logstash with Lumberjack and ES
Logstash with rsyslog and ES
I suggest the rsyslog approach, as the chances that rsyslog is already running on your servers are higher than for Lumberjack. There is also quite a bit more documentation on rsyslog than on Lumberjack. Start off simple, and add more complexity later on if you feel the need; a sketch of the rsyslog route follows.
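
To illustrate the rsyslog route: each source server forwards with a one-line rule such as *.* @logstash-host:5514 in its rsyslog config (a single @ means UDP, @@ would be TCP), and the central server opens a matching input. A minimal sketch of that input, assuming port 5514 (the hostname and port are placeholders, not from the answer above):

input {
  syslog {
    port => 5514    # placeholder port; rsyslog on the source servers forwards here
  }
}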
You can output UDP natively using the UDP output plugin
http://logstash.net/docs/1.2.1/outputs/udp
By setting the codec field, you can choose whatever output format you like (e.g. JSON)
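
A minimal sketch of that output, assuming the central logger listens for UDP on port 5514 (the hostname and port are placeholders, not from the question):

output {
  udp {
    host  => "central-logger.example.com"   # placeholder hostname of the central server
    port  => 5514                           # placeholder port
    codec => json                           # ship each event as JSON over UDP
  }
}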
