I am new to Logstash and have been reading about it for a couple of days. Like many people, I want a centralized logging system that stores data in Elasticsearch and uses Kibana to visualize it. My application is deployed on many servers, so I need to fetch logs from all of them. Installing and configuring logstash-forwarder on every machine seems like a very tedious task (I will do it if it is the only way). Is there a way for Logstash to access those logs by specifying their URL somewhere in the conf file, instead of having logstash-forwarder ship them to Logstash? FYI, my application is deployed on Tomcat and the logs are accessible via the URL http://:8080/application_name/error.log.
Not directly, but there are a few close workarounds. The idea is to create a program/script that uses curl (or its equivalent) to effectively perform a "tail -f" of the remote log file, and then run that output into Logstash.
Here's a bash script that does the trick:
url-tail.sh
This bash script monitors a URL for changes and prints its tail to standard output. It acts like the "tail -f" Linux command. It can be helpful for tailing logs that are accessible over HTTP.
https://github.com/maksim07/url-tail
Another similar one is here:
https://gist.github.com/habibutsu/5420781
There are others out there, written in PHP or Java: Tail a text file on a web server via HTTP
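For illustration, here is a minimal sketch of the idea those scripts implement, assuming the server answers HEAD requests with a Content-Length and honours Range requests (the URL and poll interval are placeholders, and log rotation is not handled):

#!/bin/bash
# Poll a remote log over HTTP and print only the newly appended bytes,
# roughly emulating "tail -f".
URL="http://myserver:8080/application_name/error.log"
POLL_INTERVAL=5

offset=$(curl -sI "$URL" | awk 'tolower($1) == "content-length:" {print $2}' | tr -d '\r')
offset=${offset:-0}

while true; do
  length=$(curl -sI "$URL" | awk 'tolower($1) == "content-length:" {print $2}' | tr -d '\r')
  if [ -n "$length" ] && [ "$length" -gt "$offset" ]; then
    # Fetch only the bytes added since the last poll
    curl -s -r "$offset-$((length - 1))" "$URL"
    offset=$length
  fi
  sleep "$POLL_INTERVAL"
done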
Once you have that running, the question is how to get it into Logstash - you could:
Pipe it into stdin and use the stdin input
Have it append to a file and use the file input
Use the pipe input to run the command itself and read from the stdout of the command
The devil is in the details though, particularly with Logstash, so you'll need to experiment, but this approach should work for you.
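For example, a minimal Logstash configuration using the pipe input might look like the sketch below; the script path, URL, index name and Elasticsearch host are assumptions, and the exact elasticsearch output options vary between Logstash versions:

input {
  pipe {
    # Run the HTTP-tail script and read each stdout line as an event
    command => "/opt/scripts/url-tail.sh http://myserver:8080/application_name/error.log"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}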
Related
I'm trying to use Seq, which is a log management tool, mostly supported in .NET.
There is also a tool, seqcli, for sending logs to a Seq server, as shown here:
https://docs.datalust.co/docs/
The thing is, I'm using a Spring Boot app, and following the docs I have GELF and Seq deployed as Docker containers on a remote server. Everything runs on Linux.
I managed to send some logs from a file using this command:
./seqcli ingest ../spring-boot-*.log
and I can see them on the remote server, but I'm not able to send logs in real time. The docs say that I can send logs in real time from STDIN, but give no more details, and I have no idea how to achieve this.
https://docs.datalust.co/docs/command-line-client#extraction-patterns
Any suggestion?
I dug a little more through the docs and found this:
tail -c 0 -F /var/log/syslog | seqcli ingest
which I converted to this:
tail -c 0 -F ../spring-boot-app.log | ./seqcli ingest
If someone runs into the same problem, there are more details here:
https://blog.datalust.co/parsing-plain-text-logs-with-seqcli/
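If you need that pipeline to keep running unattended, one option is to wrap it in a service. A rough sketch of a systemd unit, assuming made-up paths for seqcli and the log file (not something from the Seq docs):

[Unit]
Description=Stream spring-boot-app.log to Seq via seqcli
After=network-online.target

[Service]
# Paths are placeholders; point them at your seqcli binary and log file
ExecStart=/bin/sh -c 'tail -c 0 -F /opt/app/spring-boot-app.log | /opt/seqcli/seqcli ingest'
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target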
I have a NestJS application running in a Docker container. I have two loggers: the NestJS Logger and Pino.
Pino is responsible for listening to requests and responses and printing them to the console, while I use the NestJS logger to print custom messages and to log errors and the like.
I essentially want to open two terminal windows for each one of the loggers and only get logs by one of the two on each. How would I go about accomplishing this?
You can configure each logger to write its output to a separate file during execution, for example req-res-log.txt and custom-log.txt. Then open a terminal window for each and use the command "tail -f -n100 file-path" to follow the logs during your test.
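For example, with the app running in Docker and the two files written inside the container (the container name and paths below are made up), you could follow each one in its own terminal:

# Terminal 1: Pino request/response log
docker exec -it my-nest-container tail -f -n100 /app/logs/req-res-log.txt

# Terminal 2: NestJS Logger custom/error log
docker exec -it my-nest-container tail -f -n100 /app/logs/custom-log.txt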
I am currently using nxlog to send the server logs to a Graylog2 server, and all the messages go to the default index in Graylog. I am trying to send the messages to a particular index, which should be configurable from the nxlog conf file.
This cannot be achieved via the nxlog configuration alone. The problem can be solved with the Streams functionality provided by Graylog (http://docs.graylog.org/en/2.4/pages/streams.html): create a stream with a rule that identifies the input source, and route that stream to a particular index, which is configured when you create the stream.
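As a sketch of how the two pieces fit together (module names, hosts and field values below are examples, not taken from your config): tag each message with a custom field in the nxlog output, then match that field in a Graylog stream rule and route the stream to its own index set.

<Extension gelf>
    Module  xm_gelf
</Extension>

<Output graylog>
    Module      om_udp
    Host        graylog.example.com
    Port        12201
    OutputType  GELF
    # Custom field for the Graylog stream rule to match on
    Exec        $app_index = "my_app";
</Output>

In Graylog, create a stream with the rule "app_index must match exactly my_app" and assign the stream to the desired index set.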
I'd like to have a process that captures both access and error logs, without the logs being written to disk. I'd like to use node.js process.stdin to consume the logs. Any idea if nginx can be set up to stream the logs to another process instead of to disk?
No, that's not possible, and there's a trac ticket about it here: https://trac.nginx.org/nginx/ticket/73
However, as noted in the comments on that ticket, you could easily pipe the logs from the file using tail -F /path/to/access/log | your-node-script.js. Please note that this still writes to disk and then reads it back, so consider the IOPS usage.
Another option is to send nginx's logs to a Node application that acts as a syslog server. Doing that on the nginx side is quite trivial (see: http://nginx.org/en/docs/syslog.html ). You will then need a simple Node.js server that listens on UDP port 514 and processes the log lines. See an example in the highlighted lines here: https://github.com/cconstantine/syslog-node/blob/e243e2ae7ddc8ef9214ba3450a8808742e53d37b/server.js#L178-L200
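As a rough sketch of that second option (the listener below uses the unprivileged port 5514 rather than 514, and all addresses, tags and ports are placeholders), point both logs at the local syslog target in nginx.conf:

error_log  syslog:server=127.0.0.1:5514 warn;
access_log syslog:server=127.0.0.1:5514,tag=nginx,severity=info combined;

and have the Node process receive the UDP datagrams, for example:

// Minimal UDP "syslog" listener sketch; each datagram is one log line from nginx
import * as dgram from "node:dgram";

const server = dgram.createSocket("udp4");

server.on("message", (msg, rinfo) => {
  // Hand the line off to whatever processing you need; here it is just printed
  console.log(`${rinfo.address}: ${msg.toString()}`);
});

server.bind(5514, () => console.log("listening on udp/5514"));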
I am using log4j 1.2
How can I send log4j logs to an arbitrary program listening on a socket? I tried the following options:
SocketAppender - it expects a SocketNode to listen on the port.
TelnetAppender but it sends logs to a read-only port.
What I am really looking for is to send log4j logs to Flume. I know that log4j 2.x has a FlumeAppender, but I'm not sure whether it works with log4j 1.2.
If Flume runs on the same machine where the log4j logs are stored, then there is no need to send the logs to Flume; instead, configure Flume to read those logs directly. For that, try configuring the Exec source to execute a tail command. tail will print the log lines one by one (I guess Flume somehow redirects the stdout to an internal file descriptor or something like that) and Flume will receive those lines as input data.
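A minimal sketch of such an agent configuration (the agent, channel and sink names and the log path are placeholders, and the logger sink is only there so you can see the events; replace it with your real sink):

# flume.conf (hypothetical names and paths)
agent.sources  = taillog
agent.channels = mem
agent.sinks    = console

# Exec source: run tail -F and treat each stdout line as an event
agent.sources.taillog.type     = exec
agent.sources.taillog.command  = tail -F /var/log/myapp/app.log
agent.sources.taillog.channels = mem

agent.channels.mem.type     = memory
agent.channels.mem.capacity = 1000

agent.sinks.console.type    = logger
agent.sinks.console.channel = mem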
I found org.apache.flume.clients.log4jappender.Log4jAppender, which uses Avro to send logs to a Flume agent running locally on the machine.
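For reference, that appender is built against log4j 1.x, so it works with log4j 1.2; it needs the flume-ng-log4jappender jar (and its dependencies) on the application classpath, and the local Flume agent must expose an Avro source on the matching port. A sketch of the log4j.properties side (host and port are examples):

log4j.rootLogger = INFO, flume
log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = localhost
log4j.appender.flume.Port = 41414
# Avoid blocking the application if the Flume agent is down
log4j.appender.flume.UnsafeMode = true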