I'd like to have a process that captures both access and error logs, without the logs being written to disk. I'd like to use node.js process.stdin to read the logs. Any idea if nginx can be set up to stream the logs to another process instead of to disk?
No, that's not possible, and there's a Trac ticket about it here: https://trac.nginx.org/nginx/ticket/73
However, as noted in the comments on that ticket, you could easily pipe the logs from the file into your process using tail -F /path/to/access/log | your-node-script.js. Please note that this still writes to disk and then reads it back, so consider the IOPS usage.
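For that approach, a minimal sketch of what your-node-script.js could look like - it simply reads the piped log from process.stdin line by line (the handling logic is just a placeholder):

// read nginx log lines piped in from tail -F, one entry per line
const readline = require('readline');

const rl = readline.createInterface({ input: process.stdin });

rl.on('line', (line) => {
  // do whatever you need with each access/error log entry here
  console.log('got log line:', line);
});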
Another option is to send Nginx's logs to a node application that acts as a syslog server. Doing that in Nginx is quite trivial (see: http://nginx.org/en/docs/syslog.html). You will then need to create a simple Node.js server that listens on UDP port 514 and processes the log. See an example in the highlighted lines here: https://github.com/cconstantine/syslog-node/blob/e243e2ae7ddc8ef9214ba3450a8808742e53d37b/server.js#L178-L200
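Roughly, the Nginx side could look like this (the address, port, tag and severity here are placeholder values; see the linked docs for the full syntax):

access_log syslog:server=127.0.0.1:514,tag=nginx,severity=info combined;
error_log  syslog:server=127.0.0.1:514;

And here is a sketch of the receiving side as a bare Node.js UDP listener, with no real syslog message parsing:

// minimal sketch of a UDP "syslog server" receiving nginx log lines
const dgram = require('dgram');
const socket = dgram.createSocket('udp4');

socket.on('message', (msg, rinfo) => {
  // msg is a Buffer holding one syslog datagram sent by nginx
  console.log('nginx log:', msg.toString());
});

socket.bind(514, '127.0.0.1'); // ports below 1024 may need elevated privileges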
I'm familiar with Node.js and know just a little bit about web programming. One of my CLI Node apps needs to let the user see the running program's logs. I'd like her to be able to open a separate browser, point it at something like http://localhost:12345, and get a live log that keeps scrolling in the page without any human interaction.
Is there a simple way to build this kind of application? I know RESTful programming, but I'm not sure whether it helps here.
If I understood your question correctly, you are trying to show live server-side logs to the user. For that you will have to tail the log file and pipe the output to the response, or pipe stdout (if you're not writing logs to a file) to the response, over a socket.io connection. socket.io is a way of pushing live updates to users without them having to send an HTTP request every time. You can see an example here.
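A rough sketch of that idea, assuming socket.io is installed (npm install socket.io) and the app writes its log to /var/log/myapp.log - the path, port and page markup are all placeholders to adapt:

// tail a log file and stream every new line to the browser over socket.io
const http = require('http');
const { spawn } = require('child_process');

const page = `<script src="/socket.io/socket.io.js"></script>
<script>
  var socket = io();
  socket.on('log', function (line) {
    var pre = document.createElement('pre');
    pre.textContent = line;
    document.body.appendChild(pre);
  });
</script>`;

const server = http.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html');
  res.end(page);
});

const io = require('socket.io')(server); // also serves /socket.io/socket.io.js

// tail -F the log file and broadcast each new chunk to all connected browsers
const tail = spawn('tail', ['-F', '/var/log/myapp.log']);
tail.stdout.on('data', (chunk) => io.emit('log', chunk.toString()));

server.listen(12345); // the user then opens http://localhost:12345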
I'm enabling cluster support in a project I'm working on. This question comes directly from a statement in the Nodejs docs on the cluster module:
from: https://nodejs.org/api/cluster.html#cluster_cluster
Please note that, on Windows, it is not yet possible to set up a named pipe server in a worker.
What exactly does this mean?
What are the implications of this?
From the docs, and other research I've done, the actual practical consequences to this limitation are not clear to me.
A process can expose a named pipe as a way to communicate with other interested parties - ie. an nginx server could expose a named pipe where all incoming requests would be sent (just an idea - I am not sure if nginx can even do that).
From a Node.js process (not a cluster worker, though), you could then start an http server (or even a plain TCP server, for that matter) which listens for messages sent to this named pipe:
http.createServer().listen('\\\\.\\pipe\\nginx')
Docs for the .listen() method's signature are here; this part in particular is of interest:
Start a server listening for connections on a given handle that has already been bound to a port, a UNIX domain socket, or a Windows named pipe
However, as per the warning, this functionality is not available from a cluster worker, for reasons beyond my understanding.
Here is a relevant commit in Node.js which hints at this limitation. You can find it by opening the Markdown document for cluster, looking at the git blame, and going a bit further back in history until you arrive at the commit that introduced this note.
Normal interprocess communication is not affected by this limitation, so a cluster works just the same on Win32 as it does on Unix systems.
Note: Upon further thought, that nginx example is a bit misleading since a named pipe, to my understanding, cannot be used for stateful bidirectional communication. It's just one-way, ie. source->listener. But I do hope I conveyed the general idea behind the limitation.
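For completeness, here is a small sketch of the normal IPC mentioned above, which behaves the same on Windows and Unix - only a named pipe server is off-limits inside a worker:

// ordinary IPC channel between master and worker, works on Win32 and Unix alike
const cluster = require('cluster');

if (cluster.isMaster) {
  const worker = cluster.fork();
  worker.on('message', (msg) => console.log('master received:', msg));
  worker.send({ cmd: 'start' });
} else {
  process.on('message', (msg) => {
    // reply to the master over the same built-in channel
    process.send({ ack: msg.cmd });
  });
}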
I am using log4j 1.2
How can I send log4j logs to an arbitrary program listening on a socket? I tried the following options:
SocketAppender - it expects a SocketNode to listen on the port.
TelnetAppender - but it sends logs to a read-only port.
What I am really looking for is to send log4j logs to Flume. I know that log4j 2.x has a FlumeAppender, but I'm not sure if it works with log4j 1.2.
If Flume runs on the same machine where the log4j logs are being stored, then there is no need to send the logs to Flume; instead, configure Flume to read those logs directly. For that, try configuring the Exec source with a tail command. tail will print the logs line by line (I guess Flume somehow redirects stdout to an internal file descriptor or something like that) and Flume will get those lines as input data.
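For example, an agent configuration along these lines (the agent, source, channel and sink names and the log path are placeholders; point the sink at whatever destination you actually need):

# sketch of a Flume agent that tails the local log4j log file
agent.sources = taillog
agent.channels = mem
agent.sinks = logout

agent.sources.taillog.type = exec
agent.sources.taillog.command = tail -F /var/log/myapp/app.log
agent.sources.taillog.channels = mem

agent.channels.mem.type = memory

agent.sinks.logout.type = logger
agent.sinks.logout.channel = mem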
I found org.apache.flume.clients.log4jappender.Log4jAppender, which uses Avro to send logs to a Flume agent running locally on the machine.
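In log4j.properties that looks roughly like this (the port must match an Avro source configured on the local Flume agent; 41414 is just an assumed value, and the flume-ng-log4jappender jar has to be on the classpath):

log4j.rootLogger=INFO, flume
log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=localhost
log4j.appender.flume.Port=41414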
I am new to logstash and I have been reading about it for a couple of days. Like most people, I am trying to set up a centralized logging system, store the data in elasticsearch, and later use kibana to visualize it. My application is deployed on many servers, so I need to fetch logs from all of them. Installing logstash forwarder on all those machines and configuring them seems like a very tedious task (I will do it if this is the only way). Is there a way for logstash to access those logs by mentioning the URL to the logs somewhere in the conf file, instead of logstash forwarders forwarding them to logstash? FYI, my application is deployed on tomcat and the logs are accessible via URL http://:8080/application_name/error.log.
Not directly, but there are a few close workarounds - the idea is to create a program/script that uses curl (or its equivalent) to effectively perform a "tail -f" of the remote log file, and then run that output into logstash.
Here's a bash script that does the trick:
url-tail.sh
This bash script monitors a URL for changes and prints its tail to standard output. It acts like the "tail -f" Linux command. It can be helpful for tailing logs that are accessible over HTTP.
https://github.com/maksim07/url-tail
Another similar one is here:
https://gist.github.com/habibutsu/5420781
There are others out there, written in PHP or Java: Tail a text file on a web server via HTTP
Once you have that running, the question is how to get its output into logstash - you could:
Pipe it into stdin and use the stdin input
Have it append to a file and use the file input
Use the pipe input to run the command itself and read from the stdout of the command
The devil is in the details though, particularly with logstash, so you'll need to experiment, but this approach should work for you.
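For instance, the pipe-input option could look roughly like this (the script path and URL are placeholders, and the exact arguments url-tail.sh expects are described in its README):

# sketch: run the url-tailing script via the pipe input
input {
  pipe {
    command => "/opt/scripts/url-tail.sh http://yourhost:8080/application_name/error.log"
  }
}
output {
  stdout { }   # swap in your elasticsearch output once the input looks right
}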
I am learning linux programming and want to do the following. I would like to create a mini-logger that will work like syslog. I want to be able to replace syslog (not in practice but just to understand at every level how things work).
So in my code, I would write
#include "miniLogger.h"
....
....
miniLogger(DEBUG, "sample debug message");
----
----
Now, I am guessing I would need some kind of daemon to listen for incoming messages from my miniLogger, and I have no experience with daemons. Can you point me in the right direction or give me a quick overview of how messages can move from my API into a configurable destination?
I read the man pages but I need more of an overview of how APIs communicate with daemons in general.
syslogd listens for log messages on /dev/log, which is a unix domain socket. The socket is datagram-oriented, meaning the protocol is similar to UDP.
Your log daemon should open the socket, bind it so it acts as the server, open a log file in write mode, ask to be notified of incoming packets, parse the messages safely, and write them to the file. The important system calls for doing socket I/O are described in man 7 socket. To get notified of incoming data on the socket, you can use epoll or select.
syslog commonly uses a PF_LOCAL socket at /dev/log.
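To make that concrete, here is a stripped-down sketch of the daemon side (no daemonization, minimal error handling; the socket and file paths are placeholders - you would only bind /dev/log if you are actually replacing syslogd):

/* minimal sketch of a syslog-like daemon: read datagrams from a unix
 * domain socket and append them to a log file */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#define SOCK_PATH "/tmp/minilogger.sock"   /* use /dev/log to replace syslogd */

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);

    unlink(SOCK_PATH);                     /* remove any stale socket file */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    FILE *log = fopen("/tmp/minilogger.log", "a");
    char buf[4096];

    for (;;) {
        /* each recv() returns one complete datagram sent by a client */
        ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
        if (n <= 0)
            continue;
        buf[n] = '\0';
        fprintf(log, "%s\n", buf);
        fflush(log);
    }
}

Your miniLogger() API would then just be a thin wrapper that opens the same socket and sends the formatted message with sendto(), much as the C library's syslog(3) does with /dev/log.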