Forwarding logs to logger microservice - node.js

We are planning to build a microservice for logging the activities of our other microservices. The logger microservice will receive all the logs and store them properly. Finally, we can view the logs with tools like Kibana.
I want to know the proper way to send log data to the logger service:
- Make an HTTP request to the logger.
- Publish log data to a message broker (like RabbitMQ) that the logger consumes.
- Or any other suitable way to do this.
Thank you all.

Why do you need to build a microservice?
Filebeat scans for service logs and ships them to Logstash, handling concerns like retries for you.
Logstash then persists the logs (e.g. to Elasticsearch, which Kibana can query) for other things to consume.
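With that pipeline, each service's only job is to emit structured logs somewhere Filebeat can pick them up, such as JSON lines on stdout or in a file. A minimal sketch using pino (any JSON logger works; the service name is illustrative):

```javascript
// Emit newline-delimited JSON that Filebeat (or a Docker logging
// driver) can tail and ship to Logstash without the app knowing.
const pino = require('pino');

const logger = pino({ name: 'orders-service' });

logger.info({ orderId: 123 }, 'order created');
logger.error({ err: 'timeout' }, 'payment provider unreachable');
```

The point of this design is that retry logic, batching, and delivery all live in Filebeat/Logstash rather than in your application code.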

Related

Winston for logging in a multiple-container application

I am planning to use the DigitalOcean App Platform to host my backend, but I want to know whether each container in App Platform would have a different log file (assuming I'm logging to files with Winston), and if so, whether that would even be an issue.
I thought of multiple solutions in case I need to handle this:
1- Log to the database.
2- Make another container that receives the logs over HTTP from the other running containers.
(Note: I'm new to dealing with containers, so I might be missing/misunderstanding something.)
Yes, you will get a separate log file from each Node.js process. For this scenario Winston supports alternative transports that can centralize logs from many sources.
As you suggested in (2), some of these transport options write logs to an RDBMS.
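For instance, Winston ships with an HTTP transport that can forward every log entry to a central collector, which is essentially the asker's option (2). A hedged sketch; the collector host, port, and path are assumptions for illustration:

```javascript
const winston = require('winston');

// One logger per container: entries go to local stdout AND to a
// central collector over HTTP, so per-container files are unnecessary.
const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    new winston.transports.Http({
      host: 'log-collector.internal', // hypothetical collector service
      port: 8080,
      path: '/logs',
    }),
  ],
});

logger.info('request handled', { container: process.env.HOSTNAME });
```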

Does Express generate default request logs inside a Cloud Run container? Should I keep them on in my production environment?

I have an Express server inside a Cloud Run Docker container.
I'm getting these logs:
Are they generated by the express package? Or are they somehow generated by Cloud Run?
Here is what the Express docs say: https://expressjs.com/en/guide/debugging.html
"Express uses the debug module internally to log information about route matches, middleware functions that are in use, application mode, and the flow of the request-response cycle."
But the docs don't give much detail on what those logs are and how you enable or disable them.
Should I leave them on? Won't it hurt my server's performance if it logs every request like that? This is with NODE_ENV === "production".
Those log entries are generated by the Cloud Run platform itself, not by your Express server. Performance is not affected by this logging, and in any case you can't deactivate it.
You could exclude them to save logging capacity (and money), but I don't recommend that: they carry three of the four golden signals of your application (error rate, latency, traffic), which are very important for production monitoring.
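If you want your own application logs to appear alongside those platform request logs, Cloud Run parses JSON lines written to stdout into structured Cloud Logging entries. A minimal sketch; severity and message are the fields Cloud Logging recognizes:

```javascript
// Cloud Run forwards stdout to Cloud Logging; a JSON line with a
// "severity" field becomes a structured entry at that level.
function log(severity, message, extra = {}) {
  console.log(JSON.stringify({ severity, message, ...extra }));
}

log('INFO', 'order created', { orderId: 123 });
log('ERROR', 'payment provider unreachable');
```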

Write all logs to the console or use a log library appender?

I'm running a couple of Node services on AWS across Elastic Beanstalk and Lambdas. We use the Bunyan library and produce JSON logs. We are considering moving our logging entirely to CloudWatch. I've found two ways of pushing logs to CloudWatch:
1. Write everything to the console using Bunyan and let the built-in log streaming in both Beanstalk and Lambda push the logs to CloudWatch for me.
2. Use a Bunyan stream like https://github.com/mirkokiefer/bunyan-cloudwatch and push all log events directly to CloudWatch via its APIs.
Are both valid options? Is one preferred over the other? Any pluses and minuses I'm missing?
I favor the first option: write everything to the console using Bunyan.
This separates concerns better than baking CloudWatch delivery into your app. Besides, bunyan-cloudwatch is not maintained.
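In practice, the first option means the application only ever talks to stdout and the platform does the shipping. A minimal sketch with Bunyan (the service name is illustrative):

```javascript
const bunyan = require('bunyan');

// Bunyan writes newline-delimited JSON to stdout by default; Beanstalk's
// log streaming and the Lambda runtime forward stdout to CloudWatch.
const log = bunyan.createLogger({ name: 'checkout-service' });

log.info({ requestId: 'abc-123' }, 'handled request');
log.warn({ retries: 2 }, 'upstream slow, retrying');
```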

winston-logstash UDP connection handling

I am using the winston-logstash package for my Node.js services (syslog UDP).
While I was testing my app, I saw that my Docker containers open lots of connections through the firewall in front of the Logstash server.
In a large-scale test, 3 of my Docker containers made more than 80,000 connections at the same time.
My Node.js services are REST APIs (Express), run in debug mode, and write lots of logs per request.
How does winston-logstash handle UDP Logstash connections,
and how can I manage them?
If winston-logstash opens a connection per request, how should I design my code so I can keep writing a large number of logs per request?
Thanks :)
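This question has no answer here, but the design point it raises is worth sketching: construct the logger (and whatever socket its transport owns) once at module scope and reuse it across requests, instead of creating one per request. A hedged sketch with plain winston; the Logstash transport is left as a placeholder because its exact API differs across winston-logstash versions:

```javascript
// logger.js – build ONE logger for the whole process so only one
// socket is opened, no matter how many requests arrive.
const winston = require('winston');

const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    // attach your Logstash/UDP transport here (placeholder; the exact
    // constructor varies between winston-logstash versions)
  ],
});

module.exports = logger;

// In every request handler, import the shared instance:
//   const logger = require('./logger');
//   logger.info('handled request', { path: req.path });
```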

Using RabbitMQ to capture web application logs

I'm trying to set up RabbitMQ to carry web application logs to a log server.
My log server will listen on one channel and store the logs that come in.
There are several web applications that need to send info to the log server.
With many connections (users) hitting the web server, what is the best design for publishing messages to RabbitMQ without the publishers blocking each other? Is it a good idea to open a new connection to the MQ for each web request? Is there some sort of message-queue pool?
I'm using IIS as the web server.
I assume you're using the .NET Framework to build your application, given that it's hosted in IIS. If so, you can also leverage Daishi.AMQP, which has a built-in QueuePool feature. Here is a tutorial that outlines the mechanism in full.
To answer your question: you should establish a single connection to RabbitMQ from your application server up front. You can then initialise a channel (a lightweight virtual connection multiplexed over the underlying connection) to serve each HTTP request. It is not a good idea to establish a new connection for each request.
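That answer is .NET-specific, but since this thread is about Node.js, here is the same one-connection, cheap-channels pattern sketched with amqplib; the queue name and broker URL are assumptions:

```javascript
const amqp = require('amqplib');

// Open ONE connection (and here one channel) per process; requests
// share it instead of opening their own TCP connections.
let channelPromise;

async function getChannel() {
  if (!channelPromise) {
    channelPromise = amqp.connect('amqp://localhost').then(async (conn) => {
      const ch = await conn.createChannel();
      await ch.assertQueue('app-logs', { durable: true });
      return ch;
    });
  }
  return channelPromise;
}

async function publishLog(entry) {
  const ch = await getChannel();
  ch.sendToQueue('app-logs', Buffer.from(JSON.stringify(entry)));
}

// e.g. in a request handler:
//   publishLog({ level: 'info', msg: 'request handled', path: req.path });
```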
RabbitMQ has a built-in queue feature. It is well documented; have a look at the official docs: http://www.rabbitmq.com/getstarted.html
