Winston for logging a multi-container application - Node.js

I am planning to use the DigitalOcean App Platform to host my backend, but I wanted to know whether each container in App Platform would have its own log file (assuming I'm logging to files with Winston) and, if so, whether that would even be an issue.
I thought of a couple of solutions in case I need to handle this:
1- Log to a database
2- Run another container that receives the logs over HTTP from the other running containers.
(Note: I'm new to dealing with containers, so I might be missing/misunderstanding something.)

Yes, you will get separate log files from each Node.js process. For this scenario, Winston supports alternative transports that can centralize logs from many sources.
As you suggested in option 1, some of these transport options write logs to an RDBMS.
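For instance, here is a minimal sketch of shipping logs from every container to a central collector using Winston's built-in HTTP transport; the host, port, and path are placeholders for whatever aggregation service you run:

const winston = require('winston');

// Each container ships its logs to a central collector over HTTP
// instead of writing to a local file. 'log-collector' and '/logs'
// are hypothetical; point them at your own aggregation service.
const logger = winston.createLogger({
    transports: [
        new winston.transports.Http({
            host: 'log-collector',
            port: 3000,
            path: '/logs'
        })
    ]
});

logger.info('order processed', { containerId: process.env.HOSTNAME });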

Related

Forwarding logs to logger microservice

We are planning to build a microservice for logging the activities of our other microservices.
The logger microservice will receive all the logs and store them properly.
Finally, we can view the logs with tools like Kibana.
I want to know the proper way to send log data to the logger service:
Make an HTTP request to the logger.
Publish log data to a message broker (like RabbitMQ) and have the logger consume it.
Or any other suitable way to do this.
Thank you all.
Why do you need to build a microservice?
Filebeat scans for service logs and ships them to Logstash, handling concerns like retries, etc.
Logstash persists the logs (e.g. to Elasticsearch, for Kibana to query) for consumption by other tools.
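As a rough sketch of that pipeline, a minimal filebeat.yml could look like this; the log paths and the Logstash host are assumptions for illustration:

# Ship JSON logs written by the services to Logstash.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/services/*.log   # wherever your services write logs
    json.keys_under_root: true    # parse each line as JSON

# Logstash then persists them to Elasticsearch for Kibana.
output.logstash:
  hosts: ["logstash:5044"]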

Does Express generate default request logs inside a Cloud Run container? Should I keep them on in my production environment?

I have an Express server inside a Cloud Run Docker container.
I'm getting these logs:
Are those generated by the express package? Or are they somehow generated by Cloud Run?
Here is what the Express docs say: https://expressjs.com/en/guide/debugging.html
Express uses the debug module internally to log information about route matches, middleware functions that are in use, application mode, and the flow of the request-response cycle.
But it does not give much detail on what those logs are or how you enable or disable them.
Should I leave them on? Won't it hurt my server's performance if it logs every request like that? This is with NODE_ENV === "production".
Those log entries are generated by the Cloud Run platform, not by your Express server. Performance isn't impacted by this logging, and in any case you can't deactivate it.
You could exclude them to save logging capacity (and money), but I don't recommend it: they provide 3 of the 4 golden signals of your application (error rate, latency, traffic), which are very important for production monitoring.
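For completeness on the Express side: per the docs quoted in the question, Express's internal debug output is off by default and only appears when you opt in with the DEBUG environment variable, so it cannot be the source of those entries. A quick way to see it locally (assuming your entry point is index.js):

# Enable Express's internal debug output for one run (off by default).
DEBUG=express:* node index.js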

How to listen to different log channels from Node.js running NestJS in a Docker container?

I have a NestJS application running in a Docker container. I have two loggers: the NestJS Logger and Pino.
Pino is responsible for listening to requests and responses and printing them to the console, while I use the NestJS logger for custom messages and for logging errors and the like.
I essentially want to open two terminal windows, one per logger, and see logs from only one of the two in each. How would I go about accomplishing this?
You can configure each logger to write to a different file during execution, for example req-res-log.txt and custom-log.txt, then open a terminal window for each and use the command "tail -f -n100 <file-path>" to follow the logs during your test.
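A minimal sketch of that setup, assuming the two file names above; pino.destination points Pino at its own file, and ConsoleLogger from @nestjs/common can be subclassed for the custom logger:

const fs = require('fs');
const pino = require('pino');
const { ConsoleLogger } = require('@nestjs/common');

// Pino handles request/response logging and writes to its own file.
const reqResLogger = pino(pino.destination('req-res-log.txt'));

// A custom Nest logger that appends to a second file; register it
// with app.useLogger(new FileLogger()) in your bootstrap function.
class FileLogger extends ConsoleLogger {
    log(message, context) {
        fs.appendFileSync('custom-log.txt', `[LOG] ${context ?? ''} ${message}\n`);
    }
    error(message, stack, context) {
        fs.appendFileSync('custom-log.txt', `[ERROR] ${context ?? ''} ${message}\n${stack ?? ''}\n`);
    }
}

Then run "tail -f -n100 req-res-log.txt" in one terminal and "tail -f -n100 custom-log.txt" in the other.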

Write all logs to the console or use a log library appender?

I'm running a couple of Node services on AWS across Elastic Beanstalk and Lambdas. We use the Bunyan library and produce JSON logs. We are considering moving our logging entirely to CloudWatch. I've found two ways of pushing logs to CloudWatch:
Write everything to the console using bunyan and use the built-in log streaming in both Beanstalk and Lambda to push logs to CloudWatch for me.
Use a Bunyan Stream like https://github.com/mirkokiefer/bunyan-cloudwatch and push all log events directly to CloudWatch via their APIs.
Are both valid options? Is one more preferred than the other? Any plusses and minuses that I'm missing?
I favor the first option: write everything to the console using Bunyan.
I think this separates concerns better than baking the CloudWatch stream into your app. Besides, bunyan-cloudwatch is no longer maintained.
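A minimal sketch of option 1 (the logger name and log fields are illustrative); Bunyan writes JSON lines to stdout by default, and the platform's log streaming does the rest:

const bunyan = require('bunyan');

// With no streams configured, bunyan writes JSON lines to stdout.
// The platform (Beanstalk log streaming, or Lambda itself) ships
// stdout to CloudWatch, so the app never touches the CloudWatch API.
const log = bunyan.createLogger({ name: 'my-service' });

log.info({ requestId: 'abc-123' }, 'request handled');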

Connecting to AWS Elasticsearch from non-AWS node.js app

I'm puzzling out an infrastructure-ish issue with a project I'm working on. The service I'm developing is hosted on a transient, containerized platform w/o a stable IP, only a domain name (api.example.com). I'm using Elasticsearch for search, so requests go to something like /my-search-resource and then use ES to find results to return. It's written in Node and uses the supported elasticsearch driver to connect to ES.
The issue I'm having is in trying to use an AWS Elasticsearch domain. This project is bootstrapped, so I'm taking advantage of the free-tier from AWS, even though the other services are hosted/deployed on another platform (think: heroku, GCP, etc. — containerized and transient resources).
Since I can't just whitelist a particular IP, I'm not sure what I should do to give my service access to the Elasticsearch domain. Do I need to sign every request sent to the domain? That isn't ideal, since it would require monkey-patching the ES driver library with that functionality. Ideally, I'd just use a username and password to connect to the domain, but I know IAM isn't really oriented toward something like that from an external service. Any ideas? Is this even possible?
In my current project we connect to AWS Elasticsearch using the normal elasticsearch NPM package, and then use http-aws-es to handle the AWS-specific request signing when connecting.
So for example we have something like this:
const es = require( 'elasticsearch' );
const httpAwsEs = require( 'http-aws-es' );

// http-aws-es signs each request with the AWS credentials below,
// so the client can talk to an IAM-protected Elasticsearch domain.
const esClient = new es.Client( {
    hosts: 'somehostonaws',
    connectionClass: httpAwsEs,
    awsConfig: {
        region: 'some-aws-region',
        accessKey: 'some-aws-access-key',
        secretKey: 'some-aws-secret-key'
    }
} );
That doesn't require the whole AWS SDK, but it allows you to connect to Elasticsearch domains that sit behind AWS authentication. Is that a solution to your issue?
This is not a solution to the problem, but a few thoughts on how to approach it. We're in the same pickle at the moment: we want to use AWS but we don't want to tie ourselves to the AWS SDK. As far as I understand it, AWS offers 3 options:
Open to public (not advisable)
Fixed IP addresses (whitelist)
AWS authentication
Option 1 is not an option.
Option 2 presents us with the problem that we have to teach whatever we use to log there to route through a proxy, so that the requests appear to come from the same IP address. Our setup is on Heroku and we use QuotaGuard for similar problems. However, I checked the modules I was going to use (we're trying to log there, either to Logstash or directly to Elasticsearch using Winston transports) and they offer no proxy support. Perhaps this is different in your case.
Option 3 is also not supported in any way by Winston transports at this time, which would leave us using aws-sdk modules and tying ourselves to AWS forever, or writing our own.
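If you do end up writing your own, signing a request yourself is not much code. A minimal sketch using the aws4 package (my suggestion, not something from the thread; the host, index, and region are placeholders):

const https = require('https');
const aws4 = require('aws4');

// Describe the request, then let aws4 add the SigV4 headers
// that an IAM-protected Elasticsearch domain expects.
const opts = {
    host: 'search-my-domain.us-east-1.es.amazonaws.com', // placeholder
    path: '/my-index/_search',
    service: 'es',
    region: 'us-east-1'
};

aws4.sign(opts, {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});

https.get(opts, (res) => res.pipe(process.stdout));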
