winston logstash udp connections handling - node.js

I am using the winston-logstash package for my Node.js services (syslog over UDP).
While testing my app I saw that my Docker containers open lots of connections through the firewall in front of the Logstash server.
In a large-scale test, 3 of my Docker containers opened more than 80,000 connections at the same time.
My Node.js services are REST APIs (Express), run in debug mode, and write lots of logs per request.
How does winston-logstash handle UDP Logstash connections, and how can I manage them?
If winston-logstash opens a connection per request, how should I design my code so it can keep writing a large number of logs per request?
thanks :)
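
For what it's worth, the usual way to keep the connection count down in a setup like this is to create the logger (and therefore its socket) once per process and reuse it everywhere, rather than constructing a logger or transport per request. A minimal sketch, assuming winston 3.x with the winston-syslog transport for the "syslog udp" setup described above; the host and port are placeholders:

    // logger.js - built once at module load, shared by every request
    const winston = require('winston');
    const { Syslog } = require('winston-syslog');

    // ONE logger (and therefore one UDP socket) per process.
    const logger = winston.createLogger({
      transports: [
        new Syslog({
          host: 'logstash.internal', // placeholder
          port: 5514,                // placeholder
          protocol: 'udp4',
        }),
      ],
    });

    module.exports = logger;

Each Express handler then does require('./logger') and calls logger.info(...), so every request in the process shares that single socket instead of opening its own.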

Related

WebSocket server clustering with state sharing in Kubernetes deployment

I am looking for a way to cluster WebSocket servers written in Node so that load balancing works properly and a client's request is served by the appropriate Node instance. WebSocket connections are stateful, and I believe a Node cluster could help. I want the connection/state information to be shared so that any Node instance can serve a request and the client does not need to keep track of a specific instance. The reason for this is to ensure that Node instances can be killed and replaced by new instances without worrying about the overhead of state management.
I have a setup where we use multiple instances behind load balancers in AWS ECS, deployed by CI/CD pipelines. The number of frontend and backend servers varies between 2 and 8 each, depending on traffic bursts and current deployments. If one server crashes, a new one takes its place.
We use socket.io with the Redis adapter to share the WebSocket state between all connected instances via the in-memory database Redis. This ensures that even if clients are connected to different instances, they all receive the events.
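
For reference, wiring up the Redis adapter described above takes only a few lines. A minimal sketch, assuming socket.io v4 with the @socket.io/redis-adapter package; the port and Redis URL are placeholders:

    const { Server } = require('socket.io');
    const { createAdapter } = require('@socket.io/redis-adapter');
    const { createClient } = require('redis');

    const io = new Server(3000); // placeholder port
    const pubClient = createClient({ url: 'redis://localhost:6379' }); // placeholder URL
    const subClient = pubClient.duplicate();

    Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
      // With the adapter installed, io.emit(...) reaches clients connected
      // to any instance, not just this one.
      io.adapter(createAdapter(pubClient, subClient));
    });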

Winston for logging a multi-container application

I am planning to use the DigitalOcean App Platform to host my backend, but I wanted to know whether each container on the App Platform would have a different log file (assuming I'm logging to files with Winston), and if so, whether this would even be an issue.
I thought of multiple solutions in case I need to handle this:
1- Log to the database
2- Make another container that receives the logs over HTTP from the other running containers.
(Note: I'm new to dealing with containers, so I might be missing/misunderstanding something.)
Yes, you will get separate log files from each Node.js process. For this scenario Winston supports alternative transports that can centralize logs from many sources.
In line with your suggestion (1), some of these transport options write logs to an RDBMS.
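
As an illustration of that transport approach (a sketch, not from the original answer), here is what shipping every container's logs to one collector with Winston's built-in Http transport could look like, along the lines of suggestion (2); the host, port, and path are placeholders:

    const winston = require('winston');

    const logger = winston.createLogger({
      transports: [
        // POSTs each log entry to a central collector container.
        new winston.transports.Http({
          host: 'log-collector', // hypothetical service name
          port: 8080,
          path: 'logs',
        }),
        // Keep console output so the platform's own log viewer still works.
        new winston.transports.Console(),
      ],
    });

    logger.info('request handled', { container: process.env.HOSTNAME });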

Node.js app redundancy

I want to log a public chat to a database so I can look up chat history in the future.
I am familiar with Node.js and MongoDB.
I don't want to miss logging any chat messages, so I was looking for a redundant solution in case of network disconnects or server failures/restarts.
Everything I've seen regarding failover and balancing treats the Node app as an HTTP server, where the problem can be solved with a reverse proxy sending requests to the different servers.
But I'm at a loss as to how to have 2+ VPSes in different regions, running a Node app that monitors the same public chat, log those chat entries to a DB without race conditions on the DB.
Messaging between the Node instances? But it seems like there would be race conditions with that too...
Thanks for the help.
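
One common way to sidestep the DB race condition described above (a suggestion, not from the original thread) is to make the writes idempotent: derive a deterministic ID from each chat message and enforce it with a unique index, so every instance can log everything and duplicates are simply rejected. A sketch assuming the official mongodb driver; the connection string, collection, and ID scheme are placeholders:

    const { MongoClient } = require('mongodb');

    async function main() {
      const client = await MongoClient.connect('mongodb://localhost:27017'); // placeholder
      const messages = client.db('chatlog').collection('messages');

      // The unique index is what resolves the race: whichever instance
      // inserts a given messageId first wins; the other gets a dup error.
      await messages.createIndex({ messageId: 1 }, { unique: true });

      async function logMessage(msg) {
        try {
          // messageId must be derived from the message itself
          // (e.g. channel + timestamp + author), never generated randomly.
          await messages.insertOne(msg);
        } catch (err) {
          if (err.code !== 11000) throw err; // 11000 = duplicate key
        }
      }

      // ... subscribe to the public chat and call logMessage() per entry
    }

    main();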

My application stops writing to files and opening new TCP connections after some time

I have no idea what could be causing this.
I have a Node application which connects to an external server over TCP and communicates with it. Part of its functionality also includes making relatively frequent HTTP requests.
Each instance of the application establishes up to 30 TCP connections to the external server, and makes HTTP requests as needed. Previously, I've been hosting the application on relatively cheap VPSes, with one instance of the application per server.
Now I'm setting it up on a proper dedicated server. I could run a single instance on the dedicated server and raise the connection limit so that one instance covers what several smaller VPS instances did, but I'd rather run several instances of the application on the dedicated server, each limited to 30 connections.
The application also writes logs to disk (just a plain flat file), and sends logs via UDP to an external logging server. This is done using winston.
After some uptime, however, I'm experiencing an issue where HTTP requests time out (ETIMEDOUT) and the logs stop being written to disk. The application itself is still running, and the TCP connection to the server is still active and working. I can communicate with the application through that connection and it responds as expected. The logging server is still receiving the UDP packets as well. I've noticed that the log files stop being written to, but after a few minutes they appear to be flushed to disk finally, and the missed logs then appear.
My first suspicion was an open-files limit being hit, but the OS (Ubuntu) doesn't have a limit that I'm hitting. I tried disabling any Node HTTP Agent behavior (I'm using the request module, so I just passed false for the agent option).
It's not the webserver on the other end rejecting my connections. While the issue was occurring I was able to successfully wget a file from the webserver using the same external IP as the Node app is using.
I'm tailing the log file and noticing that the time between when a line is generated and when it's flushed to the disk is gradually increasing.
CPU and memory usage are low so there's no way that's the issue. iowait in top is 0.0. I have no idea where to go from here. Any help at all would be greatly appreciated.
I'm running Node 5.10.1.
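
For reference, the agent-disabling mentioned above (passing false for the request module's agent option, so no connection pool is used) looks like this; the URL is a placeholder:

    const request = require('request');

    request({ url: 'http://example.com/api', agent: false }, (err, res, body) => {
      if (err) return console.error('request failed:', err);
      console.log(res.statusCode);
    });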

Using RabbitMQ to capture web application log

I'm trying to set up RabbitMQ to ship web application logs to a log server.
My log server will listen to one channel and store the logs that come in.
There are several web applications that need to send info to the log server.
With many connections (users) hitting the web server, what is the best design to publish messages to RabbitMQ without locking each other? Is it a good idea to keep opening a new connection to the MQ for each web request? Is there some sort of message queue pool?
I'm using IIS for a web server.
I assume you’re leveraging the .NET framework to build your application, given that it’s hosted in IIS. If so, you can also leverage Daishi.AMQP, which has a built-in QueuePool feature. Here is a tutorial that outlines the mechanism in full.
To answer your question, you should initially establish a connection to RabbitMQ from your application server. You can then initialise a Channel (a process that executes within the context of the underlying connection) to serve each HTTP request. It is not a good idea to establish a new connection for each request.
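
That one-connection, channel-per-request pattern translates directly to other clients as well. Since the rest of this page is Node-focused, here is a sketch of it using Node's amqplib rather than .NET; the URL and queue name are placeholders:

    const amqp = require('amqplib');

    let connection; // one long-lived connection per process

    async function getConnection() {
      if (!connection) connection = await amqp.connect('amqp://localhost'); // placeholder
      return connection;
    }

    async function publishLog(entry) {
      const conn = await getConnection();
      // Channels are cheap and multiplexed over the single connection.
      const channel = await conn.createChannel();
      await channel.assertQueue('app-logs'); // placeholder queue
      channel.sendToQueue('app-logs', Buffer.from(JSON.stringify(entry)));
      await channel.close(); // close the channel, keep the connection
    }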
RabbitMQ has a built-in queueing feature. It is well documented; have a look at the official docs: http://www.rabbitmq.com/getstarted.html
