I'm sending log messages with the log4j 2 socket appender to a log server. If the remote server shuts down, messages get lost. I want to retransmit them once the connection is re-established.
I could probably do it by catching the socket exception and saving the message to a temporary queue. Can it be done using only log4j configuration? Maybe using a failover appender or some such?
Edit:
Any ideas? Maybe with the async appender? It already has a queue.
That's quite difficult, because it would require some kind of reliable protocol: the fact that something was sent over a socket doesn't mean it was received and written to disk on the other end. Have a look at JMSAppender, for example.
For simple fail-over you could just use two appenders and two remote servers; don't reboot them both at the same time.
However, logging is not something whose reliability you should care too much about. If you have a business requirement for reliable delivery, you should implement it differently, using appropriate tools.
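To make "reliable protocol" concrete, here is a minimal JMS sketch (not a log4j appender; the broker URL, queue name, and the use of ActiveMQ's connection factory are assumptions for illustration). With persistent delivery and a plain synchronous send, the call doesn't return until the broker has acknowledged and stored the message, which is the guarantee a raw socket write cannot give you:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ReliableLogSender {
    public static void main(String[] args) throws Exception {
        // Broker URL and queue name are made up for this sketch.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://logserver:61616");
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("app.logs"));
            // PERSISTENT delivery: the broker stores the message and keeps it until
            // the log server consumes it, even if the log server is down right now.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            producer.send(session.createTextMessage("a log event"));
        } finally {
            connection.close();
        }
    }
}
```

Whether you get there through log4j's JMSAppender or your own code, the point is the same: the delivery guarantee comes from the messaging layer, not from the socket.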
Currently I am using the STOMP protocol to send messages to ActiveMQ and to listen for messages. This is done in Node.js using the stompit library.
When the application is under high CPU or memory load, it stops sending heartbeats to the broker. The broker then redelivers the message that is currently being processed, leading to repeated processing of the same message.
With heartbeats disabled, the application seems to work fine, but I am unsure what further issues disabling them might cause. Even when the broker is stopped while messages are being sent, the behaviour seems to be the same with or without heartbeats.
I have read that it is an optional parameter, but I am unable to find out its exact use cases.
Can anyone describe scenarios where having no heartbeat can cause issues for the application?
Regarding the purpose of heart-beating, the STOMP 1.2 specification just says:
Heart-beating can optionally be used to test the healthiness of the underlying TCP connection and to make sure that the remote end is alive and kicking.
Heart-beats potentially flow both from the client to the server and from the server to the client so the "remote end" referenced in the spec here could be the client or the server.
For the server, heart-beating is useful to ensure that server-side resources are cleaned up in a timely manner to avoid excessive strain. Server-side resources are maintained for every client connection, and heart-beating helps the broker detect quickly when a connection fails (i.e. heart-beats stop arriving) so that it can clean up those resources. If heart-beating is disabled, it's possible that a dead connection goes undetected and the server keeps maintaining resources for that dead connection in vain.
For a client, heart-beating is useful to avoid message loss when performing asynchronous sends. Messages are often sent asynchronously by clients (i.e. fire-and-forget). If there were no mechanism to detect connection loss, the client could keep sending messages asynchronously on a dead connection. Those messages would be lost since they would never reach the broker. Heart-beating mitigates this situation.
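At the protocol level, heart-beating is negotiated with a single header on the CONNECT/CONNECTED frames. A minimal sketch over a raw TCP socket (host, port, and the 10-second intervals are assumptions; a client library such as stompit would normally set this header for you from its connect options):

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class StompHeartbeatConnect {
    public static void main(String[] args) throws Exception {
        // Host and port are assumptions (61613 is ActiveMQ's default STOMP port).
        try (Socket socket = new Socket("localhost", 61613)) {
            OutputStream out = socket.getOutputStream();
            // heart-beat:cx,cy -- cx is the smallest interval (ms) at which this side
            // can emit heart-beats, cy is the interval at which it wants to receive them.
            String connect = "CONNECT\n"
                    + "accept-version:1.2\n"
                    + "host:localhost\n"
                    + "heart-beat:10000,10000\n"
                    + "\n";
            out.write(connect.getBytes(StandardCharsets.UTF_8));
            out.write(0); // STOMP frames are terminated by a NUL byte
            out.flush();
            // The broker replies with a CONNECTED frame carrying its own heart-beat
            // header; both sides then use the negotiated intervals, and either side
            // may treat missing heart-beats as a dead connection.
        }
    }
}
```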
I am using Artemis ActiveMQ for internal asynchronous processes of my application.
All the connection logic is handled by Spring Integration.
I've encountered a low-disk-space scenario on the Artemis server. This resulted in the Artemis server blocking my message producers without any warning (except a warning in the Artemis server log). However, it could be any other blocking scenario.
The application continued to produce messages without being aware that they weren't being written to the queue.
How can my application (the producer) be informed about such an infrastructure issue, so that I can throw an exception or log an error that will be visible on my application's end?
If your application sends messages asynchronously then there's no way for it to know about problems sending the message (except for problems that happen specifically on the client). Sending messages async is "fire-and-forget"; the client just sends them and doesn't really care about what happens to them. You'd need to send them synchronously in order to get any indication of a problem on the broker.
Like ActiveMQ, the Artemis server supports producer flow control (I've personally never used it). The ActiveMQ documentation explicitly states that it also applies to async producers provided you set the producer window size on the connection factory; the Artemis documentation says nothing about this, but the windowing concept is the same. You should probably give it a shot.
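A hedged sketch of what that could look like with the Artemis JMS client (the setter names, broker URL, and queue name are assumptions based on the Artemis core client settings; check the Artemis docs for the exact knobs). Making durable sends block is the simplest way to turn a broker-side problem into something the producer can see:

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class VisibleFailureProducer {
    public static void main(String[] args) throws Exception {
        // URL and queue name are assumptions.
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        cf.setBlockOnDurableSend(true);      // wait for the broker's ack on persistent sends
        cf.setProducerWindowSize(64 * 1024); // enable client-side producer flow control
        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("app.queue"));
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            // With a blocking (synchronous) send, a broker that is blocking producers
            // (e.g. disk full with a BLOCK address-full-policy) shows up here as a
            // stalled call or an exception instead of the message silently vanishing.
            producer.send(session.createTextMessage("payload"));
        }
    }
}
```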
I'm trying to get my head around MassTransit in combination with RabbitMQ.
The basic concepts are working in a test project, but what I need is the following:
My system will have one or more servers that react to real-life events (telephony). These events will, by means of MassTransit and RabbitMQ, translate into messages that will be picked up by one or more receivers via a separate server set up as the RabbitMQ host. So far so good.
However, I cannot assume that I always have a connection between the publisher and the host machines. Just assume that the publishing server will continue to consume the real-life events, but now cannot publish its messages.
So, the question is: Does MassTransit have some kind of mechanism to store messages locally some way until the connection is re-established?
Or should I install RabbitMQ on every publishing server as well, in order to create a local exchange? Then I have to make the exchanges synchronize themselves after a reconnect.
You probably have to implement a store-and-forward policy. Instead of publishing your message directly through MassTransit and RabbitMQ, you can store the message in a persistent repository (a local database) and delegate to some other process the job of publishing the stored messages through MassTransit afterwards. This approach is often referred to as "client high availability". It is not a substitute for standard server-side HA (high availability) like the kind implemented by RabbitMQ, but it's a good approach in a distributed system (like the one you described) because it can help a lot in server-failure scenarios (e.g. an issue on the RabbitMQ server that causes the loss of messages you still have in some client's store, so you can have them processed again).
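MassTransit itself is a .NET library, so take this as a language-agnostic sketch of the store-and-forward idea rather than of any MassTransit API (all names here are made up, and the local store would be a durable database in practice, not an in-memory queue):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of store and forward: messages always land in a local store first,
// and a separate relay loop publishes them to the broker, deleting each one
// only after the broker has accepted it.
public class StoreAndForwardSketch {
    /** Stand-in for whatever actually publishes to the broker (e.g. via MassTransit). */
    interface Publisher {
        void publish(String message) throws Exception;
    }

    private final Queue<String> outbox = new ConcurrentLinkedQueue<>();

    /** Called by the code that reacts to the real-life (telephony) events. */
    public void accept(String message) {
        outbox.add(message); // 1. store locally -- never depends on the broker being up
    }

    /** Runs in the background and forwards stored messages while the broker is reachable. */
    public void relayLoop(Publisher publisher) throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            String next = outbox.peek();
            if (next == null) {
                Thread.sleep(500);       // nothing to forward yet
                continue;
            }
            try {
                publisher.publish(next); // 2. forward to RabbitMQ
                outbox.poll();           // 3. delete only after a successful publish
            } catch (Exception brokerUnavailable) {
                Thread.sleep(5000);      // broker unreachable: keep the message, retry later
            }
        }
    }
}
```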
I am building an app with several Node.js instances as a backend (an HTTP server, a socket server and a pool of domain servers). Now I am trying to cover several communication and configuration aspects and am wondering whether Redis is an appropriate solution.
So, I would use it for the following purposes:
Implementation of a shared run-time lookup table. It's a table of several hundred relatively simple records, accessed and manipulated by two node instances.
Implementation of message queues. Each domain server receives commands from the HTTP server and should execute them sequentially. A domain server should be able to listen for a Redis event and execute each new command upon its arrival.
The socket server also has a Redis message queue and listens for its events, in order to push notifications to connected clients.
Is Redis "too heavy" for such a purpose?
Does it offer all needed functionality?
I could definitely implement the lookup using a file and/or memory, and a queue using sockets. However, Redis might make the code cleaner and the solution more robust.
Redis is definitely not a heavy solution, on the contrary.
It's small, insanely fast (especially when using pipelining), and easy to deploy. I consider it a lightweight solution, a kind of Swiss Army knife that can solve many problems.
Redis-based message queues are OK if you don't expect any guarantee on message delivery. That is to say, Redis-based queues can't assure you that the client has received the message. If that's a problem for your application, you should consider a heavier solution, like 0MQ or RabbitMQ.
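For a sense of what the lookup table and the queue amount to in Redis, here is a small sketch using the Jedis client (key names, host, and payloads are made up; the same commands are available from any Node.js Redis client). Note that BRPOP removes the element as it hands it over, which is exactly the caveat above: if the consumer dies mid-processing, that command is gone.

```java
import java.util.List;
import redis.clients.jedis.Jedis;

public class RedisQueueSketch {
    public static void main(String[] args) {
        try (Jedis redis = new Jedis("localhost", 6379)) {
            // Shared run-time lookup table as a Redis hash, readable by any instance.
            redis.hset("lookup:domains", "domain-1", "{\"state\":\"active\"}");
            String record = redis.hget("lookup:domains", "domain-1");

            // Simple queue: the HTTP server pushes a command, the domain server
            // blocks on BRPOP until one arrives and then executes it.
            redis.lpush("commands:domain-1", "{\"cmd\":\"provision\"}");
            List<String> next = redis.brpop(0, "commands:domain-1"); // returns [key, value]
            System.out.println(record + " -> " + next.get(1));
        }
    }
}
```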
Is there an IPC option to get the last message in a message queue without removing it?
I want this so that many clients can read the same messages from the same server.
Edit:
Server and clients are on the same machine!
Thanks
I don't believe there is any way to do that using either System V or POSIX message queues. Furthermore, AFAIK neither API allows you to send messages to a remote machine, so unless your clients are running on the same host as the server, you will need to use a higher-level technology.