Using RabbitMQ to capture web application logs (IIS)

I'm trying to set up RabbitMQ to ship web application logs to a log server.
My log server will listen on one channel and store the logs that come in.
There are several web applications that need to send info to the log server.
With many connections (users) hitting the web server, what is the best design for publishing messages to RabbitMQ without the publishers locking each other? Is it a good idea to open a new connection to the MQ for each web request? Is there some sort of message queue pool?
I'm using IIS for a web server.

I assume you're using the .NET Framework to build your application, given that it's hosted in IIS. If so, you can leverage Daishi.AMQP, which has a built-in QueuePool feature. Here is a tutorial that outlines the mechanism in full.
To answer your question: establish a single connection to RabbitMQ when your application starts. You can then initialise a Channel (a lightweight process that executes within the context of the underlying connection) to serve each HTTP request. It is not a good idea to establish a new connection for each request.
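To make the one-connection, channel-per-request idea concrete, here is a minimal sketch of a channel pool. The connection object below is a stand-in for what a real client library (e.g. amqplib in Node, or the QueuePool in Daishi.AMQP) would provide; the pooling logic is the point, and all names are illustrative.

```javascript
// A tiny channel pool: channels are created once over a shared connection,
// then handed out per request and returned afterwards.
class ChannelPool {
  constructor(connection, size) {
    this.idle = [];
    for (let i = 0; i < size; i++) this.idle.push(connection.createChannel());
  }
  // Returns an idle channel, or null if all are in use (caller waits/retries).
  acquire() { return this.idle.pop() || null; }
  release(channel) { this.idle.push(channel); }
}

// Fake connection standing in for a real AMQP connection object.
const fakeConnection = {
  createChannel() {
    return { published: [], publish(msg) { this.published.push(msg); } };
  },
};

const pool = new ChannelPool(fakeConnection, 2);

// Per HTTP request: acquire a channel, publish the log line, release.
const ch = pool.acquire();
ch.publish('web request log line');
pool.release(ch);
```

Because requests borrow channels from the pool rather than opening connections, concurrent publishers don't contend on one channel and the broker sees a single TCP connection per application server.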

RabbitMQ has built-in queueing. It is well documented; have a look at the official tutorials: http://www.rabbitmq.com/getstarted.html

Related

socket.io server to relay/proxy some events

I currently have a socket.io server spawned by a nodeJS web API server.
The UI runs separately and connects to the API via web socket. This is mostly used for notifications and connectivity status checks.
However, the API also acts as a gateway for several microservices. One of these is responsible for computing the data the UI needs to render some charts. This operation is long-running, and for several reasons the computation only starts when a request is received.
In a nutshell, the UI sends a REST request to the API, and the API currently uses gRPC to pass the request to the microservice. This is bad because it blocks both the API and the UI.
To avoid this, the socket server on the API should be able to relay both the UI request and the "computation ended" event received from the microservice; this way nothing would block. This could eventually allow the gRPC server on the microservice to be removed.
Is this something achievable with socket.io?
If not is the only way for the API to spawn a secondary socket connection to the micro service for each one received by the UI?
Is this a bad idea?
I hope this is clear, thanks.
I actually ended up not using socket.io. However, this can still be done with it: if the API spawns a server and the different services connect as clients, https://socket.io/docs/rooms-and-namespaces/ can be used.
This way messages can be "relayed", and even broadcast from the server to both sides when something happens.

How to configure MassTransit in an unreliable network environment?

I'm trying to get my head around MassTransit in combination with RabbitMQ.
The basic concepts are working in a test project, but what I need is the following:
My system will have one or more servers that react to real-life events (telephony). These events will, by means of MassTransit and RabbitMQ, translate into messages that will be picked up by one or more receivers via a separate server set up as the RabbitMQ host. So far so good.
However, I cannot assume that I always have a connection between the publisher and the host machines. Assume that the publishing server continues to consume the real-life events but now cannot publish its messages.
So, the question is: Does MassTransit have some kind of mechanism to store messages locally some way until the connection is re-established?
Or should I install RabbitMQ on every publishing server as well, in order to create a local exchange? Then I have to make the exchanges synchronize themselves after a reconnect.
You probably have to implement a store-and-forward policy. Instead of publishing your message directly through MassTransit and RabbitMQ, you store the message in a persistent repository (a local database) and delegate to some other process the job of publishing, through MassTransit, the messages stored earlier. This approach is often referred to as "client high availability". It does not replace standard server-side HA (high availability) such as that implemented by RabbitMQ, but it is a good approach in a distributed system like the one you described, because it helps a lot in server-failure scenarios (e.g. an issue on the RabbitMQ server causes the loss of messages that you still hold in a client's local store, so you can process them again).
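The store-and-forward policy described above can be sketched as a small outbox: persist first, forward whenever the broker is reachable, delete only after a successful publish. The broker object here is a stub; in the scenario above it would be MassTransit publishing to RabbitMQ, and the in-memory array stands in for a local database table.

```javascript
class StoreAndForward {
  constructor(broker) {
    this.broker = broker;
    this.store = []; // stand-in for a local database table
  }
  record(message) {
    this.store.push(message); // always persist locally first
    this.flush();
  }
  flush() {
    // Drain the store in order while the broker is reachable.
    while (this.store.length > 0 && this.broker.connected) {
      this.broker.publish(this.store[0]);
      this.store.shift(); // delete only after a successful publish
    }
  }
}

const broker = { connected: false, sent: [], publish(m) { this.sent.push(m); } };
const outbox = new StoreAndForward(broker);

outbox.record('call-started'); // broker down: message stays in the local store
broker.connected = true;
outbox.record('call-ended');   // reconnected: both messages are forwarded in order
```

In a real deployment `flush()` would also run on a timer or on a reconnect event, so buffered messages go out even when no new event arrives.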

Chat integration to existing Spring based application [Web + Mobile]

We have an existing web application in Spring MVC, running on a Tomcat server. We also have mobile apps [Android and iOS] for it, which use Spring-based REST services. Now we want to integrate chat functionality into both the mobile and web applications. I came across Socket.io and Node.js, which seem good, but I am not very familiar with those two frameworks. Then I learned about Spring WebSocket.
A few questions:
1. Which is the better way to implement chat for existing Spring-based web and mobile applications: Spring WebSocket, or Socket.io with Node.js?
2. If we go with Socket.io and Node.js, how can I configure Node.js to listen on my existing Tomcat server's port? Or do I need a separate port for the chat client-server communication? [When I tried to use the same port, I got Error: listen EADDRINUSE :::9090]
Any example would be a great help.
TIA.
Here is a sample application that sends messages back and forth; Socket.io is used on the client side to subscribe to a topic on the server side.
Similarly, you can use SockJS with a STOMP client on the client side and Spring on the server side, which provides easy configuration with STOMP as well as message-handling annotations such as:
@MessageMapping, which ensures that if a message is sent to a destination mapping, say "/hello", the method associated with it is called.
@SendTo, which specifies the destination on which the returned message will be broadcast.
Example: STOMP with Spring for sending messages.
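To make the routing idea concrete outside of Spring: @MessageMapping is essentially a table from a destination string to a handler method, and @SendTo names the destination the return value is broadcast on. A toy dispatcher in JavaScript (all names here are illustrative, not any framework's API):

```javascript
const handlers = new Map();   // destination -> { handler, sendTo }
const broadcasts = [];        // what would go out to topic subscribers

// Register a handler for a destination, like @MessageMapping + @SendTo.
function messageMapping(destination, sendTo, handler) {
  handlers.set(destination, { handler, sendTo });
}

// Route an incoming STOMP-style message to its handler and broadcast the reply.
function dispatch(destination, payload) {
  const entry = handlers.get(destination);
  if (!entry) return;
  const reply = entry.handler(payload);
  broadcasts.push({ destination: entry.sendTo, body: reply });
}

// Mirrors a Spring handler: @MessageMapping("/hello") @SendTo("/topic/greetings")
messageMapping('/hello', '/topic/greetings', (name) => `Hello, ${name}!`);
dispatch('/hello', 'world');
```

In Spring the broker does the broadcasting for you; clients subscribed to "/topic/greetings" via SockJS/STOMP receive the handler's return value.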

Scaling webRTC application on Node.js

I am working on a WebRTC application where a P2P connection is established between a customer and free agents. The agents are fetched via an AJAX call in the application. I want to scale the application so that if the agents are running on any Node server, the servers can communicate with each other and agent status updates (available, busy, unavailable) can be performed.
My problem is that the application runs on port 8040 and the agents service runs on 8088, with the application making AJAX calls to fetch the data. What can best be done to scale the agents, or how should the application be scaled?
I followed https://github.com/rajaraodv/redispubsub using Redis pub/sub, but my problem is not resolved, as the agents are still being updated and fetched on another node via AJAX calls.
You didn't give enough info, but to scale your Node.js app you need a central place that holds all the shared state and can itself scale. Redis scales easily; you can also try socket.io, etc.
Once you have, for example, a Redis cluster, make all your Node.js servers communicate with the Redis server. That way all your Node servers have access to the same info, and it is then up to you to send the right info to the right clients.
Message bus approach:
An AJAX call is sent to one of the Node.js servers. If the message doesn't find its destination on that server, it is sent to the next one, and so on. So the signaling server must distribute the received message to all the other nodes in the cluster by establishing a message bus.
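Both answers come down to the same pattern: every Node server subscribes to one shared channel, so an agent-status update received by any node reaches all of them. The in-memory bus below stands in for Redis pub/sub, and the server/agent names are made up for illustration.

```javascript
// Minimal pub/sub bus standing in for Redis: publish fans out to all subscribers.
class Bus {
  constructor() { this.subscribers = []; }
  subscribe(fn) { this.subscribers.push(fn); }
  publish(message) { for (const fn of this.subscribers) fn(message); }
}

function makeNodeServer(name, bus) {
  const agents = new Map(); // this node's local view of agent statuses
  bus.subscribe((msg) => agents.set(msg.agentId, msg.status));
  return {
    name,
    agents,
    // An AJAX call lands on this node; publishing to the bus makes the
    // update visible on every other node too.
    updateAgent(agentId, status) { bus.publish({ agentId, status }); },
  };
}

const bus = new Bus();
const nodeA = makeNodeServer('A', bus);
const nodeB = makeNodeServer('B', bus);

nodeA.updateAgent('agent-1', 'busy'); // arrives at A, visible on B as well
```

With real Redis, each Node server would SUBSCRIBE to an agent-status channel and PUBLISH updates to it; the local `agents` map plays the role of each node's cached view.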

Let a Node.js WAMP.IO server subscribe to a clients publish in purpose to measure latency

I know that the intended use case of a Pub/Sub pattern is to let the clients "multicast" directly to each other, with the server's work transparent. But there have been a few occasions where I'd like my server to react to a client's publish. Basically I'd like a server.Subscribe('event:ident', callback). Before starting to implement this, I guess I'm not the first to run into this limitation. How do you folks solve it?
In this latest case I'd like to measure the latency between two clients. So I let the first client publish a message, to which the other client subscribes and will respond ASAP. Obviously the traffic will pass through the server. So I'd like the server to also respond, so I can separate the latency from the first client to the server from the latency from the server to the second client.
Do you see any pitfalls with this approach? (Except that I'm breaking the strict PubSub pattern)
Note that I'm using the WAMP.IO lib (implementing the WAMP-protocol). I'm not talking about Windows, Apache, PHP and MySQL server!
For WAMPv1, an (ugly) solution is to break the PubSub pattern and have the client send the message as an RPC, whereupon the server publishes a PubSub message.
In WAMPv2 (under development), a server will also be able to subscribe to topics.
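The WAMPv1 workaround can be sketched as follows: the client calls an RPC instead of publishing, the server handler stamps its own clock into the event it then publishes, and that server-side timestamp is what splits client-to-server latency from server-to-client latency. Everything here (names, the fixed clock value) is illustrative, not the WAMP.IO API.

```javascript
// Minimal pub/sub core standing in for the WAMP router.
const subscribers = [];
function subscribe(fn) { subscribers.push(fn); }
function publish(event) { for (const fn of subscribers) fn(event); }

// Server-side RPC handler standing in for a WAMP "call" endpoint: it stamps
// the server clock, publishes the event, and returns the stamp to the caller.
function rpcPing(clientSentAt) {
  const serverAt = 1000; // stand-in for Date.now() on the server
  publish({ clientSentAt, serverAt });
  return serverAt;
}

let seen = null;
subscribe((event) => { seen = event; }); // the second client's subscription

const serverTime = rpcPing(990); // first client's send timestamp
// Given synchronized clocks, client-to-server latency = serverAt - clientSentAt.
```

The pitfall to keep in mind is clock skew: subtracting timestamps taken on different machines only works if their clocks are synchronized, so round-trip halving is often the more robust measurement.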
