Kafka as a messaging queue in microservices - multithreading

To give some background to the question: I am considering Kafka as a channel for inter-service communication between microservices. Most of my microservices are web-based (either a web server, REST server, or SOAP server, to communicate with existing endpoints).
At some point I need an asynchronous channel between microservices, so I am considering Kafka as the message broker.
In my scenario, I have a RESTful microservice that pushes messages to a Kafka queue once its job is done. Another microservice, which is also a web server (embedded Tomcat) with a small REST layer, would consume those messages.
The reason for considering a message queue is that even if my receiving microservice is down for some reason, all incoming messages are added to the queue and the data flow is not disturbed. Another reason is that Kafka is a persistent queue.
Typically, Kafka consumers are multi-threaded.
The question is this: since the receiving microservice is a web server, I am concerned about creating user threads in a servlet-container-managed environment. It might work, but given that user-created threads are considered bad practice within web applications, I am unsure. What would your approach be in this scenario?
Please suggest.
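One pattern worth considering for the scenario above is to keep the consumer loop on a single dedicated thread whose start and stop are tied explicitly to the application's lifecycle (in a servlet container, a `ServletContextListener` or a container-managed `ExecutorService` plays that role), rather than spawning ad-hoc threads. A minimal sketch of that lifecycle pattern, with a stdlib queue standing in for the Kafka topic; the `ConsumerWorker` class and all names are illustrative, not a real Kafka client:

```python
import queue
import threading
import time

class ConsumerWorker:
    """Background consumer with an explicit lifecycle, mirroring what a
    servlet context listener would start on init and stop on destroy."""

    def __init__(self, source, handler):
        self._source = source          # stands in for a Kafka topic
        self._handler = handler
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):                   # call on application startup
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            try:
                msg = self._source.get(timeout=0.1)  # akin to consumer.poll()
            except queue.Empty:
                continue
            self._handler(msg)

    def stop(self):                    # call on application shutdown
        self._stop.set()
        self._thread.join()

# Usage: start the worker, publish a few messages, shut down cleanly.
topic = queue.Queue()
received = []
worker = ConsumerWorker(topic, received.append)
worker.start()
for i in range(3):
    topic.put(f"order-{i}")
time.sleep(0.5)
worker.stop()
print(received)  # -> ['order-0', 'order-1', 'order-2']
```

The point of the sketch is the clean shutdown path: the container (or whatever owns the application lifecycle) is the one that stops the thread, which addresses the usual objection to user-created threads in managed environments.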

Related

Implementing RESTful API in front of Event based microservices

I'm working on a system that implements multiple microservices which communicate via a RabbitMQ messaging bus.
These microservices are created using Python with the pika library (to publish messages as well as consume from a RabbitMQ queue).
One of these microservices (let's call it 'orders') has a connected database to store data.
So far, the application components are asynchronous, relying fully on RabbitMQ exchanges/queues for communication and, when needed, implementing callback queues when one microservice needs to request data from another.
Now that I have backend microservices talking to each other, I would like to implement a RESTful API interface for this 'orders' microservice so clients (ex: web browsers, external applications) can receive and send data.
I can think of two ways to do this:
Create another microservice (let's call it 'orders-api') in something like Flask and have it connect to the underlying database behind the 'orders' microservice. This seems like a bad idea, since it breaks the microservice pattern of having a database connected to only a single microservice (I don't want two microservices having to know about the same data model).
Create an 'api-gateway' microservice which exposes a RESTful API and, when receiving a request, requests information from the 'orders' microservice via the messaging bus, similar to how RabbitMQ documents Remote Procedure Calls here: https://www.rabbitmq.com/tutorials/tutorial-six-python.html. This would mean that the 'api-gateway' would be synchronous and thus would block while waiting for a response on the messaging bus.
I'm not sure whether there are other ways to achieve this that I'm not familiar with. Any suggestions on how to integrate a RESTful API in this environment would be appreciated!
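For the second option, the RPC-over-messaging pattern from the RabbitMQ tutorial boils down to a correlation id that ties each reply back to its request (pika's `BasicProperties.correlation_id`). A minimal sketch of that idea, using stdlib queues and a thread in place of pika channels and real RabbitMQ queues; all names here are illustrative:

```python
import queue
import threading
import uuid

# Broker stand-ins: one request queue, one shared reply queue.
request_q = queue.Queue()
reply_q = queue.Queue()

def orders_service():
    """The 'orders' side: consume requests, publish correlated replies."""
    while True:
        msg = request_q.get()
        if msg is None:          # sentinel to stop the service
            break
        reply_q.put({"correlation_id": msg["correlation_id"],
                     "body": f"order data for {msg['body']}"})

def rpc_call(order_id, timeout=1.0):
    """The 'api-gateway' side: publish a request, block until the reply
    with a matching correlation id arrives."""
    corr_id = str(uuid.uuid4())
    request_q.put({"correlation_id": corr_id, "body": order_id})
    while True:
        reply = reply_q.get(timeout=timeout)
        if reply["correlation_id"] == corr_id:
            return reply["body"]
        reply_q.put(reply)       # not ours; put it back for another caller

worker = threading.Thread(target=orders_service, daemon=True)
worker.start()
result = rpc_call("42")
request_q.put(None)
print(result)  # -> order data for 42
```

This makes the trade-off in option 2 concrete: the gateway does block per call, but only for the duration of one request/reply round trip, and an exclusive per-caller reply queue (as in the pika tutorial) avoids the put-it-back contention shown here.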

How to dynamically detect the web-server nodes in a load-balanced cluster?

I am implementing some real-time, collaborative features in an ASP.NET Web API based application using WebSockets and things are working fine when the app is deployed on a single web server.
When it is deployed on a farm behind a software (or hardware) load balancer, I would like to implement the pub-sub pattern, so that any change happening on one of the web servers invokes the same logic to check and push those changes via WebSocket to the clients connected to any of the other web servers.
I understand that this can be done with an additional layer using RabbitMQ, Redis, or some such pub/sub or messaging component.
But is there a way to use DNS or TCP broadcast or something that is already available on the Windows Server/IIS to publish the message to all the other sibling web-servers in the cluster?
No.
You could use MSMQ instead of RabbitMQ, but that's not really going to help either, as it's a queue and not pub/sub, so ignore that.
If it's SignalR you're using, there are plenty of docs on how to scale out, like Introduction to Scaleout in SignalR.
Even if it's not SignalR, you can probably get some ideas from there.
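The reason a plain queue (like MSMQ) does not fit is that each queued message goes to exactly one consumer, whereas a pub/sub backplane delivers a copy of every message to every subscribed node. A toy sketch of the fan-out behavior the answer is describing, with in-process queues standing in for a component like Redis or RabbitMQ; the `Backplane` class and names are illustrative:

```python
import queue

class Backplane:
    """Minimal pub/sub hub of the kind Redis or a SignalR backplane
    provides: every web-server node subscribes and receives its own
    copy of every published message."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, node_name):
        q = queue.Queue()
        self._subscribers[node_name] = q
        return q

    def publish(self, message):
        # Fan out: every subscriber gets a copy (unlike a work queue,
        # where one consumer would take the message).
        for q in self._subscribers.values():
            q.put(message)

hub = Backplane()
node_a = hub.subscribe("web-1")
node_b = hub.subscribe("web-2")
hub.publish("doc 7 changed")
msg_a = node_a.get()
msg_b = node_b.get()
print(msg_a, "|", msg_b)  # both nodes receive the same message
```

Each web server would then relay the message it receives to its own locally connected WebSocket clients.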

Technical Differences Between Service and Web Workers

I've studied Web Workers and Service Workers and I know that they are intended for different approaches. This thread describes them in more detail. However, what I don't understand is the technical difference between the two. While a Service Worker is meant to be a proxy between a server and a client-side application, a Web Worker can be that too: it has access to XMLHttpRequest, so you can use it as a proxy as well.
What is the technical difference between a Web Worker and a Service Worker?
The key difference between the two is that a Service Worker is intended to intercept network requests that would typically be sent directly to a remote service, and to handle the event in such a way that the front-end client code can continue working even when the network is unavailable. In other words, it provides the basis of an offline mode for a web app. The front-end code makes standard fetch() requests as if it were talking to the server, and these are intercepted by the Service Worker.
A Web Worker is just a general-purpose background thread. The intention here is to run background code so that long-running tasks do not block the main event loop and cause a slow UI. Web Workers do not intercept network requests; rather, the front-end code explicitly sends messages to the Web Worker.

Service Bus Brokered VS Relayed Messaging

I have a question that is confusing me: what are the differences between the two types of Service Bus messaging, brokered messaging and relayed messaging? I am not looking at this from a development perspective; rather, I want to understand the concepts and the differences between them.
Thank you.
Service Bus Relay and Service Bus Brokered Messaging are both mechanisms for developing distributed and hybrid applications. However, they target different development and access patterns.
Service Bus (SB) Relay provides a simple & secure way to do service remoting, i.e., it enables you to securely expose a service hosted on a private cloud to external clients. As is the case with service remoting scenarios, clients explicitly invoke the methods exposed by the "Relayed" service. The primary advantage of SB Relay is that the service can be exposed without requiring any changes to your Firewall settings or any intrusive changes to your corporate network infrastructure.
SB Brokered Messaging on the other hand provides a durable messaging platform with components such as Queues, Topics and Subscriptions. These can be used to implement complex patterns such as publish-subscribe and temporal decoupling between different parts of your application. Since the brokered messaging infrastructure can reliably store the messages, the senders and the receivers do not have to be online at the same time, or do not have to process the messages at the same pace.
Relayed messaging is thus appropriate for scenarios where you have a service that you want to expose to external clients. Clients interact with the "Relayed" service in the same manner as they would if it were on the local network, except that they access it via the SB Relay endpoint. Since this is a service remoting scenario, the response is immediate, subject to network latency. However, if for whatever reason the service is unavailable at that moment, the client's request will fail.
In the case of brokered messaging, since the send & receive operations are decoupled, the sender can continue to send messages that are reliably stored on the service regardless of whether the receiver is online or not. However, the tradeoff for this resiliency is that the request will be processed subject to receiver's ability to retrieve and process the message.
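The contrast in the two paragraphs above can be sketched in a few lines: a relayed call fails outright when the service is offline, while a brokered send succeeds regardless, and the receiver drains the stored messages later at its own pace. This is a toy illustration using a stdlib queue in place of a Service Bus queue; no real Service Bus API is involved and all names are made up:

```python
import queue

# Relay-style call: service remoting, so it fails immediately
# if no service is listening at the endpoint.
def relay_call(service, payload):
    if service is None:
        raise ConnectionError("relayed service unavailable")
    return service(payload)

# Brokered messaging: the queue durably stores messages, so sends
# succeed whether or not a receiver is online.
sb_queue = queue.Queue()
for i in range(3):
    sb_queue.put(f"invoice-{i}")   # receiver is offline; sends still succeed

# ... later, the receiver comes online and drains the backlog.
received = []
while not sb_queue.empty():
    received.append(sb_queue.get())

# Meanwhile, a relayed call to an offline service just fails.
try:
    relay_call(None, "invoice-0")
    relay_error = None
except ConnectionError as e:
    relay_error = str(e)

print(received)     # -> ['invoice-0', 'invoice-1', 'invoice-2']
print(relay_error)  # -> relayed service unavailable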
I think the main difference is the synchronous vs. asynchronous nature of the connectivity.
Where relay is mostly a firewall-friendly way to expose web services to the public world (even from behind firewalls, NAT devices, etc.), messaging is more a way to exchange messages asynchronously over queues and topics. (Look at it as the next version of MSMQ, with cloud support.)
Everything depends on the scenario, but if you are looking for
- Routing (pub/sub)
- Loose coupling sender & receiver
- Load leveling
Then you should definitely go for messaging.
If you want to make your service easily reachable for the outside world, relay service is your friend.
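Load leveling, the third bullet above, means a burst from the sender lands on the queue and a fixed pool of receivers drains it at its own steady pace, instead of the burst hitting the service directly. A small sketch of the idea with stdlib threads and a queue standing in for a Service Bus queue; the names are illustrative:

```python
import queue
import threading

# A burst of messages is absorbed by the queue; a fixed pool of two
# competing receivers levels the load by draining it at its own pace.
work_q = queue.Queue()
processed = []
lock = threading.Lock()

def receiver():
    while True:
        item = work_q.get()
        if item is None:        # sentinel: shut this receiver down
            break
        with lock:
            processed.append(item)
        work_q.task_done()

pool = [threading.Thread(target=receiver) for _ in range(2)]
for t in pool:
    t.start()

for i in range(10):             # burst from the sender
    work_q.put(i)

work_q.join()                   # every message eventually processed
for _ in pool:
    work_q.put(None)
for t in pool:
    t.join()

print(sorted(processed))        # all ten messages, split across receivers
```

The sender never waits on the receivers; the queue depth, not the sender, absorbs the burst.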
From Azure's site:
Relay
The Service Bus Relay service enables you to build hybrid applications that run in both a Windows Azure datacenter and your own on-premises enterprise environment. The Service Bus relay facilitates this by enabling you to securely expose Windows Communication Foundation (WCF) services that reside within a corporate enterprise network to the public cloud, without having to open up a firewall connection or requiring intrusive changes to a corporate network infrastructure.
Relay also handles load balancing for you (you can have multiple applications listening at the same endpoint for the majority of the bindings).
Brokered Messaging
The second messaging solution, new in the latest release of the Service Bus, enables "brokered" messaging capabilities. These can be thought of as asynchronous, or decoupled, messaging features that support publish-subscribe, temporal decoupling, and load balancing scenarios using the Service Bus messaging infrastructure. Decoupled communication has many advantages; for example, clients and servers can connect as needed and perform their operations in an asynchronous fashion.
Brokered messaging includes Queues and Topics / Subscriptions that allow you to send / receive messages asynchronously.
The main difference is that with relay, you have applications listening at an endpoint: when you send a message, the application processes that message as it is received. With brokered messaging, the message is stored by the service when it is received and can be processed at any time.

Using Service Bus to send messages from a Web Role to all other Web Roles

I’m designing a backend that allows users to establish a TCP socket with it and send/receive stuff along this socket (using a pseudo-protocol I’ve made up) in real-time.
It has to be scalable – i.e. architected on a cloud host. Currently I’m evaluating Windows Azure.
To achieve scalability the application will run on several Web Role Instances. Meaning the users’ TCP sockets will be split across several instances (via a load balancer).
This backend is an event-driven application – when a user sends something to it the message should be passed on to all other connected users.
This means there must be a reliable way to send messages from one Web Role Instance to all other Web Role Instances. As far as I understand, this is what inter-role communication refers to.
Using Service Bus, is it possible for all Web Role Instances to subscribe to a Topic and publish messages to it? Thus implementing the event-driven requirements of my distributed application?
(If not then I’ve misunderstood what this article is about: http://windowsazurecat.com/2011/08/how-to-simplify-scale-inter-role-communication-using-windows-azure-service-bus/)
I wanted to find this out before delving too deep into learning C#, .NET and Windows Azure development.
Thank you for your help.
Yes. Using the Service Bus, all the web roles could send messages to a single topic, and each role could have its own individual subscription to that topic, such that they all receive the messages sent.
Clemens Vasters has implemented an extension to SignalR using the Service Bus. It is possible that SignalR plus the Service Bus may meet the needs of your project, including the TCP socket implementation.
http://vasters.com/clemensv/2012/02/13/SignalR+Powered+By+Service+Bus.aspx
