I am writing an application where messages about data changes through an API are published over a RabbitMQ message bus. The messages are consumed by a SignalR hub and pushed to subscribed clients.
Now I find messages in my Eventlog that start with:
Exception: System.Threading.ThreadAbortException: Thread was being aborted.
I found SO questions and answers like:
Why am i getting "Thread was being aborted" in asp.net?
and
What exactly is Appdomain recycling
But that raises the question: if an application running in an AppDomain in the application pool is recycled upon inactivity, how can SignalR maintain a connection to subscribed clients?
Why does it work for SignalR to run in an IIS app pool but not a RabbitMQ consumer?
The SignalR client libraries may have a default behavior of reconnecting after losing the connection (the JavaScript one definitely does).
So an app pool recycle would just make your client reconnect to your server and continue receiving messages as before the interruption.
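As a minimal sketch of that reconnection behavior (using the ASP.NET Core SignalR .NET client from Microsoft.AspNetCore.SignalR.Client; the hub URL and method name are illustrative, and the classic ASP.NET SignalR JavaScript client offers similar reconnection out of the box):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

class ReconnectingClient
{
    static async Task Main()
    {
        // Hypothetical hub URL; WithAutomaticReconnect retries after the
        // connection drops, e.g. when the server's app pool is recycled.
        var connection = new HubConnectionBuilder()
            .WithUrl("https://example.com/notificationsHub")
            .WithAutomaticReconnect()
            .Build();

        // "dataChanged" is an illustrative method name pushed from the hub.
        connection.On<string>("dataChanged", payload =>
            Console.WriteLine($"Received: {payload}"));

        connection.Reconnected += connectionId =>
        {
            Console.WriteLine($"Reconnected with id {connectionId}");
            return Task.CompletedTask;
        };

        await connection.StartAsync();
        Console.ReadLine();
    }
}
```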
We need to process configuration messages received from an Azure Service Bus queue.
We need to implement an application that listens for these messages and processes them when they are received.
We can create a .NET Core console application to listen to the queue.
But we need to host this on Linux. How do we host this console application on Linux as an always-running service?
Thanks in advance.
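One common way to approach this (a minimal sketch, assuming the .NET Generic Host and the Microsoft.Extensions.Hosting.Systemd package; the queue-listening logic below is only a placeholder):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class QueueListenerWorker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Placeholder: register your Azure Service Bus message handler here
        // and keep the process alive until systemd asks it to stop.
        while (!stoppingToken.IsCancellationRequested)
        {
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

public class Program
{
    public static Task Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseSystemd() // integrates with systemd start/stop signals on Linux
            .ConfigureServices(services => services.AddHostedService<QueueListenerWorker>())
            .Build()
            .RunAsync();
}
```

A matching systemd unit file would then run the published binary with `Restart=always` so the listener stays up.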
I'm reading the very limited information about Azure SignalR service as well as the quick start guide and want to make sure I'm understanding this correctly.
We still seem to have a hub and if I understand this correctly, the function of Azure SignalR service is to simply push the messages to connected clients.
In my case, I store the history of chat so by hitting the hub first, I'm able to still use my backend logic to persist chat history or do any other processing that I may want. Then simply allow Azure SignalR service to push the data to connected clients.
The main benefit seems to be handling the scaling of the service.
Am I getting this right?
Yes, you are totally right.
You will use exactly the same API as ASP.NET Core SignalR to write your business logic, which means you can persist whatever you want when messages from clients hit your hubs.
Azure SignalR Service will be the underlying transport between your app server and connected clients. For example, when you want to broadcast messages to all your clients, you actually send only one message to Azure SignalR Service, and the service broadcasts it to all clients for you, so you don't have to worry about scale-out; Azure SignalR Service handles that for you.
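As a rough illustration (assuming the ASP.NET Core 2.x-era Microsoft.Azure.SignalR package from the quick start; hub and method names are made up), the hub code stays ordinary SignalR and only the wiring changes:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.DependencyInjection;

public class ChatHub : Hub
{
    public async Task BroadcastMessage(string name, string message)
    {
        // Your own backend logic still runs here first,
        // e.g. persisting the chat history to a database.
        await Clients.All.SendAsync("broadcastMessage", name, message);
    }
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // AddAzureSignalR reads the service connection string from configuration.
        services.AddSignalR().AddAzureSignalR();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Routes the hub through Azure SignalR Service instead of
        // hosting the client connections on this server.
        app.UseAzureSignalR(routes => routes.MapHub<ChatHub>("/chat"));
    }
}
```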
You understand correctly.
SignalR for ASP.NET Core is not yet ready for production; SignalR for ASP.NET (MVC) has been around for a while and is stable.
SignalR consists of 2 pieces: server and client. The server is as you describe: a "hub" that you can use to push information to clients.
On a webpage you load a piece of generated JavaScript (generated automatically from your hub definitions). Basically you let your website visitors (clients) connect to the hub through SignalR's mechanism (SignalR will choose the proper transport depending on the browser), and then 'subscribe' to the different methods you have exposed in your hub.
The workings are simple: whenever you call code in your hub (from clients or from backend code), communication is automatically handled for you to all subscribed clients.
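As a minimal sketch (classic ASP.NET SignalR from the Microsoft.AspNet.SignalR package; the hub and method names are made up), a hub and a push from backend code might look like this:

```csharp
using Microsoft.AspNet.SignalR;

// Clients subscribe to this hub via the generated JavaScript proxy.
public class NotificationsHub : Hub
{
    // Callable from connected clients; pushes to every subscriber.
    public void Send(string message)
    {
        Clients.All.notify(message);
    }
}

public static class BackendNotifier
{
    // Callable from backend code outside the hub (e.g. after a data change).
    public static void PushToClients(string message)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<NotificationsHub>();
        context.Clients.All.notify(message);
    }
}
```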
Note: If you are running this on an Azure web app, enable the "Always On" setting and set the "WebSockets" toggle to "enabled", otherwise you'll see strange behaviour.
Note 2: The RC version of SignalR for ASP.NET Core 1.0 has just been released (7th of May 2018), so it might be a while before this software becomes stable and available through the public NuGet/npm channels.
We are designing a system that is web based but also uses NServiceBus and Azure Service Bus to communicate. We have an on-premises server running IIS for the application and also several cloud services running web roles for communicating with external parties (those parties call a RESTful interface in the cloud and the message is put on the bus, or vice versa).
Both the cloud solutions and the on-premises server need to subscribe to and publish messages. The publishing does not seem to be an issue, but what happens to those subscriptions if IIS shuts down the processes? Do they get woken again when a message arrives, or is the service bus subscription really pull based, requiring an active listener?
I have seen questions on here about hosting publishers but nothing about the safety of subscribers.
Extra Info:
Quite by chance we noticed that sometimes in our development environment the applications would need to be started a couple of times before the messages would start arriving. However, it occurred to me that if there were actually messages already in the queue when the application started, they would be processed, and otherwise not. So the restart just means that it sees older messages and processes them, and once running it gets on just fine. However, another colleague noted that NServiceBus-related startup log files were only being generated after he visited the website hosted in the same web application for the first time.

I have just had a similar problem: messages were in the queue but the breakpoint on the handler was not being hit. When I hit a Web API method on the same IIS application instance, suddenly messages were being processed. So my conclusion from this is that no, it is not safe to host a subscriber under IIS, or in this case even a web role.
I am answering my own question based on my experience rather than any deep knowledge of NServiceBus or Azure Service Bus.
It seems that it is not safe to rely on the service bus listener being active under IIS or a web role. Both can shut down the process, and because the bus listener relies on polling the service rather than having messages pushed to it, no further messages will be received until the process starts again.
The good news is that the process will be restarted when the associated web site or web service is hit. So if you absolutely know that you will receive more traffic on the site than on the bus, you might take the risk that the bus listener will remain active. However, for our project we are in the process of splitting the listener out into a separate Windows service.
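A bare-bones sketch of such a standalone listener (assuming NServiceBus 6+ self-hosted in a console/Windows service; endpoint, transport, and message names are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Illustrative event published on the bus.
public class OrderAccepted : IEvent
{
    public Guid OrderId { get; set; }
}

// The handler runs inside the standalone process, so it no longer
// depends on IIS keeping an AppDomain alive.
public class OrderAcceptedHandler : IHandleMessages<OrderAccepted>
{
    public Task Handle(OrderAccepted message, IMessageHandlerContext context)
    {
        Console.WriteLine($"Handled order {message.OrderId}");
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        var configuration = new EndpointConfiguration("Subscriber.Endpoint");
        // Transport configuration omitted; point this at your
        // Azure Service Bus namespace in a real endpoint.
        var endpoint = await Endpoint.Start(configuration);

        Console.WriteLine("Listening. Press Enter to stop.");
        Console.ReadLine();
        await endpoint.Stop();
    }
}
```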
To give you background on the question, I am considering Kafka as a channel for inter-service communication between microservices. Most of my microservices are web in nature (either a web server, REST server, or SOAP server to communicate with existing endpoints).
At some point I need an asynchronous channel between microservices, so I am considering Kafka as the message broker.
In my scenario, I have a RESTful microservice that, once its job is done, pushes messages to a Kafka topic. Then another microservice, which is also a web server (embedded Tomcat) with a small REST layer, would consume those messages.
The reason for considering a message queue is that even if my receiving microservice is down for some reason, all incoming messages would be added to the queue and the data flow won't be disturbed. Another reason is that Kafka is a persistent queue.
Typically, Kafka consumers are multi-threaded.
The question is that, with the receiving microservice being a web server, I am concerned about creating user threads in a servlet-container-managed environment. It might work, but given that user-created threads are considered bad practice within web applications, I am confused. So what would be your approach in this scenario?
Please suggest.
I am currently using SignalR on Azure Websites with a single instance to push data to clients. No problems.
We're splitting our project into separate web/worker and wcf roles so we can scale them independently.
The site will work like this.
Scenario A
A user submits some data to the web role and it gets put in a Service Bus queue ready for worker A; the web role sends a message to worker A that a new item has been added, in case it's idle (to save polling). When worker A has processed it, it sends a message back to the web roles, which push it out to particular clients.
Scenario B
The WCF role receives data and it gets put in a different Service Bus queue ready for worker B; the WCF role sends a message to worker B that a new item has been added, in case it's idle. When worker B has processed it, it sends a message to the web roles, which push it out to particular clients.
illustrated badly below:
I am going to enable the SignalR Service Bus backplane for the web roles that serve users. What I'm not sure about is how to get my roles communicating with each other.
I'll need:
web role => worker A
worker A => web role
wcf role => worker B
worker B => web role
Am I creating hubs on the web role, worker A, and worker B, all with Service Bus topics? And then connecting somehow with the SignalR .NET clients? How do I make sure it goes to all instances of the web role without exposing it publicly?
For some reason it seems simple for hundreds of clients to connect via JavaScript to my web role hub, but when I try to connect some internal ones I can't quite figure it out.
If anyone's interested... what I ended up doing is this:
I created hubs on both the web and WCF roles. The web role has a connection that allows JavaScript proxies at /signalr, and the web and WCF roles have one that doesn't at /signalr-internal.
I used the Azure Service Bus as a backplane and let it handle both the web and wcf hubs automatically with no extra tinkering.
In the SignalR authentication I probed to see where the connection was coming from (i.e. an internal endpoint or the external SSL endpoint) and denied/allowed access to particular hubs based on this. This allowed me to use the .NET SignalR clients on my workers, which automatically connect/reconnect, etc.
This ended up working nicely with no issues as of yet and it was simple to implement. I'll update if I run into any problems.
EDIT #1:
DO NOT USE THIS METHOD! Everything works splendidly until you actually deploy it into a live environment and then you get a host of issues that made me want to tear my hair out.
What I actually ended up doing (which works perfectly in live) was to use Service Bus topics and create subscriptions to them for the listeners. This creates TCP connections and allows your communication to stay 100% internal without any crazy transport or boundary problems.
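As a rough sketch of that pattern (using the current Azure.Messaging.ServiceBus SDK rather than what was available at the time; topic and subscription names are made up), a worker publishes to a topic and the web role listens on its own subscription:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class InternalMessaging
{
    // Worker A side: publish a notification after processing an item.
    public static async Task PublishProcessedAsync(string connectionString, string itemId)
    {
        await using var client = new ServiceBusClient(connectionString);
        ServiceBusSender sender = client.CreateSender("worker-a-events"); // illustrative topic name
        await sender.SendMessageAsync(new ServiceBusMessage(itemId));
    }

    // Web role side: listen on its own subscription and push to SignalR clients.
    public static async Task<ServiceBusProcessor> StartListeningAsync(string connectionString)
    {
        var client = new ServiceBusClient(connectionString);
        ServiceBusProcessor processor =
            client.CreateProcessor("worker-a-events", "web-role", new ServiceBusProcessorOptions());

        processor.ProcessMessageAsync += async args =>
        {
            string itemId = args.Message.Body.ToString();
            // Push to the relevant SignalR clients here.
            await args.CompleteMessageAsync(args.Message);
        };
        processor.ProcessErrorAsync += args =>
        {
            Console.WriteLine(args.Exception);
            return Task.CompletedTask;
        };

        await processor.StartProcessingAsync();
        return processor;
    }
}
```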
EDIT #2:
Since this post, Event Hubs were released and we switched over and never looked back. See last comment.
Peter, realistically to get this approach to work you would need to switch to Web Roles or IIS hosted on an IaaS VM.
Currently Websites don't support Azure Virtual Networks which is the only way to enable private network inter-connectivity between instances on Azure.
You can add VMs, web and worker roles to a Virtual Network, which should provide you with the access you're looking for without needing to expose everything via public endpoints.