SignalR Actors or Stateless Services - Azure

I'm looking into migrating an application to Service Fabric running on Azure. It's a real-time chat-style application using SignalR. I'd like to have an instance of a service running, self-hosting a SignalR hub (via OWIN), for each "affinity group" in which users are communicating. This is so I can avoid having to scale out SignalR with a backplane. I'd like to be able to spin these services up and down as groups of users enter and leave my application. I would expect to host tens of these services per VM, with a typical load of hundreds of users per group.
My idea is that I'd have a service locator that clients connect to initially to discover which service (port) is hosting their group. I would also have a service that spun up an instance of the chat service when the first request for that group came in.
How would I architect this in Service Fabric on Azure so that (a) each of the services/actors is accessible with a SignalR client from the internet, and (b) I'm only running as many services as necessary to serve the m active groups out of n total groups? The demand for this app is very transient and spiky, so I'm hoping to take advantage of the fact that services are simply processes that can be provisioned in a matter of seconds, versus my current scenario where I have to spin up entire cloud services and wait tens of minutes to handle spikes (at which point it's too late).

You would do a few things:
Have a "service manager" service that intercepts initial join requests, creates new Service Fabric services on the fly if they don't already exist (or, if they do, resolves the service's current location), and then returns that address to the client.
Alternatively, it could just pass back the internal service name (if you're OK exposing that information) and the client could do the resolution and then connect. To some degree this will depend on how much information you want to expose to the client, and whether you can or want to modify it to "know about" Service Fabric.
The client would then connect to the actual backing service directly.
You would have to come up with some mechanism for the actual chat services to know when nobody is left, so they can either delete themselves or go back through the manager.
You would probably be best off modeling the chat service as a Reliable Service rather than an actor, since the Reliable Services stack allows more flexibility around communication protocols/stacks.
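As a rough illustration of the manager idea, here is a minimal sketch assuming a stateless per-group chat service; the application name, service type name, and naming scheme (fabric:/ChatApp, GroupChatServiceType) are placeholders, not anything from the answer above:

```csharp
// Hypothetical "service manager" sketch: create a per-group chat service on
// demand, then resolve its current address to hand back to the client.
using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Client;

public class GroupServiceManager
{
    private readonly FabricClient fabric = new FabricClient();

    public async Task<string> GetOrCreateGroupServiceAsync(string groupId)
    {
        var serviceUri = new Uri($"fabric:/ChatApp/Group-{groupId}");

        try
        {
            // Spin up a new named service instance for this group.
            await fabric.ServiceManager.CreateServiceAsync(new StatelessServiceDescription
            {
                ApplicationName = new Uri("fabric:/ChatApp"),
                ServiceName = serviceUri,
                ServiceTypeName = "GroupChatServiceType", // assumed type name
                InstanceCount = 1,
                PartitionSchemeDescription = new SingletonPartitionSchemeDescription()
            });
        }
        catch (FabricElementAlreadyExistsException)
        {
            // Another join request won the race; the service already exists.
        }

        // Resolve where the service is currently listening. The returned
        // Address is whatever the service's listener published (typically a
        // JSON blob of listener addresses) and is what you return to clients.
        var resolver = ServicePartitionResolver.GetDefault();
        var partition = await resolver.ResolveAsync(
            serviceUri, new ServicePartitionKey(), CancellationToken.None);
        return partition.GetEndpoint().Address;
    }
}
```

Because creation and resolution both go through the naming service, this pattern stays correct even when Service Fabric moves the chat service between nodes.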

Related

SignalR and High Availability -- Can Hub Clients recover if the server goes away?

Suppose we have an Azure-hosted Web Role with a highly available WebAPI application (say 99.95%, as per https://azure.microsoft.com/en-us/documentation/articles/resiliency-disaster-recovery-high-availability-azure-applications/) that has ~1000 clients. The client is a ReactJS application. The WebAPI application will push notifications tailored to specific client groups (e.g. not all client users are interested in all events, but more than one user may be interested in the same event).
From reading the SignalR documentation and playing with some samples, it feels like SignalR Groups will help us flow the right events to the right ReactJS application instances. Additionally, we would use one of the SignalR scale-out providers to make sure that we push to the clients from the right WebAPI server instance.
Question: How do applications recover when the "right WebAPI" instance becomes unavailable?
I can imagine a server-side active/passive scheme with some complexity around making sure there is at least one 'server' for each Hub Client... but can a server connect (in an unsolicited way) to a Hub Client? Would we have each Hub Client connect (when registering for a Group) to more than one server?
How have applications solved this issue with SignalR?
I think I missed the obvious point: the scale-out providers and the backplane provide the very protection that clients need against servers that go away. Clients don't connect to a specific server, but to a load-balanced name.
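For concreteness, this is roughly what wiring up a scale-out backplane looks like in an OWIN Startup class. The Service Bus provider is shown as one example (the connection string and topic prefix below are placeholders); the Redis and SQL Server providers are configured the same way:

```csharp
// Minimal sketch: register a SignalR backplane so any server behind the
// load-balanced name can push to any connected client.
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Placeholder connection string for the Service Bus namespace.
        string connectionString = "Endpoint=sb://...;SharedAccessKeyName=...";
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, "app");

        app.MapSignalR();
    }
}
```

With this in place, a client that reconnects to a different server instance after a failure still receives its group's messages, because every server sees every message via the backplane.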

Azure SignalR and backplanes for inter role communication

I am currently using SignalR on Azure Websites with a single instance to push data to clients. No problems.
We're splitting our project into separate web, worker, and WCF roles so we can scale them independently.
The site will work like this.
Scenario A
A user submits some data to the web role and it gets put in a Service Bus queue ready for worker A; the web role also sends a message to worker A that a new item has been added, in case it's idle (to save polling). When worker A has processed it, it sends a message back to the web roles, which push it out to particular clients.
Scenario B
The WCF role receives data and puts it in a different Service Bus queue ready for worker B; the WCF role sends a message to worker B that a new item has been added, in case it's idle. When worker B has processed it, it sends a message to the web roles, which push it out to particular clients.
I am going to enable the SignalR Service Bus backplane for the web roles to push to users. What I'm not sure about is how to get my roles communicating with each other.
I'll need:
web role => worker A
worker A => web role
wcf role => worker B
worker B => web role
Am I creating hubs on web, worker A, and worker B, all with Service Bus topics? And then connecting somehow with the SignalR .NET clients? How do I make sure it goes to all instances of the web role without exposing it publicly?
For some reason it seems simple for hundreds of clients to connect to my web role hub via JavaScript, but when I try to connect some internal ones I can't quite figure it out.
If anyone's interested... what I ended up doing is this:
I created hubs on both the web and WCF roles. The web role has a connection that allows JavaScript proxies at /signalr, and the web and WCF roles have one that doesn't at /signalr-internal.
I used the Azure Service Bus as a backplane and let it handle both the web and WCF hubs automatically, with no extra tinkering.
In the SignalR authentication I probed to see where the connection was coming from (i.e. an internal endpoint or the external SSL endpoint) and allowed or denied access to particular hubs based on this. This allowed me to use the .NET SignalR clients on my workers, which automatically connect, reconnect, etc.
This ended up working nicely with no issues as of yet, and it was simple to implement. I'll update if I run into any problems.
EDIT #1:
DO NOT USE THIS METHOD! Everything works splendidly until you actually deploy it into a live environment, and then you get a host of issues that made me want to tear my hair out.
What I actually ended up doing (which works perfectly in production) was to use Service Bus topics and create subscriptions to them for the listeners. This uses TCP connections and allows your communication to stay 100% internal, without any crazy transport or boundary problems.
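A minimal sketch of that topic/subscription pattern, using the WindowsAzure.ServiceBus client library of that era; the connection string, topic path, and subscription names below are placeholders:

```csharp
// Sketch: inter-role messaging over a Service Bus topic. Each listener
// (e.g., each web role) gets its own subscription, so every listener
// receives its own copy of each message.
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public static class InterRoleMessaging
{
    const string ConnectionString = "Endpoint=sb://...;SharedAccessKeyName=...";
    const string TopicPath = "role-events";

    public static void EnsureTopology(string subscriptionName)
    {
        var ns = NamespaceManager.CreateFromConnectionString(ConnectionString);
        if (!ns.TopicExists(TopicPath))
            ns.CreateTopic(TopicPath);
        if (!ns.SubscriptionExists(TopicPath, subscriptionName))
            ns.CreateSubscription(TopicPath, subscriptionName);
    }

    // Sender side (e.g., worker B announcing a processed item).
    public static void Send(string body)
    {
        var client = TopicClient.CreateFromConnectionString(ConnectionString, TopicPath);
        client.Send(new BrokeredMessage(body));
    }

    // Listener side (e.g., a web role pumping messages as they arrive).
    public static void Listen(string subscriptionName)
    {
        var client = SubscriptionClient.CreateFromConnectionString(
            ConnectionString, TopicPath, subscriptionName);
        client.OnMessage(msg => Console.WriteLine(msg.GetBody<string>()));
    }
}
```

Giving each listener its own subscription is what makes this a broadcast, and nothing has to be exposed on your roles' public endpoints.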
EDIT #2:
Since this post, Event Hubs were released; we switched over and never looked back.
Peter, realistically, to get this approach to work you would need to switch to Web Roles or IIS hosted on an IaaS VM.
Currently, Websites don't support Azure Virtual Networks, which are the only way to enable private network inter-connectivity between instances on Azure.
You can add VMs, Web Roles, and Worker Roles to a Virtual Network, which should provide you with the access you're looking for without needing to expose everything via public endpoints.

Azure cloud service and web sites communication lock down

I have an Azure cloud service (a server) where I host a Redis database. I also have a web site hosted in Azure Web Sites. I want the web site to be able to talk to the Redis DB on port 6379. I know I can configure a public endpoint for that port on my server, but that would open it to the whole Internet. I want it open only to Azure Web Sites (or, even better, only to my web site). How can I do this?
Windows Azure Web Sites is in an isolation bubble separate from your Cloud Services, and there's no way to bridge that gap. Ideally you'd do this by connecting the web site machine to other Azure services via a Virtual Network, but this FAQ confirms you can't do that right now:
Can I use Windows Azure websites with Virtual Network?
No. We do not support websites with virtual networks.
Opening Redis up over the internet shouldn't even be considered: it doesn't have the kind of out-of-the-box security you'd want before exposing its port publicly, as it is meant to be co-located with your application. Never mind the added network overhead, which will eat into the performance you expect to gain by leveraging something like Redis in the first place.
I believe your best bet, given your current configuration, is to add a Web Role that's part of the same Azure Cloud Service and run your web-based application out of that, so it can communicate with the worker role. It only requires a little bit of configuration to get this going (i.e. adding an InternalEndpoint to the Redis Worker Role). While I realize Web Roles don't offer as frictionless a development model as Web Sites, you have to choose the right tool for the job.
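Once the internal endpoint is declared, the Web Role can discover the Redis worker's address at runtime. A minimal sketch, assuming a worker role named "RedisWorkerRole" with an internal endpoint named "Redis" (both names are placeholders):

```csharp
// Sketch: discover the Redis worker role's internal endpoint at runtime and
// hand it to whatever Redis client you use.
using System.Linq;
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class RedisEndpointLocator
{
    public static IPEndPoint GetRedisEndpoint()
    {
        // Internal endpoints are only reachable from within the same cloud
        // service deployment, so Redis stays off the public internet.
        RoleInstance redis = RoleEnvironment.Roles["RedisWorkerRole"]
            .Instances.First();
        return redis.InstanceEndpoints["Redis"].IPEndpoint;
    }
}
```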
Another option, if you want to set up your Redis on a VM instead of tying it to the Cloud Service directly, is to set up a Virtual Network, put the Redis VM on the virtual network, and then configure the Cloud Service so that it's part of the same affinity group, adding the NetworkConfiguration/VirtualNetworkSite configuration section to the Cloud Service's .cscfg.
Which approach makes more sense depends on how you leverage your Redis instance, but the main benefit of the latter approach is that the Redis instance is not recreated each time you deploy your Cloud Service, so any data in it stays available between deployments. Another benefit is that it enables you to build and leverage a Redis cluster across multiple Cloud Services.

Azure architecture and role communication

I have an application that includes multiple hosted services in Azure. Two are web roles, one is a worker role. The problem is, two of the roles now need to communicate. One is a web role that serves as the admin interface; the other is a worker role. The admin interface needs to issue commands, like pausing any running jobs, reporting status, etc. The second web role is just a site, unrelated to the first two.
(Just to preface, I want to make sure my use of Azure terms are correct):
Hosted Service: An Azure 'application'. Multiple roles with two deployments, production and staging
Deployment: A specific instance of all the roles, either in production or staging, with a single external endpoint (*.cloudapp.net)
Role: A single 'job', either a web role or a worker role.
Instance: The VMs that service a role
Also to verify: Is it possible to add roles to an existing hosted service? That is, if I deploy 2 roles from one solution, can I add a third role in another deployment from a different solution?
Because each role is in its own hosted service, it presents some challenges. Here's my understanding of the choices in how they can communicate:
Service Bus: This seems to be the best from an architecture standpoint. Each hosted service can connect a WCF service to the service bus, and admin can issue commands to the worker role. The downside is that this is pretty cost-prohibitive.
Internal endpoints: This seems best if cost is factored in. The downside is you have to deploy all the roles at once, and the web roles cannot have unique addresses. The only way to access both web roles externally is with port forwarding. As far as I'm aware, it's not possible to deploy 2 roles from one solution and 1 role from another?
External WCF service: Each component can be in individual projects and individual hosted services. The downside is there's now an externally visible service for administration.
Queue/Table storage: Admin can write commands to an Azure queue, and the worker roles can write their responses to table storage. This seems fine for generating reports, but not great for issuing synchronous commands.
Should multiple roles that all service "the application" all go into the same Azure hosted service? If from a logical standpoint it makes the most sense, then I'd be happy to go with #2 and just deal with port forwarding.
First off, your definitions look pretty good and I think you understand the problem pretty well.
Also, within each deployment, each external endpoint can only be assigned to one role. So if you want to run two sites on port 80, they need to be in the same role. This is just like setting up two sites on IIS with the same port (which is exactly what you're working with): the sites are distinguished using host headers. If you don't want to go to that effort, or if you want to deploy the sites separately, then you'll want to put your stand-alone site in its own service/cloud project.
For the communication part, the one option that you've missed is Service Bus queues. Microsoft has released a library using Service Bus queues that is specifically designed for inter-role communication.
Other than that, the extra comments on your points:
You're right that internal endpoints are the cheapest way to go, but you will be rolling it all yourself. Of course, you could set up WCF services to listen on these internal endpoints.
An external WCF service might work OK, but if you have more than one instance of your role, all WCF calls will go through the load balancer and each message will only be sent to one of the instances. You would need to make multiple calls to make sure the message was received by all instances, and even then you couldn't be sure it had worked without some other feedback method.
Storage queues suffer from a similar issue: if you have two instances and want them both to receive the same message, there's no way to guarantee that this will happen (one common workaround is sketched below).
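That fan-out limitation is usually worked around with a Service Bus topic and one subscription per role instance, so every instance receives its own copy of each command. This is a sketch of that idea rather than anything from the answer itself; the connection string and topic path are placeholders:

```csharp
// Sketch: broadcast admin commands to every instance of a role. A queue
// delivers each message to only one consumer; a topic with one subscription
// per instance delivers a copy to each instance.
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class CommandBroadcast
{
    const string ConnectionString = "Endpoint=sb://...;SharedAccessKeyName=...";
    const string TopicPath = "admin-commands";

    // Called from each worker role instance's OnStart.
    public static void Listen()
    {
        var ns = NamespaceManager.CreateFromConnectionString(ConnectionString);
        if (!ns.TopicExists(TopicPath))
            ns.CreateTopic(TopicPath);

        // Derive a per-instance subscription name (subscription names have
        // a length limit, so the raw instance id is hashed here).
        string name = "inst-" +
            RoleEnvironment.CurrentRoleInstance.Id.GetHashCode().ToString("x8");
        if (!ns.SubscriptionExists(TopicPath, name))
            ns.CreateSubscription(TopicPath, name);

        var client = SubscriptionClient.CreateFromConnectionString(
            ConnectionString, TopicPath, name);
        client.OnMessage(msg => Console.WriteLine(
            "command: " + msg.GetBody<string>()));
    }

    // Called from the admin web role to reach all instances at once.
    public static void Send(string command)
    {
        var client = TopicClient.CreateFromConnectionString(
            ConnectionString, TopicPath);
        client.Send(new BrokeredMessage(command));
    }
}
```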

Communication between 2 web apps running in an Azure web role instance

I have 2 web applications running in a web role, and I only run a single instance in the Azure cloud. I would like to send and receive notifications between these 2 applications, and no outsider should have access to them.
That means web services in both of them are out, unless there is a way to block outsiders from accessing a web service so that only a request from the same system would succeed (maybe a VIP and request-IP comparison would do; anything beyond that?).
File system watchers: create a LocalStorage resource, use it in both web apps, and have webappA and webappB watch for each other's files.
Use Azure Storage queues.
MSMQ: not interested, as it's not supported in Azure.
Could you please list other options available to me in an Azure web role? Thanks in advance.
Note: Please avoid suggesting internal endpoints, as I am running only a single instance with 2 web applications running in it.
You can set up "private" web services to listen on Internal endpoints. These are not accessible via the outside world. You could have a WebAppOne endpoint and WebAppTwo endpoint, both marked Internal. You then just query the role environment to discover the assigned port for each, and fire up your ServiceHost.
Or... you could use a queue to pass information (see the sketch after this list), as long as:
You're OK with it being asynchronous
You're OK with messages being processed at least once
You're OK with messages possibly being processed out of order
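For illustration, a minimal sketch of the queue option using the Azure Storage client library; the queue name and connection string are placeholders:

```csharp
// Sketch: webappA drops a notification on a storage queue; webappB polls it.
// Delivery is asynchronous, at-least-once, and not strictly ordered.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class NotificationQueue
{
    const string ConnectionString = "DefaultEndpointsProtocol=https;..."; // placeholder

    static CloudQueue GetQueue()
    {
        var account = CloudStorageAccount.Parse(ConnectionString);
        var queue = account.CreateCloudQueueClient()
            .GetQueueReference("webapp-notifications");
        queue.CreateIfNotExists();
        return queue;
    }

    public static void Send(string body) =>
        GetQueue().AddMessage(new CloudQueueMessage(body));

    public static string TryReceive()
    {
        var queue = GetQueue();
        CloudQueueMessage msg = queue.GetMessage();
        if (msg == null) return null;   // nothing pending
        queue.DeleteMessage(msg);       // ack so it isn't redelivered
        return msg.AsString;
    }
}
```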
Or... your apps could write information to an Azure table. No need to expose the table to the outside world.
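And a corresponding sketch for the table option; the table name and entity shape are assumptions for illustration:

```csharp
// Sketch: one web app writes a status row to an Azure table; the other reads
// it back. Only the shared storage account credentials are needed; nothing
// is exposed publicly.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class AppStatusEntity : TableEntity
{
    public AppStatusEntity() { }
    public AppStatusEntity(string app, string key) : base(app, key) { }
    public string Payload { get; set; }
}

public static class StatusTable
{
    const string ConnectionString = "DefaultEndpointsProtocol=https;..."; // placeholder

    static CloudTable GetTable()
    {
        var table = CloudStorageAccount.Parse(ConnectionString)
            .CreateCloudTableClient()
            .GetTableReference("appstatus");
        table.CreateIfNotExists();
        return table;
    }

    public static void Write(string app, string key, string payload) =>
        GetTable().Execute(TableOperation.InsertOrReplace(
            new AppStatusEntity(app, key) { Payload = payload }));

    public static string Read(string app, string key)
    {
        var result = GetTable().Execute(
            TableOperation.Retrieve<AppStatusEntity>(app, key));
        return (result.Result as AppStatusEntity)?.Payload;
    }
}
```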
