Azure Relay - Hybrid connection reuse

Creating a new HybridConnectionStream object, as shown below, for every client request thread takes time (~3 seconds):
var client = new HybridConnectionClient(new Uri(String.Format("sb://{0}/{1}", relayConfiguration.Value.RelayNamespace, relayConfiguration.Value.ConnectionName)), tokenProvider);
// the following call takes ~3 seconds
HybridConnectionStream relayConnection = await client.CreateConnectionAsync();
Is there any way to reuse/cache an already established HybridConnectionStream to serve all future requests from the same client, or to create a pool of HybridConnectionStream objects so future client requests are served faster?
Our implementation is as follows: a user action on the mobile app requires data from an on-premises database, so the action hits an Azure-hosted Service Fabric API, which forwards the request to a specific Azure Relay hybrid connection; our custom listener service, hosted on premises, picks up the request and forwards it to an on-premises web service to fetch the data. Here the Service Fabric app creates a NEW HybridConnection/HybridConnectionStream to reach the Azure Relay hybrid connection for each and every incoming user request, which is time consuming. We want to avoid creating a new hybrid connection every time, and are instead looking for options to cache and reuse the costly, already-created hybrid connection, or to build some kind of hybrid connection pool. Please advise if this is possible, or suggest something else that is even better. Thanks.

We use hybrid connections between one of our App Services and one of several VMs. An Azure hybrid connection is kind of like a VPN. (You have to tilt your head and squint just right.)
Within App Service, Hybrid Connections can be used to access application resources in other networks. Source: Azure App Service Hybrid Connections
So I think you should look at a hybrid connection as something persistent. It's part of your network infrastructure, not something you need to create for each thread. I think the amount of time it takes to create a hybrid connection is in line with that kind of thinking.

Just keep the HybridConnectionStream open and reuse it. Do not close it while you still need it. You can send multiple messages over a single stream; it is read/write, so that should not be a problem.
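As a minimal sketch of that reuse, assuming the Microsoft.Azure.Relay client shown in the question (the RelayStreamCache name and the retry-on-failure policy are my own illustration, not from the post): a Lazy<Task<HybridConnectionStream>> makes the expensive CreateConnectionAsync call run once per process, while concurrent callers await the same stream.
// Hedged sketch: cache one HybridConnectionStream per process and reset the
// cache only when a cached connection attempt has failed.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Relay;

public class RelayStreamCache // hypothetical helper, not from the original post
{
    private readonly HybridConnectionClient _client;
    private Lazy<Task<HybridConnectionStream>> _stream;

    public RelayStreamCache(Uri relayUri, TokenProvider tokenProvider)
    {
        _client = new HybridConnectionClient(relayUri, tokenProvider);
        _stream = new Lazy<Task<HybridConnectionStream>>(() => _client.CreateConnectionAsync());
    }

    public async Task<HybridConnectionStream> GetStreamAsync()
    {
        try
        {
            return await _stream.Value;
        }
        catch
        {
            // The cached attempt failed; reset so the next caller retries.
            _stream = new Lazy<Task<HybridConnectionStream>>(() => _client.CreateConnectionAsync());
            return await _stream.Value;
        }
    }
}
Note that a single stream is one byte pipe: if several user requests can be in flight at once, you would still need your own framing or multiplexing on top of it, which is where a small pool of streams may be simpler.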

Related

SignalR Actors or Stateless Services

I'm looking into migrating an application to Service Fabric running on Azure. It's a realtime chat-style application using SignalR. I'd like to have an instance of a service running, self-hosting a SignalR hub (via OWIN) for each "affinity group" in which users are communicating. This is so I can avoid having to scale out SignalR with a backplane. I'd like to be able to spin these services up and down as groups of users enter and leave my application. I would expect I could host tens of these services per VM with a typical load of hundreds of users per group.
My idea is that I'd have a service locator that clients connect to initially to discover which service (port) is hosting their group. I would also have a service that spun up an instance of the chat service when the first request for that group came in.
How would I architect this in Service Fabric on Azure so that a) each of the services/actors is accessible with a SignalR client from the internet, and b) I'm only running as many services as necessary to serve m active groups out of n total groups? The demand for this app is very transient and spiky, so I'm hoping to take advantage of the fact that services are simply processes that can be provisioned in a matter of seconds, vs. my current scenario where I have to spin up entire cloud services and wait tens of minutes to handle spikes (at which point it's too late).
You would do a few things:
Have a "service manager service" that intercepts initial join requests and creates new Service Fabric services on the fly if they don't already exist; if they do already exist, it resolves the service's current location and returns that address to the client (a sketch of this resolve-or-create step follows below).
Alternatively, it could just pass back the internal service name (if you're OK exposing that information) and the client could do the resolution and then the connection. To some degree this will depend on how much information you want to expose to the client, and whether you can or want to modify it to "know about" Service Fabric.
The client would then connect to the actual backing service directly.
You would have to come up with some mechanism for the actual chat services to know that nobody is left and to either delete themselves or go back through the manager.
You would probably be best off modeling the chat service as a Reliable Service rather than an actor, as the Reliable Services stack allows more flexibility around communication protocols/stacks.
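A minimal sketch of that resolve-or-create step, assuming the FabricClient and ServicePartitionResolver APIs; the application name fabric:/ChatApp, the service type ChatServiceType, and the per-group naming scheme are illustrative, not from the answer:
// Hedged sketch of the manager's "resolve or create" step.
using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Client;

public static class ChatServiceManager
{
    private static readonly FabricClient Fabric = new FabricClient();

    public static async Task<string> ResolveOrCreateAsync(string groupId)
    {
        var serviceName = new Uri($"fabric:/ChatApp/Group-{groupId}");
        try
        {
            // Create a named stateless instance for this group; throws if it already exists.
            await Fabric.ServiceManagementClient.CreateServiceAsync(new StatelessServiceDescription
            {
                ApplicationName = new Uri("fabric:/ChatApp"),
                ServiceName = serviceName,
                ServiceTypeName = "ChatServiceType",
                InstanceCount = 1,
                PartitionSchemeDescription = new SingletonPartitionSchemeDescription()
            });
        }
        catch (FabricElementAlreadyExistsException)
        {
            // Service already running for this group; fall through to resolution.
        }

        // Resolve the current endpoint address to hand back to the client.
        var resolver = ServicePartitionResolver.GetDefault();
        ResolvedServicePartition partition =
            await resolver.ResolveAsync(serviceName, ServicePartitionKey.Singleton, CancellationToken.None);
        return partition.GetEndpoint().Address;
    }
}
Returning the resolved address keeps the client unaware of Service Fabric, matching the first option above.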

RabbitMQ security in mobile app

I am using the RabbitMQ broker in one of the mobile apps that we are developing, and I am a bit puzzled about the security aspects. We are using cloud-hosted RabbitMQ, and the hosting platform has given us a username and password (which have since been changed); we are using an SSL connection, so I am not so worried about man-in-the-middle attacks or eavesdropping.
My concern is that anybody who knows the host and port can make a connection to RabbitMQ. Since we have a mobile app, we are storing the RabbitMQ username and password on the device (although encrypted), so I guess anybody who gets physical access to the device and somehow decrypts the username and password can log in to RabbitMQ, and once you are logged in you can do pretty much anything on RabbitMQ, like deleting queues.
How are message queues like RabbitMQ used in a mobile environment? Is there a better / more secure way of using RabbitMQ?
In my experience, it is best not to have your mobile app connect to RabbitMQ directly. Use a web server between the app and RabbitMQ: have your mobile app make HTTP-based API calls to your web server, and let the web server connect to RabbitMQ. That way you won't have to worry about the mobile app carrying the connection information.
There are several advantages to this, on top of addressing the security problem:
better management of RabbitMQ connections
easier to scale number of mobile users
ability to add more logic and processing to the back-end, as needed, without changing the mobile app
Creating a connection to RabbitMQ is an expensive operation: it requires a TCP/IP connection, and once that connection is open it stays open until you close it. If you open a connection from your mobile app and leave it open, you reduce the number of connections available to RabbitMQ; if you open and close the connection quickly, you incur a lot of extra cost in constantly creating and closing connections.
With a web server in the middle, you can open a single connection and have it serve multiple mobile devices (see the sketch after this answer). The web server handles the HTTP requests and uses the one RabbitMQ connection to push messages through.
Since an HTTP web request is a short-lived connection, you'll be able to handle more users in a short period of time than you would with direct RabbitMQ connections.
This ultimately leads to better scalability, as you can add another web server to handle thousands more mobile app instances while adding only one new RabbitMQ connection.
This also lets you add middle-tier logic inside the web server: you can add additional layers of processing as needed without changing the mobile app; just change the web server code and redeploy.
If you must do this without a server in the middle, you likely won't be able to get around the security issue you're having: the mobile device will contain the information necessary to make the connection.
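A minimal sketch of that shared-connection pattern in C# with the RabbitMQ.Client library (the host name, credentials, queue name, and the SharedBroker helper are placeholders of mine): the web server opens one connection at startup, and each HTTP handler opens a cheap channel on it.
// Hedged sketch: one shared RabbitMQ connection for the whole web server.
using System.Text;
using RabbitMQ.Client;

public static class SharedBroker
{
    private static readonly IConnection Connection =
        new ConnectionFactory
        {
            HostName = "rabbit.example.com",
            UserName = "webserver",
            Password = "secret"
        }.CreateConnection();

    // Called from an HTTP handler; channels are cheap, connections are not.
    public static void Publish(string message)
    {
        using (IModel channel = Connection.CreateModel())
        {
            channel.QueueDeclare("mobile-events", durable: true, exclusive: false, autoDelete: false);
            channel.BasicPublish(exchange: "",
                                 routingKey: "mobile-events",
                                 basicProperties: null,
                                 body: Encoding.UTF8.GetBytes(message));
        }
    }
}
Channels (IModel) are not thread-safe, which is why the sketch creates one per request while the underlying TCP connection is shared.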

Connection pooling for REST calls made from Bluemix Node.js apps into data center services via DataPower

Hi, we have a UI component deployed to Bluemix on Node.js which makes REST service calls (JSON/XML) to services deployed in the data center. These calls go through the IBM DataPower gateway as a security proxy.
DataPower establishes an HTTPS mutual-authentication connection (using certs that are exchanged offline) to the caller.
Although this method is secure, it is time consuming to set up, and if the connection is set up for each service request it will mean a slow response for the end user.
To optimize response time we are looking for any solution that can pool connections between the Node.js app deployed on Bluemix and the DataPower security proxy. Does anyone have experience in this area?
In regards to "it is time-consuming to set up": in DataPower you can create a multi-protocol gateway (MPGW) in front of your services to act as a router. The MPGW will match service calls based on their URI and route them accordingly. In this scenario, you will only need to configure a single endpoint in the Bluemix Cloud Integration service in order to work with all your services. One downside to this approach is that it will be harder to control access to specific on-premises services, because they will all be exposed to your Bluemix app as a single service.
In regards to optimizing response times, where are you seeing the bottleneck?
If the establishment of the TCP connections is causing too much overhead, you should be able to configure your Node.js app to use or reuse persistent connections via keepalive settings, or you can look into setting up a connection pool that manages that for you (e.g. https://www.npmjs.com/package/generic-pool seems a popular choice).
On the DataPower side, make sure the front/back persistent timeout is set according to your requirements: http://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.2.0/com.ibm.dp.doc/mpgw_availableproperties_serviceview.html?lang=en
Other timeout values in DataPower can be found at http://www-01.ibm.com/support/docview.wss?uid=swg21469404

Disconnection issues with Azure Service Bus relay

We are running some long-running test apps with Azure Service Bus relay over HTTP, hosted in a Windows service, and most of the time these run fine for 2-3 days. However, every so often an internal network glitch may occur (e.g. a firewall reboot) that kills the internet connection.
At this point, the relay is dropped in Azure and our web app can no longer communicate with the on-premises service.
I would have thought that the Azure relay client was fault-tolerant, in that if it realises it has lost its connection with Azure it will re-establish the connection, and if it can't, keep trying until it can... but it appears that this is not the case. This seems pretty fundamental...?
Only once have I ever seen a System.ServiceModel.CommunicationException where the service can't communicate on the internet, and that was when the client was starting up and trying to establish the connection in the first place.
Is there any advice or feedback on handling transient disconnections through the relay service? (As it's a cloud-to-on-premises direction, the client can't, AFAIK, ping the server.)
If you are still experiencing issues, you may want to contact Azure support to understand why it is disconnecting. The Relay client should reconnect if something happens to the existing connection.
You may want to add a ConnectionStatusBehavior to your ChannelFactory so it notifies you when the status of the connection changes. It will contain the error that caused the status change.
// Microsoft.ServiceBus: raises events when the relay connection status changes.
var connectionStatusBehavior = new ConnectionStatusBehavior();
connectionStatusBehavior.Online += ConnectionStatusOnlineMethod;   // connection (re)established
connectionStatusBehavior.Offline += ConnectionStatusOfflineMethod; // connection lost
channelFactory.Endpoint.Behaviors.Add(connectionStatusBehavior);
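For completeness, a sketch of the two handlers wired up above. The events are plain EventHandler callbacks; reading the error off the behavior's LastError property is my assumption based on its IConnectionStatus interface:
// Hedged sketch of the handlers referenced above (inside the class that owns the channel factory).
using System;
using Microsoft.ServiceBus;

void ConnectionStatusOnlineMethod(object sender, EventArgs e)
{
    // The relay connection is (re)established.
}

void ConnectionStatusOfflineMethod(object sender, EventArgs e)
{
    // Assumption: the sender is the behavior, which exposes IConnectionStatus.LastError.
    var status = (IConnectionStatus)sender;
    Console.WriteLine(status.LastError);
}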
This issue was solved by Microsoft in version 2.6.5 of the Microsoft Azure Service Bus DLL. After one month of testing, it seems to work.

Azure SignalR and backplanes for inter-role communication

I am currently using SignalR on Azure Websites with a single instance to push data to clients. No problems.
We're splitting our project into separate web/worker and wcf roles so we can scale them independently.
The site will work like this.
Scenario A
A user submits some data to the web role and it gets put in a Service Bus queue ready for worker A; the web role sends a message to worker A that a new item has been added, in case it's idle (to save polling). When worker A has processed it, it sends a message back to the web roles, which push the result out to particular clients.
Scenario B
The WCF role receives data and puts it in a different Service Bus queue ready for worker B; the WCF role sends a message to worker B that a new item has been added, in case it's idle. When worker B has processed it, it sends a message to the web roles, which push it out to particular clients.
I am going to enable the SignalR Service Bus backplane for the web roles to reach users. What I'm not sure about is how to get my roles communicating with each other.
I'll need:
web role => worker A
worker A => web role
wcf role => worker B
worker B => web role
Am I creating hubs on the web role, worker A, and worker B, all with Service Bus topics, and then connecting somehow with the SignalR .NET clients? How do I make sure a message goes to all instances of the web role without exposing it publicly?
For some reason it seems simple for hundreds of clients to connect via JavaScript to my web role hub, but when I try to connect some internal ones I can't quite figure it out.
If anyone's interested... what I ended up doing is this:
I created hubs on both the web and WCF roles. The web role has a connection that allows JavaScript proxies at /signalr, and the web and WCF roles have one that doesn't at /signalr-internal.
I used the Azure Service Bus as a backplane and let it handle both the web and WCF hubs automatically, with no extra tinkering.
In the SignalR authentication I probed to see where the connection was coming from (i.e. an internal endpoint or the external SSL endpoint) and denied/allowed access to particular hubs based on this. This allowed me to use the .NET SignalR clients on my workers, which automatically connect/reconnect, etc.
This ended up working nicely with no issues as of yet, and it was simple to implement. I'll update if I run into any problems.
EDIT #1:
DO NOT USE THIS METHOD! Everything works splendidly until you actually deploy it into a live environment, and then you get a host of issues that made me want to tear my hair out.
What I actually ended up doing (which works perfectly in live) was to use Service Bus topics and create subscriptions to them for the listeners. This creates TCP connections and allows your communication to stay 100% internal without any crazy transport or boundary problems.
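A minimal sketch of that topic/subscription wiring with the classic Microsoft.ServiceBus.Messaging SDK (the topic name worker-a-results, the subscription naming, and the connection string are placeholders of mine):
// Hedged sketch: internal role-to-role messaging over a Service Bus topic.
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public static class InternalMessaging
{
    const string ConnectionString = "Endpoint=sb://...";  // from the portal, elided here

    public static void EnsureTopology(string subscriptionName)
    {
        var ns = NamespaceManager.CreateFromConnectionString(ConnectionString);
        if (!ns.TopicExists("worker-a-results"))
            ns.CreateTopic("worker-a-results");
        // Each web role instance gets its own subscription so every instance sees the message.
        if (!ns.SubscriptionExists("worker-a-results", subscriptionName))
            ns.CreateSubscription("worker-a-results", subscriptionName);
    }

    // Worker A publishes when it finishes processing an item.
    public static void Publish(string payload)
    {
        var topicClient = TopicClient.CreateFromConnectionString(ConnectionString, "worker-a-results");
        topicClient.Send(new BrokeredMessage(payload));
    }

    // A web role instance listens on its subscription and pushes out via its SignalR hub.
    public static void Listen(string subscriptionName)
    {
        var subscriptionClient = SubscriptionClient.CreateFromConnectionString(
            ConnectionString, "worker-a-results", subscriptionName);
        subscriptionClient.OnMessage(message =>
        {
            var body = message.GetBody<string>();
            // forward body to connected clients through the hub
        });
    }
}
Because a subscription is itself a queue shared by its receivers, giving each web role instance its own subscription name is what makes every instance receive the message.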
EDIT #2:
Since this post, Event Hubs were released; we switched over and never looked back. See the last comment.
Peter, realistically, to get this approach to work you would need to switch to Web Roles or IIS hosted on an IaaS VM.
Currently, Websites don't support Azure Virtual Networks, which are the only way to enable private network inter-connectivity between instances on Azure.
You can add VMs, Web and Worker Roles to a Virtual Network, which should provide you with the access you're looking for without needing to expose everything via public endpoints.
