RabbitMQ security in a mobile app

I am using the RabbitMQ broker in one of the mobile apps we are developing, and I am a bit puzzled about the security aspects. We are using cloud-hosted RabbitMQ, the hosting platform has given us a username and password (which have since been changed), and we are using an SSL connection, so I am not too worried about man-in-the-middle attacks or eavesdropping.
My concern is that anybody who knows the host and port can make a connection to RabbitMQ. Since we have a mobile app, we are storing the RabbitMQ username and password on the device (although encrypted), so I assume that anybody who gets physical access to the device and somehow decrypts the username and password can log in to RabbitMQ, and once you are logged in you can do pretty much anything, like deleting queues.
How are message brokers like RabbitMQ used in a mobile environment? Is there a better / more secure way of using RabbitMQ?

In my experience, it is best not to have your mobile app connect to RabbitMQ directly. Use a web server between the app and RabbitMQ, and have your mobile app talk to that web server via HTTP-based API calls. The web server connects to RabbitMQ, and you won't have to worry about the mobile app carrying the connection information.
There are several advantages to this, on top of solving the security problem:
better management of RabbitMQ connections
easier to scale the number of mobile users
ability to add more logic and processing to the back-end, as needed, without changing the mobile app
Creating a connection to RabbitMQ is an expensive operation: it requires a TCP/IP connection, and once that connection is open it stays open until you close it. If you open a connection from your mobile app and leave it open, you are reducing the number of connections available to RabbitMQ. If you open and close the connection quickly, you are paying a lot of extra cost in constantly creating and closing connections.
With a web server in the middle, you can open a single connection and have it serve multiple mobile devices. The web server handles the HTTP requests and uses the one RabbitMQ connection to push messages to the broker (see the sketch below).
Since an HTTP request is a short-lived connection, you'll be able to handle more users in a given period of time than you would with direct RabbitMQ connections.
This ultimately leads to better scalability: you can add another web server to handle thousands more mobile app instances, while only adding one new RabbitMQ connection.
It also lets you add middle-tier logic inside the web server: you can add extra layers of processing as needed without changing the mobile app; just change the web server code and redeploy.
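A minimal sketch of this middle tier (assuming a C# back end with the RabbitMQ.Client package; the host, credentials, and queue name are placeholders): one long-lived connection is shared by every HTTP request, and a cheap channel is opened per publish.

using System.Text;
using RabbitMQ.Client;

public static class Broker
{
    // One long-lived connection, created once and shared by all HTTP requests.
    private static readonly IConnection Connection =
        new ConnectionFactory
        {
            HostName = "rabbit.internal",   // only the web server knows this
            UserName = "appserver",
            Password = "secret"
        }.CreateConnection();

    // Called from an HTTP API endpoint, e.g. POST /api/messages.
    public static void Publish(string queue, string message)
    {
        // Channels are cheap compared to connections; open one per operation.
        using (var channel = Connection.CreateModel())
        {
            channel.QueueDeclare(queue, durable: true, exclusive: false,
                                 autoDelete: false, arguments: null);
            channel.BasicPublish(exchange: "", routingKey: queue,
                                 basicProperties: null,
                                 body: Encoding.UTF8.GetBytes(message));
        }
    }
}

The mobile app only ever calls the HTTP endpoint, so no broker credentials ship with the app.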
If you must do this without a server in the middle, you likely won't be able to get around the security issue you're facing: the mobile device will have to contain the information necessary to make the connection.

Related

Azure Relay - Hybrid connection reuse

Creating a new HybridConnectionStream object as below, for every client request thread, takes time (~3 sec):
var client = new HybridConnectionClient(
    new Uri(String.Format("sb://{0}/{1}",
        relayConfiguration.Value.RelayNamespace,
        relayConfiguration.Value.ConnectionName)),
    tokenProvider);

// The following call takes ~3 seconds:
HybridConnectionStream relayConnection = await client.CreateConnectionAsync();
Is there any way to reuse or cache an already established HybridConnectionStream to serve all future requests from the same client, or to create a pool of HybridConnectionStreams to serve future client requests faster?
Our implementation is as follows: a user action on the mobile app requires data from an on-premises DB, so the action hits an Azure-hosted Service Fabric API, which forwards the request to a specific Azure Relay hybrid connection; our custom, on-premises listener service picks up the request and forwards it to an on-premises web service to fetch the data. Here the Service Fabric app creates a NEW HybridConnection/HybridConnectionStream to connect to the Azure Relay hybrid connection for each and every incoming user request, which is time-consuming. We want to avoid creating a new hybrid connection every time, and are instead looking for options to cache and reuse the already created, costly hybrid connection, or to build some kind of hybrid connection pool. Please advise if this is possible, or suggest something else that is even better. Thanks.
We use hybrid connections between one of our App Services and one of several VMs. An Azure hybrid connection is kind of like a VPN. (You have to tilt your head and squint just right.)
Within App Service, Hybrid Connections can be used to access application resources in other networks. Source: Azure App Service Hybrid Connections
So I think you should look at a hybrid connection as something persistent. It's part of your network infrastructure, not something you need to create for each thread. I think the amount of time it takes to create a hybrid connection is in line with that kind of thinking.
Just keep the HybridConnectionStream open and reuse it. Do not close it while you still need it. You can send multiple messages over a single stream; it is read/write, so that should not be a problem.
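One way to sketch that reuse (assuming the Microsoft.Azure.Relay package; the namespace, connection name, and key are placeholders, and the Lazy wrapper is just one way to guarantee a single shared stream):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Relay;

public static class RelayStreamCache
{
    // The client and stream are created once; every caller awaits the same task.
    private static readonly Lazy<Task<HybridConnectionStream>> SharedStream =
        new Lazy<Task<HybridConnectionStream>>(async () =>
        {
            var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
                "RootManageSharedAccessKey", "<key>");
            var client = new HybridConnectionClient(
                new Uri("sb://<namespace>.servicebus.windows.net/<connection>"),
                tokenProvider);
            return await client.CreateConnectionAsync(); // pay the ~3 s cost once
        });

    public static Task<HybridConnectionStream> GetAsync() => SharedStream.Value;
}

Callers then read from and write to the one stream; in production you would also add logic to recreate it if it faults.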

How to dynamically detect the web-server nodes in a load-balanced cluster?

I am implementing some real-time, collaborative features in an ASP.NET Web API-based application using WebSockets, and things are working fine when the app is deployed on a single web server.
When it is deployed on a farm behind a software (or hardware) load balancer, I would like to implement the pub-sub pattern so that any change happening on one of the web servers invokes the same logic to check and push those changes via WebSocket to the clients connected to any of the other web servers.
I understand that this can be done with an additional layer using RabbitMQ, Redis, or some such pub/sub or messaging component.
But is there a way to use DNS, TCP broadcast, or something else that is already available on Windows Server/IIS to publish the message to all the other sibling web servers in the cluster?
No.
You could use MSMQ instead of RabbitMQ, but that's not really going to help either: it's a queue, not pub/sub, so ignore that.
If it's SignalR you're using, there are plenty of docs on how to scale out, such as Introduction to Scaleout in SignalR.
Even if it's not SignalR, you can probably get some ideas from there.
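For reference, wiring a backplane into classic ASP.NET SignalR is a one-liner at startup (a sketch assuming the Microsoft.AspNet.SignalR.Redis package; the Redis server, port, and app name are placeholders):

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Every web server relays hub messages through Redis, so clients
        // connected to any node in the farm receive them.
        GlobalHost.DependencyResolver.UseRedis("redis-server", 6379, "", "MyApp");
        app.MapSignalR();
    }
}

The same pattern works with the SQL Server or Azure Service Bus backplanes described in the linked article.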

SignalR and High Availability -- Can Hub Clients recover if the Server-Goes-Away?

Given an Azure-hosted Web Role with a highly available WebAPI application (say 99.95%, as per https://azure.microsoft.com/en-us/documentation/articles/resiliency-disaster-recovery-high-availability-azure-applications/) that has ~1000 clients. The client is a ReactJS application. The WebAPI application will push notifications tailored to specific client groups (e.g. not all users are interested in all events, but more than one user may be interested in the same event).
From reading the SignalR documentation and playing with some samples, it feels like SignalR Groups will help us flow the right events to the right ReactJS application instances. Additionally, we would use one of the SignalR scale-out providers to make sure that we push to the clients from the right WebAPI server instance.
Question: How do applications recover when the "right WebAPI" instance becomes unavailable?
I can imagine a server-side active/passive scheme with some complexity around making sure there is at least one 'server' for each Hub client... but can a server connect (in an unsolicited way) to a Hub client? Would we have each Hub client connect (when registering for a Group) to more than one server?
How have applications solved this issue with SignalR?
I think I missed the obvious point: the scale-out providers and the backplane provide the very protection that clients need against servers that go away. Clients don't connect to a specific server, but to a load-balanced name.
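With a backplane in place, group routing is server-agnostic too. A sketch of a classic ASP.NET SignalR hub (the hub, method, and client callback names are placeholders): Clients.Group(...) reaches members connected to any server behind the load-balanced name.

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class EventsHub : Hub
{
    // A client registers interest in a topic; the backplane makes that
    // membership visible farm-wide, regardless of which server the
    // client happens to be connected to.
    public Task Subscribe(string topic) => Groups.Add(Context.ConnectionId, topic);

    // Any surviving server instance can push to the group; the backplane
    // fans the message out to the servers holding the group's connections.
    public Task Publish(string topic, string payload) => Clients.Group(topic).notify(payload);
}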

Connection pooling for REST calls made from Bluemix Node.js apps into data-center services via DataPower

Hi, we have a UI component deployed to Bluemix on Node.js which makes REST service calls (JSON/XML) to services deployed in the data center. These calls go through the IBM DataPower gateway, which acts as a security proxy.
DataPower establishes an HTTPS mutual-authentication connection (using certs that are exchanged offline) to the caller.
Although this method is secure, it is time-consuming to set up, and if this connection is set up for each service request it will make the response slow for the end user.
To optimize response time we are looking for any solution that can pool connections between the Node.js app deployed on Bluemix and the DataPower security proxy. Does anyone have experience in this area?
In regards to "it is time-consuming to set up": in DataPower you can create a multi-protocol gateway (MPGW) in front of your services to act as a router. The MPGW will match service calls based on their URI and route them accordingly. In this scenario, you will only need to configure a single endpoint in the Bluemix Cloud Integration service in order to work with all your services. One downside to this approach is that it will be harder to control access to specific on-premises services, because they will all be exposed to your Bluemix app as a single service.
In regards to optimizing response times, where are you seeing the bottleneck?
If the establishment of the TCP connections is causing too much overhead, you should be able to configure your Node.js app to use or re-use persistent connections via keepalive settings, or you can look into setting up a connection pool that manages that for you (e.g. https://www.npmjs.com/package/generic-pool seems a popular choice).
On the DataPower side, make sure the front/back persistent timeout is set according to your requirements: http://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.2.0/com.ibm.dp.doc/mpgw_availableproperties_serviceview.html?lang=en
Other timeout values in DataPower can be found at http://www-01.ibm.com/support/docview.wss?uid=swg21469404

Cloud combined with in-house database. How good is the security?

I'm currently doing research on cloud computing. I'm doing this for a company that works with highly private data, so I'm considering this scenario:
A hybrid cloud where the database stays in-house. The application itself could be in the cloud, because once a month it can get really busy, so there's definitely some scaling benefit to gain. I wonder how security for this would work exactly.
A customer would visit the website (which would be in the cloud) through a secure connection. This means that the data will be passed on to the cloud website encrypted. From there the data must eventually go to the database, but... how is that possible?
Because the in-house database server doesn't know how to handle the already-encrypted data (I think?). The in-house database server is not a party to the certificate that has been set up between the customer and the web application. Am I right, or am I overlooking something? I'm not an expert on certificates and encryption.
Also, another question: if this could work out, and the data would be encrypted all the time, is it safe to put this in a public cloud environment, or should a private cloud still be used?
Thanks a lot in advance!
Kind regards,
Rens
The secure connection between the application server and the database server should be fully transparent from the application's point of view. A VPN connection can link the cloud instance your application runs on with the on-site database, allowing an administrator to simply define a data source using the database server's IP address.
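Note that the customer's TLS session terminates at the web application: the app decrypts the request and then makes its own, separate connection to the database over the VPN. As a sketch (assuming a .NET application and SQL Server; the private IP and credentials are placeholders):

using System.Data.SqlClient;

// The 10.x address is only reachable across the VPN; the customer-facing
// HTTPS certificate plays no role in this hop.
var connection = new SqlConnection(
    "Server=10.0.0.12;Database=CustomerData;User Id=webapp;Password=<secret>;Encrypt=True");
connection.Open();

So the in-house database never has to understand the customer's encrypted traffic; it only sees the application's own connection.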
Of course this does create a security issue when the cloud instance gets compromised.
Both systems can live separately and communicate with each other through a message bus. The web site can publish events for the internal system (or any other party) to pick up, and the internal system can publish events as well that the web site can process.
This way the web site doesn't need access to the internal database, and the internal application doesn't have to share more information than is strictly necessary.
By publishing those events on a transactional message queue (such as MSMQ) you can make sure messages are never lost, and you can configure transport-level security and message-level security to ensure that others aren't tampering with the messages.
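A minimal sketch of such a transactional send with MSMQ (assuming the System.Messaging assembly; the queue path, label, and message body are placeholders):

using System.Messaging;

public static class EventBus
{
    private const string Path = @".\private$\website-events";

    public static void PublishEvent(string body)
    {
        // Create the queue as transactional, so sends either commit or roll back.
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path, transactional: true);

        using (var queue = new MessageQueue(Path))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send(body, "website-event", tx); // body, label, transaction
            tx.Commit(); // the message is durably enqueued exactly once
        }
    }
}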
The internal database will not get compromised once a secured connection is established tied to the static MAC address of the user accessing the database. The administrator can provide access to a MAC address through a one-time approval and add the user to his Windows console.
