Securing Socket.IO access to GCP VMs - Node.js

I'm building a backend for a multiplayer game. It's Node.js based and deployed to Google Compute Engine VMs in a managed instance group.
This backend manages many instances of a game, each game instance hosts several players, and since each game instance is stateful, it must be managed on the SAME VM.
The connection flow is as follows:
A client opens the game page with an ID of the game instance
The client requests the API for an available game-server IP
The client then connects to the game-server DIRECTLY (not via any load balancer)
The problem is that the connection to the game servers must be secured, and since the connection doesn't go through a load balancer, I don't have a built-in way to secure it.
How can I solve this problem?
Thanks

Related

Node.js Server Sent Events with Load Balancer

I am working on a node.js based Web application that needs to be able to push data down to the browser. The Web app will be sitting behind a load balancer.
Assuming the Web app has a POST REST API as:
/update_client
and assuming a third-party application calls this API to push some data to the Web app, which then pushes the data down to the browser.
Now assume I have two servers behind the load balancer, with the Web app running on both. A browser client connects to server 1 to listen for events. Then another application hits the /update_client API on server 2. Since the two activities happen on two different servers, how can server 2 notify server 1 to send the data to its connected clients?
And what if I am using auto scaling, with dynamic number of servers behind the load balancer?
You need to have some kind of shared resource behind the servers so they all know about updates. I show how to use Redis Pub/Sub for this in a blog post I wrote recently.
Server Sent Events with Node JS

RabbitMQ security in mobile app

I am using a RabbitMQ broker in one of the mobile apps we are developing, and I am a bit puzzled about the security aspects. We are using cloud-hosted RabbitMQ, the hosting platform has given us a username and password (which have since been changed), and we are using an SSL connection, so I am not so worried about MITM attacks or eavesdropping.
My concern is that anybody who knows the host and port can connect to RabbitMQ. Since we have a mobile app, we store the RabbitMQ username and password on the device (albeit encrypted), so I guess anybody who gets physical access to the device and somehow decrypts the credentials can log in to RabbitMQ, and once logged in you can do pretty much anything, like deleting queues.
How are message queues like RabbitMQ used in mobile environments? Is there a better / more secure way of using RabbitMQ?
In my experience, it is best to not have your mobile app connect to rabbitmq directly. Use a web server in between the app and RabbitMQ. Have your mobile app connect to your web server via HTTP based API calls. The web server will connect to RabbitMQ, and you won't have to worry about the mobile app having the connection information in it.
There are several advantages of this, on top of the security problem:
better management of RabbitMQ connections
easier to scale number of mobile users
ability to add more logic and processing to the back-end, as needed, without changing the mobile app
Creating a connection to RabbitMQ is an expensive operation: it requires a TCP/IP connection, and once that connection is open it stays open until you close it. If you open a connection from your mobile app and leave it open, you are reducing the number of available connections to RabbitMQ. If you open and close the connection quickly, you are incurring a lot of extra cost in constantly creating and closing connections.
With a web server in the middle, you can open a single connection and have it serve multiple mobile devices. The web server will handle the HTTP requests and use the one RabbitMQ connection to push messages to the broker.
Since an HTTP request is a short-lived connection, you'll be able to handle more users in a short period of time than you would with direct RabbitMQ connections.
This ultimately leads to better scalability, as you can add another web server to handle thousands more mobile app instances while only adding one new RabbitMQ connection.
This also lets you add middle-tier logic inside the web server. You can add additional layers of processing as needed without changing the mobile app: change the web server code and redeploy as needed.
If you must do this without a server in the middle, you likely won't be able to get around the security issue you're having: the mobile device will contain the information necessary to make the connection.

Connection pooling for REST calls made from Bluemix Node.js apps to data-center services via DataPower

Hi, we have a UI component deployed to Bluemix on Node.js which makes REST service calls (JSON/XML) to services deployed in our data center. These calls go through the IBM DataPower gateway as a security proxy.
DataPower establishes an HTTPS mutual-authentication connection (using certs that are exchanged offline) with the caller.
Although this method is secure, it is time-consuming to set up, and if this connection is established for each service request it will slow the response for the end user.
To optimize response time we are looking for any solution which can pool connections between the Node.js app deployed on Bluemix and the DataPower security proxy. Does anyone have experience in this area?
In regards to "it is time-consuming to set up": in DataPower you can create a multi-protocol gateway (MPGW) in front of your services to act as a router. The MPGW will match service calls based on their URI and route them accordingly. In this scenario, you will only need to configure a single endpoint in the Bluemix Cloud Integration service in order to work with all your services. One downside to this approach is that it will be harder to control access to specific on-premises services, because they will all be exposed to your Bluemix app as a single service.
In regards to optimizing response times, where are you seeing the bottleneck?
If the establishment of the tcp connections is causing too much overhead, you should be able to configure your Node.js app to use or re-use persistent connections via keepalive settings or you can look into setting up a connection pool that manages that for you (e.g. https://www.npmjs.com/package/generic-pool seems a popular choice).
On the DataPower side, make sure the front/back persistent timeout is set according to your requirements: http://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.2.0/com.ibm.dp.doc/mpgw_availableproperties_serviceview.html?lang=en
Other timeout values in DataPower can be found at http://www-01.ibm.com/support/docview.wss?uid=swg21469404

Connectivity between NodeJS applications behind load balancer

I'm currently working on a Node.js application and I've got a small issue.
My Node.js application consists of 2 parts:
An internal API called by our other applications. Let's call this part API.
A user-facing web server (Express + Socket.io). Let's call this part Web.
We're receiving a lot of calls to API from our other internal applications. Some of these calls generate notifications to web users (let's imagine it's an online chat).
So if we have a message for client #1000 and he's online (connected to the Web application through Socket.io), we emit the message through Socket.io to this client. Everything works fine.
But there is an issue.
We're going to introduce a load balancer in front of our Node.js application (it's one application, so both parts, API and Web, would be behind the load balancer). Now let's imagine that we have the load balancer and 2 servers running this application: server1 and server2.
Thus some API calls are sent to server1 and some to server2. So let's imagine we get an API call to server1 and this call should send a message to client #1000, but this client has an open connection to server2.
The question is: are there any best practices or common solutions for how these two servers should communicate? One possible solution could be to open socket connections between all servers running the application; if we need to send a message to a client, just broadcast it, so every server can check whether the client is connected at this moment and send the message to the right client.

Load balancing for custom client server app in the cloud

I'm designing a custom client server tcp/ip app. The networking requirements for the app are:
Be able to speak a custom application layer protocol through a secure TCP/IP channel (opened at a designated port)
The client-server connection/channel needs to remain persistent.
If multiple instances of the server-side app are running, be able to dispatch the client connection to a specific instance of the server-side app (based on a server-side unique ID).
One of the design goals is to make the app scale so load balancing is particularly important. I've been researching the load-balancing capabilities of EC2 and Windows Azure. I believe requirement 1 is supported by most offerings today. However I'm not so sure about requirement 2 and 3. In particular:
Do any of these services (EC2, Azure) allow the app to influence the load-balancing policy by specifying additional application-layer requirements? Azure, for example, uses round-robin job allocation for cloud services, but requirement 3 above clearly needs to be factored into the load-balancing decision, i.e. forward based on the unique ID, but use round-robin allocation if the unique ID is not found at any of the server-side instances.
Does the load balancer work with persistent connections, per requirement 2? My understanding from Azure is that you can specify a public and private port pair as an endpoint, so the load balancer monitors the public port and forwards the connection request to the private port of some running instance; basically you can do whatever you want with that connection thereafter. Is this the correct understanding?
Any help would be appreciated.
Windows Azure has input endpoints on a hosted service, which are public-facing ports. If you have one or more instances of a VM (Web or Worker role), the traffic will be distributed amongst the instances; you cannot choose which instance to route to (e.g. you must support a stateless app model).
If you wanted to enforce a sticky-session model, you'd need to run your own front-end load-balancer (in a Web / Worker role). For instance: You could use IIS + ARR (application request routing), or maybe nginx or other servers supporting this.
What I said above also applies to Windows Azure IaaS (Virtual Machines). In this case, you create load-balanced endpoints. But you also have the option of non-load-balanced endpoints: Maybe 3 servers, each with a unique port number. This bypasses any type of load balancing, but gives direct access to each Virtual Machine. You could also just run a single Virtual Machine running a server (again, nginx, IIS+ARR, etc.) which then routes traffic to one of several app-server Virtual Machines (accessed via direct communication between load-balancer Virtual Machine and app server Virtual Machine).
Note: The public-to-private-port mapping does not let you do any load-balancing. This is more of a convenience to you: Sometimes you'll run software that absolutely has to listen on a specific port, regardless of the port you want your clients to visit.
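The unique-ID dispatch from requirement 3 could be sketched as the routing core of the self-managed front-end balancer the answer describes (all names are illustrative): route to the instance that owns the ID when known, otherwise fall back to round-robin:

```javascript
// Routing core for a self-managed front-end load balancer.
function createRouter(instances) {
  const owners = new Map(); // uniqueId -> instance that owns it
  let next = 0;
  return {
    // Called when an instance reports that it owns a unique ID.
    claim: (uniqueId, instance) => owners.set(uniqueId, instance),
    // Requirement 3: forward by unique ID when known, else round-robin.
    pick(uniqueId) {
      if (owners.has(uniqueId)) return owners.get(uniqueId);
      const instance = instances[next % instances.length];
      next += 1;
      return instance;
    },
  };
}
```

The picked instance address would then be used to proxy the client's persistent TCP channel (requirement 2), e.g. by opening a socket to it with net.connect and piping the two sockets together.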
