I have a WebSocket server which handles connections to devices (from third parties, so I don't control their implementation).
My system was working fine with .NET Framework on an App Service until I discovered that App Services have a limit on outbound connections, 8,000 in my case.
I need to move towards a more scalable server which brings me some questions.
My constraints : I need to keep a constant websocket open with the devices and be able to reach them at any time to send them messages (one by one).
I started looking into Azure SignalR Service (or Web PubSub, which seems very similar). The code and the pricing seem to fit my needs. Using the upstream feature, I could also send custom messages to my devices.
However, I don't understand the scaling part: the docs say each unit can handle 1,000 devices.
Following this question: What is a Unit in terms of Azure SignalR Service?
All my devices (5,000 of them, so 5 units) are going to connect to myapp.com. They will send me messages, which are forwarded to Azure Functions for analysis.
But if I decide to send a message to device n° 4300, do I need to know which unit it is on? Can I reach it if I have several units?
I couldn't find the answer in Azure's docs or the SignalR docs.
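To make this concrete, here is a rough sketch of what I'd like to be able to do, shown with the @azure/web-pubsub server SDK; the hub name, connection string, and the idea of using the device id as the userId are just placeholders, and whether this works transparently across several units is exactly my question:

```typescript
// Sketch only: illustrates "send a message to one device" through the service
// client. Hub name, connection string and the userId scheme are assumptions.
import { WebPubSubServiceClient } from "@azure/web-pubsub";

const service = new WebPubSubServiceClient(
  process.env.WEBPUBSUB_CONNECTION_STRING!, // hypothetical setting
  "devices"                                 // hypothetical hub name
);

async function sendToDevice(deviceId: string, payload: object): Promise<void> {
  // Assumes each device connected with its id as the userId.
  await service.sendToUser(deviceId, JSON.stringify(payload));
}

sendToDevice("4300", { command: "ping" }).catch(console.error);
```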
Now here's a really weird question that I couldn't find the answer to on the internet. Here's how I'm planning to build a project:
Controller App --> Node.js Server (probably Express) --> Some IoT Device Running Node.js Who Knows Where
So essentially, the Controller App wants to control an IoT device, but that device could be anywhere. It therefore communicates with a server on a static IP, which keeps track of where the IoT device is (it could be on any network/IP/port). The controller app sends a request to the server, and the server tells the IoT device, wherever it is, to do something.
The problem is, how will this Node.js Server know where the device is?
Proposed Solution A: One way I thought of was to have a server, and share a secret string between the server and the IoT device. The server will have some 'endpoint(?)' that the IoT device can 'subscribe' to.
Proposed Solution B: The IoT device forms a WebSocket or a Socket.io connection. Whilst this might be a better and easier solution, when you add many devices, will the server consume many more resources because it's communicating with multiple devices in real time?
So yeah, a really weird question, because here, it's really a push notification from Node.js -> Node.js, rather than what every other search result is about, for Node.js -> Some Notification Service like iOS or Google or Web Service Workers.
Thanks!
The "push" options are generally as follows:
Client polls an endpoint every once in a while to check if there's something new. Not really push, but very simple to implement. Feasibility of this implementation depends on how "real-time" you need the push to be.
Client creates and maintains a constant connection with the server, and the server can then send data over that connection at any time. This would be the webSocket or socket.io option or, in some cases, SSE (server-sent events), which is a form of continuous HTTP. The client needs the ability to detect when the connection has dropped and re-establish it as needed. Obviously, the server needs to handle a simultaneous (but mostly idle) connection from every device you're supporting. If the traffic is low, custom server configurations can support hundreds of thousands of connections. Typical shared hosting solutions are much more limited in this regard because they don't give you access to the whole server's resources. (A minimal sketch of this option follows below.)
Server uses some existing "push service" that is built into the client. This would work for an iOS or Android device that has a push service as part of the platform. Not available to a custom IoT device.
Third-party push services or libraries. Google has Firebase Cloud Messaging, which purports to be usable with IoT devices, but I'm mostly finding examples of the IoT device initiating the event and having it pushed to more classic devices (phones, browsers, etc.), not of a node.js server pushing to an IoT device.
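As a minimal sketch of the persistent-connection option, assuming each device announces an id right after connecting; the event names, port, and id scheme are illustrative only, not a prescribed protocol:

```typescript
// Minimal sketch: each device keeps one socket.io connection open and the
// server can push to a single device by id. The "register"/"command" event
// names and the id scheme are assumptions for illustration.
import { Server, Socket } from "socket.io";

const io = new Server(3000);
const devices = new Map<string, Socket>(); // deviceId -> live socket

io.on("connection", (socket) => {
  socket.on("register", (deviceId: string) => {
    devices.set(deviceId, socket);
    socket.on("disconnect", () => devices.delete(deviceId));
  });
});

// Called by the rest of the server when a command must reach one device.
function pushToDevice(deviceId: string, command: unknown): boolean {
  const socket = devices.get(deviceId);
  if (!socket) return false; // device currently offline
  socket.emit("command", command);
  return true;
}
```

Note that the in-memory map is exactly the per-server state that makes this option heavier to scale horizontally.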
One of the requirements of my app is that when one user makes any insert/update/delete, all users viewing a page with a list of that record type get pushed an update containing the change. The user should not be expected to repeat an API call to refresh the dozens of records that did not change, because the push should contain a short summary of the change that occurred.
I accomplished this in my small dev server using SocketIO. I can't scale this across more than one server. My target infrastructure is AWS, and I know AWS has a push notification service, but I believe it's mobile-only and not what I'm looking for. The huge number of data streams being subscribed to is the reason I haven't considered a serverless infrastructure.
I'm new to AWS and have never attempted horizontal scaling either, so please forgive me if my entire question is ignorant.
Have you taken a look at using the AWS IoT MQTT messaging protocol? Each browser is a 'device', and you have JavaScript listening in the browser for messages published via a socket protocol. Each service pushes a message to MQTT when it has an update. There are some good POCs out there (e.g. medium.com/#jparreira/…)
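A rough sketch of that browser-as-device idea, using the generic mqtt package; the endpoint URL and topic layout are placeholders, and the authentication AWS IoT requires for WebSocket connections is omitted:

```typescript
// Rough sketch of "each browser is a device": the page opens an MQTT-over-
// WebSocket connection and listens on a topic; services publish change
// summaries there. Endpoint and topic names are placeholders.
import mqtt from "mqtt";

const client = mqtt.connect("wss://example-iot-endpoint/mqtt"); // placeholder

client.on("connect", () => {
  // Hypothetical topic per record type the page is currently showing.
  client.subscribe("records/orders/updates");
});

client.on("message", (_topic, payload) => {
  // Apply the short change summary to the list in memory instead of refetching.
  const change = JSON.parse(payload.toString());
  console.log("record changed:", change);
});
```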
I am referring to the diagram
Node.js is used as the runtime in this case, and AWS Lambda is used as the event notifier (updates come from another Lambda or the DB).
My challenge is that the "user browser" can also be a mobile client. The "API" should act as a service which allows clients (mobile or web) to subscribe, unsubscribe, or publish data, nothing else.
Can Lambda work as an API that can push event notifications directly to clients?
Is there any solution, and also sample work/source code, that can be used as a POC?
Next question: how can I scale such an architecture, since it becomes stateful (it requires memory to remember the state of client connections)?
Alternatively, how feasible is it to persist client connections in a DB (using frameworks like WebSocket or socket.io)?
AWS has the SNS service to send notifications, which you can use from Lambda.
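For example, a minimal sketch of a Lambda handler publishing through SNS; the topic ARN environment variable and the message shape are placeholders:

```typescript
// Minimal sketch: a Lambda handler publishing a notification via SNS.
// The topic ARN env var and the message shape are placeholders.
import { SNS } from "aws-sdk";

const sns = new SNS();

export const handler = async (event: { userId: string; text: string }) => {
  await sns
    .publish({
      TopicArn: process.env.NOTIFICATIONS_TOPIC_ARN, // hypothetical
      Message: JSON.stringify({ userId: event.userId, text: event.text }),
    })
    .promise();
  return { status: "queued" };
};
```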
You can also directly use the relevant platform's notification system, e.g. for iOS, Node has an "apn" module that is used to communicate with Apple's APNS service; it's straightforward to use and can be implemented in a Lambda function.
In brief:
Your iOS app registers for APNS which responds with an APNS device token. Your app should then send this to your API / server for storage.
Your API can then send notifications to APNS, referencing any device tokens, along with the private key file you create from the Apple Developer page.
APNS will send the notifications to the registered devices.
Here is a good tutorial.
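As a rough sketch of those steps with the "apn" module (the key file, keyId, teamId, bundle identifier, and device token below are all placeholders):

```typescript
// Rough sketch of the APNS flow above using the "apn" module.
// Key file, keyId, teamId, bundle id, and device token are placeholders.
import apn from "apn";

const provider = new apn.Provider({
  token: {
    key: "AuthKey_ABC123.p8", // private key from the Apple Developer page
    keyId: "ABC123",
    teamId: "TEAM123",
  },
  production: false,
});

async function notifyDevice(deviceToken: string, message: string) {
  const note = new apn.Notification();
  note.alert = message;
  note.topic = "com.example.myapp"; // the app's bundle identifier
  const result = await provider.send(note, deviceToken);
  console.log("sent:", result.sent.length, "failed:", result.failed.length);
}

notifyDevice("stored-device-token", "Hello from the API").catch(console.error);
```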
Your other queries should perhaps be separate questions.
Can Lambda work as an API that can push event notifications directly to clients?
Yes! As @AndyOS mentioned, SNS is a great service that is quite literally intended to send notifications. I won't go into details here to avoid duplicating that answer.
Is there any solution, and also sample work/source code, that can be used as a POC?
Alternatively, how feasible is it to persist client connections in a DB (using frameworks like WebSocket or socket.io)?
If you are looking to use websockets, I'd encourage you to take a look at IoT (https://aws.amazon.com/iot). IoT supports the MQTT protocol (http://docs.aws.amazon.com/iot/latest/developerguide/protocols.html). This page also contains sample client-side code which might help you bootstrap your solution.
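On the publishing side, a minimal sketch of a Lambda pushing a change onto an IoT MQTT topic that connected clients are subscribed to; the endpoint variable and topic layout are placeholders:

```typescript
// Minimal sketch: a Lambda publishing a change event to an AWS IoT MQTT topic.
// The endpoint env var and the topic layout are placeholders.
import { IotData } from "aws-sdk";

const iot = new IotData({ endpoint: process.env.IOT_ENDPOINT! });

export const handler = async (event: { recordId: string; change: string }) => {
  await iot
    .publish({
      topic: `records/${event.recordId}/updates`, // hypothetical topic scheme
      qos: 1,
      payload: JSON.stringify(event),
    })
    .promise();
};
```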
Next question: how can I scale such an architecture, since it becomes stateful (it requires memory to remember the state of client connections)?
You can view the service limits of IoT at http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_iot. You would need to decide if your app fits within these bounds, depending on the various metrics your app has (number of requests per second, number of concurrent connections, etc.).
This is my first question here, and I realize it might be open-ended, but I'm looking for specific solutions, and any solution would be accepted.
I have GPS devices which send data packets to an IP on a port, both of which I can configure. I wish to use one of Google's, Amazon's, or Microsoft's cloud offerings. I am using Python. Here is an implementation I found online:
https://github.com/rdkls/gps-tracker-server
The data comes as packets that are not over the HTTP protocol. I have considered building a network listener over a socket on Google Compute Engine, but I'm not sure it will be able to handle simultaneous requests from 1,000 devices if such a situation ever arises. The Google Cloud IoT Core offering seems to fit my need perfectly, but it is in private beta right now, which means I can't use it. I think I'll need a message queue service, but most of the offerings from these three companies require messages over HTTP. Keep in mind that I can't change how the messages are sent from the GPS devices.
The messages sent are in this format -
https://drive.google.com/file/d/0B2EklrIn3KugS2NJYWZGWlVWeGdMbjM4WHQ2TUZmYWhIRmt3/view?usp=drive_web
Format:
data is sent in (byte-sized) packets directly to the IP:Port over GPRS connections, one heartbeat packet every minute and GPS details every minute from each device. It also requires the server to reply to each message with an acknowledgement, since it's not over TCP/IP.
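To show the shape of the listener I have in mind (sketched here with Node's net module purely for illustration; my implementation would be in Python, and the real packet framing and acknowledgement bytes are device-specific and not reproduced):

```typescript
// Shape of the raw packet listener: accept long-lived connections from the
// devices, read heartbeat/GPS packets, and reply with an acknowledgement.
// Illustrative only; parsing and the ack byte are placeholders.
import net from "net";

const server = net.createServer((conn) => {
  conn.on("data", (packet: Buffer) => {
    // Parse the proprietary packet here (heartbeat vs. GPS position)
    // and hand it off to whatever queue/analysis pipeline is chosen.
    console.log("received", packet.length, "bytes from", conn.remoteAddress);

    // The devices expect a reply to every message as an acknowledgement.
    conn.write(Buffer.from([0x01])); // placeholder ack byte
  });

  conn.on("error", (err) => console.error("device connection error", err));
});

server.listen(5023); // placeholder port the devices are configured to send to
```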
So basically, which service and which architecture should I use keeping scalability, reliability and cost in mind?
I think for 1,000 devices, each sending such messages every minute, the total would be about 43M messages per month. I'm not sure, but I'm looking for something that'll cost me about $1,000 per month, i.e. $1 per device per month.
I'm trying to get my head around MassTransit in combination with RabbitMQ.
The basic concepts are working in a test project, but what I need is the following:
My system will have one or more servers that react to real-life events (telephony). These events will, by means of MassTransit and RabbitMQ, be translated into messages that will be picked up by one or more receivers via a separate server set up as the RabbitMQ host. So far so good.
However, I cannot assume that I always have a connection between the publisher and the host machines. Just assume that the publishing server will continue to consume the real-life events but now cannot publish its messages.
So, the question is: does MassTransit have some kind of mechanism to store messages locally until the connection is re-established?
Or should I install RabbitMQ on every publishing server as well, in order to create a local exchange? Then I have to make the exchanges synchronize themselves after a reconnect.
You probably have to implement a store-and-forward policy. Instead of publishing your message directly through MassTransit and RabbitMQ, you can store the message in a persistence repository (a local database) and delegate to some other process the job of publishing the stored messages through MassTransit later. This approach is often referred to as "client high availability". It does not replace the standard server-side HA (high availability), like the one implemented by RabbitMQ, but it is a good approach in a distributed system like the one you describe, because it helps a lot in scenarios of server failure (e.g. an issue on the RabbitMQ server causing loss of messages that you still have in a client's local store, so they can be processed again).
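A language-agnostic sketch of that store-and-forward idea (written as TypeScript pseudocode only to illustrate the pattern; in your case the store, the forwarding loop, and the publish call would live in the .NET/MassTransit process, and none of these interfaces are MassTransit APIs):

```typescript
// Illustration of store-and-forward only: these interfaces are hypothetical,
// not MassTransit APIs. Events are written to a local durable store first,
// and a separate loop forwards them whenever the broker is reachable.
interface OutboxStore {
  append(message: object): Promise<void>;
  oldest(limit: number): Promise<{ id: string; body: object }[]>;
  markSent(id: string): Promise<void>;
}

interface Bus {
  publish(body: object): Promise<void>; // e.g. MassTransit -> RabbitMQ
}

// Telephony events are always recorded locally, whether or not RabbitMQ is up.
async function recordEvent(store: OutboxStore, event: object): Promise<void> {
  await store.append(event);
}

// Runs periodically: drains the local store in order while the broker answers.
async function forwardPending(store: OutboxStore, bus: Bus): Promise<void> {
  for (const msg of await store.oldest(100)) {
    try {
      await bus.publish(msg.body);
      await store.markSent(msg.id);
    } catch {
      break; // broker unreachable: stop and retry on the next run
    }
  }
}
```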