(Angular 4) Make constant HTTP requests to display data in real-time - multithreading

I have a REST API that returns the latest sensor data when I make a GET request. New data is added every second. I would like to make a GET request every second so I can feed the latest sensor data to a line chart, keeping the user updated and giving the app a real-time feel.
Since Angular 4 is unable to create background threads, I'm unsure how to accomplish this. I found some information about Web Workers, but I was unable to find a proper example. I'm also unsure how I would use a Web Worker to keep returning new data, as it usually returns only one value (when its function finishes executing).
Is this even possible using Angular 4?
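For reference, polling like this doesn't need a background thread: Angular's HTTP calls are asynchronous, so a timer-driven Observable on the normal event loop does the job. A minimal sketch (the endpoint URL and data shape are assumptions; HttpClient requires Angular 4.3+, on earlier versions use Http instead):

import { Component, OnDestroy, OnInit } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs/Observable';
import { Subscription } from 'rxjs/Subscription';
import 'rxjs/add/observable/interval';
import 'rxjs/add/operator/switchMap';

@Component({ selector: 'app-sensor-chart', template: '...' })
export class SensorChartComponent implements OnInit, OnDestroy {
  latestReading: any;
  private sub: Subscription;

  constructor(private http: HttpClient) {}

  ngOnInit() {
    // Tick every second; switchMap cancels any still-running request
    // and issues a fresh GET for the latest reading.
    this.sub = Observable.interval(1000)
      .switchMap(() => this.http.get('/api/sensor/latest')) // hypothetical endpoint
      .subscribe(reading => {
        this.latestReading = reading;
        // push `reading` into the line chart's data set here
      });
  }

  ngOnDestroy() {
    this.sub.unsubscribe(); // stop polling when the component is destroyed
  }
}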

Related

How to manage concurrent writes to a large (5mb) MongoDB document with Node JS

I built an app that manages sports tournaments using MongoDB and Mongoose on Node.js. I'd like to know if I am using the best solution to handle multiple concurrent writes to a large document (5 MB) in rapid succession.
Each "Event" (tournament) is a single document that contains a list of teams. There is a maximum number of teams that can register to each Event. So normally, when a team registers, my Node JS server will load the event, check if the max number of teams has not been reached, add the team to sub-documents and save the Event.
The problem is that some tournaments make players frantic to get a spot, and you can have 60 teams complete their registration in the opening seconds, which causes concurrency errors.
For example, if 2 teams click on "save" at the same time, 2 threads (requests) will open on the NodeJS server, both threads will load identical copies of the event, modify them and save two different versions of the document over one another. Obviously, you will get a version error for one of the two threads. Now imagine 60 teams registering within the same second.
The second problem is that the Event document is quite large. Let's be dramatic and say it's 5 MB in size (rare but possible). If I have to load, modify, and write 5 megs per registration, the registration system is going to grind to a halt (since my MongoDB is on a different server).
So I need to know if I built the right solution and if you guys foresee problems with this.
On my Node server, I built a singleton class (accessible to all requests) to manage access to documents. If a request comes along and asks for document X, the singleton returns a promise to the request, which will be resolved once the document becomes available to edit. The singleton then turns around, loads the document, and grants access to the first request by resolving its promise. When the request is done editing the document, it tells the singleton it's done. The singleton then checks whether there is a queue of other requests waiting to edit this document (other teams that want to register). If so, it does NOT save the document but rather resolves the next promise, allowing the next request to edit the document.
When the last request has finished editing the document and there are no more requests in the queue, the singleton saves the document and clears it from memory.
So in short, the singleton allows the system to load the document once, allow modifications from multiple requests, and then save the document at the end of the rush. This is especially useful since the document is rather large (up to 5 MB) and it minimizes the number of reads/writes to the MongoDB server. The other benefit is that if we're accepting 50 teams and we get 55 requests wanting to append their teams, the last 5 requests in the queue will take into account that the live document has reached its team limit and return a "sorry, we're full" response.
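A minimal sketch of that promise-queue singleton, assuming a Mongoose EventModel (all names are illustrative; error handling omitted):

import * as mongoose from 'mongoose';

const EventModel = mongoose.model('Event'); // assumes the model is registered elsewhere

class DocumentGate {
  private waiting = new Map<string, Array<(doc: any) => void>>();
  private live = new Map<string, any>();

  // Resolves with the in-memory document once it is this caller's turn to edit.
  async acquire(id: string): Promise<any> {
    if (this.waiting.has(id)) {
      // Document already loaded (or loading): queue behind the current editors.
      return new Promise(resolve => this.waiting.get(id)!.push(resolve));
    }
    this.waiting.set(id, []);
    const doc = await EventModel.findById(id); // the single read for the whole rush
    this.live.set(id, doc);
    return doc;
  }

  // Called by each request when it has finished editing.
  async release(id: string): Promise<void> {
    const next = this.waiting.get(id)!.shift();
    if (next) {
      next(this.live.get(id)); // hand the same in-memory document to the next editor
    } else {
      await this.live.get(id).save(); // rush over: one write persists everything
      this.live.delete(id);
      this.waiting.delete(id);
    }
  }
}

export const gate = new DocumentGate(); // one instance shared by all requests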
Is this the best way to manage concurrent writes to a large document?
MongoDB provides a multitude of update operators that you should use on the specific fields instead of modifying the entire document in your application. For example, to add to an array, use $push: https://docs.mongodb.com/manual/reference/operator/update/push/.
This way you 1) only send the changed data on each write, and 2) avoid racing yourself and clobbering your own changes.
This doesn't help with the time it takes the server to rewrite that 5 MB document each time it's modified - split the document up to fix this (if you find it to be an issue).
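A sketch of that with Mongoose (the model and field names are assumptions): matching on whether slot teams[MAX_TEAMS - 1] already exists turns the capacity check and the insert into one atomic server-side operation, and only the new team crosses the wire instead of the whole document.

import * as mongoose from 'mongoose';

const EventModel = mongoose.model('Event'); // assumes the model is registered elsewhere
const MAX_TEAMS = 50;

async function registerTeam(eventId: string, team: object): Promise<boolean> {
  // The filter only matches while teams[MAX_TEAMS - 1] is still empty, i.e.
  // while fewer than MAX_TEAMS teams are registered, so two simultaneous
  // requests can never both take the last spot.
  const updated = await EventModel.findOneAndUpdate(
    { _id: eventId, ['teams.' + (MAX_TEAMS - 1)]: { $exists: false } },
    { $push: { teams: team } },
    { new: true }
  );
  return updated !== null; // null means the event was already full (or not found)
}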

How should I manage the number of sockets in a node.js application?

I am building my first web-based node.js application - an online game - as a hobby/project to try and teach myself how it all works.
I'm using socket.io to send real-time updates (who's in the lobby, points scored, etc.) to users, but I'm not sure whether I'm managing the sockets, and the information being sent through them, in the best way.
Whenever the game is updated, I'm sending an object to each user which updates everything at once, and a lot of the time, the information being updated is actually staying the same. For example, if a user scores a point, an update is sent to everyone's browser to update the leaderboard, but that same socket.on function is re-sending information such as usernames, which stay the same throughout the game:
exampleObject = {
  "usernames": [username1, username2], // only gets updated in the browser once, but is sent every time
  "points": {
    "username1": 1, // Different value with every update
    "username2": 3
  }
}
(The real object is quite a bit bigger than this)
Would it be more sensible to have a different socket.on function for every individual piece of information which needs updating, so I can then call them individually as and when required, or is there any sense in updating everything through one function? Any thoughts/advice would be greatly appreciated.
If you are regularly sending a piece of information over and over, then it makes sense to design a specific message that only contains that specific information so you aren't regularly sending information that does not need to be sent. You can have as many different messages as you want and you should use that to design efficient messages, particularly for the most common messages.
Would it be more sensible to have a different socket.on function for every individual piece of information which needs updating, so I can then call them individually as and when required
Yes. Design efficient messages specifically for things you regularly send.
or is there any sense in updating everything through one function?
Only if you need to change lots of stuff at once. It's wasteful to include data in a frequent message that never changes and doesn't need to be sent.
It's perfectly fine to have different messages you send for different purposes and then the client has different listeners for those specific messages. At the same time, if you regularly send three pieces of data together, you probably wouldn't make a separate message for each piece of data - you'd put those three together such that your message structure aligns with your usage.
And, you can also have different messages for different purposes even if some data is in both messages.
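For illustration, a sketch of that split (socket.io v4 style; the message names are made up):

import { Server } from 'socket.io';

const io = new Server(3000);

io.on('connection', socket => {
  // Send the full, mostly-static state once, when the client joins.
  socket.emit('gameState', {
    usernames: ['username1', 'username2'],
    points: { username1: 1, username2: 3 },
  });
});

// When someone scores, broadcast only the fields that changed.
function onPointScored(username: string, points: number) {
  io.emit('scoreUpdate', { username, points });
}

// Client side:
//   socket.on('gameState', state => renderEverything(state));
//   socket.on('scoreUpdate', ({ username, points }) => updateLeaderboardRow(username, points));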
One more note here. The title of your question "How should I manage the number of sockets in a node.js application?" seems to ask about managing the number of sockets. But, the rest of your question isn't about that at all. The rest of your question is about having different messages on the same socket. You don't need a new socket in order to define and use a different message. You can have thousands of different messages that you use all on the same socket connection. That's the whole architecture of socket.io. You send a message name and some data that goes with it. You can use a limitless number of separate message names all on the same connection.

Speech Services STT- Possible to Link Request to Result?

I have a use case where a mobile app records a long series of commands. Each command is a short, single word (or number). They can happen quickly one right after the other, but the use case does not care if it takes several seconds to get results back from the Cognitive server. It is currently being implemented as discrete asynchronous requests rather than streaming (seems to be more reliable for us).
Since results are coming back async, I see no easy way to map the result back to its corresponding request (and ultimately the app command). Can I embed a unique ID somewhere that will get passed back to me? Is there some other option?
Are you using the SDK?
If you use recognizeOnce, you get the result for that audio back as the result of the call (synchronously), so the request-to-result mapping is yours to make.
If you use continuous recognition, there is currently no way to tag the audio segment.
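If you are on the JavaScript SDK, a sketch of doing that mapping yourself: each recognizeOnceAsync call yields exactly one result, so an ID captured in the closure ties the result back to its command (the commandId and the per-request recognizer are your own bookkeeping, not SDK features):

import * as sdk from 'microsoft-cognitiveservices-speech-sdk';

function recognizeCommand(
  speechConfig: sdk.SpeechConfig,
  audioConfig: sdk.AudioConfig,
  commandId: string // your own ID, not part of the SDK
): Promise<{ commandId: string; text: string }> {
  const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);
  return new Promise((resolve, reject) => {
    // One call, one result: the commandId captured in this closure is the mapping.
    recognizer.recognizeOnceAsync(
      result => {
        recognizer.close();
        resolve({ commandId, text: result.text });
      },
      err => {
        recognizer.close();
        reject(err);
      }
    );
  });
}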

make node server wait for client input before continuing

I have a Node.js app that runs a specific, finite sequence of actions.
One of the actions is getting an array of images, sending it to a client, and displaying it for manual human filtering.
After filtering is done (say, a button was pressed), I need the Node.js app to keep executing the sequence until it's done.
I've been wondering how to accomplish this (and, if possible, without the use of sockets).
I tried creating a boolean representing whether filtering was done, and using
while (!boolean), but the server seems too busy running the loop to even handle the response that is supposed to update that same boolean.
Is there a better way?
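One pattern that avoids the busy-wait (and needs no sockets) is to pause the sequence on a promise that an ordinary HTTP route resolves: the event loop stays free, so the server can actually receive the "filtering done" request. A sketch assuming Express (the route path and payload are made up):

import express from 'express';

const app = express();
app.use(express.json());

let resolveFiltering: ((kept: string[]) => void) | null = null;

// The browser POSTs here when the human presses the "done" button.
app.post('/filtering-done', (req, res) => {
  res.sendStatus(200);
  if (resolveFiltering) {
    resolveFiltering(req.body.keptImages);
    resolveFiltering = null;
  }
});

async function runSequence(): Promise<void> {
  // ...earlier actions: gather images, send them to the client...
  // Pause here without blocking: the promise resolves when the route is hit.
  const kept = await new Promise<string[]>(resolve => {
    resolveFiltering = resolve;
  });
  console.log('filtering finished; continuing with', kept.length, 'images');
  // ...remaining actions in the sequence...
}

app.listen(3000, () => runSequence().catch(console.error));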

Using Google map objects within a web worker?

The situation:
Too much stuff is running in the main thread of a page that builds a Google map with overlays representing ZIP territories from US census data, plus features the client has asked for that group territories into discrete groups. While there is no major issue on desktops, mobile devices (iPad) decide that the thread is taking too long (a maximum of 6 seconds after the data returns) and therefore must have crashed.
Solution: Offload the looping function that gathers the points for each row's shape to a web worker, which can work as fast or as slow as resources allow on a mobile device. (Three for loops: the 1st to select the row, the 2nd to select the column, the 3rd for each point within the column. Execution time: a matter of 3-6 seconds total for 2000+ rows with numerous points.)
The catch: For this to be properly efficient, the points must be made into a shape (polygon) within the web worker. HOWEVER, since it is a google.maps.Polygon object made up of google.maps.LatLng objects, it [the web worker] needs some knowledge of what those items are. Web workers cannot use window or the DOM, so the worker must import the script, and the intent was to pass back just the object as a JSON-encoded item. The code fails on any reference to google objects, even with importScripts(), because those items rely on the window object.
Further complications: Google's API is technically proprietary. The web app code that this is for is bound by NDA so pointed questions could be asked but not a copy/paste of all code.
The solution/any vague ideas:???
TLDR: I need to access the google.maps.LatLng object and (minimally) create new instances of it within a web worker. The web worker should return either objects ready to be popped into a google.maps.Polygon or a google.maps.Polygon itself. How do I reference the Google Maps API if I cannot use the default method of importing scripts, due to an issue requiring the window object?
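One workaround (a sketch, not from the original post): keep every google.maps object on the main thread and have the worker return plain { lat, lng } literals, which current versions of the Maps API accept directly as polygon paths. The row/point layout below is made up to match the three-loop description:

// worker.ts -- no Maps API needed here
self.onmessage = (event: MessageEvent) => {
  const rows: number[][][] = JSON.parse(event.data as string);
  // Build plain { lat, lng } literals instead of google.maps.LatLng instances.
  const shapes = rows.map(row => row.map(([lat, lng]) => ({ lat, lng })));
  (self as any).postMessage(JSON.stringify(shapes));
};

// main thread -- the only place that touches google.maps
declare const google: any; // provided by the Maps API <script> tag
declare const map: any;    // an existing google.maps.Map instance

const shapeWorker = new Worker('worker.js');
shapeWorker.onmessage = (event: MessageEvent) => {
  const shapes: Array<Array<{ lat: number; lng: number }>> = JSON.parse(event.data);
  for (const path of shapes) {
    // PolygonOptions.paths accepts LatLngLiteral arrays, so the worker's
    // plain objects go straight into google.maps.Polygon.
    new google.maps.Polygon({ paths: path, map: map });
  }
};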
UPDATE: Since this writing, I've managed to offload the majority of the grunt work from the main thread to the web worker, allowing it to parse through the data asynchronously and assign the data to custom-made latlng objects.
The catch now is getting the returned values to run the function in the proper context, to see whether the custom latlng is sufficient for google.maps.Polygon to work its magic.
Excerpt from the file that calls the web worker and listens for its response (CoffeeScript):
shapeWorker.onmessage = (event) ->
  console.log "--------------------TESTING---------------"
  data = JSON.parse(event.data)
  console.log data
  generateShapes(data.poly, data.center, data.zipNum)
For some reason, it's trying to evaluate generateShapes in the context of the web worker rather than in the context of the class it's in.
Once again, it was a complication of too many things going on at once. The scope was restricted due to the usage of -> rather than =>, which binds to the parent class and makes its functions accessible.
Apparently the issue resided with the version of iOS this web app needed to run on, and a bug that set the storage arbitrarily low (a tenth of its previous size). With some shrinking of the data and a fix to the iOS version in question, I was able to get it running without the use of web workers. One day I may be able to come back to it with web workers to increase efficiency.
