Make a Node server wait for client input before continuing - node.js

I have a Node.js app that runs a specific, finite sequence of actions.
One of the actions is getting an array of images, sending it to a client, and displaying it for manual human filtering.
Once filtering is done (say, a button is pressed), I need the Node.js app to continue executing the sequence until it finishes.
I've been going back and forth on how to do this (if possible, without using sockets).
I tried creating a boolean representing whether filtering was done, and spinning on
while (!boolean), but the server is so busy running that loop that it can't even handle the response that is supposed to update that same boolean.
Is there a better way?
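One event-driven way out, sketched under assumptions: replace the busy-wait with a promise that the button-press request resolves. The Express server, route name, and helper steps below are all illustrative, not taken from the original app:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// The sequence awaits this promise instead of spinning in while (!flag),
// so the event loop stays free to receive the client's request.
let resolveFiltering: ((kept: string[]) => void) | undefined;

function waitForFiltering(): Promise<string[]> {
  return new Promise((resolve) => (resolveFiltering = resolve));
}

// Hypothetical route: called by the client when the "done" button is pressed.
app.post("/filtering-done", (req, res) => {
  res.sendStatus(200);
  resolveFiltering?.(req.body.keptImages); // wakes the paused sequence
});

// Hypothetical stand-ins for the app's real steps:
async function getImages(): Promise<string[]> { return ["a.png", "b.png"]; }
async function sendToClient(images: string[]): Promise<void> { /* deliver for review */ }

async function runSequence() {
  const images = await getImages();
  await sendToClient(images);
  const kept = await waitForFiltering(); // pauses here without blocking
  console.log("continuing sequence with", kept);
}

app.listen(3000, runSequence);
```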

Related

Speech Services STT - Possible to Link Request to Result?

I have a use case where a mobile app records a long series of commands. Each command is a short, single word (or number). They can happen quickly, one right after the other, but the use case does not care if it takes several seconds to get results back from the Cognitive Services server. It is currently implemented as discrete asynchronous requests rather than streaming (which seems to be more reliable for us).
Since results are coming back async, I see no easy way to map the result back to its corresponding request (and ultimately the app command). Can I embed a unique ID somewhere that will get passed back to me? Is there some other option?
Are you using the SDK?
If you use recognizeOnce, you get the result for the audio as the call's return value (synchronously).
If you use continuous recognition, there is currently no way to tag the audio segment.
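To make that concrete: with discrete requests, each recognizeOnceAsync call can carry its own correlation ID in a closure, so no service-side tagging is needed. A minimal sketch with the JavaScript Speech SDK; the key, region, file name, and ID scheme are placeholders:

```typescript
import * as fs from "fs";
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

const speechConfig = sdk.SpeechConfig.fromSubscription("<key>", "<region>");

// One discrete request per command clip. commandId is our own correlation
// tag, closed over by the callbacks rather than sent to the service.
function recognizeCommand(commandId: string, wavPath: string) {
  const audioConfig = sdk.AudioConfig.fromWavFileInput(fs.readFileSync(wavPath));
  const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);
  recognizer.recognizeOnceAsync(
    (result) => {
      console.log(commandId, "->", result.text); // result maps back to its request
      recognizer.close();
    },
    (err) => {
      console.error(commandId, "failed:", err);
      recognizer.close();
    }
  );
}

recognizeCommand("cmd-42", "command42.wav");
```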

(Angular 4) Make constant HTTP requests to display data in real-time

I have a REST API which returns the latest sensor data when I make a GET request. New data is added every second. I would like to make a GET request every second so I can feed the latest sensor data to a line chart, keeping the user updated and giving it a real-time feel.
Since Angular 4 is unable to create background threads, I'm unsure how to accomplish this. I found some information about Web Workers, but I was unable to find a proper example. I'm also unsure how I would use a Web Worker to keep returning new data, as it usually returns only one value (when the function finishes executing).
Is this even possible using Angular 4?
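For what it's worth, polling does not need a background thread, because HttpClient requests are already asynchronous; an RxJS timer can drive them. A sketch in current RxJS style (Angular 4 shipped with RxJS 5, where the operator spelling differs slightly; the endpoint URL and one-second period are placeholders):

```typescript
import { Component, OnDestroy, OnInit } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { interval, Subscription } from "rxjs";
import { switchMap } from "rxjs/operators";

@Component({
  selector: "app-sensor-chart",
  template: `<p>latest: {{ latest | json }}</p>`, // feed your chart instead
})
export class SensorChartComponent implements OnInit, OnDestroy {
  latest: any;
  private sub: Subscription | null = null;

  constructor(private http: HttpClient) {}

  ngOnInit() {
    // Tick every second; switchMap cancels a still-pending GET if the next
    // tick arrives first, so requests never pile up. "/api/sensor" is a
    // placeholder URL.
    this.sub = interval(1000)
      .pipe(switchMap(() => this.http.get("/api/sensor")))
      .subscribe((data) => (this.latest = data));
  }

  ngOnDestroy() {
    if (this.sub) this.sub.unsubscribe(); // stop polling with the component
  }
}
```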

Parse LocalDataStore queries give Warning: A long-running operation is being executed on the main thread

I'm developing an Objective-C application using Parse. I understand why, when I make a Parse query to the server, I need to perform the query in the background and run code in a completion handler / callback when the call has completed. And this is what I do when I initially launch the app and download some data tables.
However, I then pin all these Parse objects locally and subsequently query this data using the LocalDataStore option. Do I still need to execute these calls in the background? If I remove the background option from these calls, the code runs fine, but I still get this warning in the console:
Warning: A long-running operation is being executed on the main thread.
Break on warnBlockingOperationOnMainThread() to debug.
If I'm performing local Parse queries, can I treat this simply as a warning (and ignore it), or do I still need to treat these queries as operations that should be performed on a background thread? Any advice would be appreciated. Thanks.

In the PostgreSQL wire protocol, how can I reach ReadyForQuery from a partially executed SELECT?

I am trying to debug an issue with the node-pg-cursor module in Node.js against a PostgreSQL server (version 9.3).
This module allows sequential reads of N rows from a SELECT and works by sending, for each
cur.read(N): an Execute message on portal=unnamed with rows=N
This command fetches up to N rows, and we can continue fetching rows incrementally until the end, where we receive
CommandComplete
ReadyForQuery
Now my problem is that I want to bail out of the extended query before fetching all the rows and reaching the end of the Execute sequence: I would like to fetch N rows, then N more, and so on, and at some point decide that I have enough.
When I do that (stop fetching via Execute), the query never seems to reach CommandComplete or ReadyForQuery. This seems normal, since nothing tells the extended query that I am never going to ask it for rows again.
Apart from closing the connection, is there a command that reaches CommandComplete or ReadyForQuery without fetching all the rows from the portal?
I tried sending Close and received CloseComplete, but the connection did not move to ReadyForQuery.
If I force an ErrorResponse by sending garbage on the protocol, I reach ReadyForQuery, but that does not seem very clean...
I think you're referring to this, in the documentation:
If Execute terminates before completing the execution of a portal (due to reaching a nonzero result-row count), it will send a PortalSuspended message; the appearance of this message tells the frontend that another Execute should be issued against the same portal to complete the operation. The CommandComplete message indicating completion of the source SQL command is not sent until the portal's execution is completed. Therefore, an Execute phase is always terminated by the appearance of exactly one of these messages: CommandComplete, EmptyQueryResponse (if the portal was created from an empty query string), ErrorResponse, or PortalSuspended.
Presumably, you're getting PortalSuspended and you want to discard the portal without executing any more of it or consuming any more results.
If so, I think you can just send a Sync message:
At completion of each series of extended-query messages, the frontend should issue a Sync message. This parameterless message causes the backend to close the current transaction if it's not inside a BEGIN/COMMIT transaction block ("close" meaning to commit if no error, or roll back if error). Then a ReadyForQuery response is issued.
You may wish to issue a Close against the portal first:
The Close message closes an existing prepared statement or portal and releases resources.
so what I think you need to do is, in message-flow terms (a node-pg-cursor sketch of this flow follows the list):

Parse
Bind a named portal
Describe
Loop:
    Execute with a row-count limit to fetch some rows
    If no more rows are needed: Close the portal, then break out of the loop
    If CommandComplete is received: break out of the loop
Sync
Wait for ReadyForQuery
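In node-pg-cursor terms, which drives exactly this Parse/Bind/Execute flow under the hood, the pattern might look like the sketch below. This is a sketch under assumptions rather than the module's documented usage verbatim: it assumes a recent pg-cursor version whose read() and close() return promises, and the query text, batch size, and stopping rule are all illustrative:

```typescript
import { Client } from "pg";
import Cursor from "pg-cursor";

// Read batches until we decide we have enough, then Close + Sync.
async function readUntilSatisfied() {
  const client = new Client(); // connection details from environment variables
  await client.connect();

  const cursor = client.query(new Cursor("SELECT * FROM big_table"));
  const collected: any[] = [];
  try {
    for (;;) {
      const rows = await cursor.read(100); // one Execute with rows=100
      if (rows.length === 0) break;        // portal exhausted: CommandComplete
      collected.push(...rows);
      if (collected.length >= 500) break;  // "we have enough": stop early
    }
  } finally {
    await cursor.close(); // Closes the portal and syncs,
    await client.end();   // bringing the connection back to ReadyForQuery
  }
  return collected;
}
```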
It sounds like you might want to be using the asynchronous query-processing API, if your driver is a libpq wrapper. If it's a native implementation, the libpq source code may offer you clues.
Overall, it looks like you'll need to cancel the query using a new connection, then continue to consume input until the buffer is empty. You'll receive however much result data was buffered, then an error message indicating the query was cancelled (if it didn't buffer all its output before you cancelled it) and finally a ReadyForQuery.
I quote the libpq manual:
A client that uses PQsendQuery/PQgetResult can also attempt to cancel a command that is still being processed by the server; see Section 31.6. But regardless of the return value of PQcancel, the application must continue with the normal result-reading sequence using PQgetResult. A successful cancellation will simply cause the command to terminate sooner than it would have otherwise.
Systems usually have quite big TCP send buffers, and they're typically dynamic. See Linux's tcp(7), the SO_SNDBUF option to setsockopt(2), etc. So quite a lot of data might be buffered before the PostgreSQL server blocks on writing to the socket. PostgreSQL doesn't offer per-connection control of the send buffer size, or even a global config option; you must do it at the operating-system level. (That said, it'd be trivial to patch PostgreSQL to set a send buffer size with setsockopt and SO_SNDBUF if you wanted to.)
PostgreSQL can't just flush the output buffer when you cancel a query. Even if it were safe to do so and the platform supported it, Pg doesn't know for sure that the buffer has emptied of results from prior queries and other relevant messages, since you might have pipelined multiple queries.
So all you can really do is reduce the maximum size of the TCP output buffer. That'll reduce the amount of data you must read and throw away, but it may impact the performance of other queries that send bulk data.
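For illustration, here is a hedged sketch of that cancel-from-a-second-connection approach in node-pg. It uses the server-side pg_cancel_backend() function rather than the wire-level CancelRequest that PQcancel sends (both cancel the running statement), and it assumes node-pg's processID property, which records the backend PID from BackendKeyData but isn't always present in the typings:

```typescript
import { Client } from "pg";

// Cancel whatever `victim` is currently executing, from a second connection.
// On success the victim's query fails with "canceling statement due to user
// request", and the connection then drains to ReadyForQuery as described.
async function cancelQuery(victim: Client): Promise<void> {
  const admin = new Client(); // second connection, env-configured
  await admin.connect();
  // (victim as any).processID: backend PID node-pg stores after connecting.
  await admin.query("SELECT pg_cancel_backend($1)", [(victim as any).processID]);
  await admin.end();
}
```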
Instead of trying to run the query and cancelling it when you've seen enough, I suggest reading rows in batches, requesting a new batch when you've consumed the current one. You can do this by using protocol-level cursors. That way you can control how much data the server queues up and you don't have to mess with buffer sizes. You may already be doing this - using a named portal, and sending an Execute with a maximum row-count, waiting for the PortalSuspended to say there are more rows to read.

QAbstractItemModel Lazy Loading locks application

I have implemented canFetchMore, hasChildren and fetchMore in order to allow my model to be lazily loaded. It's very simple and based on Qt's simple tree model example: http://doc.qt.io/archives/qt-4.7/itemviews-simpletreemodel.html
My problem is that in my application fetching children is not a very quick operation; it involves a few seconds of delay on the server side while it figures out what the children actually are.
I'm unsure how to deal with that. I can't have my application locking up for several seconds every time someone expands a node, and I don't know how to make this happen in the background. If I were to create a sub-process or thread to actually do the work of retrieving the children and updating the client-side data structure, how would I go about telling the model that this has completed successfully (so that the node can finally expand)?
Also, is there a way to show that the node is currently in the process of loading the data in the background?
Apologies if these are stupid questions; GUI programming is still a bit of a mystery to me, and I've never used Qt before.
For the record, I'm using Python, but if answers are given in C++ I can understand them.
Thanks
If I was to create a sub-process or thread to actually do the work of retrieving the children and updating the client side data structure, how would I go about telling the model that this had successfully completed (and for the node to finally expand).
You can use signals and slots. In the thread where you retrieve the data, emit a custom signal such as someDataAvailable(YourDataType); in the GUI, handle that signal with a slot such as handleDataReadySignal(YourDataType). The signal passes along the object you give it when emitting. You then update the GUI and the list inside the handleDataReadySignal slot. Of course, you need to connect the slot to the signal, preferably in the constructor of the window/dialog to which the list is attached.
