How do I make sure that I make an RPC call only after the first RPC call has finished? Also, how do I make sure that I refresh my view only after an RPC call has finished?
I know I can do it in the onSuccess method, but apart from that, is there something else I can work with?
I have done significant research on this and also saw that I could use a Timer, but I feel that is heavyweight on the client side.
Can I use Scheduler.scheduleDeferred to defer my second RPC call?
Thanks
Basically, you would have two RPC callbacks.
The first RPC callback's onSuccess would issue the second RPC call.
The second RPC callback's onSuccess is the place where you should 'refresh' your page.
Using timers is definitely not a good practice for this task.
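For illustration, here is that chaining pattern sketched in plain JavaScript callbacks rather than GWT's Java (firstRpc, secondRpc, and refreshView are made-up stand-ins, not your service methods); in GWT you would nest two AsyncCallback implementations in exactly the same shape.

// Sketch only: stand-ins for two dependent RPC calls and a view refresh.
function firstRpc(onSuccess) {
  setTimeout(() => onSuccess({ userId: 42 }), 100); // simulate an async RPC
}

function secondRpc(firstResult, onSuccess) {
  setTimeout(() => onSuccess({ details: 'data for user ' + firstResult.userId }), 100);
}

function refreshView(data) {
  console.log('refreshing view with', data);
}

// The second call is issued only from the first call's success handler,
// and the view is refreshed only from the second call's success handler.
firstRpc(function (firstResult) {
  secondRpc(firstResult, function (secondResult) {
    refreshView(secondResult);
  });
});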
I have a "recover password" option on a website. Sometimes my sendMail function takes a few seconds to execute. I want to call this async sendMail without waiting and return something like "check you inbox in a few seconds".
Will the server keep running the method or once it responds the method will be terminated?
Whatever happens will surely happen regardless if it's on Apache, Nginx, etc?
Yes, the sendMail method will keep running. An incoming http request is just one asynchronous operation that nodejs can process. It can have thousands of others going on too. In this case, your sendMail() operation would be one additional asynchronous operation carrying on and it will finish without any regard for when the http request finishes.
Remember, nodejs is NOT a threaded architecture where each http request is a new thread that gets terminated at some point. Instead, it's a non-blocking, event driven, asynchronous architecture and you can start as many other asynchronous operations as you want and they will run to completion regardless of what happens with the http request.
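A minimal sketch of that flow, assuming an Express handler and a hypothetical promise-returning sendMail():

const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical stand-in for a slow mail operation.
function sendMail(address) {
  return new Promise((resolve) => setTimeout(resolve, 5000));
}

app.post('/recover-password', (req, res) => {
  // Start the slow operation but do not await it; just log failures.
  sendMail(req.body.email).catch((err) => console.error('sendMail failed:', err));

  // Respond immediately. The sendMail promise keeps running on the
  // event loop after this response has been sent.
  res.send('Check your inbox in a few seconds.');
});

app.listen(3000);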
I'm using Express in Firebase Functions that are on Node 10. My question is: if I have heavy Promises that I need to complete before the function terminates, but I want to do this after I sent the response (res.status(200).send...), how is the best way I could do this?
I have come up with two approaches so far but neither seem great (and I haven't tried the 'finish' approach yet to see if it works):
For every router, add a finally... clause that awaits accumulated promises. This has the major downside that it'd be easy to forget to do this when adding a new router, or might have bugs if the router already needs its own finally clause
Use the Node stream 'finish' event, https://nodejs.org/docs/latest-v10.x/api/stream.html#stream_event_finish ... but I don't know if Firebase Functions allow this to be caught, or if it'll still be caught if my function runs to its end and the event is fired afterwards.
I'm doing this to try to make my functions' responses faster. Thanks for any help.
if I have heavy Promises that I need to complete before the function terminates, but I want to do this after I sent the response (res.status(200).send...), how is the best way I could do this?
With Cloud Functions, it's not possible to send a response before promises are resolved, and have the work from those promises complete normally. When you send the response, Cloud Functions assumes that's the very last thing the function will do. When the response is sent using res.send(), Cloud Functions will terminate the function and shut down the code. Any incomplete asynchronous work might never finish.
If you want to continue work after a response is sent, you will have to first offload that work to some other component or service that is not in the critical path of the function. One option is to send a message to a pubsub function, and let that function finish in the "background" while the first function sends the response and terminates. You could also send messages to other services, such as App Engine or Compute Engine, or even kick off work in other clouds to finish asynchronously.
See also:
Continue execution after sending response (Cloud Functions for Firebase)
How to exec code after res.send() in express server at Firebase cloudFunctions
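A minimal sketch of the Pub/Sub offloading approach described above, assuming the firebase-functions v1 API, the @google-cloud/pubsub client, and a hypothetical heavy-work topic (treat the exact publish call as an assumption to check against your client library version):

const functions = require('firebase-functions');
const { PubSub } = require('@google-cloud/pubsub');

const pubsub = new PubSub();

// HTTP function: hand the heavy work off to Pub/Sub, then respond right away.
exports.api = functions.https.onRequest(async (req, res) => {
  await pubsub.topic('heavy-work').publishMessage({ json: { userId: req.query.userId } });
  res.status(200).send('Accepted');
});

// Pub/Sub-triggered function: runs independently of the HTTP request above,
// so the heavy promises can be awaited here before this function returns.
exports.heavyWork = functions.pubsub.topic('heavy-work').onPublish(async (message) => {
  const { userId } = message.json;
  // ...do the heavy work for userId and await it here...
});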
When I send a message to a WebSocket client, is it blocking or non-blocking code?
ws.send(msg);
In other words, is it good practice to wrap a send within a setTimeout?
I am using the Node Einaros ws library, but I think this question applies to many other libraries, such as Socket.IO or Engine.IO, too.
Firstly, wrapping a blocking function in a setTimeout only delays the blocking call, so doing that would not gain you anything. The non-blocking nature of Node comes from the fact that the underlying engine runs an event system to let you know when traditionally blocking calls (such as file system retrieval) are complete.
WebSockets are a "fire and forget" protocol, which I think is what you're really asking about. The server and client do not block waiting for a response; instead they use the same event system mentioned above: each side 'listens' for events emitted from the other side and then handles them. It is worth noting that WebSocket communication in the browser runs only over TCP, meaning that if a packet is lost it will be retransmitted. This is not usually a problem, but in real-time games where milliseconds matter it is not always ideal.
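For reference, a small sketch with the ws package: send() queues the frame and returns immediately, and if you want to know when the write actually finished (or failed), it takes an optional callback, so there is no need to wrap it in setTimeout.

const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  // send() is non-blocking: it buffers the frame and returns immediately.
  ws.send('hello');

  // To observe completion or errors, pass the optional callback
  // instead of wrapping the call in setTimeout.
  ws.send('hello again', (err) => {
    if (err) console.error('send failed:', err);
  });
});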
I am new to C++ and I am trying to develop a client-server application based on the boost::asio library. I am (still) not able to understand properly the difference between sync and async modes. I've previously studied web protocol services such as HTTP and AJAX. From this explanation, it's clear that HTTP is synchronous and AJAX is asynchronous. What is the difference in TCP socket communication in terms of sync and async? And which mode is better from the perspective of enterprise-level multi-threaded application development, and why?
As I understand it, in synchronous mode the client blocks until it receives the packet/data message from the server, while in async mode the client carries on with other operations without blocking on the current one. Is that the right distinction? And is async synonymous with UDP, since it seems not to care whether it receives a transmission acknowledgement?
TCP transmission is always asynchronous. What's synchronous or asynchronous is the behaviour of the API. A synchronous API does things while you call it: for example, send() moves data to the TCP send buffer and returns when it is done. An asynchronous API starts when you call it, executes independently after it returns to you, and calls you back or provides an interrogable handle via which completion is notified.
HTTP is synchronous in the sense that you send a request, receive a response, display or process the response, all in that order.
Ajax is asynchronous only in the sense that it operates independently of the page request/response cycle in the surrounding HTTP request. It's a poor choice of terminology. It would have been better to use a term like 'nested', 'out of band', ...
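The question is about boost::asio, but the API distinction is easy to see in a few lines of Node (the language used elsewhere in this thread): the synchronous call does the work before returning, while the asynchronous call returns immediately and notifies completion later.

const fs = require('fs');

// Synchronous API: the work happens during the call; the next line does
// not run until the file has been read.
const data = fs.readFileSync(__filename);
console.log('sync read done,', data.length, 'bytes');

// Asynchronous API: the call returns at once and the read continues
// independently; completion is delivered through the callback.
fs.readFile(__filename, (err, buf) => {
  if (err) throw err;
  console.log('async read done,', buf.length, 'bytes');
});
console.log('async read started, waiting for the callback');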
I want to write a callback that takes a bit of time to complete an external IO operation, but I do not want it to interfere with sending data back to the client. I don't care about waiting for the callback to complete before replying to the client, but if the callback results in an error, I would like to log it. In about 80% of executions this callback will run after the response has been sent back to the client and the connection is closed.
My approach works well and I have not seen any problems, but I would like to know whether there are any pitfalls in this approach that I may be unaware of. I would think that node's evented IO would handle this without issue, but I want to make sure before I commit this architecture to production. Any issues that should make me reconsider this approach?
As long as you're not trying to reference that response object after the response is sent, this will not cause any problems. There is nothing about a request handler that cares one bit whether callbacks in its code are invoked after the response is generated.
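A minimal sketch of that pattern in an Express-style handler, with doExternalIO() as a hypothetical stand-in for the slow operation:

const express = require('express');
const app = express();

// Hypothetical stand-in for the slow external IO operation.
function doExternalIO() {
  return new Promise((resolve) => setTimeout(resolve, 2000));
}

app.get('/data', (req, res) => {
  // Start the slow work; attach only an error logger, and do not await it.
  doExternalIO().catch((err) => console.error('external IO failed:', err));

  // Reply without waiting. The promise above will usually settle after this
  // response has been sent and the connection closed; just avoid touching
  // `res` inside that callback.
  res.json({ ok: true });
});

app.listen(3000);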