In Meteor JS code, I am using the HTTP.get method to call a server from inside a method. I must return the result to the client, so I am wrapping this function with Meteor.wrapAsync to get a synchronous function.
var httpSync = Meteor.wrapAsync(HTTP.get, this);
var result = httpSync(myUrl);
My question is: will Meteor.wrapAsync(AsyncFunction) block other requests? Will it affect the parallel execution of multiple requests?
It won't block the entire server. Meteor uses the fibers package to provide "synchronous-looking" functions that yield while waiting instead of blocking the Node.js event loop.
However, it will block other methods from the same user. If you want other methods from that user to run simultaneously, call this.unblock() inside the method:
On the server, methods from a given client run one at a time. The N+1th invocation from a client won't start until the Nth invocation returns. However, you can change this by calling this.unblock. This will allow the N+1th invocation to start running in a new fiber.
By the way, you don't need to wrap HTTP.get with Meteor.wrapAsync, since it can already be used synchronously on the server. wrapAsync is intended for external libraries that are not designed for Meteor.
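For example, a method along these lines (just a sketch; the method name is a placeholder) lets other calls from the same client proceed while the HTTP request is in flight:

Meteor.methods({
  fetchRemoteData: function (myUrl) {
    // Allow the next method invocation from this client to start
    // while we wait for the HTTP response.
    this.unblock();

    // On the server, HTTP.get runs synchronously when no callback is
    // passed, so Meteor.wrapAsync is not needed.
    var result = HTTP.get(myUrl);
    return result.data;
  }
});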
In a web environment with Actix-Web, I want to write data to a database in the background (write-behind), async so the request is not held up. This could also be calling a webhook or calling an API to send an email.
With Scala I would create a queue and use a thread pool (e.g. with ForkJoin) to fire and forget a task.
How would I do this in Rust with Actix-Web? (Actix actors?)
You would use actix_web::rt::spawn to execute an async function that runs independently.
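A rough sketch of what that can look like (the route, the Order type, and the write_to_database function are made up for illustration):

use actix_web::{post, rt, web, HttpResponse, Responder};
use serde::Deserialize;

#[derive(Deserialize)]
struct Order {
    id: u64,
}

// Stand-in for the real database write, webhook call, or email API call.
async fn write_to_database(order: Order) -> Result<(), String> {
    println!("persisting order {}", order.id);
    Ok(())
}

#[post("/orders")]
async fn create_order(payload: web::Json<Order>) -> impl Responder {
    let order = payload.into_inner();

    // Fire and forget: the spawned task keeps running on the Actix runtime
    // after this handler has already sent its response.
    rt::spawn(async move {
        if let Err(e) = write_to_database(order).await {
            eprintln!("background write failed: {e}");
        }
    });

    HttpResponse::Accepted().finish()
}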
I have an AWS Lambda application built upon an external library that contains an EventEmitter. On a certain event, I need to make an HTTP request. So I was using this code (simplified):
myEmitter.on("myEvent", async () => {
  setup();
  await doRequest();
  finishingWork();
});
What I understand happens is this:
My handler is called, but as soon as the doRequest function is called, a Promise is returned and the EventEmitter continues with the next handlers. When all that is done, the work of the handler can continue (finishingWork).
This works locally, because my Node.js process keeps running and any remaining events on the event loop are handled. The strange thing is that this doesn't seem to work on AWS Lambda, even if context.callbackWaitsForEmptyEventLoop is set to true.
In my logging I can see my handler enter the doRequest function, but nothing after I call the library to make the HTTP call (request-promise, which uses request). And the code doesn't continue when I make another request (which I would expect if callbackWaitsForEmptyEventLoop were set to false, which it isn't).
Has anyone experienced something similar and knows how to perform an asynchronous HTTP request in the handler of a Node.js event emitter on AWS Lambda?
I had a similar issue as well: my event emitter logs all events normally until it runs into an async function. It works fine in ECS but not in Lambda, because the event emitter runs its listeners synchronously while Lambda will exit once the response is returned.
In the end, I used await-event-emitter to solve the problem.
await emitter.emit('onUpdate', ...);
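In outline (a sketch built on the question's snippet; the exact import shape may differ between versions of the await-event-emitter package), awaiting emit() keeps the Lambda invocation alive until every async listener has finished:

const AwaitEventEmitter = require('await-event-emitter').default;

const myEmitter = new AwaitEventEmitter();

myEmitter.on('myEvent', async () => {
  setup();
  await doRequest();
  finishingWork();
});

// Inside the Lambda handler: emit() only resolves after the listener,
// including its awaited HTTP request, has completed.
exports.handler = async (event) => {
  await myEmitter.emit('myEvent');
  return { statusCode: 200 };
};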
If you know how to solve this, feel free to add another answer. But for now, the "solution" for us was to put the event handler code elsewhere in our codebase. This way, it is executed asynchronously.
We were able to do that because there is only one place where the event is emitted, but the event handler approach would have been a cleaner solution. Unfortunately, it doesn't seem to be possible.
I understand that Node.js is a single-threaded process, but if I have to run a long database operation, do I need to start a web worker to do that?
For example, in a Sails.js app I can call the database to create records, but if the database call takes time to finish, it will block other users from accessing the database.
Below is the sample code I tried:
var test = function(cb) {
  for (i = 0; i < 10000; i++) {
    Company.create({ companyName: 'Walter Jr' + i }).exec(cb);
  }
}

test(function(err, result) {
});

console.log("return to client");
return res.view('cargo/view', {
  model: result
});
On the first request, I see the return almost instantly. But if I request it again, I have to wait for all the records to be inserted before the view is returned.
What is the common practice for this kind of blocking issue?
Node.js has non-blocking, asynchronous IO.
Read the article below; it will help you restructure your code:
http://hueniverse.com/2011/06/29/the-style-of-non-blocking/
Also, start using Promises to help you avoid writing blocking IO.
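For the example above, that could look something like this (only a sketch; it assumes a Sails/Waterline version where a query is thenable when .exec() is omitted):

var inserts = [];
for (var i = 0; i < 10000; i++) {
  // No .exec() here: treat each query as a promise instead.
  inserts.push(Company.create({ companyName: 'Walter Jr' + i }));
}

// Log the outcome when the inserts finish, without holding up the response.
Promise.all(inserts)
  .then(function () { sails.log('all records created'); })
  .catch(function (err) { sails.log.error(err); });

console.log("return to client");
return res.view('cargo/view', {
  model: null
});

In a real app you would probably batch the inserts (for example with createEach) rather than firing 10,000 individual queries.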
I would like to pause a function in order to wait for another section of code to finish.
Does Firefox OS have some synchronization method like wait() and notify() in Java?
Thanks
JavaScript doesn't have this concept; generally speaking it uses callbacks (functions passed as arguments) instead, a bit like anonymous classes in Java. For example, making a call to a web server:
function callToWebServer(url, doneCallback) {
  // do all kinds of magic, waiting for the web server to reply etc.
  // when done:
  doneCallback();
}
Now using it via:
callToWebServer('http://example.com', function() {
  // this is executed after the call to the web server succeeded
});
alert(1); // this is executed straight away
We can never block and wait, as JavaScript is a single-threaded execution environment; all code is written asynchronously.
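The same pattern is often written with a Promise these days; this is just a sketch of the equivalent, not something from the original answer:

function callToWebServer(url) {
  return new Promise(function (resolve) {
    // do all kinds of magic, waiting for the web server to reply etc.
    // when done:
    resolve();
  });
}

callToWebServer('http://example.com').then(function () {
  // this runs after the call to the web server succeeded
});
alert(1); // this still runs straight away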
You can see the interop model for going from Node.js -> C#, here.
What I want to know is: can the C# code then call back into a method in the Node.js part of the process before returning?
Imagine you had a call like
var webApi = edge.func('/MyDotNetApi.csx');
webApi(function (error, result) { console.log('api started'); });
where MyDotNetApi.csx returns but leaves a socket listener thread running to handle HTTP requests. Now, if the Node.js part of the process holds (ever-changing) information which the .NET code needs to include in its HTTP responses, can it somehow ask Node.js for it?
Calling back Node.js from C# with Edge.js is possible and documented.
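In outline, you pass a JavaScript function in the payload; Edge.js marshals it to .NET as a Func&lt;object, Task&lt;object&gt;&gt; that the C# code can invoke whenever it needs fresh data from Node.js. A rough sketch (the getData name and the data shape are made up):

var edge = require('edge');

var webApi = edge.func('/MyDotNetApi.csx');

// State that lives on the Node.js side and keeps changing.
var currentData = { counter: 0 };

webApi({
  // Marshalled to .NET as Func<object, Task<object>>; the C# listener
  // thread can await it to get the latest data for its HTTP responses.
  getData: function (input, callback) {
    callback(null, currentData);
  }
}, function (error, result) {
  if (error) throw error;
  console.log('api started');
});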