Here's my code and the error that occurs in my terminal
The problem is that you are calling res.render() in your server code more than once. res.render() combines sending the data and closing the socket, so you can only call it once per incoming request. You can take the async/await approach #lpizzinidev mentions to buffer up the data and make the code more readable/maintainable, or chain your promises: call .then, execute the second query inside it, merge the two query results into one object, and then call res.render(combinedData).
If you want to send data in chunks, you need to change res.render() to res.write() (which you can call multiple times) before calling res.end().
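For illustration, here is a minimal sketch of the promise-chaining approach; getUsers() and getOrders() are hypothetical promise-returning queries, and the route and template names are made up:

app.get('/dashboard', function(req, res, next) {
    getUsers()
        .then(function(users) {
            // run the second query inside the first .then...
            return getOrders().then(function(orders) {
                // ...and merge both results into one object
                return { users: users, orders: orders };
            });
        })
        .then(function(combinedData) {
            // render exactly once, with everything in hand
            res.render('dashboard', combinedData);
        })
        .catch(next);
});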
I am a bit new to JavaScript web dev, so I am still getting my head around the flow of asynchronous functions, which can be a bit unexpected to the uninitiated. In my particular use case, I want to execute a routine on the list of available databases before moving into the main code. Specifically, to ensure that a test environment is always properly initialized, I am dropping a database if it already exists and then building it from configuration files.
The basic flow I have looks like this:
let dbAdmin = client.db("admin").admin();
dbAdmin.listDatabases(function(err, dbs){/*Loop through DBs and drop relevant one if present.*/});
return await buildRelevantDB();
By peppering some console.log() calls throughout, I have determined that the listDatabases() call basically puts the callback into a queue of sorts: I actually enter buildRelevantDB() before entering the callback passed to listDatabases(). In this particular example it seems to work anyway, I think because the call that reads the configuration file is also asynchronous and so puts its items into the same queue, but later. Still, I find this brittle and sloppy. There must be some way to ensure that the listDatabases portion resolves before moving forward.
The closest solution I found is here, but I still don't know how to get the callback I pass to listDatabases() to behave like a then, as in that solution.
Mixing callbacks and promises is a somewhat advanced technique, so if you are new to JavaScript, try to avoid it. In fact, try to avoid it even after you have learned everything and become a JS ninja.
The documentation for listDatabases() says it is async, so you can just await it without messing with callbacks:
const dbs = await dbAdmin.listDatabases();
/*Loop through DBs and drop relevant one if present.*/
Next, there is no need to await right before a return. If you can await within a function, that function is async and returns a promise anyway, so just return the promise from buildRelevantDB:
return buildRelevantDB();
Finally, you can drop the database directly. There is no need to iterate over all databases to pick the one you want to drop:
await client.db(<db name to drop>).dropDatabase();
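Putting the pieces together, a minimal sketch of the reset routine, assuming a connected MongoDB client and your existing buildRelevantDB() (resetTestDB is a made-up name; dropDatabase() succeeds even if the database does not exist):

async function resetTestDB(dbName) {
    // dropping a nonexistent database is harmless, so no listing or looping
    await client.db(dbName).dropDatabase();
    // buildRelevantDB() is async, so just return its promise
    return buildRelevantDB();
}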
I am new to Node.js, and I have been reading questions and answers related to this issue, but I am still not sure I fully understand the concept in my case.
Suggested Code
router.post('/test123', function(req, res) {
    someAsyncFunction1(parameter1, function(result1) {
        someAsyncFunction2(parameter2, function(result2) {
            someAsyncFunction3(parameter3, function(result3) {
                var theVariable1 = req.body.something1;
                var theVariable2 = req.body.something2;
            });
        });
    });
});
Question
I assume there will be multiple (10+, 100+, or whatever) simultaneous requests to one particular route (for example, an AJAX request to /test123, as shown above), each with its own variables (something1 and something2). According to this, it should be impossible for one user's theVariable1 and theVariable2 to be mixed up with (i.e., overwritten by) another user's req.body.something1 and req.body.something2. I am wondering whether this still holds when there are multiple nested callbacks (three, as above, or ten, just in case).
I am also considering using res.locals to save some data from the callbacks (instead of using theVariable1 and theVariable2). Is that a good idea, given that the data must not be overwritten by multiple simultaneous requests from clients?
Each request a Node.js/Express server receives generates a new req object.
So in the line router.post('/test123', function(req, res), the req object that's being passed in as an argument is unique to that HTTP connection.
You don't have to worry about multiple functions or callbacks. In a traditional application, if I have two objects, cat and dog, that I can pass to a listen function, I get back meow and bark respectively, even though there is only one listen function. That is roughly how you can view an Express app: even though you have all these get and post handlers, every user's request is passed to them as a unique entity, as sketched below.
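To make the isolation concrete, here is a hypothetical sketch built from the code in the question: each invocation of the handler gets its own scope, and the nested callbacks close over that scope only.

router.post('/test123', function(req, res) {
    // these locals belong to this invocation alone
    var theVariable1 = req.body.something1;
    var theVariable2 = req.body.something2;
    someAsyncFunction1(parameter1, function(result1) {
        // the callback closes over *this* request's variables,
        // even if a hundred other requests are in flight
        res.send({ v1: theVariable1, v2: theVariable2, result: result1 });
    });
});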
UPDATE: See MarkLogic 8 - Stream large result set to a file - JavaScript - Node.js Client API for someone's answer on how to do this in JavaScript. This question is specifically asking about XQuery.
I have a web application that consumes rest services hosted in node.js.
Node simply proxies the request to XQuery which then queries MarkLogic.
These queries already have paging setup and work fine in the normal case to return a page of data to the UI.
I need to have an export feature such that when I put a URL parameter of export=all on a request, it doesn't look up a page anymore.
At that point it should get the whole result set, even if it's a million records, and save it to a file.
The actual request needs to return immediately saying, "We will notify you when your download is ready."
One suggestion was to use xdmp:spawn to call the XQuery in the background which would save the results to a file. My actual HTTP request could then return immediately.
For the spawn piece, I think the idea is that I run my query with different options in order to get all results instead of one page. Then I would loop through the data and create a string variable to call xdmp:save with.
Some questions: is this a good idea? Is there a better way? If I loop through the result set and it happens to be very large (gigabytes), it could cause memory issues.
Is there no way to directly stream the results to a file in XQuery?
Note: Another idea I had was to intercept the request at the proxy (node) layer and then do an xdmp:estimate to get the record count and then loop through querying each page and flushing it to disk. In this case I would need to find some way to return my request immediately yet process in the background in node which seems to have some ideas here: http://www.pubnub.com/blog/node-background-jobs-async-processing-for-async-language/
One possible strategy would be to use a self-spawning task that, on each iteration, gets the next page of the results for a query.
Instead of saving the results directly to a file, however, you might want to consider using xdmp:http-post() to send each page to a server:
http://docs.marklogic.com/xdmp:http-post?q=xdmp:http-post&v=8.0&api=true
In particular, the server could be a Node.js server that appends each page as it arrives to a file or any other data sink.
That way, Node.js could handle the long-running asynchronous IO with minimal load on the database server.
When a self-spawned task hits the end of the query, it can again use an HTTP request to notify Node.js to close the file and report that the export is finished.
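A rough sketch of the Node.js side, assuming the self-spawned XQuery task POSTs each page to /export-page and a final notification to /export-done (the endpoint names and file path are made up, and express.text() requires Express 4.17+):

const fs = require('fs');
const express = require('express');
const app = express();

// append mode, so each incoming page is added to the end of the file
const out = fs.createWriteStream('/tmp/export.json', { flags: 'a' });

app.post('/export-page', express.text({ type: '*/*' }), function(req, res) {
    // write each page as it arrives; the stream queues writes in order
    out.write(req.body);
    res.sendStatus(200);
});

app.post('/export-done', function(req, res) {
    // close the file and report that the export is finished
    out.end(function() { console.log('export finished'); });
    res.sendStatus(200);
});

app.listen(3000);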
Hoping that helps,
In my index.js, I have an exports function that is supposed to send data back to the client via AJAX on pressing a submit button. However, when the user presses submit, the data seems to get sent back before it gets modified. Pressing submit one more time sends the data that was previously modified, as if clicking the submit button only sends the 'previously' set data. This is my code:
var tabledata = getRecordFromDatabase(key);
if(tabledata.length === 0)
    tabledata = 'There is no matched record in the database';
res.contentType('text/html');
res.send({'matched':tabledata});
So to illustrate the error: I click submit after filling out a form and receive back the message "There is no matched record in the database". I hit submit a second time without changing anything in the form I just filled. This time record data is actually sent to me. Why could this be?
If whatever you're doing in getRecordFromDatabase is asynchronous and non-blocking, then Node.js is behaving as it should. Node.js is non-blocking: it doesn't stop and wait for processes to complete (unless those processes are intentionally written to block, which is usually avoided in Node.js). This is beneficial because it keeps the server free to accept new requests and process many requests at once.
If your database call is asynchronous, you're not waiting for it to return before you res.send(). That's why your first submit returns back empty. Most likely, by the time you hit submit a second time, your DB call has finally returned, and that's why you get a result.
It's hard to give you a code-based answer to your problem, because you abstracted away what is happening in your DB call method. But typically, an asynchronous call would go something like:
getRecordFromDatabase(key, function(err, data){
    if(data.length === 0)
        data = 'There is no matched record in the database';
    res.contentType('text/html');
    res.send({'matched':data});
});
This way, you are passing a function to execute as a callback to your asynchronous method - when the async call completes, it executes the callback, which then executes the res.send() with the appropriate data.
In my app (node / express / redis), I use some code to update several items in DB at the same time:
app.put('/myaction', function(req, res){
    // delete stuff
    db.del("key1");
    db.srem("set1", "test");
    // Add stuff
    db.sadd("set2", "test2");
    db.sadd("set3", "test3");
    db.hmset("hash1", "k11", "v11", "k21", "v21");
    db.hmset("hash2", "k12", "v12", "k22", "v22");
    // ...
    // Send response back
    res.writeHead(200, {'content-type': 'application/json'});
    res.write(JSON.stringify({ "status" : "ok" }));
    res.end();
});
Can I be sure ALL those actions will be performed before the method returns? My concern is the asynchronous processing: as I do not use callbacks in the db actions, will this be alright?
While all of the commands are sent and responses parsed asynchronously, it's useful to note that the callbacks are all invoked in order. So you can use the callback of the last Redis command to send the response to the client, and then you'll know that all of the Redis commands have been executed before responding.
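As a minimal sketch of that approach, using the callback-style node_redis client and the route from the question, the response is sent from the last command's callback:

app.put('/myaction', function(req, res){
    db.del("key1");
    db.srem("set1", "test");
    db.sadd("set2", "test2");
    db.sadd("set3", "test3");
    db.hmset("hash1", "k11", "v11", "k21", "v21");
    // callbacks fire in command order, so by the time this one runs,
    // every earlier command has already been executed
    db.hmset("hash2", "k12", "v12", "k22", "v22", function(err, reply){
        res.writeHead(200, {'content-type': 'application/json'});
        res.end(JSON.stringify({ "status": err ? "error" : "ok" }));
    });
});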
Use the MULTI/EXEC command to create a queue of your commands and execute them in a row. Then use a callback to send a coherent response back (success/failure).
Note that you should use Redis' AOF persistence to avoid the case where, after a crash, the DB state is not coherent with your logic because only part of the commands in the queue were executed: i.e., MULTI/EXEC is not transactional upon execution. This is a useful reference.
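A short sketch of the MULTI/EXEC variant, again assuming the node_redis client used above:

db.multi()
    .del("key1")
    .srem("set1", "test")
    .sadd("set2", "test2")
    .sadd("set3", "test3")
    .hmset("hash1", "k11", "v11", "k21", "v21")
    .hmset("hash2", "k12", "v12", "k22", "v22")
    .exec(function(err, replies){
        // one callback reports on the whole queued block
        res.writeHead(200, {'content-type': 'application/json'});
        res.end(JSON.stringify({ "status": err ? "error" : "ok" }));
    });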
I haven't worked with Redis, but if this works (i.e., it doesn't call an undefined function) and it is asynchronous, then you can use it. However, if an update fails, you have no way to handle the error this way.
No, you can't be sure all those actions complete successfully, because your Redis server might crash. To speed things up, you can group all your update commands into one round trip with pipelining (does your Redis driver support that?), then get the success or failure of the whole operation via a callback and proceed.
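For the pipelining route, a rough sketch assuming a node_redis version that exposes batch() (which pipelines the commands without MULTI's atomicity):

db.batch([
    ["del", "key1"],
    ["srem", "set1", "test"],
    ["sadd", "set2", "test2"],
    ["sadd", "set3", "test3"]
]).exec(function(err, replies){
    // a single callback reports the outcome of the whole pipeline
    if (err) { /* handle failure here */ }
});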