I have a web app that is published via ExpressJS on NodeJS, of course. It uses CouchDB as its data source. I implemented long polling to keep the app in sync at all times between all users. To accomplish this I use the following logic:
User logs into app and an initial long poll request is made to Node via an Express route.
Node in turn makes a long poll request to CouchDB.
When Couch is updated it responds to the request from Node.
Lastly Node responds to the browser.
Simple. What is happening, though, is that when I refresh the browser it freezes up on every fifth refresh. Huh? Very weird. But I can reproduce it over and over, even in my test environment. Every fifth refresh without fail freezes up Node and causes the app to freeze. Restarting Node fixes the issue.
After much hair pulling I THOUGHT I solved it by changing this:
app.get('/_changes/:since*', security, routes.changes);
To this:
app.get('/_changes/:since*', security, function () { routes.changes });
However, after further testing this is just failing to run routes.changes. So no actual solution. Any ideas why long polling CouchDB from Node would do such a strange thing? On the fifth refresh I can have a breakpoint in Node on the first line of my routing code and it never gets hit. However, in the browser I can break on the long-polling request to Node and it seems to go out. It's like Node is not accepting the connection for some reason...
Should I be approaching long polling from Node to CouchDB in a different way? I'm using feed=longpoll; should I maybe be doing feed=continuous? If I turn changes_timeout in CouchDB down to 5 seconds it doesn't get rid of the issue, but it does make it easier to cope with, since the freezes only last 5 seconds tops. So this would seem to indicate that Node can't handle having several outstanding requests to Couch. Maybe I will try a continuous feed and see what happens.
Browser:
self.getChanges = function (since) {
    $.ajax({
        url: "/_changes/" + since,
        type: "GET", dataType: "json", cache: false,
        success: function (data) {
            try {
                self.processChanges(data.results);
                self.lastSeq(data.last_seq);
                self.getChanges(self.lastSeq());
                self.longPollErrorCount(0);
            } catch (e) {
                self.longPollErrorCount(self.longPollErrorCount() + 1);
                if (self.longPollErrorCount() < 10) {
                    setTimeout(function () {
                        self.getChanges(self.lastSeq());
                    }, 3000);
                } else {
                    alert("You have lost contact with the server. Please refresh your browser.");
                }
            }
        },
        error: function (data) {
            self.longPollErrorCount(self.longPollErrorCount() + 1);
            if (self.longPollErrorCount() < 10) {
                setTimeout(function () {
                    self.getChanges(self.lastSeq());
                }, 3000);
            } else {
                alert("You have lost contact with the server. Please refresh your browser.");
            }
        }
    });
}
Node:
Routing:
exports.changes = function (req, res) {
    var args = {};
    args.since = req.params.since;
    db.changes(args, function (err, body, headers) {
        if (err) {
            console.log("Error retrieving changes feed: " + err);
            res.send(err.status_code);
        } else {
            //send my response... code removed here
        }
    });
};
Database long poll calls:
self.changes = function (args, callback) {
    console.log("changes");
    if (args.since == 0) {
        // No checkpoint yet: ask Couch for the latest sequence number first.
        request(self.url + '/work_orders/_changes?descending=true&limit=1', function (err, res, headers) {
            if (err) { console.log("Error fetching last_seq: " + err); return; }
            var body = JSON.parse(res.body);
            var since = body.last_seq;
            console.log("Since change: " + since);
            self.longPoll(since, callback);
        });
    } else {
        self.longPoll(args.since, callback);
    }
};
self.longPoll = function (since, callback) {
    console.log("about to request with: " + since);
    request(self.url + '/work_orders/_changes?feed=continuous&include_docs=true&since=' + since,
        function (err, res, headers) {
            console.log("finished request.");
            if (err) { console.log("Error starting long poll: " + err.reason); return; } //if err send it back
            callback(err, res.body);
        });
};
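One hypothesis worth checking (an assumption on my part, not a confirmed diagnosis): older Node versions cap an HTTP agent's maxSockets at 5 by default, which would match an every-fifth-refresh stall, since each outstanding long poll holds a socket. A minimal sketch of raising the cap for the request module, with 100 as an arbitrary limit:
var http = require('http');
// Older Node defaults an agent's maxSockets to 5; long polls that hold
// sockets open can exhaust the pool and queue every later request.
http.globalAgent.maxSockets = 100;
// The request module keeps its own pool; its size can also be raised
// per call via the documented pool option:
request({
    url: self.url + '/work_orders/_changes?feed=longpoll&since=' + since,
    pool: { maxSockets: 100 }
}, function (err, res, headers) {
    // handle the response as above
});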
Socket.io will automatically fall back to long polling, and doesn't have a problem like the one you are having. So just use that. Also for CouchDB changes use this https://github.com/iriscouch/follow or maybe this one https://npmjs.org/package/changes like the other guy suggested.
It's very bad practice to reinvent things when we have popular modules that already do what you need. There are currently more than 52,000 Node modules on https://npmjs.org/. People make a big deal about copying and pasting code. In my mind reinventing basic stuff is even worse than that.
I know with so many modules it's hard to know about all of them, so I'm not saying you can never solve the same problem as someone else. But take a look at npmjs.org first, and also sites like http://node-modules.com/ which may give better search results.
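For the CouchDB side, a minimal sketch of what follow usage might look like (the database URL is a placeholder; the options shown are from follow's documented API):
var follow = require('follow');
// Stream the changes feed; follow reconnects and resumes on its own.
follow({
    db: 'http://localhost:5984/work_orders', // placeholder URL
    include_docs: true,
    since: 'now'
}, function (error, change) {
    if (error) return console.log('changes feed error: ' + error);
    console.log('change ' + change.seq + ' on doc ' + change.id);
});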
Related
I'm running a CouchDB server with Docker and I'm trying to POST data through a Node app.
But I'm frequently prompted with an ESOCKETTIMEDOUT error (not always).
Here's the way I'm opening the connection to the DB:
var remoteDB = new PouchDB('http://localhost:5984/dsndatabase', {
    ajax: {
        cache: true,
        timeout: 40000 // I tried a lot of durations
    }
});
And here's the code used to send the data :
exports.sendDatas = function (datas, db, time) {
    console.log('> Exporting to CouchDB');
    db.bulkDocs(datas).then(function () {
        return db.allDocs({ include_docs: true });
    }).then(function () {
        var elapsedTime = new Date().getTime() - time;
        console.log('> Export finished in ', elapsedTime, ' ms');
    }).catch(function (err) {
        console.log(err);
    });
};
The error doesn't show up every time but I'm unable to find a pattern.
And, timeout or not, all my data is successfully loaded into my CouchDB!
I've seen a lot of posts on this issue but none of them really answers my question...
Any ideas?
Okay, this seems to be the real issue:
https://github.com/pouchdb/pouchdb/issues/3550#issuecomment-75100690
I think you can fix it by setting a reasonably longer timeout value, or by adding retry logic with exponential backoff.
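A minimal sketch of the backoff idea (the attempt cap and base delay are arbitrary choices, not anything prescribed by PouchDB):
// Retry bulkDocs with exponential backoff: 1 s, 2 s, 4 s, ... up to 5 attempts.
function bulkDocsWithRetry(db, docs, attempt) {
    attempt = attempt || 0;
    return db.bulkDocs(docs).catch(function (err) {
        if (attempt >= 5) { throw err; } // give up and surface the error
        var delay = 1000 * Math.pow(2, attempt);
        return new Promise(function (resolve) {
            setTimeout(resolve, delay);
        }).then(function () {
            return bulkDocsWithRetry(db, docs, attempt + 1);
        });
    });
}
Calling bulkDocsWithRetry(db, datas) in place of db.bulkDocs(datas) above would then absorb the occasional timeout.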
Let me know if that works for you.
So I have been building a CLI/GUI app using Electron for work. I have a progress bar that needs an interval to be run every so often to refresh it. Recently we wanted to run this in Jenkins, so having progress bars refresh every 80 ms would be super annoying. I added some code to clear the interval if a certain environment variable was set, and then for some reason the app started to hang after sending HTTP requests. The server would never receive the requests and the app would just sit there. After a lot of debugging I have found that putting a setInterval(() => { }, 80); (one that does anything or nothing, for any amount of time) anywhere in the code fixes the problem. Has anyone seen this before? I feel like I'm going crazy!
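Roughly what the interval-clearing change looked like (the environment variable and refresh function names are hypothetical stand-ins):
// Refresh the progress bar every 80 ms, but skip the redraws entirely
// when running under CI (names are illustrative, not from the real app).
const timer = setInterval(() => refreshProgressBar(), 80);
if (process.env.RUNNING_IN_JENKINS) {
    clearInterval(timer);
}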
Edit:
Setting a timeout on the request will make the request fail after the timeout.
Edit:
So I can't show you the exact code, but here is the gist of one of the request calls.
return new Promise((resolve, reject) => {
    request.put({
        url: this.buildUrl(armada, '/some-path'),
        body: vars,
        json: true
    }, (err, resp, body) => {
        logger.comment('err "%s"', JSON.stringify(err));
        logger.comment('resp "%s"', JSON.stringify(resp));
        logger.comment('body "%s"', JSON.stringify(body));
        let e = this.handleError(err, resp, body, 'Error getting stuff');
        if (e) { return reject(e); }
        logger.comment('got back body "%s"', body);
        resolve(resp);
    });
});
And then if I have an interval somewhere it doesn't hang, and if I don't, it does.
I can paste this line anywhere in my code and everything starts working:
setInterval(() => { }, 80);
Now, it's not one specific request: the app makes a lot of different requests, and any of them can hang. And they don't always hang; about 1 in 10 times everything will work just fine for an individual request.
I am having difficulty using Fibers/Meteor.bindEnvironment(). I tried to have code updating and inserting into a collection if the collection starts empty. This is all supposed to be running server-side on startup.
function insertRecords() {
    console.log("inserting...");
    var client = Knox.createClient({
        key: apikey,
        secret: secret,
        bucket: 'profile-testing'
    });
    console.log("created client");
    client.list({ prefix: 'projects' }, function(err, data) {
        if (err) {
            console.log("Error in insertRecords");
        }
        for (var i = 0; i < data.Contents.length; i++) {
            console.log(data.Contents[i].Key);
            if (data.Contents[i].Key.split('/').pop() == "") {
                Projects.insert({ name: data.Contents[i].Key, contents: [] });
            } else if (data.Contents[i].Key.split('.').pop() == "jpg") {
                Projects.update({ name: data.Contents[i].Key.substr(0,
                        data.Contents[i].Key.lastIndexOf('.')) },
                    { $push: { contents: data.Contents[i].Key } });
            } else {
                console.log(data.Contents[i].Key.split('.').pop());
            }
        }
    });
}
if (Meteor.isServer) {
    Meteor.startup(function () {
        if (Projects.find().count() === 0) {
            boundInsert = Meteor.bindEnvironment(insertRecords, function(err) {
                if (err) {
                    console.log("error binding?");
                    console.log(err);
                }
            });
            boundInsert();
        }
    });
}
The first time I wrote this, I got errors saying that I needed to wrap my callbacks in a Fiber() block; then in a discussion on IRC someone recommended trying Meteor.bindEnvironment() instead, since that should put it in a Fiber. That didn't work (the only output I saw was inserting..., meaning that bindEnvironment() didn't throw an error, but it also didn't run any of the code inside the block). Then I got to this. My error now is: Error: Meteor code must always run within a Fiber. Try wrapping callbacks that you pass to non-Meteor libraries with Meteor.bindEnvironment.
I am new to Node and don't completely understand the concept of Fibers. My understanding is that they're analogous to threads in C/C++/every language with threading, but I don't understand what the implications are for my server-side code, or why my code throws an error when trying to insert into a collection. Can anyone explain this to me?
Thank you.
You're using bindEnvironment slightly incorrectly: the place where it's being used is already in a fiber, while the callback that comes off the Knox client isn't in a fiber anymore.
There are two use cases of bindEnvironment (that I can think of; there could be more!):
You have a global variable that has to be altered, but you don't want it to affect other users' sessions
You are managing a callback using a third-party API/npm module (which looks to be the case here)
Meteor.bindEnvironment creates a new Fiber and copies the current Fiber's variables and environment to the new Fiber. The point where you need this is when you use your npm module's method callback.
Luckily there is an alternative, Meteor.wrapAsync, that takes care of waiting for the callback and binds it in a fiber for you.
So you could do this:
Your startup function already runs in a fiber and has no callback, so you don't need bindEnvironment here.
Meteor.startup(function () {
    if (Projects.find().count() === 0) {
        insertRecords();
    }
});
And here is your insertRecords function (using wrapAsync), so you don't need a callback:
function insertRecords() {
    console.log("inserting...");
    var client = Knox.createClient({
        key: apikey,
        secret: secret,
        bucket: 'profile-testing'
    });
    // Wrap the async list method so it can be called synchronously in this fiber.
    client.listSync = Meteor.wrapAsync(client.list.bind(client));
    console.log("created client");
    try {
        var data = client.listSync({ prefix: 'projects' });
    }
    catch (e) {
        console.log(e);
    }
    if (!data) return;
    for (var i = 0; i < data.Contents.length; i++) {
        console.log(data.Contents[i].Key);
        if (data.Contents[i].Key.split('/').pop() == "") {
            Projects.insert({ name: data.Contents[i].Key, contents: [] });
        } else if (data.Contents[i].Key.split('.').pop() == "jpg") {
            Projects.update({ name: data.Contents[i].Key.substr(0,
                    data.Contents[i].Key.lastIndexOf('.')) },
                { $push: { contents: data.Contents[i].Key } });
        } else {
            console.log(data.Contents[i].Key.split('.').pop());
        }
    }
}
A couple of things to keep in mind. Fibers aren't like threads. There is only a single thread in NodeJS.
Fibers are more like events that can run at the same time but without blocking each other if there is a waiting-type scenario (e.g. downloading a file from the internet).
So you can have synchronous code and not block the other users' events. They take turns to run but still run in a single thread. So this is how Meteor has synchronous code on the server side that can wait for stuff, yet other users won't be blocked by it and can do stuff, because their code runs in a different fiber.
Chris Mather has a couple of good articles on this on http://eventedmind.com
What does Meteor.wrapAsync do?
Meteor.wrapAsync takes in the method you give it as the first parameter and runs it in the current fiber.
It also attaches a callback to it (it assumes the method takes a last param that is a callback, where the first param is an error and the second the result, such as function(err, result)).
The callback is bound with Meteor.bindEnvironment and blocks the current Fiber until the callback is fired. As soon as the callback fires it returns the result or throws the err.
So it's very handy for converting asynchronous code into synchronous code since you can use the result of the method on the next line instead of using a callback and nesting deeper functions. It also takes care of the bindEnvironment for you so you don't have to worry about losing your fiber's scope.
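As a generic sketch of the pattern (someAsyncThing is a hypothetical function taking a trailing (err, result) callback):
var someSyncThing = Meteor.wrapAsync(someAsyncThing);
try {
    // Blocks only this fiber, not the whole Node process.
    var result = someSyncThing('some-arg');
    console.log(result);
} catch (err) {
    // The error passed to the callback is re-thrown here.
    console.log(err);
}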
Update: Meteor._wrapAsync is now Meteor.wrapAsync and is documented.
Mongoose async code:
userSchema.static('alreadyExists', function (name) {
    var isPresent;
    this.count({ alias: name }, function (err, count) {
        isPresent = !!count;
    });
    console.log('Value of flag ' + isPresent);
    return isPresent;
});
I know isPresent is returned before the async this.count call fires its callback, so its value is undefined. But how do I wait for the callback to change the value of isPresent and then safely return?
What effect does (function(){ asynccalls(); asynccall(); })(); have on the async flow?
What happens if var foo = asynccall() or (function(){})()?
Will the above two make the return wait?
Can process.nextTick() help?
I know there are a lot of questions like these, but none of them explains the problem of returning before the async call completes.
There is no way to do that. You need to change the signature of your function to take a callback rather than returning a value.
Making IO async is one of the main motivations of Node.js, and waiting for an async call to complete defeats the purpose.
If you give me more context on what you are trying to achieve, I can give you pointers on how to implement it with callbacks.
Edit: You need something like the following:
userSchema.static('alreadyExists', function (name, callback) {
    this.count({ alias: name }, function (err, count) {
        callback(err, err ? null : !!count);
        console.log('Value of flag ' + !!count);
    });
});
Then, you can use it like:
User.alreadyExists('username', function (err, exists) {
    if (err) {
        // Handle error
        return;
    }
    if (exists) {
        // Pick another username.
    } else {
        // Continue with this username.
    }
});
Had the same problem. I wanted my mocha tests to run very fast (as they originally did), but at the same time to have an anti-DOS layer present and operational in my app. Running those tests as they originally worked was crazy fast, and the ddos module I'm using started responding with a Too Many Requests error, making the tests fail. I didn't want to disable it just for test purposes (I actually wanted automated tests verifying the Too Many Requests cases as well).
I had one place used by all the tests that prepared the client for HTTPS requests (filled in proper headers, authenticated, set cookies, etc.). It looked like this, more or less:
var agent = thiz.getAgent();
thiz.log('preReq for user ' + thiz.username);
thiz.log('preReq for ' + req.url + ' for agent ' + agent.mochaname);
if (thiz.headers) {
    Object.keys(thiz.headers).map(function(header) {
        thiz.log('preReq header ' + header);
        req.set(header, thiz.headers[header]);
    });
}
agent.attachCookies(req);
So I wanted to inject a sleep there that would kick in every fifth time this client was asked by a test to perform a request; that way the entire suite would run quickly, and every fifth request would wait long enough for the ddos module to stop flagging my requests with the Too Many Requests error.
I searched most of the entries here about Async and other libs or practices. All of them required going down the callback route, which meant I would have to rewrite a couple of hundred test cases.
Finally I gave up on any elegant solution and fell back to the one that worked for me: a for loop that checks the status of a non-existent file. It makes the operation take long enough that I could calibrate it to last around 6500 ms.
// Synchronous busy-wait: each statSync on a missing path throws and is
// swallowed; 200000 iterations were calibrated to last around 6500 ms.
for (var i = 0; i < 200000; ++i) {
    try {
        fs.statSync('/path' + i);
    } catch (err) {
    }
}
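A variant of the same trick that targets a duration directly, so the iteration count needs no per-machine calibration (a sketch along the same lines, not from the original suite):
var fs = require('fs');
// Spin synchronously until roughly `ms` milliseconds have passed, using
// the same failing-statSync trick to burn time; this blocks the whole process.
function busyWait(ms) {
    var end = Date.now() + ms;
    var i = 0;
    while (Date.now() < end) {
        try { fs.statSync('/nonexistent-path-' + (i++)); } catch (err) { /* expected */ }
    }
}
busyWait(6500);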
I have a node app with mongo server on an amazon ec2 instance. It works great, but I just added a new API call and every time I call it, the server freezes and I cannot access/ssh into it for several hours. While this is happening, my server goes down which makes the app that relies on it unusable and my users angry...
This code works perfectly on my localhost, but as soon as I run it on my server it freezes. My thoughts are that it may be crashing mongo? I have no idea why this would happen...
If anyone has any ideas what could be going wrong, please let me know.
Node is using Express. The send_error function performs a res.send({some error}). db.CommentModel returns mongoose.model('comment', Comment);
in app.js
app.get('/puzzle/comment/:id', auth.restrict, puzzle.getComments);
in the file which defines getComments
exports.getComments = function (req, res)
{
    var userID = _u.stripNonAlphaNum(req.params.id);
    var CommentModel = db.CommentModel;
    CommentModel.find({ user: userID }, function (e, comments) {
        if (e)
        {
            err.send_error(err.DB_ERROR, res);
        }
        else if (!comments)
        {
            err.send_error(err.DB_ERROR, res);
        }
        else if (comments.length == 0)
        {
            res.send([]);
        }
        else
        {
            var commentIDs = [];
            for (var i = 0; i < comments.length; i++)
            {
                commentIDs.push({ _id: comments[i].puzzle });
            }
            var TargetModel = pApp.findPuzzleModel(_u.stripNonAlphaNum(req.apiKey));
            TargetModel.find({ removed: false, $or: commentIDs }, function (e, puzzles) {
                if (e)
                {
                    err.send_error(err.DB_ERROR, res);
                }
                else if (!puzzles)
                {
                    err.send_error(err.DB_ERROR, res);
                }
                else
                {
                    res.send(puzzles);
                }
            });
        }
    });
};
It sounds like your query is causing something on your server (potentially Mongo) to consume a very large amount of CPU, as this is commonly what causes the issue you have seen with SSH access.
You should try reading over the logs of your Mongo instance and seeing if there are any long-running queries.
MongoDB provides an internal profiler for examining long-running commands. Try setting the profiling level to 1, running the command, and examining the logfile output.
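For instance, in the mongo shell (the 100 ms threshold is just an example value):
// Log operations slower than 100 ms (profiling level 1 = slow operations only).
db.setProfilingLevel(1, 100);
// After reproducing the slow API call, inspect the most recent entries:
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty();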
More details on the profiler are available at http://www.mongodb.org/display/DOCS/Database+Profiler