I am using MongoDB (the Mongoose module) with Node.js,
and I am processing around 10,000,000 documents in batches of 1,000 each using the limit and skip functionality.
The processing runs fine, but after some time it gives me this error:
{ MongoError: Cursor not found, cursor id: 62783806111
at Function.MongoError.create (/home/admin/Pictures/duplicayProj1/node_modules/mongoose/node_modules/mongodb-core/lib/error.js:31:11)
at /home/admin/Pictures/duplicayProj1/node_modules/mongoose/node_modules/mongodb-core/lib/connection/pool.js:483:72
at authenticateStragglers (/home/admin/Pictures/duplicayProj1/node_modules/mongoose/node_modules/mongodb-core/lib/connection/pool.js:429:16)
at Connection.messageHandler (/home/admin/Pictures/duplicayProj1/node_modules/mongoose/node_modules/mongodb-core/lib/connection/pool.js:463:5)
at Socket.<anonymous> (/home/admin/Pictures/duplicayProj1/node_modules/mongoose/node_modules/mongodb-core/lib/connection/connection.js:339:20)
at emitOne (events.js:115:13)
at Socket.emit (events.js:210:7)
at addChunk (_stream_readable.js:252:12)
at readableAddChunk (_stream_readable.js:239:11)
at Socket.Readable.push (_stream_readable.js:197:10)
at TCP.onread (net.js:589:20)
name: 'MongoError',
message: 'Cursor not found, cursor id: 62783806111',
ok: 0,
errmsg: 'Cursor not found, cursor id: 62783806111',
code: 43 }
Can anyone tell me what the actual problem is? I am not using anything cursor-related in my code.
Thanks in advance.
This normally happens because your cursor times out if it is idle for too long. Check out the noCursorTimeout option. Just make sure you close the cursor when you are finished with it.
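A minimal sketch of that approach with the native Node.js driver (the collection handle here is a placeholder; Mongoose exposes the same flag on the cursors it creates):

// Open a cursor with the server-side idle timeout disabled,
// and iterate in one pass instead of re-querying with limit/skip.
const cursor = collection.find({}).addCursorFlag('noCursorTimeout', true);

while (await cursor.hasNext()) {
  const doc = await cursor.next();
  // ...process doc here...
}

// No-timeout cursors are never reaped by the server, so close them explicitly.
await cursor.close();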
I've been working on inserting my JSON data into Google Cloud PostgreSQL for a few days now, and I am running into an issue where not even close to all of my data is inserted. Here is my code:
// Prior to this I connect to the cloud using pg and create the InterestClubs and FilterClubs tables.
// alldata is a JSON file: an array of about 3000 objects.
let count = 0; // incremented on every iteration
for (const club of alldata) {
  client.query("INSERT INTO InterestClubs (name, clublink, university, description, logo, interests) VALUES ('"+club.title+"', '"+club.clubLink+"', '"+club.university+"', '"+club.descriptionSnippet+"', '"+club.logoLink+"', '"+club.interests+"')");
  client.query("INSERT INTO FilterClubs (name, clublink, university, description, logo, polfilters, relfilters, culfilters) VALUES ('"+club.title+"', '"+club.clubLink+"', '"+club.university+"', '"+club.descriptionSnippet+"', '"+club.logoLink+"', '"+club.politicalFilters+"', '"+club.religiousFilters+"', '"+club.culturalFilters+"')");
  count++;
}
console.log(count); // outputs 3000 (or however many clubs there are in the JSON file)
I seem to be successfully looping through the data 3000 times (leading me to believe that I have inserted 3000 objects), but when I run a query such as SELECT * FROM InterestClubs (using either node/express or the Cloud Shell), I only get 19 rows back. I thought it might have something to do with waiting for the client.query calls to connect and insert, but when I used async/await (with an await in front of each query), nothing changed. Also, I am getting this error every time I try to insert (after the count is printed):
3611
events.js:174
throw er; // Unhandled 'error' event
^
error: syntax error at or near "s"
at Connection.parseE (C:\Users\User\Documents\Code\Personal_Projects\clubhaus\node_modules\pg\lib\connection.js:539:11)
at Connection.parseMessage (C:\Users\User\Documents\Code\Personal_Projects\clubhaus\node_modules\pg\lib\connection.js:366:17)
at Socket.<anonymous> (C:\Users\User\Documents\Code\Personal_Projects\clubhaus\node_modules\pg\lib\connection.js:105:22)
at Socket.emit (events.js:198:13)
at addChunk (_stream_readable.js:288:12)
at readableAddChunk (_stream_readable.js:269:11)
at Socket.Readable.push (_stream_readable.js:224:10)
at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17)
Emitted 'error' event at:
at Query.handleError (C:\Users\User\Documents\Code\Personal_Projects\clubhaus\node_modules\pg\lib\query.js:108:8)
at Connection.emit (events.js:198:13)
at Socket.<anonymous> (C:\Users\User\Documents\Code\Personal_Projects\clubhaus\node_modules\pg\lib\connection.js:109:12)
at Socket.emit (events.js:198:13)
[... lines matching original stack trace ...]
at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17)
This makes me think that something about the 19th object could be triggering a syntax error, causing the inserts to stop while the loop keeps going, but I'm not sure. Any help would be appreciated!
Check if that 20th object has unescaped quote characters in one of the properties you are using. If you are using the npm package pg, you can automatically escape those characters by passing the values as query parameters to your insert statements, like this:
client.query("INSERT INTO InterestClubs (name, clublink, university, description, logo, interests) VALUES ($1, $2, $3, $4, $5, $6)", [club.title, club.clubLink, club.university, club.descriptionSnippet, club.logoLink, club.interests])
I'm deliberately triggering an error in a stored procedure under certain conditions, which I want to catch in my Node.js API that uses the Tedious package.
Code Snippet from API:
let request = new Request(sql, (err) => {
  if (err) {
    sqlerr = err;
    console.log(typeof(err));
    console.log("**RQ-ERROR**", err);
  }
});
In the callback of the "Request" object above there is an "err" parameter. typeof(err) returns "object"; however, when I dump it to the console it looks like this:
**RQ-ERROR** { RequestError: Duplicate entry for specified period
at RequestError (C:\inetpub\wwwroot\PersonnelApps\kudosapi\node_modules\tedious\lib\errors.js:32:12)
at Parser.tokenStreamParser.on.token (C:\inetpub\wwwroot\PersonnelApps\kudosapi\node_modules\tedious\lib\connection.js:723:34)
at emitOne (events.js:96:13)
at Parser.emit (events.js:188:7)
at Parser.parser.on.token (C:\inetpub\wwwroot\PersonnelApps\kudosapi\node_modules\tedious\lib\token\token-stream-parser.js:27:14)
at emitOne (events.js:96:13)
at Parser.emit (events.js:188:7)
at addChunk (C:\inetpub\wwwroot\PersonnelApps\kudosapi\node_modules\readable-stream\lib\_stream_readable.js:297:12)
at readableAddChunk (C:\inetpub\wwwroot\PersonnelApps\kudosapi\node_modules\readable-stream\lib\_stream_readable.js:279:11)
at Parser.Readable.push (C:\inetpub\wwwroot\PersonnelApps\kudosapi\node_modules\readable-stream\lib\_stream_readable.js:240:10)
message: 'Duplicate entry for specified period',
code: 'EREQUEST',
number: 50000,
state: 1,
class: 16,
serverName: 'PERSODG2LNN52\\SQLEXPRESS',
procName: 'CreateStatusReport',
lineNumber: 44 }
This almost looks like a JavaScript object, but, as you can see, the "RequestError" data isn't quoted, nor is there a comma after the text "240:10)" just before the "message" member. I'm not sure if this is a bug in TDS or if I'm just missing something, but I cannot access any of the members as it is. I'd have to convert it to a string and parse it, which would work but isn't very elegant.
Suggestions?
as you can see, the "RequestError" data isn't quoted nor is there a comma after the text "240:10)"
These are artifacts of the console logging out the error message. You can try it out for yourself with something like the following:
$ node
> console.log(new Error('this is an error object!'));
Error: this is an error object!
at repl:1:13
at Script.runInThisContext (vm.js:119:20)
at REPLServer.defaultEval (repl.js:332:29)
at bound (domain.js:395:14)
at REPLServer.runBound [as eval] (domain.js:408:12)
at REPLServer.onLine (repl.js:639:10)
at REPLServer.emit (events.js:194:15)
at REPLServer.EventEmitter.emit (domain.js:441:20)
at REPLServer.Interface._onLine (readline.js:290:10)
at REPLServer.Interface._line (readline.js:638:8)
I'm not exactly sure what the desired outcome of this question is, but try inspecting the err.message property rather than using the typeof operator.
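For example, the fields printed in the dump are ordinary properties of the error object and can be read directly in the Request callback (property names taken from the output above):

let request = new Request(sql, (err) => {
  if (err) {
    // Plain properties on the Tedious RequestError:
    console.log(err.message);  // 'Duplicate entry for specified period'
    console.log(err.code);     // 'EREQUEST'
    console.log(err.number);   // 50000, the error number raised by the procedure
  }
});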
I have several sources of lists of images (Flickr, images stored on S3, Imgur, etc.),
and I want to get the dimensions of these images.
I use Node and https://github.com/nodeca/probe-image-size to go over each URL, get the width of each image, and count how many images there are at each width, via the following code:
probes = [];
_.forEach(image_urls, url => {
  probes.push(probe(url));
});
results = await Promise.all(probes);
_.forEach(results, result_of_image => {
  width = parseInt(result_of_image.width / 10) * 10;
  if (!widthes[width]) {
    widthes[width] = 1;
  } else {
    widthes[width]++;
  }
});
Even though all the URLs are accessible, I sometimes get getaddrinfo ENOTFOUND with this stack:
at ClientRequest.req.once.err (/image_script/node_modules/got/index.js:73:21)
at Object.onceWrapper (events.js:293:19)
at emitOne (events.js:101:20)
at ClientRequest.emit (events.js:191:7)
at TLSSocket.socketErrorListener (_http_client.js:358:9)
at emitOne (events.js:96:13)
at TLSSocket.emit (events.js:191:7)
at connectErrorNT (net.js:1031:8)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickDomainCallback (internal/process/next_tick.js:128:9)
I suspect that because the URL list is very large (in the thousands), Node grabs all the system's resources and things stop working properly (this is a guess).
Is there a better way to do the above? Or some way to give Node a connection pool?
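One common way to avoid this (a sketch, not a confirmed fix) is to cap how many probes are in flight at once instead of firing thousands simultaneously, for example with the p-limit package:

const pLimit = require('p-limit');
const probe = require('probe-image-size');

// Allow at most 20 concurrent probes; the rest queue up.
const limit = pLimit(20);

const results = await Promise.all(
  image_urls.map(url => limit(() => probe(url)))
);

Any small concurrency limiter (or batching the URL list yourself) achieves the same effect.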
I have a function in Node.js using the Mongoose driver, as below:
Pseudocode:
function someFn(someParams) {
  // Step 1: a couple of very fast mongo queries (in milliseconds)
  // Step 2: HUGE CPU processing - think millions of data grouped, mapped, etc. (takes about a minute)
  // Step 3: another mongo query which inserts the results from Step 2 into a collection
}
At step 3, I get the following error:
MongoError: connection 4 to cluster closed
at Function.MongoError.create (/home/some-user/my-repo/node_modules/mongodb-core/lib/error.js:29:11)
at TLSSocket.<anonymous> (/home/some-user/my-repo/node_modules/mongodb-core/lib/connection/connection.js:202:22)
at Object.onceWrapper (events.js:293:19)
at emitOne (events.js:101:20)
at TLSSocket.emit (events.js:191:7)
at _handle.close (net.js:513:12)
at Socket.done (_tls_wrap.js:332:7)
at Object.onceWrapper (events.js:293:19)
at emitOne (events.js:101:20)
at Socket.emit (events.js:191:7)
at TCP._handle.close [as _onclose] (net.js:513:12)
My MongoDB connection params are as follows:
mongoose.connect(connStr, {
  server: {
    socketOptions: {
      keepAlive: 300000,
      connectTimeoutMS: 300000,
      socketTimeoutMS: 300000,
      auto_reconnect: true
    }
  }
});
I don't understand why I'm getting this error at Step 3. Can someone help me out with this, please?
Figured out the issue after hours of debugging. My Step 3 Mongoose query had too many documents (on the order of millions, from Step 2). The error from Mongoose gives no reason why the connection is closing; a message like "Too many documents" or "Query too large" would have gone a long way toward saving a lot of time.
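If the insert itself is too large, one workaround (an assumption on my part, with a hypothetical Result model standing in for the real collection) is to write the Step 2 output in fixed-size chunks:

// Insert `docs` (the millions of results from Step 2) 1000 at a time
// instead of in a single huge call.
const CHUNK_SIZE = 1000;
for (let i = 0; i < docs.length; i += CHUNK_SIZE) {
  await Result.insertMany(docs.slice(i, i + CHUNK_SIZE));
}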
I'm trying to use https://github.com/rs/pushd as a push notification server.
Whenever I try to add a new subscriber, the server crashes with this stack trace:
/home/ec2-user/push_server/pushd/node_modules/redis/index.js:602
throw err;
^
TypeError: Cannot set property 'id' of null
at /home/ec2-user/push_server/pushd/lib/api.coffee:20:21
at /home/ec2-user/push_server/pushd/lib/subscriber.coffee:133:21
at try_callback (/home/ec2-user/push_server/pushd/node_modules/redis/index.js:592:9)
at RedisClient.return_reply (/home/ec2-user/push_server/pushd/node_modules/redis/index.js:685:13)
at HiredisReplyParser.<anonymous> (/home/ec2-user/push_server/pushd/node_modules/redis/index.js:321:14)
at HiredisReplyParser.emit (events.js:95:17)
at HiredisReplyParser.execute (/home/ec2-user/push_server/pushd/node_modules/redis/lib/parser/hiredis.js:43:18)
at RedisClient.on_data (/home/ec2-user/push_server/pushd/node_modules/redis/index.js:547:27)
at Socket.<anonymous> (/home/ec2-user/push_server/pushd/node_modules/redis/index.js:102:14)
at Socket.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:765:14)
at Socket.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:427:10)
at emitReadable (_stream_readable.js:423:5)
at readableAddChunk (_stream_readable.js:166:9)
at Socket.Readable.push (_stream_readable.js:128:10)
at TCP.onread (net.js:529:21)
This is the command that should add a new subscriber (example from the documentation), and causes the crash:
curl -d proto=apns -d token=FE66489F304DC75B8D6E8200DFF8A456E8DAEACEC428B427E9518741C92C6660 -d lang=fr -d badge=0 -d category=show -d contentAvailable=true http://localhost/subscribers
The crash happens before the subscriber is added to the database.
The server is a RHEL micro instance on AWS.
Versions:
Redis Server 2.8.19
Node.js 0.10.36
CoffeeScript 1.9.0
Any help will be appreciated.
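The problem is that pushd references instance properties without the CoffeeScript @ (i.e. this.) prefix in two places; the following patch fixes it: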
/lib/event.coffee
- throw new Error("Missing redis connection") if not redis?
+ throw new Error("Missing redis connection") if not @redis?
/lib/subscriber.coffee
- if info?.updated? # subscriber exists
+ if @info?.updated? # subscriber exists
# transform numeric value to number type
- for own key, value of info
+ for own key, value of @info