Using MongoDB 5.0's improved JSON Schema validation errors (https://www.mongodb.com/developer/article/mongodb-5-0-schema-validation/), we should be able to get a detailed validation error when, for example, inserting incorrect data into a document.
However, it does not seem to work with the latest MongoDB Node.js driver (4.2).
Here's the error I get:
MongoServerError: BSON field 'insert.documents.0' is the wrong type 'array', expected type 'object'
at MessageStream.messageHandler (/workspace/node_modules/mongodb/src/cmap/connection.ts:741:20)
at MessageStream.emit (events.js:400:28)
at MessageStream.emit (domain.js:470:12)
at processIncomingData (/workspace/node_modules/mongodb/src/cmap/message_stream.ts:167:12)
at MessageStream._write (/workspace/node_modules/mongodb/src/cmap/message_stream.ts:64:5)
at writeOrBuffer (internal/streams/writable.js:358:12)
at MessageStream.Writable.write (internal/streams/writable.js:303:10)
at Socket.ondata (internal/streams/readable.js:726:22)
at Socket.emit (events.js:400:28)
at Socket.emit (domain.js:470:12) {
ok: 0,
code: 14,
codeName: 'TypeMismatch'
}
Is there a way to get the detailed error message?
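For context, here is a minimal sketch of how I would expect to read the detailed report, assuming the insert is rejected as a document-validation failure (server error code 121); the connection string, database, and collection names are placeholders:

import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017'); // placeholder URI

async function run() {
  try {
    await client.connect();
    // Document that should violate the collection's $jsonSchema validator
    await client.db('mydb').collection('people').insertOne({ age: 'not-a-number' });
  } catch (err: any) {
    if (err.code === 121 && err.errInfo) {
      // MongoDB 5.0+ attaches the structured validation report here
      console.dir(err.errInfo.details, { depth: null });
    } else {
      throw err;
    }
  } finally {
    await client.close();
  }
}

run();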
Related
Error: MongoServerError: user is not allowed to do action [insert] on [nodeMongoCrud.users]
server is running on port 5000
MongoServerError: user is not allowed to do action [insert] on [nodeMongoCrud.users]
at Connection.onMessage (G:\web-development-projects-list\milestone-11\node-mongo-crud-server\node_modules\mongodb\lib\cmap\connection.js:207:30)
at MessageStream.<anonymous> (G:\web-development-projects-list\milestone-11\node-mongo-crud-server\node_modules\mongodb\lib\cmap\connection.js:60:60)
at MessageStream.emit (node:events:527:28)
at processIncomingData (G:\web-development-projects-list\milestone-11\node-mongo-crud-server\node_modules\mongodb\lib\cmap\message_stream.js:132:20)
at MessageStream._write (G:\web-development-projects-list\milestone-11\node-mongo-crud-server\node_modules\mongodb\lib\cmap\message_stream.js:33:9)
at writeOrBuffer (node:internal/streams/writable:389:12)
at _write (node:internal/streams/writable:330:10)
at MessageStream.Writable.write (node:internal/streams/writable:334:10)
at TLSSocket.ondata (node:internal/streams/readable:754:22)
at TLSSocket.emit (node:events:527:28) {
ok: 0,
code: 8000,
codeName: 'AtlasError',
[Symbol(errorLabels)]: Set(0) {}
}
I created a second user on MongoDB and copied the connection configuration from MongoDB, then replaced the username and password.
Then I tried to create an object in the database with the function below:
But I get the error highlighted in the title.
To create or modify a database user, go to cloud.mongodb.com, open the Database Access tab, and make sure your user has the readWriteAnyDatabase@admin role so you can read and write the data in your MongoDB database with…
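As a minimal sketch, assuming a standard Atlas SRV connection string (the cluster host and the <username>/<password> placeholders below are hypothetical; substitute your own values):

import { MongoClient } from 'mongodb';

// Placeholder Atlas URI; replace <username>, <password>, and the cluster host
const uri = 'mongodb+srv://<username>:<password>@cluster0.example.mongodb.net/?retryWrites=true&w=majority';
const client = new MongoClient(uri);

async function main() {
  await client.connect();
  // With the proper role on the Atlas user, this insert should now be allowed
  await client.db('nodeMongoCrud').collection('users').insertOne({ name: 'test' });
  await client.close();
}

main().catch(console.error);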
error: Error: Duplicate entry 'Story of Lawyer' for key 'book.book-title-unique'
at Packet.asError (E:\Work\nest_best_practice\node_modules\mysql2\lib\packets\packet.js:728:17)
at Query.execute (E:\Work\nest_best_practice\node_modules\mysql2\lib\commands\command.js:29:26)
at PoolConnection.handlePacket (E:\Work\nest_best_practice\node_modules\mysql2\lib\connection.js:456:32)
at PacketParser.onPacket (E:\Work\nest_best_practice\node_modules\mysql2\lib\connection.js:85:12)
at PacketParser.executeStart (E:\Work\nest_best_practice\node_modules\mysql2\lib\packet_parser.js:75:16)
at Socket.<anonymous> (E:\Work\nest_best_practice\node_modules\mysql2\lib\connection.js:92:25)
at Socket.emit (node:events:527:28)
at addChunk (node:internal/streams/readable:315:12)
at readableAddChunk (node:internal/streams/readable:289:9)
at Socket.Readable.push (node:internal/streams/readable:228:10) {
code: 'ER_DUP_ENTRY',
errno: 1062,
sqlState: '23000',
sqlMessage: "Duplicate entry 'Story of Lawyer' for key 'book.book-title-unique'",
sql: "INSERT INTO book(id, title, desc, createdAt, updatedAt, image, pdf) VALUES (DEFAULT, 'Story of Lawyer', 'No Description', DEFAULT, DEFAULT, 'software-developer-copy-6d00.jpg', 'NEST CLI-3339.pdf')"
}
Have you tried using an exception filter?
https://docs.nestjs.com/exception-filters
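A rough sketch of such a filter for mysql2's ER_DUP_ENTRY (the filter name and the response shape are my own assumptions, not something NestJS prescribes):

import { ArgumentsHost, Catch, ExceptionFilter } from '@nestjs/common';
import { Response } from 'express';

// Catch-all filter that translates MySQL duplicate-key errors into HTTP 409
@Catch()
export class DuplicateEntryFilter implements ExceptionFilter {
  catch(exception: any, host: ArgumentsHost) {
    const response = host.switchToHttp().getResponse<Response>();
    if (exception?.code === 'ER_DUP_ENTRY') {
      return response.status(409).json({
        statusCode: 409,
        message: exception.sqlMessage, // e.g. "Duplicate entry 'Story of Lawyer' ..."
      });
    }
    return response.status(500).json({ statusCode: 500, message: 'Internal server error' });
  }
}

Registering it globally with app.useGlobalFilters(new DuplicateEntryFilter()) in main.ts would turn the duplicate book title into a 409 Conflict response instead of an unhandled server error.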
I am trying to insert one more document, into a new collection, using this code:
await client.db(db).collection(newCollection).insertOne(newDocument);
However, this runs into error code 8000, which is not documented here: https://github.com/mongodb/mongo/blob/master/src/mongo/base/error_codes.yml .
MongoServerError: cannot create a new collection -- already using 500 collections of 500
at Connection.onMessage (/home/pi/projects/myProject/node_modules/mongodb/lib/cmap/connection.js:210:30)
at MessageStream.<anonymous> (/home/pi/projects/myProject/node_modules/mongodb/lib/cmap/connection.js:63:60)
at MessageStream.emit (node:events:527:28)
at processIncomingData (/home/pi/projects/myProject/node_modules/mongodb/lib/cmap/message_stream.js:132:20)
at MessageStream._write (/home/pi/projects/myProject/node_modules/mongodb/lib/cmap/message_stream.js:33:9)
at writeOrBuffer (node:internal/streams/writable:390:12)
at _write (node:internal/streams/writable:331:10)
at Writable.write (node:internal/streams/writable:335:10)
at TLSSocket.ondata (node:internal/streams/readable:766:22)
at TLSSocket.emit (node:events:527:28) {
ok: 0,
code: 8000,
codeName: 'AtlasError',
[Symbol(errorLabels)]: Set(0) {}
}
I did further research and found that the number of collections one can create is not limited to 500 by MongoDB itself ("In general, try to limit your replica set to 10,000 collections"), as stated here: https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-massive-number-collections/#:~:text=In%20general%2C%20try%20to%20limit%20your%20replica%20set%20to%2010%2C000%20collections.
I am using Atlas on a shared cluster. Is this an Atlas issue or an issue with my code?
Yes, it's an Atlas M0 free-tier restriction. See the "Database and Collections" limits here
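One way to fail more gracefully, sketched below using the question's own db, newCollection, and newDocument variables inside the same async context (the 500 threshold is taken from the error message above):

// List collection names only, to keep the command cheap
const existing = await client.db(db).listCollections({}, { nameOnly: true }).toArray();

if (existing.length >= 500) {
  // Reuse or consolidate collections instead of creating another one
  throw new Error(`Collection limit reached (${existing.length} collections in use)`);
}

await client.db(db).collection(newCollection).insertOne(newDocument);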
Any help would be appreciated...
Not sure why I get this error every now and then. I've searched off and on for a solution but keep coming up empty.
I run my Node app as a service on CentOS 7. It can run for a few days, or even a month, before hitting this error.
I use a pooled connection to Postgres and am inserting data into several different databases every few seconds.
events.js:377
throw er; // Unhandled 'error' event
^
error: terminating connection due to administrator command
at Parser.parseErrorMessage (/home/beenth12/public_html/ws/wxbox/node/node_modules/pg-protocol/dist/parser.js:287:98)
at Parser.handlePacket (/home/beenth12/public_html/ws/wxbox/node/node_modules/pg-protocol/dist/parser.js:126:29)
at Parser.parse (/home/beenth12/public_html/ws/wxbox/node/node_modules/pg-protocol/dist/parser.js:39:38)
at Socket.<anonymous> (/home/beenth12/public_html/ws/wxbox/node/node_modules/pg-protocol/dist/index.js:11:42)
at Socket.emit (events.js:400:28)
at addChunk (internal/streams/readable.js:290:12)
at readableAddChunk (internal/streams/readable.js:265:9)
at Socket.Readable.push (internal/streams/readable.js:204:10)
at TCP.onStreamRead (internal/stream_base_commons.js:188:23)
Emitted 'error' event on Client instance at:
at Client._handleErrorEvent (/home/beenth12/public_html/ws/wxbox/node/node_modules/pg/lib/client.js:319:10)
at Client._handleErrorMessage (/home/beenth12/public_html/ws/wxbox/node/node_modules/pg/lib/client.js:330:12)
at Connection.emit (events.js:400:28)
at /home/beenth12/public_html/ws/wxbox/node/node_modules/pg/lib/connection.js:114:12
at Parser.parse (/home/beenth12/public_html/ws/wxbox/node/node_modules/pg-protocol/dist/parser.js:40:17)
at Socket.<anonymous> (/home/beenth12/public_html/ws/wxbox/node/node_modules/pg-protocol/dist/index.js:11:42)
[... lines matching original stack trace ...]
at Socket.Readable.push (internal/streams/readable.js:204:10) {
length: 116,
severity: 'FATAL',
code: '57P01',
detail: undefined,
hint: undefined,
position: undefined,
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'postgres.c',
line: '3193',
routine: 'ProcessInterrupts'
}
The documentation will tell you that SQLSTATE 57P01 is admin_shutdown. So somebody shut down the database, and it is not surprising that your database connection got terminated.
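Since the crash itself comes from an unhandled 'error' event on the client, here is a small defensive sketch (the connection string variable and table name are hypothetical): node-postgres pools emit 'error' for idle clients that get disconnected, and attaching a listener keeps the process alive so the next query simply checks out a fresh client.

import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL }); // hypothetical env var

// Without this listener, an idle client killed by the server (e.g. 57P01)
// surfaces as an unhandled 'error' event and crashes the process.
pool.on('error', (err) => {
  console.error('Unexpected error on idle Postgres client:', err.message);
});

async function insertReading(value: number) {
  // pool.query checks out a client, runs the statement, and releases it
  await pool.query('INSERT INTO readings(value) VALUES ($1)', [value]); // hypothetical table
}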
I have a Data model in Sails using the sails-cassandra adapter. Data.count({...}).exec() returns 1, but Data.find({...}).exec() and Data.findOne({...}).exec() return the following error message:
Error (E_UNKNOWN) :: Encountered an unexpected error
ResponseError: Undefined name folder in selection clause
at FrameReader.readError (/Users/samuel/Apps/dataapp/node_modules/sails-cassandra/node_modules/cassandra-driver/lib/readers.js:276:13)
at Parser.parseError (/Users/samuel/Apps/dataapp/node_modules/sails-cassandra/node_modules/cassandra-driver/lib/streams.js:187:45)
at Parser.parseBody (/Users/samuel/Apps/dataapp/node_modules/sails-cassandra/node_modules/cassandra-driver/lib/streams.js:169:19)
at Parser._transform (/Users/samuel/Apps/dataapp/node_modules/sails-cassandra/node_modules/cassandra-driver/lib/streams.js:103:10)
at Parser.Transform._read (_stream_transform.js:179:10)
at Parser.Transform._write (_stream_transform.js:167:12)
at doWrite (_stream_writable.js:301:12)
at writeOrBuffer (_stream_writable.js:288:5)
at Parser.Writable.write (_stream_writable.js:217:11)
at Protocol.ondata (_stream_readable.js:540:20)
at Protocol.emit (events.js:107:17)
at readableAddChunk (_stream_readable.js:163:16)
at Protocol.Readable.push (_stream_readable.js:126:10)
at Protocol.Transform.push (_stream_transform.js:140:32)
at Protocol.transformChunk (/Users/samuel/Apps/dataapp/node_modules/sails-cassandra/node_modules/cassandra-driver/lib/streams.js:75:8)
at Protocol._transform (/Users/samuel/Apps/dataapp/node_modules/sails-cassandra/node_modules/cassandra-driver/lib/streams.js:26:10)
at Protocol.Transform._read (_stream_transform.js:179:10)
at Protocol.Transform._write (_stream_transform.js:167:12)
at doWrite (_stream_writable.js:301:12)
at writeOrBuffer (_stream_writable.js:288:5)
at Protocol.Writable.write (_stream_writable.js:217:11)
at Socket.ondata (_stream_readable.js:540:20)
at Socket.emit (events.js:107:17)
at readableAddChunk (_stream_readable.js:163:16)
at Socket.Readable.push (_stream_readable.js:126:10)
at TCP.onread (net.js:538:20)
(event loop)
at RequestHandler.send (/Users/samuel/Apps/dataapp/node_modules/sails-cassandra/node_modules/cassandra-driver/lib/request-handler.js:128:11)
at Client._getPrepared (/Users/samuel/Apps/dataapp/node_modules/sails-cassandra/node_modules/cassandra-driver/lib/client.js:581:11)
at /Users/samuel/Apps/dataapp/node_modules/sails-cassandra/node_modules/cassandra-driver/lib/client.js:399:12
at fn (/Users/samuel/Apps/dataapp/node_modules/sails-cassandra/node_modules/async/lib/async.js:638:34)
at Immediate._onImmediate (/Users/samuel/Apps/dataapp/node_modules/sails-cassandra/node_modules/async/lib/async.js:554:34)
at processImmediate [as _immediateCallback] (timers.js:367:17)
This is probably an issue with the construction of the returned attributes, since count() does not return any of the attributes, whereas find() and findOne() do.
I would look at the attributes on your models. Add and remove each one until you find the offender.
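For example, a model definition along these lines (every attribute except folder is hypothetical) would produce "Undefined name folder in selection clause" on find()/findOne() if the underlying Cassandra table has no folder column; either remove the attribute or map it to the real column name:

// api/models/Data.js
module.exports = {
  attributes: {
    title: { type: 'string' },
    // find() and findOne() select every attribute, so each one must exist as a column;
    // count() never selects them, which is why it still works.
    folder: { type: 'string', columnName: 'folder_name' },
  },
};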