Query mongodb with geddy - node.js

While trying out the Node.js framework Geddy (on Windows), I've run into a bit of a problem.
I'm trying to query MongoDB in my controller, using the .first() method from my User model, like so:
geddy.model.User.first({name: 'jdoe'}, function (err, data) {
  if (err) {
    throw err;
  } else {
    console.log(data);
  }
});
Strangely enough, I'm not getting any output: no data, no error, nothing. The user jdoe exists in the collection, so it should output something, right? Am I doing something wrong?
My model is defined as:
var User = function () {
  this.defineProperties({
    username: {type: 'string', required: true},
    password: {type: 'string', required: true}
  });
  this.autoIncrementId = true;
};
User = geddy.model.register('User', User);
The default adapter is set to mongo in development.js. When I ran Geddy for the first time it created my database and inserted the Users collection correctly.
Any idea what's going wrong here?
UPDATE:
Added development.js as requested:
var config = {
  detailedErrors: true
, debug: true
, hostname: null
, port: 4000
, model: {
    defaultAdapter: 'mongo'
  }
, db: {
    mongo: {
      dbname: 'knowledgebase'
    }
  }
, sessions: {
    store: 'memory'
  , key: 'sid'
  , expiry: 14 * 24 * 60 * 60
  }
};
module.exports = config;
Also, here are my collections in Mongo (created by Geddy):
> show collections
User
system.indexes
users
Note that somehow Geddy is creating two collections instead of one.

It looks like you're being hit by this bug: https://github.com/mde/geddy/issues/240
As it is, Geddy accidentally creates two collections per model. It always uses the lowercased, pluralized collection for reads and writes, though. Are you sure your data was in that collection and not in the other?
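If you want to double-check where your data actually lives, a quick look in the mongo shell (database and collection names taken from the config and listing above) would be:
> use knowledgebase
> db.users.find()   // the lowercased, pluralized collection Geddy actually reads from
> db.User.find()    // the stray duplicate collection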
At any rate, from the comments, it sounds like you've got this one covered already.

Related

How to fix and prevent duplicate key error in mongodb

I've been working on a hobby project recently and I've encountered a problem I can't seem to figure out, even after scouring the internet for an answer. I'm using Node.js on c9.io with MongoDB. Any time I try to create a new entry in the database, the first entry works and goes through fine, but the second one causes an error:
E11000 duplicate key error collection: project.tasks index: username_1 dup key: { : null }
My Schema:
var mongoose = require("mongoose");
var passportLocalMongoose = require("passport-local-mongoose");

var taskSchema = new mongoose.Schema({
  task: String,
  region: String,
  cost: String,
  when: String,
  isAccepted: Boolean,
  author: {
    id: {
      type: mongoose.Schema.Types.ObjectId,
      ref: "User"
    }
  },
  tasker: {
    id: {
      type: mongoose.Schema.Types.ObjectId,
      ref: "User"
    }
  }
});

taskSchema.plugin(passportLocalMongoose);

module.exports = mongoose.model("Task", taskSchema);
My Post Request:
app.post("/taskers/index/show", function(req, res){
  var task = req.body.task;
  var newTask = {
    task: task.task,
    region: task.region,
    cost: task.cost,
    when: task.when,
    isAccepted: false,
    author: req.user._id,
    tasker: req.user._id
  };
  console.log("STSOTSOTSOTOOPP");
  Task.create(newTask, function(err, newlyCreated){
    if(err){
      console.log(err);
    } else {
      console.log(newlyCreated);
      res.redirect("/users/index");
    }
  });
});
If anyone knows what I'm doing wrong or can lead me to a solution, that would be amazing as I've been stuck on this for a while.
E11000 duplicate key error collection: project.tasks index: username_1 dup key: { : null }
This error is coming from Mongo (not from Mongoose). Removing indexes from your Mongoose schema will not have any impact on the underlying collection, so you'll now want to remove the unique index on username from your tasks collection.
This index was likely created by previous code that we no longer see (or perhaps by that taskSchema.plugin(passportLocalMongoose); -- that sounds suspiciously like the kind of thing that would want an index on username).
If you connect to Mongo using the shell, you should be able to run db.tasks.getIndexes() to see that unique username index, and then use dropIndex to remove the offending index.
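For example, in the mongo shell (database, collection, and index names taken from the error message above):
> use project
> db.tasks.getIndexes()              // look for the unique index on { username: 1 }
> db.tasks.dropIndex("username_1")   // drop the offending index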
See E11000 duplicate key error index in mongodb mongoose for more details about how mongoose & mongo interact.
Alternatively, open MongoDB Compass and connect to the same database you are using in your code.
Open your DB and select your collection in MongoDB Compass.
Navigate to the indexes section and remove the unnecessary index.
Save.

Memory issue with mongo in node

I am facing memory issues with my Node app. I took some heap dumps and saw a lot of Mongo objects being held in memory, which is causing the Node app to run out of memory.
I have the following setup for my app.
MongoDB 3.4.13
Mongoose 4.11.10 (tried 4.13.11 and 5.0.7 also)
Node 8.9.4
config.js
const clientUID = require('./env').clientUID;

module.exports = {
  // Secret key for JWT signing and encryption
  secret: 'mysecret',
  // Database connection information
  database: `mongodb://localhost:27017/app_${clientUID}`,
  // Setting port for server
  port: process.env.PORT || 3000,
}
I have several models in the app. Every model is defined in the following manner (just listing one of the models here):
models/card.js
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const CardSchema = new Schema({
  name: {
    type: String,
    unique: true,
    required: true
  },
  macId: {
    type: String,
    unique: true,
    required: true
  },
  cardTypeId: {
    type: mongoose.Schema.Types.ObjectId,
    ref: 'CardType',
    required: true
  }
}, {
  timestamps: true
});

module.exports = mongoose.model('Card', CardSchema);
In the app I require the model and perform some actions as follows:
const mongoose = require('mongoose');
const moment = require('moment'); // used below for timestamps
const Card = require('./models/card');
// ...require other models (Tracker, TrackerHistory, etc.)
const config = require('./config');

mongoose.connect(config.database);

function fetchCardByMacId(macId) {
  return Card.findOne({ macId }).lean().exec();
}

function updateTrackerByMacId(macId, x, y, nodeId) {
  const data = { x, y, lastNodeId: nodeId };
  fetchCardByMacId(macId)
    .then(card => {
      Tracker.findOneAndUpdate({ cardId: card._id }, data, { upsert: true, new: true }).exec((error, tracker) => {
        if (error) {
          return console.log('update tracker error', error);
        }
        TrackerHistory.findOne({ trackerId: tracker._id }).exec((err, trackerHistory) => {
          if (err) {
            return console.log('fetch trackerHistory error', err);
          }
          if (trackerHistory) {
            trackerHistory.trackers.push({ x, y, timestamp: moment().format(), nodeId });
            TrackerHistory.findOneAndUpdate({ _id: trackerHistory._id }, trackerHistory, (er, trackerHis) => {
              if (er) {
                return console.log('trackerHistory change update error', er);
              }
            });
          } else {
            const trackerHistoryNew = new TrackerHistory({
              trackerId: tracker._id,
              trackers: [{ x, y, timestamp: moment().format(), nodeId }]
            });
            trackerHistoryNew.save((er, trackerHis) => {
              if (er) {
                return console.log('trackerHistory create error', er);
              }
            });
          }
        });
      });
    })
    .catch(error => {
      console.log('updateTrackerByMacId error', error);
    });
}
There are many other functions like this that read and update data.
Every 5 seconds I get new data that needs to be inserted into the DB (not more than a few hundred KB), and some of the old DB data also gets updated based on this new data. These seem like fairly straightforward DB ops: read, manipulate, and write back.
From index.js I spawn 2 child processes that take on the load of processing this new data and updating the DB based on the business logic. When new data is received in index.js via event listeners, I send it to child process 1 to insert/update the DB. Child process 2 runs on a 10s timer to read this updated data and then make some further updates to the DB.
Running this on my local MacBook Pro is no issue (the logged heap memory in use never goes above 40-50 MB). When I load it on a DigitalOcean Ubuntu 16.04 server (4 GB / 2 CPUs) I am facing memory issues. The child processes are exiting after hitting the memory threshold for the process (~1.5 GB), which seems very odd to me.
I also tried running this in Docker containers and see the same results: on the Mac it runs without issues, but on the server it eats up memory.
Generated heap dumps show a lot of Mongo objects on the heap.
I would like some help understanding what I am doing wrong here, and why Mongo is eating up this much memory on the server.
So there was a big issue with the way the TrackerHistory collection was modelled. TrackerHistory had an array, and every time a new object had to be added to the array, the whole TrackerHistory document was loaded into memory; at the given frequency of real-time updates, memory was bloating up faster than it was being GC'd.
Fixed it by moving the trackers array out into a new collection and adding a foreign key reference back to TrackerHistory.
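A minimal sketch of that restructuring, assuming a new collection for the individual readings (the model name TrackerPoint is hypothetical):

const mongoose = require('mongoose');

// One document per reading, so the parent TrackerHistory no longer grows unbounded.
// The model name TrackerPoint is hypothetical.
const TrackerPointSchema = new mongoose.Schema({
  trackerHistoryId: {                        // foreign key back to TrackerHistory
    type: mongoose.Schema.Types.ObjectId,
    ref: 'TrackerHistory',
    required: true
  },
  x: Number,
  y: Number,
  nodeId: String,
  timestamp: { type: Date, default: Date.now }
});

module.exports = mongoose.model('TrackerPoint', TrackerPointSchema);

// Appending a reading becomes a small insert instead of a load-push-save cycle:
// new TrackerPoint({ trackerHistoryId, x, y, nodeId }).save(callback);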
Reference article that helped me identify this issue:
https://www.mongodb.com/blog/post/6-rules-of-thumb-for-mongodb-schema-design-part-1

Connect MongoDB to Sails.js

I've read various tutorials and instructions on how to connect Sails to MongoDB, and every tutorial tells me to do this. I am new to MongoDB, by the way.
I followed the instructions:
Install sails-mongo (npm install sails-mongo).
Edit config/connections.js:
mongo: {
  adapter: 'sails-mongo',
  host: 'localhost',
  port: 54321,
  database: 'dbname'
}
Edit config/models.js:
connection:'mongo'
Edit config/local.js:
connections: {
  mongodb: {
    host: 'localhost',
    port: 54321,
    database: 'dbname'
  }
}
So in my api/models/User.js:
module.exports = {
  attributes: {
    name: {
      type: 'string'
    },
    employedIn: {
      collection: 'company'
    }
  },
  findUsers: function(opts, cb){
    Users.findOne(opts).exec(function (err, theUser) {
      // to do
      // I want to show the user's data
    });
  }
}
I ran console.log(Users), but I didn't find any columns/documents there.
Now, how am I going to get the collection named users from MongoDB?
(Just like SELECT * FROM users in SQL, or db.users.find().pretty() in the mongo shell.)
You query a Waterline model with the model's find or findOne methods. You create a new record with create, update with update, and delete with destroy. There are some more methods exposed by the query interface. You have to call exec and pass a callback to it to actually run the query. Documentation is here: Waterline Query Interface.
So basically it's just:
User.create({
  name: 'Max Mustermann'
}).exec(console.log);
User.find().exec(console.log);
User.create({
  name: 'Peter Pan'
}).exec(console.log);
User.find().exec(console.log);
User.findOne({
  where: { name: 'Max Mustermann' }
}).exec(function(err, user) {
  user.destroy().exec(console.log);
});
You do not need a custom findUsers method on your model. This is just find:
// /api/models/User.js
module.exports = {
  attributes: {
    name: {
      type: 'string'
    },
    employedIn: {
      collection: 'company'
    }
  }
}
You should use sails console to test.
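For example, from the project root (assuming the model above), the following session would list all users:
$ sails console
sails> User.find().exec(console.log);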

MongoDB: handling auto-incrementing model IDs instead of Mongo's native ObjectId

Due to a management decision, we are using userId for the users collection, postId for the posts collection, and topicId for the topics collection, instead of '_id' for each collection as the unique identifier.
This causes a few problems getting started. One of the problems I have encountered is with upserts.
Using Mongoose, we have a schema that restricts userId to a unique value, but when doing an update on a user model with upsert set to true, MongoDB appears to only look at a collection's ObjectIds to see if the document already exists; it doesn't check whether a document with the same userId already exists, so Mongo does an insert instead of an update.
Let me illustrate this with some data. Let's say the users collection has one document:
{
  _id: '561b0fad638e99481ab6d84a',
  userId: 3,
  name: 'foo'
}
We then run:
User.update({userId: 3}, {"$set": {name: 'bar'}}, {upsert: true}, function(err, resp){
  if (err) {
    // "errMessage": "insertDocument :: caused by :: 11000 E11000 duplicate key error index: app42153482.users.$userId_1 dup key: { : 3 }",
  }
});
One would think that MongoDB would find the existing document with userId: 3 and update it, so there must be something I am doing wrong, since it's giving me the duplicate key error?
Typically the default ObjectId is ideal for the _id. In this situation you can either override the default _id or have your own field for the id (like userId in your case).
Use a separate counters collection to track the last number used in a sequence. The _id field contains the sequence name and the seq field contains the last value of the sequence.
Insert the initial value for the userid sequence into the counters collection:
db.counters.insert({
  _id: "userid",
  seq: 0
})
Create a getNextSequence function that accepts the name of a sequence. The function uses the findAndModify() method to atomically increment the seq value and return the new value:
function getNextSequence(name) {
  var ret = db.counters.findAndModify({
    query: { _id: name },
    update: { $inc: { seq: 1 } },
    new: true
  });
  return ret.seq;
}
Use this getNextSequence() function during inserts:
db.users.insert({
  _id: getNextSequence("userid"),
  name: "Sarah C."
})
db.users.insert({
  _id: getNextSequence("userid"),
  name: "Bob D."
})
This way you can maintain as many sequences as you want in the same counters collection. For the upsert issue, check out the Optimistic Loop section of Create an Auto-Incrementing Sequence Field.
The second approach is to use Mongoose middleware, such as mongodb-autoincrement.
Hope it helps.
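If you'd rather wire the counters pattern into Mongoose yourself instead of pulling in a plugin, a minimal sketch of a pre-save hook might look like this (model and sequence names mirror the examples above; this is an illustration, not the plugin's actual implementation):

var mongoose = require('mongoose');

// Counters collection, mirroring the db.counters examples above.
var counterSchema = new mongoose.Schema({
  _id: String,                        // sequence name, e.g. "userid"
  seq: { type: Number, default: 0 }
});
var Counter = mongoose.model('Counter', counterSchema, 'counters');

var userSchema = new mongoose.Schema({
  userId: { type: Number, unique: true },
  name: String
});

// Before the first save, atomically claim the next value in the sequence.
userSchema.pre('save', function (next) {
  if (!this.isNew) return next();
  var doc = this;
  Counter.findOneAndUpdate(
    { _id: 'userid' },
    { $inc: { seq: 1 } },
    { new: true, upsert: true },
    function (err, counter) {
      if (err) return next(err);
      doc.userId = counter.seq;
      next();
    }
  );
});

var User = mongoose.model('User', userSchema);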
I don't know which versions of MongoDB and Mongoose you are using, but I couldn't reproduce your problem with MongoDB 3.0 and Mongoose 4.1.10.
I made a sample for you which will create and save a new user, update it (using an upsert), and create another one through an upsert. Try running this code:
"use strict";
var mongoose=require("mongoose");
var Schema = require('mongoose').Schema;
var ObjectId = mongoose.Schema.Types.ObjectId;
// Connect to test
mongoose.connect("mongodb://localhost:27017/test");
// Lets create your schema
var userSchema = new Schema({
_id: ObjectId,
userId: {type: Number, unique: true },
name: String
});
var User = mongoose.model("User", userSchema, "Users");
User.remove() // Let's prune our collection to start clean
.then( function() {
// Create our sample record
var myUser = new User({
_id:'561b0fad638e99481ab6d84a',
userId:3,
name:'foo'
});
return myUser.save();
})
.then( function() {
// Now its time to update (upsert userId 3)
return User.update({userId:3},{"$set":{name:'bar'}},{upsert:true});
})
.then( function() {
// Now its time to insert (upsert userId 4)
return User.update({userId:4},{"$set":{name:'bee'}},{upsert:true});
})
.then( function() {
// Lets show what we have inserted
return User.find().then(function(data) {console.log(data)});
})
.catch( function(err) {
// Show errors if anything goes wrong
console.error("ERROR", err);
})
.then( function() {
mongoose.disconnect();
});
Following the MongoDB 3.0 documentation, upsert: true will only refrain from inserting a non-existing document if your query conditions match on the _id field.
See: https://docs.mongodb.org/manual/reference/method/db.collection.update/#mongodb30-upsert-id
Why are you not using the user_name of a user as the unique id?
Because auto-incrementing fields as IDs are a bad practice in a MongoDB environment, especially if you want to use sharding:
=> all your inserts will occur on the latest shard;
=> the MongoDB cluster will have to rebalance often / redistribute the data around.
(Currently this will not occur on your system, as you still use the generated _id field.)
You can of course also create a unique index on the userId field:
https://docs.mongodb.org/manual/core/index-unique/#index-type-unique
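In the mongo shell that would be, for example:
db.users.createIndex({ userId: 1 }, { unique: true })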

How to call the EXPIRE command for a Redis record using Sails

I'm using Sails to add some data to Redis. It is working OK, but I'm not sure how to set the EXPIRE for a key.
I'm using the sails-redis adapter/connection for the model.
My model looks like this:
module.exports = {
  connection: 'cache',
  attributes: {
    id: {type: 'string', primaryKey: true},
    data: {type: 'string'}
  }
};
To save the model I use:
Cache.create({id: "somekey", data: data}, function(err, data){})
Waterline doesn't currently provide this functionality through the native adapter, as near as I can tell. However, though it's a wee bit hacky, you can access the Redis client itself through Waterline and set the expiration that way, using the created model's ID.
// Create the base model
Model.create({ field: value }).exec(function (err, created) {
  if (created) {
    // Then manually set expiration via the native Redis adapter
    var native = Model.adapter.connections.connectionName._adapter.native;
    native('connectionName', null, function (err, connection) {
      if (connection) {
        connection.expire('waterline:<model name, lower case>:id:' + created.id, <timeout, in seconds>, function (err, reply) {
          console.log(err, reply);
        });
      }
    });
  }
});
Not quite as clean as I'd prefer, but I also don't like writing that functionality manually when it's built in to Redis itself.
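Applied to the Cache model from the question (connection name 'cache' comes from the model definition above; the 30-second TTL is just an arbitrary example), that might look like:

Cache.create({ id: 'somekey', data: data }).exec(function (err, created) {
  if (created) {
    var native = Cache.adapter.connections.cache._adapter.native;
    native('cache', null, function (err, connection) {
      if (connection) {
        // Key follows Waterline's 'waterline:<model>:id:<value>' format
        connection.expire('waterline:cache:id:' + created.id, 30, console.log);
      }
    });
  }
});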
