I have a schema in which I'm storing a relationship between two users. Each of these relationships has user-specific data. I'm curious whether it's possible to do something along the lines of this:
{
users: Array,
users[0]: {
typing: Boolean,
last_checked: Date
},
users[1]: {
typing: Boolean,
last_checked: Date
}
}
Instead of having the information stored like so:
{
users: Array,
data: Array
}
and doing logic on the server to find the index, etc., like so:
entry.data[entry.users.indexOf(id)].typing
Basically I'm just trying to find a decent way to store user-based information for each user in the 2-person relationship. The ideal situation to me would be to use the user's _id as a key, but can you do that with Mongoose?
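For reference, Mongoose 5.1+ does support keying a path by arbitrary strings via the Map type, so the _id-as-key idea is expressible. A rough sketch (field and model names here are illustrative, not from the original post):

const mongoose = require('mongoose');

// Per-user data, stored under each user's _id used as the map key.
const DataSchema = new mongoose.Schema({
  typing: Boolean,
  last_checked: Date
}, { _id: false });

const RelationshipSchema = new mongoose.Schema({
  users: [{ type: mongoose.Schema.Types.ObjectId, ref: 'User' }],
  // Map keys are always strings, so use userId.toString() when reading and writing.
  data: { type: Map, of: DataSchema }
});

const Relationship = mongoose.model('Relationship', RelationshipSchema);

// Usage sketch:
// entry.data.get(userId.toString()).typing
// entry.data.set(userId.toString(), { typing: true, last_checked: new Date() });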
I propose creating an array that contains each user's data. Here I've restricted the size of the array to two entries.
DataSchema = {
  typing: Boolean,
  last_checked: Date,
}

UserSchema = {
  relationship: {
    type: [DataSchema],
    validate: [
      (val) => val.length <= 2,
      '{PATH} exceeds the limit of 2 relationships',
    ],
  }
}
Access the data:
// User 1 data
entry.relationship[0]
// User 2 data
entry.relationship[1]
// User 1's _id, which you can use
entry.relationship[0]._id
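For completeness, assuming the snippets above are wrapped in new mongoose.Schema(...) and registered as a User model, creating and updating such a document might look roughly like this (a sketch, not part of the original answer):

const User = mongoose.model('User', new mongoose.Schema(UserSchema));

// Inside an async function: create a document with both relationship entries up front.
const entry = await User.create({
  relationship: [
    { typing: false, last_checked: new Date() }, // user 1's data
    { typing: false, last_checked: new Date() }  // user 2's data
  ]
});

// Later, update one side's data and persist it.
entry.relationship[0].typing = true;
entry.relationship[0].last_checked = new Date();
await entry.save();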
I have a collection of documents which are being added as a result of users' interactions.
Those docs already have an _id field, but I also want to add a unique, human-readable ID for every existing and newly created object, in the form D123456.
What is the best way of adding such an ID and being sure that all those IDs are unique?
MongoDB doesn't have an auto-increment option like relational databases.
You can implement something yourself: before you save your document, generate an ID. First, create a database collection whose sole purpose is to hold a counter:
const Counter = mongoose.model('Counter', new mongoose.Schema({
  _id: String, // allows a named counter such as 'humanReadableDocumentId'
  current: Number
}));
Second, before you save your object, find and increment the number in the collection:
const humanReadableDocumentId = await Counter.findOneAndUpdate(
  // If you give this record a name, you can have multiple counters.
  { _id: 'humanReadableDocumentId' },
  { $inc: { current: 1 } },
  // If no record exists, create one. Return the new value after updating.
  { upsert: true, returnDocument: 'after' }
);

yourDocument.set('prettyId', format(humanReadableDocumentId.current));

function format(id) {
  // Just an example.
  return 'D' + id.toString().padStart(6, '0');
}
Note: I've tested the query in MongoDB (except for the returnDocument option, which is Mongoose-specific), and this should work.
Formatting is up to you. If you have more than 999999 documents, the 'nice looking ID' in the example will just get longer and be 7+ characters.
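One way to wire the counter in automatically is a pre-save hook. This is only a sketch under assumed model and field names (documentSchema is hypothetical; Counter and prettyId come from the answer above):

const documentSchema = new mongoose.Schema({
  prettyId: { type: String, unique: true },
  // ...the rest of your fields
});

documentSchema.pre('save', async function () {
  // Only number brand-new documents that don't have an ID yet.
  if (this.isNew && !this.prettyId) {
    const counter = await Counter.findOneAndUpdate(
      { _id: 'humanReadableDocumentId' },
      { $inc: { current: 1 } },
      { upsert: true, returnDocument: 'after' }
    );
    this.prettyId = 'D' + counter.current.toString().padStart(6, '0');
  }
});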
I'm trying to query an object that's inside an item, which is inside a Mongoose model. When I try to find that object with the find() method or lodash's _.find() method, I can't get access to it: console.log() gives me undefined, and array.filter() gives me an empty array. That means the object I'm trying to access doesn't meet the criteria I give to the lodash find method, yet when I look at my database I see that the object does actually have the properties to meet those criteria. I don't know what I'm doing wrong. Here's my code; as you can see, I'm trying to get the information of the item that the user clicked on and wants to see:
router.get("/:category/:itemId", (req, res) => {
console.log(req.params.itemId);
//gives the item id that the user clicked on
console.log(req.params.category);
//gives the name of category so I can find items inside it
Category.findOne({ name: req.params.category }, (err, category) => {
const items = category.items; //the array of items
console.log(items); //gives an array back
const item = _.find(items, { _id: req.params.itemId });
console.log(item); //gives the value of 'undefined' for whatever reason
});
});
The category Schema:
const catSchema = new mongoose.Schema({
name: {
type: String,
default: "Unlisted",
},
items: [
{
name: String,
price: Number,
description: String,
img: String,
dateAdded: Date,
lastUpdated: Date,
},
],
dateCreated: Date,
lastUpdate: Date,
});
Well, the answer is a little bit obvious: you are using MongoDB, and in Mongo the field is _id; with Mongoose you can also use it without the underscore. So you just have to remove the underscore and that's it. Do it like this:
const item = _.find(items, { id: req.params.itemId });
Hope you are doing better.
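For context on why dropping the underscore helps: _id is an ObjectId while req.params.itemId is a string, so a strict comparison never matches, whereas Mongoose's id virtual is the string form of _id. Two equivalent alternatives (sketches, not from the original answers):

// Compare the ObjectId explicitly as a string:
const item = items.find(it => it._id.toString() === req.params.itemId);

// Or use Mongoose's DocumentArray helper, which looks subdocuments up by _id:
const sameItem = category.items.id(req.params.itemId);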
When I look at your schema, I see that the items field is an array of objects that doesn't declare an _id, so when you create a new instance of catSchema an _id is generated for the new instance but not for each item inside the items array. You should also record the id of the item in question because, as I understand it, you must also have a model called items in your database.
When you save these records in your database, you will generate an element with this structure:
{
_id: String, // auto generated
name: String,
items: [ {"name1", "price1", "description1", "imgUrl1", "dateAdded1", "lastUpdated1"},{"name2", "price2", "description2", "imgUrl2", "dateAdded1", "lastUpdated2"}, ...],
dateCreated: Date,
lastUpdate: Date
}
Note: the code provided above is only an illustration of the object that will be stored in the database, not valid code.
Here you can notice that there is no field called _id in the objects sent to the database.
My suggestion to solve this issue is to create an additional field called _id inside the items array, like:
{
  ...,
  items: [
    {
      _id: {
        type: String,
        unique: true,
        required: true // to make sure you will always have it when you create a new instance
      },
      ... // the rest of the item fields
    },
    ...
  ],
  ...
}
Now, when you create a new instance, make sure the _id is recorded in the object you insert into the database when you call catInstance.save(); inside the catInstance object you have to add the current id of each element so that you can filter on this field later.
Hope my answer helped; if you have any additional questions, please let me know.
Happy coding ...
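A minimal sketch of what this suggestion looks like when creating a document (the values and the ObjectId-as-string id generation are illustrative, not from the original question):

const catInstance = new Category({
  name: "electronics",
  items: [{
    // An explicit string id you can later match against req.params.itemId.
    _id: new mongoose.Types.ObjectId().toString(),
    name: "Example item",
    price: 9.99,
    dateAdded: new Date(),
    lastUpdated: new Date()
  }],
  dateCreated: new Date(),
  lastUpdate: new Date()
});

await catInstance.save();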
I'm attempting to import a CSV file into my MongoDB collection (via Mongoose) while checking for matches at each level of my schema.
So for a given schema personSchema with a nested schema carSchema:
repairSchema = {
date: Date,
description: String
}
carSchema = {
make: String,
model: String
}
personSchema = {
first_name: String,
last_name: String,
car: [carSchema]
}
and an object that I am mapping the CSV data to:
mappingObject = {
first_name : 0,
last_name: 1,
car : {
make: 2,
model: 3,
repair: {
date: 4,
description: 5
}
}
}
I want to check my collection for a match, then check each nested schema for a match, or create the entire document, as appropriate.
Desired process:
I need to check if a person document matching first_name and last_name exists in my collection.
If such a person document exists, check if that person document contains a matching car.make and car.model.
If such a car document exists, check if that car document contains a matching car.repair.date and car.repair.description.
If such a repair document exists, do nothing, exact match to existing record.
If such a repair document does not exist, push this repair to the repair document for the appropriate car and person.
If such a car document does not exist, push this car to the car document for the appropriate person.
If such a person document does not exist, create the document.
The kicker
This same function will be used across many schemas, which may be nested many levels deep (current database has one schema that goes 7 levels deep). So it has to be fairly abstract. I can already get the data into the structure I need as a javascript object, so I just need to get from that object to the collection as described.
It also has to be synchronous, since multiple records from the CSV could have the same person, and asynchronous creation could mean that the same person gets created twice.
Current solution
I run through each line of the CSV, map the data to my mappingObject, then step through each level of the object in javascript, checking non-object key-value pairs for a match using find, then pushing/creating or recursing as appropriate. This absolutely works, but it is painfully slow with such large documents.
Here's my full recursing function, which works:
saveObj is the object that I've mapped the CSV onto, matching my schema.
findPrevObj is initially false; path and topKey are both initially "".
lr is the line reader object; lr.resume() simply moves on to the next line.
var findOrSave = function(saveObj, findPrevObj, path, topKey){
//the object used to search the collection
var findObj = {};
//if this is a nested schema, we need the previous schema search to match as well
if (findPrevObj){
for (var key in findPrevObj){
findObj[key] = findPrevObj[key];
}
}
//go through all the saveObj, compiling the findObj from string fields
for (var key in saveObj){
if (saveObj.hasOwnProperty(key) && typeof saveObj[key] === "string"){
findObj[path+key] = saveObj[key]
}
}
//search the DB for this record
ThisCollection.find(findObj).exec(function(e, doc){
//this level at least exists
if (doc.length){
//go through all the deeper levels in our saveObj
var i = 0;
for (var key in saveObj){
if (saveObj.hasOwnProperty(key) && typeof saveObj[key] !== "string"){
i += 1;
findOrSave(saveObj[key], findObj, path+key+".", path+key);
}
}
//if there were no deeper levels (basically, full record exists)
if (!i){
lr.resume();
}
//this level doesn't exist, add new record or push to array
} else {
if (findPrevObj){
var toPush = {};
toPush[topKey] = saveObj;
ThisCollection.findOneAndUpdate(
findPrevObj,
{$push: toPush},
{safe: true, upsert: true},
function(err, doc) {
lr.resume();
}
)
} else {
// console.log("\r\rTrying to save: \r", saveObj, "\r\r\r");
ThisCollection.create(saveObj, function(e, doc){
lr.resume();
});
}
}
});
}
I'll update for clarity, but the person.find is to check if a person with a matching first and last name exists. If they do exist, I check each car for a match - if the car exists already, there's no reason to add this record. If the car doesn't exist, I push it to the car array for the matching person. If no person was matched, I'd save the entire new record.
Ah, what you want is to update with upsert:
replace
Person.find({first_name: "adam", last_name: "snider"}).exec(function(e, d){
//matched? check {first_name: "adam", last_name: "snider", car.make: "honda", car.model: "civic"}
//no match? create this record (or push to array if this is a nested array)
});
with
Person.update(
{first_name: "adam", last_name: "snider"},
{$push: {car: {make: 'whatever', model: 'whatever2'}}},
{upsert: true}
)
If a match is found, it will push this subdocument into the car array (creating the array if needed): {make: 'whatever', model: 'whatever2'}.
If a match is not found, it will create a new doc that looks like:
{first_name: "adam", last_name: "snider", car: [{make: 'whatever', model: 'whatever2'}]}
This cuts your total db round trips in half. However, for even more efficiency, you can use an ordered bulk operation, which results in a single round trip to the database.
Here's what that would look like (using ES6 here for concision, not a necessity):
const bulk = Person.collection.initializeOrderedBulkOp();
lr.on('line', function(line) {
  const [first_name, last_name, make, model, repair_date, repair_description] = line.split(',');
  // Ensure the person exists (only insert the name fields if the document is missing).
  bulk.find({first_name, last_name}).upsert().updateOne({$setOnInsert: {first_name, last_name}});
  // Find a person with the existing make and model. This makes sure that if the car IS there, it matches the proper document structure.
  bulk.find({first_name, last_name, 'car.make': make, 'car.model': model}).updateOne({$set: {'car.$.repair.date': repair_date, 'car.$.repair.description': repair_description}});
  // Now, if the car wasn't there, add it to the set. This will not push if we just updated, because the element should match exactly now.
  bulk.find({first_name, last_name}).updateOne({$addToSet: {car: {make, model, repair: {date: repair_date, description: repair_description}}}});
});
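One detail not shown above: nothing is sent to the database until the bulk is executed, so you would run it once all lines are queued. A sketch, assuming the line reader emits an end event when it finishes:

lr.on('end', function() {
  bulk.execute(function(err, result) {
    if (err) return console.error(err);
    console.log('Bulk import finished,', result.nModified, 'documents modified');
  });
});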
I am trying to stop duplicates in my MongoDB collection but they are still getting in. I am reading data from Twitter and storing it like:
var data = {
user_name: response[i].user.screen_name,
profile_image: response[i].user.profile_image_url,
content: {
text: response[i].text
},
id: response[i].id_str,
};
and I have the following to stop any duplicates:
db[collection].ensureIndex( { id: 1, "content.text": 1 }, { unique: true, dropDups: true } );
The id field is working and no duplicates appear, but the "content.text" field does not work and duplicates are appearing. Any ideas why?
When you enforce a unique constraint on a composite index, two documents are considered the same only if they have the same values for both the id and content.text fields, not for either key individually.
To enforce unique constraints on the fields id and content.text individually, you could do it as below:
db.col.ensureIndex({"id":1},{unique:true}) and similarly for the other field.
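Spelled out for both fields (the nested field is indexed via its dotted path; createIndex is the current, non-deprecated name for ensureIndex):

// Unique on the tweet id alone
db.col.createIndex({ "id": 1 }, { unique: true });
// Unique on the embedded text alone
db.col.createIndex({ "content.text": 1 }, { unique: true });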
I'm trying to implement a rating system and I'm struggling to only allow one rating per user in a reasonable way.
Simply put, I have an array of ratings in my schema, containing the "rater" and the rating, as such:
var schema = new Schema({
//...
ratings: [{
by: {
type: Schema.Types.ObjectId
},
rating: {
type: Number,
min: 1,
max: 5,
validate: ratingValidator
}
}],
//...
});
var Model = mongoose.model('Model', schema);
When I get a request, I wish to add the user's rating to the array if the user has not already rated this document; otherwise I wish to update the existing rating (you should not be able to give more than one rating).
One way to do this is to find the document, "loop through" the array of ratings and search for the user. If the user already has a rating in the array, the rating is changed; otherwise a new rating is pushed. As such:
Model.findById(id)
.select('ratings')
.exec(function(err, doc) {
if(err) return next(err);
if(doc) {
var rated = false;
var ratings = doc.ratings;
for(var i = 0; i < ratings.length; i++) {
if(ratings[i].by.equals(user.id)) {
ratings[i].rating = rating;
rated = true;
break;
}
}
if(!rated) {
ratings.push({
by: user.id,
rating: rating
});
}
doc.markModified('ratings');
doc.save();
} else {
//Not found
}
});
Is there an easier way? A way to let MongoDB do this automatically?
The MongoDB $addToSet operator could be an alternative; however, I have not managed to use it for this, since it could allow two ratings with different scores from the same user.
As you note, the $addToSet operator will not work in this case, since a userId with a different vote value is indeed a different value and thus its own unique member of the set.
So the best way to do this is to actually issue two update statements with complementary logic. Only one will actually be applied depending on the state of the document:
async.series(
[
// Try to update a matching element
function(callback) {
Model.update(
{ "_id": id, "ratings.by": user.id },
{ "$set": { "ratings.$.rating": rating } },
callback
);
},
// Add the element where it does not exist
function(callback) {
Model.update(
{ "_id": id, "ratings.by": { "$ne": user.id } },
{ "$push": { "ratings": { "by": user.id, "rating": rating } }},
callback
);
}
],
function(err,result) {
// all done
}
);
The principle is simple: try to match the userId in the ratings array of the document and update the matched entry; if that condition is not met, no document is updated. In the same way, try to match a document where the userId is not present in the ratings array; if there is a match, add the element, otherwise there is no update.
This does bypass Mongoose's built-in schema validation, so you would have to apply your constraints manually (or inspect the schema validation rules and apply them manually), but it is better than your current approach in one very important respect.
When you .find() the document and pull it back into your application to modify in code, as you are doing, there is no guarantee that the document has not changed on the server due to another process or request. So when you issue .save(), the document on the server may no longer be in the state it was in when it was read, and your modifications can overwrite the changes made there.
Hence, while there are two operations to the server rather than one (and your current code is two operations anyway), it is the lesser of two evils to validate manually than to possibly cause a data inconsistency. The two-update approach will respect any other updates issued to the document that may occur at the same time.
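If you would rather send both complementary statements in a single round trip instead of via async.series, newer Mongoose versions expose Model.bulkWrite; a sketch of the same two-statement logic (not part of the original answer):

Model.bulkWrite([
  {
    updateOne: {
      filter: { _id: id, "ratings.by": user.id },
      update: { $set: { "ratings.$.rating": rating } }
    }
  },
  {
    updateOne: {
      filter: { _id: id, "ratings.by": { $ne: user.id } },
      update: { $push: { ratings: { by: user.id, rating: rating } } }
    }
  }
]).then(function (result) {
  // At most one of the two operations will have modified the document.
});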