My program is an online game. I create a game object which contains players. I want to remove the game record after 3 hours if no other player (than the one who created the game) joined the game.
I can count players, and I know about MongoDB TTL indexes, but how can I set a TTL that only triggers if no other player has joined?
The basic behavior of a TTL index is that the document is deleted once the specified number of seconds has passed since the time recorded in the indexed field. What may not be immediately apparent is that, although the documentation examples use a field called "createdAt", this is not your only option: the date field can be updated.
So for instance if you create a document and give it an "updatedAt" field, that you populate when the game is created, then provide the index definition:
db.games.createIndex({ "updatedAt": 1 },{ "expireAfterSeconds": 10800 })
Then all you really have to do is "update" that value whenever a new player joins the game, or there is some other action that keeps the game "valid":
db.games.update(
{ "_id": gameId },
{
"$push": { "players": newPlayer },
"$currentDate": { "updatedAt": true }
}
)
Or change score:
db.games.update(
{ "_id": gameId },
{
"$inc": { "score": 20 },
"$currentDate": { "updatedAt": true }
}
)
Both examples use the $currentDate operator to set the field to the current server time, though you could also set a Date value explicitly if you choose.
So the "expire date" for a TTL index is not "set" when the document is written; a background process periodically checks the current time and removes any document whose indexed field is older than the current time "minus" the expiry seconds set on the index.
In short, create the index and keep updating the field to keep the document current. As things change (players join or leave), update the field.
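The deletion rule itself can be sketched in plain JavaScript. This is a model of the behavior described above, not server code (`isExpired` is a hypothetical helper); the real TTL monitor is a background task that runs roughly every 60 seconds, so removal is not instantaneous:

```javascript
// Sketch: a document is eligible for TTL deletion once the indexed
// date plus expireAfterSeconds is earlier than the current time.
function isExpired(updatedAt, expireAfterSeconds, now = new Date()) {
  return now.getTime() - updatedAt.getTime() >= expireAfterSeconds * 1000;
}

const threeHours = 10800; // matches expireAfterSeconds in the index above
const createdAt = new Date("2024-01-01T00:00:00Z");

// 2 hours after creation with no update: still alive
console.log(isExpired(createdAt, threeHours, new Date("2024-01-01T02:00:00Z"))); // false
// 4 hours after creation with no update: eligible for removal
console.log(isExpired(createdAt, threeHours, new Date("2024-01-01T04:00:00Z"))); // true
```

Every $currentDate touch effectively resets this clock, which is why updating the field on each join or score change keeps the game document alive.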
I am trying to make a pretty simple mongoDB document to track searches by date in a node.js project. Here's what I'm shooting for:
{
"date": 1626930000000,
"searches": [
{
"search": "search 1",
"count": 2
},
{
"search": "search 2",
"count": 5
}
]
}
I want to update/add new searches to each date and increment the count for existing searches. Then on new dates do the same thing. Here is my current attempt using mongoose:
const query = { date: date, searches: { search: search } };
guideSearchesModel.findOneAndUpdate(query, {$addToSet: { searches: search ,$inc: { count: 1 }}}, { upsert: true, new: true }, callback);
But this is what gets added to the DB:
{
"_id": {
"$oid": "60f9eb370f12712140dd29db"
},
"date": 1626930000000,
"searches": {
"_id": {
"$oid": "60f9eb38c4ff591f50137726"
},
"search": "search 1"
}
}
It's missing the "count", and if I run it again it inserts a new document instead of updating the one I want based on the date.
Any ideas would be greatly appreciated.
Thanks!
The query {searches: {search: "search 1"}} is an exact document search, meaning it will match a document that contains a field named searches that contains an object with exactly 1 field named search with the value "search 1". i.e. it will not match those sample documents, and therefore it will always upsert.
To match fields in subdocuments, use dotted notation, like
{"searches.search": "search 1"}
Also note that the query part of that update matches both a date and the search object, which means if a document exists in the collection with the desired date, but doesn't contain the search in its array, a new document with the date will be inserted.
It looks like you want to conditionally update the document by appending the search to the array if it doesn't exist, and incrementing its count if it does. This will require you to use the aggregation pipeline form of the update command, like Update MongoDB field using value of another field.
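Assuming a pipeline-style update is available (MongoDB 4.2+), the effect you want is: append the term with a count of 1 if it is absent, otherwise increment its count. The intended per-document result can be modeled in plain JavaScript (`recordSearch` is a hypothetical helper illustrating the logic, not the actual pipeline syntax):

```javascript
// Sketch: what the conditional update should do to one date document.
// Appends { search, count: 1 } if the term is absent, otherwise increments count.
function recordSearch(doc, term) {
  const hit = doc.searches.find(s => s.search === term);
  if (hit) {
    hit.count += 1;
  } else {
    doc.searches.push({ search: term, count: 1 });
  }
  return doc;
}

const day = { date: 1626930000000, searches: [{ search: "search 1", count: 2 }] };
recordSearch(day, "search 1"); // existing term: its count becomes 3
recordSearch(day, "search 2"); // new term: appended with count 1
```

In the real update, the query would match on date alone (so an existing date document is found even when the search term is new), and the pipeline stages would perform this array rewrite.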
Currently I am working on a mobile app. Basically people can post their photos and their followers can like the photos, like Instagram. I use MongoDB as the database. As on Instagram, there might be a lot of likes for a single photo, so using a separate document for each "like", with an index, seems unreasonable because it would waste a lot of memory. However, I'd like a user to be able to add a like quickly. So my question is: how should I model the "like"? The data model is much like Instagram's, but using MongoDB.
No matter how you structure your overall document, there are basically two things you need: a property for a "count" and a "list" of those who have already posted their "like", in order to ensure no duplicates are submitted. Here's a basic structure:
{
"_id": ObjectId("54bb201aa3a0f26f885be2a3"),
"photo": "imagename.png",
"likeCount": 0,
"likes": []
}
Whatever the case, there is a unique "_id" for your "photo post" and whatever information you want, but then the other fields as mentioned. The "likes" property here is an array, and that is going to hold the unique "_id" values from the "user" objects in your system. So every "user" has their own unique identifier somewhere, either in local storage or OpenId or something, but a unique identifier. I'll stick with ObjectId for the example.
When someone submits a "like" to a post, you want to issue the following update statement:
db.photos.update(
{
"_id": ObjectId("54bb201aa3a0f26f885be2a3"),
"likes": { "$ne": ObjectId("54bb2244a3a0f26f885be2a4") }
},
{
"$inc": { "likeCount": 1 },
"$push": { "likes": ObjectId("54bb2244a3a0f26f885be2a4") }
}
)
Now the $inc operation there increases the value of "likeCount" by the number specified, in this case 1. The $push operation adds the unique identifier for the user to the array in the document for future reference.
The main important thing here is to keep a record of those users who voted, which is what is happening in the "query" part of the statement. Apart from selecting the document to update by its own unique "_id", the other important thing is to check the "likes" array to make sure the current voting user is not in there already.
The same is true for the reverse case or "removing" the "like":
db.photos.update(
{
"_id": ObjectId("54bb201aa3a0f26f885be2a3"),
"likes": ObjectId("54bb2244a3a0f26f885be2a4")
},
{
"$inc": { "likeCount": -1 },
"$pull": { "likes": ObjectId("54bb2244a3a0f26f885be2a4") }
}
)
Again, the important thing here is the query conditions, which ensure that no document is touched unless all conditions are met: the count does not increase if the user has already voted, and does not decrease if their vote was not actually present at the time of the update.
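The effect of those two guarded updates can be sketched in plain JavaScript (`addLike` and `removeLike` are hypothetical helpers mirroring the query conditions; the return value plays the role of the nModified count in the write result):

```javascript
// Sketch: each guarded update only takes effect when its query condition holds.
function addLike(doc, userId) {
  if (doc.likes.includes(userId)) return false; // the "$ne" condition failed
  doc.likes.push(userId);
  doc.likeCount += 1;
  return true;
}

function removeLike(doc, userId) {
  const idx = doc.likes.indexOf(userId);
  if (idx === -1) return false; // the user's like was not present
  doc.likes.splice(idx, 1);
  doc.likeCount -= 1;
  return true;
}

const photo = { photo: "imagename.png", likeCount: 0, likes: [] };
addLike(photo, "userA");    // true: likeCount becomes 1
addLike(photo, "userA");    // false: duplicate vote rejected, still 1
removeLike(photo, "userA"); // true: back to 0
```

On the server both the check and the write happen atomically in one statement, which is what makes the counter safe under concurrent votes.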
Of course it is not practical to read an array with a couple of hundred entries in a document back in any other part of your application. But MongoDB has a very standard way to handle that as well:
db.photos.find(
{
"_id": ObjectId("54bb201aa3a0f26f885be2a3"),
},
{
"photo": 1,
"likeCount": 1,
"likes": {
"$elemMatch": { "$eq": ObjectId("54bb2244a3a0f26f885be2a4") }
}
}
)
This usage of $elemMatch in projection will only return the current user if they are present or just a blank array where they are not. This allows the rest of your application logic to be aware if the current user has already placed a vote or not.
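What the client receives can be modeled in plain JavaScript (`projectLikes` is a hypothetical name): the likes array is reduced to at most the one matching element, so the application only needs to check whether it is empty. Note that on some server versions a non-matching $elemMatch projection omits the field entirely rather than returning an empty array, so checking for absence as well is safest:

```javascript
// Sketch: model of the $elemMatch projection -- keep at most the matching element.
function projectLikes(doc, userId) {
  const match = doc.likes.filter(id => id === userId).slice(0, 1);
  return { photo: doc.photo, likeCount: doc.likeCount, likes: match };
}

const doc = { photo: "imagename.png", likeCount: 2, likes: ["userA", "userB"] };
console.log(projectLikes(doc, "userB").likes); // ["userB"] -- current user already voted
console.log(projectLikes(doc, "userC").likes); // []        -- current user has not voted
```

Either way, the full likes array never crosses the wire, which is the point of projecting here.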
That is the basic technique and may work for you as is, but you should be aware that embedded arrays should not be extended indefinitely, and there is also a hard 16MB limit on BSON documents. So the concept is sound, but it cannot be used on its own if you are expecting 1000's of "like votes" on your content. There is a concept known as "bucketing" which is discussed in some detail in this example for Hybrid Schema design, and which allows one solution to storing a high volume of "likes". You can look at that to use along with the basic concepts here as a way to do this at volume.
I use NodeJS and I have a MongoDB collection with a lot of entries. 99% of time the last entry is selected, sometimes the entry before. Since MongoDB has to get through all entries one by one, it would be more useful to sort the entries the other way round:
Instead of this:
{
_id: "foo",
name: "name"
},
{
_id: "bar",
name: "name"
}
// <- new entry will be inserted here
I want to use this:
// <- new entry will be inserted here
{
_id: "foo",
name: "name"
},
{
_id: "bar",
name: "name"
},
So that in most cases the entry I search for is the first or the second item.
Is that possible or even necessary (does it make any difference in speed)?
I could also reverse the items and then iterate through them, but I don't think that this would be faster.
You should not be concerned about the position of an item in the collection. Every collection has an index on the _id field, so if you sort by that field and take the first (second or third) element, you will get it almost instantly (probably 0 ms).
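Default ObjectId values begin with a 4-byte creation timestamp, so (assuming standard ObjectId generation) sorting on _id descending returns the newest documents first, and the server serves that sort from the _id index rather than scanning. A sketch of the effect:

```javascript
// Equivalent shell query: db.collection.find().sort({ _id: -1 }).limit(1)
// Sketch: ObjectId hex strings start with the creation timestamp, so
// lexicographic order on the string tracks insertion time.
const docs = [
  { _id: "64a000000000000000000001", name: "older" },
  { _id: "64b000000000000000000002", name: "newer" },
];

// Newest first: the order MongoDB returns for sort({ _id: -1 })
const newestFirst = [...docs].sort((a, b) => (a._id < b._id ? 1 : -1));
console.log(newestFirst[0].name); // "newer"
```

So there is no need to physically reverse the collection; ask for the reverse order in the query and the index does the work.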
I'm learning Mongodb/Mongoose/Express and have come across a fairly complex query (relative to my current level of understanding anyway) that I'm not sure how best to approach. I have a collection - to keep it simple let's call it entities - with an embedded actions array:
name: String
actions: [{
name: String
date: Date
}]
What I'd like to do is to return an array of documents with each containing the most recent action (or most recent to a specified date), and the next action (based on the same date).
Would this be possible with one find() query, or would I need to break this down into multiple queries and merge the results to generate one result array? I'm looking for the most efficient route possible.
Provided that your "actions" are inserted with the "most recent" being the last entry in the list, and usually this will be the case unless you are specifically updating items and changing dates, then all you really want to do is "project" the last item of the array. This is what the $slice projection operation is for:
Model.find({},{ "actions": { "$slice": -1 } },function(err,docs) {
// contains an array with the last item
});
If indeed you are "updating" array items and changing dates, but you want to query for the most recent on a regular basis, then you are probably best off keeping the array ordered. You can do this with a few modifiers such as:
Model.update(
{
"_id": ObjectId("541f7bbb699e6dd5a7caf2d6"),
},
{
"$push": { "actions": { "$each": [], "$sort": { "date": 1 } } }
},
function(err,numAffected) {
}
);
Which is actually more of a trick you can do with the $sort modifier: simply sort the existing array elements without adding or removing anything. In versions prior to 2.6 you need the $slice "update" modifier in here as well; it can be set to a value larger than the expected number of array elements if you do not actually want to restrict the array size, though restricting it is probably a good idea.
Unfortunately, if you were "updating" via a $set statement, then you cannot do this "sorting" in a single update statement, as MongoDB will not allow both types of operations on the array at once. But if you can live with that, then this is a way to keep the array ordered so the first query form works.
If it just seems too hard to keep an array ordered by date, then you can in fact retrieve the largest value by means of the .aggregate() method. This allows greater manipulation of the documents than is available to basic queries, at a little more cost:
Model.aggregate([
// Unwind the array to de-normalize as documents
{ "$unwind": "$actions" },
// Sort the contents per document _id and inner date
{ "$sort": { "_id": 1, "actions.date": 1 } },
// Group back with the "last" element only
{ "$group": {
"_id": "$_id",
"name": { "$last": "$name" },
"actions": { "$last": "$actions" }
}}
],
function(err,docs) {
})
And that will "pull apart" the array using the $unwind operator, then process with a next stage to $sort the contents by "date". In the $group pipeline stage the "_id" means to use the original document key to "collect" on, and the $last operator picks the field values from the "last" document ( de-normalized ) on that grouping boundary.
So there are various things that you can do, but of course the best way is to keep your array ordered and use the basic projection operators to simply get the last item in the list.
I have the following documents:
{
"_id": "538584aad48c6cdc3f07a2b3",
"startTime": "2014-06-12T21:30:00.000Z",
"endTime": "2014-06-12T22:00:00.000Z",
},
{
"_id": "538584b1d48c6cdc3f07a2b4",
"startTime": "2014-06-12T22:30:00.000Z",
"endTime": "2014-06-12T23:00:00.000Z",
}
All of them have startTime and endTime value. I need to maintain consistency that no two date spans in the collection overlap.
Let's say if I add the following document with the following dates:
db.collection.insert({
"startTime": "2014-06-12T19:30:00.000Z",
"endTime": "2014-06-12T21:00:00.000Z"
});
This date span insert should fail because it overlaps with an existing interval.
My questions are:
How to check for date span overlap?
How to check and insert with a single query?
EDIT: to prevent a duplicate I am asking here and starting a bounty. I need to perform the update operation with a single query, as described here: How to query and update document by using single query?
The query is not as complicated as it may look at first - the query to find all documents which "overlap" the range you are given is:
db.test.find( { "startTime" : { "$lt" : new_end_time },
"endTime" : { "$gt": new_start_time }
}
)
This will match any document with starting date earlier than our end date and end date greater than our start time. If you visualize the ranges as being points on a line:
-----|*********|----------|****|-----------|******||********|---
s1 e1 s2 e2 s3 e3s4 e4
the sX-eX pairs represent existing ranges. Now take a new pair s5-e5: if we eliminate the pairs that start after our end date (they can't overlap us), then eliminate the pairs that end before our start date, and nothing is left, we are good to insert.
Put another way: the insert is safe when every existing document either ends at or before ($lte) our start or starts at or after ($gte) our end. Our query flips this around to make sure that no document satisfies the opposite of this condition.
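The overlap test the query encodes can be written as a single predicate; here is a sketch in plain JavaScript (`overlaps` is a hypothetical helper):

```javascript
// Sketch: two ranges overlap exactly when each one starts before the other ends.
// Same condition as the find() above:
//   startTime < new_end_time  AND  endTime > new_start_time
function overlaps(aStart, aEnd, bStart, bEnd) {
  return aStart < bEnd && aEnd > bStart;
}

// ISO-8601 strings compare correctly as strings, matching the string
// comparison MongoDB performs on these fields.
console.log(overlaps("2014-06-12T21:30:00.000Z", "2014-06-12T22:00:00.000Z",
                     "2014-06-12T21:45:00.000Z", "2014-06-12T23:00:00.000Z")); // true
console.log(overlaps("2014-06-12T21:30:00.000Z", "2014-06-12T22:00:00.000Z",
                     "2014-06-12T22:00:00.000Z", "2014-06-12T23:00:00.000Z")); // false (touching endpoints)
```

Note the strict inequalities: ranges that merely touch at an endpoint are not treated as overlapping, which matches the $lt/$gt operators in the query.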
On the performance front, it's unfortunate that you are storing your dates as strings only. If you stored them as timestamps (or any number, really) you could make this query utilize indexes better. As it is, for performance you would want to have an index on { "startTime":1, "endTime":1 }.
It's simple to find whether the range you want to insert overlaps any existing ranges, but to your second question:
How to check and insert with a single query?
There is no proper way to do it with an insert, since inserts do not take a query (i.e. they are not conditional).
However, you can use an update with the upsert option. It can insert if the condition doesn't match anything, but if it does match, it will try to update the matched document!
So the trick is to make the update a no-op and set the fields you need on upsert only. Since 2.4 there is a $setOnInsert update operator. The full thing looks something like this:
db.test.update(
{ startTime: { "$lt" : new_end_time }, "endTime" : { "$gt": new_start_time } },
{ $setOnInsert:{ startTime:new_start_time, endTime: new_end_time}},
{upsert:1}
)
WriteResult({
"nMatched" : 0,
"nUpserted" : 1,
"nModified" : 0,
"_id" : ObjectId("538e0f6e7110dddea4383938")
})
db.test.update(
{ startTime:{ "$lt" : new_end_time }, "endTime" : { "$gt": new_start_time } },
{ $setOnInsert:{ startTime:new_start_time, endTime: new_end_time}},
{upsert:1}
)
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 0 })
I just ran the same "update" twice. The first time there were no overlapping documents, so the update performed an "upsert", which you can see in the WriteResult it returned.
When I ran it a second time, it would overlap (itself, of course) so it tried to update the matched document, but noticed there was no work to do. You can see the returned nMatched is 1 but nothing was inserted or modified.
This query should return all documents that overlap in any way with the new startTime/endTime values.
db.test.find({"$or":[
{"$and":[{"startTime":{"$lte":"new_start_time"}, "endTime":{"$gte":"new_start_time"}}, //new time has an old startTime in the middle
{"startTime":{"$lte":"new_end_time"}, "endTime":{"$lte":"new_end_time"}}]},
{"$and":[{"startTime":{"$gte":"new_start_time"}, "endTime":{"$gte":"new_start_time"}}, //new time sorounds and old time
{"startTime":{"$lte":"new_end_time"}, "endTime":{"$lte":"new_end_time"}}]},
{"$and":[{"startTime":{"$gte":"new_start_time"}, "endTime":{"$gte":"new_start_time"}}, //an old time has the new endTime in the middle
{"startTime":{"$lte":"new_end_time"}, "endTime":{"$gte":"new_end_time"}}]},
{"$and":[{"startTime":{"$lte":"new_start_time"}, "endTime":{"$gte":"new_start_time"}}, //new time is within an old time
{"startTime":{"$lte":"new_end_time"}, "endTime":{"$gte":"new_end_time"}}]}
]})
You want to run both queries at the same time, which means you want them to execute synchronously in your code. This question may help:
Synchronous database queries with Node.js