This has me stumped. I am fairly new to NoSQL and Node.js development, so "what the heck?" moments are pretty common. Yet I cannot come to grips with this one on my own.
We are inserting documents into a Mongo user collection and everything is working as it should. What I do not get, and would like some insight on, is that when my users are created, the _id value also acts as a date stamp: I can sort on this field, and user names correspond to sign-up log entries. Yet for the life of me I cannot find a way to convert it to a normal, human-readable timestamp.
A typical _id looks like 520193b4571be99a06000031.
Here is a snippet from the collection.
{
    "_id" : ObjectId("520193b4571be99a06000031"),
    "email" : "this_user@gmail.com",
    "google" : {
        "email" : "XXXXXXXXXXXXXXXXXXXXXX",
        "expires" : ISODate("2012-10-11T18:30:13.611Z"),
        "accessToken" : "A_Reallly_REALLY_LONG one!!!!####$$$$$$%%%%%%%"
    },
    "login" : "google:XXXXXXXXXXXXXXXXXXXXXX"
}
Per the docs, the creation time is embedded in the ObjectId itself:
ObjectId("520193b4571be99a06000031").getTimestamp()
Have a look here:
http://docs.mongodb.org/manual/reference/object-id/
or http://api.mongodb.org/java/2.0/org/bson/types/ObjectId.html
You can also go the other way: create an ObjectId from a date and query with it.
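For example, in the mongo shell (the ISODate shown is what the timestamp embedded in this particular id decodes to; the objectIdFromDate helper is an illustrative sketch, not a built-in, and the users collection name is assumed):

// Extract the creation time embedded in the ObjectId
ObjectId("520193b4571be99a06000031").getTimestamp()
// ISODate("2013-08-07T00:24:20Z")

// The reverse: build an ObjectId from a Date (the first 4 bytes of an ObjectId
// are the Unix timestamp in seconds), handy for range queries
function objectIdFromDate(date) {
    return ObjectId(Math.floor(date.getTime() / 1000).toString(16) + "0000000000000000");
}
db.users.find({ _id: { $gt: objectIdFromDate(new Date("2013-08-01")) } })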
{
    "_id" : ObjectId("5f0083848f162b38900dc113"),
    "isEmailVerified" : false,
    "isProfileSetup" : true,
    "my_events" : [
        ObjectId("5f005a63b5524eb74813de11"),
        ObjectId("5f005a5bb5524eb74813de0c"),
        ObjectId("5f017dfcf8e6d8615cddfd6f")
    ]
}
I have this document in the users collection and I am trying to paginate only the my_events array. I am sorry if this is a stupid question.
Firstly, is it possible to paginate this array without even fetching it completely from the db? If yes, please share the way here.
{{url_local}}/api/event?user_id=5f0083848f162b38900dc113&page=1&limit=2
The above call should find the user with the mentioned user_id and return only these values:
ObjectId("5f005a63b5524eb74813de11"),
ObjectId("5f005a5bb5524eb74813de0c")
And,
{{url_local}}/api/event?user_id=5f0083848f162b38900dc113&page=2&limit=2
it should return this:
ObjectId("5f017dfcf8e6d8615cddfd6f")
This can be achieved with aggregation in MongoDB; please check https://docs.mongodb.com/manual/reference/operator/aggregation/arrayElemAt/
But in general the advantage of paging is lost, as aggregations are heavy on the database. If you can, you should consider changing the document structure: create a collection for the subdocuments and add a reference to your parent document.
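As a minimal sketch of that aggregation (I use the related $slice operator, which maps to skip/limit more directly than $arrayElemAt; the users collection name is an assumption):

var page = 1, limit = 2;
db.users.aggregate([
    { $match: { _id: ObjectId("5f0083848f162b38900dc113") } },
    // skip (page - 1) * limit array elements, keep the next `limit`
    { $project: { _id: 0, my_events: { $slice: ["$my_events", (page - 1) * limit, limit] } } }
])
// page = 1 returns the first two event ids, page = 2 the third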
Please bear with me if my question is simple; I'm very new to CouchDB:
I have this document:
{
    "Name" : "Raju",
    "age" : 23,
    "Designation" : "Designer"
}
How can I update my document to:
{
    "Name" : "Raju",
    "age" : 23
}
Can I use insert for the document with my desired data, i.e. the {Name: Raju, age: 23} object? Does it delete the Designation field?
In short, yes, you can do that: CouchDB does not attempt to merge updates, it just replaces the doc with whatever you sent it. Please take a moment to read http://docs.couchdb.org/en/1.6.1/intro/api.html#documents
It is a very simple, well-written set of examples. I would encourage you to read the CouchDB docs first if you are new to it.
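A minimal sketch over plain HTTP (database name mydb and doc id raju are assumptions; the _rev must be the doc's current revision, which the GET returns):

# Fetch the current doc to learn its _rev
curl http://localhost:5984/mydb/raju
# {"_id":"raju","_rev":"1-xxxx","Name":"Raju","age":23,"Designation":"Designer"}

# PUT the full replacement body; any field you omit (here Designation) is gone
curl -X PUT http://localhost:5984/mydb/raju \
     -H 'Content-Type: application/json' \
     -d '{"_rev":"1-xxxx","Name":"Raju","age":23}'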
I was looking for a way to calculate a ratio in Kibana. After much research I found this way: using the "JSON Input" feature in a visualisation.
I have all my information in one index, with two types of documents (boots and reboots).
I am looking for a script which counts the number of documents of type boots, does the same for the reboots type, and then divides the second by the first.
It sounds really easy, but I have not found any way to do it, and I am not yet comfortable enough with Groovy to write it myself.
I found many ways to manipulate document values (doc['mydocname'].values etc.), but nothing about the type.
Thanks in advance.
EDIT: I tried this:
{
    "aggs" : {
        "boots_count" : { "value_count" : { "_type" : "boots" } }
    }
}
This is supposed to count the number of values of a field (here the field _type) in the index. But when I put it into "JSON Input" in a visualisation, it results in an error:
Error: Request to Elasticsearch failed: {"error":"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[BbXJ0O6tRxa_OcyBfYCGJQ][informationbe][0]: SearchParseException[[informationbe][0]: from[-1],size[0]: Parse Failure [Failed to parse source [{\"size\":0,\"aggs\":{\"2\":{\"terms\":{\"field\":\"#sitePoste\",\"size\":5,\"order\":{\"1\":\"desc\"}},\"aggs\":{\"1\":{\"avg\":{\"script\":\"0\",\"lang\":\"expression\",\"ratio\":{\"boots_count\":{\"value_count\":{\"_type\":\"boots\"}}}}}}}}
I am doing something wrong, but where?
EDIT 2: On the other hand, I am trying scripted fields, with something like this using Lucene expressions:
doc['_type:boots'].count / doc['_type:reboots'].count
but it does not work either. I am pretty confident about the doc['_type:boots'] part; I guess the problem is in the .count part.
After many attempts, I understand better and better how it works. The default scripted-field scope is the document, not the whole index, so I cannot count values across the whole index from within a document.
I am looking for a workaround; I'll post it here if I find something interesting.
I finally solved my problem:
I added a scripted field: if the type of the document is boots, the scripted field is 1, else 0. Then I created a search with only boots and reboots documents (filter _type:boots OR _type:reboots) and computed the average of the scripted field in a metric.
Everything works well!
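For reference, the scripted field can be as small as this (Groovy; it assumes _type is accessible through doc values in your ES/Kibana version, which is worth double-checking):

// 1 for boots documents, 0 for reboots documents
doc['_type'].value == 'boots' ? 1 : 0

With the search filtered to _type:boots OR _type:reboots, the average of this field is boots / (boots + reboots); if you need the reboots/boots ratio instead, it is (1 - avg) / avg.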
I'd like to have a flexible schema in Mongo, but would also like to enforce a schema for subsequent updates. Is it possible to store the validation in the document like the following?
I have tried this, but can't seem to convert the string into a Joi object.
{
    "_id" : ObjectId("53d5dce1fc87899b2b3c2def"),
    "name" : {
        "validator" : "Joi.string().alphanum().min(3).max(30).required()",
        "value" : "Bob"
    },
    "email" : {
        "validator" : "Joi.string().email()",
        "value" : "bob@gmail.com"
    }
}
Most of the time, storing executable code in a database is not a good idea. What will you do when you realize a validator function that is already stored in a billion documents needs to be modified? What if someone manages to insert a document whose validation code does more malicious stuff than just validating?
I would really recommend determining the type of the document and the appropriate validation routine for each type in node.js.
But if you insist on having executable code for each document in the document itself, you can run that code in node.js with vm.runInContext(object.validator, context), where context is a sandbox created via vm.createContext(). Keep in mind that this requires access to the whole document in node.js, so you cannot do partial updates. Also keep in mind that, as I said, it might not be a very good idea.
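A minimal sketch of that approach (Joi v16+ API assumed; the document shape mirrors the question):

const vm = require('vm');
const Joi = require('joi');

// Document as it might come back from MongoDB (shape from the question)
const doc = {
  name:  { validator: "Joi.string().alphanum().min(3).max(30).required()", value: 'Bob' },
  email: { validator: "Joi.string().email()", value: 'bob@gmail.com' },
};

// Sandbox exposing only Joi; note Node's vm is NOT a real security boundary
const sandbox = vm.createContext({ Joi });

for (const [field, { validator, value }] of Object.entries(doc)) {
  // Evaluate the stored string into a real Joi schema object
  const schema = vm.runInContext(validator, sandbox);
  const { error } = schema.validate(value);
  console.log(field, error ? error.message : 'ok');
}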
The upcoming MongoDB 3.2 release will add document validation (slides).
It works in a different way, but looking at your requirements it should be possible to achieve what you want: you can specify the type of a field, check for existence, and match against a regex.
Here is a little bit about validation. You can specify validation rules for each collection through the validator option, using almost all mongo query operators (except $geoNear, $near, $nearSphere, $text, and $where).
To create a new collection with a validator, use:
db.createCollection("your_coll", {
    validator: { `your validation query` }
})
To add a validator to an existing collection, use the collMod command:
db.runCommand({
    collMod: "your_coll",
    validator: { `your validation query` }
})
Validation works only on insert/update, so when you create a validator on your old collection, the previous data will not be validated (you can add application-level validation for the previous data). You can also specify validationLevel and validationAction to control what happens when a document does not pass validation.
If you try to insert/update a document that fails validation (and have not specified any unusual validationLevel/action), then you will get an error on writeResult (sadly enough, the error does not tell you what failed; you only get the default "Document failed validation"):
WriteResult({
    "nInserted" : 0,
    "writeError" : {
        "code" : 121,
        "errmsg" : "Document failed validation"
    }
})
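An end-to-end sketch in the 3.2 shell (the collection name and rules are invented for illustration):

// Require email to be a string and age to be at least 18
db.createCollection("people", {
    validator: { email: { $type: "string" }, age: { $gte: 18 } },
    validationLevel: "strict",  // apply to every insert/update (the default)
    validationAction: "error"   // reject failing writes; "warn" only logs
})
db.people.insert({ email: "a@b.com", age: 25 })  // nInserted: 1
db.people.insert({ email: 42, age: 12 })         // fails with code 121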
I really don't know what's going on with my configuration, but I'm just not able to query anything after indexing (I don't even know if I'm doing the indexing part correctly). Could someone please tell me what each of the following means and what it should be?
I have a CouchDB database called bestdb. Inside this database I have document types like product and customer.
Now I installed Elasticsearch version 0.18.7 and the corresponding CouchDB river. I started Elasticsearch and CouchDB, and set the network.host of Elasticsearch to an IP address: 10.0.0.129. I followed the instructions in the tutorial:
curl -XPUT '10.0.0.129:9200/_river/{A}/_meta' -d '{
    "type" : "couchdb",
    "couchdb" : {
        "host" : "localhost",
        "port" : 5984,
        "db" : "bestdb",
        "filter" : null
    },
    "index" : {
        "index" : "{B}",
        "type" : "{C}",
        "bulk_size" : "100",
        "bulk_timeout" : "10ms"
    }
}'
{A}: What is this? My understanding is that it is just an internal Elasticsearch index, right? It is not used for querying or searching, right? So it could be any name?
{B}: What is this index? How is it different from the one above? What should its value be in my scenario?
{C}: Is this related to the document type in CouchDB, like product or customer?
The online tutorial just sets everything to the same value. What would my curl statement look like if I wanted to query all product documents or customer documents?
Thank you to whoever clears things up a bit for me.
Regards,
Mark Huang
kimchy's documentation often leaves a little bit to the imagination. :-)
A is the river name. A river is just an ES document, stored in an index named _river, with a type named whatever you want, and a doc id of _meta.
B & C are the local index/_type that your bestdb CouchDB _changes stream will get indexed into. These can be overridden by _index and _type fields in your CouchDB documents. If neither is supplied, they default to your CouchDB database name: bestdb/bestdb.
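To the "query all product documents" part of the question: assuming the river was created with "index": "bestdb", and your CouchDB docs carry _type fields (or you run one river per document type), the search would look something like:

# All documents of type "product" in the "bestdb" index
curl -XGET 'http://10.0.0.129:9200/bestdb/product/_search?q=*:*&pretty=true'
# Likewise for customers
curl -XGET 'http://10.0.0.129:9200/bestdb/customer/_search?q=*:*&pretty=true'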