How can I get the last created document in CouchDB? Maybe I can somehow use the _changes feature of CouchDB? But the documentation says that I can only get a list of documents ordered by first created, and there is no way to change the order.
So how can I get the last created document?
You can get the changes feed in descending order, much like a view:
GET /dbname/_changes?descending=true
You can use limit= as well, so:
GET /dbname/_changes?descending=true&limit=1
will give the latest update.
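The response carries the id and latest rev of the most recently updated document, roughly like this (the id, rev and seq values here are placeholders, and the seq format varies between CouchDB versions):
{"results":[
  {"seq":42,"id":"some-doc-id","changes":[{"rev":"3-abc123"}]}
],"last_seq":42}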
Your only surefire way to get the last created document is to include a timestamp (created_at or something) with your document. From there, you just need a simple view to output all the docs by their creation date.
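A minimal sketch of such a view's map function, assuming every document gets a created_at field in ISO 8601 format (so string order matches chronological order):
function (doc) {
  if (doc.created_at) {
    emit(doc.created_at, null);
  }
}
Querying it with ?descending=true&limit=1&include_docs=true then returns the most recently created document.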
I was going to suggest using the last_seq information from the database, but the sequence number changes with every single write, and replication also complicates the matter further.
Related
I would like to get all the docs in CouchDB updated in a specific time range.
I'm using the API below but I don't get any results.
/_all_docs?startkey="2019-01-01T00:00:00Z"&endkey="2020-01-01T00:00:00Z"
Any suggestions are welcome.
Andrea
_all_docs is keyed by document ID, not by a timestamp. For your query to be useful, you'll need to create a custom view based on a timestamp (and ensure the timestamp is updated by your code).
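A minimal sketch of such a view, assuming your documents carry an updated_at field in ISO 8601 format that your code sets on every write (the design document and view names here are made up):
// map function of _design/docs/_view/by_updated_at
function (doc) {
  if (doc.updated_at) {
    emit(doc.updated_at, null);
  }
}
GET /dbname/_design/docs/_view/by_updated_at?startkey="2019-01-01T00:00:00Z"&endkey="2020-01-01T00:00:00Z"&include_docs=true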
I need to generate ids with a convention, for example:
Instead of getting: "538cd180e381f20d1c1cd2a2"
I would like to have an ID like this one: "p38cd180e381f20d1c1cd2a2"
So what I want is for my IDs to start with a consonant.
Does anyone know how to accomplish that within the driver, i.e. getting that behaviour from "new mongo.ObjectId()"?
Thanks in advance.
You can use the following to get an id starting with a consonant:
db.collection.insert({"_id":"p"+new ObjectId()})
You can use any other string in place of "p"; it will be prepended to the id generated by MongoDB.
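If you are using the Node.js driver, a rough sketch of the same idea (the connection string, database and collection names are placeholders):
const { MongoClient, ObjectId } = require('mongodb');

async function insertWithPrefixedId() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  try {
    // prepend a consonant to the hex string of a freshly generated ObjectId
    const customId = 'p' + new ObjectId().toHexString();
    await client.db('mydb').collection('docs').insertOne({ _id: customId, name: 'example' });
  } finally {
    await client.close();
  }
}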
Short answer: sorry, there is no standard way to achieve this as of now.
Detailed answer and workaround: MongoDB/driver-generated ids are a combination of the creation time (as a timestamp), an incrementing counter, the machine on which the id was generated, and the process id of the process that generated it. All of this information is embedded in the generated id and can be extracted back out. For now, this is what you get, and there is no support for plugging a custom format into the driver's id-generation algorithm.
If you want to customize your id generation and still make use of these properties, you can store the information that MongoDB uses for id generation in the document itself. That way you can reproduce everything MongoDB would derive from the id, and while inserting the document you can give it a customized id that meets your requirements.
So if you later want to make comparisons based on creation time, or maybe the machine, you can do that from the information stored in the docs themselves.
Use the code db.collection.insert({"customId":"p"+new ObjectId()}) and let your code use this customId.
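For example, to get the creation time back later (doc here stands for a document previously read from the collection, and the one-character prefix from above is assumed):
ObjectId(doc.customId.substring(1)).getTimestamp()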
I have a large set of documents in a CouchDB database that were just accidentally bulk deleted using _deleted:true. I also have a backup for this set of data that includes their last known good revision and metadata. I need to maintain the same _id, so simple restore with a new _id is not an option.
Compaction has not been run, and I can access any of these documents via the ?rev= URL parameter, as well as their attachments (which are needed).
What I need to do is "restore" these documents to the revision I have on file. Surprisingly, I have come up empty with any queries on how to achieve this. Tips or hacks appreciated.
If you just PUT the whole document, including the attachment stubs, back into the DB with the deleted rev, but without the _deleted:true field, then all will be well.
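In HTTP terms, a rough sketch (docid and the rev values are placeholders):
GET /dbname/docid?rev=2-lastgoodrev
fetches the last good body (still readable until compaction runs), and
PUT /dbname/docid?rev=3-deletedrev
writes that body back against the current (deleted) revision, keeping the attachment stubs and leaving out _deleted:true.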
I have changed one index in schema.xml and now want to refresh all existing documents.
How do I do that? I don't want to upload all the documents again...
Any suggestions?
If you changed the schema you HAVE TO reindex, after restarting Solr of course.
Updated:
If by 'adding one extra index' you mean adding one core, that core is empty, so you have to add everything you need there.
If you change the way a field is analyzed, or add a field, etc., you have to reindex again; your docs are not changed to reflect the schema change until you reindex.
How are you indexing the data? If you are getting the data from a database you can use the DataImportHandler with a delta-import query. A delta-import will only pick up records that have been added or changed since the last import. Check out this link for the full documentation:
http://wiki.apache.org/solr/DataImportHandler
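Once the delta queries are configured in your data-config, triggering the update is just a request to the DIH handler (the core name and handler path here are assumptions about your setup):
GET /solr/mycore/dataimport?command=delta-import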
How to get Post with Comments Count in single query with CouchDB?
I can use map/reduce to build a standalone view [{key: post_id, value: comments_count}], but then I'd have to hit the DB twice - one query to get the post, another to get comments_count.
There's also another way (Rails does this) - count comments manually on the application server and save the result in a comment_count attribute of the post. But then we need to update the whole post document every time a comment is added or deleted.
It seems to me that CouchDB isn't tuned for this approach: unlike an RDBMS, where we can update only the comment_count column, in CouchDB we are forced to update the whole post document.
Maybe there's another way to do it?
Thanks.
The view's returned JSON includes the document count as 'total_rows', so you don't need to compute anything yourself; just emit all the documents you want counted.
{"total_rows":3,"offset":0,"rows":[
{"id":...,"key":...,value:doc1},
{"id":...,"key":...,value:doc2},
{"id":...,"key":...,value:doc3}]
}
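A minimal sketch of a map function that could back such a view (the type and post_id field names are assumptions about how comments are stored):
function (doc) {
  if (doc.type === "comment" && doc.post_id) {
    emit(doc.post_id, null);
  }
}
An unrestricted query returns total_rows equal to the number of comments emitted; adding ?key="<post_id>" narrows the rows to a single post, so its comments can be counted from the same response.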