I'm trying to fetch multiple documents with a single request (http://wiki.apache.org/couchdb/HTTP_Bulk_Document_API#Fetch_Multiple_Documents_With_a_Single_Request) in BigCouch, but it doesn't work as expected: it returns all the documents in the DB.
The same request with the same data works in CouchDB. Isn't this supported in BigCouch?
The request is: http://x.x.x.x:5984/database/_all_docs?keys=["test"]
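For cross-checking, here is a minimal sketch of the same lookup issued as a POST with the keys in the request body, which CouchDB's bulk document API also accepts and which avoids URL-encoding the JSON array in the query string; whether BigCouch handles the GET-with-keys form correctly is exactly what seems to be in question, so the POST form is worth trying. Host, database name, and key are the placeholders from the question, and the snippet assumes Node 18+ for the global fetch.

```typescript
// Sketch: fetch specific documents by key with POST /{db}/_all_docs,
// sending the keys in the JSON body instead of the query string.
// Host, database name, and key are the placeholders from the question.
async function fetchByKeys(keys: string[]): Promise<void> {
  const res = await fetch(
    "http://x.x.x.x:5984/database/_all_docs?include_docs=true",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ keys }),
    }
  );
  const body = await res.json();
  // Expect one row per requested key rather than every document in the DB.
  console.log(body.rows);
}

fetchByKeys(["test"]).catch(console.error);
```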
I am using MongoDB with Node.js. Whenever I get a request in Node.js, I read a document from a MongoDB collection, perform some mathematical calculations on it, and then update the document based on those calculations.
The problem is that if I send 5 requests at the same time, all of them read the same data, so the updates made to the document are not correct.
How can I solve this? I tried reading the documentation but didn't find any clue.
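One common way around this, if the calculation can be expressed with MongoDB's atomic update operators, is to let the server apply the change so concurrent requests don't overwrite each other. A minimal sketch with the Node.js driver; the collection, field, and document names are hypothetical:

```typescript
import { Collection, MongoClient } from "mongodb";

interface Stats {
  _id: string;
  total: number;
}

// Apply the change server-side with $inc instead of read-modify-write in
// Node, so five concurrent requests each add their own delta rather than
// all writing back a value computed from the same stale read.
async function applyDelta(stats: Collection<Stats>, id: string, delta: number) {
  await stats.updateOne({ _id: id }, { $inc: { total: delta } });
}

async function main(): Promise<void> {
  const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
  await client.connect();
  try {
    const stats = client.db("app").collection<Stats>("stats"); // hypothetical names
    await Promise.all([1, 2, 3, 4, 5].map((d) => applyDelta(stats, "doc1", d)));
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```

If the calculation cannot be written with update operators, an alternative is optimistic locking: keep a version field in the document, include the version you read in the update filter, and retry the whole read-compute-update cycle whenever the update matches zero documents.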
I am writing a REST API using Node.js, and the database is MongoDB 3.6.
Collection names: Subscription, Users and Offering.
I am using an aggregation to fetch data from Subscription, and with $lookup I am fetching the user who has subscribed.
In the same output (previous line) I also want to list all the records from the Offering collection as an array.
How can I do that?
Thanks in advance.
I don't think you can, and you probably shouldn't. Fetching all records of a collection is bad practice; always try to limit yourself to only the things you need.
If you really want to add results from a totally unrelated collection, then you should make a separate request and merge the two result sets in the JSON you send to the client.
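As an illustration of that suggestion (the database name and the join key userId are assumptions about the schema), something along these lines would run the Subscription/$lookup aggregation and a separate Offering query, then return both in one response:

```typescript
import { MongoClient } from "mongodb";

// Sketch: aggregation for subscriptions + subscribed user, plus a separate
// query for offerings, merged into the JSON sent to the client.
async function buildResponse() {
  const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
  await client.connect();
  try {
    const db = client.db("app"); // placeholder DB name

    const subscriptions = await db
      .collection("Subscription")
      .aggregate([
        {
          $lookup: {
            from: "Users",
            localField: "userId", // assumed join key
            foreignField: "_id",
            as: "user",
          },
        },
      ])
      .toArray();

    // Separate query for offerings; prefer a filter/projection/limit over
    // pulling the whole collection if it can grow large.
    const offerings = await db.collection("Offering").find({}).toArray();

    return { subscriptions, offerings };
  } finally {
    await client.close();
  }
}
```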
I have an API endpoint for an event store to which I can send a GET request and receive a feed of events in ndjson format. I need to automate the collection of these events and store them in a database. As the events have a nested JSON structure, and some of them are quite complex, I was thinking of storing them in a document database. Can you please help me with the options I have for capturing and storing these events, in terms of the Python libraries/frameworks I could use? To explore the events I was able to use the requests library and fetch them. I also tried asyncio and aiohttp to fetch them asynchronously, but that ran slower than the requests version. Can we build some kind of pipeline to pull these events from the endpoint at frequent intervals?
Also, some of the nested JSON keys contain dots, and MongoDB does not allow me to store them. I tried Cosmos DB as well and it worked fine (the only issue there was that if the JSON has an "ID" key it has to be unique; as these feeds have a non-unique "ID" key, I had to rename that dict key before storing the document in Cosmos DB).
Thanks,
Srikanth
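A minimal sketch of what one polling pass could look like. The question asks about Python, and with requests plus pymongo the loop has exactly the same shape; it is written in TypeScript/Node here only to match the other examples in this thread. The feed URL, interval, and database/collection names are placeholders, and dots in keys are replaced with underscores before inserting into MongoDB (the same rename-before-insert idea applies to the non-unique "ID" key when targeting Cosmos DB).

```typescript
import { Collection, MongoClient } from "mongodb";

// Replace dots in keys recursively, since this MongoDB setup rejects them.
function sanitizeKeys(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sanitizeKeys);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [
        k.replace(/\./g, "_"),
        sanitizeKeys(v),
      ])
    );
  }
  return value;
}

// One polling pass: fetch the ndjson feed, parse it line by line, sanitize
// the keys, and insert each event into the events collection.
async function pollOnce(events: Collection): Promise<void> {
  const res = await fetch("https://example.com/event-feed"); // placeholder URL
  const text = await res.text();
  for (const line of text.split("\n")) {
    if (!line.trim()) continue;
    const event = sanitizeKeys(JSON.parse(line)) as Record<string, unknown>;
    await events.insertOne(event);
  }
}

async function main(): Promise<void> {
  const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
  await client.connect();
  const events = client.db("eventstore").collection("events"); // placeholder names
  // Re-run at a fixed interval; an external scheduler (cron, Airflow, etc.)
  // is another option for running the same pass.
  setInterval(() => pollOnce(events).catch(console.error), 60_000);
}

main().catch(console.error);
```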
I have a scenario where there are multiple (~1000 - 5000) databases being created dynamically in CouchDB, similar to the "one database per user" strategy. Whenever a user creates a document in any DB, I need to hit an existing API and update that document. This need not be synchronous. A short delay is acceptable. I have thought of two ways to solve this:
Option 1:
Continuously listen to the changes feed of the _global_changes database.
Get the name of the database which was updated from the feed.
Call the /{db}/_changes API with the last seq (stored in Redis).
Fetch the changed document, call my external API and update the document.
Option 2:
Continuously replicate all databases into a single database.
Listen to the /_changes feed of this database.
Fetch the changed document, call my external API and update the document in the original database (I can easily keep track of which document originally belongs to which database).
Questions:
Does any of the above make sense? Will it scale to 5000 databases?
How do I handle failures? It is critical that the API be hit for all documents.
Thanks!
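Not from the question, just a sketch of what one iteration of approach 1 might look like for a single database: read that database's _changes feed from the last checkpoint, push each changed document through the external API, write it back, and only then advance the checkpoint. The CouchDB URL, the external API call, and the checkpoint handling (Redis) are placeholders; the _global_changes listener that tells you which database to process is omitted, and Node 18+ is assumed for the global fetch.

```typescript
const COUCH = "http://x.x.x.x:5984"; // placeholder CouchDB/BigCouch URL

// Placeholder for the existing external API that transforms the document.
async function callExternalApi(
  doc: Record<string, unknown>
): Promise<Record<string, unknown>> {
  return doc;
}

// Process all changes in one database since the stored checkpoint and
// return the new checkpoint (to be persisted in Redis by the caller).
async function processDb(db: string, since: string): Promise<string> {
  const res = await fetch(
    `${COUCH}/${encodeURIComponent(db)}/_changes?include_docs=true&since=${encodeURIComponent(since)}`
  );
  const feed = await res.json();

  for (const row of feed.results) {
    if (row.deleted || !row.doc) continue;
    const updated = await callExternalApi(row.doc);
    // PUT back with the current _rev already in the doc; a 409 means a
    // concurrent update happened and that change will show up in the feed again.
    await fetch(
      `${COUCH}/${encodeURIComponent(db)}/${encodeURIComponent(row.id)}`,
      {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(updated),
      }
    );
  }

  // Persist feed.last_seq only after the whole batch succeeded, so a crash
  // replays the batch instead of silently skipping documents (this is the
  // main lever for the failure-handling question above).
  return feed.last_seq;
}
```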
In the TAMA implementation I came across an issue with CouchDB (version 1.2.0).
We are using named documents to maintain unique-constraint logic in the application. (Named documents: documents whose _id is user-defined rather than generated by CouchDB.)
We are using the REST API to add the documents to CouchDB, and we found strange behaviour:
When we try to recreate, with an HTTP PUT, documents that were deleted in the past (because of a bug in the code), the documents are not created the first time.
First HTTP PUT: returns HTTP 200, but the doc is not saved in CouchDB.
Sending the same request again:
Second HTTP PUT: returns HTTP 200 and the doc is added to the database.
So the HTTP PUT request needs to be sent twice to create and save the doc.
I have checked that this behaviour is reproducible for deleted docs, i.e. docs for which GET on the _id returns {"error":"not_found","reason":"deleted"}.
This looks like a bug in CouchDB to me. Could you let us know of any scenario where the above behaviour might occur, and of any possible workarounds/solutions?
CouchDB has a built-in mechanism to ensure that you do not overwrite a document that someone else has changed.
If you PUT any existing document, you have to include the current doc._rev value, so that CouchDB can confirm that the document you are updating is based on the most recent version in the database.
I've not come across this case with deletions, but it makes sense to me that CouchDB should not let you silently overwrite a deleted document: the assumption has to be that you simply don't know about the deletion.
Have you tried whether you can access the revision of the deleted document and, if so, whether adding it to the new document lets the PUT succeed on the first call?
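To make that concrete, a sketch of the idea (host and database are placeholders, and the open_revs behaviour is worth verifying against your CouchDB/BigCouch version): ?open_revs=all should return the leaf revisions of the document, including the deletion tombstone, whose _rev can then be sent along with the first PUT.

```typescript
const BASE = "http://x.x.x.x:5984/database"; // placeholder host and DB

// Recreate a (possibly deleted) named document: look up the leaf revision,
// and if one exists, include it as _rev so the first PUT targets it.
async function recreate(id: string, body: Record<string, unknown>): Promise<void> {
  const leavesRes = await fetch(`${BASE}/${encodeURIComponent(id)}?open_revs=all`, {
    headers: { Accept: "application/json" },
  });
  // Expected shape: [{ "ok": { "_id": ..., "_rev": ..., "_deleted": true } }, ...]
  const leaves: Array<{ ok?: { _rev: string } }> = await leavesRes.json();
  const rev = leaves.find((leaf) => leaf.ok)?.ok?._rev;

  const putRes = await fetch(`${BASE}/${encodeURIComponent(id)}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(rev ? { ...body, _rev: rev } : body),
  });
  console.log(putRes.status, await putRes.json());
}

recreate("some-named-doc", { type: "example" }).catch(console.error); // hypothetical doc
```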