Multiple PouchDB to single CouchDB

I need to submit data from multiple mobile apps. In my mobile app I am planning to use PouchDB to store documents, and later I want these documents to sync one-way only to CouchDB.
What will happen if I submit data from multiple devices? Will PouchDB create the same document ID and overwrite data in CouchDB?

The document IDs will not be the same, assuming you're letting PouchDB create them; the likelihood of PouchDB generating the same ID twice is extremely low.

Listen to changes of all databases in CouchDB

I have a scenario where multiple (~1000 - 5000) databases are created dynamically in CouchDB, similar to the "one database per user" strategy. Whenever a user creates a document in any of these databases, I need to call an existing API and update that document. This need not be synchronous; a short delay is acceptable. I have thought of two ways to solve this:
Option 1:
1. Continuously listen to the changes feed of the _global_changes database.
2. From the feed, get the name of the database that was updated.
3. Call the /{db}/_changes API with the last seq (stored in Redis).
4. Fetch the changed document, call my external API, and update the document.
Option 2:
1. Continuously replicate all databases into a single database.
2. Listen to the /_changes feed of this database.
3. Fetch the changed document, call my external API, and update the document in the original database (I can easily keep track of which document originally belongs to which database).
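The feed-handling part of the first option could be sketched like this in plain JavaScript. The event shape assumed here is the documented /_db_updates format (which exposes the _global_changes data): one JSON object per line with db_name and type fields, and blank heartbeat lines in between.

```javascript
// Parse one line of a continuous /_db_updates feed and return the
// database name if the event is an update, or null otherwise.
function dbNameFromEvent(line) {
  if (!line.trim()) return null; // skip heartbeat newlines
  const event = JSON.parse(line);
  return event.type === 'updated' ? event.db_name : null;
}

console.log(dbNameFromEvent('{"db_name":"userdb-1","type":"updated","seq":"3-g1A"}'));
// "userdb-1"
console.log(dbNameFromEvent('{"db_name":"userdb-2","type":"created","seq":"4-g1B"}'));
// null
```

Each non-null result would then drive step 3, a request to /{db}/_changes with the seq you last stored for that database.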
Questions:
1. Does either of the above make sense? Will it scale to 5000 databases?
2. How do I handle failures? It is critical that the API be called for all documents.
Thanks!

How can I intercept a call from PouchDB to CouchDB, using .NET

I am learning PouchDB with CouchDB and trying to wrap my head around intercepting documents sent to the CouchDB server and performing an action on them, whether that be creating other documents, updating the user table, etc.
On the server, the JSON document would be passed through a business layer, preferably in .NET, before it is submitted to the CouchDB server.
Is this possible? If not, is there another way to achieve this?
Thanks!
Thanks!
On the server side, you can listen to the _changes feed from CouchDB (docs here) and react whenever a document is added, modified, or deleted. This could be useful for reporting/messaging/aggregation/etc.
Alternatively, if you want to do some schema validation on the documents before they are accepted, then you should look into adding a design doc with a validate_doc_update field (docs here).
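A minimal validate_doc_update function looks like the sketch below. CouchDB calls it with (newDoc, oldDoc, userCtx, secObj) on every write and rejects the write if it throws; the "type" field checked here is just an illustrative example, not a CouchDB requirement.

```javascript
// Body of a validate_doc_update function as it would appear
// (stringified) in a CouchDB design document.
function validateDocUpdate(newDoc, oldDoc, userCtx, secObj) {
  if (newDoc._deleted) return; // allow deletions in this example
  if (!newDoc.type) {
    // Throwing {forbidden: ...} makes CouchDB reject the write with a 403.
    throw { forbidden: 'every document must have a "type" field' };
  }
}

// Local check of the same logic:
try {
  validateDocUpdate({ title: 'no type' }, null, {}, {});
} catch (e) {
  console.log(e.forbidden); // 'every document must have a "type" field'
}
```

Note that this only lets you accept or reject documents; it cannot transform them or trigger side effects, which is why the _changes-feed listener is the right place for the business-layer reactions you describe.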

PouchDB - start local, replicate later

Does it create any major problems if we always create and populate a PouchDB database locally first, and then later sync/authenticate with a centralised CouchDB service like Cloudant?
Consider this simplified scenario:
1. You're building an accommodation booking service such as a hotel search or Airbnb.
2. You want people to be able to favourite/heart properties without having to create an account, and will use PouchDB to store this list. The idea is to not break their flow by making them create an account when it isn't strictly necessary.
3. If users wish to opt in, they can later create an account and receive credentials for a "server side" database to sync with.
At the point of step 3, once I've created a per-user CouchDB database server-side and assigned credentials to pass back to the browser for sync/replication, how can I link that up with the PouchDB data already created? i.e.
Can PouchDB somehow just reuse the existing database for this sync, therefore pushing all existing data up to the hosted CouchDB database, or..
Instead do we need to create a new PouchDB database and then copy over all docs from the existing (non-replicated) one to this new (replicated) one, and then delete the existing one?
I want to make sure I'm not painting myself into any corner I haven't thought of, before we begin the first stage, which is supporting non-replicated PouchDB.
It depends on what kind of data you want to sync from the server, but in general, you can replicate a pre-existing database into a new one with existing documents, just so long as those document IDs don't conflict.
So probably the best idea for the star-rating model would be to create documents client-side with IDs like 'star_<timestamp>' to ensure they don't conflict with anything. Then you can aggregate them with a map/reduce function.
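The ID scheme and the aggregation could look like the following plain-JavaScript sketch (field names are illustrative; in the app these objects would be put() into the existing PouchDB database, which can then be replicated as-is once credentials arrive):

```javascript
// Create a favourite document whose _id carries a prefix and timestamp,
// so it cannot collide with anything replicated in later.
function makeStarDoc(propertyId, now = Date.now()) {
  return { _id: `star_${now}`, propertyId, starred: true };
}

// A map/reduce-style aggregation over such docs: count stars per property.
function countStars(docs) {
  const counts = {};
  for (const doc of docs) {
    if (!doc._id.startsWith('star_')) continue; // only aggregate star docs
    counts[doc.propertyId] = (counts[doc.propertyId] || 0) + 1;
  }
  return counts;
}

const docs = [makeStarDoc('hotel-1', 1), makeStarDoc('hotel-2', 2), makeStarDoc('hotel-1', 3)];
console.log(countStars(docs)); // { 'hotel-1': 2, 'hotel-2': 1 }
```

In other words: the pre-existing local database can simply become the replication source; no copy-and-delete dance is needed as long as its document IDs are namespaced like this.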

How to get all documents from CouchDB that start with some word. Is it possible in LightCouch?

I am using PouchDB on the client side and CouchDB on the server side, and both are in sync.
I am accessing CouchDB from Java using the client API LightCouch.
I am storing transaction data; each transaction is stored as a document with a prefixed _id like
Transaction_1,
Transaction_2
..
..
and so on.
Now I want to access, on the server, all the documents whose _id field starts with Transaction.
This is possible in PouchDB and I am able to achieve it there.
But I am wondering how I can achieve the same on the server side, in Java, using LightCouch.
Or is there any Java client API available that provides this kind of functionality?
To find all documents whose _ids match a certain prefix, you only need to do:
/_all_docs?startkey="foo"&endkey="foo\uffff"
(For the prefix "foo".)
I wrote up a bit about why this works here.
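The trick works because _all_docs returns rows in sorted key order, so the range [prefix, prefix + '\uffff'] covers exactly the IDs that start with the prefix. A plain-JavaScript simulation of that range selection (CouchDB's actual collation is ICU-based, but for ASCII prefixes like this it behaves the same way):

```javascript
// Simulate CouchDB's _all_docs range query: select every _id in the
// half-open lexicographic range [prefix, prefix + '\uffff'].
function selectByPrefix(ids, prefix) {
  const startkey = prefix;
  const endkey = prefix + '\uffff';
  return ids.filter(id => id >= startkey && id <= endkey).sort();
}

const ids = ['Transaction_1', 'Transaction_2', 'User_1', 'Config'];
console.log(selectByPrefix(ids, 'Transaction_')); // ['Transaction_1', 'Transaction_2']
```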
LightCouch aims at providing a simple API for communicating with CouchDB databases. What you need is a CouchDB view on the server side, which you can then request with LightCouch.
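Such a view only needs a map function that emits the document ID as the key; you can then query it with the same startkey/endkey prefix range as _all_docs. A sketch of the map function (stored as a string in the design doc; the local emit shim below just shows what CouchDB would collect):

```javascript
// Map function for a CouchDB view that indexes documents by _id,
// making them queryable by prefix with startkey/endkey.
function map(doc) {
  emit(doc._id, null);
}

// Local check: capture what the map function would emit for one doc.
const rows = [];
function emit(key, value) { rows.push([key, value]); }
map({ _id: 'Transaction_1' });
console.log(rows); // [ [ 'Transaction_1', null ] ]
```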

Saving chat transcripts in nosql databases

I'm building a chat server app using node.js. I need to save chat transcripts in the database and would like to try a NoSQL database like MongoDB. If I were in the relational DB world, I would create users, chat_sessions and chat_messages tables and, for every new message, append a new record to the chat_messages table.
What is the best approach in NoSQL? Do I have to create a chat_session document and, inside it, a chat_messages structure that is updated for every new message, or is there a better way to do it for NoSQL/MongoDB?
You would use a similar approach and insert each new message as a separate document into the collection (possibly one for each room or channel).
Documents in MongoDB have a 16 MB limit, so storing the entire history in one document, which grows in an unbounded fashion, would be a bad choice.
You can de-normalize usernames and store them on the messages themselves to avoid querying two collections (this would likely be a join in a relational database).
It may make sense to split the data into multiple databases rather than collections (per room, channel, whatever) because currently MongoDB has a database-level write lock. This would allow you to achieve greater write concurrency with MongoDB and node.js.
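A message-per-document shape with the username de-normalized onto it could look like this (all field names here are illustrative, not a fixed schema):

```javascript
// Build one chat message as its own document. session_id groups
// messages into a conversation; username is denormalized onto the
// message so rendering a transcript needs no second lookup.
function makeMessage(sessionId, username, text, sentAt = new Date()) {
  return {
    session_id: sessionId,
    username,
    text,
    sent_at: sentAt.toISOString(), // sortable timestamp for transcript order
  };
}

const msg = makeMessage('room-42', 'alice', 'hello');
console.log(msg.session_id, msg.username); // room-42 alice
```

Each such object would be inserted into the messages collection as its own document, and a transcript is then just a query on session_id sorted by sent_at.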
