Multiple pouchdbs vs single pouchdb - couchdb

I set up CouchDB with multiple databases for use in my Ionic 3 app. When integrating it with PouchDB for client-side syncing, I created a separate PouchDB for each of those databases, five in total. My question is
whether it is a good idea to store multiple PouchDBs on the client side, given the number of HTTP connections that syncing them would create. Or should I put all the CouchDB databases into one database and use a type field to separate the docs? Then only one PouchDB would need to be created and synced on the client.
Also, with the pouchdb-authentication plugin, authentication is valid only for the database on which the signup/login methods were called; accessing the other databases returns unauthenticated.

I would say that if your PouchDBs are syncing in real time, it should be cheaper to reduce them to one database and distinguish records by a type field.
That said, it is not very costly, and still quite convenient, to set up a separate changes feed per item store (e.g. TodoStore, CommentStore, etc.), each with a filter function that passes only docs of the matching type into the store they belong to. The same can be achieved by filtering based on design docs (I'm not sure that saves anything, at least in the browser).
A single changes feed distributing docs to the stores would probably be the cheapest solution. But the filter function cannot be changed after the changes feed has been established, so it must know about all the stores (i.e. doc types) beforehand.
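One way to wire the per-store filter functions described above is sketched below. The `makeTypeFilter` helper and the store/doc names are illustrative, not from the question; the `db.changes` usage is shown as a comment because it needs a live PouchDB instance.

```javascript
// Pure filter: pass only docs whose `type` matches the store's type.
function makeTypeFilter(type) {
  return (doc) => doc.type === type;
}

// Hedged usage with PouchDB's changes API (assumes a `db` instance and a
// `todoStore` exist):
// db.changes({ live: true, include_docs: true, filter: makeTypeFilter('todo') })
//   .on('change', (change) => todoStore.add(change.doc));

const todoFilter = makeTypeFilter('todo');
console.log(todoFilter({ _id: '1', type: 'todo', text: 'buy milk' })); // true
console.log(todoFilter({ _id: '2', type: 'comment', text: 'nice' }));  // false
```

One feed per store means one filter per feed; the single-feed variant would instead dispatch on `doc.type` inside the `change` handler, which is why it must know every doc type up front.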

Related

Azure Change Feed Support and multiple local clients

We have a scenario where multiple clients would like to get updates on DocumentDB inserts, but they are not online all the time.
Example: suppose there are three clients registered with the system, but only one is online at present. When the online client inserts/updates a document, we want each offline client, on waking up, to look at the change feed and update itself independently.
Is there a way for each client to maintain its own feed against the same partition (from when it was last synced) and get the changes when it comes online?
When using the change feed, you use a continuation token per partition. Change feed continuation tokens do not expire, so you can continue from any point. Each client can keep its own continuation token and read changes whenever it wakes up; this essentially means that each client can keep its own feed for each partition.
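The per-client bookkeeping can be sketched as follows. The feed is simulated as an in-memory array with increasing sequence numbers standing in for continuation tokens; in Cosmos DB / DocumentDB the token would come from the change-feed response and be persisted per client.

```javascript
// Minimal sketch of per-client continuation-token bookkeeping (all names
// and the in-memory feed are illustrative assumptions).
const feed = []; // [{ seq, doc }]
let nextSeq = 1;

function insertDoc(doc) {
  feed.push({ seq: nextSeq++, doc });
}

// Each client keeps its own token: the last sequence it has processed.
function readChanges(token) {
  const changes = feed.filter((c) => c.seq > token);
  const newToken = changes.length ? changes[changes.length - 1].seq : token;
  return { changes, newToken };
}

insertDoc({ id: 'a' });
insertDoc({ id: 'b' });

let clientAToken = 0;                     // client A was never synced
const first = readChanges(clientAToken);  // sees both 'a' and 'b'
clientAToken = first.newToken;

insertDoc({ id: 'c' });                   // happens while client A is offline
const second = readChanges(clientAToken); // on wake-up, sees only 'c'
```

Because tokens do not expire, a client that has been offline for a long time simply resumes from its stored token.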

Node Express APP 1 to N (with MongoDB)

We are developing a big Node app with Express and MongoDB. We are trying to get the best performance, because we will have multiple clients (maybe 100+) running on the same server.
We were thinking of a one-to-n app: one instance, one database, and multiple clients accessing their own domains.
I want to know the best setup for this scenario (one server, multiple clients) in terms of performance and development:
1. One instance, one database (client data identified by a company ObjectId on each entry; clients access a domain or subroute)
2. One instance, multiple collections (or databases; which is better?)
3. Multiple instances, multiple collections
4. Any other ideas?
With the first setup, the developers always have to worry about the current company, which can limit the app.
With the second, that concern remains, but the company no longer interferes with the database entries (a cleaner model).
With the third (maybe the best for development), each instance handles only one company, which opens up a lot of possibilities, but it may bring performance issues (all instances would run on a single server).
There may be better setups I have not thought of.
Notes:
We are using the mongoose library.
I have some experience with WordPress, and I like the way its themes and plugins are built. We are trying to achieve a level of performance similar to WordPress with PHP (several WordPress sites running efficiently on one server).
You don't need to manage multiple instances. You can create a company collection, store every company in it, and then reference those company _id values from the users collection. Please make sure you have a unique index on the company collection. (Such scenarios are also really easy to handle in an RDBMS like MySQL.)
One more thing: you can also run multiple mongod processes on the same machine just by changing the port, if that sort of solution is what you're looking for.
Please note the following before using MongoDB:
In my view, MongoDB only really pays off once you have terabytes of data; it doesn't make much sense for a few MBs or GBs.
Indexes are a must in MongoDB if you want maximum performance.
MongoDB keeps its indexes in main memory, and if the index size exceeds the available memory it starts swapping indexes, which is really costly. So please make sure your application and your database run on separate servers.
I still say an RDBMS would be better if you don't have terabytes of data to deal with.
Why this approach? Let me give you a scenario.
You have 100 companies, and within each company 1,000 users, i.e. 100,000 records in your user collection. Now, to delete, update, or fetch a single user of a single company, I don't need to traverse the complete database: I can create a compound index on the user collection over user id and company id, or even just filter the query by company id.
For indexes, please read this:
https://docs.mongodb.com/manual/core/index-compound/
And by the way, we are not saving the company as an embedded object; I am saving only the value of _id from the company collection.
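The layout described above can be sketched like this. The data, field names, and the `usersOfCompany` helper are illustrative; the `createIndex` call is shown as a comment because it needs a live MongoDB connection.

```javascript
// Users reference the company _id; a compound index backs per-company lookups.
// With the MongoDB driver the index would be created as (not run here):
// await db.collection('users').createIndex({ companyId: 1, _id: 1 });

const users = [
  { _id: 'u1', companyId: 'c1', name: 'Ann' },
  { _id: 'u2', companyId: 'c1', name: 'Bob' },
  { _id: 'u3', companyId: 'c2', name: 'Cleo' },
];

// Logical equivalent of db.users.find({ companyId }) — with the compound
// index, the server answers this without scanning all 100,000 documents.
function usersOfCompany(companyId) {
  return users.filter((u) => u.companyId === companyId);
}

console.log(usersOfCompany('c1').map((u) => u.name)); // [ 'Ann', 'Bob' ]
```

Because `companyId` is the index prefix, the same index also serves the narrower per-user query `{ companyId, _id }`.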

Mongodb auto create index for new collections

I have a Node.js server with a MongoDB database. Multiple clients use this same database, and each client has his own collection. The collections are named after the client's id.
Since every client uses a different name for his data, the first operation a new client performs on the database creates a new collection for him.
I need all the collections to have a specific index. Is there a way to create this index automatically for every new collection?
No, there is no such command.
But don't be afraid to call createIndex often. The documentation guarantees that when an index with the same settings already exists, nothing happens. So you can attach the call to some common database operation executed by new users; it's no big deal if it gets called more than once.
To highlight this behavior, the method used to be called ensureIndex, but that name is deprecated.
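The idempotent behavior can be illustrated with an in-memory stub of a collection (the real MongoDB driver exposes `createIndex` with the same key/options shape, but returns a promise and talks to the server; the stub and its naming rule are assumptions for the demo):

```javascript
// Stub collection: createIndex with already-existing settings is a no-op,
// mirroring the guarantee the MongoDB docs make for the real method.
class StubCollection {
  constructor() {
    this.indexes = new Map();
  }
  createIndex(keys, options = {}) {
    // Default index name is "field_direction" joined with underscores.
    const name =
      options.name ||
      Object.entries(keys).map(([k, v]) => `${k}_${v}`).join('_');
    if (!this.indexes.has(name)) {
      this.indexes.set(name, { keys, options });
    }
    return name;
  }
}

const coll = new StubCollection();
coll.createIndex({ userId: 1 });
coll.createIndex({ userId: 1 }); // safe: nothing happens the second time
console.log(coll.indexes.size); // 1
```

In practice you would put the real `createIndex` call in whatever code path first touches a new client's collection, and let repeated calls fall through harmlessly.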
By the way: having a different collection for every client is quite an unusual architecture. It has some drawbacks, like the problem with indexes and other collection-level configuration you already discovered, but also others, like being unable to run any query that spans data from more than one client. With the default storage engine there is the advantage that clients cannot lock each other with collection-wide locks, but with the WiredTiger engine that advantage is moot, because WiredTiger locks only at the document level.
A more conventional approach is to have one collection for all users, with a field in each document that identifies the owning user and that is part of every index used by user-specific queries.

Generate document ID server side

When creating a document and letting Couch create the ID for you, does it check if the ID already exists, or could I still produce a conflict?
I need to generate UUIDs in my app, and wondered if it would be any different than letting Couch do it.
Use a POST /db request for that, but be aware that the underlying HTTP POST method is not idempotent: a client may automatically retry it after a networking problem, which can create duplicate documents in the database.
As Kxepal already mentioned, it is generally not recommended to POST a document without providing your own _id.
You could, however, use GET /_uuids to retrieve a list of UUIDs from the server and use those for storing your documents. The UUIDs returned depend on the configured algorithm, but the chance of a duplicate is (for most purposes) insignificantly small.
You can and should assign a document id, even when using the bulk document interface. Skipping that step makes the problem of resubmitted requests creating duplicate documents even worse. On the other hand, if you do assign ids and part of the request reaches CouchDB twice (as with a reconnecting proxy), the response will include some conflicts, which you can safely ignore: you know the conflict came from you, in the same request.

How to construct this database in mongodb?

Say I have two collections in MongoDB: one for users, which contains each user's basic info, and one for apps, which contains applications. Users are allowed to add apps, and the next time they log in, the web app should fetch the apps they added. How should I construct this kind of database in MongoDB?
users: {_id: ObjectId(), username: 'username', password: 'password'}
apps: {_id: ObjectId(), appname: '', developer: '', description: ''}
And how should I fetch the added apps for a user? Should I add something like addedAppId:
users: {_id: ObjectId(), username: 'username', password: 'password', addedAppId: []}
to indicate which apps they have added, and then fetch the apps using addedAppId?
Yep, there's nothing wrong with the users collection keeping track of which apps a user has added like you've indicated.
This is known as linking in MongoDB. In a relational system you'd probably create a separate table, added_apps, which had a user ID, app ID and any other relevant information. But since you can't join, keeping this information in the users collection is entirely appropriate.
From the docs:
A key question when designing a MongoDB schema is when to embed and when to link. Embedding is the nesting of objects and arrays inside a BSON document. Links are references between documents.
There are no joins in MongoDB – distributed joins would be difficult on a 1,000 server cluster. Embedding is a bit like "prejoined" data. Operations within a document are easy for the server to handle; these operations can be fairly rich. Links in contrast must be processed client-side by the application; the application does this by issuing a follow-up query.
(This is the extra bit you'd need to do: fetch the app information using the app ids stored on the user.)
Generally, for "contains" relationships between entities, embedding should be chosen. Use linking when not using linking would result in duplication of data.
...
Many to many relationships are generally done by linking.
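The follow-up query that linking requires can be sketched like this, with in-memory arrays standing in for the two collections (the data and the `appsForUser` helper are illustrative):

```javascript
// Linking: users store app _ids; the application resolves them with a
// second query against the apps collection.
const apps = [
  { _id: 'a1', appname: 'Mail', developer: 'Acme' },
  { _id: 'a2', appname: 'Chat', developer: 'Acme' },
];
const users = [
  { _id: 'u1', username: 'alice', addedAppId: ['a2'] },
];

// Logical equivalent of db.apps.find({ _id: { $in: user.addedAppId } })
function appsForUser(userId) {
  const user = users.find((u) => u._id === userId);
  return apps.filter((a) => user.addedAppId.includes(a._id));
}

console.log(appsForUser('u1').map((a) => a.appname)); // [ 'Chat' ]
```

With the real driver, the `$in` query replaces the client-side filter, so resolving all of a user's apps still costs only one follow-up round trip.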
