I have a Node.js server with a MongoDB database. Multiple clients use this same database, and each client has its own collection. The collections are named after the client's id.
Since every client uses a different name for its data, when a new client connects to the server, the first operation it performs on the database creates a new collection for it.
I need all the collections to have a specific index. Is there a way to automatically create this index for every new collection?
No, there is no such command.
But don't be afraid of calling createIndex too often. The documentation guarantees that when an index with the same settings already exists, nothing happens. So you can attach it to some common database operation executed by new users; it's no big deal when it gets called more than once.
To highlight this behavior, the method used to be called ensureIndex, but that name is deprecated.
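For example, you could ensure the index every time a client's collection is accessed. A minimal sketch with the Node.js MongoDB driver; the connection string, database name, and the createdAt index field are assumptions:

```js
const { MongoClient } = require('mongodb');

// Hypothetical helper: returns the per-client collection, ensuring the index exists.
async function getClientCollection(clientId) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const db = client.db('appdb');
  const collection = db.collection(clientId);
  // Safe to call repeatedly: a no-op when an index with the same settings exists.
  await collection.createIndex({ createdAt: 1 });
  return collection;
}
```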
By the way: having a different collection for every client is a quite unusual architecture. It has some drawbacks, like the problem with indexes and other collection-level configuration you already discovered, but also others, like being unable to run any query that uses data from more than one client. With the legacy MMAPv1 storage engine there was the advantage that clients could not block each other with collection-wide locks, but with the WiredTiger engine that advantage is obsolete, because WiredTiger locks only at the document level.
A more conventional approach is to have one collection for all users, with a field in each document that identifies the owning user and is part of every index used by user-specific queries.
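In that design a single compound index covers all per-user queries. A sketch, assuming an async context; the items collection and field names are made up for illustration:

```js
// One shared collection; every document carries its owner's id.
const items = db.collection('items');

// A compound index prefixed by the owner field serves all user-specific queries.
await items.createIndex({ clientId: 1, createdAt: 1 });

// Every query filters on the owner first, so the index is used.
const docs = await items.find({ clientId: someClientId }).toArray();
```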
Related
Is it possible to add data into commands on certain MongoDB collections?
The use case here is simple management of multitenancy. We have data that doesn't contain the tenant's id, and we want to insert the tenant's id into every command (find, update, updateOne, insert, insertMany, etc.) on particular collections (some collections are generic tenant-wide collections). We are using the MongoDB driver directly (we'd rather not use Mongoose).
Currently, we have to remember to add the tenant id whenever we use a command, but this is a bit dangerous as it is possible to miss adding the tenant's id...
Thanks!
Out of the box, MongoDB does not come with this feature.
Mongoose actually shines for this kind of thing with its middleware infrastructure.
If Mongoose is not an option, you can look into these smaller hooks implementations:
https://www.npmjs.com/package/mongohooks
https://www.npmjs.com/package/mongodb-hooks
I believe they give you something to work with.
In the end we created a helper function which we pass the data to. The helper then adds the tenant's id to any command where necessary and sends the command on to MongoDB.
This allows us to control when to restrict by tenant per collection per command type.
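Such a helper could look roughly like this. A hedged sketch, not our exact code; the tenantId field name and the set of wrapped commands are assumptions:

```js
// Wraps a collection so every supported command is automatically tenant-scoped.
function scopedCollection(db, name, tenantId) {
  const col = db.collection(name);
  return {
    find: (filter = {}, options) => col.find({ ...filter, tenantId }, options),
    insertOne: (doc, options) => col.insertOne({ ...doc, tenantId }, options),
    updateOne: (filter, update, options) =>
      col.updateOne({ ...filter, tenantId }, update, options),
    deleteMany: (filter = {}, options) =>
      col.deleteMany({ ...filter, tenantId }, options),
  };
}

// Usage: commands issued through the wrapper cannot forget the tenant's id.
const orders = scopedCollection(db, 'orders', 'tenant-42');
await orders.insertOne({ total: 99 });
const mine = await orders.find().toArray();
```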
I want to use CouchDB to create an offline-first app where users can add documents.
Only the user who created a document should be able to change it; otherwise it should only be readable. For this I wanted to use the couch_peruser mechanism of CouchDB and replicate these documents into a main database where everyone can read them.
Is it possible to automatically get the replication and other configuration (like design documents) set up when the database is created by the couch_peruser option?
I found a possible way myself:
- Add a validation function to the main database to deny writes (http://docs.couchdb.org/en/2.1.1/ddocs/ddocs.html#vdufun).
- Use the _db_updates endpoint to monitor database creation (http://docs.couchdb.org/en/2.1.1/api/server/common.html#db-updates); see the sketch below.
- Create a _replicator document to set up a continuous replication from the user db to the main db (http://docs.couchdb.org/en/2.1.1/replication/replicator.html).
One thing to look out for is that maintaining a lot of continuous replications requires a lot of system resources.
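The monitoring part could look roughly like this. A sketch for Node 18+ using the global fetch; the CouchDB URL, admin credentials, and the main target db name are assumptions:

```js
const COUCH = 'http://admin:password@localhost:5984';

// Follow the _db_updates feed and react to newly created per-user databases
// (couch_peruser names them userdb-<hex encoded username>).
async function watchDbUpdates() {
  const res = await fetch(`${COUCH}/_db_updates?feed=continuous&since=now`);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let idx;
    while ((idx = buffer.indexOf('\n')) >= 0) {
      const line = buffer.slice(0, idx).trim();
      buffer = buffer.slice(idx + 1);
      if (!line) continue;
      const event = JSON.parse(line);
      if (event.type === 'created' && event.db_name.startsWith('userdb-')) {
        await createReplication(event.db_name);
      }
    }
  }
}

// Set up a continuous replication from the per-user db into the shared main db.
async function createReplication(userDb) {
  await fetch(`${COUCH}/_replicator`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      source: `${COUCH}/${userDb}`,
      target: `${COUCH}/main`,
      continuous: true,
    }),
  });
}

watchDbUpdates().catch(console.error);
```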
Another way is to enforce authorship with design documents. With this approach we don't need to maintain replications to the main database, because every entry can be held in one database (the main database in my case).
http://guide.couchdb.org/draft/validation.html#authorship
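An authorship validation function along the lines of that guide could look like this. A sketch, stored as validate_doc_update in a design document of the main database; the author field name is an assumption:

```js
function (newDoc, oldDoc, userCtx) {
  var isAdmin = userCtx.roles.indexOf('_admin') !== -1;
  // Only the original author (or an admin) may modify or delete a document.
  if (oldDoc && oldDoc.author !== userCtx.name && !isAdmin) {
    throw({ forbidden: 'Only the author may modify this document.' });
  }
  // New documents must record the current user as their author.
  if (!newDoc._deleted && newDoc.author !== userCtx.name && !isAdmin) {
    throw({ forbidden: 'The author field must match your username.' });
  }
}
```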
I created a CouchDB setup with multiple dbs for use in my Ionic 3 app. Upon integrating it with PouchDB for client-side syncing, I created separate PouchDBs for each of the dbs, five PouchDBs in total. My question:
Is it a good idea to store multiple PouchDBs on the client side, given the number of HTTP connections created by syncing them? Or shall I put all CouchDB databases into one database and use type fields to separate the docs, so that only one PouchDB needs to be created and synced on the client?
Also, with the pouchdb-authentication plugin, authentication data is valid only for the database on which the signup/login methods were called; accessing other databases returns unauthenticated.
I would say, if your PouchDBs are syncing in real time, it should be less expensive to reduce them to one and distinguish records by type.
But it should not be that costly, and still very convenient, to set up one changes feed per ItemStore (e.g. TodoStore, CommentStore, etc.) with a corresponding filter function that passes only docs of the matching type into the store they belong to. The same can be achieved by filtering on the basis of design documents (I'm not sure whether that saves anything, at least in the browser).
One changes feed distributing docs to the stores would probably be the cheapest solution. But I suppose the filter function can't be changed after the changes feed is established, so it must know about all the stores (i.e. doc types) beforehand.
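For illustration, a single live changes feed distributing docs by type might look like this. A sketch; the type field and the trivial Map-based stores are assumptions (a real app would use proper observable stores):

```js
const PouchDB = require('pouchdb');
const db = new PouchDB('app');

// Per-type stores the feed distributes into; all types known up front.
const stores = {
  todo: new Map(),
  comment: new Map(),
};

// One live changes feed for the whole database.
db.changes({ since: 'now', live: true, include_docs: true })
  .on('change', ({ doc }) => {
    const store = stores[doc.type];
    if (store) store.set(doc._id, doc);
  })
  .on('error', console.error);
```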
I've got one db per user, from which I use a filtered replication to send documents marked public into a single common database. I have this working. Now when a user changes their document from public to private, the replication does not clear the document from the common database.
Aside from reading all of the private documents from the user db and then removing them from the common db (if they exist), is there any way to accomplish this via fundamental replication features?
Short answer: No.
Long answer: The replicator will not delete documents on the remote unless the document was deleted in the local database. Filter functions just determine what is allowed to be replicated. So your use case can be done, but in a way I would consider slightly abusive of the CouchDB model. If the user changes the document to private, you could:
1. Cache the document content on the device where the user made the change.
2. Delete the document in the remote database.
3. Re-create the document using the cached content, with the fields updated to indicate it is private.
As long as your filter function allows deletes to be replicated to the remote, the document will be deleted; however, the usual caveats around how CouchDB deletes documents apply.
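For reference, a filter that lets both public documents and deletions through could look like this. A sketch, stored under filters in a design document of the per-user database; the public flag is an assumption:

```js
function (doc, req) {
  // Let deletions through so the tombstone replicates to the common db.
  if (doc._deleted) return true;
  // Otherwise only replicate documents marked public.
  return doc.public === true;
}
```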
Say I have two collections in MongoDB: one for users, which contains the users' basic info, and one for apps, which contains applications. Now if users are allowed to add apps, then the next time they log in, the web app should fetch the apps they added. How should I construct this kind of database in MongoDB?
users: { _id: ObjectId(), username: 'username', password: 'password' }
apps: { _id: ObjectId(), appname: '', developer: '', description: '' }
And how should I get the user-added apps for them? Should I add something like an addedAppId array:
users: { _id: ObjectId(), username: 'username', password: 'password', addedAppId: [] }
to indicate which apps they have added, and then fetch the apps using addedAppId?
Yep, there's nothing wrong with the users collection keeping track of which apps a user has added like you've indicated.
This is known as linking in MongoDB. In a relational system you'd probably create a separate table, added_apps, which had a user ID, app ID and any other relevant information. But since you can't join, keeping this information in the users collection is entirely appropriate.
From the docs:
A key question when designing a MongoDB schema is when to embed and when to link. Embedding is the nesting of objects and arrays inside a BSON document. Links are references between documents.
There are no joins in MongoDB – distributed joins would be difficult on a 1,000 server cluster. Embedding is a bit like "prejoined" data. Operations within a document are easy for the server to handle; these operations can be fairly rich. Links in contrast must be processed client-side by the application; the application does this by issuing a follow-up query.
(This is the extra bit you'd need to do: fetch the app information using the ids stored in the user's addedAppId.)
Generally, for "contains" relationships between entities, embedding should be be chosen. Use linking when not using linking would result in duplication of data.
...
Many to many relationships are generally done by linking.
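In code, the follow-up query the docs mention could look like this. A minimal sketch with the Node.js driver, assuming the addedAppId array from the question holds the apps' ObjectIds and an async context:

```js
// First query: load the user and their list of linked app ids.
const user = await db.collection('users').findOne({ username: 'username' });

// Follow-up query: resolve the linked app documents client-side.
const apps = await db.collection('apps')
  .find({ _id: { $in: user.addedAppId } })
  .toArray();
```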