CouchDB taking a lot of space due to revisions

We have a project that involves database sync with PouchDB on mobile devices. We have run into an issue when updating many documents (8,400 docs per minute): internal storage grows by around 20 MB per minute.
We figured out that one main reason for this is CouchDB revisions, so we decided to decrease the database _revs_limit to around 5. But we have heard this may impact the replication process between CouchDB and PouchDB. My first question is:
how does decreasing the revision limit impact the replication process?
We also noticed that views take up more space than the documents themselves. My second question: is there any way to reduce CouchDB view size?

Your data model (fast updates) doesn't play to CouchDB's strengths. Even after compaction, revision metadata and tombstones still take up space. CouchDB is happiest with small, immutable documents. Such a model is also less likely to suffer from update conflicts.
Look at your documents: can they be broken apart so that updates become new document writes? Typical indicators are nested objects or arrays that grow inside documents over time.
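For instance, here is a hypothetical sketch (document IDs and fields are assumptions, not taken from the question) of turning a frequently updated document into append-only writes:

// Before: one document per device, rewritten on every reading (one new revision each time)
{ "_id": "device-17", "type": "device", "readings": [ /* grows and churns revisions */ ] }

// After: one small, immutable document per reading; existing documents are never updated,
// so no revision history accumulates
{ "_id": "device-17:2024-05-01T12:00:00.000Z", "type": "reading", "device": "device-17", "value": 42 }
{ "_id": "device-17:2024-05-01T12:00:07.140Z", "type": "reading", "device": "device-17", "value": 43 }

A view keyed on [device, timestamp] then reproduces the per-device history without the revision churn.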

Related

MongoDB to store vehicle tracking information

I am building an app similar to Uber for tracking vehicles. Since the update frequency is so high (accounting for several users), I want to know the general practices for making writes to a MongoDB collection faster.
I am maintaining a database to store historical location information from all vehicles, but it is bound to grow very fast once we go into production. I need to get the list of vehicles closest to a point. For this, should I maintain a separate collection (with one document per vehicle) which gets updated after every update, or is there a better/faster way to do this using the existing collection?
Two separate collections would likely be the best option here.
A vehicles collection which includes the current location. It could even include the 50 most recent location entries, added with $push and $slice so the array doesn't grow unbounded (sketched below). http://docs.mongodb.org/manual/reference/operator/update/slice/#up._S_slice
A locationHistory collection which includes all previous vehicle movements. You could index this by vehicle ID, and/or date.
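A minimal sketch of that write path in the mongo shell (collection names, field names, and values here are assumptions, not taken from the question):

// Keep the current position plus a capped list of recent positions on the vehicle itself.
db.vehicles.updateOne(
  { VIN: "1HGCM82633A004352" },
  {
    $set:  { currentLocation: { lat: 40.71, lng: -74.0, ts: new Date() } },
    $push: {
      recentLocations: {
        $each:  [ { lat: 40.71, lng: -74.0, ts: new Date() } ],
        $slice: -50                     // keep only the 50 newest entries
      }
    }
  }
);

// Append the full history to its own collection, indexed for per-vehicle lookups.
db.locationHistory.insertOne({ VIN: "1HGCM82633A004352", lat: 40.71, lng: -74.0, ts: new Date() });
db.locationHistory.createIndex({ VIN: 1, ts: -1 });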
One thing you definitely want to avoid is having an UNBOUNDED array inside a document:
{
  _id: ObjectID,
  VIN: String,
  pastLocations: [ /* ...unbounded array... */ ]
}
When MongoDB allocates space for a new vehicle entry, it will use the average of the existing vehicle sizes to determine how much disk space to allocate. Having vastly different sizes of vehicle entries (some move more than others, or are newer, etc.) will negatively impact performance and cause a lot more page faults.
The key here is that you're trying to avoid page faults. Keeping 50 entries of vehicle history (if they're just GPS coordinates) as a subdocument array isn't super huge. Keeping an entire year's worth of history that could be more than 1MB would be a big deal (heh) and cause page faults all the time when accessing different vehicles.
I did some extensive data loading of 20 GB+ into MongoDB over a couple of months (deployed the latest stable version in Aug. 2014). I noticed that the database became corrupted on Windows (using high-performance storage, iSCSI over Fibre Channel), so the MongoDB service just stopped and could not be restarted. I can still reproduce the issue by reaching high data loads. I cannot recommend MongoDB for any production deployment; I hope you can find a better DBMS.
Performance should get better in MongoDB due to the WiredTiger integration in the newest version (not stable yet: http://blog.mongodb.org/post/102461818738/announcing-mongodb-2-8-0-rc0-release-candidate-and).

Can CouchDB handle thousands of separate databases?

Can CouchDB handle thousands of separate databases on the same machine?
Imagine you have a collection of BankTransactions. There are many thousands of records. (EDIT: not actually storing transactions--just think of a very large number of very small, frequently updating records. It's basically a join table from SQL-land.)
Each day you want a summary view of transactions that occurred only at your local bank branch. If all the records are in a single database, regenerating the view will process all of the transactions from all of the branches. This is a much bigger chunk of work, and unnecessary for the user who cares only about his particular subset of documents.
This makes it seem like each bank branch should be partitioned into its own database, in order for the views to be generated in smaller chunks, and independently of each other. But I've never heard of anyone doing this, and it seems like an anti-pattern (e.g. duplicating the same design document across thousands of different databases).
Is there a different way I should be modeling this problem? (Should the partitioning happen between separate machines, not separate databases on the same machine?) If not, can CouchDB handle the thousands of databases it will take to keep the partitions small?
(Thanks!)
[Warning, I'm assuming you're running this in some sort of production environment. Just go with the short answer if this is for a school or pet project.]
The short answer is "yes".
The longer answer is that there are some things you need to watch out for...
You're going to be playing whack-a-mole with a lot of system settings like max file descriptors.
You'll also be playing whack-a-mole with erlang vm settings.
CouchDB has a "max open databases" option. Increase this or you're going to have pending requests piling up.
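For reference, the setting is max_dbs_open; here is a rough sketch of raising it over the config HTTP API (node name, credentials, and the value are assumptions; on 1.x the path is /_config/couchdb/max_dbs_open instead of the /_node/... form):

// Node 18+ sketch, run as an ES module: bump the "max open databases" limit on one node.
const auth = 'Basic ' + Buffer.from('admin:password').toString('base64');
const res = await fetch(
  'http://127.0.0.1:5984/_node/_local/_config/couchdb/max_dbs_open',
  { method: 'PUT', headers: { Authorization: auth }, body: JSON.stringify('5000') }
);
console.log(await res.json()); // the previous value is returned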
It's going to be a PITA to aggregate multiple databases to generate reports. You can do it by polling each database's _changes feed, modifying the data, and then throwing it back into a central/aggregating database. The tooling to make this easier is just not there yet in CouchDB's API. Almost, but not quite.
However, the biggest problem you're going to run into if you try to do this is that CouchDB does not horizontally scale [well] by itself. If you add more CouchDB servers, they're all going to have duplicates of the data. Sure, your max open dbs count will scale linearly with each node added, but other things like view build time won't (e.g., they'll all need to do their own view builds).
By contrast, I've seen thousands of open databases on a BigCouch cluster. Anecdotally that's because of its Dynamo-style clustering: more nodes doing different things in parallel, versus walled-off CouchDB servers replicating to one another.
Cheers.
I know this question is old, but wanted to note that now with more recent versions of CouchDB (3.0+), partitioned databases are supported, which addresses this situation.
So you can have a single database for transactions, and partition them by bank branch. You can then query all transactions as you would before, or query just for those from a specific branch, and only the shards where that branch's data is stored will be accessed.
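A rough sketch of what that looks like against the HTTP API on Node 18+ (database name, branch/partition naming, and credentials are assumptions):

// Create a partitioned database, write a partitioned document, and query one partition.
const auth = 'Basic ' + Buffer.from('admin:password').toString('base64');
const base = 'http://127.0.0.1:5984';

// 1. Create the database as partitioned.
await fetch(`${base}/transactions?partitioned=true`, { method: 'PUT', headers: { Authorization: auth } });

// 2. Document IDs take the form "<partition>:<doc id>"; here the partition is the branch.
await fetch(`${base}/transactions`, {
  method: 'POST',
  headers: { Authorization: auth, 'Content-Type': 'application/json' },
  body: JSON.stringify({ _id: 'branch-042:txn-0001', amount: 125.5 })
});

// 3. Query only the shard(s) holding branch-042's data.
const res = await fetch(`${base}/transactions/_partition/branch-042/_all_docs?include_docs=true`, {
  headers: { Authorization: auth }
});
console.log(await res.json());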
Multiple databases are possible, but for most cases I think the aggregate database will actually give better performance to your branches. Keep in mind that you're only optimizing when a document is updated into the view; each document will only be parsed once per view.
For end-of-day polling in an aggregate database, the first branch will cause 100% of the new docs to be processed, and pay 100% of the delay. All other branches will pay 0%. So most branches benefit. For end-of-day polling in separate databases, all branches pay a portion of the penalty proportional to their volume, so most come out slightly behind.
For frequent view updates throughout the day, active branches prefer the aggregate and low-volume branches prefer separate databases. If one branch in ten adds 99% of the documents, most of the update work will be done on other branches' polls, so nine out of ten prefer separate dbs.
If this latency matters, and assuming CouchDB has some clock cycles going unused, you could write a three-line loop/view/sleep shell script that queries the views periodically so the indexes are updated before any user is waiting.
I would add that having a large number of databases creates issues around compaction and replication. Not only do things like continuous replication need to be triggered on a per-database basis (meaning you will have to write custom logic to loop over all the databases), but they also spawn replication daemons per database. This can quickly become prohibitive.
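As a sketch of the kind of per-database loop that implies (server URL, credentials, and the "branch_" naming convention are assumptions), using the _replicator database on Node 18+:

// Continuously replicate every "branch_*" database into one central reporting database.
const auth = 'Basic ' + Buffer.from('admin:password').toString('base64');
const base = 'http://127.0.0.1:5984';

const dbs = await (await fetch(`${base}/_all_dbs`, { headers: { Authorization: auth } })).json();
for (const db of dbs.filter((name) => name.startsWith('branch_'))) {
  await fetch(`${base}/_replicator`, {
    method: 'POST',
    headers: { Authorization: auth, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      _id: `aggregate-${db}`,      // one replication document per branch database
      source: `${base}/${db}`,     // in practice, source/target also need credentials
      target: `${base}/reporting`, // the replicator can use
      continuous: true
    })
  });
}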

Replicating CouchDB to local couch reduces size - why?

I recently started using Couch for a large app I'm working on.
I have a database with 7,907 documents and wanted to rename the database. I poked around for a bit but couldn't figure out how to rename it, so I figured I would just replicate it to a local database with the name I wanted.
The first time I tried, the replication failed; I believe the error was a timeout. I tried again, and it worked very quickly, which was a little disconcerting.
After the replication, I'm showing that the new database has the correct number of records, but the database size is about 1/3 of the original.
Also a little odd: if I refresh Futon, the size of the original fluctuates between 94.6 and 95.5 MB.
This leaves me with a few questions:
Is the 2nd database storing references to the first? If so, can I delete the first without causing harm?
Why would the size be so different? Had the original built indexes that the new one eventually will?
Why is the size fluctuating?
edit:
A few things that might be helpful:
This is on a Cloudant CouchDB install.
I checked the first and last record of the new db, and they match, so I don't believe Futon is underreporting.
Replicating to a new database is similar to compaction. Both involve certain side-effects (incidentally, and intentionally, respectively) which reduce the size of the new .couch file.
The b-tree indexes get balanced
Data from old document revisions is discarded.
Metadata from previous updates to the DB is discarded.
Replications store to/from checkpoints, so if you re-replicate from the same source, to the same location (i.e. re-run a replication that timed out), it will pick up where it left off.
Answers:
Replication does not create a reference to another database. You can delete the first without causing harm.
Replicating (and compacting) generally reduces disk usage. If you have any views in any design documents, those will re-build when you first query them. View indexes use their own .view file which also consumes space.
I am not sure why the size is fluctuating. Browser and proxy caches are the bane of CouchDB (and web) development. But perhaps it is also a result of internal Cloudant behavior (for example, different nodes in the cluster reporting slightly different sizes).

CouchDB .view file growing out of control?

I recently encountered a situation where my CouchDB instance used all available disk space on a 20GB VM instance.
Upon investigation I discovered that a directory in /usr/local/var/lib/couchdb/ contained a bunch of .view files, the largest of which was 16GB. I was able to remove the *.view files to restore normal operation. I'm not sure why the .view files grew so large and how CouchDB manages .view files.
A bit more information. I have a VM running Ubuntu 9.10 (karmic) with 512MB and CouchDB 0.10. The VM has a cron job which invokes a Python script which queries a view. The cron job runs once every five minutes. Every time the view is queried the size of a .view file increases. I've written a job to monitor this on an hourly basis and after a few days I don't see the file rolling over or otherwise decreasing in size.
Does anyone have any insights into this issue? Is there a piece of documentation I've missed? I haven't been able to find anything on the subject but that may be due to looking in the wrong places or my search terms.
CouchDB is very disk hungry, trading disk space for performance. Views will increase in size as items are added to them. You can recover disk space that is no longer needed with cleanup and compaction.
Every time you create, update, or delete a document, the view indexes will be updated with the relevant changes. The update to the view happens when it is queried, so if you are making lots of document changes you should expect your index to grow, and it will need to be managed with compaction and cleanup.
If your views are very large for a given set of documents then you may have poorly designed views. Alternatively your design may just require large views and you will need to manage that as you would any other resource.
It would be easier to tell what is happening if you could describe what document updates (inc create and delete) are happening and what your view functions are emitting, especially for the large view.
Your .view files grow each time you access a view because CouchDB updates views on access. CouchDB views need compaction, just like databases. If you have frequent changes to your documents, resulting in changes to your views, you should run view compaction from time to time. See http://wiki.apache.org/couchdb/HTTP_view_API#View_Compaction
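For example (database and design document names are assumptions), view compaction and view cleanup are two separate calls; a Node 18+ sketch:

// Compact the view index built from _design/reports, then remove stale index files.
const auth = 'Basic ' + Buffer.from('admin:password').toString('base64');
const headers = { Authorization: auth, 'Content-Type': 'application/json' };

await fetch('http://127.0.0.1:5984/mydb/_compact/reports', { method: 'POST', headers });
await fetch('http://127.0.0.1:5984/mydb/_view_cleanup', { method: 'POST', headers });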
To reduce the size of your views, have a look at the data you are emitting. When you emit(foo, doc), the entire document is copied into the view so it is instantly available when you query the view. A map function like function(doc) { emit(doc.title, doc); } will result in a view as big as the database itself. You could instead emit(doc.title, null); and use the include_docs option to let CouchDB fetch the document from the database when you access the view (which incurs a slight performance penalty). See http://wiki.apache.org/couchdb/HTTP_view_API#Querying_Options
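Side by side, the two map functions look like this (view and field names are only illustrative):

// Bloated: copies every document into the view index.
function (doc) {
  emit(doc.title, doc);
}

// Lean: stores only the key; query with ?include_docs=true to have
// CouchDB pull each document from the database instead.
function (doc) {
  emit(doc.title, null);
}

A query against the lean view would then look something like GET /mydb/_design/app/_view/by_title?include_docs=true (names made up for the example).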
Use sequential or monotonic IDs for documents instead of random ones
Yes, CouchDB is very disk hungry and needs regular compaction. But there is another thing that can help reduce this disk usage, especially when it is unnecessary.
CouchDB uses B+ trees for storing data/documents, which is a very good data structure for retrieval performance. However, the B-tree trades disk space for that performance. With completely random IDs, the B+ tree fans out quickly: since the minimum fill rate is 1/2 for every internal node, the nodes end up mostly half-filled (as the data spreads evenly due to its randomness), generating more internal nodes. New insertions can also force larger portions of the tree to be rewritten. That's what randomness can cause ;)
Instead, using sequential or monotonic IDs avoids all of this.
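A hypothetical sketch of generating roughly monotonic IDs client-side (the format is an assumption; CouchDB can also generate sequential IDs server-side via the [uuids] algorithm = sequential setting):

// Time-prefixed IDs keep inserts clustered at the "right edge" of the b-tree
// instead of landing at random positions throughout it.
let counter = 0;
function nextDocId() {
  const ts  = Date.now().toString(16).padStart(11, '0');          // millisecond timestamp, hex
  const seq = (counter++ % 0x1000).toString(16).padStart(3, '0'); // tie-breaker within the same ms
  return ts + seq;
}
console.log(nextDocId()); // e.g. "018f6e4a1c2" + "00a"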
I've had this problem too, trying out CouchDB for a browser-based game.
We had about 100,000 unexpected visitors on the first day of a site launch, and within 2 days the CouchDB database was taking about 40 GB of space. This made the server crash because the disk was completely full.
Compaction brought that back to about 50 MB. I also set the _revs_limit (which defaults to 1000) to 10, since we didn't care about revision history, and it's been running perfectly since. After almost 1M users, the database size is usually about 2-3 GB. When I run compaction it's about 500 MB.
Setting document revision limit to 10:
curl -X PUT -d "10" http://dbuser:dbpassword@127.0.0.1:5984/yourdb/_revs_limit
Or without user:password (not recommended):
curl -X PUT -d "10" http://127.0.0.1:5984/yourdb/_revs_limit

How much storage do you require when using CouchDB when compared to RDBMS?

I need to know the factors that need to be taken into consideration when implementing a solution using CouchDB. I understand that CouchDB does not require normalization and that the standard techniques I use in RDBMS development are mostly thrown away.
But what exactly are the costs involved? I perfectly understand the benefits, but the storage costs make me a bit nervous, as it appears CouchDB would need an awful lot of duplicated data, some of it going stale and out of date well before it is used. How would one manage stale data?
I know that I could implement some awful relationship model with documents in CouchDB and lower the storage costs, but wouldn't this defeat the objectives of CouchDB and the performance I can gain?
An example I am thinking about is a system for requisitions, ordering and tendering. The system currently has a one-to-many relationship, and the "many" side might get updated more frequently than the "one".
Any help would be great, as I am an old-school RDBMS guy with all the teachings of C.J. Date, E.F. Codd and R.F. Boyce, so I'm struggling at the moment with the radical notion of document storage.
Does CouchDB have anything internal to manage the recognition and reduction of duplicate data?
Only you know how many copies of how much data you will use, so unfortunately the only good answer will be to build simulated data sets and measure the disk usage.
In addition, similar to a file system, CouchDB requires additional storage for metadata. This cost depends on two factors:
How often you update or create a document
How often you compact
The worst-case instantaneous disk usage will be the total amount of data times two, plus all the old document revisions (#1) existing at compaction time (#2). This is because compaction builds a new database file containing only the current document revisions. Therefore the usage will be two copies of the current data (from the old file plus the new file), plus all of the "wasted" old revisions awaiting deletion when compaction completes. After compaction, the old file is deleted, so you will reclaim over half of this worst-case value.
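A hypothetical back-of-the-envelope example of that worst case (all numbers are made up):

// Worst-case disk usage while compaction runs: old file + new file + stale revisions.
const liveDataGB  = 10;  // current document revisions
const staleRevsGB = 4;   // old revisions accumulated since the last compaction
const peakGB = 2 * liveDataGB + staleRevsGB;
console.log(`peak during compaction: ${peakGB} GB, settling near ${liveDataGB} GB afterwards`);
// -> peak during compaction: 24 GB, settling near 10 GB afterwards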
Running compaction all the time to reduce disk use is not a problem in itself; however, it has implications for disk I/O.