MongoDB to store vehicle tracking information - node.js

I am building an app similar to Uber for tracking vehicles. Since the update frequency is so high (accounting for several users), I want to know the general practices for making writes to a MongoDB collection faster.
I am maintaining a database to store historical location information from all vehicles, but it is bound to grow very fast once we go into production. I need to get the list of vehicles closest to a point. For this, should I implement a separate table (with one row per vehicle) which gets updated after every update, or is there a better/faster way to do this using the existing table?

Two separate collections would likely be the best option here.
A vehicles collection which includes the current location. It could even include the 50 most recent location entries, added with $push and $slice to avoid unbounded array growth. http://docs.mongodb.org/manual/reference/operator/update/slice/#up._S_slice
A locationHistory collection which includes all previous vehicle movements. You could index this by vehicle ID, and/or date.
One thing you definitely want to avoid is having an UNBOUNDED array inside a document.
```
{
  _id: ObjectID,
  VIN: String,
  pastLocations: [ ...unbounded array... ]
}
```
When MongoDB allocates space for a new vehicle entry, it will use the average size of the existing vehicle entries to determine how much disk space to allocate. Having vastly different sizes of vehicle entries (some move more than others, or are newer, etc.) will negatively impact performance and cause a lot more page faults.
The key here is that you're trying to avoid page faults. Keeping 50 entries of vehicle history (if they're just GPS coordinates) as a subdocument array isn't super huge. Keeping an entire year's worth of history that could be more than 1MB would be a big deal (heh) and cause page faults all the time when accessing different vehicles.
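To make this concrete, here is a minimal Node.js sketch of the two-collection layout using the official mongodb driver, including the capped recent-history array and a nearest-vehicles query. The database, collection, and field names (fleet, vehicles, locationHistory, recentLocations) are illustrative assumptions, not anything prescribed above:

```js
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const db = client.db('fleet');
  const vehicles = db.collection('vehicles');
  const locationHistory = db.collection('locationHistory');

  // Geospatial index on the current location, plus a history index by vehicle and time.
  await vehicles.createIndex({ location: '2dsphere' });
  await locationHistory.createIndex({ vehicleId: 1, ts: 1 });

  // On every GPS ping: update the vehicle's current location, keep only the
  // 50 most recent points in-document, and append the full record to history.
  async function recordPing(vehicleId, lng, lat, ts) {
    const point = { type: 'Point', coordinates: [lng, lat] };
    await vehicles.updateOne(
      { _id: vehicleId },
      {
        $set: { location: point, lastSeen: ts },
        $push: { recentLocations: { $each: [{ point, ts }], $slice: -50 } },
      },
      { upsert: true }
    );
    await locationHistory.insertOne({ vehicleId, point, ts });
  }

  // Vehicles closest to a point, nearest first.
  async function nearestVehicles(lng, lat, maxMeters = 5000, limit = 20) {
    return vehicles
      .find({
        location: {
          $near: {
            $geometry: { type: 'Point', coordinates: [lng, lat] },
            $maxDistance: maxMeters,
          },
        },
      })
      .limit(limit)
      .toArray();
  }

  await recordPing('veh-123', -73.9857, 40.7484, new Date());
  console.log(await nearestVehicles(-73.9857, 40.7484));
  await client.close();
}

main().catch(console.error);
```

The 2dsphere index on the current location is what keeps the "closest vehicles to a point" query cheap, while the heavy write volume of historical points lands in locationHistory, which can be indexed (and later sharded) by vehicle and time.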

I did some extensive data loading of 20 GB+ in MongoDB over a couple of months (deployed the latest stable version as of Aug. 2014). I noticed that the database got corrupted on Windows (using high-performance storage, iSCSI over Fibre Channel), so the MongoDB service just stopped and could not be started. I can still reproduce the issue by reaching high data loads. I cannot recommend MongoDB for any production deployment; I hope you can find a better DBMS.

Performance should get better in MongoDB due to the WiredTiger integration in the newest version (not stable yet: http://blog.mongodb.org/post/102461818738/announcing-mongodb-2-8-0-rc0-release-candidate-and).

Related

Elasticsearch index architecture for large data with many update/delete operations

I have an index which now has almost 50GB of data, and it will soon exceed 100GB, so I would like to set up an index architecture for better performance.
I have checked out many things, one of them being index lifecycle management, but my index can be updated at any time, so how can I design my index so that performance stays good in that case?
Another thing: I found an article about dynamic indices and updating and deleting records from an index. It shows that old data is still examined while we perform search operations; since in my case I have a lot of updated records, this will reduce the performance of the index.
How can we improve index performance when we have large data and many update and delete operations? What architecture should we follow?
Is all of your data likely to be updated or deleted, or is it only the latest data?
If your updates are on fairly recent data and the old data is read-only you can create the hot-warm-cold architecture as described in this blog post.
If all of your data is likely to be updated you can do a hot-warm architecture, where all your updates go to the hot node and all queries go to the warm node. Elastic will sync the hot and warm nodes to achieve eventual consistency, so you might have to live with stale data for a few milliseconds, I assume. Check this.
In my experience Elastic is able to easily handle 50-100 GB data even if you update and search from the same set of indices and nodes. It all depends on the rate of the updates and search.
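As a rough illustration of the hot-warm split described above (a sketch, not part of the original answer): Elasticsearch nodes are tagged with a custom attribute such as node.attr.box_type: hot or node.attr.box_type: warm in elasticsearch.yml, and shard allocation filtering pins each index to one tier. The index names and the use of Node 18's built-in fetch below are assumptions.

```js
const ES = 'http://localhost:9200'; // assumes a local cluster and Node 18+ (global fetch)

// Pin an index to a node tier via shard allocation filtering.
async function pinToTier(index, tier) {
  const res = await fetch(`${ES}/${index}/_settings`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ 'index.routing.allocation.require.box_type': tier }),
  });
  return res.json();
}

(async () => {
  // The frequently updated index stays on hot nodes...
  await pinToTier('events-current', 'hot');
  // ...while an older, mostly read-only index is relocated to warm nodes.
  await pinToTier('events-2023-01', 'warm');
})().catch(console.error);
```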

CouchDB taking a lot of space due to revisions

We have a project that involves database sync with PouchDB on mobile devices. We have faced an issue when updating multiple documents (8,400 docs per minute): internal storage keeps increasing (around 20MB per minute).
We figured out that one main reason for this is CouchDB revisions, so we decided to decrease the database rev_limit to around 5. But we heard it may impact the replication process between CouchDB and PouchDB. My first question is: how does decreasing the revision limit impact the replication process?
We also found that views take more space than normal document storage. My second question: is there any way to reduce CouchDB view size?
Your data model (fast updates) doesn't play to CouchDB's strengths. Even after compaction, old revisions (including tombstones) take up space. CouchDB is happiest when using small, immutable documents. Such a model is also less likely to suffer from update conflicts.
Look to your documents - can they be broken apart such that updates can be changed to new document writes? Typical indicators are nested objects or arrays that grow in documents over time.
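As a hedged illustration of that modelling shift (using the nano CouchDB client; the database and field names are made up), each update becomes a small write-once document instead of another revision of a growing one:

```js
const nano = require('nano')('http://localhost:5984'); // assumes a local CouchDB
const readings = nano.db.use('readings');

// Instead of repeatedly updating one large per-device document (which piles up
// revisions and tombstones), record each change as a small immutable document.
async function recordReading(deviceId, value) {
  await readings.insert({
    _id: `${deviceId}:${Date.now()}`, // time-ordered id; this doc is never updated again
    deviceId,
    value,
    ts: new Date().toISOString(),
  });
}

recordReading('sensor-42', 21.7).catch(console.error);
```

With write-once documents, compaction has little to reclaim and there is far less scope for update conflicts during replication.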

Alternative to Cassandra for storing user data with high IO

We are looking for a technology stack which will meet the following criteria.
We will have around 10 million customers.
Each customer will have around 20MB+ of data.
Each user's data will be updated every day.
We need to store the data for more than six months.
We may need to query the data at any time within that six-month span.
Currently we are thinking of using Cassandra, but since the recommended maximum storage per Cassandra node is less than 3TB, we are looking for other alternatives to use with or without Cassandra.
Well, I don't know if my suggestion applies to your case. We had a similar case with one of our products. A blob field was created to record binary data, such as PDF documents, and it made the database grow considerably.
The solution we came up with was to create a second database as a repository for records older than one year. On the application server there is a service running which (a sketch follows this list):
1) Copies the records, from specific tables, older than one year to this second database;
2) Deletes records from the main database, once we have a copy on the other side;
3) Directs queries that need data older than one year to this second database.
Sure, we had to make some changes in the code to adapt to this situation, but it is running well so far.
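Here is a rough sketch of that archiving service, assuming PostgreSQL and the node-postgres (pg) client purely for illustration; the database, table, and column names are invented:

```js
const { Pool } = require('pg');

const main = new Pool({ database: 'app_main' });     // hypothetical main database
const archive = new Pool({ database: 'app_archive' }); // hypothetical archive database

async function archiveOldRecords() {
  // 1) Copy a batch of records older than one year into the archive database.
  const { rows } = await main.query(
    "SELECT id, payload, created_at FROM documents WHERE created_at < now() - interval '1 year' LIMIT 1000"
  );
  for (const row of rows) {
    await archive.query(
      'INSERT INTO documents (id, payload, created_at) VALUES ($1, $2, $3) ON CONFLICT (id) DO NOTHING',
      [row.id, row.payload, row.created_at]
    );
  }
  // 2) Delete from the main database only what was copied.
  const ids = rows.map((r) => r.id);
  if (ids.length) {
    await main.query('DELETE FROM documents WHERE id = ANY($1)', [ids]);
  }
  // 3) The application routes queries for data older than one year to the archive pool.
}

archiveOldRecords().catch(console.error);
```

Running this on a schedule in small batches keeps the main database lean while the archive absorbs the old, rarely queried data.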
You can try ScyllaDB. It's a C++ reimplementation of Cassandra at 10x the speed. Scylla supports 10TB/node and there are examples of larger amounts per node. Proper disclosure - I work there but am speaking from experience.
You can definitely consider storing just the metadata in the database and the blobs on separate nodes outside it, but that is complex, and Scylla can store it all together. A similar system is already in production, and we hope that user will eventually open source it.

Is it good to use different collections in a database in MongoDB?

I am going to do a project using Node.js and MongoDB. We are designing the database schema, and we are not sure whether we need to use different collections or the same collection to store the data, because each has its own pros and cons.
If we use a single collection, whenever the database is invoked the whole collection will be loaded into memory, which reduces the available RAM. If we use different collections, then to retrieve data we need to write different queries. By using one collection, retrieval will be easy, and by using different collections the application will become faster. We are confused whether to use a single collection or multiple collections. Please guide me as to which one is better.
Usually you use different collections for different things. For example, when you have users and articles in the system, you usually create a "users" collection for users and an "articles" collection for articles. You could create one collection called "objects" or something like that and put everything there, but it would mean you would have to add a type field and use it in searches and when storing data. You can use a single collection in the database, but it would make usage more complicated. Of course, it would let you load the entire collection at once, but whether or not that is relevant for the performance of your application is something that would have to be profiled and tested to determine the performance impact for your particular use case.
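A small, hypothetical Node.js sketch of that trade-off (collection and field names are just examples): separate collections keep queries and indexes simple, while a single collection forces a type field into every document, query, and index.

```js
const { MongoClient } = require('mongodb');
const client = new MongoClient('mongodb://localhost:27017');

async function demo() {
  await client.connect();
  const db = client.db('app');

  // Option A: one collection per entity type -- queries stay simple and
  // indexes stay small and specific.
  await db.collection('users').insertOne({ name: 'Ada' });
  await db.collection('articles').insertOne({ title: 'Hello', authorName: 'Ada' });
  const article = await db.collection('articles').findOne({ authorName: 'Ada' });

  // Option B: a single "objects" collection -- every document needs a type
  // field, and every query and index has to include it.
  await db.collection('objects').insertOne({ type: 'user', name: 'Ada' });
  await db.collection('objects').insertOne({ type: 'article', title: 'Hello', authorName: 'Ada' });
  const sameArticle = await db
    .collection('objects')
    .findOne({ type: 'article', authorName: 'Ada' });

  console.log(article, sameArticle);
  await client.close();
}

demo().catch(console.error);
```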
Usually, developers create different collections for different things. For example, for post management people create a 'posts' collection and save the posts there, and do the same for users and so on.
Using different collections for different purposes is good practice.
MongoDB is great at scaling horizontally. It can shard a collection across a dynamic cluster to produce a fast, queryable collection of your data.
So having a smaller collection size is not really a pro, and I am not sure where this theory comes from; it isn't true in SQL and it isn't in MongoDB. The performance of sharding, if done well, should be comparable to the performance of querying a single small collection of data (with a small overhead). If it isn't, then you have set up your sharding wrong.
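For reference, a minimal mongo-shell sketch of what "done well" means in practice (the database, collection, and shard key below are made-up examples); the shard key should match how the data is actually queried:

```js
// Enable sharding for the database, then shard the collection on a key that
// matches the dominant query pattern (here: per-user, time-ordered reads).
sh.enableSharding("app");
db.getSiblingDB("app").posts.createIndex({ userId: 1, createdAt: 1 });
sh.shardCollection("app.posts", { userId: 1, createdAt: 1 });
```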
MongoDB is not great at scaling vertically; as #Sushant quoted, the ns size of MongoDB would be a serious limitation here. One thing that quote does not mention is that index size and count also affect the ns size, hence why it states that:
By default MongoDB has a limit of approximately 24,000 namespaces per database. Each namespace is 628 bytes, and the .ns file is 16MB by default.
Each collection counts as a namespace, as does each index. Thus if every collection had one index, we can create up to 12,000 collections. The --nssize parameter allows you to increase this limit (see below).
Be aware that there is a certain minimum overhead per collection -- a few KB. Further, any index will require at least 8KB of data space as the b-tree page size is 8KB. Certain operations can get slow if there are a lot of collections and the metadata gets paged out.
So you won't be able to handle it gracefully if your users exceed the namespace limit. It also won't perform well as your user base grows.
UPDATE
For MongoDB 3.0 or above using the WiredTiger storage engine, this limit no longer applies.
Yes, personally I think having multiple collections in a DB keeps it nice and clean. The only thing I would worry about is the size of the collections. Collections are used by a lot of developers to cut up their DB into, for example, posts, comments, and users.
Sorry about my grammar and lack of explanation; I'm on my phone.

Potential issue with Couchbase paging

It may be too much turkey over the holidays, but I've been thinking about a potential problem that we could have with Couchbase.
Currently we paginate based on time, but I'm thinking a similar issue could occur with other values used for paging, for example an atomic counter. I'll try to explain as best I can; this would only occur in a load-balanced environment.
For example, say we have 4 load-balanced servers storing data to our Couchbase cluster, and we currently sort our records based on timestamps. If any of the 4 servers writing the data starts to lag behind the others, then our pagination could miss records when retrieving on the client side. With a SQL DB, an auto-increment value or timestamp can be created when the record is stored to the DB, which avoids this kind of issue. With a NoSQL DB like Couchbase, you define the data you need to retrieve on before it is stored to the DB. So what I am getting at is: if there is a delay in storing to the DB and you are retrieving in a paginated fashion while this delay occurs, you run a real possibility of missing data. Since we are paging, that data may never be viewed.
Interested in what other thoughts people have on this.
EDIT:
Response to Andrew:
For example, a Facebook or Pinterest type app is storing data to a DB, and they have many load-balanced frontend servers writing to it. If for some reason a write is delayed, it's a non-issue with a SQL DB because the timestamp or auto-increment value is generated when the data is actually stored, so there will be no missing data when paging: asking for 1-7 will give you only data that is already stored in the DB, and 7-* will contain anything that was delayed, because an auto-increment value is not created for a record until it is actually stored.
In Couchbase it's different: you first get your auto-increment value (atomic counter) and then save. So, for example, say a record is going to be stored as atomic counter number 4. For some reason this write is delayed. Other servers are grabbing 5, 6 and 7 and storing that data just fine. The client now asks for all data between 1 and 7; 4 is still not stored. The next paging request is then 7 to *, so 4 will never be viewed.
Is there a way around this? Can it be modelled differently in Couchbase, or is this just a potential weakness in Couchbase when needing to page results? As I mentioned, our paging is timestamp sensitive.
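To make the window concrete, here is a hedged sketch in the style of the Couchbase Node SDK 2.x (the bucket and key names are invented): the sequence number is reserved before the document is durably stored, so a slow writer can leave a temporary hole at 4 while 5, 6 and 7 are already visible to a paging query.

```js
const couchbase = require('couchbase');
const cluster = new couchbase.Cluster('couchbase://localhost');
const bucket = cluster.openBucket('events'); // hypothetical bucket

function storeEvent(payload, done) {
  // Step 1: reserve the next sequence number (e.g. 4).
  bucket.counter('event::seq', 1, { initial: 1 }, (err, res) => {
    if (err) return done(err);
    const seq = res.value;
    // Step 2: anything that delays this insert leaves a hole at `seq`,
    // even though higher sequence numbers may already be stored and queryable.
    bucket.insert(`event::${seq}`, { seq, payload, ts: Date.now() }, done);
  });
}

storeEvent({ type: 'click' }, (err) => {
  if (err) console.error(err);
});
```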
Michael,
Couchbase is an eventually consistent database with respect to views. It is ACID with respect to documents. There are durability interfaces that let you manage this. This means that you can rest assured you won't lose data and that indexes will catch up eventually.
In my experience with Couchbase, you need to expect that the nodes will never be in-sync. There are many things the database is doing, such as compaction and replication. The most important thing you can do to enhance performance is to put your views on a separate spindle from the data. And you need to ensure that your main data spindles across your cluster can sustain between 3-4 times your ingestion bandwidth. Also, make sure your main document key hashes appropriately to distribute the load.
It sounds like you are discussing a situation where the data exists in your system for less time than it takes to be processed through the view system. If you are removing data that fast, you need either a bigger cluster or faster disk arrays. Of the two choices, I would expand the size of your cluster. I like to think of Couchbase as building a RAIS, Redundant Array of Independent Servers. By expanding the cluster, you reduce the coincidence of hotspots and gain disk bandwidth. My ideal node has two local drives, one each for data and views, and enough RAM for my working set.
Anon,
Andrew
