Does NeDB use the disk to do find queries? - node.js

Does NeDB use the disk to do find queries?
Or are find queries served 100% from RAM-based data structures?
I need to run find queries intensively and I don't want to put that load on my hard disk.
(As a bonus question: do SQLite find queries also work 100% in memory?)

It depends: if you create indexes on fields, those fields will be held in memory for faster lookup and access, while unindexed field lookups will hit the disk. This is true for SQLite as well as most other persistent databases (e.g. PostgreSQL, MySQL, MongoDB, and many more).
NeDB, by contrast, is an in-memory database, which means all data is held in memory (similar to Redis). That said, you still have to index fields for faster lookup.
If you want to create indexes on fields in NeDB, their documentation covers that here.
For NeDB, the _id field is automatically indexed, so you don't have to create an index for it, and querying by _id will be substantially faster than querying an unindexed field.
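For illustration, a minimal sketch of creating an index with NeDB's ensureIndex; the datastore file and the symbol field are made-up examples:

const Datastore = require('nedb');

// All documents are held in memory; the file is only used for persistence.
const db = new Datastore({ filename: 'quotes.db', autoload: true });

db.ensureIndex({ fieldName: 'symbol', unique: false }, (err) => {
  if (err) return console.error(err);

  // This find is answered from in-memory structures, using the index on symbol.
  db.find({ symbol: 'AAPL' }, (err, docs) => {
    if (err) return console.error(err);
    console.log(docs.length, 'matching documents');
  });
});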

Related

Large Data Processing and Management in MongoDB using NodeJS

I am trying to do CRUD operations on a very large MongoDB dataset, around 20GB, and there can be multiple versions of that data. Can anyone guide me on how to handle this much data for CRUD operations while maintaining the previous versions of the data in MongoDB?
I am using NodeJS as the backend, and I can also use another database if required.
MongoDB is a reliable database; I use it to process 10-11 billion records every single day. NodeJS should also be fine as long as you handle the data in streams.
Things you should do to Optimize:
Indexing - this will be the biggest part. If you want faster queries, you need to look into indexing in MongoDB: the fields your queries filter and sort on need to be indexed, or you are going to have a tough time dealing with queries (see the sketch after this list).
Sharding and replication - these will help you organise the data and increase query speed. Replication lets you separate your reads from your writes (there are cons to replication; you can read about them in the MongoDB documentation).
These are the main things you need to consider. There are a lot more, but this should get you started... ;) Need any help? Please do let me know.
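A rough sketch of both points with the official Node.js driver; the connection string, versions collection, and customerId/version fields are assumptions for illustration:

const { MongoClient } = require('mongodb');

async function run() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const col = client.db('bigdata').collection('versions');

  // Index the fields your queries filter and sort on.
  await col.createIndex({ customerId: 1, version: -1 });

  // Stream results with a cursor instead of loading a 20GB result into memory.
  const cursor = col.find({ customerId: 'c-123' }).sort({ version: -1 });
  for await (const doc of cursor) {
    // process one document at a time, e.g. push it to a write stream
  }

  await client.close();
}

run().catch(console.error);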

Best way of querying a table without providing the primary key

I am designing the data model of our Scylla database. For example, I created a table, intraday_history, with fields:
CREATE TABLE intraday_history (
  id bigint,
  timestamp_seconds bigint,
  timestamp timestamp,
  sec_code text,
  open float,
  high float,
  low float,
  close float,
  volume float,
  trade int,
  PRIMARY KEY ((id, sec_code), timestamp_seconds, timestamp)
);
My id is a Twitter snowflake-generated 64-bit integer. My problem is how I can use WHERE without always providing the id (most of the time I will query by the bigint timestamp). I also encounter this problem in other tables. Because the id is unique, I cannot query a batch of timestamps.
Is it okay if, let's say for a bunch of tables on my one node, I use an ID like cluster1, so that when I query by id I just use id=cluster1? But that loses the uniqueness feature.
ALLOW FILTERING comes up as an option here, but I keep reading that it is bad practice, especially when dealing with millions of queries.
I'm using ScyllaDB, a C++ implementation compatible with Apache Cassandra.
In Cassandra, as you've probably already read, the queries drive the table design, not the other way around. So in your situation, where you want to query by a different filter, the ideal approach would be to create another Cassandra table. That's the optimal way. Partition keys are required in filters unless you provide the ALLOW FILTERING "switch", but that isn't recommended, as it performs a DC-wide (possibly cluster-wide) search and you're still subject to timeouts.
You could consider using secondary indexes or materialized views, which are basically Cassandra-maintained tables populated from the base table's changes. That would save you the trouble of having the application populate multiple tables (Cassandra would do it for you). We've had some luck with materialized views, but with either of these components there can be side effects, just like with any other Cassandra table (inconsistencies, latencies, additional rules, etc.).
I would say do a bit of research to determine the best approach, but most likely ALLOW FILTERING isn't the best choice, especially for high-volume, frequent queries or for tables containing a lot of data. You could also investigate Solr if that's an option, depending on what you're filtering.
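If materialized views are available in your cluster, a sketch of that idea with the Node.js cassandra-driver could look like the following; the keyspace, view name, and the keys chosen for the view are assumptions, not a definitive design:

const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'datacenter1',
  keyspace: 'market',
});

async function run() {
  // View keyed by sec_code and time, so queries no longer need the snowflake id.
  // Every column of the base table's primary key must appear in the view's key.
  await client.execute(`
    CREATE MATERIALIZED VIEW IF NOT EXISTS intraday_by_time AS
      SELECT * FROM intraday_history
      WHERE sec_code IS NOT NULL AND timestamp_seconds IS NOT NULL
        AND id IS NOT NULL AND timestamp IS NOT NULL
      PRIMARY KEY ((sec_code), timestamp_seconds, id, timestamp)`);

  // Time-range query for one security, with no ALLOW FILTERING needed.
  const result = await client.execute(
    'SELECT * FROM intraday_by_time WHERE sec_code = ? AND timestamp_seconds >= ? AND timestamp_seconds < ?',
    ['AAPL', 1609459200, 1609545600],
    { prepare: true });
  console.log(result.rowLength, 'rows');
}

run().catch(console.error).finally(() => client.shutdown());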
Hope that helps.
-Jim

Transforming Mongoose Object references for use by Model.collection.create()

I need to periodically backup a subset of a mongo database from production and restore it into a development database in order to diagnose issues for specific customers. Doing a full backup/restore isn't practical given the size of the database.
There are a few dozen Mongoose models involved, and each typically has several references to other models via fields of type Schema.ObjectId. My implementation works in most cases; however, when the size of the backup exceeds something like 100k records, I run into out-of-memory situations or database timeouts on the restore.
My algorithm uses Model.insertMany(docs) within an async loop, inserting a few hundred documents at a time into one collection at a time; however, when there are hundreds of thousands of docs involved, this process inevitably consumes all memory or times out the database connection. Process and DB memory are maxed out, and I've tried introducing timeouts in the algorithm to facilitate GC and experimented with the batch size (ranging from 1000 at a time down to 1 at a time), but the result is invariably failure on a very large dataset.
If I use Model.collection.create(docs) instead of Model.insertMany(docs), the restore completes reliably, even with a huge dataset, but the ObjectId references in my backup are imported as strings rather than ObjectIds, and the result isn't queryable via Mongoose.
I know that bulk insert is a difficult scenario and that Mongoose does validation etc. on each document inserted, but for a backup/restore scenario validation is not required, given that the target db is always a subset of the source db. I'm wondering if there is a Model or Schema method, or another technique, I can use to transform a source doc into a Mongoose doc?
I could obviously write model-specific methods to do this transformation, but it's also obviously something that Mongoose already does, and I'm wondering whether that transformation is exposed by any API? It would be a nice middle-ground technique for this sort of bulk insert scenario.
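I'm not aware of a single documented Mongoose call that does exactly this, but one workaround is to cast ObjectId paths yourself from the schema's path metadata and then insert through the native collection, bypassing validation. A rough sketch, with hypothetical helper names (castObjectIdPaths, rawRestore), assuming a Mongoose connection is already open and the references live at top-level paths:

const mongoose = require('mongoose');

// Cast any string values sitting in ObjectId paths back to real ObjectIds.
function castObjectIdPaths(Model, doc) {
  Model.schema.eachPath((path, schemaType) => {
    if (schemaType instanceof mongoose.Schema.Types.ObjectId && typeof doc[path] === 'string') {
      doc[path] = new mongoose.Types.ObjectId(doc[path]);
    }
  });
  return doc;
}

// Insert via the native driver in small batches: no Mongoose validation or
// middleware, so memory stays flat even for very large restores.
async function rawRestore(Model, docs, batchSize = 500) {
  for (let i = 0; i < docs.length; i += batchSize) {
    const batch = docs.slice(i, i + batchSize).map((d) => castObjectIdPaths(Model, d));
    await Model.collection.insertMany(batch, { ordered: false });
  }
}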

Is it good to use different collections in a database in MongoDB

I am going to do a project using NodeJS and MongoDB. We are designing the schema of the database, and we are not sure whether we need to use different collections or the same collection to store the data, because each has its own pros and cons.
If we use a single collection, whenever the database is invoked the whole collection will be loaded into memory, which reduces the available RAM. If we use different collections, then to retrieve data we need to write different queries. Using one collection makes retrieval easy, while using different collections makes the application faster. We are confused about whether to use a single collection or multiple collections. Please guide me on which one is better.
Usually you use different collections for different things. For example, when you have users and articles in the system, you usually create a "users" collection for users and an "articles" collection for articles. You could create one collection called "objects" or something like that and put everything there, but it would mean you would have to add some type field and use it for searches and storage. You can use a single collection in the database, but it would make usage more complicated. Of course, it would let you load the entire collection at once, but whether or not that is relevant for the performance of your application is something that would have to be profiled and tested to assess the performance impact for your particular use case.
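For illustration, the two approaches look roughly like this with the Node.js driver; the blog database and the documents shown are made up:

const { MongoClient } = require('mongodb');

async function demo() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const db = client.db('blog');

  // Separate collections: no discriminator field, and each collection keeps its own indexes.
  await db.collection('users').insertOne({ name: 'Ada', email: 'ada@example.com' });
  await db.collection('articles').insertOne({ title: 'Hello', author: 'Ada' });

  // Single catch-all collection: every document needs a type field,
  // and every query has to filter on it.
  await db.collection('objects').insertOne({ type: 'user', name: 'Ada' });
  await db.collection('objects').insertOne({ type: 'article', title: 'Hello' });
  const users = await db.collection('objects').find({ type: 'user' }).toArray();

  await client.close();
  return users;
}

demo().catch(console.error);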
Usually, developers create different collections for different things. For post management, for example, people create a 'posts' collection and save posts there, and the same goes for users and so on.
Using a different collection for each purpose is good practice.
MongoDB is great at scaling horizontally. It can shard a collection across a dynamic cluster to produce a fast, queryable collection of your data.
So having a smaller collection size is not really a pro, and I am not sure where the theory that it is comes from; it isn't true in SQL and it isn't true in MongoDB. The performance of sharding, if done well, should be comparable to the performance of querying a single small collection of data (with a small overhead). If it isn't, then you have set up your sharding wrong.
MongoDB is not great at scaling vertically; as @Sushant quoted, the ns size of MongoDB would be a serious limitation here. One thing that quote does not mention is that index size and count also affect the ns size, hence why it says:
By default MongoDB has a limit of approximately 24,000 namespaces per database. Each namespace is 628 bytes, the .ns file is 16MB by default.
Each collection counts as a namespace, as does each index. Thus if every collection had one index, we can create up to 12,000 collections. The --nssize parameter allows you to increase this limit (see below).
Be aware that there is a certain minimum overhead per collection -- a few KB. Further, any index will require at least 8KB of data space as the b-tree page size is 8KB. Certain operations can get slow if there are a lot of collections and the meta data gets paged out.
So you won't be able to gracefully handle it if your users exceed the namespace limit, and it won't perform well as your userbase grows.
UPDATE
For MongoDB 3.0 and above, using the WiredTiger storage engine, this is no longer a limit.
Yes, personally I think having multiple collections in a DB keeps it nice and clean. The only thing I would worry about is the size of the collections. Collections are used by a lot of developers to cut their db up into, for example, posts, comments, and users.
Sorry about my grammar and lack of explanation, I'm on my phone.

How to manage big data in MongoDB collections

I have a collection called data which is the destination of all the documents sent from many devices every n seconds.
What is the best practice to keep the collection alive in production without it overflowing with documents?
How could I "clean" the collection and save the content in another one? Is that the correct way?
Thank you in advance.
You cannot overflow; if you use sharding, you have almost unlimited space.
https://docs.mongodb.com/manual/reference/limits/#Sharding-Existing-Collection-Data-Size
Those are the limits for a single shard, and you have to start sharding before reaching them.
It depends on your architecture; however, a worst-case limit of about 8.192 exabytes (8,192,000 terabytes) is unreachable for most even big-data apps, if you multiply the number of shards possible in a cluster by the maximum collection size on one of them.
See also:
What is the max size of collection in mongodb
MongoDB is a good database for storing large collections. You can take the steps below for better performance.
Replication
Replication means copying your data several times, on a single server or across multiple servers.
It gives you a backup copy of your data that is updated every time you insert data into your db.
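A minimal sketch of connecting a Node.js app to a replica set; the host names and the replica set name (rs0) are assumptions. The readPreference option is optional and sends reads to secondaries when one is available, while writes still go to the primary:

const { MongoClient } = require('mongodb');

const uri = 'mongodb://host1:27017,host2:27017,host3:27017/mydb'
  + '?replicaSet=rs0&readPreference=secondaryPreferred';

async function run() {
  const client = await MongoClient.connect(uri);
  const latest = await client.db('mydb').collection('data')
    .find({})
    .sort({ _id: -1 })
    .limit(10)
    .toArray();
  console.log(latest.length, 'recent documents');
  await client.close();
}

run().catch(console.error);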
Embedded documents and references
Think about how you structure related data: you can embed documents or use references between documents in your db (see the shapes sketched below).
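A rough illustration of the two document shapes; the device/reading fields are made up:

// Referencing: each reading stays small and points at its device by id.
const deviceDoc = { _id: 'device-42', name: 'sensor-a' };
const readingDoc = { deviceId: 'device-42', ts: 1700000000, value: 21.5 };

// Embedding: everything needed to read a measurement lives in one document,
// at the cost of repeating the device details in every reading.
const embeddedReading = {
  device: { id: 'device-42', name: 'sensor-a' },
  ts: 1700000000,
  value: 21.5,
};

module.exports = { deviceDoc, readingDoc, embeddedReading };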
