Mongo DB find query taking too much time [closed] - node.js

I worked on a MySQL and PHP project earlier for an iPhone app, but as the stored data grew over time, the client moved to Node.js and MongoDB.
We have built a new version of the app on a Mongo database, and it works fine with a few records.
But when we migrated the MySQL database into MongoDB, it consumed almost 2 GB of space on the server.
Our app has a large number of users and a lot of related data.
Now we are stuck: a find query for just 20 records takes 4 to 5 seconds, which wastes time in the app and irritates users in most of their activities.

Take a look at the section of the MongoDB documentation on performance optimization.
The options the documentation lists are:
Create Indexes to Support Queries
Use Projections to Return Only Necessary Data
http://docs.mongodb.org/manual/tutorial/optimize-query-performance-with-indexes-and-projections/
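As a rough illustration of both points with the Node.js MongoDB driver (the collection name, field names and connection string below are assumptions, not taken from the question):

// Minimal sketch: index the fields the query filters/sorts on, and project only what the app needs.
const { MongoClient } = require('mongodb');

async function findRecentPosts(userId) {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  try {
    const posts = client.db('app').collection('posts');

    // Index matching the query below: equality on userId, sort on createdAt.
    await posts.createIndex({ userId: 1, createdAt: -1 });

    // Projection returns only the fields the app actually displays.
    return await posts
      .find({ userId }, { projection: { title: 1, createdAt: 1 } })
      .sort({ createdAt: -1 })
      .limit(20)
      .toArray();
  } finally {
    await client.close();
  }
}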

You probably need to add indexes to your collection.
The ensureIndex command creates an index on the collection and will improve the speed of your queries significantly. The indexes have to be created according to the queries you run.
Please follow this documentation:
http://docs.mongodb.org/manual/core/index-creation/
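For example, in the mongo shell you could create the index and then confirm with explain() that the query uses it rather than scanning the whole collection (the collection and field names are assumptions):

// Assumed "users" collection with "status" and "lastLogin" fields.
db.users.ensureIndex({ status: 1, lastLogin: -1 })  // createIndex() on MongoDB 3.0+

// Check the winning plan; an IXSCAN stage means the index is being used.
db.users.find({ status: "active" }).sort({ lastLogin: -1 }).limit(20).explain("executionStats")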

Related

storing variables in webserver with nodejs and express [closed]

I'm trying to create a web server that saves variables on the server for each request, something like caching. Is that possible?
For example, something like res.send(data).
I'm trying to make something like a web DB (not sure if it works).
This web server uses Express.
edit:
SOLVED
You have two main options for saving your data values on the server side:
1. Save in your server's memory using the Redis data store.
Beginner's Guide to Redis and Caching with NodeJS
2. Save in an on-disk database such as MySQL or MongoDB.
If your JSON value is small and your code uses it frequently, the Redis in-memory store is a good option (it also works for sessions). If you rarely need to access the data, or it is large, use MongoDB or MySQL instead.
(For caching purposes, option 1 is better.)
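A small sketch of option 1 with Express and the node-redis client (the route, key scheme, TTL and loadFromDatabase() helper are assumptions):

// Cache a database result in Redis for 60 seconds (node-redis v4 API).
const express = require('express');
const { createClient } = require('redis');

const app = express();
const cache = createClient();            // defaults to localhost:6379
cache.connect().catch(console.error);

app.get('/data/:id', async (req, res) => {
  const key = `data:${req.params.id}`;   // assumed key scheme

  const cached = await cache.get(key);
  if (cached) return res.send(JSON.parse(cached));

  const data = await loadFromDatabase(req.params.id); // placeholder for your existing DB call
  await cache.set(key, JSON.stringify(data), { EX: 60 }); // expire after 60 seconds
  res.send(data);
});

app.listen(3000);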

reading sql server log files (ldf) with spark [closed]

This is probably far-fetched, but... can Spark, or any advanced "ETL" technology you know of, connect directly to SQL Server's log file (the .ldf) and extract its data?
The agenda is to get SQL Server's real-time operational data without replicating the whole database first (nor selecting directly from it).
Appreciate your thoughts!
Rea
To answer your question: I have never heard of any technology that reads an LDF directly, but there are several products on the market that can "link-clone" a database almost instantly using some internal tricks. Keep in mind that the data is not copied with these tools, but they allow instant access for use cases like yours.
There may be some free ways to do this, especially using cloud functions, or maybe the linked-clone features that virtual machines offer, but at this time I only know of paid products such as Dell EMC, Redgate's and Windocks.
The easiest to try that are not in the cloud are:
Red Gate SQL Clone with a 14-day free trial:
Red Gate SQL Clone Link
Windocks.com (this is free for some cases, but harder to get started with)

Where to store mp4 files on a node.js server? [closed]

I am building a video-streaming web app using node.js/express and MongoDB, but I am facing an issue: where should I store the mp4 files that my clients will upload to my back end? I am not sure whether MongoDB is capable of storing large files (in the GB range). My current idea is to keep the files in a directory and track each file's path in MongoDB. Is this a good idea, or is there a better way to do it?
My advice: use
s3.amazonaws.com
Yes, it's much better to store only a path in MongoDB than to store the video file directly in the DB, because your DB will grow very fast if you do that. The disk space taken by both solutions is the same, but overloading your DB with these files will just make it slower.
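A rough sketch of that approach with the AWS SDK and the Node.js MongoDB driver (the bucket, database, collection and field names are assumptions):

// Upload the mp4 to S3, then store only its URL and metadata in MongoDB.
const AWS = require('aws-sdk');
const fs = require('fs');
const { MongoClient } = require('mongodb');

async function saveVideo(localPath, name) {
  const s3 = new AWS.S3();
  const upload = await s3.upload({
    Bucket: 'my-video-bucket',            // assumed bucket name
    Key: `videos/${name}`,
    Body: fs.createReadStream(localPath),
    ContentType: 'video/mp4',
  }).promise();

  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  try {
    await client.db('app').collection('videos').insertOne({
      name,
      url: upload.Location,               // S3 URL, not the file itself
      uploadedAt: new Date(),
    });
  } finally {
    await client.close();
  }
}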

How data can be synchronized among multiple linux servers [closed]

I have 4 servers running the same project, and I want to make changes to the database from the UI.
What should I do so that all changes are reflected on every server and all servers contain the same data?
You can use database replication for this purpose.
You can use data replication: replicate all the data from the four servers to one single location.
Database replication is the frequent electronic copying of data from a database on one computer or server to a database on another, so that all users share the same information. The result is a distributed database in which users can access the data relevant to their tasks without interfering with the work of others.
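If the stack here is MongoDB (as in the other questions on this page), a replica set is the built-in way to do this. A minimal sketch in the mongo shell, with the hostnames as assumptions:

// Run once on one server, after starting every mongod with --replSet rs0.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "server1.example.com:27017" },
    { _id: 1, host: "server2.example.com:27017" },
    { _id: 2, host: "server3.example.com:27017" },
    { _id: 3, host: "server4.example.com:27017" }
  ]
})

Writes go to the primary and are replicated to the other members automatically; in practice an odd number of voting members (or an added arbiter) is recommended for elections.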

put all images in a database or just in a folder [closed]

I am developing a website which uses a lot of images.
The images get manipulated very often (every few seconds, by the clients). All images are on a Linux server. It is also possible that two clients try to change an image at the same time.
So my question is: should I put the images into a database or just leave them in a folder (and how does the OS handle write-write collisions)?
I use node.js and MongoDB on the server.
You usually store a reference to the file location in the database. As for write-write collisions, in most cases whoever has the file open first gets it, but it mostly depends on the OS you are working with. You will want to look into file locking; this Wikipedia article gives a good overview:
http://en.wikipedia.org/wiki/File_locking
It is also considered good practice to check in your code and notify the user if the file is in use, if write collisions are likely to occur.
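One simple advisory pattern in Node.js is to create a lock file with the exclusive flag before writing; this is only a sketch (the file names are assumptions, and libraries such as proper-lockfile wrap the same idea more robustly):

// Take a "<image>.lock" file as a lock; 'wx' fails with EEXIST if it already exists.
const fs = require('fs/promises');

async function writeImageSafely(imagePath, buffer) {
  const lockPath = `${imagePath}.lock`;
  let lock;
  try {
    lock = await fs.open(lockPath, 'wx');
  } catch (err) {
    if (err.code === 'EEXIST') throw new Error('Image is currently being edited');
    throw err;
  }
  try {
    await fs.writeFile(imagePath, buffer);
  } finally {
    await lock.close();
    await fs.unlink(lockPath);            // release the lock
  }
}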
I suggest you store your images within MongoDB using the GridFS file system. This lets you keep images and their metadata together, gives you atomic updates, and has two more advantages if you are using replica sets for your database:
Your images have the same high availability as the rest of your data and get backed up together.
You can, eventually, direct queries to secondary members of the set.
For more information, see
http://docs.mongodb.org/manual/applications/gridfs
http://docs.mongodb.org/manual/replication/
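A small GridFS upload sketch with the Node.js driver's GridFSBucket (the database name, bucket name and metadata are assumptions):

// Stream a local image into GridFS; chunks and metadata are stored together.
const fs = require('fs');
const { MongoClient, GridFSBucket } = require('mongodb');

async function uploadImage(localPath, filename) {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const bucket = new GridFSBucket(client.db('app'), { bucketName: 'images' });

  await new Promise((resolve, reject) => {
    fs.createReadStream(localPath)
      .pipe(bucket.openUploadStream(filename, { metadata: { uploadedAt: new Date() } }))
      .on('finish', resolve)
      .on('error', reject);
  });

  await client.close();
}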
Does this help?
Cheers
Ronald
