Let's suppose I have a collection with 4k documents in MongoDB. Every 10 seconds, for example, I need to loop over each document and make some verifications. I'm wondering what the best way to do that would be. Do I need multithreading or something like that to speed up the process?
Fire off your script using cron.
Your database will need to be able to handle the traffic, so you may have to look at horizontal or vertical scaling if there are issues, or at cache options like Redis or Elasticsearch, which can handle faster hits; MongoDB will be able to handle it with the correct hardware otherwise.
Consider adding a lock while each run is in progress in case one of them takes too long, i.e. make a collection for checking whether a lock is turned on prior to executing the query script, and turn off the lock once the query finishes.
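A minimal sketch of that lock-collection idea in Node.js with the official MongoDB driver; the locks collection name, the fixed _id, and the verify callback are assumptions made for illustration, not the answerer's code:

// Whatever fires the script (cron, setInterval, ...) calls this each cycle.
async function runVerificationWithLock(db, verify) {
  const locks = db.collection('locks');
  try {
    // Inserting a document with a fixed _id acts as an atomic "take the lock":
    // a second run that starts while the first is still going gets a
    // duplicate-key error and simply skips this cycle.
    await locks.insertOne({ _id: 'verification-job', lockedAt: new Date() });
  } catch (err) {
    if (err.code === 11000) return; // lock already held, skip
    throw err;
  }
  try {
    await verify(); // loop over the documents and run the checks
  } finally {
    await locks.deleteOne({ _id: 'verification-job' }); // release the lock
  }
}

If the process can crash mid-run, a lockedAt timestamp like the one above lets a later run decide whether a stale lock can safely be cleared.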
I am trying to work out how to process bulk records into Elasticsearch using the bulk function, and I need to use threads to get some performance out of it. But I am stuck trying to work out how to limit the threads to 5 concurrent so it's not too heavy on Elasticsearch.
I was thinking of just looping over the DB and filling a list, then when it hits e.g. 50 records, pushing it to a thread for processing and continuing. But this method will spawn too many threads, and I cannot see an obvious way to limit the threads without waiting for all of them to finish before adding another one.
I have done this in Golang before, where you can just add threads and when it hits the limit it will just wait before adding more to the queue, but it seems a little more elusive in Python so far.
I am open to alternatives, but this seems like the cleanest way to go so far; there might be better methods, though, like db -> queue with a limit, then just threads consuming from the queue?
Looking forward to some responses.
We are trying to create an algorithm/heuristic that will schedule a delivery at a certain time period, but there is definitely a race condition here, whereby two conflicting scheduled items could be written to the DB, because the write is not really atomic.
The only way to truly prevent race conditions is to create some atomic insert operation, to my knowledge.
The server receives a request to schedule something for a certain time period, and the server has to check if that time period is still available before it writes the data to the DB. But in that time the server could get a similar request and end up writing conflicting data.
How to circumvent this? Is there some way to create some script in the DB itself that hooks into the write operation to make the whole thing atomic? By putting a locking mechanism on that script? What makes the whole thing non-atomic is the read and the wire time between the server and the DB.
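One way to get the atomic insert mentioned above, assuming the DB is MongoDB (the question does not say which database is used) and assuming a conflict means two bookings for the same discrete slot rather than overlapping ranges, is to let a unique index do the conflict check. A hedged sketch with made-up collection and field names:

const { MongoClient } = require('mongodb');

// With a unique index on `slot`, the insert itself is the availability check:
// the second request for the same slot fails atomically on the server.
async function bookSlot(bookings, slot, details) {
  try {
    await bookings.insertOne({ slot, ...details });
    return true;
  } catch (err) {
    if (err.code === 11000) return false; // slot already taken
    throw err;
  }
}

async function main() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const bookings = client.db('app').collection('bookings');
  await bookings.createIndex({ slot: 1 }, { unique: true });

  console.log(await bookSlot(bookings, '10:00-11:00', { customer: 'a' })); // true
  console.log(await bookSlot(bookings, '10:00-11:00', { customer: 'b' })); // false
  await client.close();
}

main().catch(console.error);

There is no read-then-write gap here, so the wire time between the server and the DB no longer matters for correctness.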
Whenever I run into a race condition, I think of one immediate solution: a QUEUE.
Step 1) Instead of adding data to the database directly, add it to a queue without checking anything.
Step 2) A separate reader reads from the queue, checks the DB for any conflict, and takes the necessary action.
This is one of the ways to solve it. If you implement a better solution, please do share it.
Hope that helps.
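A minimal sketch of that two-step queue in Node.js, not the answerer's code; isSlotFree and saveBooking are placeholder functions standing in for whatever store sits behind them:

function createScheduler({ isSlotFree, saveBooking }) {
  const pending = [];
  let draining = false;

  async function drain() {
    if (draining) return; // only one reader runs at a time
    draining = true;
    while (pending.length > 0) {
      const req = pending.shift();
      // Because this loop is the only writer, the availability check and the
      // write can no longer interleave with another request's check.
      if (await isSlotFree(req.slot)) {
        await saveBooking(req);
        req.resolve(true);
      } else {
        req.resolve(false); // conflict: slot already taken
      }
    }
    draining = false;
  }

  return {
    // Step 1: just enqueue, no checks here.
    schedule(slot) {
      return new Promise((resolve) => {
        pending.push({ slot, resolve });
        drain(); // Step 2: kick the single reader
      });
    },
  };
}

An in-process array like this only works on a single server; with more than one server the queue needs to live somewhere shared (Redis, RabbitMQ, a queue collection), still with a single consumer per resource.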
I have a node server, and I'm fairly certain that my hosting server is running it across two different machines with slightly different times. So if I make a call to the server that just returns Date.now(), I could see something like this:
console.log(firstTime); // 5:30:24
console.log(secondTime); // 5:29:11
Even though I retrieved firstTime before I retrieved secondTime.
So I can't trust my server's system time. I also can't use MongoDB's timestamps because they're only precise to the second and I need something with millisecond precision.
I had a thought to store a single record in its own collection, and that record has an integer that I'd increment whenever there's an update, and then I'd store that in the object that was updated, so I'd know which objects were out of date. I'm not sure this is the best way to go, though, and not really sure how to accomplish it using MongoDB/Mongoose, without causing all sorts of subtle timing issues.
What's the best way to go about doing something like this?
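A rough sketch of that counter idea with Mongoose, not a tested solution; the Counter and Item model names and the seq/version field names are made up for illustration:

const mongoose = require('mongoose');

const Counter = mongoose.model('Counter',
  new mongoose.Schema({ _id: String, seq: Number }));
const Item = mongoose.model('Item',
  new mongoose.Schema({ name: String, version: Number }));

async function updateItem(itemId, changes) {
  // $inc on a single document is atomic on the server, so every caller gets
  // a distinct, monotonically increasing sequence number regardless of which
  // machine's clock is right.
  const counter = await Counter.findOneAndUpdate(
    { _id: 'updates' },
    { $inc: { seq: 1 } },
    { new: true, upsert: true }
  );

  // Stamp the updated object with that number; a higher version means newer.
  return Item.findByIdAndUpdate(
    itemId,
    { ...changes, version: counter.seq },
    { new: true }
  );
}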
So I have a backend implementation in Node.js which mainly consists of a global array of JSON objects. The JSON objects are populated by user requests (POSTs), so the size of the global array grows proportionally with the number of users. The JSON objects inside the array are not identical. This is a really bad architecture to begin with, but I just went with what I knew and decided to learn on the fly.
I'm running this on an AWS micro instance with 6GB RAM.
How to purge this global array before it explodes?
Options that I have thought of:
1. At a periodic interval, write the global array to a file and purge it. The disadvantage here is that if there are any clients in the middle of a transaction, that transaction state is lost.
2. Restart the server every day and write the global array into a file at that time. Same disadvantage as above.
3. Follow 1 or 2, and for every incoming request, if the global array is empty, look for the corresponding JSON object in the file. This seems absolutely absurd and stupid.
Somehow I can't think of any other solution without having to completely rewrite the Node.js application. Can you think of any? I would greatly appreciate any discussion on this.
I see that you are using memory as storage. If that is the case and your code is synchronous (you don't seem to use a database, so it might be), then solution 1 is actually correct. This is because JavaScript is single-threaded, which means that while one piece of code is running, another cannot run. There is no concurrency in JavaScript; it's only an illusion, because Node.js is so fast.
So your cleaning code won't fire until the transaction is over. This is of course assuming that your code is synchronous (and from what I see it might be).
But still, there are like 150 reasons for not doing that. The most important is that you are reinventing the wheel! Let the database do the hard work for you. Using a proper database will save you all the trouble in the future. There are many possibilities: MySQL, PostgreSQL, MongoDB (my favourite), CouchDB and many, many others. It shouldn't matter at this point which one. Just pick one.
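A minimal sketch of what "let the database do it" could look like with the official MongoDB Node.js driver; the connection string, database/collection names, and the request shape are all assumptions for illustration:

const { MongoClient } = require('mongodb');

async function main() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const sessions = client.db('app').collection('sessions');

  // Instead of globalArray.push(obj) on every POST:
  await sessions.insertOne({ userId: 'u1', state: { step: 2 } });

  // Instead of scanning the global array for a user's object:
  const doc = await sessions.findOne({ userId: 'u1' });
  console.log(doc);

  await client.close();
}

main().catch(console.error);

Memory stops being the limit, nothing is lost on a restart, and the periodic purge problem disappears.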
I would suggest that you start saving your JSON to a non-relational DB like http://www.couchbase.com/.
Couchbase is extremely easy to set up and use, even in a cluster. It uses a simple key-value design, so saving data is as simple as:
couchbaseClient.set("someKey", "yourJSON")
then to retrieve your data:
data = couchbaseClient.get("someKey")
The system is also extremely fast and is used by OMGPOP for Draw Something. http://blog.couchbase.com/preparing-massive-growth-revisited
I'm returning A LOT of documents (500k+) from a MongoDB collection in Node.js. It's not for display on a website, but rather for some number crunching on the data. If I grab ALL of those documents, the system freezes. Is there a better way to grab them all?
I'm thinking pagination might work?
Edit: This is already outside the main node.js server event loop, so "the system freezes" does not mean "incoming requests are not being processed"
After learning more about your situation, I have some ideas:
Do as much as you can in a Map/Reduce function in Mongo - perhaps if you throw less data at Node that might be the solution.
Perhaps this much data is eating all the memory on your system. Your "freeze" could be V8 stopping the system to do a garbage collection (see this SO question). You could use the V8 flag --trace-gc to log GCs and prove this hypothesis (thanks to another SO answer about V8 and garbage collection).
Pagination, like you suggested, may help. Perhaps even split your data further into worker queues (create one worker task with references to records 1-10, another with references to records 11-20, etc.), depending on your calculation.
Perhaps pre-process your data, i.e. somehow return much smaller data for each record. Or don't use an ORM for this particular calculation, if you're using one now. Making sure each record has only the data you need in it means less data to transfer and less memory your app needs.
I would put your big fetch+process task on a worker queue, background process, or forking mechanism (there are a lot of different options here).
That way you do your calculations outside of your main event loop and keep that free to process other requests. While you should be doing your Mongo lookup in a callback, the calculations themselves may take up time, thus "freezing" node - you're not giving it a break to process other requests.
Since you don't need them all at the same time (that's what I've deduced from you asking about pagination), perhaps it's better to separate those 500k records into smaller chunks to be processed at nextTick?
You could also use something like Kue to queue the chunks and process them later (thus not everything at the same time).
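A small sketch of that chunked approach using the MongoDB driver's cursor, not the answerer's code; the records collection, the injected crunch callback, and the chunk size are assumptions, and setImmediate is used here, in the spirit of the nextTick suggestion, to yield back to the event loop between chunks:

async function crunchInChunks(db, crunch, chunkSize = 1000) {
  // The cursor pulls documents from MongoDB in batches instead of
  // materialising all 500k of them in memory at once.
  const cursor = db.collection('records').find({}).batchSize(chunkSize);

  let chunk = [];
  for await (const doc of cursor) {
    chunk.push(doc);
    if (chunk.length >= chunkSize) {
      crunch(chunk); // do the number crunching on this slice
      chunk = [];
      // Give the event loop a break so other requests can be served.
      await new Promise((resolve) => setImmediate(resolve));
    }
  }
  if (chunk.length > 0) crunch(chunk); // final partial chunk
}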