I am trying to find the best caching solution for a Node app. There are a few modules which can manage this, the most popular being: https://www.npmjs.com/package/node-cache
But I found that responses are faster if I just save some results into a variable like:
var cache = ["Kosonsoy","Pandean","Ḩadīdah","Chirilagua","Chattanooga","Hebi","Péruwelz","Pul-e Khumrī"];
I can then update that variable on a fixed interval. Is this also classed as caching? Are there any known issues/problems with this method? It definitely provides the fastest response times.
Of course it is faster, as you are using local process memory to store the data.
For a limited cache size this is fine, but it can quickly eat up all your process memory.
I would advise using the modules, as there is a considerable amount of collaborative brainpower invested in them. Alternatively, you can run something like Redis on a dedicated instance and use it as a cache.
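For reference, a module like node-cache keeps the same in-memory speed but handles TTLs and expiry sweeps for you. A minimal sketch, assuming the stdTTL/checkperiod options and set/get calls from its README:

const NodeCache = require("node-cache");

// Entries expire after 60 seconds; expired keys are swept every 120 seconds.
const cityCache = new NodeCache({ stdTTL: 60, checkperiod: 120 });

cityCache.set("cities", ["Kosonsoy", "Pandean", "Chattanooga"]);

const cities = cityCache.get("cities"); // undefined once the TTL has passed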
Alternatives aside, if you stick with your solution, I recommend a small improvement.
Instead of
var cache = ["Kosonsoy","Pandean","Ḩadīdah","Chirilagua","Chattanooga","Hebi","Péruwelz","Pul-e Khumrī"];
try using an object as key-value storage.
This will make searching and updating entries faster.
var cache = {"Kosonsoy":data,"Pandean":moreData,...};
Searching an array requires iteration, while accessing the object is as simple as
var storedValue = cache["Kosonsoy"];
and saving is
cache[key] = value;
If you use many workers, each one will have its own duplicate of the cache, because they have no shared memory.
Of course, you can use it for small data sets. And keeping data in a variable for a period of time can be called caching, in my view.
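Putting the pieces together, here is a rough sketch of the object-as-cache idea combined with the fixed-interval refresh from the question. fetchCities is a hypothetical placeholder for whatever actually produces your data:

// In-memory cache keyed by name, refreshed on a fixed interval.
var cache = {};

function refreshCache() {
  // fetchCities() stands in for your real data source (DB, API, file...).
  fetchCities(function (err, cities) {
    if (err) return console.error("cache refresh failed:", err);
    var fresh = {};
    cities.forEach(function (city) {
      fresh[city.name] = city;
    });
    cache = fresh; // swap in one go so readers never see a half-built cache
  });
}

refreshCache();
setInterval(refreshCache, 60 * 1000); // refresh every minute

// Lookup and save stay O(1):
var storedValue = cache["Kosonsoy"];
cache["Hebi"] = { name: "Hebi" };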
Related
I'm running Node.js / Express server on a container with pretty strict memory constraints.
One of the endpoints I'd like to expose is a "batch" endpoint where a client can request a list of data objects in bulk from my data store. The individual objects vary in size, so it's difficult to set a hard limit on how many objects can be requested at one time. In most cases a client could request a large number of objects without any issues, but in certain edge cases even requests for a small number of objects will trigger an OOM error.
I'm familiar with Node's process.memoryUsage() & process.memoryUsage.rss(), but I'm worried about the performance implications of constantly checking heap (or service) memory usage while serving an individual batch request.
In the longer term, I might consider using memory monitoring to bake in some automatic pagination for the endpoint. In the short term, however, I'd just like to be able to return an informative error to the client in the event that they are requesting too many data objects at a given time (rather than have the entire application crash with an OOM error).
Are there any more effective methods or tools I could be using to solve the problem instead?
You have a couple of options.
Option 1.
What is the biggest object you have in the store? I would say you allow some {max object count} on the API and set the container memory to {biggest object size} x {max object count}. You can even add a pagination concept if required, where page size = {max object count}.
Option 2.
Using process.memoryUsage() should be fine too; I don't believe it is a costly call, unless you have read that somewhere. Before each object pull, check the current memory and go ahead only if a safe amount of memory is available. The response in this case contains only the data pulled so far and lets the client pull the remaining ids in the next call. This is implementable via some paging logic too (a rough sketch follows after option 3).
Option 3.
Explore streams. I won't be able to add much more info on this for now.
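Here is a hedged sketch of option 2 as an Express handler. The 512 MB heap limit, the 50 MB safety margin, and getObjectById are illustrative assumptions, not details from the question:

const SAFETY_MARGIN_BYTES = 50 * 1024 * 1024;  // stop pulling when < ~50 MB of headroom is left
const HEAP_LIMIT_BYTES = 512 * 1024 * 1024;    // assumed container / --max-old-space-size limit

app.post("/batch", async (req, res) => {
  const requestedIds = req.body.ids;
  const results = [];
  let remaining = [];

  for (const id of requestedIds) {
    const { heapUsed } = process.memoryUsage();
    if (HEAP_LIMIT_BYTES - heapUsed < SAFETY_MARGIN_BYTES) {
      // Not enough headroom: return what we have and let the client re-request the rest.
      remaining = requestedIds.slice(results.length);
      break;
    }
    results.push(await getObjectById(id)); // getObjectById is a hypothetical data-store call
  }

  res.json({ results, remaining });
});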
Is there any way to lock an object in a Node.js application?
If multiple instances of the application are running, some functions shouldn't run concurrently. When instance A's function completes, it should unlock that object/key or some other identifier, and instance B should check whether it is unlocked and, if so, run its function.
Any object or key can be used to identify the locking and unlocking of the function.
How can this be done in a Node.js application that has multiple instances?
As mentioned above, Redis may be your answer; however, it really depends on the resources available to you. There are some other possibilities, less complicated and certainly less powerful, which may also do the trick.
node-cache may also do the trick, if you set it up correctly. It is nowhere near as powerful as Redis, but on the bright side it does not require as much setup and interaction with your environment.
So there are Redis and node-cache for memory-based locks. I should mention there are quite a few npm packages which handle caching; it depends on what you need and how intricate your cache needs to be.
However, there are less elegant ways to do what you want, though less elegant is not necessarily worse.
You could use a JSON-file-based system and hold locks on the files for a TTL. lockfile or proper-lockfile will accomplish the task. You can read the information from the files when needed, delete them when required, and give them a TTL: basically a cache system on disk.
The memory system is obviously faster. The file system requires just as much planning in your code as the memory system.
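For example, a hedged sketch with proper-lockfile, assuming its promise-based lock/release API; the file path, stale timeout, and retry counts are made up for illustration:

const lockfile = require("proper-lockfile");

async function runExclusively() {
  // Acquire a lock on a shared file; "stale" releases it if the holder dies.
  const release = await lockfile.lock("shared-state.json", {
    stale: 30 * 1000,          // consider the lock stale after 30 seconds
    retries: { retries: 5 },   // retry a few times if another instance holds it
  });

  try {
    // ...do the work that must not run concurrently...
  } finally {
    await release(); // let the other instance proceed
  }
}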
There is yet another way. This is possibly the most dangerous one, and you would have to think long and hard about the consequences in terms of security and need.
Node.js has its own process.env. As most know, this holds the environment variables, available everywhere by simply writing process.env.foo, where foo has been declared as an environment variable. A package such as dotenv allows you to add to those variables by way of a .env text file. Thus, if you put sam=mongoDB in that file, then process.env.sam in your code will be interpreted as mongoDB. Tons of variables can be set up this way.
So what good does that do, you may ask? Well, these variables are visible across your whole process, and they can be changed in mid-flight, so if you need to lock a variable and then change it, it is a simple manner to do so. Beware of the gotcha here, though: once the system goes down, or all processes stop and are started again, your environment variables will return to the defaults in the .env file.
Additionally, unless you are running on a reasonably safe system on AWS or Azure etc., I would not feel secure having my .env file open to the world. There is a way around this too: you can encrypt the values and put the ciphertext in the file, then decrypt before actually using the full variable.
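A minimal dotenv sketch of what is described above; the variable name sam is the example from the answer, and remember the caveat that values fall back to the .env defaults after a restart:

// Load the .env file into process.env once, at the very top of the app.
require("dotenv").config();

// .env contains:  sam=mongoDB
console.log(process.env.sam); // "mongoDB"

// The value can be changed while the app is running...
process.env.sam = "locked";
// ...but it falls back to the .env default once the process restarts.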
There are probably many more ways to lock and unlock, not the least of which is to use native Node.js structures: combine file-system events with the crypto module. But this demands a much deeper level of understanding of the actual Node.js library and its structures.
Hope some of this helped.
I strongly recommend Redis in your case.
There are several ways to create an application/process-shared object; using locks is one of them, as you mentioned.
But they're just complicated. Unless you really need to do that yourself, Redis will be good enough: atomic operations across multiple processes, transactions, and so on.
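A rough sketch of the usual Redis lock pattern (SET with NX and PX), here with the node-redis v4 client; the key prefix, TTL, and work callback are illustrative:

const { createClient } = require("redis");
const { randomUUID } = require("crypto");

const redis = createClient(); // call `await redis.connect()` once at startup

async function withLock(key, ttlMs, work) {
  const token = randomUUID();

  // NX: only set if the key does not exist; PX: auto-expire so a crashed holder can't block forever.
  const acquired = await redis.set(`lock:${key}`, token, { NX: true, PX: ttlMs });
  if (acquired !== "OK") return false; // another instance holds the lock

  try {
    await work();
  } finally {
    // Only delete the lock if we still own it (compare-and-delete via Lua).
    await redis.eval(
      'if redis.call("get", KEYS[1]) == ARGV[1] then return redis.call("del", KEYS[1]) else return 0 end',
      { keys: [`lock:${key}`], arguments: [token] }
    );
  }
  return true;
}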
Old thread, but I didn't want to use Redis, so I made my own open-source solution which uses WebSocket connections:
https://github.com/OneAndonlyFinbar/sync-cache
So I have a backend implementation in Node.js which mainly contains a global array of JSON objects. The JSON objects are populated by user requests (POSTs), so the size of the global array increases in proportion to the number of users. The JSON objects inside the array are not identical. This is a really bad architecture to begin with, but I just went with what I knew and decided to learn on the fly.
I'm running this on an AWS micro instance with 6GB RAM.
How to purge this global array before it explodes?
Options that I have thought of:
At a periodic interval, write the global array to a file and purge it. The disadvantage here is that if any clients are in the middle of a transaction, that transaction state is lost.
Restart the server every day and write the global array into a file at that time. Same disadvantage as above.
Follow 1 or 2, and for every incoming request, if the global array is empty, look for the corresponding JSON object in the file. This seems absolutely absurd and stupid.
Somehow I can't think of any other solution without having to completely rewrite the Node.js application. Can you guys think of any? Will greatly appreciate any discussion on this.
I see that you are using memory as storage. If that is the case and your code is synchronous (you don't seem to use a database, so it might be), then solution 1 is actually correct. This is because JavaScript is single-threaded, which means that while one piece of code is running, no other code can run. There is no concurrency in JavaScript; it's only an illusion, because Node.js is so fast.
So your cleaning code won't fire until the transaction is over. This is, of course, assuming that your code is synchronous (and from what I see, it might be).
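As a sketch of solution 1 under that assumption (the archive file name, interval, and globalArray are placeholders): because writeFileSync runs to completion before any other callback can touch the array, the purge cannot interleave with an in-flight synchronous request handler.

var fs = require("fs");

// Every 10 minutes, dump the array to disk and start fresh.
setInterval(function () {
  if (globalArray.length === 0) return;

  // Synchronous write: nothing else runs on the event loop until this finishes,
  // so no request handler can observe a half-purged array.
  fs.writeFileSync(
    "archive-" + Date.now() + ".json",
    JSON.stringify(globalArray)
  );

  globalArray.length = 0; // purge in place
}, 10 * 60 * 1000);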
But still, there are like 150 reasons for not doing that. The most important is that you are reinventing the wheel! Let a database do the hard work for you. Using a proper database will save you all this trouble in the future. There are many possibilities: MySQL, PostgreSQL, MongoDB (my favourite), CouchDB and many others. It shouldn't matter at this point which one; just pick one.
I would suggest that you start saving your JSON to a non-relational DB like http://www.couchbase.com/.
Couchbase is extremely easy to set up and use, even in a cluster. It uses a simple key-value design, so saving data is as simple as:
couchbaseClient.set("someKey", "yourJSON")
then to retrieve your data:
data = couchbaseClient.get("someKey")
The system is also extremely fast and is used by OMGPOP for Draw Something. http://blog.couchbase.com/preparing-massive-growth-revisited
I'm returning A LOT (500k+) of documents from a MongoDB collection in Node.js. It's not for display on a website, but rather for some number crunching on the data. If I grab ALL of those documents, the system freezes. Is there a better way to grab them all?
I'm thinking pagination might work?
Edit: This is already outside the main node.js server event loop, so "the system freezes" does not mean "incoming requests are not being processed"
After learning more about your situation, I have some ideas:
Do as much as you can in a Map/Reduce function in Mongo; perhaps throwing less data at Node is the solution.
Perhaps this much data is eating up all the memory on your system. Your "freeze" could be V8 stopping the system to do a garbage collection (see this SO question). You could use the V8 flag --trace-gc to log GCs and prove this hypothesis (thanks to another SO answer about V8 and garbage collection).
Pagination, like you suggested, may help. Perhaps even split your data up further into worker queues (create one worker task with references to records 1-10, another with references to records 11-20, etc.), depending on your calculation.
Perhaps pre-process your data, i.e. somehow return much smaller data for each record, or don't use an ORM for this particular calculation if you're using one now. Making sure each record contains only the data you need means less data to transfer and less memory your app needs.
I would put your big fetch-and-process task on a worker queue, in a background process, or behind a forking mechanism (there are a lot of different options here).
That way you do your calculations outside of your main event loop and keep it free to process other requests. While you should be doing your Mongo lookup in a callback, the calculations themselves may take up time, thus "freezing" Node, because you're not giving it a break to process other requests.
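For instance, a minimal fork-based sketch; crunch.js and doNumberCrunching are hypothetical names for the worker script and the actual calculation:

// main.js: keep the event loop free, hand the heavy work to a child process.
const { fork } = require("child_process");

function runCrunchJob(batch) {
  return new Promise((resolve, reject) => {
    const worker = fork("./crunch.js");
    worker.send(batch);                  // e.g. { from: 1, to: 10 }
    worker.on("message", resolve);       // result comes back when the child is done
    worker.on("error", reject);
  });
}

// crunch.js: runs in its own process, so long computations don't block the server.
// process.on("message", (batch) => {
//   const result = doNumberCrunching(batch); // placeholder for the real calculation
//   process.send(result);
//   process.exit(0);
// });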
Since you don't need them all at the same time (that's what I've deduced from you asking about pagination), perhaps it's better to separate those 500k documents into smaller chunks to be processed on the next tick?
You could also use something like Kue to queue the chunks and process them later (thus not processing everything at the same time).
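A rough sketch of the chunked approach with the current mongodb driver; collection, processDoc, the projection, and the chunk size of 1000 are placeholders. Iterating the cursor pulls documents in driver-sized batches instead of loading all 500k at once, and yielding via setImmediate between chunks lets the event loop serve other requests:

async function crunchAll(collection) {
  const cursor = collection.find({}, { projection: { value: 1 } }); // only fetch what you need

  let processedInChunk = 0;
  for await (const doc of cursor) {
    processDoc(doc); // placeholder for the actual number crunching

    // Yield to the event loop every 1000 documents so other requests aren't starved.
    if (++processedInChunk >= 1000) {
      processedInChunk = 0;
      await new Promise((resolve) => setImmediate(resolve));
    }
  }
}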
Is there any particular reason to use memcached for fast access to cached data instead of just creating a global CACHE variable in the node program and using that?
Assume that the application will be running in one instance and not distributed across multiple machines.
The global variable option seems like it would be faster and more efficient but I wasn't sure if there was a good reason to not do this.
It depends on the size and number of items. If you're working with a few items of modest size and they don't need to be accessible to other node instances, then using an object as a key/value store is fine. The one trick is that when you delete/remove items from the cache object, make sure you don't keep any other references to them, otherwise you will have a leak.
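To illustrate that gotcha, a minimal sketch of the global object cache and the lingering-reference leak (the key name and buffer size are just examples):

const CACHE = {};

CACHE["user:42"] = { name: "Ada", bigBuffer: Buffer.alloc(10 * 1024 * 1024) };

// Somewhere else, a reference escapes the cache:
const kept = CACHE["user:42"];

delete CACHE["user:42"]; // removes the key from the cache...
// ...but the 10 MB object is still reachable through `kept`,
// so the garbage collector cannot reclaim it until `kept` goes away too.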