A seemingly simple question, but I am unsure of the node.js equivalent to what I'm used to (say from Python, or LAMP), and I actually think there may not be one.
Problem statement: I want to use basic, simple logging in my express app. Maybe I want to output DEBUG messages, or INFO messages, or just some stats to the log for consumption by other back-end systems later.
1) I want all log messages, however, to contain some fields: remote-ip and request url, for example.
2) On the other hand, code that logs is everywhere in my app, including deep inside the call tree.
3) I don't want to pass (req, res) down into every node in the call tree (this just creates a lot of parameter passing where it is mostly not needed, and complicates my code, since I need to pass them into async callbacks, timeouts, etc.).
In other systems, where there is a thread per request, I will store the (req,res) pair (where all the data I need is) in a thread-local-storage, and the logger will read this and format the message.
In node, there is only one thread. What is my alternative here? What is "the request context under which a specific piece of code is running"?
The only way I can think of achieving something like this is by capturing a stack trace and using reflection to look at local variables up the call tree. I hate that, plus I would need to implement this for all callbacks, setTimeouts, setIntervals, new Function()'s, eval's, ... and the list goes on.
What are other people doing?
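To make the desired call site concrete, here is a rough sketch (the logger and function names are purely hypothetical, just to illustrate what I'm after):

    // somewhere deep in the call tree, with no req/res in scope
    function computeStats(items) {
        log.info('computed stats', { count: items.length });
        // the logger should attach remote-ip and request url by itself
    }

    // and NOT this, with (req, res) threaded through every function and callback:
    function computeStatsExplicit(req, res, items) {
        log.info('computed stats', { ip: req.ip, url: req.url, count: items.length });
    }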
Related
I tried using NodeJS in a server-side script to parse the text content in local PDF files using pdf-parse, which in turn uses Mozilla's amazing PDF parser. Everything worked wonderfully in my dev sandbox, but the whole thing came crashing down on me when I attempted to use the same code in production.
My problem was caused by the sheer number of PDF files I'm trying to process asynchronously: I have more than 100K files that need processing, and Mozilla's PDF parser is (understandably) unconditionally asynchronous – the OS killed my node process because of too many open files. I had started by writing all of my code asynchronously (the preliminary part where I search for PDF files to parse), but even after refactoring all the code for synchronous operation, it still kept crashing.
The gist of the problem is related to the cost of the operations: walking the folder structure to look for PDF files is cheap, whereas actually opening the files, reading their contents and parsing them is expensive. So Node kept generating new promises for each file it encountered, and the promises were never fulfilled. If I tried to run the code manually on smaller folders, it worked like a charm – really fast and reliable. As soon as I tried to execute the code on the entire folder structure it crashed, no matter what.
I know Node enthusiasts always answer questions like these by saying the OP is using the wrong programming pattern, but I'm stumped as to what would be the correct pattern in this case.
You need to control how many simultaneous asynchronous operations you start at once. This is under your control. You don't show your code, so we can only advise conceptually.
For example, if you look at this answer:
Promise.all consumes all my RAM
It shows a function called mapConcurrent() that iterates an array, calling an asynchronous promise-returning function for each item while keeping no more than a maximum number of async operations "in flight" at any given time. You can tune that number of concurrent operations based on your situation.
Another implementation here:
Make several requests to an API that can only handle 20 request a minute
with a function call pMap() that does something similar.
There are other such implementations built into libraries such as Bluebird and Async-promises.
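For reference, here is a minimal sketch of the idea (not the linked implementations; mapConcurrent, limit, fn and parsePdf are illustrative names): map an array through a promise-returning function while never having more than limit operations in flight at once.

    function mapConcurrent(items, limit, fn) {
        return new Promise(function (resolve, reject) {
            const results = new Array(items.length);
            let index = 0;      // next item to start
            let inFlight = 0;   // operations currently running
            let failed = false;

            function startNext() {
                if (failed) return;
                if (index >= items.length) {
                    if (inFlight === 0) resolve(results); // everything finished
                    return;
                }
                const i = index++;
                inFlight++;
                let p;
                try {
                    p = Promise.resolve(fn(items[i], i));
                } catch (err) {
                    failed = true;
                    reject(err);
                    return;
                }
                p.then(function (value) {
                    results[i] = value;
                    inFlight--;
                    startNext();            // start the next item as each one finishes
                }, function (err) {
                    failed = true;
                    reject(err);
                });
            }

            // prime the pump with up to `limit` concurrent operations
            for (let k = 0; k < limit; k++) startNext();
        });
    }

    // usage sketch: parse at most 20 PDFs at a time
    // mapConcurrent(pdfFiles, 20, file => parsePdf(file)).then(results => { /* ... */ });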
I want to know if it's possible to have synchronous blocks in a Node.js application. I'm a total newbie to Node, but I couldn't find any good answers for the behavior I'm looking for specifically.
My actual application will be a little more involved, but essentially what I want to do is support a GET request and a POST request. The POST request will append an item to an array stored on the server and then sort the array. The GET request will return the array. Obviously, with multiple GETs and POSTs happening simultaneously, we need a correctness guarantee. Here's a really simple example of what the code would theoretically look like:
var arr = [];

app.get(/*URL 1*/, function (req, res) {
    res.json(arr);
});

app.post(/*URL 2*/, function (req, res) {
    var itemToAdd = req.body.item;
    arr.push(itemToAdd);
    arr.sort();
    res.sendStatus(200);
});
What I'm worried about is a GET request returning the array before it is sorted but after the item is appended or, even worse, returning the array as it's being sorted. In a language like Java I would probably just use a ReadWriteLock. From what I've read about Node's asynchrony, it doesn't appear that arr will be accessed in a way that preserves this behavior, but I'd love to be proven wrong.
If it's not possible to modify the code I currently have to support this behavior, are there any other alternatives or workarounds to get the application to do what I want it to do?
What I'm worried about is a GET request returning the array before it is sorted but after the item is appended or, even worse, returning the array as it's being sorted.
In the case of your code here, you don't have to worry about that (although read on because you may want to worry about other things!). Node.js is single-threaded so each function will run in its entirety before returning control to the Event Loop. Because all your array manipulation is synchronous, your app will wait until the array manipulation is done before answering a GET request.
One thing to watch out for then, of course, is if (for example) the .sort() takes a long time. If it does, your server will block while that is going on. And this is why people tend to favor asynchronous operations instead. But if your array is guaranteed to be small and/or this is an app with a limited number of users (say, it's an intranet app for a small company), then what you're doing may work just fine.
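To illustrate where that guarantee ends, here is a hedged sketch: if anything asynchronous sneaks in between the push and the sort (saveToDatabase and the '/items' path below are hypothetical), a concurrent GET can run in that gap and observe the unsorted array.

    app.post('/items', async function (req, res) {
        arr.push(req.body.item);
        await saveToDatabase(req.body.item); // another request's GET can run here...
        arr.sort();                          // ...and see the array appended-to but not yet sorted
        res.sendStatus(200);
    });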
To get a good understanding of the whole single-threaded + event loop thing, I recommend Philip Roberts's talk on the event loop.
I'm writing my first 'serious' Node/Express application, and I'm becoming concerned about the number of O(n) and O(n^2) operations I'm performing on every request. The application is a blog engine, which indexes and serves up articles stored in markdown format in the file system. The contents of the articles folder do not change frequently, as the app is scaled for a personal blog, but I would still like to be able to add a file to that folder whenever I want, and have the app include it without further intervention.
Operations I'm concerned about
When /index is requested, my route is iterating over all files in the directory and storing them as objects
When a "tag page" is requested (/tag/foo) I'm iterating over all the articles, and then iterating over their arrays of tags to determine which articles to present in an index format
Now, I know that this is probably premature optimisation, as the performance is still satisfactory with fewer than 200 files, though definitely not lightning fast. And I also know that in production, measures like this wouldn't be considered necessary or worthwhile unless backed by significant benchmarking results. But as this is purely a learning exercise/demonstration of ability, and as I'm (perhaps excessively) concerned about learning optimal habits and patterns, I worry I'm committing some kind of sin here.
Measures I have considered
I get the impression that a database might be a more typical solution, rather than filesystem I/O. But this would mean monitoring the directory for changes and processing/adding new articles to the database, a whole separate operation/functionality. If I did this, would it make sense to be watching that folder for changes even when a request isn't coming in? Or would it be better to check the freshness of the database, then retrieve results from the database? I also don't know how much this helps ultimately, as database calls are still async/slower than internal state, aren't they? Or would a database query, e.g. articles where tags contain x be O(1) rather than O(n)? If so, that would clearly be ideal.
Also, I am beginning to learn about techniques/patterns for caching results, e.g. a property on the function containing the previous result, which could be checked for and served up without performing the operation. But I'd need to check if the folder had new files added to know if it was OK to serve up the cached version, right? But more fundamentally (and this is the essential newbie query at hand) is it considered OK to do this? Everyone talks about how node apps should be stateless, and this would amount to maintaining state, right? Once again, I'm still a fairly raw beginner, and so reading the source of mature apps isn't always as enlightening to me as I wish it was.
Also have I fundamentally misunderstood how routes work in node/express? If I store a variable in index.js, are all the variables/objects created by it destroyed when the route is done and the page is served? If so I apologise profusely for my ignorance, as that would negate basically everything discussed, and make maintaining an external database (or just continuing to redo the file I/O) the only solution.
First off, the request and response objects that are part of each request last only for the duration of a given request and are not shared by other requests. They will be garbage collected as soon as they are no longer in use.
But, module-scoped variables in any of your Express modules last for the duration of the server. So, you can load some information in one request, store it in a module-level variable and that information will still be there when the next request comes along.
Since multiple requests can be "in flight" at the same time if you are using any async operations in your request handlers, any information shared or updated between requests has to be updated atomically so the data stays consistent. In node.js this is much simpler than in a multi-threaded web server, but there can still be issues if you do part of an update to a shared object, then some async operation, then the rest of the update. While that async operation is pending, another request can run and see the shared object in its half-updated state.
When not doing an async operation, your Javascript code is single threaded so other requests won't interleave until you go async.
It sounds like you want to cache your parsed state into a simple in-memory Javascript structure and then intelligently update this cache of information when new articles are added.
Since you already have the code to parse your set of files and tags into in-memory Javascript variables, you can just keep that code. You will want to package that into a separate function that you can call at any time and it will return a newly updated state.
Then, you want to call it when your server starts and that will establish the initial state.
All your routes can be changed to operate on the cached state and this should speed them up tremendously.
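As a rough sketch of that shape (parseAllArticles and the cache structure are illustrative, not a drop-in implementation), assuming you wrap your existing parsing code in one rebuild function:

    let cache = null; // module-scoped, survives across requests

    async function rebuildCache() {
        // reuse your existing file-walking/parsing code here
        cache = await parseAllArticles(); // hypothetical: returns { articles: [...] }
    }

    // build the initial state when the server starts
    rebuildCache().then(function () {
        app.listen(3000);
    });

    // routes read the in-memory cache instead of hitting the file system
    app.get('/tag/:tag', function (req, res) {
        var matching = cache.articles.filter(function (a) {
            return a.tags.includes(req.params.tag);
        });
        res.json(matching);
    });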
Then, all you need is a scheme to decide when to update the cached state (e.g. when something in the file system changed). There are lots of options and which to use depends a little bit on how often things will change and how often the changes need to get reflected to the outside world. Here are some options:
You could register a file system watcher for a particular directory of your file system and, when it triggers, figure out what has changed and update your cache (a minimal sketch of this follows the list). You can make the update function as dumb (just start over and parse everything from scratch) or as smart (figure out which one item changed and update only that part of the cache) as it is worth doing. I'd suggest you start simple and only invest more in it when you're sure that effort is needed.
You could just manually rebuild the cache once every hour. Updates would take an average of 30 minutes to show, but this would take 10 seconds to implement.
You could create an admin function in your server to instruct the server to update its cache now. This might be combined with option 2, so that if you added new content, it would automatically show within an hour, but if you wanted it to show immediately, you could hit the admin page to tell it to update its cache.
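A hedged sketch of options 1 and 2 combined, assuming the rebuildCache() function from the earlier sketch and an ARTICLES_DIR constant pointing at your articles folder (fs.watch can fire several events for a single change, hence the small debounce):

    const fs = require('fs');

    let rebuildTimer = null;
    fs.watch(ARTICLES_DIR, function () {
        clearTimeout(rebuildTimer);
        rebuildTimer = setTimeout(rebuildCache, 1000); // debounce bursts of events
    });

    // belt and braces: also rebuild once an hour (option 2)
    setInterval(rebuildCache, 60 * 60 * 1000);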
I'm a little confused by this issue Netflix ran into with Express. They started to see a buildup of latency in their APIs. We use Express for everything, and I'd like to avoid any sudden problems.
Here's a link to the article.
http://www.infoq.com/news/2014/12/expressjs-burned-netflix
The way it's written, it sounds like a problem with Express, and how it's handling routing. But in the end, they state the following:
"After dig into their source code the team found out the problem. It resided in a periodic function that was being executed 10 times per hour and whose main purpose was to refresh route handlers from an external source. When the team fixed the code so that the function would stop adding duplicate route handlers, the latency and CPU usage increases went away."
I don't understand what exactly they were trying to do. I don't believe this was something that Express was doing on its own. It sounds like they were doing something a bit oddball, and it didn't work out. I'd think load testing would have revealed this. Anyway, can anyone who understands this better comment on what the problem actually was? The entire section at the top of the article talks about how Express rotates through the routes list, but I really don't see how iterating over what should not be a very large array would cause that much of a delay.
The best counterpoint explanation of this I've seen is Eran Hammer's. The comments are also illuminating. Of particular interest are the following excerpts from Yunong Xiao's (the author of the Netflix post) comment:
The specific problem we encountered was not a global handler but the express static file handler with a simple string path. We were adding the same static router handler each time we refreshed our routes. Since this route handler was in the global routing array, it meant that every request that was serviced by our app had to iterate through this handler.

It was absolutely our misuse of the Express API that caused this -- after all, we were leaking that specific handler! However, had Express 1) not stored static handlers with simple strings in the global routing array, 2) rejected duplicate routing handlers, or 3) not taken 1ms of CPU time to merely iterate through this static handler, then we would not have experienced such drastic performance problems. Express would have masked the fact that we had this leak -- and perhaps this would have bit us down the road in another subtle way.

Our application has over 100 GET routes (and growing); even using Express's Router feature -- which lets you compose arrays of handlers for each path inside the global route array -- we'd still have to iterate through all 100 handlers for each request. Instead, we built our own custom global route handler, which takes in the context of a request (including its path) and returns a set of handlers specific to the request, such that we don't have to iterate through handlers we don't need.

This was our implementation, which separated the global handlers that every request needs from handlers specific to each request. I'm sure more optimal solutions are out there.
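We don't have Netflix's code, but the approach described reads roughly like the following sketch (routeTable, registerRoute and the dispatcher are illustrative names, not their implementation): keep a map from path to handlers and mount a single Express middleware that looks the path up instead of iterating every layer.

    const routeTable = new Map(); // path -> array of handler functions

    function registerRoute(path, handlers) {
        routeTable.set(path, handlers); // overwriting means a refresh can't add duplicates
    }

    // one Express entry point: map lookup instead of iterating the global layer array
    app.use(function (req, res, next) {
        const handlers = routeTable.get(req.path);
        if (!handlers) return next(); // fall through to other middleware / 404

        let i = 0;
        (function run(err) {
            if (err) return next(err);
            const handler = handlers[i++];
            if (!handler) return next();
            handler(req, res, run);
        })();
    });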
So I have a backend implementation in node.js which mainly contains a global array of JSON objects. The JSON objects are populated by user requests (POSTS). So the size of the global array increases proportionally with the number of users. The JSON objects inside the array are not identical. This is a really bad architecture to begin with. But I just went with what I knew and decided to learn on the fly.
I'm running this on an AWS micro instance with 6GB RAM.
How to purge this global array before it explodes?
Options that I have thought of:
At a periodic interval write the global array to a file and purge. Disadvantage here is that if there are any clients in the middle of a transaction, that transaction state is lost.
Restart the server every day and write the global array into a file at that time. Same disadvantage as above.
Follow 1 or 2, and for every incoming request, if the global array is empty, look for the corresponding JSON object in the file. This seems absolutely absurd and stupid.
Somehow I can't think of any other solution without having to completely rewrite the nodejs application. Can you guys think of any .. ? Will greatly appreciate any discussion on this.
I see that you are using memory as storage. If that is the case and your code is synchronous (you don't seem to use a database, so it might be), then solution 1 is actually correct. This is because JavaScript is single-threaded, which means that while one piece of code is running, no other code can run. There is no concurrency in JavaScript. It is only an illusion, because Node.js is so fast.
So your cleaning code won't fire until the transaction is over. This is of course assuming that your code is synchronous (and from what I see, it might be).
But still, there are like 150 reasons for not doing that. The most important is that you are reinventing the wheel! Let the database do the hard work for you. Using a proper database will save you all that trouble in the future. There are many possibilities: MySQL, PostgreSQL, MongoDB (my favourite), CouchDB and many, many others. It shouldn't matter at this point which one. Just pick one.
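For example, here is a minimal sketch with MongoDB's Node driver, assuming an Express app object (the connection string, database/collection names and the '/items' route are placeholders): each POST persists the object instead of pushing it into a global array, so there is nothing in memory to purge.

    const { MongoClient } = require('mongodb');
    const client = new MongoClient('mongodb://localhost:27017'); // placeholder connection string

    async function main() {
        await client.connect();
        const items = client.db('myapp').collection('items'); // placeholder names

        app.post('/items', async function (req, res) {
            await items.insertOne(req.body); // persisted; nothing accumulates in process memory
            res.sendStatus(200);
        });

        app.listen(3000);
    }

    main();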
I would suggest that you start saving your JSON to a non-relational DB like http://www.couchbase.com/.
Couchbase is extremely easy to set up and use, even in a cluster. It uses a simple key-value design, so saving data is as simple as:

    couchbaseClient.set("someKey", "yourJSON")

then to retrieve your data:

    data = couchbaseClient.get("someKey")
The system is also extremely fast and is used by OMGPOP for Draw Something. http://blog.couchbase.com/preparing-massive-growth-revisited