Parsing a very long JSON file in NodeJS

I'm using a Nuxt app that has its own Express-based API. It looks to be working fine, but I'm worried about overusing someone else's services.
I found a JSON file that has everything I need to deliver beforehand.
The problem is that it is 4MB, and I don't think retrieving and adding data from it would be a very efficient process.
If I want to efficiently parse a huge JSON file and use it on the server (as in, serve parts of it according to the requests I receive), how would you go about it? Any ideas?

It's only 4MB, so you should just load it into memory; anything else you might do is probably overkill. Literally fs.readFile, then JSON.parse, and that's it: it's in memory.
Trying to come up with a more sophisticated and "efficient" approach in this context is probably not worth the trouble, and may not even be possible. If you end up using another service just to store and manage those 4MB of data, the I/O needed for that is orders of magnitude more expensive than just keeping it in RAM.
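For example, something along these lines (a minimal sketch; "data.json", the "items" property, and the ":id" lookup are placeholders for whatever your file actually looks like):

const express = require("express");
const fs = require("fs");
const path = require("path");

const app = express();

// Read and parse the ~4MB file once at startup; after that it lives in memory.
const data = JSON.parse(fs.readFileSync(path.join(__dirname, "data.json"), "utf8"));

// Serve only the part of the data that a request asks for.
// "items" and ":id" are placeholders for the shape of your JSON.
app.get("/items/:id", (req, res) => {
  const item = (data.items || []).find(i => String(i.id) === req.params.id);
  if (!item) return res.status(404).json({ error: "not found" });
  res.json(item);
});

app.listen(3000);

The file is parsed once when the server starts, so every request after that is just an in-memory lookup.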

Related

What is the best way to send files through HTTP?

I am working on a web API in Node.js and Express, and I want to enable users to upload images.
My API uses JSON requests and responses, but when it comes to uploading images I don't know which option is better. I can think of two ideas:
encode images as a base64 strings and send them as a JSON (like {"image": "base64_encoded_image"})
use multipart/form request and handle the request with a help of packages like multer
I've been reading some articles and other questions related to my issue, and I'm still struggling to choose one approach over the other. Encoding an image and sending it as JSON increases the size of the data by about 25% (that's what I've read), but using multipart seems weird to me since all other endpoints on my API use JSON.
The multipart/form-data approach has certain advantages over the Base64 encoding one.
The first and foremost disadvantage of the Base64 approach is the roughly 33% increase in data size. This may not be significant for small files, but it will definitely matter if you are sending large files and storing them (it will increase your storage cost and data consumption). Packages like multer also provide useful functionality, such as checking the file type (jpg, png, etc.) and setting size limits on files, and they are quite easy to implement, with plenty of tutorials and guides available online.
Furthermore, converting an image to a Base64 string increases the computational overhead on the user's machine, especially if the file is large.
I would advise you to use the multipart/form-data approach for your case.
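As a rough idea of what that looks like with multer (a sketch only; the "image" field name, the 5MB limit, and the accepted types are just examples):

const express = require("express");
const multer = require("multer");

// Keep uploads in memory for this sketch; multer.diskStorage is another option.
const upload = multer({
  storage: multer.memoryStorage(),
  limits: { fileSize: 5 * 1024 * 1024 }, // reject files larger than 5MB
  fileFilter: (req, file, cb) => {
    // Only accept JPEG or PNG uploads.
    const ok = ["image/jpeg", "image/png"].includes(file.mimetype);
    cb(null, ok);
  }
});

const app = express();

// The field name "image" must match the key used in the multipart form.
app.post("/upload", upload.single("image"), (req, res) => {
  if (!req.file) return res.status(400).json({ error: "no valid image received" });
  res.json({ size: req.file.size, type: req.file.mimetype });
});

app.listen(3000);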

How to store and handle big data string (around 2mb) in node js

I have a frontend in Angular and the API is in Node.js. The frontend sends me an encrypted file, and right now I am storing it in MongoDB. But when I send this file back to the frontend, the call sometimes breaks. Please suggest how I can solve this issue.
Your question isn't particularly clear. As I understand it, you want to send a large file from the Node backend to the client. If this is correct, then read on.
I had a similar issue whereby a long-running API call took several minutes to compile the data and send the single large response back to the client. The issue I had was a timeout, and I couldn't extend it.
With Node you can use 'streams' to stream the data to the client as it becomes available. This approach worked really well for me: as the server was streaming the data, the client was reading it. This got round the timeout issue because there is frequent 'chatter' between the server and client.
Using Streams did take a bit of time to understand and I spent a while reading various articles and examples. That said, once I understood, it was pretty straightforward to implement.
This article on Node Streams on the freeCodeCamp site is excellent. It contains a really useful example where it creates a very large text file, which is then 'piped' to the client using streams. It shows how you can read the text file in 'chunks', make transformations to those chunks, and send them to the client. What this article doesn't explain is how to read this data in the client...
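On the server side, the core of that pattern looks roughly like this (a minimal sketch assuming an Express app; "big-file.txt" is just a placeholder for your large payload):

const express = require("express");
const fs = require("fs");

const app = express();

// Stream a large file to the client in chunks instead of buffering it all in memory.
app.get("/stream", (req, res) => {
  const stream = fs.createReadStream("big-file.txt");
  stream.on("error", () => {
    if (!res.headersSent) res.sendStatus(500);
    else res.end();
  });
  stream.pipe(res); // each chunk is flushed to the client as it is read
});

app.listen(3000);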
For the client side, the approach is different from the typical fetch().then() approach. I found another article that shows a working example of reading such streams from the back-end. See Using Readable Streams on the Mozilla site. In particular, look at the large code example that uses the 'pump' function. This is exactly what I used to read the data streamed from the back-end.
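As a rough sketch of that client-side reading pattern (based on the same general idea as the MDN example, not their exact code):

// Read the response body chunk by chunk instead of waiting for the whole payload.
async function readStream(url) {
  const response = await fetch(url);
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let result = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;                                      // the server closed the stream
    result += decoder.decode(value, { stream: true });    // value is a Uint8Array chunk
  }
  return result;
}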
Sorry I can't give a specific answer to your question, but I hope the links get you started.

Best way to request API and store every minute

I have an app that is hitting the rate limit for an API, which is hurting the user experience. I have an idea to solve this, but I have no idea whether it is the right approach. Does this idea make sense, and is it a good way to solve this issue? And how should I go about implementing it? I'm using React Native and Node.js.
Here is the idea:
My app will request the data from a "middleman" API that I make. The middleman API will request data once per minute from the main API that I am having the rate-limit problem with (which should solve the rate-limit issue), then store it for that minute until it updates again. I was thinking the best way to do this is to spin up a server on AWS that requests from the other API every minute (is this the easiest way to make a request every minute?), then store the result on either a blank middleman webpage (or do I need to store it in a database like MongoDB?). My app would then call that middleman webpage/API.
Your idea is good.
Your middleman would be a caching proxy. It would act just as you stated. Have a look at https://github.com/active-video/caching-proxy; it does almost what you want. It creates a server that receives requests for URLs, fetches and caches them, and serves the cached version from then on.
The only downside is that it does not have a lifetime option for the cache. You could either fork it to add that option, or run a daemon that deletes files that are too old, forcing a re-fetch.
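If you would rather roll your own middleman, a very small Express version with a one-minute cache could look something like this (a sketch only; the upstream URL is a placeholder for the rate-limited API):

const express = require("express");
const https = require("https");

const app = express();
const CACHE_TTL_MS = 60 * 1000; // refresh at most once per minute
let cache = { data: null, fetchedAt: 0 };

// "api.example.com/data" stands in for the rate-limited upstream API.
function fetchUpstream() {
  return new Promise((resolve, reject) => {
    https.get("https://api.example.com/data", res => {
      let body = "";
      res.on("data", chunk => (body += chunk));
      res.on("end", () => resolve(body));
    }).on("error", reject);
  });
}

app.get("/data", async (req, res) => {
  // Only hit the upstream API if the cached copy is older than one minute.
  if (!cache.data || Date.now() - cache.fetchedAt > CACHE_TTL_MS) {
    try {
      cache = { data: await fetchUpstream(), fetchedAt: Date.now() };
    } catch (err) {
      if (!cache.data) return res.sendStatus(502);
      // otherwise fall through and serve the stale copy
    }
  }
  res.type("application/json").send(cache.data);
});

app.listen(3000);

Your React Native app would then call /data on this server instead of the rate-limited API directly.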
EDIT:
A very interesting addition to the caching proxy would be a HEAD request to check whether the result has changed. While not all APIs provide this, it could be useful if yours exposes such info. Only if HEAD requests do not count toward your API limits, of course...
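If the API does expose something like a Last-Modified (or ETag) header, a HEAD check could look roughly like this (a sketch; whether your API returns such headers at all is an assumption):

const https = require("https");

// Ask for a resource's Last-Modified header without downloading the body.
function headLastModified(url) {
  return new Promise((resolve, reject) => {
    const req = https.request(url, { method: "HEAD" }, res => {
      resolve(res.headers["last-modified"] || null);
      res.resume(); // discard any body
    });
    req.on("error", reject);
    req.end();
  });
}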

In an isomorphic Redux app, is it better practice to keep API calls small, or to send over all information in one go?

I am building a sports data visualization application with server-side rendering in React (ES6)/Redux/React-Router-Redux. At the top there is a class-based App component, and there are two different class-based component routes (everything under those is a stateless functional component), structured as follows:
App
|__ Index (/)
|__ Match (/match/:id)
When a request is made for a given route, one API call is dispatched, containing all information for the given route. This is hosted on a different server, where we're using Restify and Sequelize ORM. The JSON object returned is roughly 12,000 to 30,000 lines long and takes anywhere from 500ms to 8500ms to return.
Our application, therefore, takes a long time to load, and I'm thinking that this is the main bottleneck. I have a couple options in mind.
Separate this huge API call into many smaller API calls. Although, since JS is single-threaded, I'd have to measure the speed of the render to find out if this is viable.
Attempt lazy loading by dispatching a new API call when a new tab is clicked (each match has several games, all in new tabs)
Am I on the right track? Or is there a better option? Thanks in advance, and please let me know if you need any more examples!
This depends on many things, including who your target client is. Would mobile devices ever use this, or strictly desktop?
From what you have said so far, I would opt for "lazy loading".
Either way, you generally never want an app to force a user to wait at all, and especially not for over 8 seconds.
You want your page to be sent and to show up with something that works as quickly as possible. This means you don't want to wait until all the data resolves before your UI can be hydrated. (That is what will have to happen if you are truly server-side rendering, because in many situations your client application would be built and delivered at least a few seconds before the data is resolved and sent over the line.)
If you have mobile devices with spotty network connections, they will likely never see this page due to timeouts.
It looks like paginating and lazy loading based on accessing other pages might be a good solution here.
In this situation you may also want to look into persisting the data and caching. This is a pretty big undertaking and might be more complicated than you would want. I know some colleagues who might use libraries to handle most of this stuff for them.
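For what the lazy-loading option might look like in practice, here is a stripped-down sketch (plain fetch and React hooks rather than your Redux setup; the endpoint, props, and state shape are made up for illustration):

import React, { useState } from "react";

// Illustrative only: fetch a game's data the first time its tab is opened,
// instead of loading every game with the initial match request.
function MatchTabs({ matchId, gameIds }) {
  const [activeGame, setActiveGame] = useState(null);
  const [gameData, setGameData] = useState({});

  async function openTab(gameId) {
    setActiveGame(gameId);
    if (!gameData[gameId]) { // only fetch on first open
      const res = await fetch(`/api/match/${matchId}/game/${gameId}`);
      const json = await res.json();
      setGameData(prev => ({ ...prev, [gameId]: json }));
    }
  }

  return (
    <div>
      {gameIds.map(id => (
        <button key={id} onClick={() => openTab(id)}>Game {id}</button>
      ))}
      <pre>{JSON.stringify(gameData[activeGame] || {}, null, 2)}</pre>
    </div>
  );
}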

With ExpressJS or Node, Is there an easy way to read an external image into memory and serve it?

I'm using an external service to create images. I'd like my users to be able to hit my API and ask for the image. Then my Express server would retrieve it from the external service, then serve it to the user. Sort of like a proxy I suppose, but not exactly.
Is there an easy way to do this, preferably one that doesn't involve downloading the image to the hard drive, then reading it back in and serving it?
Using the request library, I was able to come up with this:
var request = require("request");

// Fetch the image from the external URL and pipe the response straight to the client.
exports.relayImage = function (req, res) {
  request(req.params.url).pipe(res);
};
That seems to work. If there is a more efficient way to do this (meaning on server resources, not in terms of lines of code), speak up!
What you are doing is exactly what you should be doing, and it is the most efficient method. Using pipe, the data is sent on as it comes in, requiring no more resources than are needed to buffer and transmit it.
Also be mindful of content type and other response headers that you may want to relay. Finally, realize that you've effectively built an open proxy where anyone can request anything they want through your servers. This is a bit dangerous, so be sure to lock it down in your final application.
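One simple way to lock it down is to only relay URLs whose host is on an allowlist, e.g. (a sketch; the host names are placeholders):

var request = require("request");

// Only relay images from hosts you trust, to avoid running an open proxy.
var ALLOWED_HOSTS = ["images.example.com", "cdn.example.com"];

exports.relayImage = function (req, res) {
  var url;
  try {
    url = new URL(req.params.url);
  } catch (e) {
    return res.status(400).send("Invalid URL");
  }
  if (ALLOWED_HOSTS.indexOf(url.hostname) === -1) {
    return res.status(403).send("Host not allowed");
  }
  request(url.href).pipe(res);
};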
You should be able to use the http module to make a request to the external image service with a callback that returns the image as the response. It won't write to disk unless you explicitly tell it to.
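Roughly like this, using the built-in https module and piping the upstream response straight through (a sketch; the service URL and the :id parameter are placeholders):

const https = require("https");

// Pipe the upstream image response directly to the client; nothing touches the disk.
exports.relayImage = function (req, res) {
  https.get("https://images.example.com/" + req.params.id, upstream => {
    res.set("Content-Type", upstream.headers["content-type"]);
    upstream.pipe(res);
  }).on("error", () => res.sendStatus(502));
};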
