In Node.js, I am logging every API request and retrieving the log through another API request. But I do not want to use MongoDB or any such database. I am currently writing each API request to a file in JSON format. But now API requests arrive much more quickly, and sometimes the file is still busy when the next request comes in.
What would be the best solution in this case?
Actually, I am developing a portable and distributable solution, so a database would defeat that purpose.
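To make the setup concrete, here is a minimal sketch of the kind of per-request JSON file logging described above, assuming an Express app; the file name and logged fields are made up. Because every request triggers its own write to the same file, fast consecutive requests can collide on it.

const express = require('express')
const fs = require('fs')
const path = require('path')

const app = express()
const LOG_FILE = path.join(__dirname, 'requests.log') // hypothetical log file

// Append one JSON line per incoming request.
app.use(function (req, res, next) {
  const entry = JSON.stringify({
    time: new Date().toISOString(),
    method: req.method,
    url: req.originalUrl
  })
  fs.appendFile(LOG_FILE, entry + '\n', function (err) {
    if (err) console.error('failed to write log entry', err)
  })
  next()
})

// The log is read back through another API request.
app.get('/logs', function (req, res) {
  res.sendFile(LOG_FILE)
})

app.listen(3000)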
I am working on a Node web application and need a form that lets users provide a URL pointing to a (potentially 100 MB) large CSV or XML file. Submitting the form triggers the server (Express) to download the file using fetch, process it, and then save it to my Postgres database.
The problem I am having is the size of the file. Responses from the API take minutes to return, and I'm worried this solution is not optimal for a production application. I've also seen that many servers (including cloud-based ones) impose response size limits, which would obviously be exceeded here.
Is there a better way to do this than simply via a fetch request?
Thanks
I have a frontend in Angular and the API is in Node.js. The frontend sends me an encrypted file and right now I am storing it in MongoDB. But when I send this file back to the frontend, the call sometimes breaks. Please suggest how I can solve this issue.
Your question isn't particularly clear. As I understand it, you want to send a large file from the Node backend to the client. If this is correct, then read on.
I had a similar issue whereby a long-running API call took several minutes to compile the data and send the single large response back to the client. The issue I had was a timeout, and I couldn't extend it.
With Node you can use 'streams' to stream the data to the client as it becomes available. This approach worked really well for me: while the server was streaming the data, the client was reading it. This got around the timeout issue because there is frequent 'chatter' between the server and client.
Using Streams did take a bit of time to understand and I spent a while reading various articles and examples. That said, once I understood, it was pretty straightforward to implement.
This article on Node Streams on the freeCodeCamp site is excellent. It contains a really useful example that creates a very large text file, which is then 'piped' to the client using streams. It shows how you can read the text file in 'chunks', apply transformations to those chunks, and send them to the client. What the article doesn't explain is how to read this data in the client...
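To illustrate the server-side half of that idea, here is a minimal sketch of an Express route that streams a large file to the client in chunks; the route and file name are placeholders rather than anything from the article.

const express = require('express')
const fs = require('fs')
const path = require('path')

const app = express()

app.get('/big-data', function (req, res) {
  // Hypothetical large file; in practice this could be any Readable stream.
  const filePath = path.join(__dirname, 'big-file.txt')

  res.setHeader('Content-Type', 'text/plain')

  // pipe() sends each chunk to the response as soon as it is read,
  // so the client starts receiving data immediately instead of waiting
  // for the whole file to be loaded into memory.
  fs.createReadStream(filePath).pipe(res)
})

app.listen(3000)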
For the client side, the approach is different from the typical fetch().then() pattern. I found another article that shows a working example of reading such streams from the back-end. See Using Readable Streams on the Mozilla site. In particular, look at the large code example that uses the 'pump' function. This is exactly what I used to read the data streamed from the back-end.
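And a minimal sketch of that client-side pattern, reading the streamed response chunk by chunk with fetch; the '/big-data' endpoint is a placeholder.

// Read a streamed response chunk by chunk instead of waiting for the whole body.
fetch('/big-data')
  .then(function (response) {
    const reader = response.body.getReader()
    const decoder = new TextDecoder()

    // Recursively pull chunks until the stream is done (the 'pump' idea).
    function pump() {
      return reader.read().then(function (result) {
        if (result.done) {
          console.log('stream finished')
          return
        }
        console.log('received chunk:', decoder.decode(result.value, { stream: true }))
        return pump()
      })
    }

    return pump()
  })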
Sorry I can't give a specific answer to your question, but I hope the links get you started.
I came across these questions while designing the collections for my project.
I read somewhere that it is better to do most of the computation on the client side, the reason being that computation in MongoDB will slow down the response. But I don't really understand where the borderline between client-side and server-side is.
Question
I am writing APIs, so do the js files used to respond with the JSON count as client-side or server-side?
The Ionic app's TypeScript side counts as client-side, right?
The js files used to write the APIs, where will they be stored in production? On the server, in MongoDB?
Which is better for performance? (A sketch of both options follows this list.)
Use a query, e.g. '$gt' or '$lt', to get the filtered data from MongoDB and send the response back to the client.
Get the unfiltered list and filter it, e.g. using lodash, then send the response back to the client.
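Purely for illustration, a sketch of both options using the Node MongoDB driver, with option 2 read as filtering in the Node API code with lodash; the 'readings' collection and 'value' field are made up, and this sketch does not by itself say which option is faster.

const _ = require('lodash')

// Option 1: let MongoDB do the filtering and return only matching documents.
async function filterInMongo(db) {
  return db.collection('readings')
    .find({ value: { $gt: 100 } })
    .toArray()
}

// Option 2: fetch everything and filter in the Node code with lodash,
// which transfers the whole collection from the database first.
async function filterInNode(db) {
  const all = await db.collection('readings').find({}).toArray()
  return _.filter(all, function (doc) { return doc.value > 100 })
}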
Some Info:
I am creating a JSON file from MATLAB which is updated every second with the latest coordinates. To write the file from MATLAB to JSON I am using JsonLab.
My problem:
The JSON file refreshes and gets updated every second. Now I want to show the latest data on the webpage every time, so this requires reloading the JSON each time it is refreshed. I am confused about how to do this. I am using the MEAN stack (MongoDB, Express, AngularJS, Node).
Any help would be appreciated.
Don't consume it with REST then. Implement http://socket.io/ and consume the coordinates as live data over WebSockets.
You can also consider using Firebase (AngularFire) if you want the sockets implementation to be someone else's problem.
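A minimal sketch of the socket.io approach, assuming the coordinates live in a file called coordinates.json next to the server; the file name and event name are placeholders.

const fs = require('fs')
const http = require('http')
const socketIo = require('socket.io')

const server = http.createServer()
const io = socketIo(server)

const FILE = 'coordinates.json' // hypothetical path to the file MATLAB rewrites

// Whenever the file changes, read it and push the new coordinates
// to every connected client instead of waiting for a REST poll.
fs.watch(FILE, function () {
  fs.readFile(FILE, 'utf8', function (err, data) {
    if (err) return
    try {
      io.emit('coordinates', JSON.parse(data))
    } catch (e) {
      // the file may be mid-write; ignore and wait for the next change
    }
  })
})

server.listen(3000)

On the Angular side, the client would subscribe to the same 'coordinates' event with the socket.io client library instead of polling a REST endpoint.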
How do you send files in Node.js/Express?
I am using Rackspace Cloudfiles and want to send images/videos to their remote storage, but I am not sure whether it's as simple as reading the file (fs.readFileSync()) and sending the data in the request body. Is it?
What should the headers be?
What if it's a very large file of a couple of GB?
Is it possible to use superagent (http://visionmedia.github.com/superagent) for this or is there a better library for sending files?
Please give me some information about this.
Thanks!
const path = require('path')
app.get('/img/bg.png', function (req, res) {
  // sendFile needs an absolute path (or a root option)
  res.sendFile(path.join(__dirname, 'public/img/background.png'))
})
https://expressjs.com/en/api.html#res.sendFile
use "res.sendFile". "res.sendfile" is deprecated.
For large files, you will want to use Node.js's concept of piping IO streams together. You want to open the local file for reading, start the HTTP request to Rackspace, and then pipe the data events from the file read stream to the HTTP request.
Here's an article on how to do this.
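A rough sketch of that piping approach using Node's built-in https module; the Cloud Files hostname, container path, and token below are placeholders, so check the Rackspace documentation for the exact endpoint and headers.

const fs = require('fs')
const https = require('https')

// Placeholder values -- the real endpoint, container and token come from
// the Rackspace Cloud Files API / your authentication response.
const options = {
  method: 'PUT',
  hostname: 'storage.example.rackspacecloud.com',
  path: '/v1/MossoCloudFS_xxxx/my-container/video.mp4',
  headers: {
    'X-Auth-Token': 'YOUR_API_TOKEN',
    'Content-Type': 'video/mp4'
  }
}

const req = https.request(options, function (res) {
  console.log('upload finished with status', res.statusCode)
})

// Pipe the file into the request so it is sent chunk by chunk,
// instead of loading a multi-GB file into memory first.
fs.createReadStream('video.mp4').pipe(req)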
Superagent is fine for small files, but because the superagent API presumes your entire request body is loaded into memory before starting the request, it's not the best approach for large file transfers.
Normally you won't need to worry specifically about the request headers as node's HTTP request library will send the appropriate headers for you. Just make sure you use whatever HTTP method your API requires (probably POST), and it looks like for rackspace you will need to add the X-Auth-Token extra header with your API token as well.
I am using Rackspace Cloudfiles and want to send images/videos to their remote storage, but I am not sure whether it's as simple as reading the file (fs.readFileSync()) and sending the data in the request body. Is it?
You should never use fs.readFileSync in general. When you use it, or any other method called somethingSync, you block the entire server for the duration of that call. The only acceptable time to make synchronous calls in a node.js program is during startup.
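For example, the asynchronous equivalents keep the event loop free; the file names here are just for illustration.

const fs = require('fs')

// Blocks the whole process until the file is read -- fine at startup,
// bad while serving requests.
const config = fs.readFileSync('config.json', 'utf8')

// Non-blocking: other requests keep being served while the file is read.
fs.readFile('upload.mp4', function (err, data) {
  if (err) return console.error(err)
  console.log('read', data.length, 'bytes without blocking the event loop')
})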
What should the headers be?
See RackSpace Cloud Files API.
Is it possible to use superagent (http://visionmedia.github.com/superagent) for this or is there a better library for sending files?
While I don't have any experience with superagent, I'm sure it will work fine. Just make sure you read the API documentation and make your requests according to their specification.