I have a file issuesData.json and I want to update it in a POST request. This is my code.
I try to read the file, parse it into an array, push the new issue, and then rewrite the file.
app.post("/api/issues", (req, res, next) => {
  const issueObj = req.body;

  fs.readFile("issuesData.json", (err: Error, data: string | Buffer) => {
    if (err) {
      res.status(500).send(err);
    } else {
      const stringData = data.toString();
      const issueFile = [...JSON.parse(stringData)];
      // push() returns the new length, not the array, so write the
      // mutated array itself back to the file
      issueFile.push(issueObj);

      fs.writeFile(
        "issuesData.json",
        JSON.stringify(issueFile),
        (err: Error) => {
          if (err) {
            res.status(500).send(err);
          } else {
            res.status(200).send("Issue has been updated");
          }
        }
      );
    }
  });
});
1) Is this good practice?
2) This is TypeScript, so what should the types of req, res, and next be?
3) Is this a good way to update the JSON?
If you're just writing to a file, you might not need to read its contents and append your issueObj to the issueFile array. You could instead write the issueObj to a new line in your file; something like the appendFile function would help (https://nodejs.org/api/fs.html#fs_fs_appendfile_path_data_options_callback).
Currently, as your file grows, the read operation will take longer and longer and will affect performance. Just appending ensures you don't incur that overhead on each POST request.
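A minimal sketch of that approach (an assumption on my part: each issue is stored as one JSON object per line, i.e. newline-delimited JSON rather than a single array):

const fs = require("fs");

app.post("/api/issues", (req, res) => {
  const issueObj = req.body;

  // Append one JSON object per line; no read-modify-write cycle needed.
  fs.appendFile("issuesData.json", JSON.stringify(issueObj) + "\n", (err) => {
    if (err) {
      return res.status(500).send(err);
    }
    res.status(200).send("Issue has been added");
  });
});

The file then contains newline-delimited JSON, so any code reading it back would parse it line by line rather than with a single JSON.parse.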
I have an API backend with Node and Express. I am trying to take some filtered data from the frontend, create a CSV file, and download it for the user. I have been using json2csv. I am able to create the data file correctly, but when I use that file in my Express route, the downloaded file just says undefined. At first I thought it was an asynchronous issue, but even after using a setTimeout as a test I still get the undefined data file. Console logging the "csvData" shows the correct data.
Express route to download the file:
app.post('/api/downloads/filtered', (req, res) => {
  let fields = [];
  fields = Object.keys(req.body[0]);
  const filteredData = req.body;
  const json2csvParser = new json2csv({ fields: fields });
  const csvData = json2csvParser.parse(filteredData);
  console.log(csvData);

  fs.writeFile('./report.csv', csvData, (err) => {
    if (err) {
      console.log(err);
    } else {
      console.log('created report.csv');
      res.download('./report.csv');
    }
  });
});
I'm using Vue on the frontend and trigger the download by clicking a button; I'm not sure if that is something I should include.
I ended up figuring out my issue. I found that downloading in a POST request didn't seem to be possible; I needed a GET request. Since the data for the file came in the request body, I kept the POST request to create the file and created a separate GET request to download it. This seemed to work fine, but I didn't find it documented anywhere, so I wasn't sure if a better way exists.
app.post('/api/downloads/filtered', (req, res) => {
  console.log(req.body);
  let fields = [];
  fields = Object.keys(req.body[0]);
  const filteredData = req.body;
  const json2csvParser = new json2csv({ fields: fields });
  const csvData = json2csvParser.parse(filteredData);
  console.log(csvData);

  fs.writeFile('./report.csv', csvData, (err) => {
    if (err) {
      console.log(err);
    } else {
      console.log('created report.csv');
    }
  });
});

app.get('/api/downloads/filtered', (req, res) => {
  setTimeout(() => { res.download('./report.csv'); }, 1000);
});
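For what it's worth, a possible alternative (a sketch, not from the original post): Express can also send the CSV straight back in the POST response without writing report.csv to disk, as long as the client reads the response body (for example into a blob) rather than navigating to a URL:

app.post('/api/downloads/filtered', (req, res) => {
  const fields = Object.keys(req.body[0]);
  const json2csvParser = new json2csv({ fields: fields });
  const csvData = json2csvParser.parse(req.body);

  // res.attachment sets Content-Disposition (and Content-Type from the extension)
  // so the browser treats the response as a downloadable file.
  res.attachment('report.csv');
  res.send(csvData);
});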
I have a front-end React app and a back-end Node/Express app. I want to allow a user to upload a CSV file, then parse the file and instantiate a model for each row. However, I am somewhat confused about how to do this, since I am used to simply posting to a route in the API and persisting the thing from the request body. In this case, the thing from the request body is the file, and I don't want to save the file, just the data inside it. How can I parse the file without saving it to the database? I have tried to use multer to process the upload and csv-parse to parse the contents, but I am not sure this makes sense. Nonetheless, here is the code (app/index):
...
const multer = require('multer');
const upload = multer().single('file'); // multer's .single() expects the upload field name; 'file' is a placeholder
const parse = require('csv-parse');
...

router.post('/distributor/:id/files', (req, res) => {
  upload(req, res, function (err) {
    if (err) {
      console.error('An error occurred when uploading. Please try again. Note ' +
        'that you may only upload one file at a time, and we only support .csv files.');
      return;
    }
    console.log('We have received your file');
  });
});
...
// router.get('/distributor/:id/files/:id', (req, res) => {
//   File
//     .forge({id: req.params.id})
//     .fetch()
//     .then((file) => {
//       if (_.isEmpty(file))
//         return res.sendStatus(404);
//       return parseJson(file)
//     })
//     .then((jsonData) => {
//       for (var i in jsonData) {
//         // save instance of model
//       }
//     })
//     .catch((error) => {
//       console.error(error);
//       return res.sendStatus(500);
//     });
// })

// function parseJson(file) {
//   var output = [];
//   // Create the parser
//   var parser = parse({delimiter: ':'});
//   // Use the writable stream api
//   parser.on('readable', function(){
//     while(record = parser.read()){
//       output.push(record);
//     }
//   });
//   // Catch any error
//   parser.on('error', function(err){
//     console.log(err.message);
//   });
//   parser.end();
// }
I know this doesn't make sense, since I don't actually want to save the file as a model and table in the database; I just want to save each item inside the file, so I know I cannot make a route called '/distributor/:id/files/:id'. But I am lost as to what to do instead. I hope that what I am trying to do is clear! I am fairly new to Node and to programming in general, and I have never come across a situation in which I needed to handle file uploads.
You can use this Node module to parse the CSV file: https://www.npmjs.com/package/csvtojson
For example, suppose you have a file named users in the request object.
const csv = require('csvtojson');

csv()
  .fromString(req.files.users.data.toString('utf8'))
  .on('json', (user) => {
    console.log(user);
  })
  .on('done', () => {
    console.log('done parsing');
  });
You will be able to get every row as a JSON object.
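Note that req.files is not populated by Express itself; the snippet above assumes a file-upload middleware such as express-fileupload has been applied. A minimal sketch of the whole route under that assumption (the users field name is just the example's placeholder, and the same csvtojson event API as above is used):

const express = require('express');
const fileUpload = require('express-fileupload');
const csv = require('csvtojson');

const app = express();
app.use(fileUpload()); // populates req.files from multipart/form-data uploads

app.post('/distributor/:id/files', (req, res) => {
  if (!req.files || !req.files.users) {
    return res.status(400).send('No file uploaded');
  }

  csv()
    .fromString(req.files.users.data.toString('utf8'))
    .on('json', (row) => {
      // persist each parsed row here instead of saving the file itself
      console.log(row);
    })
    .on('done', (err) => {
      if (err) {
        return res.sendStatus(500);
      }
      res.sendStatus(201);
    });
});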
I've been using node-oracledb for a few months and I've managed to achieve what I've needed so far.
I'm currently working on a search app that could potentially return about 2m rows of data from a single call. To ensure I don't get a disconnect between the browser and the server, I thought I would try queryStream so that there is a constant flow of data back to the client.
I implemented the queryStream example as-is, and this worked fine for a few hundred thousand rows. However, when the number of returned rows is greater than one million, Node runs out of memory. By logging and watching both client and server log events, I can see that the client is way behind the server in terms of rows sent and received. So it looks like Node is falling over because it's buffering so much data.
It's worth noting that at this point, my queryStream implementation is within a req/res function called via Express.
To return the data, I do something like this:
stream.on('data', function (data) {
  rowcount++;
  let obj = new myObjectConstructor(data);
  res.write(JSON.stringify(obj.getJson()));
});
I've been reading about how streams and pipe can help with flow, so what I'd like to do is pipe the results from the query to (a) help with flow and (b) pipe the results through other functions before sending them back to the client.
E.g.
function getData(req, res) {
  var stream = myQueryStream(connection, query);

  stream
    .pipe(toSomeOtherFunction)
    .pipe(yetAnotherFunction)
    .pipe(res);
}
I've spent a few hours trying to find a solution or example that allows me to pipe results, but I'm stuck and need some help.
Apologies if I'm missing something obvious, but I'm still getting to grips with Node and especially streams.
Thanks in advance.
There's a bit of an impedance mismatch here. The queryStream API emits rows of JavaScript objects, but what you want to stream to the client is a JSON array. You basically have to add an open bracket to the beginning, a comma between rows, and a close bracket to the end.
I'll show you how to do this in a controller that uses the driver directly as you have done, instead of using separate database modules as I advocate in this series.
const oracledb = require('oracledb');

async function get(req, res, next) {
  try {
    const conn = await oracledb.getConnection();
    const stream = await conn.queryStream('select * from employees', [], {outFormat: oracledb.OBJECT});

    res.writeHead(200, {'Content-Type': 'application/json'});
    res.write('[');

    // Track whether a row has been written so the separator goes between
    // rows rather than after every row (avoids a trailing comma before ']').
    let firstRow = true;

    stream.on('data', (row) => {
      if (!firstRow) {
        res.write(',');
      }
      firstRow = false;
      res.write(JSON.stringify(row));
    });

    stream.on('end', () => {
      res.end(']');
    });

    stream.on('close', async () => {
      try {
        await conn.close();
      } catch (err) {
        console.log(err);
      }
    });

    stream.on('error', async (err) => {
      next(err);

      try {
        await conn.close();
      } catch (err) {
        console.log(err);
      }
    });
  } catch (err) {
    next(err);
  }
}

module.exports.get = get;
Once you get the concepts, you can simplify things a bit with a reusable Transform class which allows you to use pipe in the controller logic:
const oracledb = require('oracledb');
const { Transform } = require('stream');

class ToJSONArray extends Transform {
  constructor() {
    super({objectMode: true});
    this.push('[');
  }

  _transform(row, encoding, callback) {
    if (this._prevRow) {
      this.push(JSON.stringify(this._prevRow));
      this.push(',');
    }

    this._prevRow = row;
    callback(null);
  }

  _flush(done) {
    if (this._prevRow) {
      this.push(JSON.stringify(this._prevRow));
    }

    this.push(']');
    delete this._prevRow;
    done();
  }
}

async function get(req, res, next) {
  try {
    const toJSONArray = new ToJSONArray();
    const conn = await oracledb.getConnection();
    const stream = await conn.queryStream('select * from employees', [], {outFormat: oracledb.OBJECT});

    res.writeHead(200, {'Content-Type': 'application/json'});

    stream.pipe(toJSONArray).pipe(res);

    stream.on('close', async () => {
      try {
        await conn.close();
      } catch (err) {
        console.log(err);
      }
    });

    stream.on('error', async (err) => {
      next(err);

      try {
        await conn.close();
      } catch (err) {
        console.log(err);
      }
    });
  } catch (err) {
    next(err);
  }
}

module.exports.get = get;
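For completeness, a possible way to wire this controller into an Express app (the module path employees.js and the route path are assumptions on my part, not from the original answer):

const express = require('express');
const employees = require('./controllers/employees.js'); // the controller module above

const app = express();

// Each request gets its own connection and query stream.
app.get('/api/employees', employees.get);

app.listen(3000, () => console.log('listening on port 3000'));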
Rather than writing your own logic to create a JSON stream, you can use JSONStream to convert an object stream to (stringified) JSON before piping it to its destination (res, process.stdout, etc.). This saves the need to muck around with .on('data', ...) events.
In the example below, I've used pipeline from Node's stream module rather than the .pipe method: the effect is similar (with better error handling, I think). To get objects from oracledb.queryStream, you can specify the option {outFormat: oracledb.OUT_FORMAT_OBJECT} (docs). Then you can make arbitrary modifications to the stream of objects produced. This can be done using a transform stream, made perhaps using through2-map, or, if you need to drop or split rows, through2. Below, the stream is sent to process.stdout after being stringified as JSON, but you could equally send it to express's res.
require('dotenv').config() // config from .env file

const JSONStream = require('JSONStream')
const oracledb = require('oracledb')
const { pipeline } = require('stream')
const map = require('through2-map') // see https://www.npmjs.com/package/through2-map

oracledb.getConnection({
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  connectString: process.env.CONNECT_STRING
}).then(connection => {
  pipeline(
    connection.queryStream(
      `select dual.*, 'test' as col1 from dual
       union select dual.*, :someboundvalue as col1 from dual`,
      { "someboundvalue": "test5" }, // binds
      {
        prefetchRows: 150,   // for tuning
        fetchArraySize: 150, // for tuning
        outFormat: oracledb.OUT_FORMAT_OBJECT
      }
    ),
    map.obj((row, index) => {
      row.arbitraryModification = index
      return row
    }),
    JSONStream.stringify(), // false gives ndjson
    process.stdout,         // or send to express's res
    (err) => { if (err) console.error(err) }
  )
})
// [
// {"DUMMY":"X","COL1":"test","arbitraryModification":0}
// ,
// {"DUMMY":"X","COL1":"test5","arbitraryModification":1}
// ]
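A sketch of the same idea inside an Express handler (the route path, app setup, and connection handling here are assumptions, not part of the original example):

const express = require('express')
const JSONStream = require('JSONStream')
const oracledb = require('oracledb')
const { pipeline } = require('stream')

const app = express()

app.get('/api/rows', async (req, res, next) => {
  try {
    const connection = await oracledb.getConnection()
    res.setHeader('Content-Type', 'application/json')

    pipeline(
      connection.queryStream('select * from dual', [], { outFormat: oracledb.OUT_FORMAT_OBJECT }),
      JSONStream.stringify(),
      res,
      (err) => {
        if (err) next(err)
        connection.close().catch(console.error) // release the connection either way
      }
    )
  } catch (err) {
    next(err)
  }
})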
If I query the Box REST API and get back a readable stream, what is the best way to handle it? How do you send it to the browser? (Disclaimer: I'm new to streams and buffers, so some of this code is pretty theoretical.)
Can you pass the read stream in the response and let the browser handle it? Or do you have to stream the chunks into a buffer and then send the buffer?
export function getFileStream(req, res) {
  const fileId = req.params.fileId;
  console.log('fileId', fileId);

  req.sdk.files.getReadStream(fileId, null, (err, stream) => {
    if (err) {
      console.log('error', err);
      return res.status(500).send(err);
    }

    res.type('application/octet-stream');
    console.log('stream', stream);
    return res.status(200).send(stream);
  });
}
Will ^^ work, or do you need to do something like:
export function downloadFile(req, res) {
  const fileId = req.params.fileId;
  console.log('fileId', fileId);

  req.sdk.files.getReadStream(fileId, null, (err, stream) => {
    if (err) {
      console.log('error', err);
      return res.status(500).send(err);
    }

    const buffers = [];
    console.log('stream', stream);

    // collect each chunk, then concatenate and send the whole buffer
    stream.on('data', (chunk) => {
      buffers.push(chunk);
    })
    .on('end', function () {
      const finalBuffer = Buffer.concat(buffers);
      return res.status(200).send(finalBuffer);
    });
  });
}
The first example would work if you changed your theoretical line to:
- return res.status(200).send(stream);
+ res.writeHead(200, {'Content-Type': 'application/octet-stream'});
+ stream.pipe(res);
That's the nicest thing about Node streams. The other case would (in essence) work too, but it would accumulate a lot of unnecessary memory.
If you'd like to check a working example, here's one I wrote based on scramjet, express and browserify:
https://github.com/MichalCz/scramjet/blob/master/samples/browser/browser.js
where your streams go from the server to the browser. With minor modifications it'll fit your problem.
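Putting the suggested change into the first handler from the question, a minimal sketch (using the same req.sdk.files.getReadStream API the question shows) might look like:

export function getFileStream(req, res) {
  const fileId = req.params.fileId;

  req.sdk.files.getReadStream(fileId, null, (err, stream) => {
    if (err) {
      console.log('error', err);
      return res.status(500).send(err);
    }

    // Let Node handle backpressure by piping the Box read stream
    // straight into the response instead of buffering it in memory.
    res.writeHead(200, { 'Content-Type': 'application/octet-stream' });
    stream.pipe(res);
  });
}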
Update: I clarified and edited the code to reflect what I really want: a streaming response, i.e. sending matched results back as they arrive from their own async matching process.
Consider (using Express-ish code):
app.post('/', jsonParser, function (req, res) {
  if (!req.body) return res.sendStatus(400)

  // matches is not needed for this scenario so
  // commenting it out
  // var matches = [];

  req.body.forEach(function (element, index) {
    foo.match(
      element,
      function callback(error, result) {
        if (error) {
          console.log(error); // on error
        }
        else {
⇒         if (some condition) {
            // matches.push(result);
⇒           res.send(result);
          }
        }
      }
    );
  });

  // moved this above, inside the callback
  // ⇒ res.send(matches);
});
The input to post('/') is an array of terms. Each term is matched using foo, which invokes a callback for every call. I want to send back all the matches that satisfy "some condition" (see ⇒ in the code above). Ideally, I would send a streaming response, i.e. send matches back as they occur (because foo.match() might take a while for each term). How do I go about this?
Does something like this work for you? I've used the stream-array module. This might also be helpful: How to emit/pipe array values as a readable stream in node.js?
var streamify = require('stream-array');

app.post('/', jsonParser, function (req, res) {
  if (!req.body) {
    return res.sendStatus(400);
  }

  var matches = [];

  req.body.forEach(function (element, index) {
    foo.match(
      element,
      function callback(error, result) {
        if (error) {
          console.log(error); // on error
        } else {
          if (some condition) {
            streamify([result]).pipe(res);
          }
        }
      }
    );
  });

  // res.json(req.body);
});
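One caveat with that sketch: pipe ends the destination by default, so the first match that pipes into res will close the response before later matches arrive. A variation that keeps the response open until every term has been processed might look like this (a sketch; someCondition is a hypothetical stand-in for the "some condition" check from the question, and { end: false } is a standard option of readable.pipe):

var streamify = require('stream-array');

app.post('/', jsonParser, function (req, res) {
  if (!req.body) {
    return res.sendStatus(400);
  }

  var pending = req.body.length;

  // End the response only after every term has been processed and any
  // matching result has been flushed to res.
  function finishOne() {
    if (--pending === 0) {
      res.end();
    }
  }

  req.body.forEach(function (element) {
    foo.match(element, function callback(error, result) {
      if (!error && someCondition(result)) {
        var source = streamify([result]);
        source.pipe(res, { end: false }); // keep res open for later matches
        source.on('end', finishOne);      // count this term once its data is written
      } else {
        if (error) {
          console.log(error);
        }
        finishOne();
      }
    });
  });
});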