I've got multiple json files contained within a directory that will dynamically be updated by users. The users can add categories which will create new json files in that directory, and they can also remove categories which would delete json files in that directory. I'm looking for a method to read all json files contained in that folder directory, and push all the json files into a single object array. I imagine asynchronously would be desirable too.
I'm very new to using fs. I've managed to read a single json file from the directory using
const fs = require('fs');
let data = fs.readFileSync('./sw_lbi/categories/category1.json');
let categories = JSON.parse(data);
console.log(categories);
But of course this is synchronous, and it only handles a single, known file, much like using require()
As I'll have no idea what json files will be contained in the directory because the users will also name them, I'll need a way to read all the json files by simply calling the folder directory which contains them.
I'm imagining something like this (which obviously is foolish)
const fs = require('fs');
let data = fs.readFileSync('./sw_lbi/categories');
let categories = JSON.parse(data);
console.log(categories);
What would be the best approach to achieve this?
Thanks in advance.
First of all you need to scan the directory for files, then filter them to select only the JSONs, and finally read each file and do whatever you need with it:
const fs = require('fs');
const path = require('path');

const dir = './sw_lbi/categories';
const jsonsInDir = fs.readdirSync(dir).filter(file => path.extname(file) === '.json');

const categories = [];
jsonsInDir.forEach(file => {
  const fileData = fs.readFileSync(path.join(dir, file));
  categories.push(JSON.parse(fileData.toString()));
});
Related
I'm getting a ZIP archive from S3 using the aws s3 node SDK.
In this zip file there is a single .json file where I want to get the contents from. I don't want to save this file to storage, but only get the contents of this zip file.
Example:
File.zip contains a single file:
file.json with contents({"value":"abcd"})
I currently have:
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const s3Client = new S3Client({ region: 'eu-central-1'});
const file = await s3Client.send(new GetObjectCommand({Bucket:'MyBucket', Key:'file.zip'}));
file.Body now contains a Readable stream with the contents of the zip file. I now want to turn this Readable stream into {"value":"abcd"}
Is there a library or piece of code that can help me do this and produce the result without having to save the file to disk?
You could use zlib, which is built into Node.js (note that zlib handles gzip/deflate data; a true multi-entry .zip archive needs a package such as unzipper or adm-zip)
a snippet from part of my project looks like this:
import { unzipSync } from 'zlib';

// Fetch the data and get it as a Buffer
const res = await fetch('url');
const zipBuffer = Buffer.from(await res.arrayBuffer());

// Unzip the data (unzipSync is synchronous, no await needed) and convert to utf8
const unzippedBuffer = unzipSync(zipBuffer);
const fileData = unzippedBuffer.toString('utf8');
Now fileData is the content of your zipped file as a string; you can use JSON.parse(fileData) to get the content as JSON and work with it
I have a couple of images in the images/source directory, which is at root level. I want to read all the images in the folder and get their paths into something like the below:
var images = ['images/source/first.jpg', 'images/source/second.png', 'images/source/third.png'];
I am trying to read the directory like below, but it's not working!
const imgPath= fs.readdirSync(path.join(__dirname, '/images/source')).sort();
The above code reads all the images, but I need all the images with their paths in an array.
Read this thread; it may be helpful.
How do you get a list of the names of all files present in a directory in Node.js?
const testFolder = './tests/';
const fs = require('fs');
fs.readdir(testFolder, (err, files) => {
files.forEach(file => {
console.log(file);
});
});
I've set up a small node js BE app, built with express and fastCsv module on top of it. The desired outcome would be to be able to download a csv file to the client side, without storing it anywhere inside the server, since the data is generated depending on user criteria.
So far I've gotten somewhere with it. I'm using streams, since that csv file could be pretty large depending on the user's selection. I'm pretty sure something is missing in the code below:
const fs = require('fs');
const fastCsv = require('fast-csv');
.....
(inside api request)
.....
router.get('/', async(req, res) => {
const gatheredData ...
const filename = 'sometest.csv'
res.writeHead(200, {
'Content-Type': 'text/csv',
'Content-Disposition': 'attachment; filename=' + filename
})
const csvDataStream = fastCsv.write(gatheredData, { headers: true }).pipe(res)
})
The above code 'works' in a way: it does deliver a response, but not as an actual file download; instead the contents of the csv come back as the response body, which I can view in the preview tab. To sum up, I'm trying to stream the data into a csv and push it to the client as a downloaded file, without storing it on the server. Any tips or pointers are very much appreciated.
Here's what worked for me after creating a CSV file on the server using the fast-csv package. You need to specify the full, absolute path of the directory where the output CSV file was created:
const csv = require("fast-csv");
const csvDir = "abs/path/to/csv/dir";
const filename = "my-data.csv";
const csvOutput = `${csvDir}/${filename}`;
console.log(`csvOutput: ${csvOutput}`); // full path
/*
CREATE YOUR csvOutput FILE USING 'fast-csv' HERE
*/
res.type("text/csv");
res.header("Content-Disposition", `attachment; filename="${filename}"`);
res.header("Content-Type", "text/csv");
res.sendFile(filename, { root: csvDir });
You need to make sure to change the response content-type and headers to "text/csv", and try enclosing the filename=... part in double-quotes, like in the above example.
I have a paginated request that gives me a list of objects, which I later concat to get the full list of objects.
If I attempt to JSON.stringify this, it fails for large objects with a range error. I was looking for a way to use zlib.gzip to handle large JSON objects.
Try installing stream-json; it should solve your problem. It's a great wrapper around streams for parsing JSON.
// require the stream-json module
const StreamArray = require('stream-json/utils/StreamArray');
// require fs if you're reading from a file
const fs = require('fs');
const zlib = require('zlib');
// Create an instance of StreamArray
const streamArray = StreamArray.make();
fs.createReadStream('./YOUR_FILE.json.gz')
.pipe(zlib.createUnzip()) // Unzip
.pipe(streamArray.input); //Read the stream
//here you can do whatever you want with the stream,
//you can stream it to response.
streamArray.output.pipe(process.stdout);
In the example I'm using a JSON file, but you can use a collection and pass it to the stream.
Hope that helps.
I want to have persistent memory (store the user's progress) in a .json file in %AppData%. I tried doing this according to this post, but it doesn't work. For testing purposes I'm only working with storing one object.
The code below doesn't work at all. If I use fs.open(filePath, "w", function(err, data) { ... instead of readFile(..., it does create a json file in %AppData%, but then it doesn't write anything to it; the file is always 0 bytes.
var nw = require('nw.gui');
var fs = require('fs');
var path = require('path');
var file = "userdata.json";
var filePath = path.join(nw.App.dataPath, file);
console.log(filePath); // <- This shows correct path in Application Data.
fs.readFile(filePath ,function (err, data) {
var idVar = "1";
var json = JSON.parse(data);
json.push("id :" + idVar);
fs.writeFile(filePath, JSON.stringify(json));
});
If anyone has any idea where I'm messing this up, I'd be grateful.
EDIT:
Solved, thanks to kailniris.
I was simply trying to parse an empty file
There is no json in the file you try to read. Before parsing the data, check whether the file is empty: if it is, create an empty json, push the new data into it, and write it to the file; otherwise, parse the json already in the file.