How can I read json values from a file? - node.js

So basically I have these json values in my config.json file, but how can I read them from a .txt file, for example:
{"prefix": $}
This would set a variable configPrefix to $. Any help?

You can use require() to read and parse your JSON file in one step:
let configPrefix = require("./config.json").prefix;
Or, if you wanted to get multiple values from that config:
const configData = require("./config.json");
let configPrefix = configData.prefix;
If your data is not actually JSON formatted, then you have to read the file yourself with something like fs.readFile() or fs.readFileSync() and then parse it yourself according to whatever formatting rules you have for the file.
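For completeness, a minimal sketch of the manual route with fs, assuming the file really does contain JSON:
const fs = require('fs');

// read the file as text, then parse it yourself
const raw = fs.readFileSync('./config.json', 'utf8');
const configData = JSON.parse(raw);
let configPrefix = configData.prefix;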

If you are going to be reading this file just at the start of the program, then go ahead and use require, or import if you have Babel. Just a tip: surround the require with a try/catch block to handle possible errors.
let config
try {
  config = require('./path/to/file.json')
} catch (error) {
  // handle error
  config = {}
}
If you will be changing this file externally and you feel the need to re-read it, then apart from reading it at the start you will need a function that uses fs.readFile. Consider doing it like this, and not with fs.readFileSync, unless you need to block the program until you are done reading the config file.
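A minimal sketch of such a function, assuming the file stays JSON-formatted (the callback shape is just one way to expose the result):
const fs = require('fs')

// re-read the config at run time; errors from reading or parsing are passed to the callback
function reloadConfig(callback) {
  fs.readFile('./config.json', 'utf8', (err, contents) => {
    if (err) return callback(err)
    try {
      callback(null, JSON.parse(contents))
    } catch (parseError) {
      callback(parseError)
    }
  })
}

reloadConfig((err, config) => {
  if (err) return console.error(err)
  console.log(config.prefix) // '$'
})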
After all of that you can do const configPrefix = config.prefix which will have the value '$'.

Related

How do I read csv file line by line, modify each line, write result to another file

I recently used the event-stream library for Node.js to parse a huge CSV file, saving the results to a database.
How do I solve the task of not just reading a file, but modifying each line and writing the result to a new file?
Is it some combination of the through and map methods, or duplex? Any help is highly appreciated.
If you use event-stream for reading, you can use the split() method to process the CSV line by line, then modify each line and write it to a new writable stream.
var fs = require('fs');
var es = require('event-stream');

const newCsv = fs.createWriteStream('new.csv');

fs.createReadStream('old.csv')
  .pipe(es.split())                      // split the incoming data into lines
  .pipe(es.mapSync(function(line) {
    // modify line any way you want
    newCsv.write(line + '\n');           // split() strips the newline, so add it back
    return line;
  }))
  .on('end', function() {
    newCsv.end();                        // close the output file only after all lines are processed
  });

Unable to use variables in fs functions when using brfs

I use browserify in order to be able to use require. To use fs functions with browserify I need to transform it with brfs, but as far as I understand this means I can only pass static strings as parameters to my fs functions. I want to be able to use variables for this.
I want to search for XML files in a specific directory and read them, either by searching via a text field or by showing all of their data at once. In order to do this I need fs, and browserify in order to require it.
const FS = require('fs')

function lookForRoom() {
  let files = getFileNames()
  findSearchedRoom(files)
}

function getFileNames() {
  return FS.readdirSync('../data/')
}

function findSearchedRoom(files) {
  const SEARCH_FIELD_ID = 'room'
  let searchText = document.getElementById(SEARCH_FIELD_ID).value
  files.forEach((file) => {
    const SEARCHTEXT_FOUND = file.includes(searchText.toLowerCase())
    if (SEARCHTEXT_FOUND) loadXML(file)
  })
}

function loadXML(file) {
  const XML2JS = require('xml2js')
  let parser = new XML2JS.Parser()
  FS.readFile('../data/' + file, (err, data) => {
    if (err) throw err
    parser.parseString(data, (err, result) => console.dir(result))
  })
}

module.exports = { lookForRoom: lookForRoom }
I want to be able to read the contents of a directory containing XML files.
Currently I can only do so when I provide a constant string to the fs function.
The brfs README contains this gotcha:
Since brfs evaluates your source code statically, you can't use dynamic expressions that need to be evaluated at run time.
So, basically, you can't use brfs in the way you were hoping.
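To make the limitation concrete, here is a minimal sketch (the file names are hypothetical): brfs can inline a readFileSync call whose path is a static expression, but not one built from a run-time value.
var fs = require('fs')

// works: the path is a static expression, so brfs can inline the file contents at build time
var roomA = fs.readFileSync(__dirname + '/data/roomA.xml', 'utf8')

// does not work: the path depends on a run-time value, which brfs cannot evaluate statically
var name = document.getElementById('room').value
var room = fs.readFileSync(__dirname + '/data/' + name + '.xml', 'utf8')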
I want to be able to read contents out of a directory containing xml files
If by "a directory" you mean "any random directory, the name of which is determined by some form input", then that's not going to work. Browsers don't have direct access to directory contents, either locally or on a server.
You're not saying where that directory exists. If it's local (on the machine the browser is running on), I don't think there are standardized APIs to do that at all.
If it's on the server, then you need to implement an HTTP server that will accept a directory-/filename from some clientside code, and retrieve the file contents that way.
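If you go that server route, a minimal sketch with Express could look like the following (the ./data directory and the route paths are made up for illustration):
const express = require('express')
const fs = require('fs')
const path = require('path')

const app = express()
const DATA_DIR = path.join(__dirname, 'data')

// list the XML files the client is allowed to ask for
app.get('/rooms', (req, res) => {
  const files = fs.readdirSync(DATA_DIR).filter(f => f.endsWith('.xml'))
  res.json(files)
})

// return the contents of one file, stripping any directory parts from the name
app.get('/rooms/:file', (req, res) => {
  const file = path.basename(req.params.file)
  res.sendFile(path.join(DATA_DIR, file))
})

app.listen(3000)
The client-side code can then fetch the list from /rooms and the contents from /rooms/<name>.xml instead of touching the filesystem directly.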

gunzip partials read from read-stream

I use Node.js to fetch files from my S3 bucket.
The files there are gzipped (.gz).
I know that the contents of each file are composed of lines, where each line is the JSON of some record that failed to be put on Kinesis.
Each file consists of ~12K such records, and I would like to be able to process the records while the file is being downloaded.
If the file were not gzipped, that could easily be done using streams and the readline module.
So the only thing stopping me from doing this is the gunzip process, which, to my knowledge, needs to be executed on the whole file.
Is there any way of gunzipping a partial file?
Thanks.
EDIT 1: (bad example)
Trying what @Mark Adler suggested:
const fileStream = s3.getObject(params).createReadStream();
const lineReader = readline.createInterface({input: fileStream});
lineReader.on('line', line => {
  const gunzipped = zlib.gunzipSync(line);
  console.log(gunzipped);
});
I get the following error:
Error: incorrect header check
at Zlib._handle.onerror (zlib.js:363:17)
Yes. Node.js has a complete interface to zlib, which allows you to decompress as much of a gzip file at a time as you like.
A working example that solves the above problem
The following solves the problem in the above code:
const zlib = require('zlib');
const readline = require('readline');

// pipe the S3 read stream through a streaming gunzip, then read the result line by line
const fileStream = s3.getObject(params).createReadStream().pipe(zlib.createGunzip());
const lineReader = readline.createInterface({input: fileStream});
lineReader.on('line', gunzippedLine => {
  console.log(gunzippedLine);
});

Node.js import csv with blank fields

I'm trying to import & parse a CSV file using the csv-parse package, but I'm having difficulty requiring the CSV file in the first place.
When I do input = require('../../path-to-my-csv-file')
I get an error due to consecutive commas because some fields are blank:
e","17110","CTSZ16","Slitzerâ„¢ 16pc Cutlery Set in Wood Block",,"Spice up
^
SyntaxError: Unexpected token ,
How do I import the CSV file into the node environment to begin with?
Package examples are here.
To solve your first problem, reading CSV with empty entries:
Use the 'fast-csv' node package. It will parse CSV with empty entries.
To answer your second question, how to import a CSV into node:
You don't really "import" csv files into node. You should fs.open the file
or use fs.createReadStream to read the csv file at the appropriate location.
Below is a script that uses fs.createReadStream to parse a CSV called 'test.csv' that is one directory up from the script that is running it.
The first section sets up our program and makes basic declarations of the objects we're going to use to store our parsed list.
var csv = require('fast-csv') // require fast-csv module
var fs = require('fs') // require the fs, filesystem module
var uniqueindex = 0 // just an index for our array
var dataJSON = {} // our JSON object, (make it an array if you wish)
This next section declares a stream that will intercept data as it's read from our CSV file and do stuff to it. In this case we're intercepting the data and storing it in a JSON object and then saving that JSON object once the stream is done. It's basically a filter that intercepts data and can do what it wants with it.
var csvStream = csv()                     // - uses the fast-csv module to create a csv parser
  .on('data', function(data){             // - when we get data perform function(data)
    dataJSON[uniqueindex] = data;         // - store our data in a JSON object dataJSON
    uniqueindex++                         // - the index of the data item in our array
  })
  .on('end', function(){                  // - when the data stream ends perform function()
    console.log(dataJSON)                 // - log our whole object on console
    fs.writeFile('../test.json',          // - use fs module to write a file
      JSON.stringify(dataJSON, null, 4),  // - turn our JSON object into a string that can be written
      function(err){                      // - runs once we're done saving the file; err will be null if there is no error
        if (err) throw err                // - if there's an error while saving the file, throw it
        console.log('data saved as JSON yay!')
      })
  })
This section creates what is called a "readStream" from our csv file. The path to the file is relative. A stream is just a way of reading a file. It's pretty powerful though because the data from a stream can be piped into another stream.
So we'll create a stream that reads the data from our CSV file, and then we'll pipe it into our pre-defined stream / filter from section 2.
var stream = fs.createReadStream('../test.csv')
stream.pipe(csvStream)
This will create a file called 'test.json' one directory up from the place where our csv parsing script is. test.json will contain the parsed CSV list inside a JSON object. The order in which the code appears here is how it should appear in a script you make.

Can Gulp overwrite all src files?

Let's say I want to replace the version number in a bunch of files, many of which live in subdirectories. I will pipe the files through gulp-replace to run the regex replacement, but I will ultimately want to overwrite all the original files.
The task might look something like this:
gulp.src([
    './bower.json',
    './package.json',
    './docs/content/data.yml',
    /* ...and so on... */
  ])
  .pipe(replace(/* ...replacement... */))
  .pipe(gulp.dest(/* I DONT KNOW */));
So how can I end it so that each src file just overwrites itself, at its original location? Is there something I can pass to gulp.dest() that will do this?
I can think of two solutions:
Add an option for base to your gulp.src like so:
gulp.src([...files...], {base: './'}).pipe(...)...
This will tell gulp to preserve the entire relative path. Then pass './' into gulp.dest() to overwrite the original files. (Note: this is untested, you should make sure you have a backup in case it doesn't work.)
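Put together, a sketch of that first option might look like this (the file list and the replacement strings are placeholders, and gulp-replace is assumed to be installed):
var gulp = require('gulp');
var replace = require('gulp-replace');

gulp.task('bump-version', function () {
  return gulp.src([
      './bower.json',
      './package.json',
      './docs/content/data.yml'
    ], {base: './'})                  // preserve each file's path relative to the project root
    .pipe(replace('0.0.1', '0.0.2'))  // placeholder replacement
    .pipe(gulp.dest('./'));           // write each file back over its original location
});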
Use functions. Gulp's just JavaScript, so you can do this:
[...files...].forEach(function(file) {
  var path = require('path');
  gulp.src(file).pipe(rename(...)).pipe(gulp.dest(path.dirname(file)));
});
If you need to run these asynchronously, the first option will be much easier, since with the second you'll need to use something like event-stream's merge and map the streams into an array. It would look like this:
var es = require('event-stream');
...
var streams = [...files...].map(function(file) {
  // the same function from above, with a return
  return gulp.src(file) ...
});
return es.merge.apply(es, streams);
Tell gulp to write to the base directory of the file in question, just like so:
.pipe(
  gulp.dest(function(data){
    console.log("Writing to directory: " + data.base);
    return data.base;
  })
)
(The data argument is a vinyl file object)
The advantage of this approach is that if you have files from multiple sources, each nested at different levels of the file structure, this approach allows you to overwrite each file correctly. (As opposed to setting one base directory upstream in your pipe chain.)
If you are using gulp-rename, here's another workaround:
var rename = require('gulp-rename');
...
function copyFile(source, target){
  gulp.src(source)
    .pipe(rename(target))
    .pipe(gulp.dest("./"));
}
copyFile("src/js/app.js","dist/js/app.js");
And if you want source and target to be absolute paths:
var rename = require('gulp-rename');
...
function copyFile(source, target){
  gulp.src(source.replace(__dirname, "."))
    .pipe(rename(target.replace(__dirname, ".")))
    .pipe(gulp.dest("./"));
}
copyFile("/Users/me/Documents/Sites/app/src/js/app.js","/Users/me/Documents/Sites/app/dist/js/app.js");
I am not sure why people complicate it; just starting your destination path with "./" does the job.
Say the path is 'dist/css'. Then you would use it like this:
.pipe(gulp.dest("./dist/css"));
That's it. I use this approach in every one of my projects.
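For example, a complete task using that destination could be sketched as follows (the source glob and the gulp-clean-css plugin are just placeholders):
var gulp = require('gulp');
var cleanCSS = require('gulp-clean-css');

gulp.task('css', function () {
  return gulp.src('./src/css/**/*.css')
    .pipe(cleanCSS())                 // placeholder transform
    .pipe(gulp.dest('./dist/css'));   // destination path starting with "./"
});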
