How can I create a txt file that holds the contents of an array in JavaScript? - node.js

I have several arrays that contain data that I would like to export, each array to a txt file, in order to be analyzed using MATLAB.
Let's say my array is:
var xPosition = [];
// some algorithm that adds content to xPosition
// TODO: export array into a txt file let's call it x_n.txt
Ideally, each array element would be stored on its own line.

I found a guide to the solution in this post. The following code is what I ended up using:
var fs = require('fs');
var xPosition = [];
// some algorithm that adds content to xPosition
var file = fs.createWriteStream('./positions/x_n.txt');
file.on('error', function(err) { /* error handling */ });
xPosition.forEach(function(v) { file.write(v + '\n'); });
file.end();

The solution you found works, but here's how I'd have done it:
var fs = require('fs');
var xPosition = [1,2,3]; // Generate this
var fileName = './positions/x_n.txt';
fs.writeFileSync(fileName, xPosition.join('\n'));
This uses Node's synchronous file-writing capability, which is ideal for your purposes. You don't have to open or close file handles, etc. I'd use streams only if I had gigabytes of data to write out.
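If you'd rather not block the event loop, a minimal async sketch (assuming Node's built-in fs.promises API) does the same thing:

const fs = require('fs').promises;

async function savePositions(xPosition) {
    // one array element per line, as in the question
    await fs.writeFile('./positions/x_n.txt', xPosition.join('\n'));
}

savePositions([1, 2, 3]);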

Related

How to delete lines of text from file with createWriteStream with Node.js?

I'm trying to update a huge text document by deleting text that is dynamically received from an array. I cannot use readFileSync because the file is way too large, so I have to stream it. The problem I'm encountering is that the function deletes everything instead of only deleting what's in the array. Perhaps I'm not understanding how to properly delete something from a stream. How can this be done?
largeFile_example.txt
test_domain_1
test_domain_2
test_domain_3
test_domain_4
test_domain_5
test_domain_6
test_domain_7
test_domain_8
test_domain_9
test_domain_10
stream.js
const es = require('event-stream');
const fs = require('fs');
// array of domains to delete
var domains = ['test_domain_2', 'test_domain_6', 'test_domain_8'];
// loop
domains.forEach(function (domain) {
    // domain to delete
    var dom_to_delete = domain;
    // stream
    var s = fs
        .createReadStream('largeFile_example.txt')
        .pipe(es.split())
        .pipe(
            es
                .mapSync(function (line) {
                    // check if found in text
                    if (line === dom_to_delete) {
                        // delete
                        var newValue = dom_to_delete.replace(line, '');
                        fs.createWriteStream('largeFile_example.txt', newValue, 'utf-8');
                    }
                })
                .on('error', function (err) {
                    console.log('Error while reading file.', err);
                })
                .on('end', function () {
                    // ...do something
                })
        );
})
You can simply use the readline interface with streams and read the file line by line. When you encounter a domain from the array, just don't add it.
You can use for-of with async/await:
const fs = require('fs');
const readline = require('readline');

// array of domains to delete
const domains = ['test_domain_2', 'test_domain_6', 'test_domain_8'];

async function processLine() {
    const fileStream = fs.createReadStream('yourfile');
    const rl = readline.createInterface({
        input: fileStream,
        crlfDelay: Infinity
    });
    // Note: crlfDelay recognizes all instances of CR LF
    // ('\r\n') in the file as a single line break.
    for await (const line of rl) {
        // each line arrives here as a domain; append it to the
        // output file only if it isn't in the deletion list
        if (!domains.includes(line)) {
            fs.appendFileSync('newfile', line + '\n');
        }
    }
}
processLine();
To delete the domains from the existing file, you need to follow these steps:
Read the file as a stream.
Replace the text you don't want with '' using a regex or the replace method.
Add the updated content to a temp file or a new file.
There is no way to read from one point and update the same line in place; at least, I am not aware of such a technique in Node.js (I'd be happy to learn of one). That's why you need to write to a new file and, once it's updated, remove the old one; see the sketch below.
It would help to know more about why you want to do it this way. If your file were not large you could do this in memory, but your case is different.
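A minimal sketch of those steps (the file names and deletion list are illustrative, reusing the readline approach from above plus a rename at the end):

const fs = require('fs');
const readline = require('readline');

const domains = ['test_domain_2', 'test_domain_6', 'test_domain_8'];

async function rewriteFile() {
    const rl = readline.createInterface({
        input: fs.createReadStream('largeFile_example.txt'),
        crlfDelay: Infinity
    });
    const tmp = fs.createWriteStream('largeFile_example.txt.tmp');
    for await (const line of rl) {
        // copy every line except the ones marked for deletion
        if (!domains.includes(line)) {
            tmp.write(line + '\n');
        }
    }
    // replace the original file once the temp file is fully written
    tmp.end(() => fs.renameSync('largeFile_example.txt.tmp', 'largeFile_example.txt'));
}
rewriteFile();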

Access variable from another file

I have ex1.js and ex2.js, and in ex2.js I want to get a variable that is defined in ex1.js.
I can read the file, but I want to get exactly that variable's value.
var fs = require('fs');
var readMe = fs.readFileSync('path', 'utf8');
console.log(readMe);
As long as they are both JavaScript files, this is how it is generally done.
// in the js2 file
exports.var_js2 = 15;
// in the js1 file
var var_js2 = require('./js2').var_js2;
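An equivalent using module.exports, in case you prefer exporting an object (the file names follow the question; the value is illustrative):

// ex1.js
var value = 15;
module.exports = { value: value };

// ex2.js
var value = require('./ex1').value;
console.log(value); // 15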

gulp - wrap plugin (which uses through2) output with string

I would like to know how exactly I can manipulate the output of my Gulp plugin so that, no matter how many files are passed to the plugin, it will wrap the output with a string. Currently I cannot tell when the last file is done.
The super simplified example below will iterate over 3 files and create a new file named output.js, in which the string xxx appears three times (xxxxxxxxx).
I would like the plugin itself to wrap the contents so the output will be: +xxxxxxxxx+.
How can I do this?
Thanks!
Gulpfile
var gulp = require('gulp');
var concat = require('gulp-concat');
var foo = require('./index');

gulp.task('default', function() {
    gulp.src(['a.html', 'b.html', 'c.html'])
        .pipe(foo())
        .pipe(concat('output.js'))
        .pipe(gulp.dest('./test/output'));
});
The most basic gulp plugin (index.js):
var through2 = require('through2'),
    gutil = require('gulp-util');

var PLUGIN_NAME = 'foo';

module.exports = function (options) {
    // through2.obj(fn) is a convenience wrapper around
    // through2({ objectMode: true }, fn)
    return through2.obj(function (file, enc, callback) {
        file.contents = new Buffer('xxx');
        this.push(file);
        callback();
    });
}
I understand the files are currently simply returned modified, but what I don't understand is how to append text and return the concatenated result that I want, while staying within Gulp's conventions.
The "real" plugin should actually wrap the files results with:
var foo = { FILES_CONTENT }
where FILES_CONTENT will actually be a concatenated string of all the files:
"file_name" : "file_content",
"file_name" : "file_content",
...
I would make the following changes to your gulpfile.js:
var gulp = require('gulp');
var foo = require('./index.js');

gulp.task('default', function() {
    return gulp.src(['a.html', 'b.html', 'c.html'])
        .pipe(foo({ fileName: 'output.js', varName: 'bar' }))
        .pipe(gulp.dest('./test/output'));
});
Since your foo() plugin itself will concatenate all the files, there's no need to use gulp-concat at all. Instead your plugin should accept an option fileName that provides the name of the generated file. I've also added another option varName that will provide the name of the var in the output file.
I'll assume that a.html, b.html and c.html are simple HTML files, something like this:
<h1 class="header">a</h1>
As you've already realized, you need to concat all the files in the plugin itself. That's not really difficult, however, and doesn't require a lot of code. Here's an index.js which does exactly that:
var through2 = require('through2'),
    gutil = require('gulp-util'),
    path = require('path'),
    File = require('vinyl');

var PLUGIN_NAME = 'foo';

module.exports = function (options) {
    var files = {};
    var outputFile = null;
    return through2.obj(function (file, enc, callback) {
        outputFile = outputFile || file;
        var filePath = path.relative(file.base, file.path);
        files[filePath] = file.contents.toString();
        callback();
    }, function (callback) {
        outputFile = outputFile ? outputFile.clone() : new File();
        outputFile.path = path.resolve(outputFile.base, options.fileName);
        outputFile.contents = new Buffer(
            'var ' + options.varName + ' = ' +
            JSON.stringify(files, null, 2) + ';'
        );
        this.push(outputFile);
        callback();
    });
}
Since you want to output a key/value mapping from file names to file contents, our transformFunction just stores both of those things in a regular JavaScript object, files. None of the input files themselves are emitted; their names and contents are just stored until we have all of them.
The only tricky part is making sure that we respect the .base property of each file as is customary for gulp plugins. This allows the user to provide a custom base folder using the base option in gulp.src().
Once all files have been processed through2 calls the flushFunction. In there we create our output file with the provided fileName (once again making sure we respect the .base property).
Creating the output file contents is then just a matter of serializing our files object using JSON.stringify() (which automatically takes care of any escaping that has to be done).
The resulting ./test/output/output.js will then look like this:
var bar = {
    "a.html": "<h1 class=\"header\">a</h1>\n",
    "b.html": "<h1 class=\"header\">b</h1>\n",
    "c.html": "<h1 class=\"header\">c</h1>\n"
};
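As a side note on the base handling: by default the base for a glob like 'src/**/*.html' is 'src', so the keys would be "a.html" and so on. A quick sketch (folder layout is illustrative) of overriding it so the keys keep the folder prefix:

gulp.src(['src/**/*.html'], { base: '.' })
    .pipe(foo({ fileName: 'output.js', varName: 'bar' }))
    .pipe(gulp.dest('./test/output'));
// output.js now maps "src/a.html", "src/b.html", ...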
You should use the standard gulp pipeline technique. This means you can use the gulp-insert package in order to add the string xxx:
var insert = require('gulp-insert');
.pipe(insert.append('xxx')); // Appends 'xxx' to the contents of every file
You can also prepend, append and wrap with this package, and it of course supports the gulp standards.
So the full example will be:
var gulp = require('gulp');
var concat = require('gulp-concat');
var foo = require('./index');
var insert = require('gulp-insert');

gulp.task('default', function() {
    gulp.src(['a.html', 'b.html', 'c.html'])
        .pipe(foo())
        .pipe(insert.append('xxx'))
        .pipe(concat('output.js'))
        .pipe(gulp.dest('./test/output'));
});
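Since the stated goal was +xxxxxxxxx+, note that gulp-insert also provides a wrap helper; a sketch that wraps the concatenated output (rather than each input file) could look like this:

gulp.task('default', function() {
    return gulp.src(['a.html', 'b.html', 'c.html'])
        .pipe(foo())
        .pipe(concat('output.js'))
        .pipe(insert.wrap('+', '+')) // prepends '+' and appends '+' to the concatenated contents
        .pipe(gulp.dest('./test/output'));
});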

Write a line into a .txt file with Node.js

I want to use Node.js to create a simple logging system which inserts a line before the last line of a .txt file. However, I don't know how Node.js's file system functionality works.
Can someone explain it?
Inserting data into the middle of a text file is not a simple task. If possible, you should append it to the end of your file.
The easiest way to append data to a text file is to use the built-in fs.appendFile(filename, data[, options], callback) function from the fs module:
var fs = require('fs')

fs.appendFile('log.txt', 'new data', function (err) {
    if (err) {
        // append failed
    } else {
        // done
    }
})
But if you want to write data to the log file several times, then it's best to use the fs.createWriteStream(path[, options]) function instead:
var fs = require('fs')

var logger = fs.createWriteStream('log.txt', {
    flags: 'a' // 'a' means appending (old data will be preserved)
})

logger.write('some data') // append string to your file
logger.write('more data') // again
logger.write('and more') // again
Node will keep appending new data to your file every time you call .write, until your application is closed or until you manually close the stream by calling .end:
logger.end() // close stream
Note that logger.write in the above example does not write to a new line. To write data to a new line:
var writeLine = (line) => logger.write(`\n${line}`);
writeLine('Data written to a new line');
Simply use the fs module and something like this:
fs.appendFile('server.log', 'string to append', function (err) {
    if (err) return console.log(err);
    console.log('Appended!');
});
Step 1
If you have a small file, read all the file data into memory.
Step 2
Convert the file data string into an array of lines.
Step 3
Search the array to find the location where you want to insert the text.
Step 4
Once you have the location, insert your text:
yourArray.splice(index, 0, "new added text");
Step 5
Convert your array back to a string:
yourArray.join("");
Step 6
Write your file like so:
fs.writeFileSync('yourFile.txt', yourArray.join(''));
This is not advised if your file is too big.
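A minimal sketch that puts those steps together (the file name and insertion point are illustrative):

var fs = require('fs');

// Steps 1-2: read the whole (small) file and split it into lines
var lines = fs.readFileSync('log.txt', 'utf8').split('\n');

// Step 3: pick the insertion point (here: before the last line)
var index = lines.length - 1;

// Step 4: insert the new text at that location
lines.splice(index, 0, 'new added text');

// Steps 5-6: join the lines back together and rewrite the file
fs.writeFileSync('log.txt', lines.join('\n'));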
I created a log file which prints data into a text file using the "Winston" logger. The source code is below:
const { createLogger, format, transports } = require('winston');
var fs = require('fs')
var logger = fs.createWriteStream('Data Log.txt', {
    flags: 'a'
})
const os = require('os');
var sleep = require('system-sleep');
var endOfLine = require('os').EOL;

var t = ' ';
var s = ' ';
var q = ' ';

var array1 = [];
var array2 = [];
var array3 = [];
var array4 = [];

array1[0] = 78;
array1[1] = 56;
array1[2] = 24;
array1[3] = 34;

for (var n = 0; n < 4; n++) {
    array2[n] = array1[n].toString();
}
for (var k = 0; k < 4; k++) {
    array3[k] = Buffer.from(' ');
}
for (var a = 0; a < 4; a++) {
    array4[a] = Buffer.from(array2[a]);
}
for (var m = 0; m < 4; m++) {
    array4[m].copy(array3[m], 0);
}

logger.write('Date' + q);
logger.write('Time' + (q + ' '))
logger.write('Data 01' + t);
logger.write('Data 02' + t);
logger.write('Data 03' + t);
logger.write('Data 04' + t)
logger.write(endOfLine);
logger.write(endOfLine);

function mydata() // user-defined function
{
    logger.write(datechar + s);
    logger.write(timechar + s);
    for (n = 0; n < 4; n++) {
        logger.write(array3[n]);
    }
    logger.write(endOfLine);
}

var now = new Date();
var dateFormat = require('dateformat');
var date = dateFormat(now, "isoDate");
var time = dateFormat(now, "h:MM:ss TT ");
var datechar = date.toString();
var timechar = time.toString();
mydata();
sleep(5 * 1000);

nodejs: each line in separate file

I want to split a file: each line into a separate file. The initial file is really big. I ended up with the code below:
var fs = require('fs');
var split = require('split');

var fileCounter = -1;
function getWritable() {
    fileCounter++;
    var writable = fs.createWriteStream('data/part' + fileCounter + '.txt', { flags: 'w' });
    return writable;
}

// file holds the path to the big input file
var readable = fs.createReadStream(file).pipe(split());
readable.on('data', function (line) {
    var flag = getWritable().write(line, function () {
        readable.resume();
    });
    if (!flag) {
        readable.pause();
    }
});
It works, but it is ugly. Is there a more nodish way to do it? Maybe with piping and without pause/resume.
NB: it's not a question about lines/files/etc. The question is about streams; I'm just trying to illustrate it with this problem.
You can use Node's built-in readline module.
var fs = require('fs');
var readline = require('readline');

var fileCounter = -1;
var file = "foo.txt";

readline.createInterface({
    input: fs.createReadStream(file),
    terminal: false
}).on('line', function (line) {
    fileCounter++; // increment first so the numbering starts at part0
    var writable = fs.createWriteStream('data/part' + fileCounter + '.txt', { flags: 'w' });
    writable.write(line);
});
Note that this will lose the last line of the file if there is no newline at the end, so make sure your last line of data is followed by a newline.
Also note that the docs indicate that it is Stability index 2, meaning:
Stability: 2 - Unstable. The API is in the process of settling, but has not yet had sufficient real-world testing to be considered stable. Backwards-compatibility will be maintained if reasonable.
How about the following? Did you try it? Pause and resume logic isn't really needed here.
var split = require('split');
var fs = require('fs');

var fileCounter = -1;
var readable = fs.createReadStream(file).pipe(split());
readable.on('data', function (line) {
    fileCounter++;
    var writable = fs.createWriteStream('data/part' + fileCounter + '.txt', { flags: 'w' });
    writable.write(line);
    writable.end(); // end() flushes and closes the stream; close() could discard buffered data
});
Piping dynamically would be hard...
EDIT: You could create a writable (so pipe()-able) object that would, on each 'data' event, create the file, open it, write the data, and close it, but it:
wouldn't be reusable
wouldn't follow the KISS principle
would require special and specific logic for file naming (it would accept a string pattern as an argument in its constructor, with a placeholder for the number, etc.)
I really don't recommend that path, or you're going to spend ages implementing a not-really-reusable module. Though it would make a good writable implementation exercise; see the sketch below.
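For the curious, a minimal sketch of such a Writable (the class name and naming pattern are illustrative, not a recommendation):

const { Writable } = require('stream');
const fs = require('fs');

// A one-off Writable that puts each incoming chunk (one line) into its own file.
class PerLineFileWriter extends Writable {
    constructor(pattern) {
        super();
        this.pattern = pattern; // e.g. 'data/part%d.txt'
        this.counter = 0;
    }
    _write(line, encoding, callback) {
        var name = this.pattern.replace('%d', this.counter++);
        // fs.writeFile's callback signals completion, giving us backpressure for free
        fs.writeFile(name, line, callback);
    }
}

// usage (assuming the same split() pipeline as above):
// fs.createReadStream(file).pipe(split()).pipe(new PerLineFileWriter('data/part%d.txt'));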
