Node - Abstracting Pipe Steps into Function - node.js

I'm familiar with Node streams, but I'm struggling on best practices for abstracting code that I reuse a lot into a single pipe step.
Here's a stripped down version of what I'm writing today:
inputStream
  .pipe(csv.parse({columns: true}))
  .pipe(csv.transform(function(row) { return transform(row); }))
  .pipe(csv.stringify({header: true}))
  .pipe(outputStream);
The actual work happens in transform(). The only things that really change are inputStream, transform(), and outputStream. Like I said, this is a stripped-down version of what I actually use. I have a lot of error handling and logging on each pipe step, which is ultimately why I'm trying to abstract the code.
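For context, the per-step handling looks roughly like this (a minimal sketch; logStep and handleError stand in for the real logging and error code, they are not part of the snippet above):
inputStream
  .pipe(csv.parse({columns: true}))
  .on('error', function(err) { handleError('parse', err); })
  .pipe(csv.transform(function(row) { logStep('transform', row); return transform(row); }))
  .on('error', function(err) { handleError('transform', err); })
  .pipe(csv.stringify({header: true}))
  .on('error', function(err) { handleError('stringify', err); })
  .pipe(outputStream);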
What I'm looking to write is a single pipe step, like so:
inputStream
.pipe(csvFunction(transform(row)))
.pipe(outputStream);
What I'm struggling to understand is how to turn those pipe steps into a single function that accepts a stream and returns a stream. I've looked at libraries like through2, but I'm not sure how that gets me to where I'm trying to go.

You can use the PassThrough class like this:
var PassThrough = require('stream').PassThrough;

var csvStream = new PassThrough();
csvStream.on('pipe', function (source) {
  // undo piping of source
  source.unpipe(this);
  // build own pipe-line and store internally
  this.combinedStream =
    source.pipe(csv.parse({columns: true}))
          .pipe(csv.transform(function (row) {
            return transform(row);
          }))
          .pipe(csv.stringify({header: true}));
});
csvStream.pipe = function (dest, options) {
  // pipe internal combined stream to dest
  return this.combinedStream.pipe(dest, options);
};

inputStream
  .pipe(csvStream)
  .pipe(outputStream);
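If pulling in a small helper module is an option, stream-combiner (also used in a later answer below) wraps the same idea and gives you the single reusable step directly. A rough sketch, assuming the same csv module as above:
var combine = require('stream-combiner');

function csvFunction(transformFunc) {
  return combine(
    csv.parse({columns: true}),
    csv.transform(function(row) { return transformFunc(row); }),
    csv.stringify({header: true})
  );
}

inputStream
  .pipe(csvFunction(transform))
  .pipe(outputStream);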

Here's what I ended up going with. I used the through2 library and the streaming API of the csv library to create the pipe function I was looking for.
var csv = require('csv');
var through = require('through2');

module.exports = function(transformFunc) {
  var parser = csv.parse({columns: true, relax_column_count: true}),
      transformer = csv.transform(function(row) {
        return transformFunc(row);
      }),
      stringifier = csv.stringify({header: true});

  return through(function(chunk, enc, cb) {
    var stream = this;

    parser.on('data', function(data) {
      transformer.write(data);
    });
    transformer.on('data', function(data) {
      stringifier.write(data);
    });
    stringifier.on('data', function(data) {
      stream.push(data);
    });

    parser.write(chunk);

    parser.removeAllListeners('data');
    transformer.removeAllListeners('data');
    stringifier.removeAllListeners('data');

    cb();
  });
}
It's worth noting the part where I remove the event listeners towards the end; this was due to running into memory errors from creating too many event listeners. I initially tried solving this problem by listening to events with once, but that prevented subsequent chunks from being read and passed on to the next pipe step.
Let me know if anyone has feedback or additional ideas.
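One such idea, untested: wire the internal parser, transformer, and stringifier together once, up front, instead of attaching and removing listeners on every chunk, and only forward data inside the through2 callbacks. A rough sketch under the same csv/through2 APIs as above:
var csv = require('csv');
var through = require('through2');

module.exports = function(transformFunc) {
  var parser = csv.parse({columns: true, relax_column_count: true});
  var transformer = csv.transform(function(row) { return transformFunc(row); });
  var stringifier = csv.stringify({header: true});

  // Wire the internal pipeline once.
  parser.pipe(transformer).pipe(stringifier);

  var wrapper = through(function(chunk, enc, cb) {
    // feed incoming chunks into the head of the internal pipeline
    parser.write(chunk);
    cb();
  }, function(cb) {
    stringifier.on('end', cb); // finish once the internal pipeline has flushed
    parser.end();
  });

  // Forward everything the internal pipeline produces.
  stringifier.on('data', function(data) { wrapper.push(data); });

  return wrapper;
};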

Related

Read a huge JSON file: how do I know when all the data has been received?

I am having a problem with the asynchronous nature of Node.js.
For example, I have the following code, which reads a huge JSON file:
var fs = require('fs');
var JSONStream = require('JSONStream');

var json_spot_parser = function(path) {
  this.count = 0;
  var self = this;
  let jsonStream = JSONStream.parse('*');
  let fileStream = fs.createReadStream(path);
  jsonStream.on('data', (item) => {
    // console.log(item) // correctly logs each JSON object in the file
    self.count++; // 134,000
  });
  jsonStream.on('end', function () {
    // I know it ends here,
  });
  fileStream.pipe(jsonStream);
};
json_spot_parser.prototype.print_count = function() {
  console.log(this.count);
};
module.exports = json_spot_parser;
In another module I use it as:
var m_path = path.join(__dirname, '../..', this.pathes.spots);
this.spot_parser = new json_spot_parser(m_path);
this.spot_parser.print_count();
I want to read all the JSON objects and process them, but the asynchrony is my problem. I am not familiar with that kind of programming; I used to program sequentially, in languages such as C and C++.
Since I don't know when the program finishes reading the JSON objects, I don't know when/where to process them.
After
this.spot_parser = new json_spot_parser(m_path);
I expect to be able to deal with the JSON objects, but as I said, I can't.
I want someone to explain how to write a Node.js program in this kind of situation; I want to know the standard practice. So far I have read some posts, but I believe most of them are short-term fixes.
So, my question is:
How does a Node.js programmer handle this kind of problem?
Please tell me the standard way; I want to get good at Node.js.
Thanks!
You can use callbacks as @paqash suggested, but returning a promise would be a better solution.
First, return a new Promise from json_spot_parser:
var json_spot_parser = function(path) {
  return new Promise(function(resolve, reject) {
    var count = 0;
    let jsonStream = JSONStream.parse('*');
    let fileStream = fs.createReadStream(path);
    jsonStream.on('data', (item) => {
      // console.log(item) // correctly logs each JSON object in the file
      count++; // 134,000
    });
    jsonStream.on('end', function () {
      resolve(count);
    });
    fileStream.pipe(jsonStream);
  });
};

module.exports = json_spot_parser;
In another module:
var m_path = path.join(__dirname, '../..', this.pathes.spots);
this.spot_parser = new json_spot_parser(m_path);
this.spot_parser.then(function(count) { console.log(count); });
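For what it's worth, the same consumption reads naturally with async/await as well (a sketch; the require path is hypothetical):
const json_spot_parser = require('./json_spot_parser');

async function printSpotCount(m_path) {
  const count = await json_spot_parser(m_path); // resolves when the 'end' event fires
  console.log(count);
}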
As you mentioned, Node.js has an asynchronous execution model, and you should learn how to think in that way; it's required if you want to be good at Node.js. If I may suggest, you could start with this article:
Understanding Async Programming in Node.js
P.S. Try to use camelCase variable names and follow the Airbnb JS style guide.
You should process them in the callbacks. Your code above looks pretty good; what exactly are you trying to do that you're unable to?

How to mock streams in NodeJS

I'm attempting to unit test one of my Node.js modules which deals heavily in streams. I'm trying to mock a stream (that I will write to), as within my module I have .on('data') and .on('end') listeners that I would like to trigger. Essentially I want to be able to do something like this:
var mockedStream = new require('stream').Readable();
mockedStream.on('data', function withData(data) {
  console.dir(data);
});
mockedStream.on('end', function() {
  console.dir('goodbye');
});
mockedStream.push('hello world');
mockedStream.close();
This executes, but the 'on' event never gets fired after I do the push (and .close() is invalid).
All the guidance I can find on streams uses the 'fs' or 'net' library as a basis for creating a new stream (https://github.com/substack/stream-handbook), or they mock it out with sinon, but the mocking gets very lengthy very quickly.
Is there a nice way to provide a dummy stream like this?
There's a simpler way: stream.PassThrough
I've just found Node's very easy to miss stream.PassThrough class, which I believe is what you're looking for.
From Node docs:
The stream.PassThrough class is a trivial implementation of a Transform stream that simply passes the input bytes across to the output. Its purpose is primarily for examples and testing...
The code from the question, modified:
const { PassThrough } = require('stream');
const mockedStream = new PassThrough(); // <----
mockedStream.on('data', (d) => {
  console.dir(d);
});
mockedStream.on('end', function() {
  console.dir('goodbye');
});
mockedStream.emit('data', 'hello world');
mockedStream.end(); // <-- end. not close.
mockedStream.destroy();
mockedStream.push() works too, but the data arrives as a Buffer, so you might want to do: console.dir(d.toString());
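In other words, a small sketch of that point, continuing the snippet above:
mockedStream.on('data', (d) => {
  console.dir(d.toString()); // push() delivers a Buffer, so convert it before printing
});
mockedStream.push('hello world');
mockedStream.end();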
Instead of using push, I should have been using .emit(<event>, <data>).
My mock code now works and looks like:
var mockedStream = new require('stream').Readable();
mockedStream._read = function(size) { /* do nothing */ };
myModule.functionIWantToTest(mockedStream); // has .on() listeners in it
mockedStream.emit('data', 'Hello data!');
mockedStream.emit('end');
The accepted answer is only partially correct. If all you need is for events to fire, using .emit('data', datum) is okay, but if you need to pipe this mock stream anywhere else it won't work.
Mocking a Readable stream is surprisingly easy, requiring only the built-in Readable class.
const { Readable } = require('stream');

let eventCount = 0;
const mockEventStream = new Readable({
  objectMode: true,
  read: function (size) {
    if (eventCount < 10) {
      eventCount = eventCount + 1;
      return this.push({message: `event${eventCount}`});
    } else {
      return this.push(null);
    }
  }
});
Now you can pipe this stream wherever and 'data' and 'end' will fire.
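For example, attaching plain listeners (or piping it into the unit under test) drains it; a quick sketch continuing the block above:
mockEventStream
  .on('data', (event) => console.log(event.message)) // event1 ... event10
  .on('end', () => console.log('done'));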
Another example from the node docs:
https://nodejs.org/api/stream.html#stream_an_example_counting_stream
Building on @flacnut's answer, I did this (in Node.js 12+) using Readable.from() to construct a stream preloaded with data (a list of filenames):
const mockStream = require('stream').Readable.from([
  'file1.txt',
  'file2.txt',
  'file3.txt',
])
In my case, I wanted to mock the stream of filenames returned by fast-glob.stream:
const glob = require('fast-glob')
// inject the mock stream into glob module
glob.stream = jest.fn().mockReturnValue(mockStream)
In the function being tested:
const stream = glob.stream(globFilespec)
for await (const filename of stream) {
  // filename = file1.txt, then file2.txt, then file3.txt
}
Works like a charm!
Here's a simple implementation which uses jest.fn() where the goal is to validate what has been written to the stream created by fs.createWriteStream(). The nice thing about jest.fn() is that although the calls to fs.createWriteStream() and stream.write() are inline in this test function, these functions don't need to be called directly by the test.
const fs = require('fs');

const mockStream = {}

test('mock fs.createWriteStream with mock implementation', async () => {
  const createMockWriteStream = (filename, args) => {
    return mockStream;
  }
  mockStream.write = jest.fn();
  fs.createWriteStream = jest.fn(createMockWriteStream);

  const stream = fs.createWriteStream('foo.csv', {'flags': 'a'});
  await stream.write('foobar');

  expect(fs.createWriteStream).toHaveBeenCalledWith('foo.csv', {'flags': 'a'});
  expect(mockStream.write).toHaveBeenCalledWith('foobar');
})
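If you'd rather not overwrite fs.createWriteStream for the rest of the test file, jest.spyOn can do the same thing and restore the original afterwards; a sketch of the same test:
const fs = require('fs');

test('mock fs.createWriteStream with jest.spyOn', async () => {
  const mockStream = { write: jest.fn() };
  const spy = jest.spyOn(fs, 'createWriteStream').mockReturnValue(mockStream);

  const stream = fs.createWriteStream('foo.csv', {'flags': 'a'});
  await stream.write('foobar');

  expect(spy).toHaveBeenCalledWith('foo.csv', {'flags': 'a'});
  expect(mockStream.write).toHaveBeenCalledWith('foobar');

  spy.mockRestore(); // put the real fs.createWriteStream back
})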

How can I chain streams internally within a custom through2 stream

I'm writing my own through stream in Node which takes in a text stream and outputs an object per line of text. This is what the end result should look like:
fs.createReadStream('foobar')
.pipe(myCustomPlugin());
The implementation would use through2 and event-stream to make things easy:
var es = require('event-stream');
var through = require('through2');

module.exports = function myCustomPlugin() {
  var parse = through.obj(function(chunk, enc, callback) {
    this.push({description: chunk});
    callback();
  });
  return es.split().pipe(parse);
};
However, if I were to pull this apart essentially what I did was:
fs.createReadStream('foobar')
  .pipe(
    es.split()
      .pipe(parse)
  );
Which is incorrect. Is there a better way? Can I inherit es.split() instead of use it inside the implementation? Is there an easy way to implement splits on lines without event-stream or similar? Would a different pattern work better?
NOTE: I'm intentionally doing the chaining inside the function as the myCustomPlugin() is the API interface I'm attempting to expose.
Based on the link in the previously accepted answer, which put me on the right googling track, here's a shorter version if you don't mind another module: stream-combiner (read its code to convince yourself of what's going on!)
var combine = require('stream-combiner')
  , through = require('through2')
  , split = require('split2')

function MyCustomPlugin() {
  var parse = through(/* ... */)
  return combine( split(), parse )
}
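Usage then matches the interface from the question (a sketch, assuming the parse transform from the question fills in the through(/* ... */) placeholder):
fs.createReadStream('foobar')
  .pipe(MyCustomPlugin())
  .on('data', function(obj) {
    // one object per line of 'foobar', e.g. {description: 'some line'}
  });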
I'm working on something similar.
See this solution: Creating a Node.js stream from two piped streams
var outstream = through2().on('pipe', function(source) {
  source.unpipe(this);
  this.transformStream = source.pipe(stream1).pipe(stream2);
});

outstream.pipe = function(destination, options) {
  return this.transformStream.pipe(destination, options);
};

return outstream;

Clone a transform stream in NodeJS

I'm trying to reuse a couple of transform streams (gulp-like, ie. concat() or uglify()) across several readable streams. I only have access to the created instances and not the original subclasses.
It does not work out of the box; I get Error: stream.push() after EOF when I pipe at least two distinct readable streams into my transforms. Somehow the events do appear to leak from one stream to the other.
I've tried to set up a cloneTransform function to somehow cleanly "fork" into two distinct transforms; however, I can't get it to not share events:
function cloneTransform(transform) {
  var ts = new Transform({objectMode: true});
  ts._transform = transform._transform.bind(ts);
  if (typeof transform._flush !== 'undefined') {
    ts._flush = transform._flush.bind(ts);
  }
  return ts;
}
Any alternative idea, existing plugin or solution to address this?
Update: context
I'm working on a rewrite of the gulp-usemin package, it is hosted here: gulp-usemin-stream, example use here.
Basically you parse an index.html looking for comment blocks surrounding style/script declarations, and you want to apply several configurable transformations to these files (see grunt-usemin).
So the problem I'm trying to solve is reusing an array of transforms, [concat(), uglify()], that are passed as options to a gulp meta transform.
You're doing unnecessary work in your code. It should be as simple as:
function clone(transform) {
  var ts = new Transform({ objectMode: true })
  ts._transform = transform._transform
  ts._flush = transform._flush
  return ts
}
However, I'm not sure how you're trying to use it, so I'm not sure what error you're getting. There's also the issue of initialization, where the original transform stream might initialize itself with .queue = [] or something, and you won't be initializing that in your clone.
The solution completely depends on the problem you're trying to solve, and I feel like you're approaching the problem incorrectly in the first place.
It looks like you're trying to use a Transform stream directly instead of subclassing it. What you should be doing is subclassing Transform and overriding the _transform and, optionally, _flush methods. That way, you can just create a new instance of your transform stream for each readable stream that you need to use it with.
Example:
var util = require('util');
var Transform = require('stream').Transform;

function MyTransform(options) {
  Transform.call(this, options);
  // ... any setup you want to do ...
}
util.inherits(MyTransform, Transform);

MyTransform.prototype._transform = function(chunk, encoding, done) {
  // ... your _transform implementation ...
}

// Optional
MyTransform.prototype._flush = function(done) {
  // ... optional flush implementation ...
}
Once you have that setup, you can simply create new instances of MyTransform for each stream you want to use it with:
var readStream1 = ...
var readStream2 = ...
var transform1 = new MyTransform();
var transform2 = new MyTransform();
readStream1.pipe(transform1).pipe(...);
readStream2.pipe(transform2).pipe(...);
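On newer Node versions the same subclass can be written as an ES2015 class, which some may find clearer (a sketch, with the same placeholder bodies as above):
const { Transform } = require('stream');

class MyTransform extends Transform {
  constructor(options) {
    super(options);
    // ... any setup you want to do ...
  }
  _transform(chunk, encoding, done) {
    // ... your _transform implementation ...
    done();
  }
  _flush(done) {
    // ... optional flush implementation ...
    done();
  }
}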

In Meteor, how do I get a Node read stream from a collection's find cursor?

In Meteor, on the server side, I want to use the .find() function on a Collection and then get a Node ReadStream interface from the cursor that is returned. I've tried using .stream() on the cursor as described in the MongoDB docs seen here. However, I get the error "Object [object Object] has no method 'stream'", so it looks like Meteor collections don't have this option. Is there a way to get a stream from a Meteor Collection's cursor?
I am trying to export some data to CSV, and I want to pipe the data directly from the collection's stream into a CSV parser and then into the response going back to the user. I am able to get the response stream from the Router package we are using, and it's all working except for getting a stream from the collection. Fetching the array from the find to push it into the stream manually would defeat the purpose of a stream, since it would put everything in memory. I guess my other option is to use a forEach on the collection and push the rows into the stream one by one, but this seems dirty when I could pipe the stream directly through the parser with a transform on it.
Here's some sample code of what I am trying to do:
response.writeHead(200, {'content-type': 'text/csv'});

// Set up a future
var fut = new Future();

var users = Users.find({}).stream();

CSV().from(users)
  .to(response)
  .on('end', function(count) {
    log.verbose('finished csv export');
    response.end();
    fut.ret();
  });

return fut.wait();
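For reference, the forEach fallback mentioned above would look roughly like this, pushing documents into an object-mode Readable one at a time (a sketch; loading the csv module via Npm.require is an assumption, and this is the fallback, not the cursor stream being asked about):
var Readable = Npm.require('stream').Readable;
var csv = Npm.require('csv');

var userStream = new Readable({objectMode: true});
userStream._read = function() { /* data is pushed by forEach below, not pulled */ };

Users.find({}).forEach(function(user) {
  userStream.push(user);
});
userStream.push(null); // signal the end of the collection

userStream
  .pipe(csv.stringify({header: true}))
  .pipe(response);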
Have you tried creating a custom function and piping to it?
Though this would only work if Users.find() supported .pipe() (again, only if Users.find() inherited from a Node.js streamable object).
Kind of like
var stream = require('stream');
var util = require('util');

function StreamReader() {
  stream.Writable.call(this);
  this.data = '';
  this.end = function() {
    // this.data contains the raw data as a string, so do whatever you need to make it usable, e.g. split on ','
    console.log(this.data);
    db.close();
  };
}
util.inherits(StreamReader, stream.Writable);

StreamReader.prototype._write = function(chunk, encoding, callback) {
  this.data = this.data + chunk.toString('utf8');
  callback();
};

Users.find({}).pipe(new StreamReader());
