Can I chain two or more commands together when using gm, the GraphicsMagick library for node?
Specifically, I've got an image that I'd like to add text to, then put a watermark on it, but nothing I try seems to work.
I've tried using gm(image).drawText(0,0,"Text").composite(logo) ... but that tells me Unrecognized option (-draw). Similar thing when I composite first, then draw text.
I also tried writing the file, then adding the .drawText call to the end, but that didn't work.
So can I chain two or more commands together?
Yes, you can, but not in this situation. You can work around it by calling gm twice and piping the result of the first call into the second:
const gm = require('gm');
const stream = require('stream');
const passThrough = new stream.PassThrough();

// The first pass draws the text with `convert`; its output is streamed into the second pass,
// which merges the logo with `composite` and writes the final file.
gm(image).drawText(0, 0, 'Text').stream().pipe(passThrough);
gm(passThrough).composite(logo).write('./output.png', e => console.log(e || 'OK'));
To be honest, the gm library sucks. If you don't know anything about GraphicsMagick, gm fails to provide a good enough abstraction to hide it. You are constantly forced to use constructions like .resize(240, 240, '!') which make no sense unless you know the syntax of GraphicsMagick's -resize option. That's because when you call a gm method, it just appends an option to some GraphicsMagick command that is executed when you call .write(), and this is one of the situations where that approach fails.

GraphicsMagick provides a few commands, and each supports different options. The most commonly used command is convert; it supports, for example, the -draw option used by the .drawText() method. Another command is composite, which merges two images together, and it does not support -draw. When you use the .composite() method, gm switches to the composite command, so .drawText() starts failing. So you can chain methods like .drawText() and .resize(), but not .drawText() and .composite().
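For contrast, here is a minimal sketch of a chain that does work, because every option maps onto the convert command (it reuses the image variable from the question; the output file name is just an example):
gm(image)
  .resize(240, 240, '!')
  .drawText(0, 0, 'Text')
  .write('./labelled.png', function (err) {
    console.log(err || 'OK');
  });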
You can use toBuffer and pass the buffer to the next gm call (this particular code is not tested, but I have used a similar technique on AWS Lambda).
gm(image)
  .drawText(0, 0, 'Text')
  .toBuffer('PNG', function (err, buffer) {
    if (err) return handle(err);
    gm(buffer)
      .composite(logo)
      .write('./output.png', e => console.log(e || 'OK'));
    console.log('done!');
  });
I have a client side app where users can upload an image. I receive this image in my Node JS app as readable data and then manipulate it before saving like this:
uploadPhoto: async (server, request) => {
  try {
    const randomString = `${uuidv4()}.jpg`;
    const stream = Fse.createWriteStream(`${rootUploadPath}/${userId}/${randomString}`);
    const resizer = Sharp()
      .resize({
        width: 450
      });

    await data.file
      .pipe(resizer)
      .pipe(stream);
This works fine and writes the file to the project's local directory. The problem comes when I try to use the same readable data again in the same async function. Please note, all of this code is in a try block.
const stream2 = Fse.createWriteStream(`${rootUploadPath}/${userId}/thumb_${randomString}`);
const resizer2 = Sharp()
  .resize({
    width: 45
  });

await data.file
  .pipe(resizer2)
  .pipe(stream2);
The second file is written, but when I check it, it seems corrupted or the data wasn't successfully written. The first image is always fine.
I've tried a few things and found one method that seems to work, but I don't understand why. I add this code just before I create the second write stream:
data.file.on('end', () => {
console.log('There will be no more data.');
});
Putting the code for the second write stream inside the on-end callback block doesn't make a difference; however, if I leave the code outside of the block, between the first write stream code and the second write stream code, then it works and both files are successfully written.
It doesn't feel right leaving the code the way it is. Is there a better way I can write the second thumbnail image? I've tried using the Sharp module to read the file after the first write stream writes the data and then create a smaller version of it, but it doesn't work. The file never seems to be ready to use.
You have two alternatives; which one fits depends on how your software is designed.
If possible, I would avoid executing two transform operations on the same stream in the same "context", e.g. an API endpoint. I would rather separate those two transforms so they do not work on the same input stream.
If that is not possible, or would require too many changes, the solution is to fork the input stream and then pipe it into two different Writables. I normally use Highland.js fork for these tasks.
Please also see my comments on how to properly handle streams with async/await to check when the write operation is finished.
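For illustration, here is a minimal sketch of the fork approach without Highland, using two PassThrough streams to tee the upload. It reuses Sharp, Fse, data.file and the path variables from the question, and assumes a reasonably recent Node (stream.pipeline has been available since Node 10):
const { PassThrough, pipeline } = require('stream');
const { promisify } = require('util');
const pipelineAsync = promisify(pipeline);

// Fork the incoming file stream into two independent branches...
const fullCopy = new PassThrough();
const thumbCopy = new PassThrough();
data.file.pipe(fullCopy);
data.file.pipe(thumbCopy);

// ...and run both resize pipelines, awaiting until both files are fully written.
await Promise.all([
  pipelineAsync(
    fullCopy,
    Sharp().resize({ width: 450 }),
    Fse.createWriteStream(`${rootUploadPath}/${userId}/${randomString}`)
  ),
  pipelineAsync(
    thumbCopy,
    Sharp().resize({ width: 45 }),
    Fse.createWriteStream(`${rootUploadPath}/${userId}/thumb_${randomString}`)
  ),
]);
Because pipeline resolves (or rejects) only when the destination is finished, this also addresses the question of knowing when the write operation is done.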
So I have two PNG images, both non-transparent 24bpp.
One image contains a rainbow, the other contains a single line of text:
I do the same thing with both of them:
var gm = require('gm').subClass({imageMagick: true})

gm("./sources/source.png").bitdepth(24).write("test.png", function(){
  console.log("test.png")
});

gm("./sources/source2.png").bitdepth(24).write("test2.png", function(){
  console.log("test2.png")
});
where gm is this
And I set both to 24bpp explicitly
As a result, I get two images with different bit depths:
In some cases I also got a 32bpp image.
How can I make it create only a 24bpp image (discarding the alpha channel if needed)?
Also, I don't want to create jpgs.
Thanks to @mark-setchell, I could force the bit depth. I did it this way in Node:
gm("./sources/source.png")
.out("-define")
.out("png:color-type=2")
.write("test.png", function(){
console.log("test.png")
});
out() is an undocumented method, but it basically lets you add custom parameters to the command line. Notice that
.out("-define png:color-type=2")
won't work; it only works if you pass each parameter in an individual .out() call.
.bitdepth(24) doesn't seem to affect the output at all, probably because I used .subClass({imageMagick: true}) above.
My suggestion is to try using -define to set the variable png:color-type=2. As you worked out, and kindly shared with the community, it is done as follows:
gm("./sources/source.png")
.out("-define")
.out("png:color-type=2")
.write("test.png", function(){
console.log("test.png")
});
Node.js has different options to consume stream data: streams 1, 2, 3 and so on.

My question is about the real-life application of these different options. I fairly understand the difference between readable/read, the data event and pipe, but I'm not very confident about choosing a specific method.

For example, if I want flow control, read with some manual work, as well as pipe, can be used. The data event ignores flow control; should I stop using the plain data event?
For most things, you should be able to use
src.pipe(dest);
If you look at the source code for the Stream.prototype.pipe implementation, you can see that it's just a very handy wrapper that sets everything up for you.
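As a rough sketch of what pipe is doing under the hood (simplified; the real implementation also handles errors, unpipe and cleanup), the manual 'data' event version with flow control looks something like this:
src.on('data', function (chunk) {
  var ok = dest.write(chunk);
  if (!ok) {
    src.pause();                      // destination buffer is full: stop reading
    dest.once('drain', function () {
      src.resume();                   // destination drained: start reading again
    });
  }
});
src.on('end', function () {
  dest.end();
});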
For all the work I do with streams, I generally just choose the proper stream type (Readable, Writable, Duplex, Transform, or PassThrough) and then define the proper methods (_read, _write, and/or _transform) on the stream. Lastly, I use .pipe to connect everything together.
It's very common to see stream setups that appear to be "circular":
client.pipe(encoder).pipe(server).pipe(decoder).pipe(client)
As an example, here's a stream I'm using in my burro module. You can write objects to this stream, and you can read JSON strings from it.
// burro/encoder.js
var stream = require("stream"),
    util = require("util");

var Encoder = module.exports = function Encoder() {
  stream.Transform.call(this, {objectMode: true});
};
util.inherits(Encoder, stream.Transform);

Encoder.prototype._transform = function _transform(obj, encoding, callback) {
  this.push(JSON.stringify(obj));
  callback(null);
};
As a general recommendation, you will almost always write your Streams like this. That is, you write your own "class" that inherits from one of the built-in streams. It is not really practical for you to use a built-in stream directly.
To demonstrate how you might use this, start by creating a new instance of the stream
var encoder = new Encoder();
See what the encoder outputs by piping it to stdout
encoder.pipe(process.stdout);
Write some sample objects to it
encoder.write({foo: "bar", a: "b"});
// '{"foo":"bar","a":"b"}'
encoder.write({hello: "world"});
// '{"hello":"world"}'
When I pipe something like an image file through a stream, is there any way to send a meta object along with it?
My server gets sent an image from a user. The image gets pushed through a set of streams that perform various actions.
The final stream emits a data event; it passes the resulting image buffer into a callback, but I lose all context for the user. I need to keep the resulting image tied to the user's id and some other metadata.
Ideal:
stream.on('data', function(img, meta){
...
})
Thanks for any possible solutions!
In short, no, there's nothing built into Node.js to support including metadata with streams. You do have some other options, though, including:
You could use a closure to track the meta data separately from the stream. For example:
function handleImage(imageStream) {
  var meta = {...};
  imageStream.pipe(otherStreams).on('data', function(image) {
    // you now have `image` and `meta` variables at your disposal here.
  });
}
The downside of this is that the metadata is not available to your otherStreams.
This is a good solution if your other streams are third-party code outside of your control, or if they don't need to know about the metadata.
You could do something similar to HTTP headers, where all the data up to a certain point is metadata, and everything after it is the image. (In HTTP, the delimiter is wherever \n\n occurs first.) All of your streams in the chain have to know about this and handle it, though.
If you know your metadata will always be in one chunk and none of your streams split or merge chunks, then you could simplify this a bit and just say that the first (or last) chunk is always metadata.
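As a rough sketch of that simplified variant (my own illustration, not code from the question), a Transform could peel off the first chunk as JSON metadata and forward every later chunk untouched:
var stream = require('stream');
var util = require('util');

function MetaFirstChunk() {
  stream.Transform.call(this);
  this.meta = null;
}
util.inherits(MetaFirstChunk, stream.Transform);

MetaFirstChunk.prototype._transform = function _transform(chunk, encoding, callback) {
  if (this.meta === null) {
    this.meta = JSON.parse(chunk.toString());
    this.emit('meta', this.meta);   // downstream code can listen for this event
    return callback();              // swallow the metadata chunk
  }
  callback(null, chunk);            // forward image data unchanged
};
This only works under the assumption stated above: the metadata arrives as exactly one chunk and nothing upstream re-chunks the data.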
Switch to an object stream like Amoli mentioned in his answer. Here you would pass {image: imgData, meta: {...}}. You would then have to update your other streams to expect this format.
The main downside of this method, though, is that you either have to pass the metadata multiple times, cache it somewhere for each stream that needs it, or pass your entire image as one chunk (which kind of kills the entire point of "streams"). And, from what I've been told, node.js can optimize text/binary streams better than object streams. So, this probably isn't a good approach for your situation.
https://github.com/dominictarr/mux-demux might be helpful here. It combines multiple streams into one, so you could have separate image and meta streams. I'm not sure how well it would work for your situation though. You'd probably need to update all of your streams to be aware of it.
I know I said that all but the first option require modifying the other streams, but there is a way around that: you could create a generic "stream wrapper" that splits up the image and meta data and passes just the image data through the main stream, and has the meta data bypass it and go on to the next one down the chain. This gets ugly fast though, so probably not the best idea.
Basically, whenever you want to read or write any objects which are not strings or buffers, you’ll need to put your stream into objectMode
Example (source):
function S3Lister (s3, options) {
  options || (options = {});
  stream.Readable.call(this, { objectMode : true });

  this.s3         = s3; // a knox-like client.
  this.marker     = options.start;
  this.connecting = false;
  this.ended      = false;
}
util.inherits(S3Lister, stream.Readable);
We set the stream to use objectMode as we want to return not just data but also some metadata.
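For illustration (a generic sketch of my own, not the _read implementation of S3Lister), an objectMode Readable can push whole objects, so each pushed item can carry its data and its metadata together:
var stream = require('stream');
var util = require('util');

function ObjectSource(items) {
  stream.Readable.call(this, { objectMode: true });
  this.items = items.slice();
}
util.inherits(ObjectSource, stream.Readable);

ObjectSource.prototype._read = function _read() {
  // In objectMode, push() accepts arbitrary objects; pushing null signals the end.
  this.push(this.items.length ? this.items.shift() : null);
};

new ObjectSource([{ key: 'a.png', size: 123 }, { key: 'b.png', size: 456 }])
  .on('data', function (obj) { console.log(obj); });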
For more information:
Node.js Docs stream object mode
An introduction to Node's streams
I created a module called metastream for this type of thing. (It is in npm).
What I'd like to achieve is: parsing a chunk of XML, editing some values, and saving the end result in a new XML file.
The module is sax-js: https://github.com/isaacs/sax-js#readme
The module has some built-in mechanisms to read from and write to streams.
I thought the task would be a piece of cake; on the contrary I have been struggling with it for the whole day.
Here is my code:
var fs = require('fs');
var saxStream = require("sax").createStream(true);

saxStream.on("text", function (node) {
  if (node === 'foo') { // the content I want to update
    node = 'blabla';
  }
});

fs.createReadStream("mysongs.xml")
  .pipe(saxStream)
  .pipe(fs.createWriteStream("mysongs-copy.xml"));
I did think that updating some content (see the comment above) would suffice to write the updated stream into a new file.
What's wrong with this code?
Thanks for your help,
Roland
The sax module doesn't let you modify nodes like that. If you take a look at this bit of code, you'll see that the input is passed indiscriminately to the output.
All hope is not, however, lost! Check out the pretty-print example - it would be a good starting point for what you want to do. You'd have to do a bit of work to implement the readable part of the stream, though, if you still want to be able to .pipe() out of it.
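For example, a rough sketch of that approach (my own illustration, heavily simplified: it ignores the XML declaration, comments, CDATA, self-closing tags and attribute/text re-escaping) listens to the sax events and writes the markup out itself instead of piping through saxStream:
var fs = require('fs');
var saxStream = require('sax').createStream(true);
var out = fs.createWriteStream('mysongs-copy.xml');

saxStream.on('opentag', function (node) {
  var attrs = Object.keys(node.attributes).map(function (name) {
    return ' ' + name + '="' + node.attributes[name] + '"';
  }).join('');
  out.write('<' + node.name + attrs + '>');
});

saxStream.on('text', function (text) {
  out.write(text === 'foo' ? 'blabla' : text); // edit the value here
});

saxStream.on('closetag', function (name) {
  out.write('</' + name + '>');
});

fs.createReadStream('mysongs.xml').pipe(saxStream);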
If you know the general structure of the XML, you can try xml-flow. It converts an XML stream into objects, but has a utility to convert them back to xml strings:
https://github.com/matthewmatician/xml-flow
Based on deoxxa's answer I wrote an NPM module for this https://www.npmjs.com/package/sax-streamer