I am looking for a working way to use the GM methods in the CollectionFS transformWrite function, depending on the image size. There is a size method implemented in GM, but it works asynchronously, so it seems impossible to use here.
I tried the following:
gm(readStream, fileObj.name()).size(function(err, dimensions){
  if (err) {
    console.log('err with getting size:');
    console.log(err);
  }
  console.log('Result of media_size:');
  console.log(dimensions);
  // here do something depending on the dimensions ...
  gm(readStream, fileObj.name()).resize('1200', '630').stream().pipe(writeStream);
});
When I use the above snippet in the CollectionFS function I get this error:
Error: gm().stream() or gm().write() with a non-readable stream.
This seems to be caused by the async function: when I remove the async call, the upload works perfectly, but then I have no access to the dimensions of the uploaded image.
Is there a solution to get the dimensions of the image in a sync way when I only have access to fileObj, readStream & writeStream?
Edit:
Thanks Jasper for the hint about wrapAsync. I tested it and now have this code in use:
var imgsize;
var img = gm(readStream, fileObj.name());
imgsize = Meteor.wrapAsync(img.size, img);
console.log('call wrapAsync:');
var result;
try {
  result = imgsize();
} catch (e) {
  console.log('Error:');
  console.log(e);
}
console.log('((after imgsize()))');
When I take a look at the console logs, the script stops after "call wrapAsync:"; there is also no error returned, so it is very difficult to tell what the problem is. I also tried this with the NPM package "imagesize", using Meteor.wrapAsync(imagesize) and then imgsize(readStream), which causes the same behavior: no console log after "call wrapAsync:".
The core of the problem is not the asynchronous behavior of gm().size(), but the fact that you use the readStream twice. First you use it to get the size of the image, which drains readStream. Then you try to use it again to resize, but because the stream has already ended, you get an error telling you it is non-readable.
I found the solution at the bottom of the gm package's streams documentation:
GOTCHA: when working with input streams and any 'identify' operation (size, format, etc), you must pass "{bufferStream: true}" if you also need to convert (write() or stream()) the image afterwards. NOTE: this buffers the readStream in memory!
Based on that and the small example below it in the docs, we can change your code to:
gm(readStream, fileObj.name()).size({ bufferStream: true }, function(err, dimensions){
  if (err) {
    console.log('err with getting size:');
    console.log(err);
  }
  console.log('Result of media_size:');
  console.log(dimensions);
  // here do something depending on the dimensions ...
  this.resize('1200', '630').stream().pipe(writeStream);
});
Inside the callback, this refers to the image you're working on, and you can use it to continue your chain.
I tested this in a small sample Meteor application, and it works!
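For example, if you only want to shrink images that exceed a certain width (the 1200 threshold here is just a placeholder), the commented line can become a branch on the reported dimensions:
gm(readStream, fileObj.name()).size({ bufferStream: true }, function (err, dimensions) {
  if (err) {
    console.log('err with getting size:');
    console.log(err);
  }
  if (dimensions && dimensions.width > 1200) {
    // large image: scale it down
    this.resize('1200', '630').stream().pipe(writeStream);
  } else {
    // small image: pass it through untouched
    this.stream().pipe(writeStream);
  }
});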
Related
I'm working with CarboneJS in NodeJS in order to generate reports.
This is the documentation: https://carbone.io/documentation.html#getting-started-with-carbone-js
Using CarboneJS is simple:
carbone.render('./node_modules/carbone/examples/simple.odt', data, function(err, result){
  if (err) {
    return console.log(err);
  }
  fs.writeFileSync('result.odt', result);
});
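Here data is just a plain object whose properties match the {d.xxx} tags in the template; for example, a template containing a {d.firstname} tag could be filled with:
// fills the {d.firstname} tag in the template
var data = { firstname: 'John' };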
What I want to do is pass my own template, which is stored in a database; let's call it MyFileFromDatabase. Then I could do something like this:
const MyFileFromDatabase = Buffer.from(myFile);
carbone.render(MyFileFromDatabase, data, function(err, result){
  if (err) {
    return console.log(err);
  }
  // write the result
  fs.writeFileSync('result.odt', result);
});
What I'm expecting to get: Carbone renders the document.
What I get:
complete erreur sendErrorHttp: TypeError: Cannot read property 'length' of undefined
I don"t know if such feature exist, or should I go with other strategies? Like using Streams?
For now, it is not possible to pass a buffer to the render function.
However, it is already a feature request and the team will probably work on it soon. Here is the issue on GitHub: https://github.com/carboneio/carbone/issues/119
A quick alternative is to use the Carbone Render API, which gives you the possibility to pass the template as a Buffer or as a base64 string. You get 100 free renders per month. Here is the documentation: https://carbone.io/api-reference.html#carbone-render-api
A node SDK is available to call the API easily.
I will update the thread when the buffer feature is out!
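In the meantime, if you need to stay on the local render function, a workaround is to write the buffer from your database to a temporary file and pass that path to render(); a rough sketch (the temp-file naming is illustrative):
const fs = require('fs');
const os = require('os');
const path = require('path');
const carbone = require('carbone');

// render() currently expects a template path, so persist the buffer
// from the database to a temporary file first
const tmpTemplate = path.join(os.tmpdir(), 'template-' + Date.now() + '.odt');
fs.writeFileSync(tmpTemplate, MyFileFromDatabase);

carbone.render(tmpTemplate, data, function (err, result) {
  fs.unlink(tmpTemplate, function () {}); // clean up the temp template
  if (err) {
    return console.log(err);
  }
  fs.writeFileSync('result.odt', result);
});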
How can I write buffer data from a readable stream to a file in Node.js? I know there are already npm packages for this; I am asking this question for learning purposes only. I am also wondering why there is no method in 'fs' where a user can pass a readable stream and create a file directly.
I tried to write a stream.readableBuffer to a file using fs.write by passing the buffer directly, but a small portion of the file is corrupt after writing: I can see the image, but a small part of it looks black. My guess is the buffer was not written completely.
I pass FormData from an Ajax XMLHttpRequest to a server-side controller (a Node.js router in this case), and I use the npm package 'parse-formdata' to parse the request. Below is the code:
parseFormdata(req, function (err, data) {
  if (err) {
    logger.error(err);
    throw err;
  }
  console.log('fields:', data.fields); // I have data here but how to write this data to a file?
  /** perhaps a bad way to write the data to a file, looking for a better way **/
  var chunk = data.parts[0].stream.readableBuffer.head.chunk;
  fs.writeFile(data.parts[0].filename, chunk, function (err) {
    if (err) {
      console.log(err);
    } else {
      console.log("The file was saved!");
    }
  });
});
Could somebody tell me a better approach to writing the data that I got from parsing the FormData?
According to the parse-formdata documentation, you may use the provided sample:
var pump = require('pump')
var concat = require('concat-stream')

// (`stream`, `file` and `assert` come from the library's test sample;
// in your handler, `stream` would be data.parts[0].stream)
pump(stream, concat(function (buf) {
  assert.equal(String(buf), String(file), 'file received')
  // then write to your file, e.g. with fs.writeFile
  res.end()
}))
But you can do it more simply:
const ws = fs.createWriteStream('out.txt')
data.parts[0].stream.pipe(ws)
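If you also want to know when the write has finished (or failed), you can listen for the write stream's events; a small extension of the snippet above:
const ws = fs.createWriteStream('out.txt')
data.parts[0].stream.pipe(ws)
// 'finish' fires once all data has been flushed to the file
ws.on('finish', () => console.log('The file was saved!'))
ws.on('error', (err) => console.log(err))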
Finally, note that the library has not been updated since 2017, so there may be vulnerabilities or other issues.
I've been trying different things all day, but nothing seems to offer a simple, straightforward way to write a ReadableStream (which is an image) to a file. I'm calling an API which returns a ReadableStream, but what then? I tried digging into the object a bit more, and followed it all the way down to a Buffer[], which seems like it should be what needs to go into fs.writeFile(), but nothing works. The file gets created, but when I try to open the image it says it can't open that file type (which file type they're talking about, I have no idea).
Here is my code that returns a Buffer[]. I can also cut off some of those chains to only return the ReadableStream body, but then that returns a PullThrough, and I am already so lost; there is very little about that class online. Any suggestions?
Here is the api I'm using: https://learn.microsoft.com/en-us/javascript/api/#azure/cognitiveservices-computervision/computervisionclient?view=azure-node-latest#generatethumbnail-number--number--string--computervisionclientgeneratethumbnailoptionalparams--servicecallback-void--
// Image of a dog.
const dogURL = 'https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png';
await computerVisionClient.generateThumbnail(100, 100, dogURL, { smartCropping: true })
  .then((thumbResponse) => {
    console.log(thumbResponse.readableStreamBody.readableBuffer.head.data)
    fs.writeFile("thumb.jpg", thumbResponse.readableStreamBody.readableBuffer.head.data, "binary", (err) => {
      console.log('Thumbnail saved')
      if (err) throw err
    })
  })
Finally found a solution. I don't understand pipe() all that well, but when it's called on a ReadableStream with a writable file stream as the parameter, it works.
The API response's thumbResponse.readableStreamBody was the ReadableStream, so anyone who has a readable stream can use this solution; no need to call an API for anything.
// Image of a dog.
const dogURL = 'https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png';
await computerVisionClient.generateThumbnail(100, 100, dogURL, { smartCropping: true })
  .then((thumbResponse) => {
    const destination = fs.createWriteStream("thumb.png")
    thumbResponse.readableStreamBody.pipe(destination)
    // log once the data has actually been flushed to disk
    destination.on('finish', () => console.log('Thumbnail saved'))
  })
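As a side note, on Node 10+ the built-in stream.pipeline does the same piping but also reports errors from either end, which plain pipe() does not; a minimal sketch of the same save:
const { pipeline } = require('stream')

pipeline(
  thumbResponse.readableStreamBody,
  fs.createWriteStream('thumb.png'),
  (err) => {
    if (err) throw err
    console.log('Thumbnail saved')
  }
)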
I'm using pkgcloud to build a web service (Express-based) that will manage my OpenStack storage. I have a problem with downloading a file.
The pkgcloud API for download is a bit weird: it passes the error and the file's metadata to the callback, and returns a readable stream with the file's content from the function:
client.download({
  container: 'my-container',
  remote: 'my-file'
}, function(err, result) {
  // handle the download result
}).pipe(myFile);
The problem I have with this is: how can I get the error, the metadata, and the stream together? I need the metadata in order to set the content type for my response, and obviously I need the error to know whether all went well (if the file doesn't exist, a stream is still returned, just containing an HTML error message, so I can't rely on the stream itself for error detection).
I tried calling it like this:
router.get("/download", function (res, req) {
var stream = client.download(options, function (err, file) {
if (err) {
return res.status(err.status).json({'message' : 'Error downloading file from storage: ' + err.details});
}
}
// I'd like to get the callback's err and file here somehow
stream.pipe(res);
});
But the problem is that the data is piped to res before the callback is called, so the callback doesn't help: it either gives me the error too late, or doesn't give me the metadata in time. I also tried moving stream.pipe(res) into the callback, but that does not work either, because inside the callback I can't access the result stream.
I thought of using a promise for that, but how can I tell the promise to 'wait' for both the callback and the returned value? I'm trying to wrap the download function with a function that runs the download with one promise for the callback and one for the returned stream, but how can I do that?
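Something along these lines is what I have in mind; an untested sketch (downloadWithMeta is my own name, and I don't know yet whether pkgcloud fires the callback early enough that waiting on the promise before piping won't buffer the whole file):
function downloadWithMeta(options) {
  var stream;
  var meta = new Promise(function (resolve, reject) {
    stream = client.download(options, function (err, file) {
      if (err) { reject(err); } else { resolve(file); }
    });
  });
  // the stream is available immediately; the metadata arrives later
  return { stream: stream, meta: meta };
}

router.get("/download", function (req, res) {
  var result = downloadWithMeta(options);
  result.meta.then(function (file) {
    // assuming the metadata exposes a content type field
    res.set('Content-Type', file.contentType);
    result.stream.pipe(res);
  }).catch(function (err) {
    res.status(err.status).json({'message' : 'Error downloading file from storage: ' + err.details});
  });
});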
I am in a Cloud9 environment on c9.io and I have successfully installed GraphicsMagick and the Node.js gm module. I have been able to call a number of methods successfully, but some I have not. One specific one I am having a problem with is the color reduction method (colors).
Has anyone successfully been able to call colors and get it to reduce the colors in the source image? The documentation states the usage is gm("img.png").colors(int), but I can't seem to get it to work and was wondering if anyone has successfully used this.
I have provided a reduced code block to give an idea of how I'm using it, in hopes that someone will see what I am perhaps doing wrong. In the data event handler, I still see far more colors in the passed "chunk" param than my reduced amount of 8 in this case.
Thanks!
var img = gm(sourceFilename),
    tmpFilename = temp.path({ suffix: '.miff' });

return img.noProfile().bitdepth(8).colors(8).scale(Math.ceil(wh.height / ratio), MAX_W).write('histogram:' + tmpFilename, function (err) {
  var histogram, rs;
  histogram = '';
  rs = fs.createReadStream(tmpFilename, {encoding: 'utf8'});
  rs.addListener('data', function (chunk) {
    console.log("Data: ", chunk);
  });
});
OK, the issue seems to be with "scale". At this time it appears that scale and resize (I tested both) do not work correctly in this chain. When I removed scale from the line as shown below, I got the color-reduced histogram data I was expecting.
return img.noProfile().bitdepth(8).colors(8).write('histogram:' + tmpFilename, function (err)
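If you still need the scaling, one workaround (untested) is to split the work into two passes: scale into an intermediate file first, then run the color reduction and histogram write on that result:
var scaledFilename = temp.path({ suffix: '.png' });

// first pass: scale only
gm(sourceFilename).scale(Math.ceil(wh.height / ratio), MAX_W).write(scaledFilename, function (err) {
  if (err) { return console.log(err); }
  // second pass: color-reduce the scaled image and write the histogram
  gm(scaledFilename).noProfile().bitdepth(8).colors(8).write('histogram:' + tmpFilename, function (err) {
    // read tmpFilename as before ...
  });
});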