I am using the sharp Node.js library found here: https://github.com/lovell/sharp. I am trying to take several screenshots and then piece the images together using sharp.
Here is my code. I am using Puppeteer to take screenshots of the page, saving each screenshot in memory as a binary buffer and combining those buffers with sharp's composite() method.
let pagePath = 'path/to/file.png';
let maxScreenshotHeight = 2000;

// Loop over sections of the screen that are of size maxScreenshotHeight.
for (let ypos = 0; ypos < contentSize.height; ypos += maxScreenshotHeight) {
  const height = Math.min(contentSize.height - ypos, maxScreenshotHeight);
  let image = await page.screenshot({
    encoding: 'binary',
    clip: {
      x: 0,
      y: ypos,
      width: contentSize.width,
      height
    }
  });
  composites.push({ input: image, gravity: 'southeast' });
}

sharp()
  .composite(composites)
  .toFile(pagePath, function (err) {
    if (err) {
      console.log('fail');
      return;
    }
    console.log('complete');
  });
However, in the toFile callback, nothing ever gets logged. Console logging works, as I've added logs before and after the toFile statement, but it seems that this function call never completes. I want to create a png file that I can later download.
How can I merge these multiple screenshots and store them on the server for a later download? Am I using toFile incorrectly?
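A minimal sketch of one likely fix, assuming the real problem is that `sharp()` is called with no input: with no base image there is nothing to composite onto. The idea is to create a blank canvas the size of the full page (via sharp's `create` option) and place each strip at an explicit `top` offset instead of relying on gravity. The names `stripLayout`, `stitchStrips`, and their parameters are hypothetical, not part of sharp's API:

```javascript
// Pure helper: compute the top offset and height of each horizontal strip.
function stripLayout(totalHeight, maxStripHeight) {
  const layout = [];
  for (let top = 0; top < totalHeight; top += maxStripHeight) {
    layout.push({ top, height: Math.min(totalHeight - top, maxStripHeight) });
  }
  return layout;
}

// Stitch the captured strip buffers onto a blank canvas and write one PNG.
async function stitchStrips(stripBuffers, width, totalHeight, maxStripHeight, outPath) {
  const sharp = require('sharp'); // loaded lazily; assumes sharp is installed
  const composites = stripLayout(totalHeight, maxStripHeight).map((s, i) => ({
    input: stripBuffers[i],
    top: s.top,
    left: 0
  }));
  return sharp({
    create: {
      width: width,
      height: totalHeight,
      channels: 4,
      background: { r: 255, g: 255, b: 255, alpha: 1 }
    }
  })
    .composite(composites)
    .png()
    .toFile(outPath);
}
```

With this layout each strip captured in the loop lands below the previous one, rather than all being stacked at the southeast corner.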
I have the task of stacking several PNGs one on top of another and exporting the result as one PNG. The images are not all the same size; however, they do not need to be resized, just cropped to the available canvas.
The images come from the file system and are sent as a buffer. There may be somewhere between 8 and 30 images.
My tests have been with 10 images, and I have compared it with node-canvas, which takes less than 50% of the time that sharp does. I think one problem with sharp is that I have to perform an extract operation before compositing.
This is my code running with Sharp, which takes ~600ms with 10 images:
export const renderComponentsSHARP = async (layers) => {
  const images = [];
  for (let i = 0; i < layers.length; i++) {
    const layer = sharp(layers[i]);
    const meta = await layer.metadata();
    if (meta.height > AVATAR_SIZE.height || meta.width > AVATAR_SIZE.width)
      layer.extract({ top: 0, left: 0, ...AVATAR_SIZE });
    images.push({ input: await layer.toBuffer() });
  }
  return sharp({
    create: {
      ...AVATAR_SIZE,
      channels: 4,
      background: { r: 0, g: 0, b: 0, alpha: 0 }
    }
  })
    .composite(images)
    .png()
    .toBuffer();
};
This is my code running with canvas, which takes ~280ms:
export const renderComponentsCANVAS = async (layers) => {
  const canvas = createCanvas(AVATAR_SIZE.width, AVATAR_SIZE.height);
  const ctx = canvas.getContext('2d');
  const images = await Promise.all(layers.map((layer) => loadImage(layer)));
  images.forEach((image) => {
    ctx.drawImage(image, 0, 0);
  });
  return canvas.toBuffer();
};
Am I doing something wrong with sharp? Or is it not the tool for this job?
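One thing worth trying before giving up on sharp is to stop awaiting each layer inside the loop: the sequential `metadata()` and `toBuffer()` round-trips serialize work that could run concurrently. A sketch of the concurrent pattern, where `cropLayer` is a hypothetical stand-in for the per-layer extract step (not a sharp API):

```javascript
// Prepare composite inputs concurrently; Promise.all preserves input order,
// which matters because composite() stacks layers in array order.
async function prepareLayers(layers, cropLayer) {
  const buffers = await Promise.all(layers.map((layer) => cropLayer(layer)));
  return buffers.map((input) => ({ input }));
}
```

With sharp, `cropLayer` might be something like `(buf) => sharp(buf).extract({ top: 0, left: 0, ...AVATAR_SIZE }).toBuffer()`, so all ten extracts run in parallel instead of back to back.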
I am trying to build a messenger bot which does some image processing in 3d and returns a brand new image. I use THREE.CanvasRenderer and my app is hosted on Heroku.
When a user /POSTs an attachment to my webhook, I want to take the URL of the newly created image and insert it into my 3d Scene.
This is my code (using the node-canvas library):
const addTexture = (imageUrl) => {
  request({
    uri: imageUrl,
    method: 'GET'
  }, (err, res, body) => {
    let image = new Canvas.Image();
    image.src = body;
    mesh.material.map = new THREE.Texture(image);
    mesh.material.needsUpdate = true;
  });
};
The callback runs and I can actually console.log() the image's contents, but nothing shows up in the scene. The plane I am supposed to render to just turns black, without any errors. What am I missing here?
I also tried several other approaches, without any success.
I tried using THREE.TextureLoader and patching it with jsdom (mocking document and window) and node-xmlhttprequest, but then there is an error with my load event (event.target is not defined), just like in this example.
How should one approach this problem? I have a URL generated by Facebook, and I want to download the image from it and place it in my scene.
Okay, I figured it out after half a day and am posting it for future generations:
Here is the code that did the trick for me:
const addTexture = (imageUrl) => {
  request.get(imageUrl, (err, res, data) => {
    if (!err && res.statusCode === 200) {
      data = 'data:' + res.headers['content-type'] + ';base64,' + Buffer.from(data).toString('base64');
      let image = new Canvas.Image();
      image.onload = () => {
        mesh.material.map = new THREE.Texture(image);
        mesh.material.map.needsUpdate = true;
      };
      image.src = data;
    }
  });
};
At this point I still got an error, document is not defined, because under the hood three.js writes the texture to a separate canvas.
So the quick and ugly fix is what I did:
document.createElement = (el) => {
  if (el === 'canvas') {
    return new Canvas();
  }
};
I really hope this helps somebody out there, because besides all the hurdles, rendering WebGL on the server is pure awesomeness.
I have read in the docs that when you do not specify x and y in the following method, the image is sliced into strips:
gm(image).crop(1280, 1)
However, it still only slices as if the method were written as:
gm(image).crop(1280, 1, 0, 0)
http://www.graphicsmagick.org/GraphicsMagick.html#details-crop
Has anyone had this problem, and is there a way to force slices of the given image?
OK, in case anyone else finds this useful: this was my solution, a function that iterates over an image and writes out its slices:
/*
  SLICE IMAGES
*/
// saved images as an array
var images = fs.readdirSync('captures');
// number of saved images on disk
var imageCount = images.length;
// assume no images have been processed yet
var imageCounter = 0;

// create a random string to ID the slices
function randomStringGenerator(length, chars) {
  var result = '';
  for (var i = length; i > 0; --i) result += chars[Math.round(Math.random() * (chars.length - 1))];
  return result;
}

// iterate over the images saved to disk
(function getImage() {
  // use 'setTimeout' to get around memory issues
  setTimeout(function () {
    // if there are images left to process
    if (imageCount > imageCounter) {
      // path to the current image to be sliced
      var image = 'captures/' + images[imageCounter];
      // use the size method to get the image width and height, useful for images submitted on mobile etc.
      gm(image).size(function (err, value) {
        if (err) {
          console.log('Error: ', err);
          return;
        }
        // current image dimensions
        var imageWidth = value.width;
        var imageHeight = value.height;
        // start slicing at the first row of pixels
        var sliceCounter = 0;

        (function getSlices() {
          // use 'setTimeout' to get around memory issues
          setTimeout(function () {
            // if there are rows left to slice
            if (imageHeight > sliceCounter) {
              // apply the random string to the slice name; the time is not needed here as it is in the parent image file name
              var randomString = randomStringGenerator(32, '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ');
              // crop a slice the full width of the image and 1 pixel high; note crop() takes (width, height, x, y)
              gm(image).crop(imageWidth, 1, 0, sliceCounter).write('slices/slice' + randomString + '.png', function (err) {
                if (err) {
                  console.log('Error: ', err);
                  return;
                }
                // move down one row for the next slice
                sliceCounter++;
                // recurse rather than loop, to help with memory
                getSlices();
              });
            } else {
              // the whole image has been sliced; move on to the next image
              imageCounter++;
              getImage();
            }
          }, 250);
        })();
      });
    }
  }, 250);
})();
I am leveraging the AWS Lambda example script to resize a JPG image using Node.js and the ImageMagick/GraphicsMagick libraries. I want to make a simple modification to convert the image from JPG to WebP format after the resize. (GraphicsMagick does not support WebP, but ImageMagick, which is subclassed in the script, does.) This should be possible with the following block of code, as per the example in the Buffers section here (which converts JPG to PNG).
gm('img.jpg')
  .resize(100, 100)
  .toBuffer('PNG', function (err, buffer) {
    if (err) return handle(err);
    console.log('done!');
  });
When I run that block of code in my local Node.js installation (replacing PNG with WebP), it works.
When I modify the transform function (see below) of the AWS Lambda example script, however, and execute it on AWS, I receive the following "Stream yields empty buffer" error:
Unable to resize mybucket/104A0378.jpg and upload to mybucket_resized/resized-104A0378.jpg due to an error: Error: Stream yields empty buffer
Modified transform() function (see the line with 'webp'):
function transform(response, next) {
  gm(response.Body).size(function (err, size) {
    // Infer the scaling factor to avoid stretching the image unnaturally.
    var scalingFactor = Math.min(
      MAX_WIDTH / size.width,
      MAX_HEIGHT / size.height
    );
    var width = scalingFactor * size.width;
    var height = scalingFactor * size.height;
    // Transform the image buffer in memory.
    this.resize(width, height)
      .toBuffer('webp', function (err, buffer) {
        if (err) {
          next(err);
        } else {
          next(null, response.ContentType, buffer);
        }
      });
  });
}
I realize that the response.ContentType is still equal to image/jpeg, but I don't think that is playing a role here. Also, I realize that I should probably convert to WebP before resizing but...baby steps!
Any ideas?
I have encountered the same error, "Stream yields empty buffer", in other operations using gm and AWS lambda.
Turns out that the lambda container ran out of memory.
I tested this assumption using a large image that was constantly throwing an error.
When I increased the memory of the Lambda function, everything worked perfectly.
Hope this helps you as well.
ImageMagick has to be specifically compiled with WebP support. My experimenting suggests that the ImageMagick build on AWS Lambda is not compiled with WebP :(
All,
I have an array of sizes such as sizes = [20,40,60,80]
I need to loop over those, resize the src image to each size and name them appropriately.
Then, using knox, upload them to s3, and then remove the resized image.
Here is what I have:
http://jsfiddle.net/bpoppa/ZAcA7/
The problem is the flow control. When knox tries to putFile, I get an error saying the file doesn't exist, which means knox is either running before the resize has finished or the file has already been deleted by that point.
Any tips? All help is appreciated.
Thanks!
You need to remember that node.js code runs asynchronously. In your original code, the knox code is running before image.resize has completed (the callback is used to tell you when the operation has completed, not just to handle errors). Node won't wait for the callback and just continues to execute code in your function. You also need to be careful of using anonymous callbacks in for loops without creating a closure.
In general you want to use the callbacks to control program flow like the code below so that you only do the following action when the preceding action has completed.
/* define this first so it exists when the loop below runs
   (a `var resizeAndPut = function ...` after the loop would still be
   undefined at call time) */
function resizeAndPut(src, dest, s3, size, done) {
  easyimage.resize({
    src: src,
    dst: dest,
    width: size,
    height: size
  }, function (err, image) {
    if (err) throw err;
    /* does image contain the path? if so, you might want to use image.path
       instead of dest here */
    knox.putFile(dest, s3, function (err, res) {
      if (err) throw err;
      fs.unlink(dest, function () {}); /* delete the local resized copy */
      done();
    });
  });
}

var src = name + '.jpg';
var pending = sizes.length;
for (var i = sizes.length - 1; i >= 0; i--) {
  var dest = sizes[i] + '.jpg';
  resizeAndPut(src, dest, dest, sizes[i], function () {
    /* delete the source file only once every size has been uploaded */
    if (--pending === 0) fs.unlink(src, function () {});
  });
}
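The same resize, upload, then clean-up flow reads more naturally with promises, where the source file is deleted only after every size has finished. In this sketch `resize`, `upload`, and `unlink` are injected stand-ins for the real easyimage, knox, and fs calls (hypothetical signatures, chosen so the flow itself is testable):

```javascript
// Run the per-size pipelines concurrently; only delete the source once all
// of them have completed.
async function resizeAndUploadAll(src, sizes, { resize, upload, unlink }) {
  await Promise.all(
    sizes.map(async (size) => {
      const dest = size + '.jpg';
      await resize(src, dest, size); // e.g. a promisified easyimage.resize
      await upload(dest, dest);      // e.g. a promisified knox.putFile
      await unlink(dest);            // delete the local resized copy
    })
  );
  await unlink(src);                 // source goes last, after all uploads
}
```

The injection also makes the ordering guarantee easy to verify with stubs, without touching the filesystem or S3.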