How can I convert images to WEBP format before sending them to AWS S3 in my NUXT app?
I have a photo upload on my website, and I would like to convert the images from the file input to WEBP format before uploading them to Amazon S3. Unlike in Node.js, where I can import sharp and use it to convert images to WEBP format, that doesn't work here; I get an error like the one below:
Failed to compile with 4 errors friendly-errors 01:16:19
These dependencies were not found: friendly-errors 01:16:19
friendly-errors 01:16:19
* child_process in ./node_modules/detect-libc/lib/detect-libc.js, ./node_modules/sharp/lib/libvips.js friendly-errors 01:16:19
* fs in ./node_modules/detect-libc/lib/detect-libc.js, ./node_modules/sharp/lib/libvips.js friendly-errors 01:16:19
friendly-errors 01:16:19
To install them, you can run: npm install --save child_process fs
I would like to convert the images in the code below:
drop(e) {
  e.preventDefault();
  e.stopPropagation();
  e.target.classList.remove('solid');
  const files = e.dataTransfer.files;
  this.handleFiles(files);
},
onFilePicked(e) {
  e.preventDefault();
  e.stopPropagation();
  const files = e.target.files;
  this.handleFiles(files);
},
saveToBackend(file, result) {
  // compress image
  // save to aws
},
readFiles(file) {
  const reader = new FileReader();
  reader.readAsDataURL(file);
  reader.onload = () => {
    const uploading = this.uploading;
    const result = reader.result;
    uploading.push(result);
    this.uploading = uploading;
    // upload to aws
    this.saveToBackend(file, result);
  };
},
handleFiles(files) {
  const Files = Array.from(files);
  Files.forEach(file => {
    // check if the file is a valid image
    if (file.type.includes('image')) {
      // display the image
      return this.readFiles(file);
    }
    return;
  });
  console.log(Files, "loaded files");
},
And for the sharp plugin:
import Vue from "vue"
import sharp from "sharp"
Vue.use(sharp)
Please, how can I compress the images?
you could use the packages imagemin and imagemin-webp as answered here: Convert Images to webp with Node
As I explained in your previous question, you cannot use a Node.js plugin in a client-side app, especially when that app is already running, and especially if you're hosting it with target: static on Vercel or a similar platform.
On top of this, image processing is pretty heavy in terms of required processing power. So having an external server do this as a middleware is the best idea. You'll be able to add a load balancer, allocate auto-scaling, prevent a client-side timeout, and debug things more simply (probably with even more benefits).
You could maybe even do it in a serverless function, if you won't be bothered too much by slower cold starts.
TLDR:
simple and efficient solution: put a Node.js server between your Nuxt app and your S3 bucket
more affordable but more complex: call a serverless function for this (not even sure it will be performant)
wait for Nuxt 3 with Nitro, and pull off some shenanigans with a local service worker and Cloudflare Workers in edge rendering (not even sure this is the best fit for your issue either)
maybe look for a not-too-expensive online service to handle the middleware for you
In the end, image and video are heavy and expensive to process, and doing these things requires quite some knowledge too!
Eventually, I was able to solve my problem without using any package: I simply drew the image onto a canvas and then exported the canvas in WEBP format. Below is my solution.
convertImage(file) {
  return new Promise((resolve) => {
    // draw the image onto a canvas, then export it in the target format
    const src = URL.createObjectURL(file);
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    const userImage = new Image();
    userImage.src = src;
    userImage.onload = function () {
      canvas.width = userImage.width;
      canvas.height = userImage.height;
      ctx.drawImage(userImage, 0, 0);
      const webpImage = canvas.toDataURL("image/webp");
      URL.revokeObjectURL(src); // free the temporary object URL
      resolve(webpImage);
    };
  });
},
So, the function above first receives a file (the image you want to convert) from the file input, draws it onto a canvas, and then exports the canvas back to an image, this time in the format you specify.
Since in my case I wanted a WEBP image, I used canvas.toDataURL("image/webp"). toDataURL takes an optional second argument, a number between 0 and 1, that controls the quality for lossy formats: 1 is the highest quality, 0 the lowest, and you could set 0.5 for medium quality, or whatever you want. If you omit it, the browser applies its own default quality (typically around 0.92). You can also request other formats through the first argument, e.g. canvas.toDataURL('image/jpeg', 1.0) for JPEG or canvas.toDataURL('image/png') for PNG (the quality argument is ignored for PNG, which is lossless).
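If you then need the raw base64 payload (for example, to upload the bytes to S3 rather than the whole data URL), you can split the data URL yourself. A minimal sketch, assuming a base64 data URL like the one toDataURL returns; parseDataUrl is a hypothetical helper name, not part of any library:

```javascript
// Hypothetical helper: split a base64 data URL (as returned by
// canvas.toDataURL) into its MIME type and raw base64 payload.
function parseDataUrl(dataUrl) {
  const match = /^data:([^;,]+);base64,(.*)$/.exec(dataUrl);
  if (!match) throw new Error('not a base64 data URL');
  return { mimeType: match[1], base64: match[2] };
}

const sample = 'data:image/webp;base64,UklGRg==';
console.log(parseDataUrl(sample).mimeType); // image/webp
console.log(parseDataUrl(sample).base64);   // UklGRg==
```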
sources
the small channel where I found my solution
MDN documentation - more on HTMLCanvasElement.toDataURL()
I am working on a Node.js image server that reads and writes images on Oracle Object Storage.
The issue I am having is that I don't get the full image when using the getObject function from the oci-objectstorage JavaScript API.
I have successfully stored the following images:
a 1x1 image with a size of 70 bytes, and
a 5120x3200 image with a size of 2.9 MB
When I use getObject, I can retrieve the full 1x1 image, but with the 5120x3200 image I only get 15 KB of the 2.9 MB.
I used the following example from Oracle
https://github.com/oracle/oci-typescript-sdk/blob/master/examples/javascript/objectstorage.js
Below is the code that I am using to read the image from Oracle Object Storage
I have the below code in an async function.
router.get('/data/', async function (req, res, next) {
  const imagePath = req.query.image_data;
  fs.access(imagePath, fs.F_OK, async (err) => {
    if (err) {
      const provider = new common.ConfigFileAuthenticationDetailsProvider();
      const client = new os.ObjectStorageClient({
        authenticationDetailsProvider: provider
      });
      const compartmentId = config.COMPARTMENTID;
      const bucket = config.BUCKET;
      const request = {};
      const response = await client.getNamespace(request);
      const namespace = response.value;
      const getObjectRequest = {
        objectName: imagePath,
        bucketName: bucket,
        namespaceName: namespace
      };
      const getObjectResponse = await client.getObject(getObjectRequest);
      // only reads the head and tail of the stream's internal buffer,
      // which misses any chunks in between for large objects
      const head = getObjectResponse.value._readableState.buffer.head.data.toString('base64');
      const tail = getObjectResponse.value._readableState.buffer.tail.data.toString('base64');
      const completeImage = head + tail; // incomplete for large images
      fs.writeFile(imagePath, completeImage, { encoding: 'base64' }, function (err) {
        if (err) return;
        res.sendFile(imagePath, { root: './imagefiles' }); // using express to serve the image file
      });
      return; // avoid falling through and sending a second response
    }
    // file exists
    res.sendFile(imagePath, { root: './imagefiles' });
  });
});
It seems to me that head and tail both contain the same data. I then try to write the image using fs.writeFile, which for the large image writes only a small portion, while for the small 1x1 image it writes the full file.
I am not sure if it's an issue with my async/await setup, or whether I need a better promise-based implementation that downloads the full image.
Any ideas on how to tackle this?
Another small issue I am having is serving the image right after writing it. On the webpage I get an error saying the image could not be displayed because it contains errors. But after I refresh the page, which now finds the image on disk, it displays correctly without the previous error.
So, I'm trying to pass an image to a Node Lambda through API Gateway and this is automatically base64 encoded. This is fine, and my form data all comes out correct, except somehow my image is being corrupted, and I'm not sure how to decode this properly to avoid this. Here is the relevant part of my code:
const multipart = require('aws-lambda-multipart-parser');

exports.handler = async (event) => {
  console.log({ event });
  const buff = Buffer.from(event.body, 'base64');
  // using utf-8 appears to lose some of the data
  const decodedEventBody = buff.toString('ascii');
  const decodedEvent = { ...event, body: decodedEventBody };
  const jsonEvent = multipart.parse(decodedEvent, false);
  const asset = Buffer.from(jsonEvent.file.content, 'ascii');
};
First off, it would be good to know whether the aws-sdk has a way of parsing multipart form data, rather than relying on this unsupported third-party code. Next, asset ends up as a buffer that's exactly the same size as the original file, but some of the byte values are off. My assumption is that the encoding and decoding differ slightly, and some of the characters are being interpreted differently.
Just an update in case anybody else runs into a similar problem: I changed 'ascii' to 'latin1' in both places, and then it started working fine.
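The reason 'latin1' works is that it maps every byte value 0-255 to exactly one character and back, so binary data survives a round-trip through a string, whereas Node's 'ascii' decoding unsets the high bit of each byte. A small sketch illustrating this (the sample bytes are just the PNG magic number):

```javascript
// 'latin1' round-trips every byte 0-255 losslessly;
// 'ascii' decoding unsets the high bit, corrupting bytes >= 0x80.
const original = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // PNG magic bytes
const viaLatin1 = Buffer.from(original.toString('latin1'), 'latin1');
const viaAscii = Buffer.from(original.toString('ascii'), 'ascii');

console.log(viaLatin1.equals(original)); // true
console.log(viaAscii.equals(original));  // false (0x89 came back as 0x09)
```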
I am using the Jimp library (https://www.npmjs.com/package/jimp) to crop images.
Cropping works fine; my only issue is with image orientation.
Sometimes users upload rotated images, and the result is a rotated cropped image.
I went through the https://www.npmjs.com/package/jimp documentation but couldn't find anything related to this.
Here are a couple of links I went through that didn't help:
https://justmarkup.com/articles/2019-10-21-image-orientation/
Accessing JPEG EXIF rotation data in JavaScript on the client side
Please help.
So, long story short: Jimp correctly reads images rotated via the EXIF orientation property and rearranges the pixels as if the property didn't exist, but it then keeps the old EXIF value instead of resetting it to 1, as it should for the image to display properly on every device.
The simplest solution I was able to implement was using exif-auto-rotate to rotate the image pixels and reset the EXIF property on the frontend, before uploading the (base64-encoded) image to the backend:
import Rotator from 'exif-auto-rotate';
// ...
const [file] = e.target.files;
const image = await Rotator.createRotatedImageAsync(file, "base64")
  .catch((err) => {
    if (err === "Image is NOT have a exif code" || err === "Image is NOT JPEG") {
      // just return the base64-encoded image if it is not a JPEG
      // or contains no EXIF orientation property
      return toBase64(file);
    }
    // reject on any other error
    return Promise.reject(err);
  });
If you need to do this on the backend, you are probably better off using jpeg-autorotate with buffers, as suggested here:
const fs = require('fs')
const jo = require('jpeg-autorotate')
const Jimp = require('jimp')

const fileIn = fs.readFileSync('input.jpg')
const { buffer } = await jo.rotate(fileIn, { quality: 30 })
const image = await Jimp.read(buffer)
More info on browser-based exif orientation issues:
EXIF Orientation Handling Is a Ghetto
Just change the jimp version in your package.json to:
"jimp": "0.8.5",
In our project there are a few icons. Can we test those images/icons using TestCafe?
Example code:
Expected Result:
image 1 -> stored locally
image 2 -> available on the website
And I need to take the local image and compare it with the website image.
I can suggest two ways to compare images on the website.
One way is to take a screenshot of the DOM element with the image and use a third-party library to compare it with the local image. For example, see the looks-same library.
Another way is to log a request for the image using RequestLogger and compare the response body with the local file using the Buffer.compare() method. See the example below illustrating this approach:
import fs from 'fs';
import { RequestLogger } from 'testcafe';

const logger = RequestLogger('https://devexpress.github.io/testcafe/images/landing-page/banner-image.png', {
    logResponseBody: true
});

fixture `Compare images`
    .page `https://devexpress.github.io/testcafe/`;

test
    .requestHooks(logger)
    ('Test', async t => {
        await t.expect(logger.count(record => record.response.statusCode === 200)).eql(1);
        const actualImageBuffer = logger.requests[0].response.body;
        const expectedImageBuffer = fs.readFileSync('c:\\Tests\\images\\banner-image.png');
        await t.expect(Buffer.compare(actualImageBuffer, expectedImageBuffer)).eql(0);
    });
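As a quick illustration of the comparison at the end, Buffer.compare returns 0 only when the two buffers contain identical bytes, so it works as a strict equality check on image data:

```javascript
// Buffer.compare returns 0 for byte-identical buffers, and a
// negative/positive value indicating sort order otherwise.
const a = Buffer.from([1, 2, 3]);
const b = Buffer.from([1, 2, 3]);
const c = Buffer.from([1, 2, 4]);

console.log(Buffer.compare(a, b)); // 0  -> identical
console.log(Buffer.compare(a, c)); // -1 -> a sorts before c
```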
I am using the knox package to connect to my S3 account and pull an image, like this:
var picturestring = ''; // initialize to an empty string, otherwise += starts with "undefined"
knoxclient.get(key).on('response', function (res) {
  console.log(res.statusCode);
  console.log(res.headers);
  res.setEncoding('base64');
  res.on('data', function (chunk) {
    picturestring += chunk;
  });
  res.on('end', function () {
    console.log(picturestring);
    resizeimage(picturestring, done); // use the resize library in this function
  });
}).end();
After that, I want to use a library that can take that string (picturestring), resize the image, and return a new base64 string representing the resized image. At that point, I plan on uploading the resized image to S3.
I wrote a similar script in Golang that let me resize images like this, but every JS resizing library I've reviewed only gives examples of resizing images from the local file system.
Is there any way I can avoid writing the image from S3 to the file system, and deal with the returned string exclusively?
***************UPDATE***********************
function pullFromS3(key, done) {
  console.log("This is the key being pulled from Amazon: ", key);
  var originalstream = new MemoryStream(null, { readable: false });
  var picturefile;
  client.get(key).on('response', function (res) {
    console.log("This is the res status code: ", res.statusCode);
    res.setEncoding('base64');
    res.pipe(originalstream);
    res.on('end', function () {
      resizeImage(originalstream, key, done);
    });
  }).end();
}

function resizeImage(originalstream, key, done) {
  console.log("This is the original stream: ", originalstream.toString());
  var resizedstream = new MemoryStream(null, { readable: false });
  var resize = im().resize('160x160').quality(90);
  // getting stuck here ******
  originalstream.pipe(resize).pipe(resizedstream);
  done(); // note: this runs before the piping has finished
}
I can't seem to get a grip on how the piping works from originalstream --> the resize ImageMagick function --> resizedstream. Ideally, resizedstream should hold the base64 string for the resized image, which I can then upload to S3.
1) How do I wait for the piping to finish, and THEN use the data in resizedstream?
2) Am I piping correctly? I can't debug it because I am unsure how to wait for the piping to finish!
I'm not using S3 but a local cloud provider in China to store images and their thumbnails. In my case I used the imagemagick library with the imagemagick-stream and memorystream modules.
imagemagick-stream provides a way to process an image with ImageMagick through a stream, so I don't need to save the image to local disk.
memorystream provides a way to store the source and thumbnail image binaries in memory, with the ability to read from and write to them as streams.
So the logic I have is:
1. Retrieve the image binary from the client POST request.
2. Save the image into memory using memorystream.
3. Upload it to, in your case, S3.
4. Define the image processing action in imagemagick-stream, for example resize to 180x180.
5. Create a read stream from the original image binary from step 1 using memorystream, pipe it into the imagemagick-stream processor from step 4, and then pipe that into a new memory writable created by memorystream, which stores the thumbnail.
6. Upload the thumbnail from step 5 to S3.
The only problem with my solution is that your virtual machine might run out of memory if many huge images come in. I know this shouldn't happen in my case, so that's OK, but you'd better evaluate it yourself.
Hope this helps a bit.