I am trying to validate the dimensions of post images in my Firebase Storage using a Cloud Function. What I am trying to do is:
1: Let the user upload an image
2: When it is uploaded, the backend checks its dimensions
2.1: If it has valid dimensions, keep it
2.2: Otherwise, remove it from storage
The code looks like:
// Validate image dimensions
exports.validateImageDimensions = functions
  .region("us-central1")
  .runWith({ memory: "1GB", timeoutSeconds: 120 })
  .storage.object()
  .onFinalize(async (object) => {
    // Get the bucket which contains the image
    const bucket = gcs.bucket(object.bucket);

    // Get the name
    const filePath = object.name;

    // Check if the file is an image
    const isImage = object.contentType.startsWith("image/");

    // Check if the image has valid dimensions
    const hasValidDimensions = true; // TODO: How to get the image dimensions?

    // Do nothing if it is an image and has valid dimensions
    if (isImage && hasValidDimensions) {
      return;
    }

    try {
      await bucket.file(filePath).delete();
      console.log(
        `The image ${filePath} has been deleted because it has invalid dimensions.`
      );
      // TODO: Remove the image's document in Firestore
    } catch (err) {
      console.log(`Error deleting invalid file ${filePath}: ${err}`);
    }
  });
But I don't know how to get the object's dimensions. I have checked the documentation but haven't found an answer.
Any ideas?
Use sharp; here's the documentation that talks about it:
const metadata = await sharp('image.jpg').metadata();
const width = metadata.width;
const height = metadata.height;
functions.logger.log(`width: `, width);
functions.logger.log(`height: `, height);
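To apply this inside the Cloud Function above, you first need to download the object to a local temp file, since sharp reads from disk or a buffer rather than from Cloud Storage directly. A minimal sketch, where the 1000×1000 limits are placeholder constraints, not anything prescribed:
const os = require("os");
const path = require("path");
const sharp = require("sharp");

// Inside onFinalize, after confirming the object is an image:
const tmpFilePath = path.join(os.tmpdir(), path.basename(filePath));
await bucket.file(filePath).download({ destination: tmpFilePath });

// Read the dimensions from the downloaded file
const { width, height } = await sharp(tmpFilePath).metadata();

// Placeholder rule; substitute your real constraints
const hasValidDimensions = width <= 1000 && height <= 1000;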
I'm trying to return all the images stored in IPFS, but the results are not sorted and do not come back in the order they are listed in IPFS Desktop.
A screenshot of IPFS desktop images in ascending order
The code used to fetch the images from IPFS:
import { create } from 'ipfs';

const cid = 'QmP9Sh6kU2qjh8mHuu3rmuEECpRFrZtjaxMHQR7sKgRmL3';

const main = async () => {
  const ipfs = await create();
  const list = ipfs.ls(cid);
  for await (const item of list) {
    console.log(item);
  }
};

main();
This returns the IPFS image objects, but not in ascending order.
I didn't find a way to make IPFS itself return the images in order, but I was able to build a key/value map from the image name (which parses to an integer) to the image file path. This approach worked for my situation.
// Declare the map and the storage arrays
const map = new Map();
const imagePath = [];
const imageName = [];

// Iterate through the images
for await (const item of list) {
  // Push the IPFS image path to the array for storage
  imagePath.push(item.path);
  // Push the IPFS image name to the array for storage
  imageName.push(item.name);
  // Create the key/value pair for the IPFS images
  map.set(parseInt(item.name), item.path);
}
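With the map populated, producing the images in ascending order is just a matter of sorting the numeric keys — a minimal sketch:
// Sort the numeric keys, then read the paths back in ascending order
const sortedPaths = [...map.keys()]
  .sort((a, b) => a - b)
  .map((key) => map.get(key));

console.log(sortedPaths); // image paths in ascending name order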
I am trying to write a React app that grabs a frame from the webcam and passes it to the Azure Face SDK (documentation) to detect faces in the image and get attributes of those faces - in this case, emotions and head pose.
I have gotten a modified version of the quickstart example code here working, which makes a call to the detectWithUrl() method. However, the image that I have in my code is a bitmap, so I thought I would try calling detectWithStream() instead. The documentation for this method says it needs to be passed something of type msRest.HttpRequestBody - I found some documentation for this type, which looks like it wants to be a Blob, string, ArrayBuffer, or ArrayBufferView. The problem is, I don't really understand what those are or how I might get from a bitmap image to an HttpRequestBody of that type. I have worked with HTTP requests before, but I don't quite understand why one is being passed to this method, or how to make it.
I have found some similar examples and answers to what I am trying to do, like this one. Unfortunately they are all either in a different language, or they are making calls to the Face API instead of using the SDK.
Edit: I had forgotten to bind the detectFaces() method before, and so I was originally getting a different error related to that. Now that I have fixed that problem, I'm getting the following error:
Uncaught (in promise) Error: image must be a string, Blob, ArrayBuffer, ArrayBufferView, or a function returning NodeJS.ReadableStream
Inside constructor():
this.detectFaces = this.detectFaces.bind(this);

const msRest = require("@azure/ms-rest-js");
const Face = require("@azure/cognitiveservices-face");

const key = <key>;
const endpoint = <endpoint>;
const credentials = new msRest.ApiKeyCredentials({ inHeader: { 'Ocp-Apim-Subscription-Key': key } });
const client = new Face.FaceClient(credentials, endpoint);

this.state = {
  client: client
};

// get video
const constraints = {
  video: true
};

navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
  let videoTrack = stream.getVideoTracks()[0];
  const imageCapture = new ImageCapture(videoTrack);
  // Arrow function so `this` still refers to the component
  imageCapture.grabFrame().then((imageBitmap) => {
    // detect faces
    this.detectFaces(imageBitmap);
  });
});
The detectFaces() method:
async detectFaces(imageBitmap) {
  const detectedFaces = await this.state.client.face.detectWithStream(
    imageBitmap,
    {
      returnFaceAttributes: ["Emotion", "HeadPose"],
      detectionModel: "detection_01"
    }
  );
  console.log(detectedFaces.length + " face(s) detected");
}
Can anyone help me understand what to pass to the detectWithStream() method, or maybe help me understand which method would be better to use instead to detect faces from a webcam image?
I figured it out, thanks to this page under the header "Image to blob"! Here is the code that I added before making the call to detectFaces():
// convert the image frame into a Blob
let canvas = document.createElement('canvas');
canvas.width = imageBitmap.width;
canvas.height = imageBitmap.height;
let context = canvas.getContext('2d');
context.drawImage(imageBitmap, 0, 0);
canvas.toBlob((blob) => {
  // detect faces
  this.detectFaces(blob);
});
This code converts the bitmap image to a Blob, then passes the Blob to detectFaces(). I also changed detectFaces() to accept blob instead of imageBitmap, like this, and then everything worked:
async detectFaces(blob) {
  const detectedFaces = await this.state.client.face.detectWithStream(
    blob,
    {
      returnFaceAttributes: ["Emotion", "HeadPose"],
      detectionModel: "detection_01"
    }
  );
  ...
}
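One detail worth knowing: canvas.toBlob() produces a PNG by default. If you want a smaller payload for the detection call, you can request a JPEG with a quality hint instead — a small variation of the snippet above:
// Request a JPEG at ~80% quality instead of the default PNG
canvas.toBlob((blob) => {
  this.detectFaces(blob);
}, 'image/jpeg', 0.8);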
This is likely a duh mistake but I can't figure this out.
I'm successfully uploading images to a bucket with a signed URL. When trying to delete the object from my Express backend, using the below code from Google's example, I get Not Found, yet the object is there with the correct name. Thoughts?
async function deleteFile(filename) {
  console.log(filename); // correct file name as it exists in the bucket
  try {
    await storage
      .bucket(bucketName) // correct bucket name and subfolder 'my-image-bucket/posts'
      .file(filename)
      .delete();
  } catch (e) {
    console.log('Error message = ', e.message); // Not Found
  }
}
The only red flag I'm seeing is, "correct bucket name and subfolder 'my-image-bucket/posts'" next to .bucket(). You should only be passing the bucket name to .bucket() and then the full path to .file().
const bucketName = 'my-image-bucket';
const filename = 'posts/image.jpg';

await storage
  .bucket(bucketName)
  .file(filename)
  .delete();
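This works because Cloud Storage has a flat namespace: 'posts/' is not a real folder, just a prefix on the object name. With that split, the original deleteFile() helper works unchanged — a hypothetical call, with 'posts/image.jpg' standing in for the real object path:
// The 'posts/' prefix belongs to the object name, not the bucket
await deleteFile('posts/image.jpg');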
I am following a tutorial to resize images via Cloud Functions on upload and am experiencing two major issues which I can't figure out:
1) If a PNG is uploaded, it generates the correctly sized thumbnails, but their previews won't load in Firebase Storage (the loading spinner shows indefinitely). It only shows the image after I click on "Generate new access token" (none of the generated thumbnails have an access token initially).
2) If a JPEG or any other format is uploaded, the MIME type shows as "application/octet-stream". I'm not sure how to correctly extract the extension to put into the filenames of the newly generated thumbnails.
// Imports assumed from the tutorial's setup (fs is fs-extra; gcs is a Cloud Storage client)
import * as functions from 'firebase-functions';
import { Storage } from '@google-cloud/storage';
import { tmpdir } from 'os';
import { join, dirname } from 'path';
import * as sharp from 'sharp';
import * as fs from 'fs-extra';

const gcs = new Storage();

export const generateThumbs = functions.storage
  .object()
  .onFinalize(async object => {
    const bucket = gcs.bucket(object.bucket);
    const filePath = object.name;
    const fileName = filePath.split('/').pop();
    const bucketDir = dirname(filePath);

    const workingDir = join(tmpdir(), 'thumbs');
    const tmpFilePath = join(workingDir, 'source.png');

    if (fileName.includes('thumb#') || !object.contentType.includes('image')) {
      console.log('exiting function');
      return false;
    }

    // 1. Ensure thumbnail dir exists
    await fs.ensureDir(workingDir);

    // 2. Download Source File
    await bucket.file(filePath).download({
      destination: tmpFilePath
    });

    // 3. Resize the images and define an array of upload promises
    const sizes = [64, 128, 256];

    const uploadPromises = sizes.map(async size => {
      const thumbName = `thumb#${size}_${fileName}`;
      const thumbPath = join(workingDir, thumbName);

      // Resize source image
      await sharp(tmpFilePath)
        .resize(size, size)
        .toFile(thumbPath);

      // Upload to GCS
      return bucket.upload(thumbPath, {
        destination: join(bucketDir, thumbName)
      });
    });

    // 4. Run the upload operations
    await Promise.all(uploadPromises);

    // 5. Cleanup: remove the tmp/thumbs dir from the filesystem
    return fs.remove(workingDir);
  });
Would greatly appreciate any feedback!
I just had the same problem: for some unknown reason, Firebase's Resize Images extension purposely removes the download token from the resized image.
To disable the deletion of Download Access Tokens:
1: Go to https://console.cloud.google.com
2: Select Cloud Functions from the left menu
3: Select ext-storage-resize-images-generateResizedImage
4: Click EDIT
5: In the Inline Editor, go to the file FUNCTIONS/LIB/INDEX.JS
6: Add // before this line (delete metadata.metadata.firebaseStorageDownloadTokens;)
7: Comment out the same line in FUNCTIONS/SRC/INDEX.TS as well
8: Press DEPLOY and wait until it finishes
Note: both the original and the resized images will have the same token.
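For reference, after step 6 the relevant line in the extension source should look like this (the surrounding code may differ between extension versions):
// delete metadata.metadata.firebaseStorageDownloadTokens;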
I just started using the extension myself. I noticed that I can't access the image preview from the Firebase console until I click on "create access token".
I guess that you have to create this token programmatically before the image is available.
I hope it helps.
November 2020
In connection to @Somebody's answer: I can't seem to find ext-storage-resize-images-generateResizedImage in GCP Cloud Functions.
A better way to do it is to reuse the original file's firebaseStorageDownloadTokens.
This is how I did mine:
functions
  .storage
  .object()
  .onFinalize((object) => {
    // some image optimization code here

    // get the original file's access token
    const downloadtoken = object.metadata?.firebaseStorageDownloadTokens;

    return bucket.upload(tempLocalFile, {
      destination: file,
      metadata: {
        metadata: {
          optimized: true, // other custom flags
          firebaseStorageDownloadTokens: downloadtoken, // access token
        },
      },
    });
  });
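If the original object has no token to reuse, another option is to mint one yourself: the token is just a UUID stored under firebaseStorageDownloadTokens in the object's custom metadata. A sketch, assuming the uuid package is installed:
const { v4: uuidv4 } = require('uuid');

// Attach a freshly generated download token to the uploaded file
return bucket.upload(tempLocalFile, {
  destination: file,
  metadata: {
    metadata: {
      firebaseStorageDownloadTokens: uuidv4(),
    },
  },
});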
I have a collection of URLs that may or may not belong to a particular bucket. These are not public.
I'm using the nodejs aws-sdk to get them.
However, the getObject function needs params Bucket and Key separately, which are already in my URL.
Is there any way I can use the URL?
I tried extracting the key by splitting the URL on '/' and the bucket by splitting on '.'. But the problem is that a bucket name can also contain '.', and I'm not sure whether key names can contain '/' as well.
The amazon-s3-uri library can parse out the Amazon S3 URI:
const AmazonS3URI = require('amazon-s3-uri');

const uri = 'https://bucket.s3-aws-region.amazonaws.com/key';
try {
  const { region, bucket, key } = AmazonS3URI(uri);
} catch (err) {
  console.warn(`${uri} is not a valid S3 uri`); // should not happen because `uri` is valid in this example
}
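Once parsed, the pieces map directly onto getObject's parameters. A sketch, assuming s3 is an already-configured AWS.S3 client from aws-sdk v2:
// Note the SDK expects capitalized parameter names
const data = await s3.getObject({ Bucket: bucket, Key: key }).promise();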
Use the parse-s3-url module to build the parameters for getObject:
const parseS3Url = require('parse-s3-url');

bucket.getObject(parseS3Url('https://s3.amazonaws.com/mybucket/mykey'), (err, data) => {
  if (err) {
    // alert("Failed to retrieve an object: " + err);
  } else {
    console.log("Loaded " + data.ContentLength + " bytes");
    // do something with data.Body
  }
});
To avoid installing a package:
const objectUrl = 'https://s3.us-east-2.amazonaws.com/my-s3-bucket/some-prefix/file.json';
const { host, pathname } = new URL(objectUrl);
const [, region] = /s3\.(.*)\.amazon/.exec(host);
// Keys can contain '/', so rejoin everything after the bucket segment
const [, bucket, ...keyParts] = pathname.split('/');
const key = keyParts.join('/');
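Note that this snippet only handles path-style URLs (bucket in the path). For virtual-hosted-style URLs, where the bucket is the leading host label, a variation along these lines works — a sketch, not exhaustive across every S3 URL flavor:
const virtualUrl = 'https://my-s3-bucket.s3.us-east-2.amazonaws.com/some-prefix/file.json';
const { host: vHost, pathname: vPath } = new URL(virtualUrl);

// Everything before ".s3." is the bucket; the pathname (minus the leading '/') is the key
const [vBucket] = vHost.split('.s3.');
const vKey = vPath.slice(1);

console.log(vBucket, vKey); // 'my-s3-bucket', 'some-prefix/file.json'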