A frontend application creates documents in Firestore with the following model:
fileRef is a string: "gs://bucket-location/folder/fileName.extension"
After creation I want to get the public URL of the file and update the document with that URL.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

const firebase = admin.initializeApp();

interface DocumentDataType {
  fileRef: string;
  fileType: "image" | "video";
  fileUrl: string;
  timestamp: FirebaseFirestore.Timestamp;
  location: FirebaseFirestore.GeoPoint;
}

exports.onDocumentCreated = functions.firestore
  .document("db/{docId}")
  .onCreate((snapshot, context) => {
    const bucket = firebase.storage().bucket();
    const { fileRef } = <DocumentDataType>snapshot.data();
    const file = bucket.file(fileRef);
    const fileUrl = file.publicUrl();
    const batch = admin.firestore().batch();
    batch.update(snapshot.ref, { ...snapshot.data(), fileUrl });
  });
The function gets triggered, but the file URL does not update.
Is this the right approach for getting the file in Cloud Storage?
Also, does the v9 SDK update with a batch like this? I really got confused reading the documentation and could not find a proper solution.
Batched writes are useful when you are trying to add/update/delete multiple documents and want to ensure that all the operations either pass or fail together. In the provided code you are never committing the batch. Calling commit() should update the document:
batch.commit().then(() => console.log("Document updated"));
However, if you just want to update a single document, I would prefer update() instead:
exports.onDocumentCreated = functions.firestore
  .document("db/{docId}")
  .onCreate(async (snapshot, context) => {
    const bucket = firebase.storage().bucket();
    const { fileRef } = <DocumentDataType>snapshot.data();
    const file = bucket.file(fileRef);
    const fileUrl = file.publicUrl();
    return snapshot.ref.update({ ...snapshot.data(), fileUrl });
  });
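One thing to double-check regardless of how the write is committed: fileRef in the question holds a full gs:// URI, while bucket.file() expects only the object path inside the bucket. A minimal sketch of a helper to strip the prefix (the helper name is mine, and it assumes fileRef always follows the "gs://bucket/path" format):

```javascript
// Hypothetical helper: bucket.file() wants "folder/fileName.extension",
// not the full "gs://bucket-location/folder/fileName.extension" URI.
function objectPathFromGsUri(gsUri) {
  const withoutScheme = gsUri.replace(/^gs:\/\//, "");
  // Drop the bucket name and keep everything after the first "/".
  return withoutScheme.substring(withoutScheme.indexOf("/") + 1);
}
```

With that, the lookup would become bucket.file(objectPathFromGsUri(fileRef)).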
I am implementing Google AutoML in Node.js to predict the label of an image. I have created the model and labels and uploaded images manually. Now I want to predict the label of an image using Node.js.
I wrote a function but I always get the error below:
Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information
The code is below:
const automl = require('@google-cloud/automl');
const fs = require('fs');

async function addfile() {
  console.log("add file called");
  const projectId = "project-name";
  const computeRegion = "us-central1";
  const modelId = "modelid";
  const filePath = "./src/assets/uploads/micro.jpeg";
  const scoreThreshold = "0.9";
  const client = new automl.PredictionServiceClient();
  const modelFullId = client.modelPath(projectId, computeRegion, modelId);
  try {
    const content = fs.readFileSync(filePath, 'base64');
    const params = {};
    if (scoreThreshold) {
      params.score_threshold = scoreThreshold;
    }
    const payload = {};
    payload.image = { imageBytes: content };
    console.log("try block is running");
    var [response] = await client.predict({
      name: modelFullId,
      payload: payload,
      params: params,
      keyFilename: "./src/assets/uploads/service_account_key.json"
    });
    console.log('Prediction results: ' + JSON.stringify(response));
    response.payload.forEach(result => {
      console.log(`Predicted class name: ${result.displayName}`);
      console.log(`Predicted class score: ${result.classification.score}`);
    });
  } catch (exception) {
    console.log("exception occur = " + exception);
  }
}
Any solution for that will be appreciated.
As mentioned by @Rakesh Saini, this error occurs when the environment variables are not set or are missing. The environment can be set by adding application credentials to the project and adding the other required environment variables, such as the project ID and location.
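A sketch of the two usual fixes, using the paths from the question (both assume the key file is a valid service account key): either point Application Default Credentials at the key before any Google client is constructed, or pass keyFilename to the client constructor; passing it inside the predict() request, as in the question, has no effect.

```javascript
// Option A: set the ADC environment variable before any client is created.
process.env.GOOGLE_APPLICATION_CREDENTIALS =
  "./src/assets/uploads/service_account_key.json";

// Option B: give the key file to the client constructor instead of predict().
// const automl = require("@google-cloud/automl");
// const client = new automl.PredictionServiceClient({
//   keyFilename: "./src/assets/uploads/service_account_key.json",
// });
```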
I want to increment the value of the field "votes" in a document (item_id) in the collection items. I want a Cloud Function to do this for me every time a new document is added to the collection Votes. The new document contains the item_id. Does anyone know how I can do this? This is what I have now:
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

export const vote = functions.firestore.document("/Votes/{vote}")
  .onCreate((snapshot, context) => {
    const item = context.params.item_id;
    const itemdoc = admin.firestore().collection("items").doc(item);
    itemdoc.get().then((doc) => {
      if (doc.exists) {
        itemdoc.update({
          "votes": admin.firestore.FieldValue.increment(1)
        }).catch((err) => {
          console.log("Error updating item vote", err);
        });
      }
    });
  });
The Firebase console logs that the path must be a non-empty string. Does anyone know what I am doing wrong? The path should not be empty.
The following should do the trick:
export const vote = functions.firestore.document("/Votes/{vote}")
  .onCreate((snapshot, context) => {
    const item = snapshot.data().item_id;
    const itemDocRef = admin.firestore().collection("items").doc(item);
    return itemDocRef.update({
      "votes": admin.firestore.FieldValue.increment(1)
    });
  });
You need to use the data() method on snapshot, in order to get the JavaScript representation of the new document. Then you take the item_id property.
Another possibility is to use the get() method, as follows:
const item = snapshot.get("item_id");
I would suggest renaming the itemdoc variable to itemDocRef, since it is a DocumentReference.
Update following your comments:
If you want to read the item Doc after having updated it you should do as follows:
export const vote = functions.firestore.document("/Votes/{vote}")
  .onCreate(async (snapshot, context) => {
    const item = snapshot.data().item_id;
    const itemDocRef = admin.firestore().collection("items").doc(item);
    await itemDocRef.update({"votes": admin.firestore.FieldValue.increment(1)});
    const itemDocSnapshot = await itemDocRef.get();
    // Do whatever you want with the snapshot
    console.log(itemDocSnapshot.get("user_id"));
    // For example, update another doc
    const anotherDocRef = admin.firestore().collection("....").doc("....");
    await anotherDocRef.update({"user_id": itemDocSnapshot.get("user_id")});
    return null;
  });
Note the use of the async and await keywords.
const item = context.params.item_id;
By accessing context.params, you are trying to read a value from the wildcards in .document("/Votes/{vote}"), where item_id is certainly undefined. To read a field from the document, try this:
const { item_id } = snapshot.data();
// Getting item_id using object destructuring
if (!item_id) {
  // item_id is missing in document
  return null;
}
const itemdoc = admin.firestore().collection("items").doc(item_id);
// Pass item_id in doc         ^^^^^^^
You can read more about onCreate in the documentation. The first parameter, snapshot, is a QueryDocumentSnapshot that contains your document data, and the second parameter, context, is an EventContext.
I am working with docxtemplater with nodejs and have read the documentation from the link below:
https://docxtemplater.readthedocs.io/en/latest/generate.html#node
Unlike the provided documentation, I am trying to load a document called 'tag-example.docx' from my Firebase Storage and have docxtemplater run on the tags in there. The generated document is then saved back to my Firebase Storage. Simply put:
Load 'tag-example.docx' from firebase storage;
docxtemplater does its thing on the document;
revised output saved to firebase storage.
My issue is that I keep getting the error message below:
Unhandled error TypeError: Cannot read property 'toLowerCase' of undefined
at Object.exports.checkSupport (/workspace/node_modules/pizzip/js/utils.js:293:32)
at ZipEntries.prepareReader (/workspace/node_modules/pizzip/js/zipEntries.js:275:11)
at ZipEntries.load (/workspace/node_modules/pizzip/js/zipEntries.js:295:10)
at new ZipEntries (/workspace/node_modules/pizzip/js/zipEntries.js:32:10)
at PizZip.module.exports [as load] (/workspace/node_modules/pizzip/js/load.js:25:20)
at new PizZip (/workspace/node_modules/pizzip/js/index.js:41:10)
at /workspace/index.js:66:11
at func (/workspace/node_modules/firebase-functions/lib/providers/https.js:336:32)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
Is there a way to solve this issue? Is this because I am not loading the document as a binary like in the example? Can this even be done with firebase storage?
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const {Storage} = require('@google-cloud/storage');
var PizZip = require('pizzip');
var Docxtemplater = require('docxtemplater');

admin.initializeApp();

const BUCKET = 'gs://myapp.appspot.com';
const https = require('https');
const storage = new Storage({
  projectId: 'myapp'
});
const cors = require('cors')({origin: true});

exports.test2 = functions.https.onCall((data, context) => {
  // The error object contains additional information when logged with
  // JSON.stringify (it contains a properties object containing all suberrors).
  function replaceErrors(key, value) {
    if (value instanceof Error) {
      return Object.getOwnPropertyNames(value).reduce(function(error, key) {
        error[key] = value[key];
        return error;
      }, {});
    }
    return value;
  }

  function errorHandler(error) {
    console.log(JSON.stringify({error: error}, replaceErrors));
    if (error.properties && error.properties.errors instanceof Array) {
      const errorMessages = error.properties.errors.map(function (error) {
        return error.properties.explanation;
      }).join("\n");
      console.log('errorMessages', errorMessages);
      // errorMessages is a humanly readable message looking like this:
      // 'The tag beginning with "foobar" is unopened'
    }
    throw error;
  }

  // Load the docx file as a binary
  let file_name = 'tag-example.docx';
  const myFile = storage.bucket(BUCKET).file(file_name);
  var content = myFile.createReadStream();
  var zip = new PizZip(content);
  var doc;
  try {
    doc = new Docxtemplater(zip);
  } catch (error) {
    // Catch compilation errors (errors caused by the compilation of the template: misplaced tags)
    errorHandler(error);
  }

  // Set the template variables
  doc.setData({
    first_name: 'John',
    last_name: 'Doe',
    phone: '0652455478',
    description: 'New Website'
  });
  try {
    // Render the document (replace all occurrences of {first_name} by John, {last_name} by Doe, ...)
    doc.render();
  } catch (error) {
    // Catch rendering errors (errors relating to the rendering of the template: angularParser throws an error)
    errorHandler(error);
  }

  var buf = doc.getZip().generate({type: 'nodebuffer'});
  // buf and then save to firebase storage.
  buf.pipe(myFile.createWriteStream());
});
This error message comes from pizzip, not directly from docxtemplater. It happens when the argument given to PizZip is invalid. In your case, you did:
var content = myFile.createReadStream();
var zip = new PizZip(content);
The problem is that content is a stream, not content that has finished being read. You need to first resolve the content to a string or a buffer, and then you can do:
var zip = new PizZip(buffer);
I have used the code from the Firebase documentation to schedule a backup of the data in my Firestore project to a bucket every 6 hours. See the link and the code here:
https://firebase.google.com/docs/firestore/solutions/schedule-export
const functions = require('firebase-functions');
const firestore = require('@google-cloud/firestore');
const client = new firestore.v1.FirestoreAdminClient();

// Replace BUCKET_NAME
const bucket = 'gs://BUCKET_NAME';

exports.scheduledFirestoreExport = functions.pubsub
  .schedule('every 24 hours')
  .onRun((context) => {
    const projectId = process.env.GCP_PROJECT || process.env.GCLOUD_PROJECT;
    const databaseName = client.databasePath(projectId, '(default)');

    return client.exportDocuments({
      name: databaseName,
      outputUriPrefix: bucket,
      // Leave collectionIds empty to export all collections
      // or set to a list of collection IDs to export,
      // collectionIds: ['users', 'posts']
      collectionIds: []
    })
    .then(responses => {
      const response = responses[0];
      console.log(`Operation Name: ${response['name']}`);
    })
    .catch(err => {
      console.error(err);
      throw new Error('Export operation failed');
    });
  });
Everything works well and my data is saved the way I want, but I nevertheless get an error:
Error serializing return value: TypeError: Converting circular structure to JSON
Can someone tell me what I should change? I would be glad to get a hint.
I am following the tutorial to extract text from images at:
https://cloud.google.com/functions/docs/tutorials/ocr?authuser=1
But I do not wish to translate the text; I only wish to detect and save it.
The tutorial implements 3 functions:
gcloud beta functions deploy ocr-extract --trigger-bucket [YOUR_IMAGE_BUCKET_NAME] --entry-point processImage
gcloud beta functions deploy ocr-translate --trigger-topic [YOUR_TRANSLATE_TOPIC_NAME] --entry-point translateText
gcloud beta functions deploy ocr-save --trigger-topic [YOUR_RESULT_TOPIC_NAME] --entry-point saveResult
I just wish to detect the text and save it, but I could not remove the translation portion of the code below:
/**
 * Detects the text in an image using the Google Vision API.
 *
 * @param {string} bucketName Cloud Storage bucket name.
 * @param {string} filename Cloud Storage file name.
 * @returns {Promise}
 */
function detectText (bucketName, filename) {
  let text;

  console.log(`Looking for text in image ${filename}`);
  return vision.textDetection({ source: { imageUri: `gs://${bucketName}/${filename}` } })
    .then(([detections]) => {
      const annotation = detections.textAnnotations[0];
      text = annotation ? annotation.description : '';
      console.log(`Extracted text from image (${text.length} chars)`);
      return translate.detect(text);
    })
    .then(([detection]) => {
      if (Array.isArray(detection)) {
        detection = detection[0];
      }
      console.log(`Detected language "${detection.language}" for ${filename}`);

      // Submit a message to the bus for each language we're going to translate to
      const tasks = config.TO_LANG.map((lang) => {
        let topicName = config.TRANSLATE_TOPIC;
        if (detection.language === lang) {
          topicName = config.RESULT_TOPIC;
        }

        const messageData = {
          text: text,
          filename: filename,
          lang: lang,
          from: detection.language
        };

        return publishResult(topicName, messageData);
      });

      return Promise.all(tasks);
    });
}
After that, I just wish to save the detected text to a file, as the code below shows:
/**
 * Saves the data packet to a file in GCS. Triggered from a message on a Pub/Sub
 * topic.
 *
 * @param {object} event The Cloud Functions event.
 * @param {object} event.data The Cloud Pub/Sub Message object.
 * @param {string} event.data.data The "data" property of the Cloud Pub/Sub
 * Message. This property will be a base64-encoded string that you must decode.
 */
exports.saveResult = (event) => {
  const pubsubMessage = event.data;
  const jsonStr = Buffer.from(pubsubMessage.data, 'base64').toString();
  const payload = JSON.parse(jsonStr);

  return Promise.resolve()
    .then(() => {
      if (!payload.text) {
        throw new Error('Text not provided. Make sure you have a "text" property in your request');
      }
      if (!payload.filename) {
        throw new Error('Filename not provided. Make sure you have a "filename" property in your request');
      }
      if (!payload.lang) {
        throw new Error('Language not provided. Make sure you have a "lang" property in your request');
      }

      console.log(`Received request to save file ${payload.filename}`);

      const bucketName = config.RESULT_BUCKET;
      const filename = renameImageForSave(payload.filename, payload.lang);
      const file = storage.bucket(bucketName).file(filename);

      console.log(`Saving result to ${filename} in bucket ${bucketName}`);

      return file.save(payload.text);
    })
    .then(() => {
      console.log(`File saved.`);
    });
};
The tutorial there is based on a much more complex setup (it also uses Pub/Sub and Translate), and you only want to extract the text. With the following, you should be able to do that:
'use strict';

const Storage = require('@google-cloud/storage');
const Vision = require('@google-cloud/vision');

const bucketName = 'YOUR_BUCKET';
const srcFilename = 'YOUR_IMAGE.jpg';
const projectId = 'YOUR_PROJECT_ID';

const storage = new Storage({
  projectId: projectId
});

const vision = new Vision.ImageAnnotatorClient({
  projectId: projectId
});

exports.processImage = (req, res) => {
  let text;

  vision.textDetection(`gs://${bucketName}/${srcFilename}`)
    .then(([detections]) => {
      const annotation = detections.textAnnotations[0];
      text = annotation ? annotation.description : '';
      console.log(`Extracted text: ${text}`);
      console.log(`Extracted text from image (${text.length} chars)`);
      // Send the response only after the detection has finished, so the
      // function does not terminate before the Vision call completes.
      res.status(200).send("OK");
    }).catch(vis_err => {
      console.error("Vision error:", vis_err);
      res.status(500).send("Vision error");
    });
};
My dependencies, in my package.json file:
"dependencies": {
  "@google-cloud/vision": "0.21.0"
},
You can later on extend this to save this text to Storage, if you wish to. There are other tutorials on how to do so.
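As a sketch of that extension (the helper name and the result bucket are hypothetical): derive a .txt object name from the image name, then write the extracted text with file.save(), reusing the storage client from the snippet above.

```javascript
// Hypothetical helper: "YOUR_IMAGE.jpg" -> "YOUR_IMAGE.txt", so the text is
// saved under a name matching the source image.
function textFilenameFor(imageFilename) {
  return imageFilename.replace(/\.[^.]+$/, "") + ".txt";
}

// Inside the .then() above, after `text` is set (sketch):
// const file = storage.bucket("YOUR_RESULT_BUCKET").file(textFilenameFor(srcFilename));
// return file.save(text);
```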