I have a Node.js server service running on Google Cloud App Engine.
I have a JSON file in the assets folder of the project that needs to be updated by the process.
I was able to read the file and the configs inside it, but when writing to the file I get a read-only file system error from GAE.
Is there a way I could write the information to the file without using the Cloud Storage option?
It's a very small file, and using Cloud Storage for it feels like bringing a very big drill machine to turn an Allen screw.
Thanks
Nope, in App Engine Standard there is no writable file system (apart from /tmp). The docs mention the following:
The runtime includes a full filesystem. The filesystem is read-only except for the location /tmp, which is a virtual disk storing data in your App Engine instance's RAM.
With that in mind, you can write to /tmp, but I still suggest Cloud Storage, because if scaling shuts down all the instances, the data will be lost.
You could also consider App Engine Flexible, which offers a persistent disk (because its backend is a VM), but the minimum size is 10 GB, so it would be even worse than using Cloud Storage.
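For example, writing and reading the JSON under /tmp from Node.js could look like the sketch below; the file name and the config object are placeholders.

const fs = require('fs');
const path = require('path');

const config = { exampleSetting: true }; // placeholder config object

// /tmp is backed by instance RAM on App Engine Standard, so anything
// written here is lost when the instance shuts down.
const tmpPath = path.join('/tmp', 'config.json'); // placeholder file name
fs.writeFileSync(tmpPath, JSON.stringify(config, null, 2));
const updated = JSON.parse(fs.readFileSync(tmpPath, 'utf8'));
console.log(updated);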
Once again, thanks for steering me away from wasting time on a hacky solution to the problem.
Anyway, there was no clear code showing how to use the /tmp directory and download/upload the file from an App Engine-hosted Node.js application.
Here is the code in case someone needs it:
const { Storage } = require('@google-cloud/storage');
const path = require('path');

class gStorage {
  constructor() {
    this.storage = new Storage({
      keyFilename: 'Please add the path to your key file'
    });
    this.bucket = this.storage.bucket('your-bucket-name'); // replace with your bucket name
    this.filePath = path.join('/tmp', 'YourFileDetails');
    // I am using the same file path and the same file to download and upload
  }

  async uploadFile() {
    try {
      await this.bucket.upload(this.filePath, {
        contentType: 'application/json'
      });
    } catch (error) {
      throw new Error(`Error when saving the config. Message: ${error.message}`);
    }
  }

  async downloadFile() {
    try {
      await this.bucket.file(path.basename(this.filePath)).download({
        destination: this.filePath
      });
    } catch (error) {
      throw new Error(`Error when loading the config. Message: ${error.message}`);
    }
  }
}

module.exports = gStorage;
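For example, usage might look like this (a hypothetical sketch; the require path assumes the class above is saved as gStorage.js):

const gStorage = require('./gStorage'); // hypothetical module path

async function refreshConfig() {
  const store = new gStorage();
  await store.downloadFile(); // pull the JSON from the bucket into /tmp
  // ...edit the JSON file in /tmp here...
  await store.uploadFile();   // push the updated JSON back to the bucket
}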
I created a function that extracts a specific attribute from a JSON file, but the file was deployed together with the function in Cloud Functions. In that case, I simply bundled the file and could refer to a specific attribute:
const jsonData = require('./data.json');
const result = jsonData.responses[0].fullTextAnnotation.text;
return result;
Ultimately, I want to read this file directly from Cloud Storage. I have tried several solutions here, but without success. How can I read a JSON file directly from Google Cloud Storage so that, as in the first case, I can read its attributes correctly?
As mentioned in the comments, the Cloud Storage API allows you to do many things through the client library. Here's an example from the documentation on how to download a file from Cloud Storage, for your reference:
/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The ID of your GCS file
// const fileName = 'your-file-name';

// The path to which the file should be downloaded
// const destFileName = '/local/path/to/file.txt';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function downloadFile() {
  const options = {
    destination: destFileName,
  };

  // Downloads the file
  await storage.bucket(bucketName).file(fileName).download(options);

  console.log(
    `gs://${bucketName}/${fileName} downloaded to ${destFileName}.`
  );
}

downloadFile().catch(console.error);
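Once the file has been downloaded, it can be parsed the same way as the bundled file. A small sketch, reusing the destFileName from the sample above and the attribute path from the question:

const fs = require('fs');
const jsonData = JSON.parse(fs.readFileSync(destFileName, 'utf8'));
const result = jsonData.responses[0].fullTextAnnotation.text;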
To clearly answer the question: you can't!
You need to download the file locally first, and then process it. You can't read it directly from GCS.
With Cloud Functions you can only store files in the /tmp directory; it's the only writable one. In addition, it's an in-memory file system, which means several things (see the sketch after this list):
The size is limited by the memory allocated to the Cloud Function. That memory space is shared between your app's memory footprint and your file storage in /tmp (you won't be able to download a 10 GB file, for example).
The memory is lost when the instance goes down.
Each Cloud Functions instance has its own memory space; you can't share files between instances.
The /tmp directory isn't cleaned between two invocations (on the same instance), so remember to clean it up yourself.
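Putting those constraints together, here is a minimal sketch of an HTTP Cloud Function that downloads the JSON to /tmp, reads it, and cleans up afterwards; the function name, bucket name, and file name are placeholders.

const os = require('os');
const path = require('path');
const fs = require('fs');
const {Storage} = require('@google-cloud/storage');

const storage = new Storage();

exports.readConfig = async (req, res) => {
  // /tmp is the only writable directory and counts against the function's memory
  const tempPath = path.join(os.tmpdir(), 'data.json'); // os.tmpdir() resolves to /tmp here
  await storage.bucket('your-bucket').file('data.json').download({destination: tempPath});
  const jsonData = JSON.parse(fs.readFileSync(tempPath, 'utf8'));
  fs.unlinkSync(tempPath); // /tmp isn't cleared between invocations on the same instance
  res.send(jsonData.responses[0].fullTextAnnotation.text);
};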
I'm making a simple audio-creation web app using a Node.js server. I would like to create audio using the Cloud Text-to-Speech API and then upload that audio to Cloud Storage.
(I use Windows 10, Windows Subsystem for Linux, Debian 10.3, and the Google Chrome browser.)
This is the code in Node.js server.
// Imports the Google Cloud client library
const textToSpeech = require('@google-cloud/text-to-speech');

const client = new textToSpeech.TextToSpeechClient();

async function quickStart() {
  // The text to synthesize
  const text = 'hello, world!';

  // Construct the request
  const request = {
    input: {text: text},
    // Select the language and SSML voice gender (optional)
    voice: {languageCode: 'en-US', ssmlGender: 'NEUTRAL'},
    // Select the type of audio encoding
    audioConfig: {audioEncoding: 'MP3'},
  };

  // Performs the text-to-speech request
  const [response] = await client.synthesizeSpeech(request);
  // This is where I'm stuck; for now I just log the binary audio content
  console.log(response);
}

quickStart();
I would like to upload the response to Cloud Storage.
Can I upload the response to Cloud Storage directly, or do I have to save the response on the Node.js server and then upload it to Cloud Storage?
I searched the internet but couldn't find a way to upload the response to Cloud Storage directly. If you have a hint, please tell me. Thank you in advance.
You should be able to do that, with all your code in the same file. The best way to achieve it is with a Cloud Function, which will be the one sending the file to your Cloud Storage bucket. But yes, you will need to save your file using Node.js so that you can then upload it to Cloud Storage.
To achieve that, you will need to save your file and then upload it to Cloud Storage. As you can check in the complete tutorial in this other post, you need to construct the file, save it, and then upload it. The code below is the main part you will need to add to your code.
...
// 'client' and 'request' come from the Text-to-Speech quickstart referenced below;
// 'file' is assumed to be a Cloud Storage file reference created earlier,
// e.g. const file = storage.bucket(bucketName).file('output.mp3');
const options = { // construct the metadata for the file to write
  metadata: {
    contentType: 'audio/mpeg',
    metadata: {
      source: 'Google Text-to-Speech'
    }
  }
};

// copied from https://cloud.google.com/text-to-speech/docs/quickstart-client-libraries#client-libraries-usage-nodejs
const [response] = await client.synthesizeSpeech(request);

// response.audioContent is the generated audio; save() writes it to the bucket
return await file.save(response.audioContent, options)
  .then(() => {
    console.log("File written to Firebase Storage.")
    return;
  })
  .catch((error) => {
    console.error(error);
  });
...
Once you have this part implemented, the generated file will be saved and uploaded. I recommend taking a closer look at the other post I mentioned in case you have more doubts about how to achieve this.
Let me know if the information helped you!
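For reference, here is one way the pieces above could fit together end to end. This is only a sketch; the bucket name and object name are placeholders.

const textToSpeech = require('@google-cloud/text-to-speech');
const {Storage} = require('@google-cloud/storage');

const ttsClient = new textToSpeech.TextToSpeechClient();
const storage = new Storage();

async function synthesizeAndUpload() {
  const request = {
    input: {text: 'hello, world!'},
    voice: {languageCode: 'en-US', ssmlGender: 'NEUTRAL'},
    audioConfig: {audioEncoding: 'MP3'},
  };
  const [response] = await ttsClient.synthesizeSpeech(request);

  // response.audioContent holds the MP3 bytes; save() uploads them to the bucket
  await storage
    .bucket('your-bucket-name') // placeholder bucket
    .file('output.mp3')         // placeholder object name
    .save(response.audioContent, {metadata: {contentType: 'audio/mpeg'}});

  console.log('Audio uploaded to Cloud Storage.');
}

synthesizeAndUpload().catch(console.error);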
Here is my scenario.
I have placed a config file (.xml) into an Azure Blob Storage container.
I want to edit that XML file and update/add content to it.
I want to deploy an API to an Azure App Service that will do that.
I built an API that runs locally and handles this, but that isn't exactly going to cut it as a cloud application. This particular iteration is a Node.js API that uses the Cheerio and fs modules to manipulate and read the file, respectively.
How can I retool this to work with a file that lives in Azure Blob Storage?
Note: are Azure blobs even the best place to keep this file? Is there a better place to put it?
I found this, but it isn't exactly what I am after: Azure Edit blob
Considering the data stored in the blob is XML (in other words, a string), instead of using the getBlobToStream method you can use getBlobToText, manipulate the string, and then re-upload the updated string using createBlockBlobFromText.
Here's the pseudo code:
blobService.getBlobToText('mycontainer', 'taskblob', function(error, result, response) {
  if (error) {
    console.log('Error in reading blob');
    console.error(error);
  } else {
    var blobText = result; // result contains the blob's content as a string
    var xmlContent = someMethodToConvertStringToXml(blobText); // Convert string to XML if it is easier to manipulate
    var updatedBlobText = someMethodToEditXmlContentAndReturnString(xmlContent);
    // Re-upload blob
    blobService.createBlockBlobFromText('mycontainer', 'taskblob', updatedBlobText, function(error, result, response) {
      if (error) {
        console.log('Error in updating blob');
        console.error(error);
      } else {
        console.log('Blob updated successfully');
      }
    });
  }
});
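As a concrete version of that pseudo code, a sketch using the legacy azure-storage package and Cheerio in XML mode could look like this; the container, blob, and element names are placeholders, and the connection string is assumed to be in the AZURE_STORAGE_CONNECTION_STRING environment variable.

const azure = require('azure-storage');
const cheerio = require('cheerio');

const blobService = azure.createBlobService(); // uses AZURE_STORAGE_CONNECTION_STRING

blobService.getBlobToText('mycontainer', 'config.xml', (error, blobText) => {
  if (error) return console.error(error);

  // Load the XML string, edit it, and serialize it back
  const $ = cheerio.load(blobText, { xmlMode: true });
  $('SomeSetting').text('new value'); // hypothetical element to update

  blobService.createBlockBlobFromText('mycontainer', 'config.xml', $.xml(), (err) => {
    if (err) return console.error(err);
    console.log('Blob updated successfully');
  });
});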
Simply refactor your code to use the Azure Storage SDK for Node.js https://github.com/Azure/azure-storage-node
As part of a larger web app, I'm using a combination of Google Datastore and Firebase. On my local machine all requests go through seamlessly; however, when I deploy my app to GAE (using the Node.js flexible environment), everything works except the calls to Datastore. The requests do not throw an error, directly or via promise, and simply never return, hanging the process.
My current configuration uses a service account key file containing my private key. I've checked that it has the proper scope (and even added more than I should, just in case, to have Datastore Owner permissions).
I've distilled the app down to the bare bones and still have no luck. I'm stuck and looking for any suggestions.
const datastore = require('@google-cloud/datastore');
const config = require('yaml-config')
  .readConfig('config.yaml');

module.exports = {
  get_test: function(query, callback) {
    var ds_ref = datastore({
      projectId: config.DATASTORE_PROJECT,
      keyFilename: __dirname + config.GOOGLE_CLOUD_KEY
    });
    var q = ds_ref.createQuery('comps')
      .filter('record', query.record);
    ds_ref.runQuery(q, function(err, entities) {
      if (!err) {
        if (entities.length > 0) {
          callback(err, entities[0]);
        } else {
          callback(err, []);
        }
      } else {
        callback(err, undefined);
      }
    });
  }
}
UPDATE:
Tried the manual_scaling setting found here, but it didn't seem to work. Also found this article that seems to describe a similar issue.
The problem seems to be in the grpc module. Use version 0.6.0 of the datastore package, which will automatically pull in an older version of grpc. This workaround works on Compute Engine; however, you will still face problems in the flexible environment, because when the flexible environment is deployed it uses the newer modules that have the problem.
Also please refer to the following links on gitHub:
https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1955
https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1946
Please keep an eye on these links for updates on a resolution.
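For example, assuming the dependency is the @google-cloud/datastore package shown in the question's code, pinning the older release would look something like:

npm install @google-cloud/datastore@0.6.0 --save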
I have not been able to get an Azure Function working that uses the Node file system module.
I created a brand new function app with the most basic HTTP trigger function and included the 'fs' module:
var fs = require('fs');

module.exports = function (context, req, res) {
    context.log('function triggered');
    context.log(req);
    context.done();
}
This works fine. I see the full request in live streaming logs, and in the function invocation list.
However, as soon as I add code that actually uses the file system, it seems to crash the Azure Function. It neither completes nor throws an error. It also doesn't show up in the Azure Function invocation list, which is scary, since this is a loss of failure information and I might think my service was running fine when there were actually crashes.
var fs = require('fs');

module.exports = function (context, req, res) {
    context.log('function triggered');
    context.log(req);
    fs.writeFile('message.txt', 'Hello Node.js', (err) => {
        if (err) throw err;
        console.log('It\'s saved!');
        context.done();
    });
}
The fs.writeFile code is taken directly from the Node.js website:
https://nodejs.org/dist/latest-v4.x/docs/api/fs.html#fs_fs_writefile_file_data_options_callback
I added the context.done() in the callback, but that snippet should work without issue in a normal development environment.
This brings up the questions:
Is it possible to use the file system when using Azure Functions?
If so, what are the restrictions?
If there are no restrictions, are developers required to keep track and perform cleanup, or is this taken care of by some sandboxing?
From my understanding, even though this is considered serverless computing, there is still a VM / Azure App Service underneath which has a file system.
I can use the Kudu console and navigate around and see all the files in /wwwroot and the /home/functions/secrets files.
Imagine a scenario where an Azure Function writes a file with a unique name and never performs cleanup; it would eventually take up all the disk space on the host VM and degrade performance. This could happen accidentally and possibly go unnoticed until it's too late.
This makes me wonder whether it is by design not to use the file system, or whether my function is just written wrong.
Yes, you can use the file system, with some restrictions as described here. That page describes some directories you can access, like D:\HOME and D:\LOCAL\TEMP. I've modified your code below to write to the temp dir, and it works:
var fs = require('fs');

module.exports = function (context, input) {
    fs.writeFile('D:/local/Temp/message.txt', input, (err) => {
        if (err) {
            context.log(err);
            throw err;
        }
        context.log('It\'s saved!');
        context.done();
    });
}
Your initial code was failing because it was trying to write to D:\Windows\system32 which is not allowed.
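A variant of the snippet above avoids hard-coding the path by using Node's temp-directory helper; this is a sketch that assumes os.tmpdir() (which reads the TEMP environment variable) resolves to the writable local temp directory on the Functions host.

var fs = require('fs');
var os = require('os');
var path = require('path');

module.exports = function (context, input) {
    // Build a path inside the host's writable temp directory
    var tempFile = path.join(os.tmpdir(), 'message.txt');
    fs.writeFile(tempFile, input, (err) => {
        if (err) {
            context.log(err);
            throw err;
        }
        context.log('Saved to ' + tempFile);
        context.done();
    });
};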