Get upload progress using the Azure Storage Node.js SDK

We have some code that looks like this:
var azure = require('azure-storage');
var blobSvc = azure.createBlobService();
blobSvc.createBlockBlobFromLocalFile('mycontainer', 'giantFile', 'giantFile.bak', function(error, result, response){
  if(!error){
    // file uploaded
  }
});
This "work" but we have no idea about the status of each upload until it returns. We'd like to print the progress of the upload to the console since it's a sequence of very large files that can take several hours.

I'm afraid it isn't possible out of the box with the SDK. Per this MSDN article, such an API apparently hasn't been implemented yet.
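That said, one workaround (not an SDK feature, just a sketch) is to upload via createBlockBlobFromStream and pipe the file through a PassThrough stream that counts bytes as they leave the disk. This approximates upload progress, since the SDK may buffer ahead of what the service has actually acknowledged:
var fs = require('fs');
var stream = require('stream');
var azure = require('azure-storage');

var blobSvc = azure.createBlobService();
var filePath = 'giantFile.bak';
var totalBytes = fs.statSync(filePath).size;
var bytesRead = 0;

// Count bytes as they are read from disk; this only approximates
// upload progress, since the SDK buffers ahead of the network.
var counter = new stream.PassThrough();
counter.on('data', function(chunk){
  bytesRead += chunk.length;
  process.stdout.write('\r' + (100 * bytesRead / totalBytes).toFixed(1) + '% read');
});

blobSvc.createBlockBlobFromStream('mycontainer', 'giantFile',
  fs.createReadStream(filePath).pipe(counter), totalBytes,
  function(error, result, response){
    if(!error){
      console.log('\nfile uploaded');
    }
  });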

Related

Editing a file in Azure Blob Storage with an API

Here is my scenario:
I have placed a config file (.xml) into an Azure Blob Storage container.
I want to edit that XML file and update/add content to it.
I want to deploy an API to an Azure App Service that will do that.
I built an API that runs locally that handles this, but that isn't exactly going to cut it as a cloud application. This particular iteration is a Node.js API that uses the Cheerio and File System modules to manipulate and read the file, respectively.
How can I retool this to work with a file that lives in Azure Blob Storage?
Note: are Azure blobs even the best place to put the file? Is there a better place for it?
I found this, but it isn't exactly what I am after: Azure Edit blob
Considering the data stored in the blob is XML (in other words, a string), instead of using the getBlobToStream method you can use getBlobToText, manipulate the string, and then upload the updated string using createBlockBlobFromText.
Here's the pseudo code:
blobService.getBlobToText('mycontainer', 'taskblob', function(error, result, response) {
  if (error) {
    console.log('Error in reading blob');
    console.error(error);
  } else {
    var blobText = result; // blob content as a string
    var xmlContent = someMethodToConvertStringToXml(blobText); // convert string to XML if it is easier to manipulate
    var updatedBlobText = someMethodToEditXmlContentAndReturnString(xmlContent);
    // Reupload blob
    blobService.createBlockBlobFromText('mycontainer', 'taskblob', updatedBlobText, function(error, result, response) {
      if (error) {
        console.log('Error in updating blob');
        console.error(error);
      } else {
        console.log('Blob updated successfully');
      }
    });
  }
});
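To make the pseudo code concrete, here's a sketch assuming the xml2js package fills in the two placeholder conversion methods; the config structure being edited below is hypothetical and depends on your actual file:
var azure = require('azure-storage');
var xml2js = require('xml2js'); // assumed dependency: npm install xml2js

var blobService = azure.createBlobService();

blobService.getBlobToText('mycontainer', 'taskblob', function(error, blobText) {
  if (error) {
    return console.error('Error in reading blob', error);
  }
  // Parse the XML string into a plain JavaScript object
  xml2js.parseString(blobText, function(parseError, xmlDoc) {
    if (parseError) {
      return console.error('Error parsing XML', parseError);
    }
    // Hypothetical edit: the real structure depends on your config file
    xmlDoc.config.lastUpdated = [new Date().toISOString()];

    // Serialize the object back to an XML string and reupload
    var updatedBlobText = new xml2js.Builder().buildObject(xmlDoc);
    blobService.createBlockBlobFromText('mycontainer', 'taskblob', updatedBlobText, function(uploadError) {
      if (uploadError) {
        return console.error('Error in updating blob', uploadError);
      }
      console.log('Blob updated successfully');
    });
  });
});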
Simply refactor your code to use the Azure Storage SDK for Node.js: https://github.com/Azure/azure-storage-node

Rackspace cloud taking too long to upload?

I'm following the Rackspace example for file upload to cloud storage from the docs. It works, but the uploads take too long. Really long! No matter what region I use, a 17 KB file takes more than 3 seconds. Is this the actual behaviour of Rackspace Cloud? Is it really that slow?
I'm using Rackspace with Node.js, with the help of a package named pkgcloud.
// taken from pkgcloud docs
var readStream = fs.createReadStream('a-file.txt');
var writeStream = client.upload({
  container: 'a-container',
  remote: 'remote-file-name.txt'
});

writeStream.on('error', function(err) {
  // handle your error case
});

writeStream.on('success', function(file) {
  // success, file will be a File model
});

readStream.pipe(writeStream);
The purpose here is: I do image processing on the backend, then send a CDN URL back to the user. A user cannot wait that long; a 2 MB file took forever to upload, timed out, and held my server until it crashed, since the stream hadn't finished yet.
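To narrow down whether the delay is local I/O or the network round trip, one thing to try is timing the round trip (a debugging sketch that extends the snippet above, not a fix):
// extend the snippet above: start a timer just before piping begins
console.time('upload');

writeStream.on('success', function(file) {
  // total time until Rackspace confirms the file
  console.timeEnd('upload');
});

readStream.pipe(writeStream);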

Streaming upload from NodeJS to Dropbox

Our system needs to apply our internal security checks when interacting with Dropbox; we therefore cannot use the client-side SDK for Dropbox.
We would rather upload to our own endpoint, apply security checks, and then stream the incoming request to dropbox.
I am coming up short here as there was an older NodeJS Dropbox SDK which supported pipes, but the new SDK does not.
Old SDK:
https://www.npmjs.com/package/dropbox-node
We want to take the incoming upload request and forward it to dropbox as it comes in. (and thus prevent the upload from taking twice as long if we first upload the entire thing to our server and then upload to dropbox)
Is there any way to solve this?
My Dropbox NPM module (dropbox-v2-api) supports streaming. It's based on the HTTP API, so you can take advantage of streams. Example? I see it this way:
const fs = require('fs');
const dropboxV2Api = require('dropbox-v2-api');

const dropbox = dropboxV2Api.authenticate({ token: 'your-access-token' });

const contentStream = fs.createReadStream('file.txt');
const securityChecks = ...; // your security checks (a Transform stream)
const uploadStream = dropbox({
  resource: 'files/upload',
  parameters: { path: '/target/file/path' }
}, (err, result, response) => {
  // upload finished
});

contentStream
  .pipe(securityChecks)
  .pipe(uploadStream);
Full stream support example here.
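For the original requirement, forwarding an incoming upload as it arrives, the incoming request object is itself a readable stream, so it can be piped straight into the upload. A minimal sketch assuming Express and the dropbox-v2-api setup above; the endpoint and target path are hypothetical:
const express = require('express');
const dropboxV2Api = require('dropbox-v2-api');

const dropbox = dropboxV2Api.authenticate({ token: process.env.DROPBOX_TOKEN });
const app = express();

// Note: no body-parsing middleware on this route, so req is still a raw stream.
app.post('/upload', (req, res) => {
  const uploadStream = dropbox({
    resource: 'files/upload',
    parameters: { path: '/target/file/path' }
  }, (err, result) => {
    if (err) return res.status(502).json(err);
    res.json(result); // Dropbox file metadata
  });

  // The request body streams through to Dropbox without being
  // buffered to disk on our server first.
  req.pipe(uploadStream);
});

app.listen(3000);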

Azure Functions: Nodejs, What are restrictions / limitations when using file system?

I have not been able to get an Azure function working that uses the Node file system module.
I created a brand new function app with the most basic HTTP trigger function and included the 'fs' module:
var fs = require('fs');

module.exports = function (context, req, res) {
  context.log('function triggered');
  context.log(req);
  context.done();
};
This works fine. I see the full request in live streaming logs, and in the function invocation list.
However, as soon as I add code that actually uses the file system, it seems to crash the Azure function. It neither completes nor throws an error. It also doesn't show up in the Azure function invocation list, which is scary, since this is a loss of failure information: I might think my service was running fine when there were actually crashes.
var fs = require('fs');

module.exports = function (context, req, res) {
  context.log('function triggered');
  context.log(req);
  fs.writeFile('message.txt', 'Hello Node.js', (err) => {
    if (err) throw err;
    console.log('It\'s saved!');
    context.done();
  });
};
The fs.writeFile code is taken directly from the Node.js website:
https://nodejs.org/dist/latest-v4.x/docs/api/fs.html#fs_fs_writefile_file_data_options_callback
I added the context.done() in the callback, but that snippet should work without issue in a normal development environment.
This brings up the questions:
Is it possible to use the file system when using Azure Functions?
If so, what are the restrictions?
If no restrictions, are developers required to keep track and perform cleanup, or is this taken care of by some sandboxing?
From my understanding, even though this is considered serverless computing, there is still a VM / Azure App Service underneath which has a file system.
I can use the Kudu console and navigate around and see all the files in /wwwroot and the /home/functions/secrets files.
Imagine a scenario where an Azure function writes a file with a unique name and doesn't perform cleanup: it would eventually take up all the disk space on the host VM and degrade performance. This could happen accidentally by a developer and possibly go unnoticed until it's too late.
This makes me wonder whether avoiding the file system is by design, or if my function is just written wrong.
Yes, you can use the file system, with some restrictions as described here. That page lists the directories you can access, like D:\HOME and D:\LOCAL\TEMP. I've modified your code below to write to the temp dir, and it works:
var fs = require('fs');

module.exports = function (context, input) {
  fs.writeFile('D:/local/Temp/message.txt', input, (err) => {
    if (err) {
      context.log(err);
      throw err;
    }
    context.log('It\'s saved!');
    context.done();
  });
};
Your initial code was failing because it was trying to write to D:\Windows\system32 which is not allowed.
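A slightly more portable variant of the same idea, a sketch that resolves the temp directory with Node's os module instead of hard-coding D:/local/Temp:
var fs = require('fs');
var os = require('os');
var path = require('path');

module.exports = function (context, input) {
  // os.tmpdir() resolves to the sandbox's temp directory
  // (typically D:\local\Temp on the Windows App Service plan).
  var target = path.join(os.tmpdir(), 'message.txt');
  fs.writeFile(target, input, (err) => {
    if (err) {
      context.log(err);
      throw err;
    }
    context.log('Saved to ' + target);
    context.done();
  });
};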

simple Azure website (built with nodejs), how to log http get request

I have a simple Azure website (free or shared tier) which is built with Node.js/Express. There is no local or database storage.
I'd like to save incoming HTTP GET requests for further analysis. I guess I can't just save req to a local drive/JSON temp file.
Is there a way to save to some log file that I can download later via FTP?
The simpler and cheaper, the better.
Something like this:
var fs = require('fs');

function homePage(req, res) {
  var d = new Date();
  var logDate = d.getTime();
  // Serialize the parts of the request we care about; passing the raw
  // req object to fs.writeFile would not produce readable output.
  var entry = JSON.stringify({ method: req.method, url: req.url, headers: req.headers });
  fs.writeFile(logDate + '.txt', entry, function (err) {
    if (err) return console.log(err);
    console.log('Logged');
  });
}
On the first line we require Node's file system module. Then we write a homepage route that creates a date variable to use as the log file's name. After that we use fs to write the request details to the file.
You'll need to do some tinkering to optimize readability, but this will get you started. Files shouldn't overwrite each other since we used the time, but they might if you get heavy traffic.
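To sidestep the overwrite risk entirely, here's a sketch that appends each request as a JSON line to one file per day instead of creating a file per hit:
var fs = require('fs');

function logRequest(req) {
  var d = new Date();
  // One file per day, e.g. 2017-01-31.log; appendFile creates it if missing.
  var fileName = d.toISOString().slice(0, 10) + '.log';
  var entry = JSON.stringify({
    time: d.toISOString(),
    method: req.method,
    url: req.url,
    headers: req.headers
  }) + '\n';
  fs.appendFile(fileName, entry, function (err) {
    if (err) console.log(err);
  });
}
Call logRequest(req) from your route handlers; appending avoids the collision problem under heavy traffic.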
