Using FFMPEG on Google Cloud Platform - audio

I'm storing audio files on Google Cloud Storage (through Firebase storage).
I need to use FFMPEG to convert the audio file from stereo (two channels) to mono (one channel).
How can I perform the above conversion on Google Cloud Platform?
Update:
I suspect one possibility is to use Google Compute Engine to create a virtual machine, install ffmpeg, and somehow gain access to the audio files.
I'm not sure if this is the best way or even possible. So I'm still investigating.

If you have code that exists already which can talk to Google Cloud Storage, you can deploy that code as an App Engine application which runs on a Custom Runtime. To ensure the ffmpeg binary is available to your application, you'd add this to your app's Dockerfile:
RUN apt-get update && apt-get install -y ffmpeg
Then, it is just a matter of having your code save the audio file from GCS somewhere in /tmp, then shelling out to /usr/bin/ffmpeg to do your conversion, then having your code do something else with the resulting output file (like serving it back to the client or saving it back to Cloud Storage).
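A minimal sketch of that flow in Node.js, assuming the @google-cloud/storage client; the bucket and object names here are placeholders:
const { Storage } = require('@google-cloud/storage');
const { execFile } = require('child_process');
const { promisify } = require('util');

const storage = new Storage();
const run = promisify(execFile);

async function convertToMono(bucketName, srcName, dstName) {
    const bucket = storage.bucket(bucketName);
    const localIn = '/tmp/input-audio';
    const localOut = '/tmp/output-audio.mp3';

    // Pull the stereo file out of Cloud Storage into the instance's /tmp.
    await bucket.file(srcName).download({ destination: localIn });

    // -ac 1 asks ffmpeg to downmix to a single (mono) channel.
    await run('ffmpeg', ['-i', localIn, '-ac', '1', localOut]);

    // Push the converted file back to Cloud Storage.
    await bucket.upload(localOut, { destination: dstName });
}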

If you're not using the flexible environment or Kubernetes, download the ffmpeg binaries (Linux 64-bit) from https://ffbinaries.com/downloads and include ffmpeg and ffprobe directly in your app. For apps on the standard environment, this is really the only way short of switching environments.
Once you've added them, you'll need to point to them in your options array:
$options = array(
    'ffmpeg.binaries'  => '/workspace/ffmpeg-binaries/ffmpeg',
    'ffprobe.binaries' => '/workspace/ffmpeg-binaries/ffprobe',
    'timeout'          => 3600,
    'ffmpeg.threads'   => 12,
);
To make this work locally as well, use an environment variable that points to the correct path in each setup. Add something like export FFMPEG_BINARIES_PATH="/usr/local/bin" (or wherever you have them locally) to your .zshrc or other rc file, and the following to your app.yaml:
env_variables:
  FFMPEG_BINARIES_PATH: '/workspace/ffmpeg-binaries'
And then change the options array to:
$options = array(
    'ffmpeg.binaries'  => getenv("FFMPEG_BINARIES_PATH") . '/ffmpeg',
    'ffprobe.binaries' => getenv("FFMPEG_BINARIES_PATH") . '/ffprobe',
    'timeout'          => 3600,
    'ffmpeg.threads'   => 12,
);

Related

updating a deployment - uploaded images get deleted after redeployment to google cloud

So I have a Node.js web app, and this web app has a folder to store images uploaded by users from a mobile app. I upload an image to the folder by taking the image's base64 string and using fs.writeFile to save it, like this:
fs.writeFile(__dirname + '/../images/complaintImg/complaintcase_' + data.cID + '.jpg', Buffer.from(data.complaintImage, 'base64'), function (err) {
    if (err) {
        console.log(err);
    } else {
        console.log("success");
    }
});
The problem is, whenever the application is redeployed to Google Cloud, the images get deleted. This is because the image folder of the local version of the application is empty: when a user uploads an image, I don't get a local copy of it.
How do I prevent the images from being deleted with every deployment? Because the app is constantly updated (changes to JS or HTML files), I can't have the images wiped every time. How do I update a deployment to only deploy certain files? The gcloud app deploy command seems to deploy the entire project. Or should I upload the images directly to Google Cloud Storage?
Please help. The mobile app isn't released to the public yet, so losing the images with every deployment isn't a big problem now, but it will be once it's released, because the images users upload are very important. Thank you in advance!
It appears that the __dirname directory you chose may be under /tmp or, if you use the flexible environment, some other directory local to your instance. If so, the images will disappear whenever new instances are started (which always happens at a new deployment, but can happen between deployments as well). This is expected: instances are always started "from scratch".
You need to store files that your app creates, and that you want to survive instance (re)starts, on a persistent storage product like Cloud Storage; see Using Cloud Storage (or Using Cloud Storage for the flexible env). Note that you can't use regular filesystem calls with Cloud Storage; you need to use the documented client library.
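For illustration, a rough sketch of that upload with the @google-cloud/storage Node.js client; the bucket name is a placeholder:
const { Storage } = require('@google-cloud/storage');

const storage = new Storage();
const bucket = storage.bucket('my-app-uploads'); // placeholder bucket name

function saveComplaintImage(data, callback) {
    // Write the decoded image straight to Cloud Storage instead of the
    // instance's local (ephemeral) filesystem.
    const file = bucket.file('complaintImg/complaintcase_' + data.cID + '.jpg');
    file.save(Buffer.from(data.complaintImage, 'base64'),
              { contentType: 'image/jpeg' },
              callback);
}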
As stated in Dan Cornilescu's answer, user-uploaded files should be stored in Cloud Storage, for GAE Standard as well as GAE Flexible.
Just for reference, there is an alternative for those using Python 2.7, Java 8 or PHP 5: the Blobstore API.

Issue Image Upload to Azure Blob using Nativescript Core

I am using Nativescript 4.2.0 and trying to upload a local image to Azure Blob storage.
Most recommended approaches use the nativescript-background-http plugin. However, including this plugin triggers errors about other missing npm modules. I haven't seen this reported anywhere, so I am unsure whether I am doing something wrong or whether there are other commands to run besides
tns install nativescript-background-http
The other plugin, "nativescript-azure-storage", seems to work fine. It requires us to base64-encode our images. After base64 encoding, the image gets uploaded to Azure Storage. However, since the image is now base64 encoded, it can't be used directly in an Image tag.
Code used:
const azureNSStorage = new nsAzureStorage.NativeScriptAzureStorage(config.AZURE_STORAGE_CONNECTION_STRING);
let path = selected.android;
const imageFromLocalFile = imageSourceModule.fromFile(path);
let base64string = imageFromLocalFile.toBase64String('png');

azureNSStorage.uploadBlob(mycontainer, blobName, base64string)
    .then(() => alert(`Uploaded successfully`))
    .catch((err) => alert(`Error uploading: ${err}`));
What is the recommended way to upload images to Azure Blob storage so that we can reference them back in a NativeScript page?
Cheers
Abhishek
It actually depends on what you need and what your backend / service provider supports, so as long as Azure works for you as expected, there is nothing wrong with converting the image into a base64 string.
That said, nativescript-background-http should work too; let us know what errors you are facing with it.
Check this sample: Azure Blob storage does not accept a base64 string; you need to send the stream. https://baskarrao.wordpress.com/2018/10/12/day-3-nativescript-post-series/
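If you do get nativescript-background-http working, a rough sketch of streaming the file to a blob, assuming your backend hands out a SAS (shared access signature) URL for it; the URL, container and local path are placeholders:
const bghttp = require("nativescript-background-http");
const session = bghttp.session("image-upload");

const request = {
    // Placeholder SAS URL obtained from your backend.
    url: "https://myaccount.blob.core.windows.net/mycontainer/myimage.png?sv=placeholder",
    method: "PUT",
    headers: {
        "x-ms-blob-type": "BlockBlob", // required by Azure for a plain blob PUT
        "Content-Type": "image/png"
    },
    description: "Uploading image"
};

// `path` is the local file path, as in the question's snippet.
const task = session.uploadFile(path, request);
task.on("complete", () => alert("Uploaded successfully"));
task.on("error", () => alert("Error uploading"));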

Electron Node.js node localstorage osx mkdir permission denied

I am working with Electron and Node.js. We have developed an application that works fine on Windows, and as a requirement we had to package it for macOS. I packaged the application using electron-packager; the packaging process completes and the package is generated. Double-clicking it throws a permission-denied error for mkdir, as I am using node-localstorage to maintain some settings on the user's local machine. Somehow macOS doesn't allow node-localstorage to create a folder in the root of the application. Any help in this matter will be great. Thanks
First off, is the code in question in the main process or in a renderer process? If it is the latter, you don't need to use 'node-localstorage', because you can use the renderer's native LocalStorage. If you are in the main process, then you need to provide your own storage strategy so using 'node-localstorage' is a viable option.
In any case, you need to carefully consider where to store the data; for starters, let's look at where Electron's renderer processes would store their LocalStorage data: this differs based on the OS, but you can get and set the paths using the app module -- the path in question is userData, which on OS X would default to ~/Library/Application Support/<App Name>. Electron uses that folder to persist cookies, caches, LocalStorage etc., so I would suggest using that folder as well. (Otherwise, refer to the XDG defaults for good defaults.)
What your example above was trying to do is store your 'errorLogDb' in the current working directory, which might depend on your OS, where your App is installed, how you executed it, etc.
Finally, it's a good idea to differentiate between your 'production' app and your app during development and testing, because you might not want to use the same storage folders for every environment. In any case, just writing to './errorLogDb' is likely to cause lots of headaches so I'd be thankful for the permission denied error.
This strategy worked for me:
const { LocalStorage } = require('node-localstorage');
let ls;

mb.on('ready', () => {
    let prefsPath = mb.app.getPath('userData') + '/prefs';
    ls = new LocalStorage(prefsPath);
    loadPrefs();
});

mb.on('after-create-window', () => { /* ls... */ });

exports.togglePref = () => { /* ls... */ };

Temporary File Download

Is there a service that creates basically a one-time download of a file, preferably something I can use from NodeJS?
I've done some research on FilePicker, and haven't found anything about regenerating the link it gives you for a file. There may be a way to do this with NodeJS, but I'm using Meteor at the same time so many Node things probably will conflict.
You could build it with Meteor, using meteor-router (installed with Meteorite) and server-side routing to deliver the files.
You need a collection to keep track of downloaded files:
Server JS
var downloads = new Meteor.Collection("downloads");

// Create a link
downloads.insert({ url: "/mydownload.zip", downloaded: false });

Meteor.Router.add('/file/:id', 'GET', function (id) {
    var download = downloads.findOne(id);
    if (download) {
        if (download.downloaded) {
            this.response.send("You've already downloaded me");
        } else {
            // Flag the file as downloaded so the link only works once, then
            // redirect (or stream the file for an extra layer of surety).
            downloads.update(id, { $set: { downloaded: true } });
            this.response.redirect(download.url);
        }
    }
});
On the client you can use /file/{{_id}}, with _id being the id of the file from downloads, as the link.
My recommendation would also be to add custom server-side logic to count the number of downloads (or just flag a file as downloaded/not downloaded) and respond accordingly. The closest you could get with Filepicker.io would be using its security policies to restrict downloading the file to a specific time interval.
In addition to using the router package, in Meteor.startup you can add:
var require = __meteor_bootstrap__.require;
fs = require('fs');
The fs variable should be declared on the server only; the fs package is used by Meteor and does not need to be added separately.
Once you have done this, you can create files with Meteor.uuid() as their name, which makes them unique and very difficult to guess. It is also possible to delete a file after a certain amount of time using Meteor.setTimeout, as sketched below.
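A rough sketch of both ideas together; the directory and timeout value are placeholders:
function createTemporaryDownload(fileContents) {
    // Meteor.uuid() makes the name unique and very hard to guess.
    var name = Meteor.uuid() + '.zip';
    var path = '/tmp/downloads/' + name;
    fs.writeFile(path, fileContents, function (err) {
        if (!err) {
            // Expire the file after 10 minutes so the link stops working.
            Meteor.setTimeout(function () {
                fs.unlink(path, function () {});
            }, 10 * 60 * 1000);
        }
    });
    return name; // serve this name from your download route
}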
The question is: where do the files to be downloaded come from?
Solution using Heroku Cloud and NodeJS Meteor Hooks
Heroku in particular is actually great for temporary file download links: they offer a "temporary scratchpad" filesystem that is reset every time the program restarts, and each running Node server cannot see the files other instances have created.
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno's lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted.
Taken from the Heroku documentation: https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
Thus, any files written to the "filesystem" will be temporary.
This allows for a very easy solution to this problem: you can simply use NodeJS filesystem manipulation to create temporary files on the server, serve them once (or for a limited time), and then remove them so they cannot be downloaded again.
This, in combination with something like $.download(), makes for a seamless experience while preventing unauthorized re-downloads.
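Outside Meteor, the same serve-once-then-delete idea as a rough sketch in plain Node with Express; the route and paths are placeholders:
const express = require('express');
const fs = require('fs');

const app = express();

// Files are written under /tmp with unguessable names (as described above),
// served once, then unlinked so the link dies with the first download.
app.get('/download/:name', (req, res) => {
    const filePath = '/tmp/downloads/' + req.params.name;
    res.download(filePath, (err) => {
        if (!err) {
            fs.unlink(filePath, () => {});
        }
    });
});

app.listen(3000);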

Upload filestream to cloud-files with Node.js

I know that it's possible to upload files to my cloud-files account in Node.js, using the following module: node-cloudfiles.
But is it also possible to upload a filestream directly?
In my case I am downloading an image from a certain location in Node.js and want to upload it directly to my cloud-files account without saving the image temporarily on my server.
Of course it is possible - you can just read the documentation on the Rackspace Cloud Files API ( http://docs.rackspacecloud.com/files/api/cf-devguide-latest.pdf ) and implement the necessary parts yourself.
However, I'd suggest waiting until https://github.com/nodejitsu/node-cloudfiles/pull/11 gets integrated into the trunk - then the node-cloudfiles library will support uploading files using streams, so you won't have to create files before uploading.
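In the meantime, a rough sketch of implementing the streaming upload by hand with Node's core https module, piping the downloaded image straight into a Cloud Files PUT; the host, account path, token and source URL are all placeholders:
const https = require('https');

// Fetch the source image and pipe it straight into the Cloud Files PUT;
// nothing is written to disk. Node uses chunked transfer encoding since
// no Content-Length is set.
https.get('https://example.com/source-image.jpg', (imageRes) => {
    const put = https.request({
        host: 'storage.clouddrive.com', // placeholder storage host
        path: '/v1/MossoCloudFS_account/my-container/image.jpg', // placeholder
        method: 'PUT',
        headers: {
            'X-Auth-Token': 'your-auth-token', // placeholder token
            'Content-Type': 'image/jpeg'
        }
    }, (putRes) => {
        console.log('Cloud Files responded with', putRes.statusCode);
    });

    imageRes.pipe(put);
});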
