I am attempting to play a .ogg audio file after an event in a VS Code extension:
export function activate(context: vscode.ExtensionContext) {
    context.subscriptions.push(vscode.commands.registerCommand('progress-announcer.testAudio', () => {
        new Audio("**path to ogg**").play();
    }));
}
At first I tried using the Audio object, but this code runs in the Electron Node.js extension host, not a webview, so Audio is not defined.
I have also tried following this tutorial, but I cannot install vorbis; it breaks for the same reasons described in this GitHub issue.
Surely there must be a way to play an .ogg file with the speaker package? I want to avoid spinning up a separate process or application to play it as much as possible.
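For context, Audio only exists in a browser-like environment such as a webview, not in the extension host. As an illustration only (it sidesteps the speaker package rather than answering the question), a rough sketch that routes playback through a webview, assuming the .ogg ships in a media folder inside the extension and "beep.ogg" is a placeholder name, could look like:

import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    context.subscriptions.push(vscode.commands.registerCommand('progress-announcer.testAudio', () => {
        // A webview runs in a browser context, so the <audio> element is available there
        const panel = vscode.window.createWebviewPanel('audioPlayer', 'Audio', vscode.ViewColumn.Beside, {
            localResourceRoots: [vscode.Uri.joinPath(context.extensionUri, 'media')]
        });
        // Convert the on-disk path to a URI the webview is allowed to load
        const src = panel.webview.asWebviewUri(vscode.Uri.joinPath(context.extensionUri, 'media', 'beep.ogg'));
        panel.webview.html = `<!DOCTYPE html><html><body><audio src="${src}" autoplay></audio></body></html>`;
    }));
}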
I'm serving some static HTML, CSS, and JS files, plus a folder called tmp containing some images and video files, using Express.js in my Node app:
app.use(express.static("build"));
app.use(express.static("../tmp"));
When I go to http://localhost:3003, it loads my app very nicely, including all the images on my webpage (located in the tmp folder), but the problem is that every video file looks like this:
If I press fullscreen on the video player, or even visit the URL directly (http://localhost:3003/video_1.mp4), it works.
Is this a problem with Express.js trying to stream the video data from the tmp folder? I really don't know how to solve this issue. I tried delaying the playback and using a third-party library to play the video, but no luck.
It seems to work when I directly specify the whole path, localhost:3003/picture.png, in the src of the video element.
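For reference, the same static setup is often written with absolute paths, so that build and tmp resolve relative to the source file rather than the process working directory. A minimal sketch of that configuration (an illustration only, not necessarily the cause of the video problem):

const path = require('path');
const express = require('express');

const app = express();

// Resolve both static roots relative to this file instead of the current working directory
app.use(express.static(path.join(__dirname, 'build')));
app.use(express.static(path.join(__dirname, '..', 'tmp')));

app.listen(3003);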
I'm trying to embed a Finder Sync extension written in Swift into my app built with Electron. How can I make them work together and communicate with each other? I have read the Apple documentation, but it only explains how to add a target to a native application. I also noticed that I can manually inject the compiled .appex file (produced by Xcode) into the application's PlugIns folder using electron-builder.
How can I develop and test the extension in Xcode and embed it correctly in a custom Electron app?
Any suggestions? Thank you very much.
Create a PlugIns folder in your Electron root folder.
Copy the .appex file into the PlugIns folder.
If you are using electron-builder, modify the package.json file by adding "extraFiles": ["PlugIns/"] to the "mac" section (see the sketch below).
Build. The Contents folder of your app package will contain the PlugIns folder with your .appex file inside, and the .appex will get loaded into your app's process.
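For illustration, the relevant part of package.json with an electron-builder configuration might look like this (the appId value is a placeholder):

"build": {
    "appId": "com.example.app",
    "mac": {
        "extraFiles": [
            "PlugIns/"
        ]
    }
}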
I would compile it as an independent binary and include it in some directory, to be executed from the Electron app using child_process.execFile.
You can pass arguments when executing the binary with execFile; here is an example (using promises):
const util = require('util');
const execFile = util.promisify(require('child_process').execFile);

async function FinderSyncExtPlugin(ARGUMENTS) {
    const { stdout } = await execFile('YourBinary', ARGUMENTS);
    console.log(stdout);
}

FinderSyncExtPlugin(['argument1', 'argument2', '...']);
You could then use the stdout to know the status/result of the requested operation.
I'm storing audio files on Google Cloud Storage (through Firebase storage).
I need to use FFMPEG to convert the audio file from stereo (two channels) to mono (one channel).
How can I perform the above conversion on Google Cloud Platform?
Update:
I suspect one possibility is to use Google Compute Engine to create a virtual machine, install ffmpeg, and somehow gain access to the audio files.
I'm not sure if this is the best way or even possible. So I'm still investigating.
If you already have code that can talk to Google Cloud Storage, you can deploy that code as an App Engine application running on a custom runtime. To ensure the ffmpeg binary is available to your application, you'd add this to your app's Dockerfile:
RUN apt-get update && apt-get install -y ffmpeg
Then it is just a matter of having your code save the audio file from GCS somewhere in /tmp, shelling out to /usr/bin/ffmpeg to do the conversion, and then doing something with the resulting output file (like serving it back to the client or saving it back to Cloud Storage).
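As a rough sketch of that flow in Node.js (the bucket and object names are placeholders; this assumes the @google-cloud/storage client and an ffmpeg binary installed in the image as above):

const { Storage } = require('@google-cloud/storage');
const util = require('util');
const execFile = util.promisify(require('child_process').execFile);

const storage = new Storage();

async function convertToMono(bucketName, srcObject, dstObject) {
    const localIn = '/tmp/input.mp3';
    const localOut = '/tmp/output.mp3';

    // Download the source audio from Cloud Storage into /tmp
    await storage.bucket(bucketName).file(srcObject).download({ destination: localIn });

    // Shell out to ffmpeg: -ac 1 downmixes stereo to mono
    await execFile('ffmpeg', ['-i', localIn, '-ac', '1', localOut]);

    // Upload the converted file back to Cloud Storage
    await storage.bucket(bucketName).upload(localOut, { destination: dstObject });
}

convertToMono('my-bucket', 'audio/stereo.mp3', 'audio/mono.mp3').catch(console.error);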
If you're not using the flexible environment or Kubernetes, download the ffmpeg binaries (Linux-64) from https://ffbinaries.com/downloads and include ffmpeg and ffprobe directly in your app. For apps using the standard environment this is really the only way without switching.
Once you've added them, you'll need to point to them in your options array:
$options = array(
    'ffmpeg.binaries'  => '/workspace/ffmpeg-binaries/ffmpeg',
    'ffprobe.binaries' => '/workspace/ffmpeg-binaries/ffprobe',
    'timeout'          => 3600,
    'ffmpeg.threads'   => 12,
);
To have this work locally as well, make the path an environment variable pointing to the correct location in each setup. Add something like export FFMPEG_BINARIES_PATH="/usr/local/bin" (or wherever you have them locally) to your .zshrc or other rc file, and add the following to your app.yaml:
env_variables:
  FFMPEG_BINARIES_PATH: '/workspace/ffmpeg-binaries'
And then change the options array to:
$options = array(
    'ffmpeg.binaries'  => getenv("FFMPEG_BINARIES_PATH") . '/ffmpeg',
    'ffprobe.binaries' => getenv("FFMPEG_BINARIES_PATH") . '/ffprobe',
    'timeout'          => 3600,
    'ffmpeg.threads'   => 12,
);
The gist of the issue is that IBM Watson Speech to Text only allows for FLAC, WAV, and OGG file formats to be uploaded and used with the API.
My proposed solution is that if the user uploads an MP3, a conversion takes place BEFORE the file is sent to Watson: the uploaded MP3 is converted to OGG using ffmpeg or sox, and the resulting OGG is then uploaded to Watson.
What I am unsure about is: what exactly do I have to modify in the Node.js Watson code to make this conversion happen? Linked below is the Watson repo I am working through. I am fairly sure the file that will have to change is fileupload.js, which I have linked, but where the changes go is what I am uncertain about.
I have looked through both SO and developerWorks (the IBM SO) for answers to this issue, but I have not found any, which is why I am posting here. I would be happy to clarify my question if necessary.
Watson Speech to Text Repo
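For illustration, the conversion step itself could be a small server-side helper around ffmpeg (ffmpeg must be installed and on the PATH; the file names are placeholders, and where to hook this into the sample app is exactly the open question):

const { execFile } = require('child_process');

// Convert an uploaded MP3 to OGG/Vorbis before sending it to Speech to Text
function convertMp3ToOgg(inputPath, outputPath, callback) {
    execFile('ffmpeg', ['-i', inputPath, '-codec:a', 'libvorbis', outputPath], callback);
}

convertMp3ToOgg('upload.mp3', 'upload.ogg', function (err) {
    if (err) { return console.error('conversion failed', err); }
    console.log('ready to send upload.ogg to Watson');
});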
The Speech to Text sample application you are trying to use doesn't convert MP3 files to OGG. The src folder (with fileupload.js in it) is just JavaScript that will be used on the client side (thanks to Browserify).
The application basically lets the browser communicate with the API using CORS, so the audio goes from the browser straight to the Watson API.
If you want to convert the audio using ffmpeg or sox, you will need to install the dependencies using a custom buildpack, since those modules have binary dependencies (C++ code in them).
James Thomas has a buildpack with sox on it: https://github.com/jthomas/nodejs-buildpack.
You need to update your manifest.yml to be something like:
memory: 256M
buildpack: https://github.com/jthomas/nodejs-buildpack.git
command: npm start
Node:
var sox = require('sox');

var job = sox.transcode('audio.mp3', 'audio.ogg', {
    sampleRate: 16000,
    format: 'ogg',
    channelCount: 2,
    bitRate: 192 * 1024,
    compressionQuality: -1
});
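If I remember the sox module's API correctly (treat this as an assumption and double-check its README), the returned job is an event emitter that has to be started explicitly:

job.on('error', function (err) {
    console.error('transcode failed:', err);
});
job.on('end', function () {
    console.log('transcode finished');
});
job.start();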
This is the scenario I'm trying to achieve: a sound stored on the same server as a web application plays when a condition is met on the client. It works perfectly when I run it in the IDE and change the web.config to point to the server where the DB is. However, when I deploy it and access it via the browser, the sound does not play, even though the same sound played when I used my development machine. The code is:
var configsetings = new System.Configuration.AppSettingsReader();
string soundPath= configsetings.GetValue("Notification",typeof(System.String)).ToString();
var sound = new System.Media.SoundPlayer { SoundLocation = Server.MapPath(soundPath) };
sound.Load();
sound.Play();
The web.config entry is:
<add key="Notification" value="~/beep-4.wav" />
The sound file is sitting in the root folder of the ASP.NET web application, so what could be wrong? There is no audio output device on the server, nor is there a player like Media Player; nevertheless, these factors did NOT stop it from working on my dev machine.
Looking at the code you posted, I will assume you wrote it in C#.
This code runs on the server side, and the client side (the web browser) will never know about it or about your audio file. Please read about ASP.NET code-behind and how it works. If you want to play an audio file in the browser (client side), you need to use JavaScript, Flash, or the <audio> tag from HTML5.
By installing a sound card on the server, the best you can achieve is getting the file played on that server.
Thanks yms, the tag worked. I added a routine that writes the tag's HTML into a div at run time and put it in a timer.
sounddiv.InnerHtml = "<audio preload=\"auto\" autoplay=\"autoplay\">" +
"<source src=\"" + soundPath + "\" type=\"audio/wav\" />" +
" Your browser does not support the audio tag. </audio>";
This code is called in the code-behind from a timer in response to the condition, so the sound repeats every 30 seconds. Problem solved. Thanks guys for the leads.