Node-Webkit read MP3 files - audio

I use the Audio class to play MP3 files thanks to a little trick: replacing the ffmpegsumo.so of Node-Webkit with the Chromium one. This enables MP3 playback on Windows but doesn't work on Mac OS. Does anyone know why?
Here's the code:
var player = new Audio();
player.src = '/path/to/the/audio.mp3';
player.play();

This seems to be dependent upon the dll/so being a 32-bit version. I am guessing that is why copying the file from Chrome doesn't work correctly for most people (my three-year-old phone is the only 32-bit device I have left).
I keep seeing this link --
https://github.com/rogerwang/node-webkit/wiki/Support-mp3-and-h264-in-video-and-audio-tag
... but it is a blank page. I am guessing it was deleted, since the info was likely not current or correct.
This issue thread has links to some rebuilt ffmpegsumo libraries for both Mac and Windows --
https://github.com/rogerwang/node-webkit/issues/1423
The alternative appears to be rebuilding ffmpegsumo; this thread has some config for doing that -- https://github.com/rogerwang/node-webkit/issues/1208
I am still confused about the licensing on it after you build the library, so that is probably worth some research. Everything about MPEG-4 Part 10 is copyrighted and heavily patent-encumbered. I think we all need to get smart enough to stop using MP4/H.264. Before I got this working correctly on node-webkit, it was easier to use ffmpeg to transcode the video to an OGV container using the Theora and Vorbis codecs. At this point it seems like iOS is keeping H.264 alive, when it should probably die the horrible death it has earned.
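For reference, that transcode is a one-liner (a sketch; the file names are placeholders and the quality values are just reasonable starting points):

ffmpeg -i input.mp4 -c:v libtheora -q:v 7 -c:a libvorbis -q:a 4 output.ogv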

Related

HTML5 Audio long buffering before playing

I'm currently making an Electron app that needs to play some 40-megabyte audio files from the file system. Maybe it's wrong to do this, but I found that the only way to play a file from anywhere in the file system is to convert it to a data URL in the background script and then transfer it using IPC. After that I simply do
this.sound = new Audio(dataurl);
this.sound.preload = "metadata";
this.sound.play();
(part of a Vue.js component, hence the this)
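The background-script side of that transfer looks roughly like this (a simplified sketch; the channel names and the audio/mpeg MIME type are illustrative):

// main process: read the file and ship it to the renderer as a data URL
const { ipcMain } = require('electron');
const fs = require('fs');

ipcMain.on('load-audio', (event, filePath) => {
  const b64 = fs.readFileSync(filePath).toString('base64');
  event.reply('audio-loaded', 'data:audio/mpeg;base64,' + b64);
});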
I did some profiling inside Electron, and this is what came out: actually transferring the 40-megabyte audio file doesn't take that long (around 80 ms). What is extremely annoying is the "second task", which is probably buffering (I have no idea) and lasts around 950 ms. This is way too long; ideally I would need it under 220 ms.
I've already tried changing the preload option to all available values, and while I'm using the native HTML5 Audio right now, I've also tried howler.js with similar results (it seemed a bit faster, though).
I would guess that loading the file directly might be faster, but even after disabling the security measures Electron puts in place to block file:///, it isn't recognized as a valid URI by XHR.
Is there a faster way to load the data URL? All the data is there; it just needs to be converted to a buffer or something like that.
Note: I cannot "pre-buffer" every file in advance, since there are about 200 of them; it just wouldn't make sense in my opinion.
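One direction for the "convert it to a buffer" idea (an untested sketch: fetch can decode data: URLs natively, so the base64 payload gets parsed once into a Blob and the Audio element is handed a cheap object URL instead):

// Sketch: decode the data URL once into a Blob, then play via an object URL.
async function playDataUrl(dataurl) {
  const blob = await (await fetch(dataurl)).blob(); // one-time base64 decode
  const objectUrl = URL.createObjectURL(blob);      // cheap reference, no copy
  const sound = new Audio(objectUrl);
  sound.addEventListener('ended', () => URL.revokeObjectURL(objectUrl));
  await sound.play();
  return sound;
}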
Update:
I found this post: Electron - throws Not allowed to load local resource when using showOpenDialog. I don't know how I missed it. I followed step 1, and I can now load files inside Electron with the custom protocol. However, neither Audio nor howler.js is faster; it's actually slower, at around 6 seconds from click to first sound. Is it that it needs to buffer the whole file before playing?
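For reference, the custom-protocol registration from that post's step 1 looks roughly like this (a sketch; the media scheme name is made up, and recent Electron versions replace registerFileProtocol with protocol.handle):

// main process: map media:// URLs onto the local file system
const { app, protocol } = require('electron');

app.whenReady().then(() => {
  protocol.registerFileProtocol('media', (request, callback) => {
    const filePath = decodeURIComponent(request.url.replace('media://', ''));
    callback({ path: filePath }); // serve the file at that path
  });
});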
Update 2:
It appears that the 6-second loading time only affects the first Audio instance that is created. I do not know why, though. After that, using two instances (one playing and one pre-buffering) works just fine, and even loading a file that hasn't been loaded before is instantaneous. It seems weird that it's only the first one.

Will --vout=dummy option work with --video-filter=scene?

I am trying to create snapshots from a video stream using the "scene" video filter. I'm on Windows for now, but this will run on Linux. I don't want the video output window to display. I can get the scenes to generate if I don't use the --vout=dummy option; when I include that option, the scenes are not generated.
This example on the Wiki indicates that it's possible. What am I doing wrong?
Here is the relevant line from the LibVLCSharp code:
LibVLC libVLC = new LibVLC("--no-audio", "--no-spu", "--vout=dummy", "--video-filter=scene", "--scene-format=jpeg", "--scene-prefix=snap", "--scene-path=C:\\temp\\", "--scene-ratio=100", $"--rtsp-user={rtspUser}", $"--rtsp-pwd={rtspPassword}");
For VLC 3, you will need to disable hardware acceleration, which seems incompatible with the dummy vout.
In my tests, it was necessary to do that on the media rather than globally:
media.AddOption(":avcodec-hw=none");
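In context, that looks something like this (a sketch; the stream URL is a placeholder):

// disable hardware decoding for this media before playing it
var media = new Media(libVLC, "rtsp://example/stream", FromType.FromLocation);
media.AddOption(":avcodec-hw=none");
var mediaPlayer = new MediaPlayer(media);
mediaPlayer.Play();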
I still have many "Too high level or recursion" errors, and for those, I guess you'd better open an issue on VideoLAN's trac.

Convert an RTSP/RTMP live stream with G.711 audio into RTMP/RTSP with AAC audio

I'm new to this forum and my English skills are not the best!
I have a website where I publish the video streams of the cameras, to show live what happens inside during the nesting time! A guy with high IT skills built me a little server to restream them (datarhei Restreamer), but this guy still has no time and worse response times...
To my problem: the Restreamer doesn't support the G.711 audio codec from the cameras, so the live streams on the website are still without audio. I need to convert the live streams (RTSP and RTMP, in H.264) so that the audio changes to AAC or something else that is supported, but I have no plan how to do this. I tried it with FFmpeg, but I can't find the correct commands to get my result. There is something about a streaming server to send the newly created stream to; I can't get it into my head how to do this (I just need a stream that is viewable with VLC player and then usable as input for my Restreamer server, just the same as ca
I want to change the source stream into the correct codec (audio from G.711 to AAC, the rest the same as the source) and then put this "new" stream into my Restreamer server, and it will work fine! (Tested with XSplit Broadcaster, but that doesn't run on a Raspberry Pi; only one instance is runnable, and two live streams need to be encoded at the same time.) And that program has annoying bugs (endless, non-removable error messages, though the stream keeps running).
I have a second, new Raspberry Pi that is planned as a "live encoder" for the Restreamer Raspberry Pi, where the "new" streams go in (RTMP/RTSP input on a graphical UI). I keep trying with FFmpeg but still have no result...
Sorry about this long text with all the language issues, but I'm really frustrated with this, because I have purchased two new cameras for a total of 450 euros just to get the live stream with sound now :(
Finally, I found the best solution here, and it works (https://github.com/datarhei/restreamer/issues/11). Inside the long discussion, use the solution written by svenerbeck on 4 Apr 2016. The essential part is written below.
Create a new live.json in /mnt/live.json with the following modification:
"ffmpeg": {
"options": {
"native_h264": [
"-vcodec copy",
"-acodec aac",
"-f flv"
],
.....
Exec the container with
docker run ... -v /mnt/live.json:/restreamer/conf/live.json ....
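Those options amount to copying the video stream untouched and re-encoding only the audio to AAC in an FLV container. The equivalent standalone FFmpeg command would be (a sketch; both stream URLs are placeholders):

ffmpeg -i rtsp://camera.local/stream -vcodec copy -acodec aac -f flv rtmp://restreamer.local/live/stream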

Euterpea Exception: No MIDI output device found

I am trying to get started with Haskell's Euterpea library. My first goal was to get it to play a given sound file (e.g. MP3 or WAV), but first I ran into an issue following the instructions to get it to just play a simple note sound in GHCi.
Following the "Setting up MIDI" instructions at Euterpea's web page, I ran
import Euterpea
play $ c 4 qn
in GHCi. The play command resulted in the following error message:
Prelude Euterpea> play $ c 4 qn
*** Exception: No MIDI output device found
CallStack (from HasCallStack):
error, called at ./Euterpea/IO/MIDI/MidiIO.lhs:122:18 in Euterpea-2.0.2-Iz37iWlkpjn2emP4FnvOI1:Euterpea.IO.MIDI.MidiIO
I thought I needed to specify a MIDI output on my machine (macOS Sierra) and found an application called "Audio MIDI Setup", but it showed that a MIDI output (my internal speakers) was already specified.
Anyone know what this issue is or how to fix it?
Perhaps you solved this, but for posterity, some ideas:
It sounds like you didn't install and run a MIDI synth (e.g. SimpleSynth) first. AFAICT, Audio MIDI Setup doesn't actually include a software synthesizer; it's more for advanced / hardware MIDI setup.
Running a synth should create the MIDI output devices that Euterpea couldn't find. You may also need to play around with channels (e.g. use playDev n instead of play and work out a value for n from your device list... or even just try 1 through 8).
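In GHCi that would look something like this (a sketch; device 1 is a guess, use whatever devices lists as an output):

import Euterpea
devices            -- prints the available MIDI input/output devices
playDev 1 (c 4 qn) -- play the note on MIDI output device 1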
Either way, there's some good Mac-focused help on Donya's "working with MIDI on Mac OS X" page. Hope that helps.

Audio error in VMware running Mac OS X

Simple synchronous loading of an audio file (.mp3) in a cocos2d app makes my VMware disconnect the sound.
The error is displayed at the bottom right, saying "error in creating sound stream; sound is disconnected".
I read that it might be because of my VMware version (mine is 8), but I'm looking for a fix, not a downgrade to another version.
Before I get that error, the sound on the system works just fine (YouTube, etc.).
The exact code I'm calling is:
[CDSoundEngine setMixerSampleRate:CD_SAMPLE_RATE_MID];
[[CDAudioManager sharedManager] setResignBehavior:kAMRBStopPlay autoHandle:YES];
soundEngine = [SimpleAudioEngine sharedEngine];
[soundEngine preloadBackgroundMusic:@"somemp3.mp3"];
[soundEngine playBackgroundMusic:@"somemp3.mp3"];
Maybe the bit rate is too high...?
Thanks
There was a problem in the VMware itself.
