HTML5 Audio long buffering before playing

I'm currently making an Electron app that needs to play some 40 MB audio files from the file system. Maybe it's the wrong approach, but the only way I found to play a file from anywhere on the file system is to convert it to a data URL in the background script and then transfer it over IPC. After that I simply do
this.sound = new Audio(dataurl);
this.sound.preload = "metadata";
this.sound.play();
(part of a VueJS component, hence the this)
I did some profiling inside Electron and this is what came out:
Note that actually transferring the 40 MB audio file doesn't take that long (around 80 ms). What is extremely annoying is the "Second Task", which is probably buffering (I have no idea) and lasts around 950 ms. This is way too long; ideally I'd need it under 220 ms.
I've already tried every available preload option, and while I'm using the native HTML5 Audio right now, I've also tried howler.js with similar results (it seemed a bit faster, though).
I would guess that loading the file directly might be faster, but even after disabling the security measures Electron puts in place to block file:///, it isn't recognized as a valid URI by XHR.
Is there a faster way to load the data URL? All the data is already there; it just needs to be converted to a buffer or something like that.
Note: I cannot "pre-buffer" every file in advance, since there are about 200 of them; it just wouldn't make sense in my opinion.
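To illustrate what I mean by "converted to a buffer", here is roughly the kind of thing I'm after (a sketch, assuming dataurl is the string received over IPC):

// Decode the data URL once into a Blob, then play from an object URL
fetch(dataurl)
  .then((res) => res.blob())
  .then((blob) => {
    const objectUrl = URL.createObjectURL(blob);
    this.sound = new Audio(objectUrl);
    this.sound.play();
    // call URL.revokeObjectURL(objectUrl) later to free the memory
  });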
Update:
I found this post Electron - throws Not allowed to load local resource when using showOpenDialog
I don't know how I missed it. I followed step 1, and I can now load files inside Electron with the custom protocol. However, neither Audio nor howler.js is any faster; it's actually slower, at around 6 seconds from click to first sound. Does it need to buffer the whole file before playing?
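For reference, the registration from step 1 looks roughly like this in my main process (a sketch; the "safe-audio" scheme name is just what I picked):

// Main process: map a custom scheme straight to files on disk
const { app, protocol } = require("electron");

app.whenReady().then(() => {
  protocol.registerFileProtocol("safe-audio", (request, callback) => {
    // strip the scheme prefix and hand back a plain file path
    const filePath = decodeURI(request.url.replace("safe-audio://", ""));
    callback({ path: filePath });
  });
});

// Renderer side: new Audio("safe-audio:///C:/music/track.mp3")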
Update 2:
It appears that the 6-second loading time only affects the first Audio instance that is created. I don't know why, though. After that, using two instances (one playing and one pre-buffering) works just fine; even loading a file that hasn't been buffered yet is instantaneous. It seems weird that it's only the first one.
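For anyone curious, the two-instance rotation looks roughly like this (a sketch, the names are mine):

// One Audio instance plays while the next one buffers in the background
let current = null;
let next = null;

function prebuffer(url) {
  next = new Audio(url);
  next.preload = "auto"; // start buffering immediately
}

function playNext() {
  if (current) current.pause();
  current = next;
  if (current) current.play();
}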

Related

Heroku cannot store files temporarily

I am writing a Node.js app that works with fonts. One action it performs is downloading a .ttf font from the web, converting it to a base64 string, deleting the .ttf, and using that string in other stuff. I need the .ttf file stored somewhere briefly, which is why I convert it; the process takes 1-2 seconds. I know Heroku has an ephemeral file system, but I only need to store the file for a very short time. Is there any way I can store my files? Using fs.writeFile currently returns this error:
Error: EROFS: read-only file system, open '/app\test.txt'
One idea: make an action that gets the font, converts it, and stores it in a global variable before it is used by another task. When you want to use it again, check whether that global variable has already been filled with the font buffer.
Reference: Singleton
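A minimal sketch of that idea, assuming a hypothetical downloadFont helper that resolves to the base64 string:

// Module-scoped cache so the font is only fetched and converted once
let cachedFont = null; // acts as the singleton

async function getFont() {
  if (!cachedFont) {
    // downloadFont() is a placeholder for your fetch-and-convert logic
    cachedFont = await downloadFont();
  }
  return cachedFont;
}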
I didn't know that you could store stuff in the /tmp directory. It's working for the moment, but according to the dyno/ephemeral file system documentation it gets cleaned frequently, so I don't know if it may cause other problems in the long run.
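In case it helps anyone, /tmp works with the ordinary fs API; a sketch (fontBuffer stands in for the downloaded .ttf bytes):

const fs = require("fs");
const path = require("path");

// /tmp is writable on Heroku dynos, unlike the read-only app directory
const fontPath = path.join("/tmp", "font.ttf");
fs.writeFileSync(fontPath, fontBuffer);
const base64 = fs.readFileSync(fontPath).toString("base64");
fs.unlinkSync(fontPath); // clean up straight away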

What's the best practice for watching for finished video encodes?

TL;DR
I'm unsure of the best way to recognise when video encodes have finished using Chokidar. Given the different ways encoders build their video files, what's the best way to accommodate all of them?
Context
I've developed a workflow for our office that allows us to quickly queue encode jobs in Adobe Premiere Pro. To queue them locally, I made use of Premiere's CEP API. I can easily send a job to Adobe Media Encoder (on the same machine) and it will automatically encode the video file to the relative project directory. This works great.
To queue encode jobs onto LAN workstations, I've taken a different approach, as the CEP API doesn't allow for any extensibility beyond the local machine. Instead I made use of Adobe Media Encoder's watch folders to detect added Premiere project files to a subfolder on our NAS (everything is on the NAS). This works great too.
Unfortunately, I'm unaware of a way for the queued encodes to be output to the relative project directory in the same way queuing locally does. I'm trying to find a way to do this by watching a common directory and moving finished files.
Since each video filename I'm queuing has this structure:
"projectName_sequenceName_givenName_renderType.mp4/.mxf" I've been able to move the files with this information easily. However, I'm struggling to accomodate for the different methods different encoding processes use. Different encoders - X264, MainConcept H264, etc - encode to disk differently.
Using Chokidar, I watched how different encoders build their files:
Example #1:
If I start a DNxHR MXF encode, it will first create the final .MXF container and then fill it. When it finishes, it writes the sidecar .XMP file. If the encode fails or is cancelled, the sidecar file is not written.
Example #2:
If I start a TMPG x264 encode, it will first create the final .mp4 container, then create a temporary file: '.mp4_00_' appended. It will then write some initial metadata to the final container, start encoding to the '.mp4_00_' and depending on file size, create additional temporary files, '_.mp4_01_', etc. Finally it writes some additional information to the container, then to the temporary files and then deletes the temporary files. If the encode fails or is cancelled, the files are deleted.
Example #3:
If I start a MainConcept H264 encode (Premiere's default), it will first create the audio temp file, in this case '.aac'. Then create another temp file '.mkv.md0'. Halfway through encoding, it will create the video container '.m4v', start encoding to that, create some more temporary files '.md7/md6', create the final container '.mp4', along with 'sbjo.tmp', copy the '.mkv' file and '.aac' into the '.mp4' container, add a '00' file, very quickly delete it and then finish writing the '.mp4' metadata. Some of this happens very quickly and Chokidar has not always picked it up. Unless the encoder is being inconsistent.
These are the three encode types I've observed, and they're the three we need and use. I suppose I could watch each of them differently, but my concern is that if we ever switched encoders, I'd have to rewrite the code to accommodate them. The watch-folder feature in Adobe Media Encoder recognises when files have finished encoding before attempting to use them. I haven't tested every format, but a good deal. Does Media Encoder accommodate each unique encoding process? Is it simply polling locked files? Or is there something I'm missing?
The code I have currently works fine for DNxHR MXFs, provided they don't fail or get cancelled. It struggles with the H.264/x264 examples: since the container file is created and then left untouched while the encoder writes to the temporary files, chokidar registers 'add' early, and because the file is locked the move fails. Obviously this works fine when simply copying or moving a finished video file. (A lock-polling workaround I'm considering is sketched after the config below.)
const watcher = chokidar.watch(['Z:/NETWORKRENDER/Finished/*.{mp4,mxf}'], {
  persistent: true,
  // On start, also process existing files
  ignoreInitial: false,
  followSymlinks: true,
  interval: 1000,
  // Hold the 'add' event until the file size has stopped changing
  awaitWriteFinish: {
    stabilityThreshold: 5000,
    pollInterval: 20000
  },
});
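The workaround I'm considering is to treat 'add' as a hint and only move the file once the encoder has released its lock. A rough sketch (the retry numbers are arbitrary, and buildDestination stands in for parsing the filename structure described above):

const fs = require("fs");

// On 'add', retry the move until the encoder releases its lock
function moveWhenUnlocked(file, dest, retries = 60) {
  fs.rename(file, dest, (err) => {
    if (!err) return; // moved successfully, so the encoder was done with it
    if ((err.code === "EBUSY" || err.code === "EPERM") && retries > 0) {
      // still locked by the encoder; try again in 5 seconds
      setTimeout(() => moveWhenUnlocked(file, dest, retries - 1), 5000);
    } else {
      console.error("move failed:", err);
    }
  });
}

watcher.on("add", (file) => moveWhenUnlocked(file, buildDestination(file)));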

Will --vout=dummy option work with --video-filter=scene?

I am trying to create snapshots from a video stream using the "scene" video filter. I'm on Windows for now, but this will run on Linux. I don't want the video output window to display. I can get the scenes to generate if I don't use the --vout=dummy option; when I include that option, it does not generate the scenes.
This example on the Wiki indicates that it's possible. What am I doing wrong?
Here is the line of code from the LibVLCSharp code:
LibVLC libVLC = new LibVLC("--no-audio", "--no-spu", "--vout=dummy", "--video-filter=scene", "--scene-format=jpeg", "--scene-prefix=snap", "--scene-path=C:\\temp\\", "--scene-ratio=100", $"--rtsp-user={rtspUser}", $"--rtsp-pwd={rtspPassword}");
For VLC 3, you will need to disable hardware acceleration, which seems incompatible with the dummy vout.
In my tests, it was needed to do that on the media rather than globally:
media.AddOption(":avcodec-hw=none");
I still have many "Too high level or recursion" errors, and for those, I guess you'd better open an issue on VideoLAN's trac.

Can you read the length of an mp3 file in python 3 on windows 10?

I am currently creating a music player in Python 3.3, and I have a way of opening the mp3/wav files, namely 'os.startfile()'. However, running files this way means that if I run more than one, the second cancels the first, the third cancels the second, and so on, so I only end up running the last file. So, basically, I would like a way of reading the mp3 file's length so that I can use 'time.sleep(SongLength)' between the start of each file.
Thanks in advance.
EDIT:
I forgot to mention that I would prefer to do this using only pre-installed libraries, as I am hoping to publish this online as part of a (much) larger program.
I've managed to do this using an external module; after ages of trying to do it without one, I gave up and used tinytag, as it is easy to install and use.
There's nothing you can do without external libraries, as far as I know. Try using pymad.
Use it like this:
import mad

# MadFile decodes the mp3's headers on open
SongFile = mad.MadFile("something.mp3")
# total_time() returns the track length in milliseconds
SongLength = SongFile.total_time()

Why are images not fully shown with my streaming app?

I've written a simple Node.js app that streams content (primarily for videos, but other stuff works too). It seems to work fine except with some jpegs, where the bottom of the image is not always displayed.
Here's an example, with the bottom portion of the picture missing on the left.
A site showing the problem is here and an identical-looking site without the problem is here. (There should be pink bits at the bottom of most images with the first link.)
The code for the app is on Github, but I guess the important lines are
stream = fs.createReadStream(info.file, { flags: "r", start: info.start, end: info.end });
stream.pipe(res);
(So all of the heavy lifting is done by stream.js, not my code.)
I've tried removing the start and end params, but it makes no difference.
If I change it to stream the file to process.stdout instead and save the output to a file, the whole image is shown.
A file comparison program says the file from stdout and the original are identical.
If I transfer something other than a baseline jpeg (so even progressive jpegs work), it shows the whole file.*
If I access the file via the linked Node.js app (which uses Express), but not through the streamer, it shows the whole image.
If I save the incomplete picture from my browser to the desktop, it saves the whole image.
For some jpegs it works, but not for most.
The problem happens locally and on my server (Windows 2008 R2 and Ubuntu 11 respectively).
Browser tests (over 3 computers & 1 VM)
Safari 4, 5 - works
Firefox 4, 11 (2 computers) - doesn't work
Chrome 18 (2 computers) - doesn't work
IE 8 - doesn't work
IE 9 - works
Opera 11 - works
I really have no idea what's going on here. I thought it could be a weird graphics driver bug, but the behaviour is consistent across all computers I've tried. I find it hard to blame Node.js / my code for this as it does seem to be transferring the whole of the file, but then it works fine without it.
Happy to supply more details if needed.
* Obviously I haven't tried every possible file type, but pngs, progressive jpegs, text and videos seem to work.
It turns out that the problem was the file size sent in the header (header["Content-Length"]). I'd set this to info.end - info.start, whereas the actual size is one more than that. Some browsers didn't like this and cut off the end of the picture when displaying it on screen. Oops.
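For anyone hitting the same thing, the fix is a one-character change (assuming header is the object passed to res.writeHead, as in my code):

// createReadStream's start/end are inclusive, so the byte count is end - start + 1
header["Content-Length"] = info.end - info.start + 1;
stream = fs.createReadStream(info.file, { flags: "r", start: info.start, end: info.end });
stream.pipe(res);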
