jPlayer 'play' command after 'pause' with time - jplayer

Hello, I am using jPlayer for my solution. I load the media, and on ready I use the ('pause', time) command.
After that, if I use a ('play') command, my media starts playing from the beginning.
Am I doing something wrong? The jPlayer dev guide says this about the play method:
"Open media will play from where the play-head was when previously paused using jPlayer("pause", [time])."

I encountered a similar problem when dealing with a lost connection: jPlayer resumed playback from the beginning when the .play() method was called. So, at some point (in my case, the error handler), I executed $("#jquery_jplayer_1").jPlayer("play", event.jPlayer.status.currentTime);. It takes the current position and tells jPlayer to play from that exact position. Works flawlessly for me.
But please provide some code; maybe your problem is completely different.
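As a sketch of that workaround (assuming the standard jPlayer jQuery plugin and the #jquery_jplayer_1 element from the snippet above; the helper names pauseAt/resume are made up for this example):

```javascript
// Remember the play-head position when pausing, and pass it back to "play"
// so playback resumes from that position instead of restarting at 0.
// Assumes the jPlayer jQuery plugin is already loaded and initialised.
var lastPosition = 0;

function pauseAt(time) {
  lastPosition = time;
  $("#jquery_jplayer_1").jPlayer("pause", time);
}

function resume() {
  // Passing the saved position works around "play" starting from the beginning.
  $("#jquery_jplayer_1").jPlayer("play", lastPosition);
}
```

You can also read the live position from event.jPlayer.status.currentTime inside any jPlayer event handler, as in the answer above.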

Related

Issue with ytdl-core - discord.js (Lag on adding a new song)

When my bot is playing music and someone adds a new song to the queue, the current song lags for a second or so. As far as I can tell, it's due to downloading the info about the song, but it's weird.
Does anyone know how to solve this?
I'm executing the code in an asynchronous function.
I don't have your source code, but you're probably using the yt-search lib to handle your search command, right?
If so, that is the problem. That lib parses the result HTML from YouTube synchronously, which blocks the Node event loop.
I recommend using the YouTube v3 API instead. There are several libs for it:
https://www.npmjs.com/package/youtube-search
https://www.npmjs.com/package/youtube-search-without-api-key
I hope this helps.
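The blocking behaviour described above can be reproduced without YouTube at all. A minimal, self-contained sketch (the busy-wait stands in for yt-search's synchronous HTML parsing, and the setImmediate tick stands in for the audio stream):

```javascript
const order = [];

// Stands in for the audio stream, which needs regular turns of the event loop.
setImmediate(() => order.push('stream-tick'));

// Simulates yt-search: a CPU-bound synchronous parse that monopolises the loop.
function blockingSearch() {
  const end = Date.now() + 200;
  while (Date.now() < end) {} // busy-wait: the "stream" cannot tick
  return ['result'];
}

// Simulates an async client for the YouTube Data v3 API: it yields immediately.
function asyncSearch(callback) {
  setImmediate(() => callback(null, ['result']));
}

blockingSearch();
order.push('blocking-search-done');

asyncSearch((err, results) => {
  order.push('async-search-done');
  // order is now ['blocking-search-done', 'stream-tick', 'async-search-done']:
  // the blocking search finished before the "stream" got a single tick.
  console.log(order);
});
```

That pause in the middle is exactly the one-second lag: while the synchronous parse runs, the voice stream cannot be serviced.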

HTML5 Audio long buffering before playing

I'm currently making an Electron app that needs to play a ~40 MB audio file from the file system. Maybe this is the wrong approach, but the only way I found to play a file from anywhere in the file system is to convert it to a data URL in the background script and then transfer it using IPC. After that I simply do
this.sound = new Audio(dataurl);
this.sound.preload = "metadata"
this.sound.play()
(part of a VueJS component hence the this)
I did some profiling inside Electron and this is what came out:
Note that actually transferring the 40 MB audio file doesn't take that long (around 80ms). What is extremely annoying is the "Second Task", which is probably buffering (I have no idea) and lasts around 950ms. This is way too long; ideally I would need it under 220ms.
I've already tried every available value of the preload option, and while I'm using the native HTML5 Audio right now, I've also tried howler.js with similar results (it seemed a bit faster, though).
I would guess that loading the file directly might be faster, but even after disabling the security measures Electron puts in place to block file:/// URLs, it isn't recognized as a valid URI by XHR.
Is there a faster way to load the data URL? All the data is there; it just needs to be converted to a buffer or something like that.
Note: I cannot "pre-buffer" every file in advance, since there are about 200 of them; it just wouldn't make sense in my opinion.
Update:
I found this post: Electron - throws Not allowed to load local resource when using showOpenDialog
I don't know how I missed it. I followed step 1 and I can now load files inside Electron with the custom protocol. However, neither Audio nor howler.js is faster; it's actually slower, at around 6 seconds from click to first sound. Does it need to buffer the whole file before playing?
Update 2:
It appears that the 6-second loading time only affects the first Audio instance that is created; I do not know why. After that, using two instances (one playing and one pre-buffering) works just fine, and even loading a file that hasn't been loaded yet is instantaneous. It seems weird that it's only the first one.

Will --vout=dummy option work with --video-filter=scene?

I am trying to create snapshots from a video stream using the "scene" video filter. I'm on Windows for now, but this will run on Linux. I don't want the video output window to display. I can get the scenes to generate if I don't use the --vout=dummy option; when I include that option, it does not generate the scenes.
This example on the Wiki indicates that it's possible. What am I doing wrong?
Here is the line of code from the LibVLCSharp code:
LibVLC libVLC = new LibVLC("--no-audio", "--no-spu", "--vout=dummy", "--video-filter=scene", "--scene-format=jpeg", "--scene-prefix=snap", "--scene-path=C:\\temp\\", "--scene-ratio=100", $"--rtsp-user={rtspUser}", $"--rtsp-pwd={rtspPassword}");
For VLC 3, you will need to disable hardware acceleration, which seems incompatible with the dummy vout.
In my tests, it was necessary to do that on the media rather than globally:
media.AddOption(":avcodec-hw=none");
I still get many "Too high level of recursion" errors, and for those, I guess you'd better open an issue on VideoLAN's trac.

Node fluent-ffmpeg killing process kills server - How to start and stop recorder?

I'm using fluent-ffmpeg in a Node application, recording from the screen/camera to an mp4 file. I would like one server request to start recording and another to stop it (this links to a web interface; I'm testing some tech with a view to making an Electron app later).
Starting is fine, but I cannot figure out how to stop it.
This is the code to start (to run on MacOS):
recordingProcessVideo = ffmpeg(`${screenID}:none`)
  .inputFormat('avfoundation')
  .native()
  .videoFilters(`crop=${width}:${height}:${x}:${y}`)
  .save(filePath);
This is what I thought would stop it, from the documentation and reading around the subject:
recordingProcessVideo.kill('SIGINT');
However, when I call this command, the server quits with the following ambiguous message:
code ELIFECYCLE
errno 1
Also, the video file produced will not open, as if it quit before completing. I can't work it out; from the docs and what people have written, starting and stopping the recorder should be a matter of creating the process and then killing it when ready. Does anyone know the correct way? I've been looking for ages but can't find any answers.
Using Node v10.15.2 and FFmpeg version V92718-g092cb17983 on macOS 10.14.3.
Thanks for any help.
I have solved the issue by tracing all the messages FFmpeg printed in the terminal. For some unknown reason, my installation of FFmpeg throws an error when finalizing the video and does not correctly close the file. This happens in the terminal as well, though the error doesn't really display, and it produces an MP4 that actually works in all video players (even the browser) with the exception of QuickTime, which is what I was using on this occasion.
To prevent the error from crashing my Node application, I just needed to add an error handler to the video call. I was adding a handler in my original code, but I was adding it to the process and NOT to the original call to FFmpeg. The code that works looks like this (I catch all of the end events and log them in this example):
recordingProcessVideo = ffmpeg(`${screenID}:none`)
  .inputFormat('avfoundation')
  .videoFilters(`crop=${width}:${height}:${x}:${y}`)
  .native()
  .on('error', error => console.log(`Encoding Error: ${error.message}`))
  .on('exit', () => console.log('Video recorder exited'))
  .on('close', () => console.log('Video recorder closed'))
  .on('end', () => console.log('Video Transcoding succeeded !'))
  .save(file.video);
I have two versions of FFmpeg on my laptop and both fail: the official release installed on my computer (v4.1.1) and the Node-packaged version my app is using (@ffmpeg-installer/ffmpeg), which will make distribution via Electron easier since it removes the dependency on FFmpeg being installed on the machine running the app. So the reason the video export fails is some mystery specific to my laptop, which I still have to figure out; but importantly, my code works now and is resilient to this failure.
Maybe it will help someone in the future.
To complete the ffmpeg conversion process, you need to run the conversion as follows:
recordingProcessVideo = ffmpeg(`${screenID}:none`)
  .inputFormat('avfoundation')
  .native()
  .videoFilters(`crop=${width}:${height}:${x}:${y}`)
  .save(filePath);
recordingProcessVideo.run();
And then you can stop your ffmpeg conversion command:
recordingProcessVideo.kill();
The key point is the .run() launch method: you need to call it once the command has already been assigned to the variable recordingProcessVideo.
After launching with recordingProcessVideo.run(); you will be able to stop it with recordingProcessVideo.kill();
The bottom line is that ffmpeg() only assigns the command to your variable recordingProcessVideo; if you call .run() immediately while creating the command, for example:
ffmpeg(`${screenID}:none`)
  .inputFormat('avfoundation')
  .save(filePath)
  .run();
then the variable recordingProcessVideo will be empty.
This is my first answer on this site, so please don't be too hard on my mistakes :)
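Putting the two answers together: a small start/stop wrapper keeps the handlers on the ffmpeg command (not the spawned process), so killing the recorder cannot crash the server. This is a sketch; makeRecorder and its method names are made up, and the factory parameter just makes the fluent-ffmpeg dependency explicit:

```javascript
// ffmpegFactory is fluent-ffmpeg's exported function, e.g.:
//   const recorder = makeRecorder(require('fluent-ffmpeg'));
function makeRecorder(ffmpegFactory) {
  let command = null;
  return {
    start(screenID, filePath) {
      command = ffmpegFactory(`${screenID}:none`)
        .inputFormat('avfoundation')
        .native()
        // Handlers go on the command itself, so errors during shutdown
        // are logged instead of taking the whole Node process down.
        .on('error', err => console.log(`Encoding error: ${err.message}`))
        .on('end', () => console.log('Recording finished'))
        .save(filePath);
      return command;
    },
    stop() {
      if (command) {
        command.kill('SIGINT'); // ask ffmpeg to finalise the file
        command = null;
      }
    },
  };
}
```

The HTTP start/stop routes can then simply call recorder.start(...) and recorder.stop() without touching the underlying process.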

wxWidgets application loop slideshow

I have a rather simple problem which I can't seem to solve.
I would like to write a slideshow program that also plays an audio file every time the slide changes. These audio files vary in length, and I do not want the program to advance to the next entry/picture until the sound has finished playing.
Currently I have implemented a loop:
void UI_BRAINWASH::PlaySound_top()
{
    wxString tmppath(parent->get_currentdirect() + parent->current_db.get_card(m_index)->get_topentryaudiopath());
    ISound* firstsound = this->engine->play2D(tmppath.mb_str(), false, false, true);
    while (engine->isCurrentlyPlaying(tmppath.mb_str()))
    {
        StaticTextTop->GetParent()->Update();
        //wxSleep(3);
    }
    m_timer->Start(1000);
}
and this loops through the entries as expected and everything is dandy...
However, I would like to be able to abort the program by pressing Escape, among other things, but the while loop obviously prevents exactly that.
I also noticed that I can't move my window or close the program while it is looping through the pictures.
So I have looked at threads and the wxIdleEvent class; wxwidgets/samples/threads/ contains a "worker thread" example, which seems to be what I need.
My question now is: aren't threads a bit of an overkill for a simple slideshow?
Is there another/better way of looping through my entries: waiting for the sound to finish playing, updating the GUI, and still being able to move the window around?
What is engine?
Most APIs for playing sounds let you start playing a sound file and then return immediately. They send an event when the sound is finished, and they also provide a call to interrupt a sound that is still playing. This is what you want.
Check the docs for whatever API you are using and find this feature. If the feature is not available, then you need to find another API that does offer it; most do.
