How can I make jPlayer show the track duration before hitting play?

I'm using jPlayer to show an audio player for mp3 files, and the duration field shows 00:00 until I hit the play button, at which point it shows the correct duration while playing. (Note that this is a different symptom from the one in this similar question.)
How can I get jPlayer to pick up and display the duration before the user hits the play button (so they know how long the audio clip is before deciding to play it)? Thanks.

It turns out that jPlayer supports the HTML5 preload attribute, so adding preload: 'auto' to the jPlayer constructor options fixes this problem and displays the track duration. However, this also downloads the whole audio file, which is annoying, because it can use a lot of bandwidth on an audio track the user may never play. It's even worse in my case, because I've got a page with a dozen user stories, and the page downloads all of them.
Frustratingly, there's an option preload: 'metadata' that sounds like it should do exactly what I want, namely download the track metadata without downloading the whole file, but it doesn't seem to work. This is apparently something browsers should support, but don't yet.
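For reference, here's a minimal sketch of the constructor call (the element ID and file path are hypothetical; only the preload line is the fix):
$("#jquery_jplayer_1").jPlayer({
    ready: function () {
        $(this).jPlayer("setMedia", {
            mp3: "/audio/clip.mp3" // hypothetical track
        });
    },
    supplied: "mp3",
    preload: "auto" // 'metadata' would be cheaper, but see the caveat above
});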
For now, I'll either skip this feature or build a server-side piece that checks the duration and returns it via a separate AJAX call.
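A sketch of what that server-side piece might look like (assuming Node.js with the express and music-metadata npm packages, neither of which is part of my current setup; music-metadata reads header info without decoding the whole file):
const express = require("express");   // assumed dependency
const mm = require("music-metadata"); // assumed dependency

const app = express();
app.get("/duration/:track", async (req, res) => {
    // NOTE: validate req.params.track against a whitelist in real code
    const metadata = await mm.parseFile("audio/" + req.params.track);
    res.json({ seconds: metadata.format.duration });
});
app.listen(3000);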

Related

How to Playback Multiple Audio Files Starting at Different Times

I have two audio files of different durations. I want to play them simultaneously, with the shorter file starting in the middle of the longer one.
I've enabled media synchronization with the app launch setting --sout-all --sout #display.
Swapping between the input-master and -slave settings results in either the shorter file not playing or nothing being played back at all.
How can this be done in VLC?
As of this writing, staggered audio playback or recording, where files start and end at different times, cannot be done with a single instance of VLC alone, without custom addons (if any exist). This case is not an intended use of the standard VLC application.
It is possible, though cumbersome, to get staggered playback with two or more instances of VLC by manually starting and stopping each audio track.
ALTERNATIVES
Alternatively, there are numerous online audio editing tools, many of them free, that let you upload audio files as separate tracks for playback, editing, mixing, or recording, and then download the result; these are far more capable than an uncustomized VLC.
A web search for "audio editors online" will produce a lengthy list of options.

Icecast: strange behaviour with repeated ends of tracks, as well as pitch changes, from my Icecast server

I only began using Icecast a few days ago, so if I've stuffed something up somewhere, please let me know.
I have a weird problem with Icecast. Every time a track "finishes" on Icecast, a section of the end of the currently playing track (I think 64 KB of it) is repeated two or three times before the next song plays, and the next song doesn't begin from the start, but a few seconds in. I also notice that the playback speed (and hence the pitch) sometimes differs from the original.
I consulted this post and this post (quoted below), which taught me what the <burst-on-connect> and <burst-size> tags are used for. It also taught me this:
What's happening here is that nothing is being added to the buffer, so clients connect, get the contents of that buffer, and then the stream ends. The client must be re-connecting repeatedly, and it keeps getting that same buffer.
Cheers to Brad for that post. A solution offered in the comments of that post was to decrease the <source-timeout> of the Icecast server, so that it closes the connection quicker and stops any repeating. But this assumes I want to close the mountpoint, and I don't, because I'm actually using Icecast as a 24/7 radio player. If I did close my mountpoint, VLC would just stop and wouldn't repeatedly attempt to reconnect. Unless this is wrong. I don't know.
I use VLC to listen to the Icecast streams, and I use nodeshout, a set of libshout bindings for Node.js, to send data to a bunch of mounts on my Icecast server. In the future I plan to build a site that will listen to the Icecast streams, replacing VLC.
icecast.xml
<limits>
<clients>100</clients>
<sources>4</sources>
<queue-size>1008576</queue-size>
<client-timeout>30</client-timeout>
<header-timeout>15</header-timeout>
<source-timeout>30</source-timeout>
<burst-on-connect>1</burst-on-connect>
<burst-size>252144</burst-size>
</limits>
This is a summary of the audio sending code on my node.js server.
nodejs
// These lines are a smaller part of a function; they set up all the connection information. The variables name, description, etc. come from the function's arguments.
var nodeshout = require("nodeshout");
let shout = nodeshout.create();
shout.setHost('localhost');
shout.setPort(8000);
shout.setUser('source');
shout.setPassword(process.env.icecastPassword); //password in .env file
shout.setName(name);
shout.setDescription(description);
shout.setMount(mount);
shout.setGenre(genre);
shout.setFormat(1); // 0=ogg, 1=mp3
shout.setAudioInfo('bitrate', '128');
shout.setAudioInfo('samplerate', '44100');
shout.setAudioInfo('channels', '2');
return shout
// Meanwhile, somewhere lower in the file, this is a summary of how the audio is sent to the Icecast server
var nodeshout = require("nodeshout")
var {FileReadStream, ShoutStream} = require("nodeshout") //here is where the FileReadStream and ShoutStream functions come from
const filecontent = new FileReadStream(pathToSong, 65536); // if I change the 65536 to a higher value, more bytes are repeated at the end of the track; if I decrease it, playback starts sounding buggy and off
var streamcontent = filecontent.pipe(new ShoutStream(shoutstream))
streamcontent.on('finish', () => {
next()
console.log("Track has finished on " + stream.name + ": " + chosenTrack)
})
I also notice weirder behaviour: only after the previous song has had its last chunk repeated a few times does the streamcontent.on('finish') handler in the Node.js script fire, and only then am I warned that the track has finished.
What I have tried
I tried messing around with the <source-timeout> tag, the number of bytes (or bits, I'm not sure) being sent from Node.js, and the burst size. I also tried turning bursting off completely, but that results in super strange behavior.
I also thought creating a new stream per song was a bad idea (see new ShoutStream(shoutstream) when piping the file data), but using the same stream meant the program would throw an error, because it would write the next track to the shoutstream after the stream had reported itself closed.
If any more information is necessary to figure out what is going on, I can provide it. Thanks for your time.
Edit: I would like to add: do you think I should manually control how many bytes are sent to Icecast and reuse the same stream object, instead of creating a new one every time?
I found out why the stream didn't play some tracks properly, as opposed to others.
How I got there
I could not switch to ogg/vorbis or ogg/opus for my stream, so I had to do something with my source client. I double-checked that everything was correct and that my audio files were at the correct bitrate. When I ran ffprobe audio.mp3, the reported bitrate sometimes did not adhere to the typical rates of 128 kbps, 192 kbps, 320 kbps, and so on. It was always some strange value such as 129852, just to give an example.
I then downloaded the Checkmate mp3 checker here and checked my audio files, and they were all encoded at a variable bitrate!!! VBR, damn it!
TLDR
I fixed my problem by re-encoding all my tracks to a constant bitrate of 128 kbps using ffmpeg.
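For anyone else doing this, here is roughly how the batch re-encode can be scripted; a sketch assuming Node.js with ffmpeg on the PATH, and the directory names are made up:
const { execFileSync } = require("child_process");
const fs = require("fs");
const path = require("path");

const srcDir = "./tracks";    // hypothetical source directory
const outDir = "./tracks-cbr";
fs.mkdirSync(outDir, { recursive: true });

for (const file of fs.readdirSync(srcDir)) {
    if (path.extname(file) !== ".mp3") continue;
    execFileSync("ffmpeg", [
        "-y",                 // overwrite output if it exists
        "-i", path.join(srcDir, file),
        "-codec:a", "libmp3lame",
        "-b:a", "128k",       // force a constant 128 kbps
        path.join(outDir, file),
    ]);
}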
Quick edit: I'm pretty sure programs such as Darkice may already support variable-bitrate transfers to Icecast servers, but it would be impractical for me to use Darkice, hence why I stuck with nodeshout.

same mp3 files behave differently in iPhone app

I have 2 mp3 files that are nearly identical. The first file refused to stream down and play in my Appcelerator iPhone app that I am developing:
http://www.zerogravpro.com/temp/bad.mp3 (you'll find you can play this just fine in your browser, or download it and it plays fine)
This is 100% reproducible; it's not sporadic at all. The actual behavior is that the file begins to play in the iPhone media player for just a split second, then stops with some kind of "unknown" error. So I took that file, opened it in Audacity, removed the first split second of silence from the beginning of the clip, and regenerated the mp3:
http://www.zerogravpro.com/temp/good.mp3
And this one works perfectly in the iPhone app! 100% success each and every time. I have many mp3 files similar to bad.mp3, in that they play fine on any audio device but error out when streamed/played in the iPhone media player. Audacity fixed it somehow, and I need to know how and why, so that I can automate the fix for my hundreds of other mp3 files. I'd love not to have to open hundreds of files in Audacity and re-save them. There must be some way to automate these fixes. How did Audacity fix the file? What did it do? I can only think of 2 possibilities:
The existence of a split second of silence at the beginning of the clip chokes the iPhone
Audacity fixes something non-obvious in the mp3
Experts: any idea what the difference is between these 2 files, and how I could automatically turn "bad" mp3s into good ones with some command-line tool or something? Thanks, all.
I discovered that the only difference between the files that actually matters is the size. iPhone apps (at least in the simulator), when using the audio streaming library, choke on any file under 40 KB. So you have to use the standard sound library for the smaller files.
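A sketch of the resulting branching logic (the two player helpers are hypothetical; only the ~40 KB threshold comes from the finding above):
var STREAM_MIN_BYTES = 40 * 1024; // streaming chokes below ~40 KB (see above)

function playRemoteMp3(url, sizeInBytes) {
    if (sizeInBytes < STREAM_MIN_BYTES) {
        playWithSoundLibrary(url); // hypothetical: the standard sound API
    } else {
        playWithStreamer(url);     // hypothetical: the streaming audio library
    }
}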

low latency sounds on key presses

I am trying to write an application (I'm a GUI first-timer) for my son, who has autism. There is a video player in the top half and a text entry area in the bottom. When letters are typed, sounds are produced to mimic the words in the video.
There have been other posts on this site about playing sounds on key presses using GStreamer as a system call. I have also tried libcanberra, but both seem to have significant delays between sounds. I can write the app in Python or C, but will likely do at least some of it in C.
I also want to mention that the video portion is played by GStreamer. I tried to create two instances of GStreamer to avoid expensive system calls, but the audio instance seemed to kill the app when called.
If anyone has any tips on creating faster responding sounds I would really appreciate it.
You can upload a raw audio sample directly to PulseAudio, so there will be no decoding and (perhaps) no extra context switches, by using the following function from libcanberra:
http://developer.gnome.org/libcanberra/unstable/libcanberra-canberra.html#ca-context-cache
The next ca_context_play() will use it.
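A rough sketch in C (libcanberra's native language) of the cache-then-play pattern; the event id and file path are invented for the example:
#include <canberra.h>

static ca_context *ctx;

/* Call once at startup: create the context and upload the sample
 * to the sound server so later plays skip file I/O and decoding. */
void init_sounds(void) {
    ca_context_create(&ctx);
    ca_context_cache(ctx,
                     CA_PROP_EVENT_ID, "key-press",
                     CA_PROP_MEDIA_FILENAME, "/usr/share/sounds/click.wav",
                     NULL);
}

/* Call on each key press: plays the cached sample by its event id. */
void on_key_press(void) {
    ca_context_play(ctx, 0,
                    CA_PROP_EVENT_ID, "key-press",
                    NULL);
}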
However, the biggest problem you'll encounter in this scenario (with simultaneous video playback) is that PulseAudio may configure the audio device with a large latency (up to half a second or more for normal playback). It may be reasonable to file a bug against libcanberra to support a LOW_LATENCY flag, as it currently doesn't attempt to minimize delay for sound events, AFAIK. That would be great to have.
GStreamer's pulsesink could probably achieve low latency too (it has some properties for that), but I'm afraid it won't be as lightweight as libcanberra, and you won't be able to cache a sample, for instance. Ideally, GStreamer could also learn to cache samples, or pre-fill PulseAudio...

How does YouTube support starting playback from any part of the video?

Basically, I'm trying to replicate YouTube's ability to begin video playback from any part of a hosted movie. So if you have a 60-minute video, a user could skip straight to the 30-minute mark without streaming the first 30 minutes of video. Does anyone have an idea how YouTube accomplishes this?
Well the player opens the HTTP resource like normal. When you hit the seek bar, the player requests a different portion of the file.
It passes a header like this:
Range: bytes=10001-
and the server serves the resource from that byte range (here, everything from byte 10001 to the end of the file). Depending on the codec, the player will need to read until it gets to a sync frame to begin playback.
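For illustration, a minimal sketch of such a request in JavaScript (the URL is hypothetical; any server that honours Range requests responds this way):
fetch("https://example.com/video.mp4", {
    headers: { "Range": "bytes=10001-" } // everything from byte 10001 onward
}).then(function (res) {
    console.log(res.status);                       // 206 Partial Content
    console.log(res.headers.get("Content-Range")); // e.g. "bytes 10001-1048575/1048576"
});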
Video is a series of frames played at a frame rate. That said, there are some rules about the order in which frames can be decoded.
Essentially, you have reference frames (called I-frames) and you have modification frames (called P-frames and B-frames)... It is generally true that a properly configured decoder will be able to join a stream at any I-frame (that is, start decoding), but not at P- or B-frames... So, when the user drags the slider, you're going to need to find the closest I-frame and decode from there...
This may of course be hidden under the hood of Flash for you, but that is what it will be doing...
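If you do need to locate the I-frames yourself, here is a hedged sketch (assuming ffprobe from an ffmpeg install; the filename is made up) that lists keyframe timestamps so a seek can snap to the nearest one:
const { execFileSync } = require("child_process");

const out = execFileSync("ffprobe", [
    "-select_streams", "v:0",
    "-skip_frame", "nokey",            // only report keyframes (I-frames)
    "-show_entries", "frame=pts_time", // older ffprobe versions call this pkt_pts_time
    "-of", "csv=p=0",
    "video.mp4",                       // hypothetical input
], { encoding: "utf8" });

const keyframeTimes = out.trim().split("\n").map(Number);
console.log(keyframeTimes); // e.g. [ 0, 2.002, 4.004, ... ]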
I don't know how YouTube does it, but if you're looking to replicate the functionality, check out Annodex. It's an open standard based on Ogg Theora, but with an extra XML metadata stream.
Annodex allows you to have links to named sections within the video or temporal URIs to specific times in the video. Using libannodex, the server can seek to the relevant part of the video and start serving it from there.
If I were to guess, it would be some sort of selective data retrieval, like the Range header in HTTP. That might even be what they use. You can find more about it here.
