Chrome browser cannot play WAV audio files with a 1000 Hz sample rate - frontend

I can see that the browser has loaded a WAV file with a sample rate of 1000 Hz, but the canplaythrough event is never triggered, and manually clicking the play button does not play the file (a WAV file with an 8000 Hz sample rate plays smoothly). After downloading the 1000 Hz WAV file, it plays fine in the player that comes with my computer. Does Chrome restrict playback of audio files with low sample rates, or is something wrong in my front-end code or browser settings? Is there any way to get the browser to play a 1000 Hz WAV file smoothly? I would appreciate any ideas.

The specification doesn't spell this out for <audio> elements, but the Web Audio API specification requires that audio buffers with sample rates between 8000 and 96000 Hz be supported. That requirement applies to programmatically created buffers, but it's reasonable to assume the same machinery is used internally when playing <audio> tags.
It's up to browser vendors whether they want to support sample rates outside of this range, but they don't have to (and apparently Chrome doesn't).
Note that there are further constraints and browser differences with regard to supported codecs and file formats; for details see this and this page.
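One possible workaround (a minimal sketch, not an official fix): bypass the <audio> element entirely, decode the PCM samples yourself, upsample them to a rate Chrome is required to support (e.g. 8000 Hz), and play the result through the Web Audio API. The TypeScript sketch below assumes an uncompressed 16-bit mono PCM WAV with a standard 44-byte header and a caller-supplied URL; a real implementation should parse the RIFF/fmt chunks instead of hard-coding those values.

// Sketch: play a 1000 Hz WAV in Chrome by upsampling it to 8000 Hz.
// Assumes 16-bit mono PCM with a plain 44-byte header; parse the
// RIFF/fmt chunks properly in production code.
async function playLowRateWav(url: string): Promise<void> {
  const raw = await (await fetch(url)).arrayBuffer();
  const pcm = new Int16Array(raw, 44);        // skip the 44-byte WAV header
  const srcRate = 1000;
  const dstRate = 8000;                       // inside the mandated 8000-96000 Hz range
  const factor = dstRate / srcRate;           // 8x upsampling

  const ctx = new AudioContext();
  const buffer = ctx.createBuffer(1, pcm.length * factor, dstRate);
  const out = buffer.getChannelData(0);

  // Linear interpolation between neighbouring source samples.
  for (let i = 0; i < out.length; i++) {
    const pos = i / factor;
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, pcm.length - 1);
    const frac = pos - i0;
    out[i] = (pcm[i0] * (1 - frac) + pcm[i1] * frac) / 32768; // Int16 -> [-1, 1]
  }

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();                             // call from a user gesture (autoplay policy)
}

Simply repeating each source sample eight times would also work; linear interpolation just softens the stair-step artifacts.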

Related

Routing AVPlayer audio output to AVAudioEngine

Due to the richness and complexity of my app's audio content, I am using AVAudioEngine to manage all audio across the app. I am converting every audio source to be represented as a node in my AVAudioEngine graph.
For example, instead of using AVAudioPlayer objects to play mp3 files in my app, I create AVAudioPlayerNode objects using buffers of those audio files.
However, I do have a video player in my app that plays video files with audio using AVPlayer (I know of nothing else in iOS that can play video files). Unfortunately, there seems to be no way I can obtain the audio output stream as a node in my AVAudioEngine graph.
Any pointers?
If you have a video file, you can extract the audio data from it.
Then set the volume of the AVPlayer to 0 (if you didn't remove the audio data from the video) and play the audio through an AVAudioPlayerNode.
If you receive the video data over the network, you will have to write a parser for the packets and demux the streams yourself.
Either way, A/V sync is a very tough problem.

How do you automatically detect device dimensions when using HTTP Live Streaming?

I have an app that delivers video content using HTTP Live Streaming. I want the app to retrieve the appropriate resolution based on the device's screen aspect ratio (either 4:3 or 16:9). I ran Apple's tool for creating the master .m3u8 playlist file (variantplaylistcreator) and got the following:
#EXTM3U
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=248842,BANDWIDTH=394849,CODECS="mp4a.40.2, avc1.4d4028",RESOLUTION=480x360
4x3/lo/prog_index.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=384278,BANDWIDTH=926092,CODECS="mp4a.40.2, avc1.4d4028",RESOLUTION=480x360
4x3/mid/prog_index.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=787643,BANDWIDTH=985991,CODECS="mp4a.40.2, avc1.42801e",RESOLUTION=480x360
4x3/hi/prog_index.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=249335,BANDWIDTH=392133,CODECS="mp4a.40.2, avc1.4d4028",RESOLUTION=640x360
16x9/lo/prog_index.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=384399,BANDWIDTH=950686,CODECS="mp4a.40.2, avc1.4d4028",RESOLUTION=640x360
16x9/mid/prog_index.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=780648,BANDWIDTH=987197,CODECS="mp4a.40.2, avc1.42801e",RESOLUTION=640x360
16x9/hi/prog_index.m3u8
This does cause my live stream to switch between quality levels correctly, but it seems to pick at random whether it uses a 4:3 or a 16:9 rendition.
Is there a way to have it select the correct dimensions automatically, or do I need multiple playlist files, with each device requesting a specific one? For example, on an iPad do I need to detect that its screen has a 4:3 aspect ratio and request a 4x3_playlist.m3u8 that only has the 480x360 resolution options?
Update 2017:
Keeping the same aspect ratio is only a recommendation in the latest HLS authoring guide:
1.33. All video variants SHOULD have identical aspect ratios.
Original answer:
Audio/Video Stream Considerations:
Video aspect ratio must be exactly the same, but can be different dimensions.
Apple Technical Note TN2224 - Best Practices for Creating and Deploying HTTP Live Streaming Media for the iPhone and iPad
Select a playlist based on the User-Agent instead.
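As an illustration of that approach (a hypothetical sketch, not anything from Apple's tooling: the route, file names, and use of Node/Express are all assumptions), the server can hand out a different master playlist depending on the requesting device:

// Hypothetical TypeScript/Express route: pick a master playlist per device.
// "playlist_4x3.m3u8" / "playlist_16x9.m3u8" are made-up file names.
import express from "express";
import path from "path";

const app = express();

app.get("/stream/master.m3u8", (req, res) => {
  const ua = req.get("User-Agent") ?? "";
  // iPads have 4:3 screens; send everything else the 16:9 variants.
  const playlist = /iPad/i.test(ua) ? "playlist_4x3.m3u8" : "playlist_16x9.m3u8";
  res.type("application/vnd.apple.mpegurl")
     .sendFile(path.join(__dirname, "playlists", playlist));
});

app.listen(8080);

Each of the two playlists then contains only the variants for its aspect ratio, so the player's bitrate switching never crosses between 4:3 and 16:9 renditions.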

What exactly does bitrate mean in a video/audio file?

I use ffmpeg to convert videos from one format to another.
Is bitrate the only parameter which decides the output size of a video/audio file?
Yes, bitrate is essentially what will control the file size (for a given playback duration). It is the number of bits used to represent each second of material.
However, there are some subtleties, e.g.:
- a video file encoded at a certain video bitrate probably contains a separate audio stream with a separately specified bitrate
- most file formats contain some metadata that isn't counted towards the basic video stream bitrate
- sometimes the encoder will not actually aim for the specified bitrate at all, for example in CRF (constant rate factor) mode; http://trac.ffmpeg.org/wiki/x264EncodingGuide explains why two-pass encoding is preferred when targeting a specific file size
So you may want to do a little experimenting with a particular set of options for a particular file format.
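As a rough sanity check (a back-of-the-envelope sketch with made-up numbers, ignoring container overhead and the caveats above), the expected size is just the combined bitrate multiplied by the duration:

// Rough file-size estimate: size ~ (video bitrate + audio bitrate) * duration / 8.
// Ignores container overhead, metadata, and rate-control inaccuracy.
function estimateSizeMB(videoKbps: number, audioKbps: number, seconds: number): number {
  const totalKbits = (videoKbps + audioKbps) * seconds;
  return totalKbits / 8 / 1024;  // kbit -> kB -> MB
}

// e.g. 10 minutes at 1000 kbps video + 128 kbps audio:
console.log(estimateSizeMB(1000, 128, 600).toFixed(1)); // ~82.6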
Bitrate also indicates the quality of an audio or video file.
For example, an MP3 audio file compressed at 192 kbps will have a greater dynamic range and may sound slightly clearer than the same audio file compressed at 128 kbps, because more bits are used to represent the audio data for each second of playback.
Similarly, a video file compressed at 3000 kbps will look better than the same file compressed at 1000 kbps. Just as the quality of an image is measured by its resolution, the quality of an audio or video file is measured by its bitrate.

Setting dwScale and dwRate values in the AVISTREAMHEADER structure when muxing AVI

While capturing from audio and video sources and encoding to an AVI container, I set audio as the master stream to keep audio and video synchronized, and this gave the best synchronization result.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd312034(v=vs.85).aspx
However, this method yields a higher FPS value as a result: about 40 or 50 instead of 30 FPS.
If the media file is simply played back, everything is fine, but if it is re-encoded to another video format with different software, it comes out of sync.
How can I programmatically set the dwScale and dwRate values in the AVISTREAMHEADER structure when muxing AVI?
MSDN:
This method works by adjusting the dwScale and dwRate values in the AVISTREAMHEADER structure.
You requested that the multiplexer manage the scale/rate values (by designating a master stream), so you cannot adjust them yourself. You should be seeing more odd things in your file, not just a higher FPS. The file itself is probably out of sync, and as soon as you process it with other applications that don't do playback fine-tuning, you start seeing issues. The video media type may also advertise one frame rate while the effective rate is different.

Audio Playback control in C++

I'm working on a project that requires me to sync audio playback (preferably an MP3 file) with my program.
My program reads motion data from a txt file and outputs it to the serial port at a particular rate. At the same time, an audio file has to be played back through the speaker, in sync with the data; that is to say, after transmitting, say, 100 bytes of data, the audio must have played back to a predefined point in time.
What tools could be used to play and control audio like this?
A tutorial would be great!
Thanks!!
In general, when working with audio, you want to synchronize the other sources to the audio. This is for several reasons, but the most important is that audio runs on a clock on its own hardware, and you have to get timing information from that clock. There is a guide here, written for PortAudio, but the principles apply to other situations:
http://www.portaudio.com/docs/portaudio_sync_acmc2003.pdf
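The same principle can be sketched in the browser's Web Audio API (an analogy only; this is not PortAudio and not C++): read deadlines off the audio hardware clock, ctx.currentTime, rather than a wall-clock timer, and fire the non-audio work (here, a hypothetical sendChunk callback) when the audio clock reaches them:

// Analogy in TypeScript/Web Audio: slave other events to the audio clock
// (ctx.currentTime) instead of Date.now()/setInterval, which drift against it.
const ctx = new AudioContext();  // may need ctx.resume() after a user gesture

function driveFromAudioClock(sendChunk: () => void, intervalSec: number): void {
  let deadline = ctx.currentTime + intervalSec;
  const tick = () => {
    // Fire when the audio clock, not the wall clock, passes the deadline.
    if (ctx.currentTime >= deadline) {
      sendChunk();              // e.g. write the next 100 bytes to the device
      deadline += intervalSec;
    }
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}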
