Weird "tick" sound at track end (audio)

I created an app that plays a playlist of small tracks. Everything was working fine until the Windows Phone 8.1 update.
The problem: there is a weird "tick" sound at the end of each track.
I tried playing the tracks in the Xbox Music player and it has the same tick. I also played the audio on my PC and on an Android device, and there it sounds fine, so I think it's a WP8.1 issue or a compatibility issue with my MP3 tracks.
So, are there any specifications an MP3 must meet to be compatible with WP8.1?
Or is there any workaround in code? I was thinking about muting the sound just before the track ends. By the way, I'm using AudioPlayerAgent.

All audio rendering pipelines face this same problem. Root cause: sound is a curve that varies above and below a centerline (typically from -1 to +1, with the centerline at 0). If the clip ends too far from the centerline, you get this pop/tick: the speaker is left in the lurch away from 0 and physically snaps back to 0 almost instantaneously, producing the tick. Solution: either the player "helps" the sound by forcing the clip to end at the centerline, or you do the same thing as a preprocessing step on the source media. This ending transition can happen quickly, but not instantaneously, or you'd be back where you started with an instantaneous jump to 0. Silence is simply the media supplying a series of zeros (i.e. samples at the centerline).
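As a concrete preprocessing sketch (my suggestion, not part of the original answer): pydub in Python can apply a short fade-out so each MP3 ends at the centerline. The file names are hypothetical, and pydub needs ffmpeg installed for MP3 decoding/encoding.

from pydub import AudioSegment

# Load the source track (pydub decodes MP3 via ffmpeg).
track = AudioSegment.from_mp3("track01.mp3")      # hypothetical input file

# Ramp the last 50 ms down to silence so the final samples sit at the
# centerline and the speaker is not left mid-excursion at the end.
track = track.fade_out(50)

# Write the processed track back out as MP3.
track.export("track01_faded.mp3", format="mp3", bitrate="192k")

A fade of a few tens of milliseconds is usually short enough to be inaudible while still avoiding the instantaneous jump to 0 described above.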

Related

Real duration of audio files played through browsers

I need to play 4 audio files through a web browser.
These files last 150 ms, 300 ms, 450 ms and 600 ms.
I don't care about latency (if a sound starts 100 ms late, that's not important for my purpose).
But I do care about the duration of these sounds: does the 150 ms file last exactly 150 ms, or is there an error introduced by the sound card or other components?
I know for sure that there is some error (I saw it in a test on a Mac).
My question is: can anyone point me to a paper, an article or anything that measures playback duration across different setups, or tell me whether this error is always very small (less than 10 ms, for example) regardless of platform (Windows, Mac, old device, new device)?
In other words: if I play a 100 ms sound, how long does it really last (100 ms? More? Less?)
In what manner is the sound not lasting the correct amount of time?
Does the beginning or the end get cut off?
Does the sound play back slower or faster than it should?
In my experience, I've never heard of playback-rate errors caused by the browser or the sound card. But I have come across situations where a sound is played back in a different audio format than the one it was encoded in. For example, a sound encoded at 48000 frames per second but played back at 44100 fps will take longer to play, yet will be very close to the original in pitch (maybe about a half step lower). As a diagnostic step, I recommend confirming the audio format used at each end. How to do so will depend on the systems being used.
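To put numbers on that example (a quick check in plain Python; the rates come from the answer above and the 100 ms clip length from the question):

encoded_rate = 48000      # frames per second the clip was encoded at
playback_rate = 44100     # frames per second the device actually plays
nominal_ms = 100          # intended clip duration

frames = encoded_rate * nominal_ms / 1000        # 4800 audio frames in the clip
actual_ms = frames / playback_rate * 1000        # duration when misinterpreted
print(round(actual_ms, 1))                       # 108.8

So a format mix-up of this kind stretches (or shrinks) the clip by the ratio of the two rates, roughly 8.8% here, which is far larger than the sub-10 ms error the question asks about.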

Audio Frame Repetition when combining audio clips in moviepy

Audio frames at the very end of a clip get repeated when I concatenate two or more video clips.
I tinkered with:
- the buffer size (writing with audio_bufsize=1000 works fine for now)
- the duration (because I observed that for a clip with 43.15 s of audio, the final video gets rounded up to 44.0 s, which adds a glitch; I guess the last audio buffer gets repeated for the extra 44.0 - 43.15 s)
com_vid.write_videofile(FINAL_OUT_VID,
                        fps=1,
                        codec="libx264",
                        audio_bitrate='192k',
                        audio_fps=44100,
                        audio_nbytes=2,
                        audio_codec="aac",
                        audio_bufsize=1000)  # fix issue for audio glitches
Writing with audio_bufsize=1000 works fine for now, but I am not sure whether it will work in every case. I need to write one long clip made of many small clips, so I need some advice/pointers on how to get a cohesive result.
Waveform screenshot: this is a case where the above code still breaks and the glitches appear again.
Pip was somehow installing moviepy version 1.0.3. Upgrading to the latest version fixed the issue.
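For context, a minimal sketch of the concatenation path being discussed (hypothetical file names; concatenate_videoclips is moviepy's standard way to join clips, the import style is moviepy 1.x, and the write parameters are the ones from the question):

from moviepy.editor import VideoFileClip, concatenate_videoclips

# Join many small clips into one long clip (hypothetical input files).
parts = [VideoFileClip(p) for p in ["part1.mp4", "part2.mp4", "part3.mp4"]]
com_vid = concatenate_videoclips(parts)

# Same write settings as in the question; note that audio_bufsize only
# papered over the glitch here, the actual fix was upgrading moviepy itself.
com_vid.write_videofile("final_out.mp4",
                        fps=1,
                        codec="libx264",
                        audio_codec="aac",
                        audio_bitrate="192k",
                        audio_fps=44100,
                        audio_nbytes=2,
                        audio_bufsize=1000)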

Delay Audio on Windows 10 by a second

Is there a way to universally delay audio in Windows 10? (I have Realtek High Definition Audio, as an example.)
I have two reasons for wanting to do this:
1) The audio plays about a quarter of a second before the video (out of sync). This is consistent in YouTube, VLC media player, Windows Media Player, pretty much any video content: mouths move a quarter of a second after the audio. The delay also builds up over time; about five minutes in it becomes unbearable.
2) Unrelated to #1, I want to scan the audio and edit it in near real time, searching for certain sounds and also reading certain subtitles.

Custom player using NDK/C++/MediaCodec - starvation/buffering in decoder

I have a very interesting problem.
I am running a custom movie player based on an NDK/C++/CMake toolchain that opens a streaming URL (MP4, H.264 video and stereo audio). In order to restart from a given position, the player opens the stream, buffers frames to some length, then seeks to the new position and starts decoding and playing. This works fine every time, except if we power-cycle the device and then follow the same steps.
This was reproduced on a few versions of the software (plugin built against android-22..26) and on several devices (LG G6, G5 and LeEco). The issue does not happen if you keep the app open for 10 minutes.
I am looking for possible areas of concern. I have played with the decode logic (it is based on the approach described as synchronous processing using buffers).
Edit - More Information (4/23)
I modified the player to pick a stream and then played only video instead of video+audio. This resulted in constant starvation and therefore buffering. The behaviour appears to have changed across Android versions (no firm data on this). I do believe I am running into decoder starvation. Previously I had set timeouts of 0 for both AMediaCodec_dequeueInputBuffer and AMediaCodec_dequeueOutputBuffer; I changed the input side to 1000 and 10000, but it does not make much difference.
My player is based on the NDK/C++ interface to MediaCodec; the CMake build passes -DANDROID_ABI="armeabi-v7a with NEON", -DANDROID_NATIVE_API_LEVEL="android-22" and c++_static.
Can anyone share what timeouts they have used successfully, or anything else that would help avoid the starvation and the resulting buffering?
This is solved for now. The starvation was not caused by the decoding side; images were being consumed at a faster pace because the clock values returned were out of sync. I was using clock_gettime with the CLOCK_MONOTONIC clock id, which is the recommended approach, but it consistently ran fast for the first 5-10 minutes after restarting the device (the device only had a Wi-Fi connection). Changing the clock id to CLOCK_REALTIME gives correct presentation of images and no starvation.
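The same two clock ids are exposed in Python's time module on POSIX systems, so here is a small illustrative sketch of the idea (not the actual NDK/C++ player code): pace frame presentation against the chosen clock id, since a clock that runs fast makes frames get consumed too early.

import time

# Reference clock for presentation timing; the answer above switched
# from CLOCK_MONOTONIC to CLOCK_REALTIME to stop the early consumption.
CLOCK_ID = time.CLOCK_REALTIME        # or time.CLOCK_MONOTONIC

def now_us():
    # Current reading of the chosen clock, in microseconds.
    return int(time.clock_gettime(CLOCK_ID) * 1_000_000)

playback_start_us = now_us()

def wait_for_frame(pts_us):
    # Sleep until the frame's presentation timestamp (relative to playback
    # start), measured against the chosen clock. If that clock runs fast,
    # this returns too early and the decoder appears to starve.
    remaining = (playback_start_us + pts_us) - now_us()
    if remaining > 0:
        time.sleep(remaining / 1_000_000)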

I hear clicking in audio with a DirectShow graph created with Graph Edit, yet player software on my PC plays audio smoothly

I have a DirectShow application that I built with Delphi 6 using the DSPACK component library. For two days I have been trying to solve a problem with audio playback. When I run the filter graph I create, I hear repetitive clicks in the playback. What was really confusing was that the audio file I created simultaneously with my filter graph had clean, continuous audio, not gaps. So I knew that the audio buffers were being delivered properly, but something I was doing was "jamming up" the "live" playback. Or so I thought. I spent two days diagnosing the problem, looking for semaphores being held too long (locks) or perhaps timestamp problems, which I documented in this other Stack Overflow post:
Getting stuttering during rendering of my DirectShow filter despite output file being "smooth"
A few minutes ago I decided to try a test with the Graph Edit utility. I created a dead simple graph consisting of just the capture device I was using (VOIP phone microphone) and the renderer device I was using (HD ATI Rear Audio output to headphones). Two filters total. Much to my surprise, I heard the same clicking. So here was a case that did not involve my code at all, and I still heard clicking.
Then I changed the audio renderer in the Graph Edit filter graph to the VOIP phone earpiece. The clicking went away.
Now I know there is a way to get smooth audio out of the ATI Rear Audio device, since it is the preferred audio output device and everything from videos I play on my PC to wave files sounds flawless on it. So are the other software programs doing something more than just connecting filters? I am wondering whether the default mode for the HD ATI Rear Audio is without double-buffering, and perhaps those other programs know how to enable that feature? Or are they doing something else, perhaps using another DirectShow or DirectSound filter or technique, to make the audio play smoothly on the HD ATI Rear Audio renderer?
What you are possibly seeing (it depends on the actual stuttering, though) is that when the capture and playback devices are backed by different hardware, their sampling rates differ slightly. For example, you capture 22050 Hz audio at an actual rate of (22050 - 2%) Hz and you play it back on hardware consuming samples at (22050 + 2%) Hz.
Obviously this won't work out smoothly: eventually playback experiences a data underflow. If you save to a file and play back from the file, it will be smooth, because the file can supply data at whatever rate the playback device asks for. If the capture and playback devices are the same hardware, they are likely to share a hardware clock, so the rates match.
The problem is known as "rate matching" and is discussed on MSDN in the Live Sources section.
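A rough back-of-the-envelope illustration of why such a mismatch produces audible clicks (plain Python; the 2% figures come from the example above, the buffer size is an assumption made up for the illustration):

nominal_rate = 22050
capture_rate = nominal_rate * 0.98      # capture side actually delivers 2% slow
playback_rate = nominal_rate * 1.02     # playback side actually consumes 2% fast

deficit_per_sec = playback_rate - capture_rate   # ~882 samples/s shortfall
buffer_samples = 4410                            # assume a ~200 ms playback buffer

# Time until the playback side drains the buffer faster than capture can
# refill it, i.e. until the first underrun (heard as a click or gap).
seconds_to_underrun = buffer_samples / deficit_per_sec
print(round(deficit_per_sec), round(seconds_to_underrun, 1))   # 882 5.0

With a shortfall of roughly 882 samples per second, a 200 ms buffer runs dry in about 5 seconds, and the renderer then keeps glitching as it repeatedly catches up, which is consistent with the repetitive clicking described in the question.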
