Playing audio in OpenTK

What's the easiest way to play audio (WAV, MP3, OGG, doesn't matter) in an OpenTK application? I need to be able to play it, stop it, and get the current position in seconds at any time. I've used WMPLib in my Forms apps in the past but that doesn't seem to work with OpenTK for whatever reason.
Thanks!

OpenTK is primarily a graphics library, but it also ships OpenAL bindings, and OpenAL covers what you need: you can play and stop a source, and query its AL_SEC_OFFSET to get the current playback position in seconds.

Related

Blender VSE Audio out-of-sync when animation (video) is rendered

Ok, so I found out that Blender has this really cool video-editing interface and I was beginning to love it. Until I created this awesome project composition and, when I exported the animation as a video file, the audio was out of sync :(.
Actual Problem
Audio is in-sync with video when the animation is played in Blender but is out-of-sync in the rendered video.
Solutions I tried out and failed
I used the 'Audio-Sync' option in the sequencer but that made no difference.
Then I thought that my scene audio frequency might have been an issue, since it was initially 48 kHz and my videos were at 24 kHz, so I changed the scene audio frequency to 24 kHz; this still failed to solve the issue.
Initially, I was combining videos with different frame rates and thought that might have been an issue (although animation played as expected in Blender), so I recreated the source videos to ensure all videos I was using in my project had the same frame rate, but this also did not work.
Someone online suggested exporting the video and audio separately and then combining them using a command-line tool like FFmpeg; this also failed.
What's really frustrating
This lag (audio is a few frames ahead of the video) is noticeable only in longer videos (>12 mins; my video is 1 hr long), suggesting a very small rate difference between the rendered video and audio.
Also, note that the animation plays absolutely fine in Blender, so all I could figure out was that this was a rendering issue.
So if anyone figured this out please let me know. I am a noob in video/audio codecs so please forgive me if I used some incorrect nomenclature above.
I encountered this issue with an OBS capture (a 13-minute clip) in Blender 2.93.3. The OBS capture is constant frame rate at 60 fps; I also tried a HandBrake conversion to 60 fps constant frame rate, with no help. The workaround that solves the issue is to set Blender's rendering fps to 59.94: the sequencer then shows the audio track extending past the video track, but after rendering everything matches perfectly. Unfortunately you cannot edit the video in 59.94 fps mode, so you need to switch back to 60 fps for editing.
If your video is 24 fps, use the 23.98 fps preset, and for 30 fps you can use the 29.97 fps preset.
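For reference, Blender stores these NTSC-style rates as an integer frame rate divided by a base, so you can also set them from Blender's Python console. Below is a minimal bpy sketch of the 59.94 fps workaround described above; the 60/1.001 split is how Blender expresses that preset, and the same idea gives 23.98 (24/1.001) and 29.97 (30/1.001).

    import bpy

    scene = bpy.context.scene
    # 59.94 fps is expressed as 60 / 1.001; set fps_base back to 1.0 for plain 60 fps editing
    scene.render.fps = 60
    scene.render.fps_base = 1.001
    print(scene.render.fps / scene.render.fps_base)  # ~59.94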
May 2021, Blender v2.92.0 - I experienced the same out-of-sync problem described above with rendered videos that were over five minutes long. The source was an as-is (3.6 GB, 10 min) file from a Canon EOS 5D MkII, which is an old camera, so pretty much any software can handle the encoding.
In Blender's preview mode everything looks in sync. The audio and video tracks are the same length, and I didn't even cut or merge any segments of the source video. I tried running the render after a clean boot, gave Blender the highest resource priority in Win10, allocated more memory to caching, etc. Source and output were on an SSD. The rendered result still didn't match what the GUI showed. Very frustrating, and a lot of wasted time.
What worked better for me is the following:
Change Video Codec to "FFmpeg video codec #1". This produces a lossless file that is about 27 times bigger (13.8 GB for 10 mins) than the H.264 file (0.5 GB). However, the audio remains in sync all the way through.
Use the HandBrake open-source video transcoder to convert the FFmpeg file into H.264 (or H.265). The end result is a smallish file with A/V in sync.
This workaround is relatively painless and produces good-quality results because there is only a single lossy compression step. The time required to get to the final file more than triples, though. I believe the issue continues to be with the way H.264 rendering is implemented in Blender. I also experienced similar out-of-sync issues in Shotcut a year ago while working with cheap action-cam H.265 files, and I found Shotcut to be less stable than Blender.
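If you would rather script the second step than click through HandBrake, the same lossless-then-transcode idea also works with FFmpeg on the command line. This is only a sketch: the input/output file names, CRF value and preset below are placeholder choices, not anything Blender or HandBrake produces by default.

    import subprocess

    # Transcode the lossless Blender render to H.264 + AAC in one pass.
    # "lossless.mkv", CRF 18 and the "medium" preset are illustrative - adjust to taste.
    subprocess.run([
        "ffmpeg", "-i", "lossless.mkv",
        "-c:v", "libx264", "-crf", "18", "-preset", "medium",
        "-c:a", "aac", "-b:a", "192k",
        "final.mp4",
    ], check=True)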
So after a lot of online searching, I did find a way to fix this problem, but not in Blender. If you are like me and would like to use Blender for video editing and still get around the issue, I found a workaround, but you need Shotcut for it. Shotcut is another great free and open-source video editor.
Export the entire long video from Blender (the rendered video has desync issues as expected).
Open the video in Shotcut and detach the audio from it.
Use the audio clip's properties to make very fine adjustments to the audio playback speed until the video and audio are in sync.
Follow the GIF attached.
(I am using a shorter video in the GIF but you get the idea)
Explanation
Blender has issues while rendering long videos: the video is exported at 1.0x speed but the audio is sometimes slightly faster (1.00400x or something like that), so in the rendered file the audio ends up out of sync with the video.
Another problem is that Blender does not really allow very fine playback-speed adjustment of just the audio.
One trick is to adjust the pitch of the audio in Blender, which in turn changes the playback speed, but this is only allowed up to 2 decimal places (not enough for long videos) and it makes the audio sound funny (since it actually changes the pitch).
Shotcut is a great tool that allows fine playback adjustment, and it also has a pitch compensation feature so that your pitch is kind of unaffected (since we don't want the characters to be sounding funny in our edited video).
Shotcut allows playback speed adjustment up to 6 decimal places.
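To see why two decimal places are not enough, it helps to put rough numbers on the drift described above. The figures below are purely illustrative, based on the ~1.004x example; depending on which way your audio drifts, the correction factor will sit just below or just above 1.

    # How much a ~0.4% audio speed error accumulates over a long video,
    # and what correction factor the audio clip needs (illustrative numbers only).
    video_len = 3600.0           # a 1-hour video, in seconds
    audio_speed = 1.004          # audio rendered ~0.4% too fast

    audio_len = video_len / audio_speed
    print(video_len - audio_len)          # ~14.3 s of drift by the end

    correction = audio_len / video_len    # ~0.996016: slow the audio back down by this factor
    print(round(correction, 2))           # 1.0 - a 2-decimal speed setting can't express it
    print(round(correction, 6))           # 0.996016 - hence the need for 6 decimal places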
I landed on this thread because the same issue was happening in a video that I had just finished. The "View Animation (Ctrl F11)" command starts an internal player that has sync issues with long videos; opening the same video file in "Videos" on Fedora, it plays perfectly synchronized.

Making an audio equalizer video

I need to make a video of an audio equalizer.
So I need a script that analyses the audio every frame and extracts the frequency spectrum, so I can draw that somehow and make an equalizer.
The first part of the problem is easily solvable on frontend as there is a myriad of open source equalizer visualisations in canvas.
The thing works nicely in the browser, but I have a problem making an MP4 out of it.
I've tried using headless browsers (Puppeteer and PhantomJS) to capture frames from the canvas, but I could not get the frame rate above 10 fps, resulting in unacceptable video quality and sync issues when joining the JPG frames and the MP3 via FFmpeg. The plan was to speed it up so you don't have to wait for the full audio length to finish to get an MP4, but I can't even get it above 10 fps at regular playback speed.
I feel the tech I thought would work is not there yet, and I might be in need of a different approach.
The only condition is that it has to run as a script on a Linux server, so any programming language or any equalizer design will work.
Any ideas or resources are more than welcome. Thanks
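Since any language is acceptable, one server-side approach is to skip the browser entirely: read the audio, take an FFT per video frame, draw the bars as images, and mux the image sequence with the original audio using FFmpeg at a fixed frame rate, which avoids the 10 fps capture bottleneck and the sync problem. The sketch below assumes a mono 16-bit PCM WAV called input.wav, with NumPy, Pillow, and ffmpeg available on the server; every file name, size, and bar count is an illustrative placeholder.

    import os
    import subprocess
    import wave

    import numpy as np
    from PIL import Image, ImageDraw

    FPS = 30           # output video frame rate
    BARS = 32          # number of equalizer bars
    W, H = 640, 360    # frame size

    with wave.open("input.wav", "rb") as wf:
        rate = wf.getframerate()
        samples = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)

    os.makedirs("frames", exist_ok=True)
    chunk = rate // FPS                  # audio samples per video frame
    n_frames = len(samples) // chunk

    for i in range(n_frames):
        window = samples[i * chunk:(i + 1) * chunk].astype(np.float64)
        spectrum = np.abs(np.fft.rfft(window))       # magnitude spectrum for this frame
        bands = np.array_split(spectrum, BARS)       # group FFT bins into BARS bands
        levels = np.array([band.mean() for band in bands])
        levels = levels / (levels.max() + 1e-9)      # crude per-frame normalisation

        img = Image.new("RGB", (W, H), "black")
        draw = ImageDraw.Draw(img)
        bar_w = W // BARS
        for j, level in enumerate(levels):
            bar_h = int(level * (H - 20))
            draw.rectangle([j * bar_w + 2, H - bar_h, (j + 1) * bar_w - 2, H], fill="lime")
        img.save(f"frames/frame_{i:06d}.png")

    # Mux the frame sequence with the original audio at a constant frame rate.
    subprocess.run([
        "ffmpeg", "-y", "-framerate", str(FPS), "-i", "frames/frame_%06d.png",
        "-i", "input.wav", "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-c:a", "aac", "-shortest", "equalizer.mp4",
    ], check=True)

Because the frames are rendered offline, this runs as fast as the machine allows rather than in real time.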

Vaadin Audio seek position

I would like somehow to seek the position of the Audio component in Vaadin, or to read the current time from the audio player. Is this possible to do somehow?
These features are not included in the Audio component. But luckily there is an add-on, a more sophisticated audio player, which has the features you are looking for.
https://vaadin.com/directory/component/audiovideo
For even more complex use cases involving multiple audio streams, there is also
https://vaadin.com/directory/component/audioplayer-add-on

Corona SDK: Is there a difference between audio.play() and media.play() and which one is better?

Is there a difference between audio.play() and media.play() and which one is better?
The audio.* API calls use the OpenAL audio layer to play. They are considered a safer and better way to play audio in Corona SDK. You can have 32 different sounds playing at once. You can control the volume on each channel independently, pause and resume, fade in, fade out, etc. It is the preferred way to play sound.
The media.* API calls write directly to the hardware, so you cannot control the volume or have multiple sounds playing at once. The media.* API calls are, however, good for video and for playing long clips like podcasts, since that audio can be backgrounded. More importantly, on Android, Google has implemented OpenAL poorly, and under 4.x there is a significant lag between the time you tell audio.play() to play a sound and it actually happening. The lag isn't as bad under 2.2 and 2.3, but there is still a lag. The media.* API calls, if you're playing a short clip, will play it in a timely fashion.
media API: Only one sound can be playing using this API. Calling it with a different sound file will stop the existing sound and play the new sound.

Include simple sound in iPhone app

I searched many questions - but no one seems to be giving simplest, most uniform approach, hence please do not close as duplicate.
My requirement is simple: I have quiz app.
I want to include:
background music that plays continually - probably more than one audio file.
occasional sounds played at specific events - they are very short in duration, maybe 4-5 in number.
What sound format do I use? [AAC, etc.]
How do I produce it? (optionally, get it from the internet, if free)
What is the best approach to incorporate it? [audio playback, OpenAL, etc.]
Forgive me if this is quite stupid, but I am going very generic here and can't seem to find it.
Thanks for the help!
For sound format, use AAC or uncompressed 16-bit little endian in a CAF container (avoid mp3 since it's difficult to make it loop cleanly). You can convert using the command line tool 'afconvert':
Compressed:
afconvert -f caff -d aac sourcefile.wav destfile.caf
Uncompressed 16-bit:
afconvert -f caff -d LEI16 sourcefile.wav destfile.caf
For production, either record it yourself (using an audio program such as Audacity), get a professional to do it, or buy royalty free sounds/music.
To incorporate it, use AVAudioPlayer for music and OpenAL for sounds. OpenAL is difficult to use and doesn't decode compressed audio on its own, so you may want to use an audio library such as https://github.com/kstenerud/ObjectAL-for-iPhone
