I need to play video and audio at the same time, but I can't seem to get it working.
I'm playing the video in threads and the audio through a function.
Any ideas?
The end goal: I'm creating a discord bot built for Final Fantasy XIV raiding. You simply tell the bot when you pull the boss and it follows a pre-defined timeline that warns everyone of each mechanic shortly before it happens. In short: It plays a pre-defined list of audio files at pre-defined times after receiving a !start command.
The good news is that the audio is playing.
The bad news is...complicated.
There are two issues, and I have a feeling they're related. The first issue is that, no matter what audio file I play, the last bit (about 0.5s) gets cut off. I gave it an audio file that says "Tank buster soon" and it played "Tank buster s-" then cut out. Up until now, I've been working around this by simply adding one second of silence at the end of every sound file. It's been working; it's still getting truncated, of course, it's just truncating silence.
The second issue is that, after playing one audio file, there is a short delay between when the bot tries to start playing the next file and when the audio actually comes out. (In Discord, I can see this as the bot cueing up its "mic" a short time before it starts playing audio.) This delay gets progressively worse with every file played, to the point where it's several seconds long. (When the delay is severe enough, I see the bot cue up for about a second, un-cue, and then re-cue when the delay finally finishes.)
The code doing most of the work is as follows:
// timeline() is called once per second by a setInterval() object, while the fight is active.
function timeline()
{
    tick++;
    var len = callouts.length;
    for (var i = 0; i < len; i++)
    {
        if (tick == callouts[i]["time"])
        {
            dispatcher[dispatcherIndex] = voiceConnection.playFile(callouts[i]["file"]);
            dispatcherIndex++;
            activeChannel.send(callouts[i]["message"]);
        }
    }
} // end timeline()
The discord.js documentation mentions that it prefers node-opus for voice functionality, but that opusscript will also work. I cannot seem to install the former, so I am using the latter.
EDIT: I've found a workaround for now. The progressive delay is "reset" when the bot leaves and rejoins voice, so I've simply made it do so when it finishes playing an audio file. Strictly speaking, it works, but I'd still like a "proper" solution if possible.
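For reference, a minimal sketch of that workaround, assuming the same discord.js API as the snippet above (playFile() returning a dispatcher that emits an "end" event); playWithRejoin is a hypothetical helper, not my actual bot code:

function playWithRejoin(connection, file)
{
    var dispatcher = connection.playFile(file);
    dispatcher.on("end", function ()
    {
        var channel = connection.channel; // remember the current voice channel
        connection.disconnect();          // leaving voice resets the delay...
        channel.join()                    // ...so immediately rejoin
            .then(function (newConnection) { voiceConnection = newConnection; })
            .catch(console.error);
    });
    return dispatcher;
}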
Installing git (why the heck have I not installed git long ago?) ultimately turned out to be the correct catalyst. That, as mentioned, let me install the discord.js master branch, which handles voice better. After a couple of ffmpeg-related hiccups, I now have the bot playing complete audio files with no delay, with no workarounds required. Everything's working great now!
I am building an ambient audio skill for sleep for Alexa! I am trying to loop the audio so I don't have to download 10-hour versions of it. How do I get the audio to loop? I have it built to the point where it plays the audio, but it doesn't loop.
I've solved this problem in my skill Rainmaker: https://www.amazon.com/Arif-Gebhardt-Rainmaker/dp/B079V11ZDM
The trick is to handle the PlaybackNearlyFinished event.
https://developer.amazon.com/de/docs/alexa-voice-service/audioplayer.html#playbacknearlyfinished
This event is fired shortly before the currently playing audio stream is ending.
Respond to the event with another audioPlayerPlay directive with behavior ENQUEUE. This will loop your audio indefinitely, until it gets interrupted by e.g. the AMAZON.StopIntent.
Advanced: if you want a finite loop, say ten times your audio, use the token of the audioPlayerPlay directive to count down from ten. Once the counter hits zero, just don't enqueue another audio file. But be sure to respond with something in this case, even if it's just an empty response; otherwise you will get a timeout error or the like.
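To make that concrete, here is a minimal sketch of such a handler using the ASK SDK v2 for Node.js. AUDIO_URL is an assumed placeholder, and the sketch implements the finite-loop variant, so it assumes the initial play directive set its token to the desired loop count (always re-enqueueing with the same token instead gives the infinite loop):

const AUDIO_URL = "https://example.com/rain.mp3"; // assumed stream URL

const PlaybackNearlyFinishedHandler = {
    canHandle(handlerInput) {
        return handlerInput.requestEnvelope.request.type
            === "AudioPlayer.PlaybackNearlyFinished";
    },
    handle(handlerInput) {
        const request = handlerInput.requestEnvelope.request;
        const remaining = parseInt(request.token, 10) - 1; // token doubles as loop counter
        if (remaining <= 0) {
            // Loop finished: reply with an empty response so the skill
            // doesn't hit a timeout error, and let playback end naturally.
            return handlerInput.responseBuilder.getResponse();
        }
        return handlerInput.responseBuilder
            .addAudioPlayerPlayDirective(
                "ENQUEUE",         // playBehavior: queue behind the current stream
                AUDIO_URL,         // the same audio again
                String(remaining), // new token: the decremented counter
                0,                 // offsetInMilliseconds
                request.token)     // expectedPreviousToken
            .getResponse();
    }
};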
I'm curious whether it's possible to use libspotify to play multiple files at once, through different outputs. I want to be able to play one song through speakers and jump around through other songs on headphones when DJing.
Does anyone know if this is possible? If it's only doable with offline files, I'm OK with that. I'd even be fine with having two separate processes running libspotify if that's what works.
Thanks!
No, it is not possible. Libspotify will stop the playback of the first track and call the play_token_lost() callback.
In my application, I am running a background timer that plays a custom sound every 8 seconds. It works fine at first, but it stops some time later. How can I play the sound continuously in the background?
Currently I am using the code below to play the sound in the background:
SystemSoundID soundID;
// Create a one-shot system sound from the file URL, then play it immediately.
AudioServicesCreateSystemSoundID((__bridge CFURLRef)filePath, &soundID);
AudioServicesPlaySystemSound(soundID);
Please let me know a good solution for playing the sound continuously in the background.
Short answer: your app is simply suspended.
Long answer: you are missing key parts of a background-savvy implementation.
You need to tell iOS that you are an audio app, and that you are requesting extra cycles when suspended.
Note that UIBackgroundModes is subject to App Store approval.
From the documentation:
Background modes for apps:
UIBackgroundModes value = audio
The app plays audible content to the user or records audio while in the background. (This content includes streaming audio or video content using AirPlay.)
If your app does not fall into any of the categories below, then your only option for extending backgrounding beyond the typical 5 seconds is to invoke -beginBackgroundTaskWithName:expirationHandler:. Even then, you will likely be suspended within 30 seconds or so. (A minimal Info.plist sketch for the audio case follows the list.)
audio
location
voip
newsstand-content
external-accessory
bluetooth-central & bluetooth-peripheral
fetch
remote-notification
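For the audio case, the declaration is an Info.plist entry along these lines (a sketch of the key from the documentation above; note that the app must also actually be playing audible content, e.g. via an AVAudioPlayer with an active audio session, since one-shot system sounds generally won't keep it alive):

<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>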
In the jukebox.c example of libspotify, I count all frames of the current track in the music_delivery callback. When end_of_track is called, the frame count is different each time I play the same track. So end_of_track is called several seconds after the song is over, and this timespan differs for each playback.
How can I determine whether the song is really over? Do I have to take the duration of the song in seconds and multiply it by the sample rate to work out when the song ends?
Why are more frames delivered than necessary for the track? And why is end_of_track not called at the real end of it? Or am I missing something?
end_of_track is called when libspotify has finished delivering audio frames for that track. This is not information about playback - every playback implementation I've seen keeps an internal buffer between libspotify and the sound driver.
Depending on where you're counting, this will account for the difference you're seeing. Since the audio code is outside of libspotify, you need to keep track of what's actually going to the sound driver yourself and stop playback, skip to the next track or whatever you need to do accordingly. end_of_track is basically there to let you know that you can close any output streams you may have from the delivery callback to your audio code or something along those lines.
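The bookkeeping itself is simple. As a language-agnostic sketch of the idea (written in JavaScript for consistency with the earlier snippets, not libspotify API code, with hypothetical counter names):

var framesDelivered = 0;      // incremented inside your music_delivery callback
var framesConsumed = 0;       // incremented as the sound driver drains your buffer
var deliveryFinished = false; // set to true when end_of_track fires

function isPlaybackReallyOver()
{
    // The track is audibly over only once end_of_track has fired AND the
    // driver has consumed everything that was delivered into the buffer.
    return deliveryFinished && framesConsumed >= framesDelivered;
}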