I'm trying to create a Discord bot using discord.js that can play both music and sound effects.
The bot plays music correctly, but when I try to play a sound effect during music playback, the sound effect never starts.
I have the following code for playing sound effects.
const { createAudioPlayer, getVoiceConnection, AudioPlayerStatus } = require('@discordjs/voice');

function play(guildID, serverQueue, resource) {
    // Temporary player for the sound effect; a connection can only be
    // subscribed to one audio player at a time.
    const newAudioPlayer = createAudioPlayer();
    const oldAudioPlayer = serverQueue.audioPlayer;
    const connection = getVoiceConnection(guildID);

    oldAudioPlayer.pause();
    connection.subscribe(newAudioPlayer);

    newAudioPlayer.on(AudioPlayerStatus.Idle, () => {
        console.log("Done playing");
        // Hand the connection back to the music player and resume it.
        connection.subscribe(oldAudioPlayer);
        oldAudioPlayer.unpause();
        newAudioPlayer.stop();
    }).on('error', err => {
        console.log("Something went wrong when trying to play sound effect");
        console.log(err);
        connection.subscribe(oldAudioPlayer);
        oldAudioPlayer.unpause();
        newAudioPlayer.stop();
    });

    newAudioPlayer.play(resource);
    console.log("Trying to play");
}
I realize that a voice connection can only be subscribed to one audio player at a time, and that an audio player can only play one resource at a time. This is why I create a new temporary audio player called "newAudioPlayer" to play the short sound effect: I pause the old audio player and subscribe the connection to the new one.
The sound effect plays correctly as long as the oldAudioPlayer has not played a resource before. As soon as the oldAudioPlayer has been used to play a resource, the newAudioPlayer never starts playing. I have checked all the different AudioPlayerStatus states for the newAudioPlayer, but none of them are triggered.
serverQueue.audioPlayer is initialized when a voice channel is joined, and is always set.
The program does print "Trying to play" but no audio can be heard.
Apparently this is a known issue with discord.js:
https://github.com/discordjs/discord.js/issues/7232
The workaround is to create the sound effect's audio resource from a stream when the resource on the other audio player is also reading from a stream.
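A minimal sketch of that workaround, assuming @discordjs/voice and Node's built-in fs module; the file path and the call into the play() function above are placeholders:

const { createAudioResource } = require('@discordjs/voice');
const fs = require('fs');

// Wrapping the local file in a read stream (instead of passing a file path)
// sidesteps the issue described in discordjs/discord.js#7232.
const effectResource = createAudioResource(fs.createReadStream('./sounds/effect.mp3'));
play(guildID, serverQueue, effectResource);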
I am trying to build a web application that can record the screen while capturing both system audio and headphone-mic audio in the saved video.
I have been googling thoroughly for a solution, but my findings only show browser solutions that work as long as headphones are NOT connected, i.e. when the microphone input comes from the system rather than a headset.
When headphones are connected, all of these solutions capture the screen without the video's audio, plus the microphone audio from my headset. So to re-clarify: the recording should contain the audio of the video being played during recording as well as the headset-mic audio.
This is readily available in native applications, but I am searching for a way to do it in a browser.
If nobody knows of a current solution, some insight into the limitations around developing this would also really help. Thank you.
Your browser manages the media input received in the selected tab/window.
To receive media input, make sure the Share audio checkbox in the image below is checked. However, this will only record the media audio playing in your headphones. To capture microphone audio as well, the opposite must be done, i.e. the checkbox should be unchecked, or the microphone audio must be merged in separately when saving the recorded video.
https://slack-files.com/T1JA07M6W-F0297CM7F32-89e7407216
Create two constants, one retrieving the on-screen video, the other retrieving the microphone audio:
const DISPLAY_STREAM = await navigator.mediaDevices.getDisplayMedia({video: {cursor: "motion"}, audio: {'echoCancellation': true}}); // retrieving screen-media
const VOICE_STREAM = await navigator.mediaDevices.getUserMedia({ audio: {'echoCancellation': true}, video: false }); // retrieving microphone-media
Use AudioContext to retrieve audio sources from getUserMedia() and getDisplayMedia() separately:
const AUDIO_CONTEXT = new AudioContext();
const MEDIA_AUDIO = AUDIO_CONTEXT.createMediaStreamSource(DISPLAY_STREAM); // passing source of on-screen audio
const MIC_AUDIO = AUDIO_CONTEXT.createMediaStreamSource(VOICE_STREAM); // passing source of microphone audio
Use the method below to create a new audio destination that will serve as the merged version of the audio, then connect both audio sources to it:
const AUDIO_MERGER = AUDIO_CONTEXT.createMediaStreamDestination(); // audio merger
MEDIA_AUDIO.connect(AUDIO_MERGER); // passing media-audio to merger
MIC_AUDIO.connect(AUDIO_MERGER); // passing microphone-audio to merger
Finally, combine the on-screen video track and the merged audio tracks into one array and pass it to a new MediaStream:
const TRACKS = [...DISPLAY_STREAM.getVideoTracks(), ...AUDIO_MERGER.stream.getTracks()]; // combining on-screen video with merged audio
const STREAM = new MediaStream(TRACKS);
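From there, the combined stream can be fed into a MediaRecorder to actually save the video. A minimal sketch, assuming the browser supports the video/webm MIME type; the object-URL handling is just one possible way to persist the result:

const RECORDER = new MediaRecorder(STREAM, { mimeType: 'video/webm' }); // recording the combined stream
const CHUNKS = [];
RECORDER.ondataavailable = (event) => CHUNKS.push(event.data); // collecting recorded data as it arrives
RECORDER.onstop = () => {
    const BLOB = new Blob(CHUNKS, { type: 'video/webm' }); // assembling the saved video
    const DOWNLOAD_URL = URL.createObjectURL(BLOB); // e.g. point an <a download> element at this URL
};
RECORDER.start();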
I'm trying to get a multi-audio HLS stream working on a v3 Google Cast custom receiver app. The master playlist of the stream refers to several video renditions of different resolution and two alternative audio tracks:
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="TV Ton",DEFAULT=YES,AUTOSELECT=YES,URI="index_1_a.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="Audiodeskription",DEFAULT=NO,AUTOSELECT=NO,URI="index_2_a.m3u8"
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=383000,RESOLUTION=320x176,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_0_av.m3u8
...more renditions
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=3697000,RESOLUTION=1280x720,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_6_av.m3u8
The video plays fine in both the sender and receiver app, and I can see both audio tracks in the sender app, but when casting to the receiver there are no controls for changing the audio tracks.
When accessing the AudioTracksManager's getTracks() method while intercepting the LOAD message like so...
playerManager.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD, loadRequestData => {
        loadRequestData.media.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.TS;
        const audioTracksManager = playerManager.getAudioTracksManager();
        console.log(audioTracksManager.getTracks());
        console.log('Load request: ', loadRequestData);
        return loadRequestData;
    });
I get an error saying:
Uncaught Error: Tracks info is not available.
Maybe unrelated, but super weird: I can console.log the request's media prop and see its tracks prop (an array with the expected 1 video and 2 audio tracks); however, if I try to access the tracks property in the LOAD message interceptor, I get undefined.
I cannot look into the iOS sender code yet, so I tried to eliminate error sources on the receiver end. The thing is:
I always assumed that the receiver identifies alternative audio tracks on its own when loading HLS playlists. Is this assumption correct or can the AudioTracksManager only access tracks that have been previously defined in a sender app?
I couldn't find a clear statement on that in the Google Cast reference...
OK, feeling stupid for the time I spent on this, but I'm finally able to answer my own question. I didn't realize that I was accessing the AudioTracksManager in the wrong place, namely in the LOAD message interceptor instead of in a PLAYER_LOAD_COMPLETE event listener (as is properly documented here).
After placing my logic into this event listener I was able to access and programmatically set my audio tracks.
So to answer my original question: Yes, the receiver app automatically identifies alternative audio tracks from an HLS playlist.
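As a minimal sketch of the corrected placement (assuming the standard CAF receiver framework; the track index passed to setActiveById is only a placeholder):

playerManager.addEventListener(
    cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
        // Track info is only available once loading has completed.
        const audioTracksManager = playerManager.getAudioTracksManager();
        const audioTracks = audioTracksManager.getTracks();
        console.log(audioTracks); // now lists the tracks from the HLS playlist
        // e.g. programmatically switch to the audio description track
        audioTracksManager.setActiveById(audioTracks[1].trackId);
    });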
In a UWP application, a playing sound sometimes stops.
await Execute.OnUIThreadAsync(async () =>
{
    var element = new MediaElement();
    var uri = new Uri($"ms-appx:///Assets/sound/abc.wav");
    StorageFile sf = await StorageFile.GetFileFromApplicationUriAsync(uri);
    var stream = await sf.OpenAsync(Windows.Storage.FileAccessMode.Read);
    element.SetSource(stream, "");
    element.Play();
});
I think this UI-thread job finishes immediately, but my sound file is one minute long. The task completes, so the sound cannot play to the end.
How should I write this so the sound plays to completion?
Refer to the following MSDN doc: Play media in the background. To support your music playing in the background, you need to check the requirements in "Requirements for background audio". Since you mentioned it happens only "sometimes", I'm not sure whether you've already applied the solution from that doc. If you haven't, refer to that article, enable the capability, and then manage both the transitioning and the memory usage.
Background: I'm coding a Metro-styled app for Win8. I need to be able to play music files. Because of quality and space requirements we're using encoded audio (mp3/ogg).
I'm using XAudio2 to play sound effects (.wav files), but since I couldn't figure out a way to play encoded audio with it, I decided to play the music files with Media Foundation (IMFMediaPlayer interface).
I downloaded the Metro app samples and found that the Media Engine Native C++ video playback sample was closest to what I needed.
Now that my app has the MediaPlayer playing music, I ran into a problem. If the device running the app is slow enough, the MediaPlayer hangs. When I run the release version of the app on my device, it's fine and I can hear the music. But when I attach the debugger or run it on a slower device, it hangs when I set the bytestream for the MediaPlayer to play.
Here's some code; you'll find it pretty similar to the sample:
StorageFolder^ installedLocation = Windows::ApplicationModel::Package::Current->InstalledLocation;

m_pickFileTask = Concurrency::task<StorageFile^>(installedLocation->GetFileAsync(filename), m_tcs.get_token());

auto player = this;
m_pickFileTask.then([player](StorageFile^ fileHandle)
{
    player->SetURL(fileHandle->Path);

    Concurrency::task<IRandomAccessStream^> fOpenStreamTask = Concurrency::task<IRandomAccessStream^>(fileHandle->OpenAsync(Windows::Storage::FileAccessMode::Read));

    fOpenStreamTask.then([player](IRandomAccessStream^ streamHandle)
    {
        // Pause the engine before swapping in the new byte stream.
        MEDIA::ThrowIfFailed(
            player->m_spMediaEngine->Pause()
        );
        MEDIA::GetMediaError(player->m_spMediaEngine);

        player->SetBytestream(streamHandle);

        if (player->m_spMediaEngine)
        {
            MEDIA::ThrowIfFailed(
                player->m_spEngineEx->Play()
            );
            MEDIA::GetMediaError(player->m_spMediaEngine);
        }
    });
});
And here's the SetBytestream method:
void SetBytestream(IRandomAccessStream^ streamHandle)
{
    if (m_spMFByteStream != nullptr)
    {
        m_spMFByteStream->Close();
        m_spMFByteStream = nullptr;
    }

    MEDIA::ThrowIfFailed(
        MFCreateMFByteStreamOnStreamEx((IUnknown*)streamHandle, &m_spMFByteStream)
    );
    MEDIA::ThrowIfFailed(
        m_spEngineEx->SetSourceFromByteStream(m_spMFByteStream.Get(), m_bstrURL)
    );
    MEDIA::GetMediaError(m_spEngineEx);
    return;
}
The line where it hangs is:
m_spEngineEx->SetSourceFromByteStream(m_spMFByteStream.Get(), m_bstrURL)
When I'm debugging the app, I can press pause and look at the stack. Well, not much of it, but at least I can see that it's stuck indefinitely at
ntdll.dll!77b7f4dc()
Any ideas why my app would hang in such a way?
(OPTIONAL: If you know a better way to play mp3/ogg in a c++ metro-styled app, let me know)
I could not figure out why this is happening, but I managed to code a workaround:
IMFSourceReader can be used to decode MP3s and feed the bytes into an IXAudio2SourceVoice.
The XAudio2 audio stream effect sample contains a good example of how to do this.
I just developed an app using Adobe AIR. It contains some animations with background music in mp3 format. The problem is that the music is very jerky while the animation is playing...
FYI, this is how I play audio in Flash: new Sound(new URLRequest("m3.mp3")).play()
Have I done anything wrong?
BTW, the funny thing is that if you hit the HOME button and then come back to the app, everything plays beautifully...
Without knowing more about the code, it seems like the sound is not fully loaded. The file plays as far as it can, then waits for more data to show up, then continues... very jerky. You may have to wait for the sound to load completely before playing it:
var s = new air.Sound();
s.addEventListener(air.Event.COMPLETE, onSoundLoaded);
var req = new air.URLRequest("bigSound.mp3");
s.load(req);

function onSoundLoaded(event)
{
    var localSound = event.target;
    localSound.play();
}
This code is from Adobe's Sound docs.