Windows Phone 8.1: play audio data stream through speaker?

I receive a PCM audio data stream over the network, and that part works fine, so I end up with:
DataReader incoming = args.GetDataReader();
byte[] RcvBuffer = new byte[incoming.UnconsumedBufferLength];
incoming.ReadBytes(RcvBuffer);
Now I have all the audio data in the buffer.
How can I play this through the phone speaker? Can you point me in some direction?
Thanks

There are many ways to do that.
You can prepend a WAVE header to your data and use MediaElement for playback; see the documentation for the SetSource method. A sketch of this approach follows below.
If, however, by "telephone speaker" you mean the earpiece, then that is only possible if you are creating a VoIP app.
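For illustration, here is a minimal, untested sketch of that WAVE-header idea, assuming 8 kHz, 16-bit, mono PCM (the format used later in this thread). The helper name WrapPcmInWav is made up; on Windows Phone Silverlight you would pass the resulting stream to MediaElement.SetSource:
using System;
using System.IO;
using System.Text;
// Sketch (untested): wraps raw PCM bytes in a RIFF/WAVE header so MediaElement can play them.
static MemoryStream WrapPcmInWav(byte[] pcm, int sampleRate = 8000, short channels = 1, short bitsPerSample = 16)
{
    var stream = new MemoryStream();
    var writer = new BinaryWriter(stream);
    int byteRate = sampleRate * channels * bitsPerSample / 8;
    short blockAlign = (short)(channels * bitsPerSample / 8);
    writer.Write(Encoding.ASCII.GetBytes("RIFF"));
    writer.Write(36 + pcm.Length);                 // total RIFF chunk size
    writer.Write(Encoding.ASCII.GetBytes("WAVE"));
    writer.Write(Encoding.ASCII.GetBytes("fmt ")); // format sub-chunk
    writer.Write(16);                              // fmt chunk size for PCM
    writer.Write((short)1);                        // audio format 1 = PCM
    writer.Write(channels);
    writer.Write(sampleRate);
    writer.Write(byteRate);
    writer.Write(blockAlign);
    writer.Write(bitsPerSample);
    writer.Write(Encoding.ASCII.GetBytes("data"));
    writer.Write(pcm.Length);                      // data sub-chunk size
    writer.Write(pcm);
    writer.Flush();
    stream.Position = 0;
    return stream;
}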

It took a while, but I sorted it out; maybe someone else will need this in the future.
First problem: since I had just started app development for Windows Phone, I had chosen Blank App (Windows Phone) instead of Blank App (Windows Phone Silverlight), and so I did not have access to many features that are available in Silverlight projects. My suggestion for beginners: understand what each project type is for.
Like Soonts said, there are many ways to do this; this is the one that I used.
I simplified this code and retyped it, so there may be typos.
using Microsoft.Xna.Framework.Audio;
using System.IO;
1) Create a stream to hold your incoming data:
MemoryStream stream = new MemoryStream();
2) Write the data from the buffer into the stream:
stream.Write(RcvBuffer, 0, RcvBuffer.Length);
3) I use SoundEffect to play this through the loudspeaker. The sample rate I use is 8 kHz:
SoundEffect sound;
sound = new SoundEffect(stream.ToArray(), 8000, AudioChannels.Mono);
sound.Play();

Related

Screen Recording with both headphone & system audio

I am trying to build a web application that records the screen while capturing both the system audio and the headset-microphone audio in the saved video.
I have been thoroughly googling for a solution, but my findings show multiple browser solutions where the above works only as long as headphones are NOT connected, i.e. the microphone input comes from the system rather than the headset.
When headphones are connected, all of these solutions capture the screen without the video's audio, plus the microphone audio from the headset. To re-clarify: it should record the audio of the video being played during the recording, and the headset-mic audio as well.
This is readily available in native applications, but I am searching for a way to do it in a browser.
If nobody knows of a current solution, some insight into the limitations around developing this would also really help. Thank you.
Your browser manages the media input received in the selected tab/window.
To receive media input, you need to ensure the Share audio checkbox in the image below is checked. However, this will only record the media audio being played in your headphones; for the microphone audio the opposite must be done, i.e. the checkbox should be unchecked, or the microphone audio must be merged in separately when saving the recorded video:
https://slack-files.com/T1JA07M6W-F0297CM7F32-89e7407216
Create two constants: one retrieving the on-screen video, the other retrieving the microphone audio:
const DISPLAY_STREAM = await navigator.mediaDevices.getDisplayMedia({video: {cursor: "motion"}, audio: {'echoCancellation': true}}); // retrieving screen-media
const VOICE_STREAM = await navigator.mediaDevices.getUserMedia({ audio: {'echoCancellation': true}, video: false }); // retrieving microphone-media
Use an AudioContext to retrieve the audio sources from getDisplayMedia() and getUserMedia() separately:
const AUDIO_CONTEXT = new AudioContext();
const MEDIA_AUDIO = AUDIO_CONTEXT.createMediaStreamSource(DISPLAY_STREAM); // passing source of on-screen audio
const MIC_AUDIO = AUDIO_CONTEXT.createMediaStreamSource(VOICE_STREAM); // passing source of microphone audio
Use the method below to create a new audio destination node that serves as the merger for both audio sources, then connect each source to it:
const AUDIO_MERGER = AUDIO_CONTEXT.createMediaStreamDestination(); // audio merger
MEDIA_AUDIO.connect(AUDIO_MERGER); // passing media-audio to merger
MIC_AUDIO.connect(AUDIO_MERGER); // passing microphone-audio to merger
Finally, combine the on-screen video track and the merged audio tracks into one array, build a MediaStream from it, and pass that stream to a MediaRecorder:
const TRACKS = [...DISPLAY_STREAM.getVideoTracks(), ...AUDIO_MERGER.stream.getTracks()]; // combining on-screen video with merged audio
stream = new MediaStream(TRACKS);
const RECORDER = new MediaRecorder(stream); // record the combined stream

MediaPlayerElement vs MediaElement: which one to choose?

I have gone through the answer provided here for the difference. But I just need to play a notification sound for about 2 seconds as an alert; no video or any other heavy loading.
This is the notification sound I am about to play:
ms-winsoundevent:Notification.SMS
The code below is for MediaPlayerElement:
MediaPlayerElement mediaPlayerElement = new MediaPlayerElement();
mediaPlayerElement.SetMediaPlayer(new Windows.Media.Playback.MediaPlayer { AudioCategory = Windows.Media.Playback.MediaPlayerAudioCategory.Alerts});
mediaPlayerElement.MediaPlayer.AudioCategory = Windows.Media.Playback.MediaPlayerAudioCategory.Alerts;
mediaPlayerElement.Source = Windows.Media.Core.MediaSource.CreateFromUri(new Uri("ms-winsoundevent:Notification.Default"));
mediaPlayerElement.AutoPlay = false;
mediaPlayerElement.MediaPlayer.Play();
The code below is for MediaElement:
MediaElement mediaElement = new MediaElement();
mediaElement.AudioCategory = AudioCategory.Alerts;
mediaElement.Source = new Uri("ms-winsoundevent:Notification.Default");
mediaElement.AutoPlay = false;
mediaElement.Play();
Can I use MediaElement since it is a small audio clip, or should I only use MediaPlayerElement as it is the one prescribed by Microsoft? Which one is better in this case?
P.S.: I need to set the audio category to Alerts in order to duck any background music.
Can I use MediaElement since it is a small audio clip, or should I only use MediaPlayerElement as it is the one prescribed by Microsoft? Which one is better in this case?
From the official documentation:
In Windows 10, build 1607 and on we recommend that you use MediaPlayerElement in place of MediaElement. MediaPlayerElement has the same functionality as MediaElement, while also enabling more advanced media playback scenarios. Additionally, all future improvements in media playback will happen in MediaPlayerElement.
This means that new features will be built on MediaPlayerElement, so we recommend using MediaPlayerElement to give your app a longer life.
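As a side note, for a short audio-only alert no XAML element is strictly required; a minimal sketch (untested) of playing the sound through a bare MediaPlayer, the class that MediaPlayerElement wraps, could look like this:
using System;
using Windows.Media.Core;
using Windows.Media.Playback;
// Sketch: play a short system alert sound; the Alerts category ducks background music.
var player = new MediaPlayer
{
    AudioCategory = MediaPlayerAudioCategory.Alerts,
    Source = MediaSource.CreateFromUri(new Uri("ms-winsoundevent:Notification.Default"))
};
player.Play();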

Web Audio Api destination maxChannelCount always is 2 on Linux

I have a problem with the Web Audio API and a USB audio interface on Linux.
I wrote some audio player code on top of the Web Audio API.
Everything is fine when I connect my 7.1 USB audio interface (a TASCAM 16x08, which has 8 output channels) and start my app on a Windows machine: context.destination.maxChannelCount equals 8 and I can select the channel to output the sound to.
But when I do the same on a Linux machine, context.destination.maxChannelCount is always 2 (stereo).
I tried to:
create a virtual multichannel audio device: same result, maxChannelCount is always 2;
configure ALSA, PulseAudio, the JACK Audio Connection Kit, and more...
The result is the same: in my code, context.destination.maxChannelCount is always 2, even though the operating system's settings dialog detects 8 channels.
Here is some code to make it clear:
var context = new (window.AudioContext || window.webkitAudioContext)();
var audio = new Audio();
var source = context.createMediaElementSource(audio);
source.connect(context.destination);
audio.src = 'audio.mp3';
audio.play();
console.log(context.destination.maxChannelCount); // output on Windows: 8, on Linux: 2
What could be the problem?
I found the solution here: https://ubuntuforums.org/archive/index.php/t-1072792.html
I solved it by editing /etc/pulse/daemon.conf: uncomment the default-sample-channels line and raise the channel count, e.g. change
; default-sample-channels = 2
to
default-sample-channels = 8
What browser are you running? The browser is responsible for exposing the available outputs to you; if they're not there (in any available browser), then I think you're out of luck. I have done some work with Web Audio and multiple outputs, and even on the same OS I got different results from different browsers.

Windows Universal App - MediaElement and M3U

Is it possible to open an M3U web-radio stream with the MediaElement class in Windows 10?
A sample stream would be:
http://www.antenne.de/webradio/channels/top-40.m3u
Opening a normal MP3 from the internet works perfectly, but I cannot get any M3U file to open.
Kind regards
Michael
Starting with Windows 10, version 1607, it is recommended to use the MediaPlayer class instead of MediaElement for media playback, together with the lightweight XAML control MediaPlayerElement.
You can then use a MediaPlaybackList to create a playlist for the MediaPlayer:
StorageFolder vfolder = Windows.Storage.KnownFolders.VideosLibrary;
StorageFileQueryResult query = vfolder.CreateFileQueryWithOptions(Constants.QueryOptions);
var files = await query.GetFilesAsync();
MediaPlaybackList playbackList = new MediaPlaybackList();
foreach (StorageFile file in files)
{
    MediaSource source = MediaSource.CreateFromStorageFile(file);
    playbackList.Items.Add(new MediaPlaybackItem(source));
}
_mediaPlayer = new MediaPlayer();
_mediaPlayer.AutoPlay = true;
_mediaPlayer.Source = playbackList;
MPElement.SetMediaPlayer(_mediaPlayer);
_mediaPlayer.Play();
More information: Microsoft Docs.
An M3U (playlist) file usually contains links that point to the actual audio sources. You need to fetch the file, open and parse it to get the URLs, and supply one of them to MediaElement (see the sketch after the next answer). It is the same when you try to stream video.
An M3U file isn't supported directly because it's not a media file. The playlist file format is simple and documented well enough that I'd recommend just parsing the M3U file and playing the individual entries.
Unfortunately, Windows 10 UWP apps do not have access to the Playlist class, which would be helpful in your scenario; it is only available to desktop applications and Windows 8 apps.
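Putting the two answers above together, a rough, untested sketch of the parse-and-play approach could look like this (PlayM3uAsync is a made-up helper name; lines starting with '#' are M3U comments or directives, everything else is treated as a URI):
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Windows.UI.Xaml.Controls;
async Task PlayM3uAsync(MediaElement mediaElement, Uri m3uUri)
{
    using (var http = new HttpClient())
    {
        string playlist = await http.GetStringAsync(m3uUri);
        var urls = playlist
            .Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(line => line.Trim())
            .Where(line => line.Length > 0 && !line.StartsWith("#"))
            .ToList();
        if (urls.Count > 0)
            mediaElement.Source = new Uri(urls[0]); // play the first stream entry
    }
}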

How can I select an audio output device in directshow

I was wondering how I can select the audio output device in DirectShow. I am able to enumerate the available audio output devices, but how can I make one of them the output device? The audio always goes to the default device, and I want to be able to output it on the device of my choice. I have been struggling through Google but couldn't find anything useful; all I could find was this link, but it doesn't really solve my problem.
Any help would be really appreciated.
First off, if you're not using DirectShow.NET (DirectShowLib), get it here: it serves as a (very complete) interface between unmanaged DirectShow and C#.
What follows is a pretty simple example of how to play an audio file to the desired audio device:
using DirectShowLib;

private IGraphBuilder m_objFilterGraph = null;
private IBasicAudio m_objBasicAudio = null;
private IMediaControl m_objMediaControl = null;

private void playAudioToDevice(string fName, int devIndex)
{
    object source = null;

    // Enumerate the audio renderers and bind the selected one to an IBaseFilter
    DsDevice[] devices = DsDevice.GetDevicesOfCat(FilterCategory.AudioRendererCategory);
    DsDevice device = (DsDevice)devices[devIndex];
    Guid iid = typeof(IBaseFilter).GUID;
    device.Mon.BindToObject(null, null, ref iid, out source);

    // Build a graph around the chosen renderer and render the file into it
    m_objFilterGraph = (IGraphBuilder)new FilterGraph();
    m_objFilterGraph.AddFilter((IBaseFilter)source, "Audio Render");
    m_objFilterGraph.RenderFile(fName, "");

    m_objBasicAudio = m_objFilterGraph as IBasicAudio;
    m_objMediaControl = m_objFilterGraph as IMediaControl;
    m_objMediaControl.Run();
}
It is up to the user to manage audio devices and choose a primary device (such as via the Control Panel applet). You can find ways to switch devices programmatically on Windows XP; on Vista and later, however, it is impossible without interactive user action, by design.
See also Larry's answer here: How to change default sound playback device programmatically?
UPDATE: The above refers to modifying the system configuration in an attempt to alter the default audio output device. An application is, however, not limited to the default device. Instead, it can enumerate the available devices (see Using the System Device Enumerator plus CLSID_AudioRendererCategory) and then create an instance of the renderer for a specific device with a BindToObject call. From there on, it is a regular filter, just bound internally to the device of interest.
