Windows IoT Core RaspPi Walkie Talkie - audio

I am trying to use a few Raspberry Pi boards for a local-network audio broadcast, with one Raspberry Pi as the main (broadcaster) and the rest as slaves (receivers), like a walkie talkie.
I have looked at examples such as the [WebCam App][1], but it seems like the audio is recorded first before playback.
Is there any sample I could refer to for my application, where the audio input is captured and live-streamed to the respective slave devices?
Thanks.

There is a sample that records, handles, and sends a live stream from the webcam. You can reference it and modify it for your use case:
HttpWebcamLiveStream
It also uses the MediaCapture API. You can set it to audio-only capture, for example, like this:
mediaCapture = new MediaCapture();
var settings = new Windows.Media.Capture.MediaCaptureInitializationSettings();
settings.StreamingCaptureMode = Windows.Media.Capture.StreamingCaptureMode.Audio;
settings.MediaCategory = Windows.Media.Capture.MediaCategory.Communications;
settings.AudioProcessing = Windows.Media.AudioProcessing.Raw;
await mediaCapture.InitializeAsync(settings);
This sample also shows how to implement an HTTP server so that it can send the live stream data to other clients.
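In case it helps, below is a rough sketch (not the sample's actual code) of how such an HTTP endpoint on Windows IoT Core could push captured audio bytes to connected clients using StreamSocketListener; the port, the audio/wav content type, and the GetNextAudioChunkAsync helper are assumptions for illustration.
// Sketch only, not the sample's actual code. Assumes the audio chunks come from
// MediaCapture (e.g. StartRecordToStreamAsync into an InMemoryRandomAccessStream).
using System;
using System.Text;
using System.Threading.Tasks;
using Windows.Networking.Sockets;
using Windows.Storage.Streams;

public sealed class AudioStreamServer
{
    private readonly StreamSocketListener listener = new StreamSocketListener();

    public async Task StartAsync()
    {
        listener.ConnectionReceived += OnConnectionReceived;
        await listener.BindServiceNameAsync("8085"); // port is an arbitrary choice
    }

    private async void OnConnectionReceived(StreamSocketListener sender,
        StreamSocketListenerConnectionReceivedEventArgs args)
    {
        using (var writer = new DataWriter(args.Socket.OutputStream))
        {
            // Minimal HTTP response header, then keep appending audio data.
            string header = "HTTP/1.1 200 OK\r\nContent-Type: audio/wav\r\n\r\n";
            writer.WriteBytes(Encoding.ASCII.GetBytes(header));
            await writer.StoreAsync();

            // byte[] chunk = await GetNextAudioChunkAsync(); // hypothetical helper
            // writer.WriteBytes(chunk);
            // await writer.StoreAsync();
        }
    }
}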

Related

Capture a 1:1 video conference between users and broadcast it to an RTMP URL

I am currently working on a Node.js and socket app that currently does a 1:1 video conference using WebRTC. The videos are two separate elements in the HTML, and I would like to merge them together so that I can broadcast to an RTMP URL for public view (2:many). Is this possible?
For WebRTC, I followed this tutorial: https://www.youtube.com/watch?v=DvlyzDZDEq4, and for broadcasting I am using FFmpeg, which currently handles one video stream.
Please confirm if this is doable.
Update
I was able to merge the videos using
https://www.npmjs.com/package/video-stream-merger
And now the final issue:
I am receiving merger.result, which is the merged stream, and I tried to create a MediaRecorder object. The MediaRecorder ondataavailable callback is called only once, not every 250 ms, which I need in order to broadcast to YouTube. How can I do this?
var merger = new VideoStreamMerger(v_opts);
...
...
merger.start();
myMediaRecorder = new MediaRecorder(merger.result);
myMediaRecorder.start(250);
myMediaRecorder.ondataavailable = function (e) {
    console.log("DataAvailable");
    //socket.emit("binarystream", e.data);
    state = "start";
    //chunks.push(e.data);
}
So you're looking for many peers. This is possible; please see the links below for reference.
WebRTC: https://webrtc.github.io/samples/src/content/peerconnection/multiple/
StackOverflow reference: webRTC multi-peer connection (3 clients and above)
GitHub reference: https://github.com/Dirvann/webrtc-video-conference-simple-peer
https://deepstream.io/tutorials/webrtc/webrtc-full-mesh/

Switch audio output to speakers

I am working on my own specialized VoIP client for Windows 10 Mobile and desktop. The basic things work OK.
However, I cannot get audio output to the speakers on my old Lumia.
foreach (var item in (await DeviceInformation.FindAllAsync(DeviceClass.AudioRender)).Where(i => i.Name.Contains("Speakers")))
    RendererId = item.Id;
There is "Speakers (WaveRT)" in device list so RendererId is valid.
Later application tries to open audio device (WSAPI) with found RendererId. But anyway phone plays to receiver only.
I modified Voip sample app in attempt to reproduce issue - yes, it happens with Voip sample app also.
My collegue confirms he has same problem on his phone.
Is it possible to play audio via speaker for voip app ?
Thank you!
On phone devices only, you can use the AudioRoutingManager to change the audio output.
// to get the audio manager
IsAudioRoutingSupported = ApiInformation.IsApiContractPresent(typeof(PhoneContract).FullName, 1);
if (IsAudioRoutingSupported)
{
    // audio routing is supported, so we register for the output change events
    m_audioRoutingManager = AudioRoutingManager.GetDefault();
    m_audioRoutingManager.AudioEndpointChanged += OnAudioEndpointChanged;
}
// to change the output
m_audioRoutingManager.SetAudioEndpoint(AudioRoutingEndpoint.Speakerphone);
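As a rough illustration (not part of the original answer), the endpoint-changed handler and a speaker toggle might look like this; OnAudioEndpointChanged is the handler registered above, and ToggleSpeakerphone is a hypothetical helper:
// Sketch only: reacting to endpoint changes and toggling between the earpiece
// and the speakerphone. AudioRoutingManager lives in Windows.Phone.Media.Devices.
private void OnAudioEndpointChanged(AudioRoutingManager sender, object args)
{
    var currentEndpoint = sender.GetAudioEndpoint();
    System.Diagnostics.Debug.WriteLine($"Audio endpoint is now: {currentEndpoint}");
}

private void ToggleSpeakerphone()
{
    var manager = AudioRoutingManager.GetDefault();
    if (manager.GetAudioEndpoint() == AudioRoutingEndpoint.Speakerphone)
        manager.SetAudioEndpoint(AudioRoutingEndpoint.Earpiece);
    else
        manager.SetAudioEndpoint(AudioRoutingEndpoint.Speakerphone);
}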

External hardware triggering of a USB3.0 camera with DirectShow and Visual C++

Using external hardware triggering of a UVC-compliant USB 3.0 camera, I want to acquire still images utilizing DirectShow in our Visual C++ code. Using an API supplied by the camera manufacturer, we can set the camera to External Trigger Mode. With the camera in Master Mode (“Free Running Mode” with no external triggering), we get the proper event notification code in our DirectShow VC++ program. However, we don’t get the proper event notification code when triggering the camera externally in External Trigger Mode, as described below.
To get the event notification code of the event interface IMediaEventEx *pEvent of the running filter graph we call
hr = pEvent->WaitForCompletion(INFINITE, &evCode);
or
while (hr = pEvent->GetEvent(&evCode, &param1, &param2, 0), SUCCEEDED(hr))
{
    hr = pEvent->FreeEventParams(evCode, param1, param2);
    if (evCode == EC_COMPLETE)
    {
        break;
    }
}
In Master Mode (no external trigger) we obtain the event notification code EC_COMPLETE for evCode (i.e., all data has been rendered), and to grab the image data from the camera we can call
hr = pSGrabber->GetCurrentBuffer(&cbBufSize, NULL);
and
hr = pSGrabber->GetCurrentBuffer(&cbBufSize, (long*)pBuffer);
Here pSGrabber is the ISampleGrabber interface for the Sample Grabber Filter.
However, in External Trigger Mode we only get the event notification code EC_ACTIVATE for evCode, and therefore we cannot grab any image data. It can also be noted that we set the external-trigger flag via the IAMVideoControl interface for the still Pin:
hr = pAMVidControl->SetMode(pPinStill, VideoControlFlag_ExternalTriggerEnable);
where IPin *pPinStill is the pointer to the still Pin.
We know that the external trigger pulses we use are adequate for triggering the camera, because with a commercial piece of software we succeed in triggering the camera externally. Therefore, I believe the problem is related to DirectShow programming. Does anyone have experience with grabbing image data in DirectShow when using external triggering of a camera, or can anyone point to some source of information?
Thank you very much.

Play Audio from receiver website

I'm trying to get my receiver to play an MP3 file hosted on the server with the following function:
playSound_: function(mp3_file) {
    var snd = new Audio("audio/" + mp3_file);
    snd.play();
},
However, most of the time it doesn't play and when it does play, it's delayed. When I load the receiver in my local browser, however, it works fine.
What's the correct way to play audio on the receiver?
You can use either a MediaElement tag or the Web Audio API. The simplest is probably a MediaElement.

Monotouch video streaming with MPMoviePlayerController

I am creating a Monotouch iPhone app that will display streaming videos. I have been able to get the MPMoviePlayerController working with a local file (NSUrl.FromFile), but have not been able to get videos to stream from a media server.
Here is the code I am using to play the video:
string url = @"http://testhost.com/test.mp4";
var nsurl = NSUrl.FromString(url);
mp = new MPMoviePlayerController(nsurl);
mp.SourceType = MPMovieSourceType.Streaming;
//enable AirPlay
//mp.AllowsAirPlay = true;
//Add the MPMoviePlayerController View
this.View.AddSubview(mp.View);
//set the view to be full screen and show animated
mp.SetFullscreen(true, true);
//MPMoviePlayer must be set to PrepareToPlay before playback
mp.PrepareToPlay();
//Play Movie
mp.Play();
Is there something else in implementing the MPMoviePlayerController for video streaming that I am missing? I also read that videos for iOS should be streamed using Apple's HTTP Live Streaming on the media server, is that a requirement? I have never worked with video streaming on an iOS device before so I am not sure if there is something lacking in my code or the media server setup or a combination.
I'm pretty sure you don't need a streaming server just to play a video file from an HTTP URL. However, it is a requirement for applications (on the App Store) to use HTTP Live Streaming:
Important: iPhone and iPad apps that send large amounts of audio or video data over cellular networks are required to use HTTP Live Streaming. See “Requirements for Apps.”
The good news is that your existing code should not have to be changed to handle this (it's server-side, not client-side).
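If the media server does expose an HLS playlist, the client code stays essentially the same; here is a sketch in which the .m3u8 URL is only a placeholder, not a real endpoint:
// Sketch: the same MPMoviePlayerController setup, but pointed at an HLS playlist.
// The .m3u8 URL below is a placeholder.
string url = @"http://testhost.com/test/index.m3u8";
var nsurl = NSUrl.FromString(url);
mp = new MPMoviePlayerController(nsurl);
mp.SourceType = MPMovieSourceType.Streaming;
this.View.AddSubview(mp.View);
mp.SetFullscreen(true, true);
mp.PrepareToPlay();
mp.Play();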

Resources