webrtc audio_processing: what is the keyboard channel in StreamConfig?

In webrtc/modules/audio_processing/include/audio_processing.h, class StreamConfig has an option has_keyboard. My question is: what is it, and how do I use it?

That's the keyboard microphone channel and by default it will be used if the user's keyboard has a microphone.
If you look at helpers.cc:
webrtc::StreamConfig CreateStreamConfig(const AudioParameters& parameters) {
  int channels = parameters.channels();
  (...)
  const bool has_keyboard = parameters.channel_layout() ==
                            media::CHANNEL_LAYOUT_STEREO_AND_KEYBOARD_MIC;
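As a rough sketch (not from the original answer), this is how you might construct a StreamConfig that flags the keyboard channel yourself, assuming a WebRTC revision where the StreamConfig constructor still accepts a has_keyboard argument; the sample rate and channel count are example values:
// Hedged sketch: constructor arguments are assumptions based on older
// WebRTC releases that expose has_keyboard.
webrtc::StreamConfig capture_config(48000 /* sample_rate_hz */,
                                    2     /* num_channels */,
                                    true  /* has_keyboard */);
// has_keyboard tells the audio processing module that the stream also
// carries a keyboard-mic channel in addition to the ordinary audio channels.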


How do I send metadata from ExoPlayer to bluetooth?

I'm trying to send fixed metadata over Bluetooth in my radio app: basically I would put the radio name as the title and the radio slogan as the subtitle, so there isn't anything dynamic involved.
I have tried searching for other answers on StackOverflow but they're related to ICY streams or getting the metadata from ExoPlayer itself.
The stream itself provides the metadata when listening directly through FM or a stream player (for example, VLC), but it fails to display when going through my app.
This is my code; from what I've managed to understand, I should send the metadata inside the brackets after 'addMetadataOutput'.
extractorsFactory = new DefaultExtractorsFactory();
trackSelectionFactory = new AdaptiveTrackSelection.Factory(bandwidthMeter);
trackSelector = new DefaultTrackSelector(trackSelectionFactory);
defaultBandwidthMeter = new DefaultBandwidthMeter();
dataSourceFactory = new DefaultDataSourceFactory(this,
Util.getUserAgent(this, "mediaPlayerSample"), defaultBandwidthMeter);
mediaSource = new ExtractorMediaSource(Uri.parse("https://sr11.inmystream.it/proxy/radiocircuito29?mp=/stream"), dataSourceFactory, extractorsFactory, null, null);
player = ExoPlayerFactory.newSimpleInstance(this, trackSelector);
player.addMetadataOutput();
There are lots of dependencies in this area; it's not just a case of sending values. You are going to need to handle notifications and media sessions.
For that, you are going to need some additional ExoPlayer extensions, specifically MediaSessionCompat and MediaSessionConnector, and to connect them up to your ExoPlayer and NotificationManager.
mMediaSession = new MediaSessionCompat( this, Constants.PLAYBACK_CHANNEL_ID );
mPlayerNotificationManager.setMediaSessionToken( mMediaSession.getSessionToken() );
mMediaSession.setActive( true );
mMediaSessionConnector = new MediaSessionConnector( mMediaSession );
mMediaSessionConnector.setPlayer( mPlayer );
Once you have these, you also need to implement a DescriptionAdapter class
public class MyDescriptionAdapter implements PlayerNotificationManager.MediaDescriptionAdapter
This implements the interface you populate with your static metadata. You then need to wire it up to your PlayerNotificationManager using setMediaDescriptionAdapter.
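As a rough sketch (the class name and the returned strings below are placeholders, not from the original answer), the adapter for fixed metadata could look something like this, assuming an ExoPlayer 2.x release where MediaDescriptionAdapter has these four methods:
import android.app.PendingIntent;
import android.graphics.Bitmap;
import com.google.android.exoplayer2.Player;
import com.google.android.exoplayer2.ui.PlayerNotificationManager;

public class MyDescriptionAdapter implements PlayerNotificationManager.MediaDescriptionAdapter {
    @Override
    public CharSequence getCurrentContentTitle(Player player) {
        return "My Radio Name";          // fixed station name as the title
    }

    @Override
    public CharSequence getCurrentContentText(Player player) {
        return "My radio slogan";        // fixed slogan as the subtitle
    }

    @Override
    public Bitmap getCurrentLargeIcon(Player player, PlayerNotificationManager.BitmapCallback callback) {
        return null;                     // optionally return a station logo bitmap
    }

    @Override
    public PendingIntent createCurrentContentIntent(Player player) {
        return null;                     // optionally an intent opening your player activity
    }
}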

Which HID key code to use for KEYCODE_BACK button mapping?

Platform: STB Xiaomi Mi Box S 4
OS version: Android 9
Issue description:
I want to use a USB keyboard gadget to control the box. I mapped the remote controller buttons (arrow buttons/select/home) to the corresponding HID key codes following the page. However, none of the specified key codes (0x00f1, 0x009e) for the KEYCODE_BACK button works as expected.
Question:
Do you maybe know which HID key code should be used for the BACK button?
Any help is appreciated!
I found a detailed document about HID usages. I hope you can find your answer in the PDF.
PDF: https://usb.org/sites/default/files/hut1_21.pdf
You may use the following code to determine whether the key is Back/Menu from the Android key code:
if ((event.getKeyCode() == KeyEvent.KEYCODE_BACK
        || event.getKeyCode() == KeyEvent.KEYCODE_MENU
        || event.getKeyCode() == KeyEvent.KEYCODE_BUTTON_MODE)
        && event.getRepeatCount() == 0) {
    // onBackPressed(): your own function that decides whether to block the
    // action and do something else; return true if you want to consume the key
    if (onBackPressed()) {
        return true;
    } else {
        // let the Android system handle the back button
        return false;
    }
}

How to play more than one stream at once using soundio?

I'm new to soundio. I'm wondering how to play more than one audio source at once.
I'm looking to understand whether I'm supposed to create several streams (seems wrong) and let the operating system do the mixing, or whether I'm supposed to implement software mixing.
Software mixing seems tough too if my input sources are operating at different frequencies.
Am I basically asking "how to mix audio"?
I need a little direction.
Here's my use-case:
I have 5 different MP3 files. One is background music and the other 4 are sound effects. I want to start the background music and then play the sound effects as the user does something (such as click a graphical button). (This is for a game)
You can create multiple streams and play them simultaneously. You don't need to do the mixing yourself. Still, it takes a fair amount of work.
Define WAVE_INFO and PLAYBACK_INFO:
struct WAVE_INFO
{
    SoundIoFormat format;
    std::vector<unsigned char*> data;
    int frames; // number of frames in this clip
};

struct PLAYBACK_INFO
{
    const WAVE_INFO* wave_info; // information about the sound clip
    int progress;               // number of frames already played
};
Extract WAVE info out of sound clips and store them in an array of WAVE_INFO: std::vector<WAVE_INFO> waves_;. This vector is not going to change after being initialized.
When you want to play waves_[index]:
SoundIoOutStream* outstream = soundio_outstream_create(sound_device_);
outstream->write_callback = write_callback;
PLAYBACK_INFO* playback_info = new PLAYBACK_INFO{&waves_[index], 0};
outstream->format = playback_info->wave_info->format;
outstream->userdata = playback_info;
soundio_outstream_open(outstream);
soundio_outstream_start(outstream);
std::thread stopper([this, outstream]()
{
    PLAYBACK_INFO* playback_info = (PLAYBACK_INFO*)outstream->userdata;
    while (playback_info->progress != playback_info->wave_info->frames)
    {
        soundio_wait_events(soundio_);
    }
    soundio_outstream_destroy(outstream);
    delete playback_info;
});
stopper.detach();
In the write_callback function:
PLAYBACK_INFO* playback_info = (PLAYBACK_INFO*)outstream->userdata;
int frames_left = playback_info->wave_info->frames - playback_info->progress;
if (frames_left == 0)
{
    soundio_wakeup(Window::window_->soundio_);
    return;
}
if (frames_left > frame_count_max)
{
    frames_left = frame_count_max;
}
// Fill the buffer using soundio_outstream_begin_write and
// soundio_outstream_end_write with the data in
// playback_info->wave_info->data, starting at playback_info->progress,
// then advance playback_info->progress by the number of frames written.

// For background music (loop it while the application is running):
if (playback_info->wave_info->frames == playback_info->progress)
{
    // if the application has not exited:
    playback_info->progress = 0;
}
Although this solution works, it needs lots of improvements. Please consider it a proof of concept only.
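To make the fill comment above concrete, here is a minimal sketch of a complete write_callback, assuming planar 16-bit samples (one unsigned char* per channel in wave_info->data) and glossing over error handling and the frame_count_min requirement; adapt it to your actual format and channel layout:
// Assumes <soundio/soundio.h>, <cstring> and <cstdint> are included,
// and PLAYBACK_INFO is defined as above.
static void write_callback(SoundIoOutStream* outstream,
                           int frame_count_min, int frame_count_max)
{
    PLAYBACK_INFO* playback_info = (PLAYBACK_INFO*)outstream->userdata;
    int frames_left = playback_info->wave_info->frames - playback_info->progress;
    if (frames_left == 0)
    {
        soundio_wakeup(Window::window_->soundio_); // wake the stopper thread
        return;
    }
    if (frames_left > frame_count_max)
        frames_left = frame_count_max;

    SoundIoChannelArea* areas;
    int frame_count = frames_left;
    if (soundio_outstream_begin_write(outstream, &areas, &frame_count) != SoundIoErrorNone)
        return;

    const int channels = outstream->layout.channel_count;
    for (int frame = 0; frame < frame_count; ++frame)
    {
        const int src_frame = playback_info->progress + frame;
        for (int ch = 0; ch < channels; ++ch)
        {
            // copy one 16-bit sample for this channel and frame
            const int16_t* src =
                (const int16_t*)playback_info->wave_info->data[ch] + src_frame;
            memcpy(areas[ch].ptr + areas[ch].step * frame, src, sizeof(int16_t));
        }
    }
    soundio_outstream_end_write(outstream);
    playback_info->progress += frame_count; // advance playback position
}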

How do I share an Audio File in an App using Swift 3?

Sharing an Audio File in Swift
How do I share an audio file which exists in my apps document directory to other apps?
To elaborate: when a user taps a share button in the app, they should be able to email their recorded audio track to another person, or alternatively send it to a range of other apps which can handle audio, like perhaps SoundCloud.
Researching the topic, I have found:
UIActivityViewController
UIDocumentInteractionController
My application makes an audio recording of a person's voice which they should be able to share, but despite searching through Stack Overflow, I have not been able to find a code example of how exactly this can be implemented in a Swift app. Can I request suggestions and example code on how this may be accomplished? Many thanks.
Swift 3.x:
let activityItem = URL.init(fileURLWithPath: Bundle.main.path(forResource: "fileName", ofType: "mp3")!)
let activityVC = UIActivityViewController(activityItems: [activityItem],applicationActivities: nil)
activityVC.popoverPresentationController?.sourceView = self.view
self.present(activityVC, animated: true, completion: nil)
My answer shows how to do this with UIDocumentInteractionController.
I begin by instantiating a UIDocumentInteractionController at the top of my class
var controller = UIDocumentInteractionController()
Then I link up an IBAction to a share button on my nib or Storyboard:
@IBAction func SHARE(_ sender: Any) {
    let dirPath: String = NSSearchPathForDirectoriesInDomains(.documentDirectory,
                                                              .userDomainMask,
                                                              true)[0]
    let recordingName = UserDefaults.standard.string(forKey: "recordingName")
    let pathArray: [String] = [dirPath, recordingName!]
    let filePathString: String = pathArray.joined(separator: "/")
    controller = UIDocumentInteractionController(url: URL(fileURLWithPath: filePathString))
    controller.presentOpenInMenu(from: CGRect.zero,
                                 in: self.view,
                                 animated: true)
}

How to get a DirectShow filter?

I am recording video from a webcam using DirectshowLib2005.dll in C#.NET. I have this code to start video recording, as below:
try
{
    IBaseFilter capFilter = null;
    IBaseFilter asfWriter = null;
    IFileSinkFilter pTmpSink = null;
    ICaptureGraphBuilder2 captureGraph = null;
    GetVideoDevice();
    if (availableVideoInputDevices.Count > 0)
    {
        //
        // init capture graph
        //
        graphBuilder = (IFilterGraph2)new FilterGraph();
        captureGraph = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
        //
        // set the filter graph on the capture graph builder
        //
        captureGraph.SetFiltergraph(graphBuilder);
        //
        // add the capture device the graph will use
        //
        graphBuilder.AddSourceFilterForMoniker(AvailableVideoInputDevices.First().Mon, null, AvailableVideoInputDevices.First().Name, out capFilter);
        captureDeviceName = AvailableVideoInputDevices.First().Name;
        //
        // check whether the saving path exists; if not, create it
        //
        if (!Directory.Exists(ConstantHelper.RootDirectoryName + "\\Assets\\Video\\"))
        {
            Directory.CreateDirectory(ConstantHelper.RootDirectoryName + "\\Assets\\Video\\");
        }
        #region WMV
        //
        // set the output file name and file type
        //
        captureGraph.SetOutputFileName(MediaSubType.Asf, ConstantHelper.RootDirectoryName + "\\Assets\\Video\\" + videoFilename + ".wmv", out asfWriter, out pTmpSink);
        //
        // configure which video settings the graph uses
        //
        IConfigAsfWriter lConfig = asfWriter as IConfigAsfWriter;
        Guid asfFilter = new Guid("8C45B4C7-4AEB-4f78-A5EC-88420B9DADEF");
        lConfig.ConfigureFilterUsingProfileGuid(asfFilter);
        #endregion
        //
        // render the stream to the output file using the graph settings
        //
        captureGraph.RenderStream(null, null, capFilter, null, asfWriter);
        m_mediaCtrl = graphBuilder as IMediaControl;
        m_mediaCtrl.Run();
        isVideoRecordingStarted = true;
        VideoStarted(m_mediaCtrl, null);
    }
    else
    {
        isVideoRecordingStarted = false;
    }
}
catch (Exception Ex)
{
    ErrorLogging.WriteErrorLog(Ex);
}
If you look at these lines of code:
//
// configure which video settings the graph uses
//
IConfigAsfWriter lConfig = asfWriter as IConfigAsfWriter;
Guid asfFilter = new Guid("8C45B4C7-4AEB-4f78-A5EC-88420B9DADEF");
lConfig.ConfigureFilterUsingProfileGuid(asfFilter);
you can see they apply the video settings described by that GUID. I got this GUID from the file located at "C:\windows\WMSysPr9.prx".
So my question is: how do I create my own video settings, with format, resolution and so on?
And how do I record video from the webcam in black and white mode, or in grayscale?
How do I create my own video settings, with format, resolution and so on?
GUID-based profiles are deprecated, though you can still use them. You can build a custom profile in code using WMCreateProfileManager and friends (you start with an empty profile and add video and/or audio streams at your discretion). This is a C++ API, and I suppose that WindowsMedia.NET, a sister project to the DirectShowLib you are already using, provides an interface to it from .NET code. A sketch of the native calls follows below.
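As a rough, hedged sketch of the native C++ side (not a complete configuration; error handling and the media-type setup are omitted, and the bitrate is just an example value):
// Assumes the Windows Media Format SDK headers/libs (wmsdk.h, wmvcore.lib).
IWMProfileManager* profileManager = NULL;
HRESULT hr = WMCreateProfileManager(&profileManager);

// Start from an empty profile and add streams at your discretion.
IWMProfile* profile = NULL;
hr = profileManager->CreateEmptyProfile(WMT_VER_9_0, &profile);

IWMStreamConfig* videoStream = NULL;
hr = profile->CreateNewStream(WMMEDIATYPE_Video, &videoStream);
hr = videoStream->SetBitrate(1000000);   // e.g. 1 Mbps
hr = videoStream->SetStreamNumber(1);
// ... configure the stream's IWMMediaProps (WM_MEDIA_TYPE / WMVIDEOINFOHEADER)
//     for codec, resolution and frame rate here ...
hr = profile->AddStream(videoStream);
// The finished IWMProfile can then be handed to the ASF writer via
// IConfigAsfWriter::ConfigureFilterUsingProfile instead of a profile GUID.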
The Windows SDK WMGenProfile sample both shows how to build a profile manually and provides a tool to build one interactively and save it into a .PRX file you can use in your application.
$(WindowsSDK)\Samples\multimedia\windowsmediaformat\wmgenprofile
How do I record video from the webcam in black and white mode, or in grayscale?
The camera gives you a picture, which then goes through the pipeline, with certain processing applied, up to recording. The ability to make it greyscale is not something inherent.
There are two things you might want to think of. First of all, if the camera is capable of stripping color information on capture, you can leverage this. Check it out: if its settings have a Saturation slider, you just set it to the minimum value and the camera gives you greyscale.
In code, you use the IAMVideoProcAmp interface for this, as in the sketch below.
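A minimal sketch, assuming DirectShowLib's IAMVideoProcAmp bindings and that capFilter (the capture filter from the code above) exposes the interface:
// Hedged sketch: drop Saturation to its minimum to get a greyscale picture.
IAMVideoProcAmp procAmp = capFilter as IAMVideoProcAmp;
if (procAmp != null)
{
    int min, max, step, defaultValue;
    VideoProcAmpFlags flags;
    // query the supported range for Saturation
    int hr = procAmp.GetRange(VideoProcAmpProperty.Saturation,
                              out min, out max, out step, out defaultValue, out flags);
    if (hr == 0)
    {
        // set saturation manually to its minimum value
        procAmp.Set(VideoProcAmpProperty.Saturation, min, VideoProcAmpFlags.Manual);
    }
}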
Another option, including when the camera is missing the mentioned capability, is to apply a post-processing filter or effect that converts to greyscale. There is no stock solution for this, but there are several ways to achieve the effect:
use a third-party filter that strips color
export from the DirectShow pipeline and convert the data in code using the Color Control Transform DSP (available starting with Windows Vista) or GDI functions
use a Sample Grabber in the streaming pipeline and update the image bits directly
