I am developing a SIP client in C#/.NET that has a microphone control through which I can mute and unmute the microphone. It works well when I mute and unmute from my application. The issue is that if I open my application, mute the mic, and then open Skype at the same time, Skype's mic is affected too. What I understand is that when I mute the mic in my application, it is muted system-wide. Can anyone guide me on how to mute the mic only in my application, so that it does not affect other applications like Skype, etc.? Here is the code for muting and unmuting the microphone:
private void MicMuteButton_Click(object sender, EventArgs e)
{
    MixerLine micline;
    mMixers = new Mixers();
    mMixers.Playback.MixerLineChanged += new WaveLib.AudioMixer.Mixer.MixerLineChangeHandler(mMixer_MixerLineChanged);
    mMixers.Recording.MixerLineChanged += new WaveLib.AudioMixer.Mixer.MixerLineChangeHandler(mMixer_MixerLineChanged);
    micline = mMixers.Recording.UserLines.GetMixerFirstLineByComponentType(MIXERLINE_COMPONENTTYPE.SRC_MICROPHONE);

    if ((string)MicMuteButton.Image.Tag == "Disabled")
    {
        micline.Mute = false;
        MicMuteButton.Image = Properties.Resources.MicrophoneEnabled;
        MicMuteButton.Image.Tag = "Enabled";
        ttu.SetToolTip(MicMuteButton, "Mute");
    }
    else if ((string)MicMuteButton.Image.Tag == "Enabled")
    {
        micline.Mute = true;
        MicMuteButton.Image = Properties.Resources.MicrophoneDisabled;
        MicMuteButton.Image.Tag = "Disabled";
        ttu.SetToolTip(MicMuteButton, "Unmute");
    }
}
Microsoft wrote a separate framework for lower-level graphics and audio called DirectX. Maybe you should try that.
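Note that micline.Mute toggles the recording line of the system mixer, which is why every other application loses the microphone as well. A per-application mute has to happen in your own capture path: leave the device unmuted and silence the samples before your SIP stack encodes and sends them. Below is a minimal sketch of that idea; isMicMuted, OnAudioCaptured, and SendToSipStream are hypothetical names standing in for your capture callback and transmit step.

private volatile bool isMicMuted;

// Hypothetical capture callback invoked with each recorded PCM buffer.
private void OnAudioCaptured(byte[] buffer, int bytesRecorded)
{
    if (isMicMuted)
    {
        // Overwrite the PCM data with zeros so the remote party hears
        // silence; the device itself stays available to Skype and others.
        Array.Clear(buffer, 0, bytesRecorded);
    }
    SendToSipStream(buffer, bytesRecorded); // hypothetical encode/send step
}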
I have a C# API that gets an audio stream and plays it on my PC (using localhost):
var response = (HttpWebResponse)request.GetResponse();
using (Stream receiveStream = response.GetResponseStream())
{
    PlayWav(receiveStream, false);
}

public static void PlayWav(Stream stream, bool play_looping)
{
    if (Player != null)
    {
        Player.Stop();
        Player.Dispose();
        Player = null;
    }
    if (stream == null) return;

    Player = new SoundPlayer(stream);
    if (play_looping)
        Player.PlayLooping();
    else
        Player.Play();
}
This works fine. However, I need to do what PlayWav does on the client: get the audio stream from the server and play it in the client's browser (I don't want to save it, just play it). I tried using ajax/fetch, I tried ReadableStream, I tried SoundPlayer... nothing works. I am confused and not sure how to handle audio streams. I write JavaScript on the client side. What am I doing wrong?
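One thing to keep in mind: SoundPlayer plays audio on the machine where the C# code runs, so calling PlayWav on the server can never produce sound in the visitor's browser. The browser has to receive the bytes itself over HTTP and play them, for example through an audio element pointed at an endpoint that relays the stream with an audio content type. Here is a minimal sketch of such an endpoint, assuming ASP.NET Core; the route and upstream URL are hypothetical:

using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class AudioController : ControllerBase
{
    [HttpGet("/audio/tone")]
    public async Task<IActionResult> GetTone()
    {
        // Hypothetical upstream source of the WAV data.
        var request = WebRequest.CreateHttp("http://localhost:5000/source.wav");
        var response = (HttpWebResponse)await request.GetResponseAsync();

        // Relay the body to the browser instead of playing it server-side;
        // the content type lets <audio src="/audio/tone"> play it directly.
        return File(response.GetResponseStream(), "audio/wav");
    }
}

On the client, something as small as new Audio('/audio/tone').play() should then be enough; no manual ReadableStream handling is required unless you need custom buffering.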
I'm recording my voice through a Bluetooth headset (ERA JAWBONE) and playing it in real time on the phone speaker. This works with the following code:
buffer = new byte[buffersize];
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

// Route audio through the Bluetooth SCO connection
aManager.startBluetoothSco();
aManager.setBluetoothScoOn(true);
aManager.setMode(AudioManager.MODE_NORMAL);

arec = new AudioRecord(
        MediaRecorder.AudioSource.VOICE_RECOGNITION,
        hz,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        buffersize);

// Attach the platform echo canceller and noise suppressor if available
if (AcousticEchoCanceler.isAvailable()) {
    AcousticEchoCanceler echoCanceller = AcousticEchoCanceler.create(arec.getAudioSessionId());
    if (echoCanceller != null) {
        echoCanceller.setEnabled(true);
    }
} else {
    Log.e(TAG, "Echo Canceler not available");
}
if (NoiseSuppressor.isAvailable()) {
    NoiseSuppressor noiseSuppressor = NoiseSuppressor.create(arec.getAudioSessionId());
    if (noiseSuppressor != null) {
        noiseSuppressor.setEnabled(true);
    }
} else {
    Log.e(TAG, "Noise Suppressor not available");
}

atrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        hz,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        buffersize,
        AudioTrack.MODE_STREAM);
atrack.setPlaybackRate(pbhz);

aManager.setStreamSolo(AudioManager.STREAM_MUSIC, true);
arec.startRecording();
atrack.play();
onRecording = true;

// Pump microphone data straight to the speaker on a background thread
Runnable runnable = new Runnable() {
    @Override
    public void run() {
        while (onRecording) {
            arec.read(buffer, 0, buffersize);
            atrack.write(buffer, 0, buffer.length);
        }
    }
};
new Thread(runnable).start();
But my headset is picking up every sound in the room and starts to echo. When I use the headset to make a call, my voice is clear. According to the specs, the JawBone uses NoiseAssassin, but I'm pretty sure it isn't activated when I'm using the code above. Neither the NoiseSuppressor nor the AcousticEchoCanceler is available on my device.
Is there a way to filter out the noise and have just a clear voice coming from the phone speaker?
I can't seem to find the property on the MediaCapture class that allows me to detect the front camera and switch to it if available. Here is my current setup of the device; it all works as expected on Windows (front cam) and Phone (rear cam). None of the Microsoft samples show the front camera being used in Universal or WP 8.1 (WinRT/Jupiter) apps.
mediaCaptureManager = new MediaCapture();
await mediaCaptureManager.InitializeAsync();

if (mediaCaptureManager.MediaCaptureSettings.VideoDeviceId != "" && mediaCaptureManager.MediaCaptureSettings.AudioDeviceId != "")
{
    StartStopRecordingButton.IsEnabled = true;
    TakePhotoButton.IsEnabled = true;
    ShowStatusMessage("device initialized successfully!");

    mediaCaptureManager.VideoDeviceController.PrimaryUse = CaptureUse.Video;
    mediaCaptureManager.SetPreviewRotation(VideoRotation.Clockwise90Degrees);
    mediaCaptureManager.SetRecordRotation(VideoRotation.Clockwise90Degrees);
    mediaCaptureManager.RecordLimitationExceeded += RecordLimitationExceeded;
    mediaCaptureManager.Failed += Failed;
}
There is a sample on the Microsoft GitHub page that is relevant, although it targets Windows 10. Still, the APIs should work on 8/8.1.
UniversalCameraSample: this one captures photos and supports portrait and landscape orientations. Here is the relevant part:
private static async Task<DeviceInformation> FindCameraDeviceByPanelAsync(Windows.Devices.Enumeration.Panel desiredPanel)
{
    // Get available devices for capturing pictures
    var allVideoDevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);

    // Get the desired camera by panel
    DeviceInformation desiredDevice = allVideoDevices.FirstOrDefault(x => x.EnclosureLocation != null && x.EnclosureLocation.Panel == desiredPanel);

    // If there is no device mounted on the desired panel, return the first device found
    return desiredDevice ?? allVideoDevices.FirstOrDefault();
}
And you can use it like so:
// Attempt to get the front camera if one is available, but use any camera device if not
var cameraDevice = await FindCameraDeviceByPanelAsync(Windows.Devices.Enumeration.Panel.Front);
if (cameraDevice == null)
{
    Debug.WriteLine("No camera device found!");
    return;
}

// Create MediaCapture and its settings
_mediaCapture = new MediaCapture();
var settings = new MediaCaptureInitializationSettings { VideoDeviceId = cameraDevice.Id };

// Initialize MediaCapture
try
{
    await _mediaCapture.InitializeAsync(settings);
    _isInitialized = true;
}
catch (UnauthorizedAccessException)
{
    Debug.WriteLine("The app was denied access to the camera");
}
catch (Exception ex)
{
    Debug.WriteLine("Exception when initializing MediaCapture with {0}: {1}", cameraDevice.Id, ex.ToString());
}
Have a closer look at the sample to see how the details are handled. Or you can watch the camera session from the recent //build/ conference, which includes a short walkthrough of some of the camera samples.
Here is how to get the device's available cameras and set the front one for the stream:
mediaCaptureManager = new MediaCapture();
var devices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
var deviceInfo = devices[0]; // grab first result as a fallback

foreach (var device in devices)
{
    if (device.Name.ToLowerInvariant().Contains("front"))
    {
        deviceInfo = frontCamera = device;
        hasFrontCamera = true;
    }
    if (device.Name.ToLowerInvariant().Contains("back"))
    {
        rearCamera = device;
    }
}

var mediaSettings = new MediaCaptureInitializationSettings
{
    MediaCategory = MediaCategory.Communications,
    StreamingCaptureMode = StreamingCaptureMode.AudioAndVideo,
    VideoDeviceId = deviceInfo.Id
};

await mediaCaptureManager.InitializeAsync(mediaSettings);
You'll need to consider rotation, because front and rear cameras on different devices have different native rotations, but this will initialize your MediaCapture properly.
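As a rough illustration of the rotation point, you can branch on the enclosure panel once you know which device you picked. This is only a sketch: the concrete VideoRotation values vary between devices and are an assumption here, and deviceInfo is the variable selected in the loop above.

// Pick a preview/record rotation per camera; the 90/270-degree values
// are typical for portrait-first phones, not guaranteed for every device.
var rotation = VideoRotation.Clockwise90Degrees;
if (deviceInfo.EnclosureLocation != null &&
    deviceInfo.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Front)
{
    // Front cameras are usually mirrored, so they often need the opposite rotation.
    rotation = VideoRotation.Clockwise270Degrees;
}
mediaCaptureManager.SetPreviewRotation(rotation);
mediaCaptureManager.SetRecordRotation(rotation);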
I'm using JSR 135 to play an audio file.
This is how I'm doing it:
InputStream is = getClass().getResourceAsStream("/tone.mp3");
Player p = Manager.createPlayer(is, "audio/mp3");
p.start();

// get volume control for player and set volume to max
VolumeControl vc = (VolumeControl) p.getControl("VolumeControl");
if (vc != null) {
    vc.setLevel(100);
}

p.close();
p = null;
is.close();
is = null;
But I need the audio to always play through the loudspeaker of the phone. At the moment, if I plug in a headset, the sound will only play in the earpiece. Is there a way to achieve this?
No, there is no way to achieve this, and rightly so IMO.
I am trying to write code in J2ME for the Nokia SDK (S60 device) and am using Eclipse.
The code tries to play some wav files placed within the "res" directory of the project. The code is as follows:
InputStream in1 = null;
System.out.println("About to play voice: " + i);
try {
    System.out.println("Getting the resource as a stream.");
    in1 = getClass().getResourceAsStream(getSound(i));
    System.out.println("Got the resource. Moving on to get a player.");
} catch (Exception e) {
    e.printStackTrace();
}

try {
    player = Manager.createPlayer(in1, "audio/x-wav");
    System.out.println("Created player.");
    //player.realize();
    //System.out.println("Realized the player.");
    if (player.getState() != Player.REALIZED) {
        System.out.println("Realizing the player.");
        player.realize();
    }
    player.prefetch();
    System.out.println("Fetched player. Now starting to play sound.");
    player.start();
    in1.close();
    int i1 = player.getState();
    System.out.println("Player opened. Playing requested sound.");
    //player.deallocate();
    //System.out.println("Deallocated the player.");
} catch (Exception e) {
    e.printStackTrace();
}
The function getSound returns a string containing the name of the file to be played. It is as follows:
private String getSound(int i) {
    switch (i) {
        case 1: return "/x1.wav";
        case 2: return "/x2.wav";
        default: return null; // required so all code paths return a value
    }
}
My problem is this: when I try to add more than 10 sounds, the entire application hangs right before the prefetch() call, and the whole system slows down considerably for a while. I then have to restart the application.
I have tried to debug this but have not found a solution so far. It would be great if I could get some help on this.
The problem lies in the emulator being used for the project. In the Emulation tab of the Run Configurations window, the following device must be selected:
Group: Nokia N97 SDK v1.0
Device: S60 Emulator
Changing to the above from the devices listed under the Sun Java Wireless Toolkit solved the problem.