How does a Windows Phone detect if a headset is plugged in? - audio

I am developing a Windows Phone application that is supposed to send audio signals to a circuit. The circuit signals another device on receiving the audio from the phone. This works fine when connected to the audio jack of a laptop/PC, but the Windows Phone does not detect the device as a headset and plays the audio through the loudspeaker. Is there any way to force the signals out through the audio port?

Related

PCM data handling with Raspberry Pi 4 and audio codec (TLV320AIC3104)

I'm working on a project with a Raspberry Pi Compute Module 4 and IO board.
This is a diagram of the project:
https://drive.google.com/file/d/1mg5IhAKTUE2Athzafis1KSsEJS1T7DXG/view?usp=sharing
I have to handle audio and voice to make calls with the Raspberry Pi. My setup: an audio box with a headset attached passes the signal through a TLV320AIC3104 audio codec, connected via PCM to the Raspberry Pi, which then makes the calls through a USB modem with an integrated audio card. I need two-way communication.
So far, I can make calls with the Raspberry Pi and exchange audio and voice with the caller using a USB headset.
I am using the TLV320AIC3104EVM-K evaluation board (https://www.ti.com/lit/ug/slau218a/slau218a.pdf?ts=1664262721095&ref_url=https%253A%252F%252Fwww.ti.com%252Ftool%252FTLV320AIC3104EVM-K) to learn how to connect the audio codec to the Raspberry Pi, but so far I have been unable to get anything out of it. When the TLV320AIC3104EVM-K is connected to the PC via USB, I can use the headset through the audio box without problems.
The thing is, I can't figure out how to connect the audio codec to the Raspberry Pi to get the headset audio, and then finish the setup by passing the data on to the modem and back.
I understand that the connection between the TLV320AIC3104EVM-K and the Raspberry Pi is actually an I2S connection. I have connected the GPIO PCM ports (GPIO 18, 19, 20, 21). In /etc/asound.conf I have configured card 0 (the modem's card) as the default:
/etc/asound.conf
pcm.!default {
    type hw
    card 0
}
ctl.!default {
    type hw
    card 0
}
And the GPIO ports are configured to ALT0.
What am I missing here?
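One thing worth checking before touching asound.conf: on the Pi side the I2S interface itself has to be enabled in the device tree before any external codec can register as an ALSA card. A minimal sketch of the relevant /boot/config.txt lines (note the TLV320AIC3104 has no stock overlay in Raspberry Pi OS, so the `dtoverlay` name below is a placeholder standing in for a custom or generic machine-driver overlay you would have to provide):

```
# /boot/config.txt — enable the I2S peripheral
dtparam=i2s=on
# A codec-specific device-tree overlay is also needed so ALSA gets a card
# for the TLV320AIC3104; there is no stock overlay for this chip, so this
# line is only a placeholder for a custom overlay.
dtoverlay=my-aic3104-overlay
```

After a reboot, `aplay -l` should list the codec as a second card alongside the modem's card; if it does not appear there, the device tree, not asound.conf, is the first place to look.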

How can I have my Linux desktop sit between a cell phone and a headset using HFP or HSP, to enable recording?

I'm not sure if this is feasible, but here's the idea: I want my Linux PC to connect to the cell phone as if it were a headset, and have the headset connect to the PC as if the PC were a phone. Then I would bridge or pass through the audio, while using something like alsamixer to tap into the stream and save it to disk.
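Assuming a PulseAudio desktop, the bridge-plus-tap idea can be sketched with `module-loopback`. The `bluez_*` device names below are placeholders for whatever `pactl list short sources` and `pactl list short sinks` actually report once the phone and headset are paired:

```shell
# Placeholder device names: substitute the real ones reported by
# `pactl list short sources` and `pactl list short sinks`.
PHONE_SRC="bluez_source.PHONE_MAC.handsfree_head_unit"
HEADSET_SINK="bluez_sink.HEADSET_MAC.a2dp_sink"

if pactl info >/dev/null 2>&1; then
    # Bridge: audio coming in from the phone is looped back out to the headset.
    pactl load-module module-loopback source="$PHONE_SRC" sink="$HEADSET_SINK"
    # Tap: record the same phone source to disk at the same time
    # (10 s here just to keep the demo bounded).
    timeout 10 parecord --device="$PHONE_SRC" call.wav \
        || echo "recording failed (placeholder device names in this sketch)"
else
    echo "no PulseAudio daemon available; nothing to bridge"
fi
```

A second loopback in the opposite direction (headset mic as source, phone as sink) would be needed for two-way audio, and HFP/HSP limits the call audio to 8 or 16 kHz mono regardless of how it is captured.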

How to analyze my own transmitted Bluetooth signal on Android

I want to create an Android app that can receive my own transmitted Bluetooth signal, since the Bluetooth signal is omnidirectional.
That is, I want to transmit and receive Bluetooth at the same time, receiving and analyzing my own transmitted signal on the same phone.
I also tried using a beacon simulator to advertise a BLE beacon, then scanned for available devices in my Bluetooth settings, but it does not show my Android device.
How can I do this?
It is not possible to simultaneously transmit and receive the same packets with the same antenna system. In theory you could receive packets resulting from reflections, but in practice this cannot work: switching the radio from transmit mode to receive mode takes time.
To capture Bluetooth packets you can use a CC2640 dongle with TI's PC application:
http://www.ti.com/tool/PACKET-SNIFFER

No audio data sent from PC to STM32F4 USB audio class device

I'm working on an audio project. We use an STM32F407 as a USB audio device to get audio data from a PC and send it out through the I2S module. We are using the STM32F4 Discovery kit and STM32CubeMX. After generating code by following this video, I changed nothing and flashed it to the kit; my PC identifies the STM audio device, but no data is sent to my kit when playing music, except MuteCMD. My questions are:
I don't know which function is the callback when data streams from the PC to the kit.
Why does the PC identify my kit as an audio output device while the volume-control callback isn't called when I adjust the volume on the PC, and no music data is sent to my device? Only the mute-control callback is called, when I mute the PC.
This is my configuration in STM32CubeMX:
pinout config figure
USB device config figure 1
USB device config figure 2
USB device config figure 3
PC identifies AUDIO device figure
choosing PC's audio output device figure
fail to play test tone figure
You should set USBD_AUDIO_FREQ to 22050 (or 44100, or 11025). Your value is 22100, and it seems that Windows or the built-in audio drivers can't use that frequency.
I had the exact same problem.
My project was generated from STM32Cube.
Windows recognized the F7-DISCO board as a sound card but failed to play test sounds.
I changed USBD_AUDIO_FREQ to 48000 and the PID to 0x5730 (22320 in decimal).
After that, everything worked fine.
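Put together, the fix amounts to a one-line change in the Cube-generated configuration header (macro name as in the STM32Cube USB audio class templates; 48000 is the value reported to work above):

```c
/* usbd_conf.h (STM32Cube-generated): USBD_AUDIO_FREQ must be a standard
 * sample rate; 22100 Hz is not one, and the host's class driver rejects it. */
#define USBD_AUDIO_FREQ   48000U   /* 44100U, 22050U or 11025U also work */
```

Remember to regenerate or rebuild so the new rate lands in the USB descriptors the PC reads at enumeration.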

How to handle voice from Bluetooth?

I am working on a Raspberry Pi based embedded project that turns a Raspberry Pi into a Bluetooth-based voice processing system. I have a BLE mic and a BLE receiver connected to the RPi, and BlueZ integrated into my stack. I am able to connect both devices. But the problem I am facing now is handling audio from the mic: I don't have ALSA or PulseAudio in my system. Is there a way to get PCM data from the BlueZ stack directly? It would be helpful if there is an architecture for handling voice over BLE.
