I'm trying to play 24-bit PCM audio (s24le) using LibVLCSharp and NAudio.
First, I set an audio play callback by calling libvlc_audio_set_callbacks() to get the raw sample data.
Next, I tried two sample videos. One has 24-bit PCM audio and the other has 16-bit audio (converted from the 24-bit version with the ffmpeg CLI); apart from that, they are identical.
Both play fine, but when I inspected the data in the debugger, the sample data received in VLC's audio play callback was identical for the two files.
After some research, I found that a typical PC can't play 24-bit PCM audio without a suitable sound card.
If that's right, is there a bit-depth conversion (24 -> 16) step before playback? Would that explain why the sample data received by the callback is the same?
You could set the format parameter here if you wanted to force VLC to convert the format:
https://code.videolan.org/mfkl/libvlcsharp-samples/-/blob/master/AudioCallbacks/Program.cs#L59
int AudioSetup(ref IntPtr opaque, ref IntPtr format, ref uint rate, ref uint channels)
See the doc here.
format should be a string of 4 ASCII characters, but I don't know which formats are available to pass here.
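For what it's worth, the usual 24 -> 16 reduction simply drops the least significant byte of each sample. A minimal Python sketch of that idea (illustrative only, not VLC's actual conversion code):

```python
def s24le_to_s16le(data: bytes) -> bytes:
    """Convert packed signed 24-bit little-endian PCM to 16-bit
    by discarding the least significant byte of each sample."""
    out = bytearray()
    for i in range(0, len(data), 3):
        # The top two bytes of an s24le sample already form a valid s16le sample.
        out += data[i + 1:i + 3]
    return bytes(out)

# One sample 0x123456 (s24le byte order: 56 34 12) -> s16le bytes 34 12
print(s24le_to_s16le(bytes([0x56, 0x34, 0x12])).hex())  # -> "3412"
```

If the callback data for both files matches after this kind of truncation, that would be consistent with VLC converting to 16-bit somewhere in its output pipeline.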
What I want to do:
I want to build an audio recorder with my Arduino. I have a mic connected to it, as well as an SD card adapter. On a button push I want to record something and save it to the SD card.
Problem:
I used this WAV file "template" -> How to convert analog input readings from Arduino to .WAV from sketch to create a file and write it onto my SD card. I am using
int micIn = analogRead(A1);
writeDataToWavFile(micIn);
to "feed" my WAV file with data. Values are mapped from -32.. to 32.. (see the method in the link).
The good news is that the WAV file is created correctly and isn't corrupted, but there is no sound and the reported length is 0 (even though bytes are written).
I also tried to use
writeDataToWavFile(0);
because I thought I'd at least get a silent but longer (at least existent) recording, but it didn't work.
How am I supposed to add the actual data? Just as raw voltage readings? Mapped? Zero-centered? Do I need to sample them, or are they already sampled?
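Not an Arduino answer as such, but the "length is 0" symptom usually means the two size fields in the WAV header (the RIFF chunk size and the data chunk size) were never updated after the samples were written. A Python sketch of that bookkeeping, assuming signed 16-bit mono PCM (write_wav is a hypothetical helper, not from the linked template):

```python
import struct

def write_wav(path, samples, sample_rate=8000):
    """Write signed 16-bit mono PCM samples to a WAV file,
    filling in the two size fields a length-0 file usually gets wrong."""
    data = b"".join(struct.pack("<h", s) for s in samples)
    with open(path, "wb") as f:
        f.write(b"RIFF")
        f.write(struct.pack("<I", 36 + len(data)))  # total file size - 8
        f.write(b"WAVEfmt ")
        f.write(struct.pack("<IHHIIHH",
                            16,               # fmt chunk size
                            1,                # audio format: PCM
                            1,                # channels: mono
                            sample_rate,
                            sample_rate * 2,  # byte rate (2 bytes/sample)
                            2,                # block align
                            16))              # bits per sample
        f.write(b"data")
        f.write(struct.pack("<I", len(data))) # data chunk size: must be set
        f.write(data)

# One second of silence now plays as a 1-second silent file, not length 0.
write_wav("test.wav", [0] * 8000)
```

As for the data itself: analogRead() gives 0..1023, so for 16-bit PCM you center it around zero and scale it up, which is presumably what the template's mapping to the -32../32.. range is doing.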
I'm trying to implement audio output using an 8-bit R2R DAC and an Arduino UNO.
This is my hardware:
http://imgur.com/a/hiUCq
This is the sound I want to hear:
http://vocaroo.com/i/s0VDwBBkQRdc
It's a WAV file: 16 kHz, mono, 8-bit PCM.
I used MATLAB to retrieve the binary data from the WAV file and then wrote this code:
http://csharppad.com/gist/0f39da965e31fc838c6de73ff50e5669
I connected the output to a speaker, and what I hear is a strange sound. Do you think I'm missing something in my hardware/software?
Thanks!
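One thing worth checking: 8-bit PCM in a WAV file is unsigned (0..255, silence at 128), while MATLAB's audioread returns floats in -1..1. If signed or float values reach the DAC unconverted, you get exactly this kind of distortion. A small Python sketch of the mapping (float_to_dac_byte is a hypothetical helper, not from the linked code):

```python
def float_to_dac_byte(x: float) -> int:
    """Map a sample in [-1.0, 1.0] (MATLAB audioread convention)
    to the unsigned 0..255 range an 8-bit R2R DAC expects."""
    x = max(-1.0, min(1.0, x))             # clip to valid range
    return min(255, int((x + 1.0) * 128))  # -1 -> 0, 0 -> 128, +1 -> 255

print(float_to_dac_byte(0.0))  # -> 128 (silence sits at mid-scale)
```

The other usual suspect is timing: at 16 kHz you need to output one sample every 62.5 microseconds, which delay() can't do; you'd need delayMicroseconds() or a timer interrupt to hold the sample rate steady.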
I have access to a stream of PCM audio buffers. To be clear, I do not have access to the audio file itself, only to a stream of 4096-byte chunks of the audio data.
The PCM buffers come in with the following format:
PCM Int 16
Little Endian
Two Channels
Interleaved
To support audio playback on a standard browser I need to convert the audio to the following format:
PCM Float 32
Big Endian
Two channels (at most)
Deinterleaved
The audio is coming from an iOS app, so I have access to Swift and Objective-C (although I am not very comfortable with Objective-C, which makes Apple's Audio Converter Services almost impossible to use, since Swift really doesn't like pointers).
Additionally, playback will occur in a browser, so I could handle the conversion in client-side JavaScript or on the server side. I am proficient enough in the following server-side languages to do the conversion:
Java (preferred)
PHP
Node.js
Python
If anyone knows a way to do this in any of these languages, please let me know. I have worked on this for long enough that I will probably understand even a very technical description of how to do this.
My current plan is to use bitwise operations to deinterleave the left and right channels, then cast the Int 16 Buffer to a Float 32 Buffer with the Web Audio API. Does this seem like a good plan?
Any help is appreciated, thank you.
My current plan is to use bitwise operations to deinterleave the left and right channels, then cast the Int 16 Buffer to a Float 32 Buffer with the Web Audio API. Does this seem like a good plan?
Yes, that is exactly what you need to do. I do the same thing in my applications, and this method works well; it's really the only approach that makes sense. You don't want to send 32-bit float samples from the server to the client because of the bandwidth cost. Do the conversion client-side.
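For reference, the conversion itself is just a deinterleave plus a divide by 32768. A Python sketch of the same math the client-side JavaScript would perform (convert_chunk is a hypothetical helper):

```python
import struct

def convert_chunk(chunk: bytes):
    """Deinterleave a little-endian int16 stereo chunk and scale each
    sample into the float range [-1.0, 1.0) that Web Audio expects."""
    n = len(chunk) // 2
    samples = struct.unpack("<%dh" % n, chunk)
    left  = [s / 32768.0 for s in samples[0::2]]  # even indices: left channel
    right = [s / 32768.0 for s in samples[1::2]]  # odd indices: right channel
    return left, right

# Two stereo frames: (L=16384, R=-32768), (L=0, R=32767)
raw = struct.pack("<4h", 16384, -32768, 0, 32767)
left, right = convert_chunk(raw)
print(left)   # [0.5, 0.0]
print(right)  # [-1.0, 0.999969482421875]
```

In the browser, the equivalent is reading each chunk through a DataView (little-endian int16), filling two Float32Arrays, and handing those to the Web Audio API as the channel data of an AudioBuffer.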
I receive data from a Kinect v2, which is (I believe; information is hard to find) 16 kHz mono audio in 32-bit floating-point PCM. The data arrives in up to 4 "SubFrames", which contain 256 samples each.
When I send this data to lame.exe with -r -s 16 --bitwidth 32 -m m, I get an output containing gaps (supposedly where the second channel should be). However, these command-line switches should take stereo input and downmix it to mono.
I've also tried importing the raw data into Audacity, but I still can't figure out the correct way to get continuous audio out of it.
EDIT: I can get continuous audio when I only save the first SubFrame. The audio still doesn't sound right though.
In the end I went with Ogg Vorbis, a free format, so no problems there either. I use the following command-line switches for oggenc2.exe:
oggenc2.exe --raw-format=3 --raw-chan=1 --raw-rate=16000 - --output=[filename]
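For reference, the raw stream those switches describe is just the float32 samples back to back: one channel, 16000 samples per second. A Python sketch of concatenating the subframes before piping them to oggenc2 (append_subframes and the subframe layout are assumptions based on the question):

```python
import struct

SAMPLES_PER_SUBFRAME = 256  # per the Kinect v2 subframe size in the question

def append_subframes(subframes, out):
    """Append Kinect audio subframes (256 float32 samples each) to a raw
    little-endian stream suitable for
    `oggenc2 --raw-format=3 --raw-chan=1 --raw-rate=16000`."""
    for sf in subframes:
        assert len(sf) == SAMPLES_PER_SUBFRAME
        out.write(struct.pack("<%df" % len(sf), *sf))

# Each subframe contributes 256 samples * 4 bytes = 1024 bytes of raw audio;
# writing every subframe of every frame in order keeps the audio continuous.
```

Writing only the first subframe per frame (as in the edit above) skips up to three quarters of the samples, which is why the audio stays continuous but sounds wrong.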
I might be asking the wrong question, but my knowledge in this area is very limited.
I'm using acmStreamConvert to convert PCM to GSM (6.10).
Audio Format: 8khz, 16-bit, mono
For the PCM buffer size I'm using 640 bytes (320 samples). For GSM buffer I'm using 65 bytes. My understanding is that GSM "always" converts 320 samples to 65 bytes.
The reason I ask "block or stream" is that I'm wondering whether I can safely convert multiple audio streams (in real time) using the same acmStreamConvert handle. I see the function has the flags ACM_STREAMCONVERTF_START, ACM_STREAMCONVERTF_END, and ACM_STREAMCONVERTF_BLOCKALIGN, but is the start/end sequence required for GSM? I understand it might be required for formats that use headers/tails, but I'm hoping it isn't required for the GSM format.
I'm working on a group VoIP client; each client sends GSM-format audio, which then needs to be converted to PCM before playback. I'm hoping I don't need one ACM handle per client.
It's stream-based, or at least the ACM API's use of it is. Using the same ACM objects/handles for multiple streams will produce undesired results. I suspect this also means it doesn't handle lost packets as well as other codecs might (I haven't confirmed that part yet).
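For reference, the block arithmetic in the question works out as follows; a small Python sketch of the sizes involved (the constants are the 320-samples-to-65-bytes GSM 6.10 framing described above):

```python
SAMPLE_RATE = 8000           # 8 kHz mono, 16-bit PCM
PCM_SAMPLES_PER_BLOCK = 320  # one GSM 6.10 block
PCM_BYTES_PER_BLOCK = PCM_SAMPLES_PER_BLOCK * 2  # 640 bytes of 16-bit PCM
GSM_BYTES_PER_BLOCK = 65     # fixed compressed block size

# One block covers 320 / 8000 s = 40 ms of audio.
frame_ms = 1000 * PCM_SAMPLES_PER_BLOCK // SAMPLE_RATE

# 25 blocks per second * 65 bytes = compressed rate per client stream.
gsm_bytes_per_second = GSM_BYTES_PER_BLOCK * (SAMPLE_RATE // PCM_SAMPLES_PER_BLOCK)

print(frame_ms)              # -> 40
print(gsm_bytes_per_second)  # -> 1625
```

Since the codec is stateful across blocks, each client's 40 ms blocks must go through that client's own converter handle; mixing streams through one handle corrupts the decoder state.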