I'm trying to extract audio from a telephone session captured with Wireshark. The capture was sent to us by the telephone provider for debugging/analysis. I have 3 files: signalling, and two files with UDP data, one for each direction. After merging two of these files (one direction plus signalling), Wireshark provides RTP stream analysis. What I observe (and the same for a second session capture) is that Wireshark isn't able to export the RTP stream audio (Payload type: ITU-T G.711 PCMA (8)) for one direction. This happens to be the RTP stream containing "RFC 2833 RTP events" (Payload type: telephone-event (106)). These events apparently carry DTMF tones out-of-band; for each DTMF tone there is a run of 7 consecutive RTP events of this type. Wireshark produces an 8 GB *.au file for an audio stream of less than two minutes. For the opposite-direction stream I get an audio file that is 2 MB in size.
I have to admit that this is just guesswork: I'm connecting the error with a feature I happen to see. I'm a bit confused that Wireshark obviously knows about these events but fails to save the corresponding audio stream. Do I maybe need some plugin for that?
I tried to search the web for this issue but without success.
This question was previously asked on Network Engineering but turned out to be off-topic there.
You can filter out the DTMF events (rtp.p_type != 106) from the Wireshark capture (pcap) and then save only the G.711 data in a separate file.
Then do the RTP analysis and save the audio payload in .au/.raw file format.
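If you want to do the same split programmatically (e.g. on raw packets dumped from the capture), the payload type lives in the low 7 bits of the second RTP header byte (RFC 3550). A minimal sketch, assuming raw RTP packets as byte strings; the function names are illustrative, and 106 is the dynamic payload type negotiated for this particular session:

```python
import struct

def rtp_payload_type(packet: bytes) -> int:
    """Return the payload type from a raw RTP packet (RFC 3550).

    The PT field is the low 7 bits of the second header byte.
    """
    if len(packet) < 12:
        raise ValueError("RTP fixed header is at least 12 bytes")
    return packet[1] & 0x7F

def is_dtmf_event(packet: bytes, dtmf_pt: int = 106) -> bool:
    # 106 is a dynamic payload type; the actual value for
    # telephone-event is negotiated per-session in the SDP.
    return rtp_payload_type(packet) == dtmf_pt

# Minimal RTP headers: V=2, PT as given, seq, timestamp, SSRC
pcma = struct.pack("!BBHII", 0x80, 8, 1, 0, 0)      # G.711 PCMA
event = struct.pack("!BBHII", 0x80, 106, 2, 0, 0)   # telephone-event
print(is_dtmf_event(pcma))   # False
print(is_dtmf_event(event))  # True
```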
Short story:
If I want to receive a Shoutcast-compatible audio stream, process it in my application, and then send it on, how do I do that properly using an mp3 encoder/decoder library? Pseudocode, or better, LAME-specific code, would be highly appreciated.
Long story:
The more specific questions that bother me were prompted by an article about mp3, which says:
Generally, frames are independent items. Each frame has its own header
and audio informations. There is no file header. Therefore, you can
cut any part of MPEG file and play it correctly (this should be done
on frame boundaries but most applications will handle incorrect
headers). For Layer III, this is not 100% correct. Due to internal
data organization in MPEG version 1 Layer III files, frames are often
dependent of each other and they cannot be cut off just like that.
This made me wonder, how Shoutcast servers and clients deal with frame headers and frame dependencies.
Do I have to encode to constant bitrate (CBR) only, if I want to achieve maximum compatibility with the most of Shoutcast players out there?
Is the mp3 frame header used at all, or is the stream format deduced from a Shoutcast-protocol-specific HTTP header?
Does the Shoutcast protocol guarantee (or is it common good practice) to start serving the mp3 stream on a frame boundary and to keep responding with chunks cut at frame boundaries? And what is the minimum or recommended size of an mp3 frame for streaming live audio?
How does Shoutcast deal with frame dependencies - does it do something special with the mp3 encoding to ensure that the served stream has no frames that depend on previous frames (if that is even possible)? Or does it ignore these dependencies on the server/client side, accepting reduced audio quality or even artifacts?
SHOUTcast servers do not know or care about the data being passed through them. They send it as-is. You can actually send arbitrary data through a SHOUTcast server, and receive it. SHOUTcast will segment the media data wherever the buffer size falls.
It's up to the client to re-sync to the data. It does this by locating the frame header, then it begins decoding. Once the codec has enough frames to reliably play back audio, it will begin outputting raw PCM. It's up to the codec to decide when it's safe to start playback. Since the codec knows what it's doing in terms of decoding the media, it knows when it has sufficient data (including bit reservoirs) to begin without artifacts. It's also worth noting that the bit reservoir cannot be carried on too far, so it takes only a few frames at worst to handle it.
This is one of the reasons it's important to have a sizable buffer server-side, to flush to the clients as fast as possible on connect. If playback is to start quickly, the codec needs more data than the current frame to begin.
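The re-sync step described above amounts to searching the byte stream for the 11-bit MPEG sync word; a real decoder also validates the remaining header fields (version, layer, bitrate) to reject false positives. A minimal illustrative sketch:

```python
def find_frame_sync(data: bytes, start: int = 0) -> int:
    """Return the offset of the next possible MPEG audio frame sync,
    or -1 if none is found.

    An MPEG audio frame header begins with 11 set bits: an 0xFF byte
    followed by a byte whose top three bits are set (0xE0 mask).
    """
    for i in range(start, len(data) - 1):
        if data[i] == 0xFF and (data[i + 1] & 0xE0) == 0xE0:
            return i
    return -1

# Two junk bytes, then the start of a plausible MPEG-1 Layer III header
buf = b"\x12\x34" + b"\xFF\xFB\x90\x00" + b"\x00" * 4
print(find_frame_sync(buf))  # 2
```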
I have a text file containing the payload of RTP packets (in hex, coded with GSM/Opus/Speex) belonging to a VoIP conversation. Does anyone know how to convert this file into a .wav audio file?
I'm using windows.
Thanks
.wav is just a file container which can hold any of a number of codec formats and lets the player recognize the codec inside. Refer to Wiki: WAV and, for more technical details, WaveFormat. It just wraps the raw codec content. If you have experience with C, there is open-source code available to convert each codec to PCM (PCM being the raw audio data, in 16-bit format).
I can suggest a solution, though I don't know whether it will meet your requirement:
Install the latest Wireshark.
Using Wireshark, capture the RTP packets.
Select a UDP packet, right-click, and choose the Decode As option.
Select the Transport tab and choose the RTP protocol.
Now you can see the RTP packets with the right codec.
Go to Telephony -> RTP -> Stream Analysis and save the RTP payload as .raw.
At this stage the codec data is available in .raw file format.
Open-source tools such as SoX, FFmpeg, etc. are available.
With these you can convert .raw to .wav format.
After that you can play it in VLC (PCM, GSM, ADPCM, A-law, u-law) or any other player that supports the codec (e.g. AMR).
You may not find Speex or G.729 support out of the box; G.729 in particular is a licensed codec.
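For G.711 specifically, the decode-and-wrap step is simple enough to sketch in pure Python instead of using external tools (A-law assumed; the decode logic follows the classic public-domain g711.c, and the function names are illustrative):

```python
import struct
import wave

def alaw_to_linear(a: int) -> int:
    """Decode one 8-bit G.711 A-law sample to 16-bit linear PCM."""
    a ^= 0x55                      # undo the even-bit inversion
    t = (a & 0x0F) << 4            # mantissa
    seg = (a & 0x70) >> 4          # segment (exponent)
    if seg == 0:
        t += 8
    elif seg == 1:
        t += 0x108
    else:
        t = (t + 0x108) << (seg - 1)
    return t if (a & 0x80) else -t  # sign bit set means positive

def raw_alaw_to_wav(raw: bytes, path: str, rate: int = 8000) -> None:
    """Wrap raw A-law payload bytes in a playable 16-bit mono WAV file."""
    pcm = b"".join(struct.pack("<h", alaw_to_linear(b)) for b in raw)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(rate)   # G.711 runs at 8 kHz
        w.writeframes(pcm)

# 0xD5 is the A-law code nearest zero: one second of near-silence
raw_alaw_to_wav(b"\xD5" * 8000, "out.wav")
```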
I asked earlier about H264 at RTP H.264 Packet Depacketizer
My question now is about the audio packets.
I noticed via the RTP packets that audio frames like AAC, G.711, G.726 and others all have the Marker Bit set.
I think the frames are independent. Am I right?
My question is: audio frames are small, but I know I can have more than one frame per RTP packet. Regardless of how many frames there are, are they complete? Or can a frame be fragmented across RTP packets?
The difference between audio and video is that audio is typically encoded either as individual samples or in certain [small] frames without reference to previous data. Additionally, the amount of data is small. So audio does not typically need complicated fragmentation to be transmitted over RTP. However, for any payload type you should again refer to the RFC that describes the details:
AAC - RTP Payload Format for MPEG-4 Audio/Visual Streams
G.711 - RTP Payload Format for ITU-T Recommendation G.711.1
G.726 - RTP Profile for Audio and Video Conferences with Minimal Control
Other
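For frame-oriented codecs, the "complete frames" question comes down to whether the payload length is a whole multiple of the codec's frame size; a minimal sketch (the full-rate GSM 06.10 frame size is taken from RFC 3551; the function name is illustrative):

```python
# Sample-oriented codecs like G.711 carry one sample per payload byte,
# so any payload length is "complete". Frame-oriented codecs must carry
# whole frames; RFC 3551 requires an integral number of them per packet.
GSM_FRAME_BYTES = 33  # full-rate GSM 06.10 frame size per RFC 3551

def complete_frames(payload_len: int, frame_bytes: int):
    """Return (number of whole frames, True if nothing is left over)."""
    frames, remainder = divmod(payload_len, frame_bytes)
    return frames, remainder == 0

print(complete_frames(66, GSM_FRAME_BYTES))  # (2, True)
```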
I might be asking the wrong question, but my knowledge in this area is very limited.
I'm using acmStreamConvert to convert PCM to GSM (6.10).
Audio Format: 8khz, 16-bit, mono
For the PCM buffer size I'm using 640 bytes (320 samples). For GSM buffer I'm using 65 bytes. My understanding is that GSM "always" converts 320 samples to 65 bytes.
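That understanding matches the usual MS-GSM framing used by the ACM "GSM 6.10" codec: two 160-sample GSM frames packed into one 65-byte block, covering 320 samples. A quick sanity check of the numbers:

```python
# MS-GSM ("GSM 6.10" in ACM) packs two 160-sample GSM frames into one
# 65-byte block, so one block always covers 320 samples.
SAMPLE_RATE = 8000          # Hz, mono
SAMPLES_PER_BLOCK = 320     # i.e. 640 bytes of 16-bit PCM input
BYTES_PER_BLOCK = 65        # compressed output per block

block_ms = SAMPLES_PER_BLOCK / SAMPLE_RATE * 1000
bitrate = BYTES_PER_BLOCK * 8 / (SAMPLES_PER_BLOCK / SAMPLE_RATE)
print(block_ms)   # → 40.0 (ms of audio per block)
print(bitrate)    # → 13000.0 (bits/s, the GSM full-rate bitrate)
```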
The reason I ask "block or stream" is that I'm wondering whether I can safely convert multiple audio streams (in real time) using the same acmStreamConvert handle. I see the function has flags ACM_STREAMCONVERTF_START, ACM_STREAMCONVERTF_END and ACM_STREAMCONVERTF_BLOCKALIGN, but am I required to use this start/end sequence for GSM? I understand that might be required for formats that use headers/trailers, but I'm hoping it isn't required for the GSM format.
I'm working on a group VOIP client, and each client sends GSM format, and then needs to convert to PCM before playing. I'm hoping I don't need one ACM handle per client.
Stream-based, or at least the ACM API's usage of it is. Trying to use the same ACM objects/handles for multiple streams will produce undesired results. I suspect this also means it doesn't handle lost packets as well as other codecs might (I haven't confirmed that part yet).
I have several RTP streams coming in from the network, and since RTP can only handle one stream in each direction, I need to be able to merge a couple of them to send back to another client (which could be one that is already sending an RTP stream, or not... that part isn't important).
My guess is that there is some algorithm for mixing audio bytes.
RTP Stream 1 ---------\
                       \_____ (1 MUXED 2) RTP Stream Out
                       /
RTP Stream 2 ---------/
There is an IETF draft for RTP stream muxing which might help you: http://www.cs.columbia.edu/~hgs/rtp/drafts/draft-tanigawa-rtp-multiplex-01.txt
If you want to use only one stream, you could perhaps send the data from multiple streams as different channels; this link gives an overview of how audio channels are multiplexed in WAV files. You could adopt a similar strategy.
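If what you actually want is a single mixed stream rather than multiplexed channels, mixing linear PCM is usually just sample-wise addition with clipping (G.711 payloads would first need decoding to linear PCM). A minimal sketch; the function name is illustrative:

```python
import struct

def mix_pcm16(a: bytes, b: bytes) -> bytes:
    """Mix two 16-bit little-endian mono PCM buffers by sample-wise
    addition, clipping each sum to the signed 16-bit range."""
    n = min(len(a), len(b)) // 2          # number of common samples
    fmt = "<%dh" % n
    sa = struct.unpack(fmt, a[: n * 2])
    sb = struct.unpack(fmt, b[: n * 2])
    mixed = (max(-32768, min(32767, x + y)) for x, y in zip(sa, sb))
    return struct.pack(fmt, *mixed)

one = struct.pack("<4h", 1000, -2000, 30000, -30000)
two = struct.pack("<4h", 500, -500, 10000, -10000)
print(struct.unpack("<4h", mix_pcm16(one, two)))
# (1500, -2500, 32767, -32768)  — note the clipped last two samples
```

Plain summing can clip audibly with many loud participants; conference bridges often attenuate each input or use a soft limiter instead, but the clamped sum above is the basic building block.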
I think you are talking about a VoIP conference.
The mediastreamer2 library, I believe, supports a conference filter.