"RFC 2833 RTP Event" Consecutive Events and the E "End" Bit - voip

Why do I get a DTMF sound when the E bit is 0 and no sound when it is 1? (RTP packets appear in Wireshark either way.)
Background:
I can send out an RFC 2833 DTMF event as outlined at http://www.ietf.org/rfc/rfc2833.txt,
obtaining the following behaviour when the E bit is NOT set:
If for example keys 7874556332111111145855885#3 are pressed, then ALL events are sent and show up in a program like Wireshark, however only 87456321458585#3 sound.
So the first key (which I figure could be a separate issue) and any repeats of an event (i.e. 11111) are failing to sound.
In Section 3.9, Figure 2 of the document linked above, they give a 911 example where all but the last event have the E bit set.
When I set the E bit to 1 for all numbers, I never get an event to sound.
I have thought of some possible causes but do not know if they are the reason:
1) Figure 2 shows payload types of 96 and 97 being sent. I have not sent these headers. In Section 3.8, codes 96 and 97 are described as "the dynamic payload types 96 and 97 have been assigned for the redundancy mechanism and the telephone event payload respectively".
2) In Section 3.5, under "E:": "A sender MAY delay setting the end bit until retransmitting the last packet for a tone, rather than on its first transmission". Does anyone have an idea of how to actually do this?
3) I have a separate output stream that also plays, so I wonder if it might be interfering with hearing this stream.
4) I have also fiddled around with the timestamp intervals and the RTP marker bit.
Any help is greatly appreciated. Here is a sample Wireshark event capture of the relevant areas:
6590 31.159045000 xx.x.x.xxx --.--.---.-- RTP EVENT Payload type=RTP Event, DTMF Pound # (end)
Real-Time Transport Protocol
Stream setup by SDP (frame 6225)
Setup frame: 6225
Setup Method: SDP
10.. .... = Version: RFC 1889 Version (2)
..0. .... = Padding: False
...0 .... = Extension: False
.... 0000 = Contributing source identifiers count: 0
0... .... = Marker: False
Payload type: telephone-event (101)
Sequence number: 0
Extended sequence number: 65536
Timestamp: 2000
Synchronization Source identifier: 0x15f27104 (368210180)
RFC 2833 RTP Event
Event ID: DTMF Pound # (11)
1... .... = End of Event: True
.0.. .... = Reserved: False
..00 0000 = Volume: 0
Event Duration: 1000
Please note: a volume of zero is the loudest obtainable level, as explained in the ietf.org/rfc/rfc2833.txt specification:
"volume: For DTMF digits and other events representable as tones,
this field describes the power level of the tone, expressed
in dBm0 after dropping the sign. Power levels range from 0 to
-63 dBm0. The range of valid DTMF is from 0 to -36 dBm0 (must
accept); lower than -55 dBm0 must be rejected (TR-TSY-000181,
ITU-T Q.24A). Thus, larger values denote lower volume. This
value is defined only for DTMF digits. For other events, it
is set to zero by the sender and is ignored by the receiver."
The issue is when the "End of Event" bit is switched on.

I recommend you start with RFC 4733, for two reasons:
It obsoletes RFC 2833.
Chapter 5 is a great source for understanding how a DTMF digit is produced.
Here is my understanding of how a DTMF digit should be sent:
A start packet is emitted. It has its M flag set and its E flag cleared. The timestamp for the event is set.
One or more continuation packets are emitted (for as long as the user presses the digit). They have their M and E flags cleared. They use the timestamp defined in the start packet, but their sequence numbers and their durations are incremented (see the RFC for the intervals).
An end packet is sent (when the user stops pressing the digit). It has its M flag cleared and its E flag set.
Why should several packets be sent for one event? Because the network is not always perfect and some loss can occur:
The RFC states (2.5.1.2, "Transmission of Event Packets") that:
For robustness, the sender SHOULD retransmit "state" events periodically.
And (2.5.1.4, "Retransmission of Final Packet") that:
The final packet for each event and for each segment SHOULD be sent a total of three times at the interval used by the source for updates. This ensures that the duration of the event or segment can be recognized correctly even if an instance of the last packet is lost.
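To make that concrete, here is a minimal sketch in Python (my own illustration, not code from the question; the payload packing follows Figure 1 of RFC 4733, while the function names, the volume of 10 and the 400-timestamp-unit update interval are assumptions):

import struct

def telephone_event(event_id, end, volume, duration):
    # Pack the 4-byte telephone-event payload (RFC 4733, Figure 1):
    # event (8 bits) | E (1) | R (1) | volume (6) | duration (16)
    return struct.pack("!BBH",
                       event_id & 0xFF,
                       (0x80 if end else 0x00) | (volume & 0x3F),
                       duration & 0xFFFF)

def dtmf_packets(event_id, updates=4, step=400, volume=10):
    # One digit: start packet (M=1, E=0), continuations (E=0, duration
    # grows by `step`), then the final packet (E=1) sent three times.
    for i in range(updates):
        yield (i == 0), telephone_event(event_id, False, volume, step * (i + 1))
    final = telephone_event(event_id, True, volume, step * updates)
    for _ in range(3):
        yield False, final

The RTP header itself is assumed to be added by the caller: the sequence number increments on every packet, while the timestamp stays frozen at the value set in the start packet.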
As for your problem:
If for example keys 7874556332111111145855885#3 are pressed, then ALL events are sent and show up in a program like Wireshark, however only 87456321458585#3 sound. So the first key (which I figure could be a separate issue) and any repeats of an event (i.e. 11111) are failing to sound.
Without a Wireshark trace it is hard to tell what is going on, but I suspect that the repeating 1 digits are ignored because there is no distinction between successive events; the first 1 digit is recognized and the others are treated as retransmissions of the same event.

I notice your Volume is set to 0; that seems like a likely reason to not be hearing any sound?

Beauty! Thanks a lot, Laurent. I have implemented a working solution based on your recommendations. (Just posting as another answer to get better text formatting; I will give you the bounty.)
To sum up what was wrong:
I needed redundancy of packets.
Setting all E bits to 0 meant that repeated same-key events were ignored.
Setting all E bits to 1 meant that I signalled the end of an event, but never the event itself, hence the silence.
Here is a summarized flow example:
Event_ID M E Timestamp Duration Sequence_number
3 1 0 400 400 1
3 0 0 400 800 2
3 0 0 400 1200 3
3 0 0 400 1600 4
3 0 1 400 1600 5
3 0 1 400 1600 6
3 0 1 400 1600 7
7 1 0 800 400 8
7 0 0 800 800 9
7 0 0 800 1200 10
7 0 0 800 1600 11
7 0 1 800 1600 12
7 0 1 800 1600 13
7 0 1 800 1600 14
*Note: I just looked at the first example in Section 5 of the suggested RFC 4733 and it is great! Much better than the 2833 ones.
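For completeness, here is a little Python sketch (mine, mirroring the table above rather than the actual implementation) that prints this flow. Note the timestamps simply follow the table; in a real sender the next event's timestamp would advance by at least the previous event's duration plus any pause:

def print_flow(digits, step=400, updates=4, retransmits=3):
    seq = 1
    print("Event_ID M E Timestamp Duration Sequence_number")
    for n, event_id in enumerate(digits):
        ts = step * (n + 1)              # 400, 800, ... as in the table
        for i in range(updates):         # start + continuations, E=0
            print(event_id, 1 if i == 0 else 0, 0, ts, step * (i + 1), seq)
            seq += 1
        for _ in range(retransmits):     # final packet repeated, E=1
            print(event_id, 0, 1, ts, step * updates, seq)
            seq += 1

print_flow([3, 7])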

Related

Determining end-to-end delay

Can somebody please help me understand this question:
"(c) A wants to send a 500 byte packet to D through B. B is supposed to follow the store-andforward model, that is, B will receive the whole packet from A and then start transmitting the
packet to D. What is the end-to-end delay seen by the packet?"
A --> B (4 Mbps & 3000 km) and B --> D (10 Mbps & 900 km)
This also assumes all data travels at the speed of light (3 × 10^5 km/s).
I'm just really stuck on this question. I get the calculations for the most part, but I have no idea how to determine any of this.
It sounds like the question aims at helping you understand the transit times of data over links of different speeds and lengths.
For A->B you should calculate how long the packet takes to transmit on a 4-megabit link. You then need to add the physical transit time, using the distance and the speed of light.
I.e., first you need to know how long it takes until the last bit is put on the link, and then how long it takes for that bit to travel to the receiver.
When B has received that last bit, it will forward the packet to D. You therefore need to repeat the calculation for the B->D part.
The sum of the two parts should be your answer.
I won't do the calculations for you, though.
Edit:
Ok, I get the feeling you really tried yourself, so here goes.
Transmission time
A->B:
4 Mbps = 4 000 000 bits/s
500 bytes = 500*8 bits = 4000 bits
Transmission time = Packet size / Bit rate => 4 000 / 4 000 000 => 0.001 s
Distance = 3 000 km
Propagation speed = 300 000 km/s
Propagation time = Distance / propagation speed => 3 000 / 300 000 = 0.01 s
Total time = 0.001 + 0.01 = 0.011 s
Now you do B->D and add the two parts.
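If you want to sanity-check the sum, here is a quick sketch in Python using the figures from the question (the function and variable names are mine):

def hop_delay_s(packet_bits, rate_bps, distance_km, c_km_s=300_000):
    # Store-and-forward delay for one hop: transmission + propagation
    return packet_bits / rate_bps + distance_km / c_km_s

bits = 500 * 8                                # 500 bytes = 4000 bits
a_b = hop_delay_s(bits, 4_000_000, 3_000)     # 0.001 + 0.010  = 0.011  s
b_d = hop_delay_s(bits, 10_000_000, 900)      # 0.0004 + 0.003 = 0.0034 s
print(a_b + b_d)                              # 0.0144 s = 14.4 ms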
Thanks for the help, I was actually able to figure it out. I had to calculate the delay from A -> B, which after the calculations turned out to be 11 ms, then add that to the delay of B -> D, which was 3.4 ms, making the delay from A -> D 14.4 ms. Thanks again for the help.

Does this Bluetooth HCI Event suggest it can exceed the max parameter total length?

The Bluetooth Core Specification v5.1 (Vol 2, Part E, 7.7.19) details that the HCI_Number_Of_Completed_Packets event (p. 1182 of the Core 5.1 spec) contains a Number_of_Handles parameter described as: "The number of Connection_Handles and Num_HCI_Data_Packets parameters pairs contained in this event." This value is described to range from 0 to 255, and the size of these Connection_Handles and Num_HCI_Data_Packets is 2 octets each.
Therefore, in a situation where the Number_of_Handles is 255, this means that this event must contain 510 octets of Connection_Handles and 510 octets of Num_HCI_Data_Packets, for a total of 1021 octets, including the Number_of_Handles. However, an HCI Event packet can only have up to 255 octets of data excluding its header (Vol 2, Part E, 5.4.4).
Is this a mistake in the specified range for Number_of_Handles? Shouldn't it be from 0 to 63 instead, adding up to a maximum of 253 parameter octets for this event?
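The arithmetic behind that suspicion can be written out directly; a small Python sketch (the constant names are mine; the 255-octet cap is the one-octet parameter total length from Vol 2, Part E, 5.4.4):

MAX_EVENT_PARAMS = 255      # HCI event parameter total length fits in one octet
PAIR_OCTETS = 2 + 2         # Connection_Handle + Num_HCI_Data_Packets

def param_octets(num_handles):
    return 1 + num_handles * PAIR_OCTETS   # 1 octet for Number_of_Handles itself

print(param_octets(255))                       # 1021 octets: cannot fit in one event
max_handles = (MAX_EVENT_PARAMS - 1) // PAIR_OCTETS
print(max_handles, param_octets(max_handles))  # 63 handles -> 253 octets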

difference between counting packets and counting the total number of bytes in the packets

I'm reading perfbook. In Chapter 5.2, the book gives some examples of statistical counters. These examples can solve the network-packet counting problem.
Quick Quiz 5.2: Network-packet counting problem. Suppose that you need to collect statistics on the number of networking packets (or total number of bytes) transmitted and/or received. Packets might be transmitted or received by any CPU on the system. Suppose further that this large machine is capable of handling a million packets per second, and that there is a system-monitoring package that reads out the count every five seconds. How would you implement this statistical counter?
There is one Quick Quiz that asks about the difference between counting packets and counting the total number of bytes in the packets.
I can't understand the answer. After reading it, I still don't know the difference.
In the example in the "To see this" paragraph, if the numbers 3 and 5 are changed to 1, what difference does it make?
Please help me understand it.
Quick Quiz 5.26: What fundamental difference is there between counting packets and counting the total number of bytes in the packets, given that the packets vary in size?
Answer: When counting packets, the counter is only incremented by the value one. On the other hand, when counting bytes, the counter might be incremented by largish numbers. Why does this matter? Because in the increment-by-one case, the value returned will be exact in the sense that the counter must necessarily have taken on that value at some point in time, even if it is impossible to say precisely when that point occurred. In contrast, when counting bytes, two different threads might return values that are inconsistent with any global ordering of operations.
To see this, suppose that thread 0 adds the value three to its counter, thread 1 adds the value five to its counter, and threads 2 and 3 sum the counters. If the system is "weakly ordered" or if the compiler uses aggressive optimizations, thread 2 might find the sum to be three and thread 3 might find the sum to be five. The only possible global orders of the sequence of values of the counter are 0,3,8 and 0,5,8, and neither order is consistent with the results obtained.
If you missed this one, you are not alone. Michael Scott used this question to stump Paul E. McKenney during Paul's Ph.D. defense.
I may be wrong, but I presume the idea behind it is the following: suppose there are two separate processes whose counters are collected and summed for a total value. Now suppose some sequences of events occur simultaneously in both processes: for example, a packet of size 10 arrives at the first process at the same time as a packet of size 20 arrives at the second, and after some period of time a packet of size 30 arrives at the first process at the same time as a packet of size 60 arrives at the second. So here is the sequence of events:
Time point#1 Time point#2
Process1: 10 30
Process2: 20 60
Now let's build a vector of possible total counter states after time points #1 and #2 for a weakly ordered system, considering that the previous total value was 0:
Time point#1
0 + 10 (process 1 wins) = 10
0 + 20 (process 2 wins) = 20
0 + 10 + 20 = 30
Time point#2
10 + 30 = 40 (process 1 wins)
10 + 60 = 70 (process 2 wins)
20 + 30 = 50 (process 1 wins)
20 + 60 = 80 (process 2 wins)
30 + 30 = 60 (process 1 wins)
30 + 60 = 90 (process 2 wins)
30 + 30 + 60 = 120
Now, presuming there can be some period of time between time point #1 and time point #2, let's assess which values reflect a real state of the system. All states after time point #1 can be treated as valid, since there was some precise moment in time when the total received size was 10, 20 or 30 (we ignore the fact that the final value may not be the current one; at least it contains a value that was actual at some moment of the system's operation). For the possible states after time point #2 the picture is slightly different: the system has never been in states 40, 50, 70 or 80, yet we risk getting these values from the second collection.
Now let's look at the situation from the number-of-packets perspective. Our matrix of events is:
Time point#1 Time point#2
Process1: 1 1
Process2: 1 1
The possible total states:
Time point#1
0 + 1 (process 1 wins) = 1
0 + 1 (process 2 wins) = 1
0 + 1 + 1 = 2
Time point#2
1 + 1 (process 1 wins) = 2
1 + 1 (process 2 wins) = 2
2 + 1 (process 1 wins) = 3
2 + 1 (process 2 wins) = 3
2 + 2 = 4
In that case all possible values (1, 2, 3, 4) reflect a state in which the system definitely was at some point in time.
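To make that reading testable, here is a toy Python sketch (my own model, not from perfbook). It enumerates every total a reader could obtain if it samples each per-process counter independently after some prefix of that counter's increments; the exact values differ slightly from my enumeration above, but the effect is the same:

from itertools import product

def observable_totals(per_process_increments):
    # Every sum a reader can see when each per-process counter is
    # sampled after 0, 1, ... of its increments, independently.
    prefixes = [[sum(incs[:k]) for k in range(len(incs) + 1)]
                for incs in per_process_increments]
    return sorted({sum(combo) for combo in product(*prefixes)})

print(observable_totals([[10, 30], [20, 60]]))
# [0, 10, 20, 30, 40, 60, 80, 90, 120] -- but the real global states are
# only 0, then 10 or 20, then 30, then 60 or 90, then 120: 40 and 80 never happened
print(observable_totals([[1, 1], [1, 1]]))
# [0, 1, 2, 3, 4] -- every value is a state the system actually passed through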

Bluetooth heartrate monitor byte decoding

Problem :
I am having trouble understanding the returned data of the BLE Heart Rate characteristic (service 180d, characteristic 2a37).
According to the specification there will be either 6 or 7 bytes of data (when base64-decoded); I fully understand how to deal with it when this is the case.
But sometimes it returns not 6 or 7 bytes but 8, and more rarely 4, and I have no idea why there are more/fewer bytes, what the meaning of the added bytes is, or which bytes are left out.
I could skip all the cases where there are not 6 or 7 bytes, but I want to fully understand this.
I am certain that the conversion from base64 to byte array is done correctly: I made a function for it and checked it using a manual base64 decode combined with charCodeAt(index), and truly manually checked it using good ol' pencil, paper and brain (not necessarily in that order).
TL;DR:
BLE Heart Rate (180d, 2a37) sometimes does not return the expected number of bytes (4 and 8 when it should be either 6 or 7).
What exactly happened, and why?
Example :
// Example results in byte-array's
["00010110", "01110111", "00000100", "00000010"] // unexpected 4 byte
["00010110", "01111000", "11111111", "00000001", "11111111", "00000001", "00001100", "00000001"] // unexpected 8 byte
["00010110", "01110111", "00001000", "00000010", "00001000", "00000010"] // normal 6 byte
// Example results in hex-array's (easier to read on small screens)
["0x16","0x77","0x04","0x02"] // unexpected 4 byte
["0x16","0x78","0xFF","0x01","0xFF","0x01","0x0C","0x01"] // unexpected 8 byte
["0x16","0x77","0x08","0x02","0x08","0x02"] // normal 6 byte
Byte Explanation :
Flags. The first (rightmost) bit is set if the heart rate is in 16-bit format (I only got 8-bit).
Heart rate; if the heart rate is in 16-bit format there will be 2 bytes here.
energy expended
energy expended
rr interval
rr interval
Energy expenditure is optional; check bit 3 of the flags. In your sample data it is not present. There is a variable number of RR intervals: with 4 bytes you have just 1, with 6 bytes you have 2, and with 8 bytes you have 3; you could in theory get 10 bytes and 4 intervals.
You should decode the bytes using the flags; then, if RRs are present, the number of bytes left divided by 2 is the number of RRs you have.
See the XML-Definition file for more details.
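To illustrate, here is a minimal decoder sketch in Python (my own, hand-rolled from the characteristic definition; the function name and the choice to return RRs in seconds are assumptions):

def parse_heart_rate_measurement(data):
    # Flags, heart rate (8 or 16 bit), optional energy expended,
    # then zero or more 16-bit RR intervals; all little-endian.
    flags, i = data[0], 1
    if flags & 0x01:                        # bit 0: heart rate is 16-bit
        hr = data[i] | data[i + 1] << 8
        i += 2
    else:
        hr, i = data[i], i + 1
    energy = None
    if flags & 0x08:                        # bit 3: energy expended present
        energy = data[i] | data[i + 1] << 8
        i += 2
    rr = []
    while flags & 0x10 and i + 1 < len(data):   # bit 4: RR intervals follow
        rr.append((data[i] | data[i + 1] << 8) / 1024)  # units of 1/1024 s
        i += 2
    return hr, energy, rr

print(parse_heart_rate_measurement(bytes([0x16, 0x77, 0x04, 0x02])))
# (119, None, [0.50390625]) -- the "unexpected" 4-byte case is one RR interval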

Z80 Software Delay

I am trying to create a software delay. Here is a sample program of what I am doing:
Address Data Opcode Comment
1800 06 LD, B Load register B with fixed value
1801 “ “ Fixed value
1802 05 DEC, B Decrement value in register B
1803 C2 JP cc Jump to 1802 if value is not 0
1804 02 - Address XX
1805 18 - Address XX
My question is how can I calculate the required fixed value to load into register B so that the process of decrementing the value until 0 takes 2 seconds?
In my manual the time given to run the instructions is based on a 4MHz CPU, but the Z80 CPU I am using has a speed of 1.8MHz. Any idea how I can calculate this? Thanks. P.S. Here are the decrement (DEC) and jump (JP cc) instructions from the manual:
Instruction M Cycles T states 4 MHz E.t
DEC r 1 4 1.00
JP cc 3 10 (4,3,3) 2.50
If by 1.8MHz you mean exactly 1,800,000 Hz, then to get a 2 second delay you'd need to delay for 3,600,000 T-states. Your current delay loop takes 14 T-states per iteration, which means that your initial value for B would have to be 3600000/14 == 257143, which obviously won't fit in one byte.
The greatest number of iterations that you could specify with an 8-bit register is 256, and to reach 3,600,000 T-states with 256 iterations, each iteration would have to take 14,062 T-states. That's one big loop body.
If we use a 16-bit counter things start getting a bit more manageable. At 65,536 iterations we only need 55 T-states per iteration to reach a total of 3,600,000 T-states. Below is an example of what that could look like:
; Clobbers A, B and C
ld bc,#0
1$:
bit #0,a ; 8
bit #0,a ; 8
bit #0,a ; 8
and a,#255 ; 7
dec bc ; 6
ld a,c ; 4
or a,b ; 4
jp nz,1$ ; 10, total = 55 states/iteration
; 65536 iterations * 55 states = 3604480 states = 2.00248 seconds
I'm a bit of an optimization freak, so here is my go using the syntax with which I am most familiar (from the TASM assembler and similar):
Instruction opcode timing
ld bc,$EE9D ;01EE9D 10cc
ex (sp),hl ;E3 19*(256C+B)
ex (sp),hl ;E3 19*(256C+B)
ex (sp),hl ;E3 19*(256C+B)
ex (sp),hl ;E3 19*(256C+B)
djnz $-4 ;10FA 13cc*(256C+B) - 5*C
dec c ;0D 4*C
jr nz,$-7 ;20F7 12*C-5
This code is 12 bytes and 3600002 clock cycles.
EDIT: It seems like part of my answer is gone! To answer your question better, your Z80 can process 1800000 clock cycles in one second, so you need twice that (3600000). If you add up the timings given in my code, you get:
=10+(256C+B)(19*4+13)-5C+4C+12C-5
=5+(256C+B)89+11C
=5+22795C+89B
So the code timing is largely dependent on C. 3600000/22795 is about 157, so we initialize C with 157 (0x9D). Plugging this back in, we get B to be roughly 237.9775, so we round that up to 238 (0xEE). Plugging these in gets our final timing of 3600002cc or roughly 2.000001 seconds. This assumes that the processor is running at exactly 1.8MHz which is very unlikely.
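To re-derive the constants (or retarget a different clock), here is a quick Python sketch of that solve, using the 5 + 22795*C + 89*B total from above:

def solve_bc(target_tstates):
    # Pick C first, then B, so 5 + 22795*C + 89*B lands on the target.
    c = target_tstates // 22795
    b = round((target_tstates - 5 - 22795 * c) / 89)
    return b, c, 5 + 22795 * c + 89 * b

b, c, total = solve_bc(2 * 1_800_000)    # 2 s at 1.8 MHz = 3,600,000 T-states
print(f"B={b:#x} C={c:#x} total={total}cc ({total / 1_800_000:.6f}s)")
# B=0xee C=0x9d total=3600002cc (2.000001s)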
Also, if you can use interrupts, figure out roughly how many times the interrupt fires per second and use a loop like halt \ djnz $-1. This saves a lot more in terms of power consumption.
