Problem :
I am having trouble understanding the returned data of the BLE Heart Rate Characteristic (service 180d, characteristic 2a37).
According to the specification there will be either 6 or 7 bytes of data (when base64-decoded), and I fully understand how to deal with it when this is the case.
But sometimes it returns not 6 or 7 bytes but 8, and more rarely 4. I have no idea why there are more or fewer bytes, what the added bytes mean, or which bytes are left out.
I could skip all the cases where there are not 6 or 7 bytes, but I want to fully understand this.
I am certain that the conversion from the base64-encoded string to a byte array is done correctly; I wrote a function for it and checked it against a manual base64 decode combined with charCodeAt(index), and truly manually checked it using good ol' pencil, paper and brain (not necessarily in that order).
TL;DR :
BLE Heart Rate (180d, 2a37) sometimes does not return the expected number of bytes (4 or 8 when it should be either 6 or 7).
What exactly happened and why?
Example :
// Example results as byte arrays
["00010110", "01110111", "00000100", "00000010"] // unexpected 4 byte
["00010110", "01111000", "11111111", "00000001", "11111111", "00000001", "00001100", "00000001"] // unexpected 8 byte
["00010110", "01110111", "00001000", "00000010", "00001000", "00000010"] // normal 6 byte
// Example results as hex arrays (easier to read on small screens)
["0x16","0x77","0x04","0x02"] // unexpected 4 byte
["0x16","0x78","0xFF","0x01","0xFF","0x01","0x0C","0x01"] // unexpected 8 byte
["0x16","0x77","0x08","0x02","0x08","0x02"] // normal 6 byte
Byte Explanation :
Byte 1: Flags. The first (rightmost, least significant) bit is set if the heart rate is in 16-bit format (I only got 8-bit).
Byte 2: Heart rate; if the heart rate is in 16-bit format there will be 2 bytes here.
Byte 3: Energy expended
Byte 4: Energy expended
Byte 5: RR interval
Byte 6: RR interval
Energy expenditure is optional; check bit 3 of the flags, and in your sample data it is not present. There is a variable number of RR intervals: with 4 bytes you have just 1, with 6 bytes you have 2, and with 8 bytes you have 3; in theory you could also get 10 bytes carrying 4.
You should decode the bytes using the flags; then, if RR intervals are present, the number of bytes left divided by 2 is the number of RR intervals you have.
See the XML-Definition file for more details.
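To illustrate, here is a minimal sketch of that flag-driven decoding (in Python rather than the JavaScript of the question, since the bit logic is identical; parse_hrm is my own helper name, and the flag positions are the ones described above):

def parse_hrm(data):
    # data is a list of byte values, e.g. [0x16, 0x77, 0x08, 0x02, 0x08, 0x02]
    flags = data[0]
    i = 1
    if flags & 0x01:          # bit 0: heart rate is uint16
        hr = data[i] | (data[i + 1] << 8)
        i += 2
    else:                     # otherwise uint8
        hr = data[i]
        i += 1
    energy = None
    if flags & 0x08:          # bit 3: Energy Expended field present
        energy = data[i] | (data[i + 1] << 8)
        i += 2
    rr = []
    if flags & 0x10:          # bit 4: RR-intervals present, 2 bytes each
        while i + 1 < len(data):
            rr.append(data[i] | (data[i + 1] << 8))
            i += 2
    return hr, energy, rr

# The "unexpected" 8-byte sample is simply flags + uint8 heart rate + three RR-intervals:
print(parse_hrm([0x16, 0x78, 0xFF, 0x01, 0xFF, 0x01, 0x0C, 0x01]))
# -> (120, None, [511, 511, 268])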
Related
Given a random integer, for example 19357982357627685397198, how can I compress this number into a string of text that has fewer characters?
The string of text must only contain numbers or alphabetical characters, both uppercase and lowercase.
I've tried Base64 and Huffman coding, which claim to compress, but neither makes the string shorter when typed on a keyboard.
I also tried to make some kind of algorithm that divides the integer by the numbers 2, 3, ..., 10 and checks whether the last digit of the result is the number it was divided by (looking for 0 in the case of division by 10), so that when decoding you would just multiply the number by its last digit. But that does not work, because in some cases you can't divide by anything; the number would stay the same, and decoding would just multiply it into a larger number than you started with.
I also tried to divide the integer into blocks of 2 digits, starting from the left, and assign a letter to each (a=1, b=2, o=15), rolling back to a after z. This did not work because, when decoding, there is no way to know how many times the value rolled past z, so you end up with a much smaller number than you started with.
I also tried some other common encoding schemes, for example Base32, Ascii85, the Bifid cipher, Baudot code, and some others I can't remember.
It seems like an unsolvable problem, but consider: each decimal digit can take 10 different values, while a letter of the alphabet can take 26. That means you can store more data in 5 alphabetical characters than in a 5-digit integer. So mathematically it is possible to store more data per character in a string of letters than in decimal digits; I just can't find anyone who has done it.
You switch from base 10 to, e.g., base 62 by repeatedly dividing by 62 and recording the remainder from each step, like this:
Converting 6846532136 to base62:
Operation Result Remainder
6846532136 / 62 110427937 42
110427937 / 62 1781095 47
1781095 / 62 28727 21
28727 / 62 463 21
463 / 62 7 29
7 / 62 0 7
Then you use the remainders as indexes into a base62 alphabet of your choice, e.g.:
0 1 2 3 4 5 6
01234567890123456789012345678901234567890123456789012345678901
ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
Reading the remainders from the last division back to the first gives: H (7) d (29) V (21) V (21) v (47) q (42) = HdVVvq
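For reference, here is a minimal sketch of that loop (in Python, whose integers are arbitrary precision, so even the 23-digit number from the question works unchanged; the alphabet is the one shown above):

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"

def to_base62(n):
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n > 0:
        n, remainder = divmod(n, 62)
        digits.append(ALPHABET[remainder])
    return "".join(reversed(digits))    # the last remainder is the most significant digit

def from_base62(s):
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n

print(to_base62(6846532136))               # HdVVvq, matching the worked example above
print(from_base62("HdVVvq"))               # 6846532136
print(to_base62(19357982357627685397198))  # the 23-digit number from the question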
------
This is called base10 to base62 conversion; there are a bunch of solutions and code samples on the internet.
Here is my favorite version: Base 62 conversion
I am trying to find the capacity of a list with a function. But one step involves subtracting 64 (on my machine) from the size of the list, and the result also has to be divided by 8 to get the capacity. What does this capacity value mean?
I tried reading the Python docs for the sys.getsizeof() method, but they didn't resolve my doubts.
import sys

def disp(l1):
    print("Capacity", (sys.getsizeof(l1) - 64) // 8)  # What does this line mean, especially the //8 part?
    print("Length", len(l1))

mariya_list = []
mariya_list.append("Sugarisverysweetand it can be used for cooking sweets and also used in beverages ")
mariya_list.append("Choco")
mariya_list.append("bike")
disp(mariya_list)
print(mariya_list)
mariya_list.append("lemon")
print(mariya_list)
disp(mariya_list)
mariya_list.insert(1, "leomon Tea")
print(mariya_list)
disp(mariya_list)
Output:
Capacity 4
Length 1
['Choco']
['Choco', 'lemon']
Capacity 4
Length 2
['Choco', 'leomon Tea', 'lemon']
Capacity 4
Length 3
This is the output. Here I am unable to understand what Capacity 4 means. Why does it keep printing the same value 4 even after subsequent additions of elements?
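A quick way to see what that capacity number tracks is to watch it change as the list grows. Here is a small experiment (a sketch assuming CPython on a 64-bit build, where the 64 subtracted above is the empty list's fixed header size and 8 is the size of one element pointer; the header is 56 bytes on some Python versions, hence the sys.getsizeof([]) baseline below):

import sys

l = []
last = None
for n in range(20):
    # allocated slots = (total size - empty-list header) / pointer size
    cap = (sys.getsizeof(l) - sys.getsizeof([])) // 8
    if cap != last:
        print("length", len(l), "-> capacity", cap)
        last = cap
    l.append(n)

# Typical CPython output: the capacity jumps 0 -> 4 -> 8 -> 16 -> ...
# (the exact growth steps vary between CPython versions), which is why
# Capacity stays at 4 while the length is 1, 2 and 3.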
I came across PermMask values while using JSLink in SharePoint Online to check a user's permissions via ctx.CurrentItem.PermMask.
The values for the different permissions don't match Microsoft's documentation. Any idea what these values represent, or do they have to be converted into some other format? I haven't come across these values anywhere, except for the Admin permission, whose value is 0x7fffffffffffffff:
0x1b03c431aef - Edit
0xb008431041 - View Only
0x1b03c4312ef - Contribute
0x1b03c5f1bff - Design
0x7fffffffffffffff - Admin
webPermMasks are TWO 32-bit integers indicating which permissions a user has.
Each bit represents a permission.
(_spPageContextInfo.webPermMasks.High).toString(2)
(_spPageContextInfo.webPermMasks.Low).toString(2)
Displays the bits
High & Low
In the good old days computers worked with 8 bits, which someone named a Byte.
With 8 bits (8 permissions) you can only count from 0 to 255.
So to store a larger 16-bit number (0 to 65,535) on an 8-bit CPU you needed 2 Bytes.
We called these the High-Byte and the Low-Byte.
Present-day computers have evolved from CPUs that handle 8 bits to 16 bits to 32 bits.
SharePoint currently has 37 different security permissions, which do not fit in those 32 bits.
So, like so many moons ago, you need TWO 32-bit values to encode the permissions,
which some Microsoft engineer with common sense named the High and Low value.
The SP.js library (available as standard on most pages) has the information on which permission is which bit-number.
Run this in the developer console:
for (var permLevelName in SP.PermissionKind.prototype) {
    if (SP.PermissionKind.hasOwnProperty(permLevelName)) {
        var permLevel = SP.PermissionKind.parse(permLevelName);
        console.info(permLevelName, permLevel);
    }
}
Note permLevel is not the value, it is the bit-number
SP.PermissionKind.openItems is bit-number 6 and thus has the value 2^(6-1) = 32
If you add up all the values you get the High order and Low order integer values for Permissions.
Note permLevel for SP.PermissionKind.manageAlerts is the 39th bit
This lands in the High order integer, so its value within the High integer is 2^(39-33) = 2^6
webPermMasks
_spPageContextInfo.webPermMasks.Low
_spPageContextInfo.webPermMasks.High
Gives you 64 bits in TWO 32 bit Integers (with 37 permissions only a few are used in the High order)
indicating what Permissions the Current User has on the Current Page
All PermissionKinds (SP.PermissionKind.[name])
Note: This is the bit-number, not the value!
To check if someone has a permission, you have to calculate the (summed) value and then do a binary check against the High and Low order integers (a hand-rolled version of this check appears after the "Use in script" example below).
viewListItems: 1
addListItems: 2
editListItems: 3
deleteListItems: 4
approveItems: 5
openItems: 6
viewVersions: 7
deleteVersions: 8
cancelCheckout: 9
managePersonalViews: 10
manageLists: 12
viewFormPages: 13
anonymousSearchAccessList: 14
open: 17
viewPages: 18
addAndCustomizePages: 19
applyThemeAndBorder: 20
applyStyleSheets: 21
viewUsageData: 22
createSSCSite: 23
manageSubwebs: 24
createGroups: 25
managePermissions: 26
browseDirectories: 27
browseUserInfo: 28
addDelPrivateWebParts: 29
updatePersonalWebParts: 30
manageWeb: 31
anonymousSearchAccessWebLists: 32
useClientIntegration: 37
useRemoteAPIs: 38
manageAlerts: 39
createAlerts: 40
editMyUserInfo: 41
enumeratePermissions: 63
Use in script
The SP library supplies a function to check for individual levels:
SP.PageContextInfo.get_webPermMasks().has( [bitnumber] );
SP.PageContextInfo.get_webPermMasks().has( SP.PermissionKind.enumeratePermissions );
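If you want to do that check by hand, outside the SP library, the bit arithmetic looks like this (a sketch in Python for readability; the mask is the View Only value from the question, and has_permission is a hypothetical helper, not an sp.js API):

VIEW_ONLY = 0xb008431041           # PermMask from the question

low  = VIEW_ONLY & 0xFFFFFFFF      # lower 32 bits  (Low order integer)
high = VIEW_ONLY >> 32             # upper 32 bits  (High order integer)

def has_permission(bit_number, low, high):
    # bit_number is the SP.PermissionKind value; its mask bit is 2^(bit_number - 1)
    if bit_number <= 32:
        return bool(low & (1 << (bit_number - 1)))
    return bool(high & (1 << (bit_number - 33)))

print(has_permission(1, low, high))    # viewListItems        -> True for View Only
print(has_permission(3, low, high))    # editListItems        -> False for View Only
print(has_permission(37, low, high))   # useClientIntegration -> True for View Only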
Using unused space (tales of the past)
Only a handful of bits in the High Order integer are used by SharePoint.
Yet the database stores all 32 bits...
When we still built SharePoint Back End stuff we would use those unused bits for our own Permission scheme.
The free trials we let everyone install was actually the full blown product.
And when they bought the Licensed Product.. all it did was flip one bit in the database.
It's the sum of the permissions.
For example:
View Only includes the permissions below:
ViewListItems = 1
ViewVersions = 64
CreateAlerts = 549755813888
ViewFormPages = 4096
CreateSSCSite = 4194304
ViewPages = 131072
BrowseUserInfo = 134217728
UseRemoteAPIs = 137438953472
UseClientIntegration = 68719476736
Open = 65536
The sum is 756052856897 = 0xb008431041
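That sum is easy to verify (a throwaway Python check; the names mirror the list above):

view_only = (
    1               # ViewListItems
    + 64            # ViewVersions
    + 549755813888  # CreateAlerts
    + 4096          # ViewFormPages
    + 4194304       # CreateSSCSite
    + 131072        # ViewPages
    + 134217728     # BrowseUserInfo
    + 137438953472  # UseRemoteAPIs
    + 68719476736   # UseClientIntegration
    + 65536         # Open
)
print(view_only, hex(view_only))  # 756052856897 0xb008431041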
I am trying to create a software delay. Here is a sample program of what I am doing:
Address Data Opcode Comment
1800 06 LD, B Load register B with fixed value
1801 “ “ Fixed value
1802 05 DEC, B Decrement value in register B
1803 C2 JP cc Jump to 1802 if value is not 0
1804 02 - Address XX
1805 18 - Address XX
My question is: how can I calculate the fixed value to load into register B so that the process of decrementing it to 0 takes 2 seconds?
In my manual the instruction times are given for a 4MHz CPU, but the Z80 CPU I am using runs at 1.8MHz. Any idea how I can calculate this? Thanks. P.S. Here are the decrement (DEC) and jump (JP cc) instructions from the manual:
Instruction M Cycles T states 4 MHz E.t
DEC r 1 4 1.00
JP cc 3 10 (4,3,3) 2.50
If by 1.8MHz you mean exactly 1,800,000 Hz, then to get a 2 second delay you'd need to delay for 3,600,000 T-states. Your current delay loop takes 14 T-states per iteration, which means that your initial value for B would have to be 3600000/14 == 257143, which obviously won't fit in one byte.
The greatest number of iterations that you could specify with an 8-bit register is 256, and to reach 3,600,000 T-states with 256 iterations, each iteration would have to take 14,062 T-states. That's one big loop body.
If we use a 16-bit counter things start getting a bit more manageable. At 65,536 iterations we only need 55 T-states per iteration to reach a total of 3,600,000 T-states. Below is an example of what that could look like:
; Clobbers A, B and C
ld bc,#0
1$:
bit #0,a ; 8
bit #0,a ; 8
bit #0,a ; 8
and a,#255 ; 7
dec bc ; 6
ld a,c ; 4
or a,b ; 4
jp nz,1$ ; 10, total = 55 states/iteration
; 65536 iterations * 55 states = 3604480 states = 2.00248 seconds
I'm a bit of an optimization freak, so here is my go using the syntax with which I am most familiar (from the TASM assembler and similar):
Instruction opcode timing
ld bc,$EE9D ;01EE9D 10cc
ex (sp),hl ;E3 19*(256C+B)
ex (sp),hl ;E3 19*(256C+B)
ex (sp),hl ;E3 19*(256C+B)
ex (sp),hl ;E3 19*(256C+B)
djnz $-4 ;10FA 13cc*(256C+B) - 5*C
dec c ;0D 4*C
jr nz,$-7 ;20F7 12*C-5
This code is 12 bytes and 3600002 clock cycles.
EDIT: It seems like part of my answer is gone! To answer your question better: your Z80 can process 1,800,000 clock cycles in one second, so you need twice that (3,600,000). If you add up the timings given in my code, you get:
total = 10 + (256C + B)(19*4 + 13) - 5C + 4C + 12C - 5
      = 5 + 89(256C + B) + 11C
      = 5 + 22795C + 89B
So the code timing is largely dependent on C. 3600000/22795 is about 157, so we initialize C with 157 (0x9D). Plugging this back in, we get B to be roughly 237.9775, so we round that up to 238 (0xEE). Plugging these in gives a final timing of 3600002cc, or roughly 2.000001 seconds. This assumes that the processor is running at exactly 1.8MHz, which is very unlikely.
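If you want to sanity-check that arithmetic, the whole derivation fits in a few lines of Python (plain arithmetic, nothing Z80-specific; the constants 5, 22795 and 89 come from the timing sum above):

CLOCK_HZ = 1_800_000            # assuming exactly 1.8 MHz, as above
TARGET = 2 * CLOCK_HZ           # 3,600,000 T-states for a 2 second delay

# total T-states of the loop: 5 + 22795*C + 89*B
C = (TARGET - 5) // 22795                 # -> 157 (0x9D)
B = round((TARGET - 5 - 22795 * C) / 89)  # -> 238 (0xEE)
total = 5 + 22795 * C + 89 * B

print(hex((B << 8) | C))        # 0xee9d, i.e. ld bc,$EE9D
print(total, total / CLOCK_HZ)  # 3600002 T-states, ~2.000001 seconds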
As well, if you can use interrupts, figure out roughly how many times the interrupt fires per second and use a loop like halt \ djnz $-1. This saves a lot more in terms of power consumption.
Why do I get a DTMF sound when the E bit is 0 and no sound when it is 1? (RTP packets appear in Wireshark either way.)
Background:
I can send out an RFC 2833 DTMF event as outlined at http://www.ietf.org/rfc/rfc2833.txt,
obtaining the following behaviour when the E bit is NOT set:
If, for example, keys 7874556332111111145855885#3 are pressed, then ALL events are sent and show up in a program like Wireshark, but only 87456321458585#3 sound.
So the first key (which I figure could be a separate issue) and any repeats of an event (i.e. 11111) fail to sound.
In section 3.9, figure 2 of the above linked document, they give a 911 example where all but the last event have the E bit set.
When I set the 'E' bit to 1 for all numbers, I never get an event to sound.
I have thought of some possible causes but do not know if they are the reason:
1) Figure 2 shows payload types 96 and 97 being sent. I have not sent these headers. In section 3.8, codes 96 and 97 are described as "the dynamic payload types 96 and 97 have been assigned for the redundancy mechanism and the telephone event payload respectively"
2) In section 3.5, "E:": "A sender MAY delay setting the end bit until retransmitting the last packet for a tone, rather than on its first transmission". Does anyone have an idea of how to actually do this?
3) I have a separate output stream that also plays, so I wonder if it might be interfering with hearing this stream.
4) I have also fiddled around with timestamp intervals and the RTP marker.
Any help is greatly appreciated. Here is a sample wireshark event capture of the relevant areas:
6590 31.159045000 xx.x.x.xxx --.--.---.-- RTP EVENT Payload type=RTP Event, DTMF Pound # (end)
Real-Time Transport Protocol
Stream setup by SDP (frame 6225)
Setup frame: 6225
Setup Method: SDP
10.. .... = Version: RFC 1889 Version (2)
..0. .... = Padding: False
...0 .... = Extension: False
.... 0000 = Contributing source identifiers count: 0
0... .... = Marker: False
Payload type: telephone-event (101)
Sequence number: 0
Extended sequence number: 65536
Timestamp: 2000
Synchronization Source identifier: 0x15f27104 (368210180)
RFC 2833 RTP Event
Event ID: DTMF Pound # (11)
1... .... = End of Event: True
.0.. .... = Reserved: False
..00 0000 = Volume: 0
Event Duration: 1000
Please note: a volume of zero is the loudest obtainable level, as explained in the ietf.org/rfc/rfc2833.txt specification:
"volume: For DTMF digits and other events representable as tones,
this field describes the power level of the tone, expressed
in dBm0 after dropping the sign. Power levels range from 0 to
-63 dBm0. The range of valid DTMF is from 0 to -36 dBm0 (must
accept); lower than -55 dBm0 must be rejected (TR-TSY-000181,
ITU-T Q.24A). Thus, larger values denote lower volume. This
value is defined only for DTMF digits. For other events, it
is set to zero by the sender and is ignored by the receiver."
The issue is when the "End of Event" bit is switched on.
I recommend you start with RFC 4733, for two reasons:
It obsoletes RFC 2833.
Chapter 5 is a great source for understanding how a DTMF digit is produced.
Here is my understanding of how a DTMF digit should be sent:
A start packet is emitted. It has its M flag set and the E flag cleared. The timestamp for the event is set.
One or more continuation packets are emitted (as long as the user presses the digit). They have their M and E flags cleared. They use the timestamp defined in the start packet, but their sequence numbers and durations are incremented (see the RFC for the intervals).
An end packet is sent (when the user stops pressing the digit). It has its M flag cleared and its E flag set.
Why should several packets be sent for one event? Because the network is not always perfect and some loss can occur:
The RFC states (2.5.1.2. "Transmission of Event Packets") that:
For robustness, the sender SHOULD retransmit "state" events periodically.
And (2.5.1.4. "Retransmission of Final Packet") that:
The final packet for each event and for each segment SHOULD be sent a total of three times at the interval used by the source for updates. This ensures that the duration of the event or segment can be recognized correctly even if an instance of the last packet is lost.
As for your problem:
If, for example, keys 7874556332111111145855885#3 are pressed, then ALL events are sent and show up in a program like Wireshark, but only 87456321458585#3 sound. So the first key (which I figure could be a separate issue) and any repeats of an event (i.e. 11111) fail to sound.
Without a Wireshark trace it is hard to tell what is going on, but I suspect that the repeated 1 digits are ignored because there is no distinction between successive events; the first 1 digit is recognized and the others are treated as retransmissions of the same event.
I notice your Volume is set to 0; that seems like a likely reason to not be hearing any sound?
Beauty! Thanks a lot Laurent. I have implemented a working solution based on your recommendations. (Just posting as another answer to get better text formatting; I will give you the bounty.)
To sum up what was wrong:
I needed redundancy of packets.
Setting all E's to 0 meant that repeated same-key events were ignored.
Setting all E's to 1 meant that I signaled the end of an event but never the event itself, hence the silence.
Here is a summarized flow example:
Event_ID M E Timestamp Duration Sequence_number
3 1 0 400 400 1
3 0 0 400 800 2
3 0 0 400 1200 3
3 0 0 400 1600 4
3 0 1 400 1600 5
3 0 1 400 1600 6
3 0 1 400 1600 7
7 1 0 800 400 8
7 0 0 800 800 9
7 0 0 800 1200 10
7 0 0 800 1600 11
7 0 1 800 1600 12
7 0 1 800 1600 13
7 0 1 800 1600 14
*Note: I just looked at the first example in section 5 of the suggested RFC 4733 and it is great! Much better than the 2833 ones.
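For completeness, here is a minimal sketch that generates exactly the packet flow in the table above (Python, standard library only; the 4-byte payload layout of event, E/R/volume byte and 16-bit duration is the one defined in RFC 4733 section 2.3, and dtmf_packets is my own helper name):

import struct

def dtmf_event_payload(event_id, end, volume, duration):
    # event (8 bits) | E bit, R bit, volume (6 bits) | duration (16 bits)
    e_r_volume = (0x80 if end else 0x00) | (volume & 0x3F)
    return struct.pack("!BBH", event_id, e_r_volume, duration)

def dtmf_packets(event_id, timestamp, step=400, updates=4, retransmits=3):
    # One start packet (M=1, E=0), continuation packets with a growing
    # duration, then the end packet (E=1) repeated three times.
    for i in range(1, updates + 1):
        yield i == 1, timestamp, dtmf_event_payload(event_id, False, 0, i * step)
    for _ in range(retransmits):
        yield False, timestamp, dtmf_event_payload(event_id, True, 0, updates * step)

# Digit '3' (event ID 3) at RTP timestamp 400, mirroring the first seven
# rows of the table; sequence numbers are left to the RTP layer.
for marker, ts, payload in dtmf_packets(3, 400):
    print(marker, ts, payload.hex())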