FlutterBlue Characteristics - bluetooth

I am developing a Bluetooth app that connects to a fitness watch. This is my first time working with Bluetooth. I managed to connect my app with the device using the brilliant FlutterBlue library.
However, I am unable to make sense of the results I get from reading. This is how I read the characteristics:
_readCharacteristic(BluetoothCharacteristic c) async {
  var results = await widget.device.readCharacteristic(c);
  print("${results.toList()}");
  // setState(() {});
}
This is the result:
[7, 133, 0, 0, 1, 0, 0, 124, 92, 1]
I have no idea what these numbers mean or what I am supposed to do with them.

From the documentation:
var characteristics = service.characteristics;
for (BluetoothCharacteristic c in characteristics) {
  List<int> value = await device.readCharacteristic(c);
  print(value);
}
// Writes to a characteristic
await device.writeCharacteristic(c, [0x12, 0x34])
We can see that the library works on List<int> values: each number in the list is one byte (0-255) of the characteristic's raw value.
What those bytes mean depends on the characteristic. Some characteristics pack binary fields (counters, flags, timestamps), while others carry text, in which case the bytes are simply the UTF-8 encoding of a string.
In the example above, the characteristic is written with the two bytes 0x12 and 0x34. Looked up in an ASCII table, those would read as "(Device Control 2)" and "4", which is a hint that not every characteristic is meant to be interpreted as text.
It is your job to decode the bytes into meaningful values (UTF-8 for text) and to encode them again when sending data back to the watch. This is required by the watch's software, which can respond to certain characteristic writes depending on the received value.
You probably will have to do some digging into the documentation/bluetooth spec of the watch you are using.
Check out the Utf8Decoder class (or the utf8 codec) in the dart:convert library. It should help you translate the bytes into human-readable text when the characteristic really does contain text. If not, you will have to do some digging.
import 'dart:convert';

String decoded = const Utf8Decoder().convert(value); // value is the List<int> (or Uint8List) read from the characteristic
print(decoded);

Related

Unpadded RSA ciphertext multiplication by 2**e breaks deciphering on a small message _sporadically_

Please help me understand why the snippet below sometimes fails to decrypt the message (it succeeds at best 5 out of 6 times) when run multiple times.
It generates a 1024-bit RSA key pair, then encrypts "Hello World!!" in the most naive way possible, doubles the ciphertext, decrypts the doubled ciphertext, and finally divides the result to get the original plaintext back. At the decryption step it can clearly be seen (by logging doubled_decr) when it goes wildly off.
As the given plaintext is small, it should survive the doubling just fine. The bigint-mod-arith package, used for modular exponentiation here, is maintained and has some tests (though really big numbers appear only in the performance section), has been used a number of times, and doesn't seem to be the cause.
import {generateKeyPairSync, privateDecrypt, publicEncrypt, constants} from "node:crypto";
import * as modAr from "bigint-mod-arith";
// "Generate a 1024 bit RSA key pair." https://cryptopals.com/sets/6/challenges/46
const keys = generateKeyPairSync("rsa", {modulusLength: 1024});
let jwk_export = keys.privateKey.export({format: "jwk"});
let pt_test = Buffer.from("Hello World!!");
let ct_test = naiveEncrypt(pt_test, keys.publicKey);
let doubled = bigintFromBuf(ct_test)
    * modAr.modPow(2, bigintFromParam(jwk_export.e), bigintFromParam(jwk_export.n));
let doubled_decr = naiveDecrypt(Buffer.from(doubled.toString(16), "hex"), keys.privateKey);
console.debug(pt_test, "plaintext buffer");
console.debug(doubled_decr, "homomorphically doubled buffer (after decryption)");
console.debug(
  "_Decrypted doubled buffer divided back by 2 and converted to text_:",
  Buffer.from((bigintFromBuf(doubled_decr) / 2n).toString(16), "hex").toString()
);
function bigintFromParam(str) {return bigintFromBuf(Buffer.from(str, "base64url"))}
function bigintFromBuf(buf) {return BigInt("0x" + buf.toString("hex"))}
function naiveEncrypt(message, publicKey) {
  const keyParameters = publicKey.export({format: "jwk"});
  // console.debug(bigintFromParam(keyParameters.e));
  // console.debug(bigintFromParam(keyParameters.n));
  return Buffer.from(modAr.modPow(
    bigintFromBuf(message),
    bigintFromParam(keyParameters.e),
    bigintFromParam(keyParameters.n)
  ).toString(16), "hex");
}

function naiveDecrypt(message, privateKey) {
  const keyParameters = privateKey.export({format: "jwk"});
  // console.debug(bigintFromParam(keyParameters.d));
  console.assert(
    bigintFromParam(keyParameters.e) == modAr.modInv(
      bigintFromParam(keyParameters.d),
      (bigintFromParam(keyParameters.q) - 1n) * (bigintFromParam(keyParameters.p) - 1n)
    )
  );
  return Buffer.from(modAr.modPow(
    bigintFromBuf(message),
    bigintFromParam(keyParameters.d),
    bigintFromParam(keyParameters.n)
  ).toString(16), "hex");
}
There are two problems, one fundamental and one 'just coding':
You need to divide the 'doubled' plaintext by 2 in the modular ring Zn, not in the plain integers Z. In general, to divide in Zn we modular-multiply by the modular inverse -- a over b = a*modInv(b,n)%n -- but for the particular case of 2 we can simplify: if the value is even, just a/2; if it is odd, (a+n)/2.
When you take bigint.toString(16), the result has a variable length depending on the value of the bigint. Since RSA cryptograms are for practical purposes uniform random numbers in [2,n-1], with a 1024-bit key the result is 128 hex digits most of the time, but about 1/16 of the time it is 127 digits, about 1/256 of the time 126 digits, etc. If the number of digits is odd, Buffer.from(hex,'hex') throws away the last digit and produces a value that is very wrong for your purpose.
In standard RSA we conventionally represent all cryptograms and signatures as a byte string of fixed length equal to the length needed to represent n -- for a 1024-bit key always 128 bytes even if that includes leading zeros. For your hacky scheme, it is sufficient if we have an even number of digits -- 126 is okay, but not 127.
I simplified your code some to make it easier for me to test -- in particular, I compute the bigint versions of n, e, d once and reuse them -- but I only really changed bigintToBuf and the halve= line per the above, to get the following, which works for me (in node >= 16, so 'crypto' supports JWK export):
const modAr=require('bigint-mod-arith'); // actually I use the IIFE for reasons but that makes no difference
const jwk=require('crypto').generateKeyPairSync('rsa',{modulusLength:1024}).privateKey.export({format:'jwk'});
const n=bigintFromParam(jwk.n), e=bigintFromParam(jwk.e), d=bigintFromParam(jwk.d);
function bigintFromParam(str){ return bigintFromBuf(Buffer.from(str,'base64url')); }
function bigintFromBuf(buf){ return BigInt('0x'+buf.toString('hex')); }
function bigintToBuf(x){ let t=x.toString(16); return Buffer.from(t.length%2?'0'+t:t,'hex');}
let plain=Buffer.from('Hello world!');
let encr=bigintToBuf(modAr.modPow(bigintFromBuf(plain),e,n));
let double=bigintToBuf(bigintFromBuf(encr)*modAr.modPow(2n,e,n))
// should actually take mod n there to get an in-range ciphertext,
// but the modPow(,,n) in the next step = decrypt fixes that for us
let decr=bigintToBuf(modAr.modPow(bigintFromBuf(double),d,n));
let temp=bigintFromBuf(decr), halve=bigintToBuf(temp%2n? (temp+n)/2n: temp/2n);
console.log(halve.toString());
PS: real RSA implementations, including the 'crypto' module, don't use modPow(,d,n) for decryption; they use the "Chinese Remainder" parameters in the private key to do a more efficient computation instead. See Wikipedia for a good explanation.
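For illustration only, here is a rough sketch of that CRT computation, reusing jwk, bigintFromParam, and modAr from the snippet above together with the remaining JWK private parameters (p, q, dp, dq, qi); it is just the textbook formula, not how the 'crypto' module actually implements it:
// CRT decryption: m1 = c^dp mod p, m2 = c^dq mod q, h = qi*(m1-m2) mod p, m = m2 + h*q
const p = bigintFromParam(jwk.p), q = bigintFromParam(jwk.q);
const dp = bigintFromParam(jwk.dp), dq = bigintFromParam(jwk.dq), qi = bigintFromParam(jwk.qi);
function crtDecrypt(c) {
  const m1 = modAr.modPow(c, dp, p);
  const m2 = modAr.modPow(c, dq, q);
  const h = (qi * (((m1 - m2) % p) + p)) % p; // keep the difference non-negative before reducing mod p
  return m2 + h * q; // same result as modAr.modPow(c, d, n), computed with two half-size exponentiations
}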
PPS: just for the record, 1024-bit RSA -- even with decent padding -- has been considered marginal for security since 2013 at latest and mostly prohibited, although there is not yet a reported break in the open community. However, that is offtopic for SO, and your exercise clearly isn't about security.

What are the differences between `createSign()` and `privateEncrypt()` in `node:crypto` for RSA?

privateEncrypt() says "..., the padding property can be passed. Otherwise, this function uses RSA_PKCS1_PADDING." sign() says "padding Optional padding value for RSA, one of the following:
crypto.constants.RSA_PKCS1_PADDING (default)".
So the naive expectation would be that both return the same buffer, as the padding scheme and hash function used are identical. (I suppose that privateEncrypt() uses the signing variant of the scheme, while publicEncrypt() uses the encryption variant; please count this as part of the question, as I couldn't find this in the docs and sank quite some time into mapping the OpenSSL manuals to node:crypto, so your expertise is helpful!)
But they don't. Maybe I read the docs incorrectly, maybe it's common knowledge I'm missing, or maybe it's something else. Please explain the differences between them in this sense, or correct the snippet so the difference becomes clear.
// this snippet is solely for discussion purpose and shouldn't be used in production
import {
  generateKeyPairSync, createSign, privateEncrypt, createVerify, createHash
} from "node:crypto";

const keyPair = generateKeyPairSync("rsa", {modulusLength: 1024});
const encrypted = privateEncrypt(
  keyPair.privateKey,
  createHash('sha256').update("stack overflow").digest()
);
// console.debug(encrypted);
const signed = createSign("SHA256").update("stack overflow").end().sign(keyPair.privateKey);
// console.debug(signed);
// console.debug(createVerify("SHA256").update("stack overflow").end().verify(
//   keyPair.publicKey, signed
// )); // "true"
console.assert(!Buffer.compare(encrypted, signed)); // "Assertion failed"
Sign#sign() is used for signing and applies the RSASSA-PKCS1-v1_5 variant for PKCS#1v1.5 padding.
RSASSA-PKCS1-v1_5 is described in RFC 8017, Section 8.2. This uses the EMSA-PKCS1-v1_5 encoding described in Section 9.2, which pads the message as follows:
EM = 0x00 || 0x01 || PS || 0x00 || T
where T is the DER encoding of the DigestInfo value (containing the digest OID and the message hash) and PS is a sequence of as many 0xFF bytes as needed to reach the length of the key/modulus.
While sign() implicitly determines T from the message, crypto.privateEncrypt() uses the message directly as T. Thus, for privateEncrypt() to work like sign(), T must be explicitly determined from the message and passed instead of the message. Since the posted code already hashes, only the digest-specific byte sequence containing the OID (see RFC8017, p. 47) needs to be prepended.
const digestInfo = Buffer.from('3031300d060960864801650304020105000420','hex');
const dataHashed = crypto.createHash('sha256').update("stack overflow").digest();
const dataToSign = Buffer.concat([digestInfo, dataHashed]);
const encrypted = crypto.privateEncrypt(keyPair.privateKey, dataToSign);
Because RSASSA-PKCS1-v1_5 is deterministic, both approaches provide the same signature (for an identical key).
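As a quick check (a sketch that reuses keyPair and signed from the question's snippet together with the encrypted value computed just above), the comparison from the question should now succeed:
console.assert(!Buffer.compare(encrypted, signed)); // no "Assertion failed" output anymore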
Why are there both methods?
Sign#sign() is used for regular signing; crypto.privateEncrypt() is meant for low-level signing, e.g. when not the message itself but only the hashed data is available.
Since v12.0.0 there is also crypto.sign(), which performs high-level signing like Sign#sign(). This newer variant also supports algorithms like Ed25519:
const signedv12 = crypto.sign('sha256', 'stack overflow', keyPair.privateKey);
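For completeness, a small usage sketch of the matching one-shot verification, crypto.verify() (this reuses keyPair and signedv12 from above and passes the data as a Buffer):
const ok = crypto.verify('sha256', Buffer.from('stack overflow'), keyPair.publicKey, signedv12);
console.log(ok); // true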
In contrast, crypto.publicEncrypt() performs an encryption and uses the RSAES-PKCS1-v1_5 variant for PKCS#1v1.5 padding.
Note that 1024 bit keys are insecure nowadays. The key size should be at least 2048 bits.

What does the int value returned from the compareTo function in Kotlin really mean?

In the documentation of the compareTo function, I read:
Returns zero if this object is equal to the specified other object, a
negative number if it's less than other, or a positive number if it's
greater than other.
What does this less than or greater than mean in the context of strings? Is, for example, "Hello World" less than a single character "a"?
val epicString = "Hello World"
println(epicString.compareTo("a")) //-25
Why -25 and not -10 or -1 (for example)?
Other examples:
val epicString = "Hello World"
println(epicString.compareTo("HelloWorld")) //-55
Is "Hello World" less than "HelloWorld"? Why?
Why does it return -55 and not -1, -2, -3, etc.?
val epicString = "Hello World"
println(epicString.compareTo("Hello World")) //55
Is Hello World greater than Hello World? Why?
Why it returns 55 and not 1, 2, 3, etc?
I believe you're asking about the implementation of the compareTo method for java.lang.String. Here is the source code for Java 11:
public int compareTo(String anotherString) {
    byte v1[] = value;
    byte v2[] = anotherString.value;
    if (coder() == anotherString.coder()) {
        return isLatin1() ? StringLatin1.compareTo(v1, v2)
                          : StringUTF16.compareTo(v1, v2);
    }
    return isLatin1() ? StringLatin1.compareToUTF16(v1, v2)
                      : StringUTF16.compareToLatin1(v1, v2);
}
So we have a delegation to either StringLatin1 or StringUTF16 here, so we should look further.
Fortunately, StringLatin1 and StringUTF16 have similar implementations when it comes to the compare functionality.
Here is the implementation for StringLatin1, for example:
public static int compareTo(byte[] value, byte[] other) {
    int len1 = value.length;
    int len2 = other.length;
    return compareTo(value, other, len1, len2);
}

public static int compareTo(byte[] value, byte[] other, int len1, int len2) {
    int lim = Math.min(len1, len2);
    for (int k = 0; k < lim; k++) {
        if (value[k] != other[k]) {
            return getChar(value, k) - getChar(other, k);
        }
    }
    return len1 - len2;
}
As you can see, it iterates over the characters of the shorter string, and if the characters at the same index of the two strings differ, it returns the difference between them. If it doesn't find any difference during the iteration (one string is a prefix of the other), it falls back to comparing the lengths of the two strings.
In your case, there is a difference at the very first iteration already...
So it's the same as "H".compareTo("a"), which gives -25.
The code of "H" is 72
The code of "a" is 97
So, 72 - 97 = -25
Short answer: The exact value doesn't have any meaning; only its sign does.
As the specification for compareTo() says, it returns a -ve number if the receiver is smaller than the other object, a +ve number if the receiver is larger, or 0 if the two are considered equal (for the purposes of this ordering).
The specification doesn't distinguish between different -ve numbers, nor between different +ve numbers — and so neither should you.  Some classes always return -1, 0, and 1, while others return different numbers, but that's just an implementation detail — and implementations vary.
Let's look at a very simple hypothetical example:
class Length(val metres: Int) : Comparable<Length> {
    override fun compareTo(other: Length)
        = metres - other.metres
}
This class has a single numerical property, so we can use that property to compare them.  One common way to do the comparison is simply to subtract the two lengths: that gives a number which is positive if the receiver is larger, negative if it's smaller, and zero if they're the same length, which is just what we need.
In this case, the value of compareTo() would happen to be the signed difference between the two lengths.
However, that method has a subtle bug: the subtraction could overflow, and give the wrong results if the difference is bigger than Int.MAX_VALUE.  (Obviously, to hit that you'd need to be working with astronomical distances, both positive and negative — but that's not implausible.  Rocket scientists write programs too!)
To fix it, you might change it to something like:
class Length(val metres: Int) : Comparable<Length> {
    override fun compareTo(other: Length) = when {
        metres > other.metres -> 1
        metres < other.metres -> -1
        else -> 0
    }
}
That fixes the bug; it works for all possible lengths.
But notice that the actual return value has changed in most cases: now it only ever returns -1, 0, or 1, and no longer gives an indication of the actual difference in lengths.
If this was your class, then it would be safe to make this change because it still matches the specification.  Anyone who just looked at the sign of the result would see no change (apart from the bug fix).  Anyone using the exact value would find that their programs were now broken — but that's their own fault, because they shouldn't have been relying on that, because it was undocumented behaviour.
Exactly the same applies to the String class and its implementation.  While it might be interesting to poke around inside it and look at how it's written, the code you write should never rely on that sort of detail.  (It could change in a future version.  Or someone could apply your code to another object which didn't behave the same way.  Or you might want to expand your project to be cross-platform, and discover the hard way that the JavaScript implementation didn't behave exactly the same as the Java one.)
In the long run, life is much simpler if you don't assume anything more than the specification promises!

Forward compatibility in storage size constrained protocol

I have a simple protocol consisting of, let's say, 4 fields:
Field-1 (4-bits)
Field-2 (6-bits)
Field-3 (4-bits)
Field-4 (2-bits)
Currently, I organize them so they are byte-aligned as:
Field-1,Field-3,Field-2,Field-4
In total, the message occupies 2 bytes with 0 bytes overhead.
To make this backwards compatible, so I can understand messages from a previous version, I add a 1-byte version field at the beginning, and it becomes:
Version-Field,Field-1,Field-3,Field-2,Field-4
3 bytes in total with an overhead of 1 byte.
How do I add forwards compatibility such that I can add new fields in new versions of the protocol while ensuring old versions of the software can still understand the messages, with the lowest possible overhead?
Typically, your protocol would specify that each message has:
a message length indicator that will work for all future versions. This is typically either a fixed-size integer that is guaranteed to be big enough, or a variable-length-encoded integer using extension bits like you see with VLQ or UTF-8.
an indicator of the minimum version of the protocol that you need to understand to parse the message. This is important because new versions might
introduce things that must be understood.
Each new version of the protocol then allows you to add new data to a prefix that conforms to the previous version of the protocol, and every version of the protocol has to specify how to recognize the end of the data it defines (in your example that's fixed length, so it's easy), and the start of the data defined in some future version.
To process a message, the consumer checks to make sure it is a high enough version, processes the prefix that it understands, and uses the length field to skip the rest.
For something as space-constrained as your protocol, I might do something like this:
The first byte is a 4-bit minimum version and a 4-bit length field.
If the length field L is in 0-11, then the remainder of the message is L+1 bytes long.
Otherwise the L-11 bytes after the first byte are an integer containing the length.
When the minimum version you must understand is > 15, then some version of the protocol before version 15 will define additional version information in the message.
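To make that concrete, here is a minimal parsing sketch of the header described above (the nibble order and the big-endian length encoding are assumptions, and MY_VERSION is a hypothetical constant for the highest version this consumer understands):
// header byte: [4-bit minimum version | 4-bit length code L]
const MY_VERSION = 1;
function parseHeader(bytes) {
  const minVersion = bytes[0] >> 4;    // minimum protocol version needed to parse this message
  const lengthCode = bytes[0] & 0x0f;  // length code L
  if (minVersion > MY_VERSION) throw new Error("message requires a newer protocol version");
  let bodyLength, bodyOffset;
  if (lengthCode <= 11) {
    bodyLength = lengthCode + 1;       // short form: the rest of the message is L+1 bytes
    bodyOffset = 1;
  } else {
    const lenBytes = lengthCode - 11;  // long form: the next L-11 bytes hold the length
    bodyLength = 0;
    for (let i = 1; i <= lenBytes; i++) bodyLength = (bodyLength << 8) | bytes[i];
    bodyOffset = 1 + lenBytes;
  }
  // process the prefix you understand, then use bodyLength to skip the rest
  return {minVersion, bodyOffset, bodyLength};
}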
You'll have forward compatibility (FC) by ensuring strict backward compatibility (BC) with this rule:
A new version must keep the field layout known to previous versions intact.
If you can follow the rule, you'll automatically have both BC and FC.
Consequently, with the rule you can only add new fields by appending them to the existing layout.
Let me explain with an example.
Say that you need to add these fields for version 2:
Field-5 (1-bit)
Field-6 (7-bits)
Remember the rule: new fields can only be appended to the existing layout.
So, this is version 2 message layout:
Version-Field,Field-1,Field-3,Field-2,Field-4,Field-5,Field-6
Because the layout known to version 1 is intact, your version 1 code can read messages of any version with this (pseudocode):
function readMessageVersion1(byte[] input) {
  var msg = {};
  msg.version = input[0];
  msg.field1 = input[1] & 0x0f;
  msg.field3 = input[1] >> 4 & 0x0f;
  msg.field2 = input[2] & 0x3f;
  msg.field4 = input[2] >> 6 & 0x03;
  return msg;
}
Version 1 doesn't need to check the version field because the known layout is unconditional.
However, version 2 and all other versions will need to check the version field.
Assuming that we use the value 2 to indicate version 2, this will do (pseudocode):
function readMessageVersion2(byte[] input) {
  var msg = readMessageVersion1(input);
  // check version field
  if (msg.version < 2) return msg;
  msg.field5 = input[3] & 0x01;
  msg.field6 = input[3] >> 1 & 0x7f;
  return msg;
}
The most important part of the code is the fact that it reuses code from the previous version and this check:
if (msg.version < 2) return msg;
Version 3 of the code can simply follow version 2 like this:
function readMessageVersion3(byte[] input) {
  var msg = readMessageVersion2(input);
  // check version field
  if (msg.version < 3) return msg;
  // read the input bytes here
  return msg;
}
Think of it as a template for future versions.
By following the rule and the examples, any version of the protocol can read messages from any version with just 1 byte overhead.

The last challenge of Bytewiser: Array Buffers

The challenge is as follows:
Array Buffers are the backend for Typed Arrays. Whereas Buffer in node
is both the raw bytes as well as the encoding/view, Array Buffers are
only raw bytes and you have to create a Typed Array on top of an Array
Buffer in order to access the data.
When you create a new Typed Array and don't give it an Array Buffer to
be a view on top of, it will create its own new Array Buffer instead.
For this challenge, take the integer from process.argv[2] and write it
as the first element in a single element Uint32Array. Then create a
Uint16Array from the Array Buffer of the Uint32Array and log out to
the console the JSON stringified version of the Uint16Array.
Bonus: try to explain the relevance of the integer from
process.argv[2], or explain why the Uint16Array has the particular
values that it does.
The solution given by the author is:
var num = +process.argv[2]               // coerce the command-line string to a number
var ui32 = new Uint32Array(1)            // one 32-bit unsigned integer (backed by a 4-byte Array Buffer)
ui32[0] = num
var ui16 = new Uint16Array(ui32.buffer)  // view the same 4 bytes as two 16-bit unsigned integers
console.log(JSON.stringify(ui16))
I don't understand what the plus sign in the first line means, and I can't understand the logic of this block of code either.
Thank you a lot if you can solve my puzzle.
Typed arrays are often used in context of asm.js, a strongly typed subset of JavaScript that is highly optimisable. "strongly typed" and "subset of JavaScript" are contradictory requirements, since JavaScript does not natively distinguish integers and floats, and has no way to declare them. Thus, asm.js adopts the convention that the following no-ops on integers and floats respectively serve as declarations and casts:
n|0 is n for every integer
+n is n for every float
Thus,
var num = +process.argv[2]
would be the equivalent of the following line in C or Java:
float num = process.argv[2]
declaring a floating point variable num.
It is puzzling, though; I would have expected the following, given the requirement for integers:
var num = (process.argv[2])|0
Even in normal JavaScript, though, they can have uses, because they will also convert string representations of integers or floats, respectively, into numbers.
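A quick illustration of both conversions (the values here are just made-up examples of the coercion):
console.log(+"3.14");           // 3.14 -- unary plus turns a numeric string into a number
console.log("42" | 0);          // 42   -- |0 coerces to a 32-bit integer ("3.14"|0 would give 3)
console.log(+process.argv[2]);  // the command-line argument as a number rather than a string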
