How can I compute the size in bits of group elements in charm-crypto? - signature

I want to use the scheme “Hess - Efficient identity based signature schemes based on pairings” from Charm-Crypto.
I need to calculate the bit-length of group elements (e.g. S2 in the signature).
As I understand it from this related question, serializing an element gives me a Base64-encoded string. From that and this other question about Base64, I concluded that I need to compute the following:
signature = {'S1': S1, 'S2': S2}
S2_serial = group.serialize(signature['S2'])
sigLenInBase64 = len(S2_serial)
sigLenInByte = (sigLenInBase64 * 3) / 4
sigLenInBit = sigLenInByte * 8
As S2 is a group element of G1, I expect its size to match that of the underlying curve (so 512 bits for 'SS512', 224 bits for 'MNT224', etc.). However, the sizes differ by 28 bits for 'SS512' and 'MNT224' (540 and 252 bits, respectively) and by 21 bits for 'MNT159'. Why is that? How can I predict, for a given curve, by how much it will be off?
My current guess is that I'm not taking into account some additional information (such as a byte for the sign).
Update:
Using the accepted answer, I can now correctly compute the size like this:
def compute_bitsize(base64_input):
    b_padded = base64_input.split(str.encode(":"))[1]
    pad_size = b_padded.count(str.encode("="))
    b_len_without_pad = len(b_padded) - 4
    byte_len = (b_len_without_pad * 3) / 4 + (3 - pad_size) - 1
    bit_len = byte_len * 8
    return bit_len
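As a quick check (a sketch using the example 'SS512' serialization quoted in the answer below):

s = b"1:Nri028S0SySkDoN0JH1Lu6HQo/Jkhq7DCZHI1MUrHOuIgCONonN14GzmhwopYQOxnjOysclhYNOnsvCLjVDLggE="
print(compute_bitsize(s))  # 512.0 -- the 64-byte G1 element, with the extra sign byte excluded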

Take the example serialization for 'SS512' from the question you link to. The serialization string is:
1:Nri028S0SySkDoN0JH1Lu6HQo/Jkhq7DCZHI1MUrHOuIgCONonN14GzmhwopYQOxnjOysclhYNOnsvCLjVDLggE=
Looking at the line:
PyObject *result = PyBytes_FromFormat("%d:%s", element->element_type, (const char *) base64_data_buf);
from the source code for charm we see that the part of the string you are interested in is
Nri028S0SySkDoN0JH1Lu6HQo/Jkhq7DCZHI1MUrHOuIgCONonN14GzmhwopYQOxnjOysclhYNOnsvCLjVDLggE=
Decoding this as described on Wikipedia under 'Decoding Base64 with padding' gives us 65 bytes where one byte is a "byte for the sign of the element" according to the question you link to.
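This is easy to check directly (a minimal sketch using only the standard base64 module):

import base64

data = base64.b64decode("Nri028S0SySkDoN0JH1Lu6HQo/Jkhq7DCZHI1MUrHOuIgCONonN14GzmhwopYQOxnjOysclhYNOnsvCLjVDLggE=")
print(len(data))            # 65 bytes in total
print((len(data) - 1) * 8)  # 512 bits once the extra sign byte is removed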

Related

Unpadded RSA ciphertext multiplication by 2**e breaks deciphering on a small message _sporadically_

Please help me understand why the snippet below sometimes fails to decrypt the message (it succeeds at best 5 out of 6 times) when run multiple times.
It generates a 1024-bit RSA key pair, then encrypts "Hello World!!" in the most naive way possible, doubles the ciphertext, decrypts the doubled ciphertext, and finally divides the result to get the original plaintext back. At the decryption step it can be clearly seen (by logging doubled_decr) when it goes wildly off.
As the given plaintext is small, it should survive the doubling. The bigint-mod-arith package, used here for modular exponentiation, is maintained, has some tests (though really big numbers appear only in the performance section), has been used plenty of times, and doesn't seem to be the cause.
import {generateKeyPairSync, privateDecrypt, publicEncrypt, constants} from "node:crypto";
import * as modAr from "bigint-mod-arith";
// "Generate a 1024 bit RSA key pair." https://cryptopals.com/sets/6/challenges/46
const keys = generateKeyPairSync("rsa", {modulusLength: 1024});
let jwk_export = keys.privateKey.export({format: "jwk"});
let pt_test = Buffer.from("Hello World!!");
let ct_test = naiveEncrypt(pt_test, keys.publicKey);
let doubled = bigintFromBuf(ct_test)
* modAr.modPow(2, bigintFromParam(jwk_export.e), bigintFromParam(jwk_export.n));
let doubled_decr = naiveDecrypt(Buffer.from(doubled.toString(16), "hex"), keys.privateKey);
console.debug(pt_test, "plaintext buffer");
console.debug(doubled_decr, "homomorphically doubled buffer (after decryption)");
console.debug(
"_Decrypted doubled buffer divided back by 2 and converted to text_:",
Buffer.from((bigintFromBuf(doubled_decr) / 2n).toString(16), "hex").toString()
)
function bigintFromParam(str) {return bigintFromBuf(Buffer.from(str, "base64url"))}
function bigintFromBuf(buf) {return BigInt("0x" + buf.toString("hex"))}
function naiveEncrypt(message, publicKey) {
    const keyParameters = publicKey.export({format: "jwk"});
    // console.debug(bigintFromParam(keyParameters.e));
    // console.debug(bigintFromParam(keyParameters.n));
    return Buffer.from(modAr.modPow(
        bigintFromBuf(message),
        bigintFromParam(keyParameters.e),
        bigintFromParam(keyParameters.n)
    ).toString(16), "hex");
}
function naiveDecrypt(message, privateKey) {
    const keyParameters = privateKey.export({format: "jwk"});
    // console.debug(bigintFromParam(keyParameters.d));
    console.assert(
        bigintFromParam(keyParameters.e) == modAr.modInv(
            bigintFromParam(keyParameters.d),
            (bigintFromParam(keyParameters.q) - 1n) * (bigintFromParam(keyParameters.p) - 1n)
        )
    );
    return Buffer.from(modAr.modPow(
        bigintFromBuf(message),
        bigintFromParam(keyParameters.d),
        bigintFromParam(keyParameters.n)
    ).toString(16), "hex");
}
There are two problems, one fundamental and one 'just coding':
you need to divide the 'doubled' plaintext by 2 in the modular ring Zn, not in the plain integers Z. In general, to divide in Zn we modular-multiply by the modular inverse -- a over b = a*modInv(b,n)%n -- but for the particular case of 2 we can simplify: if the value is even, just a/2; if it is odd, (a+n)/2
when you take bigint.toString(16) the result has variable length depending on the value of the bigint. Since RSA cryptograms are for practical purposes uniform random numbers in [2,n-1], with a 1024-bit key the result is 128 hex digits most of the time, but about 1/16 of the time it is 127 digits, about 1/256 of the time it is 126 digits, etc. If the number of digits is odd, Buffer.from(hex,'hex') throws away the last digit and produces a value that is very wrong for your purpose.
In standard RSA we conventionally represent all cryptograms and signatures as byte strings of fixed length equal to the length needed to represent n -- for a 1024-bit key always 128 bytes, even if that includes leading zeros. For your hacky scheme it is sufficient to have an even number of digits -- 126 is okay, but not 127.
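To see the halving rule with small numbers (a toy Python sketch of my own, not real RSA; n is just a small odd modulus):

n = 23
m = 15                       # the "plaintext"
doubled = (2 * m) % n        # 30 % 23 = 7, which is odd
half = doubled // 2 if doubled % 2 == 0 else (doubled + n) // 2
print(half)                  # 15 == m, recovered by halving in Zn rather than in Z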
I simplified your code somewhat to make it easier for me to test -- in particular I compute the bigint versions of n, e, d once and reuse them -- but I only really changed bigintToBuf and the halve computation per the above, to get the following, which works for me (in node>=16 so 'crypto' supports jwk export):
const modAr=require('bigint-mod-arith'); // actually I use the IIFE for reasons but that makes no difference
const jwk=require('crypto').generateKeyPairSync('rsa',{modulusLength:1024}).privateKey.export({format:'jwk'});
const n=bigintFromParam(jwk.n), e=bigintFromParam(jwk.e), d=bigintFromParam(jwk.d);
function bigintFromParam(str){ return bigintFromBuf(Buffer.from(str,'base64url')); }
function bigintFromBuf(buf){ return BigInt('0x'+buf.toString('hex')); }
function bigintToBuf(x){ let t=x.toString(16); return Buffer.from(t.length%2?'0'+t:t,'hex');}
let plain=Buffer.from('Hello world!');
let encr=bigintToBuf(modAr.modPow(bigintFromBuf(plain),e,n));
let double=bigintToBuf(bigintFromBuf(encr)*modAr.modPow(2n,e,n))
// should actually take mod n there to get an in-range ciphertext,
// but the modPow(,,n) in the next step = decrypt fixes that for us
let decr=bigintToBuf(modAr.modPow(bigintFromBuf(double),d,n));
let temp=bigintFromBuf(decr), halve=bigintToBuf(temp%2n? (temp+n)/2n: temp/2n);
console.log(halve.toString());
PS: real RSA implementations, including the 'crypto' module, don't use modPow(,d,n) for decryption, they use the "Chinese Remainder" parameters in the private key to do a more efficient computation instead. See wikipedia for a good explanation.
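For the record, that CRT computation looks roughly like this (a Python sketch of the textbook formula; the parameter names dp, dq, qinv are mine, following the usual RFC 8017 naming):

# p, q: the RSA primes; dp = d mod (p-1), dq = d mod (q-1), qinv = q^-1 mod p
def rsa_crt_decrypt(c, p, q, dp, dq, qinv):
    m1 = pow(c, dp, p)           # half-size exponentiation mod p
    m2 = pow(c, dq, q)           # half-size exponentiation mod q
    h = (qinv * (m1 - m2)) % p   # recombine via the Chinese Remainder Theorem
    return m2 + h * q            # equals pow(c, d, n), but with much smaller exponentiations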
PPS: just for the record, 1024-bit RSA -- even with decent padding -- has been considered marginal for security since 2013 at latest and mostly prohibited, although there is not yet a reported break in the open community. However, that is offtopic for SO, and your exercise clearly isn't about security.

Reversing Bytes and cross compatible binary parsing in Nim

I've started taking a look at Nim for hobby game modding purposes.
Intro
Yet, I found it difficult to work with Nim compared to C when it comes to machine-specific low-level memory layout and would like to know if Nim actually has better support here.
I need to control byte order and be able to de/serialize arbitrary Plain-Old-Datatype objects to binary custom file formats. I didn't directly find a Nim library which allows flexible storage options like representing enum and pointers with Big-Endian 32-bit. Or maybe I just don't know how to use the feature.
std/marshal : just JSON, i.e. no efficient, flexible nor binary format but cross-compatible
nim-serialization : seems like being made for human readable formats
nesm : flexible cross-compatibility? (It has some options and has a good interface)
flatty : no flexible cross-compatibility, no byte order?
msgpack4nim : no flexible cross-compatibility, byte order?
bingo : ?
Flexible cross-compatibility means, it must be able to de/serialize fields independently of Nim's ABI but with customization options.
Maybe "Kaitai Struct" is more what I look for, a file parser with experimental Nim support.
TL;DR
As a workaround for a serialization library, I tried my hand at a recursive "member fields reverser" that makes use of std/endians, which is almost sufficient.
But I didn't succeed in implementing byte reversal of arbitrarily long objects in Nim. It's not practically relevant, but I still wonder whether Nim has a solution.
I found reverse() and reversed() from std/algorithm, but I need a byte array to reverse it and turn it back into the original object type. In C++ there would be reinterpret_cast, in C there is a void* cast, in D there is a void[] cast (D allows defining array slices from pointers), but I couldn't get it working with Nim.
I tried cast[ptr array[value.sizeof, byte]](unsafeAddr value)[] but I can't assign it to a new variable. Maybe there was a different problem.
How to "byte reverse" arbitrary long Plain-Old-Datatype objects?
How to serialize to binary files with byte order, member field size, pointer as file "offset - start offset"? Are there bitfield options in Nim?
It is indeed possible to use algorithm.reverse and the appropriate cast invocation to reverse bytes in-place:
import std/[algorithm,strutils,strformat]

type
  LittleEnd {.packed.} = object
    a: int8
    b: int16
    c: int32
  BigEnd {.packed.} = object
    c: int32
    b: int16
    a: int8

## just so we can see what's going on:
proc `$`(b: LittleEnd): string = &"(a:0x{b.a.toHex}, b:0x{b.b.toHex}, c:0x{b.c.toHex})"
proc `$`(l: BigEnd): string = &"(c:0x{l.c.toHex}, b:0x{l.b.toHex}, a:0x{l.a.toHex})"

var lit = LittleEnd(a: 0x12, b: 0x3456, c: 0x789a_bcde)
echo lit # (a:0x12, b:0x3456, c:0x789ABCDE)

var big: BigEnd
copyMem(big.addr, lit.addr, sizeof(lit))

# here's the reinterpret_cast you were looking for:
cast[var array[sizeof(big), byte]](big.addr).reverse
echo big # (c:0xDEBC9A78, b:0x5634, a:0x12)
For C-style bitfields there is also the {.bitsize.} pragma, but using it causes Nim to lose sizeof information, and of course bitfields won't be reversed within bytes:
import std/[algorithm,strutils,strformat]

type
  LittleNib {.packed.} = object
    a {.bitsize: 4}: int8
    b {.bitsize: 12}: int16
    c {.bitsize: 20}: int32
    d {.bitsize: 28}: int32
  BigNib {.packed.} = object
    d {.bitsize: 28}: int32
    c {.bitsize: 20}: int32
    b {.bitsize: 12}: int16
    a {.bitsize: 4}: int8

const nibsize = 8

proc `$`(b: LittleNib): string = &"(a:0x{b.a.toHex(1)}, b:0x{b.b.toHex(3)}, c:0x{b.c.toHex(5)}, d:0x{b.d.toHex(7)})"
proc `$`(l: BigNib): string = &"(d:0x{l.d.toHex(7)}, c:0x{l.c.toHex(5)}, b:0x{l.b.toHex(3)}, a:0x{l.a.toHex(1)})"

var lit = LittleNib(a: 0x1, b: 0x234, c: 0x56789, d: 0x0abcdef)
echo lit # (a:0x1, b:0x234, c:0x56789, d:0x0ABCDEF)

var big: BigNib
copyMem(big.addr, lit.addr, nibsize)
cast[var array[nibsize, byte]](big.addr).reverse
echo big # (d:0x5DEBC0A, c:0x8967F, b:0x123, a:0x4)
Copying the bytes over and then rearranging them with reverse is less than optimal anyway, so you might just want to copy the bytes over in a loop. Here's a proc that can swap the endianness of any object (including ones for which sizeof is not known at compile time):
template asBytes[T](x: var T): ptr UncheckedArray[byte] =
  cast[ptr UncheckedArray[byte]](x.addr)

proc swapEndian[T, U](src: var T, dst: var U) =
  assert sizeof(src) == sizeof(dst)
  let len = sizeof(src)
  for i in 0 ..< len:
    dst.asBytes[len - i - 1] = src.asBytes[i]
Bit fields are supported in Nim as a set of enums:
type
  MyFlag* {.size: sizeof(cint).} = enum
    A
    B
    C
    D
  MyFlags = set[MyFlag]

proc toNum(f: MyFlags): int = cast[cint](f)
proc toFlags(v: int): MyFlags = cast[MyFlags](v)

assert toNum({}) == 0
assert toNum({A}) == 1
assert toNum({D}) == 8
assert toNum({A, C}) == 5
assert toFlags(0) == {}
assert toFlags(7) == {A, B, C}
For arbitrary bit operations you have the bitops module, and for endianness conversions you have the endians module. But you already know about the endians module, so it's not clear what problem you are trying to solve with the so-called byte reversal. Usually you have an integer, so you first convert it to a specific endianness, then save that; when you read it back, you convert from that endianness and you have your int again. The endianness procs should be dealing with reversing bytes (or not), so why do you need to do it yourself? In any case, you can follow the source hyperlink in the documentation and see how the endian procs are implemented; this can give you an idea of how to cast values in case you need to do some of it yourself.
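For comparison, here is the same "convert on write, convert back on read" round trip using Python's struct module (a language-neutral sketch, not Nim code):

import struct

value = 0x789abcde
big_bytes = struct.pack(">I", value)     # b'\x78\x9a\xbc\xde': big-endian on disk
little_bytes = struct.pack("<I", value)  # b'\xde\xbc\x9a\x78': little-endian on disk
assert struct.unpack(">I", big_bytes)[0] == value  # read back with the same endianness
assert little_bytes == big_bytes[::-1]             # the byte reversal is exactly the endian swap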
Since you know C, maybe the last resort would be to write a few serialization functions in C and call them from Nim, or directly embed them using the emit pragma. However, this looks like the least cross-platform and least painless option.
I can't answer anything about generic data-structure serialization libraries. I stay away from them because they tend to require hand-holding, imposing certain limitations on your code, and, depending on the feature set, a simple refactoring (changing field order in your POD) may destroy the binary compatibility of the generated output without you noticing it until runtime. So you end up spending additional time writing unit tests to verify that the black box you brought in to save you some time behaves as you want (and keeps doing so across refactorings and version upgrades!).

What does the int value returned from compareTo function in Kotlin really mean?

In the documentation of compareTo function, I read:
Returns zero if this object is equal to the specified other object, a
negative number if it's less than other, or a positive number if it's
greater than other.
What does this less than or greater than mean in the context of strings? Is, for example, "Hello World" less than a single character "a"?
val epicString = "Hello World"
println(epicString.compareTo("a")) //-25
Why -25 and not -10 or -1 (for example)?
Other examples:
val epicString = "Hello World"
println(epicString.compareTo("HelloWorld")) //-55
Is "Hello World" less than "HelloWorld"? Why?
And why does it return -55 and not -1, -2, -3, etc.?
val epicString = "HelloWorld"
println(epicString.compareTo("Hello World")) //55
Is "HelloWorld" greater than "Hello World"? Why?
And why does it return 55 and not 1, 2, 3, etc.?
I believe you're asking about the implementation of the compareTo method for java.lang.String. Here is the source code for Java 11:
public int compareTo(String anotherString) {
    byte v1[] = value;
    byte v2[] = anotherString.value;
    if (coder() == anotherString.coder()) {
        return isLatin1() ? StringLatin1.compareTo(v1, v2)
                          : StringUTF16.compareTo(v1, v2);
    }
    return isLatin1() ? StringLatin1.compareToUTF16(v1, v2)
                      : StringUTF16.compareToLatin1(v1, v2);
}
So we have a delegation to either StringLatin1 or StringUTF16 here, so we should look further.
Fortunately, StringLatin1 and StringUTF16 have similar implementations when it comes to the compare functionality.
Here is the implementation for StringLatin1, for example:
public static int compareTo(byte[] value, byte[] other) {
    int len1 = value.length;
    int len2 = other.length;
    return compareTo(value, other, len1, len2);
}

public static int compareTo(byte[] value, byte[] other, int len1, int len2) {
    int lim = Math.min(len1, len2);
    for (int k = 0; k < lim; k++) {
        if (value[k] != other[k]) {
            return getChar(value, k) - getChar(other, k);
        }
    }
    return len1 - len2;
}
As you can see, it iterates over the characters of the shorter string and, as soon as the characters at the same index of the two strings differ, returns the difference between them. If during the iteration it doesn't find any difference (one string is a prefix of the other), it falls back to comparing the lengths of the two strings.
In your case, there is a difference in the first iteration already...
So it's the same as "H".compareTo("a") --> -25.
The code of "H" is 72
The code of "a" is 97
So, 72 - 97 = -25
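Written out as a quick sketch (Python here, purely to mirror the Java logic above):

def compare_strings(a, b):
    for x, y in zip(a, b):               # walk the characters of the shorter string
        if x != y:
            return ord(x) - ord(y)       # first differing character decides the result
    return len(a) - len(b)               # one string is a prefix of the other

print(compare_strings("Hello World", "a"))  # -25, i.e. ord('H') - ord('a')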
Short answer: The exact value doesn't have any meaning; only its sign does.
As the specification for compareTo() says, it returns a -ve number if the receiver is smaller than the other object, a +ve number if the receiver is larger, or 0 if the two are considered equal (for the purposes of this ordering).
The specification doesn't distinguish between different -ve numbers, nor between different +ve numbers — and so neither should you.  Some classes always return -1, 0, and 1, while others return different numbers, but that's just an implementation detail — and implementations vary.
Let's look at a very simple hypothetical example:
class Length(val metres: Int) : Comparable<Length> {
    override fun compareTo(other: Length)
        = metres - other.metres
}
This class has a single numerical property, so we can use that property to compare them.  One common way to do the comparison is simply to subtract the two lengths: that gives a number which is positive if the receiver is larger, negative if it's smaller, and zero if they're the same length — which is just what we need.
In this case, the value of compareTo() would happen to be the signed difference between the two lengths.
However, that method has a subtle bug: the subtraction could overflow, and give the wrong results if the difference is bigger than Int.MAX_VALUE.  (Obviously, to hit that you'd need to be working with astronomical distances, both positive and negative — but that's not implausible.  Rocket scientists write programs too!)
To fix it, you might change it to something like:
class Length(val metres: Int) : Comparable<Length> {
    override fun compareTo(other: Length) = when {
        metres > other.metres -> 1
        metres < other.metres -> -1
        else -> 0
    }
}
That fixes the bug; it works for all possible lengths.
But notice that the actual return value has changed in most cases: now it only ever returns -1, 0, or 1, and no longer gives an indication of the actual difference in lengths.
If this was your class, then it would be safe to make this change because it still matches the specification.  Anyone who just looked at the sign of the result would see no change (apart from the bug fix).  Anyone using the exact value would find that their programs were now broken — but that's their own fault, because they shouldn't have been relying on that, because it was undocumented behaviour.
Exactly the same applies to the String class and its implementation.  While it might be interesting to poke around inside it and look at how it's written, the code you write should never rely on that sort of detail.  (It could change in a future version.  Or someone could apply your code to another object which didn't behave the same way.  Or you might want to expand your project to be cross-platform, and discover the hard way that the JavaScript implementation didn't behave exactly the same as the Java one.)
In the long run, life is much simpler if you don't assume anything more than the specification promises!

TypeError when applying sum to a list of strings [duplicate]

Python has a built-in function sum, which is effectively equivalent to:
import operator
from functools import reduce  # reduce is a builtin in Python 2

def sum2(iterable, start=0):
    return start + reduce(operator.add, iterable)
for all types of parameters except strings. It works for numbers and lists, for example:
sum([1,2,3], 0) = sum2([1,2,3],0) = 6 #Note: 0 is the default value for start, but I include it for clarity
sum({888:1}, 0) = sum2({888:1},0) = 888
Why were strings specially left out?
sum( ['foo','bar'], '') # TypeError: sum() can't sum strings [use ''.join(seq) instead]
sum2(['foo','bar'], '') = 'foobar'
I seem to remember discussions in the Python list for the reason, so an explanation or a link to a thread explaining it would be fine.
Edit: I am aware that the standard way is to do "".join. My question is why the option of using sum for strings was banned, while no such ban exists for, say, lists.
Edit 2: Although I believe this is not needed given all the good answers I got, the question is: Why does sum work on an iterable containing numbers or an iterable containing lists but not an iterable containing strings?
Python tries to discourage you from "summing" strings. You're supposed to join them:
"".join(list_of_strings)
It's a lot faster, and uses much less memory.
A quick benchmark:
$ python -m timeit -s 'import operator; strings = ["a"]*10000' 'r = reduce(operator.add, strings)'
100 loops, best of 3: 8.46 msec per loop
$ python -m timeit -s 'import operator; strings = ["a"]*10000' 'r = "".join(strings)'
1000 loops, best of 3: 296 usec per loop
Edit (to answer OP's edit): As to why strings were apparently "singled out", I believe it's simply a matter of optimizing for a common case, as well as of enforcing best practice: you can join strings much faster with ''.join, so explicitly forbidding strings on sum will point this out to newbies.
BTW, this restriction has been in place "forever", i.e., since the sum was added as a built-in function (rev. 32347)
You can in fact use sum(..) to concatenate strings, if you use the appropriate starting object! Of course, if you go this far you have already understood enough to use "".join(..) anyway..
>>> class ZeroObject(object):
...     def __add__(self, other):
...         return other
...
>>> sum(["hi", "there"], ZeroObject())
'hithere'
Here's the source: http://svn.python.org/view/python/trunk/Python/bltinmodule.c?revision=81029&view=markup
In the builtin_sum function we have this bit of code:
/* reject string values for 'start' parameter */
if (PyObject_TypeCheck(result, &PyBaseString_Type)) {
    PyErr_SetString(PyExc_TypeError,
        "sum() can't sum strings [use ''.join(seq) instead]");
    Py_DECREF(iter);
    return NULL;
}
Py_INCREF(result);
}
So.. that's your answer.
It's explicitly checked in the code and rejected.
From the docs:
The preferred, fast way to concatenate a
sequence of strings is by calling
''.join(sequence).
By making sum refuse to operate on strings, Python has encouraged you to use the correct method.
Short answer: Efficiency.
Long answer: The sum function has to create an object for each partial sum.
Assume that the amount of time required to create an object is directly proportional to the size of its data. Let N denote the number of elements in the sequence to sum.
doubles are always the same size, which makes sum's running time O(1)×N = O(N).
int (formerly known as long) is arbitrary-length. Let M denote the absolute value of the largest sequence element. Then sum's worst-case running time is lg(M) + lg(2M) + lg(3M) + ... + lg(NM) = N×lg(M) + lg(N!) = O(N log N).
For str (where M = the length of the longest string), the worst-case running time is M + 2M + 3M + ... + NM = M×(1 + 2 + ... + N) = O(N²).
Thus, summing strings would be much slower than summing numbers.
str.join does not allocate any intermediate objects. It preallocates a buffer large enough to hold the joined strings, and copies the string data. It runs in O(N) time, much faster than sum.
The Reason Why
#dan04 has an excellent explanation for the costs of using sum on large lists of strings.
The missing piece as to why str is not allowed for sum is that many, many people were trying to use sum for strings, while not many use sum for lists and tuples and other O(n**2) data structures. The trap is that sum works just fine for short lists of strings, but then gets put into production where the lists can be huge, and the performance slows to a crawl. This was such a common trap that the decision was made to ignore duck typing in this instance and not allow strings to be used with sum.
Edit: Moved the parts about immutability to history.
Basically, it's a question of preallocation. When you use a statement such as
sum(["a", "b", "c", ..., ])
and expect it to work similar to a reduce statement, the code generated looks something like
v1 = "" + "a" # must allocate v1 and set its size to len("") + len("a")
v2 = v1 + "b" # must allocate v2 and set its size to len("a") + len("b")
...
res = v10000 + "$" # must allocate res and set its size to len(v9999) + len("$")
In each of these steps a new string is created, which for one gives some copying overhead as the strings get longer and longer. But that’s maybe not the main point here. What’s more important is that every new string on each line must be allocated at its specific size. (I don’t know whether it must allocate in every iteration of the reduce statement; there might be some obvious heuristics to use, and Python might allocate a bit more here and there for reuse – but at several points the new string will be large enough that this won’t help anymore and Python must allocate again, which is rather expensive.)
A dedicated method like join, however, has the job of figuring out the real size of the string before it starts, and would therefore in theory only allocate once, at the beginning, and then just fill that new string, which is much cheaper than the other solution.
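Conceptually, join does something like the following (a rough pure-Python sketch of the idea, not CPython's actual implementation):

def join_like(strings):
    total = sum(len(s) for s in strings)    # figure out the final size up front
    buf = bytearray(total)                  # a single allocation for the whole result
    pos = 0
    for s in strings:
        buf[pos:pos + len(s)] = s.encode()  # copy each piece into place
        pos += len(s)
    return buf.decode()

print(join_like(["foo", "bar"]))  # foobar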
I don't know why, but this works!
import operator
from functools import reduce  # needed on Python 3

def sum_of_strings(list_of_strings):
    return reduce(operator.add, list_of_strings)

The last challenge of Bytewiser: Array Buffers

The challenge is like that:
Array Buffers are the backend for Typed Arrays. Whereas Buffer in node
is both the raw bytes as well as the encoding/view, Array Buffers are
only raw bytes and you have to create a Typed Array on top of an Array
Buffer in order to access the data.
When you create a new Typed Array and don't give it an Array Buffer to
be a view on top of, it will create its own new Array Buffer instead.
For this challenge, take the integer from process.argv[2] and write it
as the first element in a single element Uint32Array. Then create a
Uint16Array from the Array Buffer of the Uint32Array and log out to
the console the JSON stringified version of the Uint16Array.
Bonus: try to explain the relevance of the integer from
process.argv[2], or explain why the Uint16Array has the particular
values that it does.
The solution given by the author is as follows:
var num = +process.argv[2]
var ui32 = new Uint32Array(1)
ui32[0] = num
var ui16 = new Uint16Array(ui32.buffer)
console.log(JSON.stringify(ui16))
I don't understand what the plus sign in the first line means, and I can't understand the logic of this block of code either.
Thank you a lot if you can solve my puzzle.
Typed arrays are often used in context of asm.js, a strongly typed subset of JavaScript that is highly optimisable. "strongly typed" and "subset of JavaScript" are contradictory requirements, since JavaScript does not natively distinguish integers and floats, and has no way to declare them. Thus, asm.js adopts the convention that the following no-ops on integers and floats respectively serve as declarations and casts:
n|0 is n for every integer
+n is n for every float
Thus,
var num = +process.argv[2]
would be the equivalent of the following line in C or Java:
float num = process.argv[2]
declaring a floating point variable num.
It is puzzling though, I would have expected the following, given the requirement for integers:
var num = (process.argv[2])|0
Even in normal JavaScript, though, they can have uses, because they will also convert string representations of integers or floats, respectively, into numbers (which is what happens here: process.argv[2] is a string).
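As for the bonus question (why the Uint16Array holds the values it does), the reinterpretation can be mimicked in Python with struct (a sketch assuming a little-endian machine, which is what typed arrays follow on typical x86 hardware):

import struct

num = 0x12345678                       # stand-in for the integer from process.argv[2]
raw = struct.pack("<I", num)           # the 4 bytes backing the single-element Uint32Array
low, high = struct.unpack("<HH", raw)  # viewing the same bytes as a Uint16Array of length 2
print(low, high)                       # 22136 4660, i.e. 0x5678 and 0x1234: low half comes first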
