node: converting buffers to decimal values

I have a buffer that is filled with data and begins with <Buffer 52 49 ...>.
Assuming this buffer is defined as buf, if I run buf.readInt16LE(0) the following is returned:
18770
Now, the binary representations of the hex values 52 and 49 are:
01010010 01001001
If I were to convert the first 15 bits to decimal, omitting the 16th (sign) bit for two's complement, I would get the following:
21065
Why didn't my results give me the value of 18770?

18770 is 01001001 01010010, which is your 2 bytes reversed; that byte swap is exactly what the readInt*LE (little-endian) functions do.
Use readInt16BE instead.
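For example, a minimal sketch using the two bytes from the question (the buffer contents are the assumption here):
const buf = Buffer.from([0x52, 0x49]);
// Little-endian: the first byte is the LEAST significant -> 0x4952
console.log(buf.readInt16LE(0)); // 18770
// Big-endian: the first byte is the MOST significant -> 0x5249
console.log(buf.readInt16BE(0)); // 21065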

You could do this: parseInt("0x" + buf.toString("hex")). Probably a lot slower, but it would do in a pinch. (Note that this gives the unsigned, big-endian interpretation of the whole buffer.)
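A quick check of that trick on the same two bytes (again just a sketch):
const buf = Buffer.from([0x52, 0x49]);
console.log(parseInt("0x" + buf.toString("hex"))); // 21065, same as readInt16BE here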

How do Base64 encode and decode figure out that the last few zeros are mere padding?

https://learn.microsoft.com/en-us/dotnet/api/system.convert.tobase64string?view=net-5.0
It says
If an integral number of 3-byte groups does not exist, the remaining
bytes are effectively padded with zeros to form a complete group. In
this example, the value of the last byte is hexadecimal FF. The first
6 bits are equal to decimal 63, which corresponds to the base-64 digit
"/" at the end of the output, and the next 2 bits are padded with
zeros to yield decimal 48, which corresponds to the base-64 digit,
"w". The last two 6-bit values are padding and correspond to the
valueless padding character, "=".
Now,
Imagine that the byte array I send is
0
So, only one byte, namely 0
That one byte will be padded out to 000, right?
So now, we will have something like 0=== as the encoding, because it takes 4 characters in Base64 to encode 3 bytes.
Now, we're going to decode that.
How do we know that the original byte isn't 00, or 000, but just 0?
I must be missing something here.
So now, we will have something like 0=== as the encoding
3 padding characters is illegal; that would mean only 6 bits plus padding, which isn't enough for even one byte.
And then 0 as a byte value is A in Base64, so it would be AA==.
So the first A holds the first 6 bits of the zero byte, the second A contributes the 2 remaining 0 bits of your byte, and then there are just 4 zero bits plus the padding left over, which is not enough for a second byte.
How do we know that the original byte isn't 00, or 000, but just 0?
AA== has only 12 bits (6 bits per character), so it can only encode 1 byte => 0
AAA= has 18 bits, enough for 2 bytes => 00
AAAA has 24 bits = 3 bytes => 000
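You can see these rules in action with Node's Buffer (a minimal sketch; any standards-compliant Base64 codec behaves the same way):
// One zero byte -> two characters plus two '=' padding characters
console.log(Buffer.from([0]).toString('base64'));       // 'AA=='
console.log(Buffer.from([0, 0]).toString('base64'));    // 'AAA='
console.log(Buffer.from([0, 0, 0]).toString('base64')); // 'AAAA'
// Decoding recovers the original byte counts unambiguously
console.log(Buffer.from('AA==', 'base64').length); // 1
console.log(Buffer.from('AAA=', 'base64').length); // 2
console.log(Buffer.from('AAAA', 'base64').length); // 3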

How to count binary sequence in binary number in Python?

I would like to count occurrences of the bit sequence '01' in 5760 binary bits.
First, I would like to combine several binary numbers then count # of '01' occurrences.
For example, I have a 64-bit integer. Say, 6291456. Then I convert it into binary. The most significant 4 bits are not used, so I'll get a 60-bit binary string 000...000011000000000000000000000.
Then I need to combine them (just put the bits together, since I only need to count '01'): the first 60 bits + the second 60 bits + ..., so 96 chunks of 60 bits are stitched together.
Finally, I want to count how many '01' appears.
s = binToString(5760 binary bits)
cnt = s.count('01');
num = 6291226
binary = format(num, 'b')
print(binary)
print(binary.count('01'))
If I use the number given by you, i.e. 6291456, its binary representation is 11000000000000000000000, which gives 0 occurrences of '01'.
If you always want your number to be 60 bits in length, you can use
binary = format(num, '060b')
It will add leading zeros to pad the string to the given length.
Say that nums is your list of 96 numbers, each of which fits in 64 bits. Since you want to throw away the 4 most significant bits, you are really taking each number modulo 2**60. Thus, to count the number of '01' occurrences in the resulting string, using @ShrikantShete's idea of the format function, you can do it all in one line:
''.join(format(n%2**60,'060b') for n in nums).count('01')

node.js: get byte length of the string "あいうえお"

I think I should be able to get the byte length of a string by:
Buffer.byteLength('äáöü') // returns 8 as I expect
Buffer.byteLength('あいうえお') // returns 15, expecting 10
However, when getting the byte length with a spreadsheet program (LibreOffice) using =LENB("あいうえお"), I get 10 (which I expect).
So, why do I get for 'あいうえお' a byte length of 15 rather than 10 using Buffer.byteLength?
PS.
Testing the "あいうえお" on these two sites, I get two different results
http://bytesizematters.com/ returns 10 bytes
https://mothereff.in/byte-counter returns 15 bytes
What is correct? What is going on?
node.js is correct. The UTF-8 representation of the string "あいうえお" is 15 bytes long:
E3 81 82 = U+3042 'あ'
E3 81 84 = U+3044 'い'
E3 81 86 = U+3046 'う'
E3 81 88 = U+3048 'え'
E3 81 8A = U+304A 'お'
The other string is 8 bytes long in UTF-8 because the Unicode characters it contains are below the U+0800 boundary and can each be represented with two bytes:
C3 A4 = U+E4 'ä'
C3 A1 = U+E1 'á'
C3 B6 = U+F6 'ö'
C3 BC = U+FC 'ü'
From what I can see in the documentation, LibreOffice's LENB() function is doing something different and confusing:
For strings which contain only ASCII characters, it returns the length of the string (which is also the number of bytes used to store it as ASCII).
For strings which contain non-ASCII characters, it returns the number of bytes required to store it in UTF-16, which uses two bytes for all characters under U+10000. (I'm not sure what it does with characters above that, or if it even supports them at all.)
It is not measuring the same thing as Buffer.byteLength, and should be ignored.
With regard to the other tools you're testing: Byte Size Matters is wrong. It's assuming that all Unicode characters up to U+FF can be represented using one byte, and all other characters can be represented using two bytes. This is not true of any character encoding; in fact, it's impossible. If you encode every character up to U+FF using one byte, you've used up all possible values for that byte, and you have no way to represent anything else.
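Both numbers are easy to reproduce from Node itself (a small sketch; the 'utf16le' figure matches what LENB appears to be counting):
const s = 'あいうえお';
console.log(Buffer.byteLength(s, 'utf8'));    // 15 (3 bytes per character)
console.log(Buffer.byteLength(s, 'utf16le')); // 10 (2 bytes per character)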

ASCII text to Hexadecimal in Excel

I want to do this but I don't know how; the only function that seems to be useful is "DEC.TO.HEX".
This is the problem: I have this text in one cell:
1234
And in the next cell I want the hexadecimal value of each character; the expected result would be:
31323334
Each character must be represented by two hexadecimal characters. I have no idea how to solve this in Excel without writing a macro.
Regards!
Edit: Hexadecimal conversion
Text value    ASCII value (dec)    Hexadecimal value
1             49                   31
2             50                   32
3             51                   33
4             52                   34
Please try:
=DEC2HEX(CODE(MID(A1,1,1)))&DEC2HEX(CODE(MID(A1,2,1)))&DEC2HEX(CODE(MID(A1,3,1)))&DEC2HEX(CODE(MID(A1,4,1)))
In your localized version you might need the dots in the function name (and perhaps semicolons rather than commas as argument separators).
DEC2HEX may be of assistance. Use as follows:
=DEC2HEX(A3)
First split 1234 into 1 2 3 4 using MID(), then use CODE() on each character to get its ASCII value, convert each value with DEC2HEX(), and then concatenate. Below is the formula; Y21 is the cell in which 1234 is written:
=CONCATENATE(DEC2HEX(CODE(MID(Y21,1,1))),DEC2HEX(CODE(MID(Y21,2,1))),DEC2HEX(CODE(MID(Y21,3,1))),DEC2HEX(CODE(MID(Y21,4,1))))
1234 >> 31323334
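If you want to sanity-check that expected output outside the spreadsheet, here is a minimal Node sketch of the same per-character conversion (the helper name toHex is just an illustration; it assumes plain ASCII input):
// Map each character to its two-digit hex character code
const toHex = (s) =>
  [...s].map((c) => c.charCodeAt(0).toString(16).padStart(2, '0')).join('');
console.log(toHex('1234')); // '31323334'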

Representing and adding negative numbers in Easy68k Assembly

I'm trying to write a simple program in Easy68k that stores two negative values, adds them together, and then outputs the result in the console. I am having trouble figuring out how to represent the negative numbers. We are asked that they be in hex format and output in decimal. Everything seems correct but the values themselves. I used 2's complement and then converted the two numbers to hex.
First decimal number = -102
Second decimal number = -87
Using 2s complement I converted the two numbers to hex (though I'm not sure if this is even correct):
-102 -> 1A
-87 -> 29
Here's my code so far:
addr    EQU     $7CE0
data1   EQU     $1A
data2   EQU     $29

        ORG     $1000
START:                          ; first instruction of program
* Put program code here
        MOVE    #data2,D1
        MOVEA.W #addr,A0
        ADD     #data1,D1
        MOVE    D1,(A0)
        MOVE.B  #3,D0
        TRAP    #15
* Variables and Strings
* Put variables and constants here
        END     START           ; last line of source
I even tried to just convert binary versions of the negative numbers straight to hex:
-102 -> 11100110 -> E6
-87 -> 11010111 -> D7
Which didn't work either. I also tried storing the binary version and adding them, but got the same result.
Here's the question:
Write a program in assembly to add the two numbers (-102 and -87). Inputs should be in hexadecimal format. Store the result in hexadecimal at address $7CE0. Print out the result in decimal. (Hint: use the trap function, task #3.) If an error happens, you should print out the error message as well.
I know I am misrepresenting the two negative numbers, but I just can't figure out how to do it right. I've looked everywhere and found nothing on how to store/add/output negative numbers in 68k. Any help is appreciated, this is for an assignment so I'm not expecting direct answers. Thanks!
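As a general aside, two's complement encodings like these are easy to sanity-check in any language by masking to the target bit width. Here is a small Node sketch (the helper name and the chosen widths are just illustrations, not part of the assignment):
// Two's complement encoding of a value, masked to a given bit width
const toTwosComplementHex = (value, bits) => {
  const mask = (1n << BigInt(bits)) - 1n; // e.g. 0xFF for 8 bits
  return (BigInt(value) & mask).toString(16).toUpperCase();
};
console.log(toTwosComplementHex(-102, 8));  // '9A'
console.log(toTwosComplementHex(-87, 8));   // 'A9'
console.log(toTwosComplementHex(-189, 16)); // 'FF43' (the sum no longer fits in a signed byte)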
