I have a document that says the numbers I'm looking at are in either "9511" or "8087" floating-point format (there are many errors in the documentation I have; it also says the numbers are "9511 float, eight bytes long, LSB -> MSB", but the 9511 didn't DO doubles...).
They are definitely not IEEE-754, but are four bytes long.
How can I make the byte string 0x02 0xB2 0x5D 0x07 into "2.787"?
Thanks,
Joe
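Those bytes do come out to roughly 2.787 if you assume the Am9511 single-precision layout: one mantissa sign bit, a 7-bit two's-complement exponent, and a 24-bit normalized mantissa representing a value in [0.5, 1), reading the four bytes in the order you listed them (exponent byte first). This is only a sketch under that assumption, not a confirmed answer for your hardware:

    def am9511_to_float(raw: bytes) -> float:
        """Decode 4 bytes as an Am9511-style float (assumed layout:
        sign bit, 7-bit two's-complement exponent, 24-bit mantissa)."""
        word = int.from_bytes(raw, "big")
        sign = -1.0 if word & 0x80000000 else 1.0
        exp = (word >> 24) & 0x7F
        if exp & 0x40:                      # 7-bit two's complement
            exp -= 0x80
        mantissa = (word & 0x00FFFFFF) / (1 << 24)  # normalized, in [0.5, 1)
        return sign * mantissa * 2.0 ** exp

    print(am9511_to_float(bytes([0x02, 0xB2, 0x5D, 0x07])))  # 2.7869...

Here 0x02 is the exponent (+2) and 0xB25D07 / 2**24 = 0.69674 is the mantissa, giving 0.69674 * 4 = 2.78695, which rounds to the expected "2.787".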
In base2 (binary), the characters to represent each digit are 01. 0 being the first character of the base2 alphabet, you can prefix any base2 number with as many 0 as you want without changing the meaning of the number.
All of these are equivalent:
11
011
0011
00011
In base10 (decimal), the characters to represent each digit are 0123456789. 0 being the first character of the base10 alphabet, you can prefix any base10 number with as many 0 as you want without changing the meaning of the number.
All of these are equivalent:
3
03
003
0003
In a hypothetical base64, let's assume the characters to represent each digit are ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/. A being the first character of the base64 alphabet, you should be able to prefix any base64 number with as many A as you want without changing the meaning of the number.
All of these would be equivalent:
5+fn
A5+fn
AA5+fn
AAA5+fn
I understand that base64 does not work this way because it was not intended to encode numbers but any binary data.
Is there a formal RFC documenting this hypothetical base64 encoding? Are there any implementations in some programming language?
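As far as I know there is no RFC for this: RFC 4648 base64 encodes octet streams, not integers, which is exactly why leading "A"s change the meaning there. But the hypothetical positional scheme is easy to implement yourself; here is a minimal Python sketch (the function names are made up for illustration):

    B64_DIGITS = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                  "abcdefghijklmnopqrstuvwxyz0123456789+/")

    def int_to_b64(n: int) -> str:
        """Encode a non-negative integer as a positional base-64 numeral."""
        if n == 0:
            return B64_DIGITS[0]
        digits = []
        while n:
            n, r = divmod(n, 64)
            digits.append(B64_DIGITS[r])
        return "".join(reversed(digits))

    def b64_to_int(s: str) -> int:
        """Decode a positional base-64 numeral; leading 'A's are harmless."""
        n = 0
        for ch in s:
            n = n * 64 + B64_DIGITS.index(ch)
        return n

    assert b64_to_int("5+fn") == b64_to_int("AAA5+fn")  # leading zeros ignored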
https://learn.microsoft.com/en-us/dotnet/api/system.convert.tobase64string?view=net-5.0
It says:
If an integral number of 3-byte groups does not exist, the remaining bytes are effectively padded with zeros to form a complete group. In this example, the value of the last byte is hexadecimal FF. The first 6 bits are equal to decimal 63, which corresponds to the base-64 digit "/" at the end of the output, and the next 2 bits are padded with zeros to yield decimal 48, which corresponds to the base-64 digit, "w". The last two 6-bit values are padding and correspond to the valueless padding character, "=".
Now,
Imagine that the byte array I send is
0
So, only one byte, namely 0
That one byte will be padded into 000, right?
So now, we will have something like 0=== as the encoding because it takes 4 characters in base 64 encoding to encode 3 bytes.
Now, we're going to decode that.
How do we know that the original byte isn't 00, or 000, but just 0?
I must be missing something here.
So now, we will have something like 0=== as the encoding
Three padding characters is illegal; that would mean only 6 bits plus padding.
And a byte value of 0 maps to A in Base64, so it would be AA==.
The first A holds the first 6 bits of the zero byte, the second A contributes the remaining 2 bits of your byte, and then there are just 4 zero bits plus the padding left over, which is not enough for a second byte.
How do we know that the original byte isn't 00, or 000, but just 0?
AA== has only 12 bits (6 bits per character), so it can only encode 1 byte => 0
AAA= has 18 bits, enough for 2 bytes => 00
AAAA has 24 bits = 3 bytes => 000
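You can check all of this against a real implementation; here is a quick Python sketch (the .NET Convert.ToBase64String behaves the same way):

    import base64

    # The documentation's FF example: 6 data bits -> '/', 2 data bits
    # plus 4 zero bits -> 'w', then two '=' padding characters.
    print(base64.b64encode(b"\xff"))          # b'/w=='

    # Zero bytes of different lengths encode distinctly,
    # so decoding is never ambiguous.
    print(base64.b64encode(b"\x00"))          # b'AA=='
    print(base64.b64encode(b"\x00\x00"))      # b'AAA='
    print(base64.b64encode(b"\x00\x00\x00"))  # b'AAAA'

    print(base64.b64decode(b"AA=="))          # b'\x00' -- exactly one byte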
I am looking at my lecture notes and I see this:
Why is the MSB for an int the 31st bit and not the 32nd bit? If an int has 4 bytes, there are 32 bits and the leftmost bit is the 32nd bit right?
The notes say
the leftmost bit represents the sign of the integer... If the MSB is 1, the integer is negative. Note the MSB is the sign no matter what size of the integer type... For example, for an int, it is bit 31. For a long, it is bit 63. For a byte, it is bit 7. To get the two's complement negative of a positive number, first invert all the bits (change the 0s to 1s and the 1s to 0s), then add 1.
Is that right?
Also I don't understand why inverting all bits and adding one gives me the negative number. Can someone explain this better?
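The numbering question resolves once you know that bit positions are counted from 0: the 32 bits of an int are bits 0 through 31, so the leftmost (32nd) bit is bit 31. As for why invert-and-add-one negates: for an n-bit value, x + ~x is all ones, i.e. 2^n - 1, so x + (~x + 1) = 2^n, which wraps around to 0 in n-bit arithmetic; therefore ~x + 1 behaves exactly like -x. A small Python sketch (masking to 32 bits, since Python integers are unbounded):

    def twos_complement_negate(x: int, bits: int = 32) -> int:
        mask = (1 << bits) - 1
        return (~x + 1) & mask          # invert all the bits, then add 1

    x = 5
    neg = twos_complement_negate(x)
    print(f"{x:032b}")      # 00000000000000000000000000000101
    print(f"{neg:032b}")    # 11111111111111111111111111111011 (bit 31 is 1 -> negative)
    print(neg - (1 << 32))  # -5, its value when read as signed 32-bit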
While converting the hexadecimal value "FFFFFFFF00" into an octal value using HEX2OCT in MS Excel, it should return the #NUM! error value as per the rules mentioned here:
If number is negative, HEX2OCT ignores places and returns a 10-character octal number.
If number is negative, it cannot be less than FFE0000000, and if number is positive, it cannot be greater than 1FFFFFFF.
If number is not a valid hexadecimal number, HEX2OCT returns the #NUM! error value.
If HEX2OCT requires more than places characters, it returns the #NUM! error value.
If places is not an integer, it is truncated.
If places is nonnumeric, HEX2OCT returns the #VALUE! error value.
If places is negative, HEX2OCT returns the #NUM! error value.
But it computes and returns "7777777400", apparently without considering the rules/remarks mentioned in the link.
For example:
As per the Excel rules, if the number is positive, it cannot be greater than 1FFFFFFF (hex) <-> 3777777777 (oct) <-> 536870911 (decimal).
But HEX2OCT for FFFFFFFF00 (hex) <-> 1099511627520 (decimal) returns 7777777400 (oct).
Here the hex value FFFFFFFF00 is greater than 1FFFFFFF, yet MS Excel does not return the error value; instead it returns the converted octal value.
Can anyone explain why?
FFFFFFFF00 is actually well within the range of HEX2OCT, because it is a negative number.
According to that documentation, the most negative number it can handle is FFE0000000, which converted to decimal is -536870912. Converting your "big" hex over to decimal yields -256.
The reason the value of FFFFFFFF00 looks so big is because it's a negative number. The first bit is set to 1 (when converted to binary), which signifies that the number is negative. Negatives are represented in binary using two's complement, which is found by flipping each bit and then adding 1 to the number.
Undoing the two's complement:
For your big number, the binary representation is:
1111111111111111111111111111111100000000
Subtracting 1:
1111111111111111111111111111111011111111
Flipping all the bits:
0000000000000000000000000000000100000000
Which is 256
So basically, if the hex looks big but the first bit is 1, then it's actually a small negative number and well within your range of allowable values.
Lastly, when you HEX2OCT these you don't get a negative sign, because we are still not in decimal notation. The leading bit of your octal value is still a 1 (when converted to binary), since it's still the same number, just represented in a different counting system.
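A short Python sketch of the same undo-the-two's-complement arithmetic (the 30-bit mask matches the 10-character octal output the documentation describes):

    val = 0xFFFFFFFF00                 # the 40-bit pattern from 10 hex digits

    # Interpret as 40-bit two's complement: top bit set means negative.
    signed = val - (1 << 40) if val & (1 << 39) else val
    print(signed)                      # -256

    # Re-encode as a 10-character (30-bit) two's-complement octal string,
    # which is what Excel prints for negative inputs.
    print(format(signed & ((1 << 30) - 1), "o"))   # 7777777400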
The clue lies earlier in the documentation page you quote:
The HEX2OCT function syntax has the following arguments:
Number Required. The hexadecimal number you want to convert. Number cannot contain more than 10 characters. The most significant bit of number is the sign bit. The remaining 39 bits are magnitude bits. Negative numbers are represented using two's-complement notation.
The hex value FFFFFFFF00 corresponds to the binary value
1111 1111 1111 1111 1111 1111 1111 1111 0000 0000
and as the documentation says, "the most significant bit is the sign bit ... two's complement notation". So this value represents a negative number. By the rules of two's complement, it actually represents -256. And this is fine, because it is not "less than FFE0000000": FFE0000000 is -536870912, and -256 is greater than that.
If you actually want to treat FFFFFFFF00 as an unsigned quantity, and get the octal representation of decimal 1099511627520, you'll need to use another method.
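Outside Excel the unsigned conversion is a one-liner; for instance, in Python (just as an illustration of "another method"):

    print(format(0xFFFFFFFF00, "o"))   # 17777777777400, octal for 1099511627520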
This is supposed to be a low-level course, and it is only the third day of class. However, we are asked to "Write the null-terminated string 'R5' in hexadecimal, binary, and octal notations. Assume that ASCII code is used"
I have no idea where to go to learn how to do this. Any suggestions? Thanks.
NULL-terminated ASCII strings are stored with one byte per character, plus one byte for the NULL. You would therefore be printing three bytes - 'R', '5', and 0.
Look up 'R' and '5' on an ASCII chart to see what the numeric values are for those characters in ASCII. Then, write out your three bytes three different ways - one each for hexadecimal, binary and octal.
Hope that helps.
It seems like this just requires you to look up the appropriate entries from the ASCII table, which in most cases lists hex and octal and the characters themselves.
ASCII is a standard way of defining how characters are represented, and most tables will list characters against their corresponding hex, decimal, and octal values. The first 128 characters are standard, and the next 128 are the extended characters (those weird characters that don't map to an English keyboard).
If you google "ASCII table" you'll be inundated with different links. The top one I saw at www.asciitable.com appears to have everything you need - except binary.
Most of the time you're not going to see binary listed, but translating a hex value into binary is purely mechanical; your Windows Calculator will happily do it for you.
To more directly translate your specific string you'll look up each character (including the NULL) separately and translate each individually.
Ultimately, to the computer everything is a number. To represent characters such as letters or symbols, we agree on an encoding, or a numbering of these characters. For example, we could invent a new encoding where 1 means 'A', 2 means 'B', and so on. ASCII is one commonly used text encoding which maps characters to numbers. In this case, we are concerned with a string of 3 characters: 'R', '5', and null (a null character marks the end of a string and is represented by the value 0). If you look in an ASCII table, you'll find that the numeric values are 82, 53, and 0.
String: R, 5, <null>
Decimal numbers: 82, 53, 0
Our normal number system is base-10, or decimal. This means that each digit represents a value ten times larger than the next (1, to 10, to 100, to 1000, etc.). Alternate bases include 8 (octal), 16 (hexadecimal), and 2 (binary). There is a straightforward way to convert between bases, although you can also easily find calculators that will do the conversion for you. You may want to review the relevant section of your textbook, or check out the Wikipedia articles. For the example of decimal 82, the hexadecimal value is 52 (this means 5*16 + 2 = 8*10 + 2). Oftentimes you will see a prefix of "0x"; this is commonly used to make it clear the following digits are in base 16 (otherwise, you might think "52" refers to the decimal value 52).
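A quick way to sanity-check such conversions, sketched in Python:

    # Decimal 82 rendered in the other bases, and parsed back again.
    print(format(82, "x"), format(82, "o"), format(82, "b"))  # 52 122 1010010
    print(int("52", 16), int("122", 8), int("1010010", 2))    # 82 82 82
    print(0x52 == 82)   # True -- the 0x prefix just marks base 16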
Interesting. So would it be correct to say that the null-terminated string "R5" is simply "52, 35, 30", or is there a more correct format to it? Thank you for your patience.
As I pointed out in another comment, the actual value 0 marks the end of a string, not the value 0x30, which represents a character '0' in the string. Note that the value of zero (0) is the same regardless of which base your numbers are in.
String: R, 5, <null>
Decimal : 82, 53, 0
Hexadecimal: 52, 35, 0
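Putting the whole exercise together, a tiny Python sketch that prints each byte of the null-terminated string in all three requested notations:

    for byte in b"R5\x00":   # 'R', '5', and the terminating null byte
        print(f"hex {byte:02X}   oct {byte:03o}   bin {byte:08b}   dec {byte}")

    # hex 52   oct 122   bin 01010010   dec 82
    # hex 35   oct 065   bin 00110101   dec 53
    # hex 00   oct 000   bin 00000000   dec 0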