Range of Values Represented by 1's Complement with 7 bits

Assume that there are 7 bits available to store a binary number. Specify the range of numbers that can be represented by 1's complement. I found that the range for 2's complement is -64 ≤ x ≤ 63. How do I do this for 1's complement?

In the 2's complement representation of signed binary numbers, the range that can be represented by an N-bit number is -2^(N-1) to 2^(N-1) - 1.
That is why you obtained the range -64 to 63 for a 7-bit binary number.
In the 1's complement representation, the range is -(2^(N-1) - 1) to 2^(N-1) - 1, one value smaller on the negative side because 1's complement has both a +0 and a -0.
And this results in a range of -63 to 63 for a 7-bit number in 1's complement representation.
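If it helps, here is a minimal Python sketch (not part of the original answer) that simply evaluates the two formulas for N = 7 so you can check the endpoints:

N = 7

# Two's complement: -2^(N-1) .. 2^(N-1) - 1  (asymmetric, single zero)
twos_min, twos_max = -(2 ** (N - 1)), 2 ** (N - 1) - 1

# One's complement: -(2^(N-1) - 1) .. 2^(N-1) - 1  (symmetric, +0 and -0 both exist)
ones_min, ones_max = -(2 ** (N - 1) - 1), 2 ** (N - 1) - 1

print(f"2's complement, {N} bits: {twos_min} to {twos_max}")   # -64 to 63
print(f"1's complement, {N} bits: {ones_min} to {ones_max}")   # -63 to 63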

Related

How to compress an integer to a smaller string of text?

Given a random integer, for example, 19357982357627685397198. How can I compress these numbers into a string of text that has fewer characters?
The string of text must only contain numbers or alphabetical characters, both uppercase and lowercase.
I've tried Base64 and Huffman coding, which claim to compress, but neither actually makes the string shorter when typed out on a keyboard.
I also tried to make an algorithm that divides the integer by the numbers 2, 3, ..., 10 and checks whether the last digit of the result is the number it was divided by (looking for 0 in the case of division by 10), so that when decoding you would just multiply the number by its last digit. But that does not work, because in some cases the number can't be divided by anything and stays the same, and then decoding multiplies it into a larger number than you started with.
I also tried to divide the integer into blocks of 2 numbers starting from left and giving a letter to them (a=1, b=2, o=15), and when it would get to z it would just roll back to a. This did not work because when it was decrypted, it would not know how many times the number rolled over z and therefore be a much smaller number than in the start.
I also tried some other common encryption strategies. For example Base32, Ascii85, Bifid Cipher, Baudot Code, and some others I can not remember.
It seems like an unsolvable problem. But since the input is an integer, each digit can only take 10 different values, while a letter can take 26 (52 counting both cases). That means 5 alphabetical characters can store more information than a 5-digit integer, so mathematically it is possible to store more data per character in a string of letters than in a string of digits; I just can't find anyone who has ever done it.
You switch from base 10 to, e.g., base 62 by repeatedly dividing by 62 and recording the remainder from each step, like this:
Converting 6846532136 to base62:
Operation          Result      Remainder
6846532136 / 62    110427937   42
110427937 / 62     1781095     47
1781095 / 62       28727       21
28727 / 62         463         21
463 / 62           7           29
7 / 62             0           7
Then you use the remainders as indexes into a base-62 alphabet of your choice, e.g.:
0         1         2         3         4         5         6
01234567890123456789012345678901234567890123456789012345678901
ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
Giving: H (7) d (29) V (21) V (21) v (47) q (42) = HdVVvq
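For illustration, here is a small Python sketch of that repeated-division idea; the alphabet string is the same one shown above, and the function names are just placeholders:

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"

def to_base62(n: int) -> str:
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n > 0:
        n, remainder = divmod(n, 62)       # divide, keep the remainder
        digits.append(ALPHABET[remainder])
    return "".join(reversed(digits))       # most significant digit first

def from_base62(s: str) -> int:
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n

print(to_base62(6846532136))   # HdVVvq
print(from_base62("HdVVvq"))   # 6846532136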
------
It's called base10 to base62 conversion; there are plenty of solutions and code examples on the internet.
Here is my favorite version: Base 62 conversion

What is the maximum value of the exponent of single precision floating point on MSVC?

I've been trying to figure out how biased exponents work.
8 bits are reserved for the exponent itself, so it's either -127 to 127 or 0 to 255. When I want to store a number (the exponent part) that doesn't fit into 8 bits, where does it obtain the additional bits to store that data?
If you are going to say it uses the bias as an offset, then please add some detail on where exactly the data is stored.
To a first approximation, the value of a float with exponent e and significand f is 1.f x 2^e. There are special cases to consider regarding subnormals, infinities, NaN, etc., but you can ignore those for starters. Essentially, the stored exponent really is the power of 2 in the IEEE 754 notation. So the comment you made about how 30020.3f fits in 8 bits has a simple answer: easily. You only need an exponent of 14 to represent that value, and 14 fits comfortably in the 8-bit biased exponent field.
In fact, here's the exact binary representation of 30020.3 as a single-precision IEEE-754 float:
        S ---E8--- ----------F23----------   (sign | 8-bit exponent | 23-bit fraction)
Binary: 0 10001101 11010101000100010011010
Hex: 46EA 889A
Precision: SP
Sign: Positive
Exponent: 14 (Stored: 141, Bias: 127)
Hex-float: +0x1.d51134p14
Value: +30020.3 (NORMAL)
As you can see, we just store 14 in the exponent (as the biased value 141). The sign is 0, and the fraction covers the rest, so that 1.f * 2^14 gives you the correct value.
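If you want to reproduce that breakdown yourself, one way (a sketch, assuming Python with the standard struct module) is to reinterpret the float's bytes as an integer and mask out the three fields:

import struct

# Pack 30020.3 as a 32-bit IEEE-754 float and pull the bit fields apart.
bits = struct.unpack(">I", struct.pack(">f", 30020.3))[0]

sign     = bits >> 31              # 1 sign bit
exponent = (bits >> 23) & 0xFF     # 8-bit stored (biased) exponent
fraction = bits & 0x7FFFFF         # 23-bit fraction

print(hex(bits))                        # 0x46ea889a
print(sign, exponent, exponent - 127)   # 0 141 14
print(format(fraction, "023b"))         # 11010101000100010011010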
What is the maximum value of the exponent of single precision floating point on MSVC?
Maximum value of the binary exponent: 254-BIAS --> 127
For a decimal perspective, <float.h> defines FLT_MAX_10_EXP as the "maximum integer such that 10 raised to that power is in the range of representable finite floating-point numbers", which is 38.
printf("%d %g\n", FLT_MAX_10_EXP, FLT_MAX);
// common result
38 3.40282e+38
8 bits are reserved for the exponent itself, so it's either -127 to 127 or 0 to 255.
Pretty close: for finite values, the raw (stored) exponent is more like [0...254], with the value 0 having a special meaning: it is treated as if the raw exponent were 1, and the implied leading digit becomes 0 instead of 1.
The exponent is then raw exponent - bias of 127 or [-126 to 127].
Recall this is an exponent for 2 raised to some power.
Using binary32, the maximum value of a biased exponent for finite values is 254.
Applying the bias of -127, the maximum value of the exponent is 254-127 or 127 in the form of:
biased_exponent > 0 (normal values):
(-1)^neg_sign * 1.(23-bit significand fraction) * 2^(biased_exponent - 127)
And, for completeness, subnormals and zero:
biased_exponent == 0
(-1)^neg_sign * 0.(23-bit significand fraction) * 2^(1 - 127)
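As a quick sanity check of these formulas, here is a small Python sketch (not part of the original answer) that plugs in the extreme encodings:

# Largest finite binary32: biased exponent 254, all 23 fraction bits set.
max_finite = (2 - 2**-23) * 2.0**(254 - 127)
print(max_finite)          # 3.4028234663852886e+38  (FLT_MAX)

# Smallest positive subnormal: biased exponent 0, fraction = 0...01.
min_subnormal = 2**-23 * 2.0**(1 - 127)
print(min_subnormal)       # 1.401298464324817e-45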
How 30k fits into 8 bits? 30020 is for exponent and .3 for fraction.
Mathematically, 30020.3f has a whole-number portion of 30020 and a fraction. It is not the case that 30020 goes into the exponent and .3 into the fraction used elsewhere; the whole value contributes to both the exponent and the significand. floats are typically encoded as a binary 1.xxxxx (base 2) * 2^exponent:
printf("%a\n", 30020.3f); // 0x1.d51134p+14
+1.11010101000100010011010 (base 2) * 2^14
Encoding with binary32:
sign: positive, encoded as 0
biased exponent: 14 + 127 = 141, or 10001101 in binary
fraction of the significand: 11010101000100010011010
0 10001101 11010101000100010011010 (binary: sign | exponent | fraction)
01000110 11101010 10001000 10011010 (binary, regrouped into bytes)
46 EA 88 9A (hexadecimal)

How do the numbers and letters differ in hexadecimal colours?

I had a look at how hexadecimal colour codes work, and for the most part it seems pretty simple. But there's one thing I don't understand. If I have the code #37136F, how do the 6 and the F work together? Does this mean that the two values are added together, so the blue value is 21? Or are they concatenated, like 615? If they are added together (which feels like the most logical way), then the maximum value you can get is 30, which gives me a range of 0-30... I feel like this isn't right, please enlighten me.
First you split the hex code into pairs of digits (so #37136F becomes 37, 13, and 6F), and those are the values for red, green, and blue respectively. Let's focus on the blue component, 6F.
6F is a two digit hexadecimal number (base 16). Just as 25 in base 10 is actually 2*10 + 5, 6F in hexadecimal is actually 6*16 + 15 = 111 in base 10. In general, if X and Y are hexadecimal digits (0 through F), then XY in base 16 is X*16 + Y.
Note that the minimum and maximum two-digit hex numbers are 00 and FF respectively, which equal 0*16 + 0 = 0 and 15*16 + 15 = 255 respectively. This is why RGB values range from 0 to 255 inclusive, when written in base 10.
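A short Python sketch of that arithmetic, using the #37136F example from the question (the variable names are just for illustration):

code = "#37136F"

# Split into the red, green, and blue pairs and interpret each as base 16.
r, g, b = (int(code[i:i + 2], 16) for i in (1, 3, 5))
print(r, g, b)        # 55 19 111

# The blue pair by hand, as in the answer: 6*16 + 15 = 111
print(6 * 16 + 15)    # 111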

A more natural color representation: Is it possible to convert RGBA color to a single float value?

Is it possible to represent an RGBA color as a single value that resembles the retinal stimulation? The idea is something like:
0.0 value for black (no stimulation)
1.0 for white (full stimulation)
The RGBA colors in between should be represented by values that capture the amount of stimulation they cause to the eye like:
a very light yellow should have a very high value
a very dark brown should have a low value
Any ideas on this? Is converting to grayscale the only solution?
Thanks in advance!
Assign specific bits of a single number to each part of RGBA to represent your number.
If each part is 8 bits, the first 8 bits can be assigned to R, the second 8 bits to G, the third 8 bits to B, and the final 8 bits to A.
Let's say your RGBA values are 15, 4, 2, 1, and each one is given 4 bits.
In binary, R is 1111, G is 0100, B is 0010, A is 0001.
In a simple concatenation, your final number would be 1111010000100001 in binary, which is 62497. To get G back out of this, divide 62497 by 256, keep the integer part, then take it modulo 16. The divisor 256 is 16 to the second power because G sits two 4-bit positions from the right (R would need the third power, B the first power), and 16 is 2 to the fourth power because each channel uses 4 bits.
62497 / 256 = 244 (integer part), and 244 % 16 = 4.
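Here is a rough Python sketch of that packing and unpacking with the same 4-bit example; the shift-based form and the divide/modulo form are equivalent:

r, g, b, a = 15, 4, 2, 1

# Pack four 4-bit channels into one integer: R is the highest nibble.
packed = (r << 12) | (g << 8) | (b << 4) | a
print(packed)                # 62497  (0b1111010000100001)

# Recover G: shift past A and B, then mask off the low 4 bits.
print((packed >> 8) & 0xF)   # 4
print((packed // 256) % 16)  # 4  -- the same thing written with division/modulo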

Normalized values, when summed are more than 1

I have two files:
File 1:
TOPIC:topic_0 1294
aa 234
bb 123
TOPIC:topic_1 2348
aa 833
cc 239
bb 233
File 2:
0.1 0.2 0.3 0.4
This is just the format of my files. Basically, when the second column (omitting the first "TOPIC" line) is summed for each topic, it adds up to 1, as the values are normalized. Similarly, in file 2 the values are normalized, so they also sum to 1.
I perform multiplication of the values from file 1 and 2. The resulting output file looks like:
aa 231
bb 379
cc 773
The second column of the output file, when summed, should give 1. But a few files have values a little over 1, like 1.1 or 1.00038. How can I get precisely 1 in the output file? Is it some rounding that I should do, or something else?
PS: The formats are just examples, the values and words are different. This is just for understanding purposes. Please help me sort this.
Python stores floating-point numbers in base 2 (binary).
https://docs.python.org/2/tutorial/floatingpoint.html
This means that some decimals could be terminating in base-10, but are repeating in base-2, hence the floating-point error when you add them up.
This gets into some math, but imagine in base-10 trying to express the value 2/6. When you eliminate the common factors from the numerator and denominator it's 1/3.
It's 0.333333333... repeating forever. I'll explain why in a moment, but for now, understand that if you only store the first 16 digits of the decimal, for example, then when you multiply the number by 3 you won't get 1, you'll get 0.9999999999999999, which is a little off.
This rounding error occurs whenever there's a repeating decimal.
Here's why your numbers don't repeat in base-10, but they do repeat in base-2.
Decimals are in base-10, which prime factors out to 2^1 * 5^1. Therefore for any ratio to terminate in base-10, its denominator must prime factor to a combination of 2's and 5's, and nothing else.
Now let's get back to Python. Every decimal is stored as binary. This means that in order for a ratio's "decimal" to terminate, the denominator must prime factor to only 2's and nothing else.
Your numbers repeat in base-2.
1/10 has (2*5) in the denominator.
2/10 reduces to 1/5 which still has five in the denominator.
3/10... well you get the idea.
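To make this concrete, here is a small Python sketch (an illustration, not your actual data) showing the effect and two common ways to deal with it: rounding for display, or doing the arithmetic with exact fractions:

from fractions import Fraction

values = [0.1, 0.2, 0.3, 0.4]

# Summing the binary approximations does not give exactly 1.
total = sum(values)
print(total)                  # 1.0000000000000002
print(total == 1)             # False

# Option 1: round to a sensible number of decimals for output.
print(round(total, 10) == 1)  # True

# Option 2: do the arithmetic with exact rationals and convert at the end.
exact = sum(Fraction(str(v)) for v in values)
print(exact == 1)             # True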
