A more natural color representation: Is it possible to convert RGBA color to a single float value? - grayscale

Is it possible to represent an RGBA color as a single value that resembles the retinal stimulation? The idea is something like:
0.0 value for black (no stimulation)
1.0 for white (full stimulation)
The RGBA colors in between should be represented by values that capture the amount of stimulation they cause in the eye, for example:
a very light yellow should have a very high value
a very dark brown should have a low value
Any ideas on this? Is converting to grayscale the only solution?
Thanks in advance!

Assign specific bits of a single number to each part of RGBA to represent your number.
If each part is 8 bits, the first 8 bits can be assigned to R, the second 8 bits to G, the third 8 bits to B, and the final 8 bits to A.
Let's say your RGBA values are 15, 4, 2, 1, and each one is given 4 bits.
In binary, R is 1111, G is 0100, B is 0010, A is 0001.
In a simple concatenation, your final number would be 1111010000100001 in binary, which is 62497. To get G back out of this, divide 62497 by 256, truncate to an integer, then take it modulo 16. The divisor 256 is 16 to the second power because G sits in the 2nd position from the right, counting from zero (R would need the third power, B the first). The 16 is 2 to the fourth power because each channel uses 4 bits.
62497 / 256 = 244, 244 % 16 = 4.
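For reference, here is a minimal Python sketch of that concatenation, using 4 bits per channel as in the example above (the function names and the 4-bit width are illustrative assumptions, not part of the original answer):

# Pack four 4-bit channels into one integer, R in the most significant position.
BITS = 4

def pack_rgba(r, g, b, a, bits=BITS):
    return (r << 3 * bits) | (g << 2 * bits) | (b << bits) | a

def unpack_rgba(value, bits=BITS):
    mask = (1 << bits) - 1
    return ((value >> 3 * bits) & mask,
            (value >> 2 * bits) & mask,
            (value >> bits) & mask,
            value & mask)

packed = pack_rgba(15, 4, 2, 1)
print(packed)                # 62497
print((packed // 256) % 16)  # 4, the G channel recovered as described above
print(unpack_rgba(packed))   # (15, 4, 2, 1)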

Related

Why does EXIF geodata need so much precision?

According to spec, EXIF stores latitude and longitude with 192 bits of precision each. But a simple calculation shows that you only need 32 bits to divide the circumference of Earth into segments of 9 mm:
r = 6378 km = 6.378 × 10^6 m
C = 2πr = 4.007 × 10^7 m
stepSize = C / 2^32 = 0.009 m = 9 mm
That's assuming you store the data in steps of equal size, so as an unsigned int. I can understand that would make handling code harder to write, so what the hell: let's use a double. At this precision, we can divide the Earth's circumference into steps of 2 picometers. A Helium atom has a diameter of 62 picometers. So at 64 bits, we have enough precision to divide the Earth's surface at subatomic scales.
Why on Earth do we need 192 bits per angle?
The format stores latitude and longitude each as 6 32-bit integer values, which adds up to 192 bits. The six integers store degrees, minutes, and seconds, each as a rational number with a numerator and a denominator.
Why this format? Presumably it's designed for very simple processors that can't handle floating point, and might not even be able to do division. The format is more than 25 years old (though I'm not sure when GPS data was added), and cameras weren't as smart back then. Cameras needed to be able to store lots of data (pictures are big), but they didn't need to do a lot of mathematical operations on it. So they wasted some bits to make manipulation easier.
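To make the layout concrete, here is a hedged Python sketch that turns the three (numerator, denominator) pairs for degrees, minutes, and seconds into a decimal angle. The tuple representation and the sample values are assumptions for illustration, not the exact bytes of the EXIF tag:

# Latitude as three rationals: degrees, minutes, seconds.
# Sample values are made up: 52 deg 30' 12.34"
lat_rationals = [(52, 1), (30, 1), (1234, 100)]

def dms_to_decimal(rationals):
    degrees, minutes, seconds = (n / d for n, d in rationals)
    return degrees + minutes / 60 + seconds / 3600

print(dms_to_decimal(lat_rationals))  # 52.503427...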

How many operations can we do with an 8-digit (plus decimal) calculator?

I have this model: a simple 8-digit display calculator (no memory buttons, no square root, etc.) has the following buttons (the decimal point does not count as a 'digit'):
10 buttons for integers 0 to 9,
1 button for dot (decimal point, so it can hold decimals, like from 0.0000001 to 9999999.9),
4 buttons for operations (+, -, /, *), and
1 button for equality (=). (the on/off button doesn't count for this question)
The question is two-fold: how many numbers can be represented on the calculator's screen? (a math-explained solution would be appreciated)
AND
if we perform all 4 basic operations between every pair of the numbers calculated above, how many operations would that be?
Thank you for your insight and help!
For part one of this answer, we want to know how many numbers can be represented on the calculator's screen.
Start with a simplified example and work up from there. Let's start with a 1-digit display. With this calculator, you can display the numbers from 0 to 9, and you can display each of those numbers with a decimal point either before the digit (making it a decimal), or after the digit (making it an integer). How many unique numbers can be made?
.0, .1, .2, .3, .4, .5, .6, .7, .8, .9, 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.
That's 20 possibilities with 1 repeated number (.0 and 0. are both zero), making 19 unique numbers. Let's find this result again, but using a mathematical approach that we can scale up to a larger number of digits.
Start by finding all the numbers 0 <= n < 1 that can be made. For the numbers to fit in that range, the decimal point must be before the first digit. We're still dealing with 1 digit, so there are 10^1 different ways to fill the calculator with numbers that are greater than or equal to 0, but less than 1.
Next, find all the numbers 1 <= n < 10 that can be made. To do this, you move the decimal point one place to the right, so now it's after the first digit, and you also can't allow the first digit to be zero (or the number will be less than 1). That leaves you 9 unique numbers.
[0<=n<1] + [1<=n<10] = 10 + 9 = 19
Now we have a scalable system. Let's do it with 2 digits so you see how it works with multiple digits before we go to 8 digits. With 2 digits, we can represent 0-99, and the decimal point can go in three different places, which means we have three ranges to check: 0<=n<1, 1<=n<10, 10<=n<100. The first set can have zero in its first place, since zero is in the set, but every other set can't have zero in the first place or else the number would be in the set below it. So the first set has 10^2 possibilities, but each of the other sets has 9 * 10^1 possibilities. We can generalize this by saying that for any number d of digits that our calculator can hold, the set 0<=n<1 will have 10^d possibilities, and each other set will have 9 * 10^(d-1) possibilities.
So for 2 digits:
[0<=n<1] + [1<=n<10] + [10<=n<100] = 100 + 90 + 90 = 280
Now you can see a pattern emerging, which can be generalized to give us the total number of unique numbers that can be displayed on a calculator with d digits:
Unique displayable numbers = 10^d + d * 9 * 10^(d-1)
You can confirm this math with a simple Python script that manually finds all the unique numbers that can be displayed, prints the quantity it found, then also prints the result of the formula above. It gets bogged down when it gets to higher numbers of digits, but digits 1 through 5 should be enough to show the formula works.
# Brute-force count of unique displayable values for 1 to 5 digits,
# compared against the closed-form formula.
for digits in range(1, 6):
    print('---%d Digits----' % digits)
    numbers = set()
    for d in range(digits + 1):
        # d is the number of digits after the decimal point.
        numbers.update(i / 10**d for i in range(10**digits))
    print(len(numbers))
    print(10**digits + digits * 9 * 10**(digits - 1))
And the result:
---1 Digits----
19
19
---2 Digits----
280
280
---3 Digits----
3700
3700
---4 Digits----
46000
46000
---5 Digits----
550000
550000
Which means that a calculator with an 8 digit display can show 820,000,000 unique numbers.
For part two of this answer, we want to know: if we perform all 4 basic operations between every pair of the numbers calculated above, how many operations would that be?
How many pairs of numbers can we make from 820 million unique numbers? 820 million squared. That's 672,400,000,000,000,000 = 672.4 quadrillion. Four different operations can be used on these number pairs, so multiply that by 4 and you get 2,689,600,000,000,000,000 = 2.6896 quintillion different possible operations on a simple 8-digit calculator.
EDIT:
If the intention of the original question was for a decimal point to not be allowed to come before the first digit (a decimal 0<=n<1 would have to start with 0.) then the formula for displayable numbers changes to 10^d + (d - 1) * 9 * 10^(d-1), which means the number of unique displayable numbers is 730 million and the total number of operations is 2.1316 quintillion.
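If you want to sanity-check those closing figures, a few lines of Python reproduce both conventions (with and without a leading decimal point allowed):

d = 8
with_leading_point = 10**d + d * 9 * 10**(d - 1)            # 820,000,000
without_leading_point = 10**d + (d - 1) * 9 * 10**(d - 1)   # 730,000,000
print(with_leading_point, 4 * with_leading_point**2)         # ... 2.6896 quintillion
print(without_leading_point, 4 * without_leading_point**2)   # ... 2.1316 quintillion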

How do the numbers and letters differ in hexadecimal colours?

I had a look at how hexadecimal colour codes work, and for the most part it seems pretty simple. But one thing I don't understand: if I have the code #37136F, how do the 6 and the F work together? Does this mean that the two values are added together? So the blue value is 21? Or do they go together like: 615? If they are added together (which I feel is the most logical way), then the maximum value you can get is 30, which gives me a range of 0-30... I feel like this isn't right, please enlighten me.
First you split the hex code into pairs of digits (so #37136F becomes 37, 13, and 6F), and those are the values for red, green, and blue respectively. Let's focus on the blue component, 6F.
6F is a two digit hexadecimal number (base 16). Just as 25 in base 10 is actually 2*10 + 5, 6F in hexadecimal is actually 6*16 + 15 = 111 in base 10. In general, if X and Y are hexadecimal digits (0 through F), then XY in base 16 is X*16 + Y.
Note that the minimum and maximum two-digit hex numbers are 00 and FF respectively, which equal 0*16 + 0 = 0 and 15*16 + 15 = 255 respectively. This is why RGB values range from 0 to 255 inclusive, when written in base 10.
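A quick Python check of that arithmetic (the variable names are only for illustration):

code = '#37136F'
r, g, b = (int(code[i:i + 2], 16) for i in (1, 3, 5))
print(r, g, b)        # 55 19 111
print(6 * 16 + 15)    # 111, the manual calculation for 6F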

Normalized values, when summed are more than 1

I have two files:
File 1:
TOPIC:topic_0 1294
aa 234
bb 123
TOPIC:topic_1 2348
aa 833
cc 239
bb 233
File 2:
0.1 0.2 0.3 0.4
This is just the format of my files. Basically, when the second column (omitting the first "TOPIC" line) is summed for each topic, it adds up to 1, as the values are normalized. Similarly, in file 2, the values are normalized and hence they also add up to 1.
I perform multiplication of the values from file 1 and 2. The resulting output file looks like:
aa 231
bb 379
cc 773
The second column of the output file, when summed, should give 1. But a few files have values a little over 1, like 1.1 or 1.00038. How can I precisely get 1 for the output file? Is it some rounding off that I should do, or something?
PS: The formats are just examples, the values and words are different. This is just for understanding purposes. Please help me sort this.
Python stores floating point decimals in base-2.
https://docs.python.org/2/tutorial/floatingpoint.html
This means that some decimals could be terminating in base-10, but are repeating in base-2, hence the floating-point error when you add them up.
This gets into some math, but imagine in base-10 trying to express the value 2/6. When you eliminate the common factors from the numerator and denominator it's 1/3.
It's 0.333333333..... repeating forever. I'll explain why in a moment, but for now, understand that if you only store the first 16 digits of the decimal, for example, then when you multiply the number by 3, you won't get 1, you'll get 0.9999999999999999, which is a little off.
This rounding error occurs whenever there's a repeating decimal.
Here's why your numbers don't repeat in base-10, but they do repeat in base-2.
Decimals are in base-10, which prime factors out to 2^1 * 5^1. Therefore for any ratio to terminate in base-10, its denominator must prime factor to a combination of 2's and 5's, and nothing else.
Now let's get back to Python. Every decimal is stored as binary. This means that in order for a ratio's "decimal" to terminate, the denominator must prime factor to only 2's and nothing else.
Your numbers repeat in base-2.
1/10 has (2*5) in the denominator.
2/10 reduces to 1/5 which still has five in the denominator.
3/10... well you get the idea.
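To see the effect directly, and one common workaround, here is a small sketch. It mirrors the example in the linked tutorial; whether you round the final result, use fractions.Fraction, or use decimal.Decimal depends on your data, so treat this as an illustration rather than the definitive fix:

from fractions import Fraction

# Plain float addition: ten copies of 0.1 do not sum to exactly 1.0.
total = 0.0
for _ in range(10):
    total += 0.1
print(total)              # 0.9999999999999999

# Workaround 1: round the final result to a sensible number of places.
print(round(total, 10))   # 1.0

# Workaround 2: do the arithmetic with exact rationals, convert at the end.
exact = sum(Fraction('0.1') for _ in range(10))
print(exact == 1)         # True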

Verilog - Floating points multiplication

We have a problem with Verilog.
We have to use multiplication with two floating-point (binary) numbers, but it doesn't work 100% perfectly.
We have a reg m[31:0]. The digits before the binary point are m[31:16] and the digits after the point are m[15:0], so we have something like:
m[31:16] = 1000000000000000;
m[15:0] = 1000000000000000;
m[31:0] = 1000000000000000(.)1000000000000000;
The problem is: we want to multiply numbers with decimal places, but we don't know how.
For example: m = 2.5 in binary. The result of m*m is 6.25.
The question does not fully explain what is already understood about fixed-point numbers, so I will cover a little background which might not be relevant to the OP.
The decimal weighting of unsigned binary (base 2) numbers, using 4 bits for the example, follows this rule:
2^3 2^2 2^1 2^0 (Base 2)
8 4 2 1
Just for reference, the powers stay the same and only the base changes. For 4 hex digits it would be:
16^3 16^2 16^1 16^0
4096 256 16 1
Back to base 2: for a twos-complement signed number, the MSB (Most Significant Bit) weight becomes negative.
-2^3 2^2 2^1 2^0 (Base 2, Twos complement)
-8 4 2 1
When we insert a binary point and fractional bits, the pattern continues. With 4 integer bits and 4 fractional bits:
Base 2: Twos complement, 4 integer bits, 4 fractional bits
-2^3 2^2 2^1 2^0 . 2^-1 2^-2 2^-3 2^-4
-8 4 2 1 . 0.5 0.25 0.125 0.0625
Unfortunately Verilog does not have a fixed-point format, so the user has to keep track of the binary point and work with scaled numbers. Decimal points (.) cannot be used in Verilog numbers stored as reg or logic, as they are essentially integer formats. However, Verilog does ignore _ when placed in number declarations, so it can be used to mark the binary point in numbers. Its use is only symbolic and has no meaning to the language.
In the above format, 2.5 would be represented by 8'b0010_1000. The question has 16 fractional bits, so you would place 16 bits after the _ to keep the binary point in the correct place.
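To make the weighting concrete in a language with native division, here is a small Python sketch (not Verilog) that interprets a raw bit string as an unsigned fixed-point value; the helper name is an illustration only:

def fixed_to_float(bits, frac_bits=4):
    # Treat the bits as a scaled integer: real value = raw / 2**frac_bits.
    return int(bits.replace('_', ''), 2) / 2 ** frac_bits

print(fixed_to_float('0010_1000'))             # 2.5  in [4 Int, 4 Frac]
print(fixed_to_float('00000110_01000000', 8))  # 6.25 in [8 Int, 8 Frac]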
Fixed-point Multiplication bit widths
If we have two numbers A and B the width of the result A*B will be:
Integer bits = A.integer_bits + B.integer_bits.
Fractional bits = A.fractional_bits + B.fractional_bits.
Therefore [4 Int, 4 Frac] * [4 Int, 4 Frac] => [8 Int, 8 Frac]
reg [7:0] a = 8'b0010_1000; // 2.5 in [4 Int, 4 Frac]
reg [7:0] b = 8'b0010_1000; // 2.5 in [4 Int, 4 Frac]
reg [15:0] sum;

always @* begin
  sum = a * b;
  $displayb(sum); // Binary
  $display(sum);  // Decimal (raw scaled integer)
end
// sum == 16'b00000110_01000000; // 6.25 in [8 Int, 8 Frac]
Example on EDA Playground.
From this you should be able to change the widths to suit any type of fixed-point number. Casting back to a 16 Int, 16 Frac number can be done by part-selecting the correct bits. Be careful if you need to saturate instead of overflowing.
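If it helps to see the scaling rule outside of Verilog, this short Python sketch mimics the multiply-then-part-select step with the [16 Int, 16 Frac] format from the question (plain truncation, no saturation):

FRAC_BITS = 16

a_raw = int(2.5 * 2 ** FRAC_BITS)   # 2.5 as a scaled integer (163840)
b_raw = int(2.5 * 2 ** FRAC_BITS)

product_raw = a_raw * b_raw                 # [32 Int, 32 Frac] result
print(product_raw / 2 ** (2 * FRAC_BITS))   # 6.25

# "Casting" back to [16 Int, 16 Frac]: drop the low 16 fractional bits,
# the Python equivalent of part-selecting the middle bits in Verilog.
cast_back = product_raw >> FRAC_BITS
print(cast_back / 2 ** FRAC_BITS)           # 6.25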
There is a related Q&A that has 22 fractional bits.

Resources