All,
This might be a very silly question, but in a programming language X where the int range is -127 to +128, does that range include the actual values -127 and +128?
It refers to an 8-bit signed integer, where the high bit is used to determine whether it's negative or not:
01111111 = 127
00000001 = 1
00000000 = 0
11111111 = -1
11111110 = -2
10000001 = -127
10000000 = -128 or +128 or even -0, depending on the representation the language uses
See: http://en.wikipedia.org/wiki/Two%27s_complement
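As a concrete illustration, here is a minimal C# sketch (assuming a .NET console program; C# always uses two's complement) that reinterprets the bit patterns above as a signed byte:

    using System;

    class TwosComplement
    {
        static void Main()
        {
            // The bit patterns from the table above, reinterpreted as
            // two's-complement signed bytes.
            byte[] patterns = { 0b0111_1111, 0b0000_0001, 0b0000_0000,
                                0b1111_1111, 0b1111_1110, 0b1000_0001, 0b1000_0000 };
            foreach (byte b in patterns)
            {
                sbyte s = unchecked((sbyte)b); // same bits, signed interpretation
                Console.WriteLine($"{Convert.ToString(b, 2).PadLeft(8, '0')} = {s}");
            }
        }
    }

In C# the last pattern prints -128, since .NET commits to the two's-complement interpretation.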
It will typically mean -127 to 128 inclusive, so both -127 and 128 are themselves valid values.
Normally, the range of values indicates how much memory the type uses, and types are normally designed to occupy full bytes.
In your case (-127 to 128), this type will occupy 1 byte, which can have 256 different values.
So, you have 127 negative values, 128 positive values, and the 0 value.
127 + 128 + 1 = 256.
So, the values -127 and 128 are included in the range.
So in the deflate algorithm each block starts off with a 3-bit header:
Each block of compressed data begins with 3 header bits
containing the following data:
first bit BFINAL
next 2 bits BTYPE
Assuming BTYPE is 10 (compressed with dynamic Huffman codes) then the next 14 bits are as follows:
5 Bits: HLIT, # of Literal/Length codes - 257 (257 - 286)
5 Bits: HDIST, # of Distance codes - 1 (1 - 32)
4 Bits: HCLEN, # of Code Length codes - 4 (4 - 19)
The next (HCLEN + 4) x 4 bits represent the code lengths.
What happens after that is less clear to me.
RFC1951 § 3.2.7. Compression with dynamic Huffman codes (BTYPE=10) says this:
HLIT + 257 code lengths for the literal/length alphabet,
encoded using the code length Huffman code
HDIST + 1 code lengths for the distance alphabet,
encoded using the code length Huffman code
Doing infgen -ddis on 1589c11100000cc166a3cc61ff2dca237709880c45e52c2b08eb043dedb78db8851e (produced by doing gzdeflate('A_DEAD_DAD_CEDED_A_BAD_BABE_A_BEADED_ABACA_BED')) gives this:
zeros 65 ! 0110110 110
lens 3 ! 0
lens 3 ! 0
lens 4 ! 101
lens 3 ! 0
lens 3 ! 0
zeros 25 ! 0001110 110
lens 3 ! 0
zeros 138 ! 1111111 110
zeros 22 ! 0001011 110
lens 4 ! 101
lens 3 ! 0
lens 3 ! 0
zeros 3 ! 000 1111
lens 2 ! 100
lens 0 ! 1110
lens 0 ! 1110
lens 2 ! 100
lens 2 ! 100
lens 3 ! 0
lens 3 ! 0
I note that 65 is the decimal ASCII code of "A", which presumably explains "zeros 65".
"lens" occurs 16 times, which is equal to HCLEN + 4.
In RFC1951 § 3.2.2. Use of Huffman coding in the "deflate" format there's this:
2) Find the numerical value of the smallest code for each
code length:
    code = 0;
    bl_count[0] = 0;
    for (bits = 1; bits <= MAX_BITS; bits++) {
        code = (code + bl_count[bits-1]) << 1;
        next_code[bits] = code;
    }
So maybe that's what "zeros 65" is, but then what about "zeros 25", "zeros 138" and "zeros 22"? 25, 138 and 22, as ASCII codes, do not appear in the compressed text.
Any ideas?
First, a correction: the next (HCLEN + 4) x 3 bits (not x 4) represent the code lengths.
The number of lens entries has nothing to do with HCLEN. The sequence of zeros and lens entries represents the code lengths of all 269 (259 + 10) literal/length and distance codes. If you add up the zeros counts and the number of lens entries, you get 269.
A zero-length symbol means it does not appear in the compressed data. There are no literal bytes in the data in the range 0..64, so it starts with 65 zeros. The first symbol coded is then an 'A', with length 3.
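To check that arithmetic, here is a small C# sketch with the counts transcribed from the infgen output above:

    using System;
    using System.Linq;

    class CodeLengthCount
    {
        static void Main()
        {
            // Runs of zero-length codes ("zeros N") from the infgen output.
            int[] zeros = { 65, 25, 138, 22, 3 };
            // Number of individual "lens" entries in the same output.
            int lens = 16;
            // Total should be (HLIT + 257) + (HDIST + 1) = 259 + 10 = 269.
            Console.WriteLine(zeros.Sum() + lens); // prints 269
        }
    }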
I was working on a problem for converting base64 to hex and the problem prompt said as an example:
3q2+7w== should produce deadbeef
But if I do that manually, using the base64 digit set ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/ I get:
3 110111
q 101010
2 110110
+ 111110
7 111011
w 110000
As a binary string:
110111 101010 110110 111110 111011 110000
grouped into fours:
1101 1110 1010 1101 1011 1110 1110 1111 0000
to hex
d e a d b e e f 0
So shouldn't it be deadbeef0 and not deadbeef? Or am I missing something here?
Base64 is meant to encode bytes (8 bit).
Your base64 string has 6 characters plus 2 padding chars (=), so you could theoretically encode 6 × 6 bits = 36 bits, which would equal nine 4-bit hex digits. But in fact you must think in bytes, and then you only have 4 bytes (32 bits) of significant information. The remaining 4 bits (the extra '0') must be ignored.
You can calculate the number of insignificant bits as:
y : insignificant bits
x : number of base64 characters (without padding)
y = (x*6) mod 8
So in your case:
y = (6*6) mod 8 = 4
So you have 4 insignificant bits at the end that you need to ignore.
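As a sanity check, a short C# sketch using the framework's base64 decoder shows both the decoded bytes and the insignificant-bit formula:

    using System;

    class Base64Check
    {
        static void Main()
        {
            // The decoder returns only the 4 significant bytes; the
            // trailing 4 bits are discarded.
            byte[] bytes = Convert.FromBase64String("3q2+7w==");
            Console.WriteLine(BitConverter.ToString(bytes)); // DE-AD-BE-EF

            int x = 6;            // base64 characters without padding
            int y = (x * 6) % 8;  // insignificant bits
            Console.WriteLine(y); // 4
        }
    }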
The following division shows an incorrect result. It should have 15 digits after the decimal point according to the Windows calculator, but in C# it shows 14 digits after the point.
I fixed it: use (Decimal)200 / (Decimal)30 to get more scale digits.
If numbers must add up correctly or balance, use decimal. This includes any financial storage or calculations, scores, or other numbers that people might do by hand.
If the exact value of numbers is not important, use double for speed. This includes graphics, physics or other physical sciences computations where there is already a "number of significant digits".
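A quick C# sketch of the difference (the exact formatting of the double result can vary slightly between runtimes):

    using System;

    class DecimalVsDouble
    {
        static void Main()
        {
            // double carries roughly 15-16 significant digits.
            Console.WriteLine(200.0 / 30.0); // 6.666666666666667 (approximately)
            // decimal carries 28-29 significant digits.
            Console.WriteLine(200m / 30m);   // 6.6666666666666666666666666667
        }
    }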
Please refer to the table below for further reference.
C# type   .NET Framework (System) type   Signed?   Bytes occupied   Possible values
sbyte     System.SByte                   Yes       1                -128 to 127
short     System.Int16                   Yes       2                -32768 to 32767
int       System.Int32                   Yes       4                -2147483648 to 2147483647
long      System.Int64                   Yes       8                -9223372036854775808 to 9223372036854775807
byte      System.Byte                    No        1                0 to 255
ushort    System.UInt16                  No        2                0 to 65535
uint      System.UInt32                  No        4                0 to 4294967295
ulong     System.UInt64                  No        8                0 to 18446744073709551615
float     System.Single                  Yes       4                approximately ±1.5 × 10^-45 to ±3.4 × 10^38, 7 significant figures
double    System.Double                  Yes       8                approximately ±5.0 × 10^-324 to ±1.7 × 10^308, 15-16 significant figures
decimal   System.Decimal                 Yes       16               approximately ±1.0 × 10^-28 to ±7.9 × 10^28, 28-29 significant figures
char      System.Char                    N/A       2                any Unicode character (16-bit)
bool      System.Boolean                 N/A       1 / 2            true or false
I took apart an ICC file from http://www.brucelindbloom.com/index.html?MunsellCalcHelp.html with a lookup table, using ICC Profile Inspector. The ICC file is supposed to convert Lab to Uniform LAB.
The files it outputs include headers, a matrix (3x3 identity matrix), Input and Output curves, and a lookup table. What do these files mean? And how are they related to the color transform?
The header contents are:
InputChan: 3
OutputChan: 3
Input_Entries: 258
Output_Entries: 256
Clut_Size: 51
The InputCurves file has entries like:
0 0 0 0
1 256 255 255
2 512 510 510
...
256 65535 65280 65280
257 65535 65535 65535
The OutputCurves file has entries like:
0 0 0 0
1 256 257 257
2 512 514 514
...
254 65024 65278 65278
255 65280 65535 65535
And the lookup table entries look like:
0 0 0 25968
1 0 0 26351
2 0 0 26789
...
132649 65535 65535 49667
132650 65535 65535 50603
I'd like to understand how an input LAB color maps to an output value. I'm especially confused because a and b values can be negative.
I believe I understand how this works after skimming through http://www.color.org/specification/ICC1v43_2010-12.pdf
This explanation may have some off-by-one errors, but it should be generally correct.
The input values are LAB; the L, a, and b values are mapped using Tables 39 and 40 in section 10.8 (lut16Type). Then the 258 values in the input curves are uniformly spaced across those L, a, and b ranges. The output values are 16-bit, so 0-65535.
The same goes for the CLUT. There are 51^3 entries (51 was chosen by the ICC file author). Each dimension (L, a, b) is split uniformly across this space as well, so index 0 corresponds to 0 and index 50 corresponds to 65535 from the previous section (note that 0-50 is 51 entries). The first 51 rows are for L = 0 and a = 0, with b incrementing. Every 51 rows, the a index increases by 1, and every 51*51 rows, the L index increases by 1.
So given L, a, and b values adjusted by the input curves, figure out their indices (0-50) and look them up in the CLUT (l_ind*51*51 + a_ind*51 + b_ind), which will give you three more values.
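In code, the index computation might look like this hypothetical C# sketch (the names GridSize and ClutRow are mine; a real CMM would also interpolate between grid points rather than take the nearest index):

    using System;

    class ClutIndex
    {
        const int GridSize = 51; // grid points per dimension, from Clut_Size

        // Row number of grid point (lIdx, aIdx, bIdx), each index in 0..50;
        // b varies fastest, then a, then L, matching the table layout above.
        static int ClutRow(int lIdx, int aIdx, int bIdx) =>
            lIdx * GridSize * GridSize + aIdx * GridSize + bIdx;

        static void Main()
        {
            Console.WriteLine(ClutRow(0, 0, 1)); // 1
            Console.WriteLine(ClutRow(0, 1, 0)); // 51
            Console.WriteLine(ClutRow(1, 0, 0)); // 2601
        }
    }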
Now the output curves come in. They are another set of curves that work just like the input curves. The outputs can then be mapped back using the same values from Tables 39 and 40.
170! approaches the limit of a floating point double: 171! will overflow.
However 170! is over 300 digits long.
There is, therefore, no way that 170! can be represented precisely in floating point.
Yet Excel returns the correct answer for 170! / 169!.
Why is this? I'd expect some error to creep in, but it returns an integral value. Does Excel somehow know how to optimise this calculation?
If you find the closest doubles to 170! and 169!, they are
    double oneseventy = 5818033100654137.0 * 256;
    double onesixtynine = 8761273375102700.0;
times the same power of two. The closest double to the quotient of these is exactly 170.0.
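That is easy to verify in C# (the common power of two cancels in the quotient, so only the significands matter):

    using System;

    class FactorialQuotient
    {
        static void Main()
        {
            double oneseventy = 5818033100654137.0 * 256; // significand of 170!
            double onesixtynine = 8761273375102700.0;     // significand of 169!
            // The true quotient is about 170 + 8e-15, well within half an
            // ulp of 170, so the division rounds to exactly 170.0.
            Console.WriteLine(oneseventy / onesixtynine == 170.0); // True
        }
    }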
Also, Excel may compute 170! by multiplying 169! by 170.
William Kahan has a paper called "How Futile are Mindless Assessments of Roundoff in Floating-Point Computation?" where he discusses some of the insanity that goes on in Excel. It may be that Excel is not computing 170 exactly, but rather it's hiding an ulp of reality from you.
tmyklebu's answer is already perfect, but I wanted to know more.
What if the implementation of n! were something as trivial as return double(n)*(n-1)!...
Here is a Smalltalk snippet; you can translate it into many other languages, that's not the point:
    "Count the n in 2..170 for which the naive floating-point
     quotient n! / (n-1)! is not exactly n."
    (2 to: 170) count: [:n |
        | num den |
        den := (2 to: n - 1) inject: 1.0 into: [:p :e | p * e]. "(n-1)! as a float"
        num := n * den.                                         "n! = n * (n-1)!"
        num / den ~= n].
And the answer is 12.
So you have not been particularly lucky: thanks to the good properties of the round-to-nearest-even rounding mode, out of these 169 numbers, only 12 don't behave as expected.
Which ones? Replace count: with select: and you get:
#(24 47 59 61 81 96 101 104 105 114 122 146)
If I had Excel handy, I would ask it to evaluate 146!/145!.
Curiously (only apparently curiously), a less naive solution that computes the exact factorial with large-integer arithmetic, then converts to the nearest float, does not perform better!
    (2 to: 170) reject: [:n |
        n factorial asFloat / (n - 1) factorial asFloat = n]
leads to:
#(24 31 34 40 41 45 46 57 61 70 75 78 79 86 88 92 93 111 115 116 117 119 122 124 141 144 147 164)