Division in C# does not give the exact value after the decimal point - c#-4.0

The following division shows an incorrect result. There should be 15 digits after the decimal point according to the Windows calculator, but C# shows only 14 digits after the point.

I fixed it. Casting both operands to decimal, (Decimal)200 / (Decimal)30, gives more scale digits.
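Here is a minimal sketch of the difference, assuming the original expression was 200 / 30 as in the fix above:

using System;

class DivisionPrecision
{
    static void Main()
    {
        // Assumed operands from the question: 200 / 30.
        int a = 200, b = 30;

        // Integer division truncates the fractional part entirely.
        Console.WriteLine(a / b);           // 6

        // double keeps roughly 15-16 significant digits.
        Console.WriteLine((double)a / b);   // ~6.66666666666667

        // decimal keeps 28-29 significant digits ("more scale digits").
        Console.WriteLine((decimal)a / b);  // 6.6666666666666666666666666667
    }
}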

If numbers must add up correctly or balance, use decimal. This includes any financial storage or calculations, scores, or other numbers that people might do by hand.
If the exact value of numbers is not important, use double for speed. This includes graphics, physics or other physical sciences computations where there is already a "number of significant digits".
Please refer to the table below for further details.
C# Type   .NET Framework (System) type   Signed?   Bytes Occupied   Possible Values
sbyte     System.SByte                   Yes       1                -128 to 127
short     System.Int16                   Yes       2                -32768 to 32767
int       System.Int32                   Yes       4                -2147483648 to 2147483647
long      System.Int64                   Yes       8                -9223372036854775808 to 9223372036854775807
byte      System.Byte                    No        1                0 to 255
ushort    System.UInt16                  No        2                0 to 65535
uint      System.UInt32                  No        4                0 to 4294967295
ulong     System.UInt64                  No        8                0 to 18446744073709551615
float     System.Single                  Yes       4                Approximately ±1.5 x 10^-45 to ±3.4 x 10^38 with 7 significant figures
double    System.Double                  Yes       8                Approximately ±5.0 x 10^-324 to ±1.7 x 10^308 with 15 or 16 significant figures
decimal   System.Decimal                 Yes       16               Approximately ±1.0 x 10^-28 to ±7.9 x 10^28 with 28 or 29 significant figures
char      System.Char                    N/A       2                Any Unicode character (16 bit)
bool      System.Boolean                 N/A       1 / 2            true or false
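If you want to double-check the sizes and ranges above at runtime, here is a small sketch of my own (not part of the original answer):

using System;

class TypeRanges
{
    static void Main()
    {
        // sizeof on the built-in types matches the "Bytes Occupied" column.
        Console.WriteLine("int:     {0} bytes, {1} to {2}", sizeof(int), int.MinValue, int.MaxValue);
        Console.WriteLine("ulong:   {0} bytes, {1} to {2}", sizeof(ulong), ulong.MinValue, ulong.MaxValue);
        Console.WriteLine("double:  {0} bytes, {1} to {2}", sizeof(double), double.MinValue, double.MaxValue);
        Console.WriteLine("decimal: {0} bytes, {1} to {2}", sizeof(decimal), decimal.MinValue, decimal.MaxValue);
    }
}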

Related

decoding deflate blocks after (HCLEN + 4) x 3 bits

So in the deflate algorithm each block starts off with a 3 bit header:
Each block of compressed data begins with 3 header bits
containing the following data:
first bit BFINAL
next 2 bits BTYPE
Assuming BTYPE is 10 (compressed with dynamic Huffman codes) then the next 14 bits are as follows:
5 Bits: HLIT, # of Literal/Length codes - 257 (257 - 286)
5 Bits: HDIST, # of Distance codes - 1 (1 - 32)
4 Bits: HCLEN, # of Code Length codes - 4 (4 - 19)
The next (HCLEN + 4) x 3 bits represent the code lengths.
What happens after that is less clear to me.
RFC1951 § 3.2.7. Compression with dynamic Huffman codes (BTYPE=10) says this:
HLIT + 257 code lengths for the literal/length alphabet,
encoded using the code length Huffman code
HDIST + 1 code lengths for the distance alphabet,
encoded using the code length Huffman code
Doing infgen -ddis on 1589c11100000cc166a3cc61ff2dca237709880c45e52c2b08eb043dedb78db8851e (produced by doing gzdeflate('A_DEAD_DAD_CEDED_A_BAD_BABE_A_BEADED_ABACA_BED')) gives this:
zeros 65 ! 0110110 110
lens 3 ! 0
lens 3 ! 0
lens 4 ! 101
lens 3 ! 0
lens 3 ! 0
zeros 25 ! 0001110 110
lens 3 ! 0
zeros 138 ! 1111111 110
zeros 22 ! 0001011 110
lens 4 ! 101
lens 3 ! 0
lens 3 ! 0
zeros 3 ! 000 1111
lens 2 ! 100
lens 0 ! 1110
lens 0 ! 1110
lens 2 ! 100
lens 2 ! 100
lens 3 ! 0
lens 3 ! 0
I note that 65 is the decimal ASCII code for "A", which presumably explains "zeros 65".
"lens" occurs 16 times, which is equal to HCLEN + 4.
In RFC1951 § 3.2.2. Use of Huffman coding in the "deflate" format there's this:
2) Find the numerical value of the smallest code for each
code length:
code = 0;
bl_count[0] = 0;
for (bits = 1; bits <= MAX_BITS; bits++) {
    code = (code + bl_count[bits-1]) << 1;
    next_code[bits] = code;
}
So maybe that's what "zeros 65" is but then what about "zeros 25", "zeros 138" and "zeros 22"? 25, 138 and 22, in ASCII, do not appear in the compressed text.
Any ideas?
The next (HCLEN + 4) x 3 bits represent the code lengths.
The number of lens's has nothing to do with HCLEN. The sequence of zeros and lens represents the 269 (259 + 10) code lengths for the literal/length and distance codes. If you add up the zeros and the number of lens, you get 269.
A zero-length symbol means it does not appear in the compressed data. There are no literal bytes in the data in the range 0..64, so it starts with 65 zeros. The first symbol coded is then an 'A', with length 3.
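For reference, here is a small C# sketch of my own (not the answerer's code) that reads the dynamic-block header bits LSB-first. Run on the hex string from the question it reports BFINAL=1, BTYPE=2, HLIT+257=259, HDIST+1=10 and HCLEN+4=16, which matches the 259 + 10 code lengths mentioned in the answer:

using System;

class DeflateDynamicHeader
{
    // Order in which the (HCLEN + 4) 3-bit code-length code lengths are stored (RFC 1951 3.2.7).
    static readonly int[] ClOrder =
        { 16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15 };

    static byte[] ParseHex(string hex)
    {
        var bytes = new byte[hex.Length / 2];
        for (int i = 0; i < bytes.Length; i++)
            bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
        return bytes;
    }

    static int bitPos;                              // absolute bit position in the stream

    static int ReadBits(byte[] data, int count)
    {
        // Deflate packs bits LSB-first within each byte.
        int value = 0;
        for (int i = 0; i < count; i++, bitPos++)
            value |= ((data[bitPos >> 3] >> (bitPos & 7)) & 1) << i;
        return value;
    }

    static void Main()
    {
        byte[] block = ParseHex("1589c11100000cc166a3cc61ff2dca237709880c45e52c2b08eb043dedb78db8851e");

        int bfinal = ReadBits(block, 1);            // BFINAL = 1 (last block)
        int btype  = ReadBits(block, 2);            // BTYPE  = 2 (dynamic Huffman)
        int hlit   = ReadBits(block, 5) + 257;      // 259 literal/length code lengths
        int hdist  = ReadBits(block, 5) + 1;        // 10 distance code lengths
        int hclen  = ReadBits(block, 4) + 4;        // 16 code-length code lengths

        // The next hclen * 3 bits are the code lengths of the code-length alphabet,
        // stored in the permuted order above. The Huffman code built from them is
        // then used to decode the hlit + hdist code lengths, where symbols 16/17/18
        // are the repeat/zero-run codes that produce the "zeros" and "lens" lines
        // in the infgen output.
        var clLens = new int[19];
        for (int i = 0; i < hclen; i++)
            clLens[ClOrder[i]] = ReadBits(block, 3);

        Console.WriteLine("BFINAL={0} BTYPE={1} HLIT+257={2} HDIST+1={3} HCLEN+4={4}",
                          bfinal, btype, hlit, hdist, hclen);
    }
}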

Converting Base64 to Hex confusion

I was working on a problem for converting base64 to hex and the problem prompt said as an example:
3q2+7w== should produce deadbeef
But if I do that manually, using the base64 digit set ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/ I get:
3 110111
q 101010
2 110110
+ 111110
7 111011
w 110000
As a binary string:
110111 101010 110110 111110 111011 110000
grouped into fours:
1101 1110 1010 1101 1011 1110 1110 1111 0000
to hex
d e a d b e e f 0
So shouldn't it be deadbeef0 and not deadbeef? Or am I missing something here?
Base64 is meant to encode bytes (8 bit).
Your base64 string has 6 characters plus 2 padding chars (=), so you could theoretically encode 6 * 6 bits = 36 bits, which would equal nine 4-bit hex digits. But in fact you must think in bytes, and then you only have 4 bytes (32 bits) of significant information. The remaining 4 bits (the extra '0') must be ignored.
You can calculate the number of insignificant bits as:
y : insignificant bits
x : number of base64 characters (without padding)
y = (x*6) mod 8
So in your case:
y = (6*6) mod 8 = 4
So you have 4 insignificant bits at the end that you need to ignore.
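In C#, a small illustrative sketch (not part of the original answer) shows that the framework's base64 decoder already discards those insignificant bits:

using System;

class Base64ToHex
{
    static void Main()
    {
        // Convert.FromBase64String drops the 4 insignificant trailing bits,
        // so "3q2+7w==" decodes to exactly four bytes.
        byte[] bytes = Convert.FromBase64String("3q2+7w==");
        Console.WriteLine(BitConverter.ToString(bytes).Replace("-", "").ToLowerInvariant()); // deadbeef

        // The formula from the answer: insignificant bits = (chars * 6) mod 8.
        int chars = 6;                       // base64 characters without padding
        Console.WriteLine((chars * 6) % 8);  // 4
    }
}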

Compare two fractions (no floating-point)

Using integers ONLY (no floating-point), is there a way to determine, given two fractions, which result is greater?
For example, say we have these two fractions:
1000/51 = 19(.60) && 1000/52 = 19(.23)
If we were to use floating-point numbers, obviously the first fraction is greater; however, both fractions equal 19 if we use integers only. How might one find out which is greater without using floating-point math?
I have tried to get the remainder using the % operator, but it does not seem to work in all cases.
1/2 can be thought of as one apple shared between two people, so each person gets 0.5 apple.
So 1000/51 can be read as 1000 apples shared among 51 people.
1000/51 > 1000/52, because the number of apples is the same, but we are giving them to more people.
That is a simple example. A more complex example: which of 1213/109 and 1245/115 is greater?
1245 is greater than 1213 and 115 is greater than 109. Take the differences: 1245 - 1213 = 32 and 115 - 109 = 6, and compare 1213/109 against 32/6 in place of 1245/115.
32/6 ≈ 5, which is less than 6, and 6 * 109 = 654 < 1213, so 1213/109 > 6 > 32/6, and therefore 1213/109 > 1245/115.
(This works because 1245/115 = (1213 + 32)/(109 + 6) is the mediant of 1213/109 and 32/6, and a mediant always lies between the two fractions it is formed from.)
1213/109 vs 1245/115
1213/109 vs 32/6      # differences: 1245 - 1213 = 32, 115 - 109 = 6
                      # compare the difference fraction to 1213/109
1213 > 109 * 6
# therefore
1213/109 > 1245/115
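A different integer-only approach from the difference trick above is plain cross-multiplication: for positive denominators, a/b > c/d exactly when a*d > c*b. A small C# sketch of that idea (my own, not the answerer's):

using System;

class FractionCompare
{
    // Returns a positive number if a/b > c/d, negative if a/b < c/d, 0 if equal.
    // Assumes b and d are positive; long arithmetic avoids overflow for int-sized inputs.
    static int Compare(long a, long b, long c, long d)
    {
        long left = a * d, right = c * b;
        if (left > right) return 1;
        if (left < right) return -1;
        return 0;
    }

    static void Main()
    {
        Console.WriteLine(Compare(1000, 51, 1000, 52));   // 1  -> 1000/51 is greater
        Console.WriteLine(Compare(1213, 109, 1245, 115)); // 1  -> 1213/109 is greater
    }
}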

Send DNS data: MSB or LSB first?

I'm implementing a DNS (multicast DNS, in fact) in C#.
I just want to know whether I must encode my uint/int/ushort/... with the LSB first or the MSB first, and more generally, how I can find this out. Is one of them the standard?
I ask because I didn't find anything about this in the IETF description. I found a lot of things (each header field's length and position), but not this.
Thank you!
The answer is in RFC 1035 (2.3.2. Data Transmission Order)
Here is the link: http://www.ietf.org/rfc/rfc1035.txt
And here is the interesting part:
2.3.2. Data Transmission Order
The order of transmission of the header and data described in this
document is resolved to the octet level. Whenever a diagram shows a
group of octets, the order of transmission of those octets is the
normal order in which they are read in English. For example, in the
following diagram, the octets are transmitted in the order they are
numbered.
0 1
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 1 | 2 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 3 | 4 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 5 | 6 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Whenever an octet represents a numeric quantity, the left most bit in
the diagram is the high order or most significant bit. That is, the
bit labeled 0 is the most significant bit. For example, the following
diagram represents the value 170 (decimal).
0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
|1 0 1 0 1 0 1 0|
+-+-+-+-+-+-+-+-+
Similarly, whenever a multi-octet field represents a numeric quantity
the left most bit of the whole field is the most significant bit.
When a multi-octet quantity is transmitted the most significant octet
is transmitted first.
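For the C# implementation this means writing multi-octet fields in network byte order (big-endian, most significant octet first). A hedged sketch of what that can look like; the field and values used here are only illustrative:

using System;
using System.Net;

class DnsByteOrder
{
    // Write a 16-bit value with the most significant octet first, as RFC 1035 requires.
    static void WriteUInt16BigEndian(byte[] buffer, int offset, ushort value)
    {
        buffer[offset]     = (byte)(value >> 8);
        buffer[offset + 1] = (byte)(value & 0xFF);
    }

    static void Main()
    {
        var header = new byte[12];                  // the DNS header is 12 octets
        WriteUInt16BigEndian(header, 0, 0xABCD);    // e.g. the ID field (illustrative value)
        Console.WriteLine("{0:X2} {1:X2}", header[0], header[1]);   // AB CD

        // The BCL equivalent for signed 16-bit values:
        short swapped = IPAddress.HostToNetworkOrder((short)0x1234);
        Console.WriteLine("{0:X4}", (ushort)swapped);               // 3412 on a little-endian host
    }
}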

Query on integer range in a Programming Language

All,
This might be a very silly question, but in a programming language X where the int range is -127 to +128, does this refer to the actual values -127 and +128?
It refers to an 8-bit signed integer, where the high bit is used to determine whether it's negative or not:
01111111 = 127
00000001 = 1
00000000 = 0
11111111 = -1
11111110 = -2
10000001 = -127
10000000 = -128 or +128 or even -0, depending on the language
See: http://en.wikipedia.org/wiki/Two%27s_complement
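A quick sketch of those bit patterns using C#'s 8-bit signed type sbyte (whose actual range is -128 to 127):

using System;

class TwosComplementDemo
{
    static void Main()
    {
        foreach (sbyte v in new sbyte[] { 127, 1, 0, -1, -2, -127, -128 })
        {
            // Reinterpret the same 8 bits as an unsigned byte to print them.
            byte bits = unchecked((byte)v);
            Console.WriteLine("{0} = {1}", Convert.ToString(bits, 2).PadLeft(8, '0'), v);
        }
        // 01111111 = 127 ... 11111111 = -1 ... 10000000 = -128
    }
}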
What do you mean?
It will typically mean -127 to 128 inclusive, so both -127 and 128 are themselves valid values.
Normally, the range of values indicates how much memory the type uses, and types are normally designed to occupy full bytes.
In your case (-127 to 128), this type will occupy 1 byte, which can have 256 different values.
So, you have 127 negative values, 128 positive values, and the 0 value.
127 + 128 + 1 = 256.
So, the values -127 and 128 are included in the range.
