MSVC division by zero - visual-c++

I have two console apps (MSVC 2008). When they divide by zero, they behave differently. My questions are below.
a) In one app, the result of division by zero shows as 1.#INF000000000000 in the debugger. printf "%4.1f" prints it as "1.$".
b) In the other app, the result of division by zero is 9.2559631349317831e+061 in the debugger. printf "%4.1f" prints it as "-1.$".
Why does neither app raise an exception or signal on division by zero?
Isn't an exception/signal the default behaviour?
What are the #define names for the two constants above?
Generally, if I check for denominator == 0 before dividing, which #define value should I use for the dummy result? Is DBL_MIN OK? I found that a NaN value is not.
Can I tell stdio to format one specific double value as a string of my choosing? I realize it's too much to ask, but it would be nice to tell stdio to print, say, "n/a" for values equal to DBL_MIN in my app, as an example.
How should I approach, for best portability, division by zero and printing its results? By printing, I mean "print the number as 'n/a' if it's the result of a division by zero".
What is not clear to me here is how to represent the result of div-by-zero in one double, in a portable way.
Why two different results? Is it compiler options?
The compiler is C++, but used very much like C. Thanks.

When doing floating-point division by zero, the result should be infinity (represented with a special bit pattern).
My guess is that the second application does not actually perform a division by zero, but rather a division by a really small number. You can check this by inspecting the underlying representation, either in a debugger or by trace output (you can access it by placing the value in a union of the floating-point type and an integer of the same size). Note that simply printing it might not reveal this, as the print algorithm sometimes prints really small numbers as zero.
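A minimal sketch of that union trick (my example, not from the original answer; type-punning through a union is formally undefined behaviour in C++, but MSVC supports it):

#include <cstdio>

int main() {
    double zero = 0.0;
    double d = 1.0 / zero;   // +infinity under IEEE-754 default settings

    union { double dbl; unsigned long long bits; } u;
    u.dbl = d;
    // +infinity has the bit pattern 0x7FF0000000000000; a huge but finite
    // value such as 9.2559631349317831e+061 does not.
    std::printf("value = %f, raw bits = 0x%016llX\n", d, u.bits);
    return 0;
}

On MSVC you can also use _finite() and _isnan() from <float.h> for a cleaner check.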

Related

Python recursive function to convert binary to decimal is working. I just don't understand how

I will start this off by saying that I have not done any schooling. All of my programming knowledge has come from 12 years of doing various projects in which I had to write a program of some sort in some language.
That said. I am helping my friend who is just getting into programming and who is taking a introductory python class. Her class is currently learning about recursive functions. Due to my lack of schooling this is the first time I have heard about them. So when she asked me to explain why the function she had worked I couldn't do it. I had to learn them myself.
I have been looking around at various posts about solving this same problem. I found one here at geeksforgeeks that is a function that does exactly what we need. With my elementary understanding of recursion this is the function that I would have thought would have been the right choice.
def bintodec(n):
    if len(n) == 1:
        bin_digit = int(n)
        return bin_digit * 2**(len(n) - 1)
    else:
        bin_digit = int(n[0])
        return bintodec(n[1:]) + bin_digit * 2**(len(n) - 1)
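Calling it the same way works too (these example calls are mine, not from the geeksforgeeks page):

print(bintodec("11111111"))
# results in 255
print(bintodec("00000111"))
# results in 7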
This is the function she came up with
def convertToDecimal(binNum):
    if len(binNum) == 0:
        return 0
    else:
        return convertToDecimal(binNum[:-1]) * 2 + int(binNum[-1])
When I print the function call it works.
print(convertToDecimal("11111111"))
# results in 255
print(convertToDecimal("00000111"))
# results in 7
I understand that sometimes there is a shorthand way to do things, but I can't see any shorthand methods mentioned in the documentation that I have read.
The thing that really confuses me is how it takes that string and does math with it. I see the typecast for int, but the other side doesn't have it.
This is where everything falls apart and my brain starts melting. I am thinking there is a core mechanic of recursion that I am missing. Normally that is the case.
So along with figuring out why that works, I would love to know how this method compares to, say, the method we found over at geeksforgeeks.
What your friend has implemented is the typical implementation of Horner's method for polynomial evaluation. The formula is:
a_n*x^n + a_(n-1)*x^(n-1) + ... + a_1*x + a_0 = ((...(a_n*x + a_(n-1))*x + ...)*x + a_1)*x + a_0
Now think of the binary number as a polynomial with the a's equal to one or zero, and x equal to 2.
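Written out iteratively (my sketch, not part of the original answer), Horner's method for a binary string looks like this:

def horner_bin(s):
    result = 0
    for ch in s:                      # most significant bit first
        result = result * 2 + int(ch)
    return result

print(horner_bin("11111111"))
# results in 255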
The thing that really confuses me is how it takes that string and does math with it. I see the typecast for int, but the other side doesn't have it.
The "other side" will take the value as int number which is result of latest recursive function call. in this case it will be 0.
OK, so in words, what this program is doing is, on each invocation, taking the string and splitting it into 2 parts; let's call them a and b. a contains the entire string apart from the final character, while b contains only the final digit.
Next, it takes a and calls the same function again, but this time with the shorter string, and then takes the result of this and doubles it. The doubling is done because appending an additional 0 to the end of a binary number doubles its value.
Finally, it converts the value of b into an integer, either 1 or 0, and adds this to the previous result, which will be the decimal version of your binary string.
In other words, this function only computes the result one character at a time, then it calls back to itself as a way of 'looping' to the next character.
It's important that there is an exit condition in a recursive function, to prevent infinite looping; in this case, when the string is empty, the program just returns 0, ending the loop.
Now on to the syntax. The only potentially confusing thing here I can see is Python's array/slice syntax. Firstly, by accessing index -1 of a sequence, you are actually accessing the final element.
Also in that snippet is slice notation, which is the colon : in the index. This is essentially used to select a subset of a sequence; in this case, all elements but the final one.
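To see the 'looping' concretely, here is the expansion for "110" (an example input of my choosing):

convertToDecimal("110")
  = convertToDecimal("11") * 2 + int("0")
  = (convertToDecimal("1") * 2 + int("1")) * 2 + 0
  = ((convertToDecimal("") * 2 + int("1")) * 2 + 1) * 2 + 0
  = ((0 * 2 + 1) * 2 + 1) * 2 + 0
  = 6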
I honestly couldn't make her function run as written. I got the error below (this error appears if the function is called with an integer literal, e.g. convertToDecimal(11111111), rather than a string):
if len(binNum) == 0:
TypeError: object of type 'int' has no len()
I'm guessing, however, that even when working this would fail at some point under testing; I'd like to see it returning, say, 221 (11011101), where the 1s and 0s are not consecutive, and see whether that works or fails.
Lastly, back to my error: I'm assuming the intention is to exit the recursion when the string is empty. Even with len(binNum) == 1 as the base case, the recursion would still terminate as written. A try/except block would be better.

Most efficient way to check if a stringified number is above MAX_UINT64?

Suppose I have a script which is executed by a 64-bit Perl and which takes one parameter which actually is a number, but of course is a string in the first place (because all command line parameters are strings).
Now, if that parameter's value fits into a 64 bit unsigned int, the script should do something with the parameter; otherwise, it should abort with an appropriate error message.
What would be the most efficient way to check if that parameter (as a string, i.e. before using it in mathematical operations) fits into a 64-bit unsigned integer?
What I already have thought of:
I could do a string comparison.
I don't want to do that because in that case I'd have to cope with collations, and the documentation for Unicode::Collate looks a bit oversized for my small problem.
But this is just a feeling, so I'd be grateful for comments or other opinions.
Side note: I have tried this, and it worked as expected. But this was just a quick test; I did not play around with locales, so on other systems it might not work (although I doubt that there is a collation which puts "2" before "1", but you never know).
Converting to numbers before comparing won't work:
root#spock:/root/test# perl -e '$i="18446744073709551615"+0; $j="18446744073709551616"+0; print "$i $j\n"; print(($i < $j) ? "less\n" : "greater or equal\n")'
18446744073709551615 1.84467440737096e+19
greater or equal
Note how Perl prints the second number. This is the smallest unsigned integer which does not fit into 64 bits, so Perl converts it to a double. When it then compares $i and $j numerically, it has to convert $i to a double as well; due to the loss of precision involved herein, $i is converted to the same value as $j, so the comparison goes wrong.
I could do use bigint;. I have tried this, and it behaved as expected.
But that probably would lead to a dramatic loss of performance. As far as I have understood, use bigint; implies the use of various heavy libraries.
But this is just a feeling as well, so if this is the way to go, please let me know.
Another idea (not tried yet): Could I use pack() to generate a byte sequence from the stringified number somehow? Then I could check the length of that byte sequence: if it is less than or equal to 8 bytes, the stringified number fits into a 64-bit unsigned integer.
How would you solve this problem?
use constant MAX_UINT64 => '18446744073709551615';
my $larger_than_max =
    length($s) > length(MAX_UINT64)
    || length($s) == length(MAX_UINT64) && $s gt MAX_UINT64;
Assumes the input matches /^(?:0|[1-9][0-9]*)\z/. Adjust to liking (e.g. to handle leading zeros or signs).
You can use a simple shortcut that should eliminate most numbers: any number that has 19 or fewer digits in its decimal representation fits in a 64-bit unsigned integer, so if the length of the string containing the integer is less than 20, it is good.
Any string with length greater than or equal to 21 is bad.
UINT64_MAX is 18446744073709551615, so some numbers with 20 decimal digits fit in a 64-bit unsigned integer and some don't.
At this point, a simple string comparison using ge will be enough, because the ordering of the Arabic digits is the same regardless of locale.
$ perl -E "say 'yes' if $ARGV[1] ge $ARGV[0]" 18446744073709551615 18446744073709551616
yes
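Putting the length shortcut and the string comparison together, a sketch might look like this (the name fits_uint64 and the digits-only precondition are my own choices, not from the original answers):

sub fits_uint64 {
    my ($s) = @_;
    my $max = '18446744073709551615';                # UINT64_MAX
    return 0 unless $s =~ /^(?:0|[1-9][0-9]*)\z/;    # digits only, no leading zeros
    return length($s) < length($max)
        || (length($s) == length($max) && $s le $max);
}

print(fits_uint64('18446744073709551615') ? "fits\n" : "too large\n");   # fits
print(fits_uint64('18446744073709551616') ? "fits\n" : "too large\n");   # too large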
I'll assume the input is a string of digits for clarity.
You ask for the most efficient way. This can't be determined without understanding the distribution of inputs. For example, if the inputs are uniform in 128-bit integers, the most efficient approach is to start with something like:
if (length($ARGV[0]) > 20) { die "Number too large.\n" }
This deals with over 99.9999999999 % of cases. In fact, if the inputs were uniform in 256-bit integers, you might be forgiven for simply writing:
warn "Number too large.\n";
As to repeatedly and consistently testing in a reasonable amount of time, you could consider something like this regex from Damian Conway's Regexp::Number (it is for signed 64-bit numbers, but the principle is valid). Notice that, being real code, it deals with leading zeros.
'0*(?:(?:9(?:[0-1][0-9]{17}' .
'|2(?:[0-1][0-9]{16}' .
'|2(?:[0-2][0-9]{15}' .
'|3(?:[0-2][0-9]{14}' .
'|3(?:[0-6][0-9]{13}' .
'|7(?:[0-1][0-9]{12}' .
'|20(?:[0-2][0-9]{10}' .
'|3(?:[0-5][0-9]{9}' .
'|6(?:[0-7][0-9]{8}' .
'|8(?:[0-4][0-9]{7}' .
'|5(?:[0-3][0-9]{6}' .
'|4(?:[0-6][0-9]{5}' .
'|7(?:[0-6][0-9]{4}' .
'|7(?:[0-4][0-9]{3}' .
'|5(?:[0-7][0-9]{2}' .
'|80(?:[0-6])))))))))))))))))' .
'|[1-8]?[0-9]{0,18})'
This should be blindingly fast compared with Perl start-up time, for example, or even a keystroke.
As to bigint, it executes very quickly and includes some cool optimization features, but unless you are testing many numbers in code, the above should suffice.
If you really want to burn rubber, though, take a look at perlguts and use something that exposes the macro SvIOK(SV*). (See https://metacpan.org/pod/release/KRISHPL/pod2texi-0.1/perlguts.pod#What-is-an-%22IV%22? for more details.)

Bitwise operations Python

This is a first run-in with not only bitwise ops in Python, but also strange (to me) syntax.
for i in range(2**len(set_)//2):
    parts = [set(), set()]
    for item in set_:
        parts[i&1].add(item)
        i >>= 1
For context, set_ is just a list of 4 letters.
There's a bit to unpack here. First, I've never seen [set(), set()]. I must be using the wrong keywords, as I couldn't find it in the docs. It looks like it creates a matrix in pythontutor, but I cannot say for certain. Second, while parts[i&1] is a slicing operation, I'm not entirely sure why a bitwise operation is required. For example, 0&1 should be 1 and 1&1 should be 0 (carry the one), so binary 10 (or 2 in decimal)? Finally, the last bitwise operation is completely bewildering. I believe a right shift is the same as dividing by two (I hope), but why i>>=1? I don't know how to interpret that. Any guidance would be sincerely appreciated.
[set(), set()] creates a list consisting of two empty sets.
0&1 is 0, 1&1 is 1. There is no carry in bitwise operations. parts[i&1] therefore refers to the first set when i is even, the second when i is odd.
i >>= 1 shifts right by one bit (which is indeed the same as dividing by two), then assigns the result back to i. It's the same basic concept as using i += 1 to increment a variable.
The effect of the inner loop is to partition the elements of set_ into two subsets, based on the bits of i. If the limit in the outer loop had been simply 2 ** len(set_), the code would generate every possible such partitioning. But since that limit was divided by two, only half of the possible partitions get generated - I couldn't guess what the point of that might be, without more context.
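Here is a small runnable version of that loop for illustration (my example; set_ is a 4-letter string, as in the question). Modifying i inside the body is safe because the for statement rebinds i from range() on each outer iteration:

set_ = "abcd"
for i in range(2**len(set_) // 2):
    parts = [set(), set()]
    for item in set_:
        parts[i & 1].add(item)   # the low bit of i decides which set gets item
        i >>= 1                  # move the next bit into the low position
    print(parts)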
I've never seen [set(), set()]
This isn't anything interesting, just a list with two new sets in it. So you have seen it, because it's not new syntax. Just a list and constructors.
parts[i&1]
This tests the least significant bit of i and selects either parts[0] (if the lsb was 0) or parts[1] (if the lsb was 1). Nothing fancy like slicing, just plain old indexing into a list. The thing you get out is a set, .add(item) does the obvious thing: adds something to whichever set was selected.
but why i>>=1? I don't know how to interpret that
Take the bits in i and move them one position to the right, dropping the old lsb, and keeping the sign. Sort of like this (an 8-bit example): 00110100 >> 1 == 00011010.
Except of course that in Python you have arbitrary-precision integers, so it's however long it needs to be instead of 8 bits.
For positive numbers, the part about copying the sign is irrelevant.
You can think of a right shift by 1 as a flooring division by 2 (this is different from truncation: negative numbers are rounded towards negative infinity, e.g. -1 >> 1 == -1), but that interpretation is usually more complicated to reason about.
Anyway, the way it is used here is just a way to loop through the bits of i, testing them one by one from low to high, but instead of changing which bit it tests it moves the bit it wants to test into the same position every time.
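For example (my illustration), the same idiom in isolation, reading the bits of a number from low to high:

i = 0b1011
while i:
    print(i & 1)   # prints 1, 1, 0, 1 (lowest bit first)
    i >>= 1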

How to convert decimal to binary using Python's basic functions

I'm trying to convert decimal to binary, binary to decimal, and to create a binary counter. Because it's for school, we are only allowed to use while and for loops, if, if-else, if-elif-else, int(), input(), print(), range(), pow(), len(), and str(). We are not allowed to use break, continue, quit(), or exit() statements, or return statements to break out of a loop. We also cannot use any functions that were not listed in the allowed section, like bin, etc. I'm struggling to come up with a way to convert decimal to binary under these constraints. Does anyone have an idea of where to start? I've created the selection menu for the converter, but haven't been able to create the converter itself. Any tips on where to start would be helpful.
Decimal to Binary Algorithm
Let's say you have a number n. Repeat until n is 0:
n % 2 (gives you the remainder, i.e. the next bit)
n = n // 2 (integer division)
Reverse the results obtained through n % 2 to get the binary equivalent of the decimal number.
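A minimal sketch of that algorithm (my code, staying within the allowed constructs; building the string front-to-back avoids the separate reversal step):

n = 157                        # the decimal number to convert
bits = ""
while n > 0:
    bits = str(n % 2) + bits   # prepend the remainder, so no reversal is needed
    n = n // 2
if len(bits) == 0:
    bits = "0"                 # special case: n was 0 to begin with
print(bits)                    # prints 10011101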
Try working out decimal to binary and vice versa step by step by hand. Then it should be easy to code.

Alternatives to String.Compare for performance

I've used a profiler on my C# application and realised that String.Compare() is taking a lot of time overall: 43% of overall time with 124M hits.
I'm comparing relatively small strings: from 4 to 50 chars.
What would you recommend replacing it with in terms of performance?
UPD: I only need to decide whether 2 strings are the same or not. Strings can be null or "". There is no cultural aspect or any other aspect to it. Most of the time it'll be "4578D" compared to "1235E" or similar.
Many thanks in advance!
It depends on what sort of comparison you want to make. If you only care about equality, then use one of the Equals overloads - for example, it's quicker to find that two strings have different lengths than to compare their contents.
If you're happy with an ordinal comparison, explicitly specify that:
int result = string.CompareOrdinal(x, y);
An ordinal comparison can be much faster than a culture-sensitive one.
Of course, that assumes that an ordinal comparison gives you the result you want - correctness is usually more important than performance (although not always).
EDIT: Okay, so you only want to test for equality. I'd just use the == operator, which uses an ordinal equality comparison.
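A minimal sketch of that advice (my example, not from the original answer):

using System;

class Demo
{
    static void Main()
    {
        string x = "4578D", y = "1235E";

        // string's == operator performs an ordinal equality comparison
        // and handles null operands.
        bool same = x == y;

        // Equivalent, with the comparison spelled out explicitly:
        bool sameExplicit = string.Equals(x, y, StringComparison.Ordinal);

        Console.WriteLine(same + " " + sameExplicit);   // False False
    }
}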
You can use different ways of comparing strings:
String.Compare(str1, str2, StringComparison.CurrentCulture) // default
String.Compare(str1, str2, StringComparison.Ordinal)        // fastest
Making an ordinal comparison can be something like twice as fast as a culture-dependent comparison.
If you compare for equality, and the strings don't contain any culture-dependent characters, you can very well use an ordinal comparison.
