double.TryParse("Infinity", out dbl) return zero - c#-4.0

I am using the double.TryParse method to parse my string to a double. In some cases the string might be NaN, Infinity, or -Infinity. When parsing this kind of text I want the double value to be zero instead of double.NaN, double.PositiveInfinity, or double.NegativeInfinity. Does double.TryParse have any option to do this, or do I need to write a method to filter it?

TryParse has no option to behave the way you desire, so you will have to code it yourself. Given that Infinity and NaN are not zero, it should be no surprise that none of the built-in methods return zero for those inputs.

You can parse it like this:
double value;
double a = double.TryParse("YourString", out value) ? value : 0;
If it's not a valid double you will get 0; otherwise, the value.
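Note that this alone does not filter NaN or the infinities: on runtimes where "NaN" and "Infinity" parse successfully, TryParse returns true and hands back the non-zero special value. A minimal sketch of the filter the first answer suggests coding yourself (the name ParseOrZero is made up for illustration):
static double ParseOrZero(string s)
{
    double value;
    if (!double.TryParse(s, out value))
        return 0; // unparseable input: TryParse already leaves value at 0
    // Map NaN and ±Infinity to zero, as the question asks.
    return double.IsNaN(value) || double.IsInfinity(value) ? 0 : value;
}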

Related

Compare Unicode code point range in Python3

I would like to check whether a character is in a certain Unicode range, but I cannot seem to get the expected answer.
char = "？" # the Unicode value is 0xff1f
print(hex(ord(char)))
if hex(ord(char)) in range(0xff01, 0xff60):
    print("in range")
else:
    print("not in range")
It should print: "in range", but the results show: "not in range". What have I done wrong?
hex() returns a string. To compare integers you should simply use ord:
if ord(char) in range(0xff01, 0xff60):
You could've also written:
if 0xff01 <= ord(char) < 0xff60:
In general, for problems like this, you can try inspecting the types of your variables.
Typing 0xff01 without quotes is an integer literal, so it represents a number.
list(range(0xff01, 0xff60)) will give you a list of integers [65281, 65282, ..., 65375]; range(0xff01, 0xff60) == range(65281, 65376) evaluates to True.
ord('？') gives you the integer 65311.
hex() takes an integer and converts it to '0xff01' (a string).
So you simply need ord(); there is no need to hex() it.
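Putting that together, the corrected version of the question's snippet:
char = "？" # U+FF1F, code point 0xff1f
if ord(char) in range(0xff01, 0xff60): # compare integers, not hex strings
    print("in range")
else:
    print("not in range")
# prints: in range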
Just use ord:
if ord(char) in range(0xff01, 0xff60):
    ...
hex is not needed.
As mentioned in the docs:
Convert an integer number to a lowercase hexadecimal string prefixed with “0x”.
That already describes the problem: hex() produces a string, when what we want is an integer.
The ord function, on the other hand, does what we want; as mentioned in the docs:
Given a string representing one Unicode character, return an integer representing the Unicode code point of that character. For example, ord('a') returns the integer 97 and ord('€') (Euro sign) returns 8364. This is the inverse of chr().

Cannot return a float value of -1.00

I am currently doing an assignment for a computer science paper at university. I am in my first year.
In one of the questions, if the gender is incorrect the function is supposed to return a value of -1. But the testing column says the expected value is -1.00, and I cannot seem to return the value '-1.00'; it always returns -1.0 (with one zero). I used .format to give the value two decimal places (so it appears with two zeros), but converting the result to a float always gives -1.0.
return float('{:.2f}'.format(-1))
This isn't as clear as it could be. Does your instructor or testing software expect a string '-1.00'? If so, just return that. Is a float type expected? Then return -1.0; the number of digits shown does not affect the value.
I don't know exactly what you have done, but I tried it this way and it outputs what you expect:
>>> b = -1
>>> print("%.2f" % (b))
-1.00
>>> print("%.2f" % (-1))
-1.00
What does the following code do?
print(float('{:.2f}'.format(-1)))
The '{:.2f}'.format(-1) creates a string representation of -1, as defined by the format string. The float(...) converts this string back to the float -1.0. The print command converts this float to a string, using some default format, and prints that string to the screen. I think that isn't what you expected, because the format you used does not affect how print formats its output.
I assume you want
print('{:.2f}'.format(float(-1)))
and this actually does what you want; it prints
-1.00
http://ideone.com/GyINQR
It is not necessary to convert -1 explicitly to float:
print('{:.2f}'.format(-1))
gives the desired result:
http://ideone.com/U2RTMX
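The distinction in one short interpreter session: formatting controls how the value is displayed, not the value itself.
>>> value = float('{:.2f}'.format(-1))
>>> value # the float's default repr shows one zero
-1.0
>>> '{:.2f}'.format(value) # formatting at output time shows two
'-1.00'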

How to reduce a String to an Integer summing its characters

I'd like to take a String e.g. "1234" and convert it to an Integer which represents the sum of all the characters.
I thought perhaps treating the String as a list of characters and doing a reduce/inject would be the simplest mechanism. However, in all my attempts I have not managed to get the syntax correct.
I attempted something along these lines without success.
int sum = myString.inject(0, { Integer accu, Character value ->
    return accu + Character.getNumericValue(value)
})
Can you help me find a simple syntax to solve this problem? (I can easily solve it in a verbose, Java-like way with loops, etc.)
Try:
"1234".collect { it.toInteger() }.sum()
Solution by @dmahapatro:
"1234".toList()*.toInteger().sum()

string variable returning as double

Create a method called parseEqn which will receive 1 String variable and return the double value of the expression passed to it.
parseEqn("123+23") → 146.0
parseEqn("3+5") → 8.0
parseEqn("3-5") → -2.0
So that's the question ^^^^ and I think what I need to do is first use a string tokenizer to split the string up, then convert the tokens into doubles, and from there add or subtract depending on the operator... but I'm not sure.
This is what I have so far:
public double parseEqn(String str) {
    StringTokenizer st = new StringTokenizer(str, "+-", true);
    String first = st.nextToken();
    String op = st.nextToken();
    String second = st.nextToken();
    double num1 = Double.parseDouble(first);
    double num2 = Double.parseDouble(second);
    if (op.equals("+")) {
        return num1 + num2;
    } else if (op.equals("-")) {
        return num1 - num2;
    }
    throw new IllegalArgumentException("Unknown operator: " + op);
}
I have no clue though...
Writing an expression parser is not a trivial task. The standard algorithm for parsing arbitrary infix expressions is the shunting-yard algorithm. The idea is to run through each token and build a Reverse Polish Notation (RPN) expression from the input. An RPN expression is essentially a stack-based list of operations that is very easy for a computer to work with (and easy to write code to evaluate).
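To illustrate the RPN half of that pipeline, here is a minimal sketch of a stack-based evaluator for + and -, assuming the tokens have already been split (e.g. by the StringTokenizer above):
import java.util.ArrayDeque;
import java.util.Deque;

static double evalRpn(String[] tokens) {
    Deque<Double> stack = new ArrayDeque<>();
    for (String t : tokens) {
        if (t.equals("+") || t.equals("-")) {
            double b = stack.pop(); // right operand was pushed last
            double a = stack.pop();
            stack.push(t.equals("+") ? a + b : a - b);
        } else {
            stack.push(Double.parseDouble(t)); // operand
        }
    }
    return stack.pop();
}
// evalRpn(new String[] {"123", "23", "+"}) yields 146.0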

Arduino issue: String to float adds two zeros instead of the correct integer

Code snippet:
Serial.println(sensorString); //so you can see the captured string
char carray[sensorString.length() + 1]; //determine size of the array
Serial.println(sizeof(carray));
sensorString.toCharArray(carray, sizeof(carray)); //put sensorString into an array
float sensorStringFloat = atoi(carray); //convert the array into an Integer
Serial.println(sensorStringFloat);
Serial.println(sensorStringFloat) prints out 5.00 instead of the correct float value of 5.33. Why is that and how do I fix this issue? I would eventually like to pass sensorStringFloat over to:
aJson.addNumberToObject(sensor, "ph", sensorStringFloat);
atoi converts a numeral in ASCII to an integer. The comment on that line also says it converts to an integer. So you got an integer result, 5. To convert to floating-point, consider using atof. (Note that “f” stands for floating-point, not “float”. atof returns a double.)
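A sketch of the corrected lines, keeping the rest of the snippet the same and assuming sensorString holds text like "5.33":
sensorString.toCharArray(carray, sizeof(carray)); // put sensorString into an array
float sensorStringFloat = atof(carray); // atof keeps the fractional part
Serial.println(sensorStringFloat); // now prints 5.33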
You should pass another parameter that defines the format; in this case it is the number of digits after the decimal point:
Serial.println(sensorStringFloat, 2);
Say you have float x; convert it to a String using:
String _temp = String(x, 0);
The second parameter, 0, says you want no trailing zeros (no decimal places).
Caution: this is only suitable for whole numbers. It would not work for, say, 1.24; you'll get just 1.
