Convert String to Int/int64/Decimal?

I'm using a module from HP that gets the BIOS version of a system. It was easy to convert the string to an integer on some systems because it only had one decimal point. But now I'm running into systems whose BIOS version has two decimal points, and I can no longer use [int], [decimal], [double], etc.
So if I have a string with a value of "02.01.06" and I try to cast it to an integer, it fails.
Example:
[int]$InstalledBiosVersion = Get-HPBiosVersion
Cannot convert value "02.01.06" to type "System.Int32". Error: "Input string was not in a correct format."
I need to change the string to an integer because I'm comparing the BIOS version installed on the system(s) to the latest version available. If one number is -lt the other, the BIOS is out of date.
Any ideas?

Never mind, I ended up just removing the leading digits and the first "." and then converting the result to a decimal.
[decimal]$InstalledBiosVersion = (Get-HPBiosVersion).Remove(0,4)
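A note for anyone doing this comparison: stripping fields throws information away, and it is safer to compare the version components numerically (in PowerShell a [version] cast does this directly). A minimal sketch of the idea in Python, with illustrative version strings:

def parse_version(s):
    return tuple(int(part) for part in s.split('.'))

installed = parse_version('02.01.06')
latest = parse_version('02.01.10')   # hypothetical latest version
print(installed < latest)           # True, so the BIOS is out of date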

Related

Is it possible to represent a 64-bit number as a string when the hardware doesn't support 64-bit numbers?

I want to show a 64-bit number as a string. The problem is that my hardware doesn't support 64-bit numbers, just 32-bit.
So I have the 64-bit number split into two 32-bit numbers (high and low parts).
Example: 64-bit number : 12345678987654321 (002B DC54 6291 F4B1h)
32-bit low part: 1653732529 (6291 F4B1h)
32-bit high part: 2874452 (002B DC54h)
I think the solution to my problem would be showing this number as a string.
Is that possible?
Thanks.
Yes, you can use an array of 32-bit uints, or even a lower bit-width ...
For printing you can use this:
hex to dec
So first print a hex string, which is easy at any bit-width (you just stack up the lower bit-width prints together from MSW to LSW), and then convert the hex text to dec text (see the sketch below).
With this chained array of uints you can do the math operations like this:
Cant make value propagate through carry
Doing operations on an array of uints is much, much faster than on strings ...
But if you insist, yes, you can use a string representation too ...
There are also hybrid representations like BCD that are suitable for this, but your MCU would need to have support for them ...
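As a rough illustration of that MSW-to-LSW approach, here is a sketch in Python using the example's two halves. Python's big integers stand in for the hex-to-dec text conversion here; on a 32-bit MCU you would do that step digit-by-digit on the hex string instead:

high = 0x002BDC54                            # 32-bit high part
low = 0x6291F4B1                             # 32-bit low part
hex_text = '{:08X}{:08X}'.format(high, low)  # stack hex prints MSW to LSW
print(hex_text)                              # 002BDC546291F4B1
print(int(hex_text, 16))                     # 12345678987654321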
Depending on your language of choice, the language may allow you to use greater-than-32-bit integers even on 32-bit architectures (like Python).
If that is the case, the problem becomes trivial: compute the value, then compute the corresponding hex string.
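For instance, in Python, whose ints are arbitrary precision, recombining the example's halves is a one-liner:

value = (0x002BDC54 << 32) | 0x6291F4B1  # high part shifted up, OR'd with low part
print(value)                             # 12345678987654321
print(hex(value))                        # 0x2bdc546291f4b1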

Python float() limitation on scientific notation

python 3.6.5
numpy 1.14.3
scipy 1.0.1
cerberus 1.2
I'm trying to convert a string '6.1e-7' to a float 0.00000061 so I can save it in a MongoDB field.
My problem here is that float('6.1e-7') doesn't work (it will work for float('6.1e-4'), but not float('6.1e-5') and smaller).
Python float
I can't seem to find any information about why this happens, or on float limitations; every example I found shows a conversion at e-3, never as small as that.
Numpy
I installed Numpy to try float96()/float128() ... float96() doesn't exist and float128() returns a float '6.09999999999999983e-07'.
Format
I tried format(6.1E-07, '.8f'), which works, as it returns a string '0.00000061', but when I convert the string to a float (so it can pass cerberus validation) it reverts back to '6.1E-7'.
Any help on this subject would be greatly appreciated.
Thanks
'6.1e-7' is a string:
>>> type('6.1e-7')
<class 'str'>
While 6.1e-7 is a float:
>>> type(6.1e-7)
<class 'float'>
0.00000061 is the same as 6.1e-7
>>> 0.00000061 == 6.1e-7
True
And, internally, this float is represented by 0's and 1's. That's just yet another representation of the same float.
However, when converted into a string, they're no longer compared as numbers, they are just characters:
>>> '0.00000061' == '6.1e-7'
False
And you can't compare strings with numbers either:
>>> 0.00000061 == '6.1e-7'
False
Your problem description is too muddled to be understood precisely, but I'll attempt some telepathy here.
In their internal format, numbers don't keep any formatting information; neither integers nor floats do. For an integer 123, you can't recover whether it was written as "123", " 123 " (with tons of spaces before and after it), 000000123, or +0123. For a floating-point number, 0.1, +0.0001e00003, 1.000000e-1, and myriad other forms can be used. Internally, they all result in the same number.
(There are some specifics with it when you use IEEE754 "decimal floating", but I am sure it is not your case.)
When saving to a database, the internal representation stops mattering much. Instead, the database's specifics start to play a role, and they can be quite different. For example, SQL suggests using column types like numeric(10,4), and each value will be converted to the decimal format corresponding to the column type (typically saved on disk as a text string, with or without a decimal point). In MongoDB, you can keep a floating value either as a JSON number (an IEEE754 double) or as text. Each variant has its own specifics, but if you choose text, it is your own responsibility to provide proper formatting each time you produce that text. You want to see a fixed-point decimal number with 8 digits after the point? OK, no problem: just format according to %.8f each time you prepare such a representation.
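Concretely, note that the conversion in the question works fine; only the default display differs from the text you want to store:

value = float('6.1e-7')   # parses without any problem
print(value)              # 6.1e-07 (Python's shortest representation)
print('%.8f' % value)     # 0.00000061 (the fixed-point text form)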
The issues with representation selection are:
Uniqueness: no different forms should be available for the same value. Otherwise you can, for example, store the same contents under multiple keys, and then mistake an older one for the latest one.
Ordering awareness: the DB should be able to provide the natural order of values, for requests like "ceiling key-value pair".
If you always format values using %.8f, you will achieve uniqueness, but not ordering. The same goes for %g, %e, and really any other text format, except special (non-human-readable) ones constructed to preserve such ordering. If you need ordering, just store numbers as numbers, and don't worry about how they look in text form (a quick illustration follows).
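Why fixed-point text breaks ordering: the zero-padding aligns the fractional digits, but values with integer parts of different widths compare wrongly as strings:

a, b = 10.0, 2.0
print(a > b)                     # True as numbers
print('%.8f' % a > '%.8f' % b)   # False: '10.00000000' sorts before '2.00000000'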
(And your problem is not actually tied to numpy.)

Discrepancies in Python hard-coded strings vs str() methods

Okay. Here is my minimal working example. When I type this into Python 3.6.2:
foo = '0.670'
str(foo)
I get
>>>'0.670'
but when I type
foo = 0.670
str(foo)
I get
>>>'0.67'
What gives? It is stripping off the zero, which I believe has to do with representing a float on a computer in general. But by using the str() method, why can it retain the extra 0 in the first case?
You are mixing strings and floats. A string is a sequence of code points (one code point represents one character) representing some text, and the interpreter processes it as text. A string is always written inside single quotes or double quotes (e.g. 'Hello'). A float is a number, and Python knows it, so it also knows that 1.0000 is the same as 1.0.
In the first case you saved a string into foo. Calling str() on a string just takes the string and returns it as is.
In the second case you saved 0.670 as a float (because it's not wrapped in quotes). When Python converts a float into a string, it always tries to create the shortest string possible.
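A quick interactive check of both cases; note that the shortest string still round-trips to the same float:
>>> str('0.670')
'0.670'
>>> str(0.670)
'0.67'
>>> float('0.67') == 0.670
True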
Why does Python automatically drop the trailing zero?
When you save a real number into the computer's memory, you have to convert it into a binary representation. Usually (but there are some exceptions) it's saved in the format described in the IEEE 754 standard, and Python uses it for floats too.
Let's look at an example:
from struct import pack
x = -1.53
y = -1.53000
print("X:", pack(">d", x).hex())
print("Y:", pack(">d", y).hex())
The pack() function takes its input and, based on the given format (>d, a big-endian double), converts it into bytes. In this case it takes a float and shows how it is saved in memory. If you run the code you will see that x and y are saved in memory in exactly the same way. The memory doesn't contain any information about the formatting of the saved number.
Of course you could add some information about it, but:
It would take extra memory, and it's good practice to use only as much memory as you actually need and not waste it.
What would be the result of 0.10 + 0.1: should it be 0.2 or 0.20?
For scientific purposes and significant figures, shouldn't it leave the value as the user defined it?
It doesn't matter how you defined the input number. What matters is what format you want to use for presenting it. As I said, str() always tries to create the shortest string possible, which is good for simple scripts or tests. For scientific purposes (or for uses where a specific representation is required) you can convert your numbers to strings however you want or need.
For example:
x = -1655484.4584631
y = 42.0
# always print the number with its sign and exactly 5 digits in the fractional part
print("{:+.5f}".format(x)) # -1655484.45846
print("{:+.5f}".format(y)) # +42.00000
# always print the number in scientific format (the sign is shown only when the number is negative)
print("{:-.2e}".format(x)) # -1.66e+06
print("{:-.2e}".format(y)) # 4.20e+01
For more information about formatting numbers and other types, look at the Python documentation.

string is entier in Tcl 8.5

I have the following Problem:
I want to check if a string is a 64-bit integer.
I cannot use [string is integer $str] since it only works with 32-bit integers.
At http://wiki.tcl.tk/10166 I found the solution [string is entier $str], but this does not work in Tcl 8.5; I get the following error message:
bad class "entier": must be alnum, alpha, ascii, control, boolean, digit, double, false, graph, integer, list, lower, print, punct, space, true, upper, wideinteger, wordchar, or xdigit
Does Tcl 8.5 not support this class?
And how can I check for 64-bit integers?
In Tcl 8.5, string is doesn't support the entier class (which checks for general integers — the name comes from French, and was picked because everything else better was taken for something else already). However, the wideinteger class is supported, and does exactly the check for a 64-bit integer on all supported platforms; plain old string is integer might really be 32-bit or 64-bit depending on the CPU architecture.
Don't forget to use -strict unless you want an empty string to be accepted as a valid value. (There are a few cases where that's desirable, but usually it isn't. It's a minor specification botch from years ago.)
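For reference, what wideinteger effectively tests is whether the text parses as an integer that fits in a signed 64-bit word. The same check, sketched in Python (the helper name is made up for illustration, and the details of what counts as integer syntax differ slightly between the two languages):

def is_wide_integer(s):
    # True if s parses as an integer in the signed 64-bit range
    try:
        v = int(s)
    except ValueError:
        return False
    return -(2 ** 63) <= v < 2 ** 63

print(is_wide_integer('12345678987654321'))  # True
print(is_wide_integer(''))                   # False (like -strict in Tcl)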

Fortran improper output on AIX and Linux

I have a scenario where I declare a variable real*8 and read a value
0.1234123412341234
which is stored in a file.
When I read it into a variable on Linux and display the value, it prints
0.12341234123412
whereas when I run the same code on AIX it prints the value
0.12341234123412370
Why do the two platforms print different values for the same code? Is there any way to overcome this without using a format specifier?
P.S.
The AIX compiler is xlf.
The Linux compiler is ifort.
I assume that you are using list-directed IO, write (X, *). While this type of IO is convenient, the output is not fully specified by the standard. If you want your output to be extremely similar across compilers and platforms, you should use a format. (You might still have small variations in results due to the use of finite-precision arithmetic.)
You shouldn't use REAL*8 to declare double-precision variables; you should use
INTEGER, PARAMETER :: prec = SELECTED_REAL_KIND(15,307)
REAL(prec) :: variable
to specify a portable double-precision variable (see here for more details).
In any event, the problem you are experiencing is related to precision. A double-precision variable is good for about 15 significant digits before becoming inaccurate, which appears to be the case with both compilers. In fact, I would argue that these are effectively the same number, given how close they are (a % difference of about 3E-12).
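A quick sanity check of that closeness claim, sketched in Python with the two printed values:

a = 0.12341234123412     # Linux output
b = 0.12341234123412370  # AIX output
print(abs(a - b) / a)    # roughly 3e-14 relative, i.e. about 3E-12 percent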
