Arduino issue: String to float adds two zeros instead of the correct integer

Code snippet:
Serial.println(sensorString); //so you can see the captured string
char carray[sensorString.length() + 1]; //determine size of the array
Serial.println(sizeof(carray));
sensorString.toCharArray(carray, sizeof(carray)); //put sensorString into an array
float sensorStringFloat = atoi(carray); //convert the array into an Integer
Serial.println(sensorStringFloat);
Serial.println(sensorStringFloat) prints out 5.00 instead of the correct float value of 5.33. Why is that and how do I fix this issue? I would eventually like to pass sensorStringFloat over to:
aJson.addNumberToObject(sensor, "ph", sensorStringFloat);

atoi converts a numeral in ASCII to an integer. The comment on that line also says it converts to an integer. So you got an integer result, 5. To convert to floating-point, consider using atof. (Note that “f” stands for floating-point, not “float”. atof returns a double.)
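A minimal C sketch of the difference, using the 5.33 value from the question (variable names are illustrative):
#include <stdio.h>
#include <stdlib.h>
int main(void) {
    const char *s = "5.33";
    int i = atoi(s);         /* parsing stops at the '.', so i == 5 */
    double d = atof(s);      /* parses the full numeral, so d == 5.33 */
    printf("%d %f\n", i, d); /* prints: 5 5.330000 */
    return 0;
}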

You should pass another parameter that defines the format; in this case it is the number of digits after the decimal point.
Serial.println(sensorStringFloat, 2);

String temp = String(_float, 0);
For example, given float x, convert it to a String using:
String _temp = String(x, 0);
The second parameter, 0, says I want no digits after the decimal point.
Caution: this is only suitable for whole numbers. It would not work for, say, 1.24; you'll get just 1.
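For comparison, Arduino's String(x, 0) formats roughly like C's %.0f, which shows the same loss of fractional digits; a minimal sketch:
#include <stdio.h>
int main(void) {
    float x = 1.24f;
    char buf[16];
    snprintf(buf, sizeof buf, "%.0f", x); /* 0 digits after the decimal point */
    printf("%s\n", buf);                  /* prints "1"; the .24 is lost */
    return 0;
}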

Related

Float to Binary and Binary to Float in Python

Can anyone tell me how to convert a float number to a 32-bit binary string, and a 32-bit binary string back to a float number, in Python?
The 'bin' function in Python works only for integers.
I need a single bit string, as in the internal representation. I do not want separate bit strings for the parts before and after the decimal point, joined by a decimal point in between.
EDIT: The question this was flagged as a duplicate of does not explain how to convert a binary string back to a float.
Copied from this answer and edited per suggestion from Mark Dickinson:
import struct

def float_to_bin(num):
    # pack as a big-endian 32-bit float, reinterpret those bytes as an
    # unsigned int, then format that int as a 32-character binary string
    return format(struct.unpack('!I', struct.pack('!f', num))[0], '032b')

def bin_to_float(binary):
    # parse the binary string as an int, then reinterpret its bytes as a float
    return struct.unpack('!f', struct.pack('!I', int(binary, 2)))[0]
print float_to_bin(3.14) yields “01000000010010001111010111000011”.
print bin_to_float("11000000001011010111000010100100") yields “-2.71000003815”.
I was able to create a program that takes binary fractions as a string and returns the decimal value!
I used a for loop from 1 up to len() of the string + 1, using i as the (negative) power to raise 2 to, and kept a running total with result +=:
def binary_poin_to_number(bin1) -> float:
    # Try out string slicing here, later
    # the i-th fractional bit is worth 2**-i, e.g. '101' -> 0.625
    result = 0
    for i in range(1, len(bin1) + 1):
        if bin1[i - 1] == '1':
            result += 2**-i
    return result

Convert large decimal number to hexadecimal notation

When creating a String object in Swift you can use a String Format Specifier to convert an integer to hexadecimal notation.
print(String(format:"%x", 1234))
// output: 4d2
// expected output: 4d2
But when numbers become bigger, the output is not as expected.
print(String(format:"%x", 12345678901234))
// output: 73ce2ff2
// expected output: b3a73ce2ff2
It seems that the output of String(format:"%x", n) is truncated at 8 characters. I don't think in hexadecimal natively, which makes debugging hard. I have seen answers for other programming languages explaining that you need to break the large integer up into parts, but that seems wrong to me.
What am I doing wrong here?
What is the right way to convert decimal numbers to hexadecimal numbers in Swift?
You need to use %lx or %llx
print(String(format:"%lx", 12345678901234))
b3a73ce2ff2
Table 2 on the site you linked specifies them
l -
Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a long or unsigned long argument.
x is for unsigned 32-bit integers, which only go up to 4,294,967,295.
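The same length-modifier behavior can be reproduced directly in C's printf, whose format specifiers String(format:) follows; a minimal sketch:
#include <stdio.h>
int main(void) {
    unsigned long long n = 12345678901234ULL;
    printf("%x\n", (unsigned int)n); /* low 32 bits only: 73ce2ff2 */
    printf("%llx\n", n);             /* full 64-bit value: b3a73ce2ff2 */
    return 0;
}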

Hexadecimal string with float number to float

I am working in Python 3.6.
I receive the string '3F8E353F' from serial communication. It represents the float number 1.111. How can I convert this string to a float number?
Thank you
Ah yes. Since this is 32 bits, unpack it into an int first, then:
import struct
x = '3F8E353F'
struct.unpack('f', struct.pack('i', int(x, 16)))
On my system this gives:
>>> x='3F8E353F'
>>> struct.unpack('f',struct.pack('i',int(x,16)))
(1.1109999418258667,)
>>>
Very close to the expected value. However, this can give 'backwards' results based on the 'endianness' of bytes in your system. Some systems store their bytes least significant byte first, others most significant byte first. See this reference page for the descriptors to format based on byte order.
I used struct.unpack('f', struct.pack('i', int(x, 16))) to convert a hex value to a float, but for values with the sign bit set I got the error below:
struct.error: argument out of range
To resolve this I used the code below, which converts the hex value 0xc395aa3d to the float value -299.33. Using the unsigned format 'I' works for both positive and negative values:
x = 'c395aa3d'
struct.unpack('f', struct.pack('I', int(x, 16)))
Another way is to use bytes.fromhex()
import struct
hexstring = '3F8E353F'
struct.unpack('!f', bytes.fromhex(hexstring))[0]
# answer: 1.1109999418258667
Note: The form '!' is available for those poor souls who claim they can’t remember whether network byte order is big-endian or little-endian (from struct docs).
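For reference, these struct calls are just reinterpreting the same 32 bits; a rough C equivalent, assuming float and uint32_t are both 32 bits:
#include <stdio.h>
#include <stdint.h>
#include <string.h>
int main(void) {
    uint32_t bits = 0x3F8E353F;  /* the hex value from the question */
    float f;
    memcpy(&f, &bits, sizeof f); /* reinterpret the bits as IEEE 754 */
    printf("%f\n", f);           /* prints roughly 1.111000 */
    return 0;
}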

Convert a string to a hexadecimal number saved in an int variable

I need a function that can convert a string containing numbers to a hexadecimal integer saved in an integer variable.
For example, the function atoi(char*) converts a string into a decimal number; what I need is something similar, but reading the string as hexadecimal instead of decimal.
All integers store data in the same format: binary. That is neither decimal nor hexadecimal.
If you want to create a string from an integer, that's when you can decide if you want decimal or hexadecimal notation.
You didn't mention what language you are using so I'll just assume C or C++ from the atoi() reference. There is also an itoa() function. It will create a string from an integer, and you can specify if the string will be created using base 16, base 10, or something else.
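For the parsing direction the question asks about (hex string to integer), standard C's strtol with base 16 is one option; a minimal sketch:
#include <stdio.h>
#include <stdlib.h>
int main(void) {
    const char *s = "1a2b";           /* hexadecimal numeral as text */
    long value = strtol(s, NULL, 16); /* parse using base 16 */
    printf("%ld\n", value);           /* prints 6699 */
    return 0;
}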

How do int-to-string casts work in Go?

I only started Go today, so this may be obvious but I couldn't find anything on it.
What does var x uint64 = 0x12345678; y := string(x) give y?
I know var x uint8 = 65; y := string(x) would give y the byte 65, character A, and common sense would suggest (since types larger than uint8 are allowed to be cast to strings) that they would simply be packed into native byte order (i.e. little-endian) and assigned to the variable.
This does not seem to be the case:
hex.EncodeToString([]byte(y)) ==> "efbfbd"
First thought says this is an address with the last byte being left off because of some weird null terminator thingy, but if I allocate two x and y variables with two different values and print them out I get the same result.
var x, x2 uint64 = 0x10000000, 0x20000000
y, y2 := string(x), string(x2)
fmt.Println(hex.EncodeToString([]byte(y))) // "efbfbd"
fmt.Println(hex.EncodeToString([]byte(y2))) // "efbfbd"
Maddeningly I can't find the implementation for the string type anywhere although I probably haven't looked hard enough.
This is covered in the Spec: Conversions: Conversions to and from a string type:
Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD".
So effectively when you convert a numeric value to string, it can only yield a string having one rune (character). And since Go stores strings as the UTF-8 encoded byte sequences in memory, that is what you will see if you convert your string to []byte:
Converting a value of a string type to a slice of bytes type yields a slice whose successive elements are the bytes of the string.
When you try to convert the 0x12345678, 0x10000000 and 0x20000000 values to string, since they are outside of the range of valid Unicode code points, as per the spec they are converted to "\uFFFD", which in UTF-8 encoding is []byte{239, 191, 189}; when encoded to a hex string:
fmt.Println(hex.EncodeToString([]byte("\uFFFD"))) // Output: efbfbd
Or simply:
fmt.Printf("%x", "\uFFFD") // Output: efbfbd
Read the blog post Strings, bytes, runes and characters in Go for more details about string internals.
And by the way, since Go 1.5 the Go runtime is implemented (mostly) in Go, so these conversions are now implemented in Go and can be found in the runtime package: runtime/string.go; look for the intstring() function.
