Float to Binary and Binary to Float in Python - python-3.x

Can anyone tell me how to convert a float to a 32-bit binary string, and a 32-bit binary string back to a float, in Python?
The bin function in Python works only for integers.
I need a single bit string matching the internal (IEEE 754) representation. I do not want separate bit strings for the parts before and after the decimal point, joined by a point in between.
EDIT: The question this was flagged as a duplicate of does not explain how to convert a binary string back to a float.

Copied from this answer and edited per a suggestion from Mark Dickinson:
import struct

def float_to_bin(num):
    return format(struct.unpack('!I', struct.pack('!f', num))[0], '032b')

def bin_to_float(binary):
    return struct.unpack('!f', struct.pack('!I', int(binary, 2)))[0]

print(float_to_bin(3.14)) yields '01000000010010001111010111000011'.
print(bin_to_float("11000000001011010111000010100100")) yields -2.7100000381469727.
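A quick round-trip sanity check, assuming the two functions above (3.14 is not exactly representable in 32 bits, so what comes back is the nearest float32 value):
>>> bin_to_float(float_to_bin(3.14))
3.140000104904175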

I was able to write a function that takes the fractional part of a binary number as a string and returns the corresponding decimal fraction as a float!
I used a for loop running from 1 to len() of the string + 1, using i as the (negative) power of 2, and accumulated the result with result +=:
def binary_point_to_number(bin1) -> float:
    # Try out string slicing here, later
    result = 0
    for i in range(1, len(bin1) + 1):
        if bin1[i - 1] == '1':
            result += 2 ** -i
    return result
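For example, assuming the function above, the bit string "101" (read as the binary fraction .101) is 1/2 + 1/8:
>>> binary_point_to_number("101")
0.625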

Related

ValueError: could not convert string to float: '[-0.32062087,0.27050002,......]'

My dataframe has a column containing lists of float values. When I use that column as X_train, I get an error saying the string cannot be converted to a float (or to a TensorFlow float dtype).
I tried this:
df['sent_to_vec'].apply(lambda x: float(x))
and a nested for loop to convert the values to float, but neither worked.
Try passing a string that's really just a floating-point number to the Python float() function:
f1 = float('0.1')
print(f1)
It works.
Try passing a string that's not just a floating-point number, but is instead some sort of array or list representation with multiple numbers separated by other punctuation:
f2 = float('[0.1, 0.2]')
print(f2)
You'll get the same error as you're asking about. That string, '[0.1, 0.2]', is not a representation of a floating-point number that float() can read.
You should look for a function that can read a string like '[0.1, 0.2]'. Can you see the code that wrote the Vectorized data.csv file? (Did you write that code, or that file?)
You'll want to use some function that does the reverse of whatever wrote that column of the file.
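If the column really does hold Python-style list literals (an assumption; it depends on how the file was written), a minimal sketch using ast.literal_eval from the standard library would be:
import ast

s = '[-0.32062087, 0.27050002]'
values = [float(v) for v in ast.literal_eval(s)]
print(values)  # [-0.32062087, 0.27050002]
Applied to the dataframe column, that would look like df['sent_to_vec'].apply(ast.literal_eval), but verify the format of the file first.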

How to convert a string that represents a decimal number to a string that represents its binary form?

Take for example a string s="556852144786" that represents the decimal number n=556852144786. Is there a direct algorithm to transform it to s1="1000000110100110..." where 1000000110100110... is the binary representation of n?
I assume that you're looking for an algorithm that operates directly on strings, instead of converting to whatever integers are supported by the target language and using those?
There are various ways to do this. Here are two, though if you squint hard enough, they're almost the same algorithm.
Method 1: repeatedly divide the decimal string by two
In this method, we repeatedly divide the original decimal string by 2 (using decimal arithmetic), and keep track of the remainders: those remainders give you the bits of the result in reverse order.
Here's what that algorithm looks like, in Python. It's missing definitions of is_nonzero and divide_by_two. We'll provide those in a moment.
def dec_to_bin(s):
    """
    Convert decimal string s to binary, using only string operations.
    """
    bits = ""
    while is_nonzero(s):
        s, bit = divide_by_two(s)
        bits += bit
    # The above builds up the binary string in reverse.
    return bits[::-1]
The algorithm generates the bits in reverse order, so we need a final reverse to give the resulting binary string.
The divide_by_two function takes a decimal string s and returns a new decimal string representing the quotient s / 2, along with the remainder. The remainder is a single bit, again represented as a string - either "0" or "1". It follows the usual digit-by-digit school-taught left-to-right division method: each step involves dividing a single digit, along with the carry brought in from the previous step, by two. Here's that function:
def divide_by_two(s):
    """
    Divide the number represented by the decimal string s by 2,
    giving a new decimal string quotient and a remainder bit.
    """
    quotient = ""
    bit = "0"
    for digit in s:
        quotient_digit, bit = divide_digit_by_two(bit, digit)
        quotient += quotient_digit
    return quotient, bit
We're left needing to define divide_digit_by_two, which takes a single digit plus a tens carry bit, and divides by two to get a digit quotient and a single-bit remainder. At this point, all inputs and outputs are strings of length one. Here we cheat and use integer arithmetic, but there are only 20 possible different combinations of inputs, so we could easily have used a lookup table instead.
def divide_digit_by_two(bit, digit):
    """
    Divide a single digit (with tens carry) by two, giving
    a digit quotient and a single bit remainder.
    """
    digit, bit = divmod(int(bit) * 10 + int(digit), 2)
    return str(digit), str(bit)
You can think of divide_digit_by_two as a primitive arithmetic operation that swaps two digits in different bases: it converts a nonnegative integer smaller than 20 represented in the form 10 * bit + digit into the same value represented in the form 2 * digit + bit.
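For example, with the definition above, a carry bit of 1 and the digit 3 together represent 13, and 13 = 2 * 6 + 1, so:
>>> divide_digit_by_two("1", "3")
('6', '1')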
We're still missing one definition: that of is_nonzero. A decimal string represents zero if and only if it consists entirely of zeros. Here's a quick Python test for that.
def is_nonzero(s):
    """
    Determine whether the decimal string s represents zero or not.
    """
    return s.strip('0') != ''
And now that we have all the bits in place, we can test:
>>> dec_to_bin("18")
'10010'
>>> dec_to_bin("556852144786")
'1000000110100110111110010110101010010010'
>>> format(556852144786, 'b') # For comparison
'1000000110100110111110010110101010010010'
Method 2: repeatedly multiply the binary string by 10
Here's a variant on the first method: instead of repeated divisions, we process the incoming decimal string digit by digit (from left to right). We start with an empty binary string representing the value 0, and each time we process a digit we multiply by 10 (in the binary representation) and add the value represented by that digit. As before, it's most convenient to build up the binary string in little-endian order (least-significant bit first) and then reverse at the end to get the traditional big-endian representation. Here's the top-level function:
def dec_to_bin2(s):
    """
    Convert decimal string s to binary, using only string operations.
    Digit-by-digit variant.
    """
    bits = ""
    for digit in s:
        bits = times_10_plus(bits, digit)
    return bits[::-1]
The job of the times_10_plus function is to take a binary string and a decimal digit, and produce a new binary string representing the result of multiplying the value of the original by ten and adding the value of that digit. It looks like this:
def times_10_plus(bits, digit):
    """
    Multiply the value represented by a binary string by 10, add a digit,
    and return the result as a binary string.
    """
    result_bits = ""
    for bit in bits:
        digit, result_bit = divide_digit_by_two(bit, digit)
        result_bits += result_bit
    while digit != "0":
        digit, result_bit = divide_digit_by_two("0", digit)
        result_bits += result_bit
    return result_bits
Note that we're using exactly the same arithmetic primitive divide_digit_by_two as before, but now we're thinking of it slightly differently: it's multiplying a single bit by ten, adding a carry digit, and turning that into a new bit (the least significant bit of the result) along with a new carry digit.
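As a concrete check of that second view (assuming the definitions above): the little-endian bit string "101" represents 5, and 5 * 10 + 3 = 53, whose little-endian binary representation is "101011" (that is, 110101 read in the usual order):
>>> times_10_plus("101", "3")
'101011'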
Efficiency notes
In the interests of clarity, I've left some Python-level inefficiencies. In particular, when building up the strings, it would be more efficient to build a list of bits or digits and then concatenate with a str.join operation right at the end. Also note that depending on the implementation language, it may make more sense to modify either the digit string or the bit string in-place instead of generating a new string at each step. I leave the necessary changes to you.
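As an illustration of the first point, here's a minimal sketch of dec_to_bin with the bits collected in a list and joined once at the end (same behaviour otherwise):
def dec_to_bin_fast(s):
    """
    As dec_to_bin, but accumulates bits in a list and joins once.
    """
    bits = []
    while is_nonzero(s):
        s, bit = divide_by_two(s)
        bits.append(bit)
    return ''.join(reversed(bits))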

Hexadecimal string with float number to float

I am working in Python 3.6.
I receive the string '3F8E353F' over serial communication; it encodes the float 1.111. How can I convert this string to a float?
Thank you
Ah yes. Since this is 32 bits, unpack it into an int first:
import struct

x = '3F8E353F'
struct.unpack('f', struct.pack('i', int(x, 16)))
On my system this gives:
>>> x = '3F8E353F'
>>> struct.unpack('f', struct.pack('i', int(x, 16)))
(1.1109999418258667,)
>>>
Very close to the expected value. However, this can give 'backwards' results based on the 'endianness' of bytes in your system. Some systems store their bytes least significant byte first, others most significant byte first. See this reference page for the descriptors to format based on byte order.
I used struct.unpack('f', struct.pack('i', int(x, 16))) to convert a hex value to a float, but for a negative value I got the error below:
struct.error: argument out of range
To resolve this I used the code below, which uses the unsigned 'I' format and converts the hex value c395aa3d to the float value -299.33. It works for both positive and negative values.
x = 'c395aa3d'
struct.unpack('f', struct.pack('I', int(x, 16)))
Another way is to use bytes.fromhex():
import struct

hexstring = '3F8E353F'
struct.unpack('!f', bytes.fromhex(hexstring))[0]
# answer: 1.1109999418258667
Note: The form '!' is available for those poor souls who claim they can’t remember whether network byte order is big-endian or little-endian (from struct docs).
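To see the byte-order point concretely, here's a small sketch comparing the big-endian and little-endian readings of the same four bytes (the little-endian result is simply whatever those reversed bytes happen to encode):
import struct

raw = bytes.fromhex('3F8E353F')
print(struct.unpack('!f', raw)[0])  # big-endian: 1.1109999418258667
print(struct.unpack('<f', raw)[0])  # little-endian: a different number entirely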

Cannot return a float value of -1.00

I am currently doing an assignment for a computer science paper at university. I am in my first year.
In one of the questions, if the gender is invalid, the function is supposed to return a value of -1. But the testing column says the expected value is -1.00, and I cannot seem to return the value -1.00; it always comes back as -1.0 (with one zero). I used str.format to round to two decimal places (so the value would appear with two zeros), but converting the result to a float always gives -1.0.
return float('{:.2f}'.format(-1))
This isn't as clear as it could be. Does your instructor or testing software expect a string '-1.00'? If so, just return that. Is a float type expected? Then return -1.0; the number of digits shown does not affect the value.
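A minimal sketch of both options, using hypothetical function names and a hypothetical validity test (neither is from the assignment):
def gender_code_as_float(gender):
    # Float return: the value is just -1.0; two decimal places are a display matter.
    if gender not in ('M', 'F'):
        return -1.0

def gender_code_as_string(gender):
    # String return: bake the two decimal places into the text itself.
    if gender not in ('M', 'F'):
        return '{:.2f}'.format(-1)  # '-1.00'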
I don't know exactly what you have done, but I tried it this way and it outputs what you expect:
>>> b = -1
>>> print("%.2f" % b)
-1.00
>>> print("%.2f" % -1)
-1.00
What does the following code do?
print(float('{:.2f}'.format(-1)))
The '{:.2f}'.format(-1) creates a string representation of -1, as defined by the format string. The float(...) converts this string back to the float -1.0. The print command converts this float to a string, using some default format, and prints that string to the screen. I think that isn't what you expected, because the format you used does not affect how print formats the value.
I assume you want
print('{:.2f}'.format(float(-1)))
and this actually does what you want; it prints
-1.00
http://ideone.com/GyINQR
It is not necessary to convert -1 explicitly to float;
print('{:.2f}'.format(-1))
gives the desired result:
http://ideone.com/U2RTMX

Arduino issue: String to float adds two zeros instead of the correct integer

Code snippet:
Serial.println(sensorString); //so you can see the captured string
char carray[sensorString.length() + 1]; //determine size of the array
Serial.println(sizeof(carray));
sensorString.toCharArray(carray, sizeof(carray)); //put sensorString into an array
float sensorStringFloat = atoi(carray); //convert the array into an Integer
Serial.println(sensorStringFloat);
Serial.println(sensorStringFloat) prints out 5.00 instead of the correct float value of 5.33. Why is that and how do I fix this issue? I would eventually like to pass sensorStringFloat over to:
aJson.addNumberToObject(sensor, "ph", sensorStringFloat);
atoi converts a numeral in ASCII to an integer. The comment on that line also says it converts to an integer. So you got an integer result, 5. To convert to floating-point, consider using atof. (Note that “f” stands for floating-point, not “float”. atof returns a double.)
You should pass another parameter that defines the format; in this case, the number of digits after the decimal point:
Serial.println(sensorStringFloat, 2);
String temp = String(_float, 0);
Say you have float x; convert it to a String using
String _temp = String(x, 0);
The second parameter, 0, says you want no decimal places (no trailing zeros).
Caution: this is only suitable for whole numbers.
It would not work for, say, 1.24;
you'd get just 1.
