I am working in Python 3.6.
I receive the string '3F8E353F' from serial communication. It represents the float 1.111. How can I convert this string to a float?
Thank you
Ah yes. Since this is 32 bits, unpack it into an int first, then:
import struct
x = '3F8E353F'
struct.unpack('f', struct.pack('i', int(x, 16)))
On my system this gives:
>>> import struct
>>> x = '3F8E353F'
>>> struct.unpack('f', struct.pack('i', int(x, 16)))
(1.1109999418258667,)
>>>
Very close to the expected value. However, this can give 'backwards' results depending on the endianness of your system: some systems store the least significant byte first, others the most significant byte first. See this reference page for the format characters that select byte order.
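For example, forcing the byte order explicitly removes the ambiguity. A minimal sketch, using the string from the question:
import struct

raw = bytes.fromhex('3F8E353F')
big = struct.unpack('>f', raw)[0]     # big-endian: 1.1109999418258667
little = struct.unpack('<f', raw)[0]  # little-endian reads the same bytes in reverse significance
print(big, little)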
I used struct.unpack('f', struct.pack('i', int(x, 16))) to convert a hex value to a float, but for negative values I get the error below:
struct.error: argument out of range
To resolve this I used the code below, which converts the hex value c395aa3d to the float -299.33. It works for both positive and negative values.
import struct
x = 'c395aa3d'
struct.unpack('f', struct.pack('I', int(x, 16)))
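The error occurs because 'i' is a signed 32-bit format, so struct.pack rejects any value at or above 2**31; the uppercase 'I' is unsigned and accepts the full 32-bit range. A minimal sketch of the difference:
import struct

n = int('c395aa3d', 16)  # 3281431101, above the signed 32-bit maximum of 2**31 - 1
# struct.pack('i', n)    # would raise struct.error: argument out of range
print(struct.unpack('f', struct.pack('I', n))[0])  # the float with bit pattern 0xc395aa3d, about -299.33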
Another way is to use bytes.fromhex()
import struct
hexstring = '3F8E353F'
struct.unpack('!f', bytes.fromhex(hexstring))[0]
# answer: 1.1109999418258667
Note: The form '!' is available for those poor souls who claim they can’t remember whether network byte order is big-endian or little-endian (from struct docs).
What is the simplest way to print the result as follows using Python 3?
I have a hex string s = "FFFC".
If I run print(int(s, 16)) in Python,
the result I'm expecting is -4 (the signed interpretation). But that is not the case: it displays the unsigned interpretation, 65,532.
How can I convert this the easiest way?
Thank you in advance.
There are several ways, but you could just do the math explicitly (assuming s has no more than 4 characters; otherwise use s[-4:]):
i = int(s, 16)
if i >= 0x8000:
    i -= 0x10000
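If the width varies, the same arithmetic generalizes to any bit width. A hypothetical helper (not from the original answer):
def hex_to_signed(s, bits=16):
    # Interpret a hex string as a two's-complement value of the given width.
    i = int(s, 16)
    if i >= 1 << (bits - 1):
        i -= 1 << bits
    return i

print(hex_to_signed('FFFC'))          # -4
print(hex_to_signed('FFFFFFFC', 32))  # -4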
You can use the bytes.fromhex and int.from_bytes class methods.
s = bytes.fromhex('FFFC')
i = int.from_bytes(s, 'big', signed=True)
print(i)
Pretty self-explanatory; the only thing that might need clarification is the 'big' argument, which just means that the byte sequence s has its most significant byte first.
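For the reverse direction, int.to_bytes round-trips the value; a minimal sketch:
s = (-4).to_bytes(2, 'big', signed=True)
print(s.hex().upper())                        # FFFC
print(int.from_bytes(s, 'big', signed=True))  # -4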
I want to convert 16-digit hexadecimal numbers into doubles. I actually did the reverse of this before, which worked fine:
import struct

def double_to_hex(doublein):
    return hex(struct.unpack('<Q', struct.pack('<d', doublein))[0])

for i in modified_list:
    encoded_list.append(double_to_hex(i))
modified_list.clear()
encoded_msg = ''.join(encoded_list).replace('0x', '')
encoded_list.clear()
print_command('encode', encoded_msg)
And now I want to sort of do the reverse. I tried this without success:
from textwrap import wrap
import struct
import binascii
MESSAGE = 'c030a85d52ae57eac0129263c4fffc34'
# Split the message into 16-character (8-byte) chunks
MSGLIST = wrap(MESSAGE, 16)
doubles = []
print(MSGLIST)
for msg in MSGLIST:
    doubles.append(struct.unpack('d', binascii.unhexlify(msg)))
print(doubles)
However, when I run this, I get crazy values, which are of course not what I put in:
[(-1.8561629252326087e+204,), (1.8922789420412524e-53,)]
Were your original numbers -16.657673995556173 and -4.642958715557189?
If so, then the problem is that your hex strings contain the big-endian (most-significant byte first) representation of the double, but the 'd' format string in your unpack call specifies conversion using your system's native format, which happens to be little-endian (least-significant byte first). The result is that unpack reads and processes the bytes of the unhexlify'ed string from the wrong end. Unsurprisingly, that will produce the wrong value.
To fix, do one of:
convert the hex string into little-endian format (reverse the bytes, so c030a85d52ae57ea becomes ea57ae525da830c0) before passing it to binascii.unhexlify, or
reverse the bytes produced by unhexlify (change binascii.unhexlify(msg) to binascii.unhexlify(msg)[::-1]) before you pass them to unpack, or
tell unpack to do the conversion using big-endian order (replace the format string 'd' with '>d')
I'd go with the last one, replacing the format string.
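Applied to the loop from the question, that fix looks like this (a minimal sketch, assuming those were the intended values):
from textwrap import wrap
import struct
import binascii

MESSAGE = 'c030a85d52ae57eac0129263c4fffc34'
doubles = [struct.unpack('>d', binascii.unhexlify(msg))[0] for msg in wrap(MESSAGE, 16)]
print(doubles)  # [-16.657673995556173, -4.642958715557189]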
Can anyone tell me how to convert a float number to 32-bit binary string and from a 32-bit binary string to a float number in python?
The bin function in Python works only for integers.
I need a single bit string as in internal representation. I do not want separate bit strings for the number before and after the decimal places joined by a decimal place in between.
EDIT: The flagged duplicate does not explain how to convert a binary string back to a float.
Copied from this answer and edited per suggestion from Mark Dickinson:
import struct

def float_to_bin(num):
    return format(struct.unpack('!I', struct.pack('!f', num))[0], '032b')

def bin_to_float(binary):
    return struct.unpack('!f', struct.pack('!I', int(binary, 2)))[0]
print(float_to_bin(3.14)) yields '01000000010010001111010111000011'.
print(bin_to_float('11000000001011010111000010100100')) yields -2.7100000381469727.
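The same pattern extends to 64-bit doubles by widening the format codes to '!Q'/'!d' and the field width to 64; a minimal sketch, assuming the same naming scheme:
import struct

def double_to_bin(num):
    return format(struct.unpack('!Q', struct.pack('!d', num))[0], '064b')

def bin_to_double(binary):
    return struct.unpack('!d', struct.pack('!Q', int(binary, 2)))[0]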
I was able to create a program that takes a binary fraction as a string and returns its decimal value!
I used a for loop running from 1 to len() of the string (inclusive), using i as the exponent to raise 2 to the power -i, and kept track of the running total with result +=:
def binary_point_to_number(bin1) -> float:
    # Try out string slicing here, later
    result = 0
    for i in range(1, len(bin1) + 1):
        if bin1[i-1] == '1':
            result += 2**-i
    return result
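For example, the fractional bits '101' (i.e. binary 0.101) evaluate to 0.5 + 0.125:
print(binary_point_to_number('101'))  # 0.625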
When creating a String object in Swift you can use a String Format Specifier to convert an integer to hexadecimal notation.
print(String(format:"%x", 1234))
// output: 4d2
// expected output: 4d2
But when numbers become bigger, the output is not as expected.
print(String(format:"%x", 12345678901234))
// output: 73ce2ff2
// expected output: b3a73ce2ff2
It seems that the output of String(format:"%x", n) is truncated at 8 characters. I don't think in hexadecimal natively, which makes debugging hard. I have seen answers for other programming languages explaining that you need to break up the large integer into parts, but that seems wrong to me.
What am I doing wrong here?
What is the right way to convert decimal numbers to hexadecimal numbers in Swift?
You need to use %lx or %llx
print(String(format:"%lx", 12345678901234))
b3a73ce2ff2
Table 2 on the site you linked specifies them:
l: Length modifier specifying that a following d, o, u, x, or X conversion specifier applies to a long or unsigned long argument.
x is for unsigned 32-bit integers, which only go up to 4,294,967,295.
I only started Go today, so this may be obvious but I couldn't find anything on it.
What does var x uint64 = 0x12345678; y := string(x) give y?
I know var x uint8 = 65; y := string(x) would give y the byte 65, character A, and common sense would suggest (since types larger than uint8 are allowed to be converted to strings) that they would simply be packed in native byte order (i.e. little-endian) and assigned to the variable.
This does not seem to be the case:
hex.EncodeToString([]byte(y)) ==> "efbfbd"
My first thought was that this is an address with the last byte left off because of some weird null-terminator thingy, but if I create two pairs of x and y variables with different values and print them out, I get the same result.
var x, x2 uint64 = 0x10000000, 0x20000000
y, y2 := string(x), string(x2)
fmt.Println(hex.EncodeToString([]byte(y))) // "efbfbd"
fmt.Println(hex.EncodeToString([]byte(y2))) // "efbfbd"
Maddeningly, I can't find the implementation of the string type anywhere, although I probably haven't looked hard enough.
This is covered in the Spec: Conversions: Conversions to and from a string type:
Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD".
So effectively when you convert a numeric value to string, it can only yield a string having one rune (character). And since Go stores strings as the UTF-8 encoded byte sequences in memory, that is what you will see if you convert your string to []byte:
Converting a value of a string type to a slice of bytes type yields a slice whose successive elements are the bytes of the string.
When you try to convert the 0x12345678, 0x10000000 and 0x20000000 values to string, since they are outside the range of valid Unicode code points, per the spec they are converted to "\uFFFD", which in UTF-8 encoding is []byte{239, 191, 189}; when encoded to a hex string:
fmt.Println(hex.EncodeToString([]byte("\uFFFD"))) // Output: efbfbd
Or simply:
fmt.Printf("%x", "\uFFFD") // Output: efbfbd
Read the blog post Strings, bytes, runes and characters in Go for more details about string internals.
And btw since Go 1.5 the Go runtime is implemented (mostly) in Go, so these conversions are now implemented in Go and can be found in the runtime package: runtime/string.go, look for the intstring() function.