I am communicating with a servo using Python's serial module. When I perform a serial.read(1) I get the value '\x80'. I need to convert this to decimal (128). Any suggestions?
Oh, never mind. I should have thought a bit first. Googling "opposite of chr()" led me to this page in the Python docs, and ord() does the trick.
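For completeness, a minimal sketch of that approach, assuming a pyserial connection (the port name and baud rate below are placeholders, not from the question):
import serial
# Port and baud rate are assumptions; use whatever your servo expects.
ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
raw = ser.read(1)    # '\x80' as a str on Python 2, b'\x80' as bytes on Python 3
value = ord(raw)     # ord() accepts a length-1 str or bytes, so this gives 128
print(value)         # 128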
I'm practicing for an exam and working on literals, and one question asked me to convert octal 0128 to decimal. The provided solution says it has too many bits to be considered octal, so it can't be converted to decimal, but the reasoning behind that isn't described.
Do you know why? I'm trying to figure it out, but I haven't found an answer yet.
One answer is "invalid input": octal digits run from 0 to 7 (each digit encodes exactly three bits), so 8 is simply not a valid octal digit. A different answer might be to treat the input as "012", with the first non-octal character acting as the terminator of the octal number; the answer would then be 10 decimal.
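To illustrate both readings in Python (only the values from the question are used here):
# '8' is not a valid octal digit, so the whole literal is rejected...
try:
    int("0128", 8)
except ValueError as err:
    print(err)           # invalid literal for int() with base 8: '0128'
# ...whereas stopping at the first non-octal character leaves 012 octal = 10 decimal.
print(int("012", 8))     # 10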
Python version 3.8.10
I don't understand what's going on here. I input the following bytestring and it gives a different value on print.
packet = b'\x02\x00\x00\x00\x08\x35\x03\x19\x01\x00\x01\x00\x00'
print(packet) #result b'\x02\x00\x00\x00\x085\x03\x19\x01\x00\x01\x00\x00'
Same thing when using bytearray.
packet = bytearray()
packet.append(2)
...
packet.append(0)
print(packet) #result bytearray(b'\x02\x00\x00\x00\x085\x03\x19\x01\x00\x01\x00\x00')
I know Python handles strings in a specific encoding, but I don't think that should matter here, given that I'm providing the bytes directly rather than a string. I considered that print sees the \x35 and renders it as ASCII, but that didn't make sense to me for this case.
I really just want to understand what's going on. I searched other posts (and Google) and couldn't find a similar situation. Most other posts were about mishandling encode() and decode(), or about issues when packing bits/bytes, which is obviously not the same situation here.
Thanks in advance.
It's the same string/bytes/data either way. What's the difference here?
b'\x02\x00\x00\x00\x08\x35\x03\x19\x01\x00\x01\x00\x00'
b'\x02\x00\x00\x00\x085\x03\x19\x01\x00\x01\x00\x00'
Evidently, b'\x35' == b'5'.
Indeed, 0x35 is the ASCII code for the character '5':
>>> b'\x35'
b'5'
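If you would rather see every byte as two hex digits, regardless of whether it happens to be printable ASCII, one option (just a sketch, not tied to any particular packet format) is to format the bytes yourself:
packet = b'\x02\x00\x00\x00\x08\x35\x03\x19\x01\x00\x01\x00\x00'
# bytes.hex() shows every byte as two hex digits, bypassing the repr's printable-ASCII shortcut.
print(packet.hex())                          # 02000000083503190100010000
# Or format each byte separately; iterating over bytes yields plain ints.
print(" ".join(f"{b:02x}" for b in packet))  # 02 00 00 00 08 35 03 19 01 00 01 00 00
# Either way the underlying data is unchanged.
print(list(packet))                          # [2, 0, 0, 0, 8, 53, 3, 25, 1, 0, 1, 0, 0]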
How can one generate a random string of a given length in KDB? The string should be composed of both upper and lower case alphabet characters as well as digits, and the first character cannot be a digit.
Example:
"i0J2Jx3qa" / OK
"30J2Jx3qa" / bad
Thank you very much for your help!
stringLength: 13
randomString: (1 ? .Q.A,.Q.a) , ((stringLength-1) ? .Q.nA,.Q.a)
If you prefer not to repeat the character lists:
raze(1,stringLength-1)?'10 0_\:.Q.nA,.Q.a
For the purposes of creating random data you can also use ? (deal) to draw a random symbol of up to 8 characters, which you could then string. This doesn't include digits, though, so it's just an alternative approach to your own answer.
1?`8
,`bghgobnj
There's already a fine answer above which has been accepted. Just wanted somewhere to note that if this is to generate truly random data you need to consider randomising your seed. This can be done in Linux by using $RANDOM in bash or reading up to four bytes from /dev/random (relatively recent versions of kdb can read directly from FIFOs).
Otherwise the seed is set to digits from pi: 314159
In LabVIEW, I convert an array to a string and just output it.
But in Python 3.6, when I use the serial.write(string) function, the string needs to be turned into a bytearray.
Is there any way I can send a string without converting it to a bytearray?
No.
A Python 3.x string is a sequence of Unicode code points. A Unicode code point is an abstract entity, a bit like the colour red: in order to store it or transmit it in digital form it has to be encoded into a specific representation of one or more bytes - a bit like encoding the colour red as #ff0000. To send a string to another computer, you need to encode it into a byte sequence, and because there are many possible encodings you might want to use, you need to specify which one:
bytesToSend = myString.encode(encoding="utf-8")
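Putting that together with pyserial looks roughly like this (a sketch only; the port name, baud rate and command text are placeholders, not taken from the question):
import serial
command = "MOVE 90\n"   # placeholder command string
with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as ser:
    ser.write(command.encode("utf-8"))   # write() takes bytes, so encode first
    # If the device replies in a known encoding, decode the bytes back to a str.
    reply = ser.readline().decode("utf-8", errors="replace")
    print(reply)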
Why didn't you need to do this in LabVIEW? Many older programming languages, including both LabVIEW and Python before 3.x, are based on the assumption that strings are encoded 1:1 into bytes - every character is one byte and every byte is one character. That's the way it worked in the early days of computing when memory was tight and non-English text was unusual, but it's not good enough now that software has to be used globally in a networked world.
Python 3.x took the step of explicitly breaking the link and making strings and byte sequences distinct and incompatible types: this means you have to deal with the difference, but it's not a big deal, just encode and decode as necessary, and in my opinion it's less confusing than trying to pretend that strings and byte sequences are still the same thing.
LabVIEW is finally catching up with Unicode in the NXG version, although for backward compatibility it lets you get away with wiring strings directly to some functions that really operate on byte sequences.
For more information I recommend reading the Python 3.x Unicode HOWTO.
I'm working on a Node.js script that handles strings containing values in exponential notation.
Something like this:
1.070000000000000e+003
Which is the best way to convert (or parse) this string and obtain a floating value?
Thanks for the help.
You can convert by using parseFloat or Number.
If you prefer to parse it explicitly, maybe the best way is with a regular expression:
/-?(?:0|[1-9]\d*)(?:\.\d*)?(?:[eE][+\-]?\d+)?/
as suggested here, and then convert the matched text.