Understanding Python sequence - python-3.x

I am doing a HackerRank exercise called Flipping bits: given a list of 32-bit unsigned integers, flip all the bits (1->0 and 0->1) of each and return the result as an unsigned integer.
The correct code is:
def flippingBits(n):
    seq = format(n, '032b')
    return int(''.join(['0' if bit == '1' else '1' for bit in seq]), 2)
I don't understand the last line: what does the ''. part do, and why is there a , 2 at the end?
I have understood most of the code but need help understanding the last part.

what does the ''. part do
'' represents an empty string, which is used as the separator when joining the collection's elements into a single string (some examples can be found here).
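For instance, joining the same list of characters with the empty string versus a visible separator:

''.join(['1', '0', '1'])   # '101'
'-'.join(['1', '0', '1'])  # '1-0-1'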
and why is there a ,2 at the end?
from int docs:
class int(x=0)
class int(x, base=10)
Return an integer object constructed from a number or string x
In this case it parses the provided string as a number in binary format (i.e. base 2) and returns the int.
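For instance:

int('101', 2)       # 5
int('11111111', 2)  # 255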

I hope the below explanation helps:
def flippingBits(n):
    seq = format(n, '032b')  # format n as a zero-padded, 32-character base-2 string
    return int(''.join(['0' if bit == '1' else '1' for bit in seq]), 2)
    # ['0' if bit == '1' else '1' for bit in seq] means: build a list of characters
    #   from the "seq" string in which every '1' becomes '0' and every '0' becomes '1'; then
    # ''.join(char_list) means: build a string by joining the characters in char_list
    #   with nothing between them ('' is the empty delimiter); then
    # int(num_string, 2) means: convert num_string from string to integer in base 2
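For example, checking the function on a tiny input (result computed by hand):

flippingBits(1)  # 4294967294, i.e. 2**32 - 2: the low bit is cleared, the other 31 are set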
Notice that you can do the bit flipping with bitwise operations, without converting to a string and back.
def flippingBits(n):
    inverted_n = ~n  # flip all bits, 0 to 1 and 1 to 0
    return inverted_n + 2**32  # ~n is negative, since Python integers are signed;
                               # adding 2**32 yields the corresponding unsigned 32-bit value
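An equivalent one-liner (a standard idiom, not from the original answer) XORs n against a mask of 32 one-bits, which flips every bit without ever producing a negative intermediate value:

def flippingBits(n):
    return n ^ 0xFFFFFFFF  # XOR with 32 ones flips each of the low 32 bits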

Related

How to multiply numbers of a string by its position in the string

I am a newbie at this. I am trying to multiply every single element of the string below ('10010010') by 2 raised to the power of the element's position in the string, and then sum all the products. So far I am trying to do it like this, but I cannot figure out how to make it work.
def decodingvalue(str1):
    # read each character in input string
    for ch in str1:
        q = sum(2^(ch-1)*ch.isdigit())
    return q
Function call
print(decodingvalue('10010010'))
Thanks a lot for your help!
I think you are trying to convert binary to int. If so, you can do the following:
s = '101110101'
# length is counted 1 to n; decrementing by 1 changes the range to 0-(n-1)
c = len(s) - 1
q = 0
for ch in s:
    print(q, c, ch)
    q = q + (int(ch) * (2**c))  # in Python, power is '**'
    c = c - 1
print(q)
You can of course optimize it and finish in fewer lines.
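For example (my own shortening, assuming the input is a valid binary string):

s = '101110101'
q = sum(int(ch) * 2**i for i, ch in enumerate(reversed(s)))
# or let the built-in parser do the work:
q = int(s, 2)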
In Python, ^ (the caret operator) is bitwise XOR, not exponentiation.

Issue with ASCII in Python 3

I am trying to convert a string of varchar to ASCII, then make it so any number that's not 3 digits has a 0 in front of it, then add a 1 to the very beginning of the string, and finally turn it into one large number that I can apply math to.
I've tried a lot of different coding techniques. The closest I've gotten is below:
s = 'Ak'
for c in s:
    mgk = (''.join(str(ord(c)) for c in s))
num = [mgk]
var = 1
num.insert(0, var)
mgc = lambda num: int(''.join(str(i) for i in num))
num = mgc(num)
print(num)
With this code I get the output: 165107
It's almost doing exactly what I need, but it's dropping the 0 from ord('A'), which is 65; I want it to appear as 065. Everything else seems to be working great. I'm using '%03d' % to insert the 0.
How I want it to work:
Get the ord() value of each character in a string of numbers and letters.
If the ord() value is less than 100 (e.g. A = 65), add a 0 to make it a 3-digit number.
Combine the ord() values into one number, keeping the 0 in front of 65, and then add a 1 to the front, so the output will look like:
1065107
I want to be able to take that number and apply math to it.
I have this code too:
s = 'Ak'
for c in s:
s = ord(c)
s = '%03d'%s
mgk = (''.join(str(s)))
s = [mgk]
var = 1
s.insert(0, var)
mgc = lambda s: int(''.join(str(i) for i in s))
s = mgc(s)
print(s)
but then it counts each letter as its own element and will not combine them, and I only want the 1 in front of the very first number.
Is this what you want? I am kinda confused:
a = 'Ak'
result = '1' + ''.join(f'{ord(char):03d}' for char in a)
print(result)  # 1065107

# to make it a number just do:
my_int = int(result)
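If you later need to get the text back, one possible inverse (my own hypothetical helper, assuming every code point fits in three digits, i.e. ord(c) < 1000) strips the leading '1' and reads the rest in three-digit chunks:

def decode(number):
    digits = str(number)[1:]  # drop the leading '1'
    return ''.join(chr(int(digits[i:i+3])) for i in range(0, len(digits), 3))

print(decode(1065107))  # Ak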

Generate an integer for encryption from a string and vice versa

I am trying to write RSA code in Python 3. I need to turn user-input strings (containing any characters, not only numbers) into integers in order to encrypt them. What is the best way to turn a string into an integer in Python 3.6 without third-party modules?
How to encode a string as an integer is far from unique... there are many ways! This is one of them:
strg = 'user input'
i = int.from_bytes(strg.encode('utf-8'), byteorder='big')
the conversion in the other direction then is:
s = int.to_bytes(i, length=len(strg), byteorder='big').decode('utf-8')
And yes, you need to know the length of the resulting string before converting back. If length is too large, the string will be padded with chr(0) from the left (with byteorder='big'); if length is too small, int.to_bytes will raise an OverflowError: int too big to convert.
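A quick round trip illustrating this (note that length should really be the byte length of the encoded string, which differs from len(strg) for non-ASCII input):

strg = 'user input'
encoded = strg.encode('utf-8')
i = int.from_bytes(encoded, byteorder='big')
back = i.to_bytes(length=len(encoded), byteorder='big').decode('utf-8')
print(back == strg)  # True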
hiro protagonist's answer requires knowing the length of the string. So I tried to find another solution, and found good answers here: Python3 convert Unicode String to int representation. I'll just summarize my favourite solutions here:
import binascii

def str2num(string):
    return int(binascii.hexlify(string.encode("utf-8")), 16)

def num2str(number):
    hex_str = format(number, "x")
    hex_str = hex_str.zfill(len(hex_str) + len(hex_str) % 2)  # unhexlify needs an even number of hex digits
    return binascii.unhexlify(hex_str.encode("utf-8")).decode("utf-8")

def numfy(s, max_code=0x110000):
    # 0x110000 is the number of possible Unicode code points (the max code point is 0x10FFFF)
    number = 0
    for e in [ord(c) for c in s]:
        number = (number * max_code) + e
    return number

def denumfy(number, max_code=0x110000):
    l = []
    while number != 0:
        l.append(chr(number % max_code))
        number = number // max_code
    return ''.join(reversed(l))
Interesting: testing some cases shows me that
str2num(s) == numfy(s, max_code=256) whenever every ord(s[i]) < 128,
and
str2num(s) == int.from_bytes(s.encode('utf-8'), byteorder='big') (hiro protagonist's answer).
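A quick check with an ASCII string confirms the three agree (values computed by hand):

s = 'Hi'
print(str2num(s))                                # 18537
print(numfy(s, max_code=256))                    # 18537
print(int.from_bytes(s.encode('utf-8'), 'big'))  # 18537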

Converting int to string then back to int

How do I pull out a particular digit from a number? For example: getting the 6 out of 768, then multiplying that 6 by 3. I've tried using the code below, but it does not work.
digits = []
digits = str(input("no:"))
print (int(digits[1] * 5))
If my input is 234, then since the value at [1] is 3, how can I multiply that 3 by 5?
input() returns a string (whether or not you explicitly convert it with str() again), so digits[1] is still a single-character string.
You need to convert that single digit to an integer with int(), not the result of the multiplication:
print (int(digits[1]) * 5)
All I did was move a ) parenthesis there.
Your mistake was to multiply the single-character string; multiplying a string by n produces that string repeated n times.
digits[1] is '3', so digits[1] * 5 is '33333'. You want int(digits[1]) * 5.
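To see the difference:

digits = '234'
print(digits[1] * 5)       # '33333' -- string repetition
print(int(digits[1]) * 5)  # 15      -- integer multiplication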

How a compiler converts an integer to a string and vice versa

Many languages have functions for converting a string to an integer and vice versa. So what happens there? What algorithm is executed during the conversion?
I'm not asking about a specific language because I think it should be similar in all of them.
To convert a string to an integer, take each character in turn and, if it's in the range '0' through '9', convert it to its decimal equivalent. Usually that's simply subtracting the character value of '0'. Then multiply any previous result by 10 and add the new value. Repeat until there are no digits left. If there was a leading '-' minus sign, negate the result.
To convert an integer to a string, start by negating the number if it is negative. Divide it by 10 and save the remainder. Convert the remainder to a character by adding the character value of '0', and prepend it to the string; now repeat with the quotient. Stop when the quotient reaches zero. Prepend a '-' minus sign if the number started out negative.
Here are concrete implementations in Python, which in my opinion is the language closest to pseudo-code.
def string_to_int(s):
    i = 0
    sign = 1
    if s[0] == '-':
        sign = -1
        s = s[1:]
    for c in s:
        if not ('0' <= c <= '9'):
            raise ValueError
        i = 10 * i + ord(c) - ord('0')
    return sign * i
def int_to_string(i):
    s = ''
    sign = ''
    if i < 0:
        sign = '-'
        i = -i
    while True:
        remainder = i % 10
        i = i // 10  # integer division (in Python 3, / would produce a float)
        s = chr(ord('0') + remainder) + s
        if i == 0:
            break
    return sign + s
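A quick round trip with these two functions:

print(string_to_int('-768'))  # -768
print(int_to_string(768))     # '768'
print(string_to_int(int_to_string(-12345)))  # -12345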
I wouldn't call it an algorithm per se, but depending on the language it will involve converting characters into their integral equivalents. Many languages will either stop at the first character that cannot be represented as an integer (e.g. the letter a), blindly convert all characters into their ASCII values (e.g. the letter a becomes 97), or ignore characters that cannot be represented as integers and convert only the ones that can, or return 0 / empty. You have to be more specific about the framework/language to get more information.
String to integer:
Many (most) languages represent strings, at some level or another, as an array (or list) of characters, which are themselves small integers. Map the characters corresponding to digits to their numeric values: for example, '0' in ASCII is represented by 48, so you map 48 to 0, 49 to 1, and so on up to 9.
Starting from the left, you multiply your running total by 10, add the next character's value, and move on. (With a larger or smaller map, and a different multiplier at each step, you can convert strings of any base you like.)
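A minimal sketch of that multiply-and-add loop, parameterized by base (the function name and digit table are my own, for illustration):

def parse_int(s, base=10):
    digits = '0123456789abcdefghijklmnopqrstuvwxyz'  # each character's value is its index
    total = 0
    for ch in s.lower():
        total = total * base + digits.index(ch)
    return total

print(parse_int('768'))          # 768
print(parse_int('ff', base=16))  # 255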
Integer to string is a longer process because it involves conversion to base 10. Since most integers have a limited number of bits (usually 32 or 64), you know the result fits in a bounded number of characters (20 digits is enough for a 64-bit value). So you can compute each set bit's value (2^place) and accumulate those values into decimal digits with your own addition routine.
