Unexplained behavior in terminal with is operator [duplicate] - python-3.x

This question already has answers here:
Compare if two variables reference the same object in python
(6 answers)
Closed 5 months ago.
The is operator does not match the values of the variables, but the
instances themselves.
What does it really mean?
I declared two variables named x and y and assigned the same value to both, but it returns False when I use the is operator.
I need a clarification. Here is my code.
x = [1, 2, 3]
y = [1, 2, 3]
print(x is y) # It prints False!

You misunderstood what the is operator tests. It tests whether two variables point to the same object, not whether two variables have the same value.
From the documentation for the is operator:
The operators is and is not test for object identity: x is y is true if and only if x and y are the same object.
Use the == operator instead:
print(x == y)
This prints True. x and y are two separate lists:
x[0] = 4
print(y) # prints [1, 2, 3]
print(x == y) # prints False
If you use the id() function you'll see that x and y have different identifiers:
>>> id(x)
4401064560
>>> id(y)
4401098192
but if you were to assign y to x then both point to the same object:
>>> x = y
>>> id(x)
4401064560
>>> id(y)
4401064560
>>> x is y
True
and is confirms that both names now refer to the same object, so it returns True.
Remember that in Python, names are just labels referencing values; you can have multiple names point to the same object. is tells you if two names point to one and the same object. == tells you if two names refer to objects that have the same value.

Another duplicate was asking why two equal strings are generally not identical, which isn't really answered here:
>>> x = 'a'
>>> x += 'bc'
>>> y = 'abc'
>>> x == y
True
>>> x is y
False
So, why aren't they the same string? Especially given this:
>>> z = 'abc'
>>> w = 'abc'
>>> z is w
True
Let's put off the second part for a bit. How could the first comparison ever be True?
The interpreter would have to have an "interning table", a table mapping string values to string objects, so every time you try to create a new string with the contents 'abc', you get back the same object. Wikipedia has a more detailed discussion on how interning works.
And Python has a string interning table; you can manually intern strings with the sys.intern method.
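For example, a minimal sketch of manual interning (the identity results shown are what CPython typically does; identity is never guaranteed by the language itself):
import sys

a = ''.join(['ab', 'c'])   # built at runtime, so typically a fresh, un-interned string
b = 'abc'                  # a compile-time constant
print(a == b, a is b)      # True False (usually; the identity part is not guaranteed)

a = sys.intern(a)
b = sys.intern(b)
print(a is b)              # True: both names now refer to the single interned 'abc'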
In fact, Python is allowed to automatically intern any immutable types, but not required to do so. Different implementations will intern different values.
CPython (the implementation you're using if you don't know which implementation you're using) auto-interns small integers and some special singletons like False, but not strings (or large integers, or small tuples, or anything else). You can see this pretty easily:
>>> a = 0
>>> a += 1
>>> b = 1
>>> a is b
True
>>> a = False
>>> a = not a
>>> b = True
>>> a is b
True
>>> a = 1000
>>> a += 1
>>> b = 1001
>>> a is b
False
OK, but why were z and w identical?
That's not the interpreter automatically interning, that's the compiler folding values.
If the same compile-time string appears twice in the same module (what exactly this means is hard to define—it's not the same thing as a string literal, because r'abc', 'abc', and 'a' 'b' 'c' are all different literals but the same string—but easy to understand intuitively), the compiler will only create one instance of the string, with two references.
In fact, the compiler can go even further: 'ab' + 'c' can be converted to 'abc' by the optimizer, in which case it can be folded together with an 'abc' constant in the same module.
Again, this is something Python is allowed but not required to do. But in this case, CPython always folds small strings (and also, e.g., small tuples). (Although the interactive interpreter's statement-by-statement compiler doesn't run the same optimization as the module-at-a-time compiler, so you won't see exactly the same results interactively.)
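One way to see the folding even interactively is to put both literals inside a single function, since a function body is compiled as one code object. A small sketch (the results shown are what CPython typically does, not a language guarantee):
def f():
    x = 'ab' + 'c'   # constant-folded to 'abc' at compile time
    y = 'abc'
    return x is y

print(f())                   # True on CPython: one shared constant, so one object
print(f.__code__.co_consts)  # the folded 'abc' appears only once among the constants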
So, what should you do about this as a programmer?
Well… nothing. You almost never have any reason to care whether two immutable values are identical. If you want to know when you can use a is b instead of a == b, you're asking the wrong question. Just always use a == b except in two cases (both sketched below):
For more readable comparisons to singleton values, like x is None.
For mutable values, when you need to know whether mutating x will also affect y.
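A quick sketch of those two legitimate uses:
x = None
if x is None:          # the idiomatic way to test for the None singleton
    print("x is unset")

a = [1, 2, 3]
b = a                  # b is another name for the same list object
if a is b:             # so mutating a will be visible through b
    a.append(4)
print(b)               # [1, 2, 3, 4]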

is only returns True if the two names refer to the very same object. If they did, a change made through one name would also show up through the other. Here's an example of the difference.
>>> x = [1, 2, 3]
>>> y = [1, 2, 3]
>>> print(x is y)
False
>>> z = y
>>> print(y is z)
True
>>> print(x is z)
False
>>> y[0] = 5
>>> print(z)
[5, 2, 3]

Prompted by a duplicate question, this analogy might work:
# - Darling, I want some pudding!
# - There is some in the fridge.
pudding_to_eat = fridge_pudding
pudding_to_eat is fridge_pudding
# => True
# - Honey, what's with all the dirty dishes?
# - I wanted to eat pudding so I made some. Sorry about the mess, Darling.
# - But there was already some in the fridge.
pudding_to_eat = make_pudding(ingredients)
pudding_to_eat is fridge_pudding
# => False

is and is not are the two identity operators in Python. The is operator does not compare the values of the variables; it compares the identities of the objects they refer to. Consider this:
>>> a = [1,2,3]
>>> b = [1,2,3]
>>> hex(id(a))
'0x1079b1440'
>>> hex(id(b))
'0x107960878'
>>> a is b
False
>>> a == b
True
>>>
The above example shows that the identity (which in CPython happens to be the memory address) is different for a and b, even though their values are the same. That is why a is b returns False: the identities of the two operands don't match. However, a == b returns True because the == operation only checks whether both operands have the same value.
Interesting example (for extra credit):
>>> del a
>>> del b
>>> a = 132
>>> b = 132
>>> hex(id(a))
'0x7faa2b609738'
>>> hex(id(b))
'0x7faa2b609738'
>>> a is b
True
>>> a == b
True
>>>
In the above example, even though a and b are two different variables, a is b returned True. This is because the type of a is int, which is immutable, and Python (presumably to save memory) reused the same object for b when it was created with the same value. So in this case the identities of the variables matched, and a is b turned out to be True.
The same can happen with other immutable objects (although it is not guaranteed for all of them):
>>> del a
>>> del b
>>> a = "asd"
>>> b = "asd"
>>> hex(id(a))
'0x1079b05a8'
>>> hex(id(b))
'0x1079b05a8'
>>> a is b
True
>>> a == b
True
>>>
Hope that helps.

x is y is the same as id(x) == id(y); both compare the identity of objects.
As @tomasz-kurgan pointed out in the comment below, the is operator behaves unexpectedly with certain objects.
E.g.
>>> class A(object):
...     def foo(self):
...         pass
...
>>> a = A()
>>> a.foo is a.foo
False
>>> id(a.foo) == id(a.foo)
True
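The reason, roughly, and specific to how CPython manages memory: each attribute access a.foo creates a new bound-method object, and in the id(a.foo) == id(a.foo) line the first one is discarded before the second is created, so its address can be reused. A sketch:
class A(object):
    def foo(self):
        pass

a = A()
m1 = a.foo                 # each access builds a fresh bound-method object
m2 = a.foo
print(m1 is m2)            # False: two distinct method objects
print(id(m1) == id(m2))    # also False here, because both objects are alive at once
# In id(a.foo) == id(a.foo) the first method object is freed before the second is
# created, so CPython may hand out the same address again and the ids can coincide.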
Ref:
https://docs.python.org/2/reference/expressions.html#is-not
https://docs.python.org/2/reference/expressions.html#id24

As you can check here, this only applies to small integers. Numbers above 256 are not small ints, so they are computed as different objects.
It is better to use == instead in this case.
Further information is here: http://docs.python.org/2/c-api/int.html
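A small sketch of the boundary (the cached range, -5 to 256, is a CPython implementation detail):
a = 256
b = int('256')    # built at runtime, but still returns the cached object
print(a is b)     # True on CPython: ints from -5 to 256 are cached singletons

a = 257
b = int('257')    # built at runtime, outside the cached range
print(a is b)     # False: two distinct int objects with equal values
print(a == b)     # True: == is what you actually want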

x points to one list, y points to a different list. Those lists are equal, but the is operator compares the references ("pointers") themselves, which are not the same.

It compares object identity, that is, whether the variables refer to the same object in memory. It's like the == in Java or C (when comparing pointers).

A simple example with fruits
fruitlist = [" apple ", " banana ", " cherry ", " durian "]
newfruitlist = fruitlist
verynewfruitlist = fruitlist[:]
print(fruitlist is newfruitlist)
print(fruitlist is verynewfruitlist)
print(newfruitlist is verynewfruitlist)
Output:
True
False
False
If you try
fruitlist = [" apple ", " banana ", " cherry ", " durian "]
newfruitlist = fruitlist
verynewfruitlist = fruitlist[:]
print(fruitlist == newfruitlist)
print(fruitlist == verynewfruitlist)
print(newfruitlist == verynewfruitlist)
The output is different:
True
True
True
That's because the == operator compares just the contents of the variables. To compare the identities of two variables, use the is operator.
To print the identification number:
print(id(variable))

The is operator compares object identities rather than values, so it is not just another way of writing ==.
Because the IDs of the two lists are different, the answer is False.
You can try:
a = [1, 2, 3]
b = a
print(b is a)  # True
Here it is True because both names refer to the same list, so their IDs are the same.

Related

Python is operator behaviour with integers [duplicate]

After diving into Python's source code, I found out that it maintains an array of PyInt_Objects ranging from int(-5) to int(256) (src/Objects/intobject.c).
A little experiment proves it:
>>> a = 1
>>> b = 1
>>> a is b
True
>>> a = 257
>>> b = 257
>>> a is b
False
But if I run that code together in a .py file (or join the statements with semicolons), the result is different:
>>> a = 257; b = 257; a is b
True
I was curious why they are still the same object, so I dug deeper into the syntax tree and compiler and came up with the calling hierarchy listed below:
PyRun_FileExFlags()
    mod = PyParser_ASTFromFile()
        node *n = PyParser_ParseFileFlagsEx() //source to cst
            parsetoke()
                ps = PyParser_New()
                for (;;)
                    PyTokenizer_Get()
                    PyParser_AddToken(ps, ...)
        mod = PyAST_FromNode(n, ...) //cst to ast
    run_mod(mod, ...)
        co = PyAST_Compile(mod, ...) //ast to CFG
            PyFuture_FromAST()
            PySymtable_Build()
            co = compiler_mod()
        PyEval_EvalCode(co, ...)
            PyEval_EvalCodeEx()
Then I added some debug code in PyInt_FromLong and before/after PyAST_FromNode, and executed a test.py:
a = 257
b = 257
print "id(a) = %d, id(b) = %d" % (id(a), id(b))
the output looks like:
DEBUG: before PyAST_FromNode
name = a
ival = 257, id = 176046536
name = b
ival = 257, id = 176046752
name = a
name = b
DEBUG: after PyAST_FromNode
run_mod
PyAST_Compile ok
id(a) = 176046536, id(b) = 176046536
Eval ok
It means that during the cst to ast transform, two different PyInt_Objects are created (actually it's performed in the ast_for_atom() function), but they are later merged.
I find it hard to comprehend the source in PyAST_Compile and PyEval_EvalCode, so I'm asking for help here. I'd appreciate it if someone could give a hint.
Python caches integers in the range [-5, 256], so integers in that range are usually but not always identical.
What you see for 257 is the Python compiler optimizing identical literals when compiled in the same code object.
When typing in the Python shell each line is a completely different statement, parsed and compiled separately, thus:
>>> a = 257
>>> b = 257
>>> a is b
False
But if you put the same code into a file:
$ echo 'a = 257
> b = 257
> print a is b' > testing.py
$ python testing.py
True
This happens whenever the compiler has a chance to analyze the literals together, for example when defining a function in the interactive interpreter:
>>> def test():
...     a = 257
...     b = 257
...     print a is b
...
>>> dis.dis(test)
2 0 LOAD_CONST 1 (257)
3 STORE_FAST 0 (a)
3 6 LOAD_CONST 1 (257)
9 STORE_FAST 1 (b)
4 12 LOAD_FAST 0 (a)
15 LOAD_FAST 1 (b)
18 COMPARE_OP 8 (is)
21 PRINT_ITEM
22 PRINT_NEWLINE
23 LOAD_CONST 0 (None)
26 RETURN_VALUE
>>> test()
True
>>> test.func_code.co_consts
(None, 257)
Note how the compiled code contains a single constant for the 257.
In conclusion, the Python bytecode compiler is not able to perform massive optimizations (like statically typed languages), but it does more than you think. One of these things is to analyze usage of literals and avoid duplicating them.
Note that this has nothing to do with the integer cache, because it also works for floats, which are not cached:
>>> a = 5.0
>>> b = 5.0
>>> a is b
False
>>> a = 5.0; b = 5.0
>>> a is b
True
For more complex literals, like tuples, it "doesn't work":
>>> a = (1,2)
>>> b = (1,2)
>>> a is b
False
>>> a = (1,2); b = (1,2)
>>> a is b
False
But the literals inside the tuple are shared:
>>> a = (257, 258)
>>> b = (257, 258)
>>> a[0] is b[0]
False
>>> a[1] is b[1]
False
>>> a = (257, 258); b = (257, 258)
>>> a[0] is b[0]
True
>>> a[1] is b[1]
True
(Note that constant folding and the peephole optimizer can change behaviour even between bugfix versions, so which examples return True or False is basically arbitrary and will change in the future).
Regarding why you see that two PyInt_Objects are created, I'd guess that this is done to avoid comparing literals. For example, the number 257 can be expressed by multiple literals:
>>> 257
257
>>> 0x101
257
>>> 0b100000001
257
>>> 0o401
257
The parser has two choices:
Convert the literals to some common base before creating the integer, and check whether the literals are equivalent. If they are, create a single integer object.
Create the integer objects and check whether they are equal. If they are, keep only a single value and assign it to all the literals; otherwise, you already have the integers to assign.
Probably the Python parser uses the second approach, which avoids rewriting the conversion code and is also easier to extend (for example, it works with floats as well).
Reading the Python/ast.c file, the function that parses all numbers is parsenumber, which calls PyOS_strtoul to obtain the integer value (for integers) and eventually calls PyLong_FromString:
x = (long) PyOS_strtoul((char *)s, (char **)&end, 0);
if (x < 0 && errno == 0) {
    return PyLong_FromString((char *)s,
                             (char **)0,
                             0);
}
As you can see here, the parser does not check whether it has already found an integer with the given value, which explains why you see two int objects being created.
It also means that my guess was correct: the parser first creates the constants and only afterward optimizes the bytecode to use the same object for equal constants.
The code that does this check must be somewhere in Python/compile.c or Python/peephole.c, since these are the files that transform the AST into bytecode.
In particular, the compiler_add_o function seems the one that does it. There is this comment in compiler_lambda:
/* Make None the first constant, so the lambda can't have a
   docstring. */
if (compiler_add_o(c, c->u->u_consts, Py_None) < 0)
    return 0;
So it seems like compiler_add_o is used to insert constants for functions/lambdas etc.
The compiler_add_o function stores the constants into a dict object, and from this immediately follows that equal constants will fall in the same slot, resulting in a single constant in the final bytecode.
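You can observe the resulting de-duplication from Python itself. A rough sketch (the exact contents of co_consts vary between CPython versions, but the shared constant is the point):
code = compile("a = 257\nb = 257\nprint(a is b)", "<test>", "exec")
print(code.co_consts)   # 257 appears only once among the code object's constants
exec(code)              # prints True: both names load the very same constant object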

Why is it that thing += ['x', 1] acts as .extend but thing = thing + ['x', 1] creates a new object? [duplicate]

The += operator in python seems to be operating unexpectedly on lists. Can anyone tell me what is going on here?
class foo:
    bar = []
    def __init__(self, x):
        self.bar += [x]

class foo2:
    bar = []
    def __init__(self, x):
        self.bar = self.bar + [x]
f = foo(1)
g = foo(2)
print f.bar
print g.bar
f.bar += [3]
print f.bar
print g.bar
f.bar = f.bar + [4]
print f.bar
print g.bar
f = foo2(1)
g = foo2(2)
print f.bar
print g.bar
OUTPUT
[1, 2]
[1, 2]
[1, 2, 3]
[1, 2, 3]
[1, 2, 3, 4]
[1, 2, 3]
[1]
[2]
foo += bar seems to affect every instance of the class, whereas foo = foo + bar seems to behave in the way I would expect things to behave.
The += operator is called a "compound assignment operator".
The general answer is that += tries to call the __iadd__ special method, and if that isn't available it tries to use __add__ instead. So the issue is with the difference between these special methods.
The __iadd__ special method is for an in-place addition, that is it mutates the object that it acts on. The __add__ special method returns a new object and is also used for the standard + operator.
So when the += operator is used on an object which has an __iadd__ defined the object is modified in place. Otherwise it will instead try to use the plain __add__ and return a new object.
That is why for mutable types like lists += changes the object's value, whereas for immutable types like tuples, strings and integers a new object is returned instead (a += b becomes equivalent to a = a + b).
For types that support both __iadd__ and __add__ you therefore have to be careful which one you use. a += b will call __iadd__ and mutate a, whereas a = a + b will create a new object and assign it to a. They are not the same operation!
>>> a1 = a2 = [1, 2]
>>> b1 = b2 = [1, 2]
>>> a1 += [3] # Uses __iadd__, modifies a1 in-place
>>> b1 = b1 + [3] # Uses __add__, creates new list, assigns it to b1
>>> a2
[1, 2, 3] # a1 and a2 are still the same list
>>> b2
[1, 2] # whereas only b1 was changed
For immutable types (where you don't have an __iadd__) a += b and a = a + b are equivalent. This is what lets you use += on immutable types, which might seem a strange design decision until you consider that otherwise you couldn't use += on immutable types like numbers!
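A small sketch of that fallback with an immutable type (a tuple):
t = (1, 2)
before = id(t)
t += (3,)                  # tuples define no __iadd__, so this falls back to __add__
print(t)                   # (1, 2, 3)
print(id(t) == before)     # False: t is now bound to a brand-new tuple object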
For the general case, see Scott Griffith's answer. When dealing with lists like you are, though, the += operator is a shorthand for someListObject.extend(iterableObject). See the documentation of extend().
The extend function will append all elements of the parameter to the list.
When doing foo += something you're modifying the list foo in place, thus you don't change the reference that the name foo points to, but you're changing the list object directly. With foo = foo + something, you're actually creating a new list.
This example code will explain it:
>>> l = []
>>> id(l)
13043192
>>> l += [3]
>>> id(l)
13043192
>>> l = l + [3]
>>> id(l)
13059216
Note how the reference changes when you reassign the new list to l.
As bar is a class variable instead of an instance variable, modifying in place will affect all instances of that class. But when redefining self.bar, the instance will have a separate instance variable self.bar without affecting the other class instances.
The problem here is, bar is defined as a class attribute, not an instance variable.
In foo, the class attribute is modified in the init method, that's why all instances are affected.
In foo2, an instance variable is defined using the (empty) class attribute, and every instance gets its own bar.
The "correct" implementation would be:
class foo:
    def __init__(self, x):
        self.bar = [x]
Of course, class attributes are completely legal. In fact, you can access and modify them without creating an instance of the class like this:
class foo:
    bar = []

foo.bar = [x]
There are two things involved here:
1. class attributes and instance attributes
2. difference between the operators + and += for lists
+ operator calls the __add__ method on a list. It takes all the elements from its operands and makes a new list containing those elements maintaining their order.
+= operator calls __iadd__ method on the list. It takes an iterable and appends all the elements of the iterable to the list in place. It does not create a new list object.
In class foo the statement self.bar += [x] is not an assignment statement but actually translates to
self.bar.__iadd__([x]) # modifies the class attribute
which modifies the list in place and acts like the list method extend.
In class foo2, on the contrary, the assignment statement in the init method
self.bar = self.bar + [x]
can be deconstructed as:
The instance has no attribute bar (there is a class attribute of the same name, though) so it accesses the class attribute bar and creates a new list by appending x to it. The statement translates to:
self.bar = self.bar.__add__([x]) # bar on the lhs is the class attribute
Then it creates an instance attribute bar and assigns the newly created list to it. Note that bar on the rhs of the assignment is different from the bar on the lhs.
For instances of class foo, bar is a class attribute and not instance attribute. Hence any change to the class attribute bar will be reflected for all instances.
On the contrary, each instance of the class foo2 has its own instance attribute bar which is different from the class attribute of the same name bar.
f = foo2(4)
print f.bar # accessing the instance attribute. prints [4]
print f.__class__.bar # accessing the class attribute. prints []
Hope this clears things up.
Although much time has passed and many correct things were said, there is no answer which bundles both effects.
You have 2 effects:
a "special", maybe unnoticed behaviour of lists with += (as stated by Scott Griffiths)
the fact that class attributes as well as instance attributes are involved (as stated by Can Berk Büder)
In class foo, the __init__ method modifies the class attribute. It is because self.bar += [x] translates to self.bar = self.bar.__iadd__([x]). __iadd__() is for inplace modification, so it modifies the list and returns a reference to it.
Note that the instance dict is modified, although this would normally not be necessary since the class dict already contains the same assignment. So this detail goes almost unnoticed - except if you do a foo.bar = [] afterwards. In that case, the instance's bar stays the same thanks to this fact.
In class foo2, however, the class's bar is used, but not touched. Instead, a [x] is added to it, forming a new object, because self.bar.__add__([x]) is called here, which doesn't modify the object. The result is then put into the instance dict, giving the instance the new list as an instance attribute, while the class's attribute stays unmodified.
The distinction between ... = ... + ... and ... += ... also affects the assignments afterwards:
f = foo(1) # adds 1 to the class's bar and assigns f.bar to this as well.
g = foo(2) # adds 2 to the class's bar and assigns g.bar to this as well.
# Here, foo.bar, f.bar and g.bar refer to the same object.
print f.bar # [1, 2]
print g.bar # [1, 2]
f.bar += [3] # adds 3 to this object
print f.bar # As these still refer to the same object,
print g.bar # the output is the same.
f.bar = f.bar + [4] # Construct a new list with the values of the old ones, 4 appended.
print f.bar # Print the new one
print g.bar # Print the old one.
f = foo2(1) # Here a new list is created on every call.
g = foo2(2)
print f.bar # So these each have only one element.
print g.bar
You can verify the identity of the objects with print id(foo), id(f), id(g) (don't forget the additional ()s if you are on Python3).
BTW: The += operator is called "augmented assignment" and generally is intended to do inplace modifications as far as possible.
The other answers would seem to pretty much have it covered, though it seems worth quoting and referring to the Augmented Assignments PEP 203:
They [the augmented assignment operators] implement the same operator
as their normal binary form, except that the operation is done
`in-place' when the left-hand side object supports it, and that the
left-hand side is only evaluated once.
...
The idea behind augmented
assignment in Python is that it isn't just an easier way to write the
common practice of storing the result of a binary operation in its
left-hand operand, but also a way for the left-hand operand in
question to know that it should operate `on itself', rather than
creating a modified copy of itself.
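To illustrate that last point, here is a minimal sketch of a user-defined class that opts into the in-place behaviour (the Bag class and its names are made up for illustration):
class Bag:
    def __init__(self, items=None):
        self.items = list(items or [])
    def __add__(self, other):
        # + builds and returns a brand-new Bag
        return Bag(self.items + list(other))
    def __iadd__(self, other):
        # += mutates this Bag "on itself" and returns it
        self.items.extend(other)
        return self

a = b = Bag([1, 2])          # two names for one Bag
b = b + [3]                  # rebinds b to a new Bag; the shared Bag is untouched
a += [4]                     # mutates the shared Bag in place
print(a.items, b.items)      # [1, 2, 4] [1, 2, 3]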
>>> elements=[[1],[2],[3]]
>>> subset=[]
>>> subset+=elements[0:1]
>>> subset
[[1]]
>>> elements
[[1], [2], [3]]
>>> subset[0][0]='change'
>>> elements
[['change'], [2], [3]]
>>> a=[1,2,3,4]
>>> b=a
>>> a+=[5]
>>> a,b
([1, 2, 3, 4, 5], [1, 2, 3, 4, 5])
>>> a=[1,2,3,4]
>>> b=a
>>> a=a+[5]
>>> a,b
([1, 2, 3, 4, 5], [1, 2, 3, 4])
>>> a = 89
>>> id(a)
4434330504
>>> a = 89 + 1
>>> print(a)
90
>>> id(a)
4430689552 # this is different from before!
>>> test = [1, 2, 3]
>>> id(test)
48638344L
>>> test2 = test
>>> id(test)
48638344L
>>> test2 += [4]
>>> id(test)
48638344L
>>> print(test, test2)
([1, 2, 3, 4], [1, 2, 3, 4])
>>> id(test2)
48638344L # same ID as test: they are still the same object
We see that when we attempt to modify an immutable object (an integer in this case), Python simply gives us a different object instead. On the other hand, we are able to make changes to a mutable object (a list) and have it remain the same object throughout.
ref: https://medium.com/@tyastropheus/tricky-python-i-memory-management-for-mutable-immutable-objects-21507d1e5b95
Also refer to the URL below to understand shallow copy and deep copy:
https://www.geeksforgeeks.org/copy-python-deep-copy-shallow-copy/
listname.extend() works great for this purpose :)

Python assigning variables with an OR on assignment, multiple statements in one line?

I am not super familiar with Python, and I am having trouble reading this code. I have never seen this syntax, where multiple statements are paired together (I think) on one line, separated by commas.
if L1.data < L2.data:
    tail.next, L1 = L1, L1.next
Also, I don't understand assignment in python with "or": where is the conditional getting evaluated? See this example. When would tail.next be assigned L1, and when would tail.next be assigned L2?
tail.next = L1 or L2
Any clarification would be greatly appreciated. I haven't been able to find much on either syntax.
See below
>>> a = 0
>>> b = 1
>>> a, b
(0, 1)
>>> a, b = b, a
>>> a, b
(1, 0)
>>>
It allows one to swap values without requiring a temporary variable.
In your case, the line
tail.next, L1 = L1, L1.next
is equivalent to
tail.next = L1
L1 = L1.next
Note that the whole right-hand side, (L1, L1.next), is evaluated first, and then the targets are assigned from left to right.
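To see it in the linked-list setting of the question, here is a minimal sketch with a stand-in Node class (the class is hypothetical, just to make the snippet runnable):
class Node:
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node

L1 = Node(1, Node(3))
tail = Node(0)

tail.next, L1 = L1, L1.next      # the right-hand side (L1, L1.next) is built first
print(tail.next.data, L1.data)   # 1 3: tail grabbed the old L1, and L1 advanced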
In Python, when we write comma-separated values, it creates a tuple (a kind of data structure).
a = 4, 5
print(type(a))  # <class 'tuple'>
This is called tuple packing.
When we do:
a, b = 4,5
This is called tuple unpacking. It is equivalent to:
a = 4
b = 5
or here is the boolean or operator: L1 or L2 evaluates to L1 if L1 is truthy (for example, not None), and to L2 otherwise. So tail.next is assigned L1 whenever L1 is not None (or otherwise falsy), and L2 only when L1 is falsy.
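A short sketch (tail_next is an ordinary variable standing in for tail.next):
L1 = None
L2 = "second"
tail_next = L1 or L2    # L1 is falsy (None), so the expression evaluates to L2
print(tail_next)        # second

L1 = "first"
tail_next = L1 or L2    # L1 is truthy, so it short-circuits: L2 is not even evaluated
print(tail_next)        # first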

How can I convert many variables to int in one line?

I started to learn Python a few days ago.
I know that I can convert a variable into an int, such as x = int(x), but when I have 5 variables, for example, is there a better way to convert them in one line? In my code I have 2 variables, but what if I have 5 or more variables to convert? I think there is a way.
Thank you for your help.
(Sorry for my English)
x,y=input().split()
y=int(y)
x=int(x)
print(x+y)
You could use something like this:
a, b, c, d = [int(i) for i in input().split()]
Check this small example.
>>> values = [int(x) for x in input().split()]
1 2 3 4 5
>>> values
[1, 2, 3, 4, 5]
>>> values[0]
1
>>> values[1]
2
>>> values[2]
3
>>> values[3]
4
>>> values[4]
5
You have to enter the values separated by spaces. Each value is then converted to an integer and saved into a list. As a beginner you may not yet know what list comprehensions are. This is what the documentation says about them:
List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.
So the expanded version of [int(x) for x in input().split()] is similar to the loop below:
>>> values = []
>>> input_values = input().split()
1 2 3 4 5
>>> for val in input_values:
... values.append(int(val))
...
>>> values
[1, 2, 3, 4, 5]
You don't need to create multiple variables to save your values; in this example all the values are saved in the values list. You can access the first element with values[0] (index 0 is the first value). When the number of input values is large, say 100, you would otherwise have to create 100 variables to hold them, but with a list you can simply access the 100th value with values[99].
This will work with any number of values:
# Split the input and convert each value to int
valuesAsInt = [int(x) for x in input().split()]
# Print the sum of those values
print(sum(valuesAsInt))
The first line is a list comprehension, which is a handy way to map each value in a list to another value. Here you're mapping each string x to int(x), leaving you with a list of integers.
In the second line, sum() sums the whole list, simple as that.
There is one easy way of converting multiple variables into integers in Python:
right, left, top, bottom = int(right), int(left), int(top), int(bottom)
You could use the map function.
x, y = map(int, input().split())
print(x + y)
if the input was:
1 2
the output would be:
3
You could also use tuple unpacking:
x, y = input().split()
x, y = int(x), int(y)
I hope this helped you, have a nice day!

I am loading a dataset in Python. I tried with and without the comma and the result is the same. Can anyone explain its use?

I am loading a JSON dataset and assigning it to variables. I tried using it with or without the comma but the result is the same. Can anyone explain the significance of this comma here?
with open('datafile01.json', 'rb') as fa, open('datafile02.json', 'rb') as fb:
    policies, = json.load(fa).values()
    shifts, = json.load(fb).values()
This:
a, = some_sequence
is a degenerate form of sequence unpacking, which only works for sequences of length 1 (else it raises a ValueError with either "too many values to unpack" or "need more than 0 values to unpack").
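For example, a small sketch (the exact wording of the error messages differs between Python versions):
a, = [1]        # fine: the sequence has exactly one element
print(a)        # 1

try:
    a, = [1, 2]     # too many values to unpack
except ValueError as e:
    print(e)

try:
    a, = []         # not enough values to unpack
except ValueError as e:
    print(e)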
And this does NOT yield the same result as directly binding the sequence:
>>> d = {"x":1}
>>> a, = d.values()
>>> a
1
>>> a = d.values()
>>> a
dict_values([1])
>>>
