Why map is not executing a function in python 3 - python-3.x

I have written the small Python program below:
def abc(x):
    print(x)
and then called
map(abc, [1,2,3])
but the above map function has just displayed
<map object at 0x0000000001F0BC88>
instead of printing x value.
I know map returns an iterator in Python 3, but it still should have printed the 'x' value, right? Does this mean that the abc(x) method is not called when we use "map"?

The map iterator lazily computes the values, so you will not see the output until you iterate through them. Here's an explicit way you could make the values print out:
def abc(x):
    print(x)
it = map(abc, [1,2,3])
next(it)
next(it)
next(it)
The next function calls it.__next__ to step to the next value. This is what is used under the hood when you write for i in it: # do something, or when you construct a list from an iterator with list(it), so doing either of these things would also print out the values.
So, why laziness? It comes in handy when working with very large or infinite sequences. Imagine if instead of passing [1,2,3] to map, you passed itertools.count() to it. The laziness allows you to still iterate over the resulting map without trying to generate all (and there are infinitely many) values up front.
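For example, here is a minimal sketch (my own addition, not part of the original answer) of mapping over an infinite iterator; because map is lazy, only the values we actually pull out get computed:
from itertools import count, islice

squares = map(lambda n: n * n, count())  # count() never ends, but nothing runs yet
print(list(islice(squares, 5)))          # [0, 1, 4, 9, 16]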

lazy-evaluation
map (like range, etc.) in Python 3 is lazily evaluated: it only computes values when you need them.
If you want a result of map, you can use:
list(map(abc, [1,2,3]))

Related

adding two lists works but using append() returns None [duplicate]

I've noticed that many operations on lists that modify the list's contents will return None, rather than returning the list itself. Examples:
>>> mylist = ['a', 'b', 'c']
>>> empty = mylist.clear()
>>> restored = mylist.extend(range(3))
>>> backwards = mylist.reverse()
>>> with_four = mylist.append(4)
>>> in_order = mylist.sort()
>>> without_one = mylist.remove(1)
>>> mylist
[0, 2, 4]
>>> [empty, restored, backwards, with_four, in_order, without_one]
[None, None, None, None, None, None]
What is the thought process behind this decision?
To me, it seems hampering, since it prevents "chaining" of list processing (e.g. mylist.reverse().append('a string')[:someLimit]). I imagine it might be that "The Powers That Be" decided that list comprehension is a better paradigm (a valid opinion), and so didn't want to encourage other methods - but it seems perverse to prevent an intuitive method, even if better alternatives exist.
This question is specifically about Python's design decision to return None from mutating list methods like .append. Novices often write incorrect code that expects .append (in particular) to return the same list that was just modified.
For the simple question of "how do I append to a list?" (or debugging questions that boil down to that problem), see Why does "x = x.append([i])" not work in a for loop?.
To get modified versions of the list, see:
For .sort: How can I get a sorted copy of a list?
For .reverse: How can I get a reversed copy of a list (avoid a separate statement when chaining a method after .reverse)?
The same issue applies to some methods of other built-in data types, e.g. set.discard (see How to remove specific element from sets inside a list using list comprehension) and dict.update (see Why doesn't a python dict.update() return the object?).
The same reasoning applies to designing your own APIs. See Is making in-place operations return the object a bad idea?.
The general design principle in Python is for functions that mutate an object in-place to return None. I'm not sure it would have been the design choice I'd have chosen, but it's basically to emphasise that a new object is not returned.
Guido van Rossum (our Python BDFL) states the design choice on the Python-Dev mailing list:
I'd like to explain once more why I'm so adamant that sort() shouldn't
return 'self'.
This comes from a coding style (popular in various other languages, I
believe especially Lisp revels in it) where a series of side effects
on a single object can be chained like this:
x.compress().chop(y).sort(z)
which would be the same as
x.compress()
x.chop(y)
x.sort(z)
I find the chaining form a threat to readability; it requires that the
reader must be intimately familiar with each of the methods. The
second form makes it clear that each of these calls acts on the same
object, and so even if you don't know the class and its methods very
well, you can understand that the second and third call are applied to
x (and that all calls are made for their side-effects), and not to
something else.
I'd like to reserve chaining for operations that return new values,
like string processing operations:
y = x.rstrip("\n").split(":").lower()
There are a few standard library modules that encourage chaining of
side-effect calls (pstat comes to mind). There shouldn't be any new
ones; pstat slipped through my filter when it was weak.
I can't speak for the developers, but I find this behavior very intuitive.
If a method works on the original object and modifies it in-place, it doesn't return anything, because there is no new information - you obviously already have a reference to the (now mutated) object, so why return it again?
If, however, a method or function creates a new object, then of course it has to return it.
So l.reverse() returns nothing (because now the list has been reversed, but the identifier l still points to that list), but reversed(l) has to return the newly generated list because l still points to the old, unmodified list.
EDIT: I just learned from another answer that this principle is called Command-Query separation.
One could argue that the signature itself makes it clear that the function mutates the list rather than returning a new one: if the function returned a list, its behavior would have been much less obvious.
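A quick sketch (not from the original answer) of the in-place vs. copy contrast described above:
l = [1, 2, 3]
print(l.reverse())        # None -- reversed in place, so nothing new to return
print(l)                  # [3, 2, 1]
print(list(reversed(l)))  # [1, 2, 3] -- a new list; l itself stays [3, 2, 1]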
If you were sent here after asking for help fixing your code:
In the future, please try to look for problems in the code yourself, by carefully studying what happens when the code runs. Rather than giving up because there is an error message, check the result of each calculation, and see where the code starts working differently from what you expect.
If you had code calling a method like .append or .sort on a list, you will notice that the return value is None, while the list is modified in place. Study the example carefully:
>>> x = ['e', 'x', 'a', 'm', 'p', 'l', 'e']
>>> y = x.sort()
>>> print(y)
None
>>> print(x)
['a', 'e', 'e', 'l', 'm', 'p', 'x']
y got the special None value, because that is what was returned. x changed, because the sort happened in place.
It works this way on purpose, so that code like x.sort().reverse() breaks. See the other answers to understand why the Python developers wanted it that way.
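As a small illustrative sketch (mine, not part of the answer) of how such a chain fails:
x = [3, 1, 2]
try:
    x.sort().reverse()
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'reverse'
print(x)        # [1, 2, 3] -- the sort itself still happened in place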
To fix the problem
First, think carefully about the intent of the code. Should x change? Do we actually need a separate y?
Let's consider .sort first. If x should change, then call x.sort() by itself, without assigning the result anywhere.
If a sorted copy is needed instead, use y = sorted(x). See How can I get a sorted copy of a list? for details.
For other methods, we can get modified copies like so:
.clear -> there is no point to this; a "cleared copy" of the list is just an empty list. Just use y = [].
.append and .extend -> probably the simplest way is to use the + operator. To add multiple elements from a list l, use y = x + l rather than .extend. To add a single element e wrap it in a list first: y = x + [e]. Another way in 3.5 and up is to use unpacking: y = [*x, *l] for .extend, y = [*x, e] for .append. See also How to allow list append() method to return the new list for .append and How do I concatenate two lists in Python? for .extend.
.reverse -> First, consider whether an actual copy is needed. The built-in reversed gives you an iterator that can be used to loop over the elements in reverse order. To make an actual copy, simply pass that iterator to list: y = list(reversed(x)). See How can I get a reversed copy of a list (avoid a separate statement when chaining a method after .reverse)? for details.
.remove -> Figure out the index of the element that will be removed (using .index), then use slicing to find the elements before and after that point and put them together. As a function:
def without(a_list, value):
    index = a_list.index(value)
    return a_list[:index] + a_list[index+1:]
(We can translate .pop similarly to make a modified copy, though of course .pop actually returns an element from the list.)
See also A quick way to return list without a specific element in Python.
(If you plan to remove multiple elements, strongly consider using a list comprehension (or filter) instead. It will be much simpler than any of the workarounds needed for removing items from the list while iterating over it. This way also naturally gives a modified copy.)
For any of the above, of course, we can also make a modified copy by explicitly making a copy and then using the in-place method on the copy. The most elegant approach will depend on the context and on personal taste.
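For example, a brief sketch (an addition, not from the original answer) pulling together several of the copy-making alternatives listed above:
x = [1, 2, 3]

appended  = x + [4]                    # instead of .append
extended  = [*x, 5, 6]                 # instead of .extend (Python 3.5+)
backwards = list(reversed(x))          # instead of .reverse
in_order  = sorted(x)                  # instead of .sort
without_2 = [e for e in x if e != 2]   # instead of .remove (drops every 2)

print(x)  # [1, 2, 3] -- the original list is never modified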
As we know, a list in Python is a mutable object, and one of the characteristics of a mutable object is the ability to modify its state without needing to assign the new state to a variable. We should look at this topic more closely to understand the root of this issue.
An object whose internal state can be changed is mutable. An immutable object, on the other hand, does not allow any change once it has been created. Object mutability is a core part of Python's object model.
Every object in python has three attributes:
Identity – This refers to the address that the object refers to in the computer’s memory.
Type – This refers to the kind of object that is created. For example integer, list, string etc.
Value – This refers to the value stored by the object. For example, s = "a".
While the identity and type cannot be changed once the object is created, the value can be changed for mutable objects.
Let us walk through the code below step by step to see what this means in Python:
Creating a list which contains names of cities
cities = ['London', 'New York', 'Chicago']
Printing the memory address of the object in hexadecimal format
print(hex(id(cities)))
Output [1]: 0x1691d7de8c8
Adding a new city to the list cities
cities.append('Delhi')
Printing the elements from the list cities, separated by a comma
for city in cities:
    print(city, end=', ')
Output [2]: London, New York, Chicago, Delhi
Printing the memory address of the object again, in hexadecimal format
print(hex(id(cities)))
Output [3]: 0x1691d7de8c8
The above example shows us that we were able to change the internal state of the object cities by adding one more city 'Delhi' to it, yet, the memory address of the object did not change. This confirms that we did not create a new object, rather, the same object was changed or mutated. Hence, we can say that the object which is a type of list with reference variable name cities is a MUTABLE OBJECT.
The internal state of an immutable object, by contrast, cannot be changed. For instance, consider the code below and the error message raised when trying to change the value of a tuple at index 0.
Creating a Tuple with variable name foo
foo = (1, 2)
Changing the index 0 value from 1 to 3
foo[0] = 3
TypeError: 'tuple' object does not support item assignment
We can conclude from these examples why a mutable object shouldn't return anything when an in-place operation is executed on it: the operation modifies the object's internal state directly, so there is no point in returning a new, modified object. An immutable object, by contrast, must return a new object holding the modified state after an operation is executed on it.
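As a short sketch of that contrast (not part of the original answer), using id() as in the cities example:
lst = [1, 2]
before = id(lst)
lst.append(3)              # mutates in place, returns None
print(id(lst) == before)   # True -- still the very same object

tup = (1, 2)
before = id(tup)
tup = tup + (3,)           # builds a brand-new tuple and rebinds the name
print(id(tup) == before)   # False -- a different object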
First of all, I should say that what I am suggesting is, without a doubt, bad programming practice, but if you want to use append in a lambda function and you don't care about code readability, there is a way to do just that.
Imagine you have a list of lists and you want to append an element to each inner list using map and lambda. Here is how you can do that:
my_list = [[1, 2, 3, 4],
           [3, 2, 1],
           [1, 1, 1]]
my_new_element = 10
new_list = list(map(lambda x: [x.append(my_new_element), x][1], my_list))
print(new_list)
How it works:
When the lambda computes its output, it first has to evaluate the expression [x.append(my_new_element), x]. Evaluating this expression runs the append call, so its value becomes [None, x], and by taking the second element with [1], the result of [None, x][1] is simply x.
Using a custom function is more readable and the better option:
def append_my_list(input_list, new_element):
    input_list.append(new_element)
    return input_list
my_list = [[1, 2, 3, 4],
           [3, 2, 1],
           [1, 1, 1]]
my_new_element = 10
new_list = list(map(lambda x: append_my_list(x, my_new_element), my_list))
print(new_list)

Does Python implement short-circuiting in built-in functions such as min()?

Does Python 3 implement short-circuiting in built-in functions whenever possible, just like it does for boolean statements?
A specific example, take the below code snippet:
min((20,11), key = lambda x : x % 10) # 20
Does Python work out beforehand that the minimum possible value of the key function is 0, and therefore stop right after evaluating the first integer in the iterable (20), since 20 % 10 equals 0?
Or does it have to evaluate all the elements in the iterable before returning the answer?
I guess short-circuiting isn't even always possible especially for more complex functions, but what about for well-known, built-in functions or operators like %?
I couldn't find the answer in the official docs.
Python has to evaluate all values inside the iterable, because it computes the key for each element one by one. If your tuple contained something that is not a number, an exception would be raised when the % operation is attempted; Python cannot guess what is inside your iterable. You can test this by defining a function instead of a lambda and setting a breakpoint inside it.
def my_mod(x):
    import ipdb; ipdb.set_trace()
    return x % 20
then call the function
min((20,11), key = my_mod)
You can do a quick error test with:
min((20,11, "s"), key = my_mod)
It will raise an exception, but only after it has evaluated all of the preceding elements in the tuple.
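If you prefer not to pull in a debugger, a small counter sketch (my own addition, not from the answer) shows the same thing:
calls = 0

def key_fn(x):
    global calls
    calls += 1
    return x % 10

print(min((20, 11, 35), key=key_fn))  # 20 -- its key, 20 % 10 == 0, is the smallest
print(calls)                          # 3 -- the key ran for every element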

Finding item at index i of an iterable (syntactic sugar)?

I know that map, range, filter, etc. in Python 3 return iterables and only compute values when required. Suppose that there is a map M. I want to print the i^th element of M.
One way would be to iterate till i^th value, and print it:
for _ in range(i):
    next(M)
print(next(M))
The above takes O(i) time to reach the i^th value.
Another way is to convert to a list, and print the i^th value:
print(list(M)[i])
This however, takes O(n) time and O(n) space (where n is the size of the list from which the map M is created). However, this suits the so-called "Pythonic way of writing one-liners."
I was wondering if there is a syntactic sugar to minimise writing in the first way? (i.e., if there is a way which takes O(i) time, no extra space, and is more suited to the "Pythonic way of writing".)
You can use islice:
from itertools import islice
i = 3
print(next(islice(iterable, i, i + 1)))
This outputs '3'.
The exact stop argument doesn't really matter, as long as it is greater than i (or None), since we only call next once.
Thanks to @DeepSpace for the reference to the official docs, I found the following:
from more_itertools import nth
print(nth(M, i))
It prints the element at the i^th index of the iterable.
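For reference, nth is essentially the classic itertools recipe built on islice; here is a sketch of an equivalent helper, which also takes O(i) time and constant extra space:
from itertools import islice

def nth(iterable, n, default=None):
    # Return the item at index n of the iterable, or a default value.
    return next(islice(iterable, n, None), default)

M = map(lambda v: v * v, range(10))
print(nth(M, 3))  # 9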

what is the use of * in print(*a) where 'a' is a list in python

I am a Python newbie. I saw some code that had * inside a print call, print(*a), where 'a' was a list. I know * is the multiplication operator in Python, but I don't know what it does here with a list.
(If you don't know about functions that take a variable number of arguments, set this topic aside and come back to it after learning that.)
Unpacking elements in list
Consider new_list = [1, 2, 3]. Now assume you have a function addNum(*arguments) that can be called with any number of arguments.
Case 1: Consider calling our function with one element from the list. How will you call it? Will you do it with addNum(new_list[0])?
Cool! No problem.
Case 2: Now consider calling our function with two elements from the list. Will you do it with addNum(new_list[0], new_list[1])?
Seems tricky!!
Case 3: Now consider calling our function with all three elements from the list. Will you call it with addNum(new_list[0], new_list[1], new_list[2])? Instead, what if you could unpack the values with an operator?
Yes! addNum(new_list[0], new_list[1], new_list[2]) <=> addNum(*new_list)
Similarly, addNum(new_list[0], new_list[1]) <=> addNum(*new_list[:2])
Also, addNum(new_list[0]) <=> addNum(*new_list[:1])
By using this operator, you could achieve this!!
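To make the cases above concrete, here is a minimal sketch; the sum-of-arguments body of addNum is just an assumption for illustration:
def addNum(*arguments):
    # hypothetical body: simply add up however many arguments were passed
    return sum(arguments)

new_list = [1, 2, 3]
print(addNum(*new_list))      # 6 -- same as addNum(1, 2, 3)
print(addNum(*new_list[:2]))  # 3 -- same as addNum(1, 2)
print(addNum(*new_list[:1]))  # 1 -- same as addNum(1)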
It'd print all items without the need of looping over the list. The * operator used here unpacks all items from the list.
a = [1,2,3]
print(a)
# [1,2,3]
print(*a)
# 1 2 3
print(*a,sep=",")
# 1,2,3

Python For Loop - Switching from PHP

I am new to Python and usually use PHP. Can someone please explain the structure of the for loop in Python?
numbers = int(x) for x in numbers
In PHP, everything related to a for loop goes inside the loop body. I can't understand why the expression comes before the for loop in Python.
First of all, the statement is missing brackets:
numbers = [int(x) for x in numbers]
This is called a list comprehension and it is the equivalent of:
result = []
for x in numbers:
    result.append(int(x))
numbers = result  # the right-hand side is built first, then the name is rebound
Note that you can also use comprehensions as generator expressions, in which case the [] become ():
numbers = (int(x) for x in numbers)
which is the equivalent of:
def numbers(N):
    for x in N:
        yield int(x)
This means that the for loop only executes one yield at a time as the generator is consumed. In other words, while the first example builds the whole list in memory, a generator returns one element at a time as you iterate over it. This is great for processing large inputs, where you can generate one element at a time without loading everything into memory (e.g. processing a file line by line).
So, as you can see, comprehensions and generator expressions are a great way to reduce the amount of code required to process lists, tuples and any other iterables.
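As a small sketch of that "one element at a time" idea (the file name numbers.txt is just an assumption for illustration):
# Sum integers from a file one line at a time, without building a list in memory.
with open('numbers.txt') as f:
    total = sum(int(line) for line in f)
print(total)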

Resources