Difference between normal for loop statement and for loop statement inside a list initialization in python [duplicate] - python-3.x

I have the following code:
[x ** 2 for x in range(10)]
When I run it in the Python shell, it returns:
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
I've searched and it seems this is called a list comprehension and similarly there seem to be set/dict comprehensions and generator expressions. But how does it work?

From the documentation:
List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.
About your question, the list comprehension does the same thing as the following "plain" Python code:
>>> l = []
>>> for x in range(10):
...     l.append(x**2)
...
>>> l
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
How do you write it in one line? Hmm...we can...probably...use map() with lambda:
>>> list(map(lambda x: x**2, range(10)))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
But isn't it clearer and simpler to just use a list comprehension?
>>> [x**2 for x in range(10)]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
Basically, we can do anything with x. Not only x**2. For example, run a method of x:
>>> [x.strip() for x in ('foo\n', 'bar\n', 'baz\n')]
['foo', 'bar', 'baz']
Or use x as another function's argument:
>>> [int(x) for x in ('1', '2', '3')]
[1, 2, 3]
We can also, for example, use x as the key of a dict object. Let's see:
>>> d = {'foo': '10', 'bar': '20', 'baz': '30'}
>>> [d[x] for x in ['foo', 'baz']]
['10', '30']
How about a combination?
>>> d = {'foo': '10', 'bar': '20', 'baz': '30'}
>>> [int(d[x].rstrip('0')) for x in ['foo', 'baz']]
[1, 3]
And so on.
You can also use if or if...else in a list comprehension. For example, you only want odd numbers in range(10). You can do:
>>> l = []
>>> for x in range(10):
...     if x%2:
...         l.append(x)
...
>>> l
[1, 3, 5, 7, 9]
Ah that's too complex. What about the following version?
>>> [x for x in range(10) if x%2]
[1, 3, 5, 7, 9]
To use an if...else ternary expression, you need to put the if ... else ... after x, not after range(10):
>>> [i if i%2 != 0 else None for i in range(10)]
[None, 1, None, 3, None, 5, None, 7, None, 9]
Have you heard about nested list comprehension? You can put two or more fors in one list comprehension. For example:
>>> [i for x in [[1, 2, 3], [4, 5, 6]] for i in x]
[1, 2, 3, 4, 5, 6]
>>> [j for x in [[[1, 2], [3]], [[4, 5], [6]]] for i in x for j in i]
[1, 2, 3, 4, 5, 6]
Let's talk about the first part, for x in [[1, 2, 3], [4, 5, 6]] which gives [1, 2, 3] and [4, 5, 6]. Then, for i in x gives 1, 2, 3 and 4, 5, 6.
Warning: You always need to put for x in [[1, 2, 3], [4, 5, 6]] before for i in x:
>>> [j for j in x for x in [[1, 2, 3], [4, 5, 6]]]
Traceback (most recent call last):
File "<input>", line 1, in <module>
NameError: name 'x' is not defined
We also have set comprehensions, dict comprehensions, and generator expressions.
set comprehensions and list comprehensions are basically the same, but the former returns a set instead of a list:
>>> {x for x in [1, 1, 2, 3, 3, 1]}
{1, 2, 3}
It's the same as:
>>> set([i for i in [1, 1, 2, 3, 3, 1]])
{1, 2, 3}
A dict comprehension looks like a set comprehension, but it uses {key: value for key, value in ...} or {i: i for i in ...} instead of {i for i in ...}.
For example:
>>> {i: i**2 for i in range(5)}
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
And it equals:
>>> d = {}
>>> for i in range(5):
...     d[i] = i**2
...
>>> d
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
Does (i for i in range(5)) give a tuple? No! It's a generator expression, which returns a generator:
>>> (i for i in range(5))
<generator object <genexpr> at 0x7f52703fbca8>
It's the same as:
>>> def gen():
...     for i in range(5):
...         yield i
...
>>> gen()
<generator object gen at 0x7f5270380db0>
And you can use it as a generator:
>>> gen = (i for i in range(5))
>>> next(gen)
0
>>> next(gen)
1
>>> list(gen)
[2, 3, 4]
>>> next(gen)
Traceback (most recent call last):
File "<input>", line 1, in <module>
StopIteration
Note: If you pass a comprehension directly to a function that simply loops over its argument, you can drop the [] and use a generator expression instead. For example, sum():
>>> sum(i**2 for i in range(5))
30
Related (about generators): Understanding Generators in Python.

There are list, dictionary, and set comprehensions, but no tuple comprehensions (though do explore "generator expressions").
They address the problem that traditional loops in Python are statements (they don't return anything) rather than expressions, which return a value.
They are not the solution to every problem and can be rewritten as traditional loops. They become awkward when state needs to be maintained and updated between iterations.
They typically consist of:
[<output expression> for <variable> in <input sequence> <optional predicate>]
but can be twisted in lots of interesting and bizarre ways.
They are analogous to the traditional map() and filter() operations, which still exist in Python and continue to be used (see the sketch below).
When done well, they have a high satisfaction quotient.
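To make the last two points concrete, here is a small illustrative sketch (not from the original answer): a comprehension next to the equivalent map()/filter() pipeline, and a running-total loop that carries state between iterations and therefore reads better as a plain loop:
nums = range(10)
evens_squared_comp = [n ** 2 for n in nums if n % 2 == 0]
evens_squared_map = list(map(lambda n: n ** 2, filter(lambda n: n % 2 == 0, nums)))
assert evens_squared_comp == evens_squared_map == [0, 4, 16, 36, 64]

running_totals = []
total = 0
for n in nums:
    total += n                    # state carried between iterations
    running_totals.append(total)  # awkward to express as a single comprehension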

If you prefer a more visual way of figuring out what's going on then maybe this will help:
# for the example in the question...
y = []
for x in range(10):
    y += [x**2]
# is equivalent to...
y = [x**2 for x in range(10)]
# for a slightly more complex example, it is useful
# to visualize where the various x's end up...
a = [1,2,3,4]
b = [3,4,5,6]
c = []
for x in a:
    if x in b:
        c += [x]
# (the original answer drew ASCII arrows here, mapping each piece of the
#  loop above onto its place in the comprehension below)
c = [x for x in a if x in b]
print(c)
...produces the output [3, 4]

I've seen a lot of confusion lately (on other SO questions and from coworkers) about how list comprehensions work. A wee bit of math education can help with why the syntax is like this, and what list comprehensions really mean.
The syntax
It's best to think of list comprehensions as predicates over a set/collection, as we would in mathematics with set-builder notation. The notation actually feels pretty natural to me, because I hold an undergrad degree in Mathematics. But forget about me: Guido van Rossum (the inventor of Python) holds a master's in Mathematics and has a math background.
Set builder notation crash course
Here are the very basics of how set-builder notation works. Take, for example, {x ∈ ℕ | x > 0}: it reads "the set of all natural numbers x such that x is greater than zero". So this set-builder notation represents the set of numbers that are strictly positive (i.e. [1, 2, 3, 4, ...]).
Points of confusion
1) The predicate filter in set-builder notation only specifies which items we want to keep, and list comprehension predicates do the same thing. You don't have to include special logic for omitting items; they are omitted unless included by the predicate. An empty predicate (i.e. no conditional at the end) includes all items in the given collection.
2) The predicate filter in set-builder notation goes at the end, and similarly in list comprehensions. (Some) beginners think something like [x < 5 for x in range(10)] will give them the list [0, 1, 2, 3, 4], when in fact it outputs [True, True, True, True, True, False, False, False, False, False]: we asked Python to evaluate x < 5 for every item in range(10), and no predicate means we get everything from the set (just like in set-builder notation).
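For example, the filtering behaviour those beginners are usually after puts the condition in the predicate position at the end:
>>> [x < 5 for x in range(10)]        # evaluates x < 5 for every item
[True, True, True, True, True, False, False, False, False, False]
>>> [x for x in range(10) if x < 5]   # keeps only the items where x < 5 is True
[0, 1, 2, 3, 4]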
If you keep set builder notation in the back of your mind while using list comprehensions, they're a bit easier to swallow.
HTH!

Introduction
A list comprehension is a high-level, declarative way to create a list in Python. The main benefits of comprehensions are readability and maintainability. A lot of people find them very readable, and even developers who have never seen them before can usually guess correctly what they mean.
# Snippet 1
squares = [n ** 2 for n in range(5)]
# Snippet 2
squares = []
for n in range(5):
    squares.append(n ** 2)
Both snippets of code will produce squares to be equal to [0, 1, 4, 9, 16].
Notice that the first snippet declares what kind of list you want, while the second specifies how to create it. This is why a comprehension is high-level and declarative.
Syntax
[EXPRESSION for VARIABLE in SEQUENCE]
EXPRESSION is any Python expression, though it typically contains some variable. This variable is stated in the VARIABLE field. SEQUENCE defines the source of values the variable enumerates through.
Considering Snippet 1, [n ** 2 for n in range(5)]:
EXPRESSION is n ** 2
VARIABLE is n
SEQUENCE is range(5)
Notice that if you check the type of squares you will get that the list comprehension is just a regular list:
>>> type(squares)
<class 'list'>
More about EXPRESSION
The expression can be anything that reduces to a value:
Arithmetic expressions such as n ** 2 + 3 * n + 1
A function call like f(n) using n as variable
A slice operation like s[::-1]
Method calls bar.foo()
...
Some examples:
>>> [2 * x + 3 for x in range(5)]
[3, 5, 7, 9, 11]
>>> [abs(num) for num in range(-5, 5)]
[5, 4, 3, 2, 1, 0, 1, 2, 3, 4]
>>> animals = ['dog', 'cat', 'lion', 'tiger']
>>> [animal.upper() for animal in animals]
['DOG', 'CAT', 'LION', 'TIGER']
Filtering:
The order of elements in the final list is determined by the order of SEQUENCE. However, you can filter out elements by adding an if clause:
[EXPRESSION for VARIABLE in SEQUENCE if CONDITION]
CONDITION is an expression that evaluates to True or False. Technically, the condition doesn't have to depend upon VARIABLE, but it typically uses it.
Examples:
>>> [n ** 2 for n in range(5) if n % 2 == 0]
[0, 4, 16]
>>> animals = ['dog', 'cat', 'lion', 'tiger']
>>> [animal for animal in animals if len(animal) == 3]
['dog', 'cat']
Also, remember that Python allows you to write other kinds of comprehensions besides list comprehensions (a quick sketch follows the list below):
dictionary comprehensions
set comprehensions
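A quick illustrative sketch of both, reusing the animals list from the earlier examples:
>>> animals = ['dog', 'cat', 'lion', 'tiger']
>>> {animal: len(animal) for animal in animals}   # dictionary comprehension
{'dog': 3, 'cat': 3, 'lion': 4, 'tiger': 5}
>>> {len(animal) for animal in animals}           # set comprehension
{3, 4, 5}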

Related

How can I get multiple output variables into a list?

I'm wondering if there's a way of getting multiple outputs from a function into a list. I'm not interested in creating a list inside of a function for reasons I'm not going to waste your time going into.
I know how many output variables I am expecting, but only through using the annotations["return"] expression (or whatever you call that, sorry for the noobish terminology) and this changes from case to case, which is why I need this to be dynamic.
I know I can use lists as multiple variables using function(*myList), but I'm interested in if there's a way of doing the equivalent when receiving return values from a function.
Cheers!
Pseudocode:
def function():
    x = 1
    y = 2
    return x, y

variables = function()
print(variables[0], " and ", variables[1])
result should be = "1 and 2"
Yes, with an unpacking assignment, e.g. a, b, c = myfunction(...). You can put a * on one of the targets to make it absorb a variable number of values:
>>> a,b,c=range(3) #if you know that the thing contains exactly 3 elements you can do this
>>> a,b,c
(0, 1, 2)
>>> a,b,*c=range(10) # when you know there are at least two values: the first two go into a and b, and whatever is left goes into c, which will be a list
>>> a,b,c
(0, 1, [2, 3, 4, 5, 6, 7, 8, 9])
>>> a,*b,c=range(10)
>>> a,b,c
(0, [1, 2, 3, 4, 5, 6, 7, 8], 9)
>>> *a,b,c=range(10)
>>> a,b,c
([0, 1, 2, 3, 4, 5, 6, 7], 8, 9)
>>>
Additionally, you can return whatever you want from a function (a list, a tuple, a dict, etc.), but only one thing:
>>> def fun():
return 1,"boo",[1,2,3],{1:10,3:23}
>>> fun()
(1, 'boo', [1, 2, 3], {1: 10, 3: 23})
>>>
In this example it returns a tuple with all that stuff because , is the tuple constructor, so it makes a tuple first (your one thing) and returns it.
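To tie this back to getting the outputs into a list: since the function returns one tuple, you can simply convert it, or use a starred target. A minimal sketch based on the question's function():
def function():
    x = 1
    y = 2
    return x, y

variables = list(function())   # [1, 2] -- the returned tuple as a list
first, *rest = function()      # first == 1, rest == [2] (rest is already a list)
print(variables[0], "and", variables[1])   # prints: 1 and 2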

How do I multiply elements in a list and find the sum in a simple way without numpy, zip etc

I have to multiply the same index of two lists and then find the sum of that.
Please help me out! Thank you.
Try this,
>>> A=[2, 3, -6, 7, 10, 11]
>>> B=[1, 2, 3, 4, 5, 6]
>>> sum([x * y for x, y in zip(A, B)])
134
Let me explain what I did in my answer. I used the zip() built-in function, and this is what the documentation says about it:
Make an iterator that aggregates elements from each of the iterables.
Returns an iterator of tuples, where the i-th tuple contains the i-th element from each of the argument sequences or iterables. The iterator stops when the shortest input iterable is exhausted. With a single iterable argument, it returns an iterator of 1-tuples. With no arguments, it returns an empty iterator.
A little bit confusing, right? Check the example below:
>>> A=[2, 3, -6, 7, 10, 11]
>>> B=[1, 2, 3, 4, 5, 6]
>>> zip(A,B)
<zip at 0x1fde73e6f88> # it display something like this (zip object)
>>> list(zip(A,B)) # for visualization purpose, convert zip object to list
[(2, 1), (3, 2), (-6, 3), (7, 4), (10, 5), (11, 6)]
I think you can now see what happens inside the zip() function. The zip object gives us a series of pairs built from the lists A and B; using a list comprehension (the more Pythonic way to answer your question), we unpack each pair into x and y, multiply them, and collect the products in a list.
>>> [x * y for x, y in zip(A, B)]
[2, 6, -18, 28, 50, 66]
Finally, we use sum() to add up that list of numbers:
>>> sum([2, 6, -18, 28, 50, 66])
134
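As a side note, the intermediate list isn't strictly necessary: sum() will happily consume a generator expression, so the whole computation can be written in one step:
>>> sum(x * y for x, y in zip(A, B))
134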
That's all; if anything is still unclear, please add a comment to this answer.
AB = [A[i] * B[i] for i in range(len(A))]
sum(AB)
Alternatively, try
AB = [value_ * B[i] for i, value_ in enumerate(A)]
sum(AB)
I believe you want this:
def lists(A, B):
    C = 0
    for i in range(len(A)):
        C += (A[i] * B[i])
    return C
Now you can call your method lists with lists A and B like this:
A=[2, 3, -6, 7, 10, 11]
B=[1, 2, 3, 4, 5, 6]
lists(A,B)
Which will return 134. Your code was wrong because of your indentation: you had put your return statement inside the for loop, so your code returned the value of C after the first iteration, which was 0 + 2*1.
with list comprehension:
A = [2, 3, -6, 7, 10, 11]
B = [1, 2, 3, 4, 5, 6]
print (sum([A[i]*B[i] for i in range(len(A))]))
output:
134
your code:
def lists(A, B):
    C = 0
    for i in range(len(A)):
        C += (A[i] * B[i])
    return C   # <-----
A = [2, 3, -6, 7, 10, 11]
B = [1, 2, 3, 4, 5, 6]
print (lists(A,B))
NOTE: you need to put your return statement outside of the for loop.
If a return statement is reached during the execution of a function, it exits the function immediately. In your case, with return inside the for loop, the function returns during the first iteration (you got the result 2 because the first iteration computes 2*1).

Extracting data from nested lists [duplicate]

Yes, I know this subject has been covered before:
Python idiom to chain (flatten) an infinite iterable of finite iterables?
Flattening a shallow list in Python
Comprehension for flattening a sequence of sequences?
How do I make a flat list out of a list of lists?
but as far as I know, all solutions, except for one, fail on a list like [[[1, 2, 3], [4, 5]], 6], where the desired output is [1, 2, 3, 4, 5, 6] (or perhaps even better, an iterator).
The only solution I saw that works for an arbitrary nesting is found in this question:
def flatten(x):
    result = []
    for el in x:
        if hasattr(el, "__iter__") and not isinstance(el, basestring):
            result.extend(flatten(el))
        else:
            result.append(el)
    return result
Is this the best approach? Did I overlook something? Any problems?
Using generator functions can make your example easier to read and improve performance.
Python 2
Using the Iterable ABC added in 2.6:
from collections import Iterable

def flatten(xs):
    for x in xs:
        if isinstance(x, Iterable) and not isinstance(x, basestring):
            for item in flatten(x):
                yield item
        else:
            yield x
Python 3
In Python 3, basestring is no more, but the tuple (str, bytes) gives the same effect. Also, yield from delegates to a sub-generator, yielding each of its items in turn.
from collections.abc import Iterable

def flatten(xs):
    for x in xs:
        if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
            yield from flatten(x)
        else:
            yield x
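A quick usage sketch with the list from the question, using the Python 3 version above:
>>> list(flatten([[[1, 2, 3], [4, 5]], 6]))
[1, 2, 3, 4, 5, 6]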
My solution:
import collections

def flatten(x):
    if isinstance(x, collections.Iterable):
        return [a for i in x for a in flatten(i)]
    else:
        return [x]
A little more concise, but pretty much the same.
Generator using recursion and duck typing (updated for Python 3):
def flatten(L):
    for item in L:
        try:
            yield from flatten(item)
        except TypeError:
            yield item

>>> list(flatten([[[1, 2, 3], [4, 5]], 6]))
[1, 2, 3, 4, 5, 6]
Here is my functional version of recursive flatten which handles both tuples and lists, and lets you throw in any mix of positional arguments. Returns a generator which produces the entire sequence in order, arg by arg:
flatten = lambda *n: (e for a in n
for e in (flatten(*a) if isinstance(a, (tuple, list)) else (a,)))
Usage:
l1 = ['a', ['b', ('c', 'd')]]
l2 = [0, 1, (2, 3), [[4, 5, (6, 7, (8,), [9]), 10]], (11,)]
print list(flatten(l1, -2, -1, l2))
['a', 'b', 'c', 'd', -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
Generator version of #unutbu's non-recursive solution, as requested by #Andrew in a comment:
def genflat(l, ltypes=collections.Sequence):
    l = list(l)
    i = 0
    while i < len(l):
        while isinstance(l[i], ltypes):
            if not l[i]:
                l.pop(i)
                i -= 1
                break
            else:
                l[i:i + 1] = l[i]
        yield l[i]
        i += 1
Slightly simplified version of this generator:
def genflat(l, ltypes=collections.Sequence):
    l = list(l)
    while l:
        while l and isinstance(l[0], ltypes):
            l[0:1] = l[0]
        if l:
            yield l.pop(0)
This version of flatten avoids python's recursion limit (and thus works with arbitrarily deep, nested iterables). It is a generator which can handle strings and arbitrary iterables (even infinite ones).
import itertools as IT
import collections

def flatten(iterable, ltypes=collections.Iterable):
    remainder = iter(iterable)
    while True:
        try:
            first = next(remainder)
        except StopIteration:  # without this, Python 3.7+ raises RuntimeError (PEP 479)
            return
        if isinstance(first, ltypes) and not isinstance(first, (str, bytes)):
            remainder = IT.chain(first, remainder)
        else:
            yield first
Here are some examples demonstrating its use:
print(list(IT.islice(flatten(IT.repeat(1)),10)))
# [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print(list(IT.islice(flatten(IT.chain(IT.repeat(2, 3),
                                      {10, 20, 30},
                                      'foo bar'.split(),
                                      IT.repeat(1),)), 10)))
# [2, 2, 2, 10, 20, 30, 'foo', 'bar', 1, 1]
print(list(flatten([[1,2,[3,4]]])))
# [1, 2, 3, 4]
seq = ([[chr(i),chr(i-32)] for i in range(ord('a'), ord('z')+1)] + list(range(0,9)))
print(list(flatten(seq)))
# ['a', 'A', 'b', 'B', 'c', 'C', 'd', 'D', 'e', 'E', 'f', 'F', 'g', 'G', 'h', 'H',
# 'i', 'I', 'j', 'J', 'k', 'K', 'l', 'L', 'm', 'M', 'n', 'N', 'o', 'O', 'p', 'P',
# 'q', 'Q', 'r', 'R', 's', 'S', 't', 'T', 'u', 'U', 'v', 'V', 'w', 'W', 'x', 'X',
# 'y', 'Y', 'z', 'Z', 0, 1, 2, 3, 4, 5, 6, 7, 8]
Although flatten can handle infinite generators, it can not handle infinite nesting:
def infinitely_nested():
    while True:
        yield IT.chain(infinitely_nested(), IT.repeat(1))
print(list(IT.islice(flatten(infinitely_nested()), 10)))
# hangs
def flatten(xs):
    res = []

    def loop(ys):
        for i in ys:
            if isinstance(i, list):
                loop(i)
            else:
                res.append(i)

    loop(xs)
    return res
Pandas has a function that does this. It returns an iterator as you mentioned.
In [1]: import pandas
In [2]: pandas.core.common.flatten([[[1, 2, 3], [4, 5]], 6])
Out[2]: <generator object flatten at 0x7f12ade66200>
In [3]: list(pandas.core.common.flatten([[[1, 2, 3], [4, 5]], 6]))
Out[3]: [1, 2, 3, 4, 5, 6]
Here's another answer that is even more interesting...
import re
def Flatten(TheList):
    a = str(TheList)
    b, _Anon = re.subn(r'[\[,\]]', ' ', a)
    c = b.split()
    d = [int(x) for x in c]
    return d
Basically, it converts the nested list to a string, uses a regex to strip out the nested syntax, and then converts the result back to a (flattened) list.
You could use deepflatten from the 3rd party package iteration_utilities:
>>> from iteration_utilities import deepflatten
>>> L = [[[1, 2, 3], [4, 5]], 6]
>>> list(deepflatten(L))
[1, 2, 3, 4, 5, 6]
>>> list(deepflatten(L, types=list)) # only flatten "inner" lists
[1, 2, 3, 4, 5, 6]
It's an iterator, so you need to iterate over it (for example by wrapping it with list or using it in a loop). Internally it uses an iterative approach instead of a recursive one, and it's written as a C extension, so it can be faster than pure Python approaches:
>>> %timeit list(deepflatten(L))
12.6 µs ± 298 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
>>> %timeit list(deepflatten(L, types=list))
8.7 µs ± 139 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
>>> %timeit list(flatten(L)) # Cristian - Python 3.x approach from https://stackoverflow.com/a/2158532/5393381
86.4 µs ± 4.42 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
>>> %timeit list(flatten(L)) # Josh Lee - https://stackoverflow.com/a/2158522/5393381
107 µs ± 2.99 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
>>> %timeit list(genflat(L, list)) # Alex Martelli - https://stackoverflow.com/a/2159079/5393381
23.1 µs ± 710 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
I'm the author of the iteration_utilities library.
It was fun trying to create a function that could flatten irregular lists in Python, but of course that is what Python is for (to make programming fun). The following generator works fairly well, with some caveats:
def flatten(iterable):
    try:
        for item in iterable:
            yield from flatten(item)
    except TypeError:
        yield iterable
It will flatten datatypes that you might want left alone (like bytearray, bytes, and str objects). Also, the code relies on the fact that requesting an iterator from a non-iterable raises a TypeError.
>>> L = [[[1, 2, 3], [4, 5]], 6]
>>> def flatten(iterable):
...     try:
...         for item in iterable:
...             yield from flatten(item)
...     except TypeError:
...         yield iterable
...
>>> list(flatten(L))
[1, 2, 3, 4, 5, 6]
>>>
Edit:
I disagree with the previous implementation. The problem is that you should not be able to flatten something that is not an iterable. It is confusing and gives the wrong impression of the argument.
>>> list(flatten(123))
[123]
>>>
The following generator is almost the same as the first but does not have the problem of trying to flatten a non-iterable object. It fails as one would expect when an inappropriate argument is given to it.
def flatten(iterable):
    for item in iterable:
        try:
            yield from flatten(item)
        except TypeError:
            yield item
Testing shows the generator works fine with the list that was provided. However, the new code will raise a TypeError when a non-iterable object is given to it. Examples of the new behavior are shown below.
>>> L = [[[1, 2, 3], [4, 5]], 6]
>>> list(flatten(L))
[1, 2, 3, 4, 5, 6]
>>> list(flatten(123))
Traceback (most recent call last):
File "<pyshell#32>", line 1, in <module>
list(flatten(123))
File "<pyshell#27>", line 2, in flatten
for item in iterable:
TypeError: 'int' object is not iterable
>>>
Here's a simple function that flattens lists of arbitrary depth. No recursion, to avoid stack overflow.
from copy import deepcopy
def flatten_list(nested_list):
    """Flatten an arbitrarily nested list, without recursion (to avoid
    stack overflows). Returns a new list, the original list is unchanged.

    >>> list(flatten_list([1, 2, 3, [4], [], [[[[[[[[[5]]]]]]]]]]))
    [1, 2, 3, 4, 5]
    >>> list(flatten_list([[1, 2], 3]))
    [1, 2, 3]
    """
    nested_list = deepcopy(nested_list)
    while nested_list:
        sublist = nested_list.pop(0)
        if isinstance(sublist, list):
            nested_list = sublist + nested_list
        else:
            yield sublist
Although an elegant and very Pythonic answer has already been selected, I present my solution just for review:
def flat(l):
    ret = []
    for i in l:
        if isinstance(i, list) or isinstance(i, tuple):
            ret.extend(flat(i))
        else:
            ret.append(i)
    return ret
Please tell me how good or bad this code is.
When trying to answer such a question you really need to give the limitations of the code you propose as a solution. If it were only about performance I wouldn't mind too much, but most of the code proposed as solutions (including the accepted answer) fails to flatten any list that has a depth greater than 1000.
When I say most of the code, I mean all code that uses any form of recursion (or calls a standard library function that is recursive). All of this code fails because for every recursive call made, the call stack grows by one unit, and the (default) Python call stack has a size of 1000.
If you're not too familiar with the call stack, then maybe the following will help (otherwise you can just scroll to the Implementation).
Call stack size and recursive programming (dungeon analogy)
Finding the treasure and exit
Imagine you enter a huge dungeon with numbered rooms, looking for a treasure. You don't know the place, but you have some indications on how to find the treasure. Each indication is a riddle (difficulty varies, but you can't predict how hard they will be). You decide to think a little bit about a strategy to save time, and you make two observations:
It's hard (long) to find the treasure as you'll have to solve (potentially hard) riddles to get there.
Once the treasure is found, returning to the entrance may be easy; you just have to use the same path in the other direction (though this needs a bit of memory to recall your path).
When entering the dungeon, you notice a small notebook here. You decide to use it to write down every room you exit after solving a riddle (when entering a new room), this way you'll be able to return back to the entrance. That's a genius idea, you won't even spend a cent implementing your strategy.
You enter the dungeon, solving with great success the first 1001 riddles, but here comes something you hadn't planned: you have no space left in the notebook you borrowed. You decide to abandon your quest, as you prefer not having the treasure to being lost forever inside the dungeon (that looks smart indeed).
Executing a recursive program
Basically, it's the exact same thing as finding the treasure. The dungeon is the computer's memory, your goal now is not to find a treasure but to compute some function (find f(x) for a given x). The indications simply are sub-routines that will help you solving f(x). Your strategy is the same as the call stack strategy, the notebook is the stack, the rooms are the functions' return addresses:
x = ["over here", "am", "I"]
y = sorted(x) # You're about to enter a room named `sorted`, note down the current room address here so you can return back: 0x4004f4 (that room address looks weird)
# Seems like you went back from your quest using the return address 0x4004f4
# Let's see what you've collected
print(' '.join(y))
The problem you encountered in the dungeon will be the same here: the call stack has a finite size (here 1000), and therefore, if you enter too many functions without returning, you'll fill the call stack and get an error that looks like "Dear adventurer, I'm very sorry but your notebook is full": RecursionError: maximum recursion depth exceeded. Note that you don't need recursion to fill the call stack, but it's very unlikely that a non-recursive program calls 1000 functions without ever returning. It's also important to understand that once you return from a function, the call stack is freed of the address used (hence the name "stack": return addresses are pushed in before entering a function and popped out when returning). In the special case of a simple recursion (a function f that calls itself, over and over) you will enter f again and again until the computation is finished (until the treasure is found), and then return from f until you get back to the place where you called f in the first place. The call stack is never freed of anything until the very end, when it is freed of all the return addresses, one after the other.
How to avoid this issue?
That's actually pretty simple: "don't use recursion if you don't know how deep it can go". That's not always true, as in some cases Tail Call recursion can be Optimized (TCO). But in Python this is not the case, and even a "well written" recursive function will not optimize stack use. There is an interesting post from Guido about this question: Tail Recursion Elimination.
There is a technique you can use to make any recursive function iterative; we could call this technique bring your own notebook. For example, in our particular case we simply are exploring a list, and entering a room is equivalent to entering a sublist. The question you should ask yourself is: how can I get back from a list to its parent list? The answer is not that complex; repeat the following until the stack is empty:
push the current list address and index in a stack when entering a new sublist (note that a list address+index is also an address, therefore we just use the exact same technique used by the call stack);
every time an item is found, yield it (or add it to a list);
once a list is fully explored, go back to the parent list using the stack return address (and index).
Also note that this is equivalent to a DFS in a tree where some nodes are sublists A = [1, 2] and some are simple items: 0, 1, 2, 3, 4 (for L = [0, [1,2], 3, 4]). The tree looks like this:
            L
            |
     ---------------
     |    |    |   |
     0  --A--  3   4
        |   |
        1   2
The DFS traversal pre-order is: L, 0, A, 1, 2, 3, 4. Remember, in order to implement an iterative DFS you also "need" a stack. The implementation I proposed before results in the following states (for the stack and the flat_list):
init.:  stack=[(L, 0)]
**0**:  stack=[(L, 0)], flat_list=[0]
**A**:  stack=[(L, 1), (A, 0)], flat_list=[0]
**1**:  stack=[(L, 1), (A, 0)], flat_list=[0, 1]
**2**:  stack=[(L, 1), (A, 1)], flat_list=[0, 1, 2]
**3**:  stack=[(L, 2)], flat_list=[0, 1, 2, 3]
**4**:  stack=[(L, 3)], flat_list=[0, 1, 2, 3, 4]
return: stack=[], flat_list=[0, 1, 2, 3, 4]
In this example, the stack maximum size is 2, because the input list (and therefore the tree) has depth 2.
Implementation
For the implementation, in Python you can simplify a little bit by using iterators instead of simple lists. References to the (sub)iterators will be used to store sublists' return addresses (instead of having both the list address and the index). This is not a big difference, but I feel it is more readable (and also a bit faster):
def flatten(iterable):
    return list(items_from(iterable))

def items_from(iterable):
    cursor_stack = [iter(iterable)]
    while cursor_stack:
        sub_iterable = cursor_stack[-1]
        try:
            item = next(sub_iterable)
        except StopIteration:   # post-order
            cursor_stack.pop()
            continue
        if is_list_like(item):  # pre-order
            cursor_stack.append(iter(item))
        elif item is not None:
            yield item          # in-order

def is_list_like(item):
    return isinstance(item, list)
Also, notice that in is_list_like I have isinstance(item, list), which could be changed to handle more input types; here I just wanted to have the simplest version, where the (sub)iterable is just a list. But you could also do this:
def is_list_like(item):
    try:
        iter(item)
        return not isinstance(item, str)  # strings are not lists (hmm...)
    except TypeError:
        return False
This considers strings as "simple items", and therefore flatten_iter([["test", "a"], "b"]) will return ["test", "a", "b"] and not ["t", "e", "s", "t", "a", "b"]. Note that in that case, iter(item) is called twice on each item; let's pretend it's an exercise for the reader to make this cleaner.
Testing and remarks on other implementations
In the end, remember that you can't print an infinitely nested list L using print(L), because internally it will use recursive calls to __repr__ (RecursionError: maximum recursion depth exceeded while getting the repr of an object). For the same reason, solutions to flatten involving str will fail with the same error message.
If you need to test your solution, you can use this function to generate a simple nested list:
def build_deep_list(depth):
    """Returns a list of the form $l_{depth} = [depth-1, l_{depth-1}]$
    with $depth > 1$ and $l_0 = [0]$.
    """
    sub_list = [0]
    for d in range(1, depth):
        sub_list = [d, sub_list]
    return sub_list
Which gives: build_deep_list(5) >>> [4, [3, [2, [1, [0]]]]].
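As a sketch (assuming the items_from and build_deep_list definitions above are in scope), the iterative version handles a nesting depth well beyond the default recursion limit, where the recursive solutions raise RecursionError:
>>> deep = build_deep_list(5000)
>>> flat_list = list(items_from(deep))
>>> len(flat_list), flat_list[:3], flat_list[-3:]
(5000, [4999, 4998, 4997], [2, 1, 0])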
I prefer simple answers. No generators. No recursion or recursion limits. Just iteration:
def flatten(TheList):
    listIsNested = True
    while listIsNested:                  # outer loop
        keepChecking = False
        Temp = []
        for element in TheList:          # inner loop
            if isinstance(element, list):
                Temp.extend(element)
                keepChecking = True
            else:
                Temp.append(element)
        listIsNested = keepChecking      # determine if outer loop exits
        TheList = Temp[:]
    return TheList
This works with two loops: an inner for loop and an outer while loop.
The inner for loop iterates through the list. If it finds a list element, it (1) uses list.extend() to flatten that part down by one level of nesting and (2) switches keepChecking to True. keepChecking is used to control the outer while loop: if it was set to True during a pass, the outer loop triggers the inner loop for another pass.
Those passes keep happening until no more nested lists are found. When a pass finally occurs where none are found, keepChecking never gets tripped to True, which means listIsNested becomes False and the outer while loop exits.
The flattened list is then returned.
Test-run
flatten([1,2,3,4,[100,200,300,[1000,2000,3000]]])
[1, 2, 3, 4, 100, 200, 300, 1000, 2000, 3000]
I didn't go through all the answers already available here, but here is a one-liner I came up with, borrowing from Lisp's way of processing a list as first and rest:
def flatten(l): return flatten(l[0]) + (flatten(l[1:]) if len(l) > 1 else []) if type(l) is list else [l]
Here is one simple and one not-so-simple case:
>>> flatten([1,[2,3],4])
[1, 2, 3, 4]
>>> flatten([1, [2, 3], 4, [5, [6, {'name': 'some_name', 'age':30}, 7]], [8, 9, [10, [11, [12, [13, {'some', 'set'}, 14, [15, 'some_string'], 16], 17, 18], 19], 20], 21, 22, [23, 24], 25], 26, 27, 28, 29, 30])
[1, 2, 3, 4, 5, 6, {'age': 30, 'name': 'some_name'}, 7, 8, 9, 10, 11, 12, 13, set(['set', 'some']), 14, 15, 'some_string', 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]
>>>
I'm not sure if this is necessarily quicker or more effective, but this is what I do:
def flatten(lst):
    return eval('[' + str(lst).replace('[', '').replace(']', '') + ']')

L = [[[1, 2, 3], [4, 5]], 6]
print(flatten(L))
The flatten function here turns the list into a string, takes out all of the square brackets, attaches square brackets back onto the ends, and turns it back into a list.
Although, if you knew you would have square brackets in your list in strings, like [[1, 2], "[3, 4] and [5]"], you would have to do something else.
Just use a funcy library:
pip install funcy
import funcy
funcy.flatten([[[[1, 1], 1], 2], 3]) # returns generator
funcy.lflatten([[[[1, 1], 1], 2], 3]) # returns list
Here's the compiler.ast.flatten implementation in 2.7.5:
def flatten(seq):
    l = []
    for elt in seq:
        t = type(elt)
        if t is tuple or t is list:
            for elt2 in flatten(elt):
                l.append(elt2)
        else:
            l.append(elt)
    return l
There are better, faster methods (If you've reached here, you have seen them already)
Also note:
Deprecated since version 2.6: The compiler package has been removed in Python 3.
I'm surprised no one has thought of this. Damn recursion, I don't get the recursive answers that the advanced people here made. Anyway, here is my attempt at this. The caveat is that it's very specific to the OP's use case:
import re
L = [[[1, 2, 3], [4, 5]], 6]
flattened_list = re.sub("[\[\]]", "", str(L)).replace(" ", "").split(",")
new_list = list(map(int, flattened_list))
print(new_list)
output:
[1, 2, 3, 4, 5, 6]
I am aware that there are already many awesome answers, but I wanted to add an answer that uses the functional programming method of solving the question. In this answer I make use of double recursion:
def flatten_list(seq):
    if not seq:
        return []
    elif isinstance(seq[0], list):
        return flatten_list(seq[0]) + flatten_list(seq[1:])
    else:
        return [seq[0]] + flatten_list(seq[1:])

print(flatten_list([1, 2, [3, [4], 5], [6, 7]]))
output:
[1, 2, 3, 4, 5, 6, 7]
No recursion or nested loops. A few lines. Well formatted and easy to read:
def flatten_deep(arr: list):
    """ Flattens arbitrarily-nested list `arr` into single-dimensional. """
    while arr:
        if isinstance(arr[0], list):   # checks whether the first element is a list
            arr = arr[0] + arr[1:]     # if so, flattens that first element one level
        else:
            yield arr.pop(0)           # otherwise yield it as part of the flat output

list(flatten_deep(L))  # it's a generator, so wrap it in list() to get the flat list
From my own code at https://github.com/jorgeorpinel/flatten_nested_lists/blob/master/flatten.py
I don't see anything like this posted around here, and I just got here from a closed question on the same subject, but why not just do something like this (if you know the type of the list you want to split):
>>> a = [1, 2, 3, 5, 10, [1, 25, 11, [1, 0]]]
>>> g = str(a).replace('[', '').replace(']', '')
>>> b = [int(x) for x in g.split(',') if x.strip()]
You would need to know the type of the elements but I think this can be generalised and in terms of speed I think it would be faster.
totally hacky but I think it would work (depending on your data_type)
flat_list = ast.literal_eval("[%s]"%re.sub("[\[\]]","",str(the_list)))
Here is another py2 approach; I'm not sure if it's the fastest, the most elegant, or the safest ...
from collections import Iterable
from itertools import imap, repeat, chain
def flat(seqs, ignore=(int, long, float, basestring)):
    return repeat(seqs, 1) if any(imap(isinstance, repeat(seqs), ignore)) or not isinstance(seqs, Iterable) else chain.from_iterable(imap(flat, seqs))
It can ignore any specific (or derived) type you would like. It returns an iterator, so you can convert it to any specific container such as a list, tuple, or dict, or simply consume it in order to reduce the memory footprint. For better or worse, it can handle initial non-iterable objects such as int ...
Note that most of the heavy lifting is done in C, since as far as I know that's how itertools is implemented. So while it is recursive, AFAIK it isn't bounded by the Python recursion depth, since the function calls happen in C, though this doesn't mean you aren't bounded by memory, especially on OS X, where the stack size has a hard limit as of today (OS X Mavericks) ...
There is a slightly faster but less portable approach; only use it if you can assume that the base elements of the input can be explicitly determined, otherwise you'll get infinite recursion, and OS X, with its limited stack size, will throw a segmentation fault fairly quickly ...
def flat(seqs, ignore={int, long, float, str, unicode}):
    return repeat(seqs, 1) if type(seqs) in ignore or not isinstance(seqs, Iterable) else chain.from_iterable(imap(flat, seqs))
Here we are using sets to check for the type, so it takes O(1) vs O(number of types) to check whether or not an element should be ignored, though of course any value whose type derives from one of the stated ignored types will fail; this is why it uses str and unicode, so use it with caution ...
tests:
import random

def test_flat(test_size=2000):
    def increase_depth(value, depth=1):
        for func in xrange(depth):
            value = repeat(value, 1)
        return value

    def random_sub_chaining(nested_values):
        for values in nested_values:
            yield chain((values,), chain.from_iterable(imap(next, repeat(nested_values, random.randint(1, 10)))))

    expected_values = zip(xrange(test_size), imap(str, xrange(test_size)))
    nested_values = random_sub_chaining((increase_depth(value, depth) for depth, value in enumerate(expected_values)))
    assert not any(imap(cmp, chain.from_iterable(expected_values), flat(chain(((),), nested_values, ((),)))))
>>> test_flat()
>>> list(flat([[[1, 2, 3], [4, 5]], 6]))
[1, 2, 3, 4, 5, 6]
>>>
$ uname -a
Darwin Samys-MacBook-Pro.local 13.3.0 Darwin Kernel Version 13.3.0: Tue Jun 3 21:27:35 PDT 2014; root:xnu-2422.110.17~1/RELEASE_X86_64 x86_64
$ python --version
Python 2.7.5
Without using any library:
def flat(l):
    def _flat(l, r):
        if type(l) is not list:
            r.append(l)
        else:
            for i in l:
                r = r + flat(i)
        return r
    return _flat(l, [])
# example
test = [[1], [[2]], [3], [['a','b','c'] , [['z','x','y']], ['d','f','g']], 4]
print flat(test) # prints [1, 2, 3, 'a', 'b', 'c', 'z', 'x', 'y', 'd', 'f', 'g', 4]
Using itertools.chain:
import itertools
from collections import Iterable
def list_flatten(lst):
    flat_lst = []
    for item in itertools.chain(lst):
        if isinstance(item, Iterable):
            item = list_flatten(item)
            flat_lst.extend(item)
        else:
            flat_lst.append(item)
    return flat_lst
Or without chaining:
def flatten(q, final):
    if not q:
        return
    if isinstance(q, list):
        if not isinstance(q[0], list):
            final.append(q[0])
        else:
            flatten(q[0], final)
        flatten(q[1:], final)
    else:
        final.append(q)
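A quick usage sketch of that second helper: it fills a list passed in by the caller instead of returning one:
>>> result = []
>>> flatten([[[1, 2, 3], [4, 5]], 6], result)
>>> result
[1, 2, 3, 4, 5, 6]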
I used recursion to handle nested lists of any depth:
def combine_nlist(nlist, init=0, combiner=lambda x, y: x + y):
    '''
    Apply the function `combiner` to a nested list element by element
    (treated as a flattened list).
    '''
    current_value = init
    for each_item in nlist:
        if isinstance(each_item, list):
            current_value = combine_nlist(each_item, current_value, combiner)
        else:
            current_value = combiner(current_value, each_item)
    return current_value
So after I define the function combine_nlist, it is easy to use it to do the flattening. Or you can combine it all into one function. I like my solution because it can be applied to any nested list.
def flatten_nlist(nlist):
    return combine_nlist(nlist, [], lambda x, y: x + [y])
result
In [379]: flatten_nlist([1,2,3,[4,5],[6],[[[7],8],9],10])
Out[379]: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
The easiest way is to use the morph library using pip install morph.
The code is:
import morph
list = [[[1, 2, 3], [4, 5]], 6]
flattened_list = morph.flatten(list) # returns [1, 2, 3, 4, 5, 6]
This is a simple implementation of flatten in Python 2:
flatten=lambda l: reduce(lambda x,y:x+y,map(flatten,l),[]) if isinstance(l,list) else [l]
test=[[1,2,3,[3,4,5],[6,7,[8,9,[10,[11,[12,13,14]]]]]],]
print flatten(test)
#output [1, 2, 3, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]

How to write a function where the original list of integers is changed without using return?

Let us say we have a list of integers:
list = [6, 4, 1, 4, 4, 4, 4, 4, 2, 1]
I now wrote a function which returns another list with all the integers from the list above without repeats.
def no_repeats(s):
    new_list = []
    for number in s:
        if new_list.count(number) < 1:
            new_list.append(number)
    return new_list
The new_list returns [6, 4, 1, 2] which is good! My question is how I would now write two similar functions:
A function clean(s) which does not return a new list like the function above, but changes the original list by deleting all the numbers that repeat. Thus, the result has to be the same and the function must not include "return" or create a new list. It must only clean the original list.
A function double(s) which, again, changes the original list (does not return a new list!) but this time, by doubling every number in the original list. Thus, double(list) should change the original list above to:
[6, 6, 4, 4, 1, 1, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 2, 1, 1]
Thank you for all the help!
Removing duplicates inplace without preserving the order:
def no_repeats(L):
    L[:] = set(L)
There are several variations possible (preserve order, support non-hashable items, support items that do not define a total ordering), e.g., to preserve order:
from collections import OrderedDict
def no_repeats(L):
    L[:] = OrderedDict.fromkeys(L)
To double each element's value inplace:
def double(L):
    for i in range(len(L)):
        L[i] *= 2
To duplicate each element:
def duplicate_elements(L):
    L[:] = [x for x in L for _ in range(2)]
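Applied to the list from the question, a quick sketch (note that the question's double() is what duplicate_elements() does here; the double() above multiplies each value instead):
>>> L = [6, 4, 1, 4, 4, 4, 4, 4, 2, 1]
>>> no_repeats(L)          # OrderedDict variant, so order is preserved
>>> L
[6, 4, 1, 2]
>>> L = [6, 4, 1, 4, 4, 4, 4, 4, 2, 1]
>>> duplicate_elements(L)
>>> L
[6, 6, 4, 4, 1, 1, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 2, 1, 1]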
>>> def clean(s):
...     s[:] = [s[i] for i in range(len(s)) if s[i] not in s[:i]]
...
>>> st = [1, 2, 3, 2, 1]
>>> clean(st)
>>> st
[1, 2, 3]
>>> def double(s):
...     s[:] = [s[i//3] for i in range(3*len(s)) if i % 3]
...
>>> st = [1, 2, 3, 2, 1]
>>> double(st)
>>> st
[1, 1, 2, 2, 3, 3, 2, 2, 1, 1]
Neither is particularly efficient nor Pythonic, yet they do address the OP's question.
>>> def double(s):
...     s[:] = [s[i//2] for i in range(2*len(s))]
will also do the trick, with a little less obfuscation

List Comprehensions to replace element

for i in range(1, len(A)):
    A[i] = A[i-1] + A[i]
You can't do that with a list comprehension as they don't allow assignments.
You can use a simple generator function:
def func(lis):
    yield lis[0]
    for i, x in enumerate(lis[1:], 1):
        lis[i] = lis[i-1] + x
        yield lis[i]
>>> A = [1, 2, 3, 4, 5, 6, 7]
>>> list(func(A))
[1, 3, 6, 10, 15, 21, 28]
though less efficient, this does give the desired output. But I think I'm getting closer to O(n**2) on this one.
A = [sum(A[:i+1]) for i, _ in enumerate(A)]
AFAIK this can't be done with a list comprehension the way you want. I would suggest using the for loop version you've provided. Even if it were possible with a list comprehension, there's no point when you can just modify the list in place.
Use B (another temporary variable).
This should do the trick.
def func(L):
    it = iter(L)
    v = it.next()
    yield v
    for x in it:
        v += x
        yield v

A = [1, 2, 3, 4, 5, 6, 7]
print list(func(A))
This creates an iterator that returns one value at a time. To get the full new list at once you need to wrap the function call in list(), like:
list(func(A))
This generator function should work on any iterable (also those that don't support getting value based on index, like L[0])
I don't think there's an efficient way to do this with a list comprehension.
A one-line solution uses reduce:
>>> the_list = [1,2,3,4,5]
>>> reduce(lambda result, x: result+[result[-1] + x] ,the_list, [0])[1:]
[1, 3, 6, 10, 15]
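On Python 3, reduce is no longer a builtin, so the same one-liner needs an import from functools; and, as a side note, the standard library's itertools.accumulate computes exactly this kind of running sum:
>>> from functools import reduce
>>> the_list = [1, 2, 3, 4, 5]
>>> reduce(lambda result, x: result + [result[-1] + x], the_list, [0])[1:]
[1, 3, 6, 10, 15]
>>> from itertools import accumulate
>>> list(accumulate(the_list))
[1, 3, 6, 10, 15]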

Resources