Yes, I know this subject has been covered before:
Python idiom to chain (flatten) an infinite iterable of finite iterables?
Flattening a shallow list in Python
Comprehension for flattening a sequence of sequences?
How do I make a flat list out of a list of lists?
but as far as I know, all solutions, except for one, fail on a list like [[[1, 2, 3], [4, 5]], 6], where the desired output is [1, 2, 3, 4, 5, 6] (or perhaps even better, an iterator).
The only solution I saw that works for an arbitrary nesting is found in this question:
def flatten(x):
result = []
for el in x:
if hasattr(el, "__iter__") and not isinstance(el, basestring):
result.extend(flatten(el))
else:
result.append(el)
return result
Is this the best approach? Did I overlook something? Any problems?
Using generator functions can make your example easier to read and improve performance.
Python 2
Using the Iterable ABC added in 2.6:
from collections import Iterable
def flatten(xs):
for x in xs:
if isinstance(x, Iterable) and not isinstance(x, basestring):
for item in flatten(x):
yield item
else:
yield x
Python 3
In Python 3, basestring is no more, but the tuple (str, bytes) gives the same effect. Also, the yield from expression delegates to a sub-generator, yielding its items one at a time.
from collections.abc import Iterable
def flatten(xs):
for x in xs:
if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
yield from flatten(x)
else:
yield x
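For example, a quick check on the list from the question:
print(list(flatten([[[1, 2, 3], [4, 5]], 6])))
# [1, 2, 3, 4, 5, 6]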
My solution:
import collections
def flatten(x):
if isinstance(x, collections.Iterable):
return [a for i in x for a in flatten(i)]
else:
return [x]
A little more concise, but pretty much the same.
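One caveat worth noting (the guard below is my addition, not part of the answer above): str is itself Iterable, so this version recurses forever on string input unless strings and bytes are excluded the same way the other answers do. A minimal sketch:
import collections.abc

def flatten(x):
    # Leave strings and bytes alone: a one-character string iterates to itself,
    # so recursing into it would never terminate.
    if isinstance(x, collections.abc.Iterable) and not isinstance(x, (str, bytes)):
        return [a for i in x for a in flatten(i)]
    else:
        return [x]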
Generator using recursion and duck typing (updated for Python 3):
def flatten(L):
for item in L:
try:
yield from flatten(item)
except TypeError:
yield item
>>> list(flatten([[[1, 2, 3], [4, 5]], 6]))
[1, 2, 3, 4, 5, 6]
Here is my functional version of recursive flatten which handles both tuples and lists, and lets you throw in any mix of positional arguments. Returns a generator which produces the entire sequence in order, arg by arg:
flatten = lambda *n: (e for a in n
for e in (flatten(*a) if isinstance(a, (tuple, list)) else (a,)))
Usage:
l1 = ['a', ['b', ('c', 'd')]]
l2 = [0, 1, (2, 3), [[4, 5, (6, 7, (8,), [9]), 10]], (11,)]
print list(flatten(l1, -2, -1, l2))
['a', 'b', 'c', 'd', -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
Generator version of @unutbu's non-recursive solution, as requested by @Andrew in a comment:
import collections

def genflat(l, ltypes=collections.Sequence):
l = list(l)
i = 0
while i < len(l):
while isinstance(l[i], ltypes):
if not l[i]:
l.pop(i)
i -= 1
break
else:
l[i:i + 1] = l[i]
yield l[i]
i += 1
Slightly simplified version of this generator:
def genflat(l, ltypes=collections.Sequence):
l = list(l)
while l:
while l and isinstance(l[0], ltypes):
l[0:1] = l[0]
if l: yield l.pop(0)
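A usage sketch (this assumes a Python version where collections.Sequence still exists, i.e. before 3.10; passing ltypes explicitly also keeps strings from being split into characters):
print(list(genflat([[[1, 2, 3], [4, 5]], 6], ltypes=(list, tuple))))
# [1, 2, 3, 4, 5, 6]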
This version of flatten avoids python's recursion limit (and thus works with arbitrarily deep, nested iterables). It is a generator which can handle strings and arbitrary iterables (even infinite ones).
import itertools as IT
import collections
def flatten(iterable, ltypes=collections.Iterable):
    remainder = iter(iterable)
    while True:
        try:
            first = next(remainder)
        except StopIteration:
            # PEP 479: letting StopIteration escape would raise RuntimeError on Python 3.7+
            return
        if isinstance(first, ltypes) and not isinstance(first, (str, bytes)):
            remainder = IT.chain(first, remainder)
        else:
            yield first
Here are some examples demonstrating its use:
print(list(IT.islice(flatten(IT.repeat(1)),10)))
# [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print(list(IT.islice(flatten(IT.chain(IT.repeat(2,3),
{10,20,30},
'foo bar'.split(),
IT.repeat(1),)),10)))
# [2, 2, 2, 10, 20, 30, 'foo', 'bar', 1, 1]
print(list(flatten([[1,2,[3,4]]])))
# [1, 2, 3, 4]
seq = ([[chr(i),chr(i-32)] for i in range(ord('a'), ord('z')+1)] + list(range(0,9)))
print(list(flatten(seq)))
# ['a', 'A', 'b', 'B', 'c', 'C', 'd', 'D', 'e', 'E', 'f', 'F', 'g', 'G', 'h', 'H',
# 'i', 'I', 'j', 'J', 'k', 'K', 'l', 'L', 'm', 'M', 'n', 'N', 'o', 'O', 'p', 'P',
# 'q', 'Q', 'r', 'R', 's', 'S', 't', 'T', 'u', 'U', 'v', 'V', 'w', 'W', 'x', 'X',
# 'y', 'Y', 'z', 'Z', 0, 1, 2, 3, 4, 5, 6, 7, 8]
Although flatten can handle infinite generators, it can not handle infinite nesting:
def infinitely_nested():
while True:
yield IT.chain(infinitely_nested(), IT.repeat(1))
print(list(IT.islice(flatten(infinitely_nested()), 10)))
# hangs
def flatten(xs):
res = []
def loop(ys):
for i in ys:
if isinstance(i, list):
loop(i)
else:
res.append(i)
loop(xs)
return res
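A quick usage check on the list from the question:
print(flatten([[[1, 2, 3], [4, 5]], 6]))
# [1, 2, 3, 4, 5, 6]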
Pandas has a function that does this. It returns an iterator as you mentioned.
In [1]: import pandas
In [2]: pandas.core.common.flatten([[[1, 2, 3], [4, 5]], 6])
Out[2]: <generator object flatten at 0x7f12ade66200>
In [3]: list(pandas.core.common.flatten([[[1, 2, 3], [4, 5]], 6]))
Out[3]: [1, 2, 3, 4, 5, 6]
Here's another answer that is even more interesting...
import re
def Flatten(TheList):
a = str(TheList)
b,_Anon = re.subn(r'[\[,\]]', ' ', a)
c = b.split()
d = [int(x) for x in c]
return(d)
Basically, it converts the nested list to a string, uses a regex to strip out the nested syntax, and then converts the result back to a (flattened) list.
You could use deepflatten from the 3rd party package iteration_utilities:
>>> from iteration_utilities import deepflatten
>>> L = [[[1, 2, 3], [4, 5]], 6]
>>> list(deepflatten(L))
[1, 2, 3, 4, 5, 6]
>>> list(deepflatten(L, types=list)) # only flatten "inner" lists
[1, 2, 3, 4, 5, 6]
It's an iterator, so you need to iterate it (for example by wrapping it with list or using it in a loop). Internally it uses an iterative approach instead of a recursive approach, and it's written as a C extension, so it can be faster than pure Python approaches:
>>> %timeit list(deepflatten(L))
12.6 µs ± 298 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
>>> %timeit list(deepflatten(L, types=list))
8.7 µs ± 139 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
>>> %timeit list(flatten(L)) # Cristian - Python 3.x approach from https://stackoverflow.com/a/2158532/5393381
86.4 µs ± 4.42 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
>>> %timeit list(flatten(L)) # Josh Lee - https://stackoverflow.com/a/2158522/5393381
107 µs ± 2.99 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
>>> %timeit list(genflat(L, list)) # Alex Martelli - https://stackoverflow.com/a/2159079/5393381
23.1 µs ± 710 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
I'm the author of the iteration_utilities library.
It was fun trying to create a function that could flatten irregular lists in Python, but of course that is what Python is for (to make programming fun). The following generator works fairly well with some caveats:
def flatten(iterable):
try:
for item in iterable:
yield from flatten(item)
except TypeError:
yield iterable
It will flatten datatypes that you might want left alone (like bytearray and bytes objects), and on str objects it actually recurses without end, since a single-character string iterates to itself. Also, the code relies on the fact that requesting an iterator from a non-iterable raises a TypeError.
>>> L = [[[1, 2, 3], [4, 5]], 6]
>>> def flatten(iterable):
try:
for item in iterable:
yield from flatten(item)
except TypeError:
yield iterable
>>> list(flatten(L))
[1, 2, 3, 4, 5, 6]
>>>
Edit:
I disagree with the previous implementation. The problem is that you should not be able to flatten something that is not an iterable. It is confusing and gives the wrong impression of the argument.
>>> list(flatten(123))
[123]
>>>
The following generator is almost the same as the first but does not have the problem of trying to flatten a non-iterable object. It fails as one would expect when an inappropriate argument is given to it.
def flatten(iterable):
for item in iterable:
try:
yield from flatten(item)
except TypeError:
yield item
Testing shows the generator works fine with the list that was provided. However, the new code will raise a TypeError when a non-iterable object is given to it. Examples of the new behavior are shown below.
>>> L = [[[1, 2, 3], [4, 5]], 6]
>>> list(flatten(L))
[1, 2, 3, 4, 5, 6]
>>> list(flatten(123))
Traceback (most recent call last):
File "<pyshell#32>", line 1, in <module>
list(flatten(123))
File "<pyshell#27>", line 2, in flatten
for item in iterable:
TypeError: 'int' object is not iterable
>>>
Here's a simple function that flattens lists of arbitrary depth. No recursion, to avoid stack overflow.
from copy import deepcopy
def flatten_list(nested_list):
"""Flatten an arbitrarily nested list, without recursion (to avoid
stack overflows). Yields the items lazily; the original list is unchanged.
>> list(flatten_list([1, 2, 3, [4], [], [[[[[[[[[5]]]]]]]]]]))
[1, 2, 3, 4, 5]
>> list(flatten_list([[1, 2], 3]))
[1, 2, 3]
"""
nested_list = deepcopy(nested_list)
while nested_list:
sublist = nested_list.pop(0)
if isinstance(sublist, list):
nested_list = sublist + nested_list
else:
yield sublist
Although an elegant and very Pythonic answer has already been selected, I present my solution here just for review:
def flat(l):
ret = []
for i in l:
if isinstance(i, list) or isinstance(i, tuple):
ret.extend(flat(i))
else:
ret.append(i)
return ret
Please tell me how good or bad this code is.
When trying to answer such a question you really need to state the limitations of the code you propose as a solution. If it were only about performance I wouldn't mind too much, but most of the solutions proposed here (including the accepted answer) fail to flatten any list with a depth greater than 1000.
By most of the solutions I mean all those that use any form of recursion (or call a standard library function that is recursive). They all fail because, for every recursive call made, the call stack grows by one frame, and the default Python call stack has a depth limit of 1000.
If you're not too familiar with the call stack, then maybe the following will help (otherwise you can just scroll to the Implementation).
Call stack size and recursive programming (dungeon analogy)
Finding the treasure and exit
Imagine you enter a huge dungeon with numbered rooms, looking for a treasure. You don't know the place but you have some indications on how to find the treasure. Each indication is a riddle (difficulty varies, but you can't predict how hard they will be). You decide to think a little bit about a strategy to save time, you make two observations:
It's hard (long) to find the treasure as you'll have to solve (potentially hard) riddles to get there.
Once the treasure is found, returning to the entrance may be easy: you just have to take the same path in the other direction (though this needs a bit of memory to recall your path).
When entering the dungeon, you notice a small notebook there. You decide to use it to write down every room you exit after solving a riddle (when entering a new room); this way you'll be able to return to the entrance. That's a genius idea; you won't even spend a cent implementing your strategy.
You enter the dungeon, solving the first 1001 riddles with great success, but here comes something you hadn't planned: you have no space left in the notebook you borrowed. You decide to abandon your quest, as you'd rather not have the treasure than be lost forever inside the dungeon (that does look smart indeed).
Executing a recursive program
Basically, it's the exact same thing as finding the treasure. The dungeon is the computer's memory; your goal now is not to find a treasure but to compute some function (find f(x) for a given x). The indications are simply subroutines that will help you solve f(x). Your strategy is the same as the call stack strategy: the notebook is the stack, the rooms are the functions' return addresses:
x = ["over here", "am", "I"]
y = sorted(x) # You're about to enter a room named `sorted`, note down the current room address here so you can return back: 0x4004f4 (that room address looks weird)
# Seems like you went back from your quest using the return address 0x4004f4
# Let's see what you've collected
print(' '.join(y))
The problem you encountered in the dungeon will be the same here: the call stack has a finite size (here 1000) and therefore, if you enter too many functions without returning, you'll fill the call stack and get an error that looks like "Dear adventurer, I'm very sorry but your notebook is full": RecursionError: maximum recursion depth exceeded. Note that you don't need recursion to fill the call stack, but it's very unlikely that a non-recursive program calls 1000 functions without ever returning. It's also important to understand that once you return from a function, the call stack is freed of the address used (hence the name "stack": return addresses are pushed before entering a function and popped when returning). In the special case of a simple recursion (a function f that calls itself once, over and over), you will enter f over and over until the computation is finished (until the treasure is found), and then return from f over and over until you get back to the place where you called f in the first place. The call stack is never freed of anything until the end, when it is freed of all the return addresses one after the other.
How to avoid this issue?
That's actually pretty simple: "don't use recursion if you don't know how deep it can go". That's not always true, as in some cases tail-call recursion can be optimized (TCO). But in Python this is not the case, and even a "well written" recursive function will not optimize stack use. There is an interesting post from Guido about this question: Tail Recursion Elimination.
There is a technique that you can use to make any recursive function iterative; we could call this technique bring your own notebook. For example, in our particular case we are simply exploring a list; entering a room is equivalent to entering a sublist; the question you should ask yourself is: how can I get back from a list to its parent list? The answer is not that complex: repeat the following until the stack is empty:
push the current list address and index in a stack when entering a new sublist (note that a list address+index is also an address, therefore we just use the exact same technique used by the call stack);
every time an item is found, yield it (or add it to a list);
once a list is fully explored, go back to the parent list using the stack return address (and index).
Also note that this is equivalent to a DFS in a tree where some nodes are sublists A = [1, 2] and some are simple items: 0, 1, 2, 3, 4 (for L = [0, [1,2], 3, 4]). The tree looks like this:
L
|
-------------------
| | | |
0 --A-- 3 4
| |
1 2
The DFS pre-order traversal is: L, 0, A, 1, 2, 3, 4. Remember, in order to implement an iterative DFS you also "need" a stack. The implementation I proposed before results in the following states (for the stack and the flat_list):
init.: stack=[(L, 0)]
**0**: stack=[(L, 0)], flat_list=[0]
**A**: stack=[(L, 1), (A, 0)], flat_list=[0]
**1**: stack=[(L, 1), (A, 0)], flat_list=[0, 1]
**2**: stack=[(L, 1), (A, 1)], flat_list=[0, 1, 2]
**3**: stack=[(L, 2)], flat_list=[0, 1, 2, 3]
**4**: stack=[(L, 3)], flat_list=[0, 1, 2, 3, 4]
return: stack=[], flat_list=[0, 1, 2, 3, 4]
In this example, the stack's maximum size is 2, because the input list (and therefore the tree) has depth 2.
Implementation
For the implementation, in Python you can simplify things a little by using iterators instead of plain lists. References to the (sub)iterators will be used to store the sublists' return addresses (instead of storing both the list address and the index). This is not a big difference, but I find it more readable (and also a bit faster):
def flatten(iterable):
return list(items_from(iterable))
def items_from(iterable):
cursor_stack = [iter(iterable)]
while cursor_stack:
sub_iterable = cursor_stack[-1]
try:
item = next(sub_iterable)
except StopIteration: # post-order
cursor_stack.pop()
continue
if is_list_like(item): # pre-order
cursor_stack.append(iter(item))
elif item is not None:
yield item # in-order
def is_list_like(item):
return isinstance(item, list)
Also, notice that in is_list_like I have isinstance(item, list), which could be changed to handle more input types; here I just wanted to have the simplest version, where (iterable) is just a list. But you could also do this:
def is_list_like(item):
try:
iter(item)
return not isinstance(item, str) # strings are not lists (hmm...)
except TypeError:
return False
This considers strings as "simple items", and therefore flatten([["test", "a"], "b"]) will return ["test", "a", "b"] and not ["t", "e", "s", "t", "a", "b"]. Note that in this case, iter(item) is called twice on each item; let's pretend it's an exercise for the reader to make this cleaner.
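One possible way to do that (a sketch of my own, not the author's code) is to ask for the iterator once and reuse it, returning None for atoms and strings:
def sub_iterator_or_none(item):
    # Return an iterator over item, or None if item is atomic (or a string).
    if isinstance(item, str):
        return None
    try:
        return iter(item)
    except TypeError:
        return None

def items_from(iterable):
    cursor_stack = [iter(iterable)]
    while cursor_stack:
        try:
            item = next(cursor_stack[-1])
        except StopIteration:
            cursor_stack.pop()
            continue
        sub = sub_iterator_or_none(item)
        if sub is not None:
            cursor_stack.append(sub)  # descend into the sub-iterable
        else:
            yield item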
Testing and remarks on other implementations
In the end, remember that you can't print a list that is nested too deeply using print(L), because internally it will use recursive calls to __repr__ (RecursionError: maximum recursion depth exceeded while getting the repr of an object). For the same reason, solutions to flatten involving str will fail with the same error message.
If you need to test your solution, you can use this function to generate a simple nested list:
def build_deep_list(depth):
"""Returns a list of the form $l_{depth} = [depth-1, l_{depth-1}]$
with $depth > 1$ and $l_0 = [0]$.
"""
sub_list = [0]
for d in range(1, depth):
sub_list = [d, sub_list]
return sub_list
Which gives: build_deep_list(5) >>> [4, [3, [2, [1, [0]]]]].
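For instance, a quick sanity check with the iterative flatten above (a recursive flatten would raise RecursionError at this depth):
deep = build_deep_list(5000)            # depth 5000, far beyond the default limit
flat = flatten(deep)                    # iterative version from the Implementation section
print(len(flat), flat[:3], flat[-3:])   # 5000 [4999, 4998, 4997] [2, 1, 0]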
I prefer simple answers. No generators. No recursion or recursion limits. Just iteration:
def flatten(TheList):
listIsNested = True
while listIsNested: #outer loop
keepChecking = False
Temp = []
for element in TheList: #inner loop
if isinstance(element,list):
Temp.extend(element)
keepChecking = True
else:
Temp.append(element)
listIsNested = keepChecking #determine if outer loop exits
TheList = Temp[:]
return TheList
This works with two loops: an inner for loop and an outer while loop.
The inner for loop iterates through the list. If it finds a list element, it (1) uses list.extend() to flatten out that one level of nesting and (2) switches keepChecking to True. keepChecking is used to control the outer while loop: if it gets set to True, the outer loop triggers the inner loop for another pass.
Those passes keep happening until no more nested lists are found. When a pass finally occurs where none are found, keepChecking never gets tripped to True, which means listIsNested becomes False and the outer while loop exits.
The flattened list is then returned.
Test-run
flatten([1,2,3,4,[100,200,300,[1000,2000,3000]]])
[1, 2, 3, 4, 100, 200, 300, 1000, 2000, 3000]
I didn't go through all the already available answers here, but here is a one-liner I came up with, borrowing from Lisp's way of first-and-rest list processing:
def flatten(l): return flatten(l[0]) + (flatten(l[1:]) if len(l) > 1 else []) if type(l) is list else [l]
Here is one simple and one not-so-simple case:
>>> flatten([1,[2,3],4])
[1, 2, 3, 4]
>>> flatten([1, [2, 3], 4, [5, [6, {'name': 'some_name', 'age':30}, 7]], [8, 9, [10, [11, [12, [13, {'some', 'set'}, 14, [15, 'some_string'], 16], 17, 18], 19], 20], 21, 22, [23, 24], 25], 26, 27, 28, 29, 30])
[1, 2, 3, 4, 5, 6, {'age': 30, 'name': 'some_name'}, 7, 8, 9, 10, 11, 12, 13, set(['set', 'some']), 14, 15, 'some_string', 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]
>>>
I'm not sure if this is necessarily quicker or more effective, but this is what I do:
def flatten(lst):
return eval('[' + str(lst).replace('[', '').replace(']', '') + ']')
L = [[[1, 2, 3], [4, 5]], 6]
print(flatten(L))
The flatten function here turns the list into a string, takes out all of the square brackets, attaches square brackets back onto the ends, and turns it back into a list.
Although, if you knew you would have square brackets in your list in strings, like [[1, 2], "[3, 4] and [5]"], you would have to do something else.
Just use a funcy library:
pip install funcy
import funcy
funcy.flatten([[[[1, 1], 1], 2], 3]) # returns generator
funcy.lflatten([[[[1, 1], 1], 2], 3]) # returns list
Here's the compiler.ast.flatten implementation in 2.7.5:
def flatten(seq):
l = []
for elt in seq:
t = type(elt)
if t is tuple or t is list:
for elt2 in flatten(elt):
l.append(elt2)
else:
l.append(elt)
return l
There are better, faster methods (If you've reached here, you have seen them already)
Also note:
Deprecated since version 2.6: The compiler package has been removed in Python 3.
I'm surprised no one has thought of this. Damn recursion; I don't get the recursive answers that the advanced people here wrote. Anyway, here is my attempt. The caveat is that it's very specific to the OP's use case:
import re
L = [[[1, 2, 3], [4, 5]], 6]
flattened_list = re.sub(r"[\[\]]", "", str(L)).replace(" ", "").split(",")
new_list = list(map(int, flattened_list))
print(new_list)
output:
[1, 2, 3, 4, 5, 6]
I am aware that there are already many awesome answers, but I wanted to add an answer that uses the functional programming method of solving the question. In this answer I make use of double recursion:
def flatten_list(seq):
if not seq:
return []
elif isinstance(seq[0],list):
return (flatten_list(seq[0])+flatten_list(seq[1:]))
else:
return [seq[0]]+flatten_list(seq[1:])
print(flatten_list([1,2,[3,[4],5],[6,7]]))
output:
[1, 2, 3, 4, 5, 6, 7]
No recursion or nested loops. A few lines. Well formatted and easy to read:
def flatten_deep(arr: list):
""" Flattens arbitrarily-nested list `arr` into single-dimensional. """
while arr:
if isinstance(arr[0], list): # Checks whether first element is a list
arr = arr[0] + arr[1:] # If so, flattens that first element one level
else:
yield arr.pop(0) # Otherwise yield as part of the flat array
list(flatten_deep([[[1, 2, 3], [4, 5]], 6]))  # [1, 2, 3, 4, 5, 6]
From my own code at https://github.com/jorgeorpinel/flatten_nested_lists/blob/master/flatten.py
I don't see anything like this posted around here, and I just got here from a closed question on the same subject, but why not just do something like this (if you know the type of the elements in the list you want to flatten):
>>> a = [1, 2, 3, 5, 10, [1, 25, 11, [1, 0]]]
>>> g = str(a).replace('[', '').replace(']', '')
>>> b = [int(x) for x in g.split(',') if x.strip()]
You would need to know the type of the elements, but I think this can be generalised, and in terms of speed I think it would be faster.
Totally hacky, but I think it would work (depending on your data type):
import ast, re
flat_list = ast.literal_eval("[%s]" % re.sub(r"[\[\]]", "", str(the_list)))
Here is another Python 2 approach; I'm not sure if it's the fastest, the most elegant, or the safest ...
from collections import Iterable
from itertools import imap, repeat, chain
def flat(seqs, ignore=(int, long, float, basestring)):
return repeat(seqs, 1) if any(imap(isinstance, repeat(seqs), ignore)) or not isinstance(seqs, Iterable) else chain.from_iterable(imap(flat, seqs))
It can ignore any specific (or derived) type you would like. It returns an iterator, so you can convert it to any specific container such as list, tuple, or dict, or simply consume it, to reduce your memory footprint. For better or worse, it can handle initial non-iterable objects such as int ...
Note that most of the heavy lifting is done in C, since as far as I know that's how itertools is implemented, so while it is recursive, AFAIK it isn't bounded by the Python recursion depth, since the function calls happen in C. That doesn't mean you aren't still bounded by memory, especially on OS X, where the stack size has a hard limit as of today (OS X Mavericks) ...
There is a slightly faster, but less portable, approach: only use it if you can assume that the base elements of the input can be explicitly determined; otherwise you'll get infinite recursion, and OS X, with its limited stack size, will throw a segmentation fault fairly quickly ...
def flat(seqs, ignore={int, long, float, str, unicode}):
return repeat(seqs, 1) if type(seqs) in ignore or not isinstance(seqs, Iterable) else chain.from_iterable(imap(flat, seqs))
Here we are using a set to check the type, so it takes O(1) instead of O(number of types) to check whether or not an element should be ignored. Of course, any value whose type merely derives from one of the stated ignored types will fail; this is why it lists str and unicode explicitly, so use it with caution ...
tests:
import random
def test_flat(test_size=2000):
def increase_depth(value, depth=1):
for func in xrange(depth):
value = repeat(value, 1)
return value
def random_sub_chaining(nested_values):
for values in nested_values:
yield chain((values,), chain.from_iterable(imap(next, repeat(nested_values, random.randint(1, 10)))))
expected_values = zip(xrange(test_size), imap(str, xrange(test_size)))
nested_values = random_sub_chaining((increase_depth(value, depth) for depth, value in enumerate(expected_values)))
assert not any(imap(cmp, chain.from_iterable(expected_values), flat(chain(((),), nested_values, ((),)))))
>>> test_flat()
>>> list(flat([[[1, 2, 3], [4, 5]], 6]))
[1, 2, 3, 4, 5, 6]
>>>
$ uname -a
Darwin Samys-MacBook-Pro.local 13.3.0 Darwin Kernel Version 13.3.0: Tue Jun 3 21:27:35 PDT 2014; root:xnu-2422.110.17~1/RELEASE_X86_64 x86_64
$ python --version
Python 2.7.5
Without using any library:
def flat(l):
def _flat(l, r):
if type(l) is not list:
r.append(l)
else:
for i in l:
r = r + flat(i)
return r
return _flat(l, [])
# example
test = [[1], [[2]], [3], [['a','b','c'] , [['z','x','y']], ['d','f','g']], 4]
print(flat(test))  # prints [1, 2, 3, 'a', 'b', 'c', 'z', 'x', 'y', 'd', 'f', 'g', 4]
Using itertools.chain:
import itertools
from collections import Iterable
def list_flatten(lst):
flat_lst = []
for item in itertools.chain(lst):
if isinstance(item, Iterable):
item = list_flatten(item)
flat_lst.extend(item)
else:
flat_lst.append(item)
return flat_lst
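A usage sketch on the list from the question (note that a str element would recurse indefinitely, since str is Iterable):
print(list_flatten([[[1, 2, 3], [4, 5]], 6]))
# [1, 2, 3, 4, 5, 6]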
Or without chaining:
def flatten(q, final):
if not q:
return
if isinstance(q, list):
if not isinstance(q[0], list):
final.append(q[0])
else:
flatten(q[0], final)
flatten(q[1:], final)
else:
final.append(q)
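A usage sketch: this variant accumulates into a caller-supplied list rather than returning one:
final = []
flatten([[[1, 2, 3], [4, 5]], 6], final)
print(final)  # [1, 2, 3, 4, 5, 6]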
I used recursion to handle nested lists of any depth:
def combine_nlist(nlist,init=0,combiner=lambda x,y: x+y):
'''
apply function: combiner to a nested list element by element(treated as flatten list)
'''
current_value=init
for each_item in nlist:
if isinstance(each_item,list):
current_value =combine_nlist(each_item,current_value,combiner)
else:
current_value = combiner(current_value,each_item)
return current_value
So after I define the function combine_nlist, it is easy to use it to do the flattening. Or you can combine them into one function. I like my solution because it can be applied to any nested list.
def flatten_nlist(nlist):
return combine_nlist(nlist,[],lambda x,y:x+[y])
result
In [379]: flatten_nlist([1,2,3,[4,5],[6],[[[7],8],9],10])
Out[379]: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
The easiest way is to use the morph library, which you can install with pip install morph.
The code is:
import morph
nested = [[[1, 2, 3], [4, 5]], 6]
flattened_list = morph.flatten(nested)  # returns [1, 2, 3, 4, 5, 6]
This is a simple implementation of flatten for Python 2:
flatten=lambda l: reduce(lambda x,y:x+y,map(flatten,l),[]) if isinstance(l,list) else [l]
test=[[1,2,3,[3,4,5],[6,7,[8,9,[10,[11,[12,13,14]]]]]],]
print flatten(test)
#output [1, 2, 3, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
I have a list like
a=['a',2,'[abcd]','bb',4,5,'kk','[efgh]',6,7,'no','[ijkl]',4,5,'lo']
So here we want to group the elements that follow each '[...]' item, so the expected output would be:
[['a',2],{'abcd': ['bb',4,5,'kk']},{'efgh': [6,7,'no']},{'ijkl': [4,5,'lo']}]
Any help would be appreciated.
You can use groupby:
from itertools import groupby
a=['a',2,'[abcd]','bb',4,5,'kk','[efgh]',6,7,'no','[ijkl]',4,5,'lo']
def group_by_tag(li):
def tag_counter(x):
if isinstance(x, str) and x.startswith('[') and x.endswith(']'):
tag_counter.cnt += 1
return tag_counter.cnt
tag_counter.cnt = 0
return groupby(li, key=tag_counter)
Which you can use to make a list of tuples for each segment partitioned by the [tag]:
>>> x=[(k,list(l)) for k, l in group_by_tag(a)]
>>> x
[(0, ['a', 2]), (1, ['[abcd]', 'bb', 4, 5, 'kk']), (2, ['[efgh]', 6, 7, 'no']), (3, ['[ijkl]', 4, 5, 'lo'])]
And then create your desired mixed-type list from that:
>>> [v if k==0 else {v[0].strip('[]'):v[1:]} for k,v in x]
[['a', 2], {'abcd': ['bb', 4, 5, 'kk']}, {'efgh': [6, 7, 'no']}, {'ijkl': [4, 5, 'lo']}]
But consider that it is usually better to have a list of the same type of object to make processing that list easier.
If you want that, you could do:
>>> [{'no_tag':v} if k==0 else {v[0].strip('[]'):v[1:]} for k,v in x]
[{'no_tag': ['a', 2]}, {'abcd': ['bb', 4, 5, 'kk']}, {'efgh': [6, 7, 'no']}, {'ijkl': [4, 5, 'lo']}]
By "=>" I assume you want a dictionary if there is a preceding keyword, with the key being the word enclosed within the square brackets (if that's not what you intended, feel free to comment and I'll edit this post).
import re
def sort(iterable):
result = {}
governing_match = None
for each in list(iterable):
match = re.findall(r"\[.{1,}\]", str(each))
if len(match) > 0:
governing_match = match[0][1:-1]
result[governing_match] = []
continue
result[governing_match].append(each)
return result
a=['[foo]', 'a' , 2 ,'[abcd]','bb',4,5,'kk','[efgh]',6,7,'no','[ijkl]',4,5,'lo']
for (k, v) in sort(a).items():
print(f"{k} : {v}")
Result :
foo : ['a', 2]
abcd : ['bb', 4, 5, 'kk']
efgh : [6, 7, 'no']
ijkl : [4, 5, 'lo']
The limitation of this is that every sequence should start with an element that is enclosed within square brackets.
You can use pairwise for that:
from itertools import pairwise
a=['a',2,'[abcd]','bb',4,5,'kk','[efgh]',6,7,'no','[ijkl]',4,5,'lo']
bracket_indices = [i for i, x in enumerate(a) if isinstance(x, str)
and x.startswith('[') and x.endswith(']')]
bracket_indices.append(len(a))
output = [a[:bracket_indices[0]]] # First item is special cased
output.extend({a[i][1:-1]: a[i+1:j]} for i, j in pairwise(bracket_indices))
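If I have traced the indices correctly, printing output should reproduce the requested structure:
print(output)
# [['a', 2], {'abcd': ['bb', 4, 5, 'kk']}, {'efgh': [6, 7, 'no']}, {'ijkl': [4, 5, 'lo']}]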
I have a list ['','','',['',[['a','b'],['c']]],[[['a','b'],['c']]],[[['d']]]]
I want to flatten the list with indices and the output should be as follows:
flat list=['','','','','a','b','c','a','b','c','d']
indices=[0,1,2,3,3,3,3,4,4,4,5]
How to do this?
I have tried this:
def flat(nums):
res = []
index = []
for i in range(len(nums)):
if isinstance(nums[i], list):
res.extend(nums[i])
index.extend([i]*len(nums[i]))
else:
res.append(nums[i])
index.append(i)
return res,index
But this doesn't work as expected.
TL;DR
This implementation handles nested iterables with unbounded depth:
def enumerate_items_from(iterable):
cursor_stack = [iter(iterable)]
item_index = -1
while cursor_stack:
sub_iterable = cursor_stack[-1]
try:
item = next(sub_iterable)
except StopIteration:
cursor_stack.pop()
continue
if len(cursor_stack) == 1:
item_index += 1
if not isinstance(item, str):
try:
cursor_stack.append(iter(item))
continue
except TypeError:
pass
yield item, item_index
def flat(iterable):
return map(list, zip(*enumerate_items_from(iterable)))
Which can be used to produce the desired output:
>>> nested = ['', '', '', ['', [['a', 'b'], ['c']]], [[['a', 'b'], ['c']]], [[['d']]]]
>>> flat_list, item_indexes = flat(nested)
>>> print(item_indexes)
[0, 1, 2, 3, 3, 3, 3, 4, 4, 4, 5]
>>> print(flat_list)
['', '', '', '', 'a', 'b', 'c', 'a', 'b', 'c', 'd']
Note that you should probably put the index first to mimic what enumerate does; that would make it easier to use for people who already know enumerate.
Important remark: unless you are certain your lists will not be nested too deeply, you shouldn't use any recursion-based solution. Otherwise, as soon as you have a nested list with depth greater than 1000, your code will crash. I explain this here. Note that a simple call to str(list) will crash on a test case with depth > 1000 (for some Python implementations it's more than that, but it's always bounded). The typical exception you'll get when using recursion-based solutions is (in short, this is due to how the Python call stack works):
RecursionError: maximum recursion depth exceeded ...
Implementation details
I'll go step by step: first we will flatten a list, then we will output both the flattened list and the depth of each item, and finally we will output both the list and the corresponding item indexes in the "main list".
Flattening list
That being said, this is actually quite interesting, as the iterative solution is perfectly suited for this. You can start from a simple (non-recursive) list-flattening algorithm:
def flatten(iterable):
return list(items_from(iterable))
def items_from(iterable):
cursor_stack = [iter(iterable)]
while cursor_stack:
sub_iterable = cursor_stack[-1]
try:
item = next(sub_iterable)
except StopIteration: # post-order
cursor_stack.pop()
continue
if isinstance(item, list): # pre-order
cursor_stack.append(iter(item))
else:
yield item # in-order
Computing depth
We can get the depth by looking at the stack size, depth = len(cursor_stack) - 1, so the yield line becomes:
else:
yield item, len(cursor_stack) - 1 # in-order
This will return an iterator of pairs (item, depth); if we need to separate this result into two sequences, we can use the zip function:
>>> a = [1, 2, 3, [4 , [[5, 6], [7]]], [[[8, 9], [10]]], [[[11]]]]
>>> flatten(a)
[(1, 0), (2, 0), (3, 0), (4, 1), (5, 3), (6, 3), (7, 3), (8, 3), (9, 3), (10, 3), (11, 3)]
>>> flat_list, depths = zip(*flatten(a))
>>> print(flat_list)
(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
>>> print(depths)
(0, 0, 0, 1, 3, 3, 3, 3, 3, 3, 3)
We will now do something similar to have item indexes instead of the depth.
Computing item indexes
To instead compute item indexes (in the main list), you'll need to count the number of items you've seen so far, which can be done by adding 1 to an item_index every time we iterate over an item that is at depth 0 (when the stack size is equal to 1):
def flatten(iterable):
return list(items_from(iterable))
def items_from(iterable):
cursor_stack = [iter(iterable)]
item_index = -1
while cursor_stack:
sub_iterable = cursor_stack[-1]
try:
item = next(sub_iterable)
except StopIteration: # post-order
cursor_stack.pop()
continue
if len(cursor_stack) == 1: # If current item is in "main" list
item_index += 1
if isinstance(item, list): # pre-order
cursor_stack.append(iter(item))
else:
yield item, item_index # in-order
Similarly, we will break the pairs into two sequences using zip, and we will also use map to transform both iterators into lists:
>>> a = [1, 2, 3, [4 , [[5, 6], [7]]], [[[8, 9], [10]]], [[[11]]]]
>>> flat_list, item_indexes = map(list, zip(*flatten(a)))
>>> print(flat_list)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
>>> print(item_indexes)
[0, 1, 2, 3, 3, 3, 3, 4, 4, 4, 5]
Improvement: handling iterable inputs
Being able to take a broader palette of nested iterables as input could be desirable (especially if you build this for others to use). For example, the current implementation doesn't work as expected if we have nested iterators as input:
>>> a = iter([1, '2', 3, iter([4, [[5, 6], [7]]])])
>>> flat_list, item_indexes = map(list, zip(*flatten(a)))
>>> print(flat_list)
[1, '2', 3, <list_iterator object at 0x100f6a390>]
>>> print(item_indexes)
[0, 1, 2, 3]
If we want this to work we need to be a bit careful, because strings are iterable but we want them to be considered atomic items (not as lists of characters). Instead of assuming the input is a list, as we did before:
if isinstance(item, list): # pre-order
cursor_stack.append(iter(item))
else:
yield item, item_index # in-order
We will not inspect the input type; instead we will try to use it as if it were an iterable, and if that fails we will know that it's not an iterable (duck typing):
if not isinstance(item, str):
try:
cursor_stack.append(iter(item))
continue
# item is not an iterable object:
except TypeError:
pass
yield item, item_index
With this implementation, we have:
>>> a = iter([1, 2, 3, iter([4, [[5, 6], [7]]])])
>>> flat_list, item_indexes = map(list, zip(*flatten(a)))
>>> print(flat_list)
[1, 2, 3, 4, 5, 6, 7]
>>> print(item_indexes)
[0, 1, 2, 3, 3, 3, 3]
Building test cases
If you need to generate test cases with large depths, you can use this piece of code:
def build_deep_list(depth):
"""Returns a list of the form $l_{depth} = [depth-1, l_{depth-1}]$
with $depth > 1$ and $l_0 = [0]$.
"""
sub_list = [0]
for d in range(1, depth):
sub_list = [d, sub_list]
return sub_list
You can use this to make sure my implementation doesn't crash when the depth is large:
a = build_deep_list(1200)
flat_list, item_indexes = map(list, zip(*flatten(a)))
We can also check that we can't print such a list by using the str function:
>>> a = build_deep_list(1200)
>>> str(a)
RecursionError: maximum recursion depth exceeded while getting the repr of an object
Function repr is called by str(list) on every element from the input list.
Concluding remarks
In the end I agree that recursive implementations are way easier to read (as the call stack does half the hard work for us), but when implementing a low-level function like this I think it is a good investment to write code that works in all cases (or at least all the cases you can think of), especially when the solution is not that hard. It's also a way not to forget how to write non-recursive code for tree-like structures (which may not come up a lot unless you are implementing data structures yourself, but it's a good exercise).
Note that everything I say "against" recursion is only true because Python doesn't optimize call stack usage when facing recursion: Tail Recursion Elimination in Python. Many compiled languages, on the other hand, do Tail Call Optimization (TCO). This means that even if you write a perfect tail-recursive function in Python, it will crash on deeply nested lists.
If you need more details on the list flattening algorithm you can refer to the post I linked.
Simple and elegant solution:
def flat(main_list):
res = []
index = []
for main_index in range(len(main_list)):
# Check if element is a String
if isinstance(main_list[main_index], str):
res.append(main_list[main_index])
index.append(main_index)
# Check if element is a List
else:
sub_list = str(main_list[main_index]).replace('[', '').replace(']', '').replace(" ", '').replace("'", '').split(',')
res += sub_list
index += ([main_index] * len(sub_list))
return res, index
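A usage sketch on the nested list from the question (if I have traced the string manipulation correctly, it reproduces the expected output):
nested = ['', '', '', ['', [['a', 'b'], ['c']]], [[['a', 'b'], ['c']]], [[['d']]]]
res, index = flat(nested)
print(res)    # ['', '', '', '', 'a', 'b', 'c', 'a', 'b', 'c', 'd']
print(index)  # [0, 1, 2, 3, 3, 3, 3, 4, 4, 4, 5]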
This does the job, but if you want the results returned rather than collected in global lists, see the enhanced version after the EDIT below.
from pprint import pprint
ar = ["","","",["",[["a","b"],["c"]]],[[["a","b"],["c"]]],[[["d"]]]]
flat = []
indices= []
def squash(arr,indx=-1):
for ind,item in enumerate(arr):
if isinstance(item, list):
squash(item,ind if indx==-1 else indx)
else:
flat.append(item)
indices.append(ind if indx==-1 else indx)
squash(ar)
pprint(ar)
pprint(flat)
pprint(indices)
EDIT
And this version is for when you don't want to keep the lists as module-level globals and want them returned instead (note the mutable default arguments fl=[] and indc=[]: call it only once per run, or pass in fresh lists):
from pprint import pprint
ar = ["","","",["",[["a","b"],["c"]]],[[["a","b"],["c"]]],[[["d"]]]]
def squash(arr,indx=-1,fl=[],indc=[]):
for ind,item in enumerate(arr):
if isinstance(item, list):
fl,indc = squash(item,ind if indx==-1 else indx, fl, indc)
else:
fl.append(item)
indc.append(ind if indx==-1 else indx)
return fl,indc
flat,indices = squash(ar)
pprint(ar)
pprint(flat)
pprint(indices)
I'm not expecting you to need more than 1000 levels of recursion, which is the default limit.
I have the following code:
[x ** 2 for x in range(10)]
When I run it in the Python shell, it returns:
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
I've searched and it seems this is called a list comprehension and similarly there seem to be set/dict comprehensions and generator expressions. But how does it work?
From the documentation:
List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.
About your question, the list comprehension does the same thing as the following "plain" Python code:
>>> l = []
>>> for x in range(10):
... l.append(x**2)
>>> l
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
How do you write it in one line? Hmm...we can...probably...use map() with lambda:
>>> list(map(lambda x: x**2, range(10)))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
But isn't it clearer and simpler to just use a list comprehension?
>>> [x**2 for x in range(10)]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
Basically, we can do anything with x. Not only x**2. For example, run a method of x:
>>> [x.strip() for x in ('foo\n', 'bar\n', 'baz\n')]
['foo', 'bar', 'baz']
Or use x as another function's argument:
>>> [int(x) for x in ('1', '2', '3')]
[1, 2, 3]
We can also, for example, use x as the key of a dict object. Let's see:
>>> d = {'foo': '10', 'bar': '20', 'baz': '30'}
>>> [d[x] for x in ['foo', 'baz']]
['10', '30']
How about a combination?
>>> d = {'foo': '10', 'bar': '20', 'baz': '30'}
>>> [int(d[x].rstrip('0')) for x in ['foo', 'baz']]
[1, 3]
And so on.
You can also use if or if...else in a list comprehension. For example, you only want odd numbers in range(10). You can do:
>>> l = []
>>> for x in range(10):
... if x%2:
... l.append(x)
>>> l
[1, 3, 5, 7, 9]
Ah that's too complex. What about the following version?
>>> [x for x in range(10) if x%2]
[1, 3, 5, 7, 9]
To use an if...else ternary expression, you need to put the if ... else ... after x, not after range(10):
>>> [i if i%2 != 0 else None for i in range(10)]
[None, 1, None, 3, None, 5, None, 7, None, 9]
Have you heard about nested list comprehension? You can put two or more fors in one list comprehension. For example:
>>> [i for x in [[1, 2, 3], [4, 5, 6]] for i in x]
[1, 2, 3, 4, 5, 6]
>>> [j for x in [[[1, 2], [3]], [[4, 5], [6]]] for i in x for j in i]
[1, 2, 3, 4, 5, 6]
Let's talk about the first part, for x in [[1, 2, 3], [4, 5, 6]] which gives [1, 2, 3] and [4, 5, 6]. Then, for i in x gives 1, 2, 3 and 4, 5, 6.
Warning: You always need to put for x in [[1, 2, 3], [4, 5, 6]] before for i in x:
>>> [j for j in x for x in [[1, 2, 3], [4, 5, 6]]]
Traceback (most recent call last):
File "<input>", line 1, in <module>
NameError: name 'x' is not defined
We also have set comprehensions, dict comprehensions, and generator expressions.
set comprehensions and list comprehensions are basically the same, but the former returns a set instead of a list:
>>> {x for x in [1, 1, 2, 3, 3, 1]}
{1, 2, 3}
It's the same as:
>>> set([i for i in [1, 1, 2, 3, 3, 1]])
{1, 2, 3}
A dict comprehension looks like a set comprehension, but it uses {key: value for key, value in ...} or {i: i for i in ...} instead of {i for i in ...}.
For example:
>>> {i: i**2 for i in range(5)}
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
And it equals:
>>> d = {}
>>> for i in range(5):
... d[i] = i**2
>>> d
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
Does (i for i in range(5)) give a tuple? No!, it's a generator expression. Which returns a generator:
>>> (i for i in range(5))
<generator object <genexpr> at 0x7f52703fbca8>
It's the same as:
>>> def gen():
... for i in range(5):
... yield i
>>> gen()
<generator object gen at 0x7f5270380db0>
And you can use it as a generator:
>>> gen = (i for i in range(5))
>>> next(gen)
0
>>> next(gen)
1
>>> list(gen)
[2, 3, 4]
>>> next(gen)
Traceback (most recent call last):
File "<input>", line 1, in <module>
StopIteration
Note: If you use a list comprehension inside a function, you don't need the [] if that function could loop over a generator. For example, sum():
>>> sum(i**2 for i in range(5))
30
Related (about generators): Understanding Generators in Python.
There are list, dictionary, and set comprehensions, but no tuple comprehensions (though do explore "generator expressions").
They address the problem that traditional loops in Python are statements (don't return anything) not expressions which return a value.
They are not the solution to every problem and can be rewritten as traditional loops. They become awkward when state needs to be maintained & updated between iterations.
They typically consist of:
[<output expr> <loop expr <input expr>> <optional predicate expr>]
but can be twisted in lots of interesting and bizarre ways.
They can be analogous to the traditional map() and filter() operations, which still exist in Python and continue to be used (see the sketch below).
When done well, they have a high satisfaction quotient.
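A small sketch of that analogy (my example, not from the answer above): the comprehension does in one expression what the map/filter pipeline does:
nums = range(10)
evens_squared = [n ** 2 for n in nums if n % 2 == 0]
same_thing = list(map(lambda n: n ** 2, filter(lambda n: n % 2 == 0, nums)))
assert evens_squared == same_thing == [0, 4, 16, 36, 64]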
If you prefer a more visual way of figuring out what's going on then maybe this will help:
# for the example in the question...
y = []
for x in range(10):
y += [x**2]
# is equivalent to...
y = [x**2 for x in range(10)]
# for a slightly more complex example, it is useful
# to visualize where the various x's end up...
a = [1,2,3,4]
b = [3,4,5,6]
c = []
for x in a:
if x in b:
c += [x]
# \ \ /
# \ _____\______/
# \ / \
# \/ \
# /\ \
# / \ \
# / \ \
c = [x for x in a if x in b]
print(c)
...produces the output [3, 4]
I've seen a lot of confusion lately (on other SO questions and from coworkers) about how list comprehensions work. A wee bit of math education can help with why the syntax is like this, and what list comprehensions really mean.
The syntax
It's best to think of list comprehensions as predicates over a set/collection, like we would in mathematics by using set builder notation. The notation actually feels pretty natural to me, because I hold an undergrad degree in Mathematics. But forget about me, Guido van Rossum (inventor of Python) holds a masters in Mathematics and has a math background.
Set builder notation crash course
Here's the very basics of how set builder notation works; take, for example, {x | x ∈ ℕ, x > 0}.
So, this set builder notation represents the set of numbers that are strictly positive (i.e. [1, 2, 3, 4, ...]).
Points of confusion
1) The predicate filter in set builder notation only specifies which items we want to keep, and list comprehension predicates do the same thing. You don't have to include special logic for omitting items, they are omitted unless included by the predicate. The empty predicate (i.e. no conditional at the end) includes all items in the given collection.
2) The predicate filter in set builder notation goes at the end, and similarly in list comprehensions. (Some) beginners think something like [x < 5 for x in range(10)] will give them the list [0, 1, 2, 3, 4], when in fact it outputs [True, True, True, True, True, False, False, False, False, False], because we asked Python to evaluate x < 5 for all items in range(10). No predicate implies that we get everything from the set (just like in set builder notation).
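A small illustration of point 2 (my example): the predicate filters, while the output expression maps:
[x for x in range(10) if x < 5]   # [0, 1, 2, 3, 4] -- the condition filters
[x < 5 for x in range(10)]        # [True, True, True, True, True, False, ...] -- the condition is just evaluated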
If you keep set builder notation in the back of your mind while using list comprehensions, they're a bit easier to swallow.
HTH!
Introduction
A list comprehension is a high level, declarative way to create a list in Python. The main benefits of comprehensions are readability and maintainability. A lot of people find them very readable, and even developers who have never seen them before can usually guess correctly what it means.
# Snippet 1
squares = [n ** 2 for n in range(5)]
# Snippet 2
squares = []
for n in range(5):
squares.append(n ** 2)
Both snippets of code will produce squares to be equal to [0, 1, 4, 9, 16].
Notice that in the first snippet, what you type is declaring what kind of list you want, while the second is specifying how to create it. This is why a comprehension is high-level and declarative.
Syntax
[EXPRESSION for VARIABLE in SEQUENCE]
EXPRESSION is any Python expression, but it is typical to have some variable in it. This variable is stated in the VARIABLE field. SEQUENCE defines the source of values the variable enumerates through.
Considering Snippet 1, [n ** 2 for n in range(5)]:
EXPRESSION is n ** 2
VARIABLE is n
SEQUENCE is range(5)
Notice that if you check the type of squares, you will see that a list comprehension produces just a regular list:
>>> type(squares)
<class 'list'>
More about EXPRESSION
The expression can be anything that reduces to a value:
Arithmetic expressions such as n ** 2 + 3 * n + 1
A function call like f(n) using n as variable
A slice operation like s[::-1]
Method calls bar.foo()
...
Some examples:
>>> [2 * x + 3 for x in range(5)]
[3, 5, 7, 9, 11]
>>> [abs(num) for num in range(-5, 5)]
[5, 4, 3, 2, 1, 0, 1, 2, 3, 4]
>>> animals = ['dog', 'cat', 'lion', 'tiger']
>>> [animal.upper() for animal in animals]
['DOG', 'CAT', 'LION', 'TIGER']
Filtering:
The order of elements in the final list is determined by the order of SEQUENCE. However, you can filter out elements adding an if clause:
[EXPRESSION for VARIABLE in SEQUENCE if CONDITION]
CONDITION is an expression that evaluates to True or False. Technically, the condition doesn't have to depend upon VARIABLE, but it typically uses it.
Examples:
>>> [n ** 2 for n in range(5) if n % 2 == 0]
[0, 4, 16]
>>> animals = ['dog', 'cat', 'lion', 'tiger']
>>> [animal for animal in animals if len(animal) == 3]
['dog', 'cat']
Also, remember that Python allows you to write other kinds of comprehensions other than lists:
dictionary comprehensions
set comprehensions
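For instance, quick sketches of those two kinds:
word_lengths = {w: len(w) for w in ['dog', 'cat', 'lion']}   # dict comprehension
unique_lengths = {len(w) for w in ['dog', 'cat', 'lion']}    # set comprehension
print(word_lengths)    # {'dog': 3, 'cat': 3, 'lion': 4}
print(unique_lengths)  # {3, 4}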
Problem:
I need to define a peaks function which is passed an iterable as its parameter; by consuming this iterable, the function should return a list of peaks. The only data structure I can create is the list being returned; I may use no intermediate data structures to help the computation: e.g., I cannot create a list with all the values in the iterable. Note that I also cannot assume the argument is indexable, nor can I compute len for it: it is just iterable.
For example:
peaks([0,1,-1,3,8,4,3,5,4,3,8]) returns [1, 8, 5].
This result means the values 1, 8, and 5 are strictly bigger than the values immediately preceding and following them.
def peaks(iterable):
l=[]
i=iter(iterable)
v=next(i)
try:
while True:
if v>next(iter(iterable)):
l.append(v)
v=next(i)
except StopIteration:
pass
return l
Calling these should give me:
peaks([0,1,-1,3,8,4,3,5,4,3,8]) --> [1,8,5]
peaks([5,2,4,9,6,1,3,8,0,7]) -->[9,8]
But I am getting:
peaks([0,1,-1,3,8,4,3,5,4,3,8]) --> [1, 3, 8, 4, 3, 5, 4, 3, 8]
peaks([5,2,4,9,6,1,3,8,0,7]) --> [9, 6, 8, 7]
Please help me with this problem; I have spent so much time on it and am making no progress. I don't know how to write the if statement to check the values immediately preceding and following. Any help would be great! Actual code would be really appreciated, since my English is bad.
import itertools

def peaks(L):
answer = []
a,b,c = itertools.tee(L, 3)
next(b)
next(c)
next(c)
for first, second, third in zip(a,b,c):
if first < second > third:  # strictly greater than both neighbours, per the problem statement
answer.append(second)
return answer
In [61]: peaks([0,1,-1,3,8,4,3,5,4,3,8])
Out[61]: [1, 8, 5]
In [62]: peaks([5,2,4,9,6,1,3,8,0,7])
Out[62]: [9, 8]
for i in range(1, len(A)):
A[i] = A[i-1] + A[i]
You can't do that with a list comprehension as they don't allow assignments.
You can use a simple generator function:
def func(lis):
yield lis[0]
for i,x in enumerate(lis[1:],1):
lis[i] = lis[i-1] + x
yield lis[i]
>>> A = [1, 2, 3, 4, 5, 6, 7]
>>> list(func(A))
[1, 3, 6, 10, 15, 21, 28]
Though less efficient, this does give the desired output, but I think it is closer to O(n**2):
A = [sum(A[:i+1]) for i, _ in enumerate(A)]
AFAIK this can't be done with a list comprehension the way you want. I would suggest using the for-loop version you've provided. Even if it were possible with a list comprehension, there's no point when you can just modify the list in place.
Use B (another temporary variable).
This should do the trick.
def func(L):
it = iter(L)
v = it.next()
yield v
for x in it:
v += x
yield v
A = [1, 2, 3, 4, 5, 6, 7]
print list(func(A))
This creates an iterator that returns one value at the time. To get the full new list at once you need to use a list() call around the function call, like:
list(func(A))
This generator function should work on any iterable (also those that don't support getting value based on index, like L[0])
I don't think there's an efficient way to do this with a list comprehension.
A one-line solution uses reduce:
>>> the_list = [1,2,3,4,5]
>>> reduce(lambda result, x: result+[result[-1] + x] ,the_list, [0])[1:]
[1, 3, 6, 10, 15]