Efficient way of calculating specific length combinations of adjacent data? - python-3.x

I have a list of elements, and I'd like to determine all the ways they can be split - preserving their order - into 'n' groups.
So as an example, if I have an ordered list of A, B, C, D, E and only want 2 groups, the four solutions would be:
ABCD, E
ABC, DE
AB, CDE
A, BCDE
Now, with some help from another StackOverflow post I've come up with a workable brute-force solution that calculates all possible groupings, from which I simply extract those cases that meet my target number of groups.
For reasonable numbers of elements this is just fine, but as I extend the number of elements, the number of combinations grows very quickly, and I was wondering if there might be a clever way to limit the solutions calculated to only those that meet my target number of groups?
Code so far is as follows:
import itertools
import string
import collections

def generate_combination(source, comb):
    res = []
    for x, action in zip(source, comb + (0,)):
        res.append(x)
        if action == 0:
            yield "".join(res)
            res = []
# Create a list of the first 20 letters of the alphabet
seq = list(string.ascii_uppercase[0:20])
seq
['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T']
# Generate all possible groupings
combinations = [list(generate_combination(seq,c)) for c in itertools.product((0,1), repeat=len(seq)-1)]
len(combinations)
524288
# Count the number of groups in each solution,
# and build a Counter to allow easy querying
group_counts = [len(i) for i in combinations]
count_dic = collections.Counter(group_counts)
count_dic[1], count_dic[2], count_dic[3], count_dic[4], count_dic[5], count_dic[6]
(1, 19, 171, 969, 3876, 11628)
So as you can see, while over half a million combinations were calculated, if I had only wanted the solutions with 5 groups, only 3,876 would have needed to be calculated.
Any suggestions?

A partition of seq into 5 parts is equivalent to a choice of 4 locations in range(1, len(seq)) at which to cut seq.
Thus you could use itertools.combinations(range(1, len(seq)), 4) to generate all the partitions of seq into 5 parts:
import itertools as IT
import string

def partition_into_n(iterable, n, chain=IT.chain, map=map):
    """
    Return a generator of all partitions of iterable into n parts.
    Based on http://code.activestate.com/recipes/576795/ (Raymond Hettinger)
    which generates all partitions.
    """
    s = iterable if hasattr(iterable, '__getitem__') else tuple(iterable)
    size = len(s)
    first, middle, last = [0], range(1, size), [size]
    getitem = s.__getitem__
    return (map(getitem, map(slice, chain(first, div), chain(div, last)))
            for div in IT.combinations(middle, n-1))
seq = list(string.ascii_uppercase[0:20])
ngroups = 5
for partition in partition_into_n(seq, ngroups):
    print(' '.join([''.join(grp) for grp in partition]))

print(len(list(partition_into_n(seq, ngroups))))
yields
A B C D EFGHIJKLMNOPQRST
A B C DE FGHIJKLMNOPQRST
A B C DEF GHIJKLMNOPQRST
A B C DEFG HIJKLMNOPQRST
...
ABCDEFGHIJKLMNO P Q RS T
ABCDEFGHIJKLMNO P QR S T
ABCDEFGHIJKLMNO PQ R S T
ABCDEFGHIJKLMNOP Q R S T
3876
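As a quick sanity check on that count, the number of partitions of seq into n ordered parts is the binomial coefficient C(len(seq)-1, n-1), which also explains the counts in the question's count_dic (a small verification sketch, using math.comb from Python 3.8+):

import math

# 20 elements split into 5 parts -> choose 4 cut points out of 19
print(math.comb(19, 4))  # 3876

# The whole distribution from the question follows the same pattern
print([math.comb(19, k - 1) for k in range(1, 7)])
# [1, 19, 171, 969, 3876, 11628]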

Related

find the lengths of all sublists containing common repeated element

I need to find all the sublists of a list where the element 'F' occurs in consecutive positions
g = ['T','F','F','F','F','T','T','T','F','F','F','T']
So, in this case there are two sublists present in this list that contain the element 'F' in a consecutive run:
['F','F','F','F'] at indices 1, 2, 3, 4, which is one run, so the answer is 4
and
['F','F','F'] at indices 8, 9, 10, which is again a run of consecutive indices, so the answer is 3
Note:
The list contains only the two elements 'T' and 'F', and we always perform these operations for the element 'F'.
You can get the lengths of consecutive sequences with itertools.groupby:
from itertools import groupby
data = ['T','F','F','F','F','T','T','T','F','F','F','T']
# Consecutive sequences of "F".
# "groupby(data)" produces an iterator that calculates on-the-fly.
# The iterator returns consecutive keys and groups from the iterable "data".
seqs = [list(g) for k, g in groupby(data) if k == 'F']
print(seqs)
# [['F', 'F', 'F', 'F'], ['F', 'F', 'F']]
seq_lens = [len(k) for k in seqs]
print(seq_lens)
# [4, 3]
It is also easy to get the max length of such consecutive sequences:
max_len_seq = len(max(seqs, key=len))
print(max_len_seq)
# 4
See itertools.groupby for more info:
class groupby:
    # [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B
    # [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D
    ...
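Since the question also asks about the index positions of the runs, here is a small sketch (an addition building on the same groupby idea, not part of the answer above) that recovers the start index of each run by grouping (index, value) pairs:

from itertools import groupby

data = ['T','F','F','F','F','T','T','T','F','F','F','T']

# Group (index, value) pairs by value, then record the start index
# and length of each run of 'F'
runs = []
for key, grp in groupby(enumerate(data), key=lambda pair: pair[1]):
    if key == 'F':
        grp = list(grp)
        runs.append((grp[0][0], len(grp)))

print(runs)
# [(1, 4), (8, 3)] -> a run of length 4 starting at index 1,
#                     and a run of length 3 starting at index 8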
You can keep a counter of consecutive repeats. Traverse the list: when you find an 'F', increase the counter; when you find a 'T', check the counter first, and if it is bigger than 1 an 'F' run has just ended, so print its length and reset the counter.
fcount = 0
for e in g:
    if e == "F":
        fcount += 1
    else:
        if fcount > 1:
            print(fcount)  # length of the 'F' run that just ended
        fcount = 0

# Don't forget a run of 'F' at the very end of the list
if fcount > 1:
    print(fcount)

Check if element is occurring very first time in python list

I have a list with values occurring multiple times. I want to loop over the list and check whether a value is occurring for the very first time.
For example, let's say I have a list like
L = ['a','a','a','b','b','b','b','b','e','e','e', ...]
Now, at every first occurrence of an element, I want to perform some set of tasks.
How do I get the first occurrence of each element?
Thanks in Advance!!
Use a set to check whether you have processed that item already:
visited = set()
L = ['a','a','a','b','b','b','b','b','e','e','e', ...]
for e in L:
    if e not in visited:
        visited.add(e)
        # process first-time tasks here
    else:
        pass  # process not-first-time tasks here
You can use unique_everseen from itertools recipes.
This function returns a generator which yields only the first occurrence of each element.
Code
from itertools import filterfalse

def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    # unique_everseen('AAAABBBCCDAABBB') --> A B C D
    # unique_everseen('ABBCcAD', str.lower) --> A B C D
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element
Example
lst = ['a', 'a', 'b', 'c', 'b']
for x in unique_everseen(lst):
    print(x)  # Do something with the element
Output
a
b
c
The function unique_everseen also allows you to pass a key for comparing elements. This is useful in many cases, for example if you also need to know the position of each first occurrence.
Example
lst = ['a', 'a', 'b', 'c', 'b']
for i, x in unique_everseen(enumerate(lst), key=lambda x: x[1]):
    print(i, x)
Output
0 a
2 b
3 c
Why not use this?
L = ['a','a','a','b','b','b','b','b','e','e','e', ...]
for idxL, L_idx in enumerate(L):
    if L.index(L_idx) == idxL:
        print("This is the first occurrence")
For very long lists it is less efficient than building a set prior to the loop, because L.index rescans the list from the start on every iteration, but it seems more direct to write.

Python: Symmetrical Difference Between List of Sets of Strings

I have a list that contains multiple sets of strings, and for each set I would like to count the elements that do not appear in any of the other sets.
For example, I have the following list:
targets = [{'B', 'C', 'A'}, {'E', 'C', 'D'}, {'F', 'E', 'D'}]
For the above, desired output is:
[2, 0, 1]
because in the first set, A and B are not found in any of the other sets; in the second set, there are no elements unique to it; and in the third set, F is not found in any of the other sets.
I thought about approaching this backwards: finding the intersection of each set and subtracting the length of the intersection from the length of the list, but set.intersection(*) does not appear to work on strings, so I'm stuck:
set1 = {'A', 'B', 'C'}
set2 = {'C', 'D', 'E'}
set3 = {'D', 'E', 'F'}
targets = [set1, set2, set3]
>>> set.intersection(*targets)
set()
The issue you're having is that there are no strings shared by all three sets, so your intersection comes up empty. That's not a string issue; it would work the same with numbers or anything else you can put in a set.
The only way I see to do this is a global calculation over all the sets: first count all the values (using collections.Counter), then, for each set, count the number of its values that showed up only once in the global count.
from collections import Counter

def unique_count(sets):
    count = Counter()
    for s in sets:
        count.update(s)
    return [sum(count[x] == 1 for x in s) for s in sets]
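A quick usage check with the question's data:

targets = [{'B', 'C', 'A'}, {'E', 'C', 'D'}, {'F', 'E', 'D'}]
print(unique_count(targets))  # [2, 0, 1]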
Try something like the below:
Take the symmetric difference with every other set, then intersect the result with the given input set.
def symVal(index, targets):
    bseSet = targets[index]
    symSet = bseSet
    for j in range(len(targets)):
        if index != j:
            symSet = symSet ^ targets[j]
    print(len(symSet & bseSet))

for i in range(len(targets)):
    symVal(i, targets)
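For the example targets this prints 2, 0 and 1. Be aware, though, that a chain of symmetric differences tracks parity: an element of the base set that also appears in an even number of the other sets would survive the chain and be miscounted, so the Counter-based approach above is safer in general.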
Your code example doesn't work because it's finding the intersection between all of the sets, which is empty (since no element occurs in every set). You want to find the difference between each set and the union of all other sets. For example:
set1 = {'A', 'B', 'C'}
set2 = {'C', 'D', 'E'}
set3 = {'D', 'E', 'F'}
targets = [set1, set2, set3]
result = []
for set_element in targets:
    result.append(len(set_element.difference(set.union(*[x for x in targets if x is not set_element]))))
print(result)
(note that [x for x in targets if x is not set_element] is just the list of all the other sets)
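For the example sets this prints [2, 0, 1], matching the desired output.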

Getting all str type elements in a pd.DataFrame

From my limited knowledge of pandas, pandas.Series.str.contains can search for a specific str in a pd.Series. But what if the DataFrame is large and I just want to glance at all the kinds of str elements in it before I do anything?
Example like this:
pd.DataFrame({'x1':[1,2,3,'+'],'x2':[2,'a','c','this is']})
x1 x2
0 1 2
1 2 a
2 3 c
3 + this is
I need a function to return ['+','a','c','this is']
If you are looking strictly at which values are strings and performance is not a concern, then there is a very simple answer.
df.where(df.applymap(type).eq(str)).stack().tolist()
['a', 'c', '+', 'this is']
There are 2 possible interpretations - numeric values saved as strings either count as strings or they don't.
Check the difference:
import numpy as np
import pandas as pd

df = pd.DataFrame({'x1':[1,'2.78','3','+'],'x2':[2.8,'a','c','this is'], 'x3':[1,4,5,4]})
print (df)
     x1       x2  x3
0     1      2.8   1
1  2.78        a   4 <- 2.78 is a float saved as a string
2     3        c   5 <- 3 is an int saved as a string
3     +  this is   4

# flatten all values
ar = df.values.ravel()
# errors='coerce' in pd.to_numeric returns NaN for non-numeric values
L = np.unique(ar[np.isnan(pd.to_numeric(ar, errors='coerce'))]).tolist()
print (L)
['+', 'a', 'c', 'this is']
Another solution is to use a custom function that checks whether a value can be converted to float:
def is_not_float_try(value):
    try:
        float(value)
        return False
    except ValueError:
        return True

s = df.stack()
L = s[s.apply(is_not_float_try)].unique().tolist()
print (L)
['a', 'c', '+', 'this is']
If you need all values saved as strings, use isinstance:
s = df.stack()
L = s[s.apply(lambda x: isinstance(x, str))].unique().tolist()
print (L)
['2.78', 'a', '3', 'c', '+', 'this is']
You can use str.isdigit with unstack:
df[df.apply(lambda x : x.str.isdigit()).eq(0)].unstack().dropna().tolist()
Out[242]: ['+', 'a', 'c', 'this is']
Using regular expressions and set union, you could try something like:
>>> set.union(*[set(df[c][~df[c].str.findall('[^\d]+').isnull()].unique()) for c in df.columns])
{'+', 'a', 'c', 'this is'}
If you use a regular expression for a number in general, you could omit floating point numbers as well.
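A sketch of that idea (the exact pattern below is an assumption of what "a number in general" should match - ints, floats and scientific notation):

import re

# Matches e.g. "3", "2.78", "-1e5"; anything else counts as a 'real' string
num_re = re.compile(r'^[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?$')

s = df.stack()
L = s[s.apply(lambda v: isinstance(v, str) and not num_re.match(v))].unique().tolist()
print(L)
# ['a', 'c', '+', 'this is']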

find elements in lists using For Loop

keys = ['a','H','c','D','m','l']
values = ['a','c','H','D']
category = []
for index, i in enumerate(keys):
    for j in values:
        if j in i:
            category.append(j)
            break
        if index == len(category):
            category.append("other")
print(category)
My expected output is ['a', 'H', 'c', 'D', 'other', 'other']
But I am getting ['a', 'other', 'H', 'c', 'D', 'other']
EDIT: OP edited his question multiple times.
From the Python documentation on the break statement:
It terminates the nearest enclosing loop.
You break out of the outer loop using the "break" statement. The execution never even reaches the inner while loop.
Now, to solve your problem of categorising strings:
xs = ['Am sleeping', 'He is walking', 'John is eating']
ys = ['walking', 'eating', 'sleeping']
categories = []
for x in xs:
    for y in ys:
        if y in x:
            categories.append(y)
            break
    else:
        categories.append("other")
print(categories)  # ['sleeping', 'walking', 'eating']
Iterate over both lists and check if any category matches. If one does, append it to the categories list and continue with the next string to categorise. If no category matched, fall into the for loop's else clause (it runs only when the inner loop was not ended by break) and categorise the string as "other". This replaces the fragile check in your code that compared the count of matched categories against the current (0-based) index.
