How to hide index 0 of a pandas DataFrame in Jupyter - python-3.x

I want to hide the index 0 (the rows) in my Jupyter notebook:
display(df.style.set_properties(**{'text-align': 'left'}))
The method I use is style, in order to render the frame as HTML and left-align it, where df is a simple pandas DataFrame.
Thank you!

It is very unclear what you want, so I'm going to take a guess.
Consider the dataframe df
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4], [5, 6]], list('ABC'), list('XY'))
Since simply "hiding" the first row is super easy, as @chtonicdaemon pointed out:
df.iloc[1:]
X Y
B 3 4
C 5 6
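For the record, df.tail(-1) is an equivalent spelling: with a negative n, tail returns all rows except the first |n|.
df.tail(-1)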
I'm going to assume you wanted something different. What I've seen asked before is: how do I not display the index? I'll first point to a previous answer of mine, ColorChange.
This is the function I'll use to apply my own CSS:
def HTML_with_style(df, style=None, random_id=None):
    from IPython.display import HTML
    import numpy as np
    import re

    df_html = df.to_html()

    if random_id is None:
        random_id = 'id%d' % np.random.choice(np.arange(1000000))

    if style is None:
        style = """
        <style>
            table#{random_id} {{color: blue}}
        </style>
        """.format(random_id=random_id)
    else:
        new_style = []
        s = re.sub(r'</?style>', '', style).strip()
        for line in s.split('\n'):
            line = line.strip()
            if not re.match(r'^table', line):
                line = re.sub(r'^', 'table ', line)
            new_style.append(line)
        new_style = ['<style>'] + new_style + ['</style>']
        style = re.sub(r'table(#\S+)?', 'table#%s' % random_id, '\n'.join(new_style))

    df_html = re.sub(r'<table', r'<table id=%s ' % random_id, df_html)

    return HTML(style + df_html)
And I'll call it with my own CSS to suppress the index:
style = """
<style>
table tr :first-child{display: none;}
</style>
"""
HTML_with_style(df, style)
This doesn't work for print(df)
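If you only need this in a notebook and run a recent pandas (1.4+, an assumption about your environment), the Styler can hide the index directly, which avoids the custom CSS entirely:
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4], [5, 6]], list('ABC'), list('XY'))

# hide(axis='index') suppresses the index in the rendered HTML;
# on pandas < 1.4 the equivalent call is Styler.hide_index()
df.style.set_properties(**{'text-align': 'left'}).hide(axis='index')
Like the CSS trick above, this only affects the rendered HTML, not print(df).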

Related

Merge two lists (non equal elements) in a csv using pandas

I have two lists with unequal numbers of elements. I want to put these two lists in a csv whose headings are 'video' and 'image'. But because the two lists are not the same length, I got an error.
ValueError: All arrays must be of the same length
import pandas as pd
VID = ['00001', '05445', '987457', '15455']
IMG = ['00001', '05445', '987457']
df = pd.DataFrame(data={"video": VID, "image": IMG})
df.to_csv("./file.csv", sep=',',index=False)
You can convert to Series first:
import pandas as pd
VID = ['00001', '05445', '987457', '15455']
IMG = ['00001', '05445', '987457']
df = pd.DataFrame(data={"video": pd.Series(VID), "image": pd.Series(IMG)})
df.to_csv("./file.csv", sep=',',index=False)
A more generic approach if there are many lists:
names = ['video', 'image']
vals = [VID, IMG]
df = pd.DataFrame(dict(zip(names, map(pd.Series, vals))))
df.to_csv("./file.csv", sep=',',index=False)
Suggestion 1:
Using None to create an "empty value"/null value
import pandas as pd
VID = ['00001', '05445', '987457', '15455']
IMG = ['00001', '05445', '987457', None]
df = pd.DataFrame(data={"video": VID, "image": IMG})
Suggestion 2: pad the shorter list with a placeholder string such as '0':
import pandas as pd
VID = ['00001', '05445', '987457', '15455']
IMG = ['00001', '05445', '987457', '0']
df = pd.DataFrame(data={"video": VID, "image": IMG})
Because these are IDs and the lists appear sorted, we can merge them into aligned temporary lists. This needs O(m+n) extra space and O(m+n) extra time, where m and n are the lengths of the two lists.
temp_vid = []
temp_img = []
i, j = 0, 0
while i < len(VID) and j < len(IMG):
    # When the value is in both lists, add it to both temp lists and move forward
    if VID[i] == IMG[j]:
        temp_vid.append(VID[i])
        temp_img.append(IMG[j])
        i += 1
        j += 1
    # When the value is only in VID, keep it in temp_vid and add None to temp_img
    elif VID[i] < IMG[j]:
        temp_vid.append(VID[i])
        temp_img.append(None)
        i += 1
    # When the value is only in IMG, keep it in temp_img and add None to temp_vid
    else:
        temp_vid.append(None)
        temp_img.append(IMG[j])
        j += 1
# If either list ends first, copy the remaining values of the other list
# and pad the exhausted one with None
while j < len(IMG):
    temp_vid.append(None)
    temp_img.append(IMG[j])
    j += 1
while i < len(VID):
    temp_vid.append(VID[i])
    temp_img.append(None)
    i += 1
df = pd.DataFrame({"video": pd.Series(temp_vid), "image": pd.Series(temp_img)})
df
    video   image
0   00001   00001
1   05445   05445
2  987457  987457
3   15455    None
Note that the merge compares the IDs as strings, so it assumes both lists are sorted in lexicographic order.
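If you don't actually need the ID-matching logic and just want to pad the shorter list, a minimal sketch with itertools.zip_longest (standard library) gives the same result as the Series approach:
import pandas as pd
from itertools import zip_longest

VID = ['00001', '05445', '987457', '15455']
IMG = ['00001', '05445', '987457']

# zip_longest pads the shorter iterable with fillvalue
rows = list(zip_longest(VID, IMG, fillvalue=None))
df = pd.DataFrame(rows, columns=['video', 'image'])
df.to_csv('./file.csv', sep=',', index=False)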

Altair/Vega-Lite heatmap: Filter top k

I am attempting to create a heatmap and retain only the top 5 samples based on the mean of the relative abundance. I am able to sort the heatmap properly, but I can't figure out how to retain only the top 5, in this case samples c, e, b, y, a. I am pasting a subset of the df with the image. I've tried myriad permutations of the "Top K Items Tutorial" on the altair-viz website. I'd prefer to use Altair to do the filtering if possible, rather than filtering the df itself in the Python code.
Dataframe:
,Lowest_Taxon,p1,p2,p3,p4,p5,p6,p7
0,a,0.03241281,0.0,0.467738067,3.14456785,0.589519651,13.5744323,0.0
1,b,0.680669,9.315121951,2.848404893,13.99058458,2.139737991,16.60779366,7.574639383
2,c,40.65862829,1.244878049,71.01223315,4.82197541,83.18777293,0.0,0.0
3,d,0.0,0.0,0.0,0.548471137,0.272925764,0.925147183,0.0
4,e,0.090755867,13.81853659,5.205085152,27.75721011,1.703056769,19.6691898,12.27775914
5,f,0.0,0.0,0.0,0.0,0.0,0.0,0.0
6,g,0.187994295,0.027317073,0.0,0.0,0.0,0.02242781,0.0
7,h,0.16854661,0.534634146,1.217318302,7.271813154,1.73580786,0.57751612,0.57027843
8,i,0.142616362,2.528780488,1.163348525,0.34279446,0.0,0.0,0.0
9,j,1.711396344,0.694634146,0.251858959,4.273504274,0.087336245,1.721334455,0.899027172
10,k,0.0,1.475121951,0.0,0.0,0.0,5.573310906,0.0
11,l,0.194476857,0.253658537,1.517150396,2.413273002,0.949781659,5.147182506,1.650452868
12,m,0.0,1.736585366,0.0,0.063988299,0.0,8.42724979,0.623951694
13,n,0.0,0.0,0.0,0.0,0.0,0.0,0.0
14,o,4.68689226,0.12097561,0.0,0.0,0.0,0.0,0.0
15,p,0.0,0.885853659,0.0,0.0,0.0,0.913933277,0.046964106
16,q,0.252819914,0.050731707,0.023986568,0.0,0.087336245,0.0,0.0
17,r,0.0,0.0,0.0,0.0,0.0,0.0,0.0
18,s,0.0,0.0,0.0,0.0,0.0,0.0,0.0
19,t,0.0,0.0,0.0,0.0,0.0,0.0,0.0
20,u,0.0,0.058536585,0.089949628,0.356506239,0.0,0.285954584,1.17410265
21,v,0.0,0.0,0.0,0.0,0.0,0.0,0.0
22,w,0.0,0.0,0.0,0.0,0.0,0.0,0.0
23,x,1.471541553,2.396097561,0.593667546,0.278806161,0.065502183,0.280347631,0.952700436
24,y,0.0,0.32,0.0,0.461629873,0.0,7.804878049,18.38980208
25,z,0.0,0.0,0.0,0.0,0.0,0.0,0.0
Code block:
import pandas as pd
import numpy as np
import altair as alt
from vega_datasets import data
from altair_saver import save
# Read in the file and fill empty cells with zero
df = pd.read_excel(r"path\to\df")  # raw string so \t is not read as a tab
doNotMelt = df.drop(df.iloc[:, 1:], axis=1)
df_melted = pd.melt(df, id_vars=doNotMelt, var_name='SampleID', value_name='Relative_abundance')

# Tell altair to plot as many rows as is necessary
alt.data_transformers.disable_max_rows()

alt.Chart(df_melted).mark_rect().encode(
    alt.X('SampleID:N'),
    alt.Y('Lowest_Taxon:N', sort=alt.EncodingSortField(field='Relative_abundance', op='mean', order='descending')),
    alt.Color('Relative_abundance:Q')
)
If you know that what you want to show is the entries c, e, b, y and a (and that will not change later), you can simply apply a transform_filter on the field Lowest_Taxon.
If you want to calculate on the spot which ones make it into the top five, it needs a bit more effort, i.e. a combination of joinaggregate, window and filter transforms.
I paste an example of each below. By the way, I converted the original data you pasted into a csv file which is imported by the code snippets. You can make it easier for others to use your pandas toy data by providing it as a dict, which can then be read directly in the code.
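For illustration, a dict version of just the first two rows of your table (values copied from above) would look like this and can be pasted straight into a question:
import pandas as pd

data = {'Lowest_Taxon': ['a', 'b'],
        'p1': [0.03241281, 0.680669],
        'p2': [0.0, 9.315121951]}
df = pd.DataFrame(data)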
Simple approach:
import pandas as pd
import altair as alt
import numpy as np
alt.data_transformers.disable_max_rows()
df = pd.read_csv('df.csv', index_col=0)
doNotMelt = df.drop(df.iloc[:,1:], axis=1)
df_melted = pd.melt(df, id_vars = doNotMelt, var_name = 'SampleID', value_name = 'Relative_abundance')
alt.Chart(df_melted).mark_rect().encode(
    alt.X('SampleID:N'),
    alt.Y('Lowest_Taxon:N', sort=alt.EncodingSortField(field='Relative_abundance', op='mean', order='descending')),
    alt.Color('Relative_abundance:Q')
).transform_filter(
    alt.FieldOneOfPredicate(field='Lowest_Taxon', oneOf=['c', 'e', 'b', 'y', 'a'])
)
Flexible approach:
set n to how many of the top entries you want to see
import pandas as pd
import altair as alt
import numpy as np
alt.data_transformers.disable_max_rows()
df = pd.read_csv('df.csv', index_col=0)
doNotMelt = df.drop(df.iloc[:,1:], axis=1)
df_melted = pd.melt(df, id_vars = doNotMelt, var_name = 'SampleID', value_name = 'Relative_abundance')
n = 5  # number of entries to display
alt.Chart(df_melted).mark_rect().encode(
    alt.X('SampleID:N'),
    alt.Y('Lowest_Taxon:N', sort=alt.EncodingSortField(field='Relative_abundance', op='mean', order='descending')),
    alt.Color('Relative_abundance:Q')
).transform_joinaggregate(
    mean_rel_ab='mean(Relative_abundance)',
    count_of_samples='valid(Relative_abundance)',
    groupby=['Lowest_Taxon']
).transform_window(
    rank='rank(mean_rel_ab)',
    sort=[alt.SortField('mean_rel_ab', order='descending')],
    frame=[None, None]
).transform_filter(
    alt.datum.rank <= (n - 1) * alt.datum.count_of_samples + 1
)

How do I add a dynamic list of variables to the command pd.concat

I am using python3 and pandas to create a script that will:
Be dynamic across different dataset lengths (rows) and unique values - completed
Take unique values from column A and create separate dataframes as variables for each unique entry - completed
Add totals to the bottom of each dataframe - completed
Concatenate the separate dataframes back together - incomplete
The issue is that I am unable to formulate a way to create a list of the variables in use and pass them as arguments to pd.concat.
Here is the sample dataset. The dataset may have more or fewer unique BrandFlavors, which is why the script must be flexible and dynamic.
Script:
import pandas as pd
import warnings
warnings.simplefilter(action='ignore')
excel_file = ('testfile.xlsx')
df = pd.read_excel(excel_file)
df = df.sort_values(by='This', ascending=False)
colarr = df.columns.values
arr = df[colarr[0]].unique()
for i in range(len(arr)):
    globals()['var%s' % i] = df.loc[df[colarr[0]] == arr[i]]

for i in range(len(arr)):
    if globals()['var%s' % i].empty:
        pass
    else:
        globals()['var%s' % i] = globals()['var%s' % i].append({'BrandFlavor': 'Total',
            'This': globals()['var%s' % i]['This'].sum(),
            'Last': globals()['var%s' % i]['Last'].sum(),
            'Diff': globals()['var%s' % i]['Diff'].sum(),
            '% Chg': globals()['var%s' % i]['Diff'].sum() / globals()['var%s' % i]['Last'].sum() * 100}, ignore_index=True)
        globals()['var%s' % i]['% Chg'].fillna(0, inplace=True)
        globals()['var%s' % i].fillna(' ', inplace=True)
I have tried this below; however, the list ends up being a list of strings:
vararr = []
count = 0
for x in range(len(arr)):
    vararr.append('var' + str(count))
    count = count + 1
df = pd.concat([vararr])
pd.concat does not recognize a string. I tried to build a class with an arg defined but had the same issue.
The desired outcome would be a code snippet that generates a list of variables matching the ones created by lines 9/10 and can be referenced as pd.concat([list, of, vars, here]). It must be dynamic. Thank you.
Just fixing the issue at hand: you shouldn't use globals to create variables; that is not considered good practice. Your code should work with some minor modifications.
import pandas as pd
import warnings
warnings.simplefilter(action='ignore')
excel_file = ('testfile.xlsx')
df = pd.read_excel(excel_file)
df = df.sort_values(by='This', ascending=False)
def good_dfs(dataframe):
    if dataframe.empty:
        pass
    else:
        this = dataframe.This.sum()
        last = dataframe.Last.sum()
        diff = dataframe.Diff.sum()
        data = {
            'BrandFlavor': 'Total',
            'This': this,
            'Last': last,
            'Diff': diff,
            'Pct Change': diff / last * 100
        }
        # append returns a new frame, so the result has to be assigned back
        # (note: DataFrame.append was removed in pandas 2.0; use pd.concat there)
        dataframe = dataframe.append(data, ignore_index=True)
        dataframe['Pct Change'].fillna(0.0, inplace=True)
        dataframe.fillna(' ', inplace=True)
    return dataframe

colarr = df.columns.values
arr = df[colarr[0]].unique()

dfs = []
for i in range(len(arr)):
    temp = df.loc[df[colarr[0]] == arr[i]]
    dfs.append(temp)

final_dfs = [good_dfs(d) for d in dfs]
final_df = pd.concat(final_dfs)
Although I will say, there are far easier ways to accomplish what you want without doing all of this; that, however, can be a separate question.
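For instance, here is a minimal sketch of one such easier route (assuming the same BrandFlavor/This/Last/Diff column names as above): group on the first column and build each Total row with groupby instead of globals():
import pandas as pd

def with_total(group):
    # sum the numeric columns into a one-row 'Total' frame
    total = group[['This', 'Last', 'Diff']].sum()
    total['BrandFlavor'] = 'Total'
    total['% Chg'] = total['Diff'] / total['Last'] * 100
    return pd.concat([group, total.to_frame().T], ignore_index=True)

final_df = pd.concat(
    [with_total(g) for _, g in df.groupby('BrandFlavor', sort=False)],
    ignore_index=True
)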

Apply function to pandas series given varying arguments

Initial question
I want to calculate the Levenshtein distance between multiple strings, one set in a series, the other in a list. I tried my hand at map, zip, etc., but I only got the desired result using a for loop and apply. Is there a way to improve style and especially speed?
Here is what I tried; it does what it is supposed to do, but lacks speed given a large series.
import stringdist
import pandas as pd

strings = ['Hello', 'my', 'Friend', 'I', 'am']
s = pd.Series(data=strings, index=strings)
c = ['me', 'mine', 'Friend']
df = pd.DataFrame()
for w in c:
    df[w] = s.apply(lambda x: stringdist.levenshtein(x, w))
## Result: ##
me mine Friend
Hello 4 5 6
my 1 3 6
Friend 5 4 0
I 2 4 6
am 2 4 6
Solution
Thanks to @Dames and @molybdenum42, I can provide the solution I used directly beneath the question. For more insights, please check their great answers below.
import stringdist
import pandas as pd
import numpy as np
from itertools import product

strings = ['Hello', 'my', 'Friend', 'I', 'am']
s = pd.Series(data=strings, index=strings)
c = ['me', 'mine', 'Friend']
word_combinations = np.array(list(product(s.values, c)))
vectorized_levenshtein = np.vectorize(stringdist.levenshtein)
result = vectorized_levenshtein(word_combinations[:, 0],
                                word_combinations[:, 1])
result = result.reshape((len(s), len(c)))
df = pd.DataFrame(result, columns=c, index=s)
This results in the desired data frame.
Setup:
import stringdist
import pandas as pd
import numpy as np
import itertools
s = pd.Series(data=['Hello', 'my', 'Friend'],
index=['Hello', 'my', 'Friend'])
c = ['me', 'mine', 'Friend']
Options
Option 1: an easy one-liner
df = pd.DataFrame([s.apply(lambda x: stringdist.levenshtein(x, w)) for w in c])
Option 2: np.fromfunction (thanks to @baccandr)
@np.vectorize
def lavdist(a, b):
    return stringdist.levenshtein(c[a], s[b])

df = pd.DataFrame(np.fromfunction(lavdist, (len(c), len(s)), dtype=int),
                  columns=c, index=s)
Option 3: see @molybdenum42
word_combinations = np.array(list(itertools.product(s.values, c)))
vectorized_levenshtein = np.vectorize(stringdist.levenshtein)
result = vectorized_levenshtein(word_combinations[:,0], word_combinations[:,1])
df = pd.DataFrame([word_combinations[:,0], word_combinations[:,1], result]).T
df = df.set_index([0,1])[2].unstack()
Option 4 (the best): modified Option 3
word_combinations = np.array(list(itertools.product(s.values, c)))
vectorized_levenshtein = np.vectorize(stringdist.levenshtein)
result = vectorized_levenshtein(word_combinations[:,0], word_combinations[:,1])
result = result.reshape((len(s), len(c)))
df = pd.DataFrame(result, columns=c, index=s)
Performance testing:
import timeit
from Levenshtein import distance
import pandas as pd
import numpy as np
import itertools
s = pd.Series(data=['Hello', 'my', 'Friend'],
index=['Hello', 'my', 'Friend'])
c = ['me', 'mine', 'Friend']
test_code0 = """
df = pd.DataFrame()
for w in c:
    df[w] = s.apply(lambda x: distance(x, w))
"""
test_code1 = """
df = pd.DataFrame({w: s.apply(lambda x: distance(x, w)) for w in c})
"""
test_code2 = """
@np.vectorize
def lavdist(a, b):
    return distance(c[a], s[b])

df = pd.DataFrame(np.fromfunction(lavdist, (len(c), len(s)), dtype=int),
                  columns=c, index=s)
"""
test_code3 = """
word_combinations = np.array(list(itertools.product(s.values, c)))
vectorized_levenshtein = np.vectorize(distance)
result = vectorized_levenshtein(word_combinations[:,0], word_combinations[:,1])
df = pd.DataFrame([word_combinations[:,1], word_combinations[:,1], result])
df = df.set_index([0,1])[2]  # .unstack() produces error
"""
test_code4 = """
word_combinations = np.array(list(itertools.product(s.values, c)))
vectorized_levenshtein = np.vectorize(distance)
result = vectorized_levenshtein(word_combinations[:,0], word_combinations[:,1])
result = result.reshape((len(s), len(c)))
df = pd.DataFrame(result, columns=c, index=s)
"""
test_setup = "from __main__ import distance, s, c, pd, np, itertools"
print("test0", timeit.timeit(test_code0, number = 1000, setup = test_setup))
print("test1", timeit.timeit(test_code1, number = 1000, setup = test_setup))
print("test2", timeit.timeit(test_code2, number = 1000, setup = test_setup))
print("test3", timeit.timeit(test_code3, number = 1000, setup = test_setup))
print("test4", timeit.timeit(test_code4, number = 1000, setup = test_setup))
Results
# results
# test0 1.3671939949999796
# test1 0.5982696900009614
# test2 0.3246431229999871
# test3 2.0100400850005826
# test4 0.23796007100099814
Using itertools, you can at least get all the required combinations. Using a vectorized version of stringdist.levenshtein (made with numpy.vectorize()), you can then get your desired result without looping at all, though I haven't tested the performance of the vectorized levenshtein function.
The code could look something like this:
import stringdist
import numpy as np
import pandas as pd
import itertools
s = pd.Series(["Hello", "my","Friend"])
c = ['me', 'mine', 'Friend']
word_combinations = np.array(list(itertools.product(s.values, c)))
vectorized_levenshtein = np.vectorize(stringdist.levenshtein)
result = vectorized_levenshtein(word_combinations[:,0], word_combinations[:,1])
At this point you have the results in a numpy array, each entry corresponding to one of all the possible combinations of your two initial arrays. If you want to get it into the shape you have in your example, there's some pandas trickery to be done:
df = pd.DataFrame([word_combinations[:,0], word_combinations[:,1], result]).T
### initially looks like: ###
# 0 1 2
# 0 Hello me 4
# 1 Hello mine 5
# 2 Hello Friend 6
# 3 my me 1
# 4 my mine 3
# 5 my Friend 6
# 6 Friend me 5
# 7 Friend mine 4
# 8 Friend Friend 0
df = df.set_index([0,1])[2].unstack()
### Now looks like: ###
# Friend Hello my
# Friend 0 6 6
# me 5 4 1
# mine 4 5 3
Again, I haven't tested the performance of this method, so I recommend checking that out - it should be faster than iteration though.
EDIT:
User @Dames has a better suggestion for making the result all pretty-like:
result = result.reshape((len(s), len(c)))
df = pd.DataFrame(result, columns=c, index=s)

Python: Venn diagram: how to show the diagram contents?

I have the working code below.
from matplotlib import pyplot as plt
import numpy as np
from matplotlib_venn import venn3, venn3_circles
Gastric_tumor_promoters = set(['DPEP1', 'CDC42BPA', 'GNG4', 'RAPGEFL1', 'MYH7B', 'SLC13A3', 'PHACTR3', 'SMPX', 'NELL2', 'PNMAL1', 'KRT23', 'PCP4', 'LOX', 'CDC42BPA'])
Ovarian_tumor_promoters = set(['ABLIM1','CDC42BPA','VSNL1','LOX','PCP4','SLC13A3'])
Gastric_tumor_suppressors = set(['PLCB4', 'VSNL1', 'TOX3', 'VAV3'])
#Ovarian_tumor_suppressors = set(['VAV3', 'FREM2', 'MYH7B', 'RAPGEFL1', 'SMPX', 'TOX3'])
venn3([Gastric_tumor_promoters, Ovarian_tumor_promoters, Gastric_tumor_suppressors], ('GCPromoters', 'OCPromoters', 'GCSuppressors'))
plt.show()
How can I show the contents of each of the sets in these 3 circles, with the color alpha being 0.6? The circles must be bigger to accommodate all the symbols.
I'm not sure there is a simple way to do this automatically for any possible combination of sets. If you're ready to do some manual tuning in your particular example, start with something like this:
A = set(['DPEP1', 'CDC42BPA', 'GNG4', 'RAPGEFL1', 'MYH7B', 'SLC13A3', 'PHACTR3', 'SMPX', 'NELL2', 'PNMAL1', 'KRT23', 'PCP4', 'LOX', 'CDC42BPA'])
B = set(['ABLIM1','CDC42BPA','VSNL1','LOX','PCP4','SLC13A3'])
C = set(['PLCB4', 'VSNL1', 'TOX3', 'VAV3'])
v = venn3([A,B,C], ('GCPromoters', 'OCPromoters', 'GCSuppressors'))
v.get_label_by_id('100').set_text('\n'.join(A-B-C))
v.get_label_by_id('110').set_text('\n'.join(A&B-C))
v.get_label_by_id('011').set_text('\n'.join(B&C-A))
v.get_label_by_id('001').set_text('\n'.join(C-A-B))
v.get_label_by_id('010').set_text('')
plt.annotate(',\n'.join(B-A-C), xy=v.get_label_by_id('010').get_position() +
             np.array([0, 0.2]), xytext=(-20, 40), ha='center',
             textcoords='offset points',
             bbox=dict(boxstyle='round,pad=0.5', fc='gray', alpha=0.1),
             arrowprops=dict(arrowstyle='->',
                             connectionstyle='arc', color='gray'))
Note that methods like v.get_label_by_id('001') return the matplotlib Text objects, and you are free to configure them to your liking (e.g. you can change font size by calling set_fontsize(8), etc).
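For example, a small hypothetical tweak:
lbl = v.get_label_by_id('101')  # region in A and C but not B
if lbl:  # the label can be None when that region is empty
    lbl.set_fontsize(8)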
Here is an example which automates the whole thing. It creates a temporary dictionary whose keys are the ids needed by venn and whose values are the intersections of all participating sets for each id.
If you don't want the labels sorted remove the sorted() call in the second last line.
import math
from matplotlib import pyplot as plt
from matplotlib_venn import venn2, venn3
import numpy as np
# Convert a number's binary representation to bit indices
# e.g. 5 -> '101' -> [2, 0]
def bits2indices(b):
    l = []
    if b == 0:
        return l
    for i in reversed(range(0, int(math.log(b, 2)) + 1)):
        if b & (1 << i):
            l.append(i)
    return l
# Make dictionary containing venn id's and set intersections
# e.g. d = {'100': {'c', 'b', 'a'}, '010': {'c', 'd', 'e'}, ... }
def set2dict(s):
    d = {}
    for i in range(1, 2**len(s)):
        # Make venn id strings
        key = bin(i)[2:].zfill(len(s))
        key = key[::-1]
        ind = bits2indices(i)
        # Get the participating sets for this id
        participating_sets = [s[x] for x in ind]
        # Get the intersections of those sets
        inter = set.intersection(*participating_sets)
        d[key] = inter
    return d
# Define some sets
a = set(['a', 'b', 'c'])
b = set(['c', 'd', 'e'])
c = set(['e', 'f', 'a'])
s = [a, b, c]
# Create dictionary from sets
d = set2dict(s)
# Plot it
h = venn3(s, ('A', 'B', 'C'))
for k, v in d.items():
    l = h.get_label_by_id(k)
    if l:
        l.set_text('\n'.join(sorted(v)))
plt.show()
/edit
I'm sorry, I just figured out that the above code does not remove duplicate labels and is therefore wrong: the number of elements shown by venn and the number of labels differed. Here is a new version which removes the wrong duplicates from the other intersections. I guess there is a smarter and more functional way to do that instead of iterating over all intersections twice...
import math, itertools
from matplotlib import pyplot as plt
from matplotlib_venn import venn2, venn3
import numpy as np
# Generate list index for itertools combinations
def gen_index(n):
    x = -1
    while True:
        while True:
            x = x + 1
            if bin(x).count('1') == n:
                break
        yield x
# Generate all combinations of intersections
def make_intersections(sets):
    l = [None] * 2**len(sets)
    for i in range(1, len(sets) + 1):
        ind = gen_index(i)
        for subset in itertools.combinations(sets, i):
            inter = set.intersection(*subset)
            l[next(ind)] = inter
    return l
# Get weird reversed binary string id for venn
def number2venn_id(x, n_fill):
    id = bin(x)[2:].zfill(n_fill)
    id = id[::-1]
    return id
# Iterate over all combinations and remove duplicates from intersections with
# more sets
def sets2dict(sets):
    l = make_intersections(sets)
    d = {}
    for i in range(1, len(l)):
        d[number2venn_id(i, len(sets))] = l[i]
        for j in range(1, len(l)):
            if bin(j).count('1') < bin(i).count('1'):
                l[j] = l[j] - l[i]
                d[number2venn_id(j, len(sets))] = l[j] - l[i]
    return d
# Define some sets
a = set(['a', 'b', 'c', 'f'])
b = set(['c', 'd', 'e'])
c = set(['e', 'f', 'a'])
sets = [a, b, c]
d = sets2dict(sets)
# Plot it
h = venn3(sets, ('A', 'B', 'C'))
for k, v in d.items():
    l = h.get_label_by_id(k)
    if l:
        l.set_fontsize(12)
        l.set_text('\n'.join(sorted(v)))
# Original for comparison
f = plt.figure(2)
venn3(sets, ('A', 'B', 'C'))
plt.show()
Thanks for the automation, @Vinci! I wonder if you (or somebody else) have written a version that rearranges the content so that the elements stay within the bubble(s), in a random fashion, instead of as a long list? ... bonus track: re-dimensioning the bubbles if the elements do not fit? ;)
