K-means clustering and finding points closest to the centroid

I am trying to apply k-means to cluster actors based on the information in the following columns:
Actors             Movies  TvGuest  Awards  Shorts  Special  LiveShows
Robert De Niro        111        2       6       0        0          0
Jack Nicholson         70        2       4       0        5          0
Marlon Brando          64        2       5       0        0         28
Denzel Washington      25        2       3      24        0          0
Katharine Hepburn      90        1       2       0        0          0
Humphrey Bogart       105        2       1       0        0         52
Meryl Streep           27        2       2       5        0          0
Daniel Day-Lewis       90        2       1       0       71         22
Sidney Poitier         63        2       3       0        0          0
Clark Gable            34        2       4       0        3          0
Ingrid Bergman         22        2       2       3        0          4
Tom Hanks              82       11       6      21       11         22
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt

# began by scaling my data
X = StandardScaler().fit_transform(data)

# used an elbow plot to find the optimal k value
sum_of_squared_distances = []
K = range(1, 15)
for k in K:
    k_means = KMeans(n_clusters=k)
    model = k_means.fit(X)
    sum_of_squared_distances.append(k_means.inertia_)
plt.plot(K, sum_of_squared_distances, 'bx-')
plt.show()

# found yhat for the calculated k value
kmeans = KMeans(n_clusters=3)
model = kmeans.fit(X)
yhat = kmeans.predict(X)
I am unable to figure out how to create scatter plots labeled by actor.
EDIT:
Is there a way to find which actors are closest to the centroids, if the centroids were also plotted using
centers = kmeans.cluster_centers_ (the kmeans here refers to Eric's solution below)
plt.scatter(centers[:, 0], centers[:, 1], color='purple', marker='*', label='centroid')

K means clustering in Pandas - Scatter plot
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import pandas as pd
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
df = pd.DataFrame(columns=['Actors', 'Movies', 'TvGuest', 'Awards', 'Shorts'])
df.loc[0] = ["Robert De Niro", 111, 2, 6, 0]
df.loc[1] = ["Jack Nicholson", 70, 2, 4, 0]
df.loc[2] = ["Marlon Brando", 64, 4, 5, 0]
df.loc[3] = ["Denzel Washington", 25, 2, 3, 24]
df.loc[4] = ["Katharine Hepburn", 90, 1, 2, 0]
df.loc[5] = ["Humphrey Bogart", 105, 2, 1, 0]
df.loc[6] = ["Meryl Streep", 27, 3, 2, 5]
df.loc[7] = ["Daniel Day-Lewis", 90, 2, 1, 0]
df.loc[8] = ["Sidney Poitier", 63, 2, 3, 0]
df.loc[9] = ["Clark Gable", 34, 2, 4, 0]
df.loc[10] = ["Ingrid Bergman", 22, 5, 2, 3]
kmeans = KMeans(n_clusters=4)
y = kmeans.fit_predict(df[['Movies', 'TvGuest', 'Awards']])
df['Cluster'] = y
plt.scatter(df.Movies, df.TvGuest, c=df.Cluster, alpha=0.6)
plt.title('K-means Clustering 2 dimensions and 4 clusters')
plt.show()
Shows:
[figure: scatter plot of Movies vs. TvGuest, points colored by cluster]
Notice that the data points presented on the two-dimensional scatterplot are Movies and TvGuest; however, the k-means fit was given 3 variables: Movies, TvGuest, and Awards. Imagine an additional dimension going into the screen that is also used to calculate membership in a cluster.
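For the "scatter plots by actors" part of the question, one option is to label each point with matplotlib's annotate. A minimal sketch, assuming the df (with its Cluster column) built above is in scope:
fig, ax = plt.subplots()
ax.scatter(df.Movies, df.TvGuest, c=df.Cluster, alpha=0.6)
for _, row in df.iterrows():
    # place each actor's name next to their point
    ax.annotate(row.Actors, (row.Movies, row.TvGuest), fontsize=8)
ax.set_xlabel('Movies')
ax.set_ylabel('TvGuest')
plt.show()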
Source links:
https://datasciencelab.wordpress.com/2013/12/12/clustering-with-k-means-in-python/
https://datascience.stackexchange.com/questions/48693/perform-k-means-clustering-over-multiple-columns
https://towardsdatascience.com/visualizing-clusters-with-pythons-matplolib-35ae03d87489

You can calculate the Euclidean distance between each point and a centroid; the minimum distance indicates the point closest to that centroid:
dist = numpy.linalg.norm(centroid - point)
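As a concrete sketch (assuming the X and kmeans objects from the snippets above are in scope): KMeans.transform returns the distance from every sample to every centroid, so the nearest point per centroid falls out of a single argmin:
import numpy as np

# distances has shape (n_samples, n_clusters): entry [i, j] is the
# Euclidean distance from point i to centroid j (in the scaled space)
distances = kmeans.transform(X)

# row index of the sample nearest to each centroid
closest = np.argmin(distances, axis=0)
for cluster_id, row_idx in enumerate(closest):
    print(f"centroid {cluster_id}: nearest point is row {row_idx}")
The row indices can then be mapped back to actor names through the original dataframe's index.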

Related

How to encode the data based on range of numbers

import pandas as pd
import numpy as np
from sklearn import preprocessing

my_data = {
    "Marks": [50, 62, 42, 90, 12],
    "Exam": ['FirstSem', 'SecondSem', 'ThirdSem', 'FourthSem', 'FifthSem']
}
blk = pd.DataFrame(my_data)
print(blk)
Required solution:
   Marks       Exam
0      1   FirstSem
1      1  SecondSem
2      0   ThirdSem
3      1  FourthSem
4      0   FifthSem
Is there any solution to encode the values, so that marks greater than 45 become 1 and marks less than 45 become 0?
blk["Marks"] = np.where(blk["Marks"] > 45, 1, 0)
blk
   Marks       Exam
0      1   FirstSem
1      1  SecondSem
2      0   ThirdSem
3      1  FourthSem
4      0   FifthSem
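If more than two ranges are needed later, pd.cut generalizes the same idea. A sketch with hypothetical boundaries (45 and 75 are made-up cut points, not from the question):
# hypothetical boundaries: <=45 -> 0, 46-75 -> 1, >75 -> 2
blk["Marks_encoded"] = pd.cut(blk["Marks"],
                              bins=[-float("inf"), 45, 75, float("inf")],
                              labels=[0, 1, 2])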

Setting specific bin length in python list

I have a straightforward question, but I'm facing issues with the conversion.
I have a pandas dataframe column which I converted to a list. It has both positive and negative values:
bin_length = 5
values = [-200, -112, -115, 0, 50, 120, 250]  # renamed from `list` to avoid shadowing the built-in
I need to group these numbers into a bin of length 5.
For example:
-100 to -95 should have a value of -100
-95 to -90 should have a value of -95
Similarly for positive values:
0 to 5 should be 5
5 to 10 should be 10
What I have tried until now:
df = pd.DataFrame(dataframe['rd2'].values.tolist(), columns=['values'])
bins = np.arange(0, df['values'].max() + 5, 5)
df['bins'] = pd.cut(df['values'], bins, include_lowest=True)
But this doesn't account for negative values, and I then run into problems converting the pandas intervals into a separate column / list.
Any help would be amazing.
Setting up the correct lower limit with np.arange:
bins = np.arange(df['values'].min(), df['values'].max() + 5, 5)
df['bins'] = pd.cut(df['values'], bins, include_lowest=True)
print(df)

   values                bins
0    -200  (-200.001, -195.0]
1    -112    (-115.0, -110.0]
2    -115    (-120.0, -115.0]
3       0         (-5.0, 0.0]
4      50        (45.0, 50.0]
5     120      (115.0, 120.0]
6     250      (245.0, 250.0]
Convert the intervals back to a list:
s = pd.IntervalIndex(df["bins"])
print([[x, y] for x, y in zip(s.left, s.right)])

[[-200.001, -195.0], [-115.0, -110.0], [-120.0, -115.0], [-5.0, 0.0], [45.0, 50.0], [115.0, 120.0], [245.0, 250.0]]
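If, as in the question's examples, each bin should collapse to a single representative number rather than an interval, the interval edges can be pulled out directly. A small sketch, continuing from the df above:
s = pd.IntervalIndex(df["bins"])

# pick whichever edge matches your convention, e.g. the left edge
# (-100 for the (-100, -95] bin) or the right edge (-95)
df["bin_left"] = s.left
df["bin_right"] = s.right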

Python 3 Pandas fast lookup in dictionary for column

I have a Pandas DataFrame where I need to add new columns of data from lookup dictionaries, and I am looking for the fastest way to do this. I have a way that works using Series.map() with a lambda, but I wanted to know if this is best practice and the best performance I can achieve. I am used to doing this kind of work in R with the excellent data.table library. I am working in a Jupyter notebook, which is what lets me use %time on the final line.
Here is what I have:
import numpy as np
import pandas as pd

np.random.seed(123)
num_samples = 100_000_000

ids = np.arange(0, num_samples)
states = ['Oregon', 'Michigan']
cities = ['Portland', 'Detroit']

state_data = {
    0: {'Name': 'Oregon', 'mean': 100, 'std_dev': 5},
    1: {'Name': 'Michigan', 'mean': 90, 'std_dev': 8}
}
city_data = {
    0: {'Name': 'Portland', 'mean': 8, 'std_dev': 3},
    1: {'Name': 'Detroit', 'mean': 4, 'std_dev': 3}
}

state_df = pd.DataFrame.from_dict(state_data, orient='index')
print(state_df)
city_df = pd.DataFrame.from_dict(city_data, orient='index')
print(city_df)

sample_df = pd.DataFrame({'id': ids})
sample_df['state_id'] = np.random.randint(0, 2, num_samples)
sample_df['city_id'] = np.random.randint(0, 2, num_samples)

%time sample_df['state_mean'] = sample_df['state_id'].map(state_data).map(lambda x: x['mean'])
The last line is what I am most focused on.
I have also tried the following but saw no significant performance difference:
%time sample_df['state_mean'] = sample_df['state_id'].map(lambda x : state_data[x]['mean'])
What I ultimately want is to get sample_df to have columns for each of the states and cities. So I would have the following columns in the table:
id | state | state_mean | state_std_dev | city | city_mean | city_std_dev
Use DataFrame.join if you want to add all columns:
sample_df = sample_df.join(state_df,on = 'state_id')
#         id  state_id  city_id      Name  mean  std_dev
# 0        0         0        0    Oregon   100        5
# 1        1         1        1  Michigan    90        8
# 2        2         0        0    Oregon   100        5
# 3        3         0        0    Oregon   100        5
# 4        4         0        0    Oregon   100        5
# ...    ...       ...      ...       ...   ...      ...
# 9995  9995         1        0  Michigan    90        8
# 9996  9996         1        1  Michigan    90        8
# 9997  9997         0        1    Oregon   100        5
# 9998  9998         1        1  Michigan    90        8
# 9999  9999         1        0  Michigan    90        8
For one column:
sample_df['state_mean'] = sample_df['state_id'].map(state_df['mean'])
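To reach the full set of columns listed in the question (state and city side by side), the same join can be applied twice after renaming; a sketch along those lines:
# rename the lookup columns so the two joins don't collide
state_cols = state_df.rename(columns={'Name': 'state', 'mean': 'state_mean', 'std_dev': 'state_std_dev'})
city_cols = city_df.rename(columns={'Name': 'city', 'mean': 'city_mean', 'std_dev': 'city_std_dev'})

sample_df = (sample_df.join(state_cols, on='state_id')
                      .join(city_cols, on='city_id'))
# columns: id, state_id, city_id, state, state_mean, state_std_dev,
#          city, city_mean, city_std_dev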

Reorder columns in groups by number embedded in column name?

I have a very large dataframe with 1,000 columns. The first few columns occur only once, denoting a customer. The next few columns are representative of multiple encounters with the customer, with an underscore and the encounter number. Every additional encounter adds a new column, so there is NOT a fixed number of columns -- it'll grow with time.
Sample dataframe header structure excerpt:
id dob gender pro_1 pro_10 pro_11 pro_2 ... pro_9 pre_1 pre_10 ...
I'm trying to re-order the columns based on the number after the column name, so all _1 should be together, all _2 should be together, etc, like so:
id dob gender pro_1 pre_1 que_1 fre_1 gen_1 pro_2 pre_2 que_2 fre_2 ...
(Note that the re-order should order the numbers correctly; the current order treats them like strings, which orders 1, 10, 11, etc. rather than 1, 2, 3)
Is this possible to do in pandas, or should I be looking at something else? Any help would be greatly appreciated! Thank you!
EDIT:
Alternatively, is it also possible to re-arrange column names based on the string part AND number part of the column names? So the output would then look similar to the original, except the numbers would be considered so that the order is more intuitive:
id dob gender pro_1 pro_2 pro_3 ... pre_1 pre_2 pre_3 ...
EDIT 2.0:
Just wanted to thank everyone for helping! While only one of the responses worked, I really appreciate the effort and learned a lot about other approaches / ways to think about this.
Here is one way you can try:
# column names copied from your example
example_cols = 'id dob gender pro_1 pro_10 pro_11 pro_2 pro_9 pre_1 pre_10'.split()
# sample DF
df = pd.DataFrame([range(len(example_cols))], columns=example_cols)
df
# id dob gender pro_1 pro_10 pro_11 pro_2 pro_9 pre_1 pre_10
#0 0 1 2 3 4 5 6 7 8 9
# number of columns excluded from sorting
N = 3
# get a list of columns from the dataframe
cols = df.columns.tolist()
# split each name into a tuple of (column_name, prefix, number), sort on the
# 2nd and 3rd items of the tuple, then retrieve the first item.
# adjust the key to "lambda x: x[2]" to group cols by numbers only
cols_new = cols[:N] + [
    a[0]
    for a in sorted(
        [(c, p, int(n)) for c in cols[N:] for p, n in [c.split('_')]],
        key=lambda x: (x[1], x[2])
    )
]
# get the new dataframe based on the cols_new
df_new = df[cols_new]
# id dob gender pre_1 pre_10 pro_1 pro_2 pro_9 pro_10 pro_11
#0 0 1 2 8 9 3 6 7 4 5
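To get the question's first requested order instead (all _1 columns together, then all _2, and so on), swap the sort key as the comment above suggests; a small sketch reusing cols and N from the snippet:
# sort by (number, prefix) instead of (prefix, number)
cols_new = cols[:N] + [
    a[0]
    for a in sorted(
        [(c, p, int(n)) for c in cols[N:] for p, n in [c.split('_')]],
        key=lambda x: (x[2], x[1])
    )
]
df_new = df[cols_new]
# id dob gender pre_1 pro_1 pro_2 pro_9 pre_10 pro_10 pro_11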
Luckily, there is a one-liner in Python that can fix this:
df = df.reindex(sorted(df.columns), axis=1)
For example, let's say you had this dataframe:
import pandas as pd
import numpy as np
df = pd.DataFrame({'Name': [2, 4, 8, 0],
                   'ID': [2, 0, 0, 0],
                   'Prod3': [10, 2, 1, 8],
                   'Prod1': [2, 4, 8, 0],
                   'Prod_1': [2, 4, 8, 0],
                   'Pre7': [2, 0, 0, 0],
                   'Pre2': [10, 2, 1, 8],
                   'Pre_2': [10, 2, 1, 8],
                   'Pre_9': [10, 2, 1, 8]})
print(df)
Output:
   Name  ID  Prod3  Prod1  Prod_1  Pre7  Pre2  Pre_2  Pre_9
0     2   2     10      2       2     2    10     10     10
1     4   0      2      4       4     0     2      2      2
2     8   0      1      8       8     0     1      1      1
3     0   0      8      0       0     0     8      8      8
Then use:
df = df.reindex(sorted(df.columns), axis=1)
The dataframe will then look like:
   ID  Name  Pre2  Pre7  Pre_2  Pre_9  Prod1  Prod3  Prod_1
0   2     2    10     2     10     10      2     10      2
1   0     4     2     0      2      2      4      2      4
2   0     8     1     0      1      1      8      1      8
3   0     0     8     0      8      8      0      8      0
As you can see, the columns without an underscore come first, followed by an ordering based on the number after the underscore. However, this also sorts the column names themselves, so names that come earlier in the alphabet come first; note too that the numbers are compared as strings, so a _10 suffix would sort before _2.
You need to split your columns on '_' and then convert to int:
c = ['A_1','A_10','A_2','A_3','B_1','B_10','B_2','B_3']
df = pd.DataFrame(np.random.randint(0,100,(2,8)), columns = c)
df.reindex(sorted(df.columns, key=lambda x: int(x.split('_')[1])), axis=1)
Output:
   A_1  B_1  A_2  B_2  A_3  B_3  A_10  B_10
0   68   11   59   69   37   68    76    17
1   19   37   52   54   23   93    85     3
For the next case, you need human (natural) sorting:
import re

def atoi(text):
    return int(text) if text.isdigit() else text

def natural_keys(text):
    '''
    alist.sort(key=natural_keys) sorts in human order
    http://nedbatchelder.com/blog/200712/human_sorting.html
    (See Toothy's implementation in the comments)
    '''
    return [atoi(c) for c in re.split(r'(\d+)', text)]

df.reindex(sorted(df.columns, key=natural_keys), axis=1)
Output:
   A_1  A_2  A_3  A_10  B_1  B_2  B_3  B_10
0   68   59   37    76   11   69   68    17
1   19   52   23    85   37   54   93     3
Try this.
To re-order the columns based on the number after the column name:
cols_fixed = df.columns[:3].tolist()     # change the index number based on your df
cols_variable = df.columns[3:].tolist()  # change the index number based on your df
cols_variable = sorted(cols_variable, key=lambda x: int(x.split('_')[1]))  # sort by the number after '_'
cols_new = cols_fixed + cols_variable    # .tolist() above makes this list concatenation, not element-wise Index addition
new_df = df[cols_new]
To re-arrange column names based on the string part AND the number part of the column names:
cols_fixed = df.columns[:3].tolist()     # change the index number based on your df
cols_variable = df.columns[3:].tolist()  # change the index number based on your df
cols_variable = sorted(cols_variable)
cols_new = cols_fixed + cols_variable
new_df = df[cols_new]

Pandas: Random integer between values in two columns

How can I create a new column that contains a random integer between the values of two columns in a particular row?
Example df:
import pandas as pd
import numpy as np
data = pd.DataFrame({'start': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
'end': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]})
data = data.iloc[:, [1, 0]]
Result:
   start  end
0      1   10
1      2   20
2      3   30
3      4   40
4      5   50
5      6   60
6      7   70
7      8   80
8      9   90
9     10  100
Now I am trying something like this:
data['rand_between'] = data.apply(lambda x: np.random.randint(data.start, data.end))
or
data['rand_between'] = np.random.randint(data.start, data.end)
But it doesn't work, of course, because data.start is a Series, not a number.
How can I use numpy.random with data from columns as a vectorized operation?
You are close; you need to specify axis=1 to process the data by rows, and change data.start/end to x.start/end to work with scalars:
data['rand_between'] = data.apply(lambda x: np.random.randint(x.start, x.end), axis=1)
Another possible solution:
data['rand_between'] = [np.random.randint(s, e) for s,e in zip(data['start'], data['end'])]
print(data)

   start  end  rand_between
0      1   10             8
1      2   20             3
2      3   30            23
3      4   40            35
4      5   50            30
5      6   60            28
6      7   70            60
7      8   80            14
8      9   90            85
9     10  100            83
If you want to truly vectorize this, you can generate a random number between 0 and 1 and scale it into each row's start/end range (note that, unlike np.random.randint, this makes end inclusive):
(
    data['start'] + np.random.rand(len(data)) * (data['end'] - data['start'] + 1)
).astype('int')
Out:
0     1
1    18
2    18
3    35
4    22
5    27
6    35
7    23
8    33
9    81
dtype: int64
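Another option, assuming NumPy >= 1.17: the newer Generator API broadcasts array-like bounds directly, so the whole column can be drawn in one call with the same exclusive upper bound as np.random.randint:
rng = np.random.default_rng()
# one draw per row; `high` (the end column) is exclusive, like randint
data['rand_between'] = rng.integers(data['start'], data['end'])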
