I am working on data preprocessing for machine learning and have run into a problem.
Here is what I want to do.
[table image: current table on the left, desired table on the right]
The table is a pandas DataFrame. My current table is the left one, and I want to transform it into the right one.
The numbers of movies and actors are not fixed.
EDIT:
Data input:
import pandas as pd
df = pd.DataFrame({'name': ['A', 'B', 'C'], 'actors': ['a,b', 'b,d', 'c,m']})
Expected output:
a b c d m
A 1 1 0 0 0
B 0 1 0 1 0
C 0 0 1 0 1
Try one of these. (BTW, since this is the Kaggle movie dataset, you may be better off using LabelEncoder.)
PS: I did not add the name column; you can simply do out['name'] = df.name.
Option 1 pd.crosstab
# split without mutating df, so the options below still start from the raw strings
df1 = df.set_index('name').actors.str.split(',').apply(pd.Series).stack()
pd.crosstab(df1.index.get_level_values(0), df1).rename_axis(None).rename_axis(None, axis=1)
Out[246]:
a b c d m
A 1 1 0 0 0
B 0 1 0 1 0
C 0 0 1 0 1
Option 2 get_dummies
pd.get_dummies(df.actors.str.split(',').apply(pd.Series).stack()).sum(level=0)
Out[230]:
a b c d m
0 1 1 0 0 0
1 0 1 0 1 0
2 0 0 1 0 1
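On recent pandas versions, sum(level=0) has been removed; a groupby over the index level does the same job. This is the equivalent spelling of Option 2 (same data, same output):
pd.get_dummies(df.actors.str.split(',').apply(pd.Series).stack()).groupby(level=0).sum()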
Option 3 MultiLabelBinarizer
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
pd.DataFrame(mlb.fit_transform(df.actors.str.split(',')),columns=mlb.classes_,index=df.name).reset_index()
Out[238]:
name a b c d m
0 A 1 1 0 0 0
1 B 0 1 0 1 0
2 C 0 0 1 0 1
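If the separator is a plain comma, Series.str.get_dummies gives the same table in one line. A minimal sketch, assuming df.actors still holds the raw comma-separated strings:
df.set_index('name').actors.str.get_dummies(',')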
I have a pandas.DataFrame that looks like this:
A B C D E F
0 0 1 0 0 0
1 1 0 0 0 0
2 0 1 0 0 0
3 0 0 0 1 0
4 0 0 1 0 0
There are several rows whose 1 falls in the same column, and each row contains exactly one 1. I want to merge the rows so the resulting DataFrame consists of a single row combining all the 1s of the original, like this:
A B C D E F
0 1 1 1 1 0
Is there a smart, easy way to do this with pandas?
Use DataFrame.sum, then compare for greater-or-equal with Series.ge, and finally convert to 0/1 with Series.view:
s = df.sum().ge(1).view('i1')
Another idea, if there are only 0/1 values, is to use DataFrame.any and convert the boolean mask to 0/1:
s = df.any().view('i1')
print (s)
A 1
B 1
C 1
D 1
E 1
F 0
dtype: int8
We can do
df.sum().ge(1).astype(int)
Out[316]:
A 1
B 1
C 1
D 1
E 1
F 0
dtype: int32
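Both answers return a Series. If you literally need the one-row DataFrame from the question, one extra step transposes it (a small sketch based on the any-based result):
out = df.any().astype(int).to_frame().T   # one row, columns A..F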
I have a dataset like this,
sample = {'Theme': ['never give a ten','interaction speed','no feedback,premium'],
'cat1': [0,0,0],
'cat2': [0,0,0],
'cat3': [0,0,0],
'cat4': [0,0,0]
}
pd.DataFrame(sample,columns = ['Theme','cat1','cat2','cat3','cat4'])
Theme cat1 cat2 cat3 cat4
0 never give a ten 0 0 0 0
1 interaction speed 0 0 0 0
2 no feedback,premium 0 0 0 0
Now, I need to set the values in the cat columns based on the value in Theme: if the Theme column contains 'never give a ten', set cat1 to 1; if it contains 'interaction speed', set cat2 to 1; if it contains 'no feedback', set cat3 to 1; and for 'premium', set cat4 to 1.
In this sample I have provided 4 categories, but I have 21 in total. I could write an if-word-in-string check 21 times, but I am looking for an efficient way to express this in a function that loops over every row and updates the corresponding columns. Can anyone help, please?
Thanks in advance.
It is possible to set the column names by category with Series.str.get_dummies (note the column names come out sorted):
df1 = df['Theme'].str.get_dummies(',')
print (df1)
interaction speed never give a ten no feedback premium
0 0 1 0 0
1 1 0 0 0
2 0 0 1 1
If you need the first column in the output, add DataFrame.join:
df11 = df[['Theme']].join(df['Theme'].str.get_dummies(','))
print (df11)
                 Theme  interaction speed  never give a ten  no feedback  premium
0     never give a ten                  0                 1            0        0
1    interaction speed                  1                 0            0        0
2  no feedback,premium                  0                 0            1        1
If the order of columns is important, add DataFrame.reindex:
# remove possible duplicates while preserving order
cols = dict.fromkeys([y for x in df['Theme'] for y in x.split(',')]).keys()
df2 = df['Theme'].str.get_dummies(',').reindex(cols, axis=1)
print (df2)
never give a ten interaction speed no feedback premium
0 1 0 0 0
1 0 1 0 0
2 0 0 1 1
cols = dict.fromkeys([y for x in df['Theme'] for y in x.split(',')]).keys()
df2 = df[['Theme']].join(df['Theme'].str.get_dummies(',').reindex(cols, axis=1))
print (df2)
                 Theme  never give a ten  interaction speed  no feedback  premium
0     never give a ten                 1                  0            0        0
1    interaction speed                 0                  1            0        0
2  no feedback,premium                 0                  0            1        1
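To get from these dummy columns back to the asker's cat1..cat4 layout, a rename with an explicit keyword-to-category mapping should work. A sketch; the mapping dict is an assumption based on the four categories named in the question:
# hypothetical mapping from Theme keyword to target category column
mapping = {'never give a ten': 'cat1', 'interaction speed': 'cat2',
           'no feedback': 'cat3', 'premium': 'cat4'}
dummies = df['Theme'].str.get_dummies(',').rename(columns=mapping)
for col in dummies.columns:
    df[col] = dummies[col]   # overwrite the zero-filled cat columns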
I am looking to create a counter column based on row values in 2 dataframe columns, represented here at Col1 and Col2.
An example of the dataset is as follows:
Col1 Col2
a 0
a 0
a 0
a 1
a 0
a 0
a 0
a 1
a 1
b 0
b 0
b 1
b 1
b 0
b 0
Col1 is an identification variable; I want the counter to start over when a new identifier appears (so when 'a' switches to 'b', the counter returns to 0).
Col2 indicates a new input in the data. When a 1 appears, a new input begins, and the 0s after it correspond to measurements within that input. Each time a 1 appears, I want the counter to increment by 1. Each time Col2 switches between 1 and 0 (in either direction), I also want the counter to increment by 1. Based on the above dataset, I want the output in Col3 to look like the following:
Col1 Col2 Col3
a 0 0
a 0 0
a 0 0
a 1 1
a 0 2
a 0 2
a 0 2
a 1 3
a 1 4
b 0 0
b 0 0
b 1 1
b 1 2
b 0 3
b 0 3
So basically: every time Col2 switches from 0 to 1, and for every consecutive 1, I want the counter to increment. While Col2 stays at 0, the counter keeps its value. And every time Col1 changes to a new ID (here, from 'a' to 'b'), the counter starts over at 0.
I've mainly been doing this with conditional statements, but there are a ton of them, and running this on a large dataset would take hours. Is there a quick and easy way to do this with conditions on both columns? Or does anyone have suggestions for transforming the data to make a categorization like this easier?
I understand that this is a slightly confusing request, so please let me know if there is anything I can do to provide more clarity into what I'm looking for.
Thanks!
import numpy as np
df.assign(Col4=df.groupby('Col1').Col2.apply(lambda x:
    pd.Series(np.r_[False, (x.values[1:] == 1) | (x.values[1:] != x.values[:-1])].cumsum())).values)
Col1 Col2 Col3 Col4
0 a 0 0 0
1 a 0 0 0
2 a 0 0 0
3 a 1 1 1
4 a 0 2 2
5 a 0 2 2
6 a 0 2 2
7 a 1 3 3
8 a 1 4 4
9 b 0 0 0
10 b 0 0 0
11 b 1 1 1
12 b 1 2 2
13 b 0 3 3
14 b 0 3 3
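The same counter can be built without apply: flag the rows where Col2 is 1 or differs from the previous row within its group, zero out each group's first row, and take a group-wise cumulative sum. A sketch under the question's Col1/Col2 layout:
# rows that should increment the counter: Col2 is 1, or Col2 changed
s = df['Col2'].eq(1) | df['Col2'].ne(df.groupby('Col1')['Col2'].shift())
# each group's first row must start the counter at 0
first = df['Col1'].ne(df['Col1'].shift())
df['Col3'] = (s & ~first).astype(int).groupby(df['Col1']).cumsum()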
I have two data sets composed of different x values. They look like the following.
import pandas as pd
data1 = pd.read_csv('Data1.csv')
data2 = pd.read_csv('Data2.csv')
print(data1)
data1_x data1_y1 data1_y2 data1_y3
-347.2498 0 2 8
-237.528509 0 3 7
-127.807218 0 0 6
-18.085927 11 5 0
print(data2)
data2_x data2_y1 data2_y2 data2_y3
-394.798507 2 0 0
-285.265994 1 0 0
-175.733482 0 0 1
-66.200969 4 0 0
I am creating a new x array that includes all the values from both datasets with the following code (reduce comes from functools):
import numpy as np
from functools import reduce
new_x = reduce(np.union1d, (data1.iloc[:, 0], data2.iloc[:, 0]))
print(new_x)
array([-394.799,-347.25,-285.266,-237.529,-175.733,-127.807,-66.201,-18.0859])
Currently, I am trying to create new y columns for each data set that keep the same y values where the corresponding x value is present, but are blank where there was no corresponding x value originally.
For instance, print(New_data2) would look something like this.
New_x_data2 New_y1_data2 New_y2_data2 New_y3_data2
-394.799 2 0 0
-347.25
-285.266 1 0 0
-237.529
-175.733 0 0 1
-127.807 0 0 6
-66.201 4 0 0
-18.0859 11 5 0
In particular, I am lost on how to get the new y values. Any ideas?
import pandas as pd
from re import sub

repl = lambda x: sub(r"data\d_(\w+)", r"New_\1_data2", x)
# DataFrame.append was removed in pandas 2.0; pd.concat is the modern spelling
pd.concat([data1.rename(repl, axis='columns'),
           data2.rename(repl, axis='columns')]).sort_values('New_x_data2')
Out[1024]:
New_x_data2 New_y1_data2 New_y2_data2 New_y3_data2
0 -394.798507 2 0 0
0 -347.249800 0 2 8
1 -285.265994 1 0 0
1 -237.528509 0 3 7
2 -175.733482 0 0 1
2 -127.807218 0 0 6
3 -66.200969 4 0 0
3 -18.085927 11 5 0
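Note this stacks the two datasets into one frame rather than aligning each to the union of x values. If the goal is one frame per dataset with NaN (displayed blank) where an x value is missing, reindexing is a direct route. A minimal sketch:
import numpy as np
from functools import reduce

new_x = reduce(np.union1d, (data1.iloc[:, 0], data2.iloc[:, 0]))
# rows whose x is absent from data2 come out as NaN
new_data2 = data2.set_index('data2_x').reindex(new_x)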
I have a bag-of-words representation of a corpus stored in an D by W sparse matrix word_freqs. Each row is a document and each column is a word. A given element word_freqs[d,w] represents the number of occurrences of word w in document d.
I'm trying to obtain another D by W matrix not_word_occs where, for each element of word_freqs:
If word_freqs[d,w] is zero, not_word_occs[d,w] should be one.
Otherwise, not_word_occs[d,w] should be zero.
Eventually, this matrix will need to be multiplied with other matrices which might be dense or sparse.
I've tried a number of methods, including:
not_word_occs = (word_freqs == 0).astype(int)
This works for toy examples, but results in a MemoryError for my actual data (which is approx. 18,000 x 16,000).
I've also tried np.logical_not():
word_occs = sklearn.preprocessing.binarize(word_freqs)
not_word_occs = np.logical_not(word_freqs).astype(int)
This seemed promising, but np.logical_not() does not work on sparse matrices, giving the following error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all().
Any ideas or guidance would be appreciated.
(By the way, word_freqs is generated by sklearn's preprocessing.CountVectorizer(). If there's a solution that involves converting this to another kind of matrix, I'm certainly open to that.)
The complement of the nonzero positions of a sparse matrix is dense. So if you want to achieve your stated goals with standard numpy arrays, you will require quite a bit of RAM. Here's a quick and totally unscientific hack to give you an idea of how many arrays of that sort your computer can handle:
>>> import numpy as np
>>> a = []
>>> for j in range(100):
... print(j)
... a.append(np.ones((16000, 18000), dtype=int))
My laptop chokes at j=1. So unless you have a really good computer, memory will be an issue even though you can compute the complement itself:
>>> compl = np.ones(S.shape, int)
>>> compl[S.nonzero()] = 0
One way out may be to avoid computing the complement explicitly. Call it C = B1 - A, where B1 is the same-shape matrix completely filled with ones and A is your original sparse matrix. For example, the matrix product XC can be written as XB1 - XA, so you have one multiplication with the sparse A and one with B1, which is actually cheap because it boils down to computing row sums. The point is that you can compute the product without computing C first.
A particularly simple example is multiplication with a one-hot vector: it just selects a column (if multiplying from the right) or a row (if multiplying from the left) of the other matrix. So you only need to find that row or column of the sparse matrix and take its complement (no problem for a single slice); and for a one-hot matrix, as above, you needn't compute the complement explicitly at all.
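To make the XC = XB1 - XA idea concrete, here is a minimal sketch with random matrices (the names X and A are assumptions, not from the question). X @ B1 never has to be materialized as a product, because every one of its columns equals X's row sums:
import numpy as np
from scipy import sparse

X = sparse.random(5, 10, 0.3, format='csr')
A = sparse.random(10, 8, 0.2, format='csr')

# X @ B1 has identical columns, each equal to X's row sums
row_sums = np.asarray(X.sum(axis=1))           # shape (5, 1)
XB1 = np.repeat(row_sums, A.shape[1], axis=1)  # dense (5, 8)
result = XB1 - (X @ A).toarray()               # X @ (B1 - A)

# check against forming the complement explicitly
C = np.ones(A.shape) - A.toarray()
assert np.allclose(result, X.toarray() @ C)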
Make a small sparse matrix:
In [742]: from scipy import sparse
In [743]: freq = sparse.random(10,10,.1)
In [744]: freq
Out[744]:
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 10 stored elements in COOrdinate format>
the repr(freq) shows the shape, elements and format.
In [745]: freq==0
/usr/local/lib/python3.5/dist-packages/scipy/sparse/compressed.py:213: SparseEfficiencyWarning: Comparing a sparse matrix with 0 using == is inefficient, try using != instead.
", try using != instead.", SparseEfficiencyWarning)
Out[745]:
<10x10 sparse matrix of type '<class 'numpy.bool_'>'
with 90 stored elements in Compressed Sparse Row format>
If I run your first expression, I get a warning and a new matrix with 90 (out of 100) stored elements. That result is no longer sparse in any useful sense.
In general, numpy functions do not work when applied to sparse matrices; to work, they would have to delegate the task to sparse methods. But even if logical_not worked, it wouldn't solve the memory issue.
Here is an example of using Pandas.SparseDataFrame:
In [42]: X = (sparse.rand(10, 10, .1) != 0).astype(np.int64)
In [44]: d1 = pd.SparseDataFrame(X.toarray(), default_fill_value=0, dtype=np.int64)
In [45]: d2 = pd.SparseDataFrame(np.ones((10,10)), default_fill_value=1, dtype=np.int64)
In [46]: d1.memory_usage()
Out[46]:
Index 80
0 16
1 0
2 8
3 16
4 0
5 0
6 16
7 16
8 8
9 0
dtype: int64
In [47]: d2.memory_usage()
Out[47]:
Index 80
0 0
1 0
2 0
3 0
4 0
5 0
6 0
7 0
8 0
9 0
dtype: int64
math:
In [48]: d2 - d1
Out[48]:
0 1 2 3 4 5 6 7 8 9
0 1 1 0 0 1 1 0 1 1 1
1 1 1 1 1 1 1 1 1 0 1
2 1 1 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 0 1 1
4 1 1 1 1 1 1 1 1 1 1
5 0 1 1 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1 1 1 1
7 0 1 1 0 1 1 1 0 1 1
8 1 1 1 1 1 1 0 1 1 1
9 1 1 1 1 1 1 1 1 1 1
source sparse matrix:
In [49]: d1
Out[49]:
0 1 2 3 4 5 6 7 8 9
0 0 0 1 1 0 0 1 0 0 0
1 0 0 0 0 0 0 0 0 1 0
2 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 1 0 0
4 0 0 0 0 0 0 0 0 0 0
5 1 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0 0
7 1 0 0 1 0 0 0 1 0 0
8 0 0 0 0 0 0 1 0 0 0
9 0 0 0 0 0 0 0 0 0 0
memory usage:
In [50]: (d2 - d1).memory_usage()
Out[50]:
Index 80
0 16
1 0
2 8
3 16
4 0
5 0
6 16
7 16
8 8
9 0
dtype: int64
PS: if you can't build the whole SparseDataFrame at once (because of memory constraints), you can use an approach similar to the one used in this answer.
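As a footnote, SparseDataFrame was deprecated in pandas 0.25 and removed in 1.0. On current pandas the equivalent construction goes through the sparse accessor; a sketch:
import pandas as pd
from scipy import sparse

X = (sparse.random(10, 10, 0.1) != 0).astype('int64')
d1 = pd.DataFrame.sparse.from_spmatrix(X)   # sparse-dtype columns
print(d1.sparse.density)                    # fraction of stored values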