ValueError: could not convert string to float: 'Mme' - python-3.x

When I run the following code in Jupyter Lab
import numpy as np
from sklearn.feature_selection import SelectKBest,f_classif
import matplotlib.pyplot as plt
predictors = ["Pclass","Sex","Age","SibSp","Parch","Fare","Embarked","FamilySize","Title","NameLength"]
selector = SelectKBest(f_classif,k=5)
selector.fit(titanic[predictors],titanic["Survived"])
Then it raises an error: ValueError: could not convert string to float: 'Mme'. The details are as follows:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
C:\Users\ADMINI~1\AppData\Local\Temp/ipykernel_17760/1637555559.py in <module>
5 predictors = ["Pclass","Sex","Age","SibSp","Parch","Fare","Embarked","FamilySize","Title","NameLength"]
6 selector = SelectKBest(f_classif,k=5)
----> 7 selector.fit(titanic[predictors],titanic["Survived"])
......
ValueError: could not convert string to float: 'Mme'
I tried printing titanic[predictors] and titanic["Survived"]; the output is as follows:
Pclass Sex Age SibSp Parch Fare Embarked FamilySize Title NameLength
0 3 0 22.0 1 0 7.2500 0 1 1 23
1 1 1 38.0 1 0 71.2833 1 1 3 51
2 3 1 26.0 0 0 7.9250 0 0 2 22
3 1 1 35.0 1 0 53.1000 0 1 3 44
4 3 0 35.0 0 0 8.0500 0 0 1 24
... ... ... ... ... ... ... ... ... ... ...
886 2 0 27.0 0 0 13.0000 0 0 6 21
887 1 1 19.0 0 0 30.0000 0 0 2 28
888 3 1 28.0 1 2 23.4500 0 3 2 40
889 1 0 26.0 0 0 30.0000 1 0 1 21
890 3 0 32.0 0 0 7.7500 2 0 1 19
891 rows × 10 columns
0 0
1 1
2 1
3 1
4 0
..
886 0
887 1
888 0
889 1
890 0
Name: Survived, Length: 891, dtype: int64
How can I solve this problem?

When you fit an algorithm (in your case SelectKBest), you need to be aware of your data, and almost all the time you need to preprocess it.
Take a look at your data:
Do you have categorical features, numerical ones, or a mix?
Do you have NaN values?
...
Most algorithms don't accept categorical features, and you will need to transform them into numerical ones (evaluate the use of OneHotEncoder).
In your case it seems you have a categorical value called Mme, which is in the feature Title. Check it.
You will have the same problem with NaN values.
In conclusion, before you start fitting, you have to preprocess your data.
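As a concrete starting point, here is a minimal sketch of that check; it assumes titanic is the loaded DataFrame and predictors is the list above, and factorize is just one possible encoding:
import pandas as pd
# find which predictor columns still hold strings (object dtype)
non_numeric = titanic[predictors].select_dtypes(include="object").columns
print(list(non_numeric))  # e.g. ['Title'] if 'Mme' was never mapped to a number
# one simple option: label-encode each remaining string column
for col in non_numeric:
    titanic[col], _ = pd.factorize(titanic[col])
# and fill NaN values in the numeric predictors, e.g. with the column median
titanic[predictors] = titanic[predictors].fillna(titanic[predictors].median())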

Is it printing column labels in the first line?
If so, fix the data assignment by taking the array from the second row onwards, array[1:,:], as sketched below.
Otherwise, try to look into where the "Mme" string is located so you understand how the code is fetching it.
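To illustrate, a small hypothetical sketch; the raw array and its contents are made up for demonstration:
import numpy as np
# hypothetical raw array where the header row was read in as data
raw = np.array([["Pclass", "Sex", "Age"],
                ["3", "0", "22.0"],
                ["1", "1", "38.0"]])
data = raw[1:, :].astype(float)  # skip the label row, then convert to floats
print(data)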

Related

How to return first item when the items in the pandas dataframe window are the same?

I am a python beginner.
I have the following pandas DataFrame, with only two columns: "Time" and "Input".
I want to loop over the "Input" column with a window size w = 3 (three consecutive values): for every selected window, if all the items within that window are 1s, keep the first item as 1 and change the remaining values to 0s.
index Time Input
0 11 0
1 22 0
2 33 0
3 44 1
4 55 1
5 66 1
6 77 0
7 88 0
8 99 0
9 1010 0
10 1111 1
11 1212 1
12 1313 1
13 1414 0
14 1515 0
My intended output is as follows:
index Time Input What_I_got What_I_Want
0 11 0 0 0
1 22 0 0 0
2 33 0 0 0
3 44 1 1 1
4 55 1 1 0
5 66 1 1 0
6 77 1 1 1
7 88 1 0 0
8 99 1 0 0
9 1010 0 0 0
10 1111 1 1 1
11 1212 1 0 0
12 1313 1 0 0
13 1414 0 0 0
14 1515 0 0 0
What should I do to get the desired output? Am I missing something in my code?
import pandas as pd
import re

# Join the Input column into one string of 0s and 1s, replace each
# non-overlapping run of '111' with '100', then split back into a Series.
pd.Series(list(re.sub('111', '100', ''.join(df.Input.astype(str))))).astype(int)
Out[23]:
0 0
1 0
2 0
3 1
4 0
5 0
6 1
7 0
8 0
9 0
10 1
11 0
12 0
13 0
14 0
dtype: int32
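As a side note, the same trick generalizes to any window size w, since both the pattern and its replacement can be built from w. A sketch under the same string-encoding assumption (the Result column name is my own choice):
w = 3  # window size
pattern = '1' * w                    # e.g. '111'
replacement = '1' + '0' * (w - 1)    # e.g. '100'
df['Result'] = pd.Series(list(re.sub(pattern, replacement, ''.join(df.Input.astype(str))))).astype(int)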

How to add missing rows of time series data to pandas DataFrames in Python

I have a time series dataset of products, given below:
date product price amount
11/17/2019 A 10 20
11/19/2019 A 15 20
11/24/2019 A 20 30
12/01/2019 C 40 50
12/05/2019 C 45 35
This data has missing days ("MM/dd/YYYY") between the start and end dates of the data for each product. I am trying to fill each missing date with a zero row, converting the previous table into the table given below:
date product price amount
11/17/2019 A 10 20
11/18/2019 A 0 0
11/19/2019 A 15 20
11/20/2019 A 0 0
11/21/2019 A 0 0
11/22/2019 A 0 0
11/23/2019 A 0 0
11/24/2019 A 20 30
12/01/2019 C 40 50
12/02/2019 C 0 0
12/03/2019 C 0 0
12/04/2019 C 0 0
12/05/2019 C 45 35
To get this conversion, I used the code:
import pandas as pd
import numpy as np
data=pd.read_csv("test.txt", sep="\t", parse_dates=['date'])
data=data.set_index(["date", "product"])
start=data.first_valid_index()[0]
end=data.last_valid_index()[0]
df=data.set_index("date").reindex(pd.date_range(start,end, freq="1D"), fill_values=0)
However, the code gives an error. Is there any way to do this conversion efficiently?
If you need to add 0 for the missing datetimes of each product separately, use a custom function in GroupBy.apply with DataFrame.reindex between the minimal and maximal datetime:
df = pd.read_csv("test.txt", sep="\t", parse_dates=['date'])
f = lambda x: x.reindex(pd.date_range(x.index.min(),
                                      x.index.max(), name='date'), fill_value=0)
df = (df.set_index('date')
        .groupby('product')
        .apply(f)
        .drop('product', axis=1)
        .reset_index())
print (df)
product date price amount
0 A 2019-11-17 10 20
1 A 2019-11-18 0 0
2 A 2019-11-19 15 20
3 A 2019-11-20 0 0
4 A 2019-11-21 0 0
5 A 2019-11-22 0 0
6 A 2019-11-23 0 0
7 A 2019-11-24 20 30
8 C 2019-12-01 40 50
9 C 2019-12-02 0 0
10 C 2019-12-03 0 0
11 C 2019-12-04 0 0
12 C 2019-12-05 45 35
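Another common idiom worth mentioning, as a sketch that assumes date is already parsed as datetime: resample each product's rows daily, since the sum over an empty day is 0 for numeric columns:
df2 = (df.set_index('date')
         .groupby('product')[['price', 'amount']]
         .resample('D')
         .sum()          # days with no rows come out as 0
         .reset_index())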
One option is to use the complete function from pyjanitor to expose the missing rows per group:
#pip install git+https://github.com/pyjanitor-devs/pyjanitor.git
import pandas as pd
import janitor
# build the dates to be applied per group
dates = dict(date = lambda df: pd.date_range(df.min(), df.max(), freq='1D'))
df.complete(dates, by='product', sort = True).fillna(0, downcast='infer')
date product price amount
0 2019-11-17 00:00:00 A 10 20
1 2019-11-18 00:00:00 A 0 0
2 2019-11-19 00:00:00 A 15 20
3 2019-11-20 00:00:00 A 0 0
4 2019-11-21 00:00:00 A 0 0
5 2019-11-22 00:00:00 A 0 0
6 2019-11-23 00:00:00 A 0 0
7 2019-11-24 00:00:00 A 20 30
8 2019-12-01 00:00:00 C 40 50
9 2019-12-02 00:00:00 C 0 0
10 2019-12-03 00:00:00 C 0 0
11 2019-12-04 00:00:00 C 0 0
12 2019-12-05 00:00:00 C 45 35
There's an easier method for this case:
# create the full date range, and then create a DataFrame with the range
# if needed, you can expand the range a bit using datetime.timedelta()
from datetime import timedelta
alldates = pd.DataFrame(pd.date_range(data.index.min()-timedelta(1), data.index.max()+timedelta(4), freq="1D", name="newdate"))
# make 'newdate' the index, and you no longer need it as a column
alldates.index = alldates.newdate
alldates.drop(columns="newdate", inplace=True)
# now, join the tables; missing dates in the original table will be filled with NaN
data = alldates.join(data)
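Since join leaves NaN in the newly added rows, a short hedged follow-up to match the requested output (this sketch handles one product at a time, because the join approach does not group by product):
# fill the numeric gaps with zeros and carry the product label across new rows
data[['price', 'amount']] = data[['price', 'amount']].fillna(0)
data['product'] = data['product'].ffill().bfill()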

How to count rows when the value changes from a value greater than a threshold to 0

I have three columns in a dataframe, X1, X2, X3. I want to count the rows where the value changes from a value greater than 1 to 0. If the value before the 0 is less than 1, it doesn't need to be counted.
input df:
import pandas as pd

df1 = pd.DataFrame({'x1': [3, 4, 7, 0, 0, 0, 0, 20, 15, 16, 0, 0, 70],
                    'X2': [3, 4, 7, 0, 0, 0, 0, 20, 15, 16, 0, 0, 70],
                    'X3': [6, 3, 0.5, 0, 0, 0, 0, 20, 15, 16, 0, 0, 70]})
print(df1)
x1 X2 X3
0 3 3 6.0
1 4 4 3.0
2 7 7 0.5
3 0 0 0.0
4 0 0 0.0
5 0 0 0.0
6 0 0 0.0
7 20 20 20.0
8 15 15 15.0
9 16 16 16.0
10 0 0 0.0
11 0 0 0.0
12 70 70 70.0
Desired output:
x1_count X2_count X3_count
0 6 6 2
The idea is to replace 0 with missing values and forward fill them, then keep only the positions that were originally 0 (converting all other values to NaN), compare for greater than 1, and count the Trues with sum; the resulting Series is converted to a one-row DataFrame with a transpose:
m = df1.eq(0)
df2 = (df1.mask(m)
          .ffill()
          .where(m)
          .gt(1)
          .sum()
          .add_suffix('_count')
          .to_frame()
          .T
       )
print (df2)
x1_count X2_count X3_count
0 6 6 2
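To see why X3 only counts 2, it can help to inspect the intermediate frame before the comparison (a quick sketch):
step = df1.mask(m).ffill().where(m)
print(step['X3'].tolist())
# [nan, nan, nan, 0.5, 0.5, 0.5, 0.5, nan, nan, nan, 16.0, 16.0, nan]
# only the two 16.0 entries are greater than 1, hence X3_count == 2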

How to switch 1 (ON) flags occurring together in batch of size more than a specified threshold to 0 in pandas dataframe?

A flag column in a pandas dataframe is populated with 1s and 0s.
The problem is to identify continuous 1s.
Let t be the threshold number of days.
There are two types of transformations required:
i) If there are more than t 1s together, turn the (t+1)th 1 onwards to 0
ii) If there are more than t 1s together, turn all the 1s to 0s
My approach is to create two columns called Result1 and Result2, and filter using these columns.
I have not been able to think of anything as such, so not posting any code.
A nudge or hint in the right direction would be appreciated.
Use:
import numpy as np

# compare to 0
m = df['Value'].eq(0)
# get the cumulative sum and keep only the rows that are 1
g = m.cumsum()[~m]
# set by condition: 0, or a counter per group
df['Result1'] = np.where(m, 0, df.groupby(g).cumcount().add(1))
# get the maximum per group with transform for a new Series
df['Result2'] = np.where(m, 0, df.groupby(g)['Result1'].transform('max')).astype(int)
print (df)
Value Result1 Result2
0 1 1 1
1 0 0 0
2 0 0 0
3 1 1 2
4 1 2 2
5 0 0 0
6 1 1 4
7 1 2 4
8 1 3 4
9 1 4 4
10 0 0 0
11 0 0 0
12 1 1 1
13 0 0 0
14 1 1 1
15 0 0 0
16 0 0 0
17 1 1 6
18 1 2 6
19 1 3 6
20 1 4 6
21 1 5 6
22 1 6 6
23 0 0 0
24 1 1 1
25 0 0 0
26 0 0 0
27 1 1 1
28 0 0 0
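With those helper columns in place, the two requested transformations can be applied like this (a sketch; t and the Out1/Out2 column names are my own choices):
t = 3
# (i) keep only the first t 1s of each run, zero out the rest
df['Out1'] = df['Value'].where(df['Result1'].le(t), 0)
# (ii) zero out every run that is longer than t
df['Out2'] = df['Value'].where(df['Result2'].le(t), 0)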

Pandas series.groupby().apply( .sum() ), .sum() not summing values

I have the following test code:
import pandas as pd
import numpy as np
df = pd.DataFrame({'MONTH': [1,2,3,1,1,1,1,1,1,2,3,2,2,3,2,1,1,1,1,1,1,1],
'HOUR': [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
'CIGFT': [np.NaN,12000,2500,73300,73300,np.NaN,np.NaN,np.NaN,np.NaN,12000,100,100,15000,2500,np.NaN,15000,11000,np.NaN,np.NaN,np.NaN,np.NaN,np.NaN]})
cigs = pd.DataFrame()
cigs['cigsum'] = df.groupby(['MONTH','HOUR'])['CIGFT'].apply(lambda c: (c>=0.0).sum())
cigs['cigcount'] = df.groupby(['MONTH','HOUR'])['CIGFT'].apply(lambda c: (c>=0.0).count())
df.fillna(value='-', inplace=True)
cigs['cigminus'] = df.groupby(['MONTH','HOUR'])['CIGFT'].apply(lambda c: (c>=0.0).sum())
tfile = open('test_COUNT_manual.txt', 'a')
tfile.write(cigs.to_string())
tfile.close()
I wind up with the following results:
The dataframe:
CIGFT HOUR MONTH
0 NaN 0 1
1 12000.0 0 2
2 2500.0 0 3
3 73300.0 0 1
4 73300.0 0 1
5 NaN 0 1
6 NaN 0 1
7 NaN 0 1
8 NaN 0 1
9 12000.0 0 2
10 100.0 0 3
11 100.0 0 2
12 15000.0 0 2
13 2500.0 0 3
14 NaN 0 2
15 15000.0 0 1
16 11000.0 0 1
17 NaN 0 1
18 NaN 0 1
19 NaN 0 1
20 NaN 0 1
21 NaN 0 1
The results in the write to file:
cigsum cigcount cigminus
MONTH HOUR
1 0 4 14 14
2 0 4 5 5
3 0 3 3 3
My issue is that .sum() is not summing the values; it is counting the non-null values. When I replace the null values with a minus sign, .sum() produces the same result as count().
So what do I use to get the sum of the values if .sum() does not do it?
Series.sum() returns the sum of the series values, excluding NA/null values by default, as mentioned in the official docs.
You are getting a Series in the lambda function each time, but (c>=0.0) turns it into booleans, so summing it just counts the True values; applying sum directly to the Series in the lambda will give you the correct result.
Do this:
cigs['cigsum'] = df.groupby(['MONTH','HOUR'])['CIGFT'].apply(lambda c: c.sum())
The result of this code will be:
MONTH HOUR
1 0 172600.0
2 0 39100.0
3 0 5100.0
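For completeness, the apply/lambda wrapper is not needed at all here, since a groupby sum skips NaN by default. A minimal equivalent sketch:
cigs['cigsum'] = df.groupby(['MONTH', 'HOUR'])['CIGFT'].sum()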
