How to Replace Multiple Strings in a Data Frame Using Python

I have a data frame with 73k rows; here is a sample of the data:
Index  Customers' Name  States
0      Alpha            Oregon
1      Alpha            Oregon
2      Bravo            Utah
3      Bravo            Utah
4      Charlie          Alabama
5      Charlie          Alabama
6      Alpha            Oregon
7      Alpha            Oregon
8      Bravo            Utah
The data contain unique customer names that I am not allowed to delete or remove because they are mandatory for my research. Instead, I would like to replace the customers' names with specific pseudonyms so the result looks like this:
Index  Customers' Name  States
0      z1               Oregon
1      z1               Oregon
2      z2               Utah
3      z2               Utah
4      z3               Alabama
5      z3               Alabama
6      z1               Oregon
7      z1               Oregon
8      z2               Utah
I'm still a beginner; I've been learning Python for around three months. How can I make this change in bulk, given that I have 73k rows? I assumed it would require a for loop, but I tried and couldn't get it to work. Please help me solve this.

You can use .groupby() with .ngroup():
df["Customers' Name"] = "z" + (
df.groupby("Customers' Name").ngroup() + 1
).astype("str")
print(df)
Prints:
  Customers' Name   States
0              z1   Oregon
1              z1   Oregon
2              z2     Utah
3              z2     Utah
4              z3  Alabama
5              z3  Alabama
6              z1   Oregon
7              z1   Oregon
8              z2     Utah
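If you would rather have the numbers follow the order of first appearance (by default .ngroup() numbers the sorted group keys, which happens to give the same result here), a minimal sketch using pd.factorize:
```
import pandas as pd

# factorize assigns integer codes in order of first appearance:
# Alpha -> 0, Bravo -> 1, Charlie -> 2
codes, _ = pd.factorize(df["Customers' Name"])
df["Customers' Name"] = "z" + pd.Series(codes + 1, index=df.index).astype(str)
```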

Related

Adding values to a new column in Pandas depending on values in an existing column

I have a pandas dataframe as follows:
    Name  Age       City    Country  percentage
a   Jack   34     Sydney  Australia        0.23
b   Riti   30      Delhi      India        0.45
c  Vikas   31     Mumbai      India        0.55
d  Neelu   32  Bangalore      India        0.73
e   John   16   New York         US        0.91
f   Mike   17  las vegas         US        0.78
I am planning to add one more column called bucket whose definition depends on the percentage column as follows:
less than 0.25 = 1
between 0.25 and 0.5 = 2
between 0.5 and 0.75 = 3
greater than 0.75 = 4
I tried building conditions and choices lists and passing them to np.select, as follows:
conditions = [(df_obj['percentage'] < .25),
              (df_obj['percentage'] >=.25 & df_obj['percentage'] < .5),
              (df_obj['percentage'] >=.5 & df_obj['percentage'] < .75),
              (df_obj['percentage'] >= .75)]
choices = [1, 2, 3, 4]
df_obj['bucket'] = np.select(conditions, choices)
However, this gives me the following error in the line where I create the conditions:
TypeError: Cannot perform 'rand_' with a dtyped [float64] array and scalar of type [bool]
A quick fix to your code is that you need more parentheses, for example:
((df_obj['percentage'] >= .25) & (df_obj['percentage'] < .5))
 ^                           ^   ^                          ^
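For completeness, here is the original np.select attempt with the parentheses fixed (a sketch, assuming numpy is imported as np):
```
import numpy as np

conditions = [(df_obj['percentage'] < .25),
              (df_obj['percentage'] >= .25) & (df_obj['percentage'] < .5),
              (df_obj['percentage'] >= .5) & (df_obj['percentage'] < .75),
              (df_obj['percentage'] >= .75)]
choices = [1, 2, 3, 4]
# each comparison is wrapped in parentheses so & combines two boolean
# Series instead of binding to the bare scalars first
df_obj['bucket'] = np.select(conditions, choices)
```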
However, I think it's cleaner with pd.cut:
pd.cut(df['percentage'], bins=[0, 0.25, 0.5, 0.75, 1],
       include_lowest=True, right=False,
       labels=[1, 2, 3, 4])
Or, since your buckets are linear (see the boundary note after the output):
df['bucket'] = (df['percentage']//0.25).add(1).astype(int)
Output
    Name  Age       City    Country  percentage  bucket
a   Jack   34     Sydney  Australia        0.23       1
b   Riti   30      Delhi      India        0.45       2
c  Vikas   31     Mumbai      India        0.55       3
d  Neelu   32  Bangalore      India        0.73       3
e   John   16   New York         US        0.91       4
f   Mike   17  las vegas         US        0.78       4
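One boundary case worth noting on the floor-division shortcut, as a hedged aside: a percentage of exactly 1.0 floor-divides into bucket 5 rather than 4, so if your data can reach 1.0 you may want to clip:
```
# clip keeps values at the top boundary (percentage == 1.0) in bucket 4
df['bucket'] = (df['percentage'] // 0.25).add(1).clip(upper=4).astype(int)
```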
I think the easiest/most readable way to do this is to use the apply function:
def percentage_to_bucket(percentage):
    if percentage < .25:
        return 1
    elif percentage >= .25 and percentage < .5:
        return 2
    elif percentage >= .5 and percentage < .75:
        return 3
    else:
        return 4

df["bucket"] = df["percentage"].apply(percentage_to_bucket)
Pandas apply will take each value of a given column and apply the passed function to this value, returning a pandas series with the results, which you can then assign to your new column.

How to display the rows with the most occurrences in a column of a dataframe?

I have a data frame with 6 columns:
   taken person quant  reading  personal  family
0      1   lake   rad     9.7   Anderson  Lake
1      1   lake   sal     0.21  Anderson  Lake
2      5   Lim    sal     0.08  Andy      Lim
3      2   Lim    rad     9.82  Andy      Lim
4      2   Lim    sal     0.13  Andy      Lim
5      3   dyer   rad     7.7   William   Dyer
Output I want:
   taken person quant  reading  personal  family
0      5   Lim    sal     0.08  Andy      Lim
1      2   Lim    rad     9.82  Andy      Lim
2      2   Lim    sal     0.13  Andy      Lim
Basically, I want to display all the rows in the df that correspond to the most frequent value in the personal column. This is what I've tried, but it doesn't work:
test = df.personal.mode()
test1 = df.loc[df.personal == test]
display(test1)
You can combine value_counts and boolean indexing:
df[df['person'] == df['person'].value_counts().index[0]]
Output:
   taken person quant  reading  personal  family
2      5   Lim    sal     0.08  Andy      Lim
3      2   Lim    rad     9.82  Andy      Lim
4      2   Lim    sal     0.13  Andy      Lim
Note that this only keeps one person if several persons tie for the most appearances. If you want to keep all of them, mode and isin are a better choice:
df[df['person'].isin(df['person'].mode())]
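If you also want the renumbered 0-based index shown in the desired output, a small sketch chaining reset_index:
```
result = df[df['person'].isin(df['person'].mode())].reset_index(drop=True)
print(result)
```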

How to access a column of grouped data to perform linear regression in pandas?

I want to perform a linear regression on the groups of a grouped data frame in pandas. The function I am calling throws a KeyError that I cannot resolve.
I have an environmental data set called dat that includes concentration data of a chemical in different tree species of various age classes, at sites in different countries, over the course of several time steps. I now want to do a regression of concentration over time steps within each group of (site, species, age).
This is my code:
```
import pandas as pd
import statsmodels.api as sm

dat = pd.read_csv('data.csv')
dat.head(15)
#    SampleName  Concentration         Site Species  Age  Time_steps
# 0      batch1           2.18      Germany    pine    1           1
# 1      batch2           5.19      Germany    pine    1           2
# 2      batch3          11.52      Germany    pine    1           3
# 3      batch4          16.64       Norway  spruce    0           1
# 4      batch5          25.30       Norway  spruce    0           2
# 5      batch6          31.20       Norway  spruce    0           3
# 6      batch7          12.63       Norway  spruce    1           1
# 7      batch8          18.70       Norway  spruce    1           2
# 8      batch9          43.91       Norway  spruce    1           3
# 9     batch10           9.41       Sweden   birch    0           1
# 10    batch11          11.10       Sweden   birch    0           2
# 11    batch12          15.73       Sweden   birch    0           3
# 12    batch13          16.87  Switzerland   beech    0           1
# 13    batch14          22.64  Switzerland   beech    0           2
# 14    batch15          29.75  Switzerland   beech    0           3

def ols_res_grouped(group):
    # regress Concentration on Time_steps and return the slope
    xcols_const = sm.add_constant(group['Time_steps'])
    linmod = sm.OLS(group['Concentration'], xcols_const).fit()
    return linmod.params[1]

grouped = dat.groupby(['Site','Species','Age']).agg(ols_res_grouped)
```
I want to get the regression coefficient of concentration data over Time_steps but get a KeyError: 'Time_steps'. How can the sm method access group["Time_steps"]?
According to pandas' documentation, agg applies functions to each column independently, so ols_res_grouped only ever receives a single column and group['Time_steps'] raises the KeyError.
It might be possible to use NamedAgg but I am not sure.
I think it is a lot easier to just use a for loop for this:
for _, group in dat.groupby(['Site','Species','Age']):
    coeff = ols_res_grouped(group)
    # if you want to put the coeff inside the dataframe
    dat.loc[group.index, 'coeff'] = coeff
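As an alternative sketch: unlike .agg(), .apply() on the groupby passes each group's full sub-DataFrame to the function, so the original ols_res_grouped should work unchanged:
```
# every group arrives as a DataFrame with all its columns,
# so group['Time_steps'] is accessible inside the function
coeffs = dat.groupby(['Site', 'Species', 'Age']).apply(ols_res_grouped)
print(coeffs)
```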

Excel match/index by column pair?

I am definitely a beginner when it comes to advanced Excel syntax. I tried searching for an answer to my question, but I may be too inexperienced to know how to look. It's also possible that I misspecified my title. Apologies if this has been answered previously.
Say I have the following 3 columns
   A           B         C
1  Alabama     Alabama   Red
2  Alaska      Wyoming   Blue
3  Arizona     Alaska    Yellow
4  Arkansas    Florida   Green
5  California  Arkansas  Black
And I want to produce a list of the same-row B/C pairs whose B value appears anywhere in column A:
   D         E
1  Alabama   Red
2  Alaska    Yellow
3  Arkansas  Black
How can I do this?
Thank you, thank you, thank you!

How can you identify the best companies for each variable and copy the cases?

I want to compare the means of subgroups. The cases of the subgroups with the lowest and the highest means should be copied and appended to the end of the dataset:
Input
df.head(10)
Outcome
  Company  Satisfaction  Image  Forecast  Contact
0    Blue             2      3         3        1
1    Blue             2      1         3        2
2  Yellow             4      3         3        3
3  Yellow             3      4         3        2
4  Yellow             4      2         1        5
5    Blue             1      5         1        2
6    Blue             4      2         4        3
7  Yellow             5      4         1        5
8     Red             3      1         2        2
9     Red             1      1         1        2
I have around 100 cases in my sample. Now I look at the means for each company.
Input
df.groupby(['Company']).mean()
Outcome
         Satisfaction     Image  Forecast   Contact
Company
Blue         2.666667  2.583333  2.916667  2.750000
Green        3.095238  3.095238  3.476190  3.142857
Orange       3.125000  2.916667  3.416667  2.625000
Red          3.066667  2.800000  2.866667  3.066667
Yellow       3.857143  3.142857  3.000000  2.714286
So for Satisfaction, Yellow got the best and Blue the worst value. I want to copy the cases of Yellow and Blue and add them to the dataset, but now with the new labels "Best" and "Worst". I don't want to rename the originals, and I want to iterate over the dataset and do this for the other columns, too (for example Image). Is there a solution for this? After I have added the cases, I want an output like this:
Input
df.groupby(['Company']).mean()
Expected Outcome
         Satisfaction     Image  Forecast   Contact
Company
Blue         2.666667  2.583333  2.916667  2.750000
Green        3.095238  3.095238  3.476190  3.142857
Orange       3.125000  2.916667  3.416667  2.625000
Red          3.066667  2.800000  2.866667  3.066667
Yellow       3.857143  3.142857  3.000000  2.714286
Best         3.857143  3.142857  3.000000  3.142857
Worst        2.666667  2.583333  2.866667  2.625000
But as I said, it is really important that the cases of the companies with the best and worst values for each column are added again and not just renamed, because I want to do further data processing with other software.
UPDATE
I found out how to copy the correct cases:
Input
df2 = df.loc[df['Company'] == 'Yellow']
df2 = df2.replace('Yellow', 'Best')
df2 = df2[['Company', 'Satisfaction']]
new = [df, df2]
result = pd.concat(new)
result
Output
    Company  Contact  Forecast  Image  Satisfaction
0      Blue      1.0       3.0    3.0             2
1      Blue      2.0       3.0    1.0             2
2    Yellow      3.0       3.0    3.0             4
3    Yellow      2.0       3.0    4.0             3
..      ...      ...       ...    ...           ...
87     Best      NaN       NaN    NaN             3
90     Best      NaN       NaN    NaN             4
99     Best      NaN       NaN    NaN             1
111    Best      NaN       NaN    NaN             2
Now I want to copy the cases of the company with the best values for the other variables, too. But then I have to identify manually which company is best for each category. Isn't there a more convenient solution?
I have a solution. First I create a list of the variables for which I want to create the dummy companies "Best" and "Worst":
variables = ['Contact','Forecast','Satisfaction','Image']
Then I loop over these columns, adding the cases again with the new label "Best" or "Worst":
for Start in variables:
    # mean per company for the current column
    neu = df.groupby(['Company'], as_index=False)[Start].mean()
    # companies with the highest and lowest mean
    Best = neu['Company'].loc[neu[Start].idxmax()]
    Worst = neu['Company'].loc[neu[Start].idxmin()]
    dfBest = df.loc[df['Company'] == Best]
    dfWorst = df.loc[df['Company'] == Worst]
    dfBest = dfBest.replace(Best, 'Best')
    dfWorst = dfWorst.replace(Worst, 'Worst')
    # keep only the company label and the current column
    dfBest = dfBest[['Company', Start]]
    dfWorst = dfWorst[['Company', Start]]
    df = pd.concat([df, dfBest, dfWorst])
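To check the result, the group means the question started from should now include the two pseudo-companies (a sketch, assuming the concatenated df produced by the loop above):
```
# the 'Best' and 'Worst' rows only carry the copied column; the other
# columns are NaN and are ignored by mean()
print(df.groupby(['Company']).mean(numeric_only=True))
```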
Thanks guys :)
