Selecting columns/axes for correlation from Pandas df - python-3.x

I have a pandas dataframe like the one below. I would like to build a correlation matrix that establishes the relationship between product ownership and the profit/cost/rev for a series of customer records.
prod_owned_a prod_owned_b profit cost rev
0 1 0 100 75 175
1 0 1 125 100 225
2 1 0 100 75 175
3 1 1 225 175 400
4 0 1 125 100 225
Ideally, the matrix will have all prod_owned along one axis with profit/cost/rev along another. I would like to avoid including the correlation between prod_owned_a and prod_owned_b in the correlation matrix.
Question: How can I select specific columns for each axis? Thank you!

As long as the order of the columns does not change, you can use slicing:
df.corr().loc[:'prod_owned_b', 'profit':]
# profit cost rev
#prod_owned_a 0.176090 0.111111 0.147442
#prod_owned_b 0.616316 0.666667 0.638915
A more robust solution locates all "prod_*" columns:
prod_cols = df.columns.str.match('prod_')
df.corr().loc[prod_cols, ~prod_cols]
# profit cost rev
#prod_owned_a 0.176090 0.111111 0.147442
#prod_owned_b 0.616316 0.666667 0.638915
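For reference, a self-contained sketch that recreates the sample frame from the question and applies the mask-based selection (nothing here beyond the question's own column names):
import pandas as pd

df = pd.DataFrame({'prod_owned_a': [1, 0, 1, 1, 0],
                   'prod_owned_b': [0, 1, 0, 1, 1],
                   'profit': [100, 125, 100, 225, 125],
                   'cost': [75, 100, 75, 175, 100],
                   'rev': [175, 225, 175, 400, 225]})

# Boolean mask over the column names: True for the product-ownership columns
prod_cols = df.columns.str.match('prod_')

# Rows = product columns, columns = everything else (profit/cost/rev)
corr_subset = df.corr().loc[prod_cols, ~prod_cols]
print(corr_subset)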

Not very optimized, but it works just the same:
df.corr().loc[['prod_owned_a', 'prod_owned_b'], ['profit', 'cost', 'rev']]

Related

Pandas - number of occurrences of IDs from a column in one dataframe in several columns of a second dataframe

I'm new to python and pandas, and trying to "learn by doing."
I'm currently working with two football/soccer (depending on where you're from!) dataframes:
player_table has several columns, among others 'player_name' and 'player_id'
player_id player_name
0 223 Lionel Messi
1 157 Cristiano Ronaldo
2 962 Neymar
match_table also has several columns, among others 'home_player_1', '..._2', '..._3' and so on, as well as the corresponding 'away_player_1', '..._2', '..._3' and so on. The content of these columns is a player_id, such that you can tell which 22 (2x11) players participated in a given match through their respective unique IDs.
I'll just post a 2 vs. 2 example here, because that works just as well:
match_id home_player_1 home_player_2 away_player_1 away_player_2
0 321 223 852 729 853
1 322 223 858 157 159
2 323 680 742 223 412
What I would like to do now is add a new column, player_table['appearances'], that gives the number of appearances for each player by counting the number of times each player_id is mentioned in the part of match_table bounded horizontally by (home_player_1, away_player_2) and vertically by (first match, last match).
Desired result:
player_id player_name appearances
0 223 Lionel Messi 3
1 157 Cristiano Ronaldo 1
2 962 Neymar 0
Coming from other programming languages, I think my standard solution would be a nested for loop, but I understand that is frowned upon in Python...
I have tried several solutions, but none really work. This at least seems to give the number of appearances as "home_player_1":
player_table['appearances'] = player_table['player_id'].map(match_table['home_player_1'].value_counts())
Is there a way to expand the map function to include several columns in a dataframe? Or do I have to stack the 22 columns on top of one another in a new dataframe, and then map? Or is map not the appropriate function?
Would really appreciate your support, thanks!
Philipp
Edit: added specific input and desired output as requested
What you could do is use .melt() on the match_table player columns (turning your wide table into a tall/long table with a single column of player ids). Then do a .value_counts() on that one column. Finally, join it to player_table on the 'player_id' column.
import pandas as pd

player_table = pd.DataFrame({'player_id': [223, 157, 962],
                             'player_name': ['Lionel Messi', 'Cristiano Ronaldo', 'Neymar']})
match_table = pd.DataFrame({'match_id': [321, 322, 323],
                            'home_player_1': [223, 223, 680],
                            'home_player_2': [852, 858, 742],
                            'away_player_1': [729, 157, 223],
                            'away_player_2': [853, 159, 412]})

# Melt every home/away player column into one long column of player ids,
# count how often each id appears, then join the counts back to player_table.
player_cols = [x for x in match_table.columns if 'player_' in x]
counts = (match_table[player_cols]
          .melt(var_name='columns', value_name='player_id')['player_id']
          .value_counts()
          .rename('appearances')
          .reset_index()
          .rename(columns={'index': 'player_id'}))
appearances_df = counts.merge(player_table, how='right', on='player_id')[['player_id', 'player_name', 'appearances']].fillna(0)
Output:
print(appearances_df)
player_id player_name appearances
0 223 Lionel Messi 3.0
1 157 Cristiano Ronaldo 1.0
2 962 Neymar 0.0
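As a side note, the stack-and-map idea from the question also works; a short sketch, reusing match_table, player_table and player_cols from the snippet above:
# Stack all player columns into one long Series of ids, count occurrences,
# then map those counts onto player_table; players with no matches get 0.
counts = match_table[player_cols].stack().value_counts()
player_table['appearances'] = player_table['player_id'].map(counts).fillna(0).astype(int)
print(player_table)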

Counting the no.of elements in a column and grouping them

Hope you guys are doing well. I have taken up a small project in Python so I can learn how to code and do basic data analysis along the way. I need some help counting the number of elements in a DataFrame column and grouping them.
Below is the Dataframe I am using
dates open high low close volume % change
372 2010-01-05 15:28:00 5279.2 5280.25 5279.1 5279.5 131450
373 2010-01-05 15:29:00 5279.75 5279.95 5278.05 5279.0 181200
374 2010-01-05 15:30:00 5277.3 5279.0 5275.0 5276.45 240000
375 2010-01-06 09:16:00 5288.5 5289.5 5288.05 5288.45 32750 0.22837324337386275
376 2010-01-06 09:17:00 5288.15 5288.75 5285.05 5286.5 55004
377 2010-01-06 09:18:00 5286.3 5289.0 5286.3 5288.2 37650
I would like to create another DF containing the count of entries in the % change column, grouped as x<=0.5, 0.5<x<=1, 1<x<=1.5, 1.5<x<=2, 2<x<=2.5, or x>2.5.
Below would be the desired output
Group no.of instances
x<= 0.5 1
0.5<x<=1 0
1<x<=1.5 0
1.5<x<=2 0
2<x<=2.5 0
x>2.5 0
Looking forward to a reply,
Fudgster
You could get the number of elements in each category by using the bins option of the pandas value_counts() method. This returns a Series with the number of records within each specified range.
Here is the code:
df["% change"].value_counts(bins=[0,0.5,1,1.5,2,2.5])
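If you also want the open-ended top bin and the exact group labels from the desired output, here is a sketch using pd.cut with explicit labels (assuming df is the frame above and the '% change' column is numeric; the label strings are taken from the question):
import numpy as np
import pandas as pd

bins = [-np.inf, 0.5, 1, 1.5, 2, 2.5, np.inf]
labels = ['x<=0.5', '0.5<x<=1', '1<x<=1.5', '1.5<x<=2', '2<x<=2.5', 'x>2.5']

# Assign each % change value to a labelled bin, then count per bin (0 where empty)
groups = pd.cut(df['% change'], bins=bins, labels=labels)
result = groups.value_counts().reindex(labels, fill_value=0).rename('no. of instances')
print(result)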

Cumulatively Reduce Values in Column by Results of Another Column

I am dealing with a dataset that shows duplicate stock per part and location. Orders from multiple customers are coming in and the stock was just added via a vlookup. I need help writing some sort of looping function in python that cumulatively decreases the stock quantity by the order quantity.
Currently data looks like this:
SKU Plant Order Stock
0 5455 989 2 90
1 5455 989 15 90
2 5455 990 10 80
3 5455 990 20 80
I want to accomplish this:
SKU Plant Order Stock
0 5455 989 2 88
1 5455 989 15 73
2 5455 990 10 70
3 5455 990 20 50
Try:
df.Stock -= df.groupby(['SKU','Plant'])['Order'].cumsum()
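The same idea spelled out as a small, self-contained sketch that rebuilds the sample data from the question:
import pandas as pd

df = pd.DataFrame({'SKU': [5455, 5455, 5455, 5455],
                   'Plant': [989, 989, 990, 990],
                   'Order': [2, 15, 10, 20],
                   'Stock': [90, 90, 80, 80]})

# Subtract the running total of orders within each (SKU, Plant) group
df['Stock'] = df['Stock'] - df.groupby(['SKU', 'Plant'])['Order'].cumsum()
print(df)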

Mark sudden changes in prices in a dataframe time series and color them

I have a Pandas dataframe of prices for different months and years (timeseries), 80 columns. I want to be able to detect significant changes in prices either up or down and color them differently in a dataframe. Is that possible and what would be the best approach?
Jan-2001 Feb-2001 Jan-2002 Feb-2002 ....
100 30 10 ...
110 25 1 ...
40 5 50
70 11 4
120 35 2
Here, in the first column 40 and 70 should be marked, in the second column 5 and 11 should be marked, and in the third column I am not really sure, but probably 1, 50, 4, 2...
Your question involves two problems as far as I can see.
1. Printing the highlighting depends on the output target you're trying to reach, be it STDOUT, a file, or some specific program.
2. Identifying outliers based on the column data. It's hard to tell whether you want this based on the entire dataset or on the previous data in the column, like a rolling outlier, i.e. the preceding data is used to decide whether the next value is out of whack.
In the instance below I provide a method that attacks the data with standard deviation / z-scoring based on the mean of the entire column. You will have to tweak the > and < thresholds to get to your desired state; there are many intricacies to this concept, and I would suggest taking a look at a few resources on the subject.
For your data:
Jan-2001,Feb-2001,Jan-2002
100,30,10
110,25,1
40,5,50
70,11,4
120,35,20000
I am aware of methods to highlight, but not in the terminal. The styling approach described at https://pandas.pydata.org/pandas-docs/stable/style.html works in a few environments (for example Jupyter notebooks).
To get at the original item, identifying the outliers in your data, you could use something like the code below, which flags values based on standard deviation and z-score.
Sample Code:
import pandas as pd

df = pd.read_csv("full.txt")
original = df.columns  # original column names, before the helper columns are added
print(df)

for col in original:
    col_std = col + "_std"
    col_zscore = col + "_zscore"
    df[col_std] = df[col].std()  # sample standard deviation of the column
    df[col_zscore] = (df[col] - df[col].mean()) / df[col].std(ddof=0)
    # print the values whose z-score falls outside the chosen thresholds
    print(df[col].loc[(df[col_zscore] > 1.5) | (df[col_zscore] < -0.5)])

print(df)
Output 1: # prints the original dataframe
Jan-2001 Feb-2001 Jan-2002
100 30 10
110 25 1
40 5 50
70 11 4
120 35 20000
Output 2: # Identifies the outliers
2 40
3 70
Name: Jan-2001, dtype: int64
2 5
3 11
Name: Feb-2001, dtype: int64
0 10
1 1
3 4
4 20000
Name: Jan-2002, dtype: int64
Output 3: # Prints the full dataframe created, with zscore of each item based on the column
Jan-2001 Feb-2001 Jan-2002 Jan-2001_std Jan-2001_zscore \
0 100 30 10 32.710854 0.410152
1 110 25 1 32.710854 0.751945
2 40 5 50 32.710854 -1.640606
3 70 11 4 32.710854 -0.615227
4 120 35 2 32.710854 1.093737
Feb-2001_std Feb-2001_zscore Jan-2002_std Jan-2002_zscore
0 12.735776 0.772524 20.755722 -0.183145
1 12.735776 0.333590 20.755722 -0.667942
2 12.735776 -1.422147 20.755722 1.971507
3 12.735776 -0.895426 20.755722 -0.506343
4 12.735776 1.211459 20.755722 -0.614076
Resources for zscore are here:
https://statistics.laerd.com/statistical-guides/standard-score-2.php
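For the coloring side, a hedged sketch using the pandas Styler from the link above; it renders where HTML styling is displayed (for example Jupyter), and the highlight_outliers helper, the salmon color and the 1.5 threshold are choices of mine, not part of the original answer:
def highlight_outliers(col, threshold=1.5):
    # Return one CSS string per cell; color cells whose column z-score is extreme
    z = (col - col.mean()) / col.std(ddof=0)
    return ['background-color: salmon' if abs(v) > threshold else '' for v in z]

# `df` and `original` come from the sample code above
styled = df[original].style.apply(highlight_outliers, axis=0)
styled.to_html('highlighted.html')  # or simply display `styled` in a notebook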

Sorting and Grouping in Pandas data frame column alphabetically

I want to sort and group by a pandas data frame column alphabetically.
a b c
0 sales 2 NaN
1 purchase 130 230.0
2 purchase 10 20.0
3 sales 122 245.0
4 purchase 103 320.0
I want to sort column "a" so that it is in alphabetical order and grouped as well, i.e., the output is as follows:
a b c
1 purchase 130 230.0
2 10 20.0
4 103 320.0
0 sales 2 NaN
3 122 245.0
How can I do this?
I think you should use the sort_values method of pandas:
result = dataframe.sort_values('a')
It will sort your dataframe by column 'a', and because equal values end up next to each other, the rows are effectively grouped by that column as well. See ya!
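A short sketch that rebuilds the sample data; blanking the repeated labels afterwards (to mimic the grouped look of the desired output) is an optional display step, not something sort_values does by itself:
import pandas as pd

df = pd.DataFrame({'a': ['sales', 'purchase', 'purchase', 'sales', 'purchase'],
                   'b': [2, 130, 10, 122, 103],
                   'c': [None, 230.0, 20.0, 245.0, 320.0]})

result = df.sort_values('a')

# Optional: blank out consecutive repeats of 'a' purely for display
display_df = result.copy()
display_df['a'] = display_df['a'].mask(display_df['a'].eq(display_df['a'].shift()), '')
print(display_df)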
