How to t-test by group in a pandas dataframe? - python-3.x

I have a rather large pandas dataframe with many columns. The dataframe contains two groups. It is basically set up as follows:
import pandas as pd
csv = [{"air": 0.47, "co2": 0.43, "Group": 1},
       {"air": 0.77, "co2": 0.13, "Group": 1},
       {"air": 0.17, "co2": 0.93, "Group": 2}]
df = pd.DataFrame(csv)
I want to perform a paired t-test on air and co2, thereby comparing the two groups Group = 1 and Group = 2.
I have many more columns than just air and co2, hence I would like to find a procedure that works for all columns in the dataframe. I believe I could use scipy.stats.ttest_rel together with pd.groupby or apply. How would that work? Thanks in advance /R

I would use the pandas DataFrame.where method.
group1_air = df.where(df.Group == 1).dropna()['air']
group2_air = df.where(df.Group == 2).dropna()['air']
This bit of code puts all the values of the air column where the Group column is 1 into group1_air, and all the values of air where Group is 2 into group2_air.
The dropna() is required because the .where method returns NaN for every row in which the specified condition is not met. So all rows where Group is 2 will come back as NaN values when you use df.where(df.Group == 1).
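For what it's worth, plain boolean indexing is an equivalent way to pull out those Series (just a sketch with the same names; note that .where(...).dropna() also drops rows that are NaN in any other column, while this keeps them):
# same result as above for this sample data
group1_air = df[df.Group == 1]['air']
group2_air = df[df.Group == 2]['air']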
Whether you need to use scipy.stats.ttest_rel or scipy.stats.ttest_ind depends on your groups. If your samples are from independent groups you should use ttest_ind; if your samples are from related groups you should use ttest_rel.
So if your samples are independent from one another, your final piece of required code is:
scipy.stats.ttest_ind(group1_air, group2_air)
Otherwise you need to use:
scipy.stats.ttest_rel(group1_air, group2_air)
When you also want to test co2, you simply need to change air to co2 in the given example.
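For example, the co2 comparison for independent groups would be (a minimal sketch):
import scipy.stats
group1_co2 = df.where(df.Group == 1).dropna()['co2']
group2_co2 = df.where(df.Group == 2).dropna()['co2']
scipy.stats.ttest_ind(group1_co2, group2_co2)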
Edit:
This is a rough sketch of the code you should run to execute t-tests over every column in your dataframe except for the Group column. You may need to adjust column_list a bit to make it fully fit your needs (you may not want to loop over every column, for example).
import scipy.stats
# get a list of all columns in the dataframe without the Group column
column_list = [x for x in df.columns if x != 'Group']
# create an empty dictionary
t_test_results = {}
# loop over column_list and execute the code explained above
for column in column_list:
    group1 = df.where(df.Group == 1).dropna()[column]
    group2 = df.where(df.Group == 2).dropna()[column]
    # add the output to the dictionary
    t_test_results[column] = scipy.stats.ttest_ind(group1, group2)
results_df = pd.DataFrame.from_dict(t_test_results, orient='index')
results_df.columns = ['statistic', 'pvalue']
At the end of this code you have a dataframe with the output of the t-test for every column you looped over.
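Since the question also mentions pd.groupby: here is a minimal sketch of the same loop built on groupby instead of where (this assumes the group labels are exactly 1 and 2, as in the sample data):
import scipy.stats
# one sub-dataframe per group label
groups = dict(tuple(df.groupby('Group')))
t_test_results = {
    column: scipy.stats.ttest_ind(groups[1][column], groups[2][column])
    for column in df.columns if column != 'Group'
}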

Related

split row value by separator and create new columns

I have a PySpark dataset with a column "channels" that looks like this:
channels
name1,name2,name3,name4
happy1,happy2
entity1,entity2,entity3,entity4,entity5
I want to create 5 new columns, i.e. "channel1", "channel2", "channel3", "channel4", "channel5".
Then, I want to split the contents of the “channels” column using the comma separator. After splitting values from each row, I want to put each separated value in a different column.
For example for the first row, the columns should look like this:
channel1 channel2 channel3 channel4 channel5
name1 name2 name3 name4 ~
When an element is not found, I want to use ~ as the column value. For example, in the first row there were only 4 values instead of 5, so for the channel5 column I used ~.
I only want to use ~, not None or NULL.
How can I achieve this result in pyspark?
I tried this:
df = df.withColumn("channels_split", split(df["channels"], ","))
df = df.withColumn("channel1", coalesce(df["channels_split"][0], "~"))
df = df.withColumn("channel2", coalesce(df["channels_split"][1], "~"))
df = df.withColumn("channel3", coalesce(df["channels_split"][2], "~"))
df = df.withColumn("channel4", coalesce(df["channels_split"][3], "~"))
df = df.withColumn("channel5", coalesce(df["channels_split"][4], "~"))
df = df.drop("channels_split")
but it gives me this error:
`~` is missing
You're referencing the column `~`, but it is missing from the schema. Please check your code.
Note that I am using pyspark within Foundry
coalesce expects columns as arguments and you are providing a string; I think you should use lit("~") instead.
https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.coalesce.html
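For example, the minimal fix to the snippet from the question would be (only the lit call changes):
from pyspark.sql.functions import coalesce, lit, split
df = df.withColumn("channels_split", split(df["channels"], ","))
df = df.withColumn("channel1", coalesce(df["channels_split"][0], lit("~")))
# ...and the same pattern for channel2 through channel5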
This is more of a spark problem than a Foundry problem though.
What @M_S said is correct: as the error message states, you need a column and should use the lit function.
But be careful: if spark.sql.ansi.enabled is set to True, then this code will throw an ArrayIndexOutOfBoundsException if there are fewer than 5 items in your array.
Another way to do this would be to ensure that your array has at least 5 elements on each row by appending some ~ and then taking the first 5 items, or by adding a temporary column with the length of the array and using a when condition.
By the way, you don't need to repeat df = every time:
from pyspark.sql.functions import array_repeat, col, concat, lit, split
df = (
    df
    .withColumn("channels_split", split(col("channels"), ","))
    # pad the array with five "~" so every index up to 4 exists
    .withColumn("default_values", array_repeat(lit("~"), 5))
    .withColumn("channels_split", concat(col("channels_split"), col("default_values")))
    .withColumn("channel1", col("channels_split")[0])
    .withColumn("channel2", col("channels_split")[1])
    .withColumn("channel3", col("channels_split")[2])
    .withColumn("channel4", col("channels_split")[3])
    .withColumn("channel5", col("channels_split")[4])
    .drop("channels_split", "default_values")
)
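And a rough sketch of the when/size variant mentioned above (assuming channels_split has been created with split and not yet dropped); since when/otherwise only evaluates the index when the condition holds, this should also avoid the out-of-bounds error under ANSI mode:
from pyspark.sql.functions import col, lit, size, split, when
df = df.withColumn("channels_split", split(col("channels"), ","))
for i in range(5):
    # pick the i-th element when it exists, otherwise fall back to "~"
    df = df.withColumn(
        f"channel{i + 1}",
        when(size(col("channels_split")) > i, col("channels_split")[i]).otherwise(lit("~")),
    )
df = df.drop("channels_split")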

merging pandas dataframes on multiple columns - error about levels

I'm merging my two dataframes below on two fields.
successes = pd.merge(failures, successes, left_on=['name', 'project_name'], right_on=['name', 'project_name'], how='left')
But I get this error - can anyone help me out please?
/usr/local/lib/python3.8/site-packages/pandas/core/reshape/merge.py:643: UserWarning: merging between different levels can give an unintended result (1 levels on the left,2 on the right)
warnings.warn(msg, UserWarning)
I think it must be written this way:
successes.merge(failures, on=['name', 'project_name'])
This happens when you merge DataFrames with different levels of column indices.
Artificial example below reproduces your warning:
import pandas as pd
# a has a 2-level column index
a = pd.DataFrame({("name_0", "name_01"): [1, 2, 3, 4],
                  ("name_0", "name_02"): [4, 3, 2, 1]})
# b has a 1-level column index
b = pd.DataFrame({"name_0": [10, 2, 30, 40],
                  "name_1": [40, 30, 20, 10]})
# Notice how left_on accepts a list of tuples. Tuples can be used to address multilevel columns
pd.merge(a, b, how="left", left_on=[("name_0", "name_01")], right_on=["name_0"])
If you instead use only level 1 of the multilevel column index in DataFrame "a", this warning disappears:
import pandas as pd
a = pd.DataFrame({("name_0", "name_01"): [1, 2, 3, 4],
                  ("name_0", "name_02"): [4, 3, 2, 1]})
# Only use the 1st level of the index (e.g. "name_01" and "name_02")
a.columns = a.columns.get_level_values(1)
b = pd.DataFrame({"name_0": [10, 2, 30, 40],
                  "name_1": [40, 30, 20, 10]})
# Notice how left_on is now a normal string since only 1 level is used
pd.merge(a, b, how="left", left_on=["name_01"], right_on=["name_0"])
I suggest you check whether both of your DataFrames have the same number of column levels. If not, consider dropping one level or flattening them to one level.
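For example, a quick sketch of flattening the 2-level columns of "a" from the example above into single strings:
# ('name_0', 'name_01') becomes 'name_0_name_01', giving a plain 1-level index
a.columns = ['_'.join(map(str, col)) for col in a.columns]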

How can I groupby rows by the columns in which they actually posses a data point?

I don't even know if groupby is the correct function to use for this. It's a bit hard to understand, so I'll include a screenshot of my dataframe: screenshot
Basically, this dataframe has way too many columns because each column is specific to only one or a few rows. You can see in the screenshot that the first few columns are specific to the first row and the last few columns are specific to the last row. I want to make it so that each row only has the columns that actually pertain to it. I've tried several methods using groupby('equipment name') and several methods using dropna, but none work the way I need them to. I'm also open to separating it into multiple dataframes.
Any method is acceptable, this bug has been driving me crazy. It took me a while to get to this point because this started out as an unintelligible 10,000 line json. I'm pretty new to programming as well.
This linked answer is a very cool option, and it does use groupby, so sorry for dismissing it! It will group your data into DataFrames where each DataFrame has a unique group of columns, and any row which only contains values for those columns will be in that DataFrame. If your data are such that there are multiple groups of rows which share the exact same columns, this solution is ideal, I think.
Just to note, though: if your null values are spread more randomly throughout the dataset, or if one row in a group is missing a single entry (compared to related rows), you will end up with more combinations of unique non-null columns, and therefore more output DataFrames.
There are also (in my opinion) nice ways to search a DataFrame, even if it is very sparse. You can check the non-null values for a row:
df.loc[index_name].dropna()
Or for an index number:
df.iloc[index_number].dropna()
You could further store these values, say in a dictionary (this is a dictionary of Series, but it could be converted to a DataFrame):
row_dict = {row : df.loc[row].dropna() for row in df.index}
I could imagine some scenarios where something based off these options is more helpful for searching. But that linked answer is slick, I would try that.
EDIT: Expanding on the answer above based on comments with OP.
The dictionary created in the linked post contains the DataFrames. Basically you can use this dictionary to do comparisons with the original source data. My only issue with that answer was that it may be hard to search the dictionary if the column names are janky (as it looks like in your data), so here's a slight modification:
d = {}
for i, (name, sub_df) in enumerate(df.groupby(df.isnull().dot(df.columns))):
    d['df' + str(i)] = sub_df.dropna(axis=1)
Now the dictionary keys are "df#", and the values are the DataFrames. So if you want to inspect the contents of one DataFrame, you can call:
d['df1'].head()
#OR
print(d['df0'])
If you wanted to look at all the DataFrames, you could call:
for df in d.values():
    print(df.head())  # you can also pass an integer to head to show more rows than 5
Or if you wanted to save each DataFrame you could call:
for name in sorted(d.keys()):
    d[name].to_csv('path/to/file/' + name + '.csv')
The point is, you've gotten to a data structure where you can look at the original data, separated into DataFrames without missing data. Joining these back into a single DataFrame would be redundant, as it would create a single DataFrame (equal to the original) or multiple with some amount of missing data.
I think it comes down to what you are looking for and how you need to search the data. You could rename the dictionary keys / output .CSV files based on the types of machinery inside, for example.
I thought your last comment might mean that objects of similar type might not share the same columns; say, for example, if not all "Exhaust Fans" have the same columns, they will end up in different DataFrames in the dictionary. This may be the type of case where it might be easier to just look at individual rows, rather than grouping them into weird categories:
df_dict = {row : pd.DataFrame(df.loc[row].dropna()).transpose() for row in df.index}
You could again then save these DataFrames as CSV files or look at them one by one (or e.g. search for Exhaust Fans by seeing if "Exhaust" is in the key). You could also print them all at once:
import pandas as pd
import numpy as np
import natsort
#making some randomly sparse data
columns = ['Column ' + str(i+1) for i in range(10)]
index = ['Row ' + str(i+1) for i in range(100)]
df = pd.DataFrame(np.random.rand(100,10), columns=columns,index=index)
df[df<.7] = np.nan
#creating the dictionary where each key is a row name
df_dict = {row : pd.DataFrame(df.loc[row].dropna()).transpose() for row in df.index}
#printing all the output
for key in natsort.natsorted(df_dict.keys())[:5]:  # using [:5] to limit output
    print(df_dict[key], '\n')
Out[1]:
Column 1 Column 4 Column 7 Column 9 Column 10
Row 1 0.790282 0.710857 0.949141 0.82537 0.998411
Column 5 Column 8 Column 10
Row 2 0.941822 0.722561 0.796324
Column 2 Column 4 Column 5 Column 6
Row 3 0.8187 0.894869 0.997043 0.987833
Column 1 Column 7
Row 4 0.832628 0.8349
Column 1 Column 4 Column 6
Row 5 0.863212 0.811487 0.924363
Instead of printing, you could write the output to a text file; maybe that's the type of document that you could look at (and search) to compare to the input tables. But note that even though the printed data are tabular, they can't be made into a DataFrame without accepting that there will be missing data for rows which don't have entries for all columns.
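As an aside, the key-based search mentioned earlier (e.g. looking for Exhaust Fans) could look something like this; the "Exhaust" label is just a hypothetical row name:
# keep only the entries whose row label mentions "Exhaust"
exhaust_fans = {key: frame for key, frame in df_dict.items() if 'Exhaust' in key}
for frame in exhaust_fans.values():
    print(frame, '\n')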

Splitting dataframe based on multiple column values

I have a dataframe with 1M+ rows. A sample of the dataframe is shown below:
df
ID Type File
0 123 Phone 1
1 122 Computer 2
2 126 Computer 1
I want to split this dataframe based on Type and File. If the total count of Type is 2 (Phone and Computer) and the total number of files is 2 (1, 2), then the total number of splits will be 4.
In short, total splits is as given below:
total_splits=len(set(df['Type']))*len(set(df['File']))
In this example, total_splits=4. Now, I want to split the dataframe df in 4 based on Type and File.
So the new dataframes should be:
df1 (having data of type=Phone and File=1)
df2 (having data of type=Computer and File=1)
df3 (having data of type=Phone and File=2)
df4 (having data of type=Computer and File=2)
The splitting should be done inside a loop.
I know we can split a dataframe based on one condition (shown below), but how do you split it based on two ?
My Code:
import pandas as pd
data = {'ID': ['123', '122', '126'], 'Type': ['Phone', 'Computer', 'Computer'], 'File': [1, 2, 1]}
df = pd.DataFrame(data)
types = list(set(df['Type']))
total_splits = len(set(df['Type'])) * len(set(df['File']))
cnt = 1
for i in range(0, total_splits):
    for j in types:
        locals()["df" + str(cnt)] = df[df['Type'] == j]
        cnt += 1
The result of the above code gives 2 dataframes, df1 and df2. df1 will have data of Type='Phone' and df2 will have data of Type='Computer'.
But this is just half of what I want to do. Is there a way we can make 4 dataframes here based on 2 conditions ?
Note: I know I can first split on 'Type' and then split the resulting dataframe based on 'File' to get the output. However, I want to know of a more efficient way of performing the split instead of having to create multiple dataframes to get the job done.
EDIT
This is not a duplicate question as I want to split the dataframe based on multiple column values, not just one!
You can make do with groupby:
dfs = {}
for k, d in df.groupby(['Type', 'File']):
    type, file = k
    # do whatever you want here
    # d is the dataframe corresponding to (type, file)
    dfs[k] = d
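Since the keys of dfs are (Type, File) tuples, you can then pull out a single piece directly, for example:
# the Type == 'Phone', File == 1 piece from the sample data
print(dfs[('Phone', 1)])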
You can also create a mask:
df['mask'] = df['File'].eq(1) * 2 + df['Type'].eq('Phone')
Then, for example:
df[df['mask'].eq(3)]
gives you the first dataframe you want, i.e. Type == 'Phone' and File == 1, and so on.
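For reference, a short sketch of how this encoding maps onto the four combinations (with the sample data, File != 1 simply means File == 2):
# mask = File.eq(1) * 2 + Type.eq('Phone')
# 3 -> File == 1 and Type == 'Phone'
# 2 -> File == 1 and Type == 'Computer'
# 1 -> File != 1 and Type == 'Phone'
# 0 -> File != 1 and Type == 'Computer'
for value in range(4):
    print(value)
    print(df[df['mask'].eq(value)])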

Cleaner way to one-by-one construct a row and add it to a final dataframe?

I have large Excel files, which contain observations of objects. I read the files via pandas, group them and then iterate over each group. For each group I calculate specific and quite complex results - let's say result1, result2 and optional result3.
I define an empty df with predefined columns, into which I insert my calculated values. In the end I combine all dfs into one final df.
Maybe better explained by code:
data = pd.read_excel()
grouped = data.groupby('obj_id')
columns = ['result1', 'result2', 'result3']
combined_results = pd.DataFrame(columns=columns)
for obj_id, obj_df in grouped:
    obj_results = pd.DataFrame(columns=columns, index=[0])
    # creates an empty df with one all-NaN row
    obj_results['result1'] = fooCalculation(obj_df)
    obj_results['result2'] = fooCalculation(obj_df)
    combined_results = combined_results.append(obj_results, sort=False)
I like my current method because, if optional columns end up having no value, the column still exists, since it was set (to NaN) initially. This way I can calculate my result values per object one by one and update my df row as soon as I have updates.
I can't help but think that this is not the cleanest way. Especially because obj_results['result1'] = fooCalculation() sets an entire column, and I am really using it to set a single value.
What is the clean / best-practice way here?
Should I instead "cache" the results in a dict and insert them into combined_results?
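Roughly, the dict-based version I have in mind would look like this (just a sketch, reusing the names from the snippet above):
rows = []
for obj_id, obj_df in grouped:
    # collect the results for one object in a plain dict;
    # 'result3' can simply be left out when it is not computed
    rows.append({'result1': fooCalculation(obj_df),
                 'result2': fooCalculation(obj_df)})
# missing keys become NaN, and reindex keeps the predefined column order
combined_results = pd.DataFrame(rows).reindex(columns=columns)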
