I have a pandas dataframe with some values for male and some for female. I would like to test whether the percentages of the two genders' values are significantly different, and to get confidence intervals for these rates. Given below is sample code:
import pandas as pd

data={}
data['gender']=['male','female','female','male','female','female','male','female','male']
data['values']=[10,2,13,4,11,8,14,19,2]
df_new=pd.DataFrame(data)
df_new.head() # make a simple data frame
   gender  values
0    male      10
1  female       2
2  female      13
3    male       4
4  female      11
df_male=df_new.loc[df_new['gender']=='male']
df_female=df_new.loc[df_new['gender']=='female'] # separate male and female
# calculate percentages
male_percentage=sum(df_male['values'].values)*100/sum(df_new['values'].values)
female_percentage=sum(df_female['values'].values)*100/sum(df_new['values'].values)
# want to tell whether both percentages are statistically different or not and what are their confidence interval rates
print(male_percentage)
print(female_percentage)
Any help will be much appreciated. Thanks!
Use a t-test. In this case, use a two-sample t-test, meaning you are comparing the values/means of two samples.
I am applying the alternative hypothesis A != B.
I do this by testing the null hypothesis A = B. This is achieved by calculating a p-value. When p falls below a critical value, called alpha, I reject the null hypothesis. The standard value for alpha is 0.05: if there is less than a 5% probability that the null hypothesis would produce a pattern like the observed values, we reject it.
Extract the samples, in this case two lists of values:
A=df_new[df_new['gender']=='male']['values'].values.tolist()
B=df_new[df_new['gender']=='female']['values'].values.tolist()
Using the scipy library, do the t-test:
from scipy import stats
t_check=stats.ttest_ind(A,B)
t_check
alpha=0.05
if(t_check[1]<alpha):
print('A different from B')
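The question also asked for confidence intervals, which the test above does not give. As a minimal sketch, a t-based confidence interval for each group's mean can be computed with scipy; the lists A and B below are the male/female values from the question's sample frame, and the 95% level is an assumption:

```python
import numpy as np
from scipy import stats

A = [10, 4, 14, 2]       # male values from the sample frame
B = [2, 13, 11, 8, 19]   # female values from the sample frame

def mean_ci(sample, confidence=0.95):
    """t-based confidence interval for the mean of a small sample."""
    sample = np.asarray(sample, dtype=float)
    m = sample.mean()
    sem = stats.sem(sample)  # standard error of the mean
    lo, hi = stats.t.interval(confidence, len(sample) - 1, loc=m, scale=sem)
    return m, lo, hi

for name, s in [('male', A), ('female', B)]:
    m, lo, hi = mean_ci(s)
    print(f'{name}: mean={m:.2f}, 95% CI=({lo:.2f}, {hi:.2f})')
```

Note these are intervals for the group means, not for the percentage shares; with so few observations per group the intervals will be wide.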
Try this:
df_new.groupby('gender')['values'].sum()/df_new['values'].sum()*100
gender
female 63.855422
male 36.144578
Name: values, dtype: float64
I'm trying to calculate a weighted median, but I don't understand the difference between the following two methods. The answer I get from weighted.median() is different from with(df, median(rep(value, count))), but I don't understand why. Are there multiple ways to compute a weighted median? Is one preferable over the other?
df = read.table(text="row count value
1 1. 25.
2 2. 26.
3 3. 30.
4 2. 32.
5 1. 39.", header=TRUE)
# weighted median
with(df, median(rep(value, count)))
# [1] 30
library(spatstat)
weighted.median(df$value, df$count)
# [1] 28
Note that with(df, median(rep(value, count))) only makes sense for weights which are positive integers (rep will accept float values for count but will coerce them to integers). This approach is thus not a fully general approach to computing weighted medians.

?weighted.median shows that what the function tries to do is to compute a value m such that the total weight of the data below m is 50% of the total weight. In the case of your sample, there is no such m that works exactly: 28.5% of the total weight of the data is <= 26 and 61.9% is <= 30. In a case like this, by default ("type 2") it averages these 2 values to get the 28 that is returned.

There are two other types. weighted.median(df$value, df$count, type = 1) returns 30. I am not completely sure whether this type will always agree with your other approach.
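For readers working in Python, the 50%-of-total-weight definition described above can be sketched as follows. This is a minimal illustration that picks the smallest value whose cumulative weight reaches half the total (so it matches the rep-based result on this data), not a reimplementation of spatstat's type-2 interpolation:

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches 50% of the total.

    Mirrors the repeat-and-take-the-median idea for integer weights,
    but accepts arbitrary positive weights.
    """
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cum = np.cumsum(weights)
    # first index where cumulative weight reaches half the total weight
    idx = np.searchsorted(cum, 0.5 * cum[-1])
    return values[idx]

# the sample from the question: counts act as weights
print(weighted_median([25, 26, 30, 32, 39], [1, 2, 3, 2, 1]))  # -> 30.0
```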
I have a dataframe like this:
df = pd.DataFrame({'id':[10,20,30,40],'text':['some text','another text','random stuff', 'my cat is a god'],
'A':[0,0,1,1],
'B':[1,1,0,0],
'C':[0,0,0,1],
'D':[1,0,1,0]})
Here I have columns from A to D, but my real dataframe has 100 columns with values of 0 and 1. This real dataframe has 100k records.
For example, the column A is related to the 3rd and 4th rows of text, because it is labeled as 1. In the same way, A is not related to the 1st and 2nd rows of text because it is labeled as 0.
What I need to do is to sample this dataframe in a way that I have the same or about the same number of features.
In this case, the feature C has only one occurrence, so I need to filter all other columns in a way that I have one text with A, one text with B, one text with C, etc.
The best would be if I could set, for example, n=100, meaning I want to sample in a way that I have 100 records with all the features.
This dataset is for multilabel training and is highly unbalanced; I am looking for the best way to balance it for a machine learning task.
Important: I don't want to exclude the 0 features. I just want to have ABOUT the same number of columns with 1 and 0
For example, with a final dataset of 1k records, I would like to have all columns from A to the final column, each with about the same number of 1s and 0s. To accomplish this I will need to randomly discard rows (text and id) only.
The approach I was trying was to look at the feature with the lowest 1 and 0 counts and then use this value as a threshold.
Edit 1: One possible way I thought is to use:
df.sum(axis=0, skipna=True)
Then I can use the column with the lowest sum value as a threshold to filter the text column. I don't know how to do this filtering step.
Thanks
The exact output you expect is unclear, but assuming you want to get 1 random row per letter with a 1, you could reshape (while dropping the 0s) and use GroupBy.sample:
(df
.set_index(['id', 'text'])
.replace(0, float('nan'))
.stack()
.groupby(level=-1).sample(n=1)
.reset_index()
)
NB. you can rename the columns if needed
output:
id text level_2 0
0 30 random stuff A 1.0
1 20 another text B 1.0
2 40 my cat is a god C 1.0
3 30 random stuff D 1.0
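If more than one row per label is wanted (the question mentions n=100 on the real data), the same reshape works with GroupBy.sample's n and replace arguments; replace=True is an assumption needed here because label C occurs in a single row of the toy frame. A minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'id': [10, 20, 30, 40],
                   'text': ['some text', 'another text', 'random stuff', 'my cat is a god'],
                   'A': [0, 0, 1, 1],
                   'B': [1, 1, 0, 0],
                   'C': [0, 0, 0, 1],
                   'D': [1, 0, 1, 0]})

n = 2  # rows desired per label; the question mentions n=100 on the real data
out = (df
       .set_index(['id', 'text'])
       .replace(0, float('nan'))   # drop the 0s when stacking
       .stack()
       .groupby(level=-1)
       .sample(n=n, replace=True)  # sample with replacement: C has a single row
       .reset_index())
print(out)
```

Sampling with replacement duplicates rows for rare labels, so for real balancing you may prefer n=min(n, group size) per group instead.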
I am wondering if I can write a formula which would operate over several columns. E.g., I want to calculate the number of males in the school and I have a table:
A B C
Class Sex Number
1 male 3
2 male 4
1 female 6
2 female 5
Right now I have to break the operations into parts:
=(B2="Male")*C2 in an additional column, and then
=SUMME(D2:D5)
I want to do it all at once. It seems like trivial functionality, but I cannot figure out how to do it in one formula.
I am having trouble determining the correct way to calculate a final rank order for four categories. Each of the four metrics makes up a higher-level group. A Top 10 of each category is applied to the respective product for risk analysis.
CURRENT LOGIC - Assignment of 25% max per category.
Columns - Y4
Parts
0.25
25
=IF(L9=1,$Y$4,IF(L9=2,$Y$4*0.9, IF(L9=3,$Y$4*0.8, IF(L9=4,$Y$4*0.7, IF(L9=5,$Y$4*0.6, IF(L9=6,$Y$4*0.5, IF(L9=7,$Y$4*0.4, IF(L9=8,$Y$4*0.3, IF(L9=9,$Y$4*0.2, IF(L9=10,$Y$4*0.1,0))))))))))
DESIRED...
I would like to use a statement that evaluates three criteria in order to apply a score (1=100, 2=90, 3=80, etc.):
SUM the rank positions of each of the four categories - apply product rank ascending (not including NULL, since it's not in the Top 10).
IF a product is identified in more than one metric - apply a significant contribution weight of (*0.75).
IF a product has the number 1 rank in any of the four metrics - apply a score of (100).
Data - UPDATED EXAMPLE
(Product) Parts Labor Overhead External Final Score
"XYZ" 3 1 7 7 100
"ABC" NULL 6 NULL 2 100
"LMN" 4 NULL NULL NULL 70
This is way beyond my capability. ANY assistance is appreciated greatly!!!
Jim
I figured this is a good start and I can alter the weight as needed to reflect the reality of the situation.
=AVERAGE(G28:I28)+SUM(G28:I28)*0.25
However, I couldn't figure out how to cap the score at no more than 100 points.
I am still unclear on what exactly you are attempting and whether this will work, but how about this simple matrix using an array formula and some conditional formatting.
Array Formula in F2 (make sure to press Ctrl+Shift+Enter when exiting formula edit mode)
=MIN(100,SUM(IF(B2:E2<>"NULL",CHOOSE(B2:E2,100,90,80,70,60,50,40,30,20,10))))
Conditional Formatting defined as shown below.
Red = 100 value where it comes from a 1
Yellow = 100 value where it comes from more than 1 factor, but without a 1.
In my data file I select a random sample of a fixed size via Select Cases.
Say I have 400 cases and I randomly pick 150. All cases have an AGE and a SEX value.
I now want to test the AGE and SEX distribution of the sample (150 cases) against the AGE and SEX distribution of the rest (250 cases) and check if my sample is representative of the population.
My solution is to compute two new variables where I put the value in depending on sample or rest. Here for age:
IF (filter_$ EQ 1) sample_age = age.
IF (filter_$ EQ 0) rest_age = age.
EXECUTE .
How do I then perform a test on sample_age and rest_age?
Which test would be appropriate?
The data looks like this:
person sample_age rest_age
1 29 .
2 56 .
3 . 34
4 . 12
5 65 .
You should not make new variables with missing values. Presuming you have calculated the filter_$ variable that identifies the separate samples, for the continuous AGE variable you can estimate an independent-samples t-test:
T-TEST GROUPS = filter_$ (1 0)
/VARIABLES=age.
For SEX, which is categorical, you can run a CROSSTABS and calculate the chi-square statistic:
CROSSTABS
/TABLES = filter_$ BY sex
/STATISTICS=CHISQ.
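For cross-checking outside SPSS, the same two tests can be sketched in Python with scipy. The 400-case population and 150-case sample sizes come from the question; the data itself is randomly generated here purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative data standing in for the 400 cases in the question
age = rng.normal(40, 12, size=400)
sex = rng.integers(0, 2, size=400)   # 0 = female, 1 = male
in_sample = np.zeros(400, dtype=bool)
in_sample[rng.choice(400, size=150, replace=False)] = True

# Independent-samples t-test on AGE (sample vs. rest)
t_stat, t_p = stats.ttest_ind(age[in_sample], age[~in_sample])

# Chi-square test on the SEX x sample crosstab
table = np.array([
    [np.sum(sex[in_sample] == 0), np.sum(sex[in_sample] == 1)],
    [np.sum(sex[~in_sample] == 0), np.sum(sex[~in_sample] == 1)],
])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f'age t-test p={t_p:.3f}, sex chi-square p={chi_p:.3f}')
```

Since the 150 cases are drawn at random from the 400, you would expect both tests to be non-significant most of the time.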