I want to use the chi-square test of independence to test the following two variables: student knowledge vs. course attendance.
The null hypothesis is: student knowledge and course attendance (X and Y) are independent
Members in each student knowledge group: Low (12), Average (29), High (9)
The results show two degrees of freedom, a chi-square statistic of 0.20, and a p-value of 0.90, so we fail to reject the null hypothesis. I added an image of my test.
I have some doubts regarding the following two issues: the student knowledge groups have unequal numbers of participants, and the number of participating students in each course is fewer than 10.
My question is: is this test appropriate for my data?
If this test cannot be used for my data, what statistical test should I use instead?
Welcome to Stack Exchange. Using the chi-square test of independence can be an issue with small cell sizes (e.g., G3, course Y, which has a cell count of 2). This has to do with the use of the chi-square distribution as a large-sample approximation.
I would recommend Fisher's exact test. It is usually presented as a tool for small sample sizes, but it remains valid for large samples as well.
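In R, both tests take the contingency table directly. Since your actual table is only in the image, the counts below are hypothetical, chosen only to match the group totals you report and the cell of 2 mentioned above:

    # Hypothetical 3x2 table: rows are knowledge groups, columns are courses.
    # Row totals match the question (12, 29, 9); the G3/course-Y cell is 2.
    tab <- matrix(c(10,  2,
                    24,  5,
                     7,  2),
                  nrow = 3, byrow = TRUE,
                  dimnames = list(knowledge = c("Low", "Average", "High"),
                                  course    = c("X", "Y")))

    chisq.test(tab)   # warns when expected cell counts are small
    fisher.test(tab)  # exact; no large-sample approximation involved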
I'm trying to do a simple comparison of two samples to determine whether their means are different. Regardless of whether their standard deviations are equal or unequal, the formulas for a t-test or z-test are similar.
(I can't post images on a new account, so here are links plus the formulas written out.)
t-value w/ unequal variances (Welch):
$t = \dfrac{\bar{X}_1 - \bar{X}_2}{\sqrt{s_1^2/N_1 + s_2^2/N_2}}$
https://www.biologyforlife.com/uploads/2/2/3/9/22392738/949234_orig.jpg
t-value w/ equal/pooled variances:
$t = \dfrac{\bar{X}_1 - \bar{X}_2}{s_p \sqrt{1/N_1 + 1/N_2}}$, where $s_p^2 = \dfrac{(N_1 - 1)s_1^2 + (N_2 - 1)s_2^2}{N_1 + N_2 - 2}$
https://vitalflux.com/wp-content/uploads/2022/01/pooled-t-statistics-300x126.jpg
The issue here is that the sample sizes sit under an inverse and a square root in the denominator, so with very large samples the denominator becomes tiny and the t-values come out looking massive.
For instance, I have 2 samples w/
sizes: N1 = 168,000 and N2 = 705,000
avgs: X1 = 89 and X2 = 49
std devs: S1 = 96 and S2 = 66.
At first glance, these standard deviations are larger than the means and suggest nonhomogeneous samples with a lot of internal variation. When comparing the two samples, however, the denominator of the t-statistic (the standard error of the difference) comes out to approximately 0.25, meaning a 1-unit difference in means corresponds to 4 standard errors. Thus my t-value here comes out to around 160(!!)
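Plugging these summary statistics into the unequal-variance (Welch) formula in R to check the arithmetic:

    # Summary statistics from above
    n1 <- 168000; n2 <- 705000
    x1 <- 89;     x2 <- 49
    s1 <- 96;     s2 <- 66

    se <- sqrt(s1^2 / n1 + s2^2 / n2)  # standard error of the difference, ~0.247
    (x1 - x2) / se                     # t-statistic: about 162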
All this to say, I'm just plugging in numbers since I didn't do many of these problems in advanced stats and haven't seen this formula since Stats110.
It makes some sense that the standard error shrinks as two massive samples grow before comparing them, but this seems like not the best test out there for the magnitude of what I'm doing.
What other tests are out there that I could try? What is the logic behind this seemingly over-shrunk denominator?
One of my research hypotheses is that individuals from Southeast Asia who are ethnically Chinese are more likely to experience racially motivated hate crimes than their counterparts from other ethnic groups.
Respondents were recruited via non-probability sampling methods for my survey, and the data gathered for the hypothesis above are all nominal, with a sample size of 300, which means that the nonparametric chi-square test of independence is the most appropriate method of analysis.
However, there were 8 choices for ethnic group (reflecting the heterogeneity of ethnicities in Southeast Asia), including a "Chinese" option. I am expecting frequencies of < 5 in some of those cells due to the lack of responses from individuals of particular ethnic groups. Is it appropriate, or even possible, to combine the chi-square test of independence with a Fisher exact test (to be used only for the ethnic groups with expected frequencies of < 5)? Otherwise, how else should I go about the analysis?
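Note that Fisher's exact test applies to the entire contingency table rather than to individual cells, so the two tests would not be mixed cell by cell. A minimal sketch in R with made-up data standing in for the survey (the group labels, proportions, and counts are all hypothetical):

    # Simulated 8x2 table standing in for the survey data (n = 300)
    set.seed(1)
    ethnicity  <- sample(paste0("Group", 1:8), 300, replace = TRUE,
                         prob = c(0.30, 0.20, 0.15, 0.12, 0.10, 0.07, 0.04, 0.02))
    hate_crime <- sample(c("Yes", "No"), 300, replace = TRUE)
    tab <- table(ethnicity, hate_crime)

    # Monte Carlo version of the exact test; useful for tables larger than 2x2
    fisher.test(tab, simulate.p.value = TRUE, B = 1e5)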
Good afternoon,
I know that the traditional independent t-test assumes homoscedasticity (i.e., equal variances across groups) and normality of the residuals.
They are usually checked using Levene's test for homogeneity of variances, and the Shapiro-Wilk test and Q-Q plots for the normality assumption.
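For reference, the classical checks look like this in R (leveneTest is from the car package; the data here are simulated for illustration):

    library(car)  # for leveneTest

    set.seed(1)
    d <- data.frame(y   = c(rnorm(30, 10, 2), rnorm(30, 12, 2)),
                    grp = factor(rep(c("a", "b"), each = 30)))

    leveneTest(y ~ grp, data = d)          # homogeneity of variances
    res <- residuals(lm(y ~ grp, data = d))
    shapiro.test(res)                      # normality of residuals
    qqnorm(res); qqline(res)               # visual check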
Which statistical assumptions do I have to check for the Bayesian independent t-test? How can I check them in R with coda and rjags?
For whichever test you want to run, find the formula and plug in the posterior draws of the parameters you have, such as the variance parameter and any regression coefficients the formula requires. Iterating the formula over the posterior draws gives you a distribution of values for the test statistic, from which you can take the mean as an average value and the standard deviation as an uncertainty estimate.
And boom, you're done.
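A sketch of that recipe in R, with simulated draws standing in for what you would extract from an rjags/coda object via as.matrix(); the parameter names and group sizes below are assumptions:

    # Pretend posterior draws for two group means and sds (in practice,
    # draws <- as.matrix(coda_samples) from your rjags model)
    set.seed(2)
    draws <- data.frame(mu1 = rnorm(4000, 5.0, 0.10),
                        mu2 = rnorm(4000, 4.6, 0.10),
                        sd1 = abs(rnorm(4000, 1.0, 0.05)),
                        sd2 = abs(rnorm(4000, 1.2, 0.05)))
    n1 <- 50; n2 <- 55  # group sizes from your data

    # Welch-style t-statistic computed per posterior draw
    t_draws <- (draws$mu1 - draws$mu2) /
      sqrt(draws$sd1^2 / n1 + draws$sd2^2 / n2)

    mean(t_draws)  # average value of the statistic
    sd(t_draws)    # its uncertainty estimate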
There might be nonparametric Bayesian t-tests, but commonly Bayesian t-tests are parametric, and as such they assume equality of the relevant population variances. If you can obtain a t-value from a t-test (just a regular t-test of your type, from any software package you're comfortable with) and have run Levene's test (don't treat it as a fully dependable check; remember it relies on a p-value), then you can do a Bayesian t-test. But remember that the Bayesian t-test requires a conventional model for the observations (the likelihood) and an appropriate prior for the parameter of interest.
It is highly recommended that t-tests be re-parameterized in terms of effect sizes (especially standardized mean-difference effect sizes). That is, you focus on the Bayesian estimation of the effect size arising from the t-test rather than the other parameters in the t-test. If you opt to estimate the effect size from a t-test, then a very easy-to-use, free, online Bayesian t-test tool is THIS ONE HERE (probably one of the most user-friendly packages available; note that this software uses a Cauchy prior for the effect size arising from any type of t-test).
Finally, since you want to do a Bayesian t-test, I would suggest focusing your attention on picking an appropriate, defensible, meaningful prior rather than on Levene's test. No test can really show whether the sample data came from two populations with equal variances unless data are plentiful. Note that the question of whether the samples came from populations with equal variances is itself an inferential (Bayesian or non-Bayesian) question.
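As one concrete, hedged example of effect-size-based Bayesian t-testing in R, the BayesFactor package also places a Cauchy prior on the standardized effect size (it is not necessarily the tool linked above, and the data below are simulated):

    library(BayesFactor)

    set.seed(42)
    x <- rnorm(40, 5.0, 1.0)
    y <- rnorm(45, 4.5, 1.2)

    bf <- ttestBF(x = x, y = y, rscale = "medium")  # Cauchy(0, 0.707) on delta
    bf                                   # Bayes factor against the point null
    post <- posterior(bf, iterations = 10000)
    summary(post[, "delta"])             # posterior for the effect size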
I am writing a study protocol for my master's thesis. The study seeks to compare the rates of non-communicable diseases (NCDs) and risk factors and to determine the effects of rural-to-urban migration. Sibling pairs will be identified from a rural area. One of the siblings should have participated in the rural NCD survey which is currently ongoing in the area. The other sibling should have left the area and reported moving to a city. Data will be collected by completing a questionnaire on demographics, family history, medical history, diet, alcohol consumption, smoking, and physical activity. This will be done for both the rural and the urban sibling, with data on the amount of time spent in urban areas collected for the urban sibling.
The outcomes, which are binary (whether one has the condition or not), are: 1. diabetic, 2. hypertensive, 3. obese.
What statistical method can I use to compare the outcomes (stated above) between the two groups, considering that the siblings were matched (one urban sibling for every rural sibling)?
What statistical methods can also be used to explore associations between the amount of time spent in urban residence and the outcomes?
Given that your main aim is to compare counts from two nominal distributions, a chi-square test seems to be the method of choice for your first question. However, it should be mentioned that a chi-square test is somewhat "the smallest" test for detecting differences between samples. If you are studying medicine (or a related field), a chi-square test is fine because it is also frequently applied by researchers in that field. If you are studying psychology or sociology (or a related field), I'd advise highlighting the limitations of the test in the discussion section, since it mostly tests your distributions against randomly expected distributions.
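One caveat worth naming plainly: because the siblings are matched pairs, the paired-data analogue of the chi-square test, McNemar's test, may respect the design better. A sketch in R with hypothetical counts:

    # Hypothetical 2x2 table of matched sibling pairs for one outcome
    # (e.g. hypertension): rows = rural sibling, cols = urban sibling
    pairs_tab <- matrix(c(30, 22,
                          10, 38),
                        nrow = 2, byrow = TRUE,
                        dimnames = list(rural = c("No", "Yes"),
                                        urban = c("No", "Yes")))

    mcnemar.test(pairs_tab)  # paired-data analogue of the chi-square test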
Regarding your second question, a logistic regression would be applicable, since it handles a binary (binomially distributed) dependent variable and allows both categorical and continuous independent variables (predictors). However, if you have other interval-scaled variables (e.g., age, weight, etc.), you could also use t-tests or ANOVAs to investigate differences in these variables with respect to the existence of specific diseases (i.e., is diabetic or not).
Overall, this matter strongly depends on what you mean by "association". Classically, "association" refers to correlations or linear regression (for which you need interval-scaled variables on "both sides"), but given your data structure, the aforementioned methods are a better fit.
How you actually calculate these tests depends on the statistics software used.
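In R, for example, the logistic regression for the second question could be sketched like this (the variable names and data are hypothetical):

    # Simulated data: binary outcome vs. years spent in urban residence
    set.seed(7)
    d <- data.frame(urban_years = runif(120, 0, 20))
    d$hypertensive <- rbinom(120, 1, plogis(-2 + 0.12 * d$urban_years))

    fit <- glm(hypertensive ~ urban_years, data = d, family = binomial)
    summary(fit)
    exp(coef(fit))  # odds ratios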
I'm analyzing data for a paper, and I used the Kruskal-Wallis test and the Steel-Dwass post-hoc test for the analysis. I found a significant difference with the Kruskal-Wallis test, but no significant differences when comparing each pair of data groups. Could anyone tell me what the reason is? And what should I do then?
Check that the distributions of the data are skewed in the same direction; from what I remember, they should be for the Kruskal-Wallis test to be interpretable. Also try Wilcoxon rank-sum tests with a Bonferroni correction on each pair.
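A sketch of that follow-up in R (both tests are in base stats; the three-group data below are simulated):

    # Simulated three-group data
    set.seed(3)
    d <- data.frame(value = c(rexp(20, 1), rexp(20, 0.8), rexp(20, 0.6)),
                    group = rep(c("A", "B", "C"), each = 20))

    kruskal.test(value ~ group, data = d)
    pairwise.wilcox.test(d$value, d$group, p.adjust.method = "bonferroni")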