How to interpret the Welch test and which post hoc tests should be used after it - statistics

I use ANOVA, including Levene's test, univariate significance tests, descriptive statistics, and the Tukey test. I have some doubts about what happens when the assumption of homogeneity of variance is not met according to Levene's test. In many of the available materials I found the following information:
"Basic assumptions of ANOVA tests:
Independence of random variables in the populations (groups) under consideration.
Measurability of the analysed variables.
Normality of the distribution of the variables in each population (group).
Homogeneity of the variance in all populations (groups).
If one of the first three assumptions is not met in the analysis of variance, the non-parametric Kruskal-Wallis test should be used. If the assumption of homogeneity of variances is not met, the Welch test should be used to assess the means."
If we have heterogeneous variances and a non-normal distribution, we would apply the Kruskal-Wallis test, right? On the other hand, if we have heterogeneous variances but a normal distribution, should we use the Welch test? If we do use the Welch test, how should it be interpreted, and which post hoc tests are then recommended to identify statistically significant differences between groups?
I would be very grateful for an answer.
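For concreteness: Welch's ANOVA is interpreted much like an ordinary one-way ANOVA. Its F statistic (with adjusted degrees of freedom) tests the null hypothesis that all group means are equal, and a commonly recommended post hoc companion is the Games-Howell test, which likewise does not assume equal variances or equal group sizes. Below is a minimal sketch in Python, assuming the pingouin package; the column names 'score' and 'group' and the toy data are placeholders.

```python
import pandas as pd
import pingouin as pg

# Toy long-format data: three groups with visibly unequal variances.
df = pd.DataFrame({
    'score': [5.1, 4.8, 5.3, 5.0,      # group A: low spread
              7.2, 9.5, 6.1, 8.8,      # group B: high spread
              3.9, 4.1, 4.0, 4.2],     # group C: low spread
    'group': ['A'] * 4 + ['B'] * 4 + ['C'] * 4,
})

# Welch's ANOVA: does not assume equal variances across groups.
# A small p-value (column 'p-unc') means at least one group mean differs.
print(pg.welch_anova(dv='score', between='group', data=df))

# Games-Howell post hoc test: pairwise comparisons that, like the Welch
# test, do not require homogeneity of variance.
print(pg.pairwise_gameshowell(dv='score', between='group', data=df))
```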

Related

Nonparametric statistical significance test for dependent samples with dependent observations

The goal of my research is to establish whether one model outperforms the other (for a single dataset!!!) and whether the result is statistically significant.
The procedure is as follows for each of the two models: I use 10-fold CV and repeat the procedure 3 times with different seeds to obtain, let's say, 30 estimates of precision. Hence, I obtain two sets of 30 estimates based on a single dataset.
A test for normality showed that the 30 estimates are not normally distributed. Thus, I need to resort to a nonparametric test. I considered the Wilcoxon signed-rank test, yet that test is not suitable when the estimates are dependent (due to CV). How could I tackle this situation?
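For reference, here is a minimal sketch in Python of the resampling scheme described above, assuming scikit-learn; the two classifiers and the synthetic dataset are hypothetical stand-ins for the actual models and data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# 10-fold CV repeated 3 times -> 30 precision estimates per model.
# Fixing random_state makes both models see identical folds, so the
# two sets of 30 estimates are paired.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
scores_a = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                           cv=cv, scoring='precision')
scores_b = cross_val_score(DecisionTreeClassifier(random_state=0), X, y,
                           cv=cv, scoring='precision')

# The 30 paired differences: their dependence across overlapping training
# folds is exactly what violates the independence assumption of standard
# paired tests such as the Wilcoxon signed-rank test.
diffs = scores_a - scores_b
print(diffs.mean(), diffs.std(ddof=1))
```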

Normality Assumption - how to check you have not violated it?

I am relatively new to statistics and am struggling with the normality assumption.
I understand that parametric tests are underpinned by the assumption that the data are normally distributed, but there seem to be lots of papers and articles providing conflicting information.
Some articles say that independent variables need to be normally distributed, and that this may require a transformation (log, sqrt, etc.). Others say that in linear modelling there are no assumptions about the distribution of the independent variables.
I am trying to create a multiple regression model to predict highest pain scores on hospital admissions:
DV: numeric pain score (0 = no pain to 5 = intense pain) (discrete dependent variable).
IVs: age (continuous), weight (continuous), sex (nominal), deprivation status (ordinal), race (nominal).
Can someone help clear up the following for me?
Before fitting a model, do I need to check whether my independent variables are normally distributed? If so, why? Does this only apply to continuous variables (e.g. age and weight in my model)?
If age is positively skewed, would a transformation (e.g. log, sqrt) be appropriate, and why? Is it best to do this before or after fitting a model? I assume I am trying to get close to a linear relationship between my DV and IVs.
As part of its output, SPSS provides plots of the standardised residuals against predicted values and also normal P-P plots of the standardised residuals. Are these plots all that is needed to check the normality assumption after fitting a model?
Many thanks in advance!
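For comparison with the SPSS plots mentioned above, here is a minimal sketch in Python of the usual post-fit residual checks, assuming statsmodels and matplotlib; the variable names and made-up data are hypothetical stand-ins for the pain-score dataset.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Made-up data with the structure described in the question.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    'age': rng.normal(55, 15, n),
    'weight': rng.normal(75, 12, n),
    'sex': rng.choice(['F', 'M'], n),
    'deprivation': rng.integers(1, 6, n),    # ordinal, coded 1-5
    'race': rng.choice(['w', 'b', 'o'], n),
})
df['pain'] = np.clip(np.round(1 + 0.03 * df['age'] + rng.normal(0, 1, n)), 0, 5)

# Fit the regression. The normality assumption concerns the residuals,
# not the raw distributions of predictors such as age or weight.
model = smf.ols('pain ~ age + weight + C(sex) + deprivation + C(race)',
                data=df).fit()

# Residuals vs. fitted values: look for curvature or non-constant spread.
plt.scatter(model.fittedvalues, model.resid, s=10)
plt.xlabel('Fitted values')
plt.ylabel('Residuals')
plt.show()

# Normal Q-Q plot of residuals: points near the line support approximate
# normality (the analogue of SPSS's P-P plot of standardised residuals).
sm.qqplot(model.resid, line='45', fit=True)
plt.show()
```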

Multiple Linear Regression: a significant ANOVA but NO significant predictor coefficients?

I have run a multiple regression with 2 IVs to predict a dependent variable. All assumptions have been met, and the ANOVA has a significant result, but the coefficient table suggests that none of the predictors is significant.
What does this mean, and how should I interpret this result?
(Used SPSS.)
This almost certainly means the two predictors are substantially correlated with each other. The REGRESSION procedure in SPSS Statistics has a variety of collinearity diagnostics to aid in detection of more complicated situations involving collinearity, but in this case simply correlating the two predictors should establish the basic point.
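A small simulation makes this concrete. The sketch below (Python with statsmodels rather than SPSS, with made-up data) builds two nearly identical predictors: the overall F-test comes out significant while neither coefficient does.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # x2 is almost a copy of x1
y = x1 + x2 + rng.normal(size=n)           # y genuinely depends on both

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

print('corr(x1, x2):', np.corrcoef(x1, x2)[0, 1])  # near 1
print('F-test p-value:', fit.f_pvalue)             # small: the model works
print('coefficient p-values:', fit.pvalues[1:])    # large: neither predictor
                                                   # is significant on its own
```

The predictors jointly explain the response, so the F-test is significant, but because each carries almost the same information, neither coefficient is distinguishable from zero once the other is in the model.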

What are the differences between stratified tests and covariates?

I am studying the correlation between several human traits. One way is to use a chi-square test, but this is unable to include covariates. I am also using logistic regression, which makes it possible to include age and race as covariates.
However, I noticed that some procedures support stratified data in a chi-square-like test.
Therefore, I am wondering: what are the differences between including covariates in logistic regression and using a stratified chi-square test?
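As a concrete illustration, assuming the chi-square-like test on stratified data refers to the Cochran-Mantel-Haenszel (CMH) test, here is a minimal sketch in Python with statsmodels; all counts are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.contingency_tables import StratifiedTable

# One 2x2 table of trait A vs. trait B per stratum (e.g. per age group).
tables = [np.array([[30, 10], [15, 25]]),   # stratum 1
          np.array([[20, 20], [10, 30]])]   # stratum 2

# Stratified approach: the CMH test pools the association across strata
# without modelling the stratum itself.
st = StratifiedTable(tables)
print(st.test_null_odds().pvalue, st.oddsratio_pooled)

# Covariate approach: expand to one row per person and adjust for the
# stratum inside a logistic regression.
rows = []
for s, t in enumerate(tables):
    for a in (0, 1):
        for b in (0, 1):
            rows += [{'traitA': a, 'traitB': b, 'stratum': s}] * int(t[a, b])
long_df = pd.DataFrame(rows)
fit = smf.logit('traitA ~ traitB + C(stratum)', data=long_df).fit(disp=0)
print(np.exp(fit.params['traitB']))   # adjusted odds ratio, comparable
                                      # to the CMH pooled odds ratio
```

The practical difference: stratification conditions on a categorical variable without estimating anything for it, whereas logistic regression models each covariate explicitly and can therefore also handle continuous covariates such as age.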

Robust one-sample tests of variance or scale

A common one-sample test for variance is the chi-square test; see, e.g., http://www.itl.nist.gov/div898/handbook/eda/section3/eda358.htm.
What are some robust testing alternatives for variance when the population is not normal and/or is subject to outliers?
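For concreteness, here is a minimal sketch in Python of the classical chi-square test for variance from the NIST page above, together with a percentile bootstrap interval as one possible robust alternative; the heavy-tailed sample is made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_t(df=4, size=40)   # heavy-tailed, i.e. not normal
sigma0_sq = 1.5                     # hypothesised variance

# Classical test: chi2 = (n - 1) * s^2 / sigma0^2 with n - 1 df.
# It is highly sensitive to departures from normality.
n = len(x)
chi2 = (n - 1) * x.var(ddof=1) / sigma0_sq
p = 2 * min(stats.chi2.cdf(chi2, df=n - 1), stats.chi2.sf(chi2, df=n - 1))
print(chi2, p)

# One robust alternative: a percentile bootstrap confidence interval for
# the variance; reject at the 5% level if sigma0_sq falls outside it.
boot = np.array([rng.choice(x, size=n, replace=True).var(ddof=1)
                 for _ in range(10_000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo, hi, lo <= sigma0_sq <= hi)
```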
