I have a linear regression model (say)
$p(t | x; w) = \mathcal{N}(t; m, D)$.
Being Bayesian, I can put a Gaussian prior on parameter w.
However, I've realized that for some models we can put a Gaussian-Wishart hyperprior on the Gaussian prior to be 'more' Bayesian. Is this correct? Are both of these models valid Bayesian models?
It seems to me that we can always add a hyperprior, a hyper-hyperprior, and so on, because the result will still be a valid probabilistic model.
I am wondering what the difference is between putting a prior on the parameters and putting a hyperprior on that prior. Are both approaches Bayesian?
Using a hyperprior is still "valid Bayesian" in the sense that this sort of hierarchical modeling comes naturally to Bayesian models, and just about any book or course on Bayesian modeling covers the use of hyperpriors.
It's completely fine to use a Normal-Wishart distribution as the prior (or hyperprior) of a Gaussian. In some sense it is even "more Bayesian" to do so, if it models the phenomenon at hand more accurately.
I'm not sure what you mean by "are they both Bayesian" when it comes to the difference between using a prior and a hyperprior. Bayesian hierarchical models with hyperpriors are still Bayesian models.
Using hyperpriors only really makes sense in a hierarchical Bayesian model. In that case you would be looking at multiple groups and estimating a group-specific coefficient w_group based on group-specific priors, with those coefficients drawn from a global hyperprior.
If your prior and hyperprior reside on the same hierarchical level, which seems to be the case you are thinking about, then the effect on the results is the same as using a single prior with a wider standard deviation. Since stacking priors like this still incurs additional computational cost, it should be avoided.
There is a lot of statistical literature on how to pick non-informative priors. The theoretically best solutions are often improper distributions (their total integral is infinite), and there is a large risk of ending up with improper posteriors that lack well-defined means or even medians. For practical purposes, picking wide normal distributions usually works best.
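To make the hierarchical setup above concrete, here is a minimal sketch of a group-level regression with a hyperprior, written in PyMC (my choice of library for illustration; the data, group structure and variable names are all toy placeholders, not anything from the original question):

```python
# Sketch of a hierarchical regression: group-specific slopes w_group drawn from a global hyperprior.
# All data below is simulated; n_groups, group_idx, X, y are placeholders.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_groups, n_obs = 4, 200
group_idx = rng.integers(0, n_groups, n_obs)      # which group each observation belongs to
X = rng.normal(size=n_obs)
y = 1.5 * X + rng.normal(scale=0.5, size=n_obs)   # toy data

with pm.Model() as hierarchical_model:
    # Hyperprior: global mean and spread of the group-level slopes
    mu_w = pm.Normal("mu_w", mu=0.0, sigma=10.0)
    sigma_w = pm.HalfNormal("sigma_w", sigma=5.0)

    # Group-specific slopes drawn from the hyperprior
    w_group = pm.Normal("w_group", mu=mu_w, sigma=sigma_w, shape=n_groups)

    sigma_y = pm.HalfNormal("sigma_y", sigma=1.0)
    mu = w_group[group_idx] * X
    pm.Normal("y_obs", mu=mu, sigma=sigma_y, observed=y)

    idata = pm.sample(1000, tune=1000, chains=2)
```

Collapsing mu_w and sigma_w to fixed constants recovers the single-prior case; as noted above, stacking a hyperprior on a prior at the same level mostly just acts like a wider prior.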
Basically I'm modelling tree fruiting patterns using mgcv bams, and autoregressive AR(1) models have much better outcomes according to itsadug::compareML(). (bam with an AR(1) structure was chosen due to limitations associated with binomial data.) Further, this autocorrelation is backed up by biological theory. However, the best models when I use AR terms often don't include terms that are included in the non-AR models. I understand this to be a common occurrence: the AR term explains much of the variance, leaving little for the remaining terms to explain.
I've seen discussions on here warning that AR GAMs should be interpreted with care, and Gavin Simpson's AR GAM post (part 1) ends by hinting that there are some serious diagnostic criteria that should be considered, but part 2 never came out, and I'm struggling to find resources on interpretation. Much more common are simple introductory articles.
I guess the fundamental question is this: the two types of model make different statements about the effects of a given predictor, so which should be believed?
If the non-AR model finds that month is a useful predictor, but the AR model finds it ultimately superfluous, does month have an effect? Is month relevant due to effects like light patterns, or just because of correlational structure? I guess this is a classic 'no models are true, some are useful' situation.
This problem persists even within a single predictor. My temperature:vpds tensor product spline identifies a particular region as increasing the probability in non-AR models, but suggests another region does so in the AR models (in addition to the first).
I'm presently leaning towards including both sets of models in my paper, and noting that the AR models provide better predictions while the non-AR models can provide insight into the effects of the variables. Even then, I wonder which is more useful: the model that best fits the data without any AR term, or the non-AR version of the AR model (i.e., the same predictors with the autocorrelation parameter set to 0)? I'm leaning towards the former, because I feel strange about models that have almost no predictors.
I am relatively new to statistics and am struggling with the normality assumption.
I understand that parametric tests are underpinned by the assumption that the data are normally distributed, but there seem to be lots of papers and articles providing conflicting information.
Some articles say that independent variables need to be normally distributed, and that this may require a transformation (log, square root, etc.). Others say that in linear modelling there are no assumptions about the distribution of the independent variables.
I am trying to create a multiple regression model to predict highest pain scores on hospital admissions:
DV: numeric pain score (0 = no pain to 5 = intense pain), a discrete dependent variable.
IVs: age (continuous), weight (continuous), sex (nominal), deprivation status (ordinal), race (nominal).
Can someone help clear up the following for me?
Before fitting a model, do I need to check whether my independent variables are normally distributed? If so, why? Does this only apply to continuous variables (e.g. age and weight in my model)?
If age is positively skewed, would a transformation (e.g. log, square root) be appropriate, and why? Is it best to do this before or after fitting a model? I assume I am trying to get close to a linear relationship between my DV and IVs.
As part of its output, SPSS provides plots of the standardised residuals against predicted values and also normal P-P plots of the standardised residuals. Are these plots all that is needed to check the normality assumption after fitting a model?
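For reference, here is roughly what I understand those residual checks to be, sketched in Python/statsmodels rather than SPSS (the data and column names below are entirely made up, not my actual dataset):

```python
# Rough sketch of residual diagnostics for a multiple regression (toy data, placeholder column names).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "pain": rng.integers(0, 6, n),
    "age": rng.normal(60, 15, n),
    "weight": rng.normal(75, 12, n),
    "sex": rng.choice(["F", "M"], n),
    "deprivation": rng.integers(1, 6, n),
    "race": rng.choice(["A", "B", "C"], n),
})

model = smf.ols("pain ~ age + weight + C(sex) + C(deprivation) + C(race)", data=df).fit()
print(model.summary())

# Residuals vs fitted values: look for curvature or a funnel shape (non-linearity, heteroscedasticity)
plt.scatter(model.fittedvalues, model.resid)
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.show()

# Q-Q plot of the residuals: the normality assumption concerns the residuals, not the predictors
sm.qqplot(model.resid, line="45", fit=True)
plt.show()
```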
Many Thanks in advance!
I am trying to create a decision tree based on some training data. I have never created a decision tree before, but have completed a few linear regression models. I have 3 questions:
With linear regression I find it fairly easy to plot graphs, fit models, group factor levels, check p-values, etc. in an iterative fashion until I end up with a good predictive model. I have no idea how to evaluate a decision tree. Is there a way to get a summary of the model (for example, like the .summary() function in statsmodels)? Should this be an iterative process where I decide whether a factor is significant? If so, how can I tell?
I have been very unsuccessful in visualising the decision tree. With the various approaches I have tried, the code seems to run without any errors, yet nothing appears or gets plotted. The only thing I can do successfully is tree.export_text(model), which just reports feature_1, feature_2, and so on, so I don't know what any of the features actually are. Has anybody come across these difficulties with visualising, or found a simple solution?
The confusion matrix that I have generated is as follows:
[[ 0 395]
[ 0 3319]]
i.e. the model is predicting the same outcome for every row. Does anyone know why this might be?
Scikit-learn is a library designed to build predictive models, so there are no tests of significance, confidence intervals, etc. You can always compute your own statistics, but this is a tedious process. In scikit-learn you can eliminate features recursively using RFE, RFECV, etc. You can find a list of feature selection algorithms here. For the most part, these algorithms get rid of the least important feature in each loop according to feature_importances_ (where the importance of each feature is defined as its contribution to the reduction in entropy, Gini impurity, etc.).
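For instance, a minimal sketch of recursive feature elimination wrapped around a decision tree (toy data standing in for yours):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier

# Toy data in place of your training set
X, y = make_classification(n_samples=500, n_features=10, n_informative=4, random_state=0)

# Recursively drop the least important feature (by the tree's feature_importances_) until 4 remain
selector = RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=4)
selector.fit(X, y)

print(selector.support_)   # boolean mask of the kept features
print(selector.ranking_)   # 1 = kept; larger numbers were eliminated earlier
```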
The most straightforward way to visualize a tree is tree.plot_tree(). In particular, you should try passing the names of the features to feature_names. Please show us what you have tried so far if you want a more specific answer.
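For example, something along these lines (a built-in dataset is used here as a stand-in for your data):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, plot_tree

data = load_breast_cancer()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# feature_names / class_names give readable labels instead of feature_1, feature_2, ...
plot_tree(model, feature_names=list(data.feature_names),
          class_names=list(data.target_names), filled=True)
plt.show()   # an explicit plt.show() (or %matplotlib inline in a notebook) is often the missing step
```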
Try another criterion, set a higher max_depth, etc. Sometimes datasets contain unidentifiable records: for example, two observations with exactly the same values in all features but different target labels. Is this the case in your dataset?
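A quick way to check, assuming your data sits in a pandas DataFrame with a label column (the frame and column names below are placeholders):

```python
import pandas as pd

# Toy frame: rows 0 and 1 share all feature values but disagree on the label
df = pd.DataFrame({"f1": [1, 1, 2], "f2": [5, 5, 7], "target": [0, 1, 0]})

feature_cols = [c for c in df.columns if c != "target"]
conflicts = df.groupby(feature_cols)["target"].nunique()
print((conflicts > 1).sum(), "feature combination(s) carry more than one label")
```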
I am trying a multi-task regression model. However, the ground-truth labels of the different tasks are on different scales, so I wonder whether it is necessary to normalize the targets; otherwise the MSE of the large-scale tasks will be much larger than that of the others. The figure below shows part of my overall targets; you can see that columns like ASA_m2_c have much higher values than some others.
First, I have already tried some weighted-loss techniques to balance how much attention the model pays to each task during gradient backpropagation, but the results show this didn't perform well.
Secondly, I have seen plenty of discussion about normalizing the input data, but hardly any about normalizing the labels. This is partly because most people's problems are single-task classification. I know PyTorch provides a convenient way to normalize vision datasets via transforms.Normalize, but that operates on the inputs rather than the labels.
Similar questions: https://forums.fast.ai/t/normalizing-your-dataset/49799
https://discuss.pytorch.org/t/ground-truth-label-normalization/26981/19
PyTorch - How should you normalize individual instances
Moreover, I think it might be helpful to provide some details of my model architecture. The input is first fed into a feature extractor and then several generators use the shared output representation from that extractor to predict different targets.
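What I am currently leaning towards, though it is only a sketch and not something the threads above prescribe, is standardizing each target column with training-set statistics and inverting the scaling at prediction time (the tensors and scale factors below are toy placeholders):

```python
import torch

# Toy multi-task targets with very different per-column scales (placeholder for the real label matrix)
y_train = torch.randn(1000, 5) * torch.tensor([1.0, 50.0, 3.0, 0.1, 200.0])

target_mean = y_train.mean(dim=0, keepdim=True)
target_std = y_train.std(dim=0, keepdim=True).clamp_min(1e-8)

def scale_targets(y):
    return (y - target_mean) / target_std

def unscale_predictions(y_hat):
    return y_hat * target_std + target_mean

# Train against the scaled targets so the per-task MSE terms are on comparable scales,
# then call unscale_predictions() on the network outputs when reporting results.
y_scaled = scale_targets(y_train)
```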
I've been working on a Multi-Task Learning problem where one head has an output of ~500 and another between 0 and 1.
I've tried Uncertainty Weighting, but in vain. So I'd be grateful if you could give me a little clue about your work, if there has been any progress.
Thanks.
The literature says that the Metropolis-Hastings algorithm in MCMC is one of the most important algorithms developed in the last century, and that it was revolutionary. The literature also says that it is this development in MCMC that gave Bayesian statistics a second birth.
I understand what MCMC does - it provides an efficient way to draw samples from any complicated probability distribution.
I also know what Bayesian inference is - it is the process by which the full posterior distribution of the parameters is calculated.
I am having difficult time connecting the dots here:
At which step in the process of Bayesian inference does MCMC come into play? And why is MCMC so important that people say it gave Bayesian statistics a second birth?
You might want to ask a similar question on StatsExchange. However, here is an attempt at a high-level "build some intuition" answer (disclaimer: I am a computer scientist and not a statistician; head over to StatsExchange for a more formal discussion).
Bayesian Inference:
In the most basic sense we follow Bayes' rule: p(Θ|y) = p(y|Θ)p(Θ)/p(y). Here p(Θ|y) is called the 'posterior', and this is what you are trying to compute. p(y|Θ) is called the 'data likelihood' and is typically given by your model or your generative description of the data. p(Θ) is called the 'prior' and it captures your belief about the plausible values of the parameters before observing the data. p(y) is called the 'marginal likelihood' and, using the law of total probability, can be expressed as ∫ p(y|Θ)p(Θ) dΘ. That looks really neat, but in reality p(y) is often intractable to compute analytically, and in high dimensions (i.e. when Θ has many dimensions) numerical integration is imprecise and computationally intractable. There are certain cases when the conjugate structure of the problem allows you to compute it analytically, but in many useful models this is simply not possible. Therefore, we turn to approximating the posterior.
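A tiny numerical aside of my own to make that last point concrete: in one dimension you can brute-force the integral for p(y) on a grid, which is exactly the computation that becomes hopeless as Θ grows in dimension.

```python
import numpy as np
from scipy.stats import norm

# Toy 1-D model: y_i ~ N(theta, 1) with prior theta ~ N(0, 2)
y = np.array([1.2, 0.7, 1.9])
theta_grid = np.linspace(-10, 10, 10001)

prior = norm.pdf(theta_grid, loc=0.0, scale=2.0)                                # p(theta)
likelihood = np.prod(norm.pdf(y[:, None], loc=theta_grid, scale=1.0), axis=0)   # p(y|theta)

marginal = np.trapz(likelihood * prior, theta_grid)       # p(y): the normalizing integral
posterior = likelihood * prior / marginal                 # p(theta|y) evaluated on the grid
```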
There are two ways (that I know of) to approximate the posterior: Monte Carlo and Variational Inference. Since you asked about MCMC, I'll stick to that.
Monte Carlo (and Markov Chain Monte Carlo):
Many problems in statistics involve taking expectations of functions under probability distributions. By the Law of Large Numbers, such an expectation can be efficiently approximated by a Monte Carlo estimator. Therefore, if we can draw samples from a distribution (even if we don't know the distribution itself), we can compute a Monte Carlo estimate of the expectation in question. The key is that we don't need an expression for the distribution: if we just have samples, we can compute the expectations we are interested in. But there is a catch: how do we draw the samples?
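As a toy illustration of my own (not part of the original argument): given nothing but samples, the expectation is just an average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these samples came from some distribution we cannot write down in closed form
samples = rng.normal(loc=1.0, scale=2.0, size=100_000)

# Monte Carlo estimate of E[x^2]: just average f over the samples
estimate = np.mean(samples ** 2)
print(estimate)   # close to 1^2 + 2^2 = 5 for this toy case
```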
There has been a lot of work on ways of drawing samples from unknown distributions, including 'rejection', 'importance' and 'slice' sampling. These were all great innovations and useful in many applications, but they all scale poorly to high dimensions. For example, rejection sampling draws samples from a known 'proposal' distribution and then accepts or rejects each sample with a probability that requires evaluating the likelihood function and the proposal density. This works well in one dimension, but as the dimensionality grows, the probability that a given sample is rejected increases dramatically.
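A bare-bones sketch of rejection sampling (my own toy 1-D target; the envelope constant M is picked conservatively by hand so that M times the proposal density dominates the target everywhere):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def unnormalized_target(x):
    # A bimodal density known only up to a constant (stand-in for an unnormalized posterior)
    return np.exp(-x**2 / 2) + 0.5 * np.exp(-(x - 3)**2 / 2)

proposal = norm(loc=1.0, scale=3.0)
M = 10.0   # envelope constant: M * proposal.pdf(x) >= unnormalized_target(x) for all x (checked by hand)

accepted = []
while len(accepted) < 5000:
    x = proposal.rvs(random_state=rng)
    # Accept with probability target(x) / (M * proposal(x)); most proposals are thrown away
    if rng.uniform() < unnormalized_target(x) / (M * proposal.pdf(x)):
        accepted.append(x)
```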
Markov Chain Monte Carlo was an innovation with some very nice theoretical guarantees attached to it. The key idea was not to draw samples from a proposal distribution at random, but rather to start from a known sample (with the hope that it lies in an area of high probability mass) and then make a small random step drawn from a proposal distribution. Ideally, if the current sample is in an area of high probability mass then the proposed next sample is also likely to be accepted. Therefore, you end up accepting many more samples and you don't waste time drawing samples that will be rejected. The amazing thing is that if you run the Markov chain long enough (i.e. to infinity), and under specific conditions (the chain must be aperiodic, irreducible and ergodic), your samples will be drawn from the true posterior of your model. That's amazing! MCMC draws dependent samples, which is why it scales to higher dimensionality than previous methods, but under the right conditions, even though the samples are dependent, expectations computed from them behave as if the samples had been drawn i.i.d. from the desired distribution (which is the posterior in Bayesian inference).
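Here is a minimal random-walk Metropolis sketch (my own toy 1-D example; the proposal is symmetric, so the Hastings correction cancels) showing exactly this accept/reject random-step idea:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_unnormalized_posterior(theta):
    # Toy target: a standard normal, known only up to a constant,
    # i.e. the p(y|Θ)p(Θ) situation from Bayesian inference
    return -0.5 * theta**2

theta = 0.0          # current sample (ideally somewhere with non-negligible probability mass)
step_size = 1.0      # scale of the random step (a tuning parameter)
samples = []

for _ in range(10_000):
    proposal = theta + step_size * rng.normal()              # small random step from the current sample
    log_ratio = log_unnormalized_posterior(proposal) - log_unnormalized_posterior(theta)
    if np.log(rng.uniform()) < log_ratio:                    # accept with probability min(1, ratio)
        theta = proposal
    samples.append(theta)                                    # dependent draws from (approximately) the target
```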
Tying it together (and hopefully answering your question):
MCMC can be seen as a tool that enables Bayesian inference (just as analytical calculation from conjugate structure, Variational Inference and Monte Carlo are alternative tools). Apart from an analytical solution, all of these tools approximate the true posterior. The goal is then to make the approximation as good as possible, and to do so as cheaply as possible (in both computational cost and the cost of working through a bunch of messy algebra). Previous sampling methods did not scale to high dimensions (which are typical of any real-world problem), and therefore Bayesian inference became computationally very expensive and impractical in many instances. However, MCMC opened the door to a way of efficiently drawing samples from a high-dimensional posterior, with good theoretical guarantees, and of doing so (comparatively) easily and computationally cheaply.
It is worth mentioning that Metropolis itself has problems: it struggles with highly correlated latent parameter spaces, it requires a user-specified proposal distribution, and the correlation between successive samples can be high, leading to imprecise estimates. Therefore more modern, and sometimes more useful, MCMC tools have been proposed to combat this. See 'Hamiltonian Monte Carlo' and the 'No U-Turn Sampler' for the state of the art. Nonetheless, Metropolis was a huge innovation that suddenly made real-world problems computationally tractable.
A last note: See this discussion by MacKay for a really good overview of these topics.
This post https://stats.stackexchange.com/a/344360/137466 perfectly clears up my question about how MCMC sampling helps solve Bayesian inference. In particular, the following part of the post is the key concept that I was missing:
The Markov chain has a stationary distribution which is the distribution that preserves itself if you run it through the chain. Under certain broad assumptions (e.g., the chain is irreducible, aperiodic), the stationary distribution will also be the limiting distribution of the Markov chain, so that regardless of how you choose the starting value, this will be the distribution that the outputs converge towards as you run the chain longer and longer. It turns out that it is possible to design a Markov chain with a stationary distribution equal to the posterior distribution, even though we don't know exactly what that distribution is. That is, it is possible to design a Markov chain that has $\pi( \theta | \mathbb{x} )$ as its stationary limiting distribution, even if all we know is that $\pi( \theta | \mathbb{x} ) \propto L_\mathbb{x}(\theta) \pi(\theta)$. There are various ways to design this kind of Markov chain, and these various designs constitute available MCMC algorithms for generating values from the posterior distribution.
Once we have designed an MCMC method like this, we know that we can feed in any arbitrary starting value $\theta_{(0)}$ and the distribution of the outputs will converge to the posterior distribution (since this is the stationary limiting distribution of the chain). So we can draw (non-independent) samples from the posterior distribution by starting with an arbitrary starting value, feeding it into the MCMC algorithm, waiting for the chain to converge close to its stationary distribution, and then taking the subsequent outputs as our draws.
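To see the last part of the quote in action, here is a small self-contained check (my own sketch, toy standard-normal target): two random-walk Metropolis chains started from very different arbitrary values end up, after discarding the burn-in, sampling the same distribution.

```python
import numpy as np

def run_chain(start, n_iter=20_000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta, draws = start, []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        # Log target of a standard normal, known only up to a constant
        if np.log(rng.uniform()) < (-0.5 * prop**2) - (-0.5 * theta**2):
            theta = prop
        draws.append(theta)
    return np.array(draws)

chain_a = run_chain(start=-40.0, seed=1)[2_000:]   # discard burn-in, keep subsequent outputs
chain_b = run_chain(start=+40.0, seed=2)[2_000:]
print(chain_a.mean(), chain_a.std())               # both close to mean 0, sd 1
print(chain_b.mean(), chain_b.std())
```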