If I want to create a model that best describes the price of an asset using a multiplicative relationship, that is,
Price = base_rate * size_of_asset * number_of_subassets
(size_of_asset and number_of_subassets are both integers 0, 1, 2, 3, ..., N)
can I do this with a linear combination when the variables are categorical? If they were numerical I could log everything, which would do exactly that... however, the same approach can't be applied to categorical data, can it?
NB: I want to keep it as a multiplicative relationship so it's highly interpretable from a ratio perspective - that is, one can say that increasing size_of_asset by 30% increases the price by x amount.
Thanks for the advice!
I think a log-linear model might be your solution, as it can help you analyse the multiplicative effects of one or more categorical independent variables on a categorical dependent variable.
Check this out:
http://members.home.nl/jeroenvermunt/esbs2005c.pdf
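For what it's worth, here is a minimal numpy sketch of the log-transform idea applied to dummy-coded categoricals (all data here is synthetic and made up for illustration): regress log(price) on the dummies, and exponentiating the fitted coefficients recovers the multiplicative factors per level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: Price = base_rate * size_factor * subasset_factor * noise
sizes = rng.integers(0, 3, 200)                    # categorical levels 0, 1, 2
subs = rng.integers(0, 2, 200)                     # categorical levels 0, 1
size_factor = np.array([1.0, 1.5, 2.2])[sizes]
sub_factor = np.array([1.0, 1.3])[subs]
price = 100.0 * size_factor * sub_factor * rng.lognormal(0, 0.05, 200)

# Dummy-code the categoricals (level 0 is the baseline) and regress log(price)
X = np.column_stack([
    np.ones(200),            # intercept -> log(base_rate)
    sizes == 1, sizes == 2,  # size dummies
    subs == 1,               # subasset dummy
]).astype(float)
coef, *_ = np.linalg.lstsq(X, np.log(price), rcond=None)

# exp(coef) gives the base rate and the multiplicative factor for each level
print(np.round(np.exp(coef), 2))
```

Each exponentiated coefficient reads as a ratio: e.g. "level 1 of size multiplies the price by ~1.5 relative to the baseline", which is exactly the interpretability you asked for.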
Related
I am trying to assess the influence of sex (nominal), altitude (nominal) and latitude (nominal) on corrected wing size (continuous; residual of wing size by body mass) of an animal species. I considered altitude as a nominal factor given that this particular species is mainly distributed at the extremes (low and high) of steep elevational gradients in my study area. I also considered latitude as a nominal fixed factor given that I have sampled individuals only at three main latitudinal levels (north, center and south).
I have been suggested to use Linear Mixed Model for this analysis. Specifically, considering sex, altitude, latitude, sex:latitude, sex:altitude, and altitude:latitude as fixed factors, and collection site (nominal) as the random effect. The latter given the clustered distribution of the collection sites.
However, I noticed that although the corrected wing size follows a normal distribution, it violates the assumption of homoscedasticity among some altitudinal/latitudinal groups. I tried to use a non-parametric equivalent of factorial ANOVA (ARTool), but I cannot make it run because it does not allow cases of missing data and it requires assessing all possible fixed factors and their interactions. I would appreciate any advice on what type of model I can use given the design of my data, and what software/package I can use to perform the analysis.
Thanks in advance for your kind attention.
Regards,
I am comparing two types of crops, let's call them crop A and B.
I have data from ~1000 farms on growth of the plants (average per farm) and want to correlate growth to crop type.
Unfortunately, the different farms also use different fertilizers (fertilizer 1...10), and some have changed the fertilizer used over time...
So, I want to show (with statistical significance) that the growth of crop type A exceeds the growth of crop type B, but make sure it is not coincidence because of the fertilizer used. Can you point me to a statistical test for this purpose? Or do I need to split the data into subgroups (that each contain only one fertilizer) and draw separate conclusions from each subgroup?
Thanks for any hints!
best wishes
Peter.
The type of fertilizer is a confounding variable which you need to control in order to reduce its effect on your statistical test.
Assuming all crop types might use all fertilizer types, a good way to control that confounding variable is simple stratification:
The sampled data is divided into two groups (crop A, crop B) which are stratified by fertilizer type, to reduce its impact.
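A small numpy sketch of the stratification idea, on synthetic data (every number here is invented). Crop A truly improves growth, but A-farms are also made to prefer certain fertilizers, so the naive A-vs-B comparison is biased; comparing within fertilizer strata and pooling removes that bias.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: crop A truly adds +1.0 to growth, but crop A farms
# also tend to use higher-numbered fertilizers (the confounder).
n = 1000
crop = rng.choice(["A", "B"], n)
fert = rng.integers(1, 11, n) + 2 * (crop == "A")   # fertilizer id, shifted for A
growth = 5.0 + 1.0 * (crop == "A") + 0.3 * fert + rng.normal(0, 1.0, n)

# Naive comparison mixes the fertilizer effect into the crop effect
naive = growth[crop == "A"].mean() - growth[crop == "B"].mean()

# Stratified comparison: difference within each fertilizer stratum,
# pooled with weights proportional to stratum size
diffs, weights = [], []
for f in np.unique(fert):
    a = growth[(crop == "A") & (fert == f)]
    b = growth[(crop == "B") & (fert == f)]
    if len(a) and len(b):                # keep strata containing both crops
        diffs.append(a.mean() - b.mean())
        weights.append(len(a) + len(b))
pooled = np.average(diffs, weights=weights)

print(round(naive, 2))   # inflated by the confounder
print(round(pooled, 2))  # close to the true crop effect of 1.0
```

In practice the same adjustment is what you get from a linear model with fertilizer included as a factor alongside crop type; the stratified version just makes the mechanism explicit.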
I'm using PCA from scikit-learn and I'm getting some results which I'm trying to interpret, so I ran into a question - should I subtract the mean (or perform standardization) before using PCA, or is this somehow embedded in the sklearn implementation?
Moreover, which of the two should I perform, if so, and why is this step needed?
I will try to explain it with an example. Suppose you have a dataset that includes a lot of features about housing, and your goal is to classify whether a purchase is good or bad (a binary classification). The dataset includes some categorical variables (e.g. location of the house, condition, access to public transportation, etc.) and some float or integer numbers (e.g. market price, number of bedrooms, etc.). The first thing that you may do is to encode the categorical variables. For instance, if you have 100 locations in your dataset, the common way is to encode them from 0 to 99. You may even end up encoding these variables in one-hot fashion (i.e. a column of 1s and 0s for each location), depending on the classifier that you are planning to use.

Now if you use the price in million dollars, the price feature would have a much higher variance and thus a higher standard deviation. Remember that we use the squared difference from the mean to calculate the variance; a bigger scale creates bigger values, and the square of a big value grows faster. But that does not mean that the price carries significantly more information than, for instance, location. In this example, however, PCA would give a very high weight to the price feature, and the weights of the categorical features would perhaps almost drop to 0. If you normalize your features, it provides a fair comparison of the explained variance in the dataset. So, it is good practice to normalize the mean and scale the features before using PCA.
Before PCA, you should,
Mean normalize (ALWAYS)
Scale the features (if required)
Note: please remember that steps 1 and 2 are not technically the same.
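A numpy-only sketch of why this matters, on synthetic house data (all values made up). Note that sklearn's PCA mean-centers for you but does not scale to unit variance, so a large-scale feature like a dollar price can dominate PC1 unless you scale first:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: price in dollars (huge scale) vs. number of bedrooms
price = rng.normal(500_000, 100_000, 300)
bedrooms = rng.normal(3, 1, 300)
X = np.column_stack([price, bedrooms])

def pca_first_component(X):
    Xc = X - X.mean(axis=0)                          # step 1: mean normalize
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]                                     # loadings of PC1

# Without scaling, PC1 is dominated by the large-variance price feature
v_raw = pca_first_component(X)

# With scaling to unit variance (step 2), both features get a fair comparison
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
v_scaled = pca_first_component(Xs)

print(np.abs(v_raw).round(3))     # price loading near 1, bedrooms near 0
print(np.abs(v_scaled).round(3))  # comparable loadings on both features
```

Centering alone changes where the axes sit; scaling changes how much each feature is allowed to contribute, which is why the two steps are not the same.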
This is a really non-technical answer but my method is to try both and then see which one accounts for more variation on PC1 and PC2. However, if the attributes are on different scales (e.g. cm vs. feet vs. inch) then you should definitely scale to unit variance. In every case, you should center the data.
Here's the iris dataset with centering and with centering + scaling. In this case, centering led to higher explained variance, so I would go with that one. I got the data from sklearn.datasets import load_iris. Then again, with centering only, PC1 has most of the weight, so patterns I find in PC2 I wouldn't consider significant. On the other hand, with centering + scaling the weight is split up between PC1 and PC2, so both axes should be considered.
Consider big datasets with 2 billion+ samples and approximately 100+ features per sample. Among these, 10% of the features are numerical/continuous variables and the rest are categorical variables (position, language, url, etc...).
Let's use some examples:
e.g: dummy categorical feature
feature: Position
real values: SUD | CENTRE | NORTH
encoded values: 1 | 2 | 3
...it would make sense to use a reduction like SVD, because the distance sud:north > sud:centre; moreover, it's possible to encode this variable (e.g. OneHotEncoder, StringIndexer) because of the small cardinality of its value set.
e.g: real categorical feature
feature: url
real values: very high cardinality
encoded values: ?????
1) In MLlib, ~90% of the models work only with numerical values (apart from the Frequent Itemset and DecisionTree techniques)
2) Feature transformers/reducers/extractors such as PCA or SVD are not good for this kind of data, and there is no implementation of (e.g.) MCA
a) Which could be your approach to engage with this kind of data in spark, or using Mllib?
b) Do you have any suggestions to cope with this much categorical values?
c) After reading a lot of the literature, and counting the models implemented in Spark, my idea for making inference on one of these features using the other (categorical) ones is that the models at point 1 could be the best choice. What do you think about it?
(For a standard, classical use case you can imagine the problem of inferring the gender of a person using visited urls and other categorical features.)
Given that I am a newbie in regards to MLlib, may I ask you to provide a concrete example?
Thanks in advance
Well, first I would say Stack Overflow works in a different way: you should be the one providing a working example of the problem you are facing, and we help you out using that example.
Anyway, I got intrigued by the use of categorical values like the one you show as position. If this is a categorical variable, as you mention, with 3 levels SUD, CENTRE, NORTH, there is no distance between them if they are truly categorical. In this sense I would create dummy variables like:
SUD_Cat CENTRE_Cat NORTH_Cat
SUD 1 0 0
CENTRE 0 1 0
NORTH 0 0 1
This is a true dummy representation of a categorical variable.
On the other hand, if you want to take that distance into account, then you have to create another feature which encodes this distance explicitly, but that is not a dummy representation.
If the problem you are facing is that, after you write your categorical features as dummy variables (note that now all of them are numerical), you have very many features and you want to reduce your feature space, then that is a different problem.
As a rule of thumb, I try to utilize the entire feature space first (a plus, since in Spark the computing power allows you to run modelling tasks with big datasets); if it is too big, then I would go for dimensionality reduction techniques, PCA etc...
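To make the dummy table above concrete, and to sketch one common option (my suggestion, not the only one) for a very-high-cardinality feature like url, here is a plain-Python illustration using feature hashing:

```python
import hashlib

# Dummy (one-hot) representation for a low-cardinality categorical,
# matching the SUD / CENTRE / NORTH table above
levels = ["SUD", "CENTRE", "NORTH"]

def one_hot(value):
    return [1 if value == lv else 0 for lv in levels]

print(one_hot("CENTRE"))  # [0, 1, 0]

# For a feature like url, one column per value is infeasible. Feature
# hashing maps each value into a fixed number of buckets instead; some
# values will collide, which is the price paid for a bounded dimension.
def hashed(value, n_buckets=16):
    vec = [0] * n_buckets
    bucket = int(hashlib.md5(value.encode()).hexdigest(), 16) % n_buckets
    vec[bucket] = 1
    return vec

v = hashed("http://example.com/page")
print(sum(v), len(v))  # exactly one bucket set, fixed length
```

Spark ships a HashingTF transformer built on the same trick, so the dimensionality stays fixed no matter how many distinct urls you see.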
Let's say I have two random variables, x and y, both of them with n observations. I've used a forecasting method to estimate x_{n+1} and y_{n+1}, and I also got the standard error for both x_{n+1} and y_{n+1}. So my question is: what would the formula be if I want to know the standard error of x_{n+1} + y_{n+1}, x_{n+1} - y_{n+1}, x_{n+1} * y_{n+1} and x_{n+1} / y_{n+1}, so that I can calculate the prediction interval for the 4 combinations? Any thoughts would be much appreciated. Thanks.
Well, the general topic you need to look at is called "change of variables" in mathematical statistics.
The density function for a sum of random variables is the convolution of the individual densities (but only if the variables are independent). Likewise for the difference. In special cases, that convolution is easy to find. For example, for Gaussian variables the density of the sum is also a Gaussian.
For product and quotient, there aren't any simple results, except in special cases. For those, you might as well compute the result directly, maybe by sampling or other numerical methods.
If your variables x and y are not independent, that complicates the situation. But even then, I think sampling is straightforward.
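Assuming x_{n+1} and y_{n+1} are independent, there are standard closed forms: the variance of a sum or difference is exact (SE = sqrt(sx^2 + sy^2)), while product and quotient use the first-order delta-method approximation. A small sketch, with made-up point forecasts:

```python
import math

def se_sum(sx, sy):
    # Exact under independence; also the SE of the difference
    return math.sqrt(sx**2 + sy**2)

def se_product(x, y, sx, sy):
    # First-order delta-method approximation for x*y
    return math.sqrt((y * sx)**2 + (x * sy)**2)

def se_quotient(x, y, sx, sy):
    # First-order delta-method approximation for x/y
    return abs(x / y) * math.sqrt((sx / x)**2 + (sy / y)**2)

# Hypothetical point forecasts and their standard errors
x, sx = 10.0, 0.5
y, sy = 4.0, 0.2
print(se_sum(sx, sy))            # sqrt(0.25 + 0.04) ≈ 0.5385
print(se_product(x, y, sx, sy))  # sqrt(4.0 + 4.0) ≈ 2.8284
print(se_quotient(x, y, sx, sy)) # 2.5 * sqrt(0.005) ≈ 0.1768
```

The product and quotient formulas degrade when the relative errors sx/x, sy/y are large or when y is near zero; in those cases (or with dependent x and y), sampling as described above is the safer route.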