I have a logit model with an interaction term. I want to test this interaction coefficient across two sub-samples (split on the value of another variable). Since in logistic models the interpretation of coefficients requires marginal analysis (Ai and Norton, 2003), I performed marginal analysis after running the model for each sub-sample. What I don't know is how to show that the marginal effects differ statistically across the two sub-samples. I will use the birth-weight data (lbw) to clarify further what I mean.
webuse lbw
logit low i.smoke##c.lwt ftv ptl if ui==0
su ptl if e(sample), detail
global ptl_mean = r(mean)
display $ptl_mean
global ptl_sd = r(sd)
display $ptl_sd
global ptl_mean_plus_sigma = $ptl_mean + $ptl_sd
display $ptl_mean_plus_sigma
global ptl_mean_plus_twosigma = $ptl_mean + 2*$ptl_sd
display $ptl_mean_plus_twosigma
margins if e(sample), dydx(smoke) at((mean) _all ptl=(0 $ptl_mean $ptl_mean_plus_sigma $ptl_mean_plus_twosigma)) predict(pr) saving(m1, replace)
logit low i.smoke##c.lwt ftv age ptl if ui==1
su ptl if e(sample), detail
global ptl_mean = r(mean)
display $ptl_mean
global ptl_sd = r(sd)
display $ptl_sd
global ptl_mean_plus_sigma = $ptl_mean + $ptl_sd
display $ptl_mean_plus_sigma
global ptl_mean_plus_twosigma = $ptl_mean + 2*$ptl_sd
display $ptl_mean_plus_twosigma
margins if e(sample), dydx(smoke) at((mean) _all ptl=(0 $ptl_mean $ptl_mean_plus_sigma $ptl_mean_plus_twosigma)) predict(pr) saving(m2, replace)
Now, I use the user-written command --combomarginsplot-- to combine the two marginal plots:
graph set window fontface "Times New Roman"
#delimit ;
combomarginsplot m1 m2,
labels("ui==0" "ui==1" position(3))
title("Marginal Effects of ptl on low brith weight likelihood", size(medium))
xtitle("ptl", size(small))
ytitle("Change in likelihood", size(small))
xlabel(,labsize(small))
ylabel(,labsize(small))
legend(size(small))
noci file1opts(lpattern(shortdash) msymbol(i))
file2opts(lpattern(dash) msymbol(D));
#delimit cr
graph save "Graph" "marginal.gph", replace
The graph will be:
One might argue that the marginal effects in the second sub-sample (i.e., ui==1) are not statistically different from zero; hence, one could conclude that the marginal effects when ui==0 are greater than those when ui==1.
But what if the marginal effects in both sub-samples were statistically significant? How can one test whether they differ statistically?
Thanks,
Navid
I'm interested in estimating a shared, global trend over time for counts monitored at several different sites using generalized additive models (GAMs). I've read this great introduction to hierarchical GAMs (HGAMs) by Pedersen et al. (2019), and I believe I can set up the model as follows (the Pedersen et al. (2019) GS model),
fit_model = gam(count ~ s(year, m = 2) + s(year, site, bs = 'fs', m = 2),
data = count_df,
family = nb(link = 'log'),
method = 'REML')
I can plot the partial effect smooths, look at the fit diagnostics, and everything looks reasonable. My question is how to extract a non-centered annual relative count index? My first thought would be to add the estimated intercept (the average count across sites at the beginning of the time series) to the s(year) smooth (the shared global smooth). But I'm not sure if the uncertainty around that smooth already incorporates uncertainty in the estimated intercept? Or if I need to add that in? All of this was possible thanks to the amazing R libraries mgcv, gratia, and dplyr.
Your way doesn't include the uncertainty in the constant term; it just shifts everything around.
If you want to do this it would be easier to use the constant argument to gratia:::draw.gam():
draw(fit_model, select = "s(year)", constant = coef(fit_model)[1L])
which does what your code does, without as much effort (on your part).
A better way with {gratia}, seeing as you are using it already, would be to create a data frame containing a sequence of values over the range of year and then use gratia::fitted_values() to generate estimates from the model for those values of year. To get what you want (which seems to be to exclude the random smooth component of the fit, i.e. to set the random component to 0 on the link scale), you need to pass that smooth to the exclude argument:
## data to predict at
new_year <- with(count_df,
                 tibble(year = gratia::seq_min_max(year, n = 100),
                        site = factor(levels(site)[1], levels = levels(site))))
## predict
fv <- fitted_values(fit_model, data = new_year, exclude = "s(year,site)")
If you want to read about exclude, see ?predict.gam
I have been working on a churn prediction use case in Python using XGBoost. The model, trained on various features such as Age, Tenure, last-6-months income, etc., predicts whether an employee is likely to leave, looked up by employee ID.
Additionally, if the user wants to see why the ML system categorised an employee this way, they can see the features that contributed to it, which are extracted from the model via the eli5 library.
So to make this more explainable to the users, we had created some ranges for each feature:
Tenure (in days)
[0-100] = High Risk
[101-300] = Medium Risk
[301-800] = Low Risk
To define these ranges, we analysed the distribution of each feature and manually defined the ranges for our use in the system. We looked at the impact of each feature on the target variable IsTerminated in the training data. Following is an example of the Tenure distribution.
Here the green bars represent the employees who were terminated or left, and the pink bars represent those who didn't.
So the question is: as time passes and new data is added to the model, such features' risk ranges will change. In this case of Tenure, if an employee has a tenure of 780 days, after a month the tenure feature would show 810. Obviously, we keep the upper end of "Low Risk" open ended. But the real problem is: how can we define the internal boundaries/ranges programmatically?
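For concreteness, the manual binning described above might look something like this in pandas (just a sketch; the emp DataFrame and the tenure_days column name are mine, not from the original system):
import pandas as pd

# Manually chosen boundaries; the upper end of "Low Risk" is left open
bins = [0, 100, 300, float('inf')]
labels = ['High Risk', 'Medium Risk', 'Low Risk']
emp = pd.DataFrame({'tenure_days': [45, 250, 810]})
emp['tenure_risk'] = pd.cut(emp['tenure_days'], bins=bins,
                            labels=labels, include_lowest=True)
print(emp)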
EDIT: Thanks for the clarification. I have changed the answer.
It is important to realize that you are trying to project a selection in multi-dimensional space onto a 1D space. You will not always be able to see a clear separation like the one you got. There are various ways to do this; here I made a simple example that could help your client interpret the model, but it does not, of course, represent the full complexity of the model.
You did not provide any sample data, so I will generate some from the breast cancer dataset.
First let's import what we need:
from sklearn import datasets
from xgboost import XGBClassifier
import pandas as pd
import numpy as np
And now load the dataset and train a very simple XGBoost model:
cancer = datasets.load_breast_cancer()
X = cancer.data
y = cancer.target
xgb_model = XGBClassifier(n_estimators=5,
objective="binary:logistic",
random_state=42)
xgb_model.fit(X, y)
There are multiple ways to solve this.
One approach is to bin the probability given by the model. You decide which probabilities you consider to be "High Risk", "Medium Risk" and "Low Risk", and then intervals of the data can be classified accordingly. In this example I considered low to be 0 <= p <= 0.5, medium to be 0.5 < p <= 0.8, and high to be 0.8 < p <= 1.
First you have to calculate the probability for each prediction. I would suggest using a held-out test set for that, to avoid bias from possible model overfitting.
y_prob = pd.DataFrame(xgb_model.predict_proba(X))[0]
df = pd.DataFrame(X, columns=cancer.feature_names)
# Stores the probability of a malignant cancer
df['probability'] = y_prob
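If you do go the held-out route suggested above, a minimal sketch might be the following (the train/test split variables are my own names; df would then be built from the test set only):
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
xgb_model.fit(X_train, y_train)

# Probability of class 0 (malignant) computed on held-out data only
y_prob = pd.DataFrame(xgb_model.predict_proba(X_test))[0]
df = pd.DataFrame(X_test, columns=cancer.feature_names)
df['probability'] = y_prob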
Then you have to bin your data and calculate the average probability for each of those bins. I would suggest binning your data using the automatic rules of np.histogram_bin_edges:
def calculate_mean_prob(feat):
"""Calculates mean probability for a feature value, binning it."""
# Bins from the automatic rules from numpy, check docs for details
bins = np.histogram_bin_edges(df[feat], bins='auto')
binned_values = pd.cut(df[feat], bins)
return df['probability'].groupby(binned_values).mean()
Now you can classify each bin following what you would consider to be a low/medium/high probability:
def classify_probability(prob, medium=0.5, high=0.8, fillna_method= 'ffill'):
"""Classify the output of each bin into a risk group,
according to the probability.
Following these rules:
0 <= p <= medium: Low risk
medium < p <= high: Medium risk
high < p <= 1: High Risk
If a bin has no entries, it will be filled using fillna with the method
specified in fillna_method
"""
risk = pd.cut(prob, [0., medium, high, 1.0], include_lowest=True,
labels=['Low Risk', 'Medium Risk', 'High Risk'])
risk.fillna(method=fillna_method, inplace=True)
return risk
This will return the risk for each bin into which you divided your data. Since you will probably have multiple bins with consecutive values, you might want to merge the consecutive pd.Interval bins. The code for that is shown below:
def sum_interval(i1, i2):
if i2 is None:
return None
if i1.right == i2.left:
return pd.Interval(i1.left, i2.right)
return None
def sum_intervals(args):
"""Given a list of pd.Intervals,
returns a list summing consecutive intervals."""
result = list()
current_interval = args[0]
for next_interval in list(args[1:]) + [None]:
# Try to sum the current interval and the next interval
# The None is necessary for the last interval
sum_int = sum_interval(current_interval, next_interval)
if sum_int is not None:
# Update the current_interval in case if it is
# possible to sum
current_interval = sum_int
else:
# Otherwise tries to start a new interval
result.append(current_interval)
current_interval = next_interval
if len(result) == 1:
return result[0]
return result
def combine_bins(df):
# Group them by label
grouped = df.groupby(df).apply(lambda x: sorted(list(x.index)))
# Sum each category in intervals, if consecutive
merged_intervals = grouped.apply(sum_intervals)
return merged_intervals
Now you can combine all the functions to calculate the bins for each feature:
def generate_risk_class(feature, medium=0.5, high=0.8):
mean_prob = calculate_mean_prob(feature)
classification = classify_probability(mean_prob, medium=medium, high=high)
merged_bins = combine_bins(classification)
return merged_bins
For example, generate_risk_class('worst radius') results in:
Low Risk (7.93, 17.3]
Medium Risk (17.3, 18.639]
High Risk (18.639, 36.04]
But in case you get features which are not such good discriminators (or that do not separate the high/low risk linearly), you will have more complicated regions. For example, generate_risk_class('mean symmetry') results in:
Low Risk [(0.114, 0.209], (0.241, 0.249], (0.272, 0.288]]
Medium Risk [(0.209, 0.225], (0.233, 0.241], (0.249, 0.264]]
High Risk [(0.225, 0.233], (0.264, 0.272], (0.288, 0.304]]
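To apply this to several features at once, a small sketch using the two features shown above (for arbitrary features you may want to guard against risk categories that end up with no bins at all):
risk_classes = {feat: generate_risk_class(feat, medium=0.5, high=0.8)
                for feat in ['worst radius', 'mean symmetry']}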
I want to take an input of millions of lat/long points (each with a numerical attribute) and then find all fixed-radius geospatial clusters where the sum of the attribute within the circle is above a defined threshold.
I started by using sklearn's BallTree to sum the attribute within any defined circle, with the intention of then expanding this to run across a grid or lattice of circles. The run time for one circle is around 0.01 s, so this is fine for small lattices, but it won't scale if I want to run 200 m radius circles across the whole of the UK.
import numpy as np
import pandas
from sklearn.neighbors import BallTree

#example data (use 2m rows from postcode centroid file)
df = pandas.read_csv('National_Statistics_Postcode_Lookup_Latest_Centroids.csv', usecols=[0,1], nrows=2000000)
#this will be our grid of points (or lattice); use points from the same file for the example
df2 = pandas.read_csv('National_Statistics_Postcode_Lookup_Latest_Centroids.csv', usecols=[0,1], nrows=2000)
#reorder lat long columns for balltree input
columnTitles = ["Y", "X"]
df = df.reindex(columns=columnTitles)
df2 = df2.reindex(columns=columnTitles)
# assign new columns to the existing dataframes; 'attribute' holds the data we want to sum over (set to 1 for now)
df['attribute'] = 1
df2['aggregation'] = 0

RADIANT_TO_KM_CONSTANT = 6367  # approximate Earth radius in km, converts haversine radians to km

class BallTreeIndex:
    def __init__(self, lat_longs):
        self.lat_longs = np.radians(lat_longs)
        self.ball_tree_index = BallTree(self.lat_longs, metric='haversine')

    def query_radius(self, query, radius):
        # radius is given in metres; convert to radians for the haversine metric
        radius_km = radius / 1000
        radius_radiant = radius_km / RADIANT_TO_KM_CONSTANT
        query = np.radians(np.array([query]))
        indices = self.ball_tree_index.query_radius(query, r=radius_radiant)
        return indices[0]

#index the base data
a = BallTreeIndex(df.iloc[:, 0:2])

#begin to loop over the lattice to test performance
for i in range(0, 100):
    b = df2.iloc[i, 0:2]
    output = a.query_radius(b, 200)
    accumulation = sum(df.iloc[output, 2])
    df2.iloc[i, 2] = accumulation
It feels as if the above code is really inefficient, as I don't need to run the calculation across all circles on my lattice (most will be well below my threshold, or will have no data points in them at all).
Instead of this for loop, is there a better way of scaling this algorithm to give me the most dense circles?
I'm new to python, so any help would be massively appreciated!!
First, don't try to do this on a sphere! GB is small and we have a well-defined geographic projection that will work. So use the oseast1m and osnorth1m columns as X and Y. They are in metres, so there is no need to convert (roughly) to degrees and use haversine. That should help.
Next add a spatial index to speed up lookups.
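As a minimal sketch of those two ideas combined (projected metre coordinates plus a spatial index; scipy's cKDTree is just one possible index, and I'm assuming the NSPL file contains the oseast1m/osnorth1m columns referred to above):
import numpy as np
import pandas as pd
from scipy.spatial import cKDTree

pts = pd.read_csv('National_Statistics_Postcode_Lookup_Latest_Centroids.csv',
                  usecols=['oseast1m', 'osnorth1m']).dropna()
pts['attribute'] = 1

coords = pts[['oseast1m', 'osnorth1m']].to_numpy()
tree = cKDTree(coords)

# sum the attribute within 200 m of each lattice point (here just the first 2000 points)
lattice = coords[:2000]
neighbours = tree.query_ball_point(lattice, r=200)
attr = pts['attribute'].to_numpy()
sums = np.array([attr[idx].sum() for idx in neighbours])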
If you need more speed, there are various tricks like loading a 2R-wide strip across the country into memory and then running your circles across that strip, then moving down a grid step and updating the strip (checking Y values against a fixed value is quick, especially if you store the data sorted on Y then X). If you need still more speed, look at any of the papers that Stan Openshaw (and sometimes I) wrote about parallelising the GAM (Geographical Analysis Machine). There are examples of implementing GAM in python (e.g. this paper, this paper) that may also point to better ways.
According to a paper, this is supposed to work.
But as a learner of the scikit-learn package, I do not see how.
All the sample code clusters by ellipses or circles, as here.
I would really like to know how to cluster the following plot by different patterns. Columns 0-3 are the mean power over certain time periods (divided into four), while columns 4, 5 and 6 correspond to the standard deviation over the year, the weekday/weekend variance, and the winter/summer variance, respectively. So the y-label does not necessarily apply to columns 4, 5 and 6.
Following the sample code, BIC did suggest that the optimal number of clusters is 5.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.mixture import GaussianMixture as GMM

n_components = np.arange(1, 21)
models = [GMM(n, covariance_type='full', random_state=0).fit(input)
          for n in n_components]
plt.plot(n_components, [m.bic(input) for m in models], label='BIC')
plt.legend(loc='best')
plt.xlabel('n_components')
If I plot with the sample code available, however, it returns something completely weird, not worth sharing. I thought negative BIC was OK, but I don't even know whether it clustered correctly enough to deduce that 5 is the optimal number.
Basically, in an effort to close this question, my following post answers how to cluster using a GMM.
Create a model using the appropriate parameters:
from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(n_components=10, covariance_type='full',
                      init_params='random', max_iter=100, random_state=0)
Fit your data (number of samples x number of attributes), whose name is input in my case:
gmm.fit(input)
print(gmm.means_.round(2))
cluster = gmm.predict(input)
cluster contains the label for each of the samples of my input.
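One quick way to inspect the result (a small sketch that just counts how many samples landed in each of the 10 components):
import numpy as np

labels, counts = np.unique(cluster, return_counts=True)
print(dict(zip(labels, counts)))  # samples assigned to each component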
Feel free to add to this if I've gotten anything wrong.
Good day,
I have been working through Baddeley et al. 2015 to fit a point process model to several point patterns using mppm {spatstat}.
My point patterns are annual count data of large herbivores (i.e. point localities (x, y) of male/female animals * 5 years) in a protected area (owin). I have a number of spatial covariates e.g. distance to rivers (rivD) and vegetation productivity (NDVI).
Originally I fitted a model where the herbivore response was a function of rivD + NDVI and allowed the coefficients to vary by sex (see mppm1 in the reproducible example below). However, my annual point patterns are not independent between years, in that there is a temporally increasing trend (i.e. there are exponentially more animals in year 5 than in year 1).
So I added year as a random effect, thinking that if I allowed the intercept to change per year I could account for this (see mppm2).
Now I'm wondering if this is the right way to go about it? If I were fitting a GAMM with gamm {mgcv}, I would add a temporal correlation structure, e.g. correlation = corAR1(form=~year), but I don't think this is possible in mppm (see mppm3)?
I would really appreciate any ideas on how to deal with this temporal correlation structure in a replicated point pattern with mppm {spatstat}.
Thank you very much
Sandra
# R version 3.3.1 (64-bit)
library(spatstat) # spatstat version 1.45-2.008
#### Simulate point patterns
# multitype Neyman-Scott process (each cluster is a multitype process)
nclust2 = function(x0, y0, radius, n, types=factor(c("male", "female"))) {
X = runifdisc(n, radius, centre=c(x0, y0))
M = sample(types, n, replace=TRUE)
marks(X) = M
return(X)
}
year1 = rNeymanScott(5,0.1,nclust2, radius=0.1, n=5)
# plot(year1)
#-------------------
year2 = rNeymanScott(15,0.1,nclust2, radius=0.1, n=10)
# plot(year2)
#-------------------
year3 = rNeymanScott(20,0.1,nclust2, radius=0.1, n=10)
# plot(year3)
#-------------------
year4 = rNeymanScott(25,0.1,nclust2, radius=0.1, n=15)
# plot(year4)
#-------------------
year5 = rNeymanScott(30,0.1,nclust2, radius=0.1, n=15)
# plot(year5)
#### Simulate distance to rivers
line <- psp(runif(10), runif(10), runif(10), runif(10), window=owin())
# plot(line)
# plot(year1, add=TRUE)
#------------------------ UPDATE ------------------------#
#### Create hyperframe
#---> NDVI simulated with distmap to point patterns (not ideal but just to test)
hyp.years = hyperframe(year=factor(2010:2014),
ppp=list(year1,year2,year3,year4,year5),
NDVI=list(distmap(year5),distmap(year1),distmap(year2),distmap(year3),distmap(year4)),
rivD=distmap(line),
stringsAsFactors=TRUE)
hyp.years$numYear = with(hyp.years,as.numeric(year)-1)
hyp.years
#### Run mppm models
# mppm1 = mppm(ppp~(NDVI+rivD)/marks,data=hyp.years); summary(mppm1)
#..........................
# mppm2 = mppm(ppp~(NDVI+rivD)/marks,random = ~1|year,data=hyp.years); summary(mppm2)
#..........................
# correlation = corAR1(form=~year)
# mppm3 = mppm(ppp~(NDVI+rivD)/marks,correlation = corAR1(form=~year),use.gam = TRUE,data=hyp.years); summary(mppm3)
###---> Run mppm model with annual trend and random variation in growth
mppmCorr = mppm(ppp~(NDVI+rivD+numYear)/marks, random = ~1|year, data=hyp.years)
summary(mppmCorr)
If there's a trend in population size over time, then it might make sense to include this trend in the systematic part of the model. I would suggest you add a new numeric variable NumYear to the data frame (e.g. giving the number of years since 2010). Then try adding simple trend terms such as +NumYear to the model formula (this would correspond to the exponential growth in population that you observed). You can keep the 1|year random-effect term, which will then allow for random variation in population size around the long-term growth trend.
There's no need to split the data patterns for each year into separate male and female patterns. The variable marks in the model formula can be used to specify any model that depends on sex.
I'm pretty sure that mppm with use.gam=TRUE does not recognise the argument correlation, and it is probably just ignored. (It depends on what happens inside gam.)