Structural equation for survival model - survival-analysis

How can I model two response variables, time to death and time to recurrence, with SEM? And why does the metastasis time ratio have a coefficient greater than one?

Related

Identify short signal peaks with rolling time frame in python

I am trying to identify all peaks from my sensor readings data. The smallest peak can be less than 10 in amplitude and the largest can be more than 400. The rolling time window is not fixed, as one peak can arrive in 6 hours while the next arrives in another 3 hours. I tried wavelet transforms and Python peak identification, but those only work for the higher peaks. How do I resolve this? Here is a link to the signal image: the peaks I am identifying are in grey, and my algorithm's output is in blue.
Welcome to SO.
It is hard to provide you with a detailed answer without knowing your data's sampling rate and the duration of the peaks. From what I see in your example image they seem all over the place!
I don't think that wavelets will be of any use for your problem.
A recipe that I like to use to despike data is:
Smooth your input data using a median filter (an 11-point median filter generally does the trick for me): smoothed = scipy.signal.medfilt(data, kernel_size=11)
Compute a noise array by subtracting smoothed from data: noise=data-smoothed
Create a despiked_data array from data:
despiked_data=np.zeros_like(data)
np.copyto(despiked_data, data)
Then every time the noise exceeds a user-defined threshold (mythreshold), replace the corresponding value in despiked_data with NaN: despiked_data[np.abs(noise) > mythreshold] = np.nan
You may later interpolate the output despiked_data array but if your intent is simply to identify the spikes, you don't even need to run this extra step.
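Putting the steps above together, here is a minimal runnable sketch; the example data and the threshold value are placeholders you would replace with your own sensor readings:

```python
import numpy as np
from scipy.signal import medfilt

# Example data and threshold are placeholders -- substitute your own readings.
rng = np.random.default_rng(0)
data = rng.normal(100.0, 5.0, 1000)
data[::137] += 300.0                      # inject a few artificial spikes
mythreshold = 20.0                        # user-defined noise threshold

smoothed = medfilt(data, kernel_size=11)  # 11-point median filter
noise = data - smoothed                   # residual left after smoothing
despiked_data = data.copy()
despiked_data[np.abs(noise) > mythreshold] = np.nan   # mask samples flagged as spikes
```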

How can r-squared be negative when the correlation between prediction and truth is positive?

I am trying to understand how the r-squared (and also explained variance) metrics can be negative (thus indicating non-existent forecasting power) when, at the same time, the correlation between prediction and truth, as well as the slope in a linear regression (regressing truth on prediction), are positive.
R Squared can be negative in a rare scenario.
R Squared = 1 - (SSR/SST)
Here, SST stands for the Total Sum of Squares, which is nothing but how much the data points vary around the mean of the target variable. The mean acts as the "regression line" here.
SST = Sum(Square(Each data point - Mean of the target variable))
For example,
If we want to build a regression model to predict the height of a student with weight as the independent variable, then one possible prediction without much effort is to calculate the mean height of all current students and use it as the prediction.
In the above diagram, the red line is the regression line, which is nothing but the mean of all heights. This mean is calculated without much effort and can be considered one of the worst methods of prediction, with poor accuracy. In the diagram itself we can see that the prediction is nowhere near the original data points.
Now, coming to SSR:
SSR stands for Sum of Squared Residuals. These residuals are calculated from the model we build with our chosen approach (linear regression, Bayesian regression, polynomial regression, or any other). If we use a sophisticated approach rather than a naive one like the mean, our accuracy will obviously increase.
SSR = Sum(Square(Each data point - The corresponding predicted point from the model))
In the above diagram, let's say the blue line indicates a sophisticated model built with much more careful analysis. We can see that it obviously has higher accuracy than the red line.
Now, coming back to the formula:
R Squared = 1 - (SSR/SST)
Here,
SST will be a large number because it comes from a very poor model (the red line).
SSR will be a small number because it comes from the best model we developed after much careful analysis (the blue line).
So, SSR/SST will be a very small number (it becomes smaller whenever SSR decreases).
So, 1 - (SSR/SST) will be a large number.
So we can infer that the higher R Squared goes, the better the model.
This is the generic case, but it does not directly carry over to the many situations where multiple independent variables are present. In the example we had only one independent variable and one target variable, but in real cases we may have hundreds of independent variables for a single dependent variable. The actual problem is that, out of those hundreds of independent variables:
Some variables will have a very high correlation with the target variable.
Some variables will have a very small correlation with the target variable.
Also, some independent variables will have no correlation at all.
So, R Squared is calculated on the assumption that the average line of the target (a horizontal line, perpendicular to the y-axis) is the worst fit a model can have in the riskiest case. SST is the squared difference between this average line and the original data points. Similarly, SSR is the squared difference between the predicted data points (from the model's plane) and the original data points.
SSR/SST gives a ratio of how bad SSR is relative to SST. If your model can build a plane that is even somewhat better than this worst case, then in the vast majority of cases SSR < SST, which makes R Squared positive when you substitute it into the equation.
But what if SSR > SST? This means that your regression plane is worse than the mean line. In this case, R Squared will obviously be negative. This is rare, but it does happen.
This answer was originally written by me on Quora:
https://qr.ae/pNsLU8
https://qr.ae/pNsLUr
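A quick numerical sketch of the point above: the toy predictions below are perfectly correlated with the truth, yet R squared comes out negative because their errors exceed those of the plain mean.

```python
import numpy as np

# Toy numbers: predictions are perfectly correlated with the truth,
# but so badly scaled/offset that they fit worse than the plain mean.
y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = 3.0 * y_true - 6.0

ssr = np.sum((y_true - y_pred) ** 2)            # sum of squared residuals
sst = np.sum((y_true - y_true.mean()) ** 2)     # total sum of squares around the mean
r_squared = 1.0 - ssr / sst

correlation = np.corrcoef(y_true, y_pred)[0, 1]
print(f"correlation = {correlation:.2f}, R squared = {r_squared:.2f}")
# correlation = 1.00, R squared = -3.00
```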

How to design a score or signature function based on time series data

I want to design a score or signature function based on a time series signal. Usually, the signal has ups and downs.
For a given time window, I want to design the score function based on the number of times the signal fluctuates, the duration of the fluctuations, and the magnitude of the fluctuations. I am wondering what kind of math I can use to design the function. I am not sure whether statistical features (mean, median, and so on) would be enough to design a unique function such that two time windows would be distinguishable.
Thanks!
Summary statistics will not give you what you want... but they can still be useful.
Things you can try:
Zero crossings on the signal will give you the number of fluctuations. You'll have to subtract some central tendency value to move the signal about the 0-line in order to do this. Alternatively, you can use an FFT on the original signal to find the harmonic frequency as part of the score.
You could define the duration of a fluctuation as the difference between zero crossings divided by two (since one fluctuation will reach the 0-line twice).
Magnitude can be done by finding the local minima and maxima - check out some packages with peak-finding functions. You might want to use the mean or median to rule out local minima and maxima that fall on the wrong side of the line. Alternatively, finding the zero crossings on the derivative signal and then mapping them back to the original will give you all the local minima and maxima as well.
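As a rough sketch of how these three ingredients could be combined in Python: the synthetic signal, the sampling rate fs, and the final score tuple below are all illustrative assumptions, not a prescribed scoring formula.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic signal and sampling rate fs are illustrative assumptions.
fs = 100.0                                             # samples per second
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 0.7 * t) + 0.3 * rng.normal(size=t.size)

centered = signal - np.median(signal)                  # move the signal about the 0-line
crossings = np.where(np.diff(np.sign(centered)) != 0)[0]

n_fluctuations = len(crossings) // 2                   # two crossings per fluctuation
durations = np.diff(crossings) / fs                    # seconds between successive crossings

peaks, _ = find_peaks(centered)                        # local maxima
troughs, _ = find_peaks(-centered)                     # local minima
magnitude = centered[peaks].mean() - centered[troughs].mean()

score = (n_fluctuations, durations.mean(), magnitude)  # one possible signature tuple
print(score)
```

How you weight or combine these three numbers into a single score is a design choice that depends on what you want two distinguishable windows to differ in.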

Simulate Arrival Times in a Poisson Process

In Excel, I want to generate arrival times for a simulation (illustration) of an M/M/1 queue.
Jobs arrive according to a Poisson process. I found the POISSON and POISSON.DIST functions in Excel, but not an inverse Poisson distribution function. I figured that since a Normal distribution with mean λ and variance λ is supposed to be a good approximation of the Poisson distribution (given large enough time intervals), I tried to use the inverse Normal distribution function to simulate the intervals between arrivals:
=NORM.INV(RAND(), mean, SQRT(mean))
And to compute the arrival times (Excel format of time is in fractions of a day):
=IFERROR(previous_time + interval_in_seconds/60/60/24, 0)
I am no expert in statistics, but my simulated intervals look a bit too regular for a Poisson process (see the illustration below for λ = 1/10 s). What am I doing wrong, please?
After a good night's sleep, I realized my mistake: there is an important distinction between these two concepts.
Poisson Process
A renewal process with exponentially distributed renewal intervals.
Poisson Distribution
A discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time.
So while the number of jobs that arrive according to a Poisson process during a time interval x follows a Poisson distribution with parameter λx, the inter-arrival times of this process are exponentially distributed.
The inverse exponential function can be written in Excel as follows:
=-LN(RAND()) * mean
Illustration for λ = 1/10s:
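For comparison outside Excel, a minimal Python sketch of the same idea (exponential inter-arrival times with an assumed mean gap of 10 s, i.e. λ = 1/10 per second) might look like this:

```python
import numpy as np

# Assumed mean inter-arrival time of 10 s, i.e. rate λ = 1/10 per second.
rng = np.random.default_rng(42)
mean_interarrival = 10.0

intervals = rng.exponential(mean_interarrival, size=100)  # same idea as =-LN(RAND())*mean
arrival_times = np.cumsum(intervals)                      # cumulative sum gives arrival times

print(arrival_times[:5])
```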

Statistical cosinor analysis

Hey, I am trying to run a cosinor analysis in Statistica but am at a loss as to how to do so. I need to calculate the MESOR, AMPLITUDE, and ACROPHASE of circadian rhythm data.
http://www.wepapers.com/Papers/73565/Cosinor_analysis_of_accident_risk_using__SPSS%27s_regression_procedures.ppt
There is a link that shows how to do it, with the formulas and such, but it has not given me much help. Does anyone know the code for it, either in Statistica or SPSS?
I really need to get this done because it is for an important paper
I don't have SPSS or Statistica, so I can't tell you the exact "push-this-button" kind of steps, but perhaps this will help.
Cosinor analysis is fitting a cosine (or sine) curve with a known period. The main idea is that the non-linear problem of fitting a cosine function can be reduced to a problem that is linear in its parameters if the period is known. I will assume that your period T=24 hours.
You should already have two variables: Time at which the measurement is taken, and Value of the measurement (these, of course, might be called something else).
Now create two new variables: SinTime = sin(2 x pi x Time / 24) and CosTime = cos(2 x pi x Time / 24) - this is described on p. 11 of the presentation you linked (x is multiplication). Use pi = 3.14159 if the exact value is not built-in.
Run multiple linear regression with Value as outcome and SinTime and CosTime as two predictors. You should get estimates of their coefficients, which we will call A and B.
The intercept term of the regression model is the MESOR.
The AMPLITUDE is sqrt(A^2 + B^2) [square root of A squared plus B squared]
The ACROPHASE is arctan(- B / A), where arctan is the inverse function of tan. The last two formulas are from p.14 of the presentation.
The regression model should also give you an R-squared value to see how well the 24 hour circadian pattern fits the data, and an overall p-value that tests for the presence of a circadian component with period 24 hrs.
One can get standard errors on the amplitude and phase using standard error-propagation formulas, but that is not included in the presentation.
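If you later want to sanity-check the Statistica/SPSS output, here is a minimal sketch of the same regression-based cosinor fit in Python. The synthetic data, variable names, and 24-hour period are assumptions, and the acrophase sign/quadrant convention should be checked against the presentation.

```python
import numpy as np

# Synthetic circadian data (MESOR 5, amplitude 2, peak around hour 8) for illustration.
rng = np.random.default_rng(0)
time_h = np.arange(0, 72, 1.0)                       # three days of hourly samples
value = 5 + 2 * np.cos(2 * np.pi * (time_h - 8) / 24) + rng.normal(0, 0.3, time_h.size)

# Build the SinTime / CosTime predictors for a known 24 h period.
sin_t = np.sin(2 * np.pi * time_h / 24)
cos_t = np.cos(2 * np.pi * time_h / 24)
X = np.column_stack([np.ones_like(time_h), sin_t, cos_t])

# Ordinary least squares: intercept, SinTime coefficient (A), CosTime coefficient (B).
intercept, A, B = np.linalg.lstsq(X, value, rcond=None)[0]

mesor = intercept
amplitude = np.hypot(A, B)                 # sqrt(A^2 + B^2)
acrophase = np.arctan2(-B, A)              # arctan(-B / A) with quadrant handling
print(mesor, amplitude, acrophase)
```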
