Python ARIMA output - interpreting Sigma2 - python-3.x

I am trying to interpret the ARIMA output below and am not clear about sigma2. The documentation says it is 'The variance of the residuals.' What is the hypothesis behind this output, and why is it important?
Kindly provide answer or a link where it is covered in detail.
import statsmodels.api as sm
mod = sm.tsa.statespace.SARIMAX(df.Sales,
                                order=(0, 1, 1),
                                seasonal_order=(0, 1, 1, 12),
                                enforce_stationarity=False,
                                enforce_invertibility=False)
results = mod.fit()
print(results.summary().tables[1])
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ma.L1         -0.9317      0.055    -16.989      0.000      -1.039      -0.824
ma.S.L12      -0.0851      0.143     -0.594      0.553      -0.366       0.196
sigma2      1.185e+09   2.13e-11   5.56e+19      0.000    1.19e+09    1.19e+09
==============================================================================

Yes, you're right that sigma squared (sigma2) represents the variance of the residuals, i.e. the estimated variance of the model's error term. It is reported alongside the other coefficients because SARIMAX estimates it by maximum likelihood together with the MA terms, and it feeds into the log-likelihood, the information criteria and the width of the prediction intervals.
For more information, you can check this PR: https://github.com/statsmodels/statsmodels/pull/2431
and this QA: https://github.com/statsmodels/statsmodels/issues/2507#:~:text=The%20sigma2%20output%20in%20the,variance%20of%20the%20error%20term.
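As a quick check (a sketch of my own, assuming the fitted results object from the code above):
sigma2 = results.params['sigma2']   # maximum-likelihood estimate of the error variance
resid_var = results.resid.var()     # sample variance of the one-step-ahead residuals
print(sigma2, resid_var)            # comparable in magnitude; they differ because the first observations are affected by differencing and initialization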

Related

How to compare the slopes of two regression lines?

Assume that you have two numpy arrays: blue_y and light_blue_y such as:
import numpy as np
x=np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
blue_y = np.array([0.94732871, 0.85729212, 0.86039587, 0.89169027, 0.90817473, 0.93606619, 0.93890423, 1., 0.97783521, 0.93035495])
light_blue_y = np.array([0.81346023, 0.72248919, 0.72406021, 0.74823437, 0.77759055, 0.81167983, 0.84050726, 0.90357904, 0.97354455, 1. ])
blue_m, blue_b = np.polyfit(x, blue_y, 1)
light_blue_m, light_blue_b = np.polyfit(x, light_blue_y, 1)
After fitting linear regression lines to these two numpy arrays I get the following slopes:
>>> blue_m
0.009446010787878795
>>> light_blue_m
0.028149985151515147
How can I compare these two slopes and show whether or not they are statistically different from each other?
import numpy as np
import statsmodels.api as sm
x=np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
blue_y = np.array([0.94732871, 0.85729212, 0.86039587, 0.89169027, 0.90817473, 0.93606619, 0.93890423, 1., 0.97783521, 0.93035495])
light_blue_y = np.array([0.81346023, 0.72248919, 0.72406021, 0.74823437, 0.77759055, 0.81167983, 0.84050726, 0.90357904, 0.97354455, 1. ])
For simplicity, we can look at the difference between the two series. If they had the same constant and the same slope, that should become visible in a linear regression on the difference.
y = blue_y-light_blue_y
# Let's create a linear regression on the difference
mod = sm.OLS(y, sm.add_constant(x))
res = mod.fit()
Then print the summary:
print(res.summary())
Output:
==============================================================================
Dep. Variable: y R-squared: 0.647
Model: OLS Adj. R-squared: 0.603
Method: Least Squares F-statistic: 14.67
Date: Mon, 08 Feb 2021 Prob (F-statistic): 0.00502
Time: 23:12:25 Log-Likelihood: 18.081
No. Observations: 10 AIC: -32.16
Df Residuals: 8 BIC: -31.56
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.1775 0.026 6.807 0.000 0.117 0.238
x1 -0.0187 0.005 -3.830 0.005 -0.030 -0.007
==============================================================================
Omnibus: 0.981 Durbin-Watson: 0.662
Prob(Omnibus): 0.612 Jarque-Bera (JB): 0.780
Interpretation:
Both the constant and the slope of the difference are significantly different from zero, so blue_y and light_blue_y differ in both intercept and slope.
Alternative:
A more traditional approach would be to run the linear regression for both cases and carry out your own F-test, as in the links below; a short sketch of this pooled approach follows the links:
https://towardsdatascience.com/fisher-test-for-regression-analysis-1e1687867259#:~:text=The%20F%2Dtest%2C%20when%20used,its%20use%20in%20linear%20regression.
https://www.real-statistics.com/regression/hypothesis-testing-significance-regression-line-slope/comparing-slopes-two-independent-samples/
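A sketch of that pooled approach (using the x, blue_y and light_blue_y arrays defined above; the group dummy g and the frame name are my own choices):
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
both = pd.DataFrame({
    'x': np.concatenate([x, x]),
    'y': np.concatenate([blue_y, light_blue_y]),
    'g': [0] * len(x) + [1] * len(x),   # 0 = blue series, 1 = light blue series
})
pooled = smf.ols('y ~ x * g', data=both).fit()
print(pooled.f_test('x:g = 0'))         # F-test of the equal-slopes hypothesis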

How to statistically compare the intercept and slope of two different linear regression models in Python?

I have two series of data as below. I want to create an OLS linear regression model for df1 and another OLS linear regression model for df2. And then statistically test if the y-intercepts of these two linear regression models are statistically different (p<0.05), and also test if the slopes of these two linear regression models are statistically different (p<0.05). I did the following
import numpy as np
import math
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
np.inf == float('inf')
data1 = [1, 3, 45, 0, 25, 13, 43]
data2 = [1, 1, 1, 1, 1, 1, 1]
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
fig, ax = plt.subplots()
df1.plot(figsize=(20, 10), linewidth=5, fontsize=18, ax=ax, kind='line')
df2.plot(figsize=(20, 10), linewidth=5, fontsize=18, ax=ax, kind='line')
plt.show()
model1 = sm.OLS(df1, df1.index)
model2 = sm.OLS(df2, df2.index)
results1 = model1.fit()
results2 = model2.fit()
print(results1.summary())
print(results2.summary())
Results #1
OLS Regression Results
=======================================================================================
Dep. Variable: 0 R-squared (uncentered): 0.625
Model: OLS Adj. R-squared (uncentered): 0.563
Method: Least Squares F-statistic: 10.02
Date: Mon, 01 Mar 2021 Prob (F-statistic): 0.0194
Time: 20:34:34 Log-Likelihood: -29.262
No. Observations: 7 AIC: 60.52
Df Residuals: 6 BIC: 60.47
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
x1 5.6703 1.791 3.165 0.019 1.287 10.054
==============================================================================
Omnibus: nan Durbin-Watson: 2.956
Prob(Omnibus): nan Jarque-Bera (JB): 0.769
Skew: 0.811 Prob(JB): 0.681
Kurtosis: 2.943 Cond. No. 1.00
==============================================================================
Results #2
OLS Regression Results
=======================================================================================
Dep. Variable: 0 R-squared (uncentered): 0.692
Model: OLS Adj. R-squared (uncentered): 0.641
Method: Least Squares F-statistic: 13.50
Date: Mon, 01 Mar 2021 Prob (F-statistic): 0.0104
Time: 20:39:14 Log-Likelihood: -5.8073
No. Observations: 7 AIC: 13.61
Df Residuals: 6 BIC: 13.56
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
x1 0.2308 0.063 3.674 0.010 0.077 0.384
==============================================================================
Omnibus: nan Durbin-Watson: 0.148
Prob(Omnibus): nan Jarque-Bera (JB): 0.456
Skew: 0.000 Prob(JB): 0.796
Kurtosis: 1.750 Cond. No. 1.00
==============================================================================
This is as far as I have got, but I think something is wrong. Neither of these regression outcomes seems to show the y-intercept. Also, I expect the coef in results #2 to be 0, since I expect the slope to be 0 when all the values are 1, but the result shows 0.2308. Any suggestions or guiding material will be greatly appreciated.
In statsmodels an OLS model does not fit an intercept by default (see the docs).
exog array_like
A nobs x k array where nobs is the number of observations and k is the number of regressors. An intercept is not included by default and should be added by the user. See statsmodels.tools.add_constant.
The documentation on the exog argument of the OLS constructor suggests using this feature of the tools module in order to add an intercept to the data.
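For example, a minimal sketch with the data1 series from the question, using the row index as the regressor (as the question does):
import numpy as np
import statsmodels.api as sm
y1 = np.array([1, 3, 45, 0, 25, 13, 43])
X1 = sm.add_constant(np.arange(len(y1)))   # prepend a column of ones so an intercept is fitted
res1 = sm.OLS(y1, X1).fit()
print(res1.params)                          # first value is the intercept, second the slope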
To perform a hypothesis test on the values of the coefficients, this question provides some guidance. Unfortunately, this only works if the variances of the residual errors are the same.
We can start by looking at whether the residuals of each model have the same variance (using Levene's test) and ignore the coefficients of the regression models for now.
import numpy as np
import pandas as pd
from scipy.stats import levene
from statsmodels.tools import add_constant
from statsmodels.formula.api import ols ## use formula api to make the tests easier
np.inf == float('inf')
data1 = [1, 3, 45, 0, 25, 13, 43]
data2 = [1, 1, 1, 1, 1, 1, 1]
df1 = add_constant(pd.DataFrame(data1)) ## add a constant column so we fit an intercept
df1 = df1.reset_index() ## just doing this to make the index a column of the data frame
df1 = df1.rename(columns={'index':'x', 0:'y'}) ## the old index will now be called x and the old values are now y
df2 = add_constant(pd.DataFrame(data2)) ## this does nothing because the y column is already a constant
df2 = df2.reset_index()
df2 = df2.rename(columns={'index':'x', 0:'y'}) ## the old index will now be called x and the old values are now y
formula1 = 'y ~ x + const' ## define formulae
formula2 = 'y ~ x'
model1 = ols(formula1, df1).fit()
model2 = ols(formula2, df2).fit()
print(levene(model1.resid, model2.resid))
The output of the levene test looks like this:
LeveneResult(statistic=7.317386741297884, pvalue=0.019129208414097015)
So we can reject the null hypothesis that the residual distributions have the same variance at alpha=0.05.
There is no point in testing the linear regression coefficients now because the residuals don't have the same distributions. It is important to remember that in a regression problem it doesn't make sense to compare the regression coefficients independently of the data they were fit on; the distribution of the regression coefficients depends on the distribution of the data.
Let's see what happens when we try the proposed test anyway. Combining the instructions above with this method from the OLS package yields the following code:
## stack the data and add the indicator variable as described in the
## stackexchange question referenced above:
df1['c'] = 1 ## add an indicator variable that tags the first group of points
df_all = pd.concat([df1, df2], ignore_index=True).drop('const', axis=1) ## pd.concat replaces the deprecated DataFrame.append
df_all = df_all.rename(columns={'index':'x', 0:'y'}) ## the old index will now be called x and the old values are now y
df_all = df_all.fillna(0) ## a bunch of the values are missing in the indicator column after stacking
df_all['int'] = df_all['x'] * df_all['c'] # construct the interaction column
print(df_all) ## look at the data
formula = 'y ~ x + c + int' ## define the linear model using the formula api
result = ols(formula, df_all).fit()
hypotheses = '(c = 0), (int = 0)'
f_test = result.f_test(hypotheses)
print(f_test)
The result of the f-test looks like this:
<F test: F=array([[4.01995453]]), p=0.05233934453138028, df_denom=10, df_num=2>
The result of the F-test means that we just barely fail to reject the null hypotheses specified in the hypotheses variable, namely that the coefficients of the indicator variable 'c' and the interaction term 'int' are both zero.
From this example it is clear that the F-test on the regression coefficients is not very powerful if the residuals do not have the same variance.
Note that the given example has so few points that it is hard for the statistical tests to clearly distinguish the two cases, even though to the human eye they are very different. The statistical tests are designed to make few assumptions about the data, but the approximations behind them get better the more data you have. When testing statistical methods to see if they accord with your expectations, it is often best to start by constructing large samples with little noise and then see how well the methods work as your data sets get smaller and noisier.
For the sake of completeness, I will construct an example where the Levene test fails to distinguish the two regression models but the F-test succeeds. The idea is to compare the regression of a noisy data set with its reverse: the distribution of residual errors will be the same, but the relationship between the variables will be very different. Note that this would not work with the noisy dataset given in the question, because that data is so noisy the F-test cannot distinguish between a positive and a negative slope.
import numpy as np
import pandas as pd
from scipy.stats import levene
from statsmodels.tools import add_constant
from statsmodels.formula.api import ols ## use formula api to make the tests easier
n_samples = 6
noise = np.random.randn(n_samples) * 5
data1 = np.linspace(0, 30, n_samples) + noise
data2 = data1[::-1] ## reverse the time series
df1 = add_constant(pd.DataFrame(data1)) ## add a constant column so we fit an intercept
df1 = df1.reset_index() ## just doing this to make the index a column of the data frame
df1 = df1.rename(columns={'index':'x', 0:'y'}) ## the old index will now be called x and the old values are now y
df2 = add_constant(pd.DataFrame(data2)) ## here data2 is not constant, so this adds a const column to df2 as well
df2 = df2.reset_index()
df2 = df2.rename(columns={'index':'x', 0:'y'}) ## the old index will now be called x and the old values are now y
formula1 = 'y ~ x + const' ## define formulae
formula2 = 'y ~ x'
model1 = ols(formula1, df1).fit()
model2 = ols(formula2, df2).fit()
print(levene(model1.resid, model2.resid))
## stack the data and add the indicator variable as described in the
## stackexchange question referenced above:
df1['c'] = 1 ## add an indicator variable that tags the first group of points
df_all = pd.concat([df1, df2], ignore_index=True).drop('const', axis=1) ## pd.concat replaces the deprecated DataFrame.append
df_all = df_all.rename(columns={'index':'x', 0:'y'}) ## the old index will now be called x and the old values are now y
df_all = df_all.fillna(0) ## a bunch of the values are missing in the indicator column after stacking
df_all['int'] = df_all['x'] * df_all['c'] # construct the interaction column
print(df_all) ## look at the data
formula = 'y ~ x + c + int' ## define the linear model using the formula api
result = ols(formula, df_all).fit()
hypotheses = '(c = 0), (int = 0)'
f_test = result.f_test(hypotheses)
print(f_test)
The results of the Levene test and the F-test follow:
LeveneResult(statistic=5.451203655948632e-31, pvalue=1.0)
<F test: F=array([[10.62788052]]), p=0.005591319998324387, df_denom=8, df_num=2>
A final note: since we are doing multiple comparisons on this data and stopping if we get a significant result (i.e. if the Levene test rejects the null we quit; if it doesn't, we do the F-test), this is a stepwise hypothesis test and we are actually inflating our false-positive error rate. We should correct our p-values for multiple comparisons before we report our results. Note that the F-test is already doing this for the hypotheses we test about the regression coefficients. I am a bit fuzzy on the underlying assumptions of these testing procedures, so I am not 100% sure that you are better off making the following correction, but keep it in mind in case you feel you are getting false positives too often.
from statsmodels.sandbox.stats.multicomp import multipletests
print(multipletests([1, .005591], .05)) ## correct our p-values given that we did two comparisons
The output looks like this:
(array([False, True]), array([1. , 0.01115074]), 0.025320565519103666, 0.025)
This means that we still reject the second null hypothesis under the correction, and that the corrected p-values look like [1., 0.011150]. The last two values are corrections to your significance level under two different correction methods.
I hope this helps anyone trying to do this type of work. If anyone has anything to add I would welcome comments. This isn't my area of expertise so I could be making some mistakes.

How to view the interactions of all categorical predictors in an OLS model using python's statsmodels?

I have successfully run an OLS model using the statsmodels package in Python. However, the model picks one level as the intercept (reference level) and does not include it in the interaction results. Specifically, I have 5 levels in the "Meal_Cat" category below, and the model picks one of them (the "Low" level) and treats it as the intercept. That is okay, but the problem is that I am unable to see its interactions with other categories (such as a Low by Group interaction).
See below for how the model is set up:
model = ols('Cost ~ C(Meal_Cat)*C(Group)*C(Region) + Age + Gender', data= Mealcat_DF).fit()
# Seeing if the overall model is significant
print(f"Overall model F({model.df_model: .0f},{model.df_resid: .0f}) = {model.fvalue: .3f}, p = {model.f_pvalue: .4f}")
model.summary()
I was wondering if anyone can suggest a way to include all terms from the model in the interaction summary.
If your variable is already a string or category variable, you can just try the following.
import pandas as pd
import seaborn as sns
import statsmodels.formula.api as smf
df = sns.load_dataset('tips')
formula = 'tip ~ sex*smoker*day + total_bill'
model = smf.ols(formula, data=df)
results = model.fit()
print(results.summary())
OLS Regression Results
==============================================================================
Dep. Variable: tip R-squared: 0.485
Model: OLS Adj. R-squared: 0.449
Method: Least Squares F-statistic: 13.35
Date: Mon, 20 Jan 2020 Prob (F-statistic): 8.29e-25
Time: 14:21:24 Log-Likelihood: -344.02
No. Observations: 244 AIC: 722.0
Df Residuals: 227 BIC: 781.5
Df Model: 16
Covariance Type: nonrobust
=========================================================================================================
coef std err t P>|t| [0.025 0.975]
---------------------------------------------------------------------------------------------------------
Intercept 0.9917 0.357 2.777 0.006 0.288 1.695
sex[T.Female] -0.0731 0.506 -0.144 0.885 -1.071 0.925
smoker[T.No] -0.0427 0.398 -0.107 0.915 -0.827 0.741
day[T.Fri] -0.4549 0.487 -0.933 0.352 -1.415 0.506
day[T.Sat] -0.4662 0.381 -1.224 0.222 -1.217 0.284
day[T.Sun] -0.2880 0.423 -0.681 0.497 -1.121 0.545
sex[T.Female]:smoker[T.No] -0.1423 0.593 -0.240 0.811 -1.311 1.026
sex[T.Female]:day[T.Fri] 0.8553 0.737 1.161 0.247 -0.597 2.307
sex[T.Female]:day[T.Sat] 0.2319 0.605 0.383 0.702 -0.960 1.424
sex[T.Female]:day[T.Sun] 1.0867 0.772 1.407 0.161 -0.435 2.608
smoker[T.No]:day[T.Fri] 0.1224 0.905 0.135 0.893 -1.660 1.905
smoker[T.No]:day[T.Sat] 0.6258 0.480 1.303 0.194 -0.320 1.572
smoker[T.No]:day[T.Sun] 0.2552 0.505 0.506 0.614 -0.739 1.250
sex[T.Female]:smoker[T.No]:day[T.Fri] -0.2185 1.303 -0.168 0.867 -2.787 2.350
sex[T.Female]:smoker[T.No]:day[T.Sat] -0.4487 0.759 -0.591 0.555 -1.944 1.046
sex[T.Female]:smoker[T.No]:day[T.Sun] -0.7027 0.892 -0.788 0.431 -2.460 1.054
total_bill 0.1078 0.008 13.951 0.000 0.093 0.123
==============================================================================
Omnibus: 29.744 Durbin-Watson: 2.154
Prob(Omnibus): 0.000 Jarque-Bera (JB): 60.768
Skew: 0.616 Prob(JB): 6.38e-14
Kurtosis: 5.112 Cond. No. 629.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
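As a side note on the original question's complaint about the reference level: patsy (which statsmodels formulas use) lets you pick which level is absorbed into the intercept via Treatment coding, so you can re-fit with any category as the baseline. A sketch using the tips example above (for the question's own data, the analogous term would be C(Meal_Cat, Treatment(reference='Low'))):
formula = 'tip ~ C(sex, Treatment(reference="Female")) * smoker * day + total_bill'   # "Female" becomes the baseline level for sex
results = smf.ols(formula, data=df).fit()
print(results.summary())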

What equation is Statsmodels OLS using to create example fit line

I'm a novice at statistical analysis and was looking into using statsmodels. As part of my research, I ran into the following set of examples.
The section "OLS non-linear curve but linear in parameters" is confusing the heck out of me. Example follows:
np.random.seed(9876789)
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size = nsample)
res = sm.OLS(y, X).fit()
print(res.summary())
This shows the following summary results:
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.933
Model: OLS Adj. R-squared: 0.928
Method: Least Squares F-statistic: 211.8
Date: Tue, 28 Feb 2017 Prob (F-statistic): 6.30e-27
Time: 21:33:30 Log-Likelihood: -34.438
No. Observations: 50 AIC: 76.88
Df Residuals: 46 BIC: 84.52
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
x1 0.4687 0.026 17.751 0.000 0.416 0.522
x2 0.4836 0.104 4.659 0.000 0.275 0.693
x3 -0.0174 0.002 -7.507 0.000 -0.022 -0.013
const 5.2058 0.171 30.405 0.000 4.861 5.550
==============================================================================
Omnibus: 0.655 Durbin-Watson: 2.896
Prob(Omnibus): 0.721 Jarque-Bera (JB): 0.360
Skew: 0.207 Prob(JB): 0.835
Kurtosis: 3.026 Cond. No. 221.
==============================================================================
When you plot all of this you get:
Plot of Data, Fit and True Line
What's confusing me is that I can't figure out how the fit comes out of the coefficients shown in the summary table. My understanding is that these coefficients for the linear fit should correspond to an equation of the form x1 * x^3 + x2 * x^2 + x3 * x + const, but that does not result in the curve seen. My next thought was that it might be building the equation from the columns of the X matrix, i.e. something like x1 * x + x2 * sin(x) + x3 * (x-5)^2 + const. That also does not work.
What does seem to work is a polynomial fit with a degree of around 10, which I found using np.polyfit(x, y, 10). (The coefficients of that fit are not similar to the OLS ones, and there are 6 more of them.)
So my question is: what equation is OLS using to produce the predicted values? How do the coefficients relate to it? When no equation is specified (assuming it's using something different from normal polynomial equations), how does it determine what to use, or what's the best fit?
One note: I've figured out that I can force it to do what I was expecting by changing the x values used to a different matrix via np.vander():
X = np.vander(x, 4)
This produces results in line with what I was expecting and with np.polyfit.
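For what it's worth, a quick sanity check that can be run with the res and X objects from the example above: the fitted curve is simply the design matrix times the estimated coefficients, i.e. the second formula guessed earlier.
import numpy as np
fitted = res.predict(X)              # the values plotted as the fitted line
manual = X @ res.params              # 0.4687*x + 0.4836*sin(x) - 0.0174*(x - 5)**2 + 5.2058
print(np.allclose(fitted, manual))   # True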

After normalizing data, how to predict y using regression analysis?

I have normalized my data and applied regression analysis to predict yield (y),
but my predicted output is also normalized (between 0 and 1).
I want my predicted answer in the original data units, not between 0 and 1.
Data:
Total_yield(y) Rain(x)
64799.30 720.1
77232.40 382.9
88487.70 1198.2
77338.20 341.4
145602.05 406.4
67680.50 325.8
84536.20 791.8
99854.00 748.6
65939.90 1552.6
61622.80 1357.7
66439.60 344.3
Next, I normalized the data using this code:
from sklearn.preprocessing import Normalizer
import pandas
import numpy
dataframe = pandas.read_csv('/home/desktop/yield.csv')
array = dataframe.values
X = array[:,0:2]
scaler = Normalizer().fit(X)
normalizedX = scaler.transform(X)
print(normalizedX)
Total_yield Rain
0 0.999904 0.013858
1 0.999782 0.020872
2 0.999960 0.008924
3 0.999967 0.008092
4 0.999966 0.008199
5 0.999972 0.007481
6 0.999915 0.013026
7 0.999942 0.010758
8 0.999946 0.010414
9 0.999984 0.005627
10 0.999967 0.008167
Next, I used these normalized values to calculate R-squared using the following code:
array=normalizedX
data = pandas.DataFrame(array,columns=['Total_yield','Rain'])
import statsmodels.formula.api as smf
lm = smf.ols(formula='Total_yield ~ Rain', data=data).fit()
lm.summary()
Output :
<class 'statsmodels.iolib.summary.Summary'>
"""
OLS Regression Results
==============================================================================
Dep. Variable: Total_yield R-squared: 0.752
Model: OLS Adj. R-squared: 0.752
Method: Least Squares F-statistic: 1066.
Date: Thu, 09 Feb 2017 Prob (F-statistic): 2.16e-108
Time: 14:21:21 Log-Likelihood: 941.53
No. Observations: 353 AIC: -1879.
Df Residuals: 351 BIC: -1871.
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept 1.0116 0.001 948.719 0.000 1.009 1.014
Rain -0.3013 0.009 -32.647 0.000 -0.319 -0.283
==============================================================================
Omnibus: 408.798 Durbin-Watson: 1.741
Prob(Omnibus): 0.000 Jarque-Bera (JB): 40636.533
Skew: -4.955 Prob(JB): 0.00
Kurtosis: 54.620 Cond. No. 10.3
==============================================================================
Now, R-squared = 0.75, and the regression model is y = b0 + b1 * x, i.e.
Yield = b0 + b1 * Rain
Yield = intercept + coefficient for Rain * Rain
When I plug my raw Rain value into this equation, it gives:
Yield = 1.0116 + (-0.3013 * 720.1 (mm)) = -215.95
A yield of -215.95 is wrong. And when I use the normalized value for the rain data, the predicted yield comes out as a normalized value between 0 and 1.
I want to predict: if rainfall is 720.1 mm, how much yield will there be?
Can anyone help me get the predicted yield? I want to compare predicted yield vs. given yield.
First, you should not use Normalizer in this case. It doesn't scale each feature; it normalizes along rows (each sample to unit norm), which is probably not what you want.
Use MinMaxScaler or RobustScaler to scale each feature. See the preprocessing docs for more details.
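A tiny illustration of the difference, using the first two rows of the question's data:
import numpy as np
from sklearn.preprocessing import Normalizer, MinMaxScaler
A = np.array([[64799.30, 720.1],
              [77232.40, 382.9]])
print(Normalizer().fit_transform(A))    # rescales each row (sample) to unit norm
print(MinMaxScaler().fit_transform(A))  # rescales each column (feature) to [0, 1]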
Second, these classes have an inverse_transform() function which can convert the predicted y value back to the original units.
import numpy as np
from sklearn.preprocessing import RobustScaler

x = np.asarray([720.1, 382.9, 1198.2, 341.4, 406.4, 325.8,
                791.8, 748.6, 1552.6, 1357.7, 344.3]).reshape(-1, 1)
y = np.asarray([64799.30, 77232.40, 88487.70, 77338.20, 145602.05, 67680.50,
                84536.20, 99854.00, 65939.90, 61622.80, 66439.60]).reshape(-1, 1)

scalerx = RobustScaler()
x_scaled = scalerx.fit_transform(x)
scalery = RobustScaler()
y_scaled = scalery.fit_transform(y)
Call your statsmodels OLS on this scaled data, for example:
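For example, a minimal sketch (the names X_design and ols_res are my own):
import statsmodels.api as sm
X_design = sm.add_constant(x_scaled)        # prepend an intercept column to the scaled rain values
ols_res = sm.OLS(y_scaled, X_design).fit()
print(ols_res.params)                       # intercept (b0) and slope (b1), in scaled units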
While predicting, first transform your test data:
x_scaled_test = scalerx.transform([[720.1]])  # transform expects a 2-D array, hence the double brackets
Apply your regression model to this value and get the result. This predicted y will be in the scaled units.
Yield_scaled = b0 + b1 * x_scaled_test
So inverse transform it to get data in original units.
Yield_original = scalery.inverse_transform(Yield_scaled)
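Concretely, continuing the sketch above (sm, X_design and ols_res come from that sketch, x_scaled_test from the transform shown earlier):
X_test = sm.add_constant(x_scaled_test, has_constant='add')      # match the design matrix layout
Yield_scaled = ols_res.predict(X_test)                           # prediction in scaled units
Yield_original = scalery.inverse_transform(Yield_scaled.reshape(-1, 1))
print(Yield_original)                                            # predicted yield in the original units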
But in my opinion, this linear model will not be very accurate: when I plotted your data (plot not shown here), it was clear that it is not well described by a linear relationship. Use other techniques, or get more data.
