gam in mgcv in R with a large number of covariates

I would like to know if there is another way to write this call:
gam(VariableResponse ~ s(CovariateName1) + s(CovariateName2) + ... + s(CovariateName100),
family = gaussian(link = identity), data = MyData)
in the mgcv package without typing out all 100 covariate names as above?
Suppose that MyData contains only VariableResponse in column 1, CovariateName1 in column 2, and so on.
Many thanks!

Yes, use the brute-force approach: generate a formula by pasting the covariate names together with the strings 's(' and ')', then collapse the whole thing with ' + '. Then convert the resulting string to a formula and pass that to gam(). You may need to fix up the formula's environment if gam() can't find the variables you name, as gam() does some non-standard evaluation on the formula to identify which terms need smooths estimated and hence need to be replaced by a basis expansion.
library(mgcv)
set.seed(2) ## simulate some data...
df <- gamSim(1, n=400, dist = "normal", scale = 2)
> names(df)
[1] "y" "x0" "x1" "x2" "x3" "f" "f0" "f1" "f2" "f3"
We'll ignore the last 5 of those columns for the purposes of this example
df <- df[1:5]
Make the formula
fm <- paste('s(', names(df[ -1 ]), ')', sep = "", collapse = ' + ')
fm <- as.formula(paste('y ~', fm))
Now fit the model
m <- gam(fm, data = df)
> m
Family: gaussian
Link function: identity
Formula:
y ~ s(x0) + s(x1) + s(x2) + s(x3)
Estimated degrees of freedom:
2.5 2.4 7.7 1.0 total = 14.6
GCV score: 4.050519
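If gam() complains that it cannot find y or the covariates (the environment issue mentioned above), one hedged fix is to point the formula at the environment where those objects are visible before fitting:
## assumption: fm was built inside a function; run this inside that function
environment(fm) <- environment()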
You do have to be careful about fitting GAMs this way, however; concurvity (the nonlinear counterpart to multicollinearity in linear models) can cause catastrophically bad estimates of the smooth functions.
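mgcv ships a diagnostic for exactly this, so a quick check on the fitted model is worthwhile; values close to 1 flag smooths that are largely functions of the other smooths:
concurvity(m, full = TRUE)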


Measurement invariance in Lavaan (subgroup analysis); Warning: covariance matrix of latent variables is not positive definite

I have a good fitting CFA model and am now conducting some additional analyses by different subgroups (e.g., gender, age, education, ...). Specifically, I'm calculating fit indices for measurement invariance (configural, metric, scalar). Everything looks fine, except for one of my subgroups. I get the following Warning message in Lavaan for a group that has a relatively small sample size:
Warning message:
In lav_object_post_check(object) :
lavaan WARNING: covariance matrix of latent variables
is not positive definite;
use lavInspect(fit, "cov.lv") to investigate.
Model specification and fit:
model <- '
F1 =~ a + b + c + d + e
F2 =~ f + g + h + i + j
F3 =~ k + l + m + n + o
F4 =~ p + q + r + s + t
F5 =~ u + v + w + x + y
Crossload =~ g + j + n + o + q
'
fit.model.group3 <- sem(model, data=MyData.group3, estimator='WLSMV', missing='pairwise', ordered = T)
fit.model.configural.edu <- sem(model, data=MyData.edu, estimator='WLSMV', missing='pairwise', ordered = T, group = "edu")
This grouping variable (edu) has 3 levels (Ns: group1 = 1150, group2 = 215, group3 = 120), and I only get the warning for group3 (the smallest group). I can fix it by combining some factors (which were strongly correlated for this group) and then fitting the more parsimonious 4-factor model. However, as the point of this subgroup analysis is to investigate the same CFA model across the different subgroups, I don't think it's appropriate to change the model specification(?). And so, is it okay to ignore this warning message for the sake of the measurement invariance analysis (the model still converges normally, and I get the necessary metrics)? Note that some of the eigenvalues are negative, and I get the following output from this call:
> det(lavInspect(fit.model.group3, "cov.lv"))
[1] 2.544325e-06
As it's close to zero, could it be a machine precision issue?
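For completeness, the negative eigenvalues I mention come from inspecting the latent covariance matrix directly, along the lines the warning suggests (eigen() applied to the lavInspect() output):
eigen(lavInspect(fit.model.group3, "cov.lv"))$values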
Any advice would be very much appreciated. Unfortunately, my technical knowledge is lacking and so I've struggled to follow some of the existing advice elsewhere.
Best wishes,
Mary

Canonical correlation analysis on covariance matrices instead of raw data

Due to privacy issues I don't have the original raw data matrices, but instead I can have the covariance matrices of the x and y datasets (x'x, y'y, x'y) or the correlation matrix between the two of them (or any other sort of matrix that is not the original data matrix).
I need to find a way to apply canonical correlation analysis directly to those matrices. Browsing the net, I didn't find any solution to my problem. I want to ask if there is already an implemented algorithm able to work on such data; R would be best, but other languages are OK.
Example from the R tutorial for the CCA package (https://stats.idre.ucla.edu/r/dae/canonical-correlation-analysis/):
mm <- read.csv("https://stats.idre.ucla.edu/stat/data/mmreg.csv")
colnames(mm) <- c("Control", "Concept", "Motivation", "Read", "Write", "Math",
"Science", "Sex")
You divide the dataset into x and y:
x <- mm[, 1:3]
y <- mm[, 4:8]
Then the function takes these two datasets as input: cc(x, y) (note that the function standardizes the data by itself).
What I want to know is whether there is a way to perform CCA starting by centering the matrices around the mean:
x = scale(x, scale = F)
y = scale(y, scale = F)
and then computing the covariance matrices x'x, y'y, x'y:
cvx = crossprod(x); cvy = crossprod(y); cvxy = crossprod(x, y)
The algorithm should take those matrices as input and compute the canonical variates and correlation coefficients, like:
f(cvx, cvy, cvxy)
This article, for example, describes a solution starting from covariance matrices, but I don't know whether it is just theory or whether someone has actually implemented it:
http://graphics.stanford.edu/courses/cs233-20-spring/ReferencedPapers/CCA_Weenik.pdf
I hope this is exhaustive enough!
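For what it's worth, the classical construction can be written directly in base R from exactly these matrices. Below is a minimal sketch of the textbook whitening/eigen-decomposition approach (in the spirit of the Weenink notes linked above); cca_from_cov is a hypothetical name, not a function from the CCA package, and invertible cvx and cvy are assumed:
cca_from_cov <- function(cvx, cvy, cvxy) {
  ## inverse square root of cvx via its eigen-decomposition
  ex <- eigen(cvx, symmetric = TRUE)
  cvx_isqrt <- ex$vectors %*% diag(1 / sqrt(ex$values)) %*% t(ex$vectors)
  ## symmetric matrix whose eigenvalues are the squared canonical correlations
  M <- cvx_isqrt %*% cvxy %*% solve(cvy, t(cvxy)) %*% cvx_isqrt
  em <- eigen(M, symmetric = TRUE)
  a <- cvx_isqrt %*% em$vectors   ## x-side weights
  b <- solve(cvy, t(cvxy) %*% a)  ## y-side weights (each column up to scale)
  list(cor = sqrt(pmax(em$values, 0)), xcoef = a, ycoef = b)
}
The returned canonical correlations should match those from cc(x, y) on the raw data: a common scale factor in cvx, cvy and cvxy cancels, and canonical correlations are invariant to the per-variable standardization that cc() applies.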
In short: the correlations are used internally in most (probably all) CCA implementations.
At length: you will need to work out how to do that depending on the case. Let me show you an example below.
What is Canonical-correlation analysis (CCA)?
Canonical-correlation analysis (CCA) helps you identify the best possible linear relations you could create between two datasets. See Wikipedia, and see the references for examples. I will follow this post for the data and the libraries used.
Set up the libraries, load the data, select some variables, drop missing values, and standardize the data.
import pandas as pd
import numpy as np
df = pd.read_csv('2016 School Explorer.csv')
# choose relevant features
df = df[['Rigorous Instruction %',
'Collaborative Teachers %',
'Supportive Environment %',
'Effective School Leadership %',
'Strong Family-Community Ties %',
'Trust %','Average ELA Proficiency',
'Average Math Proficiency']]
df.corr()
# drop missing values
df = df.dropna()
# separate X and Y groups
X = df[['Rigorous Instruction %',
'Collaborative Teachers %',
'Supportive Environment %',
'Effective School Leadership %',
'Strong Family-Community Ties %',
'Trust %'
]]
Y = df[['Average ELA Proficiency',
'Average Math Proficiency']]
for col in X.columns:
    X[col] = X[col].str.strip('%')
    X[col] = X[col].astype('int')
# Standardise the data
from sklearn.preprocessing import StandardScaler
sc = StandardScaler(with_mean=True, with_std=True)
X_sc = sc.fit_transform(X)
Y_sc = sc.fit_transform(Y)
What are Correlations?
I am pausing here to talk about the idea and the implementation.
First of all, CCA is naturally based on that idea; however, there are different ways to carry out the numerical resolution. (See the definition of correlation on Wikipedia.)
I am talking about this because I am going to modify a function from that library, and I want you to pay close attention to it.
See Eq. 4 in Bilenko et al. 2016, but be careful about how the terms are arranged.
Notice that, strictly speaking, you do not need the correlations.
Let me show you the function that works out that expression in the pyrcca library:
def kcca(data, reg=0., numCC=None, kernelcca=True, ktype='linear',
         gausigma=1.0, degree=2):
    """Set up and solve the kernel CCA eigenproblem
    """
    if kernelcca:
        kernel = [_make_kernel(d, ktype=ktype, gausigma=gausigma,
                               degree=degree) for d in data]
    else:
        kernel = [d.T for d in data]

    nDs = len(kernel)
    nFs = [k.shape[0] for k in kernel]
    numCC = min([k.shape[1] for k in kernel]) if numCC is None else numCC

    # Get the auto- and cross-covariance matrices
    crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]

    # Allocate left-hand side (LH) and right-hand side (RH):
    LH = np.zeros((sum(nFs), sum(nFs)))
    RH = np.zeros((sum(nFs), sum(nFs)))

    # Fill the left and right sides of the eigenvalue problem
    for i in range(nDs):
        RH[sum(nFs[:i]) : sum(nFs[:i+1]),
           sum(nFs[:i]) : sum(nFs[:i+1])] = (crosscovs[i * (nDs + 1)]
                                             + reg * np.eye(nFs[i]))
        for j in range(nDs):
            if i != j:
                LH[sum(nFs[:j]) : sum(nFs[:j+1]),
                   sum(nFs[:i]) : sum(nFs[:i+1])] = crosscovs[nDs * j + i]

    LH = (LH + LH.T) / 2.
    RH = (RH + RH.T) / 2.

    maxCC = LH.shape[0]
    r, Vs = eigh(LH, RH, eigvals=(maxCC - numCC, maxCC - 1))
    r[np.isnan(r)] = 0
    rindex = np.argsort(r)[::-1]
    comp = []
    Vs = Vs[:, rindex]
    for i in range(nDs):
        comp.append(Vs[sum(nFs[:i]):sum(nFs[:i + 1]), :numCC])
    return comp
The output from here is the canonical covariates (comp); those are a and b in Eq. 4 of Bilenko et al. 2016.
I just want you to pay attention to this:
# Get the auto- and cross-covariance matrices
crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]
That is exactly the place where that operation happens. Notice that it is not exactly the definition from Wikipedia; however, it is mathematically equivalent.
Calculation of the correlations
I am going to calculate the correlations as on Wikipedia, but later I will modify that function, so it is going to differ in a couple of details, to make sure this answers the original question clearly.
# Get the auto- and cross-covariance matrices
crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]
print(crosscovs)
[array([[1217. , 746.04496925, 736.14178336, 575.21073838,
517.52474332, 641.25363806],
[ 746.04496925, 1217. , 732.6297358 , 1094.38480773,
572.95747557, 1073.96490387],
[ 736.14178336, 732.6297358 , 1217. , 559.5753228 ,
682.15312862, 774.36607617],
[ 575.21073838, 1094.38480773, 559.5753228 , 1217. ,
495.79248754, 1047.31981248],
[ 517.52474332, 572.95747557, 682.15312862, 495.79248754,
1217. , 632.75610906],
[ 641.25363806, 1073.96490387, 774.36607617, 1047.31981248,
632.75610906, 1217. ]]), array([[367.74099904, 391.82683717],
[348.78464015, 355.81358426],
[440.88117453, 514.22183796],
[326.32173163, 311.97282341],
[216.32441793, 269.72859023],
[288.27601974, 304.20209135]]), array([[367.74099904, 348.78464015, 440.88117453, 326.32173163,
216.32441793, 288.27601974],
[391.82683717, 355.81358426, 514.22183796, 311.97282341,
269.72859023, 304.20209135]]), array([[1217. , 1139.05867099],
[1139.05867099, 1217. ]])]
Have a look at the output. I am going to change it a bit so it is between -1 and 1; again, this modification is minor. Following the definition from Wikipedia, the authors just keep the numerator, and I am now going to include the denominator.
max_unit = 0
for crosscov in crosscovs:
    max_unit = np.max([max_unit, np.max(crosscov)])

# normalize
crosscovs_new = []
for crosscov in crosscovs:
    crosscovs_new.append(crosscov / max_unit)
print(crosscovs_new)
[array([[1. , 0.6130197 , 0.60488232, 0.47264646, 0.4252463 ,
0.52691342],
[0.6130197 , 1. , 0.6019965 , 0.89924799, 0.47079497,
0.88246911],
[0.60488232, 0.6019965 , 1. , 0.45979895, 0.56052024,
0.63629094],
[0.47264646, 0.89924799, 0.45979895, 1. , 0.40738906,
0.86057503],
[0.4252463 , 0.47079497, 0.56052024, 0.40738906, 1. ,
0.51993107],
[0.52691342, 0.88246911, 0.63629094, 0.86057503, 0.51993107,
1. ]]), array([[0.30217009, 0.32196125],
[0.28659379, 0.29236942],
[0.36226884, 0.42253232],
[0.26813618, 0.25634579],
[0.17775219, 0.22163401],
[0.2368743 , 0.24996063]]), array([[0.30217009, 0.28659379, 0.36226884, 0.26813618, 0.17775219,
0.2368743 ],
[0.32196125, 0.29236942, 0.42253232, 0.25634579, 0.22163401,
0.24996063]]), array([[1. , 0.93595618],
[0.93595618, 1. ]])]
For clarity, here is a slightly different way to see that these numbers are indeed correlations of the original data.
df.corr()
                          Average ELA Proficiency  Average Math Proficiency
Average ELA Proficiency                  1.000000                  0.935956
Average Math Proficiency                 0.935956                  1.000000
That also shows the variable names. I just want to show you that the numbers above make sense and are what you are calling correlations.
Calculations of the CCA
So now I will just modify the kcca function from pyrcca a bit. The idea is for the function to accept the previously calculated correlation matrices.
from rcca import _make_kernel
from scipy.linalg import eigh

def kcca_working(data, reg=0., numCC=None, kernelcca=False, ktype='linear',
                 gausigma=1.0, degree=2, crosscovs=None):
    """Set up and solve the kernel CCA eigenproblem
    """
    if kernelcca:
        kernel = [_make_kernel(d, ktype=ktype, gausigma=gausigma,
                               degree=degree) for d in data]
    else:
        kernel = [d.T for d in data]

    nDs = len(kernel)
    nFs = [k.shape[0] for k in kernel]
    numCC = min([k.shape[1] for k in kernel]) if numCC is None else numCC

    if crosscovs is None:
        # Get the auto- and cross-covariance matrices
        crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]

    # Allocate left-hand side (LH) and right-hand side (RH):
    LH = np.zeros((sum(nFs), sum(nFs)))
    RH = np.zeros((sum(nFs), sum(nFs)))

    # Fill the left and right sides of the eigenvalue problem
    for i in range(nDs):
        RH[sum(nFs[:i]) : sum(nFs[:i+1]),
           sum(nFs[:i]) : sum(nFs[:i+1])] = (crosscovs[i * (nDs + 1)]
                                             + reg * np.eye(nFs[i]))
        for j in range(nDs):
            if i != j:
                LH[sum(nFs[:j]) : sum(nFs[:j+1]),
                   sum(nFs[:i]) : sum(nFs[:i+1])] = crosscovs[nDs * j + i]

    LH = (LH + LH.T) / 2.
    RH = (RH + RH.T) / 2.

    maxCC = LH.shape[0]
    r, Vs = eigh(LH, RH, eigvals=(maxCC - numCC, maxCC - 1))
    r[np.isnan(r)] = 0
    rindex = np.argsort(r)[::-1]
    comp = []
    Vs = Vs[:, rindex]
    for i in range(nDs):
        comp.append(Vs[sum(nFs[:i]):sum(nFs[:i + 1]), :numCC])
    return comp, crosscovs
Let's run the function:
comp, crosscovs = kcca_working([X_sc, Y_sc], reg=0.,
numCC=2, kernelcca=False, ktype='linear',
gausigma=1.0, degree=2, crosscovs = crosscovs_new)
print(comp)
[array([[-0.00375779, 0.0078263 ],
[ 0.00061439, -0.00357358],
[-0.02054012, -0.0083491 ],
[-0.01252477, 0.02976148],
[ 0.00046503, -0.00905069],
[ 0.01415084, -0.01264106]]), array([[ 0.00632283, 0.05721601],
[-0.02606459, -0.05132531]])]
So I took the original function and made it possible to pass in the correlations; I also return them, just for checking.
The printed canonical covariates (comp) are a and b in Eq. 4 of Bilenko et al. 2016.
Comparing results
Now I am going to compare the results from the original and the modified function, to show that they are equivalent.
I can obtain the original results this way, with crosscovs = None, so that it is calculated internally as before instead of being passed in:
comp, crosscovs = kcca_working([X_sc, Y_sc], reg=0.,
numCC=2, kernelcca=False, ktype='linear',
gausigma=1.0, degree=2, crosscovs = None)
print(comp)
[array([[-0.13109264, 0.27302457],
[ 0.02143325, -0.12466608],
[-0.71655285, -0.2912628 ],
[-0.43693303, 1.03824477],
[ 0.01622265, -0.31573818],
[ 0.49365965, -0.44098996]]), array([[ 0.2205752 , 1.99601077],
[-0.90927705, -1.79051045]])]
The printed canonical covariates (comp) are a' and b' in Eq. 4 of Bilenko et al. 2016.
a, b and a', b' are different, but only by a scale factor, so for all purposes they are equivalent. This is because we rescaled the cross-covariance matrices.
To show that, let me pick numbers from each case and calculate the ratio:
print(0.00061439/-0.00375779)
-0.16349769412340764
print(0.02143325/-0.13109264)
-0.16349697435340382
The ratios agree, so the two solutions are the same up to a common scale factor.
Once that modification is in place, you can build on top of it.
References:
Cool post with examples and explanations in Python, using the pyrcca library: https://towardsdatascience.com/understanding-how-schools-work-with-canonical-correlation-analysis-4c9a88c6b913
Bilenko, Natalia Y., and Jack L. Gallant. "Pyrcca: regularized kernel canonical correlation analysis in python and its applications to neuroimaging." Frontiers in neuroinformatics 10 (2016): 49. Paper in which pyrcca is explained: https://www.frontiersin.org/articles/10.3389/fninf.2016.00049/full

Euler beam, solving differential equation in python

I must solve the Euler–Bernoulli beam differential equation, which is:
w''''(x) = q(x)
with boundary conditions:
w(0) = w(l) = 0
and
w''(0) = w''(l) = 0
The beam is as shown in the picture below:
[beam diagram]
The continuous load q is 2 N/mm.
I have to use the shooting method and the scipy.integrate.odeint() function.
I can't even manage to start, as I do not understand how to write the differential equation as a system of first-order equations.
Can someone who understands solving differential equations with boundary conditions in Python please help?
Thanks :)
The shooting method
To solve the fourth-order ODE BVP with scipy.integrate.odeint() using the shooting method, you need to:
1.) Separate the 4th-order ODE into four first-order ODEs by substituting:
u0 = w
u1 = u0' = w'    # 1
u2 = u1' = w''   # 2
u3 = u2' = w'''  # 3
u3' = w'''' = q  # 4
2.) Create a function that carries out the derivative logic and connect it to integrate.odeint() like this:
def calc(u, x, q):
    return [u[1], u[2], u[3], q]

# initial state at x=0: [w(0), w'(0) guess, w''(0), w'''(0) guess]
w = integrate.odeint(calc, [0, guess_w1, 0, guess_w3], xList, args=(q,))
Explanation:
We are sending the boundary conditions at x = 0 ([w(0), w'(0), w''(0), w'''(0)]) to odeint(), which repeatedly calls calc to get the derivatives of the current state of w. Note that we are guessing the initial conditions for w'(0) and w'''(0), while entering the known w(0) = 0 and w''(0) = 0.
Addition of the derivatives to the current state of w happens like this:
# the current w(x) value is the previous value plus the change of w over dx:
w(x) = w(x-dx) + w'(x-dx)*dx
# the others are updated the same way:
w'(x) = w'(x-dx) + w''(x-dx)*dx
# etc.
This is why calc returns [u[1], u[2], u[3], q] instead of [u[0], u[1], u[2], u[3]]: u[1] is the first derivative, which drives the update of u[0] = w, and so on.
3.) Now we are able to set up our shooting method. We will be sending different initial boundary values for w'(0) and w'''(0) to odeint() and then check the end result of the returned w(x) profile to determine how close w(L) and w''(L) got to 0 (the known boundary conditions).
The program for the shooting method:
import numpy as np
from scipy.integrate import odeint

# a function to return the derivatives of w
def returnDerivatives(u, x, q):
    return [u[1], u[2], u[3], q]

# a shooting function which takes in two guesses and returns a w(x) profile for x=[0,L]
def shoot(u2, u4):
    # the number of x points to calculate integration -> determines the size of dx
    # a bigger number means more x's -> better precision -> longer execution time
    xSteps = 1001
    # length of the beam
    L = 1.0  # 1 m
    xSpace = np.linspace(0, L, xSteps)
    q = 0.02  # constant [N/m]
    # integrate and return the profile of w(x) and its derivatives, from x=0 to x=L
    return odeint(returnDerivatives, [0, u2, 0, u4], xSpace, args=(q,))

# the tolerance for our results
tolerance = 0.01
# how many numbers to consider for u2 and u4 (the guessed boundary conditions)
u2_u4_maxNumbers = 1327  # bigger number, better precision, slower program
# you can also divide this into separate variables like u2_maxNum and u4_maxNum
# these are already tested ranges (the best results are somewhere in here)
u2Numbers = np.linspace(-0.1, 0.1, u2_u4_maxNumbers)
# the same as above
u4Numbers = np.linspace(-0.5, 0.5, u2_u4_maxNumbers)
# result list of values extracted from each w(x) profile => [u2, u4, w(L), w''(L)]
# which will help us determine if the w(x) profile is inside the tolerance
resultList = []
# result list of each U (i.e. w(x) profile) => [w(x), w'(x), w''(x), w'''(x)]
resultW = []
# start generating numbers for u2 and u4 and send them to odeint()
for u2 in u2Numbers:
    for u4 in u4Numbers:
        U = shoot(u2, u4)
        # get only the last row of the profile to determine if it passes the tolerance check
        result = U[-1]
        # only check w(L) == 0 and w''(L) == 0, as those are the known boundary conditions
        if (abs(result[0]) < tolerance) and (abs(result[2]) < tolerance):
            # if the result passed the tolerance check, extract the values from the
            # last row of the w(x) profile which we will need later for comparisons
            resultList.append([u2, u4, result[0], result[2]])
            # add the w(x) profile to the list of profiles that passed the tolerance
            # Note: the order of resultList is the same as the order of resultW
            resultW.append(U)
# go through resultList (the values extracted from the last row of each w(x) profile)
for i in range(len(resultList)):
    x = resultList[i]
    # both boundary conditions are 0 for w(L) and w''(L), so we simply add the
    # two absolute values to determine how much the sum differs from 0
    y = abs(x[2]) + abs(x[3])
    # if we've just started, set the smallest difference to the current one
    if i == 0:
        minNum = y  # remember the smallest difference from 0
        index = 0   # remember the index of the best profile
    elif y < minNum:
        # the current sum of absolute values is smaller
        minNum = y
        index = i
# print out the integral of w(x) over the beam
total = 0
for row in resultW[index]:
    total += row[0]
print("The integral of w(x) over the beam is:")
print(total / 1001)  # total/xSteps
This outputs:
The integral of w(x) over the beam is:
0.000135085272117
To print out the best profile for w(x) that we found:
print(resultW[index])
which outputs something like:
# w(x) w'(x) w''(x) w'''(x)
[[ 0.00000000e+00 7.54147813e-04 0.00000000e+00 -9.80392157e-03]
[ 7.54144825e-07 7.54142917e-04 -9.79392157e-06 -9.78392157e-03]
[ 1.50828005e-06 7.54128237e-04 -1.95678431e-05 -9.76392157e-03]
...,
[ -4.48774290e-05 -8.14851572e-04 1.75726275e-04 1.01560784e-02]
[ -4.56921910e-05 -8.14670764e-04 1.85892353e-04 1.01760784e-02]
[ -4.65067671e-05 -8.14479780e-04 1.96078431e-04 1.01960784e-02]]
To double-check the results from above, we will also solve the ODE using the numerical method.
The numerical method
To solve the problem using the numerical method, we first solve the differential equation analytically. This yields four integration constants, which we find by using the boundary conditions to form a system of equations.
For example:
w''''(x) = q(x)
means that we have this:
d^4(w(x))/dx^4 = q(x)
Since q(x) is constant, after integrating once we have:
d^3(w(x))/dx^3 = q(x)*x + C
After integrating again:
d^2(w(x))/dx^2 = q(x)*0.5*x^2 + C*x + D
After another integration:
dw(x)/dx = q(x)/6*x^3 + C*0.5*x^2 + D*x + E
And finally the last integration yields:
w(x) = q(x)/24*x^4 + C/6*x^3 + D*0.5*x^2 + E*x + F
Then we look at the boundary conditions, using the expressions above for w''(x) and w(x) to form a system of equations and solve for the constants.
w''(0) => 0 = q(x)*0.5*0^2 + C*0 + D
w''(L) => 0 = q(x)*0.5*L^2 + C*L + D
This gives us the constants:
D = 0 # from the first equation
C = - 0.01 * L # from the second (after inserting D=0)
After repeating the same for w(0)=0 and w(L)=0 we obtain:
F = 0 # from first
E = 0.01/12.0 * L^3 # from second
Now, after we have solved the equation and found all of the integration constants we can make the program for the numerical method.
The program for the numerical method
We will make a for loop that goes through the entire beam, one dx at a time, and sums up (integrates) w(x).
L = 1.0        # in meters
step = 1001.0  # how many steps to take (dx)
q = 0.02       # constant [N/m]
integralOfW = 0.0  # instead of 0.0, enter the boundary condition value for w(0)
result = []
for i in range(int(L*step)):
    x = i/step
    # current w fragment, from the analytic solution, divided by the step
    w = (q/24.0*pow(x,4) - 0.02/12.0*pow(x,3) + 0.01/12*pow(L,3)*x)/step
    # add up fragments of w for the integral calculation
    integralOfW += w
    # add the current value of w(x) to the result list for plotting
    result.append(w*step)
print("The integral of w(x) over the beam is:")
print(integralOfW)
which outputs:
The integral of w(x) over the beam is:
0.00016666652805511192
Now to compare the two methods
Result comparison between the shooting method and the numerical method
The integral of w(x) over the beam:
Shooting method -> 0.000135085272117
Numerical method -> 0.00016666652805511192
That's a pretty good match. Now let's check the plots:
[plots of w(x) from both methods, not reproduced here]
From the plots it's even more obvious that we have a good match and that the results of the shooting method are correct.
To get even better results for the shooting method, increase xSteps and u2_u4_maxNumbers, and narrow u2Numbers and u4Numbers down to the same set size over a smaller interval (around the best results from previous program runs). Keep in mind that setting xSteps and u2_u4_maxNumbers too high will make your program run for a very long time.
You need to transform the ODE into a first-order system. Setting u0 = w, one possible and commonly used system is
u0' = u1,
u1' = u2,
u2' = u3,
u3' = q(x)
This can be implemented as
def ODEfunc(u,x): return [ u[1], u[2], u[3], q(x) ]
Then make a function that shoots with experimental initial conditions and returns the components of the second boundary condition
def shoot(u01, u03): return odeint(ODEfunc, [0, u01, 0, u03], [0, l])[-1,[0,2]]
Now you have a function of two variables with two result components, and you need to solve this 2x2 system with the usual methods. As the ODE is linear, the shooting function is affine in the initial values, so you only need to find its coefficients and solve the resulting linear system, as sketched below.
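Concretely, writing F(u01, u03) = (w(l), w''(l)) for the shooting function, three integrations determine it completely (a sketch of that linear-algebra step, in the notation above):
F(u01, u03) = F(0, 0) + A * [u01, u03]^T
A = [ F(1, 0) - F(0, 0) | F(0, 1) - F(0, 0) ]   # the two columns of A
so the exact initial slopes solve the 2x2 linear system A * [u01, u03]^T = -F(0, 0).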

Error in caret-svm - "NAs are not allowed in subscripted assignments"

Hi experts. I am a beginner in R. I am trying to use caret with an SVM for classification; the kernel is svmPoly.
First, I used the default parameters to train the model with leave-one-out cross-validation.
The code is:
ctrl <- trainControl(method = "LOOCV",
classProbs = T,
savePredictions = T,
repeats = 1)
modelFit <- train(group~.,data=table_svm,method="svmPoly",
preProc = c("center","scale"),
trControl = ctrl)
The best accuracy is 80%, and the final values used for the model were degree = 1, scale = 0.1 and C = 1.
Second, I tried to tune the parameters.
The code is:
grid_svmpoly=expand.grid(degree=c(1:11),scale=seq(0,5,length.out=25),C=10^c(0:4))
modelFit_tune <- train(group~.,data=table_svm,method="svmPoly",
preProc = c("center","scale"),
tuneGrid=grid_svmpoly,
trControl = ctrl)
I got an error message:
Error in { : task 264 failed - "NAs are not allowed in subscripted assignments"
I checked the data and found no NA.
There must be some NAs inside the data set. I am not new to this, but not much of an expert. To ensure there are no NAs, first convert the data set into matrix format using:
x <- data.matrix(dataframe)
then use the which() function, which is very handy in this case:
which(is.na(x) == T)
I hope this helps you find the answer. Note that the positions are returned as linear indices counted down the columns (R matrices are stored column-wise).
Let me know if this resolves your query.
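One hedged observation on the original error: the tuning grid includes scale = 0 (the first value of seq(0, 5, length.out = 25)). In a polynomial kernel the scale multiplies the inner product, so scale = 0 degenerates the kernel to a constant, and it is plausible that exactly those tasks fail even though the data contain no NAs. A grid restricted to strictly positive scale values is a safer starting point, e.g.:
## hypothetical alternative grid: strictly positive scale values only
grid_svmpoly <- expand.grid(degree = 1:3,
                            scale = 10^seq(-3, 0, length.out = 10),
                            C = 10^(0:4))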

How to convert from string to numeric value in R

I'm looking to graph the heights of a group of people in R.
Because a height like 5-11 comes in as a string, I was wondering if there are any tips on how to convert it to a number so it can be graphed.
This also works with base R functions:
sapply(strsplit(x, "-"), function(x) {x <- as.numeric(x); x[1] + x[2] / 12})
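A quick check of what that returns, on a hypothetical input vector of feet-inches heights:
x <- c("5-11", "6-0", "5-10")
sapply(strsplit(x, "-"), function(h) {h <- as.numeric(h); h[1] + h[2] / 12})
# [1] 5.916667 6.000000 5.833333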
I am assuming your data is a factor. If so, you can edit all the levels to numeric values and then convert the factor to a numeric vector.
levels(data$heights) # levels of the factor
levels(data$heights) <- c(5*12+10, 5*12+11, 6*12, 6*12+1) # rename the levels numerically (here as total inches);
## make sure the numbers are in the same order as your levels
data$heights <- as.numeric(levels(data$heights))[data$heights] # convert the factor heights to a numeric vector
data$heights
Remember that this works only if R read your data in as a factor.
require(stringr)
Split <- str_split(x, "-")
sapply(Split, function(h) {h <- as.numeric(h); h[1] + h[2] / 12})
