I have a problem running the following function to perform a Wald test on my data. It always displays an error message saying that the variable theta is not defined. I tried to define it in struct WaldTestFun{F, T, Z}, but this does not work. The error message I am getting is:
Error Message = UndefVarError: theta not defined
Here is a part of the code:
using LinearAlgebra
using Optim
using PrettyTables
using Printf
using Statistics
using StatsBase
using StatsFuns
struct WaldTest
    tbl::NamedTuple
    rankmin::Int64
    rankₘₐₓ::Int64
end
struct WaldTestFun{F, T, Z}
    f::F
    r::Int64
    vecsigma::T
    Vhat::Z
end
(wf::WaldTestFun)(theta) = wf.f(theta, wf.r, wf.vecsigma, wf.Vhat) #Here the error occurs
function waldobjfun(th, r, vecsigma, Vhat)
    # This is where the error came from: the original code called size(theta)
    # and reshape(theta, ...) before theta existed; the argument is named th,
    # so reshape th instead. The stray `r, k = size(theta)` line is not needed,
    # since r is already passed in as an argument.
    theta = reshape(th, r+1, length(th)÷(r+1))
    sigmamat = diagm(0=>theta[1,:].^2) .+ theta[2:r+1,:]'*theta[2:r+1,:]
    tempsigma = sigmamat[findall(tril(ones(size(sigmamat))).==1)]
    (vecsigma - tempsigma)' / Vhat * (vecsigma - tempsigma)
end
X = randn(100,10);
using Factotum   # provides FactorModel
fm = Factotum.FactorModel(X, 3)
function waldtest(fm::FactorModel, minrank::Int = 0, maxrank::Int = 2)
    X = copy(fm.X)
    T, n = size(X)
    ## Normalize factor
    Xs = X / diagm(0=>sqrt.(diag(cov(X))))
    covX = cov(Xs)
    meanX = mean(Xs, dims=1)
    vecsigma = vech(covX)
    bigN = length(vecsigma)
    Vhat = Array{Float64}(undef, bigN, bigN)
    varvecsig = zeros(n,n,n,n);
I am trying to use Julia as the main language for my work, but I find that this plot is different from the Python one (which outputs the right plot).
Here is the Python code and output:
import numpy as np
import math
import matplotlib.pyplot as plt
u = 9.27*10**(-21)
k = 1.38*10**(-16)
j2 = 7/2
nrr = 780
h = 1000
na = 6*10**(23)
rho = 7.842
mgd = 157.25
a = mgd
d = na*rho*u/a
m_f = []
igd = 7.0
for t in range(1,401):
    while True:
        h1 = h+d*nrr*igd
        x2 = (7*u*h1)/(k*t)
        x4 = 2*j2
        q2 = (x4+1)/x4
        m = abs(7*(q2*math.tanh(q2*x2)**-1 - (1/x4)*math.tanh(x2/x4)**-1))
        if abs(m - igd) < 10**(-12):
            break
        else:
            igd = m
    m_f.append(abs(m))
plt.plot(range(1,401), m_f)
plt.savefig("Py_plot.pdf")
and it gives the following correct plot:
[Plot: the expected output from Python]
But when I do the same calculations in Julia, it gives different output than Python. Here is my Julia code:
using Plots
u = 9.27*10^(-21)
k = 1.38*10^(-16)
j2 = 7/2
nrr = 780
h = 1000
na = 6*10^(23)
rho = 7.842
mgd = 157.25
a = mgd
d = na*rho*u/a
igd = 7.0
m = 0.0
m_f = Float64[]
for t in 1:400
    while true
        h1 = h+d*nrr*igd
        x2 = (7*u*h1)/(k*t)
        x4 = 2*j2
        q2 = (x4+1)/x4
        m = 7*(q2*coth(rad2deg(q2*x2))-(1/x4)*coth(rad2deg(x2/x4)))
        if abs(abs(m)-igd) < 10^(-10)
            break
        else
            igd = m
        end
    end
    push!(m_f, abs(m))
end
plot(1:400, m_f)
and this is the unexpected Julia output:
[Plot: unexpected, wrong output from Julia]
Any help would be appreciated.
Code:
using Plots
const u = 9.27e-21
const k = 1.38e-16
const j2 = 7/2
const nrr = 780
const h = 1000
const na = 6.0e23
const rho = 7.842
const mgd = 157.25
const a = mgd
const d = na*rho*u/a
function plot_graph()
    igd = 7.0
    m = 0.0
    trange = 1:400
    m_f = Vector{Float64}(undef, length(trange))
    for t in trange
        while true
            h1 = h+d*nrr*igd
            x2 = (7*u*h1)/(k*t)
            x4 = 2*j2
            q2 = (x4+1)/x4
            m = abs(7*(q2*coth(q2*x2)-(1/x4)*coth(x2/x4)))
            if isapprox(m, igd, atol = 10^(-10))
                break
            else
                igd = m
            end
        end
        m_f[t] = m
    end
    plot(trange, m_f)
end
Plot: [output of the corrected code]
Changes for correctness:
Changed na = 6*10^(23) to na = 6.0e23.
Since ^ has a higher precedence than *, 10^23 is evaluated first, and since the operands are Int values, the result is also an Int. However, Int (i.e. Int64) can only hold numbers up to approximately 9 * 10^18, so 10^23 overflows and gives a wrong result.
julia> 10^18
1000000000000000000
julia> 10^19 #overflow starts here
-8446744073709551616
julia> 10^23 #and gives a wrong value here too
200376420520689664
6.0e23 avoids this problem by directly using the scientific e-notation to create a literal Float64 value (Float64 can hold this value without overflowing).
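(For contrast, an aside not from the original answer: this also explains why the Python version is unaffected, since Python's int is arbitrary-precision and 10**23 never overflows. A quick check:)
# Python integers grow as needed, so 10**23 is computed exactly
print(6 * 10**23)             # 600000000000000000000000
print((10**23).bit_length())  # 77 -- well past a 64-bit machine word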
Removed the rad2deg calls when calling coth. Julia trigonometric functions by default take radians, so there's no need to make this conversion.
Other changes
Marked all the constants as const, and moved the rest of the code into a function. See Performance tip: Avoid non-constant global variables
Changed the abs(m - igd) < 10^-10 to isapprox(m, igd, atol = 10^-10) which performs basically the same check, but is clearer and more flexible (for eg. if you wanted to change to a relative tolerance rtol later).
Stored the 1:400 as a named variable trange. This is just because it's used multiple times, so it's easier to manage as a variable.
Changed m_f = Float64[] to m_f = Vector{Float64}(undef, length(trange)) (and the push! at the end to an assignment). If the size of the array is known beforehand (as it is in this case), it's better for performance to pre-allocate it with undef values and then assign to it.
Changed u and k also to use the scientific e-notation, for consistency and clarity (thanks to @DNF for suggesting the use of this notation in the comments).
I have estimated a nested logit model in R using the mlogit package. However, I encountered some problems when trying to estimate the marginal effects. Below is the code I implemented.
library(mlogit)
# data
data2 = read.csv(file = "neat_num_energy.csv")
new_ener2 <- mlogit.data(
data2,
choice="alter4", shape="long",
alt.var="energy_altern",chid.var="id")
# estimate model
nest2 <- mlogit(
alter4 ~ expendmaint + expendnegy |
educ + sex + ppa_power_sp + hu_price_powersupply +
hu_2price +hu_3price + hu_9price + hu_10price +
hu_11price + hu_12price,
data = data2,
nests = list(
Trad = c('Biomas_Trad', 'Solar_Trad'),
modern = c('Biomas_Modern', 'Solar_Modern')
), unscaled=FALSE)
# create Z variable
z3 <- with(data2, data.frame(
expendnegy = tapply(expendnegy, idx(nest2,2), mean),
expendmaint= tapply(expendmaint, idx(nest2,2), mean),
educ= mean(educ),
sex = mean(sex),
hu_price_powersupply = mean(hu_price_powersupply),
ppa_power_sp = mean(ppa_power_sp),
hu_2price = mean(hu_2price),
hu_3price = mean(hu_3price),
hu_9price = mean(hu_9price),
hu_10price = mean(hu_10price),
hu_11price = mean(hu_11price),
hu_12price = mean(hu_12price)
))
effects(nest2, covariate = "sex", data = z3, type = "ar")
#> Error in solve.default(H, g[!fixed]): Lapack routine dgesv:
#>   system is exactly singular: U[6,6] = 0
My data is in long format, with expendmaint and expendnegy being the only alternative-specific variables, while every other variable is case-specific.
alter4 is a nominal variable representing each alternative.
I have an expression like:
from sympy import IndexedBase, Idx, symbols, simplify
b = IndexedBase('b')
k = IndexedBase('k')
w = IndexedBase('w')
r = IndexedBase('r')
z = IndexedBase('z')
i = symbols("i", cls=Idx)
omega = symbols("omega", cls=Idx)
e_p = (-k[i, omega]*r[i]/w[i] + k[i, omega]*r[i]/(b[omega]*w[i]))**b[omega]*k[i, omega]*r[i]/(-b[omega]*k[i, omega]*k[i, omega]**b[omega]*r[i]*z[omega]/w[i] + k[i, omega]*k[i, omega]**b[omega]*r[i]*z[omega]/w[i])
e_p = simplify(e_p)
print(type(e_p))
print(e_p)
<class 'sympy.core.mul.Mul'>
-(-(b[omega] - 1)*k[i, omega]*r[i]/(b[omega]*w[i]))**b[omega]*k[i, omega]**(-b[omega])*w[i]/((b[omega] - 1)*z[omega])
So k[i, omega] should be canceled out when I use the simplify() function, but it does nothing. How can I get rid of the unnecessary variables and coefficients?
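A possible explanation and workaround (a sketch, not from the original post): SymPy will not rewrite (x*y)**a as x**a*y**a for a symbolic exponent unless it can prove the bases are positive, so the k[i, omega]**b[omega] hidden inside the first power never meets the explicit k[i, omega]**(-b[omega]) to cancel. expand_power_base with force=True applies that rewrite unconditionally:
from sympy import IndexedBase, Idx, symbols, simplify, expand_power_base
b, k, w, r, z = (IndexedBase(s) for s in "bkwrz")
i = symbols("i", cls=Idx)
omega = symbols("omega", cls=Idx)
e_p = ((-k[i, omega]*r[i]/w[i] + k[i, omega]*r[i]/(b[omega]*w[i]))**b[omega]
       * k[i, omega]*r[i]
       / (-b[omega]*k[i, omega]*k[i, omega]**b[omega]*r[i]*z[omega]/w[i]
          + k[i, omega]*k[i, omega]**b[omega]*r[i]*z[omega]/w[i]))
# force=True treats all bases as positive, making the hidden
# k[i, omega]**b[omega] explicit so it can cancel against k[i, omega]**(-b[omega])
print(simplify(expand_power_base(simplify(e_p), force=True)))
Note that force=True effectively assumes every base is positive, which may not be valid for arbitrary complex values of your symbols.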
For a better structure of my constraints, I want to group multiple constraints into a block, so that I don't have to scroll through a long list of separate functions representing my constraints.
My problem is that I'm using an Abstract model and don't know how to define that block for a set that has not been initialized yet.
import pyomo.environ as pe
M = pe.AbstractModel()
M.s = pe.Set(dimen=1)
M.chp_minPower = pe.Param(within=pe.Reals,mutable=True)
M.chp_maxPower = pe.Param(within=pe.Reals,mutable=True)
M.chp_posGrad = pe.Param(within=pe.Reals,mutable=True)
M.chp_negGrad = pe.Param(within=pe.Reals,mutable=True)
M.chp_k = pe.Param(within=pe.Reals,mutable=True)
M.chp_c = pe.Param(within=pe.Reals,mutable=True)
M.chp_f1 = pe.Param(within=pe.Reals,mutable=True)
M.chp_f2 = pe.Param(within=pe.Reals,mutable=True)
M.gasCost = pe.Param(within=pe.Reals,mutable=True)
M.chpOn = pe.Var(M.s, within=pe.Binary)
M.chpSwitchON = pe.Var(M.s,within=pe.Binary)
M.chpPel = pe.Var(M.s,within=pe.NonNegativeReals)
M.chpPth = pe.Var(M.s, within=pe.NonNegativeReals)
M.chpQGas = pe.Var(M.s, within=pe.NonNegativeReals)
def chp_block_rule1(nb, i):
    # Constraints
    nb.chpPelMax = pe.Constraint(expr=M.chpPel[i] <= M.chp_maxPower * M.chpOn[i])
    nb.chpPelMin = pe.Constraint(expr=M.chpPel[i] >= M.chp_minPower * M.chpOn[i])
    #b.sellBin = pe.Constraint(expr=b.sell[i]/M.maxSell <= M.sellBin[i])
    nb.chpCogen = pe.Constraint(expr=M.chpPth[i] == M.chp_f1 * M.chpPel[i] + M.chp_f2 * M.chpOn[i])
    nb.chpConsumption = pe.Constraint(expr=M.chpQGas[i] == M.chp_c * M.chpOn[i] + M.chp_k + M.chpPel[i])
M.chp_block = pe.Block(M.s, rule=chp_block_rule1)
ValueError: Error retrieving component chpPel[1]: The component has not been constructed.
Does anybody know how to work with blocks in Abstract models?
I'm not 100% sure, but I guess expr tries to actually evaluate the expression, and because chpPel is a variable (and thus has no value yet), it breaks.
In order to delay the evaluation of the expression (i.e. pass the expression into the solver as symbolic), you can use rule instead of expr.
As you probably know rule takes a function. If the expression is short enough, you can use a lambda function.
nb.chpPelMax = pe.Constraint(rule=lambda M: M.chpPel[i] <= M.chp_maxPower * M.chpOn[i])
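Putting it together for the block rule above, a minimal sketch reusing the declarations from the question (this assumes Pyomo calls a scalar Constraint's rule with the enclosing block, and that nb.model() returns the model instance being constructed):
def chp_block_rule1(nb, i):
    m = nb.model()  # the model instance this block belongs to
    # rule= delays building the expressions until construction time,
    # when the Var components actually exist on the instance
    nb.chpPelMax = pe.Constraint(rule=lambda nb: m.chpPel[i] <= m.chp_maxPower * m.chpOn[i])
    nb.chpPelMin = pe.Constraint(rule=lambda nb: m.chpPel[i] >= m.chp_minPower * m.chpOn[i])
    nb.chpCogen = pe.Constraint(rule=lambda nb: m.chpPth[i] == m.chp_f1 * m.chpPel[i] + m.chp_f2 * m.chpOn[i])
    nb.chpConsumption = pe.Constraint(rule=lambda nb: m.chpQGas[i] == m.chp_c * m.chpOn[i] + m.chp_k + m.chpPel[i])
M.chp_block = pe.Block(M.s, rule=chp_block_rule1)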
I'm running MATLAB R2017a. I am trying to execute a simple program that writes three characters to an Excel file. When I run the program with a small number of values it is fine, but when I increase it to the millions, the program pauses.
Does anyone know why the program is pausing like this?
X = []
filename = 'PopltnFL.xlsx';
NumTrump = 4617886;
NumClinton = 4504975;
NumOther = 297025;
% Values for which the program runs without pausing:
% NumTrump = 4;
% NumClinton = 4;
% NumOther = 2;
%
for ii = 1:NumTrump
    X = [X,'T'];
end
for jj = 1:NumClinton
    X = [X,'C'];
end
for kk = 1:NumOther
    X = [X,'O'];
end
X = X';
xlswrite(filename,X)