Could you help me with the following error?
GurobiError: Unable to convert argument to an expression
I tried to declare the variables as single variables (e.g. x1, x2, x3, etc.), but then I thought iterable objects would work better (since I was getting a 'Non-iterable object' error). Gurobi still cannot convert the expression; it now only throws the error at line #96, but I still can't find the cause. What should I do?
My code:
import gurobipy as grb
f = [5.5, 5.2, 5]
s = 3.8
lX = [0, 0, 0]
uX = [45000, 4000, 1000000]
lV = [0, 0]
lY = [0, 0]
uY = [1000000, 30000]
r = [3.25, 3.4]
pProc = 0.35
pConv = 0.25
p = [5.75, 4.0]
OreProcessingModel = grb.Model(name="MIP Model")
OreProcessingModel.ModelSense = grb.GRB.MAXIMIZE
x = {i: OreProcessingModel.addVar(vtype=grb.GRB.CONTINUOUS,
lb=lX[i],
ub= uX[i],
name="x_{0}".format(i))
for i in range(3)}
v = {i: OreProcessingModel.addVar(vtype=grb.GRB.CONTINUOUS,
lb=lV[i],
name="v_{0}".format(i))
for i in range(2)}
y = {i: OreProcessingModel.addVar(vtype=grb.GRB.CONTINUOUS,
lb=lY[i],
ub=uY[i],
name="v_{0}".format(i))
for i in range(2)}
conv = OreProcessingModel.addVar(vtype=grb.GRB.CONTINUOUS, lb=0, ub=50000, name="conv")
vlms2 = [v[0], v[1]]
vlmsitrtr = 2
constraint_1 = {1:
OreProcessingModel.addConstr(
lhs=grb.quicksum(y[i] for i in range(2)),
sense=grb.GRB.LESS_EQUAL,
rhs=100000,
name="constraint_{1}")
}
constraint_2 = {1:
OreProcessingModel.addConstr(
lhs=grb.quicksum(v[i] for i in range(2)),
sense=grb.GRB.LESS_EQUAL,
rhs=50000,
name="constraint_{2}")
}
OreProcessingModel.setObjective(grb.quicksum((f[i] * x[i] + s * v[i]) - (y[i]*r[i] + pProc*(y[i]) + pConv*conv))for i in range(3))
OreProcessingModel.optimize()
print(OreProcessingModel)
You should not work with dictionaries like that. You can turn this code
x = {i: OreProcessingModel.addVar(vtype=grb.GRB.CONTINUOUS,
lb=lX[i],
ub= uX[i],
name="x_{0}".format(i))
for i in range(3)}
into this:
x = OreProcessingModel.addVars(len(lX), vtype=grb.GRB.CONTINUOUS, lb=lX, ub=uX, name="x")
Furthermore, the dimensions of your variables don't match: you are looping over range(3) in the setObjective() call and trying to access v[i] and y[i], but those dicts only have two entries each.
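For completeness, here is one way the whole model could be set up with addVars so that the index sets line up. How the objective terms should be grouped isn't entirely clear from your formulation, so treat the split below (3-term ore revenue, 2-term product revenue, per-product processing cost plus a single pConv*conv charge) as a guess to adapt:
import gurobipy as grb
f = [5.5, 5.2, 5]
s = 3.8
lX, uX = [0, 0, 0], [45000, 4000, 1000000]
lV = [0, 0]
lY, uY = [0, 0], [1000000, 30000]
r = [3.25, 3.4]
pProc, pConv = 0.35, 0.25
m = grb.Model(name="MIP Model")
# addVars accepts the bound lists directly, one entry per variable
x = m.addVars(len(lX), vtype=grb.GRB.CONTINUOUS, lb=lX, ub=uX, name="x")
v = m.addVars(len(lV), vtype=grb.GRB.CONTINUOUS, lb=lV, name="v")
y = m.addVars(len(lY), vtype=grb.GRB.CONTINUOUS, lb=lY, ub=uY, name="y")
conv = m.addVar(vtype=grb.GRB.CONTINUOUS, lb=0, ub=50000, name="conv")
m.addConstr(y.sum() <= 100000, name="constraint_1")
m.addConstr(v.sum() <= 50000, name="constraint_2")
# sum each group over its own index range so no index runs out of bounds
ore_value = grb.quicksum(f[i] * x[i] for i in range(len(lX)))
product_value = grb.quicksum(s * v[i] for i in range(len(lV)))
processing_cost = grb.quicksum((r[i] + pProc) * y[i] for i in range(len(lY))) + pConv * conv
m.setObjective(ore_value + product_value - processing_cost, grb.GRB.MAXIMIZE)
m.optimize()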
I am doing a numerical analysis exercise where I need to calculate the solution of a linear system using a specific algorithm. My answer differs from the book's answer by some decimal places, which I believe is due to rounding errors. Is there a way to automatically round to eight decimal places after each arithmetic operation? The following is my Python code.
import numpy as np
A1 = [4, -1, 0, 0, -1, 4, -1, 0,\
0, -1, 4, -1, 0, 0, -1, 4]
A1 = np.array(A1).reshape([4,4])
I = -np.identity(4)
O = np.zeros([4,4])
A = np.block([[A1, I, O, O],
[I, A1, I, O],
[O, I, A1, I],
[O, O, I, A1]])
b = np.array([1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6])
def conj_solve(A, b, pre=False):
n = len(A)
C = np.identity(n)
if pre == True:
for i in range(n):
C[i, i] = np.sqrt(A[i, i])
Ci = np.linalg.inv(C)
Ct = np.transpose(Ci)
x = np.zeros(n)
r = b - np.matmul(A, x)
w = np.matmul(Ci, r)
v = np.matmul(Ct, w)
alpha = np.dot(w, w)
for i in range(MAX_ITER):
if np.linalg.norm(v, np.infty) < TOL:
print(i+1, "steps")
print(x)
print(r)
return
u = np.matmul(A, v)
t = alpha/np.dot(v, u)
x = x + t*v
r = r - t*u
w = np.matmul(Ci, r)
beta = np.dot(w, w)
if np.abs(beta) < TOL:
if np.linalg.norm(r, np.infty) < TOL:
print(i+1, "steps")
print(x)
print(r)
return
s = beta/alpha
v = np.matmul(Ct, w) + s*v
alpha = beta
print("Max iteration exceeded")
return x
MAX_ITER = 1000
TOL = 0.05
sol = conj_solve(A, b, pre=True)
Using this, I get 2.55516527 as the first element of the array, but it should be 2.55613420.
Or is there a language/program where I can specify the precision of arithmetic?
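(The closest thing I'm aware of is Python's decimal module, which lets you set a working precision for scalar arithmetic, as in the snippet below, but that controls significant digits rather than decimal places, and I don't see a clean way to apply it element-wise to numpy arrays.)
from decimal import Decimal, getcontext
getcontext().prec = 8                 # 8 significant digits, not decimal places
x = Decimal(1) / Decimal(3)
print(x)                              # 0.33333333
print(x * Decimal(3))                 # 0.99999999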
Precision/rounding during the calculation is unlikely to be the issue.
To test this I ran the calculation with precisions that bracket the precision you are aiming for: once with np.float64, and once with np.float32. Here is a table of the printed results, their approximate decimal precision, and the result of the calculation (ie, the first printed array value).
numpy type    decimal places    result
----------    --------------    ----------
np.float64    15                2.55516527
np.float32     6                2.5551653
Given that these agree so closely, I doubt an intermediate precision of 8 decimal places is going to give an answer that falls outside this range (ie, not 2.55613420, which is already off in the 4th digit).
This part isn't part of my answer, but is a comment on using mpmath. The questioner suggested it in the comments, and it was my first thought too, so I ran a quick test to see if it behaved how I expected with low-precision calculations. It didn't, so I abandoned it (but I'm not an expert with it).
Here's my test function, basically multiplying 1/N by N and 1/N repeatedly to emphasise the error in 1/N.
import mpmath
import numpy as np
def precision_test(dps=100, N=19, t=mpmath.mpf):
with mpmath.workdps(dps):
x = t(1)/t(N)
print(x)
y = x
for i in range(10000):
y *= x
y *= N
print(y)
This works as expected with, eg, np.float32:
precision_test(dps=2, N=3, t=np.float32)
# 0.33333334
# 0.3334327041164994
Note that the error has propagated into more significant digits, as expected.
But with mpmath, I could never get that to happen (testing with a range of dps values and various prime values of N):
precision_test(dps=2, N=3)
# 0.33
# 0.33
Because of this test, I decided mpmath is not going to give normal results for low precision calculations.
TL;DR:
mpmath didn't behave how I expected at low precision so I abandoned it.
I have two functions that compute the same metric. One ends up using a list comprehension to cycle through the calculation; the other uses only numpy tensor operations. The functions take in an (N, 3) array, where N is the number of points in 3D space. When N is below roughly 3000 the tensor function is faster; above that the list comprehension is faster. Both seem to have linear time complexity in N, i.e. the two time-vs-N lines cross at N ≈ 3000.
def approximate_area_loop(section, num_area_divisions):
n_a_d = num_area_divisions
interp_vectors = get_section_interp_(section)
a1 = section[:-1]
b1 = section[1:]
a2 = interp_vectors[:-1]
b2 = interp_vectors[1:]
c = lambda u: (1 - u) * a1 + u * a2
d = lambda u: (1 - u) * b1 + u * b2
x = lambda u, v: (1 - v) * c(u) + v * d(u)
area = np.sum([np.linalg.norm(np.cross((x((i + 1)/n_a_d, j/n_a_d) - x(i/n_a_d, j/n_a_d)),\
(x(i/n_a_d, (j +1)/n_a_d) - x(i/n_a_d, j/n_a_d))), axis = 1)\
for i in range(n_a_d) for j in range(n_a_d)])
Dt = section[-1, 0] - section[0, 0]
return area, Dt
def approximate_area_tensor(section, num_area_divisions):
divisors = np.linspace(0, 1, num_area_divisions + 1)
interp_vectors = get_section_interp_(section)
a1 = section[:-1]
b1 = section[1:]
a2 = interp_vectors[:-1]
b2 = interp_vectors[1:]
c = np.multiply.outer(a1, (1 - divisors)) + np.multiply.outer(a2, divisors) # c_areas_vecs_divs
d = np.multiply.outer(b1, (1 - divisors)) + np.multiply.outer(b2, divisors) # d_areas_vecs_divs
x = np.multiply.outer(c, (1 - divisors)) + np.multiply.outer(d, divisors) # x_areas_vecs_Divs_divs
u = x[:, :, 1:, :-1] - x[:, :, :-1, :-1] # u_areas_vecs_Divs_divs
v = x[:, :, :-1, 1:] - x[:, :, :-1, :-1] # v_areas_vecs_Divs_divs
sub_area_norm_vecs = np.cross(u, v, axis = 1) # areas_crosses_Divs_divs
sub_areas = np.linalg.norm(sub_area_norm_vecs, axis = 1) # areas_Divs_divs (values are now sub areas)
area = np.sum(sub_areas)
Dt = section[-1, 0] - section[0, 0]
return area, Dt
Why does the list comprehension version run faster at large N? Surely the tensor version should be faster? I'm wondering if it's something to do with the size of the intermediate arrays, meaning they're too big to fit in cache. Please ask if I haven't included enough information; I'd really like to get to the bottom of this.
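In case it helps, here's roughly the harness I'm timing them with. get_section_interp_ isn't shown above, so the stub here is just a stand-in to make the snippet runnable; the real one returns another (N, 3) array.
import numpy as np
import time
def get_section_interp_(section):
    # placeholder only: the real function isn't shown in the question; this
    # just offsets the section so both area functions can run on random data
    return section + np.array([0.0, 0.0, 1.0])
for N in (1000, 3000, 10000):
    section = np.random.rand(N, 3)
    for fn in (approximate_area_loop, approximate_area_tensor):
        t0 = time.perf_counter()
        fn(section, 10)
        print(fn.__name__, N, round(time.perf_counter() - t0, 4))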
The bottleneck in the fully vectorized function was indeed in np.linalg.norm, as @hpaulj's comment suggested.
Norm was used only to get the magnitude of all the vectors contained in axis 1. A much simpler and faster method was to just:
sub_areas = np.sqrt((sub_area_norm_vecs*sub_area_norm_vecs).sum(axis = 1))
This gives exactly the same results and made the code up to 25 times faster than the loop implementation (even though the loop doesn't use linalg.norm either).
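As a quick, self-contained way to confirm the two expressions are equivalent (and to time them on your own machine) on a random array of roughly the right shape:
import numpy as np
from timeit import timeit
a = np.random.rand(3000, 3, 20, 20)        # stand-in for sub_area_norm_vecs
norm_np = np.linalg.norm(a, axis=1)
norm_manual = np.sqrt((a * a).sum(axis=1))
print(np.allclose(norm_np, norm_manual))   # True
print(timeit(lambda: np.linalg.norm(a, axis=1), number=20))
print(timeit(lambda: np.sqrt((a * a).sum(axis=1)), number=20))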
I use a binocular (stereo) camera to reconstruct points in 3D from 2D pictures. I took many pictures with the binocular camera and reconstructed points (the feature points have already been found), but I found that the 3D models I reconstructed are not in the same coordinate system.
I don't know the extrinsic parameters (by the way, I wonder how to get these parameters, because I already have the intrinsic matrix from calibration).
So I compute the essential matrix E (8-point algorithm), assume the projection matrix P1 of camera 1 is [I|0], and calculate P2 from P1 and E.
The last step is to calculate the points in 3D by triangulation.
Code:
def compute_normalized_image_to_image_matrix(p1, p2, compute_essential=False):
""" Computes the fundamental or essential matrix from corresponding points
using the normalized 8 point algorithm.
:input p1, p2: corresponding points with shape 3 x n
:returns: fundamental or essential matrix with shape 3 x 3
"""
n = p1.shape[1]
if p2.shape[1] != n:
raise ValueError('Number of points do not match.')
# preprocess image coordinates
p1n, T1 = scale_and_translate_points(p1)
p2n, T2 = scale_and_translate_points(p2)
# compute F or E with the coordinates
F = compute_image_to_image_matrix(p1n, p2n, compute_essential)
# reverse preprocessing of coordinates
# We know that P1' E P2 = 0
F = np.dot(T1.T, np.dot(F, T2))
return F / F[2, 2]
def compute_fundamental_normalized(p1, p2):
return compute_normalized_image_to_image_matrix(p1, p2)
def compute_essential_normalized(p1, p2):
return compute_normalized_image_to_image_matrix(p1, p2, compute_essential=True)
def scale_and_translate_points(points):
""" Scale and translate image points so that centroid of the points
are at the origin and avg distance to the origin is equal to sqrt(2).
:param points: array of homogenous point (3 x n)
:returns: array of same input shape and its normalization matrix
"""
x = points[0]
y = points[1]
center = points.mean(axis=1) # mean of each row
cx = x - center[0] # center the points
cy = y - center[1]
dist = np.sqrt(np.power(cx, 2) + np.power(cy, 2))
scale = np.sqrt(2) / dist.mean()
norm3d = np.array([
[scale, 0, -scale * center[0]],
[0, scale, -scale * center[1]],
[0, 0, 1]
])
return np.dot(norm3d, points), norm3d
def compute_P_from_fundamental(F):
""" Compute the second camera matrix (assuming P1 = [I 0])
from a fundamental matrix.
"""
e = compute_epipole(F.T) # left epipole
Te = skew(e)
return np.vstack((np.dot(Te, F.T).T, e)).T
def compute_P_from_essential(E):
""" Compute the second camera matrix (assuming P1 = [I 0])
from an essential matrix. E = [t]R
:returns: list of 4 possible camera matrices.
"""
U, S, V = np.linalg.svd(E)
# Ensure rotation matrix are right-handed with positive determinant
if np.linalg.det(np.dot(U, V)) < 0:
V = -V
# create 4 possible camera matrices (Hartley p 258)
W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
P2s = [np.vstack((np.dot(U, np.dot(W, V)).T, U[:, 2])).T,
np.vstack((np.dot(U, np.dot(W, V)).T, -U[:, 2])).T,
np.vstack((np.dot(U, np.dot(W.T, V)).T, U[:, 2])).T,
np.vstack((np.dot(U, np.dot(W.T, V)).T, -U[:, 2])).T]
return P2s
def linear_triangulation(p1, p2, m1, m2):
"""
Linear triangulation (Hartley ch 12.2 pg 312) to find the 3D point X
where p1 = m1 * X and p2 = m2 * X. Solve AX = 0.
    :param p1, p2: 2D points in homogeneous or cartesian coordinates. Shape (2 x n)
    :param m1, m2: Camera matrices associated with p1 and p2. Shape (3 x 4)
    :returns: 4 x n homogeneous 3D triangulated points
"""
num_points = p1.shape[1]
res = np.ones((4, num_points))
for i in range(num_points):
A = np.asarray([
(p1[0, i] * m1[2, :] - m1[0, :]),
(p1[1, i] * m1[2, :] - m1[1, :]),
(p2[0, i] * m2[2, :] - m2[0, :]),
(p2[1, i] * m2[2, :] - m2[1, :])
])
_, _, V = np.linalg.svd(A)
X = V[-1, :]
res[:, i] = X / X[3]
return res
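For reference, this is roughly how I chain these functions for one image pair (simplified; pick_valid_P2 below is just a placeholder for the cheirality check I use to select which of the four P2 candidates puts the points in front of both cameras):
# p1, p2: matched homogeneous image points, shape (3, n), already normalized
# by the inverse intrinsic matrix so that E (rather than F) is estimated
E = compute_essential_normalized(p1, p2)
P1 = np.hstack((np.eye(3), np.zeros((3, 1))))   # camera 1 assumed at [I|0]
P2_candidates = compute_P_from_essential(E)
P2 = pick_valid_P2(P2_candidates, p1, p2, P1)   # placeholder: cheirality test
X = linear_triangulation(p1, p2, P1, P2)        # 4 x n homogeneous 3D points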
So how can I solve this? I want all my reconstructed points to be in the same coordinate system. Could you please tell me how? Thank you very much!
Firstly, apologies for the slightly cryptic title to my question. Let me try to explain my need:
I am reading two features, X1 and X2, from a CSV file. I have a training set in a CSV file containing 1000 records, with each line holding the values of X1 and X2. To make my training set fit my machine learning code better, I want to do feature mapping that takes X1 and X2 and creates polynomial terms up to the power of 4. For example, if X1 = a and X2 = b, I want to add new features a^2, a*b, b^2, a^3, a^2*b, a*b^2, a^4, ... and so on.
Now if I read them as a numpy matrix, I want to see the data like this:
[[ 1  a  b  a^2  a*b  b^2  a^3  a^2*b ... ]
 [ 1  c  d  c^2  c*d  d^2  c^3  c^2*d ... ]
 ...
 ...                                      ]
Note that the number of rows is fixed, but the number of columns is determined by the degree selected. Also, the first three columns need to be
[[1 a b ..]
[1 c d ..]
..
..]
The pseudo code I am thinking of is as follows:
def poly(X): # where X is a numpy matrix with X1, X2 columns,
degree = 4;
r= X.shape[0]
c=1 # number of columns
val_matrix= np.ones(shape=(r,c)) # creating a (r,1) matrix init with 1s
# *start of psuedo code*
while i<=degree:
while j <=i:
val_matrix[:, c+1] = (X1.^(i-j)).*(X2.^j)
I am not sure how to get this working in Python. I would appreciate some suggestions. Note that ^ refers to raising to a power.
Starting with two vectors X1 and X2 you could create the monomials:
X1p = X1[:, None]**np.arange(max_deg + 1)
X2p = X2[:, None]**np.arange(max_deg + 1)
and then combine them using mgrid
i, j = np.mgrid[:max_deg + 1,:max_deg + 1]
m = i+j <= max_deg
result = X1p[:, i[m]]*X2p[:, j[m]]
Alternatively you could apply the indices directly to X1 and X2:
result = X1[:, None]**i[m] * X2[:, None]**j[m]
This requires fewer lines of code but uses more multiplications.
If the number of multiplications is a concern, X1p and X2p could also be computed more cheaply; for X1p:
X1p = np.empty((len(X1), max_deg + 1), X1.dtype)
X1p[:, 0] = 1
X1p[:, 1:] = X1[:, None]
np.multiply.accumulate(X1p[:,1:], axis=-1, out=X1p[:, 1:])
and similarly for X2p.
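Putting the pieces together into something like the poly(X) from the question (a sketch; note that the column order follows the mgrid flattening, i.e. [1, b, b^2, ..., a, a*b, ...], so reorder the columns if you need exactly [1, a, b, ...] first):
import numpy as np
def poly_features(X, max_deg=4):
    # X is (n, 2); columns are X1 and X2
    X1, X2 = X[:, 0], X[:, 1]
    X1p = X1[:, None] ** np.arange(max_deg + 1)
    X2p = X2[:, None] ** np.arange(max_deg + 1)
    i, j = np.mgrid[:max_deg + 1, :max_deg + 1]
    m = i + j <= max_deg
    return X1p[:, i[m]] * X2p[:, j[m]]
X = np.array([[2.0, 3.0],
              [1.0, 5.0]])
print(poly_features(X, max_deg=2))
# each row holds X1^i * X2^j for every pair with i + j <= 2,
# e.g. the first row is [1, 3, 9, 2, 6, 4]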