Does SciPy.sparse.linalg.svds give matrix rank?

I have a largish sparse binary-valued rectangular matrix, M, where n > m. My understanding of matrix rank suggests the largest possible rank is m, and my understanding of SVD suggests the rank of a matrix can be found by identifying the number of non-zero singular values.
I'm attempting to use SciPy.sparse.linalg.svds to determine the rank of M. The first problem is that I cannot compute m singular values, since k can only go up to p = m - 1. So I thought I'd be clever and compute the p highest values and the p lowest values, combine them, run set to find the unique values, and end up with a list of at most m values. This didn't work out according to plan.
Here's a MWE:
import scipy.sparse
import scipy.sparse.linalg
import numpy
import itertools

m = 6
n = 10
test = scipy.sparse.rand(m, n, density=0.25, format='lil', dtype=None, random_state=None)
for i, j in itertools.product(list(range(m)), list(range(n))):
    test[i, j] = 1 if test[i, j] > 0 else 0
U1, S1, VT1 = scipy.sparse.linalg.svds(test, k=min(test.shape) - 1, ncv=None, tol=1e-5,
                                       which='LM', v0=None, maxiter=None,
                                       return_singular_vectors=True)
U2, S2, VT2 = scipy.sparse.linalg.svds(test, k=min(test.shape) - 1, ncv=None, tol=1e-5,
                                       which='SM', v0=None, maxiter=None,
                                       return_singular_vectors=True)
S = list(set(numpy.concatenate((S1, S2), axis=0)))
len(S)
Here's a sample output:
10
with S being
[0.5303120147925737,
1.0725314055439354,
2.7940865631779643,
1.5060744813473148,
1.8412737686034186,
0.3208993522030293,
0.5303120147925728,
1.072531405543936,
1.5060744813473153,
1.841273768603419]
How can an m x n matrix with m < n have a rank of n? Are my assumptions above incorrect, or am I misapplying the function? My real M is sparse, binary-valued, and roughly 300 x 500.
Thanks for looking!
With help from @tch I've come up with the following hack. To check for rank = m, I only need to check the smallest value and append it to the m - 1 values obtained from the svds 'LM' (highest values) call. It turns out svds doesn't report 0s, so the lowest-values call will return nan for rank < m. Here's the revised code:
import scipy.sparse
import scipy.sparse.linalg
import numpy

m = 6
n = 10
test = scipy.sparse.rand(m, n, density=0.25, format='lil', dtype=None, random_state=None)
test = test > 0
test = test.astype('d')
U1, S1, VT1 = scipy.sparse.linalg.svds(test, k=min(test.shape) - 1, ncv=None, tol=1e-5,
                                       which='LM', v0=None, maxiter=None,
                                       return_singular_vectors=True)
U2, S2, VT2 = scipy.sparse.linalg.svds(test, k=1, ncv=None, tol=1e-5,
                                       which='SM', v0=None, maxiter=None,
                                       return_singular_vectors=True)
S = list(set(numpy.concatenate((S1, S2), axis=0)))
print(sum(x > 1e-10 for x in S))
S

What you are trying to do would work in exact arithmetic (assuming the matrix has no repeated singular values). However, due to numerical rounding errors, it won't work in practice.
To see this try
import numpy as np

C = np.random.randn(10, 3)
u, s, vt = np.linalg.svd(C @ C.T)
Note that C @ C.T is a 10x10 matrix with rank 3. However, you will see that none of the singular values are exactly zero (though 7 are close to 0).
When finding the rank of a matrix numerically, thresholding is often used to determine what it means for a singular value to be 0. For instance, everything below 1e-10 may be set to zero.
If the matrix has exact rank k, hopefully you will see k singular values away from 0, and then min(m,n)-k singular values very close to zero. However, depending on the matrix, there may not even be a well-defined "drop".
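For instance, here is a minimal sketch of thresholded rank counting, continuing the C example above (the 1e-10 cutoff is an assumed value; a suitable threshold depends on the matrix's scale):

tol = 1e-10
numerical_rank = int(np.sum(s > tol))
print(numerical_rank)  # 3 for the rank-3 example above
# numpy's built-in helper does the same thing with its own default tolerance:
print(np.linalg.matrix_rank(C @ C.T))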
So for your example, you could try removing elements which are within some threshold of one another. However, this could run into issues if the matrix has repeated singular values.
You could also compute just the smallest singular values and see how many are near zero. Presumably the matrix is at least rank 1, so the first singular value will be nonzero.
As a note about finding where test[i,j] > 0: you can just do test > 0 and it will give a boolean array with True in the nonzero entries and False elsewhere. You can also set the dtype of the random matrix to bool and it will be True whenever the random number is nonzero.
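A minimal sketch of that conversion, mirroring the revised code above:

mask = test > 0            # sparse boolean matrix: True at nonzero entries
binary = mask.astype('d')  # back to 0.0/1.0 floats for svds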

Related

Translating a mixed-integer programming formulation to Scipy

I would like to solve the above formulation in SciPy using milp(). For a given graph (V, E), f_ij and x_ij are the decision variables. f_ij is the flow from i to j (it can be continuous). x_ij is the number of vehicles from i to j. p is the price. X is the available number of vehicles in a region. c is the capacity.
I have difficulty in translating the formulation to Scipy milp code. I would appreciate it if anyone could give me some pointers.
What I have done:
The code for equation (1):
f_obj = [p[i] for i in Edge]
x_obj = [0]*len(Edge)
obj = f_obj + x_obj
Integrality:
f_cont = [0 for i in Edge]  # continuous
x_int = [1]*len(Edge) # integer
integrality = f_cont + x_int
Equation (2):
def constraints(self):
    b = []
    A = []
    const = [0]*len(Edge)  # for f_ij
    for i in v:  # for x_ij
        for e in Edge:
            if e[0] == i:
                const.append(1)
            else:
                const.append(0)
        A.append(const)
        b.append(self.accInit[i])
        const = [0]*len(Edge)  # for f_ij
    return A, b
Equation (4):
[(0, demand[e]) for e in Edge]
I'm going to do some wild guessing, given how much you've left open to interpretation. Let's assume that
this is a maximisation problem, since the minimisation problem is trivial
Expression (1) is actually the maximisation objective function, though you failed to write it as such
p and d are floating-point vectors
X is an integer vector
c is a floating-point scalar
the graph edges, since you haven't described them at all, do not matter for problem setup
The variable names are not well-chosen and hide what they actually contain. I demonstrate potential replacements.
import numpy as np
from numpy.random import default_rng
from numpy.random._generator import Generator
from scipy.optimize import milp, Bounds, LinearConstraint
import scipy.sparse

rand: Generator = default_rng(seed=0)

N = 20
price = rand.uniform(low=0, high=10, size=N)          # p
demand = rand.uniform(low=0, high=10, size=N)         # d
availability = rand.integers(low=0, high=10, size=N)  # X aka. accInit
capacity = rand.uniform(low=0, high=10)               # c

c = np.zeros(2*N)  # decision variables: f then x
c[:N] = -price     # (1) f maximised with coefficients of 'p'
# x not optimized

CONTINUOUS = 0
INTEGER = 1
integrality = np.empty_like(c, dtype=int)
integrality[:N] = CONTINUOUS  # f
integrality[N:] = INTEGER     # x

upper = np.empty_like(c)
upper[:N] = demand        # (4) f
upper[N:] = availability  # (2) x

eye_N = scipy.sparse.eye(N)
A = scipy.sparse.hstack((-eye_N, capacity*eye_N))  # (3) 0 <= -f + c*x

result = milp(
    c=c, integrality=integrality,
    bounds=Bounds(lb=np.zeros_like(c), ub=upper),
    constraints=LinearConstraint(lb=np.zeros(N), A=A),
)
print(result.message)
flow = result.x[:N]
vehicles = result.x[N:].astype(int)
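As a quick sanity check, here is a sketch that verifies the constraints on the recovered solution (result.x is only populated on success, so guard on result.status == 0, milp's code for an optimal solve):

if result.status == 0:
    assert np.all(capacity * vehicles - flow >= -1e-9)  # (3): 0 <= -f + c*x
    assert np.all(flow <= demand + 1e-9)                # (4): f bounded by demand
    assert np.all(vehicles <= availability)             # (2): x bounded by availability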

Numpy.linalg.eig is giving different results than numpy.linalg.eigh for Hermitian matrices

I have a Hermitian matrix (specifically, a Hamiltonian). Though the phase of a single eigenvector can be arbitrary, the quantities I am calculating are physical (I reduced the code a bit, keeping just the reproducible part). eig and eigh are giving very different results.
import numpy as np
import numpy.linalg as nlg
import matplotlib.pyplot as plt

def Ham(Ny, Nx, t, phi):
    h = np.zeros((Ny, Ny), dtype=complex)
    for ii in range(Ny-1):
        h[ii+1, ii] = t
    h[Ny-1, 0] = t
    h = h + np.transpose(np.conj(h))
    u = np.zeros((Ny, Ny), dtype=complex)
    for ii in range(Ny):
        u[ii, ii] = -t*np.exp(-2*np.pi*1j*phi*ii)
    u = u + 1e-10*np.eye(Ny)
    H = np.kron(np.eye(Nx, dtype=int), h) + np.kron(np.diag(np.ones(Nx-1), 1), u) \
        + np.kron(np.diag(np.ones(Nx-1), -1), np.transpose(np.conj(u)))
    H[0:Ny, Ny*(Nx-1):Ny*Nx] = np.transpose(np.conj(u))
    H[Ny*(Nx-1):Ny*Nx, 0:Ny] = u
    x = []; y = []
    for jj in range(1, Nx+1):
        for ii in range(1, Ny+1):
            x.append(jj); y.append(ii)
    x = np.asarray(x)
    y = np.asarray(y)
    return H, x, y

def C_num(Nx, Ny, E, t, phi):
    H, x, y = Ham(Ny, Nx, t, phi)
    ifhermitian = np.allclose(H, np.transpose(np.conj(H)), rtol=1e-5, atol=1e-8)
    assert ifhermitian == True
    Hp = H
    V, wf = nlg.eigh(Hp)  # Check: eig gives different result
    idx = np.argsort(np.real(V))
    wf = wf[:, idx]
    normmat = wf*np.conj(wf)
    norm = np.sqrt(np.sum(normmat, axis=0))
    wf = wf/(norm*np.sqrt(len(H)))
    wf = wf[:, V <= E]  # Choose a subset of eigenvectors
    V01 = wf*np.exp(1j*x)[:, None]; V12 = wf*np.exp(1j*y)[:, None]
    V23 = wf*np.exp(1j*x)[:, None]; V30 = wf*np.exp(1j*y)[:, None]
    wff = np.transpose(np.conj(wf))
    C01 = np.dot(wff, V01); C12 = np.dot(wff, V12); C23 = np.dot(wff, V23); C30 = np.dot(wff, V30)
    F = nlg.multi_dot([C01, C12, C23, C30])
    ifhermitian = np.allclose(F, np.transpose(np.conj(F)), rtol=1e-5, atol=1e-8)
    assert ifhermitian == True
    evals, efuns = nlg.eig(F)  # Check: eig gives different result
    C = (1/(2*np.pi))*np.sum(np.angle(evals))
    return C

C = C_num(16, 16, 0, 1, 1/8)
print(C)
Changing both nlg.eigh calls to nlg.eig, or even changing only the last one, gives very different results.
As I mentioned elsewhere, eigenvalues and eigenvectors are not unique.
The only thing that is guaranteed is that each eigenpair satisfies A v = lambda v. The two matrices returned by eig and eigh both describe such solutions, and it is natural that eig, being the more general routine, gives less exact results.
You can see that both solutions diagonalize your matrix, just in different ways:
H, x, y = Ham(16, 16, 1, 1./8)
D, V = nlg.eig(H)
Dh, Vh = nlg.eigh(H)
Then
import matplotlib.pyplot as plt
plt.figure(figsize=(14, 7))
plt.subplot(121)
plt.imshow(abs(np.conj(Vh.T) @ H @ Vh))
plt.title('diagonalized with eigh')
plt.subplot(122)
plt.imshow(abs(np.conj(V.T) @ H @ V))
plt.title('diagonalized with eig')
The resulting plot shows that both diagonalizations were successful, but that the eigenvalues come out in a different order.
If you sort the eigenvalues you see they match
plt.plot(np.diag(np.real(np.conj(Vh.T) @ H @ Vh)))
plt.plot(np.diag(np.imag(np.conj(Vh.T) @ H @ Vh)))
plt.plot(np.sort(np.diag(np.real(np.conj(V.T) @ H @ V))))
plt.title('eigenvalues')
plt.legend(['real eigh', 'imag eigh', 'sorted real eig'], loc='upper left')
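The same point can be checked numerically without a plot (a sketch reusing D and Dh from above; eig returns its eigenvalues in arbitrary order and with tiny imaginary parts from rounding, so this should print True and a near-zero number):

print(np.allclose(np.sort(D.real), np.sort(Dh)))  # same spectrum after sorting
print(np.max(np.abs(D.imag)))                     # imaginary parts are just noise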
Since many eigenvalues are repeated, the eigenvector associated with a given eigenvalue is not unique either; the only guarantee is that the eigenvectors for a given eigenvalue span the same subspace.
The diagonalization test is the best in my opinion.
Is eigh always better than eig?
If you look through the LAPACK eigenvalue routines you will find many options, so I cannot discuss each possible implementation here. Common sense says we can expect the symmetric/Hermitian routines to perform better; otherwise there would be no reason to add a more limited routine. But I have never carefully tested the behavior of eig vs eigh.
To get an intuition, compare the equation for tridiagonalization of symmetric matrices with the equation for reduction of a general matrix to its Hessenberg form found here.
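To make that intuition concrete, here is a small sketch using scipy.linalg.hessenberg: for a symmetric matrix the Hessenberg form is tridiagonal, which is the extra structure the symmetric solvers get to exploit.

import numpy as np
from scipy.linalg import hessenberg

A = np.random.randn(6, 6)
S = A + A.T         # symmetric matrix
Hs = hessenberg(S)  # tridiagonal: zeros outside the first sub- and superdiagonal
Hg = hessenberg(A)  # general case: only zeros below the first subdiagonal
print(np.round(Hs, 2))
print(np.round(Hg, 2))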

Automatically round arithmetic operations to eight decimals

I am doing a numerical analysis exercise where I need to calculate the solution of a linear system using a specific algorithm. My answer differs from the book's answer in some decimal places, which I believe is due to rounding errors. Is there a way to automatically round to eight decimal places after each arithmetic operation? The following is my Python code.
import numpy as np

A1 = [4, -1, 0, 0, -1, 4, -1, 0,
      0, -1, 4, -1, 0, 0, -1, 4]
A1 = np.array(A1).reshape([4, 4])
I = -np.identity(4)
O = np.zeros([4, 4])
A = np.block([[A1, I, O, O],
              [I, A1, I, O],
              [O, I, A1, I],
              [O, O, I, A1]])
b = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6])

def conj_solve(A, b, pre=False):
    n = len(A)
    C = np.identity(n)
    if pre == True:
        for i in range(n):
            C[i, i] = np.sqrt(A[i, i])
    Ci = np.linalg.inv(C)
    Ct = np.transpose(Ci)
    x = np.zeros(n)
    r = b - np.matmul(A, x)
    w = np.matmul(Ci, r)
    v = np.matmul(Ct, w)
    alpha = np.dot(w, w)
    for i in range(MAX_ITER):
        if np.linalg.norm(v, np.infty) < TOL:
            print(i+1, "steps")
            print(x)
            print(r)
            return
        u = np.matmul(A, v)
        t = alpha/np.dot(v, u)
        x = x + t*v
        r = r - t*u
        w = np.matmul(Ci, r)
        beta = np.dot(w, w)
        if np.abs(beta) < TOL:
            if np.linalg.norm(r, np.infty) < TOL:
                print(i+1, "steps")
                print(x)
                print(r)
                return
        s = beta/alpha
        v = np.matmul(Ct, w) + s*v
        alpha = beta
    print("Max iteration exceeded")
    return x

MAX_ITER = 1000
TOL = 0.05
sol = conj_solve(A, b, pre=True)
Using this, I get 2.55516527 as the first element of the array, which should be 2.55613420.
Or, is there a language/program where I can specify the precision of arithmetic?
Precision/rounding during the calculation is unlikely to be the issue.
To test this I ran the calculation with precisions that bracket the precision you are aiming for: once with np.float64, and once with np.float32. Here is a table of the printed results, their approximate decimal precision, and the result of the calculation (ie, the first printed array value).
numpy type    decimal places    result
--------------------------------------
np.float64    15                2.55516527
np.float32    6                 2.5551653
Given that these are so much in agreement, I doubt an intermediate precision of 8 decimal places is going to give an answer that's not between these two results (ie, 2.55613420 that's off in the 4th digit).
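Here is a sketch of how such a bracketing test can be run with the code above (the dtype of A and b drives the precision of the whole calculation):

for dtype in (np.float64, np.float32):
    conj_solve(A.astype(dtype), b.astype(dtype), pre=True)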
This isn't part of my answer, but is a comment on using mpmath. The questioner suggested it in the comments, and it was my first thought too, so I ran a quick test to see if it behaved how I expected with low precision calculations. It didn't, so I abandoned it (but I'm not an expert with it).
Here's my test function, basically multiplying 1/N by N and 1/N repeatedly to emphasise the error in 1/N.
import mpmath
import numpy as np

def precision_test(dps=100, N=19, t=mpmath.mpf):
    with mpmath.workdps(dps):
        x = t(1)/t(N)
        print(x)
        y = x
        for i in range(10000):
            y *= x
            y *= N
        print(y)
This works as expected with, eg, np.float32:
precision_test(dps=2, N=3, t=np.float32)
# 0.33333334
# 0.3334327041164994
Note that the error has propagated into more significant digits, as expected.
But with mpmath, I could never get that to happen (testing with a range of dps and a various prime N values):
precision_test(dps=2, N=3)
# 0.33
# 0.33
Because of this test, I decided mpmath is not going to give normal results for low precision calculations.
TL;DR:
mpmath didn't behave how I expected at low precision so I abandoned it.

Speed Up a for Loop - Python

I have a code that works perfectly well but I wish to speed up the time it takes to converge. A snippet of the code is shown below:
import time
import numpy as np
from numpy.linalg import norm

# data (rows x columns) and target (rows,) are assumed to be defined elsewhere
def myfunction(x, i):
    y = x + (min(0, target[i] - data[i, :] @ x)) * data[i] / (norm(data[i])**2)
    return y

rows, columns = data.shape

start = time.time()
iterate = 0
iterate_count = []
norm_count = []
res = 5
x_not = np.ones(columns)
norm_count.append(norm(x_not))
iterate_count.append(0)
while res > 1e-8:
    for row in range(rows):
        y = myfunction(x_not, row)
        x_not = y
    iterate += 1
    iterate_count.append(iterate)
    norm_count.append(norm(x_not))
    res = abs(norm_count[-1] - norm_count[-2])
print('Converge at {} iterations'.format(iterate))
print('Duration: {:.4f} seconds'.format(time.time() - start))
I am relatively new to Python. I would appreciate any hints/assistance.
Ax = b is the problem we wish to solve. Here, 'A' is the 'data' and 'b' is the 'target'.
Ugh! After spending a while on this I don't think it can be done the way you've set up your problem. In each iteration over the row, you modify x_not and then pass the updated result to get the solution for the next row. This kind of setup can't be vectorized easily. You can learn the thought process of vectorization from the failed attempt, so I'm including it in the answer. I'm also including a different iterative method to solve linear systems of equations. I've included a vectorized version -- where the solution is updated using matrix multiplication and vector addition, and a loopy version -- where the solution is updated using a for loop to demonstrate what you can expect to gain.
1. The failed attempt
Let's take a look at what you're doing here.
def myfunction(x, i):
    y = x + (min(0, target[i] - data[i, :] @ x)) * (data[i] / (norm(data[i])**2))
    return y
You subtract
the dot product of (the ith row of data and x_not)
from the ith element of target,
capped from above at zero. Let's call this part1.
You multiply this result by the ith row of data divided by the norm of that row squared. Let's call this part2.
Then you add part1 * part2 to x_not.
Now let's look at the shapes of the matrices.
data is (M, N).
target is (M, ).
x_not is (N, )
Instead of doing these operations rowwise, you can operate on the entire matrix!
1.1. Simplifying the dot product.
Instead of doing data[i, :] @ x, you can do data @ x_not, which gives an array whose ith element is the dot product of the ith row with x_not. So now we have data @ x_not with shape (M, ).
Then, you can subtract this from the entire target array, so target - (data @ x_not) has shape (M, ).
So far, we have
part1 = target - (data @ x_not)
Next, if anything is greater than zero, set it to zero.
part1[part1 > 0] = 0
1.2. Finding rowwise norms.
Finally, you want to multiply this by the row of data, and divide by the square of the L2-norm of that row. To get the norm of each row of a matrix, you do
rownorms = np.linalg.norm(data, axis=1)
This is a (M, ) array, so we need to convert it to a (M, 1) array so we can divide each row. rownorms[:, None] does this. Then divide data by this.
part2 = data / (rownorms[:, None]**2)
1.3. Add to x_not
Finally, we add each row of part1[:, None] * part2 to the original x_not and return the result (part1 needs the trailing axis to broadcast against part2's rows):
result = x_not + (part1[:, None] * part2).sum(axis=0)
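For reference, a minimal sketch collecting these pieces into one function (same names as above; as explained next, it is not equivalent to the original row-by-row update):

def myfunction_vectorized(x_not):
    part1 = target - (data @ x_not)   # all residuals at once
    part1[part1 > 0] = 0              # min(0, ...) applied elementwise
    rownorms = np.linalg.norm(data, axis=1)
    part2 = data / (rownorms[:, None]**2)
    return x_not + (part1[:, None] * part2).sum(axis=0)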
Here's where we get stuck. In your approach, each call to myfunction() computes part1 from the current x_not, which was updated by the previous call to myfunction(), so the row updates are inherently sequential.
2. Why vectorize?
Using numpy's inbuilt methods instead of looping allows it to offload the calculation to its C backend, so it runs faster. If your numpy is linked to a BLAS backend, you can extract even more speed by using your processor's SIMD registers
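As a tiny illustration of that gap, here is a sketch comparing a pure-Python matrix-vector product against numpy's @ (exact numbers depend on your BLAS build):

import timeit
import numpy as np

A = np.random.random((500, 500))
x = np.random.random(500)

def loop_matvec(A, x):
    # nested Python loops: one dot product per row
    return np.array([sum(A[i, j] * x[j] for j in range(len(x))) for i in range(A.shape[0])])

print(timeit.timeit(lambda: loop_matvec(A, x), number=3))  # Python-level loops
print(timeit.timeit(lambda: A @ x, number=3))              # BLAS-backed matmul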
The conjugate gradient method is a simple iterative method to solve certain systems of equations. There are other more complex algorithms that can solve general systems well, but this should do for the purposes of our demo. Again, the purpose is not to have an iterative algorithm that will perfectly solve any linear system of equations, but to show what kind of speedup you can expect if you vectorize your code.
Given your system
data @ x_not = target
Let's define some variables:
A = data.T @ data
b = data.T @ target
And we'll solve the system A @ x = b
x = np.zeros((columns,))  # Initial guess. Can be anything
resid = b - A @ x
p = resid
while (np.abs(resid) > tolerance).any():
    Ap = A @ p
    alpha = (resid.T @ resid) / (p.T @ Ap)
    x = x + alpha * p
    resid_new = resid - alpha * Ap
    beta = (resid_new.T @ resid_new) / (resid.T @ resid)
    p = resid_new + beta * p
    resid = resid_new + 0
To contrast the fully vectorized approach with one that uses iterations to update the rows of x and resid_new, let's define another implementation of the CG solver that does this.
def solve_loopy(data, target, itermax=100, tolerance=1e-8):
    A = data.T @ data
    b = data.T @ target
    rows, columns = data.shape
    x = np.zeros((columns,))  # Initial guess. Can be anything
    resid = b - A @ x
    resid_new = b - A @ x
    p = resid
    niter = 0
    while (np.abs(resid) > tolerance).any() and niter < itermax:
        Ap = A @ p
        alpha = (resid.T @ resid) / (p.T @ Ap)
        for i in range(len(x)):
            x[i] = x[i] + alpha * p[i]
            resid_new[i] = resid[i] - alpha * Ap[i]
        # resid_new = resid - alpha * A @ p
        beta = (resid_new.T @ resid_new) / (resid.T @ resid)
        p = resid_new + beta * p
        resid = resid_new + 0
        niter += 1
    return x
And our original vector method:
def solve_vect(data, target, itermax=100, tolerance=1e-8):
    A = data.T @ data
    b = data.T @ target
    rows, columns = data.shape
    x = np.zeros((columns,))  # Initial guess. Can be anything
    resid = b - A @ x
    resid_new = b - A @ x
    p = resid
    niter = 0
    while (np.abs(resid) > tolerance).any() and niter < itermax:
        Ap = A @ p
        alpha = (resid.T @ resid) / (p.T @ Ap)
        x = x + alpha * p
        resid_new = resid - alpha * Ap
        beta = (resid_new.T @ resid_new) / (resid.T @ resid)
        p = resid_new + beta * p
        resid = resid_new + 0
        niter += 1
    return x
Let's solve a simple system to see if this works first:
2x1 + x2 = -5
−x1 + x2 = -2
should give a solution of [-1, -3]
data = np.array([[ 2, 1],
                 [-1, 1]])
target = np.array([-5, -2])
print(solve_loopy(data, target))
print(solve_vect(data, target))
Both give the correct solution [-1, -3], yay! Now on to bigger things:
data = np.random.random((100, 100))
target = np.random.random((100, ))
Let's ensure the solution is still correct:
sol1 = solve_loopy(data, target)
np.allclose(data @ sol1, target)
# Output: False
sol2 = solve_vect(data, target)
np.allclose(data @ sol2, target)
# Output: False
Hmm, looks like the CG method doesn't work for the badly conditioned random matrices we created. Well, at least both give the same result.
np.allclose(sol1, sol2)
# Output: True
But let's not get discouraged! We don't really care if it works perfectly, the point of this is to demonstrate how amazing vectorization is. So let's time this:
import timeit
timeit.timeit('solve_loopy(data, target)', number=10, setup='from __main__ import solve_loopy, data, target')
# Output: 0.25586539999994784
timeit.timeit('solve_vect(data, target)', number=10, setup='from __main__ import solve_vect, data, target')
# Output: 0.12008900000000722
Nice! A ~2x speedup simply by avoiding a loop while updating our solution!
For larger systems, this will be even better.
for N in [10, 50, 100, 500, 1000]:
    data = np.random.random((N, N))
    target = np.random.random((N, ))
    t_loopy = timeit.timeit('solve_loopy(data, target)', number=10, setup='from __main__ import solve_loopy, data, target')
    t_vect = timeit.timeit('solve_vect(data, target)', number=10, setup='from __main__ import solve_vect, data, target')
    print(N, t_loopy, t_vect, t_loopy/t_vect)
This gives us:
N t_loopy t_vect speedup
00010 0.002823 0.002099 1.345390
00050 0.051209 0.014486 3.535048
00100 0.260348 0.114601 2.271773
00500 0.980453 0.240151 4.082644
01000 1.769959 0.508197 3.482822

How to sort 4 integers using only min() and max()? Python

I am trying to sort 4 integers input by the user into numerical order using only the min() and max() functions in Python. I can get the highest and lowest number easily, but cannot work out a combination to order the two middle numbers. Does anyone have an idea?
So I'm guessing your input is something like this?
string = input('Type your numbers, separated by a space')
Then I'd do:
numbers = [int(i) for i in string.strip().split(' ')]
amount_of_numbers = len(numbers)
sorted_numbers = []  # avoid shadowing the builtin sorted()
for i in range(amount_of_numbers):
    x = max(numbers)
    numbers.remove(x)
    sorted_numbers.append(x)
print(sorted_numbers)
This will sort them using max, but min can also be used.
If you didn't have to use min and max:
string = input('Type your numbers, separated by a space')
numbers = [int(i) for i in string.strip().split(' ')]
numbers.sort() #an optional reverse argument possible
print(numbers)
LITERALLY just min and max? Odd, but, why not. I'm about to crash, but I think the following would work:
arr = [None] * 4  # result in descending order

# Easy
arr[0] = max(a, b, c, d)

# Take the smallest element from each pair.
#
# You will never take the largest element from the set, but since one of the
# pairs will be (largest, second_largest) you will at some point take the
# second largest. Take the maximum value of the selected items - which
# will be the maximum of the items ignoring the largest value.
arr[1] = max(min(a, b),
             min(a, c),
             min(a, d),
             min(b, c),
             min(b, d),
             min(c, d))

# Similar logic, but reversed, to take the smallest of the largest of each
# pair - again omitting the smallest number, then taking the smallest.
arr[2] = min(max(a, b),
             max(a, c),
             max(a, d),
             max(b, c),
             max(b, d),
             max(c, d))

# Easy
arr[3] = min(a, b, c, d)
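For example, a quick check with concrete values (assuming the assignments above have been run):

a, b, c, d = 9, 19, 1, 15
# after evaluating the four arr[...] assignments above:
# arr == [19, 15, 9, 1], i.e. descending order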
For Tankerbuzz's result for the following:
first_integer = 9
second_integer = 19
third_integer = 1
fourth_integer = 15
I get 1, 15, 9, 19 as the ascending values.
The following is one of the forms that gives the symbolic form of the ascending values (using i1-i4 instead of first_integer, etc...):
Min(i1, i2, i3, i4)
Max(Min(i4, Max(Min(i1, i2), Min(i3, Max(i1, i2))), Max(i1, i2, i3)), Min(i1, i2, i3, Max(i1, i2)))
Max(Min(i1, i2), Min(i3, Max(i1, i2)), Min(i4, Max(i1, i2, i3)))
Max(i1, i2, i3, i4)
It was generated by a 'bubble sort' using the Min and Max functions of SymPy (a python CAS):
def minmaxsort(v):
    """Return a sorted list of the elements in v using the
    Min and Max functions.

    Examples
    ========

    >>> minmaxsort((3, 2, 1))
    [1, 2, 3]
    >>> minmaxsort((1, x, y))
    [Min(1, x, y), Max(Min(1, x), Min(y, Max(1, x))), Max(1, x, y)]
    >>> minmaxsort((1, y, x))
    [Min(1, x, y), Max(Min(1, y), Min(x, Max(1, y))), Max(1, x, y)]
    """
    from sympy import Min, Max
    v = list(v)
    v0 = Min(*v)
    for j in range(len(v)):
        for i in range(len(v) - j - 1):
            w = v[i:i + 2]
            v[i:i + 2] = [Min(*w), Max(*w)]
    v[0] = v0
    return v
I have worked it out. One subtlety: min(max(a, b), max(c, d)) and max(min(a, b), min(c, d)) always pick out the two middle values, but which is which depends on how the pairs happen to split, so a final min/max puts them in order.
min_integer = min(first_integer, second_integer, third_integer, fourth_integer)
mid_a = min(max(first_integer, second_integer), max(third_integer, fourth_integer))
mid_b = max(min(first_integer, second_integer), min(third_integer, fourth_integer))
mid_low_integer = min(mid_a, mid_b)
mid_high_integer = max(mid_a, mid_b)
max_integer = max(first_integer, second_integer, third_integer, fourth_integer)
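A quick brute-force check of this over every ordering of the example values (a sketch using itertools; sorted is used only to verify, not in the method itself):

from itertools import permutations

for combo in permutations([9, 19, 1, 15]):
    first_integer, second_integer, third_integer, fourth_integer = combo
    mid_a = min(max(first_integer, second_integer), max(third_integer, fourth_integer))
    mid_b = max(min(first_integer, second_integer), min(third_integer, fourth_integer))
    result = [min(combo), min(mid_a, mid_b), max(mid_a, mid_b), max(combo)]
    assert result == sorted(combo)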
