I am doing a numerical analysis exercise where I need to calculate the solution of a linear system using a specific algorithm. My answer differs from the answer in the book by a few decimal places, which I believe is due to rounding errors. Is there a way to automatically set the arithmetic to round to eight decimal places after each arithmetic operation? The following is my Python code.
import numpy as np
A1 = [4, -1, 0, 0, -1, 4, -1, 0,\
0, -1, 4, -1, 0, 0, -1, 4]
A1 = np.array(A1).reshape([4,4])
I = -np.identity(4)
O = np.zeros([4,4])
A = np.block([[A1, I, O, O],
[I, A1, I, O],
[O, I, A1, I],
[O, O, I, A1]])
b = np.array([1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6])
def conj_solve(A, b, pre=False):
    n = len(A)
    # Optional preconditioner: diagonal C with C[i, i] = sqrt(A[i, i])
    C = np.identity(n)
    if pre == True:
        for i in range(n):
            C[i, i] = np.sqrt(A[i, i])
    Ci = np.linalg.inv(C)
    Ct = np.transpose(Ci)
    # Initial guess, residual and search direction
    x = np.zeros(n)
    r = b - np.matmul(A, x)
    w = np.matmul(Ci, r)
    v = np.matmul(Ct, w)
    alpha = np.dot(w, w)
    for i in range(MAX_ITER):
        if np.linalg.norm(v, np.infty) < TOL:
            print(i+1, "steps")
            print(x)
            print(r)
            return
        u = np.matmul(A, v)
        t = alpha/np.dot(v, u)
        x = x + t*v
        r = r - t*u
        w = np.matmul(Ci, r)
        beta = np.dot(w, w)
        if np.abs(beta) < TOL:
            if np.linalg.norm(r, np.infty) < TOL:
                print(i+1, "steps")
                print(x)
                print(r)
                return
        s = beta/alpha
        v = np.matmul(Ct, w) + s*v
        alpha = beta
    print("Max iteration exceeded")
    return x
MAX_ITER = 1000
TOL = 0.05
sol = conj_solve(A, b, pre=True)
Using this, I get 2.55516527 as the first element of the array, whereas it should be 2.55613420.
OR, is there a language/program where I can specify the precision of arithmetic?
Precision/rounding during the calculation is unlikely to be the issue.
To test this I ran the calculation with precisions that bracket the precision you are aiming for: once with np.float64, and once with np.float32. Here is a table of the printed results, their approximate decimal precision, and the result of the calculation (ie, the first printed array value).
numpy type    decimal places    result
---------------------------------------
np.float64    15                2.55516527
np.float32    6                 2.5551653
Given that these two agree so closely, I doubt an intermediate precision of 8 decimal places is going to give an answer that is not between these two results (i.e., 2.55613420, which already differs in the 4th digit).
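As a standalone illustration of the precision each type carries (this snippet is just for reference and is not part of the original calculation):
import numpy as np

# float64 carries roughly 15-16 significant digits, float32 about 7
print(np.float64(1) / np.float64(3))   # 0.3333333333333333
print(np.float32(1) / np.float32(3))   # 0.33333334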
This isn't part of my answer, but is a comment on using mpmath. The questioner suggested it in the comments, and it was my first thought too, so I ran a quick test to see if it behaved how I expected with low-precision calculations. It didn't, so I abandoned it (but I'm not an expert with it).
Here's my test function, basically multiplying 1/N by N and 1/N repeatedly to emphasise the error in 1/N.
import mpmath

def precision_test(dps=100, N=19, t=mpmath.mpf):
    with mpmath.workdps(dps):
        x = t(1)/t(N)
        print(x)
        y = x
        for i in range(10000):
            y *= x
            y *= N
        print(y)
This works as expected with, eg, np.float32:
precision_test(dps=2, N=3, t=np.float32)
# 0.33333334
# 0.3334327041164994
Note that the error has propagated into more significant digits, as expected.
But with mpmath, I could never get that to happen (testing with a range of dps values and various prime values of N):
precision_test(dps=2, N=3)
# 0.33
# 0.33
Because of this test, I decided mpmath was not going to behave like genuine low-precision arithmetic.
TL;DR:
mpmath didn't behave how I expected at low precision so I abandoned it.
I'm trying to apply a method for baselining vibrational spectra, which is presented as an improvement over asymmetric and iteratively reweighted least-squares algorithms in a 2015 paper (doi:10.1039/c4an01061b), where the following Matlab code was provided:
function z = baseline(y, lambda, ratio)
% Estimate baseline with arPLS in Matlab
N = length(y);
D = diff(speye(N), 2);
H = lambda*D'*D;
w = ones(N, 1);
while true
    W = spdiags(w, 0, N, N);
    % Cholesky decomposition
    C = chol(W + H);
    z = C \ (C' \ (w.*y) );
    d = y - z;
    % make d-, and get w^t with m and s
    dn = d(d<0);
    m = mean(d);
    s = std(d);
    wt = 1./ (1 + exp( 2* (d-(2*s-m))/s ) );
    % check exit condition and backup
    if norm(w-wt)/norm(w) < ratio, break; end
end
that I rewrote into python:
def baseline_arPLS(y, lam, ratio):
    # Estimate baseline with arPLS
    N = len(y)
    k = [numpy.ones(N), -2*numpy.ones(N-1), numpy.ones(N-2)]
    offset = [0, 1, 2]
    D = diags(k, offset).toarray()
    H = lam * numpy.matmul(D.T, D)
    w_ = numpy.ones(N)
    while True:
        W = spdiags(w_, 0, N, N, format='csr')
        # Cholesky decomposition
        C = cholesky(W + H)
        z_ = spsolve(C.T, w_ * y)
        z = spsolve(C, z_)
        d = y - z
        # make d- and get w^t with m and s
        dn = d[d<0]
        m = numpy.mean(dn)
        s = numpy.std(dn)
        wt = 1. / (1 + numpy.exp(2 * (d - (2*s-m)) / s))
        # check exit condition and backup
        norm_wt, norm_w = norm(w_-wt), norm(w_)
        if (norm_wt / norm_w) < ratio:
            break
        w_ = wt
    return(z)
Besides the input vector y, the method requires the parameters lam and ratio. It runs ok for lam < 1e+07 and ratio > 1e-01, but the results are poor. When the values are changed outside this range, for example lam=1e+07 and ratio=1e-02, the CPU starts heating up and the job never finishes (I interrupted it after 1 min). In both cases the following warning shows up:
/usr/local/lib/python3.9/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:144: SparseEfficiencyWarning: spsolve requires A to be CSC or CSR matrix format
  warn('spsolve requires A to be CSC or CSR format',
although I added the recommended format='csr' option to the spdiags call.
And here's some synthetic data (similar to the one in the paper) for testing purposes. The noise was added along with a 3rd-degree polynomial baseline. The method works well for the bl_1 parameters and fails to converge for bl_2:
import numpy
from matplotlib import pyplot
from scipy.sparse import spdiags, diags, identity
from scipy.sparse.linalg import spsolve
from numpy.linalg import cholesky, norm
import sys
x = numpy.arange(0, 1000)
noise = numpy.random.uniform(low=0, high = 10, size=len(x))
poly_3rd_degree = numpy.poly1d([1.2e-06, -1.23e-03, .36, -4.e-04])
poly_baseline = poly_3rd_degree(x)
y = 100 * numpy.exp(-((x-300)/15)**2)+\
200 * numpy.exp(-((x-750)/30)**2)+ \
100 * numpy.exp(-((x-800)/15)**2) + noise + poly_baseline
bl_1 = baseline_arPLS(y, 1e+07, 1e-01)
bl_2 = baseline_arPLS(y, 1e+07, 1e-02)
pyplot.figure(1)
pyplot.plot(x, y, 'C0')
pyplot.plot(x, poly_baseline, 'C1')
pyplot.plot(x, bl_1, 'k')
pyplot.show()
sys.exit(0)
All this tells me that I'm doing something very non-optimal in my Python implementation. Since I'm not knowledgeable enough about the intricacies of scipy computations, I'm kindly asking for suggestions on how to achieve convergence in this calculation.
(I encountered an issue running the "straight" Matlab version of the code because the line D = diff(speye(N), 2); truncates the last two rows of the matrix, creating a dimension mismatch later in the function. Following the description of what matrix D should look like, I substituted this line by directly creating a tridiagonal matrix using the diags function.)
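For reference, the rectangular (N-2) x N second-difference matrix that Matlab's diff(speye(N), 2) produces can also be built directly with scipy.sparse.diags; the following sketch is only an illustration and is not what the translation above uses:
from scipy.sparse import diags

# Rectangular second-difference operator: one row per interior point, shape (N-2, N)
N = 6
D = diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(N - 2, N))
print(D.toarray())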
Guided by the comment @hpaulj made, and suspecting that the loop exit wasn't coded properly, I revisited the paper and found that the authors actually implement an exit condition that is not featured in their Matlab script. Changing the while-loop condition provides an exit for any set of parameters; my understanding is that the algorithm is not guaranteed to converge in all cases, which is why this condition is necessary but was omitted by mistake. Here's the edited version of my Python code:
def baseline_arPLS(y, lam, ratio):
    # Estimate baseline with arPLS
    N = len(y)
    k = [numpy.ones(N), -2*numpy.ones(N-1), numpy.ones(N-2)]
    offset = [0, 1, 2]
    D = diags(k, offset).toarray()
    H = lam * numpy.matmul(D.T, D)
    w_ = numpy.ones(N)
    i = 0
    N_iterations = 100
    while i < N_iterations:
        W = spdiags(w_, 0, N, N, format='csr')
        # Cholesky decomposition
        C = cholesky(W + H)
        z_ = spsolve(C.T, w_ * y)
        z = spsolve(C, z_)
        d = y - z
        # make d- and get w^t with m and s
        dn = d[d<0]
        m = numpy.mean(dn)
        s = numpy.std(dn)
        wt = 1. / (1 + numpy.exp(2 * (d - (2*s-m)) / s))
        # check exit condition and backup
        norm_wt, norm_w = norm(w_-wt), norm(w_)
        if (norm_wt / norm_w) < ratio:
            break
        w_ = wt
        i += 1
    return(z)
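With the bounded loop, the call that previously never finished is forced to return after at most N_iterations passes. A minimal check, assuming the synthetic y from the snippet above is in scope:
# This call hung with the original "while True" loop; the iteration cap now guarantees termination.
bl_2 = baseline_arPLS(y, 1e+07, 1e-02)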
We are given 2 binary strings (A and B), both of length N, and an integer K.
We need to check if there is a rotation of string B for which the Hamming distance between A and the rotated string is equal to K. We can remove one character from the front and put it at the back in a single operation.
Example: Let's say we are given the 2 strings A="01011" and B="01110", and also K=4.
Note: The Hamming distance between two binary strings is the number of bit positions in which the corresponding bits differ.
In the above example the answer will be "YES": if we rotate string B once it becomes "11100", which has a Hamming distance of 4 from A, equal to K.
Approach:
for every rotated string of B:
    check the hamming distance with A
    if hamming distance == K:
        return "YES"
return "NO"
But obviously the above approach will execute in O(length of string x length of string) time. Is there a better approach to solve this? As we don't need to find the actual string, I am just wondering whether there is a better algorithm to get this answer.
Constraints :
Length of each string <= 2000
Number of test cases to run in one file <=600
First note that we can compute the Hamming distance as the sum of a[i]*(1-b[i]) + b[i]*(1-a[i]) for all i. This simplifies to a[i] + b[i] - 2*a[i]*b[i]. Now in O(n) we can compute the sum of a[i] and b[i] for all i, and this doesn't change with bit rotations, so the only interesting term is 2*a[i]*b[i]. We can compute this term efficiently for all bit rotations by noting that it is equivalent to a circular convolution of a and b. We can efficiently compute such convolutions using the Discrete Fourier transform in O(n log n) time.
For example in Python using numpy:
import numpy as np

def hdist(a, b):
    return sum(bool(ai) ^ bool(bi) for ai, bi in zip(a, b))

def slow_circular_hdist(a, b):
    return [hdist(a, b[i:] + b[:i]) for i in range(len(b))]

def circular_convolution(a, b):
    return np.real(np.fft.ifft(np.fft.fft(a)*np.fft.fft(b[::-1])))[::-1]

def fast_circular_hdist(a, b):
    hdist = np.sum(a) + np.sum(b) - 2*circular_convolution(a, b)
    return list(np.rint(hdist).astype(int))
Usage:
>>> a = [0, 1, 0, 1, 1]
>>> b = [0, 1, 1, 1, 0]
>>> slow_circular_hdist(a, b)
[2, 4, 2, 2, 2]
>>> fast_circular_hdist(a, b)
[2, 4, 2, 2, 2]
Speed and large correctness test:
>>> x = list((np.random.random(5000) < 0.5).astype(int))
>>> y = list((np.random.random(5000) < 0.5).astype(int))
>>> s = time.time(); slow_circular_hdist(x, y); print(time.time() - s)
6.682933807373047
>>> s = time.time(); fast_circular_hdist(x, y); print(time.time() - s)
0.008500814437866211
>>> slow_circular_hdist(x, y) == fast_circular_hdist(x, y)
True
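To answer the original YES/NO question for a given K, it is then enough to test membership in the list of distances. A small sketch built on the functions above:
def rotation_with_hdist_k(a, b, K):
    # True if some rotation of b has Hamming distance exactly K from a
    return K in fast_circular_hdist(a, b)

print(rotation_with_hdist_k([0, 1, 0, 1, 1], [0, 1, 1, 1, 0], 4))  # True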
I have one Hermitian matrix (specifically, a Hamiltonian). Though the phase of a single eigenvector can be arbitrary, the quantities I am calculating are physical (I reduced the code a bit, keeping just the reproducible part). eig and eigh are giving very different results.
import numpy as np
import numpy.linalg as nlg
import matplotlib.pyplot as plt
def Ham(Ny, Nx, t, phi):
    h = np.zeros((Ny,Ny), dtype=complex)
    for ii in range(Ny-1):
        h[ii+1,ii] = t
    h[Ny-1,0] = t
    h = h + np.transpose(np.conj(h))
    u = np.zeros((Ny,Ny), dtype=complex)
    for ii in range(Ny):
        u[ii,ii] = -t*np.exp(-2*np.pi*1j*phi*ii)
    u = u + 1e-10*np.eye(Ny)
    H = np.kron(np.eye(Nx,dtype=int),h) + np.kron(np.diag(np.ones(Nx-1), 1),u) + np.kron(np.diag(np.ones(Nx-1), -1),np.transpose(np.conj(u)))
    H[0:Ny,Ny*(Nx-1):Ny*Nx] = np.transpose(np.conj(u))
    H[Ny*(Nx-1):Ny*Nx,0:Ny] = u
    x=[]; y=[];
    for jj in range(1, Nx+1):
        for ii in range(1, Ny+1):
            x.append(jj); y.append(ii)
    x = np.asarray(x)
    y = np.asarray(y)
    return H, x, y

def C_num(Nx, Ny, E, t, phi):
    H, x, y = Ham(Ny, Nx, t, phi)
    ifhermitian = np.allclose(H, np.transpose(np.conj(H)), rtol=1e-5, atol=1e-8)
    assert ifhermitian == True
    Hp = H
    V, wf = nlg.eigh(Hp)  ##Check. eig gives different result
    idx = np.argsort(np.real(V))
    wf = wf[:, idx]
    normmat = wf*np.conj(wf)
    norm = np.sqrt(np.sum(normmat, axis=0))
    wf = wf/(norm*np.sqrt(len(H)))
    wf = wf[:, V<=E]  ##Chose a subset of eigenvectors
    V01 = wf*np.exp(1j*x)[:,None]; V12 = wf*np.exp(1j*y)[:,None]
    V23 = wf*np.exp(1j*x)[:,None]; V30 = wf*np.exp(1j*y)[:,None]
    wff = np.transpose(np.conj(wf))
    C01 = np.dot(wff,V01); C12 = np.dot(wff,V12); C23 = np.dot(wff,V23); C30 = np.dot(wff,V30)
    F = nlg.multi_dot([C01,C12,C23,C30])
    ifhermitian = np.allclose(F, np.transpose(np.conj(F)), rtol=1e-5, atol=1e-8)
    assert ifhermitian == True
    evals, efuns = nlg.eig(F)  ##Check eig gives different result
    C = (1/(2*np.pi))*np.sum(np.angle(evals));
    return C
C = C_num(16, 16, 0, 1, 1/8)
print(C)
Changing both nlg.eigh calls to nlg.eig, or even changing only the last one, gives very different results.
As I mentioned elsewhere, the eigenvectors (and the ordering of the eigenvalues) are not unique.
The only thing that is guaranteed is that each eigenpair satisfies $A v = \lambda v$; the matrices returned by eig and eigh both describe valid, numerically approximate solutions of that equation.
You can see that both solutions diagonalize your matrix, just in different ways:
H, x, y = Ham(16, 16, 1, 1./8)
D, V = nlg.eig(H)
Dh, Vh = nlg.eigh(H)
Then
import matplotlib.pyplot as plt
plt.figure(figsize=(14, 7))
plt.subplot(121);
plt.imshow(abs(np.conj(Vh.T) @ H @ Vh))
plt.title('diagonalized with eigh')
plt.subplot(122);
plt.imshow(abs(np.conj(V.T) @ H @ V))
plt.title('diagonalized with eig')
The resulting plot (image not reproduced here) shows that both diagonalizations were successful, but the eigenvalues come out in a different order.
If you sort the eigenvalues you see they match
plt.plot(np.diag(np.real(np.conj(Vh.T) @ H @ Vh)))
plt.plot(np.diag(np.imag(np.conj(Vh.T) @ H @ Vh)))
plt.plot(np.sort(np.diag(np.real(np.conj(V.T) @ H @ V))))
plt.title('eigenvalues')
plt.legend(['real eigh', 'imag eigh', 'sorted real eig'], loc='upper left')
Since many eigenvalues are repeated, the eigenvector associated with a given eigenvalue is not unique either; the only thing we can guarantee is that the eigenvectors for a given eigenvalue span the same subspace.
The diagonalization test is the best in my opinion.
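A cheap complementary check, using the D and Dh arrays computed above, is to compare the sorted eigenvalues numerically (just a sketch; for a Hermitian H the real parts from eig should match eigh's output to floating-point accuracy):
# Sorted eigenvalues from eig (real parts) vs eigh
print(np.allclose(np.sort(np.real(D)), np.sort(Dh)))  # expect True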
Is eigh always better than eig?
If you search for the eigenvalue routines in LAPACK you will find many options, so I cannot discuss each possible implementation here. Common sense says that we can expect the symmetric/Hermitian routines to perform better, otherwise there would be no reason to add a more limited routine. But I have never carefully tested the behavior of eig vs eigh.
To get an intuition, compare the equation for the tridiagonalization of symmetric matrices with the equation for the reduction of a general matrix to its Hessenberg form found here.
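As a rough numerical illustration of that structural difference (a sketch, assuming scipy is available; not part of the original argument): for a symmetric matrix the Hessenberg reduction is already tridiagonal, which is exactly the structure the symmetric/Hermitian routines exploit.
import numpy as np
from scipy.linalg import hessenberg

A = np.random.randn(6, 6)
S = (A + A.T) / 2                   # symmetric test matrix
T = hessenberg(S)                   # Hessenberg form of S
band = np.triu(np.tril(T, 1), -1)   # keep only the tridiagonal band
print(np.allclose(T, band))         # True: everything outside the band is ~0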
I have a largish sparse, binary-valued, rectangular m x n matrix, M, where n > m. My understanding of matrix rank suggests the largest possible rank is m, and my understanding of SVD suggests the rank of a matrix can be found by counting the number of non-zero singular values.
I'm attempting to use scipy.sparse.linalg.svds to determine the rank of M. The first problem is that I cannot compute m singular values, since k can only go up to p = m - 1. So I thought I'd be clever and compute the p highest values and the p lowest values, combine them, run set to find the unique values, and end up with a list of at most m values. This didn't work out according to plan.
Here's a MWE:
import scipy.sparse
import scipy.sparse.linalg
import numpy
import itertools
m = 6
n = 10
test = scipy.sparse.rand(m, n, density=0.25, format='lil', dtype=None, random_state=None)
for i, j in itertools.product(list(range(m)), list(range(n))):
test[i, j] = 1 if test[i, j] > 0 else 0
U1, S1, VT1 = scipy.sparse.linalg.svds(test, k = min(test.shape) - 1, ncv = None, tol = 1e-5, which = 'LM', v0 = None, maxiter = None,
return_singular_vectors = True)
U2, S2, VT2 = scipy.sparse.linalg.svds(test, k = min(test.shape) - 1, ncv = None, tol = 1e-5, which = 'SM', v0 = None, maxiter = None,
return_singular_vectors = True)
S = list(set(numpy.concatenate((S1, S2), axis = 0)))
len(S)
Here's a sample output:
10
with S being
[0.5303120147925737,
1.0725314055439354,
2.7940865631779643,
1.5060744813473148,
1.8412737686034186,
0.3208993522030293,
0.5303120147925728,
1.072531405543936,
1.5060744813473153,
1.841273768603419]
How can an m x n matrix with m < n have a rank of n? Are my assumptions above incorrect, or am I misapplying the function? My real M is sparse, binary-valued, and roughly 300 x 500.
Thanks for looking!
With help from @tch I've come up with the following hack. To check for rank = m, I only need to check the smallest value and append it to the m - 1 values obtained from the svds highest-values call. It turns out svds doesn't report 0s when thresholded, so the lowest-values call will return nan for rank < m. Here's the revised code:
import scipy.sparse
import scipy.sparse.linalg
import numpy
import itertools
m = 6
n = 10
test = scipy.sparse.rand(m, n, density=0.25, format='lil', dtype=None, random_state=None)
test = test > 0
test = test.astype('d')
U1, S1, VT1 = scipy.sparse.linalg.svds(test, k = min(test.shape) - 1, ncv = None, tol = 1e-5, which = 'LM', v0 = None, maxiter = None,
return_singular_vectors = True)
U2, S2, VT2 = scipy.sparse.linalg.svds(test, k = 1, ncv = None, tol = 1e-5, which = 'SM', v0 = None, maxiter = None,
return_singular_vectors = True)
S = list(set(numpy.concatenate((S1, S2), axis = 0)))
print(sum(x > 1e-10 for x in S))
S
What you are trying to do would work in exact arithmetic (assuming the matrix has no repeated singular values). However, due to numerical rounding errors, it won't work in practice.
To see this try
C = np.random.randn(10,3)
u, s, vt = np.linalg.svd(C @ C.T)
Note that C @ C.T is a 10x10 matrix with rank 3. However, you will see that none of the singular values are exactly zero (however 7 are close to 0).
When finding the rank of a matrix numerically, thresholding is often used to determine what it means for a singular value to be 0. For instance, everything below 1e-10 may be set to zero.
If the matrix has exact rank k, hopefully you will see k singular values away from 0, and then min(m,n)-k singular values very close to zero. However, depending on the matrix, there may not even be a well defined "drop".
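To make the thresholding idea concrete, here is a minimal sketch using the example matrix above (the 1e-10 cutoff is just the value mentioned earlier, not a universal rule):
s = np.linalg.svd(C @ C.T, compute_uv=False)   # singular values only
numerical_rank = int(np.sum(s > 1e-10))
print(numerical_rank)                          # expect 3 for this example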
So for your example, you could try removing elements which are within some threshold of one another. However this of course could run into issues if the matrix has repeat singular values.
You could also just compute the smallest singular values and see how many are near zero. Presumably the matrix is at least rank 1, so the first singular value will be nonzero.
As a note about finding where test[i, j] > 0: you can just do test > 0 and it will give a boolean array with True in the nonzero entries and False elsewhere. You can also set the dtype of the random matrix to bool and it will be True whenever the random number is nonzero.
I have a list of characters, say x in number, denoted by b[1], b[2], b[3] ... b[x]. After x,
b[x+1] is the concatenation of b[1],b[2].... b[x] in that order. Similarly,
b[x+2] is the concatenation of b[2],b[3]....b[x],b[x+1].
So, basically, b[n] will be the concatenation of the last x terms of b[i], taken left to right.
Given p and q as query parameters, how can I find out which character among b[1], b[2], b[3], ..., b[x] the qth character of b[p] corresponds to?
Note: x and b[1], b[2], b[3]..... b[x] is fixed for all queries.
I tried brute-forcing, but the string length increases exponentially for large x (x <= 100).
Example:
When x=3,
b[] = a, b, c, a b c, b c abc, c abc bcabc, abc bcabc cabcbcabc, //....
//Spaces for clarity, only commas separate array elements
So for a query where p=7, q=5, the answer returned would be 3 (corresponding to character 'c').
I am just having difficulty figuring out the maths behind it. Language is no issue.
I wrote this answer as I figured it out, so please bear with me.
As you mentioned, it is much easier to find out where the character at b[p][q] comes from among the original x characters than to generate b[p] for large p. To do so, we will use a loop to find where the current b[p][q] came from, thereby reducing p until it is between 1 and x, and q until it is 1.
Let's look at an example for x=3 to see if we can get a formula:
p N(p) b[p]
- ---- ----
1 1 a
2 1 b
3 1 c
4 3 a b c
5 5 b c abc
6 9 c abc bcabc
7 17 abc bcabc cabcbcabc
8 31 bcabc cabcbcabc abcbcabccabcbcabc
9 57 cabcbcabc abcbcabccabcbcabc bcabccabcbcabcabcbcabccabcbcabc
The sequence is clear: N(p) = N(p-1) + N(p-2) + N(p-3), where N(p) is the number of characters in the pth element of b (for general x, N(p) is the sum of the previous x values). Given p and x, you can just brute-force compute all the N for the range [1, p]. This will allow you to figure out which prior element of b the character b[p][q] came from.
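As a quick standalone illustration of that count table (this mirrors the count-building step in the full implementation below; the helper name is just for this sketch):
def lengths(x, p):
    # N[i] is the length of b[i+1]; the first x entries are single characters
    N = [1] * x
    for i in range(x, p):
        N.append(sum(N[-x:]))
    return N

print(lengths(3, 9))  # [1, 1, 1, 3, 5, 9, 17, 31, 57]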
To illustrate, say x=3, p=9 and q=45.
The chart above gives N(6)=9, N(7)=17 and N(8)=31. Since 45>9+17, you know that b[9][45] comes from b[8][45-(9+17)] = b[8][19].
Continuing iteratively/recursively, 19>9+5, so b[8][19] = b[7][19-(9+5)] = b[7][5].
Now 5>N(4) but 5<N(4)+N(5), so b[7][5] = b[5][5-3] = b[5][2].
b[5][2] = b[3][2-1] = b[3][1]
Since 3 <= x, we have our termination condition, and b[9][45] is c from b[3].
Something like this can very easily be computed either recursively or iteratively given starting p, q, x and b up to x. My method requires p array elements to compute N(p) for the entire sequence. This can be allocated in an array or on the stack if working recursively.
Here is a reference implementation in vanilla Python (no external imports, although numpy would probably help streamline this):
def so38509640(b, p, q):
    """
    p, q are integers. b is a char sequence of length x.
    list, string, or tuple are all valid choices for b.
    """
    x = len(b)
    # Trivial case
    if p <= x:
        if q != 1:
            raise ValueError('q={} out of bounds for p={}'.format(q, p))
        return p, b[p - 1]
    # Construct list of counts
    N = [1] * p
    for i in range(x, p):
        N[i] = sum(N[i - x:i])
    print('N =', N)
    # Error check
    if q > N[-1]:
        raise ValueError('q={} out of bounds for p={}'.format(q, p))
    print('b[{}][{}]'.format(p, q), end='')
    # Reduce p, q until p <= x
    while p > x:
        # Find which previous element character q comes from
        offset = 0
        for i in range(p - x - 1, p):
            if i == p - 1:
                raise ValueError('q={} out of bounds for p={}'.format(q, p))
            if offset + N[i] >= q:
                q -= offset
                p = i + 1
                print(' = b[{}][{}]'.format(p, q), end='')
                break
            offset += N[i]
    print()
    return p, b[p - 1]
Calling so38509640('abc', 9, 45) produces
N = [1, 1, 1, 3, 5, 9, 17, 31, 57]
b[9][45] = b[8][19] = b[7][5] = b[5][2] = b[3][1]
(3, 'c') # <-- Final answer
Similarly, for the example in the question, so38509640('abc', 7, 5) produces the expected result:
N = [1, 1, 1, 3, 5, 9, 17]
b[7][5] = b[5][2] = b[3][1]
(3, 'c') # <-- Final answer
Sorry I couldn't come up with a better function name :) This is simple enough code that it should work equally well in Py2 and 3, despite differences in the range function/class.
I would be very curious to see if there is a non-iterative solution for this problem. Perhaps there is a way of doing this using modular arithmetic or something...