I have a list of 262144 elements created through itertools.product. Now I have to loop over these elements and multiply each one with all the other elements, which is taking too much time. (I don't have any memory / CPU constraints.)
import itertools

elements = []
for e in itertools.product(range(4), repeat=9):
    elements.append(e)

for row in elements:
    for col in elements:
        do_calculations(row, col)
def do_calculations(ro, co):
    t = {}
    t[0] = [multiply(c=ro[0], r=co[0])]
    for i in range(1, len(ro)):
        _t = []
        for j in range(i+1):
            _t.append(multiply(c=ro[j], r=co[i-j]))
        t[i] = _t

    for vals in t.values():
        nx = len(vals)
        _co = ro[nx:]
        _ro = co[nx:]
        for k in range(len(_ro)):
            vals.append(multiply(c=_co[k], r=_ro[k]))

    _t = []
    for k in t.values():
        s = k[0]
        for j in range(1, len(k)):
            s = addition(c=s, r=k[j])
        _t.append(s)
    return _t
def addition(c, r) -> int:
    __a = [[0, 3, 1, 2],
           [3, 2, 0, 1],
           [0, 3, 2, 1],
           [1, 0, 2, 3]]
    return __a[c][r]

def multiply(c, r) -> int:
    __m = [[0, 0, 0, 0],
           [0, 1, 2, 3],
           [0, 3, 1, 2],
           [0, 2, 3, 1]]
    return __m[c][r]
It is taking too much time to process even a single col against all the rows.
Can anyone help me with this?
Regards
Not much of a Python guy, but:
make sure col is a higher number than row (a small optimization, but an optimization nevertheless)
use a multiprocessing library; that should cut the calculation time
(as noted in a comment by @Skam, multithreading does not increase performance in such a case)
also, you might consider some optimizations in the calculation itself.
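A minimal sketch of how the multiprocessing suggestion might look, assuming do_calculations and its helpers are defined at module level so worker processes can import them (process_row and the chunksize value are illustrative choices, not from the original post):

import itertools
from multiprocessing import Pool

# Build the 4**9 = 262144 tuples once at module level so that worker
# processes see the same data when they re-import this module.
elements = list(itertools.product(range(4), repeat=9))

def process_row(row):
    # Pair one row with every column; this outer-loop unit of work is
    # what gets distributed across processes.
    return [do_calculations(row, col) for col in elements]

if __name__ == "__main__":
    with Pool() as pool:
        # imap_unordered streams row results back as workers finish;
        # chunksize reduces the per-task inter-process overhead.
        for row_result in pool.imap_unordered(process_row, elements, chunksize=64):
            pass  # aggregate or store row_result here

For the calculation itself, one easy win is hoisting the __a and __m lookup tables out of addition and multiply, so the nested lists are not rebuilt on every call.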
Related
I have a list:
l = [10,22,3]
I'm trying to create a function that returns the distances (how close the elements are within the list itself), such that elements to the left of any element have a negative value, and those to its right have a positive value:
# desired output
dis = [[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]
Is there a quick way to do that?
You could try a nested for-in loop. The idea here is to just retrieve the indexes of each value and its distance from other values.
nums = [10, 22, 3]
distances = []

for i in range(len(nums)):
    for n in range(len(nums)):
        distances.append(i-n)

print(distances)
Output:
[0, -1, -2, 1, 0, -1, 2, 1, 0]
Also, never name a variable l, because it looks like a 1.
Based on Leonardo's answer, to do what the OP commented:
nums = [10, 22, 3]
distances = []

for i in range(len(nums)):
    temp = []
    for n in range(len(nums)):
        temp.append(n-i)
    distances.append(temp)

print(distances)
Output:
[[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]
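For reference, the same nested structure can also be written as a list comprehension (a minimal equivalent sketch, not part of the original answer):

nums = [10, 22, 3]
# n - i is negative to the left of index i, zero at i, positive to its right
distances = [[n - i for n in range(len(nums))] for i in range(len(nums))]
print(distances)  # [[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]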
I wrote a for loop using enumerate over the values of a matrix, trying to assign a value to the items that are different from 0 while appending to a list the positions of the elements that are equal to 0. The problem is that the original matrix doesn't get updated.
Sample code:
matrix = [[0, 0, 0], [0, 1, 0], [1, 1, 1]]
current = []

for x, i in enumerate(matrix):
    for y, j in enumerate(i):
        if j == 0:
            current.append((x, y))
        else:
            # matrix[x][y] = -1  # This works
            j = -1               # This doesn't
Since this doesn't work, there seems to be no point in using enumerate for this case, so I changed the code to:
matrix = [[0, 0, 0], [0, 1, 0], [1, 1, 1]]
current = []

for x in range(len(matrix)):
    for y in range(len(matrix[0])):
        if matrix[x][y] == 0:
            current.append((x, y))
        else:
            matrix[x][y] = -1
The code above is, in my opinion, much less readable, and pylint also advises against it with:
C0200: Consider using enumerate instead of iterating with range and
len (consider-using-enumerate)
You can't update a 2D array in place by assigning to the local variable j (j = -1): j is rebound on every iteration of for y, j in enumerate(i) and holds only the value, not a reference to the cell in the matrix.
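A minimal illustration of that rebinding (my own sketch, not from the original answer):

row = [0, 1, 0]
for y, j in enumerate(row):
    j = -1           # rebinds the local name j only
print(row)           # [0, 1, 0] -- the list itself is unchanged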
In your simple case you can update your matrix with the following simple traversal:
matrix = [[0, 0, 0], [0, 1, 0], [1, 1, 1]]

for i, row in enumerate(matrix):
    for j, val in enumerate(row):
        if val != 0:
            matrix[i][j] = -1

print(matrix)  # [[0, 0, 0], [0, -1, 0], [-1, -1, -1]]
Though NumPy provides a more powerful way of updating matrices:
import numpy as np
matrix = np.array([[0, 0, 0], [0, 1, 0], [1, 1, 1]])
matrix = np.where(matrix == 0, matrix, -1)
print(matrix)
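If the goal is to modify the array in place rather than build a new one, boolean mask assignment also works; a small sketch using standard NumPy indexing, added for completeness:

import numpy as np

matrix = np.array([[0, 0, 0], [0, 1, 0], [1, 1, 1]])
matrix[matrix != 0] = -1   # assign -1 to every non-zero cell, in place
print(matrix)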
Here are two related SO questions 1 2 that helped me formulate my preliminary solution.
The reason for wanting to do this is to feed permutations, ordered by edit distance, into a Damerau-Levenshtein NFA; the number of permutations grows fast, so it's a good idea to delay the candidate (N-C)-cycle permutations of N elements until (N-C) iterations of the NFA.
I've only studied engineering math up to Differential Equations and Discrete Mathematics, so I lack the foundation to approach this task from a formal perspective. If anyone can provide reference materials to help me understand this problem properly, I would appreciate that!
Through brief empirical analysis, I've noticed that I can generate the swaps for all C-cycle permutations of N elements with this procedure:
1. Generate all 2-combinations of N elements (combs)
2. Subdivide combs into arrays where the smallest element of each 2-combination is the same (ncombs)
3. Generate the cartesian products of the (N-C)-combinations of ncombs (pcombs)
4. Sum pcombs to get a list of the swaps that will generate all C-cycle permutations (swaps)
The code is here.
My Python is a bit rusty, so helpful advice about the code is appreciated (I have the feeling that lines 17, 20, and 21 should be combined; I'm not sure whether I should be making lists out of the results of itertools.combinations / itertools.product; and I don't know why line 10 can't be ncombs += ... instead of ncombs.append(...)).
My primary question is how to solve this problem properly. I did my own due diligence by finding a solution, but I'm sure there's a better way. I've also only verified my solution for N=3 and N=4; is it really correct?
The ideal solution would be functionally identical to Heap's algorithm, except that it would generate the permutations in decreasing cycle order (that is, by the minimum number of swaps needed to generate the permutation, increasing).
This is far from Heap's efficiency, but it does produce only the necessary cycle combinations restricted by the desired number of cycles, k, in the permutation. We use the partitions of n into k parts to create all combinations of cycles for each partition. Enumerating the actual permutations is just a cartesian product of applying each cycle n-1 times, where n is the cycle length.
Recursive Python 3 code:
from math import ceil

def partitions(N, K, high=float('inf')):
    if K == 1:
        return [[N]]
    result = []
    low = ceil(N / K)
    high = min(high, N-K+1)
    for k in range(high, low - 1, -1):
        for sfx in partitions(N-k, K - 1, k):
            result.append([k] + sfx)
    return result

print("partitions(10, 3):\n%s\n" % partitions(10, 3))

def combs(ns, subs):
    def g(i, _subs):
        if i == len(ns):
            return [tuple(tuple(x) for x in _subs)]
        res = []
        cardinalities = set()
        def h(j):
            temp = [x[:] for x in _subs]
            temp[j].append(ns[i])
            res.extend(g(i + 1, temp))
        for j in range(len(subs)):
            if not _subs[j] and not subs[j] in cardinalities:
                h(j)
                cardinalities.add(subs[j])
            elif _subs[j] and len(_subs[j]) < subs[j]:
                h(j)
        return res
    _subs = [[] for x in subs]
    return g(0, _subs)

A = [1,2,3,4]
ns = [2, 2]
print("combs(%s, %s):\n%s\n" % (A, ns, combs(A, ns)))

A = [0,1,2,3,4,5,6,7,8,9,10,11]
ns = [3, 3, 3, 3]
print("num combs(%s, %s):\n%s\n" % (A, ns, len(combs(A, ns))))

def apply_cycle(A, cycle):
    n = len(cycle)
    last = A[ cycle[n-1] ]
    for i in range(n-1, 0, -1):
        A[ cycle[i] ] = A[ cycle[i-1] ]
    A[ cycle[0] ] = last

def permutations_by_cycle_count(n, num_cycles):
    arr = [x for x in range(n)]
    cycle_combs = []
    for partition in partitions(n, num_cycles):
        cycle_combs.extend(combs(arr, partition))
    result = {}
    def f(A, cycle_comb, i):
        if i == len(cycle_comb):
            result[cycle_comb].append(A)
            return
        if len(cycle_comb[i]) == 1:
            f(A[:], cycle_comb, i+1)
        for k in range(1, len(cycle_comb[i])):
            apply_cycle(A, cycle_comb[i])
            f(A[:], cycle_comb, i+1)
        apply_cycle(A, cycle_comb[i])
    for cycle_comb in cycle_combs:
        result[cycle_comb] = []
        f(arr, cycle_comb, 0)
    return result

result = permutations_by_cycle_count(4, 2)
print("permutations_by_cycle_count(4, 2):\n")
for e in result:
    print("%s: %s\n" % (e, result[e]))
Output:
partitions(10, 3):
[[8, 1, 1], [7, 2, 1], [6, 3, 1], [6, 2, 2], [5, 4, 1], [5, 3, 2], [4, 4, 2], [4, 3, 3]]
# These are the cycle combinations
combs([1, 2, 3, 4], [2, 2]):
[((1, 2), (3, 4)), ((1, 3), (2, 4)), ((1, 4), (2, 3))]
num combs([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [3, 3, 3, 3]):
15400
permutations_by_cycle_count(4, 2):
((0, 1, 2), (3,)): [[2, 0, 1, 3], [1, 2, 0, 3]]
((0, 1, 3), (2,)): [[3, 0, 2, 1], [1, 3, 2, 0]]
((0, 2, 3), (1,)): [[3, 1, 0, 2], [2, 1, 3, 0]]
((1, 2, 3), (0,)): [[0, 3, 1, 2], [0, 2, 3, 1]]
((0, 1), (2, 3)): [[1, 0, 3, 2]]
((0, 2), (1, 3)): [[2, 3, 0, 1]]
((0, 3), (1, 2)): [[3, 2, 1, 0]]
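As a quick sanity check (my own addition, not part of the original answer), the total count of permutations of n elements with exactly k cycles should match the unsigned Stirling number of the first kind, which is 11 for n = 4, k = 2:

def stirling_first_unsigned(n, k):
    # Recurrence: c(n, k) = c(n-1, k-1) + (n-1) * c(n-1, k)
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling_first_unsigned(n-1, k-1) + (n-1) * stirling_first_unsigned(n-1, k)

total = sum(len(perms) for perms in permutations_by_cycle_count(4, 2).values())
print(total, stirling_first_unsigned(4, 2))  # 11 11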
Suppose I have an array, namely Map. Map[i][j] means the distance between area i and area j. Under this definition, we get:
a) Map[i][i] always equals 0.
b) Map[i][k] <= Map[i][j] + Map[j][k] for all i,j,k
I want to build a function func(Map, k) returning a matrix D, where D[i][j] is the shortest distance of a route from area i to area j that passes through at least k different areas.
This is my Python code to do so:
def func(Map, k):
    n = len(Map)
    D_temp = [list(x) for x in Map]
    D = [list(x) for x in Map]
    for m in range(k - 1):
        for i in range(n):
            for j in range(n):
                tmp = [D[i][x] + Map[x][j] for x in range(n) if x != i and x != j]
                D_temp[i][j] = min(tmp)
        D = [list(x) for x in D_temp]
    return D
func([[0, 2, 3], [2, 0, 1], [3, 1, 0]], 2)
returns a distance matrix D which equals [[4, 4, 3], [4, 2, 5], [3, 5, 2]].
D[0][0] equals 4 because the shortest route from area0 to area0 that passes through at least 2 areas is {area0 --> area1 --> area0}, and the distance of this route is Map[0][1] + Map[1][0] = 2 + 2 = 4.
I wanted to know what would be the best way to do this?
You can use the A* algorithm for this, using Map[i][j] as the heuristic for the minimum remaining distance to the target node (assuming that, as you said, Map[i][j] <= Map[i][x] + Map[x][j] for all i,j,x). The only difference to a regular A* would be that you only accept paths if they have a minimum length of k.
import heapq

def min_path(Map, k, i, j):
    heap = [(0, 0, i, [])]
    while heap:
        _, cost, cur, path = heapq.heappop(heap)
        if cur == j and len(path) >= k:
            return cost
        for other in range(len(Map)):
            if other != cur:
                c = cost + Map[cur][other]
                heapq.heappush(heap, (c + Map[other][j], c, other, path + [other]))
Change your func to return a list comprehension using this min_path accordingly.
def func(Map, k):
    n = len(Map)
    # Outer loop over i so the result is indexed as D[i][j].
    return [[min_path(Map, k, i, j) for j in range(n)] for i in range(n)]
res = func([[0, 2, 3], [2, 0, 1], [3, 1, 0]], 2)
This gives me the result [[4, 4, 3], [4, 2, 3], [3, 3, 2]] for len(path) >= k, or [[4, 4, 3], [4, 2, 5], [3, 5, 2]] for len(path) == k.
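For instance, checking the question's example directly (my own verification, not part of the original answer):

Map = [[0, 2, 3], [2, 0, 1], [3, 1, 0]]
print(min_path(Map, 2, 0, 0))  # 4: area0 --> area1 --> area0 has distance 2 + 2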
Let me first describe the question I am working on.
It's a question from HackerRank - List Comprehensions.
Well, here's my solution to that question:
final_list = []
temp_list = []

x = int(input())
y = int(input())
z = int(input())
n = int(input())

for i in range(x+1):
    for j in range(y+1):
        for k in range(z+1):
            if (i + j + k) != n:
                temp_list.clear()
                temp_list.append(i)
                temp_list.append(j)
                temp_list.append(k)
                final_list.append(temp_list)

print(final_list)
I used these values as my input: x = 1, y = 1, z = 1 and n = 2.
Using these values I got this output: [[1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]]
I don't understand why, even though I cleared temp_list, I'm getting this output instead of [[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1]].
Moreover, when I declared temp_list inside the if condition itself instead of at the top of the code, I got the right answer.
Can anyone let me know why this is happening?
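A minimal sketch of the list aliasing involved, assuming standard Python list semantics (this illustration is my own, not part of the original question):

inner = []
outer = []
for v in (1, 2, 3):
    inner.clear()        # empties the one and only list object
    inner.append(v)
    outer.append(inner)  # appends another reference to that same object
print(outer)             # [[3], [3], [3]] -- every entry is the same list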