How to fix tribonacci series function when using generators - python-3.x

Following is my approach to return the n-th element of the Tribonacci series:
def tri(n, seq=[1, 1, 1]):
    for i in range(n - 2):
        seq = seq[1:] + [sum(seq)]
    return seq[-1]
I get the correct result when I call it directly and print the return value:
print(tri(10))
Output: 193
However, when using a generator (on repl.it), I get the error: can only concatenate tuple (not "list") to tuple.
I am using the following for the generator:
def tri_generator():
    for i in range(1000):
        yield (i, (1, 1, 1))
        yield (i, (1, 0, 1))
        yield (i, (1, 2, 3))
Not sure what I am missing. Any help is appreciated.
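One likely cause, as a minimal sketch assuming the yielded tuples are being passed into tri() as seq: slicing a tuple gives a tuple, which cannot be concatenated with the list [sum(seq)], and that produces exactly this error. Converting the seed to a list sidesteps it:

def tri(n, seq=(1, 1, 1)):
    seq = list(seq)  # accept a list or a tuple seed; avoids the tuple + list concatenation error
    for _ in range(n - 2):
        seq = seq[1:] + [sum(seq)]
    return seq[-1]

print(tri(10))             # 193
print(tri(10, (1, 0, 1)))  # a tuple seed now works as well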

Here's a simple generator (you can clean up the code as you may like):
def tri_generator():
    i = 0
    seq = [1, 1, 1]
    while True:
        seq = [seq[1], seq[2], seq[0] + seq[1] + seq[2]]
        yield i, seq
        i += 1

n = 10
xx = tri_generator()
for i in range(n - 2):
    print(next(xx))
## Output:
## (0, [1, 1, 3])
## (1, [1, 3, 5])
## (2, [3, 5, 9])
## (3, [5, 9, 17])
## (4, [9, 17, 31])
## (5, [17, 31, 57])
## (6, [31, 57, 105])
## (7, [57, 105, 193])
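For completeness, here is a small helper (the name tri_from_generator is just for this sketch) that pulls the n-th Tribonacci number (n >= 3) back out of that generator using itertools.islice:

from itertools import islice

def tri_from_generator(n):
    # the generator's item at index n - 3 ends with tri(n) for the seed [1, 1, 1]
    _, seq = next(islice(tri_generator(), n - 3, None))
    return seq[-1]

print(tri_from_generator(10))  # 193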

Related

How to compute the outer sum (similar to outer product)

Given tensors x and y, each with shape (num_batches, d), how can I use PyTorch to compute the sum of every combination of x and y within a batch?
This is similar to the outer product, except we don't want to multiply, but sum. (This implies that I could solve it by exponentiating, taking the outer product, and taking the log, but of course that has numerical and performance disadvantages.)
It could be done via cartesian product and then summing each of the combinations.
Essentially, I'd like osum[b, i, j] == x[b, i] + y[b, j]. Can PyTorch do this in tensors, without loops?
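For reference, a naive loop-based sketch of the operation being asked for (just to pin down the definition; the broadcasting answers below avoid these Python loops):

import torch

def outer_sum_loops(x, y):
    # osum[b, i, j] == x[b, i] + y[b, j], computed with explicit loops
    num_batches, d = x.shape
    osum = torch.empty(num_batches, d, d, dtype=x.dtype)
    for b in range(num_batches):
        for i in range(d):
            for j in range(d):
                osum[b, i, j] = x[b, i] + y[b, j]
    return osum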
This can easily be done by introducing singleton dimensions into x and y and broadcasting along these singleton dimensions:
osum = x[..., None] + y[:, None, :]
For example:
import torch

x = torch.arange(6).view(2, 3)
y = x * 10
osum = x[..., None] + y[:, None, :]
Results in:
tensor([[[ 0, 10, 20],
         [ 1, 11, 21],
         [ 2, 12, 22]],

        [[33, 43, 53],
         [34, 44, 54],
         [35, 45, 55]]])
Update (July 14th): How does it work?
You have two tensors, x and y, each of shape b×n, and you want to compute:
osum[b,i,j] = x[b, i] + y[b, j]
We can, conceptually, create new variables xx and yy by repeating each element of x and y along a third dimension, such that:
xx[b, i, j] == x[b, i] # for all j
yy[b, i, j] == y[b, j] # for all i
With these new variables, it is easy to see that:
osum = xx + yy
since, by definition,
osum[b, i, j] == xx[b, i, j] + yy[b, i, j] == x[b, i] + y[b, j]
Now, you can use commands such as torch.expand or torch.repeat to explicitly create xx and yy - but why bother? since their elements are just trivial repetitions of the elements along specific dimensions, broadcasting does this implicitly for you.
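As a sketch of that explicit route (purely to illustrate the equivalence; xx and yy follow the naming above):

import torch

x = torch.arange(6).view(2, 3)
y = x * 10

xx = x[:, :, None].expand(2, 3, 3)  # xx[b, i, j] == x[b, i] for all j
yy = y[:, None, :].expand(2, 3, 3)  # yy[b, i, j] == y[b, j] for all i

assert torch.equal(xx + yy, x[..., None] + y[:, None, :])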
You can perform such an operation using broadcasting:
>>> x = torch.randint(0,10,(2,4))
tensor([[0, 6, 5, 8],
[3, 0, 7, 5]])
>>> y = torch.randint(0,10,(2,5))
tensor([[6, 9, 9, 8, 7],
[0, 4, 6, 2, 5]])
>>> x[:,:,None].shape
torch.Size([2, 4, 1])
>>> y[:,None].shape
torch.Size([2, 1, 5])
Adding a singleton to dimensions which differ ensures the 'outer' operation is performed.
>>> osum = x[:,:,None] + y[:,None]
tensor([[[ 6,  9,  9,  8,  7],
         [12, 15, 15, 14, 13],
         [11, 14, 14, 13, 12],
         [14, 17, 17, 16, 15]],

        [[ 3,  7,  9,  5,  8],
         [ 0,  4,  6,  2,  5],
         [ 7, 11, 13,  9, 12],
         [ 5,  9, 11,  7, 10]]])

get multiple tuples from list of tuples using min function

I have a list that looks like this:
mylist = [('Part1', 5, 5), ('Part2', 7, 7), ('Part3', 11, 9),
          ('Part4', 45, 45), ('part5', 5, 5)]
I am looking for all the tuples that have a number closest to my input.
Right now I am using this code:
result = min([x for x in mylist if x[1] >= 4 and x[2] >= 4])
The result I am getting is
('part5', 5, 5)
But I am looking for a result more like
[('Part1', 5, 5), ('part5', 5, 5)]
and if there are more matching tuples (I have 2 in this example, but there could be more), then I would like to get all of them back.
The whole code:
mylist = [('Part1', 5, 5), ('Part2', 7, 7), ('Part3', 11, 9), ('Part4', 45, 45), ('part5', 5, 5)]
result = min([x for x in mylist if x[1] >= 4 and x[2] >= 4])
print(result)
threshold = 4
mylist = [('Part1', 5, 5), ('Part2', 7, 7), ('Part3', 11, 9), ('Part4', 45, 45), ('part5', 5, 5)]
filtered = [x for x in mylist if x[1] >= threshold and x[2] >= threshold]
keyfunc = lambda x: x[1]
my_min = keyfunc(min(filtered, key=keyfunc))
result = [v for v in filtered if keyfunc(v)==my_min]
# [('Part1', 5, 5), ('part5', 5, 5)]
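If "closest to my input" means nearest to a target value rather than above a threshold (an assumption about the intent), the same keep-everything-that-ties-the-minimum pattern works with a distance key:

target = 4
mylist = [('Part1', 5, 5), ('Part2', 7, 7), ('Part3', 11, 9),
          ('Part4', 45, 45), ('part5', 5, 5)]
keyfunc = lambda x: abs(x[1] - target)  # distance of the second field from the target
best = keyfunc(min(mylist, key=keyfunc))
result = [x for x in mylist if keyfunc(x) == best]
# [('Part1', 5, 5), ('part5', 5, 5)]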

why python spark slow when use for loop

I am learning PySpark by writing a PageRank program.
But when I use a for loop to compute it, every iteration gets slower.
I tried to use cache, but it doesn't seem to work.
I have no idea how to fix this problem.
Here is my loop code:
from time import time
from operator import add   # assumed import: `add` is used with reduceByKey/reduce below
from tqdm import tqdm      # assumed import for the progress bar

for idx, i in tqdm(enumerate(range(10))):
    start_time = time()  # <-- start timing
    new_values = stochastic_matrix.flatMap(lambda x: get_new_value(x, beta, N))
    new_values = new_values.reduceByKey(add).map(lambda x: [x[0], x[1] + ((1 - beta) / N)])
    S = new_values.values().reduce(add)
    new_stochastic_matrix = stochastic_matrix.fullOuterJoin(new_values)
    stochastic_matrix = new_stochastic_matrix.map(lambda x: sum_new_value(x, S, N))
    new_stochastic_matrix.cache()
    stochastic_matrix.cache()  # <--- cache here
    end_time = time()
    print(idx, end_time - start_time)

sorted(stochastic_matrix.collect())[:10]
Update
After I comment out this line:
stochastic_matrix = new_stochastic_matrix.map(lambda x: sum_new_value(x, S, N))
it works normally!
But I still don't know why, or how to fix it.
Update 2
If I set S to a constant, the speed is normal.
But I still don't know why, or how to fix it.
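A commonly suggested mitigation for iterative RDD loops that get slower is to periodically truncate the growing lineage with a checkpoint; whether that is the actual culprit here is an assumption, and the checkpoint directory below is a placeholder:

sc.setCheckpointDir("/tmp/spark-checkpoints")  # placeholder path; sc is the SparkContext

for idx in range(10):
    new_values = stochastic_matrix.flatMap(lambda x: get_new_value(x, beta, N)) \
                                  .reduceByKey(add) \
                                  .map(lambda x: [x[0], x[1] + ((1 - beta) / N)])
    S = new_values.values().reduce(add)
    stochastic_matrix = stochastic_matrix.fullOuterJoin(new_values) \
                                         .map(lambda x: sum_new_value(x, S, N))
    stochastic_matrix.cache()
    if idx % 3 == 0:
        stochastic_matrix.checkpoint()  # cut the lineage every few iterations
        stochastic_matrix.count()       # force materialization of the checkpoint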
The whole flow
After loading the input data, the variable stochastic_matrix looks like this:
[
    # (key, [value, this_node_connect_to_which_node]),
    (1, [0.2, [2, 3]]),
    (2, [0.2, [4]]),
    (3, [0.2, [1, 4, 5]]),
    (4, [0.2, []]),
    (5, [0.2, [1, 4]])
]
Map
def get_new_value(item, beta, N):
    key, tmp = item
    value, dest = tmp
    N_dest = len(dest)
    new_values = []
    for i in dest:
        new_values.append([i, beta * (value / N_dest)])
    return new_values

new_values = stochastic_matrix.flatMap(lambda x: get_new_value(x, beta, N))
new_values.collect()
########### output
[node, each_node_new_value]
[[2, 0.08000000000000002],
[3, 0.08000000000000002],
[4, 0.16000000000000003],
[1, 0.05333333333333334],
[4, 0.05333333333333334],
[5, 0.05333333333333334],
[1, 0.08000000000000002],
[4, 0.08000000000000002]]
Reduce by key
beta and N are just float numbers.
new_values = new_values.reduceByKey(add).map(lambda x: [x[0], x[1] + ((1-beta)/N)] )
new_values.collect()
###### Output
[[2, 0.12000000000000001],
[3, 0.12000000000000001],
[4, 0.33333333333333337],
[1, 0.17333333333333334],
[5, 0.09333333333333332]]
Combine new_values and stochastic_matrix
new_stochastic_matrix = stochastic_matrix.fullOuterJoin(new_values)
new_stochastic_matrix.collect()
#### Output
# (key, ([value, this_node_connect_to_which_node], new_value))
[(2, ([0.2, [4]], 0.12000000000000001)),
(4, ([0.2, []], 0.33333333333333337)),
(1, ([0.2, [2, 3]], 0.17333333333333334)),
(3, ([0.2, [1, 4, 5]], 0.12000000000000001)),
(5, ([0.2, [1, 4]], 0.09333333333333332))]
Update new_value to value
S and N are just numbers.
def sum_new_value(item, S, N):
    key, value = item
    if value[1] is None:
        new_value = 0 + (1 - S) / N
    else:
        new_value = value[1] + (1 - S) / N
    # new_value = value[1]
    return [key, [new_value, value[0][1]]]

stochastic_matrix = new_stochastic_matrix.map(lambda x: sum_new_value(x, S, N))
sorted(stochastic_matrix.collect())[:10]
######## Output
[[1, [0.2053333333333333, [2, 3]]],
[2, [0.152, [4]]],
[3, [0.152, [1, 4, 5]]],
[4, [0.36533333333333334, []]],
[5, [0.1253333333333333, [1, 4]]]]

How to generate permutations by decreasing cycles?

Here are two related SO questions 1 2 that helped me formulate my preliminary solution.
The reason for wanting to do this is to feed permutations by edit distance into a Damerau-Levenshtein NFA; the number of permutations grows fast, so it's a good idea to delay the (N-C)-cycle candidates among the N-permutations until (N-C) iterations of the NFA.
I've only studied engineering math up to Differential Equations and Discrete Mathematics, so I lack the foundation to approach this task from a formal perspective. If anyone can provide reference materials to help me understand this problem properly, I would appreciate that!
Through brief empirical analysis, I've noticed that I can generate the swaps for all C-cycle N-permutations with this procedure:
Generate all 2-combinations of N elements (combs)
Subdivide combs into arrays where the smallest element of each 2-combination is the same (ncombs)
Generate the cartesian products of the (N-C)-combinations of ncombs (pcombs)
Sum pcombs to get a list of the swaps that will generate all C cycle N permutations (swaps)
The code is here.
My Python is a bit rusty, so helpful advice about the code is appreciated (I have the feeling that lines 17, 20, and 21 should be combined. I'm not sure if I should be making lists of the results of itertools.(combinations|product). I don't know why line 10 can't be ncombs += ... instead of ncombs.append(...)).
My primary question is how to solve this problem properly. I did my due diligence by finding a solution on my own, but I am sure there's a better way. I've also only verified my solution for N=3 and N=4; is it really correct?
The ideal solution would be functionally identical to Heap's algorithm, except it would generate the permutations in decreasing cycle order (that is, by the minimum number of swaps needed to generate the permutation, increasing).
This is far from Heap's efficiency, but it does produce only the necessary cycle combinations restricted by the desired number of cycles, k, in the permutation. We use the partitions of k to create all combinations of cycles for each partition. Enumerating the actual permutations is just a cartesian product of applying each cycle n-1 times, where n is the cycle length.
Recursive Python 3 code:
from math import ceil

def partitions(N, K, high=float('inf')):
    if K == 1:
        return [[N]]
    result = []
    low = ceil(N / K)
    high = min(high, N - K + 1)
    for k in range(high, low - 1, -1):
        for sfx in partitions(N - k, K - 1, k):
            result.append([k] + sfx)
    return result

print("partitions(10, 3):\n%s\n" % partitions(10, 3))

def combs(ns, subs):
    def g(i, _subs):
        if i == len(ns):
            return [tuple(tuple(x) for x in _subs)]
        res = []
        cardinalities = set()
        def h(j):
            temp = [x[:] for x in _subs]
            temp[j].append(ns[i])
            res.extend(g(i + 1, temp))
        for j in range(len(subs)):
            if not _subs[j] and not subs[j] in cardinalities:
                h(j)
                cardinalities.add(subs[j])
            elif _subs[j] and len(_subs[j]) < subs[j]:
                h(j)
        return res
    _subs = [[] for x in subs]
    return g(0, _subs)

A = [1, 2, 3, 4]
ns = [2, 2]
print("combs(%s, %s):\n%s\n" % (A, ns, combs(A, ns)))

A = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
ns = [3, 3, 3, 3]
print("num combs(%s, %s):\n%s\n" % (A, ns, len(combs(A, ns))))

def apply_cycle(A, cycle):
    n = len(cycle)
    last = A[cycle[n - 1]]
    for i in range(n - 1, 0, -1):
        A[cycle[i]] = A[cycle[i - 1]]
    A[cycle[0]] = last

def permutations_by_cycle_count(n, num_cycles):
    arr = [x for x in range(n)]
    cycle_combs = []
    for partition in partitions(n, num_cycles):
        cycle_combs.extend(combs(arr, partition))
    result = {}
    def f(A, cycle_comb, i):
        if i == len(cycle_comb):
            result[cycle_comb].append(A)
            return
        if len(cycle_comb[i]) == 1:
            f(A[:], cycle_comb, i + 1)
        for k in range(1, len(cycle_comb[i])):
            apply_cycle(A, cycle_comb[i])
            f(A[:], cycle_comb, i + 1)
        apply_cycle(A, cycle_comb[i])
    for cycle_comb in cycle_combs:
        result[cycle_comb] = []
        f(arr, cycle_comb, 0)
    return result

result = permutations_by_cycle_count(4, 2)
print("permutations_by_cycle_count(4, 2):\n")
for e in result:
    print("%s: %s\n" % (e, result[e]))
Output:
partitions(10, 3):
[[8, 1, 1], [7, 2, 1], [6, 3, 1], [6, 2, 2], [5, 4, 1], [5, 3, 2], [4, 4, 2], [4, 3, 3]]
# These are the cycle combinations
combs([1, 2, 3, 4], [2, 2]):
[((1, 2), (3, 4)), ((1, 3), (2, 4)), ((1, 4), (2, 3))]
num combs([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [3, 3, 3, 3]):
15400
permutations_by_cycle_count(4, 2):
((0, 1, 2), (3,)): [[2, 0, 1, 3], [1, 2, 0, 3]]
((0, 1, 3), (2,)): [[3, 0, 2, 1], [1, 3, 2, 0]]
((0, 2, 3), (1,)): [[3, 1, 0, 2], [2, 1, 3, 0]]
((1, 2, 3), (0,)): [[0, 3, 1, 2], [0, 2, 3, 1]]
((0, 1), (2, 3)): [[1, 0, 3, 2]]
((0, 2), (1, 3)): [[2, 3, 0, 1]]
((0, 3), (1, 2)): [[3, 2, 1, 0]]
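As a sanity check on the output above: a permutation of n elements with c cycles needs exactly n - c transpositions, and the number of such permutations is the unsigned Stirling number of the first kind, which for n = 4, c = 2 is 11 (4 cycle combinations × 2 permutations plus 3 combinations × 1):

result = permutations_by_cycle_count(4, 2)
print(sum(len(v) for v in result.values()))  # 11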

How to assign a random weight to edges in networkx, like weight of edge (a, a) = 0 and weight of edge (a, b) = K, where K is some random number

I am working on weighted graphs and I would like to assign a random weight to the edges of the graph, such that
weight of edge (a, a) = 0
weight of edge (a, b) = weight of edge (b, a) = K
where K is some random number. This goes for all the edges of the graph.
For that, I am using the random.randint() method. I am actually using the logic of sums: if the sum of both edge tuples is the same, then assign some random integer.
Here is my code:
nodelist = list(range(1, num_nodes + 1))
edgelist = []
for i in nodelist:
    for j in nodelist:
        if i == j:
            edgelist.append((i, j, 0))
        if (i != j and sum((i, j)) == sum((j, i))):
            rand = random.randint(5, 25)
            edgelist.append((i, j, rand))
print(edgelist)
Actual result:
[(1, 1, 0), (1, 2, 18), (1, 3, 6), (2, 1, 13), (2, 2, 0), (2, 3, 21), (3, 1, 20), (3, 2, 17), (3, 3, 0)]
Expected result:
[(1, 1, 0), (1, 2, K), (1, 3, H), (2, 1, K), (2, 2, 0), (2, 3, P), (3, 1, H), (3, 2, P), (3, 3, 0)]
where K, H, P are some random integers.
If the ordering of the result is not important, the following code gives the desired output:
import random

num_nodes = 3
nodelist = list(range(1, num_nodes + 1))
edgelist = []
for i in nodelist:
    for j in nodelist:
        if j > i:
            break
        if i == j:
            edgelist.append((i, j, 0))
        else:
            rand = random.randint(5, 25)
            edgelist.append((i, j, rand))
            edgelist.append((j, i, rand))
print(edgelist)
# [(1, 1, 0), (2, 1, 7), (1, 2, 7), (2, 2, 0), (3, 1, 18), (1, 3, 18), (3, 2, 13), (2, 3, 13), (3, 3, 0)]
In case you need the edges sorted, simply use:
print(sorted(edgelist))
# [(1, 1, 0), (1, 2, 20), (1, 3, 16), (2, 1, 20), (2, 2, 0), (2, 3, 23), (3, 1, 16), (3, 2, 23), (3, 3, 0)]
Just a little change in your code will do the trick.
Here is the solution I found to obtain your expected output:
import random  # assumed import

num_nodes = 3
nodelist = list(range(1, num_nodes + 1))
edgelist = []
for i in nodelist:
    for j in nodelist:
        if i == j:
            edgelist.append((i, j, 0))
        elif i < j:
            rand = random.randint(5, 25)
            edgelist.append((i, j, rand))
            edgelist.append((j, i, rand))
print(sorted(edgelist))
This code outputs:
[(1, 1, 0), (1, 2, 15), (1, 3, 15), (2, 1, 15), (2, 2, 0), (2, 3, 21), (3, 1, 15), (3, 2, 21), (3, 3, 0)]
So I figured out something interesting. Say the matrix below shows the edges in a complete graph of 5 nodes:
[1, 1] [1, 2] [1, 3] [1, 4] [1, 5]
[2, 1] [2, 2] [2, 3] [2, 4] [2, 5]
[3, 1] [3, 2] [3, 3] [3, 4] [3, 5]
[4, 1] [4, 2] [4, 3] [4, 4] [4, 5]
[5, 1] [5, 2] [5, 3] [5, 4] [5, 5]
Now, moving to the right of the principal diagonal, we have pairs whose first element is less than the second. We just have to target those and append a new random weight to each.
Here is my code:
nodelist = list(range(1, num_nodes + 1))
edgelist = []
for i in nodelist:
    for j in nodelist:
        edgelist.append([i, j])
p = 0
eff_edgelist = []
while p < len(edgelist):
    if edgelist[p][0] <= edgelist[p][1]:
        eff_edgelist.append(edgelist[p])
    p += 1
for i in eff_edgelist:
    if i[0] == i[1]:
        i.append(0)
    else:
        i.append(random.randint(5, 50))
eff_edgelist = [tuple(i) for i in eff_edgelist]
# (presumably the weighted edges are then loaded into the graph,
#  e.g. with G.add_weighted_edges_from(eff_edgelist))
for i in list(G.edges(data=True)):
    print([i])
and the result:
[(1, 1, {'weight': 0})]
[(1, 2, {'weight': 12})]
[(1, 3, {'weight': 37})]
[(1, 4, {'weight': 38})]
[(1, 5, {'weight': 6})]
[(2, 2, {'weight': 0})]
[(2, 3, {'weight': 12})]
[(2, 4, {'weight': 40})]
[(2, 5, {'weight': 8})]
[(3, 3, {'weight': 0})]
[(3, 4, {'weight': 15})]
[(3, 5, {'weight': 38})]
[(4, 4, {'weight': 0})]
[(4, 5, {'weight': 41})]
[(5, 5, {'weight': 0})]
and if you check print(G[2][1]), the output will be {'weight': 12},
which means weight of edge (a, b) = weight of edge (b, a).
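For comparison, here is a shorter networkx-centric sketch (an alternative take, assuming the goal is just symmetric random weights with zero-weight self-loops): an undirected nx.Graph stores each edge once, so assigning one random weight per edge is symmetric by construction:

import random
import networkx as nx

num_nodes = 3
G = nx.complete_graph(range(1, num_nodes + 1))
for u, v in G.edges():
    G[u][v]['weight'] = random.randint(5, 25)
for n in list(G.nodes()):
    G.add_edge(n, n, weight=0)  # weight of edge (a, a) = 0

print(G[1][2] == G[2][1])  # True: weight of edge (a, b) == weight of edge (b, a)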

Resources