I have the following code for object detection:
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
num_classes = 4
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
images, targets = next(iter(data_loader))
images = [image.float() / 255. for image in images]  # normalize to [0, 1] floats
targets = [{k: v for k, v in t.items()} for t in targets]
model.train()
output = model(images, targets)  # in train mode, returns the loss dict
model.eval()
predictions = model(images)  # in eval mode, returns detections
After running the object detection model, I have the following variables:
targets:
[{'boxes': tensor([[355, 220, 376, 244]], dtype=torch.int32), 'labels': tensor([2])}]
output:
{'loss_classifier': tensor(1.3333, grad_fn=<...>),
 'loss_box_reg': tensor(0.0038, grad_fn=<...>),
 'loss_objectness': tensor(0.1066, grad_fn=<...>),
 'loss_rpn_box_reg': tensor(0.0052, grad_fn=<...>)}
predictions:
[{'boxes':
tensor([[499.3585, 192.1516, 509.1189, 229.6568],
[283.9691, 141.0438, 297.4498, 173.8349],
[219.0643, 288.0016, 240.9904, 300.9542],
[ 1.3697, 304.6172, 19.1882, 391.8512],
[276.8518, 289.1565, 286.3466, 297.7505],
[347.7480, 233.1855, 358.6660, 254.8889],
[361.5839, 296.3174, 374.4094, 306.7511],
[344.9453, 229.1313, 355.5451, 238.3187],
[251.1995, 292.2169, 266.5463, 305.9667],
[309.4635, 291.2534, 314.7242, 300.4301],
[223.1157, 285.6935, 252.8376, 299.0389],
[562.0787, 267.8251, 599.0346, 334.5351],
[222.7859, 288.2105, 235.1801, 303.9444],
[311.3547, 290.5253, 315.8662, 298.9694],
[362.1211, 298.5587, 375.1563, 312.6452],
[311.8416, 273.1684, 318.3790, 286.2831],
[220.4009, 287.3602, 231.2190, 302.1286],
[ 0.0000, 296.7562, 64.8016, 362.1397],
[ 2.7204, 291.9724, 19.4280, 355.7361],
[328.4286, 298.9844, 344.3835, 307.2706],
[197.0271, 193.5887, 258.5456, 232.4857],
[277.7694, 142.6341, 310.9907, 176.3230],
[226.8122, 292.2942, 253.2384, 307.8453],
[309.3487, 281.2783, 613.6774, 327.8228],
[361.5640, 227.6523, 369.5664, 241.1888],
[311.3616, 257.4242, 317.6867, 273.2939],
[287.6371, 295.8700, 296.1999, 303.9047],
[363.7999, 292.3636, 376.4563, 324.4444],
[329.5759, 299.3618, 347.5399, 312.2188],
[314.8295, 294.8729, 323.9438, 306.2670],
[278.0969, 291.1522, 288.4695, 299.7475],
[312.2480, 288.7871, 316.6707, 297.3116],
[341.1802, 223.9077, 360.4586, 260.1764],
[134.1158, 234.9592, 145.3087, 258.9106],
[312.3074, 292.3976, 317.6186, 302.2166],
[311.4927, 268.3312, 318.5302, 282.3795],
[204.4817, 295.7732, 258.9794, 314.0266],
[ 0.0000, 265.7996, 45.9227, 388.1324],
[273.5489, 292.5255, 282.9457, 303.2326],
[217.3738, 284.6293, 249.0125, 309.3752],
[274.8192, 290.2737, 282.2784, 299.6808],
[ 0.9731, 257.4869, 24.0500, 375.4413],
[311.5385, 285.5492, 315.9659, 293.7715],
[468.9328, 279.4567, 505.6418, 331.5283],
[310.3611, 275.5211, 316.5564, 286.3037],
[218.3149, 285.7218, 226.5772, 299.1930],
[324.0149, 297.8809, 347.2982, 320.1754],
[129.2703, 233.3225, 138.2789, 244.7385],
[307.8921, 260.1715, 312.6920, 270.2595],
[343.3422, 234.6691, 355.3820, 253.8513],
[210.6515, 191.1276, 269.8756, 308.7494],
[309.7780, 271.4445, 315.4118, 282.1513],
[306.9601, 264.1185, 311.5843, 274.1895],
[125.7533, 232.6025, 156.1994, 262.3342],
[324.1716, 248.5284, 632.7621, 373.1680],
[234.5245, 293.4004, 252.0132, 313.5597],
[213.8129, 283.7307, 279.8897, 302.0436],
[ 9.3936, 249.8066, 312.7980, 403.3334],
[ 7.1693, 309.9051, 50.8535, 372.1105],
[313.0856, 290.1074, 318.0972, 298.6309],
[438.3509, 102.7765, 480.8036, 240.5124],
[ 4.0711, 240.0525, 57.4158, 356.4373],
[349.6964, 318.7935, 625.7964, 409.0545],
[307.6823, 267.9908, 312.8143, 278.5309],
[104.1652, 266.7782, 110.4961, 276.5671],
[299.3577, 267.6205, 303.7659, 274.9702],
[346.7052, 228.4357, 357.4846, 247.6979],
[ 6.4561, 92.1798, 22.0957, 152.1279],
[104.7418, 261.4524, 111.9131, 275.1398],
[127.5597, 232.1843, 142.1213, 260.0159],
[277.8427, 294.9715, 286.7830, 303.0511],
[ 2.2437, 289.5077, 31.9722, 365.3925],
[339.7329, 300.2569, 346.9595, 320.4829],
[189.3274, 263.6696, 194.7567, 274.4467],
[417.5750, 294.8152, 624.1110, 371.8047],
[308.6866, 256.3249, 315.1602, 269.1858],
[129.8589, 230.3042, 151.0375, 276.8960],
[ 11.7817, 277.1637, 263.9375, 355.9714],
[284.6764, 294.4560, 293.4034, 303.1683],
[286.5944, 299.7869, 349.3191, 315.4422],
[361.8950, 296.4371, 369.1558, 318.6344],
[276.0127, 135.9264, 296.3451, 174.2803],
[316.2193, 284.2166, 321.4098, 294.1941],
[258.3928, 289.4342, 273.1449, 302.1514],
[288.1286, 291.5562, 295.2573, 301.7779],
[564.3504, 271.4033, 597.6384, 303.2319],
[178.8375, 262.0026, 186.0949, 272.7263],
[535.7798, 257.4667, 630.2303, 399.2629],
[308.4334, 262.2772, 313.0033, 275.5795],
[186.4485, 199.1323, 265.1494, 278.7984],
[473.2414, 70.5220, 507.8289, 243.3028],
[315.2024, 271.2917, 322.9809, 284.3033],
[ 13.3878, 284.5474, 47.8141, 347.7423],
[267.7779, 145.4938, 340.6838, 177.7943],
[219.2828, 291.4946, 226.5941, 303.7463],
[416.8861, 217.8241, 640.0000, 421.8463],
[430.6125, 119.5146, 504.7456, 223.8819],
[130.7495, 235.0306, 143.1934, 246.0306],
[308.1282, 258.0222, 316.2524, 273.8670],
[256.1530, 293.4894, 268.8109, 310.0722]], grad_fn=<...>), 'labels': tensor([3, 2, 3, 3, 3, 2, 3, 2,
3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 3, 2, 2, 3, 1,
3, 2, 3, 3, 3, 3, 3, 3, 2, 2, 3, 2, 1, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 2,
3, 2, 2, 2, 3, 2, 1, 3, 3, 3, 3, 3, 3, 3, 1, 3, 2, 2, 2, 3, 2, 2, 3, 3,
3, 3, 1, 3, 2, 3, 3, 1, 3, 2, 3, 3, 3, 3, 3, 1, 3, 2, 3, 2, 3, 2, 3, 1,
3, 2, 2, 3]), 'scores': tensor([0.5648, 0.4955, 0.4951, 0.4896, 0.4864, 0.4858, 0.4761, 0.4677, 0.4536,
0.4489, 0.4475, 0.4424, 0.4408, 0.4391, 0.4374, 0.4373, 0.4362, 0.4330,
0.4326, 0.4301, 0.4282, 0.4282, 0.4279, 0.4250, 0.4240, 0.4228, 0.4226,
0.4215, 0.4205, 0.4203, 0.4166, 0.4154, 0.4136, 0.4084, 0.4083, 0.4057,
0.4054, 0.4038, 0.4038, 0.4035, 0.4032, 0.4026, 0.4015, 0.4008, 0.3976,
0.3959, 0.3954, 0.3939, 0.3939, 0.3901, 0.3877, 0.3870, 0.3859, 0.3859,
0.3849, 0.3789, 0.3776, 0.3774, 0.3773, 0.3763, 0.3755, 0.3753, 0.3749,
0.3748, 0.3739, 0.3737, 0.3729, 0.3715, 0.3707, 0.3703, 0.3697, 0.3694,
0.3687, 0.3687, 0.3678, 0.3668, 0.3655, 0.3641, 0.3629, 0.3619, 0.3615,
0.3607, 0.3604, 0.3603, 0.3598, 0.3594, 0.3578, 0.3571, 0.3568, 0.3563,
0.3562, 0.3559, 0.3559, 0.3546, 0.3538, 0.3527, 0.3518, 0.3515, 0.3513,
0.3508], grad_fn=<...>)}]
When I call compute(), I get this error:
from torchmetrics.detection.mean_ap import MeanAveragePrecision
metric = MeanAveragePrecision(box_format='xyxy', class_metrics=True)
metric.update(predictions, targets)
metric.compute()
RuntimeError: value cannot be converted to type int without overflow
I can't figure out how to solve this.
Related
>>> b
tensor([[ 6, 7, 12, 7, 8],
[ 0, 1, 6, 1, 2],
[ 0, 1, 6, 1, 2],
[ 2, 3, 8, 3, 4],
[ 2, 3, 8, 3, 4],
[ 2, 3, 8, 3, 4],
[10, 11, 16, 11, 12],
[-1, 0, 5, 0, 1],
[-2, -1, 4, -1, 0],
[ 2, 3, 8, 3, 4],
[ 1, 2, 7, 2, 3],
[ 1, 2, 7, 2, 3],
[ 2, 3, 8, 3, 4],
[ 5, 6, 11, 6, 7],
[-2, -1, 4, -1, 0],
[-3, -2, 3, -2, -1],
[-5, -4, 1, -4, -3],
[ 1, 2, 7, 2, 3],
[12, 13, 18, 13, 14],
[-3, -2, 3, -2, -1],
[ 2, 3, 8, 3, 4],
[ 3, 4, 9, 4, 5],
[10, 11, 16, 11, 12],
[-6, -5, 0, -5, -4],
[ 9, 10, 15, 10, 11],
[12, 13, 18, 13, 14],
[-3, -2, 3, -2, -1],
[-2, -1, 4, -1, 0],
[-4, -3, 2, -3, -2],
[-1, 0, 5, 0, 1],
[ 2, 3, 8, 3, 4],
[ 4, 5, 10, 5, 6],
[-1, 0, 5, 0, 1],
[ 5, 6, 11, 6, 7],
[ 7, 8, 13, 8, 9],
[ 3, 4, 9, 4, 5],
[ 2, 3, 8, 3, 4],
[ 4, 5, 10, 5, 6],
[-4, -3, 2, -3, -2],
[ 2, 3, 8, 3, 4],
[-1, 0, 5, 0, 1],
[ 2, 3, 8, 3, 4],
[ 4, 5, 10, 5, 6],
[ 9, 10, 15, 10, 11],
[-1, 0, 5, 0, 1],
[-4, -3, 2, -3, -2],
[ 0, 1, 6, 1, 2],
[ 4, 5, 10, 5, 6],
[ 6, 7, 12, 7, 8],
[-2, -1, 4, -1, 0]])
>>> torch.mode(b, 0)
torch.return_types.mode(
values=tensor([2, 3, 8, 3, 4]),
indices=tensor([20, 20, 20, 20, 20]))
I don't know why the indices are all equal to 20.
The torch.mode documentation describes it as below:
https://pytorch.org/docs/stable/generated/torch.mode.html#torch.mode
torch.mode(input, dim=-1, keepdim=False, *, out=None)
Returns a namedtuple (values, indices) where values is the mode value of each row of the input tensor in the given dimension dim, i.e. a value which appears most often in that row, and indices is the index location of each mode value found.
By default, dim is the last dimension of the input tensor.
If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input.
This is because of the structure of the tensor b. The row [2, 3, 8, 3, 4] is repeated many times, so the column-wise modes are respectively [2, 3, 8, 3, 4]; more importantly, the mode indices are all equal precisely because the modes occur together in the same rows. If you look at the row with index 20 (i.e., the 21st row), it is exactly [2, 3, 8, 3, 4].
I am assuming you constructed b similarly to the example in the torch.mode docs, which I believe is a poor choice of example, as it leads to exactly the confusion you are having.
Instead, consider the following:
>>> b = torch.randint(4, (5, 7))
>>> b
tensor([[0, 0, 0, 2, 0, 0, 2],
[0, 3, 0, 0, 2, 0, 1],
[2, 2, 2, 0, 0, 0, 3],
[2, 2, 3, 0, 1, 1, 0],
[1, 1, 0, 0, 2, 0, 2]])
>>> torch.mode(b, 0)
torch.return_types.mode(
values=tensor([0, 2, 0, 0, 0, 0, 2]),
indices=tensor([1, 3, 4, 4, 2, 4, 4]))
In the above, b has different modes in each column which are respectively [0, 2, 0, 0, 0, 0, 2] and the indices returned by torch.mode are [1, 3, 4, 4, 2, 4, 4]. This makes sense because, for example, in the first column, 0 is the most common element and there is a 0 at index 1. Similarly, in the second column, 2 is the most common element and there is a 2 at index 3. This is true for all columns. If you want the modes of the rows instead, you would do torch.mode(b, 1).
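For intuition, the same column-wise mode can be sketched without torch (an illustrative snippet, not the library implementation; note that Counter.most_common breaks ties by first occurrence, which may differ from torch.mode's tie-breaking):

```python
from collections import Counter

def column_modes(rows):
    # Most common value in each column of a list-of-lists matrix,
    # mirroring the values returned by torch.mode(b, 0).
    modes = []
    for col in zip(*rows):
        value, _count = Counter(col).most_common(1)[0]
        modes.append(value)
    return modes

b = [[0, 0, 0, 2, 0, 0, 2],
     [0, 3, 0, 0, 2, 0, 1],
     [2, 2, 2, 0, 0, 0, 3],
     [2, 2, 3, 0, 1, 1, 0],
     [1, 1, 0, 0, 2, 0, 2]]
print(column_modes(b))  # → [0, 2, 0, 0, 0, 0, 2]
```

On this example it matches the values above; transposing the input (or iterating over rows directly) would give the row-wise modes instead.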
I'm hoping to calculate the distances between each pair of points in an (Nx1) numpy array, i.e.:
a = [2, 5, 5, 12, 5, 3, 10, 8, 1, 3, 1]
I'm hoping to get a square matrix with the (normed) distances between each point:
sq = [[0, |2-5|, |2-5|, |2-12|, |2-5|, ...],
[|5-2|, 0, ...], ...]
So far, what I have doesn't work, giving wrong values for the square distance matrix. Is there also a way to vectorise (if that's the correct term?) my method? I'm unfamiliar with advanced indexing.
What I currently have is the following:
sq = np.zeros((len(a), len(a)))
for i in range(len(a)):
    for j in range(len(a)):
        sq[i, j] = np.abs(a[i, 0] - a[i, 0])
Would appreciate any help!
I think that, by exploiting numpy broadcasting, this is the fastest solution:
a = [2, 5, 5, 12, 5, 3, 10, 8, 1, 3, 1]
a = np.array(a).reshape(-1,1)
sq = np.abs(a.T-a)
sq
array([[ 0, 3, 3, 10, 3, 1, 8, 6, 1, 1, 1],
[ 3, 0, 0, 7, 0, 2, 5, 3, 4, 2, 4],
[ 3, 0, 0, 7, 0, 2, 5, 3, 4, 2, 4],
[10, 7, 7, 0, 7, 9, 2, 4, 11, 9, 11],
[ 3, 0, 0, 7, 0, 2, 5, 3, 4, 2, 4],
[ 1, 2, 2, 9, 2, 0, 7, 5, 2, 0, 2],
[ 8, 5, 5, 2, 5, 7, 0, 2, 9, 7, 9],
[ 6, 3, 3, 4, 3, 5, 2, 0, 7, 5, 7],
[ 1, 4, 4, 11, 4, 2, 9, 7, 0, 2, 0],
[ 1, 2, 2, 9, 2, 0, 7, 5, 2, 0, 2],
[ 1, 4, 4, 11, 4, 2, 9, 7, 0, 2, 0]])
With numpy, the following might be the shortest route to your result:
import numpy as np
a = np.array([2, 5, 5, 12, 5, 3, 10, 8, 1, 3, 1])
sq = np.array([np.array([(np.abs(i - j)) for j in a]) for i in a])
print(sq)
The following would give you the desired result without numpy.
a = [2, 5, 5, 12, 5, 3, 10, 8, 1, 3, 1]
sq = []
for i in a:
distances = []
for j in a:
distances.append(abs(i-j))
sq.append(distances)
print(sq)
With both, the result comes as:
[[0, 3, 3, 10, 3, 1, 8, 6, 1, 1, 1], [3, 0, 0, 7, 0, 2, 5, 3, 4, 2, 4], [3, 0, 0, 7, 0, 2, 5, 3, 4, 2, 4], [10, 7, 7, 0, 7, 9, 2, 4, 11, 9, 11], [3, 0, 0, 7, 0, 2, 5, 3, 4, 2, 4], [1, 2, 2, 9, 2, 0, 7, 5, 2, 0, 2], [8, 5, 5, 2, 5, 7, 0, 2, 9, 7, 9], [6, 3, 3, 4, 3, 5, 2, 0, 7, 5, 7], [1, 4, 4, 11, 4, 2, 9, 7, 0, 2, 0], [1, 2, 2, 9, 2, 0, 7, 5, 2, 0, 2], [1, 4, 4, 11, 4, 2, 9, 7, 0, 2, 0]]
There may be more than one way to do this, but one way is to use only numpy array operations instead of Python loops, because numpy implements those operations in optimized C code.
One approach using only array operations is to create an NxN matrix by repeating the original array a N times.
E.g:
a = [1, 2, 3]
b = [[1 , 2, 3], [1 , 2, 3], [1 , 2, 3]]
Then you can do a matrix/array operation:
ans = abs(b - a)
Assuming a is a numpy array, you can do:
b = np.tile(a, (a.shape[0], 1))
ans = np.abs(b - a)
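As a quick sanity check, the broadcasting one-liner and an explicit double loop agree (a small verification snippet, assuming numpy is available):

```python
import numpy as np

a = [2, 5, 5, 12, 5, 3, 10, 8, 1, 3, 1]

# Broadcasting: a column vector minus a row vector yields all pairwise differences.
col = np.array(a).reshape(-1, 1)
sq_np = np.abs(col.T - col)

# Plain-Python double loop for comparison.
sq_loop = [[abs(i - j) for j in a] for i in a]

print(np.array_equal(sq_np, np.array(sq_loop)))  # → True
```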
So, I am given a list:
group = [2,1,3,4]
Each index of this list represents a group.
So group 0 = 2
group 1 = 1
group 2 = 3
group 3 = 4
I am given another list called l:
l =[[[0, 0, 3, 3, 3, 3], [0, 0, 1, 3, 3, 3, 3]], [[0, 1]], [[2, 2, 2, 3, 3, 3, 3], [2, 2, 2, 3, 3, 3, 3], [2, 2, 2, 3, 3, 3, 3]], [[0, 0, 2, 2, 2, 3, 3, 3], [0, 0, 2, 2, 2, 3, 3, 3], [0, 0, 2, 2, 2, 3, 3, 3, 3], [0, 0, 2, 2, 2, 3, 3, 3, 3]]]
the output I want is:
dict = {0: [0, 3], 1: [1], 2: [2, 3], 3: [0, 2]}
An element is added under index i of the dict if it appears the required number of times in every sublist of l[i]. For example, both l[0][0] and l[0][1] have 0 appear 2 times (group[0] is 2), so 0 is added under index 0. Both also have 3 appear 4 times (group[3] is 4), so 3 is added under index 0 as well.
In l[1][0], 0 appears just once (instead of twice), so it is not added under index 1. However, 1 appears once (group[1] is 1), so it is added under index 1. Thanks!
What I have tried so far:
def tryin(l, groups):
    for i in range(len(l)):
        count = 0
        for j in range(len(l[i])):
            if j in l[i][j]:
                count += 1
            if count == groups[i]:
                print(i, j)
Try this code:
input:
group = [2,1,3,4]
l =[[[0, 0, 3, 3, 3, 3], [0, 0, 1, 3, 3, 3, 3]], [[0, 1]], [[2, 2, 2, 3, 3, 3, 3], [2, 2, 2, 3, 3, 3, 3], [2, 2, 2, 3, 3, 3, 3]], [[0, 0, 2, 2, 2, 3, 3, 3], [0, 0, 2, 2, 2, 3, 3, 3], [0, 0, 2, 2, 2, 3, 3, 3, 3], [0, 0, 2, 2, 2, 3, 3, 3, 3]]]
def IntersecOfSets(list_):
    result = set(list_[0])
    for s in list_[1:]:
        result.intersection_update(s)
    return result

def nb_occ(l, group):
    d = {}
    for i in l:
        l2 = []
        for j in i:
            l1 = []
            for x in group:
                if j.count(group.index(x)) >= x:
                    y = group.index(x)
                    l1.append(y)
            l2.append(l1)
        if len(l2) > 1:
            d[str(l.index(i))] = IntersecOfSets(l2)
        else:
            d[str(l.index(i))] = l2[0]
    return d

print(nb_occ(l, group))
output:
{'2': {2, 3}, '1': [1], '0': {0, 3}, '3': {0, 2}}
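For comparison, here is a sketch using collections.Counter, under my reading of the rule: element e goes under key i exactly when every sublist of l[i] contains e at least group[e] times (the function name is my own):

```python
from collections import Counter

def groups_satisfied(l, group):
    # For each index i, keep every element e that occurs at least
    # group[e] times in *every* sublist of l[i].
    result = {}
    for i, sublists in enumerate(l):
        counts = [Counter(s) for s in sublists]
        result[i] = [e for e in range(len(group))
                     if all(c[e] >= group[e] for c in counts)]
    return result

group = [2, 1, 3, 4]
l = [[[0, 0, 3, 3, 3, 3], [0, 0, 1, 3, 3, 3, 3]],
     [[0, 1]],
     [[2, 2, 2, 3, 3, 3, 3], [2, 2, 2, 3, 3, 3, 3], [2, 2, 2, 3, 3, 3, 3]],
     [[0, 0, 2, 2, 2, 3, 3, 3], [0, 0, 2, 2, 2, 3, 3, 3],
      [0, 0, 2, 2, 2, 3, 3, 3, 3], [0, 0, 2, 2, 2, 3, 3, 3, 3]]]
print(groups_satisfied(l, group))  # → {0: [0, 3], 1: [1], 2: [2, 3], 3: [0, 2]}
```

This returns integer keys and sorted lists, matching the desired output exactly.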
I'm trying to find all palindromic sequences of length k that sum to n. I have a specific example (k=6):
def brute(n):
    J = []
    for a in range(1, n):
        for b in range(1, n):
            for c in range(1, n):
                if (a + b + c) * 2 == n:
                    J.append((a, b, c, c, b, a))
    return J
The output gives me something like:
[(1, 1, 6, 6, 1, 1),
(1, 2, 5, 5, 2, 1),
(1, 3, 4, 4, 3, 1),
(1, 4, 3, 3, 4, 1),
(1, 5, 2, 2, 5, 1),
(1, 6, 1, 1, 6, 1),
(2, 1, 5, 5, 1, 2),
(2, 2, 4, 4, 2, 2),
(2, 3, 3, 3, 3, 2),
(2, 4, 2, 2, 4, 2),
(2, 5, 1, 1, 5, 2),
(3, 1, 4, 4, 1, 3),
(3, 2, 3, 3, 2, 3),
(3, 3, 2, 2, 3, 3),
(3, 4, 1, 1, 4, 3),
(4, 1, 3, 3, 1, 4),
(4, 2, 2, 2, 2, 4),
(4, 3, 1, 1, 3, 4),
(5, 1, 2, 2, 1, 5),
(5, 2, 1, 1, 2, 5),
(6, 1, 1, 1, 1, 6)]
The issue is that I have no idea how to generalize this to arbitrary values of n and k. I hear that dictionaries might be helpful. Did I mention I'm new to Python? Any help would be appreciated, thanks!
The idea is that we simply count from 0 to 10**k, and consider each of these "integers" as a palindrome sequence. We left pad with 0 where necessary. So, for k==6, 0 -> [0, 0, 0, 0, 0, 0], 1 -> [0, 0, 0, 0, 0, 1], etc. This enumerates over all possible combinations. If it's a palindrome, we also check that it adds up to n.
Below is some code that (should) give a correct result for arbitrary n and k, but is not terribly efficient. I'll leave optimizing up to you (if it's necessary), and give some tips on how to do it.
Here's the code:
def find_all_palindromic_sequences(n, k):
    result = []
    for i in range(10**k):
        paly = gen_palindrome(i, k, n)
        if paly is not None:
            result.append(paly)
    return result

def gen_palindrome(i, k, n):
    # left-pad with zeros to length k, then split into digits
    i_padded = str(i).zfill(k)
    i_digits = [int(digit) for digit in i_padded]
    if i_digits == i_digits[::-1] and sum(i_digits) == n:
        return i_digits
    return None
To test it, we can do:
for paly in find_all_palindromic_sequences(n=16, k=6):
    print(paly)
This outputs:
[0, 0, 8, 8, 0, 0]
[0, 1, 7, 7, 1, 0]
[0, 2, 6, 6, 2, 0]
[0, 3, 5, 5, 3, 0]
[0, 4, 4, 4, 4, 0]
[0, 5, 3, 3, 5, 0]
[0, 6, 2, 2, 6, 0]
[0, 7, 1, 1, 7, 0]
[0, 8, 0, 0, 8, 0]
[1, 0, 7, 7, 0, 1]
[1, 1, 6, 6, 1, 1]
[1, 2, 5, 5, 2, 1]
[1, 3, 4, 4, 3, 1]
[1, 4, 3, 3, 4, 1]
[1, 5, 2, 2, 5, 1]
[1, 6, 1, 1, 6, 1]
[1, 7, 0, 0, 7, 1]
[2, 0, 6, 6, 0, 2]
[2, 1, 5, 5, 1, 2]
[2, 2, 4, 4, 2, 2]
[2, 3, 3, 3, 3, 2]
[2, 4, 2, 2, 4, 2]
[2, 5, 1, 1, 5, 2]
[2, 6, 0, 0, 6, 2]
[3, 0, 5, 5, 0, 3]
[3, 1, 4, 4, 1, 3]
[3, 2, 3, 3, 2, 3]
[3, 3, 2, 2, 3, 3]
[3, 4, 1, 1, 4, 3]
[3, 5, 0, 0, 5, 3]
[4, 0, 4, 4, 0, 4]
[4, 1, 3, 3, 1, 4]
[4, 2, 2, 2, 2, 4]
[4, 3, 1, 1, 3, 4]
[4, 4, 0, 0, 4, 4]
[5, 0, 3, 3, 0, 5]
[5, 1, 2, 2, 1, 5]
[5, 2, 1, 1, 2, 5]
[5, 3, 0, 0, 3, 5]
[6, 0, 2, 2, 0, 6]
[6, 1, 1, 1, 1, 6]
[6, 2, 0, 0, 2, 6]
[7, 0, 1, 1, 0, 7]
[7, 1, 0, 0, 1, 7]
[8, 0, 0, 0, 0, 8]
Which looks similar to your result, plus the results that contain 0.
Ideas for making it faster (this will slow down a lot as k becomes large):
This is an embarrassingly parallel problem, consider multithreading/multiprocessing.
The palindrome check of i_digits == i_digits[::-1] isn't as efficient as it could be (both in terms of memory and CPU). Having a pointer at the start and end, and traversing characters one by one till the pointers cross would be better.
There are some conditional optimizations you can do on certain values of n. For instance, if n is 0, it doesn't matter how large k is, the only palindrome will be [0, 0, 0, ..., 0, 0]. As another example, if n is 8, we obviously don't have to generate any permutations with 9 in them. Or, if n is 20, and k is 6, then we can't have 3 9's in our permutation. Generalizing this pattern will pay off big assuming n is reasonably small. It works the other way, too, actually. If n is large, then there is a limit to the number of 0s and 1s that can be in each permutation.
There is probably a better way of generating palindromes than testing every single integer. For example, if we know that integer X is a palindrome sequence, then X+1 will not be. It's pretty easy to show this: the first and last digits can't match for X+1 since we know they must have matched for X. You might be able to show that X+2 and X+3 cannot be palindromes either, etc. If you can generalize where you must test for a new palindrome, this will be key. A number theorist could help more in this regard.
HTH.
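Building on that last idea, here is one possible sketch (my own generalization, not part of the answer above) that enumerates only the first half of each sequence, since a palindrome is fully determined by its first (k + 1) // 2 entries:

```python
from itertools import product

def palindromic_sequences(n, k, max_value=9):
    # A length-k palindrome is fixed by its first k // 2 entries
    # (plus one middle entry when k is odd), so enumerate only those.
    half = k // 2
    result = []
    for head in product(range(max_value + 1), repeat=half):
        if k % 2 == 0:
            # even length: the two halves mirror, so the total is twice the head sum
            if 2 * sum(head) == n:
                result.append(list(head) + list(head)[::-1])
        else:
            # odd length: the middle entry absorbs whatever remains of n
            mid = n - 2 * sum(head)
            if 0 <= mid <= max_value:
                result.append(list(head) + [mid] + list(head)[::-1])
    return result

print(len(palindromic_sequences(16, 6)))  # → 45
```

This visits (max_value + 1) ** (k // 2) candidates instead of 10 ** k, and every sequence it emits is already a palindrome, so no per-candidate palindrome check is needed.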
First few examples:
Input:
10
1
4 5 6
Output:
6
another one:
Input:
10
2
3 3 3
7 7 4
Output:
4
I wrote this code; it is correct for some cases but not for all. Where is the problem?
n = int(input())
q = int(input())
z = 0
repeat = 0
ans = 0
answ = []
arrx = []
arry = []
for i in range(q):
    maxi = 0
    x, y, w = [int(i) for i in input().split()]
    x, y = x + 1, y + 1
    if arrx.count(x) >= 1:
        index = arrx.index(x)
        if y == arry[index]:
            if answ[index] == ans:
                repeat += answ[index]
                z = answ[index]
    arrx.append(x)
    arry.append(y)
    if (w > x or w > y) or (w > (n - x) or w > (n - y)):
        maxi = max(x, y, (n - x), (n - y))
    if ((x >= w) or (y >= w)) or (((n - x) >= w) or ((n - y) >= w)):
        maxi = w
    ans = max(ans, maxi)
    answ.append(ans)
    if ans > z:
        repeat = 0
print(ans + repeat)
The problem I see with your code is you are handling the data as two one dimensional arrays, arrx and arry, when the problem calls for a two dimensional array. You should be able to print out your data structure and see the heat map for the volcanoes. For the first example, you've got a single hot volcano in the middle of the map:
10
1
4 5 6
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
[2, 2, 2, 2, 2, 2, 2, 2, 2, 1]
[2, 3, 3, 3, 3, 3, 3, 3, 2, 1]
[2, 3, 4, 4, 4, 4, 4, 3, 2, 1]
[2, 3, 4, 5, 5, 5, 4, 3, 2, 1]
[2, 3, 4, 5, 6, 5, 4, 3, 2, 1]
[2, 3, 4, 5, 5, 5, 4, 3, 2, 1]
[2, 3, 4, 4, 4, 4, 4, 3, 2, 1]
[2, 3, 3, 3, 3, 3, 3, 3, 2, 1]
[2, 2, 2, 2, 2, 2, 2, 2, 2, 1]
Where the hottest (6) spot is obviously the one volcano itself. For the second example, you've got two cooler volcanoes:
10
2
3 3 3
7 7 4
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 1, 1, 1, 1, 0, 0, 0, 0]
[0, 1, 2, 2, 2, 1, 0, 0, 0, 0]
[0, 1, 2, 3, 2, 1, 0, 0, 0, 0]
[0, 1, 2, 2, 3, 2, 1, 1, 1, 1]
[0, 1, 1, 1, 2, 3, 2, 2, 2, 2]
[0, 0, 0, 0, 1, 2, 3, 3, 3, 2]
[0, 0, 0, 0, 1, 2, 3, 4, 3, 2]
[0, 0, 0, 0, 1, 2, 3, 3, 3, 2]
[0, 0, 0, 0, 1, 2, 2, 2, 2, 2]
Where the hot spot will either be the hotter of the two volcanoes or potentially some spot in their overlap that gets heated by both. In this case, the overlap spots don't get hotter than the hottest (4) volcano, but if the volcanoes were closer, one or more might have.
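The maps above can be reproduced with a short sketch. Two assumptions here are inferred from the printed maps, not from the original problem statement: each volcano (x, y, w) contributes max(0, w - Chebyshev distance) to every cell, and overlapping contributions add:

```python
def heat_map(n, volcanoes):
    # Sum each volcano's contribution, max(0, w - Chebyshev distance),
    # over an n x n grid; grid[y][x] holds the heat at (x, y).
    grid = [[0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            for x, y, w in volcanoes:
                grid[r][c] += max(0, w - max(abs(r - y), abs(c - x)))
    return grid

for row in heat_map(10, [(4, 5, 6)]):
    print(row)  # should reproduce the first map: a single peak of 6 at (4, 5)
```

With the full grid in hand, the answer is just the maximum cell value, which sidesteps the one-dimensional bookkeeping in the original code.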