Pivot a table in KQL - pivot

I have a table that looks like the one below. The string values inside the table can be completely different and don't necessarily follow any set naming rules, and the integer values may be more than just 0, 1, 2.
let input = datatable (name:string, test_name:string, value:int)
[
"VM01", "test_1", 0,
"VM01", "test_2", 1,
"VM01", "test_3", 1,
"VM01", "test_4", 2,
"VM01", "test_5", 2,
"VM02", "test_1", 2,
"VM02", "test_2", 1,
"VM02", "test_3", 1,
"VM02", "test_4", 1,
"VM02", "test_5", 2,
"VM03", "test_1", 1,
"VM03", "test_2", 1,
"VM03", "test_3", 1,
"VM03", "test_4", 0,
"VM03", "test_5", 2,
"VM04", "test_1", 1,
"VM04", "test_2", 1,
"VM04", "test_3", 1,
"VM04", "test_4", 1,
"VM04", "test_5", 2,
"VM05", "test_1", 1,
"VM05", "test_2", 1,
"VM05", "test_3", 2,
"VM05", "test_4", 2,
"VM05", "test_5", 1,
];
I want to write a query that transforms it into this:
test_name  VM01  VM02  VM03  VM04  VM05
test_1     0     2     1     1     1
test_2     1     1     1     1     1
test_3     1     1     1     1     2
test_4     2     1     0     1     2
test_5     2     2     2     2     1

pivot plugin
let input = datatable (name:string, test_name:string, value:int)
[
"VM01", "test_1", 0,
"VM01", "test_2", 1,
"VM01", "test_3", 1,
"VM01", "test_4", 2,
"VM01", "test_5", 2,
"VM02", "test_1", 2,
"VM02", "test_2", 1,
"VM02", "test_3", 1,
"VM02", "test_4", 1,
"VM02", "test_5", 2,
"VM03", "test_1", 1,
"VM03", "test_2", 1,
"VM03", "test_3", 1,
"VM03", "test_4", 0,
"VM03", "test_5", 2,
"VM04", "test_1", 1,
"VM04", "test_2", 1,
"VM04", "test_3", 1,
"VM04", "test_4", 1,
"VM04", "test_5", 2,
"VM05", "test_1", 1,
"VM05", "test_2", 1,
"VM05", "test_3", 2,
"VM05", "test_4", 2,
"VM05", "test_5", 1,
];
input
| evaluate pivot(name, take_any(value))
test_name  VM01  VM02  VM03  VM04  VM05
test_1     0     2     1     1     1
test_2     1     1     1     1     1
test_3     1     1     1     1     2
test_4     2     1     0     1     2
test_5     2     2     2     2     1

How to do BERT tokenization in batches and concatenate results in dictionary with tensor lists?

I am generating tokenized batches with a tokenizer and storing them in a dictionary. How can I combine the per-batch tensor dictionaries into a single dictionary at the end?
Here is my code:
sequence = ["hello world", "I am","hiiiiiiiiiii","whyyyyyyy", "hello", "hi", "what", "hiiiii", "please", "heuuuuu", "whuuuuuu", "why"]
tokenizer = tr.XLMRobertaTokenizer.from_pretrained("nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large")
orig={}
def chunks(lst, n):
"""Yield successive n-sized chunks from lst."""
for i in range(0, len(lst), n):
yield lst[i:i + n]
for c in chunks(sequence, 6):
print(c)
train_encodings = tokenizer.batch_encode_plus(c, truncation=True, padding=True, max_length=512, return_tensors="pt")
print(train_encodings)
orig.update(train_encodings)
The problem is that every batch has different dimensions for input_ids and attention_mask, because the pad token is added based on the largest sequence in that batch.
{'input_ids': tensor([[ 0, 33600, 31, 8999, 2, 1],
[ 0, 87, 444, 2, 1, 1],
[ 0, 1274, 195922, 153869, 2, 1],
[ 0, 15400, 34034, 34034, 34034, 2],
[ 0, 33600, 31, 2, 1, 1],
[ 0, 1274, 2, 1, 1, 1]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0]])}
{'input_ids': tensor([[ 0, 2367, 2, 1, 1, 1],
[ 0, 1274, 153869, 2, 1, 1],
[ 0, 22936, 2, 1, 1, 1],
[ 0, 71570, 125489, 2, 1, 1],
[ 0, 148, 1132, 125489, 34, 2],
[ 0, 15400, 2, 1, 1, 1]]), 'attention_mask': tensor([[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 0, 0, 0]])}
To avoid duplicates I followed this link: Updating a dictionary in python, but I was not able to solve this.
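A minimal sketch of one way to combine the batches (my addition, not from the original post): dict.update() simply overwrites the input_ids and attention_mask keys on every iteration, so instead collect the per-batch outputs, right-pad them to a common sequence length, and concatenate along the batch dimension. The pad value should come from tokenizer.pad_token_id (1 for XLM-R, matching the dumps above); merge_encodings is a hypothetical helper name.
import torch

def merge_encodings(batches, pad_token_id=1):
    """Combine several batch_encode_plus outputs into one dict of stacked tensors."""
    max_len = max(b["input_ids"].shape[1] for b in batches)
    merged = {"input_ids": [], "attention_mask": []}
    for b in batches:
        extra = max_len - b["input_ids"].shape[1]
        # right-pad input_ids with the pad token and attention_mask with zeros
        merged["input_ids"].append(torch.nn.functional.pad(b["input_ids"], (0, extra), value=pad_token_id))
        merged["attention_mask"].append(torch.nn.functional.pad(b["attention_mask"], (0, extra), value=0))
    return {k: torch.cat(v, dim=0) for k, v in merged.items()}

# usage: collect the per-chunk encodings in a list, then merge once at the end
# all_encodings = [tokenizer.batch_encode_plus(c, truncation=True, padding=True, max_length=512, return_tensors="pt") for c in chunks(sequence, 6)]
# orig = merge_encodings(all_encodings, pad_token_id=tokenizer.pad_token_id)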

Is there any way to invert a TensorFlow predicted variable back to its original value?

I have run a TensorFlow 2.0 algorithm where my dependent variable y is in label format:
array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0])
While predicting, the model returned scaled (probability-like) values from TensorFlow.
# compile the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy',f1_m,precision_m, recall_m])
# fit the model
model.fit(X_train, y_train, epochs=150, batch_size=32, verbose=0)
y_pred = model.predict(X_test)
y_pred[0:10]
array([[1.8975088e-02, 9.2321676e-01, 5.7808172e-02],
[2.1689970e-03, 1.1041342e-02, 9.8678964e-01],
[9.7219455e-01, 2.1523101e-02, 6.2822714e-03],
[8.9549669e-04, 9.9892455e-01, 1.7989198e-04],
[5.9214713e-06, 9.9999106e-01, 2.9540893e-06],
[1.5215098e-05, 9.9994588e-01, 3.8917195e-05],
[3.2570759e-05, 9.9996614e-01, 1.3605905e-06],
[2.5089746e-03, 9.9069571e-01, 6.7953388e-03],
[2.3909420e-02, 9.6926796e-01, 6.8226634e-03],
[2.9210409e-04, 9.9955446e-01, 1.5343193e-04]], dtype=float32)
It should come out as a numerical category of 0, 1, or 2, but it is showing some other values. How can we recover the original values?
Use tf.argmax() to get indices:
ind = tf.argmax(y_pred, -1)
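For example, a small usage sketch (my addition, assuming TF 2.x eager execution; ind and labels are illustrative names):
import tensorflow as tf

# each row of y_pred holds per-class probabilities; argmax over the last axis
# returns the index of the largest one, i.e. the original class label
ind = tf.argmax(y_pred, axis=-1)
labels = ind.numpy()  # the ten rows shown above give [1, 2, 0, 1, 1, 1, 1, 1, 1, 1]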

Why does code A run well whereas code B gets an error?

I encountered a problem while trying to understand this line of code:
scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)]
Here is code that reproduces it:
import torch  # v1.5.1
scores = [torch.rand(480, 640)]
keypoints = [torch.randint(0, 480, [978, 2])]
scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)]  # label A, OK
for s, k in zip(scores, keypoints):
    print(s[tuple(k.t())])  # label B, Error
>>> IndexError: too many indices for tensor of dimension 1
I think the above two pieces of code (A and B respectively) are almost the same, but the latter throws an error. I discussed it with my roommate, but we have no idea what's going on.
Could someone help me? Thanks in advance!
You can just change the order of A and B, because you have already changed "scores" by the time you execute B. It will then definitely work.
import torch
scores = [torch.rand(2, 3)]
keypoints = [torch.randint(0, 2, [4, 2])]
for s, k in zip(scores, keypoints):
    print(s[tuple(k.t())])  # label B, Error (now OK)
scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)] # label A, OK
print(scores)
The output is
[tensor([0.3175, 0.6934, 0.2842, 0.3175])]
[tensor([0.3175, 0.6934, 0.2842, 0.3175])]
which is exactly the same.
I believe you are redefining what scores is, and therefore the values change. However, if you change the line
scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)]
to:
labelA = [s[tuple(k.t())] for s, k in zip(scores, keypoints)]
it should be the same.
Python 3.6.8 |Anaconda custom (64-bit)| (default, Dec 29 2018, 19:04:46)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> scores = [torch.rand(480, 640)]
>>> keypoints = [torch.randint(0, 480, [978, 2])]
>>>
>>>
>>> labelA = [s[tuple(k.t())] for s, k in zip(scores, keypoints)] # label A, OK
>>> labelB = []
>>> for s, k in zip(scores, keypoints):
...     labelB.append(s[tuple(k.t())])
...
>>> labelB
[tensor([0.7239, 0.1610, 0.6094, 0.8368, 0.1523, 0.1063, 0.6487, 0.8021, 0.0650,
0.5950, 0.3393, 0.7012, 0.6868, 0.8905, 0.4100, 0.8173, 0.2271, 0.7572,
0.9446, 0.4321, 0.5575, 0.4204, 0.8950, 0.2369, 0.1743, 0.8011, 0.3178,
0.8098, 0.1318, 0.2499, 0.2979, 0.4248, 0.7016, 0.5042, 0.5122, 0.0894,
0.6981, 0.9588, 0.9439, 0.6145, 0.1386, 0.0325, 0.6099, 0.5928, 0.1368,
0.3560, 0.5814, 0.8894, 0.5715, 0.3138, 0.2881, 0.0041, 0.2198, 0.2518,
0.7856, 0.9828, 0.0374, 0.1621, 0.8600, 0.9820, 0.9233, 0.1292, 0.3045,
0.5753, 0.6036, 0.7019, 0.2679, 0.5477, 0.2112, 0.7881, 0.3429, 0.2112,
0.3397, 0.6253, 0.2414, 0.0650, 0.4213, 0.0078, 0.5286, 0.1454, 0.5127,
0.9211, 0.2263, 0.7103, 0.6328, 0.4127, 0.5828, 0.5850, 0.3128, 0.6189,
0.0670, 0.5370, 0.3466, 0.8870, 0.0481, 0.4839, 0.0507, 0.1635, 0.6603,
0.6537, 0.6107, 0.0907, 0.0952, 0.8440, 0.5431, 0.4123, 0.2333, 0.0709,
0.3717, 0.5081, 0.6989, 0.7998, 0.4175, 0.4417, 0.4483, 0.4192, 0.4145,
0.3055, 0.1455, 0.8435, 0.0244, 0.4587, 0.1832, 0.7093, 0.2874, 0.9551,
0.8489, 0.1560, 0.3212, 0.2144, 0.2841, 0.0630, 0.9778, 0.0212, 0.8164,
0.6745, 0.2651, 0.7740, 0.0466, 0.4219, 0.0916, 0.2075, 0.4771, 0.6549,
0.0735, 0.6748, 0.8197, 0.2675, 0.0482, 0.1787, 0.9859, 0.5532, 0.1647,
0.7046, 0.8058, 0.7552, 0.1674, 0.3935, 0.0430, 0.2665, 0.6067, 0.2496,
0.1711, 0.3226, 0.3504, 0.8985, 0.4125, 0.0925, 0.8231, 0.8535, 0.8478,
0.2536, 0.6850, 0.6608, 0.6128, 0.0255, 0.6569, 0.0738, 0.8647, 0.2322,
0.9898, 0.5044, 0.7879, 0.8705, 0.0973, 0.2900, 0.7294, 0.8847, 0.7572,
0.4871, 0.8809, 0.5839, 0.4855, 0.8424, 0.4151, 0.1806, 0.7665, 0.4365,
0.6867, 0.3397, 0.3951, 0.5472, 0.5545, 0.8930, 0.8970, 0.2972, 0.2406,
0.3203, 0.3957, 0.1715, 0.1609, 0.8939, 0.0374, 0.8682, 0.4520, 0.2852,
0.9323, 0.9132, 0.5007, 0.5879, 0.2878, 0.5277, 0.1378, 0.3752, 0.0059,
0.4944, 0.9876, 0.7333, 0.2803, 0.4471, 0.3596, 0.8874, 0.6594, 0.5410,
0.0277, 0.0497, 0.4526, 0.3456, 0.2889, 0.4981, 0.1150, 0.5470, 0.8293,
0.9683, 0.1102, 0.0432, 0.0206, 0.0381, 0.0426, 0.1343, 0.0595, 0.1643,
0.4930, 0.1606, 0.4333, 0.7708, 0.3279, 0.9765, 0.0809, 0.6337, 0.8330,
0.7061, 0.5643, 0.9792, 0.7728, 0.9949, 0.6205, 0.7836, 0.3634, 0.4491,
0.3759, 0.8197, 0.8402, 0.8870, 0.3647, 0.0718, 0.3802, 0.6333, 0.0564,
0.1321, 0.9894, 0.4785, 0.2591, 0.5078, 0.8955, 0.1457, 0.3913, 0.7020,
0.1685, 0.2133, 0.9725, 0.1179, 0.3458, 0.8330, 0.4897, 0.3802, 0.2729,
0.3109, 0.2683, 0.3611, 0.9983, 0.9674, 0.8986, 0.0674, 0.4810, 0.2030,
0.3708, 0.9974, 0.3354, 0.3416, 0.9956, 0.4438, 0.4523, 0.1212, 0.1906,
0.0255, 0.3857, 0.0520, 0.8090, 0.0363, 0.5155, 0.5259, 0.5144, 0.0832,
0.8416, 0.8666, 0.3573, 0.2119, 0.8180, 0.4281, 0.9585, 0.0069, 0.3688,
0.8813, 0.9660, 0.4405, 0.7213, 0.9818, 0.0342, 0.7656, 0.4573, 0.2477,
0.3124, 0.2173, 0.2723, 0.2149, 0.9469, 0.7091, 0.8051, 0.9815, 0.0416,
0.3525, 0.7070, 0.9123, 0.1957, 0.4095, 0.8105, 0.4935, 0.7852, 0.9508,
0.7854, 0.3267, 0.0761, 0.1353, 0.5961, 0.3199, 0.6699, 0.8500, 0.8540,
0.4927, 0.4319, 0.5490, 0.8794, 0.4855, 0.4408, 0.2513, 0.3591, 0.4385,
0.4902, 0.0892, 0.2645, 0.2993, 0.5301, 0.2470, 0.4180, 0.3924, 0.4231,
0.0155, 0.0239, 0.6117, 0.5051, 0.5522, 0.8229, 0.3941, 0.9290, 0.2339,
0.0214, 0.8480, 0.6003, 0.9120, 0.0334, 0.0822, 0.7023, 0.0906, 0.1074,
0.8747, 0.0965, 0.5973, 0.8562, 0.5227, 0.5342, 0.5682, 0.7457, 0.4776,
0.7218, 0.5008, 0.6378, 0.5541, 0.6418, 0.1443, 0.1458, 0.8786, 0.4283,
0.2835, 0.6132, 0.8284, 0.9009, 0.5561, 0.5137, 0.9318, 0.0267, 0.2009,
0.6635, 0.3267, 0.5239, 0.1676, 0.2327, 0.1248, 0.8706, 0.4100, 0.0529,
0.8903, 0.8717, 0.7688, 0.2585, 0.3399, 0.7276, 0.4385, 0.6888, 0.6419,
0.2661, 0.8262, 0.7331, 0.5433, 0.9399, 0.4117, 0.4899, 0.6666, 0.0826,
0.1968, 0.6295, 0.4921, 0.3039, 0.5059, 0.8406, 0.6863, 0.1999, 0.8681,
0.7958, 0.2988, 0.5588, 0.6630, 0.4348, 0.2735, 0.4761, 0.0994, 0.0803,
0.1431, 0.9707, 0.3018, 0.3598, 0.6734, 0.6126, 0.1162, 0.2229, 0.1249,
0.8871, 0.6972, 0.4470, 0.7034, 0.4932, 0.1183, 0.2348, 0.8528, 0.7901,
0.2365, 0.7217, 0.5406, 0.1416, 0.9804, 0.7091, 0.3708, 0.4327, 0.0531,
0.0861, 0.2463, 0.0912, 0.6666, 0.4180, 0.9266, 0.2631, 0.7023, 0.0398,
0.0631, 0.6601, 0.6339, 0.6206, 0.1393, 0.1075, 0.6920, 0.6626, 0.8973,
0.1839, 0.2475, 0.1521, 0.0381, 0.5855, 0.2973, 0.1848, 0.3025, 0.8042,
0.5952, 0.6057, 0.5527, 0.7248, 0.0033, 0.3411, 0.8677, 0.9543, 0.6956,
0.2909, 0.9458, 0.4611, 0.1876, 0.1012, 0.6692, 0.9081, 0.1122, 0.9392,
0.8478, 0.6917, 0.6057, 0.6920, 0.1247, 0.9858, 0.3460, 0.8301, 0.8894,
0.8431, 0.4964, 0.0289, 0.1298, 0.1918, 0.8065, 0.5335, 0.9905, 0.7099,
0.6120, 0.2878, 0.2931, 0.8318, 0.9276, 0.9328, 0.3071, 0.3785, 0.5239,
0.2914, 0.1401, 0.4540, 0.4798, 0.4797, 0.0380, 0.2156, 0.1642, 0.5507,
0.0664, 0.8262, 0.9418, 0.0536, 0.2727, 0.9576, 0.4063, 0.4981, 0.4513,
0.6310, 0.9909, 0.3513, 0.5842, 0.6780, 0.8629, 0.7755, 0.0898, 0.9114,
0.0207, 0.1783, 0.1597, 0.8240, 0.9023, 0.0074, 0.7930, 0.8564, 0.4700,
0.8839, 0.4839, 0.9852, 0.3291, 0.9607, 0.8842, 0.8725, 0.1717, 0.6004,
0.0670, 0.8676, 0.6065, 0.6930, 0.8870, 0.2545, 0.1041, 0.5940, 0.5596,
0.8877, 0.4002, 0.5495, 0.3640, 0.4373, 0.8292, 0.2008, 0.3124, 0.6308,
0.2529, 0.1802, 0.5372, 0.9018, 0.0830, 0.3240, 0.6729, 0.9612, 0.9211,
0.5371, 0.8745, 0.4602, 0.7666, 0.0433, 0.5461, 0.2115, 0.2959, 0.0351,
0.8651, 0.7865, 0.2392, 0.1375, 0.7444, 0.6702, 0.1889, 0.0102, 0.2363,
0.9406, 0.2144, 0.2174, 0.5765, 0.8715, 0.5440, 0.7480, 0.0387, 0.2754,
0.7528, 0.4358, 0.1080, 0.8259, 0.0232, 0.2766, 0.4030, 0.6221, 0.5128,
0.1035, 0.3966, 0.3859, 0.4088, 0.8898, 0.6106, 0.0241, 0.0442, 0.1680,
0.3836, 0.8129, 0.4091, 0.7610, 0.7527, 0.7474, 0.2838, 0.3861, 0.3496,
0.6985, 0.0842, 0.4432, 0.2829, 0.9554, 0.7354, 0.4623, 0.4193, 0.6449,
0.1662, 0.3948, 0.1070, 0.4275, 0.6427, 0.6758, 0.1149, 0.9313, 0.3048,
0.2237, 0.4992, 0.4688, 0.3633, 0.5445, 0.8166, 0.0392, 0.9118, 0.7784,
0.4596, 0.5234, 0.0789, 0.1671, 0.0663, 0.6008, 0.4271, 0.6451, 0.8050,
0.7993, 0.3750, 0.4266, 0.2093, 0.8230, 0.0515, 0.0785, 0.1407, 0.0502,
0.1021, 0.0343, 0.0291, 0.0833, 0.4709, 0.7199, 0.6756, 0.3500, 0.7100,
0.6334, 0.8984, 0.6105, 0.2101, 0.7228, 0.2321, 0.2186, 0.2271, 0.3792,
0.3462, 0.7752, 0.9628, 0.4922, 0.3908, 0.7770, 0.0485, 0.5218, 0.1772,
0.0367, 0.9492, 0.9352, 0.4897, 0.9790, 0.1704, 0.9757, 0.3399, 0.3952,
0.2428, 0.3014, 0.1833, 0.0175, 0.9480, 0.3613, 0.3031, 0.1372, 0.4799,
0.0364, 0.7588, 0.4608, 0.2652, 0.2054, 0.6034, 0.5563, 0.0053, 0.3368,
0.7328, 0.5666, 0.2000, 0.4721, 0.5381, 0.9557, 0.0762, 0.4067, 0.8686,
0.6698, 0.7660, 0.9169, 0.8401, 0.7283, 0.0271, 0.2323, 0.0811, 0.6277,
0.8744, 0.9459, 0.9015, 0.6752, 0.7650, 0.6628, 0.4006, 0.5877, 0.0514,
0.8610, 0.3106, 0.6999, 0.6773, 0.4738, 0.1838, 0.0942, 0.9465, 0.0689,
0.3126, 0.7237, 0.6566, 0.4259, 0.1337, 0.8046, 0.2415, 0.9873, 0.9736,
0.2487, 0.8346, 0.9100, 0.3429, 0.1321, 0.7593, 0.7780, 0.2588, 0.2804,
0.3661, 0.3630, 0.7371, 0.5247, 0.9303, 0.2413, 0.2591, 0.3403, 0.0683,
0.4428, 0.4089, 0.7018, 0.8541, 0.2662, 0.2819, 0.9080, 0.5924, 0.8527,
0.8277, 0.0945, 0.3408, 0.0259, 0.8425, 0.3551, 0.9404, 0.6876, 0.3102,
0.8169, 0.3289, 0.7174, 0.2404, 0.7087, 0.2562, 0.2022, 0.1705, 0.6359,
0.7204, 0.0698, 0.1980, 0.7807, 0.0989, 0.0387, 0.5021, 0.9782, 0.0989,
0.4415, 0.9582, 0.8193, 0.7433, 0.3606, 0.8234, 0.9470, 0.6152, 0.0739,
0.0091, 0.3852, 0.6140, 0.8024, 0.3931, 0.6374, 0.7420, 0.8262, 0.5612,
0.1429, 0.1118, 0.5879, 0.2417, 0.8952, 0.2698, 0.8374, 0.9325, 0.9897,
0.7748, 0.0254, 0.8351, 0.5943, 0.9824, 0.2132, 0.6469, 0.8862, 0.9013,
0.5097, 0.5300, 0.7497, 0.8371, 0.1307, 0.2927, 0.1760, 0.6744, 0.7508,
0.7924, 0.9564, 0.8733, 0.8700, 0.9728, 0.1362, 0.3822, 0.7738, 0.7682,
0.3459, 0.5163, 0.6223, 0.9129, 0.3179, 0.7660, 0.0849, 0.7594, 0.9258,
0.3004, 0.5658, 0.1079, 0.0985, 0.3576, 0.8792, 0.2296, 0.2061, 0.9494,
0.9141, 0.1866, 0.4255, 0.6605, 0.5622, 0.4929])]
>>> labelA
[tensor([0.7239, 0.1610, 0.6094, 0.8368, 0.1523, 0.1063, 0.6487, 0.8021, 0.0650,
        0.5950, 0.3393, 0.7012, 0.6868, 0.8905, 0.4100, 0.8173, 0.2271, 0.7572,
        ...
        0.9141, 0.1866, 0.4255, 0.6605, 0.5622, 0.4929])]
>>> labelA[0] == labelB[0]
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        ...
        1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       dtype=torch.uint8)
I know this is a lot, so let me know if you have any questions.
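To spell out why label B fails in the original ordering (my own summary, not from the original answers): after line A runs, scores no longer holds a 480x640 tensor but a 1-D tensor of 978 gathered values, and indexing a 1-D tensor with a tuple of two index tensors is exactly what raises "too many indices for tensor of dimension 1". A minimal sketch:
import torch

scores = [torch.rand(480, 640)]
keypoints = [torch.randint(0, 480, [978, 2])]

scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)]  # label A rebinds `scores`
print(scores[0].shape)  # torch.Size([978]), i.e. now 1-D
# indexing that 1-D result with the 2-row index tuple again, as label B does, raises:
# IndexError: too many indices for tensor of dimension 1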

Why do other values change in an ndarray when I try to change a specific cell value?

For example, I have a 3D ndarray of shape (10, 10, 10), and whenever I try to change all the cells in the section [5, :, 9] to a specific single value, I end up changing values in the section [4, :, 9] too, which makes no sense to me. I do not get this behavior when I convert to a list of lists.
I use a simple for loop:
for i in range(0, 10):
    matrix[5, i, 9] = matrix[5, 9, 9]
Is there any way to avoid this? I do not get this behavior when using a list of lists, but I don't want to convert back and forth between the two as it takes too much processing time.
Doesn't happen that way for me:
In [232]: arr = np.ones((10,10,10),int)
In [233]: arr[5,9,9] = 10
In [234]: for i in range(10): arr[5,i,9]=arr[5,9,9]
In [235]: arr[5,:,9]
Out[235]: array([10, 10, 10, 10, 10, 10, 10, 10, 10, 10])
In [236]: arr[4,:,9]
Out[236]: array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
or assigning a whole "column" at once:
In [237]: arr[5,:,9] = np.arange(10)
In [239]: arr[5]
Out[239]:
array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 2],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 3],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 4],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 5],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 6],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 7],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 8],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 9]])
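As a side note (my addition, not part of the original answer), the loop is not needed at all: NumPy broadcasting assigns the scalar to the whole slice in one statement, and neighbouring slices stay untouched.
import numpy as np

matrix = np.zeros((10, 10, 10), int)
matrix[5, 9, 9] = 7
matrix[5, :, 9] = matrix[5, 9, 9]  # broadcast the scalar over the whole "column"
print(matrix[5, :, 9])             # [7 7 7 7 7 7 7 7 7 7]
print(matrix[4, :, 9])             # [0 0 0 0 0 0 0 0 0 0], unchanged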

Conversion of numpy 2d array to ENVI binary file through gdal

I have SAR CEOS format files, which consist of a data file, a leader file, a null volume directory file, and a volume directory file.
I am reading the data file using GDAL's ReadAsArray, then doing operations on this 2D array, and now I want to save the 2D array as an ENVI binary file.
Kindly guide me on how to do this in Python 3.5.
You can find help at this tutorial website: https://pcjericks.github.io/py-gdalogr-cookbook/, such as this example:
import gdal, ogr, os, osr
import numpy as np

def array2raster(newRasterfn, rasterOrigin, pixelWidth, pixelHeight, array):
    cols = array.shape[1]
    rows = array.shape[0]
    originX = rasterOrigin[0]
    originY = rasterOrigin[1]
    driver = gdal.GetDriverByName('ENVI')
    outRaster = driver.Create(newRasterfn, cols, rows, 1, gdal.GDT_Byte)
    outRaster.SetGeoTransform((originX, pixelWidth, 0, originY, 0, pixelHeight))
    outband = outRaster.GetRasterBand(1)
    outband.WriteArray(array)
    outRasterSRS = osr.SpatialReference()
    outRasterSRS.ImportFromEPSG(4326)
    outRaster.SetProjection(outRasterSRS.ExportToWkt())
    outband.FlushCache()

def main(newRasterfn, rasterOrigin, pixelWidth, pixelHeight, array):
    reversed_arr = array[::-1]  # reverse array so the tif looks like the array
    array2raster(newRasterfn, rasterOrigin, pixelWidth, pixelHeight, reversed_arr)  # convert array to raster

if __name__ == "__main__":
    rasterOrigin = (-123.25745, 45.43013)
    pixelWidth = 10
    pixelHeight = 10
    newRasterfn = 'test.tif'
    array = np.array([[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                      [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                      [ 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1],
                      [ 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1],
                      [ 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1],
                      [ 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1],
                      [ 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1],
                      [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                      [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                      [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])
    main(newRasterfn, rasterOrigin, pixelWidth, pixelHeight, array)
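For the SAR use case in the question, here is a minimal adaptation sketch (my assumptions, not part of the original answer): it uses the modern from osgeo import style, keeps the ENVI driver but writes GDT_Float32 so SAR values are not clipped to bytes, and uses hypothetical names save_envi and sar_band. Note that the cookbook example above writes 'test.tif' even though the ENVI driver actually produces a flat binary file plus a .hdr header.
from osgeo import gdal, osr
import numpy as np

def save_envi(filename, array, geotransform=None, epsg=4326):
    """Write a 2-D NumPy array to an ENVI binary file (data file + .hdr)."""
    rows, cols = array.shape
    driver = gdal.GetDriverByName('ENVI')
    out = driver.Create(filename, cols, rows, 1, gdal.GDT_Float32)
    if geotransform is not None:
        out.SetGeoTransform(geotransform)
        srs = osr.SpatialReference()
        srs.ImportFromEPSG(epsg)
        out.SetProjection(srs.ExportToWkt())
    out.GetRasterBand(1).WriteArray(array)
    out.FlushCache()

# usage with a placeholder array standing in for the processed CEOS band
sar_band = np.random.rand(100, 120).astype(np.float32)
save_envi('sar_output.dat', sar_band)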
