Convert an array of arrays to an array of JSON objects - python-3.x

I have an array of arrays such as this:
pl = [
["name1", "address1"],
["name2", "address2"],
["name3", "address3"]
....
]
but I need to convert it into an array of objects:
pl = [
{"name1": "address1"},
{"name2": "address2"},
{"name3": "address3"}
....
]
I'm struggling, with no luck.

Docs: https://docs.python.org/3/library/json.html
Example from docs:
import json
json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])
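Applied to the question above (a sketch, assuming each inner list is a [name, address] pair), a comprehension can turn each pair into a single-key dict before dumping:

```python
import json

# Each inner list is assumed to be a [name, address] pair.
pl = [
    ["name1", "address1"],
    ["name2", "address2"],
    ["name3", "address3"],
]

# Build one single-key dict per pair, then serialize the whole list.
objects = [{name: address} for name, address in pl]
print(json.dumps(objects, indent=2))
```

Note that this produces a list of one-key dicts, matching the desired output; if a single mapping of all names to addresses is wanted instead, `dict(pl)` would do that.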

Convert a list of 3D coordinates in string format to a list of floats

I have a list of 3D coordinates stored as a string, list_X:
list_X =' [43.807 7.064 77.155], [35.099 3.179 82.838], [53.176052 -5.4618497 83.53082 ], [39.75858 1.5679997 74.76174 ], [42.055664 2.459083 80.89183 ]'
I want to convert it into lists of floats, like below:
list_X =[43.807 7.064 77.155], [35.099 3.179 82.838], [53.176052 -5.4618497 83.53082 ], [39.75858 1.5679997 74.76174 ], [42.055664 2.459083 80.89183 ]
I was trying the following, which doesn't work:
list1=[float(x) for x in list_X]
You can clean up the string to fit in the format of a list (i.e., add surrounding square brackets ([]) to contain all of the 3D coordinates, and separate the values by commas), and then use the json.loads method.
import json
list_X ='[[43.807, 7.064, 77.155], [35.099, 3.179, 82.838], [53.176052, -5.4618497, 83.53082], [39.75858, 1.5679997, 74.76174], [42.055664, 2.459083, 80.89183]]'
print(json.loads(list_X))
# Output
[[43.807, 7.064, 77.155], [35.099, 3.179, 82.838], [53.176052, -5.4618497, 83.53082], [39.75858, 1.5679997, 74.76174], [42.055664, 2.459083, 80.89183]]

Changing the values of matrix is changing the weights of the model

I am working with neural network weights and I am seeing a weird thing. I have written this code:
x = list(mnist_classifier.named_parameters())
weight = x[0][1].detach().cpu().numpy().squeeze()
print(weight)
So I get the following values:
[[[-0.2435195 0.05255396 -0.32765684]
[ 0.06372751 0.03564635 -0.31417745]
[ 0.14694464 -0.03277654 -0.10328879]]
[[-0.13716389 0.0128522 0.24107361]
[ 0.45231998 0.15497956 0.11112727]
[ 0.18206735 -0.22820294 -0.29146808]]
[[ 1.1747813 0.9206593 0.49848938]
[ 1.1558323 1.0859997 0.7743778 ]
[ 1.0287125 0.52122927 0.4096022 ]]
[[-0.2980809 -0.04358199 -0.26461622]
[-0.1165191 -0.2267315 0.37054354]
[ 0.4429275 0.44967037 0.06866694]]
[[ 0.39549246 0.10898255 0.32859102]
[-0.07753246 0.1628792 0.03021396]
[ 0.323148 0.5103844 0.16282919]]
....
Now, when I change the value of the first matrix weight[0] to 0.1, it changes the values of the original weights:
x = list(mnist_classifier.named_parameters())
weight = x[0][1].detach().cpu().numpy().squeeze()
weight[0] = weight[0] * 0 + 0.1
print(list(mnist_classifier.named_parameters()))
[('conv1.weight', Parameter containing:
tensor([[[[ 0.1000, 0.1000, 0.1000],
[ 0.1000, 0.1000, 0.1000],
[ 0.1000, 0.1000, 0.1000]]],
[[[-0.1372, 0.0129, 0.2411],
[ 0.4523, 0.1550, 0.1111],
[ 0.1821, -0.2282, -0.2915]]],
[[[ 1.1748, 0.9207, 0.4985],
[ 1.1558, 1.0860, 0.7744],
[ 1.0287, 0.5212, 0.4096]]],
...
What is going on here? How is weight[0] connected to the neural network?
I found the answer. Apparently, .detach().cpu().numpy() doesn't copy the underlying data: for a tensor already on the CPU, the NumPy array is a view of the tensor's memory, so writing to it writes straight into the model's weights. Making an explicit copy with .copy() (or cloning the tensor first) fixed it.
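The same view-vs-copy behaviour can be shown with plain NumPy (a minimal sketch, no PyTorch needed, since the principle is identical: a view shares memory, a copy does not):

```python
import numpy as np

original = np.arange(6.0).reshape(2, 3)

view = original[0]          # a view: shares memory with `original`
view[:] = 0.1               # ...so this mutates `original` too

copy = original[1].copy()   # an independent copy
copy[:] = 99.0              # ...so this leaves `original` untouched

print(original)
```

Here `original[0]` ends up all 0.1 while `original[1]` keeps its initial values, which is exactly the trap in the weight-editing code above.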

Convert a deeply nested list to csv in Python

I wanted to write a deeply nested list (lists within lists within lists) to csv, but it always collapses the list and prints it with ..., so I'm unable to retrieve the hidden values.
The list stores frames of videos and is 5 levels deep (length of each level): videos (number of videos) > frames (8) > width (200) > height (200) > pixels with 3 channels (3).
I tried converting the list to a DataFrame before writing it to csv, but that didn't solve the problem.
"[array([[[0.23137255, 0.26666668, 0.27058825],
[0.23921569, 0.27450982, 0.2784314 ],
[0.23529412, 0.27058825, 0.27450982],
...,
[0.25882354, 0.29411766, 0.2901961 ],
[0.25490198, 0.2901961 , 0.28627452],
[0.25490198, 0.2901961 , 0.28627452]],
[[0.20392157, 0.23921569, 0.24313726],
[0.21568628, 0.2509804 , 0.25490198],
[0.21568628, 0.2509804 , 0.25490198],
...,
[0.26666668, 0.3019608 , 0.29803923],
[0.26666668, 0.3019608 , 0.29803923],
[0.2627451 , 0.29803923, 0.29411766]],
[[0.1882353 , 0.22352941, 0.22745098],
[0.2 , 0.23529412, 0.23921569],
[0.20392157, 0.23921569, 0.24313726],
...,
[0.27450982, 0.30980393, 0.30588236],
[0.27058825, 0.30588236, 0.3019608 ],
[0.27058825, 0.30588236, 0.3019608 ]],
...,
I'd try one of the following:
Dump the whole object into JSON. Note that NumPy arrays aren't JSON-serializable, so convert them to plain lists with .tolist() first, and use json.dump rather than file.write (which takes no indent argument):
import json
with open('my_saved_file.json', 'w+') as out_file:
    json.dump([a.tolist() for a in list_of_lists_of_lists], out_file, indent=2)
What I'd try instead is storing all of your frames as image files and referencing them in an index (which could be a csv):
import numpy as np
from PIL import Image

with open('reference.csv', 'w+') as out_csv:
    out_csv.write("video,frame_set,frame1,frame2,frame3,frame4,frame5,frame6,frame7,frame8\n")
    for video_no, video in enumerate(list_of_lists_of_lists):
        for frame_set_no, frames in enumerate(video):
            row = [str(video_no), str(frame_set_no)]
            for frame_no, frame in enumerate(frames):
                # Pixel values are floats in [0, 1]; PIL wants uint8 in [0, 255].
                im = Image.fromarray((np.asarray(frame) * 255).astype(np.uint8))
                frame_name = f"{video_no}-{frame_set_no}-{frame_no}.jpeg"
                row.append(frame_name)
                im.save(frame_name)
            out_csv.write(",".join(row) + "\n")

genfromtxt return numpy array not separated by comma

I have a *.csv file that store two columns of float data.
I am using this function to import it but it generates the data not separated with comma.
data=np.genfromtxt("data.csv", delimiter=',', dtype=float)
output:
[[ 403.14915 150.560364 ]
[ 403.7822265 135.13165 ]
[ 404.5017 163.4669 ]
[ 434.02465 168.023224 ]
[ 373.7655 177.904114 ]
[ 450.608429 208.4187315]
[ 454.39475 239.9666595]
[ 453.8055 248.4082 ]
[ 457.5625305 247.70315 ]
[ 451.729431 258.19335 ]
[ 366.74405 225.169922 ]
[ 377.0055235 258.110077 ]
[ 380.3581 261.760071 ]
[ 383.98615 262.33805 ]
[ 388.2516785 272.715332 ]
[ 408.378174 200.9713135]]
How to format it to get a numpy array like
[[ 403.14915, 150.560364 ]
[ 403.7822265, 135.13165 ],....]
?
NumPy doesn't display commas when you print arrays (print uses the str representation). If you really want to see them, you can use
print(repr(data))
The repr function forces the literal representation you would use to type the data in your code, rather than the str form meant for "nice" printing.
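If you want the commas in ordinary printed output, np.array2string also accepts a separator argument (a sketch using two of the rows above):

```python
import numpy as np

data = np.array([[403.14915, 150.560364],
                 [403.7822265, 135.13165]])

# separator controls what is printed between elements
print(np.array2string(data, separator=', '))
```

Either way the underlying array is unchanged; commas versus spaces is purely a display choice.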

How to get the index count from a DatetimeIndex by given the date?

I have a DataFrame and its index is the type of DatetimeIndex and it looks as follow:
DatetimeIndex(
['2003-10-17', '2003-10-21', '2003-10-22', '2003-10-23',
'2003-10-24', '2003-10-27', '2003-10-28', '2003-10-29',
'2003-10-30', '2003-10-31',
...
'2017-08-04', '2017-08-07', '2017-08-08', '2017-08-09',
'2017-08-10', '2017-08-11', '2017-08-14', '2017-08-15',
'2017-08-16', '2017-08-17'
],
dtype='datetime64[ns, UTC]', name=u'DATE', length=3482, freq=None
)
I wonder how to get the position of index-count of 2017-08-04 for example.
To get just the integer position of the key '2017-08-04', use the DatetimeIndex.get_loc function (note that length and freq appear in the repr but are not constructor arguments):
import pandas as pd

dt_idx = pd.DatetimeIndex(
    [ '2003-10-17', '2003-10-21', '2003-10-22', '2003-10-23', '2003-10-24', '2003-10-27',
      '2003-10-28', '2003-10-29', '2003-10-30', '2003-10-31', '2017-08-04', '2017-08-07',
      '2017-08-08', '2017-08-09', '2017-08-10', '2017-08-11', '2017-08-14', '2017-08-15',
      '2017-08-16', '2017-08-17'
    ], dtype='datetime64[ns, UTC]', name='DATE')
print(dt_idx.get_loc('2017-08-04'))
# 10
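If you need positions for several dates at once, DatetimeIndex.get_indexer is the batch counterpart of get_loc (a sketch with a shortened, tz-naive index):

```python
import pandas as pd

dt_idx = pd.DatetimeIndex(['2003-10-17', '2003-10-21', '2017-08-04', '2017-08-07'],
                          name='DATE')

# Single label -> integer position
print(dt_idx.get_loc('2017-08-04'))

# Several labels -> array of positions (-1 marks labels not in the index)
print(dt_idx.get_indexer(pd.to_datetime(['2017-08-07', '2003-10-21'])))
```

get_indexer never raises for missing labels; it returns -1 in those slots, which is handy for validating a batch of dates before using the positions.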
