I am running the code in this notebook:
https://colab.research.google.com/github/zaidalyafeai/Notebooks/blob/master/Deep_GCN_Spam.ipynb#scrollTo=UjoTbUQVnCz8
I get an error when I change the dataset to my own. I suspected a bug in my own code, so I cleared all the code that generates the datasets, saved both datasets to files, and reloaded them, but I really cannot see any difference between the two. The shapes and dtypes of both datasets are shown below. I can provide any further information that is needed. Can anyone help me fix this?
This is my dataset:
data = torch.load("dataset.pt")
data
>>>Data(edge_attr=[3585, 1], edge_index=[2, 3585], x=[352, 1], y=[352])
data.x.dtype, data.y.dtype, data.edge_attr.dtype, data.edge_index.dtype
>>>(torch.float32, torch.int64, torch.float32, torch.int64)
data.edge_index.T.numpy().shape
>>>(3585, 2)
np.unique(data.edge_index.T.numpy(), axis=0).shape
>>>(3585, 2)
data.edge_index.unique().shape
>>>torch.Size([352])
data.edge_index
>>>tensor([[ 13, 13, 13, ..., 103, 103, 103],
[ 1, 2, 3, ..., 6, 9, 10]])
This is the dataset used in the notebook:
data2 = torch.load("spam.pt")
data2
>>>Data(edge_attr=[50344, 1], edge_index=[2, 50344], x=[1000, 1], y=[1000])
data2.x.dtype, data2.y.dtype, data2.edge_attr.dtype, data2.edge_index.dtype
>>>(torch.float32, torch.int64, torch.float32, torch.int64)
data2.edge_index
>>>tensor([[ 0, 1, 1, ..., 999, 999, 999],
[455, 173, 681, ..., 377, 934, 953]])
Python version: 3.8
PyTorch geometric version: 1.6.2
CUDA version: 10.2
System: Windows 10
In my case, I needed to make sure that my edge_attr values were in the range [0, 1].
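For example, a minimal sketch (my own illustration, not from the notebook) that min-max rescales the edge attributes into [0, 1]:
import torch

data = torch.load("dataset.pt")

# Rescale edge_attr into [0, 1]; assumes it is a float tensor
lo, hi = data.edge_attr.min(), data.edge_attr.max()
if hi > lo:  # guard against a zero range
    data.edge_attr = (data.edge_attr - lo) / (hi - lo)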
When I print a numpy array, I get a truncated representation, but I want the full array.
>>> numpy.arange(10000)
array([ 0, 1, 2, ..., 9997, 9998, 9999])
>>> numpy.arange(10000).reshape(250,40)
array([[ 0, 1, 2, ..., 37, 38, 39],
[ 40, 41, 42, ..., 77, 78, 79],
[ 80, 81, 82, ..., 117, 118, 119],
...,
[9880, 9881, 9882, ..., 9917, 9918, 9919],
[9920, 9921, 9922, ..., 9957, 9958, 9959],
[9960, 9961, 9962, ..., 9997, 9998, 9999]])
Use numpy.set_printoptions:
import sys
import numpy
numpy.set_printoptions(threshold=sys.maxsize)
import numpy as np
np.set_printoptions(threshold=np.inf)
I suggest using np.inf instead of np.nan, which is suggested by others. They both work for this purpose, but by setting the threshold to "infinity" it is obvious to everybody reading your code what you mean. Having a threshold of "not a number" seems a little vague to me.
Temporary setting
You can use the printoptions context manager:
with numpy.printoptions(threshold=numpy.inf):
    print(arr)
(of course, replace numpy by np if that's how you imported numpy)
The use of a context manager (the with-block) ensures that after the context manager is finished, the print options will revert to whatever they were before the block started. It ensures the setting is temporary, and only applied to code within the block.
See numpy.printoptions documentation for details on the context manager and what other arguments it supports. It was introduced in NumPy 1.15 (released 2018-07-23).
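As a quick illustration (with a hypothetical arr; linewidth is another supported argument):
import numpy as np

arr = np.arange(10000)

# Temporarily show every element, with wider lines
with np.printoptions(threshold=np.inf, linewidth=120):
    print(arr)

print(arr)  # truncated again: the options revert after the block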
The previous answers are the correct ones, but as a weaker alternative you can convert the array to a list:
>>> numpy.arange(100).reshape(25,4).tolist()
[[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15], [16, 17, 18, 19], [20, 21,
22, 23], [24, 25, 26, 27], [28, 29, 30, 31], [32, 33, 34, 35], [36, 37, 38, 39], [40, 41,
42, 43], [44, 45, 46, 47], [48, 49, 50, 51], [52, 53, 54, 55], [56, 57, 58, 59], [60, 61,
62, 63], [64, 65, 66, 67], [68, 69, 70, 71], [72, 73, 74, 75], [76, 77, 78, 79], [80, 81,
82, 83], [84, 85, 86, 87], [88, 89, 90, 91], [92, 93, 94, 95], [96, 97, 98, 99]]
Here is a one-off way to do this, which is useful if you don't want to change your default settings:
def fullprint(*args, **kwargs):
    from pprint import pprint
    import numpy
    opt = numpy.get_printoptions()
    numpy.set_printoptions(threshold=numpy.inf)
    pprint(*args, **kwargs)
    numpy.set_printoptions(**opt)
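Usage is then a one-liner (with a hypothetical array):
import numpy

a = numpy.arange(10000)
fullprint(a)  # prints the full array once
print(a)      # the global print options are left unchanged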
This sounds like you're using numpy.
If that's the case, you can add:
import numpy as np
np.set_printoptions(threshold=np.inf)  # older answers use np.nan, which newer NumPy rejects
That will disable the corner printing. For more information, see this NumPy Tutorial.
Using a context manager as Paul Price suggested:
import numpy as np

class fullprint:
    'context manager for printing full numpy arrays'

    def __init__(self, **kwargs):
        kwargs.setdefault('threshold', np.inf)
        self.opt = kwargs

    def __enter__(self):
        self._opt = np.get_printoptions()
        np.set_printoptions(**self.opt)

    def __exit__(self, type, value, traceback):
        np.set_printoptions(**self._opt)

if __name__ == '__main__':
    a = np.arange(1001)

    with fullprint():
        print(a)

    print(a)

    with fullprint(threshold=None, edgeitems=10):
        print(a)
numpy.savetxt
numpy.savetxt(sys.stdout, numpy.arange(10000))
or if you need a string:
import io
sio = io.StringIO()
numpy.savetxt(sio, numpy.arange(10000))
s = sio.getvalue()
print(s)
The default output format is:
0.000000000000000000e+00
1.000000000000000000e+00
2.000000000000000000e+00
3.000000000000000000e+00
...
and it can be configured with further arguments.
Note in particular how this also does not show the square brackets, and allows for a lot of customization, as mentioned at: How to print a Numpy array without brackets?
On Python 2 (where this was originally tested, with numpy 1.11.1), use the StringIO module instead of io.
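For example, the fmt argument (a standard numpy.savetxt parameter) changes the per-element format:
import sys
import numpy

# "%d" prints plain integers instead of scientific notation
numpy.savetxt(sys.stdout, numpy.arange(10), fmt="%d")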
This is a slight modification of neok's answer (I removed the option to pass additional arguments to set_printoptions). It shows how you can use contextlib.contextmanager to create such a context manager with fewer lines of code:
import numpy as np
from contextlib import contextmanager

@contextmanager
def show_complete_array():
    oldoptions = np.get_printoptions()
    np.set_printoptions(threshold=np.inf)
    try:
        yield
    finally:
        np.set_printoptions(**oldoptions)
In your code it can be used like this:
a = np.arange(1001)

print(a)  # shows the truncated array

with show_complete_array():
    print(a)  # shows the complete array

print(a)  # shows the truncated array (again)
with np.printoptions(edgeitems=50):
    print(x)
Change 50 to the number of items you want to see at each edge of the array.
A slight modification: (since you are going to print a huge list)
import numpy as np
np.set_printoptions(threshold=np.inf, linewidth=200)
x = np.arange(1000)
print(x)
This will increase the number of characters per line (the default linewidth is 75). Use any linewidth value that suits your coding environment. Putting more characters on each line saves you from having to scroll through a huge number of output lines.
Complementary to the answers above about the maximum number of elements printed (fixed with numpy.set_printoptions(threshold=sys.maxsize)), there is also a limit on the number of characters displayed per line. In some environments, such as calling Python from bash (rather than an interactive session), this can be fixed by setting the linewidth parameter as follows.
import numpy as np
np.set_printoptions(linewidth=2000) # default = 75
Mat = np.arange(20000,20150).reshape(2,75) # 150 elements (75 columns)
print(Mat)
In this case, it is your terminal window that limits the number of characters per line and wraps the output.
For those using Sublime Text and wanting to see the results within the output window, add the build option "word_wrap": false to the sublime-build file.
To turn it off and return to the normal mode:
np.set_printoptions(threshold=False)
This works since NumPy version 1.16; for more details, see GitHub ticket 12251.
from sys import maxsize
from numpy import set_printoptions
set_printoptions(threshold=maxsize)
Suppose you have a numpy array
arr = numpy.arange(10000).reshape(250,40)
If you want to print the full array in a one-off way (without toggling np.set_printoptions), but want something simpler (less code) than the context manager, just do
for row in arr:
    print(row)
If you're using a Jupyter notebook, I found this to be the simplest solution for one-off cases. Basically, convert the numpy array to a list, then to a string, and then print it. This has the benefit of keeping the comma separators in the array, whereas using numpy.printoptions(threshold=np.inf) does not:
import numpy as np
print(str(np.arange(10000).reshape(250,40).tolist()))
You won't always want all items printed, especially for large arrays.
A simple way to show more items:
In [349]: ar
Out[349]: array([1, 1, 1, ..., 0, 0, 0])
In [350]: ar[:100]
Out[350]:
array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1,
1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
This works fine as long as the sliced array has fewer than 1000 elements, the default threshold.
If you are using Jupyter, try the variable inspector extension. You can click each variable to see the entire array.
This is the hackiest solution, but it even prints the array nicely, the way numpy does:
import numpy as np
a = np.arange(10000).reshape(250,40)
b = [str(row) for row in a.tolist()]
print('\n'.join(b))
You can use the array2string function (see its documentation):
import sys
import numpy
a = numpy.arange(10000).reshape(250,40)
print(numpy.array2string(a, threshold=sys.maxsize, max_line_width=sys.maxsize))
# [Big output]
If you have pandas available,
import numpy
import pandas
a = numpy.arange(10000).reshape(250,40)
print(pandas.DataFrame(a).to_string(header=False, index=False))
This avoids the side effect of having to reset numpy.set_printoptions(threshold=sys.maxsize) afterwards, and you don't get the numpy.array wrapper and brackets. I find this convenient for dumping a wide array into a log file.
If an array is too large to be printed, NumPy automatically skips the central part of the array and only prints the corners:
To disable this behaviour and force NumPy to print the entire array, you can change the printing options using set_printoptions.
>>> np.set_printoptions(threshold=sys.maxsize)
or
>>> np.set_printoptions(edgeitems=3,infstr='inf',
... linewidth=75, nanstr='nan', precision=8,
... suppress=False, threshold=1000, formatter=None)
You can also refer to the numpy documentation for more help.
I have a dataframe that has a column that looks like this:
0 [ [ 1051, 0, 10181, 62, 17, ...
1 [ [ 882, 0, 9909, 59, 23, 9...
2 [ [ 1061, 0, 10192, 60, 17, ...
3 [ [ 122, 4, 501, 2, 8, 3, ...
4 [ [ 397, 1, 859, 9, 8, 5, ...
5 [ [ 1213, 1, 10791, 23, 17, ...
6 [ [ 1395, 3, 11147, 0, 17, ...
7 [ [ 757, 3, 1900, 34, 23, 8...
8 [ [ 129, 0, 507, 10, 8, 3, ...
9 [ [ 1438, 0, 11177, 26, 2, ...
10 [ [ 1272, 1, 10901, 7, 17, ...
An example row with fewer features would be something like this:
[[1,2,3,4],[2,3,4,5],[3,4,5,6]]
The datatype is a string, so json.loads has to be used to convert each entry to an array of shape [N_TIMESTAMPS, N_FEATURES], where each feature is a numerical value.
In order to use this data as input for a neural network I have to convert this column into a numpy array of shape: [N_SAMPLES, N_TIMESTAMPS, N_FEATURES]. So, like this:
[[[1,2,3,4],[2,3,4,5],[3,4,5,6]],[[1,2,3,4],[2,3,4,5],[3,4,5,6]]]
This is how I am doing it now:
train_x = np.array(
    df.time_stream.apply(json.loads).apply(np.array).apply(
        lambda x: x.reshape(N_TIMESTAMPS, N_FEATURES).tolist()).values.tolist()
)
For a dataset with 268,521 rows, this computation takes 12.5 minutes. Not ideal, but it was working; however, it is not scalable. For the new dataset with 756,961 rows (N_TIMESTAMPS = 100; N_FEATURES = 54) it never finishes, because it uses up all of the RAM and the computer crashes. I'm looking for recommendations on how to make this faster and perhaps more memory-efficient; one issue is that a lot of swap is being used.
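One direction worth trying (a sketch, not a tested solution; it assumes every time_stream entry parses to exactly N_TIMESTAMPS × N_FEATURES values) is to preallocate the output array and parse each row directly into it, so no intermediate list-of-lists is ever built:
import json
import numpy as np

N_TIMESTAMPS, N_FEATURES = 100, 54

# Allocate the final [N_SAMPLES, N_TIMESTAMPS, N_FEATURES] array once;
# float32 halves the memory footprint compared to the default float64.
train_x = np.empty((len(df), N_TIMESTAMPS, N_FEATURES), dtype=np.float32)
for i, s in enumerate(df.time_stream):
    row = np.asarray(json.loads(s), dtype=np.float32)
    train_x[i] = row.reshape(N_TIMESTAMPS, N_FEATURES)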