AssertionError when using self-defined nested list in Pyspc - python-3.x

I installed pyspc and ran it in Jupyter Notebook successfully using the original samples.
But when I tried introducing a self-defined nested list, an error message showed up.
pyspc library: https://github.com/carlosqsilva/pyspc
from pyspc import *
import numpy
abc=[[2,3,4],[4,5.6],[1,4,5],[3,4,4],[4,5,6]]
a=spc(abc)+xbar_rbar()+rules()+rbar()
print(a)
The error message is an AssertionError.
Thanks in advance for any advice on what went wrong and how to fix it.

Check the data: you have accidentally used a . instead of a , in [4,5.6], the second element of the list.
Here is the corrected data:
abc=[[2,3,4],[4,5,6],[1,4,5],[3,4,4],[4,5,6]]
Hope this helps.
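For reference, here is the question's snippet with only that value corrected (a sketch; it assumes the rest of the original code is unchanged):
from pyspc import *
# same code as in the question, with [4,5.6] corrected to [4,5,6]
abc = [[2, 3, 4], [4, 5, 6], [1, 4, 5], [3, 4, 4], [4, 5, 6]]
a = spc(abc) + xbar_rbar() + rules() + rbar()
print(a)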

Related

from sparkdl import DeepImageFeaturizer

I need to use Spark for transfer learning to train images. The error is:
"cannot import name 'resnet50' from 'keras.applications' (/usr/local/lib/python3.7/dist-packages/keras/applications/__init__.py)"
I have been trying to solve this for a week. It comes from sparkdl: if you change this file (sparkdl/transformers/keras_applications.py) to import from tensorflow.keras.applications, the import error goes away, but this time you will see another error like
AttributeError: module 'tensorflow' has no attribute 'Session'
I tried different IDEs (PyCharm, VS Code) but I got the same errors. There are different explanations on Stack Overflow, but I'm totally confused now.
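A note on that second error: tf.Session was removed from the top-level API in TensorFlow 2.x. A minimal sketch of the usual compatibility-mode workaround is below; it is not specific to sparkdl and may not resolve sparkdl's other incompatibilities with TF 2.x:
# Sketch: run TF 1.x-style session code through the compat API under TF 2.x
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # restores TF 1.x graph/session semantics

with tf.Session() as sess:
    msg = tf.constant("compat session works")
    print(sess.run(msg))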

Numpy error on .arange command

I am trying to use sklearn NMF on a binary file (.bin) imported via numpy and converted to uint8. I import the file no problem, but it's coming in as a 1D array, and when I try to arrange it into a 2D array (which sklearn NMF requires) it errors. I have imported numpy and sklearn.
Import data:
m1 = np.fromfile('file', dtype='uint8')
Code it errors on (I added the - symbol following advice from the docs, it also errors without the - symbol):
m1.arange(962240400).reshape((31020,-31020))
The error:
AttributeError: 'numpy.ndarray' object has no attribute 'arange'
I have tried looking at the official docs and stack overflow, but nothing seems to be working. If anyone has any ideas as to why my code is wrong that would be great.
Use np.arange(962240400).reshape((31020,-31020)); arange is a function of NumPy, not a method of the array m1.
Use arange in place of arrange; there should be only one 'r'.
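Also note that np.arange builds a new array of consecutive integers, which is probably not what is wanted here; to turn the data loaded from the file into a 2D array you would reshape m1 itself, and a negative dimension must be -1 so NumPy can infer it. A sketch, assuming the file really contains 962240400 bytes (31020 * 31020):
import numpy as np

m1 = np.fromfile('file', dtype='uint8')  # 'file' is the placeholder path from the question
m2 = m1.reshape(31020, -1)               # -1 lets NumPy infer the second dimension
print(m2.shape)                          # (31020, 31020) if the size assumption holds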

FailedPreconditionError: Table already initialized

I am reading data from TFRecords with the Dataset API. I am converting string data to dummy (indicator) data with the following code.
SFR1 = tf.feature_column.indicator_column(
    tf.feature_column.categorical_column_with_vocabulary_list(
        "SFR1 ", vocabulary_list=("1", "2")))
But when I run my code, TensorFlow throws the following error.
tensorflow.python.framework.errors_impl.FailedPreconditionError: Table
already initialized. [[Node:
Generator/input_layer/SFR1 _indicator/SFR1 _lookup/hash_table/table_init
= InitializeTableV2[Tkey=DT_STRING, Tval=DT_INT64](Generator/input_layer/SFR1 _indicator/SFR1 _lookup/hash_table,
Generator/input_layer/SFR1 _indicator/SFR1 _lookup/Const,
Generator/input_layer/SFR1 _indicator/SFR1 _lookup/ToInt64)]]
[[Node: Generator2/IteratorGetNext =
IteratorGetNextoutput_shapes=[[?,10000,160]],
output_types=[DT_FLOAT],
_device="/job:localhost/replica:0/task:0/device:CPU:0"]]
I have tried many combinations to determine the source of the problem. I understood that this problem occurs when the model includes both tf.feature_column.categorical_column_with_vocabulary_list and the Dataset API. If I choose TFRecordReader instead of the Dataset API, the code runs.
When I searched Stack Overflow, I noticed that there is a similar issue; I am adding the link below. As both problems are the same, I didn't copy all my code. The link below includes enough detail to explain my problem:
Tensorflow feature columns in Dataset map Table already initialized issue
Thanks.
I came across the same issue, then modified my code following this warning from TensorFlow, and it works:
Creating lookup tables inside a function passed to Dataset.map() is not supported. Create each table outside the function, and capture it inside the function to use it.
Hope it helps.
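For illustration, a minimal TF 2.x-style sketch of that pattern, with the feature column (and its lookup table) built once outside the mapped function; the file name, parsing spec and batch size here are placeholders, not the asker's code:
import tensorflow as tf

# Build the feature column and its lookup table ONCE, outside Dataset.map()
SFR1 = tf.feature_column.indicator_column(
    tf.feature_column.categorical_column_with_vocabulary_list(
        "SFR1 ", vocabulary_list=("1", "2")))
encode = tf.keras.layers.DenseFeatures([SFR1])

def parse_fn(serialized):
    # The mapped function only parses records; it creates no new tables
    return tf.io.parse_single_example(
        serialized, {"SFR1 ": tf.io.FixedLenFeature([], tf.string)})

dataset = tf.data.TFRecordDataset("data.tfrecord").map(parse_fn).batch(32)

for batch in dataset.take(1):
    dense = encode(batch)  # apply the captured column to the parsed features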
This is an issue with earlier versions of TensorFlow; updating to TF 2.0 should resolve it.
pip install --upgrade tensorflow

Name error when calling defined function in Jupyter

I am following a tutorial over at https://blog.patricktriest.com/analyzing-cryptocurrencies-python/ and I've got a bit stuck. I am trying to define, then immediately call, a function.
My code is as follows:
def merge_dfs_on_column(dataframes, labels, col):
    '''merge a single column of each dataframe on to a new combined dataframe'''
    series_dict = {}
    for index in range(len(dataframes)):
        series_dict[labels[index]] = dataframes[index][col]
    return pd.DataFrame(series_dict)

# Merge the BTC price dataseries into a single dataframe
btc_usd_datasets = merge_dfs_on_column(list(exchange_data.values()), list(exchange_data.keys()), 'Weighted Price')
I can clearly see that I have defined the merge_dfs_on_column function and I think the syntax is correct; however, when I call the function on the last line, I get the following error:
NameError Traceback (most recent call last)
<ipython-input-22-a113142205e3> in <module>()
1 # Merge the BTC price dataseries into a single dataframe
----> 2 btc_usd_datasets= merge_dfs_on_column(list(exchange_data.values()),list(exchange_data.keys()),'Weighted Price')
NameError: name 'merge_dfs_on_column' is not defined
I have Googled for answers and carefully checked the syntax, but I can't see why that function isn't recognised when called.
Your function definition isn't getting executed by the Python interpreter before you call the function.
Double check what is getting executed and when. In Jupyter it's possible to run code out of input-order, which seems to be what you are accidentally doing. (perhaps try 'Run All')
Well, if you're defining it yourself, then you have probably copied and pasted it directly from somewhere on the web, and it might contain characters that you are not able to see.
Try typing the function definition out by hand, use pass as its body, comment out the other code, and see whether it works.
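If you suspect invisible characters from a copy-paste, one quick (hypothetical) check is to paste the suspicious line into a string and list anything non-ASCII, such as the non-breaking spaces (U+00A0) that often come from web pages:
pasted = "def merge_dfs_on_column(dataframes, labels, col):"  # paste the suspect line here
print([hex(ord(ch)) for ch in pasted if ord(ch) > 127])       # an empty list means nothing hidden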
"run all" does not work.
Shutting down the kernel and restarting does not help either.
If I write:
def whatever(a):
    return a*2
whatever("hallo")
in the next cell, this works.
I have also experienced this kind of problem frequently in Jupyter Notebook, but after replacing %% with %%time the error was resolved. I don't know why.
After some browsing I found that this is not a Jupyter Notebook issue but an IPython issue; here is the issue, and this problem is also answered in this Stack Overflow question.

Keras with Theano on GPU

While trying to run my Keras code on the GPU (CUDA installed), I am not able to execute the following statement, as has been suggested in many online references.
set THEANO_FLAGS="mode=FAST_RUN,device=gpu,floatX=float32" & python theanogpu_example.py
I am getting the following error.
ValueError: Invalid value ("FAST_RUN,device=gpu,floatX=float32") for configuration variable "mode". Valid options are ('Mode', 'DebugMode', 'FAST_RUN', 'NanGuardMode', 'FAST_COMPILE', 'DEBUG_MODE')
I have also tried the other approach that was suggested, from inside the code.
import theano
theano.config.device = 'gpu'
theano.config.floatX = 'float32'
I get the following error.
Exception: Can't change the value of this config parameter after initialization!
Apart from knowing how to make it run, I would also take this opportunity to ask a simpler question: how do I find out, on Windows, what my device is, i.e. whether it is 'gpu', 'gpu1' or 'gpu0'? I have tried all three in my case but none has worked.
Any suggestions will be appreciated.
The best way is to set THEANO_FLAGS before running the code, because the config variables cannot be changed after Theano is imported. Try this:
import os
os.environ['THEANO_FLAGS'] = "device=cuda,force_device=True,floatX=float32"
import theano
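Once the flags are accepted, one way to confirm Theano is actually using the GPU is a check along the lines of the GPU test in the Theano documentation (a sketch; the exact op names depend on the backend in use):
import numpy as np
import theano
from theano import function, shared, tensor as tt

# Compile a small graph and inspect which ops it was compiled into
x = shared(np.asarray(np.random.rand(1000), theano.config.floatX))
f = function([], tt.exp(x))

ops = [type(node.op).__name__ for node in f.maker.fgraph.toposort()]
print('Used the GPU' if any('Gpu' in name for name in ops) else 'Used the CPU')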
