I'm trying to replicate the first example on https://pytorch.org/docs/stable/generated/torch.linalg.solve.html
import torch
import time
Acuda = torch.randn(2,3,3,device='cuda')
bcuda = torch.randn(2,3,4,device='cuda')
t1 = time.time()
torch.linalg.torch.solve(Acuda,bcuda)
print('torch took: ',time.time()-t1)
As a result, I'm getting:
Traceback (most recent call last):
File "linalg_solver_test.py", line 10, in <module>
torch.linalg.torch.solve(Acuda,bcuda)
RuntimeError: A must be batches of square matrices, but they are 4 by 3 matrices
My PyTorch version is 1.7.1.
In contrast to the example on the documentation page, I'm using torch.linalg.torch.solve,
as torch.linalg.solve does not exist in this version.
The order of the parameters is reversed in this old version of the function: torch.solve takes (B, A) rather than (A, B) when solving AX = B.
https://pytorch.org/docs/stable/generated/torch.solve.html
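With the arguments swapped accordingly, a minimal sketch of your timing example on PyTorch 1.7.x would look like this (torch.solve requires A to be batches of square matrices and returns both the solution and the LU factorization):
import torch
import time
Acuda = torch.randn(2, 3, 3, device='cuda')  # A: batches of square 3x3 matrices
bcuda = torch.randn(2, 3, 4, device='cuda')  # B: the right-hand sides
t1 = time.time()
X, LU = torch.solve(bcuda, Acuda)  # note the order: B first, then A
print('torch took: ', time.time() - t1)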
You should use the latest PyTorch 1.9 for linalg, because its release notes explicitly mention "Major improvements to support scientific computing, including torch.linalg" (https://github.com/pytorch/pytorch/releases/tag/v1.9.0).
PyTorch 1.7.1 is rather old. The error you see comes from the legacy solver, which requires A to be batches of square matrices and takes its arguments in the opposite order.
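On PyTorch 1.9, the documented example then works as written; a minimal sketch:
import torch
A = torch.randn(2, 3, 3)
B = torch.randn(2, 3, 4)
X = torch.linalg.solve(A, B)  # solves AX = B; here A comes first
print(torch.allclose(A @ X, B))  # True up to numerical tolerance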
Related
I am getting this error when trying to use PyTorch
import torch
z = torch.zeros(5,3)
print (z)
print(z.dtype)
AttributeError: partially initialized module 'torch' has no attribute 'zeros' (most likely due to a circular import)
I am on Python 3.9 because PyTorch does not yet support more recent versions.
I tried reinstalling with pip3, and it says the package is already installed.
Show a minimal, reproducible example
In an empty Python program, show what is needed for any third party to replicate your problem.
This means:
the import statements
just enough statements to trigger the error
And please state the version of PyTorch as well as the version of Python (which you did give).
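For instance, a minimal self-contained script could look like the sketch below. One thing worth checking, given that the traceback mentions a circular import: if your own script (or another file on the import path) is named torch.py, Python imports that file instead of the library and raises exactly this "partially initialized module" error.
import sys
import torch
print(sys.version)         # Python version
print(torch.__version__)   # PyTorch version
print(torch.__file__)      # if this points at your own torch.py, rename that file
z = torch.zeros(5, 3)
print(z)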
I'm trying to convert a PyTorch model to an MLModel with ONNX.
My code:
import torch
from onnx_coreml import convert
import coremltools
from model import BiSeNet  # model definition; module name assumed from the traceback below

net = BiSeNet(19)
net.cuda()
net.load_state_dict(torch.load('model.pth'))
#net.eval()
dummy = torch.rand(1, 3, 512, 512).cuda()
torch.onnx.export(net, dummy, "Model.onnx", input_names=["image"], output_names=["output"], opset_version=11)
finalModel = convert(model='Model.onnx', minimum_ios_deployment_target='12')
finalModel.save('ModelML.mlmodel')
After the code runs, Model.onnx is generated; however, the .mlmodel file is not. There are no errors in the console. This is the output:
2020-04-15 21:49:32.367179: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
WARNING:root:TensorFlow version 2.2.0-rc2 detected. Last version known to be fully compatible is 1.14.0 .
WARNING:root:Keras version 2.3.1 detected. Last version known to be fully compatible of Keras is 2.2.4 .
1.4.0
/content/drive/My Drive/Collab/fp/model.py:116: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
size_array = [int(s) for s in feat32.size()[2:]]
/content/drive/My Drive/Collab/fp/model.py:80: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
size_array = [int(s) for s in feat.size()[2:]]
/content/drive/My Drive/Collab/fp/model.py:211: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
size_array = [int(s) for s in feat.size()[2:]]
What could be the issue?
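A first sanity check is to confirm that the exported Model.onnx is structurally valid before converting it; a minimal sketch using the onnx package's checker:
import onnx
model = onnx.load("Model.onnx")
onnx.checker.check_model(model)  # raises an exception if the graph is malformed
print(onnx.helper.printable_graph(model.graph))  # inspect the graph's inputs and outputs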
As far as I can see, sklearn has deprecated the partial dependence functionality. I tried to run a simple example:
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence
from sklearn.inspection import plot_partial_dependence
X, y = make_friedman1()
clf = GradientBoostingRegressor(n_estimators=10).fit(X, y)
plot_partial_dependence(clf, X, [0, (0, 1)])
But I get the following error message: ImportError: No module named 'sklearn.inspection'
To me, partial dependence (and marginal effects) plots are very important (in combination with relative importances) for better understanding machine learning results and predictions.
Is there an alternative available? Or rather, how can I plot the partial dependence?
I think there might be a confusion with versions of sklearn. Just as a suggestion, I would check yours (e.g., import sklearn; sklearn.__version__). For example, if it happens to be v0.20.3, aren't you looking for partial_dependence and plot_partial_dependence from sklearn.ensemble.partial_dependence instead of sklearn.inspection?
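On those older releases (before sklearn.inspection was introduced in 0.21), a sketch of the equivalent call through the legacy module would be:
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble.partial_dependence import plot_partial_dependence  # legacy location, removed in later releases

X, y = make_friedman1()
clf = GradientBoostingRegressor(n_estimators=10).fit(X, y)
plot_partial_dependence(clf, X, [0, (0, 1)])  # same call, older import path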
I had the same issue and resolved it by simply updating sklearn, which now contains sklearn.inspection. I'm using Anaconda; if you are too, simply type this in the Anaconda Prompt:
conda update --all
to update all packages. Restart your Jupyter notebook and it should work.
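If you are not on Anaconda, the pip equivalent should be (sklearn.inspection requires scikit-learn >= 0.21):
pip3 install --upgrade scikit-learn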
When I import scikit-learn before importing tensorflow I don't have any issues. Running this block of code produces an output of 1.7766212763101197e-12.
import numpy as np
np.random.seed(123)
import numpy.random as rand
from sklearn.decomposition import PCA
import tensorflow as tf
X = rand.randn(100,15)
X = X - X.mean(axis=0)
mod = PCA()
w = mod.fit_transform(X)
h = mod.components_
print(np.sum(np.abs(X-np.dot(w,h))))
However, if I import tensorflow before scikit-learn, my code no longer produces the correct result. When I run this code block:
import tensorflow as tf
import numpy as np
np.random.seed(123)
import numpy.random as rand
from sklearn.decomposition import PCA
X = rand.randn(100,15)
X = X - X.mean(axis=0)
mod = PCA()
w = mod.fit_transform(X)
h = mod.components_
print(np.sum(np.abs(X-np.dot(w,h))))
I get an output of 130091393261440.25.
Why is that? My versions for the packages are:
numpy - 1.13.1
sklearn - 0.19.0
tensorflow - 1.3.0
Import order should not affect output, since Python modules are self-contained except where they share dependencies (here, the only plausible channel for interference is the native numerical libraries both packages load).
I was unable to reproduce your error, and I get an output of 1.7951539777252834e-12 for both code blocks.
This is an interesting problem, and I am curious to see whether others can provide a better explanation of why you are seeing this issue.
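One way to narrow this down is to test whether numpy's own SVD shows the same drift, which would point at the underlying BLAS/LAPACK libraries rather than at scikit-learn; a sketch to run once with and once without the tensorflow import:
import tensorflow as tf  # comment this line out for the second run
import numpy as np

np.random.seed(123)
X = np.random.randn(100, 15)
X = X - X.mean(axis=0)

# PCA reconstruction via numpy's own SVD; if this error also changes with
# the import order, the interference is below scikit-learn
U, s, Vt = np.linalg.svd(X, full_matrices=False)
print(np.sum(np.abs(X - np.dot(U * s, Vt))))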
Note: this answer addresses the question in the title, for those looking to use TensorFlow within Scikit-Learn, rather than the import-order issue you ran into.
You can use TensorFlow within Scikit-Learn pipelines using Neuraxle.
Neuraxle is an extension of Scikit-Learn to make it more compatible with all deep learning libraries.
Problem: You can’t Parallelize nor Save Pipelines Using Steps that Can’t be Serialized “as-is” by Joblib (e.g.: a TensorFlow step)
Here, a step means a transformer or estimator in a scikit-learn Pipeline.
This problem will only surface past some point of using Scikit-Learn. This is the point of no return: you’ve coded your entire production pipeline, but once you’ve trained it and selected the best model, you realize that what you’ve just coded can’t be serialized.
This means that once trained, your pipeline can’t be saved to disk, because one of its steps imports from a library implemented in another language and/or uses GPU resources. Your code smells weird, and you start panicking over what was a full year of research and development.
Solution with Code Examples:
Here is a full project example from A to Z where TensorFlow is used with Neuraxle as if it were used with Scikit-Learn.
Here is another practical example where TensorFlow is used within a scikit-learn-like pipeline.
The trick is performed by using Neuraxle-TensorFlow, which makes use of Neuraxle's savers.
Read also: https://stackoverflow.com/a/60557192/2476920
I've tried
from numpy import array
from pyspark.mllib.clustering import BisectingKMeans, BisectingKMeansModel
I'm using the iris data set:
iris_model.transform(iris)
but I get this error:
AttributeError
Traceback (most recent call last)
<ipython-input-241-59b5e8c1e068> in <module>()
----> 1 iris_model.transform(iris)
AttributeError: 'BisectingKMeansModel' object has no attribute 'transform'
I can get the clusterCenters, and I get the array, but I need the cluster to which each case belongs.
Thanks
You are probably mixing up the Spark ML and MLlib APIs.
MLlib was the original package; developers then started building the new ML package, which works with DataFrames.
Change your import to pyspark.ml.clustering and you will get the new version, which has a transform function and works with DataFrames and the new ML Pipelines. I suggest you build a Pipeline once you have the algorithm working :)
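A minimal sketch with the DataFrame-based API (assuming iris is a DataFrame whose feature columns have been assembled into a single "features" vector column, e.g. with pyspark.ml.feature.VectorAssembler):
from pyspark.ml.clustering import BisectingKMeans

bkm = BisectingKMeans(k=3, featuresCol="features", predictionCol="cluster")
model = bkm.fit(iris)
clustered = model.transform(iris)  # adds a "cluster" column with each row's group
clustered.select("cluster").show()
For what it's worth, the MLlib model you already have also exposes a predict method that maps a point (or an RDD of points) to its cluster index, even though it lacks transform.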