SciKit-Learn Interactive simulation of data through UI - python-3.x

With the current version 0.22.2 there is an interactive tool to enter data and see the results; it's called Libsvm GUI.
I never managed to get it running in a Jupyter notebook.
Having seen that there is a Binder option, I tried that (which should not depend on my local environment), but errors come up.
https://scikit-learn.org/stable/auto_examples/applications/svm_gui.html
Automatically created module for IPython interactive environment
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-2-e5e1b6a6b155> in <module>
6
7 import matplotlib
----> 8 matplotlib.use('TkAgg')
9
10 from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
/srv/conda/envs/notebook/lib/python3.7/site-packages/matplotlib/cbook/deprecation.py in wrapper(*args, **kwargs)
305 f"for the old name will be dropped %(removal)s.")
306 kwargs[new] = kwargs.pop(old)
--> 307 return func(*args, **kwargs)
308
309 # wrapper() must keep the same documented signature as func(): if we
/srv/conda/envs/notebook/lib/python3.7/site-packages/matplotlib/__init__.py in use(backend, warn, force)
1305 if force:
1306 from matplotlib.pyplot import switch_backend
-> 1307 switch_backend(name)
1308 else:
1309 # Finally if pyplot is not imported update both rcParams and
/srv/conda/envs/notebook/lib/python3.7/site-packages/matplotlib/pyplot.py in switch_backend(newbackend)
234 "Cannot load backend {!r} which requires the {!r} interactive "
235 "framework, as {!r} is currently running".format(
--> 236 newbackend, required_framework, current_framework))
237
238 rcParams['backend'] = rcParamsDefault['backend'] = newbackend
ImportError: Cannot load backend 'TkAgg' which requires the 'tk' interactive framework, as 'headless' is currently running
Judging from this first error, something seems wrong even in the untouched Binder environment, but I am not sure whether the problem lies with Binder or with the code itself.
What can I try to make it work?
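The traceback says TkAgg needs the tk interactive framework, while Binder's kernel runs headless. A minimal sketch of a guard one could try (my own assumption, not part of the original example): fall back to the non-interactive Agg backend when no display is present. This only avoids the ImportError; the Libsvm GUI itself still needs a real display to be usable.
import os
import matplotlib

# TkAgg needs an X display; on a headless host (e.g. Binder) fall back to Agg.
# With Agg the GUI window cannot actually be shown; this merely prevents the
# ImportError so the rest of the example can run.
if os.environ.get("DISPLAY"):
    matplotlib.use("TkAgg")
else:
    matplotlib.use("Agg")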

Related

Unable to export Core ML model in Turicreate

I used AWS SageMaker with a Jupyter notebook to train my Turicreate model. It trained successfully, but I'm unable to export it to a Core ML model; it shows the error below. I've tried various kernels in the Jupyter notebook with the same result. Any ideas on how to fix this error?
turicreate 5.4
GPU: mxnet-cu100
KeyError Traceback (most recent call last)
<ipython-input-6-3499bdb76e06> in <module>()
1 # Export for use in Core ML
----> 2 model.export_coreml('pushupsTC.mlmodel')
~/anaconda3/envs/python3/lib/python3.6/site-packages/turicreate/toolkits/object_detector/object_detector.py in export_coreml(self, filename, include_non_maximum_suppression, iou_threshold, confidence_threshold)
1216 assert (self._model[23].name == 'pool5' and
1217 self._model[24].name == 'specialcrop5')
-> 1218 del net._children[24]
1219 net._children[23] = op
1220
KeyError: 24
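For reference, a minimal sketch of the failing call with the keyword parameters visible in the traceback's signature; the model path and the values below are illustrative assumptions, not taken from the question.
import turicreate as tc

# Hypothetical saved model; 'pushups.model' is an illustrative path.
model = tc.load_model('pushups.model')

# Same call as in the question; keyword names come from the export_coreml
# signature shown in the traceback, values are illustrative.
model.export_coreml('pushupsTC.mlmodel',
                    include_non_maximum_suppression=True,
                    iou_threshold=0.45,
                    confidence_threshold=0.25)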

Keras LSTM seq2seq example: "Keyword argument not understood: return_state" on Windows

I am running this example code (seq2seq built on Keras) from https://github.com/fchollet/keras/blob/master/examples/lstm_seq2seq.py.
This code runs correctly on my Ubuntu machine, but an error occurred when I ran the same code on Windows.
It says:
Using TensorFlow backend.
Number of samples: 10000
Number of unique input tokens: 73
Number of unique output tokens: 86
Max sequence length for inputs: 17
Max sequence length for outputs: 42
Traceback (most recent call last):
File "h:/eclipse_workspace/Keras_DL/src/seq2seq/lstm_seq2seq.py", line 125, in
encoder = LSTM(latent_dim, return_state = True)
File "D:\software\anaconda\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "D:\software\anaconda\lib\site-packages\keras\layers\recurrent.py", line 949, in init
super(LSTM, self).init(**kwargs)
File "D:\software\anaconda\lib\site-packages\keras\layers\recurrent.py", line 191, in init
super(Recurrent, self).init(**kwargs)
File "D:\software\anaconda\lib\site-packages\keras\engine\topology.py", line 281, in init
raise TypeError('Keyword argument not understood:', kwarg)
TypeError: ('Keyword argument not understood:', 'return_state')
I found that return_state does exist in
keras.layers.recurrent.Recurrent(return_sequences=False, return_state=False, go_backwards=False, stateful=False, unroll=False, implementation=0)
Can anyone tell me how can I run this demo correctly on Windows?
My system info:
- OS : Windows 10 64 bit
- python 3.5.2 64 bit
- cudnn-8.0-windows10-x64-v5.1
- keras 2.04 tensorflow-gpu 1.1.0
Your Keras version is too old; return_state was added in Keras 2.0.5. I suggest you install the latest version from GitHub, since the example code you're running was added to the library less than 24 hours ago.
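A quick way to confirm what is installed (a sketch; the upgrade command in the comment assumes the same fchollet/keras repository the example comes from):
# Check the installed Keras version; return_state requires >= 2.0.5.
import keras
print(keras.__version__)

# If this prints an older version, upgrade from GitHub (run in a shell):
#   pip install --upgrade git+https://github.com/fchollet/keras.git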

How to fix "TypeError: connect() failed between NavigationToolbar2QT.message[str] and _show_message()" after Linux upgrade?

After upgrading my Fedora 24 to 25, I am having an issue running a Python script that worked just fine under Fedora 24. No matter what I choose from the default list for the backend in the matplotlibrc file, I am not able to produce plots. In particular, when I choose Qt5Agg as the backend, I receive the weird error message below, and it is really bothersome that I cannot find anything related to it just by searching the internet. I am also aware that something in the upgrade could have gone wrong, affecting my Python and/or Qt packages. I just need to know what connectivity has to do with the choice of backend (if anything at all), and why none of the default choices gets rid of the error. To be specific: why does choosing Qt5Agg as the default backend in the matplotlibrc file give such an error message related to the function connect()? Please let me know if posting the script would help you with the answer. Here are the imports at the beginning of that script:
import numpy as np
from numpy import nan
import pandas as pd
import matplotlib as mpl
#import matplotlib
#matplotlib.use('Qt5Agg')
import matplotlib.pyplot as plt
import pylab as pl
from uncertainties import ufloat
from uncertainties.umath import *
from matplotlib.ticker import MaxNLocator
from collections import OrderedDict
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM, z_at_value
from numpy import sqrt, mean, square, std, maximum, minimum
from sklearn.metrics import mean_squared_error
from scipy.stats import poisson, chi2
import math
import sys
And the error message:
QObject::connect: Cannot connect NavigationToolbar2QT::message(QString) to (null)::_show_message()
Traceback (most recent call last):
File "myscript.py", line 496, in <module>
f, ((ax1, ax6, ax11), (ax2, ax7, ax12), (ax3, ax8, ax13), (ax4, ax9, ax14), (ax5, ax10, ax15)) = plt.subplots(5, 3, sharex=True, sharey=False , figsize=(20,9))
File "/usr/lib/python3.5/site-packages/matplotlib/pyplot.py", line 1177, in subplots
fig = figure(**fig_kw)
File "/usr/lib/python3.5/site-packages/matplotlib/pyplot.py", line 527, in figure
**kwargs)
File "/usr/lib/python3.5/site-packages/matplotlib/backends/backend_qt5agg.py", line 43, in new_figure_manager
return new_figure_manager_given_figure(num, thisFig)
File "/usr/lib/python3.5/site-packages/matplotlib/backends/backend_qt5agg.py", line 51, in new_figure_manager_given_figure
return FigureManagerQT(canvas, num)
File "/usr/lib/python3.5/site-packages/matplotlib/backends/backend_qt5.py", line 465, in __init__
self.toolbar.message.connect(self._show_message)
TypeError: connect() failed between NavigationToolbar2QT.message[str] and _show_message()
It's a bug in that backend, exposed by stricter checking in PyQt 5.7.
It was fixed in July; I suggest you open a Fedora bug so they upgrade those packages or backport the fix.
As for why this happens: it is not related to connectivity as in networking, but to connecting Qt's signals and slots.
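To illustrate what "connect" means here, a minimal self-contained PyQt5 signal/slot sketch (the class and names below are illustrative, not matplotlib's actual code):
from PyQt5.QtCore import QObject, pyqtSignal

class Toolbar(QObject):
    # A signal carrying a str, analogous to NavigationToolbar2QT.message.
    message = pyqtSignal(str)

def show_message(text):
    print("status:", text)

tb = Toolbar()
tb.message.connect(show_message)  # the kind of connect() that fails in the bug
tb.message.emit("ready")          # invokes show_message("ready")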

PySpark exception while using IPython

I installed PySpark and the IPython notebook on Ubuntu 12.04.
After installing, when I run "ipython --profile=pyspark", it throws the following exception:
ubuntu_user@ubuntu_user-VirtualBox:~$ ipython --profile=pyspark
Python 2.7.3 (default, Jun 22 2015, 19:33:41)
Type "copyright", "credits" or "license" for more information.
IPython 0.12.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
IPython profile: pyspark
Error: Must specify a primary resource (JAR or Python or R file)
Run with --help for usage help or --verbose for debug output
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
/usr/lib/python2.7/dist-packages/IPython/utils/py3compat.pyc in execfile(fname, *where)
173 else:
174 filename = fname
--> 175 __builtin__.execfile(filename, *where)
/home/ubuntu_user/.config/ipython/profile_pyspark/startup/00-pyspark-setup.py in <module>()
6 sys.path.insert(0, os.path.join(spark_home, 'python/lib/py4j-0.8.2.1-src.zip'))
7
----> 8 execfile(os.path.join(spark_home, 'python/pyspark/shell.py'))
9
/home/ubuntu_user/spark/python/pyspark/shell.py in <module>()
41 SparkContext.setSystemProperty("spark.executor.uri", os.environ["SPARK_EXECUTOR_URI"])
42
---> 43 sc = SparkContext(pyFiles=add_files)
44 atexit.register(lambda: sc.stop())
45
/home/ubuntu_user/spark/python/pyspark/context.pyc in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls)
108 """
109 self._callsite = first_spark_call() or CallSite(None, None, None)
--> 110 SparkContext._ensure_initialized(self, gateway=gateway)
111 try:
112 self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,
/home/ubuntu_user/spark/python/pyspark/context.pyc in _ensure_initialized(cls, instance, gateway)
232 with SparkContext._lock:
233 if not SparkContext._gateway:
--> 234 SparkContext._gateway = gateway or launch_gateway()
235 SparkContext._jvm = SparkContext._gateway.jvm
236
/home/ubuntu_user/spark/python/pyspark/java_gateway.pyc in launch_gateway()
92 callback_socket.close()
93 if gateway_port is None:
---> 94 raise Exception("Java gateway process exited before sending the driver its port number")
95
96 # In Windows, ensure the Java child processes do not linger after Python has exited.
Exception: Java gateway process exited before sending the driver its port number
Below are the settings and configuration files.
ubuntu_user@ubuntu_user-VirtualBox:~$ ls /home/ubuntu_user/spark
bin ec2 licenses README.md
CHANGES.txt examples NOTICE RELEASE
conf lib python sbin
data LICENSE R spark-1.5.2-bin-hadoop2.6.tgz
Below are the IPython settings:
ubuntu_user@ubuntu_user-VirtualBox:~$ ls .config/ipython/profile_pyspark/
db ipython_config.py log security
history.sqlite ipython_notebook_config.py pid startup
IPython and Spark (PySpark) configuration:
ubuntu_user@ubuntu_user-VirtualBox:~$ vi .config/ipython/profile_pyspark/ipython_notebook_config.py
# Configuration file for ipython-notebook.
c = get_config()
# IPython PySpark
c.NotebookApp.ip = 'localhost'
c.NotebookApp.open_browser = False
c.NotebookApp.port = 7770
ubuntu_user@ubuntu_user-VirtualBox:~$ vi .config/ipython/profile_pyspark/startup/00-pyspark-setup.py
import os
import sys
spark_home = os.environ.get('SPARK_HOME', None)
sys.path.insert(0, spark_home + "/python")
sys.path.insert(0, os.path.join(spark_home, 'python/lib/py4j-0.8.2.1-src.zip'))
execfile(os.path.join(spark_home, 'python/pyspark/shell.py'))
Setting the following environment variables in .bashrc or .bash_profile:
ubuntu_user@ubuntu_user-VirtualBox:~$ vi .bashrc
export SPARK_HOME="/home/ubuntu_user/spark"
export PYSPARK_SUBMIT_ARGS="--master local[2]"
I am new to Apache Spark and IPython. How can I solve this issue?
I had the same exception when my virtual machine didn't have enough memory for Java. I allocated more memory to the virtual machine and the exception went away.
Steps: shut down the VM -> VirtualBox Settings -> "System" tab -> set the memory.
(However, this may only be a workaround. I guess the correct way to fix this exception is to properly configure Spark's Java memory.)
Maybe Spark is having trouble locating the PySpark shell. Try adding:
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH
This works for Spark 1.6.1. If you have a different version, locate the py4j .zip file and add its path instead.
Two thoughts:
First, where is your JDK? I don't see a JAVA_HOME variable configured in your files. That alone might be enough to cause:
Error: Must specify a primary resource (JAR or Python or R file)
Second, make sure your port 7770 is open and available to your JVM.
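As a quick diagnostic before launching the profile, one could print the environment variables the question and the answers above touch on (JAVA_HOME being the one this answer suspects is missing); a sketch:
import os

# Print the environment variables Spark's launcher depends on.
for var in ("SPARK_HOME", "JAVA_HOME", "PYSPARK_SUBMIT_ARGS", "PYTHONPATH"):
    print(var, "=", os.environ.get(var, "<not set>"))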

Error when running deepmind

It took me two days to install the requirements of deep Q (Python version). When I tried to run it today, I faced this problem; the output is as follows.
root@unicorn:/media/trump/Data1/wei/college/laboratory/deep_q_rl-master/deep_q_rl# python run_nips.py
A.L.E: Arcade Learning Environment (version 0.5.0)
[Powered by Stella]
Use -help for help screen.
Warning: couldn't load settings file: ./ale.cfg
Game console created:
ROM file: ../roms/breakout.bin
Cart Name: Breakout - Breakaway IV (1978) (Atari)
Cart MD5: f34f08e5eb96e500e851a80be3277a56
Display Format: AUTO-DETECT ==> NTSC
ROM Size: 2048
Bankswitch Type: AUTO-DETECT ==> 2K
Running ROM file...
Random seed is 65
Traceback (most recent call last):
File "run_nips.py", line 60, in <module>
launcher.launch(sys.argv[1:], Defaults, __doc__)
File "/media/trump/Data1/wei/college/laboratory/deep_q_rl-master/deep_q_rl/launcher.py", line 223, in launch
rng)
File "/media/trump/Data1/wei/college/laboratory/deep_q_rl-master/deep_q_rl/q_network.py", line 53, in __init__
num_actions, num_frames, batch_size)
File "/media/trump/Data1/wei/college/laboratory/deep_q_rl-master/deep_q_rl/q_network.py", line 168, in build_network
batch_size)
File "/media/trump/Data1/wei/college/laboratory/deep_q_rl-master/deep_q_rl/q_network.py", line 407, in build_nips_network_dnn
from lasagne.layers import dnn
File "/usr/local/lib/python2.7/dist-packages/Lasagne-0.2.dev1-py2.7.egg/lasagne/layers/dnn.py", line 13, in <module>
raise ImportError("dnn not available") # pragma: no cover
ImportError: dnn not available
I have already tested Theano, NumPy, and SciPy, and no errors came up. But when I ran it, it said dnn is not available. So I went looking for dnn, and its code is like this:
import theano
from theano.sandbox.cuda import dnn
from .. import init
from .. import nonlinearities
from .base import Layer
from .conv import conv_output_length
from .pool import pool_output_length
from ..utils import as_tuple
if not theano.config.device.startswith("gpu") or not dnn.dnn_available():
raise ImportError("dnn not available") # pragma: no cover
Just hope someone can help me.
Did you install CUDA and cuDNN?
Lasagne is built on top of Theano and, in some cases, relies on CUDA code (e.g. here) rather than abstracting it away.
This can be seen from the import:
from theano.sandbox.cuda import dnn
Also see: https://github.com/Lasagne/Lasagne/issues/242
To get cuDNN you need to register with NVIDIA as a developer; see:
https://developer.nvidia.com/accelerated-computing
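As a quick check that the GPU device and cuDNN are actually visible, a sketch mirroring the condition in Lasagne's dnn.py quoted above:
import theano
from theano.sandbox.cuda import dnn  # fails outright if Theano lacks CUDA support

# Mirrors the check in lasagne/layers/dnn.py: both conditions must hold,
# otherwise "ImportError: dnn not available" is raised.
print(theano.config.device)   # should start with "gpu"
print(dnn.dnn_available())    # True only if cuDNN can be loaded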
Hope this helps.
Cheers,
Michael
