I have to use the code from https://www.quora.com/How-do-I-load-train-and-test-data-from-the-local-drive-for-a-deep-learning-Keras-model for a computer vision project; it loads train and test data from the local drive for a Keras model.
I tried it, but I get some errors, such as:
PermissionError Traceback (most recent call last)
<ipython-input-10-3806351fb2b0> in <module>
14 for sample in train_batch:
15 img_path = train_path+sample
---> 16 x = image.load_img(img_path)
17 # preprocessing if required
18 x_train.append(x)
~\Anaconda3\lib\site-packages\keras_preprocessing\image\utils.py in load_img(path, grayscale, color_mode, target_size, interpolation)
108 raise ImportError('Could not import PIL.Image. '
109 'The use of `load_img` requires PIL.')
--> 110 img = pil_image.open(path)
111 if color_mode == 'grayscale':
112 if img.mode != 'L':
~\Anaconda3\lib\site-packages\PIL\Image.py in open(fp, mode)
2768
2769 if filename:
-> 2770 fp = builtins.open(filename, "rb")
2771 exclusive_fp = True
2772
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\ASUS\\Desktop\\step2_dir/datasets/dataset/Alfalfa'
Note: I am already sure that PIL is installed successfully.
So, I need some help; if anyone can try the code, please tell me how to fix the errors.
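For context, the loading loop I am running looks roughly like this (a minimal sketch reconstructed from the traceback above; the os.listdir call and the exact directory layout are my assumptions based on the linked Quora answer):
import os
from keras.preprocessing import image

# Assumed layout: train_path points at my dataset folder on the Desktop.
train_path = 'C:\\Users\\ASUS\\Desktop\\step2_dir/datasets/dataset/'
train_batch = os.listdir(train_path)

x_train = []
for sample in train_batch:
    img_path = train_path + sample
    # load_img expects a single image file; if `sample` is a sub-folder
    # (such as the 'Alfalfa' directory named in the error), this call fails.
    x = image.load_img(img_path)
    # preprocessing if required
    x_train.append(x)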
Thanks.
You don't have permission to access the file path:
PermissionError: [Errno 13] Permission denied:
C:\\Users\\ASUS\\Desktop\\step2_dir/datasets/dataset/Alfalfa
Easily solvable! Are you on Windows?
You most likely opened Jupyter Notebook from PowerShell; you just have to open a shell with administrator rights (right-click on cmd and/or PowerShell --> Run as administrator) and launch it from there.
If you are on a bash shell, run Jupyter Notebook as a superuser and you will have the permission you need:
sudo jupyter notebook
Related
When trying to access an ADLS directory with the PySpark code shown below, I get the following error in Apache Spark:
ValueError: root_directory must be an absolute path. Got abfss://root@adlspretbiukadlsdev.dfs.core.windows.net/RAW/LANDING/ instead.
Traceback (most recent call last):
File "/home/trusted-service-user/cluster-env/env/lib/python3.6/site-packages/great_expectations/core/usage_statistics/usage_statistics.py", line 262, in usage_statistics_wrapped_method
result = func(*args, **kwargs)
The code that gives the above error when I'm trying to access the directory is as follows:
data_context_config = DataContextConfig(
    datasources={"my_spark_datasource": my_spark_datasource_config},
    store_backend_defaults=FilesystemStoreBackendDefaults(root_directory='abfss://root@adlspretbiukadlsdev.dfs.core.windows.net/RAW/LANDING/'),
)
context = BaseDataContext(project_config=data_context_config)
When I change the code to
data_context_config = DataContextConfig(
    datasources={"my_spark_datasource": my_spark_datasource_config},
    store_backend_defaults=FilesystemStoreBackendDefaults(root_directory='/abfss://root@adlspretbiukadlsdev.dfs.core.windows.net/RAW/LANDING/'),
)
I get the following error message:
PermissionError: [Errno 13] Permission denied: '/abfss:'
Traceback (most recent call last):
When I enter the following code
data_context_config = DataContextConfig(
    datasources={"my_spark_datasource": my_spark_datasource_config},
    store_backend_defaults=FilesystemStoreBackendDefaults(root_directory='/'),
)
context = BaseDataContext(project_config=data_context_config)
I get the error message:
PermissionError: [Errno 13] Permission denied: '/expectations'
Traceback (most recent call last):
However, I don't have a directory called '/expectations'.
As a side note, I'm trying to run Great Expectations.
The developers of Great Expectations have informed me that this error will be fixed in a new release of Great Expectations.
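In the meantime, my understanding is that FilesystemStoreBackendDefaults expects a plain local absolute path rather than an abfss:// URI, since it writes its stores to the local filesystem. A minimal sketch of that shape (the /tmp/great_expectations directory is an arbitrary placeholder, my_spark_datasource_config is defined as in the snippets above, and the import paths may differ between Great Expectations versions):
from great_expectations.data_context import BaseDataContext
from great_expectations.data_context.types.base import (
    DataContextConfig,
    FilesystemStoreBackendDefaults,
)

# my_spark_datasource_config: same datasource config dict as in the question.
# '/tmp/great_expectations' is only a placeholder for a writable local directory.
data_context_config = DataContextConfig(
    datasources={"my_spark_datasource": my_spark_datasource_config},
    store_backend_defaults=FilesystemStoreBackendDefaults(
        root_directory="/tmp/great_expectations"
    ),
)
context = BaseDataContext(project_config=data_context_config)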
With the current version 0.22.2 there is an interactive tool to enter data points interactively and see the results. It's called Libsvm GUI.
I never managed to get it running in a Jupyter notebook.
Seeing that there is a Binder option, I tried that (which should not depend on my computer environment), but errors still come up.
https://scikit-learn.org/stable/auto_examples/applications/svm_gui.html
Automatically created module for IPython interactive environment
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-2-e5e1b6a6b155> in <module>
6
7 import matplotlib
----> 8 matplotlib.use('TkAgg')
9
10 from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
/srv/conda/envs/notebook/lib/python3.7/site-packages/matplotlib/cbook/deprecation.py in wrapper(*args, **kwargs)
305 f"for the old name will be dropped %(removal)s.")
306 kwargs[new] = kwargs.pop(old)
--> 307 return func(*args, **kwargs)
308
309 # wrapper() must keep the same documented signature as func(): if we
/srv/conda/envs/notebook/lib/python3.7/site-packages/matplotlib/__init__.py in use(backend, warn, force)
1305 if force:
1306 from matplotlib.pyplot import switch_backend
-> 1307 switch_backend(name)
1308 else:
1309 # Finally if pyplot is not imported update both rcParams and
/srv/conda/envs/notebook/lib/python3.7/site-packages/matplotlib/pyplot.py in switch_backend(newbackend)
234 "Cannot load backend {!r} which requires the {!r} interactive "
235 "framework, as {!r} is currently running".format(
--> 236 newbackend, required_framework, current_framework))
237
238 rcParams['backend'] = rcParamsDefault['backend'] = newbackend
ImportError: Cannot load backend 'TkAgg' which requires the 'tk' interactive framework, as 'headless' is currently running
Seeing the first error, it seems something is wrong even with the untouched Binder environment, but I am not sure whether the problem is with Binder or with the code itself.
What can I try to make it work?
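One sanity check I can run in the same Binder notebook (a minimal sketch, not a fix for the GUI itself, since the GUI genuinely needs the tk framework): a non-interactive backend should still load on a headless host.
import matplotlib

# The traceback says the 'headless' framework is active, so any GUI backend
# (TkAgg, Qt5Agg, ...) will fail. A non-interactive backend should still work:
matplotlib.use('Agg')

import matplotlib.pyplot as plt

# Should print the Agg backend name on a headless Binder host.
print(matplotlib.get_backend())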
I am creating an application in Python (3.4) with tkinter and I am compiling it with PyInstaller. The code fragment that causes the error is this:
client = paramiko.SSHClient()
known_hosts = open(self.resource_path("known_hosts")) # Line 73
client.load_host_keys(known_hosts)
The error is thrown when I click a button that executes that part of the code; that is, the rest of the application runs fine. The error is this:
Exception in Tkinter callback
Traceback (most recent call last):
File "tkinter\__init__.py", line 1538, in __call__
File "prueba.py", line 73, in aceptar
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\Hernan\\AppData\\Local\\Temp\\_MEI124282\\known_hosts'
To clarify, I am compiling it and running it on Windows 10.
I tried to execute the exe as administrator, but it still gives the same error. I verified the path of the file and it exists, so I rule out the possibility that the file does not exist. I also tried to compile the exe in a cmd with administrator permissions, but that did not give me a solution either.
Any ideas?
P.S.: here is the code...
def resource_path(self, relative_path):
    """ Get absolute path to resource, works for dev and for PyInstaller """
    base_path = getattr(sys, '_MEIPASS', os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base_path, relative_path)
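For reference, paramiko's SSHClient.load_host_keys takes a filename and opens the file itself, so the call can also be written without the explicit open(). A sketch of the same fragment, still inside the method that uses resource_path above:
import paramiko

client = paramiko.SSHClient()
# load_host_keys expects a path string and opens the file itself,
# so the separate open() call is not needed.
client.load_host_keys(self.resource_path("known_hosts"))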
I have a short Python script (called VaultTransferScript.py) that should transfer a zip file from one machine to another. The destination machine is a mapped network-attached storage machine, which I have assigned to the Z: drive.
The script is:
import shutil
import os
from datetime import datetime
time_stamp = datetime.now().strftime('%Y-%m-%d_%H_%M')
title_str = 'VaultBackup.zip'
name = time_stamp + title_str
shutil.move('C:\\Users\\Hawking\\Desktop\\VaultBackups\\MyBackup.zip',
os.path.join('Z:\\VaultBackups\\'+name))
I can run this script from the notepad++ run facility, using
cmd /C python "$(FULL_CURRENT_PATH)"
But running it in a batch script as:
echo off
C:\Users\Hawking\AppData\Local\Programs\Python\Python37-32\python.exe C:\Users\Hawking\Desktop\VaultBackupTransfer.py
results in this:
C:\Users\Hawking\Desktop>echo off
Traceback (most recent call last):
File "C:\Users\Hawking\AppData\Local\Programs\Python\Python37-32\lib\shutil.py", line 557, in move
os.rename(src, real_dst)
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\Hawking\\Desktop\\VaultBackups\\MyBackup.zip' -> 'Z:\\VaultBackups\\2018-09-21_14_30VaultBackup.zip'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Hawking\Desktop\VaultBackupTransfer.py", line 7, in <module>
shutil.move('C:\\Users\\Hawking\\Desktop\\VaultBackups\\MyBackup.zip', os.path.join('Z:\\VaultBackups\\'+name))
File "C:\Users\Hawking\AppData\Local\Programs\Python\Python37-32\lib\shutil.py", line 571, in move
copy_function(src, real_dst)
File "C:\Users\Hawking\AppData\Local\Programs\Python\Python37-32\lib\shutil.py", line 257, in copy2
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "C:\Users\Hawking\AppData\Local\Programs\Python\Python37-32\lib\shutil.py", line 121, in copyfile
with open(dst, 'wb') as fdst:
FileNotFoundError: [Errno 2] No such file or directory: 'Z:\\VaultBackups\\2018-09-21_14_30VaultBackup.zip'
What is the difference in how I invoke the Python script, and why does it error out from the batch script but not from Notepad++?
You might be running the Python program with different user permissions in Notepad++ versus the command prompt. Alternatively, a different Python interpreter might be used, although nothing in particular makes me think that the latter is true.
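Another thing worth checking (an assumption, since the share name is not shown in the question): mapped drive letters like Z: are per-user and per-session on Windows, so a batch script running in a different context may not see the mapping. Using the full UNC path avoids the mapping entirely; \\nas-server\VaultBackups below is a hypothetical placeholder for the real share behind Z:.
import os
import shutil
from datetime import datetime

time_stamp = datetime.now().strftime('%Y-%m-%d_%H_%M')
name = time_stamp + 'VaultBackup.zip'

# Hypothetical UNC path: replace \\nas-server\VaultBackups with the share that
# Z: is actually mapped to. UNC paths do not rely on per-user drive mappings,
# so they behave the same from Notepad++ and from a batch script.
dst = os.path.join('\\\\nas-server\\VaultBackups', name)
shutil.move('C:\\Users\\Hawking\\Desktop\\VaultBackups\\MyBackup.zip', dst)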
I installed PySpark and IPython Notebook on Ubuntu 12.04.
After installing, when I run "ipython --profile=pyspark", it throws the following exception:
ubuntu_user@ubuntu_user-VirtualBox:~$ ipython --profile=pyspark
Python 2.7.3 (default, Jun 22 2015, 19:33:41)
Type "copyright", "credits" or "license" for more information.
IPython 0.12.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
IPython profile: pyspark
Error: Must specify a primary resource (JAR or Python or R file)
Run with --help for usage help or --verbose for debug output
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
/usr/lib/python2.7/dist-packages/IPython/utils/py3compat.pyc in execfile(fname, *where)
173 else:
174 filename = fname
--> 175 __builtin__.execfile(filename, *where)
/home/ubuntu_user/.config/ipython/profile_pyspark/startup/00-pyspark-setup.py in <module>()
6 sys.path.insert(0, os.path.join(spark_home, 'python/lib/py4j-0.8.2.1-src.zip'))
7
----> 8 execfile(os.path.join(spark_home, 'python/pyspark/shell.py'))
9
/home/ubuntu_user/spark/python/pyspark/shell.py in <module>()
41 SparkContext.setSystemProperty("spark.executor.uri", os.environ["SPARK_EXECUTOR_URI"])
42
---> 43 sc = SparkContext(pyFiles=add_files)
44 atexit.register(lambda: sc.stop())
45
/home/ubuntu_user/spark/python/pyspark/context.pyc in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls)
108 """
109 self._callsite = first_spark_call() or CallSite(None, None, None)
--> 110 SparkContext._ensure_initialized(self, gateway=gateway)
111 try:
112 self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,
/home/ubuntu_user/spark/python/pyspark/context.pyc in _ensure_initialized(cls, instance, gateway)
232 with SparkContext._lock:
233 if not SparkContext._gateway:
--> 234 SparkContext._gateway = gateway or launch_gateway()
235 SparkContext._jvm = SparkContext._gateway.jvm
236
/home/ubuntu_user/spark/python/pyspark/java_gateway.pyc in launch_gateway()
92 callback_socket.close()
93 if gateway_port is None:
---> 94 raise Exception("Java gateway process exited before sending the driver its port number")
95
96 # In Windows, ensure the Java child processes do not linger after Python has exited.
Exception: Java gateway process exited before sending the driver its port number
Below are the settings and configuration files.
ubuntu_user@ubuntu_user-VirtualBox:~$ ls /home/ubuntu_user/spark
bin ec2 licenses README.md
CHANGES.txt examples NOTICE RELEASE
conf lib python sbin
data LICENSE R spark-1.5.2-bin-hadoop2.6.tgz
Below are the IPython profile contents:
ubuntu_user@ubuntu_user-VirtualBox:~$ ls .config/ipython/profile_pyspark/
db ipython_config.py log security
history.sqlite ipython_notebook_config.py pid startup
IPython and Spark (PySpark) configuration:
ubuntu_user@ubuntu_user-VirtualBox:~$ vi .config/ipython/profile_pyspark/ipython_notebook_config.py
# Configuration file for ipython-notebook.
c = get_config()
# IPython PySpark
c.NotebookApp.ip = 'localhost'
c.NotebookApp.open_browser = False
c.NotebookApp.port = 7770
ubuntu_user@ubuntu_user-VirtualBox:~$ vi .config/ipython/profile_pyspark/startup/00-pyspark-setup.py
import os
import sys
spark_home = os.environ.get('SPARK_HOME', None)
sys.path.insert(0, spark_home + "/python")
sys.path.insert(0, os.path.join(spark_home, 'python/lib/py4j-0.8.2.1-src.zip'))
execfile(os.path.join(spark_home, 'python/pyspark/shell.py'))
I set the following environment variables in .bashrc or .bash_profile:
ubuntu_user@ubuntu_user-VirtualBox:~$ vi .bashrc
export SPARK_HOME="/home/ubuntu_user/spark"
export PYSPARK_SUBMIT_ARGS="--master local[2]"
I am new to Apache Spark and IPython. How can I solve this issue?
I had the same exception when my virtual machine didn't have enough memory for Java. So I allocated more memory to my virtual machine, and the exception went away.
Steps: Shut down the VM -> VirtualBox Settings -> "System" tab -> Set the memory
(However, this may only be a workaround. I guess the correct way to fix this exception might be to properly configure Spark's Java memory.)
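If you want to try the Spark-side configuration instead, here is a sketch under the assumption that the gateway reads PYSPARK_SUBMIT_ARGS, as the .bashrc above sets it. The driver memory is capped explicitly, and spark-submit launched this way also wants pyspark-shell as its primary resource, which is related to the "Must specify a primary resource" message in the traceback.
import os

# Set this before the SparkContext is created (e.g. at the top of
# 00-pyspark-setup.py). 512m is an arbitrary value chosen to fit a small
# VirtualBox VM; 'pyspark-shell' tells spark-submit what it is launching.
os.environ['PYSPARK_SUBMIT_ARGS'] = (
    '--master local[2] --driver-memory 512m pyspark-shell'
)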
Maybe Spark is having trouble locating the PySpark shell. Try:
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH
This will work for Spark 1.6.1. If you have a different version, try locating the py4j .zip file in your installation and adding that path instead.
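If you are unsure which py4j version ships with your Spark install, a quick check from Python (a sketch; it only lists the bundled zip under SPARK_HOME):
import glob
import os

# Assumes SPARK_HOME is set as in the .bashrc above.
spark_home = os.environ.get('SPARK_HOME', '/home/ubuntu_user/spark')

# The exact file name depends on the Spark release (py4j-0.8.2.1, py4j-0.9, ...).
print(glob.glob(os.path.join(spark_home, 'python', 'lib', 'py4j-*-src.zip')))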
Two thoughts:
Where is your JDK? I don't see a JAVA_HOME variable configured in your files. That alone might explain it, given:
Error: Must specify a primary resource (JAR or Python or R file)
Second, make sure your port 7770 is open and available to your JVM.
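Both points can be checked quickly from Python (a minimal sketch; it only verifies that a java executable is on the PATH and that nothing else is already listening on port 7770, nothing more):
import socket
import subprocess

# 1. Is a JVM visible? This raises an error if 'java' is not on the PATH.
subprocess.call(['java', '-version'])

# 2. Is port 7770 free? bind() failing means something already uses it.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(('localhost', 7770))
    print('port 7770 is free')
except socket.error as exc:
    print('port 7770 is in use:', exc)
finally:
    s.close()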