Python3 - runpy.run_path not working on Ubuntu 10.10

I have code like this:
import runpy
runpy.run_path('other.py', globals())
It works on my Windows box with Python 3.2 but fails with the default Python 3 installation (from the repository) on my Ubuntu 10.10 machine with this message:
Traceback (most recent call last):
File "/home/markus/Documents/projects/BlenderSerialize/generate.py", line 2, in <module>
runpy.run_path('other.py', globals())
AttributeError: 'module' object has no attribute 'run_path'
I checked the documentation and it says that run_path was introduced in Python 2.7. What do I have to do to make this work?

runpy.run_path() was introduced in Python 2.7 and 3.2, so it is not available in Python 3.0 or 3.1 (Ubuntu 10.10's repository ships Python 3.1). To make it work, use Python 2.7 or 3.2.
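If the script has to run on several interpreters, a version guard makes the requirement explicit. A minimal sketch (note that run_path returns the executed module's globals rather than modifying the caller's):
import sys
import runpy

if sys.version_info >= (3, 2) or (2, 7) <= sys.version_info < (3, 0):
    # init_globals seeds the new namespace; the returned dict holds
    # the globals of 'other.py' after it has run.
    result = runpy.run_path('other.py', init_globals=globals())
else:
    raise RuntimeError("runpy.run_path() requires Python 2.7 or 3.2+")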

When updating Python 3 is not an option, there is a workaround that allows one to execute a Python script.
The following Python code works quite well for me (note that exec() takes the globals mapping before the locals mapping):
exec(compile(open("somefile.py").read(), "somefile.py", 'exec'), global_vars, local_vars)
Other examples can be found in What is an alternative to execfile in Python 3?
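Wrapped in a small reusable helper; the function name exec_path is hypothetical, chosen for this sketch:
def exec_path(path, global_vars=None, local_vars=None):
    # Compile with the real filename so tracebacks point at the
    # right file, then execute in the given namespaces.
    with open(path) as f:
        source = f.read()
    code = compile(source, path, 'exec')
    exec(code, global_vars if global_vars is not None else {}, local_vars)

# Roughly equivalent in effect to runpy.run_path('other.py', globals()):
exec_path('other.py', globals())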

Related

How to solve AttributeError: 'module' object has no attribute 'ABC'

I am a bit new to Linux, and I am having an issue with my recently installed package Frida. It worked fine until yesterday, when I installed fridump as well.
When I try to use Frida I get this:
Traceback (most recent call last):
File "/usr/local/bin/frida", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/frida_tools/repl.py", line 34, in main
from frida_tools import _repl_magic
File "/usr/local/lib/python2.7/dist-packages/frida_tools/_repl_magic.py", line 7, in <module>
class Magic(abc.ABC):
AttributeError: 'module' object has no attribute 'ABC'
I have Python 3.9.10 installed, but I guess it has to do with frida-tools being installed under Python 2.7 while using newer syntax. I don't know how to fix it. Any suggestions?
I have no experience with either tool, but after a bit of searching, my suggestion is to uninstall (remove) fridump and frida:
pip uninstall <packagename>
In your case, frida is the package name.
Then use a different virtual environment for each tool:
virtualenv -p /usr/bin/python3 py3env
virtualenv -p /usr/bin/python py2env
To learn more about virtual environments, see the virtualenv documentation.
That way each environment works separately, without version conflicts, and it will make it easier later on to install any additional tools each package needs.
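The traceback itself points at the root cause: abc.ABC only exists on Python 3.4+, and the failing file lives under /usr/local/lib/python2.7/, so frida-tools was installed into Python 2.7. A quick check you can run in each environment (standard library only):
import sys
import abc

# abc.ABC was added in Python 3.4; on Python 2.7 this prints False,
# which is exactly why 'class Magic(abc.ABC)' raises AttributeError.
print(sys.version)
print(hasattr(abc, 'ABC'))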

How to fix 'TypeError: an integer is required (got type bytes)' error when trying to run pyspark after installing spark 2.4.4

I've installed OpenJDK 13.0.1, Python 3.8, and Spark 2.4.4. The instructions for testing the install say to run .\bin\pyspark from the root of the Spark installation. I'm not sure if I missed a step in the Spark installation, like setting some environment variable, but I can't find any further detailed instructions.
I can run the python interpreter on my machine, so I'm confident that it is installed correctly and running "java -version" gives me the expected response, so I don't think the problem is with either of those.
I get a stack trace of errors from cloudpickle.py:
Traceback (most recent call last):
File "C:\software\spark-2.4.4-bin-hadoop2.7\bin\..\python\pyspark\shell.py", line 31, in <module>
from pyspark import SparkConf
File "C:\software\spark-2.4.4-bin-hadoop2.7\python\pyspark\__init__.py", line 51, in <module>
from pyspark.context import SparkContext
File "C:\software\spark-2.4.4-bin-hadoop2.7\python\pyspark\context.py", line 31, in <module>
from pyspark import accumulators
File "C:\software\spark-2.4.4-bin-hadoop2.7\python\pyspark\accumulators.py", line 97, in <module>
from pyspark.serializers import read_int, PickleSerializer
File "C:\software\spark-2.4.4-bin-hadoop2.7\python\pyspark\serializers.py", line 71, in <module>
from pyspark import cloudpickle
File "C:\software\spark-2.4.4-bin-hadoop2.7\python\pyspark\cloudpickle.py", line 145, in <module>
_cell_set_template_code = _make_cell_set_template_code()
File "C:\software\spark-2.4.4-bin-hadoop2.7\python\pyspark\cloudpickle.py", line 126, in _make_cell_set_template_code
return types.CodeType(
TypeError: an integer is required (got type bytes)
This is happening because you're using Python 3.8. The latest pip release of pyspark (pyspark 2.4.4 at the time of writing) doesn't support Python 3.8. Downgrade to Python 3.7 for now, and you should be fine.
It's a Python and pyspark version mismatch, as John rightly pointed out.
For a newer Python version you can try:
pip install --upgrade pyspark
That will update the package, if one is available. If this doesn't help, then you might have to downgrade to a compatible version of Python.
The pyspark package doc clearly states:
NOTE: If you are using this with a Spark standalone cluster you must ensure that the version (including minor version) matches or you may experience odd errors.
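A quick way to check both sides of that requirement from Python (a sketch; run it in an environment where pyspark imports at all, since with Python 3.8 and pyspark 2.4.x the import itself fails with the TypeError above):
import sys
import pyspark

# Both versions matter: pyspark 2.4.x supports Python up to 3.7, and
# the package version should match the Spark cluster version,
# including the minor version.
print(sys.version_info)
print(pyspark.__version__)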
As a dirty workaround, one can replace _cell_set_template_code with the Python 3-only implementation suggested by the docstring of the _make_cell_set_template_code function (the root cause is that Python 3.8 added a posonlyargcount argument to the types.CodeType constructor, so cloudpickle's positional call no longer lines up):
Notes
-----
In Python 3, we could use an easier function:

.. code-block:: python

    def f():
        cell = None

        def _stub(value):
            nonlocal cell
            cell = value

        return _stub

    _cell_set_template_code = f()
Here is a patch for spark v2.4.5: https://gist.github.com/ei-grad/d311d0f34b60ebef96841a3a39103622
Apply it by:
git apply <(curl https://gist.githubusercontent.com/ei-grad/d311d0f34b60ebef96841a3a39103622/raw)
This fixes the problem with ./bin/pyspark, but ./bin/spark-submit uses the bundled pyspark.zip with its own copy of cloudpickle.py. And even if it were fixed there, it still wouldn't work: it would fail with the same error while unpickling some object in pyspark/serializers.py.
But it looks like Python 3.8 support has already arrived in Spark v3.0.0-preview2, so one can try that. Or stick to Python 3.7, as the accepted answer suggests.
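For reference, the signature change behind the TypeError can be seen from Python itself; a short standard-library sketch (Python 3.8+ only):
import sys

def probe():
    pass

if sys.version_info >= (3, 8):
    # Python 3.8 inserted posonlyargcount as the second positional
    # argument of types.CodeType, shifting all later arguments; that
    # is why cloudpickle's positional call breaks. Since 3.8, the
    # version-proof way to build a modified code object is replace():
    print(probe.__code__.replace(co_name='probe_renamed').co_name)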
The problem with Python 3.8 has been resolved in the most recent versions. I got this error because my scikit-learn version was very outdated;
pip install scikit-learn --upgrade
solved the problem.
Make sure to use the right versions of Java, Python, and Spark.
I got the same error caused by an outdated Spark version (Spark 2.4.7).
Downloading the latest Spark 3.0.1, alongside Python 3.8 (as part of Anaconda3 2020.07) and Java JDK 8, solved the problem for me!
As of today (21 Oct 2022), I can confirm that, on Anaconda,
python 3.8.13
pyspark 3.1.2
work together.
I had the same issue earlier: all I had to do was run
conda update --all
and wait.
Try installing the latest version of PyInstaller, which is compatible with Python 3.8, using this command:
pip install https://github.com/pyinstaller/pyinstaller/archive/develop.tar.gz
reference:
https://github.com/pyinstaller/pyinstaller/issues/4265

PYD file not found

I have installed Python 3.7 on my system and I am running one simple Python file which imports a .pyd file. When running the script I get an error like:
Traceback (most recent call last):
File "demoTemp\run.py", line 1, in
from pyddemo import fun
ModuleNotFoundError: No module named 'pyddemo'
pyddemo is the .pyd file.
Does the .pyd file have any dependencies?
Thanks
I found the solution to my problem: it was just a mismatch between the Python version with which I created the .pyd and the Python version in which I was using it.
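For background: a .pyd is a CPython extension module (a Windows DLL), and the interpreter only imports extensions built for a matching CPython version/ABI. You can see which filename suffixes your interpreter accepts using only the standard library:
import sys
import importlib.machinery

# Extension modules are only importable when their filename ends with
# one of these suffixes, which encode the CPython version and ABI,
# e.g. '.cp37-win_amd64.pyd' on CPython 3.7 for 64-bit Windows.
print(sys.version_info)
print(importlib.machinery.EXTENSION_SUFFIXES)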

python module not found error

This question has been asked a few times, but the remedy appears to be complicated enough that I'm still searching for a solution specific to my setup.
I recently installed the Quandl module using the pip command. Even after a successful installation, my Python IDLE is still showing:
Traceback (most recent call last):
File "F:\Python36\Machine Learning\01.py", line 3, in <module>
import Quandl
ModuleNotFoundError: No module named 'Quandl'
I have used the import statement in my code.
I am using Python version 3.6.1.
I am working on a Windows 10 desktop.
I have also tried re-installing the module.
Better to navigate to your Python Scripts folder, open a command window there, and try pip3 install quandl. Note that recent versions of the package are imported as quandl (all lowercase), not Quandl, so the case of the import name may be the issue. Hope this helps.
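A quick way to confirm which interpreter you are running and under which (case-sensitive) name the module is importable, using only the standard library:
import sys
import importlib.util

# Module names are case-sensitive; check both spellings and see
# where (or whether) each one resolves for this interpreter.
print(sys.executable)
for name in ('Quandl', 'quandl'):
    spec = importlib.util.find_spec(name)
    print(name, '->', spec.origin if spec else 'not found')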

Python 3.4 invalid syntax in setup.py on Ubuntu 14.04, why?

I have a bug (issue #14 on GitHub) in my Python project rma. Installing it through pip 1.5.4 with Python 3.4, some users got an error like this:
Downloading/unpacking rma
Downloading rma-0.1.5.tar.gz
Running setup.py (path:/tmp/pip_build_root/rma/setup.py) egg_info for package rma
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pip_build_root/rma/setup.py", line 47
setup(**sdict, install_requires=['redis', 'tabulate', 'tqdm', 'msgpack-python'])
^
SyntaxError: invalid syntax
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pip_build_root/rma/setup.py", line 47
setup(**sdict, install_requires=['redis', 'tabulate', 'tqdm', 'msgpack-python'])
^
SyntaxError: invalid syntax
----------------------------------------
My own pip version is 8.0.2 (Python is 3.5).
I'm a newbie in Python, sorry if this is a well-known issue. I want to know: should I find a way to fix it (if this is my issue), or just recommend that my users update pip?
That package won't install on any Python version < 3.5, because the syntax is indeed invalid on anything but Python 3.5 and newer.
On those versions you can't put the **kwargs expansion in front of other keyword arguments. The two should be swapped:
setup(install_requires=['redis', 'tabulate', 'tqdm', 'msgpack-python'], **sdict)
Reporting this as a bug was the correct thing to do; the package states it supports Python 3.4 and up.
Python 3.5 added support for an arbitrary number of *args and **kwargs expansions through PEP 448, which opened the door for the original ordering to work too.
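A minimal illustration of the difference (the sdict contents here are hypothetical stand-ins for the real project metadata):
sdict = {'name': 'rma', 'version': '0.1.5'}  # hypothetical stand-in values

def setup(**kwargs):
    # Stand-in for setuptools.setup(), just to show the call syntax.
    print(sorted(kwargs))

# Valid on Python 2.7 and every 3.x: the ** expansion comes last.
setup(install_requires=['redis', 'tabulate'], **sdict)

# Valid only on Python 3.5+ (PEP 448): keyword arguments may follow a
# ** expansion. On Python 3.4 this line is a SyntaxError:
# setup(**sdict, install_requires=['redis', 'tabulate'])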