On Red Hat 4.4.7-18 I am trying to run Python 3 code that uses sqlite3, but I get the following import error:
Traceback (most recent call last):
File "database.py", line 7, in <module>
import sqlite3
File "/usr/local/lib/python3.6/sqlite3/__init__.py", line 23, in <module>
from sqlite3.dbapi2 import *
File "/usr/local/lib/python3.6/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ModuleNotFoundError: No module named '_sqlite3'
I tried to install it:
>sudo pip install sqlite3
Collecting sqlite3
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProtocolError('Connection aborted.', error(101, 'Network is unreachable'))': /simple/sqlite3/
(even though the network is reachable...), and also with the following command:
> sudo yum install sqlite-devel
Loaded plugins: post-transaction-actions, product-id, refresh-packagekit,
: rhnplugin, search-disabled-repos, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Package sqlite-devel-3.6.20-1.el6_7.2.x86_64 already installed and latest version
Nothing to do
So is it installed or not? Any suggestions on how I can solve the original problem?
Not a direct answer, but I ended up here with my search engine, so for my fellow web-surfers:
I had a similar issue, but on Ubuntu 16.04 with a manually compiled Python 3.6:
from _sqlite3 import *
ModuleNotFoundError: No module named '_sqlite3'
I had to install libsqlite3-dev (sudo apt install libsqlite3-dev) and then recompile Python 3.6 from scratch to make it work.
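In case it helps, here is a minimal sketch of that sequence, assuming the CPython 3.6 source tree you originally built from is still unpacked (the path and exact 3.6.x version below are placeholders):
# Install the SQLite headers first, then rebuild Python so the _sqlite3
# extension module gets compiled this time around.
sudo apt install libsqlite3-dev
cd ~/Python-3.6.15              # your unpacked CPython source directory (placeholder)
./configure
make -j"$(nproc)"
sudo make install               # or: sudo make altinstall, to avoid overwriting another python3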
Yep.
sudo yum install sqlite-devel
Followed by a rebuild of Python 3.8.3 from source did the trick. Thanks!
I had this issue on Linux Mint 20 after sqlite3 was installed successfully:
from _sqlite3 import *
ModuleNotFoundError: No module named '_sqlite3'
I also could not import sqlite3 in the Python interpreter.
Fix:
sudo apt install libsqlite3-dev
cd <your Python source directory>
./configure
make
sudo make install
This happens when you install Python from source code and compile it yourself.
People often install Python and SQLite under separate prefixes, which leaves the SQLite library invisible to the Python build.
To solve this problem, use a common folder to install BOTH Python and SQLite, so that the python binary and the sqlite binary reside in the same bin folder.
The example script below does exactly that:
INSTALL_BASE_PATH="/home/<user>/<tools>"
cd ~
mkdir build
cd build
[ -f Python-3.9.10.tgz ] || wget --no-check-certificate https://www.python.org/ftp/python/3.9.10/Python-3.9.10.tgz
tar -zxvf Python-3.9.10.tgz
[ -f sqlite-autoconf-3380000.tar.gz ] || wget --no-check-certificate https://www.sqlite.org/2022/sqlite-autoconf-3380000.tar.gz
tar -zxvf sqlite-autoconf-3380000.tar.gz
cd sqlite-autoconf-3380000
./configure --prefix=${INSTALL_BASE_PATH}
make
make install
cd ../Python-3.9.10
LDFLAGS="-L${INSTALL_BASE_PATH}/lib" \
CPPFLAGS="-I${INSTALL_BASE_PATH}/include" \
LD_RUN_PATH=${INSTALL_BASE_PATH}/lib \
./configure --prefix=${INSTALL_BASE_PATH}
LD_RUN_PATH=${INSTALL_BASE_PATH}/lib make
make install
cd ~
LINE_TO_ADD="export PATH=${INSTALL_BASE_PATH}/bin:\$PATH"
if ! grep -qF "${LINE_TO_ADD}" $HOME/.bashrc; then echo "${LINE_TO_ADD}" >> $HOME/.bashrc; fi
LINE_TO_ADD="export LD_LIBRARY_PATH=${INSTALL_BASE_PATH}/lib"
if ! grep -qF "${LINE_TO_ADD}" $HOME/.bashrc; then echo "${LINE_TO_ADD}" >> $HOME/.bashrc; fi
source $HOME/.bashrc
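After running the script (and re-sourcing ~/.bashrc), a quick sanity check confirms the new interpreter picked up the freshly built SQLite; this is just a sketch, and the version printed should match the 3.38.0 tarball built above:
which python3                                               # should resolve under ${INSTALL_BASE_PATH}/bin
python3 -c "import sqlite3; print(sqlite3.sqlite_version)"  # should print 3.38.0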
I use Ubuntu 20.04 and Python 3.10, but when I run:
sudo apt-get update or sudo apt update
my terminal shows me this error:
Traceback (most recent call last):
File "/usr/lib/cnf-update-db", line 8, in <module>
from CommandNotFound.db.creator import DbCreator
File "/usr/lib/python3/dist-packages/CommandNotFound/db/creator.py", line 12, in <module>
import apt_pkg
ModuleNotFoundError: No module named 'apt_pkg'
Reading package lists... Done
E: Problem executing scripts APT::Update::Post-Invoke-Success 'if /usr/bin/test -w /var/lib/command-not-found/ -a -e /usr/lib/cnf-update-db; then /usr/lib/cnf-update-db > /dev/null; fi'
E: Sub-process returned an error code
When I run sudo apt-get upgrade, no problems happen.
Note that my python3.8 and python3.10 both run normally.
I don't know what this is.
I ran into the same problem today and did some searching. Per https://www.linuxquestions.org/questions/linux-software-2/errors-trying-to-run-apt-get-update-4175665017/#3, my problem was finally fixed by the following:
cd /usr/lib/python3/dist-packages
ls -la apt_pkg.*
If apt_pkg.so is not there, copy the existing build to that name:
sudo cp apt_pkg.cpython-38-x86_64-linux-gnu.so apt_pkg.so
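A symlink works just as well as a copy (the answer further down does the same thing) and saves you from hard-coding the 38 in the filename. A sketch, assuming exactly one apt_pkg build is present as in the listing above:
cd /usr/lib/python3/dist-packages
sudo ln -sf "$(ls apt_pkg.cpython-*-x86_64-linux-gnu.so | head -n 1)" apt_pkg.so
sudo apt update    # should now run without the apt_pkg traceback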
I am using EC2 Ubuntu 18.04 VM.
Due to CVE-2021-3177, Python needs to be upgraded to the latest version of Python 3.9, which is currently 3.9.9.
I did that using the deadsnakes option as per the steps mentioned below:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install python3.9
sudo apt-get update
sudo apt upgrade -y
The above ensures that Python 3.9.9 is now available. But now both python3.6 and python3.9 are available. So next we will use the update-alternatives command to make python3.9 the default version.
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 2
Now that the alternatives are defined, we will switch to option 2, i.e. Python 3.9, as the default:
sudo update-alternatives --config python3
Once done, the following command would point to the latest version.
sudo python3 -V
However, if you use the sudo apt update command, you will see an error like this:
Traceback (most recent call last):
File "/usr/lib/cnf-update-db", line 8, in <module>
from CommandNotFound.db.creator import DbCreator
File "/usr/lib/python3/dist-packages/CommandNotFound/db/creator.py", line 11, in <module>
import apt_pkg
ModuleNotFoundError: No module named 'apt_pkg'
Reading package lists... Done
E: Problem executing scripts APT::Update::Post-Invoke-Success 'if /usr/bin/test -w /var/lib/command-not-found/ -a -e /usr/lib/cnf-update-db; then /usr/lib/cnf-update-db > /dev/null; fi'
E: Sub-process returned an error code
To fix this, we will have to add a link using the following commands:
cd /usr/lib/python3/dist-packages/
sudo ln -s apt_pkg.cpython-{36m,39m}-x86_64-linux-gnu.so
The following is optional; I tried both with and without these commands:
apt purge python3-apt
apt install python3-apt
sudo apt install python3.9-distutils python3.9-dev
Once done, the following command will no longer result in any errors:
sudo apt update
This means that the issue is fixed.
But for some reason, I cannot connect to the machine afterwards; and if I create an AMI from it, I cannot connect to the launched instance using PuTTY or SCP either.
The same issue persists with Ubuntu 20.x too.
I appreciate your help.
After upgrading Python, the following Python modules that cloud-init depends on break. This in turn prevents EC2 from correctly configuring your newly booted EC2 instance using cloud-init, which is why it is inaccessible:
setuptools
urllib3
requests
jinja2
netifaces
You can debug this issue by going to your EC2 instance in the AWS Web Console and clicking:
Actions -> Monitor and troubleshoot -> Get system log
Sometimes it takes a while to update, so click the refresh button until your logs appear. It is easier to read the logs if you download them. This is what helped me solve the issues that I was having.
The following steps resolved the issue for me on Ubuntu 18.04 LTS:
For Ubuntu 20.04 LTS, change the 36m in the symbolic links to 38.
# Add deadsnakes ppa repository
sudo add-apt-repository ppa:deadsnakes/ppa
# Install new python version
sudo apt update
sudo apt install python3.10
# Fix broken apt_inst after python upgrade
sudo ln -s /usr/lib/python3/dist-packages/apt_inst.cpython-36m-x86_64-linux-gnu.so /usr/lib/python3/dist-packages/apt_inst.so
# Fix broken apt_pkg after python upgrade
sudo ln -s /usr/lib/python3/dist-packages/apt_pkg.cpython-36m-x86_64-linux-gnu.so /usr/lib/python3/dist-packages/apt_pkg.so
# Make installed python version an alternative with a priority of 2
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 2
# Make upgraded python version an alternative with a priority of 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 1
# Reinstall python3-apt
sudo apt remove --purge python3-apt
sudo apt autoclean
sudo apt install python3-apt
# Install required packages
sudo apt install \
build-essential \
python3.10-distutils \
python3.10-venv \
libpython3.10-dev
# Install latest pip
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
sudo python3.10 get-pip.py
# Upgrade outdated python libraries that break cloud-init
sudo -i
pip3 install --upgrade setuptools
pip3 install --upgrade urllib3
pip3 install --upgrade requests
pip3 install --upgrade jinja2
pip3 install --upgrade netifaces
pip3 install --upgrade --ignore-installed pyyaml
exit
# Upgrade cloud-init to latest version
sudo apt install --only-upgrade cloud-init
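Before baking an AMI, a few quick checks help confirm that apt, the interpreter alternatives, and cloud-init are healthy again; this is only a sketch and none of it is mandatory:
sudo apt update            # should complete without the apt_pkg traceback
python3 -V                 # shows whichever update-alternatives option is currently selected
cloud-init --version       # confirms cloud-init is still installed
sudo cloud-init status     # reports the result of the last cloud-init run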
If you use Ansible, it is also affected by the upgrade.
Ansible can be fixed as follows:
Edit /usr/lib/python3/dist-packages/apt/package.py and change the following line:
from collections import Mapping, Sequence
to:
from collections.abc import Mapping, Sequence
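If you want to script that edit, a hedged sed sketch (it keeps a .bak backup and assumes the import line matches exactly as written above):
sudo sed -i.bak 's/from collections import Mapping, Sequence/from collections.abc import Mapping, Sequence/' \
    /usr/lib/python3/dist-packages/apt/package.py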
It would be useful if the deadsnakes repository could provide an update for python3-apt (e.g. python3.10-apt) to solve this issue.
Reference:
https://cloudbytes.dev/snippets/upgrade-python-to-latest-version-on-ubuntu-linux
I accidentally deleted the directory ~/.local/lib/python3.5/site-packages/pip in my Ubuntu 16.04 installation on WSL.
After this I get errors when running pip3, like:
File "/home/harper/.local/bin/pip", line 7, in <module>
from pip._internal.cli.main import main
File "/home/harper/.local/lib/python3.5/site-packages/pip/_internal/cli/main.py", line 60
sys.stderr.write(f"ERROR: {exc}")
I tried to repair this by uninstalling pip3 and python3 and then reinstalling both packages with apt:
sudo apt update
sudo apt upgrade
sudo apt uninstall python3-pip python3
sudo apt install python3 python3-pip
But the command pip3 --version still shows the error message above.
What is the best way to do a clean reinstall of Python 3? Is there any cache, perhaps the ~/.local/lib/python3.5/site-packages/__pycache__ directory, that should be cleared or erased?
You still have the old pip script in your $PATH, which apt cannot override. Remove /home/harper/.local/bin/pip and rehash your $PATH with hash -r.
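Concretely, something like the following sketch; the get-pip.py bootstrap at the end is optional and is the same script used in the EC2 answer above, not something specific to this question (very old Pythons such as 3.5 need the version-pinned get-pip.py from bootstrap.pypa.io instead):
rm -f ~/.local/bin/pip ~/.local/bin/pip3   # remove the stale launchers pointing at the deleted package
hash -r                                    # make the shell forget its cached command locations
pip3 --version                             # should now resolve to the apt-installed /usr/bin/pip3
# Optional: reinstall a user-level pip
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py --user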
I'm getting a caffe import error even after installing it successfully using the command sudo apt install caffe-cpu. I was able to find the caffe package at /usr/lib/python3/dist-packages/caffe (the path was added to PYTHONPATH). All requirements mentioned in the requirements.txt file of the caffe directory were also installed.
I'm using Ubuntu 18.04 LTS, Python 3.
Could anyone help me with this error?
import caffe
Traceback (most recent call last):
File "6_reconstruct_alphabet_image.py", line 17, in <module>
import caffe
File "/usr/lib/python3/dist-packages/caffe/__init__.py", line 1, in <module>
from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver, NCCL, Timer
File "/usr/lib/python3/dist-packages/caffe/pycaffe.py", line 15, in <module>
import caffe.io
File "/usr/lib/python3/dist-packages/caffe/io.py", line 2, in <module>
import skimage.io
File "/usr/lib/python3/dist-packages/skimage/__init__.py", line 158, in <module>
from .util.dtype import *
File "/usr/lib/python3/dist-packages/skimage/util/__init__.py", line 7, in <module>
from .arraycrop import crop
File "/usr/lib/python3/dist-packages/skimage/util/arraycrop.py", line 8, in <module>
from numpy.lib.arraypad import _validate_lengths
ImportError: cannot import name '_validate_lengths'
Problem solved: the error came up because the Caffe build wasn't done successfully. I recommend not going with the sudo apt install caffe-cpu command (which is mentioned in the official Caffe installation guide for Ubuntu), because it ends up in the error above. It's better to install from source.
Here is step-by-step guidance for installing Caffe successfully on Ubuntu 18.04 LTS:
1] sudo apt-get install -y --no-install-recommends libboost-all-dev
2] sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler
3] git clone https://github.com/BVLC/caffe
cd caffe
cp Makefile.config.example Makefile.config
4] sudo pip install scikit-image protobuf
cd python
for req in $(cat requirements.txt); do sudo pip install $req; done
5] Modify the Makefile.config file:
Uncomment the line CPU_ONLY := 1, and the line OPENCV_VERSION := 3.
6] Find the LIBRARIES line in the Makefile and change it as follows:
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5 \
opencv_core opencv_highgui opencv_imgproc opencv_imgcodecs
7] make all
Now you may get an error like this:
CXX src/caffe/net.cpp
src/caffe/net.cpp:8:18: fatal error: hdf5.h: No such file or directory
compilation terminated.
Makefile:575: recipe for target '.build_release/src/caffe/net.o' failed
make: *** [.build_release/src/caffe/net.o] Error 1
To solve this error follow step 8.
8] Install libhdf5-dev (sudo apt-get install libhdf5-dev).
Open Makefile.config, locate the line containing LIBRARY_DIRS and append /usr/lib/x86_64-linux-gnu/hdf5/serial,
then locate INCLUDE_DIRS and append /usr/include/hdf5/serial/ (per this SO answer); a sed sketch of these two edits follows this step.
rerun make all
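For convenience, the two Makefile.config edits from step 8 as a sed sketch, assuming the stock INCLUDE_DIRS / LIBRARY_DIRS lines from Makefile.config.example:
# Append the HDF5 serial paths to the existing include/library search paths
sed -i 's|^INCLUDE_DIRS :=.*|& /usr/include/hdf5/serial/|' Makefile.config
sed -i 's|^LIBRARY_DIRS :=.*|& /usr/lib/x86_64-linux-gnu/hdf5/serial|' Makefile.config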
9] make test
10] make runtest
11] make pycaffe
Now you may get an error like this:
CXX/LD -o python/caffe/_caffe.so python/caffe/_caffe.cpp
python/caffe/_caffe.cpp:10:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
Makefile:501: recipe for target 'python/caffe/_caffe.so' failed
make: *** [python/caffe/_caffe.so] Error 1
To solve this error follow step 12.
12] Find the PYTHON_INCLUDE lines in Makefile.config and change them as follows:
`PYTHON_INCLUDE := /usr/include/python2.7 \
/usr/local/lib/python2.7/dist-packages/numpy/core/include`
13] Add the module directory to our $PYTHONPATH by adding this line to the end of ~/.bashrc file:
`sudo vim ~/.bashrc`
`export PYTHONPATH=$HOME/Downloads/caffe/python:$PYTHONPATH`
`source ~/.bashrc`
14] Done.
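A quick check that the build and the PYTHONPATH export are in order (just a sketch):
python -c "import caffe; print('pycaffe imported OK')"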
This can happen if an older version of one of the packages is installed. In particular, judging from the traceback, the main culprit is an old scikit-image release, which imports the private _validate_lengths helper that newer numpy versions have removed. Just upgrade the package:
for Python 2.7 : pip install --upgrade scikit-image
and
for Python 3.x : pip3 install --upgrade scikit-image
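As a quick check afterwards (a sketch; the version printed is simply whatever pip resolved):
python3 -c "import skimage; print(skimage.__version__)"   # imports cleanly; no more _validate_lengths error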
Installation of Caffe on Ubuntu 18.04 in CPU mode.
conda update conda
conda create -n testcaffe python=3.5
source activate testcaffe
conda install -c menpo opencv3
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y build-essential cmake git pkg-config
sudo apt-get install -y libprotobuf-dev libleveldb-dev libsnappy-dev protobuf-compiler
sudo apt-get install -y libatlas-base-dev
sudo apt-get install -y --no-install-recommends libboost-all-dev
sudo apt-get install -y libgflags-dev libgoogle-glog-dev liblmdb-dev
# Clone Caffe and build it with CMake in CPU-only mode
git clone https://github.com/BVLC/caffe
cd caffe
mkdir build
cd build
cmake -DCPU_ONLY=1 ..
make -j8
make install
conda install cython scikit-image ipython h5py nose pandas protobuf pyyaml jupyter
cd ../python
sed -i -e 's/python-dateutil>=1.4,<2/python-dateutil>=2.0/g' requirements.txt
for req in $(cat requirements.txt); do pip install $req; done
export PYTHONPATH=$(pwd)${PYTHONPATH:+:${PYTHONPATH}}
python -c "import caffe; print(caffe.__version__)"
jupyter notebook
Those are the complete steps; just run these commands as they are and Caffe will be installed on the system.
Hi, I am trying to install nodejs on Ubuntu 14 but I am getting the following error:
E: Cannot get debconf version. Is debconf installed?
debconf: apt-extracttemplates failed: No such file or directory
Extracting templates from packages: 62%E: Cannot get debconf version. Is debconf installed?
debconf: apt-extracttemplates failed: No such file or directory
Extracting templates from packages: 100%
dpkg: cannot scan updates directory `/var/lib/dpkg/updates/': No such file or directory
E: Sub-process /usr/bin/dpkg returned an error code (2)
When I try to install with Software Updater, I get the following error:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/aptdaemon/worker.py", line 300, in _process_transaction
not self.is_dpkg_journal_clean()):
File "/usr/lib/python3/dist-packages/aptdaemon/worker.py", line 1111, in is_dpkg_journal_clean
for dentry in os.listdir(status_updates):
FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/dpkg/updates/'
I had this problem previously; I removed any variables equivalent to zero and that solved it. Perhaps try that.
Add the Node.js-maintained repositories to your Ubuntu package source list with this command:
curl -sL https://deb.nodesource.com/setup | sudo bash -
Then install Node.js with apt-get:
sudo apt-get install nodejs
Optionally, we can create a symbolic link so that the node command points to the installed nodejs binary:
sudo ln -s /usr/bin/nodejs /usr/bin/node
Using this install option, we end up with newer versions of Node.js and npm, which you can check with:
$ node -v
$ npm -v
I suggest installing nvm (node version manager)
https://github.com/creationix/nvm
It allows you to install and use ANY Node version, which may be a better fit because the Ubuntu package is bound to one specific version. In web development, different projects often target different Node versions, so being able to switch between them can be necessary.
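A short usage sketch (the install-script URL and version tag are taken from the nvm README and may have changed; check the linked repo for the current one):
# Install nvm (v0.39.7 is an assumption; use the tag from the repo's README)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.nvm/nvm.sh        # or open a new shell so the nvm function is loaded
# Install and switch between Node versions per project
nvm install 18              # install the latest Node.js 18.x
nvm install 20              # install the latest Node.js 20.x
nvm use 18                  # switch this shell to Node 18
node -v                     # verify the active version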