The idea is to build a distroless Docker image; the available Google distroless Python 3 image, gcr.io/distroless/python3, ships Python 3.7. Our code is currently built and running on Python 3.5, and we need to upgrade to 3.7 so that we can get rid of the library compatibility issues and make use of the distroless image with version 3.7. Some questions:
Will the version upgrade cause any issues with the existing code?
Do we need to update all the versions in requirements.txt for Python 3.7?
If yes, will there be an impact on the application?
Python does not guarantee backward compatibility between versions. My recommendation is to run your code in a virtual environment with the new version of Python and test it there. If you do not want to use a virtual environment, you could create a new Docker image with the target Python version and test your app in it. Regarding requirements.txt: without seeing the libraries and packages listed there, it is impossible to say whether you need to change the file.
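To make the recommendation above concrete, here is a minimal sketch using only the standard library. The helper name run_in_fresh_env and the pytest invocation are illustrative assumptions, not part of the original answer:

```python
import subprocess
import tempfile
import venv
from pathlib import Path

def run_in_fresh_env(commands, requirements=None):
    """Create a throwaway virtual environment, optionally install pinned
    dependencies, then run each command with that environment's interpreter."""
    env_dir = Path(tempfile.mkdtemp()) / "env"
    venv.create(env_dir, with_pip=requirements is not None)
    py = env_dir / "bin" / "python"  # "Scripts/python.exe" on Windows
    if requirements is not None:
        subprocess.run([str(py), "-m", "pip", "install", "-r", requirements],
                       check=True)
    for cmd in commands:
        subprocess.run([str(py), *cmd], check=True)
    return py
```

With a venv created from the target Python (e.g. 3.7), you would call run_in_fresh_env([["-m", "pytest"]], requirements="requirements.txt") to exercise the test suite under the new interpreter.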
Related
I have a production server with Ubuntu 20.04 and Python v3.8.10.
My development environment is Ubuntu 22.04 with Python v3.10.6.
Both environments make use of a virtual environment when running python code.
Thus far the differences between Python 3.8 and 3.10 have not necessitated that I match my environments exactly. However, I am now forced to use a poorly maintained Python library in my code (provided by a payment gateway), and I encountered version-specific bugs with the library immediately after installing it. Needless to say, I am not confident that if I get it working in the development environment, it will work in the production environment as well. I need to be sure it works in production the first time.
I believe I have managed to successfully install Python 3.8 from source on my development system (without uninstalling Python 3.10). I managed to set up the virtualenv with the older version of Python successfully as well. The installation of requirements, however, fails with /usr/bin/ld: cannot find -lboost_python: No such file or directory during the installation of the pgmagick package. My understanding of the error is that I am missing the libboost_python-dev package for Python v3.8.
How can I install the older libboost_python-dev package on Ubuntu 22.04?
Since yesterday I have been getting an error in my application. It was working fine earlier; some upgrade on the cloud side broke the application.
This is the error
Error : module 'typing' has no attribute '_ClassVar'
My Python environment is 3.7 and my Django version is 2.1.3.
Please check the steps below to see if they fix the issue:
Try downgrading the Python version to 3.6 and check.
Alternatively, when staying on Python 3.7, run pip uninstall dataclasses. The dataclasses package on PyPI is a backport for Python 3.6; on 3.7 it shadows the standard-library module and references typing._ClassVar, which no longer exists there.
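One quick way to confirm this diagnosis, as a diagnostic sketch (not part of the original answer), is to check where the imported dataclasses module actually lives:

```python
import os
import sysconfig
import dataclasses

# Resolve where the imported dataclasses module actually comes from.  On a
# broken Python 3.7 setup it resolves to site-packages (the PyPI backport)
# rather than the interpreter's standard library.
stdlib_dir = os.path.realpath(sysconfig.get_paths()["stdlib"])
module_path = os.path.realpath(dataclasses.__file__)
is_backport = not module_path.startswith(stdlib_dir)
print("backport" if is_backport else "stdlib", module_path)
```

If this prints "backport", pip uninstall dataclasses should restore the standard-library module.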
As Charlie V said:
Check the version of Python in your App Service versus the version you use to develop. Check the requirements file and compare it with your development environment.
I had a similar problem in the past. The first thing is to determine whether your application is actually using this module. If yes, make sure you have it in your requirements.txt. If not, refer to the logs to find which Python file is referencing it and remove the reference.
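The "is the application actually using this module" check can be sketched with the standard library's ast module; the helper name files_importing is made up for illustration:

```python
import ast
from pathlib import Path

def files_importing(module, root):
    """Return the .py files under `root` whose imports mention `module`."""
    hits = []
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            names = []
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            # Compare only the top-level package name (e.g. "typing" in
            # "typing.io") against the module we are looking for.
            if any(n.split(".")[0] == module for n in names):
                hits.append(path)
                break
    return hits
```

Running files_importing("dataclasses", ".") over the project would list every file that pulls the module in, directly or via a from-import.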
I have two virtual environments. One has Python 3.7.9; the other has 3.8.5. I have downloaded torch 1.6.0 in the first virtual environment, but when I try to install the same torch version in the second one, pip downloads the entire package again instead of installing it from the cache.
Is there any way to force installation from the cache? I want the same torch==1.6.0 in both environments, and it is frustrating to download it every time.
Have a look at the listed files...
https://pypi.org/project/torch/#files
There are different wheels for different Python versions.
This makes sense: the wheels contain compiled extension modules, and the binary interface (ABI) changes between Python versions, so a wheel built for 3.7 cannot be reused under 3.8.
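A small illustration of why the cache cannot be shared between the two environments: each CPython version has its own interpreter tag, which appears in the wheel filename (e.g. cp37 vs cp38 in the torch file list linked above):

```python
import sys

# Wheels embed an interpreter/ABI tag in their filename, e.g.
# torch-1.6.0-cp37-cp37m-manylinux1_x86_64.whl for CPython 3.7 and cp38
# for CPython 3.8.  pip therefore keeps a separate wheel per interpreter,
# and a cp37 wheel can never satisfy an install into a 3.8 environment.
interpreter_tag = "cp{}{}".format(sys.version_info.major, sys.version_info.minor)
print(interpreter_tag)
```

Running this inside each virtual environment shows the two tags differ, which is why pip must download a second wheel.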
I installed the latest version of LightGBM (lgb.__version__ == '2.2.1'), which is built with gcc 8, but I have a model already built with lightgbm==2.0.2, which was built with gcc 7.
I need to conform to the previous version, which means I have to downgrade my current version of lightgbm using pip install lightgbm==2.0.2. However, when I import it, I get Library not loaded: /usr/local/opt/gcc/lib/gcc/7/libgomp.1.dylib.
I have checked here and here; the problem is that I must use the previous version of lightgbm.
I assume the problem is caused by the gcc version, so is there a way I can install gcc 7? (By the way, I tried creating a virtualenv on my computer so that I can have both versions of lightgbm. Can I also install gcc 7 inside the virtual environment while keeping gcc 8 on my computer?)
Thanks so much!
To start off with, it looks like your problem has more to do with gcc than with your Python module. While it is best practice to use a virtual environment for each project, that only affects the lightgbm module, not your gcc version.
To accomplish what you are trying to do, I would recommend taking a look at the following:
Homebrew install specific version of formula?
Their solution is for postgresql, but it should translate to most other programs installed with Homebrew.
The only other option I can think of would be to just use the newest versions of lightgbm and gcc but that doesn't appear to be possible for your project.
In order to run an optimization problem, we set up Gurobi 6.0.4 together with
Anaconda (version 2.2.0) Python (Python 2.7.9) on
Linux CentOS release 6.6 (Final) with the 2.6.32-504.16.2.el6.x86_64 kernel.
Following Gurobi's installation guidelines (listed here: http://www.gurobi.com/documentation/6.0/quickstart_linux.pdf),
everything worked out in the first step: Gurobi was installed and could obtain a license. The PATH variables were also set (in .bashrc) according to the manual, with a small extension referring to the Anaconda Python (and not the other local versions of Python, 2.7 and 3.4):
export GUROBI_HOME="/opt/gurobi604/linux64"
export PATH="${PATH}:${GUROBI_HOME}/bin:/opt/anaconda/bin"
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${GUROBI_HOME}/lib"
Following the procedure, we executed python2.7 setup.py install in the respective directory /opt/gurobi604/linux64. After this, you can usually run the import gurobipy command in the Python interpreter without errors. For older versions of Gurobi (such as 5.6.3) this works very well.
For 6.0.4, though, we constantly receive the error:
ImportError: /opt/anaconda/lib/python2.7/site-packages/gurobipy/gurobipy.so: undefined symbol: _Py_FalseStruct
This is very reproducible, no matter whether we also put Anaconda in the global path; we have checked the bash configuration for any overwriting of the environment variables, which is not the case.
On Windows 8 the Gurobi 6.0.4 and Anaconda Python 2.2.0 work together without any problems.
Also applying hints from here: Python Module Error on Linux did not work out.
Did anyone else experience these problems with this combination of tools? Thanks.
The error message indicates that the Python 3.4 build of the module ended up in your Python 2.7 package directory. This can happen if you do not clean the Python module build directory between builds. Please try the following:
Completely remove the 2.7 package from your Python 2.7 installation (e.g. remove /opt/anaconda/lib/python2.7/site-packages/gurobipy)
Completely remove the Python module build directory from your Gurobi installation (e.g. /opt/gurobi604/linux64/build)
Re-run the build process for the Python 2.7 module (e.g. run "python2 setup.py install" in /opt/gurobi604/linux64)
Please note that CentOS is currently a non-supported platform for Gurobi.
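As a diagnostic sketch (not part of the original answer), you can print the ABI tag your interpreter expects for compiled extension modules; a stray gurobipy.so built for a different Python version will not match it and typically fails to import with undefined-symbol errors like the one above:

```python
import sysconfig

# Modern CPython builds stamp compiled extensions with an ABI tag, and the
# interpreter records which tag (and extension filename suffix) it expects.
# An extension built for another interpreter (e.g. Python 3.4 loaded under
# 2.7) does not match and raises ImportError with undefined symbols.
abi_tag = sysconfig.get_config_var("SOABI")      # e.g. 'cpython-37m-x86_64-linux-gnu'
ext_suffix = sysconfig.get_config_var("EXT_SUFFIX")
print(abi_tag, ext_suffix)
```

Comparing this output between the interpreter you build with and the one you import with is a quick way to catch the mismatch described in the answer.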
Thank you for the hint. I think we tried that, but did not finish the procedure in this way. We tried to clean the system, but in that particular case we still had both Python versions on the machine (due to other applications that use 3.4). Our solution in this case was simply to reinstall everything cleanly on an Ubuntu 14.04 VM. Since then, no further problems have occurred. (I know, not the cleanest solution.)
We had some similar issues when we updated to Gurobi 6.5, but those could be solved by correctly addressing the usual path issues.
Thank you in any case for the reply; I think this will really help us with the next, clean deployment :-)