Why do I need to use a virtual environment with Django? [duplicate] - python-3.x

This question already has answers here:
What is a virtualenv, and why should I use one?
(4 answers)
Why is virtualenv necessary?
(5 answers)
Closed 6 months ago.
I'm new to Django and I want to know why I need to use a virtual environment for Django.

You don't, but it's nice to have. If you plan on working on other Python projects, or on having other people work on this one, it's probably a good idea to have one.
Then, to set up the project on any PC, all you need is:
mkvirtualenv django              # create the environment (virtualenvwrapper)
workon django                    # activate it
pip install -r requirements.txt  # remember to keep a requirements.txt! it's super nice
python manage.py migrate         # initialize the database
python manage.py runserver
Super easy!
Note: the workon and mkvirtualenv commands come from the virtualenvwrapper package; on Windows, install virtualenvwrapper-win.

The Python installed on your system has its own set of packages. As you grow with Python and Django and create more projects, you will see that those system-wide packages are not necessarily the ones you want for every project.
A virtual environment gives you a somewhat isolated environment where you can use the Python version you want and exactly the packages you want, and install new packages for that environment only.
A time will come when you need to list all the packages required to run a particular Django project in a "requirements.txt" file, using the package manager of your choice. A virtual environment makes it easy to track both those packages and their exact versions.
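For example, with the built-in venv module this looks like the following (a minimal sketch; the environment and file names are the conventional ones, not from the original answer):
python3 -m venv .venv            # create an isolated environment in ./.venv
source .venv/bin/activate        # on Windows: .venv\Scripts\activate
pip install django               # installs into .venv only, not system-wide
pip freeze > requirements.txt    # record the exact package versions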
You can read the Whys here:
https://www.geeksforgeeks.org/python-virtual-environment/
https://stackoverflow.com/questions/39055728/importance-of-virtual-environment-setup-for-django-with-python#:~:text=In%20most%20simple%20way%2C%20a,installed%20in%20the%20host%20machine.

A virtual environment is generally about controlling the versions of the different libraries used in your project.
If you only ever use one computer and code one project, you probably won't have to deal with versions. But imagine that you have two projects on two computers: one on Django 2.2 and another on Django 4. Some third-party packages require a Django version no higher or no lower than a specific release. So if you wanted to move projects between computers, you would have to reinstall all the libraries at the needed versions and struggle with conflicts.
Installing libraries inside a virtual environment solves this problem.
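For example, each project can pin its own Django release in its requirements.txt (a sketch with illustrative version numbers):
# project A's requirements.txt
Django==2.2.28
# project B's requirements.txt
Django==4.0.10
Each virtual environment then installs from its own file, and the two versions never collide.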

Related

How can I create my own conda environment in a HPC without access to the internet?

I am quite new to working with High-Performance Computing (HPC). On the cluster I am working on, I cannot install any packages from the internet; I don't even have access to the internet.
The support is very limited when I contact them. I am developing a small application and I need the latest version of pandas. They have told me that to get it I need to do this:
module load lang/Anaconda3
source /resources/conda/miniconda3/bin/activate
conda activate SigProfilerExtractor
This works, but I want to create my own environment. I know how to do this on my own computer, but on the HPC I do not know how. I know that the packages must be stored somewhere, but how can I install them into my environment from wherever they live?
And a second question: they have tools and packages spread across many different environments, so it is very difficult to find out where a tool I need lives. Not all environments have useful names, and the manual they provided is not up to date. If I need the tool mytool, how can I find it?
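For the first question, one approach that needs no internet access is to clone an existing environment and then modify the copy (a sketch using conda's standard --clone flag; the module, path, and environment names are taken from the question above):
module load lang/Anaconda3
source /resources/conda/miniconda3/bin/activate
conda create --name myenv --clone SigProfilerExtractor   # copy the existing env
conda activate myenv
For the second question, conda env list prints every environment conda knows about, which at least narrows the search.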

Should I create a virtual environment for a framework?

Guys, I'm about to install scrapy and I was wondering if it would be a good idea to create a virtual environment first? I'm not an expert at these things, so I would have to research it before doing anything... but my question still stands: should I create one, or should I just install it directly with "pip3 install scrapy"? I ask because I read somewhere that it can conflict with other frameworks; correct me if I'm wrong, please.
Yes, you should create a virtual environment, especially if you work with multiple frameworks.
PEP 405 proposes to add to Python a mechanism for lightweight "virtual environments" with their own site directories, optionally isolated from system site directories. Each virtual environment has its own Python binary (allowing creation of environments with various Python versions) and can have its own independent set of installed Python packages in its site directories, but shares the standard library with the base installed Python.
For more information, check https://docs.python.org/3/library/venv.html and
https://www.python.org/dev/peps/pep-0405/
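For scrapy specifically, a minimal sketch with the built-in venv module (the environment name is arbitrary):
python3 -m venv scrapy-env
source scrapy-env/bin/activate   # on Windows: scrapy-env\Scripts\activate
pip3 install scrapy              # scrapy and its dependencies stay inside scrapy-env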

How to specify pytorch as a package requirement on windows?

I have a Python package which depends on PyTorch and which I'd like Windows users to be able to install via pip (the specific package is https://github.com/mindsdb/lightwood, but I don't think this is very relevant to my question).
What are the best practices for going about this?
Are there some projects I could use as examples?
It seems like the PyPI-hosted versions of torch & torchvision aren't Windows-compatible, and the "getting started" section suggests installing from the custom PyTorch repository, but beyond that I'm not sure what the ideal solution would be to incorporate this as part of a setup script.
What are the best practices for going about this?
If your project depends on other projects that are not distributed through PyPI then you have to inform the users of your project one way or another. I recommend the following combination:
clearly specify (in your project's documentation pages, or in the project's long description, or in the README, or anything like this) which dependencies are not available through PyPI (and possibly the reason why, with the appropriate links) as well as the possible locations to get them from;
to facilitate the user experience, publish alongside your project a pre-prepared requirements.txt file with the appropriate --find-links options.
The reason (or at least the main reason; there are others) is that anyone using pip assumes that, by default, everything will be downloaded from PyPI and nowhere else. In other words, anyone using pip places some trust in pypi.org as the source for Python project distributions. If pip were suddenly to download artifacts from other sources, it would breach this trust. It should be the user's decision to download from other sources.
So you could provide in your project's documentation an example requirements.txt file like the following (note that --find-links applies to the whole file and goes on its own line):
# ...
--find-links https://download.pytorch.org/whl/torch_stable.html
torch===1.4.0
torchvision===0.5.0
# ...
Update
The best solution would be to help the maintainers of the projects in question to publish Windows wheels on PyPI directly:
https://github.com/pytorch/pytorch/issues/24310
https://github.com/pytorch/vision/issues/1774
https://pypi.org/help/#file-size-limit

Different Python Installations

Occasionally, when we install or update modules with pip, other dependent modules stop working. Reverting the module version or otherwise resolving the issue takes time, and working programs are impacted in the meantime. It would be really helpful to set up two different Python environments: sandbox and production.
Would you please suggest how to maintain two Python environments, promoting changes seamlessly from sandbox to production?
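One common pattern (a sketch, not from the original thread): keep two virtual environments and promote a frozen requirements file from sandbox to production only after testing:
python3 -m venv sandbox
python3 -m venv production
source sandbox/bin/activate
pip install --upgrade pandas      # illustrative package; try the update here first
pip freeze > requirements.txt     # snapshot the versions that work together
deactivate
source production/bin/activate
pip install -r requirements.txt   # reproduce exactly the verified set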

Best practice for bundling third party libraries for distribution in Python 3

I'm developing an application using Python 3. What is the best practice for using third-party libraries, both during the development process and for end-user distribution? Note that I'm working within these constraints:
Developers in the team should have the exact same version of the libraries.
An ideal solution would work on both Windows and Linux.
I would like to avoid making the user install software before using our own; that is, they shouldn't have to install product A and product B before using ours.
You could use setuptools to create egg files for your libraries, assuming they aren't available in egg form already. You could then bundle the eggs alongside your software, which would need to either install them or ensure that they are on the import path.
This has some complexities; for example, if your libraries have C extensions, your eggs become platform-specific. But in my experience this is the most widely accepted means of 'bundling' stuff in Python.
I have to say that this remains one of Python's weaknesses, though; the third-party ecosystem is certainly aimed at developers rather than end users.
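For reference, the build step looks like this (a sketch; bdist_egg is setuptools' classic egg command, though wheels have since superseded eggs and support the same bundling idea):
python setup.py bdist_egg                              # classic: the egg lands in dist/
pip wheel -r requirements.txt -w wheelhouse/           # modern equivalent: prebuild wheels
pip install --no-index --find-links=wheelhouse/ -r requirements.txt   # offline install from the bundle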
There are no universal best practices, but there are a few different tracks people follow. With regard to commercial product distribution, there are the following:
Manage Your Own Package Server
With regard to your development process, it is typical to have your dev boxes update from a local package server. That allows you to "freeze" the dependency list (i.e. just stop getting upstream updates) so that everyone is on the same version. You can update at particular times and have the developers update as well, keeping everyone in lockstep.
For customer installs you usually write an install script. You can collect all the packages and install your libs, as well as the other dependencies, at the same time. There can be issues with trying to install a new Python, or even any standard library, because the customer may already depend on a different version. Usually you can install in a sandbox to separate your packages from the system's packages. This is more of a problem on Linux than on Windows.
Toolchain
The other option is to create a toolchain for each supported OS. A toolchain is all the dependencies (up to, but not including, base OS libs like glibc). This toolchain gets packaged up and distributed to both developers AND customers. Best practice for a toolchain is:
rename the executable to prevent confusion (e.g. python -> pkg_python)
don't install in .../bin directories, to prevent accidental usage (e.g. on Linux you can install under .../libexec; /opt is also used, although personally I detest it)
install your libs in the correct location under lib/python/site-packages so you don't have to use PYTHONPATH.
Distribute the source .py files for the executables so the install script can relocate them appropriately.
The package format should be an OS native package (RedHat -> RPM, Debian -> DEB, Win -> MSI)
For developers, use pip with a requirements file.
For end users, specify requirements in setup.py.
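A sketch of that split (illustrative package and pins; install_requires is setuptools' standard keyword):
# requirements.txt, for developers: exact pins, e.g. produced by pip freeze
Django==4.2.1
# setup.py, for end users: a looser range, e.g.
#     install_requires=["Django>=4.2,<5"]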
