Why is requirements.txt not being installed on deployment to AppEngine? - python-3.x

I'm attempting to upgrade an existing project to the new Python 3 App Engine Standard Environment. I'm able to deploy my application code, but the app crashes because it cannot find the dependencies defined in the requirements.txt file. The app's file structure looks like this:
|____requirements.txt
|____dispatch.yaml
|____dashboard
| |____dashboard.yaml
| |____static
| | |____gen
| | | |____favicon.ico
| | | |____fonts
| | | | |____MaterialIcons-Regular.012cf6a1.woff
| | | |____app.js
| | |____img
| | | |____avatar-06.png
| | | |____avatar-07.png
| | | |____avatar-05.png
| | | |____avatar-04.png
| |____templates
| | |____gen
| | | |____index.html
| |____main.py
| |____.gcloudignore
|____.gcloudignore
And the requirements.txt file looks like this:
Flask==0.12.2
pyjwt==1.6.1
flask-cors==3.0.3
requests==2.19.1
google-auth==1.5.1
pillow==5.3.0
grpcio-tools==1.16.1
google-cloud-storage==1.13.0
google-cloud-firestore==0.30.0
requests-toolbelt==0.8.0
Werkzeug<0.13.0,>=0.12.0
firestore-model>=0.0.2
After deploying, when I visit the site on the web, I get a 502. The GCP Console Error Reporting service indicates the error is thrown from a line in main.py where it attempts to import one of the above dependencies: ModuleNotFoundError: No module named 'google'
I've tried moving the requirements.txt into the dashboard folder and get the same result.
Stack Trace:
Traceback (most recent call last):
File "/env/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/env/lib/python3.7/site-packages/gunicorn/workers/gthread.py", line 104, in init_process
super(ThreadWorker, self).init_process()
File "/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line 129, in init_process
self.load_wsgi()
File "/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
self.wsgi = self.app.wsgi()
File "/env/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
return self.load_wsgiapp()
File "/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
return util.import_app(self.app_uri)
File "/env/lib/python3.7/site-packages/gunicorn/util.py", line 350, in import_app
__import__(module)
File "/srv/main.py", line 12, in <module>
from google.cloud import storage
ModuleNotFoundError: No module named 'google'

A few things could be going wrong. Make sure that:
Your requirements.txt file is in the same directory as your main.py file
Your .gcloudignore is not ignoring your requirements.txt file
You are deploying from the same directory that contains requirements.txt and main.py
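For the second point, a quick way to check is `gcloud meta list-files-for-upload`, which prints exactly the set of files that will be uploaded after .gcloudignore is applied; requirements.txt must appear in that list. A minimal .gcloudignore that keeps requirements.txt in the upload might look like this (a sketch, not the asker's actual file):

```text
# .gcloudignore (sketch): exclude repo housekeeping, keep requirements.txt
.gcloudignore
.git
.gitignore
__pycache__/
# requirements.txt must NOT be listed here, or pip never sees it at deploy time
```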

Related

dbbackup with docker/celery/celery-beat not working: [Errno 2] No such file or directory: 'pg_dump'

I am setting up django-dbbackup to back up my database. However, I am receiving an error when backing up my data. There is a similar question where the person was able to resolve it, but the answer doesn't show how.
My dbbackup version is django-dbbackup==4.0.2
Please find below the relevant services from my docker-compose file:
database:
  build:
    context: .
    dockerfile: pg-Dockerfile
  expose:
    - "5432"
  restart: always
  volumes:
    - .:/myworkdir
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: password
    POSTGRES_DB: faranga_db
redis:
  image: redis
  restart: on-failure
  volumes:
    - .:/myworkdir
  expose:
    - "6379"
celery:
  build: .
  restart: on-failure
  command: bash -c "sleep 10; celery -A project worker -l info"
  volumes:
    - .:/myworkdir
  env_file:
    - .env
  depends_on:
    - database
    - redis
beat:
  build: .
  restart: on-failure
  command: bash -c "sleep 10; celery -A project beat -l info --pidfile=/tmp/celeryd.pid"
  volumes:
    - .:/myworkdir
  env_file:
    - .env
  depends_on:
    - database
    - redis
My celery tasks:
@app.task
def database_backup():
    management.call_command('dbbackup')

# media backup works just fine
@app.task
def media_backup():
    management.call_command('mediabackup')
DB backup settings:
# django-dbbackup: https://django-dbbackup.readthedocs.io/en/master/installation.html
DBBACKUP_STORAGE = 'django.core.files.storage.FileSystemStorage'
DBBACKUP_STORAGE_OPTIONS = {'location': '~/myworkdir/backups/db/'}

def backup_filename(databasename, servername, datetime, extension, content_type):
    pass

DBBACKUP_FILENAME_TEMPLATE = backup_filename
DBBACKUP_CONNECTOR = "dbbackup.db.postgresql.PgDumpBinaryConnector"
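As an aside: if DBBACKUP_FILENAME_TEMPLATE is meant to be a callable, it must return the filename string; the `pass` body above implicitly returns None. A minimal sketch, reusing the signature from the question's own backup_filename (the exact name format is just an illustration):

```python
# A callable filename template must return the filename string; `pass`
# implicitly returns None, which the backup command cannot use.
# (signature taken from the question's own backup_filename definition)
def backup_filename(databasename, servername, datetime, extension, content_type):
    return f"{servername}-{databasename}-{datetime}.{extension}"

# e.g. backup_filename("faranga_db", "db1", "2023-02-09-144400", "psql.bin", "db")
# -> "db1-faranga_db-2023-02-09-144400.psql.bin"
```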
Error stack trace:
[2023-02-09 14:44:00,052: ERROR/ForkPoolWorker-6] CommandConnectorError: Error running: pg_dump --dbname=postgresql://postgres:password@database:5432/faranga_db --format=custom
faranga-celery-1 | [Errno 2] No such file or directory: 'pg_dump'
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/utils.py", line 120, in wrapper
faranga-celery-1 | func(*args, **kwargs)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/management/commands/dbbackup.py", line 93, in handle
faranga-celery-1 | self._save_new_backup(database)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/management/commands/dbbackup.py", line 106, in _save_new_backup
faranga-celery-1 | outputfile = self.connector.create_dump()
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/base.py", line 92, in create_dump
faranga-celery-1 | return self._create_dump()
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/postgresql.py", line 112, in _create_dump
faranga-celery-1 | stdout, stderr = self.run_command(cmd, env=self.dump_env)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/base.py", line 180, in run_command
faranga-celery-1 | raise exceptions.CommandConnectorError(
faranga-celery-1 |
faranga-celery-1 | [2023-02-09 14:44:00,080: ERROR/ForkPoolWorker-6] Task administration.tasks.database_backup[601e33a6-0eef-42c0-a355-cb7d33d7ebaa] raised unexpected: CommandConnectorError("Error running: pg_dump --dbname=postgresql://postgres:password@database:5432/faranga_db --format=custom \n[Errno 2] No such file or directory: 'pg_dump'")
faranga-celery-1 | Traceback (most recent call last):
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/base.py", line 165, in run_command
faranga-celery-1 | process = Popen(
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/sentry_sdk/integrations/stdlib.py", line 201, in sentry_patched_popen_init
faranga-celery-1 | rv = old_popen_init(self, *a, **kw)
faranga-celery-1 | File "/usr/lib/python3.10/subprocess.py", line 969, in __init__
faranga-celery-1 | self._execute_child(args, executable, preexec_fn, close_fds,
faranga-celery-1 | File "/usr/lib/python3.10/subprocess.py", line 1845, in _execute_child
faranga-celery-1 | raise child_exception_type(errno_num, err_msg, err_filename)
faranga-celery-1 | FileNotFoundError: [Errno 2] No such file or directory: 'pg_dump'
faranga-celery-1 |
faranga-celery-1 | During handling of the above exception, another exception occurred:
faranga-celery-1 |
faranga-celery-1 | Traceback (most recent call last):
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/celery/app/trace.py", line 451, in trace_task
faranga-celery-1 | R = retval = fun(*args, **kwargs)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/sentry_sdk/integrations/celery.py", line 207, in _inner
faranga-celery-1 | reraise(*exc_info)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/sentry_sdk/_compat.py", line 56, in reraise
faranga-celery-1 | raise value
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/sentry_sdk/integrations/celery.py", line 202, in _inner
faranga-celery-1 | return f(*args, **kwargs)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/celery/app/trace.py", line 734, in __protected_call__
faranga-celery-1 | return self.run(*args, **kwargs)
faranga-celery-1 | File "/myworkdir/administration/tasks.py", line 8, in database_backup
faranga-celery-1 | management.call_command('dbbackup')
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/django/core/management/__init__.py", line 181, in call_command
faranga-celery-1 | return command.execute(*args, **defaults)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/django/core/management/base.py", line 398, in execute
faranga-celery-1 | output = self.handle(*args, **options)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/utils.py", line 120, in wrapper
faranga-celery-1 | func(*args, **kwargs)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/management/commands/dbbackup.py", line 93, in handle
faranga-celery-1 | self._save_new_backup(database)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/management/commands/dbbackup.py", line 106, in _save_new_backup
faranga-celery-1 | outputfile = self.connector.create_dump()
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/base.py", line 92, in create_dump
faranga-celery-1 | return self._create_dump()
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/postgresql.py", line 112, in _create_dump
faranga-celery-1 | stdout, stderr = self.run_command(cmd, env=self.dump_env)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/base.py", line 180, in run_command
faranga-celery-1 | raise exceptions.CommandConnectorError(
faranga-celery-1 | dbbackup.db.exceptions.CommandConnectorError: Error running: pg_dump --dbname=postgresql://postgres:password@database:5432/faranga_db --format=custom
faranga-celery-1 | [Errno 2] No such file or directory: 'pg_dump'
If you're running this command inside your project container, you don't have access to the Postgres binaries needed to dump and restore your database. Try accessing your postgres container and calling the command in a shell there.
You can get your database container id from docker ps and open a shell in it with docker exec -it [your_container_id] bash. Then try calling pg_dump from inside it. This should work.
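Alternatively, since the dbbackup management command runs inside the celery container, installing the Postgres client tools into the app image also makes pg_dump available where dbbackup invokes it. A sketch, assuming the app image is Debian-based:

```dockerfile
# add the Postgres client binaries (pg_dump, pg_restore) to the app image
RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-client \
    && rm -rf /var/lib/apt/lists/*
```

Note that the client's major version should be compatible with the Postgres server version for pg_dump to work reliably.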

gdal_config_error: [Errno 2] No such file or directory: 'gdal-config'

I'm running a Django REST API with Django 3.2 on GCP (Ubuntu 20.04.5 LTS, GNU/Linux 5.15.0-1025-gcp x86_64). I'm getting an error when I run my docker containers. I need to use GDAL in one of my project modules. I'm pasting the log from one of the containers, my-api-webserver. Two other containers running the worker and scheduler servers display the same errors.
gdal_config_error: [Errno 2] No such file or directory: 'gdal-config'
The following is the detailed trace:
my-api-webserver | error: subprocess-exited-with-error
my-api-webserver |
my-api-webserver | × python setup.py egg_info did not run successfully.
my-api-webserver | │ exit code: 1
my-api-webserver | ╰─> [125 lines of output]
my-api-webserver | running egg_info
my-api-webserver | creating /tmp/pip-pip-egg-info-2zpklao8/GDAL.egg-info
my-api-webserver | writing /tmp/pip-pip-egg-info-2zpklao8/GDAL.egg-info/PKG-INFO
my-api-webserver | writing dependency_links to /tmp/pip-pip-egg-info-2zpklao8/GDAL.egg-info/dependency_links.txt
my-api-webserver | writing requirements to /tmp/pip-pip-egg-info-2zpklao8/GDAL.egg-info/requires.txt
my-api-webserver | writing top-level names to /tmp/pip-pip-egg-info-2zpklao8/GDAL.egg-info/top_level.txt
my-api-webserver | writing manifest file '/tmp/pip-pip-egg-info-2zpklao8/GDAL.egg-info/SOURCES.txt'
my-api-webserver | Traceback (most recent call last):
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 121, in fetch_config
my-api-webserver | p = subprocess.Popen([command, args], stdout=subprocess.PIPE)
my-api-webserver | File "/usr/local/lib/python3.8/subprocess.py", line 858, in __init__
my-api-webserver | self._execute_child(args, executable, preexec_fn, close_fds,
my-api-webserver | File "/usr/local/lib/python3.8/subprocess.py", line 1704, in _execute_child
my-api-webserver | raise child_exception_type(errno_num, err_msg, err_filename)
my-api-webserver | FileNotFoundError: [Errno 2] No such file or directory: '../../apps/gdal-config'
my-api-webserver |
my-api-webserver | During handling of the above exception, another exception occurred:
my-api-webserver |
my-api-webserver | Traceback (most recent call last):
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 205, in get_gdal_config
my-api-webserver | return fetch_config(option, gdal_config=self.gdal_config)
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 124, in fetch_config
my-api-webserver | raise gdal_config_error(e)
my-api-webserver | __main__.gdal_config_error: [Errno 2] No such file or directory: '../../apps/gdal-config'
my-api-webserver |
my-api-webserver | During handling of the above exception, another exception occurred:
my-api-webserver |
my-api-webserver | Traceback (most recent call last):
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 121, in fetch_config
my-api-webserver | p = subprocess.Popen([command, args], stdout=subprocess.PIPE)
my-api-webserver | File "/usr/local/lib/python3.8/subprocess.py", line 858, in __init__
my-api-webserver | self._execute_child(args, executable, preexec_fn, close_fds,
my-api-webserver | File "/usr/local/lib/python3.8/subprocess.py", line 1704, in _execute_child
my-api-webserver | raise child_exception_type(errno_num, err_msg, err_filename)
my-api-webserver | FileNotFoundError: [Errno 2] No such file or directory: 'gdal-config'
my-api-webserver |
my-api-webserver | During handling of the above exception, another exception occurred:
my-api-webserver |
my-api-webserver | Traceback (most recent call last):
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 212, in get_gdal_config
my-api-webserver | return fetch_config(option)
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 124, in fetch_config
my-api-webserver | raise gdal_config_error(e)
my-api-webserver | __main__.gdal_config_error: [Errno 2] No such file or directory: 'gdal-config'
my-api-webserver |
my-api-webserver | During handling of the above exception, another exception occurred:
my-api-webserver |
my-api-webserver | Traceback (most recent call last):
my-api-webserver | File "<string>", line 2, in <module>
my-api-webserver | File "<pip-setuptools-caller>", line 34, in <module>
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 414, in <module>
my-api-webserver | setup(**setup_kwargs)
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 87, in setup
my-api-webserver | return distutils.core.setup(**attrs)
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 185, in setup
my-api-webserver | return run_commands(dist)
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
my-api-webserver | dist.run_commands()
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
my-api-webserver | self.run_command(cmd)
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
my-api-webserver | super().run_command(command)
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
my-api-webserver | cmd_obj.run()
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/command/egg_info.py", line 308, in run
my-api-webserver | self.find_sources()
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/command/egg_info.py", line 316, in find_sources
my-api-webserver | mm.run()
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/command/egg_info.py", line 560, in run
my-api-webserver | self.add_defaults()
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/command/egg_info.py", line 597, in add_defaults
my-api-webserver | sdist.add_defaults(self)
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/command/sdist.py", line 106, in add_defaults
my-api-webserver | super().add_defaults()
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/_distutils/command/sdist.py", line 252, in add_defaults
my-api-webserver | self._add_defaults_ext()
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/_distutils/command/sdist.py", line 336, in _add_defaults_ext
my-api-webserver | build_ext = self.get_finalized_command('build_ext')
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 305, in get_finalized_command
my-api-webserver | cmd_obj.ensure_finalized()
my-api-webserver | File "/usr/local/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 111, in ensure_finalized
my-api-webserver | self.finalize_options()
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 275, in finalize_options
my-api-webserver | self.gdaldir = self.get_gdal_config('prefix')
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 218, in get_gdal_config
my-api-webserver | raise gdal_config_error(traceback_string + '\n' + msg)
my-api-webserver | __main__.gdal_config_error: Traceback (most recent call last):
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 121, in fetch_config
my-api-webserver | p = subprocess.Popen([command, args], stdout=subprocess.PIPE)
my-api-webserver | File "/usr/local/lib/python3.8/subprocess.py", line 858, in __init__
my-api-webserver | self._execute_child(args, executable, preexec_fn, close_fds,
my-api-webserver | File "/usr/local/lib/python3.8/subprocess.py", line 1704, in _execute_child
my-api-webserver | raise child_exception_type(errno_num, err_msg, err_filename)
my-api-webserver | FileNotFoundError: [Errno 2] No such file or directory: '../../apps/gdal-config'
my-api-webserver |
my-api-webserver | During handling of the above exception, another exception occurred:
my-api-webserver |
my-api-webserver | Traceback (most recent call last):
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 205, in get_gdal_config
my-api-webserver | return fetch_config(option, gdal_config=self.gdal_config)
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 124, in fetch_config
my-api-webserver | raise gdal_config_error(e)
my-api-webserver | gdal_config_error: [Errno 2] No such file or directory: '../../apps/gdal-config'
my-api-webserver |
my-api-webserver | During handling of the above exception, another exception occurred:
my-api-webserver |
my-api-webserver | Traceback (most recent call last):
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 121, in fetch_config
my-api-webserver | p = subprocess.Popen([command, args], stdout=subprocess.PIPE)
my-api-webserver | File "/usr/local/lib/python3.8/subprocess.py", line 858, in __init__
my-api-webserver | self._execute_child(args, executable, preexec_fn, close_fds,
my-api-webserver | File "/usr/local/lib/python3.8/subprocess.py", line 1704, in _execute_child
my-api-webserver | raise child_exception_type(errno_num, err_msg, err_filename)
my-api-webserver | FileNotFoundError: [Errno 2] No such file or directory: 'gdal-config'
my-api-webserver |
my-api-webserver | During handling of the above exception, another exception occurred:
my-api-webserver |
my-api-webserver | Traceback (most recent call last):
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 212, in get_gdal_config
my-api-webserver | return fetch_config(option)
my-api-webserver | File "/tmp/pip-install-am7iveqp/gdal_636d6d66a9b14d6e9183a48706abd53e/setup.py", line 124, in fetch_config
my-api-webserver | raise gdal_config_error(e)
my-api-webserver | gdal_config_error: [Errno 2] No such file or directory: 'gdal-config'
my-api-webserver |
my-api-webserver | Could not find gdal-config. Make sure you have installed the GDAL native library and development headers.
my-api-webserver | [end of output]
my-api-webserver |
my-api-webserver | note: This error originates from a subprocess, and is likely not a problem with pip.
my-api-webserver | error: metadata-generation-failed
my-api-webserver |
my-api-webserver | × Encountered error while generating package metadata.
my-api-webserver | ╰─> See above for output.
my-api-webserver |
my-api-webserver | note: This is an issue with the package mentioned above, not pip.
my-api-webserver | hint: See above for details.
This is my Dockerfile:
FROM python:3.8.14
RUN apt-get -y update && apt-get -y upgrade && apt-get install -y ffmpeg && apt install libgdal-dev --upgrade
RUN export CPLUS_INCLUDE_PATH=/usr/include/gdal
RUN export C_INCLUDE_PATH=/usr/include/gdal
COPY wait-for-it.sh /wait-for-it.sh
# Copy any files over
COPY entrypoint.sh /entrypoint.sh
# Copy any files over
COPY bootstrap_development_data.sh /bootstrap_development_data.sh
# Change permissions
RUN chmod +x /entrypoint.sh
RUN chmod +x /bootstrap_development_data.sh
RUN chmod +x /wait-for-it.sh
RUN groupadd -r docker && useradd -r -g docker earthling
RUN chown -R earthling /root/
ENTRYPOINT ["/entrypoint.sh"]
COPY requirements.txt /requirements.txt
RUN pip3 install --upgrade pip setuptools wheel
RUN pip3 install -r /requirements.txt
RUN yes | pip uninstall django-rq-scheduler
RUN yes | pip install -U django-rq-scheduler
VOLUME ["/opt/my-api"]
EXPOSE 80
CMD ["python", "manage.py", "runserver", "0.0.0.0:80"]
I have also, separately on the host, run the following:
sudo add-apt-repository ppa:ubuntugis/ppa
sudo apt-get update
sudo apt-get install gdal-bin --upgrade
sudo apt remove gdal-bin
sudo apt install libgdal-dev --upgrade
sudo apt remove libgdal-dev
After the above, when I run:
gdal-config --version
the result is 3.3.2.
This is my requirements.txt:
aioredis==1.3.1
appdirs==1.4.4
asgiref==3.5.2
async-timeout==4.0.2
attrs==22.1.0
autobahn==22.7.1
Automat==20.2.0
bandit==1.7.4
babel==2.11.0
beautifulsoup4==4.11.1
boto3==1.26.41
botocore==1.29.41
certifi==2022.9.14
cffi==1.15.1
channels==3.0.5
channels-redis==3.4.1
chardet==5.0.0
click==8.1.3
colorama==0.4.5
colormath==3.0.0
constantly==15.1.0
coverage==6.4.4
croniter==1.3.7
cryptography==38.0.1
daphne==3.0.2
decorator==5.1.1
Django==3.2
django-amazon-ses==4.0.1
django-extra-views==0.14.0
django-appconf==1.0.5
django-cursor-pagination==0.2.0
django-extensions==3.2.1
django-imagekit==4.1.0
django-media-fixtures-next==1.0.1
django-model-utils==4.2.0
django-modeltranslation==0.18.4
django-nose==1.4.7
django-ordered-model==3.6
django-positions==0.6.0
django-proxy==1.2.1
django-redis==5.2.0
django-replicated==2.7
django-rq==2.5.1
django-rq-scheduler==2022.9
django-storages==1.13.1
django-utils-six==2.0
djangorestframework==3.13.1
django-widget-tweaks==1.4.12
django-haystack==3.2.1
django-treebeard==4.5.1
django-tables2==2.4.1
django-extensions==3.2.1
django-phone-verify==2.0.1
dparse==0.6.0
easy-thumbnails==2.8.4
Faker==12.0.1
feedparser==6.0.10
ffmpy==0.3.0
filelock==3.8.0
funcy==1.17
factory-boy>=3.2,<3.3
gitdb==4.0.9
GitPython==3.1.27
halo==0.0.31
hiredis==2.0.0
hyperlink==21.0.0
idna==3.4
incremental==21.3.0
jmespath==1.0.1
langdetect==1.0.9
log-symbols==0.0.14
mixer==7.2.2
msgpack==1.0.4
mysqlclient==2.1.1
natsort==8.2.0
networkx==2.8.6
nose==1.3.7
nose-exclude==0.5.0
numpy==1.23.3
onesignal-sdk==2.0.0
opencv-python==4.6.0.66
packaging==21.3
pathtools==0.1.2
pbr==5.10.0
pilkit==2.0
Pillow==9.2.0
pinocchio==0.4.3
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.21
PyJWT==2.4.0
pyOpenSSL==22.0.0
pyparsing==3.0.9
pytest-cov>=2.12,<3.1
python-dateutil==2.8.2
python-dotenv==0.21.0
python-magic==0.4.27
pytz==2022.2.1
PyYAML==6.0
purl==1.6
redis==4.3.4
requests==2.28.1
requests-file==1.5.1
rest-framework-generic-relations==2.1.0
rq==1.11.0
rq-scheduler==0.11.0
s3transfer==0.6.0
safety==2.1.1
sentry-sdk==1.9.8
service-identity==21.1.0
sgmllib3k==1.0.0
shutilwhich==1.1.0
six==1.16.0
smmap==5.0.0
soupsieve==2.3.2.post1
spectra==0.0.11
spinners==0.0.24
sqlparse==0.4.2
stevedore==4.0.0
stripe==4.1.0
sorl-thumbnail==12.9.0
termcolor==2.0.1
text-unidecode==1.3
tox>=3.23,<3.26
tldextract==3.3.1
toml==0.10.2
txaio==22.2.1
typing_extensions==4.3.0
uritools==4.0.0
url-normalize==1.4.3
urlextract==1.6.0
urllib3==1.26.12
watchdog==2.1.9
webpreview==1.7.2
whitenoise==6.2.0
zipp==3.8.1
zope.interface==5.4.0
django-cacheops==6.1.0
onesignal==0.1.3
onesignal-client==0.0.2
isort>=5.9,<5.11
flake8>=6.0.0
django-environ==0.9.0
mock==4.0.3
GDAL==3.3.2
setuptools==57.5.0
#commerce packages - development
# Excluding package already included above)
Werkzeug>=1.0,<2.1
django-debug-toolbar>=2.2,<3.6
psycopg2-binary>=2.8,<2.10
#Social-commerce packages - sandbox
Whoosh>=2.7,<2.8
pysolr==3.9.0
uWSGI>=2.0.19,<2.1
# Linting
flake8-debugger==4.1.2
# Helpers
pyprof2calltree>=1.4,<1.5
ipdb>=0.13,<0.14
ipython>=7.12,<9
# Country data
pycountry
#cli-based
colorlog==6.7.0
halo==0.0.31
requests==2.28.1
click==8.1.3
Help appreciated!
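Two things in the Dockerfile stand out (a hedged suggestion, not a confirmed fix): `RUN export ...` only lasts for that single build step, so the GDAL include paths are gone by the time pip runs, and the pinned `GDAL==3.3.2` must match the gdal-config version inside the image rather than the 3.3.2 installed on the host VM. A sketch of the relevant lines, assuming the Debian-based python:3.8.14 image:

```dockerfile
FROM python:3.8.14
# install the GDAL native library and headers inside the image,
# before pip tries to build the GDAL Python bindings
RUN apt-get update \
    && apt-get install -y --no-install-recommends ffmpeg gdal-bin libgdal-dev \
    && rm -rf /var/lib/apt/lists/*
# ENV persists across build steps; RUN export does not
ENV CPLUS_INCLUDE_PATH=/usr/include/gdal \
    C_INCLUDE_PATH=/usr/include/gdal
COPY requirements.txt /requirements.txt
RUN pip3 install --upgrade pip setuptools wheel
# pin GDAL in requirements.txt to whatever `gdal-config --version`
# reports inside this image, not on the host VM
RUN pip3 install -r /requirements.txt
```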

How to fix "RuntimeError: CUDA error: out of memory"

I have successfully trained on one GPU, but it fails on multiple GPUs. I checked the code: it just sets some values in a map, then carries out multi-GPU training via torch.distributed.barrier.
I used the following, but it failed even with batch size = 1.
docker exec -e NVIDIA_VISIBLE_DEVICES=0,1,2,3 -it jy /bin/bash
os.environ["CUDA_VISIBLE_DEVICES"] = '0,1,2,3'
GPU usage (from nvidia-smi):
|===============================+======================+======================|
| 0 GeForce RTX 208... On | 00000000:3D:00.0 Off | N/A |
| 24% 26C P8 21W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 208... On | 00000000:3E:00.0 Off | N/A |
| 25% 27C P8 2W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 GeForce RTX 208... On | 00000000:40:00.0 Off | N/A |
| 25% 25C P8 20W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 GeForce RTX 208... On | 00000000:41:00.0 Off | N/A |
| 26% 25C P8 15W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
The error information:
/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
FutureWarning,
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
| distributed init (rank 2): env://
| distributed init (rank 1): env://
| distributed init (rank 3): env://
| distributed init (rank 0): env://
Traceback (most recent call last):
File "main_track.py", line 398, in <module>
main(args)
File "main_track.py", line 159, in main
utils.init_distributed_mode(args)
File "/jy/TransTrack/util/misc.py", line 459, in init_distributed_mode
torch.distributed.barrier()
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 2709, in barrier
work = default_pg.barrier(opts=opts)
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 355 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 357 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 358 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 356) of binary: /root/anaconda3/envs/pytorch171/bin/python3
Traceback (most recent call last):
File "/root/anaconda3/envs/pytorch171/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
)(*cmd_args)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
main_track.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-10-13_08:54:25
host : 2f923a848f88
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 356)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
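One thing worth checking (an assumption based on the `docker exec` line above, not something the trace confirms): `docker exec -e NVIDIA_VISIBLE_DEVICES=0,1,2,3` only sets an environment variable inside the new shell; it cannot attach GPUs to a container that was started without them. GPUs have to be requested when the container is created, for example:

```shell
# request the GPUs at container creation time (requires the NVIDIA container toolkit);
# <image> is a placeholder for the actual image name
docker run --gpus '"device=0,1,2,3"' -it --name jy <image> /bin/bash
```

If the container was already created without GPU access, each rank's attempt to create a CUDA context can fail in ways that surface as this "out of memory" error at the first collective call.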

How to solve "Operation not permitted: '/var/lib/pgadmin'" error in laradock at Windows Subsystem for Linux?

I am using Laradock in my Laravel project for dockerizing with Nginx, Postgres, and Pgadmin. All the containers are running fine except Pgadmin. Here is my error log:
pgadmin_1 | WARNING: Failed to set ACL on the directory containing the configuration database: [Errno 1] Operation not permitted: '/var/lib/pgadmin'
pgadmin_1 | Traceback (most recent call last):
pgadmin_1 | File "run_pgadmin.py", line 4, in <module>
pgadmin_1 | from pgAdmin4 import app
pgadmin_1 | File "/pgadmin4/pgAdmin4.py", line 92, in <module>
pgadmin_1 | app = create_app()
pgadmin_1 | File "/pgadmin4/pgadmin/__init__.py", line 241, in create_app
pgadmin_1 | create_app_data_directory(config)
pgadmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 40, in create_app_data_directory
pgadmin_1 | _create_directory_if_not_exists(config.SESSION_DB_PATH)
pgadmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 16, in _create_directory_if_not_exists
pgadmin_1 | os.mkdir(_path)
pgadmin_1 | PermissionError: [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
pgadmin_1 | sudo: setrlimit(RLIMIT_CORE): Operation not permitted
pgadmin_1 | [2020-06-07 11:48:43 +0000] [1] [INFO] Starting gunicorn 19.9.0
pgadmin_1 | [2020-06-07 11:48:43 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
pgadmin_1 | [2020-06-07 11:48:43 +0000] [1] [INFO] Using worker: threads
pgadmin_1 | /usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
pgadmin_1 | return io.open(fd, *args, **kwargs)
pgadmin_1 | [2020-06-07 11:48:43 +0000] [83] [INFO] Booting worker with pid: 83
pgadmin_1 | [2020-06-07 11:48:44 +0000] [83] [ERROR] Exception in worker process
pgadmin_1 | Traceback (most recent call last):
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
pgadmin_1 | worker.init_process()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/gthread.py", line 104, in init_process
pgadmin_1 | super(ThreadWorker, self).init_process()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 129, in init_process
pgadmin_1 | self.load_wsgi()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
pgadmin_1 | self.wsgi = self.app.wsgi()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
pgadmin_1 | self.callable = self.load()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
pgadmin_1 | return self.load_wsgiapp()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
pgadmin_1 | return util.import_app(self.app_uri)
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/util.py", line 350, in import_app
pgadmin_1 | __import__(module)
pgadmin_1 | File "/pgadmin4/run_pgadmin.py", line 4, in <module>
pgadmin_1 | from pgAdmin4 import app
pgadmin_1 | File "/pgadmin4/pgAdmin4.py", line 92, in <module>
pgadmin_1 | app = create_app()
pgadmin_1 | File "/pgadmin4/pgadmin/__init__.py", line 241, in create_app
pgadmin_1 | create_app_data_directory(config)
pgadmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 40, in create_app_data_directory
pgadmin_1 | _create_directory_if_not_exists(config.SESSION_DB_PATH)
pgadmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 16, in _create_directory_if_not_exists
pgadmin_1 | os.mkdir(_path)
pgadmin_1 | PermissionError: [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
pgadmin_1 | [2020-06-07 11:48:44 +0000] [83] [INFO] Worker exiting (pid: 83)
pgadmin_1 | WARNING: Failed to set ACL on the directory containing the configuration database: [Errno 1] Operation not permitted: '/var/lib/pgadmin'
pgadmin_1 | /usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
pgadmin_1 | return io.open(fd, *args, **kwargs)
pgadmin_1 | [2020-06-07 11:48:44 +0000] [1] [INFO] Shutting down: Master
pgadmin_1 | [2020-06-07 11:48:44 +0000] [1] [INFO] Reason: Worker failed to boot.
I have tried many ways to solve this problem, such as:
OSError: [Errno 13] Permission denied: '/var/lib/pgadmin'
https://www.pgadmin.org/docs/pgadmin4/latest/container_deployment.html
and some other GitHub issues and their solutions. I also ran sudo chmod -R 777 ~/.laradock/data/pgadmin and sudo chmod -R 777 /var/lib/pgadmin to fix the permissions, but I still get the same error log. Can you help me with this? I think others are hitting this error on their local machines as well.
Thanks 🙂
You may try this:
sudo chown -R 5050:5050 ~/.laradock/data/pgadmin
then restart the container. Inside the container pgAdmin runs as:
uid=5050(pgadmin) gid=5050(pgadmin)
and the data directory is owned accordingly:
drwx------ 4 pgadmin pgadmin 56 Jan 27 08:25 pgadmin
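To confirm that the bind-mounted folder on the host now matches the uid/gid the dpage/pgadmin4 image runs as (5050/5050), a quick check like the following can help. This is a sketch; the path is the Laradock default and an assumption about your setup:

```python
import os

def owned_by(path: str, uid: int, gid: int) -> bool:
    """Return True if `path` is owned by the given uid and gid."""
    st = os.stat(path)
    return st.st_uid == uid and st.st_gid == gid

if __name__ == "__main__":
    # pgAdmin runs as uid/gid 5050 inside the dpage/pgadmin4 container
    path = os.path.expanduser("~/.laradock/data/pgadmin")
    print(owned_by(path, 5050, 5050))
```

If this prints False after the chown, the volume mapping in your docker-compose file is likely pointing somewhere else.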
As others have noted above, Permission denied: '/var/lib/pgadmin/sessions' in Docker comes down to the persistent local folder not having the correct owner.
After running sudo chown -R 5050:5050 ~/.laradock/data/pgadmin and restarting the container, the following error is no longer in my log:
PermissionError: [Errno 13] Permission denied:
A similar error happens when using Kubernetes and the pgadmin4 Helm chart from https://github.com/rowanruseler/helm-charts.
The solution is to set:
VolumePermissions:
  enabled: true
even when persistence is not enabled. This way the /var/lib/pgadmin folder in the container is also assigned the correct permissions, and the pgadmin4.db database can be created correctly.
Assuming the folder with pgadmin4.db is already committed to your repo and owned by a user other than pgadmin, you can do this:
postgres_interface:
  image: dpage/pgadmin4
  environment:
    - PGADMIN_DEFAULT_EMAIL=user@domain.com
    - PGADMIN_DEFAULT_PASSWORD=postgres
  ports:
    - "5050:80"
  user: root
  volumes:
    - ./env/local/pgadmin/pgadmin4.db:/pgadmin4.db
  entrypoint: /bin/sh -c "cp /pgadmin4.db /var/lib/pgadmin/pgadmin4.db && cd /pgadmin4 && /entrypoint.sh"
The only solution I can offer is to log in to the container with
docker-compose exec --user root pgadmin sh
and then run
chmod 0777 /var/lib/pgadmin -R
It may be better to build your own image from dpage/pgadmin4 and run these commands in advance.
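A minimal sketch of such a derived image, assuming the stock dpage/pgadmin4 base and that fixing ownership (rather than chmod 0777) at build time is acceptable:

```dockerfile
FROM dpage/pgadmin4

# Switch to root to prepare the data directory with the
# ownership pgAdmin expects, then drop back to the pgadmin user.
USER root
RUN mkdir -p /var/lib/pgadmin \
    && chown -R 5050:5050 /var/lib/pgadmin
USER pgadmin
```

Note that if you still bind-mount a host folder over /var/lib/pgadmin, the host folder's ownership wins, so the chown advice above still applies.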

Why is my Dockerized Flask app timing out on mail.send?

I'm trying to send mail from my Flask app, which is served by Gunicorn and Nginx. The connection to the Flask app times out when calling mail.send() from the container, but it works fine when running locally with Werkzeug's development server.
I tried changing the Gunicorn worker class from 'sync' to 'gevent', but the problem persists.
app.py
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix
from flask_sqlalchemy import SQLAlchemy
from flask_mail import Mail
app = Flask(__name__, instance_relative_config=True)
app.config.from_pyfile('config.py')
app.wsgi_app = ProxyFix(app.wsgi_app)
db = SQLAlchemy(app)
mail = Mail(app)
import views
views.py
from app import app, mail
from flask_mail import Message
import logging
@app.route('/')
def hello_world():
    email_msg = Message("Subject", sender="mail@mail.com", recipients=["someone@mail.com"])
    email_msg.body = "Hello!"
    logging.error("Starting email send /request")
    mail.send(email_msg)
    return 'Hello World!'
wsgi.py
from app import app
import logging
log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)
if __name__ == "__main__":
    app.run()  # host='0.0.0.0'
Dockerfile
...
CMD ["gunicorn", "-b", "0.0.0.0:8080", "--worker-class=gevent", "--workers=9","wsgi:app"]
I expect the email to be sent, but instead it just times-out.
Gunicorn output:
gbcodes | ERROR:root:Starting email send /request
gbcodes | ERROR:app:Exception on / [GET]
gbcodes | Traceback (most recent call last):
gbcodes | File "/usr/lib/python3.6/site-packages/flask/app.py", line 2446, in wsgi_app
gbcodes | response = self.full_dispatch_request()
gbcodes | File "/usr/lib/python3.6/site-packages/flask/app.py", line 1951, in full_dispatch_request
gbcodes | rv = self.handle_user_exception(e)
gbcodes | File "/usr/lib/python3.6/site-packages/flask/app.py", line 1820, in handle_user_exception
gbcodes | reraise(exc_type, exc_value, tb)
gbcodes | File "/usr/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
gbcodes | raise value
gbcodes | File "/usr/lib/python3.6/site-packages/flask/app.py", line 1949, in full_dispatch_request
gbcodes | rv = self.dispatch_request()
gbcodes | File "/usr/lib/python3.6/site-packages/flask/app.py", line 1935, in dispatch_request
gbcodes | return self.view_functions[rule.endpoint](**req.view_args)
gbcodes | File "/app/views.py", line 13, in hello_world
gbcodes | mail.send(email_msg)
gbcodes | File "/usr/lib/python3.6/site-packages/flask_mail.py", line 491, in send
gbcodes | with self.connect() as connection:
gbcodes | File "/usr/lib/python3.6/site-packages/flask_mail.py", line 144, in __enter__
gbcodes | self.host = self.configure_host()
gbcodes | File "/usr/lib/python3.6/site-packages/flask_mail.py", line 156, in configure_host
gbcodes | host = smtplib.SMTP_SSL(self.mail.server, self.mail.port)
gbcodes | File "/usr/lib/python3.6/smtplib.py", line 1031, in __init__
gbcodes | source_address)
gbcodes | File "/usr/lib/python3.6/smtplib.py", line 251, in __init__
gbcodes | (code, msg) = self.connect(host, port)
gbcodes | File "/usr/lib/python3.6/smtplib.py", line 336, in connect
gbcodes | self.sock = self._get_socket(host, port, self.timeout)
gbcodes | File "/usr/lib/python3.6/smtplib.py", line 1037, in _get_socket
gbcodes | self.source_address)
gbcodes | File "/usr/lib/python3.6/site-packages/gevent/socket.py", line 96, in create_connection
gbcodes | sock.connect(sa)
gbcodes | File "/usr/lib/python3.6/site-packages/gevent/_socket3.py", line 335, in connect
gbcodes | raise error(err, strerror(err))
gbcodes | TimeoutError: [Errno 110] Operation timed out
It turned out my VPS host, Scaleway, blocks outbound SMTP. I fixed it by switching to port 443, which the SMTP service I use also supports.
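To find out from inside the container which ports are reachable before pointing Flask-Mail at one, a small TCP probe like this can help. This is a sketch; the hostname and port list are placeholders, not values from the original post:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Common SMTP ports, plus 443 as the fallback used in the answer above.
    for port in (25, 465, 587, 443):
        print(port, port_reachable("smtp.example.com", port, timeout=3))
```

If only 443 comes back reachable, set MAIL_PORT accordingly in the Flask config (provided your mail provider listens there).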
