Exception in thread w2 when using Couchbase's cbbackup tool - multithreading

When I use Couchbase's cbbackup to back up my data, I hit the following error.
E:\couchbase\bin>cbbackup -m diff http://localhost:8091 E:\couchbase_backup -u Administrator -p password
[ ] 0.0% (0/estimated 7303 msgs)
bucket: beer-sample, msgs transferred...
: total | last | per sec
byte : 0 | 0 | 0.0
bucket: bucket-beer-sample, msgs transferred...
: total | last | per sec
byte : 0 | 0 | 0.0
[ ] 0.0% (0/estimated 586 msgs)
bucket: bucket-gamesim-sample, msgs transferred...
: total | last | per sec
byte : 0 | 0 | 0.0
[ ] 0.0% (0/estimated 31369 msgs)
bucket: bucket-travel-sample, msgs transferred...
: total | last | per sec
byte : 0 | 0 | 0.0
[####################] 100.0% (586/estimated 586 msgs)
bucket: gamesim-sample, msgs transferred...
: total | last | per sec
byte : 94693 | 94693 | 20342.2
Exception in thread w1:
Traceback (most recent call last):
File "threading.pyc", line 551, in __bootstrap_inner
File "threading.pyc", line 504, in run
File "pump.pyc", line 302, in run_worker
File "pump.pyc", line 360, in run
File "pump_dcp.pyc", line 180, in provide_batch
File "pump_dcp.pyc", line 508, in get_dcp_conn
File "pump_dcp.pyc", line 629, in setup_dcp_streams
File "pump_dcp.pyc", line 649, in request_dcp_stream
File "cb_bin_client.pyc", line 65, in _sendMsg
error: [Errno 10053]
I don't know how to resolve it. I've been stuck on it for two days, please help me. Thanks.
By the way, this didn't happen two days ago. Sometimes the exception reads "Exception in thread w2" instead.
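Error 10053 on Windows is WSAECONNABORTED ("software caused connection abort"), which usually means something on the local machine (firewall, antivirus, or an idle timeout) killed the socket rather than cbbackup itself failing. Since the abort is transient, one workaround is simply retrying the backup command. As an illustration only (this wrapper is not part of cbbackup), a minimal retry sketch:

```python
import subprocess
import sys
import time

def run_with_retries(cmd, attempts=3, delay=5):
    """Run a command, retrying on failure.

    Transient socket aborts (like Windows error 10053) sometimes
    clear on a retry once the interfering software backs off.
    """
    for attempt in range(1, attempts + 1):
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return True
        print(f"attempt {attempt} failed with code {result.returncode}",
              file=sys.stderr)
        time.sleep(delay)
    return False

# Hypothetical usage with the command from the question:
# run_with_retries(["cbbackup", "-m", "diff", "http://localhost:8091",
#                   r"E:\couchbase_backup", "-u", "Administrator",
#                   "-p", "password"])
```

If the abort always happens at the same point, though, retries won't help and the firewall/antivirus exclusion list is the place to look.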

Related

How to fix "RuntimeError: CUDA error: out of memory"

I have trained successfully on one GPU, but it doesn't work on multiple GPUs. I checked the code; it just sets a few values in a map and then starts multi-GPU training with calls like torch.distributed.barrier.
I ran the following setup, but it fails even with batch size = 1.
docker exec -e NVIDIA_VISIBLE_DEVICES=0,1,2,3 -it jy /bin/bash
os.environ["CUDA_VISIBLE_DEVICES"] = '0,1,2,3'
GPU usage (from nvidia-smi):
|===============================+======================+======================|
| 0 GeForce RTX 208... On | 00000000:3D:00.0 Off | N/A |
| 24% 26C P8 21W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 208... On | 00000000:3E:00.0 Off | N/A |
| 25% 27C P8 2W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 GeForce RTX 208... On | 00000000:40:00.0 Off | N/A |
| 25% 25C P8 20W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 GeForce RTX 208... On | 00000000:41:00.0 Off | N/A |
| 26% 25C P8 15W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
The error output:
/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
FutureWarning,
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
| distributed init (rank 2): env://
| distributed init (rank 1): env://
| distributed init (rank 3): env://
| distributed init (rank 0): env://
Traceback (most recent call last):
File "main_track.py", line 398, in <module>
main(args)
File "main_track.py", line 159, in main
utils.init_distributed_mode(args)
File "/jy/TransTrack/util/misc.py", line 459, in init_distributed_mode
torch.distributed.barrier()
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 2709, in barrier
work = default_pg.barrier(opts=opts)
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 355 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 357 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 358 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 356) of binary: /root/anaconda3/envs/pytorch171/bin/python3
Traceback (most recent call last):
File "/root/anaconda3/envs/pytorch171/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
)(*cmd_args)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
main_track.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-10-13_08:54:25
host : 2f923a848f88
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 356)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
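A common cause of an OOM at the very first `torch.distributed.barrier()` is that every rank creates its CUDA context on GPU 0 because no per-process device was set before the collective call. A hedged sketch (the helper name is mine, not from the question's code) of pinning each rank to its own GPU:

```python
import os

def device_for_local_rank(local_rank: int) -> str:
    """Map a launcher-assigned local rank to a per-process CUDA device."""
    return f"cuda:{local_rank}"

# In each worker process, before any collective call such as
# torch.distributed.barrier():
#
#   local_rank = int(os.environ["LOCAL_RANK"])
#   torch.cuda.set_device(device_for_local_rank(local_rank))
#
# Without this, all four ranks may allocate their contexts on cuda:0,
# and the first collective can fail with "CUDA error: out of memory"
# even though nvidia-smi shows the GPUs as idle.
```

Batch size does not matter here: the allocation that fails is the per-process CUDA context and NCCL buffers, not the training tensors.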

How to solve "Operation not permitted: '/var/lib/pgadmin'" error in laradock at Windows Subsystem for Linux?

I am using Laradock in my Laravel project to dockerize it with Nginx, Postgres, and pgAdmin. All the containers run fine except pgAdmin. Here is my error log:
pgadmin_1 | WARNING: Failed to set ACL on the directory containing the configuration database: [Errno 1] Operation not permitted: '/var/lib/pgadmin'
pgadmin_1 | Traceback (most recent call last):
pgadmin_1 | File "run_pgadmin.py", line 4, in <module>
pgadmin_1 | from pgAdmin4 import app
pgadmin_1 | File "/pgadmin4/pgAdmin4.py", line 92, in <module>
pgadmin_1 | app = create_app()
pgadmin_1 | File "/pgadmin4/pgadmin/__init__.py", line 241, in create_app
pgadmin_1 | create_app_data_directory(config)
pgadmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 40, in create_app_data_directory
pgadmin_1 | _create_directory_if_not_exists(config.SESSION_DB_PATH)
pgadmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 16, in _create_directory_if_not_exists
pgadmin_1 | os.mkdir(_path)
pgadmin_1 | PermissionError: [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
pgadmin_1 | sudo: setrlimit(RLIMIT_CORE): Operation not permitted
pgadmin_1 | [2020-06-07 11:48:43 +0000] [1] [INFO] Starting gunicorn 19.9.0
pgadmin_1 | [2020-06-07 11:48:43 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
pgadmin_1 | [2020-06-07 11:48:43 +0000] [1] [INFO] Using worker: threads
pgadmin_1 | /usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
pgadmin_1 | return io.open(fd, *args, **kwargs)
pgadmin_1 | [2020-06-07 11:48:43 +0000] [83] [INFO] Booting worker with pid: 83
pgadmin_1 | [2020-06-07 11:48:44 +0000] [83] [ERROR] Exception in worker process
pgadmin_1 | Traceback (most recent call last):
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
pgadmin_1 | worker.init_process()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/gthread.py", line 104, in init_process
pgadmin_1 | super(ThreadWorker, self).init_process()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 129, in init_process
pgadmin_1 | self.load_wsgi()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
pgadmin_1 | self.wsgi = self.app.wsgi()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
pgadmin_1 | self.callable = self.load()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
pgadmin_1 | return self.load_wsgiapp()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
pgadmin_1 | return util.import_app(self.app_uri)
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/util.py", line 350, in import_app
pgadmin_1 | __import__(module)
pgadmin_1 | File "/pgadmin4/run_pgadmin.py", line 4, in <module>
pgadmin_1 | from pgAdmin4 import app
pgadmin_1 | File "/pgadmin4/pgAdmin4.py", line 92, in <module>
pgadmin_1 | app = create_app()
pgadmin_1 | File "/pgadmin4/pgadmin/__init__.py", line 241, in create_app
pgadmin_1 | create_app_data_directory(config)
pgadmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 40, in create_app_data_directory
pgadmin_1 | _create_directory_if_not_exists(config.SESSION_DB_PATH)
pgadmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 16, in _create_directory_if_not_exists
pgadmin_1 | os.mkdir(_path)
pgadmin_1 | PermissionError: [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
pgadmin_1 | [2020-06-07 11:48:44 +0000] [83] [INFO] Worker exiting (pid: 83)
pgadmin_1 | WARNING: Failed to set ACL on the directory containing the configuration database: [Errno 1] Operation not permitted: '/var/lib/pgadmin'
pgadmin_1 | /usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
pgadmin_1 | return io.open(fd, *args, **kwargs)
pgadmin_1 | [2020-06-07 11:48:44 +0000] [1] [INFO] Shutting down: Master
pgadmin_1 | [2020-06-07 11:48:44 +0000] [1] [INFO] Reason: Worker failed to boot.
I have tried many ways to solve this problem, such as:
OSError: [Errno 13] Permission denied: '/var/lib/pgadmin'
https://www.pgadmin.org/docs/pgadmin4/latest/container_deployment.html
and some other GitHub issues and their solutions. I also ran the sudo chmod -R 777 ~/.laradock/data/pgadmin and sudo chmod -R 777 /var/lib/pgadmin commands to fix the permissions, but I still get the same error log. Can you help me with this? I think others are hitting this error on their local machines as well.
Thanks 🙂
You may try this:
sudo chown -R 5050:5050 ~/.laradock/data/pgadmin
Then restart the container, because inside the container pgAdmin runs as:
uid=5050(pgadmin) gid=5050(pgadmin)
and the data directory is owned by that user:
drwx------ 4 pgadmin pgadmin 56 Jan 27 08:25 pgadmin
As others have noted above, I found that Permission denied: '/var/lib/pgadmin/sessions' in Docker comes down to the persistent local folder not having the correct user permissions.
After running sudo chown -R 5050:5050 ~/.laradock/data/pgadmin and restarting the container, the following error is no longer in my log:
PermissionError: [Errno 13] Permission denied:
A similar error happens when using Kubernetes and the pgadmin4 helm chart from https://github.com/rowanruseler/helm-charts.
The solution is to set:
VolumePermissions:
  enabled: true
even when persistence is not enabled. This way the /var/lib/pgadmin folder inside the container also gets the correct permissions, and the pgadmin4.db database can be created correctly.
Assuming you already have a pgadmin4.db in your git repo owned by a user other than pgadmin, you can do this:
postgres_interface:
  image: dpage/pgadmin4
  environment:
    - PGADMIN_DEFAULT_EMAIL=user@domain.com
    - PGADMIN_DEFAULT_PASSWORD=postgres
  ports:
    - "5050:80"
  user: root
  volumes:
    - ./env/local/pgadmin/pgadmin4.db:/pgadmin4.db
  entrypoint: /bin/sh -c "cp /pgadmin4.db /var/lib/pgadmin/pgadmin4.db && cd /pgadmin4 && /entrypoint.sh"
The only solution I can provide is to log in to the container with
docker-compose exec --user root pgadmin sh
and then
chmod 0777 /var/lib/pgadmin -R
It may be better to create your own Dockerfile based on dpage/pgadmin4 and run these commands at build time.
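A sketch of such a derived image, assuming the official dpage/pgadmin4 image's default pgadmin user and /var/lib/pgadmin data directory (verify these against the image version you use):

```dockerfile
# Hypothetical derived image that bakes the permission fix in at build time
FROM dpage/pgadmin4

USER root
RUN mkdir -p /var/lib/pgadmin && chown -R pgadmin:pgadmin /var/lib/pgadmin

USER pgadmin
```

Note that a bind-mounted host directory will still override these permissions at runtime, so the chown on the host side remains necessary for persistent volumes.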

Unable to connect with the pymqi client: FAILED: MQRC_ENVIRONMENT_ERROR

I am getting the below error while connecting to IBM MQ using the pymqi library.
It's a clustered MQ channel.
Traceback (most recent call last):
File "postToQueue.py", line 432, in <module>
qmgr = pymqi.connect(queue_manager, channel, conn_info)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 2608, in connect
qmgr.connect_tcp_client(queue_manager or '', CD(), channel, conn_info, user, password)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1441, in connect_tcp_client
self.connect_with_options(name, cd, user=user, password=password)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1423, in connect_with_options
raise MQMIError(rv[1], rv[2])
pymqi.MQMIError: MQI Error. Comp: 2, Reason 2012: FAILED: MQRC_ENVIRONMENT_ERROR
Please see my code below.
queue_manager = 'quename here'
channel = 'channel name here'
host ='host-name here'
port = '2333'
queue_name = 'queue name here'
message = 'my message here'
conn_info = '%s(%s)' % (host, port)
print(conn_info)
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
queue.put(message)
print("message sent")
queue.close()
qmgr.disconnect()
I get the error at the line below:
qmgr = pymqi.connect(queue_manager, channel, conn_info)
I added the IBM client to the Scripts folder as well. I am using Windows 10, Python 3.8.1, and the IBM MQ 9.1 Windows client installation image. Below is the FDC header:
+-----------------------------------------------------------------------------+
| |
| WebSphere MQ First Failure Symptom Report |
| ========================================= |
| |
| Date/Time :- Tue January 28 2020 16:27:51 Eastern Standard Time |
| UTC Time :- 1580246871.853000 |
| UTC Time Offset :- -300 (Eastern Standard Time) |
| Host Name :- CA-LDLD0SQ2 |
| Operating System :- Windows 10 Enterprise x64 Edition, Build 17763 |
| PIDS :- 5724H7251 |
| LVLS :- 8.0.0.11 |
| Product Long Name :- IBM MQ for Windows (x64 platform) |
| Vendor :- IBM |
| O/S Registered :- 0 |
| Data Path :- C:\Python\Scripts\IBM |
| Installation Path :- C:\Python |
| Installation Name :- MQNI08000011 (126) |
| License Type :- Unknown |
| Probe Id :- XC207013 |
| Application Name :- MQM |
| Component :- xxxInitialize |
| SCCS Info :- F:\build\slot1\p800_P\src\lib\cs\amqxeida.c, |
| Line Number :- 5085 |
| Build Date :- Dec 12 2018 |
| Build Level :- p800-011-181212.1 |
| Build Type :- IKAP - (Production) |
| UserID :- alekhya.machiraju |
| Process Name :- C:\Python\python.exe |
| Arguments :- |
| Addressing mode :- 32-bit |
| Process :- 00010908 |
| Thread :- 00000001 |
| Session :- 00000001 |
| UserApp :- TRUE |
| Last HQC :- 0.0.0-0 |
| Last HSHMEMB :- 0.0.0-0 |
| Last ObjectName :- |
| Major Errorcode :- xecF_E_UNEXPECTED_SYSTEM_RC |
| Minor Errorcode :- OK |
| Probe Type :- INCORROUT |
| Probe Severity :- 2 |
| Probe Description :- AMQ6090: MQM could not display the text for error |
| 536895781. |
| FDCSequenceNumber :- 0 |
| Comment1 :- WinNT error 1082155270 from Open ccsid.tbl. |
| |
+-----------------------------------------------------------------------------+
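Reason 2012 (MQRC_ENVIRONMENT_ERROR) on connect frequently indicates an environment mismatch rather than a network problem. Note that the FDC header above reports a 32-bit addressing mode; a quick first check (offered as a hypothesis, not a confirmed diagnosis) is whether the Python interpreter's bitness matches the installed MQ client libraries:

```python
import platform
import struct

# Pointer size in bits: 32 for a 32-bit interpreter, 64 for a 64-bit one.
python_bits = struct.calcsize("P") * 8
print(f"Python is {python_bits}-bit on {platform.system()}")

# If this prints 64 but the MQ client libraries found on PATH are 32-bit
# (or vice versa), pymqi cannot load them correctly and connect() can fail
# with MQRC_ENVIRONMENT_ERROR before anything reaches the network.
```

The "Open ccsid.tbl" line in the FDC is also worth checking: it suggests the client could not read its data files from the Data Path shown above.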

Filter and update a particular column of each row based on multiple column values in the same row, with Python SQLAlchemy

Here's a SQL table
|---------------------|------------------|---------------------|------------------|
| ID | StartDate | EndDate | Status |
|---------------------|------------------|---------------------|------------------|
| 0 | 2019-12-19T10:00 | 2019-12-28T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
| 1 | 2019-11-19T10:00 | 2019-11-28T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
| 2 | 2019-12-13T10:00 | 2019-12-17T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
| 3 | 2019-10-19T10:00 | 2019-10-28T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
| 4 | 2019-12-24T10:00 | 2019-12-28T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
I want to update the Status column of each row based on that same row's StartDate and EndDate values.
The condition is:
if StartDate < current_date and EndDate < current_date
then set that particular row's Status to Inactive.
If current_date is 2019-12-13T10:00,
the resulting output of this operation should be:
|---------------------|------------------|---------------------|------------------|
| ID | StartDate | EndDate | Status |
|---------------------|------------------|---------------------|------------------|
| 0 | 2019-12-19T10:00 | 2019-12-28T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
| 1 | 2019-11-19T10:00 | 2019-11-28T10:00 | Inactive |
|---------------------|------------------|---------------------|------------------|
| 2 | 2019-12-13T10:00 | 2019-12-17T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
| 3 | 2019-10-19T10:00 | 2019-10-28T10:00 | Inactive |
|---------------------|------------------|---------------------|------------------|
| 4 | 2019-12-24T10:00 | 2019-12-28T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
I tried
DBSession.query(User).filter(and_(User.c.Status=="Active",User.c.StartDate < current_date, User.c.EndDate < current_date)).update({"Status":"Inactive"})
I also tried this:
from sqlalchemy import and_, func, update
DBSession.query(User).filter(and_(User.c.Status=="Active",func.date(User.c.StartDate) < current_date, func.date(User.c.EndDate) < current_date)).update({"Status":"Inactive"})
source: SQLAlchemy: how to filter date field?
but I get this error
> Traceback (most recent call last): File
> "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 2463, in
> __call__
> return self.wsgi_app(environ, start_response) File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 2449, in
> wsgi_app
> response = self.handle_exception(e) File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 1866, in
> handle_exception
> reraise(exc_type, exc_value, tb) File "C:\Users\Pl\Envs\r\lib\site-packages\flask\_compat.py", line 39, in
> reraise
> raise value File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 2446, in
> wsgi_app
> response = self.full_dispatch_request() File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 1951, in
> full_dispatch_request
> rv = self.handle_user_exception(e) File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 1820, in
> handle_user_exception
> reraise(exc_type, exc_value, tb) File "C:\Users\Pl\Envs\r\lib\site-packages\flask\_compat.py", line 39, in
> reraise
> raise value File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 1949, in
> full_dispatch_request
> rv = self.dispatch_request() File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 1935, in
> dispatch_request
> return self.view_functions[rule.endpoint](**req.view_args) File "D:\cs\serverv0.6.py", line 145, in campaign
> DBSession.query(User).filter(and_(User.c.StartDate < current_date, User.c.EndDate < current_date)).update({"status": "Inactive"}) File
> "C:\Users\Pl\Envs\r\lib\site-packages\sqlalchemy\orm\query.py", line
> 3862, in update
> update_op.exec_() File "C:\Users\Pl\Envs\r\lib\site-packages\sqlalchemy\orm\persistence.py",
> line 1692, in exec_
> self._do_pre_synchronize() File "C:\Users\Pl\Envs\r\lib\site-packages\sqlalchemy\orm\persistence.py",
> line 1754, in _do_pre_synchronize
> target_cls = query._mapper_zero().class_ AttributeError: 'NoneType' object has no attribute 'class_'
What is going wrong?
I found this workaround:
active_users = userTableSession.query(userTable).filter(userTable.c.Status == 'Active').all()
active_users_df = pd.DataFrame(active_users)
# targeting the end date
active_users_df['EndDates'] = pd.to_datetime(active_users_df['EndDate'], format = '%Y-%m-%dT%H:%M')
# fetch the current date
current_date_time = datetime.now().strftime('%Y-%m-%dT%H:%M')
# pick out the rows end date have not expired
expired_user = active_users_df.loc[(active_users_df['EndDates'] < current_date_time)]
# fetch the user ID in a list
inactive_user_list = list(expired_user['ID'])
print(inactive_user_list)
# update the table
u = userTable.update().values(Status="Inactive").where(userTable.c.ID.in_(inactive_user_list))
userTableSession.execute(u)
userTableSession.commit()
Can anyone post a lazier (less roundabout) answer than this?
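For what it's worth, the original AttributeError comes from calling Query.update() on a Core Table (accessed via User.c) rather than a mapped ORM class; the Core update() construct avoids that code path entirely and also avoids the pandas round-trip. A self-contained sketch against an in-memory SQLite database (table and column names taken from the question; since the dates are stored as ISO-8601 strings, they compare correctly as plain text):

```python
from sqlalchemy import (create_engine, Table, Column, Integer, String,
                        MetaData, and_)

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()
user_table = Table(
    "users", metadata,
    Column("ID", Integer, primary_key=True),
    Column("StartDate", String),  # ISO-8601 strings sort lexicographically
    Column("EndDate", String),
    Column("Status", String),
)
metadata.create_all(engine)

current_date = "2019-12-13T10:00"
with engine.begin() as conn:
    conn.execute(user_table.insert(), [
        {"ID": 0, "StartDate": "2019-12-19T10:00",
         "EndDate": "2019-12-28T10:00", "Status": "Active"},
        {"ID": 1, "StartDate": "2019-11-19T10:00",
         "EndDate": "2019-11-28T10:00", "Status": "Active"},
        {"ID": 3, "StartDate": "2019-10-19T10:00",
         "EndDate": "2019-10-28T10:00", "Status": "Active"},
    ])
    # One UPDATE statement does the filter and the write together.
    conn.execute(
        user_table.update()
        .where(and_(user_table.c.Status == "Active",
                    user_table.c.StartDate < current_date,
                    user_table.c.EndDate < current_date))
        .values(Status="Inactive")
    )
    statuses = {row.ID: row.Status for row in conn.execute(user_table.select())}
```

This pushes the whole operation into the database instead of loading rows into a DataFrame and writing IDs back, which should be both lazier and faster.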

Why is requirements.txt not being installed on deployment to AppEngine?

I'm attempting to upgrade an existing project to the new Python 3 App Engine Standard Environment. I'm able to deploy my application code, however the app is crashing because it cannot find dependencies that are defined in the requirements.txt file. The app file structure looks like this:
|____requirements.txt
|____dispatch.yaml
|____dashboard
| |____dashboard.yaml
| |____static
| | |____gen
| | | |____favicon.ico
| | | |____fonts
| | | | |____MaterialIcons-Regular.012cf6a1.woff
| | | |____app.js
| | |____img
| | | |____avatar-06.png
| | | |____avatar-07.png
| | | |____avatar-05.png
| | | |____avatar-04.png
| |____templates
| | |____gen
| | | |____index.html
| |____main.py
| |____.gcloudignore
|____.gcloudignore
And the requirements.txt file looks like this:
Flask==0.12.2
pyjwt==1.6.1
flask-cors==3.0.3
requests==2.19.1
google-auth==1.5.1
pillow==5.3.0
grpcio-tools==1.16.1
google-cloud-storage==1.13.0
google-cloud-firestore==0.30.0
requests-toolbelt==0.8.0
Werkzeug<0.13.0,>=0.12.0
firestore-model>=0.0.2
After deploying, when I visit the site on the web, I get a 502. The GCP Console Error Reporting service indicates the error is thrown from a line in main.py where it attempts to import one of the above dependencies: ModuleNotFoundError: No module named 'google'
I've tried moving the requirements.txt into the dashboard folder and get the same result.
Stack Trace:
Traceback (most recent call last):
File "/env/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/env/lib/python3.7/site-packages/gunicorn/workers/gthread.py", line 104, in init_process
super(ThreadWorker, self).init_process()
File "/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line 129, in init_process
self.load_wsgi()
File "/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
self.wsgi = self.app.wsgi()
File "/env/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
return self.load_wsgiapp()
File "/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
return util.import_app(self.app_uri)
File "/env/lib/python3.7/site-packages/gunicorn/util.py", line 350, in import_app
__import__(module)
File "/srv/main.py", line 12, in <module>
from google.cloud import storage
ModuleNotFoundError: No module named 'google'
A few things could be going wrong. Make sure that:
Your requirements.txt file is in the same directory as your main.py file
Your .gcloudignore is not ignoring your requirements.txt file
You are deploying from the same directory as requirements.txt and main.py
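On the second point, an overly broad pattern in .gcloudignore can silently exclude requirements.txt from the upload. As a rough illustration only (real .gcloudignore files use gitignore-style syntax, which fnmatch merely approximates; the patterns below are hypothetical, not from the question):

```python
from fnmatch import fnmatch

def would_upload(path, ignore_patterns):
    """Rough check: True if no ignore pattern matches the path."""
    return not any(fnmatch(path, pattern) for pattern in ignore_patterns)

# Hypothetical .gcloudignore contents
patterns = [".gcloudignore", "*.pyc", "__pycache__/*", "requirements.txt"]

print(would_upload("main.py", patterns))           # uploaded
print(would_upload("requirements.txt", patterns))  # excluded by the last line!
```

To see exactly what your gcloud version would upload, `gcloud meta list-files-for-upload` (run from the deploy directory) is a more authoritative check than any local approximation.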
