I'm trying to deploy a script on Google Cloud Functions for the first time. I went through the documentation and figured out the basics. Then I started trying to deploy my actual script, and I'm hitting an error with the dependencies in my requirements.txt file. I'm at the stage where I don't know enough to be specific about the problem, so I'll list what I did.
After I run the gcloud command gcloud functions deploy FILENAME --runtime python37 (with my actual file name), I hit this error:
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed:
{
  "error": {
    "canonicalCode": "INVALID_ARGUMENT",
    "errorMessage": "`pip_download_wheels` had stderr output:\nERROR: Could not find a version that satisfies the requirement pywin32==227 (from -r requirements.txt (line 32)) (from versions: none)\nERROR: No matching distribution found for pywin32==227 (from -r requirements.txt (line 32))\n\nerror: `pip_download_wheels` returned code: 1",
    "errorType": "InternalError",
    "errorId": "8C994D6A"
  }
}
This is my requirements.txt file:
attrs==19.3.0
autobahn==20.4.3
Automat==20.2.0
cachetools==4.1.0
certifi==2020.4.5.1
cffi==1.14.0
chardet==3.0.4
constantly==15.1.0
cryptography==2.9.2
enum34==1.1.10
google-api-core==1.17.0
google-auth==1.14.1
google-cloud-bigquery==1.24.0
google-cloud-core==1.3.0
google-resumable-media==0.5.0
googleapis-common-protos==1.51.0
hyperlink==19.0.0
idna==2.9
incremental==17.5.0
kiteconnect==3.8.2
numpy==1.18.3
pandas==1.0.3
protobuf==3.11.3
pyarrow==0.17.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
PyHamcrest==2.0.2
pyOpenSSL==19.1.0
python-dateutil==2.8.1
pytz==2020.1
pywin32==227
requests==2.23.0
rsa==4.0
service-identity==18.1.0
six==1.14.0
tqdm==4.45.0
Twisted==20.3.0
txaio==20.4.1
urllib3==1.25.9
wincertstore==0.2
zope.interface==5.1.0
Can you help me figure out how to get past this error?
Edit: Based on the suggestion to keep only the required dependencies in the requirements.txt file, I tried that, and now I'm getting a slightly different error:
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed:
{
  "error": {
    "canonicalCode": "INVALID_ARGUMENT",
    "errorMessage": "`pip_download_wheels` had stderr output:\n WARNING: Legacy build of wheel for 'kiteconnect' created no files.\n Command arguments: /opt/python3.7/bin/python3.7 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'/tmp/pip-wheel-fdr9r30n/kiteconnect/setup.py'\"'\"'; __file__='\"'\"'/tmp/pip-wheel-fdr9r30n/kiteconnect/setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' bdist_wheel -d /tmp/pip-wheel-zkanpa3p\n Command output: [use --verbose to show]\nERROR: Failed to build one or more wheels\n\nerror: `pip_download_wheels` returned code: 1",
    "errorType": "InternalError",
    "errorId": "7EF920E4"
  }
}
The new requirements.txt file looks like this:
google-api-core==1.17.0
google-auth==1.14.1
google-cloud-bigquery==1.24.0
google-cloud-core==1.3.0
google-resumable-media==0.5.0
googleapis-common-protos==1.51.0
kiteconnect==3.8.2
numpy==1.18.3
pandas==1.0.3
pyarrow==0.17.0
python-dateutil==2.8.1
tqdm==4.45.0
The pywin32 package only provides distributions for the Windows platform, so you won't be able to install it in the Google Cloud Functions runtime.
Do you really need it? Your requirements.txt file looks like the output of pip freeze. You probably don't need all of those dependencies; it should only include the packages your function actually imports, and pip will resolve the rest.
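For example, if the function only uses the BigQuery client and kiteconnect, main.py might look like this (a hypothetical sketch, not your actual code):
from google.cloud import bigquery
from kiteconnect import KiteConnect

def handler(request):
    # hypothetical Cloud Function body: query BigQuery, call the Kite API
    client = bigquery.Client()
    kite = KiteConnect(api_key="...")
    rows = client.query("SELECT 1").result()
    return "ok"
Then requirements.txt only needs google-cloud-bigquery and kiteconnect; pip pulls in their transitive dependencies (google-auth, six, and so on) on its own.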
Goal: run .py files via dvc.yaml. There are stages before it in dvc.yaml that don't produce the error.
dvc exp run:
(venv) me@ubuntu-pcs:~/PycharmProjects/project$ dvc exp run
Stage 'inference' didn't change, skipping
Running stage 'load_data':
> load_data.py
/bin/bash: line 1: load_data.py: Permission denied
ERROR: failed to reproduce 'load_data': failed to run: load_data.py, exited with 126
dvc repro:
(venv) me@ubuntu-pcs:~/PycharmProjects/project$ dvc repro
Stage 'predict' didn't change, skipping
Stage 'evaluate' didn't change, skipping
Stage 'inference' didn't change, skipping
Running stage 'load_data':
> load_data.py
/bin/bash: line 1: load_data.py: Permission denied
ERROR: failed to reproduce 'load_data': failed to run: pdl1_lung_model/load_data.py, exited with 126
dvc doctor:
DVC version: 2.10.2 (pip)
---------------------------------
Platform: Python 3.9.12 on Linux-5.15.0-46-generic-x86_64-with-glibc2.35
Supports:
webhdfs (fsspec = 2022.5.0),
http (aiohttp = 3.8.1, aiohttp-retry = 2.5.2),
https (aiohttp = 3.8.1, aiohttp-retry = 2.5.2),
s3 (s3fs = 2022.5.0, boto3 = 1.21.21)
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/nvme0n1p5
Caches: local
Remotes: s3
Workspace directory: ext4 on /dev/nvme0n1p5
Repo: dvc, git
dvc exp run -v:
output.txt
dvc exp run -vv:
output2.txt
Solution 1
The .py files weren't being run as scripts. They need to be if you want to run one .py file per stage in dvc.yaml. To do so, append boilerplate code at the bottom of each .py file:
if __name__ == "__main__":
    # invoke the primary function in this .py file, with its params
    main()
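For load_data.py, the end of the file could look like this (a sketch; main() and its body stand in for the real code):
import pandas as pd  # or whatever load_data.py actually uses

def main(input_path="data/raw.csv"):
    # example body: read the raw data and write it out for the next stage
    df = pd.read_csv(input_path)
    df.to_csv("data/prepared.csv", index=False)

if __name__ == "__main__":
    main()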
Solution 2
chmod 777 ....py
Solution 3
I had forgotten the python in the stage's cmd:
load_data:
cmd: python pdl1_lung_model/load_data.py
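With python in cmd, DVC hands the script to the interpreter instead of executing the file directly, so the script no longer needs a shebang and the execute bit; that is exactly what the "Permission denied" failure with exit code 126 (command not executable) was complaining about.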
I am trying to set up Bazel with rules_nodejs for the first time, on a dummy project.
The project structure is organised as follows: the folder app1 contains a Node application, and the folder app2 will contain a Go-based application.
index.js is a very simple application:
const _ = require("lodash");
const numbers = [1, 5, 8, 10, 1, 5, 15, 42, 5];
const uniqNumbers = _.uniq(numbers);
console.log(uniqNumbers);
Running the application with the command:
bazel run //app1
it shows this error message:
ERROR: An error occurred during the fetch of repository 'app1':
Traceback (most recent call last):
File "/private/var/tmp/_bazel_developer/de6f8818d27e2451d76ec34773e78a6e/external/build_bazel_rules_nodejs/internal/npm_install/npm_install.bzl", line 965, column 13, in _yarn_install_impl
fail("yarn_install failed: %s (%s)" % (result.stdout, result.stderr))
Error in fail: yarn_install failed: yarn install v1.22.11
[1/4] Resolving packages...
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
(error Couldn't find any versions for "lodash" that matches "ˆ4.17.21"
)
ERROR: /Users/developer/node/bazel_play/WORKSPACE.bazel:22:13: fetching yarn_install rule //external:app1: Traceback (most recent call last):
File "/private/var/tmp/_bazel_developer/de6f8818d27e2451d76ec34773e78a6e/external/build_bazel_rules_nodejs/internal/npm_install/npm_install.bzl", line 965, column 13, in _yarn_install_impl
fail("yarn_install failed: %s (%s)" % (result.stdout, result.stderr))
Error in fail: yarn_install failed: yarn install v1.22.11
[1/4] Resolving packages...
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
(error Couldn't find any versions for "lodash" that matches "ˆ4.17.21"
)
ERROR: /Users/developer/node/bazel_play/app1/BUILD.bazel:3:14: //app1:app1 depends on @app1//lodash:lodash in repository @app1 which failed to fetch. no such package '@app1//lodash': yarn_install failed: yarn install v1.22.11
[1/4] Resolving packages...
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
(error Couldn't find any versions for "lodash" that matches "ˆ4.17.21"
)
ERROR: Analysis of target '//app1:app1' failed; build aborted: Analysis failed
INFO: Elapsed time: 0.591s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (1 packages loaded, 0 targets configured)
I assume the problem is that the lodash module is missing. How can I run yarn install through a Bazel command?
All source files can be found here:
https://github.com/softshipper/bazel_play
And also, could anyone please explain the purpose of the yarn_install rule?
I am following this tutorial: https://blog.paperspace.com/train-yolov5-custom-data/ to train YOLOv5 on a custom dataset. I followed the steps exactly as described, but when I run this command:
python3 train.py --img 640 --cfg yolov5s.yaml --hyp hyp.scratch.yaml --batch 32 --epochs 100 --data road_sign_data.yaml --weights yolov5s.pt --workers 24 --name yolo_road_det
I get this error:
File "/home/UbuntuUser/.local/lib/python3.8/site-packages/torch/serialization.py", line 242, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: [enforce fail at inline_container.cc:145] . PytorchStreamReader failed reading zip archive: failed finding central directory
I have searched on Google and found similar threads like this: https://discuss.pytorch.org/t/error-on-torch-load-pytorchstreamreader-failed/95103 and this: last.ckpt | RuntimeError: [enforce fail at inline_container.cc:145] . PytorchStreamReader failed reading zip archive: failed finding central directory, but I couldn't find a solution. The failure comes from here:
class _open_zipfile_reader(_opener):
def __init__(self, name_or_buffer) -> None:
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
I followed the steps of the tutorial above and I don't know how to fix this... Could you help me?
I want to use custom options with distributed and subprocess testing. I have two options added via addoption, --resources_dir and --output_dir, registered in conftest.py roughly like this (a simplified sketch, not the exact code):
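# conftest.py -- simplified sketch of how the two options are registered
def pytest_addoption(parser):
    parser.addoption("--resources_dir", action="store", required=True)
    parser.addoption("--output_dir", action="store", required=True)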
I try to start it with:
python3 -m pytest -vs --junitxml=/tmp/result_alert_test.xml --resources_dir=test/resources --output_dir=/tmp/ -n auto test_*
The error message:
Replacing crashed worker gw82
usage: -c [options] [file_or_dir] [file_or_dir] [...]
-c: error: the following arguments are required: --resources_dir, --output_dir
[gw83] node down: Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/execnet/gateway_base.py", line 1072, in executetask
Without xdist (-n auto), when I run the tests in a single process, it works:
python3 -m pytest -vs --junitxml=/tmp/result_alert_test.xml --resources_dir=test/resources --output_dir=/tmp/ test_*
If I start with this last command, it works in a single process with no errors:
=============================== test session starts ===============================
platform linux -- Python 3.5.2, pytest-3.5.0, py-1.5.3, pluggy-0.6.0 -- /usr/bin/python3
cachedir: ../../../../../.pytest_cache
rootdir: /, inifile:
plugins: xdist-1.22.2, forked-0.2
collected 115 items
https://github.com/pytest-dev/pytest/issues/2026
There is no fix for this bug yet. I worked around it with environment variables:
python3 -m pytest -vsx --full-trace --junitxml=${TEST_REPORT_DIR}/result_alert_test.xml --tx=popen//env:TEST_DIR=${TESTS_ROOT} --tx=popen//env:TEST_OUTPUT_DIR=${TEST_OUTPUT_DIR} -n auto -vs test_*
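In conftest.py the options can then fall back to those environment variables, so the xdist workers no longer need them on the command line. A sketch (option and variable names taken from the commands above):
import os

def pytest_addoption(parser):
    parser.addoption("--resources_dir", action="store",
                     default=os.environ.get("TEST_DIR"))
    parser.addoption("--output_dir", action="store",
                     default=os.environ.get("TEST_OUTPUT_DIR"))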
I'm trying to submit a job with PyTorch code to google-cloud-ml, so I wrote a "setup.py" file and added the "install_requires" option.
"setup.py"
from setuptools import find_packages
from setuptools import setup

REQUIRED_PACKAGES = ['http://download.pytorch.org/whl/cpu/torch-0.3.0.post4-cp27-cp27mu-linux_x86_64.whl','torchvision']

setup(
    name='trainer',
    version='0.1',
    install_requires=REQUIRED_PACKAGES,
    packages=find_packages(),
    include_package_data=True,
    description='My keras trainer application package.'
)
Then I submitted the job to google-cloud-ml, but it fails with this error message:
{
  insertId: "3m78xtf9czd0u"
  jsonPayload: {
    created: 1516845879.49039
    levelname: "ERROR"
    lineno: 829
    message: "Command '['pip', 'install', '--user', '--upgrade', '--force-reinstall', '--no-deps', u'trainer-0.1.tar.gz']' returned non-zero exit status 1"
    pathname: "/runcloudml.py"
  }
  labels: {
    compute.googleapis.com/resource_id: "6637909247101536087"
    compute.googleapis.com/resource_name: "cmle-training-master-5502b52646-0-ql9ds"
    compute.googleapis.com/zone: "us-central1-c"
    ml.googleapis.com/job_id: "run_ml_engine_pytorch_test_20180125_015752"
    ml.googleapis.com/job_id/log_area: "root"
    ml.googleapis.com/task_name: "master-replica-0"
    ml.googleapis.com/trial_id: ""
  }
  logName: "projects/exem-191100/logs/master-replica-0"
  receiveTimestamp: "2018-01-25T02:04:55.421517460Z"
  resource: {
    labels: {…}
    type: "ml_job"
  }
  severity: "ERROR"
  timestamp: "2018-01-25T02:04:39.490387916Z"
}
====================================================================
See detailed message here
So how can I use PyTorch on Google Cloud ML Engine?
I found a solution for setting up PyTorch on google-cloud-ml.
First,
you have to get a PyTorch .whl file and store it in a Google Storage bucket. You will then have a link to it in the bucket:
gs://bucketname/directory/torch-0.3.0.post4-cp27-cp27mu-linux_x86_64.whl
Which .whl file you need depends on your Python version, CUDA version, and so on.
Second,
you write the command line and the setup.py file, because you have to configure the google-cloud-ml job. The related link is submit_job_to_ml-engine. The setup.py file describes your setup; the related link is write_setup.py_file.
These are my command and setup.py file:
=====================================================================
"command"
#commandline code
JOB_NAME="run_ml_engine_pytorch_test_$(date +%Y%m%d_%H%M%S)"
REGION=us-central1
OUTPUT_PATH=gs://yourbucket
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir $OUTPUT_PATH \
--runtime-version 1.4 \
--module-name models.pytorch_test \
--package-path models/ \
--packages gs://yourbucket/directory/torch-0.3.0.post4-cp27-cp27mu-linux_x86_64.whl \
--region $REGION \
-- \
--verbosity DEBUG
=====================================================================
"setup.py"
from setuptools import find_packages
from setuptools import setup

REQUIRED_PACKAGES = ['torchvision']

setup(
    name='trainer',
    version='0.1',
    install_requires=REQUIRED_PACKAGES,
    packages=find_packages(),
    include_package_data=True,
    description='My pytorch trainer application package.'
)
=====================================================================
Third,
if you have experience submitting jobs to ml-engine, you probably know the file structure for packaging a training model: packaging_training_model. Follow that link to see how to pack the files.
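For the command above, the package layout ends up roughly like this (a sketch inferred from --module-name models.pytorch_test and --package-path models/):
.
├── setup.py
└── models/
    ├── __init__.py
    └── pytorch_test.py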
The actual error message is a bit buried, but it is this:
'install_requires' must be a string or list of strings containing
valid project/version requirement specifiers; Invalid requirement,
parse error at "'://downl'"
To use packages not hosted on PyPI, you need to use dependency_links (see this documentation). Something like this ought to work:
from setuptools import find_packages
from setuptools import setup

REQUIRED_PACKAGES = ['torchvision']
DEPENDENCY_LINKS = ['http://download.pytorch.org/whl/cpu/torch-0.3.0.post4-cp27-cp27mu-linux_x86_64.whl']

setup(
    name='trainer',
    version='0.1',
    install_requires=REQUIRED_PACKAGES,
    dependency_links=DEPENDENCY_LINKS,
    packages=find_packages(),
    include_package_data=True,
    description='My keras trainer application package.'
)
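Note that pip has since deprecated and then removed support for dependency_links in newer releases, so depending on the pip version on the training workers, passing the .whl from a GCS bucket with --packages (as in the other answer) may be the more reliable route.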