Building container image for AWS Lambda fails - python-3.x

I want to deploy a container image with some Python libraries and Python code to be used as an AWS Lambda function. I followed this blog post:
https://aws.amazon.com/de/blogs/machine-learning/using-container-images-to-run-pytorch-models-in-aws-lambda/
Locally on my machine I can build and run the Docker image successfully, so from the Docker perspective I thought everything was fine. But when I use the AWS SAM CLI, I get the following issue:
F:\AWS>sam build && sam deploy --guided
Your template contains a resource with logical ID "ServerlessRestApi", which is a reserved logical ID in AWS SAM. It could result in unexpected behaviors and is not recommended.
Building codeuri: F:\AWS runtime: None metadata: {'Dockerfile': 'Dockerfile', 'DockerContext': 'F:\\\AWS\\inference-ai'} architecture: x86_64 functions: InferenceFunction
Building image for InferenceFunction function
Setting DockerBuildArgs: {} for InferenceFunction function
Step 1/13 : FROM public.ecr.aws/lambda/python:3.9
---> 7afff3a8d417
Step 2/13 : COPY requirements.txt ./
---> Using cache
---> 3e5e3aa3f81d
Step 3/13 : RUN yum install -y gcc git unzip
---> Using cache
---> 9ce3cda3d21a
Step 4/13 : RUN yum install -y python3-devel.x86_64
---> Using cache
---> ffcd1e85aa7d
Step 5/13 : RUN python3.9 -m pip install cython
---> Using cache
---> 178bccdc8433
Step 6/13 : RUN python3.9 -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
---> Using cache
---> 389d5f5d0a59
Step 7/13 : RUN python3.9 -m pip install git+https://github.com/gautamchitnis/cocoapi.git#cocodataset-master#subdirectory=PythonAPI
---> Using cache
---> 6a1b0cc8c2c7
Step 8/13 : RUN python3.9 -m pip install -r requirements.txt
---> Using cache
---> 4111e9474c42
Step 9/13 : RUN curl https://gitlab.com/username/inference-ai/-/archive/main/inference-ai-main.zip --output project.zip
---> Using cache
---> 580214e52eb2
Step 10/13 : RUN unzip project.zip
---> Using cache
---> 7d49342aeca1
Step 11/13 : RUN mv inference-ai-main inference-ai
---> Using cache
---> c5ed0c584cbd
Step 12/13 : WORKDIR inference-ai
---> Using cache
---> 768f6092aa72
Step 13/13 : CMD ["test-infer-with-coco.lambda_handler"]
---> Using cache
---> d09be07424dd
Successfully built d09be07424dd
Successfully tagged inferencefunction:latest
Traceback (most recent call last):
File "runpy.py", line 194, in _run_module_as_main
File "runpy.py", line 87, in _run_code
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\__main__.py", line 12, in <module>
cli(prog_name="sam")
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 782, in main
rv = self.invoke(ctx)
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 610, in invoke
return callback(*args, **kwargs)
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 610, in invoke
return callback(*args, **kwargs)
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\telemetry\metric.py", line 166, in wrapped
raise exception # pylint: disable=raising-bad-type
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\telemetry\metric.py", line 124, in wrapped
return_value = func(*args, **kwargs)
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\utils\version_checker.py", line 41, in wrapped
actual_result = func(*args, **kwargs)
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\cli\main.py", line 87, in wrapper
return func(*args, **kwargs)
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\command.py", line 182, in cli
do_cli(
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\command.py", line 262, in do_cli
ctx.run()
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\build_context.py", line 255, in run
modified_template = builder.update_template(
File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\build\app_builder.py", line 321, in update_template
absolute_output_path = pathlib.Path(
File "pathlib.py", line 1180, in resolve
File "pathlib.py", line 205, in resolve
OSError: [WinError 123] Die Syntax für den Dateinamen, Verzeichnisnamen oder die Datenträgerbezeichnung ist falsch: 'inferencefunction:latest'
F:\AWS>
I have no idea why an invalid file name (the image tag with its colon character) is being used as a path here. How can I solve this issue?
Dockerfile:
FROM public.ecr.aws/lambda/python:3.9
COPY requirements.txt ./
RUN yum install -y gcc git unzip
RUN yum install -y python3-devel.x86_64
RUN python3.9 -m pip install cython
RUN python3.9 -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
RUN python3.9 -m pip install git+https://github.com/gautamchitnis/cocoapi.git#cocodataset-master#subdirectory=PythonAPI
RUN python3.9 -m pip install -r requirements.txt
RUN curl https://gitlab.com/username/inference-ai/-/archive/main/inference-ai-main.zip --output project.zip
RUN unzip project.zip
RUN mv inference-ai-main inference-ai
WORKDIR inference-ai
CMD ["test-infer-with-coco.lambda_handler"]
template.yaml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Globals:
  Function:
    Timeout: 50
    MemorySize: 5000
  Api:
    BinaryMediaTypes:
      - image/png
      - image/jpg
      - image/jpeg
Resources:
  InferenceFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      Architectures:
        - x86_64
      Events:
        Inference:
          Type: Api
          Properties:
            Path: /inference-ai
            Method: post
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: ./inference-ai
      # DockerTag: python3.9-v1
Outputs:
  InferenceApi:
    Description: "API Gateway endpoint URL for Prod stage for Inference function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/inference-ai/"
  InferenceFunction:
    Description: "Inference Lambda Function ARN"
    Value: !GetAtt InferenceFunction.Arn
  InferenceFunctionIamRole:
    Description: "Implicit IAM Role created for Inference function"
    Value: !GetAtt InferenceFunction.Arn
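For completeness, this is roughly how the image can be exercised locally before bringing SAM into the picture (the tag and the empty JSON payload are placeholders; the public.ecr.aws/lambda/python base image ships with the Lambda runtime interface emulator, which exposes a local invoke endpoint on port 8080):
docker build -t inferencefunction:latest ./inference-ai
docker run --rm -p 9000:8080 inferencefunction:latest
# from a second terminal, post a test event to the emulator:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d "{}"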

Related

pip not reading the ~/.pip/pip.conf

This is pertaining to JFrog Artifactory. pypi-public is our virtual repo and our internal repo pypi-internal is associated with pypi-public. I can see the package vapi_common on the web UI.
The command below is able to find the package:
pip search vapi_common --index=https://<username>:<apikey>@company.jfrog.io/artifactory/api/pypi/pypi-public/simple
However, if I use the same index URL in ~/.pip/pip.conf
[global]
index-url = https://<username>:<apikey>@company.jfrog.io.jfrog.io/artifactory/api/pypi/pypi-public/simple
and then run pip search vapi_common -vvv, it fails with the error below. As you can see, it is trying to reach pypi.org and is not honoring the index URL given in pip.conf:
pip search vapi_common -vvv
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "POST /pypi HTTP/1.1" 200 419
ERROR: Exception:
Traceback (most recent call last):
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 228, in _main
status = self.run(options, args)
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pip/_internal/commands/search.py", line 60, in run
pypi_hits = self.search(query, options)
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pip/_internal/commands/search.py", line 80, in search
hits = pypi.search({'name': query, 'summary': query}, 'or')
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/xmlrpc/client.py", line 1109, in __call__
return self.__send(self.__name, args)
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/xmlrpc/client.py", line 1450, in __request
response = self.__transport.request(
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pip/_internal/network/xmlrpc.py", line 45, in request
return self.parse_response(response.raw)
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/xmlrpc/client.py", line 1341, in parse_response
return u.close()
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/xmlrpc/client.py", line 655, in close
raise Fault(**self._stack[0])
xmlrpc.client.Fault: <Fault -32500: "RuntimeError: PyPI's XMLRPC API is currently disabled due to unmanageable load and will be deprecated in the near future. See https://status.python.org/ for more information.">
Please note that you yourself use pip search --index=…. That is, you should use the option index in pip.conf, not index-url: index is used by pip search, while index-url is used by pip download/pip install.
See the docs at https://pip.pypa.io/en/stable/reference/pip_search/#options
Fix the config:
pip config set global.index https://:@company.jfrog.io.jfrog.io/artifactory/api/pypi/pypi-public/simple
Perhaps even
pip config set global.index `pip config get global.index-url`
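For reference, the resulting ~/.pip/pip.conf would then carry both options, something like this (host and credentials are placeholders, as above):
[global]
# used by pip install / pip download
index-url = https://<username>:<apikey>@company.jfrog.io/artifactory/api/pypi/pypi-public/simple
# used by pip search
index = https://<username>:<apikey>@company.jfrog.io/artifactory/api/pypi/pypi-public/simple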

Docker selenium: Chrome timeout in headless mode

I am running selenium inside docker and getting this error on this call: driver.get(URL)
Traceback (most recent call last):
File "esdm.py", line 244, in <module>
upload_to_esdm(browser, version_url, args.im_file, args.build, args.user_name, args.password, args.apps_file)
File "esdm.py", line 78, in upload_to_esdm
browser.get(version_url)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 333, in get
self.execute(Command.GET, {'url': url})
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: net::ERR_CONNECTION_TIMED_OUT
(Session info: headless chrome=87.0.4280.66)
This is my code:
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--js-flags=--max-old-space-size=8196')
driver = webdriver.Chrome(options=chrome_options)
driver.set_window_size(1920, 1080)
driver.get(URL)
And my Dockerfile:
FROM ubuntu:18.04
WORKDIR /src
COPY . /src
RUN apt-get -y update
RUN apt-get install -y python3 python3-pip chromium-chromedriver
# set display port to avoid a crash
ENV DISPLAY=:99
RUN pip3 install selenium
ENTRYPOINT "/bin/bash"
This works properly on Windows, but it doesn't when running inside Docker.
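A diagnostic sketch rather than a confirmed fix: net::ERR_CONNECTION_TIMED_OUT is a network-level error, so it helps to first check whether the container can reach the URL at all, independently of Chrome (<image> and <URL> are placeholders for your image name and the address passed to driver.get()):
docker run --rm -it <image> /bin/bash
# inside the container, using the python3 installed by the Dockerfile:
python3 -c "import urllib.request; print(urllib.request.urlopen('<URL>', timeout=30).status)"
# if this also times out, the problem is container networking (DNS, proxy, firewall) rather than Selenium or Chrome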

Issue running fortinet.fortios in Ansible playbook (bad host file? plugins not installed?)

I am getting an error with this playbook and am not sure where to look. Perhaps something isn't defined right in my host file? (I'm told the playbook is good)
YML Playbook
- hosts: fortigates
  collections:
    - fortinet.fortios
  connection: httpapi
  vars:
    vdom: "root"
    ansible_httpapi_use_ssl: yes
    ansible_httpapi_validate_certs: no
    ansible_httpapi_port: 443
  tasks:
    - name: Configure global attributes.
      fortios_system_global:
        vdom: "{{ vdom }}"
        system_global:
          admintimeout: "23"
          hostname: "FortiGate02"
Host file
[fortigates]
fortigate01 ansible_host=192.168.0.103 ansible_user="admin" ansible_password="password"
[fortigates:vars]
ansible_network_os=fortinet.fortios.fortios
#ansible_python_interpreter=/usr/bin/python3
Error Output
TASK [Configure global attributes.] ****************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.module_utils.connection.ConnectionError: addinfourl instance has no attribute 'getheaders'
fatal: [fortigate01]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File "/root/.ansible/tmp/ansible-local-454799bt3QT/ansible-tmp-1593138436.55-45584-34169098305172/AnsiballZ_fortios_system_global.py", line 102, in \n _ansiballz_main()\n File "/root/.ansible/tmp/ansible-local-454799bt3QT/ansible-tmp-1593138436.55-45584-34169098305172/AnsiballZ_fortios_system_global.py", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File "/root/.ansible/tmp/ansible-local-454799bt3QT/ansible-tmp-1593138436.55-45584-34169098305172/AnsiballZ_fortios_system_global.py", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.fortinet.fortios.plugins.modules.fortios_system_global', init_globals=None, run_name='main', alter_sys=True)\n File "/usr/lib/python2.7/runpy.py", line 188, in run_module\n fname, loader, pkg_name)\n File "/usr/lib/python2.7/runpy.py", line 82, in _run_module_code\n mod_name, mod_fname, mod_loader, pkg_name)\n File "/usr/lib/python2.7/runpy.py", line 72, in _run_code\n exec code in run_globals\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/modules/fortios_system_global.py", line 2075, in \n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/modules/fortios_system_global.py", line 2043, in main\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/modules/fortios_system_global.py", line 1544, in fortios_system\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/modules/fortios_system_global.py", line 1533, in system_global\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/module_utils/fortios/fortios.py", line 173, in set\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/module_utils/fortios/fortios.py", line 146, in get_mkey\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/module_utils/fortios/fortios.py", line 137, in get_mkeyname\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/module_utils/fortios/fortios.py", line 126, in schema\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible/module_utils/connection.py", line 185, in rpc\nansible.module_utils.connection.ConnectionError: addinfourl instance has no attribute 'getheaders'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
I have installed the fortinet.fortios collection from Ansible Galaxy, as per the documentation:
# ansible-galaxy collection install fortinet.fortios
Process install dependency map
Starting collection install process
Skipping 'fortinet.fortios' as it is already installed
I had the same issue on Ubuntu 18.04 (WSL).
I fixed it by installing Ansible with pip3, so that it runs under Python 3 instead of the Python 2.7 interpreter shown in the traceback.
# remove ansible
sudo apt remove ansible
# install python3 & pip3
sudo apt install python3 python3-pip
# install ansible with pip3
pip3 install ansible --user
# update the environment PATH variable for ansible commands
echo "export PATH=$PATH:$HOME/.local/bin" >> ~/.bashrc
source ~/.bashrc
# install fortios module
ansible-galaxy collection install fortinet.fortios
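Afterwards it is worth confirming that the ansible on the PATH is the pip3-installed one and that it runs under Python 3 (the original traceback shows the module executing under /usr/lib/python2.7):
which ansible      # should point at ~/.local/bin/ansible
ansible --version  # the "python version" line should report 3.x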

Running Boto3 on Ubuntu 18.04

I cannot run boto3 on Ubuntu 18.04 (AWS). Initially I thought the code was broken, but it might actually be the installation itself.
ubuntu@ip-172-30-10-199:~$ sudo -H pip3 install boto3
Requirement already satisfied: boto3 in /usr/lib/python3/dist-packages
ubuntu@ip-172-30-10-199:~$ sudo -H pip3 install botocore
Requirement already satisfied: botocore in /usr/lib/python3/dist-packages
ubuntu@ip-172-30-10-199:~$ python3 --version
Python 3.6.9
ubuntu@ip-172-30-10-199:~$ sudo apt-get install python3-boto3
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3-boto3 is already the newest version (1.4.2-1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
ubuntu@ip-172-30-10-199:~$
s3_list.py file:
import boto3
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
Running the code:
ubuntu@ip-172-30-10-199:~$ python3 s3_list.py
Traceback (most recent call last):
File "s3_list.py", line 4, in <module>
s3 = boto3.resource('s3')
File "/home/ubuntu/boto3/__init__.py", line 100, in resource
return _get_default_session().resource(*args, **kwargs)
File "/home/ubuntu/boto3/session.py", line 389, in resource
aws_session_token=aws_session_token, config=config)
File "/home/ubuntu/boto3/session.py", line 263, in client
aws_session_token=aws_session_token, config=config)
File "/home/ubuntu/botocore/session.py", line 835, in create_client
client_config=config, api_version=api_version)
File "/home/ubuntu/botocore/client.py", line 79, in create_client
cls = self._create_client_class(service_name, service_model)
File "/home/ubuntu/botocore/client.py", line 109, in _create_client_class
base_classes=bases)
File "/home/ubuntu/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/home/ubuntu/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/home/ubuntu/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/home/ubuntu/boto3/utils.py", line 61, in _handler
module = import_module(module)
File "/home/ubuntu/boto3/utils.py", line 52, in import_module
__import__(name)
File "/home/ubuntu/boto3/s3/inject.py", line 15, in <module>
from boto3.s3.transfer import create_transfer_manager
File "/home/ubuntu/boto3/s3/transfer.py", line 127, in <module>
from s3transfer.exceptions import RetriesExceededError as \
File "/home/ubuntu/s3transfer/__init__.py", line 134, in <module>
import concurrent.futures
File "/home/ubuntu/concurrent/futures/__init__.py", line 8, in <module>
from concurrent.futures._base import (FIRST_COMPLETED,
File "/home/ubuntu/concurrent/futures/_base.py", line 414
raise exception_type, self._exception, self._traceback
^
SyntaxError: invalid syntax
ubuntu@ip-172-30-10-199:~$
I do have a file ~/.aws/credentials containing the respective keys. I do use these same keys to access the bucket from my laptop (S3 Browser).
I managed to find the answer.
Somehow it was related to S3 permissions: instead of using an access key/secret, I managed to attach an S3 FullControl role to my EC2 instance and it worked like a charm.
Mea culpa, and I think the error message could also be more informative. Please learn from my mistake :-)
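For anyone hitting the same SyntaxError, it may also be worth checking which copies of the packages Python is importing: the traceback above loads boto3, botocore, s3transfer and a concurrent.futures backport from /home/ubuntu/ rather than from the system site-packages. A quick, illustrative check:
python3 -c "import boto3, botocore, concurrent.futures; print(boto3.__file__, botocore.__file__, concurrent.futures.__file__)"
# if these point at copies under /home/ubuntu/, move them aside so the installed packages are used instead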

`pipenv install` timing out on RPi

I'm trying to bring up pipenv on a Raspberry Pi Zero W. The symptom I'm seeing is that pexpect times out when trying to create a tutorial project.
Admittedly, the RPi is a small machine, but I was monitoring memory usage and swap space during the process, and it wasn't running out of memory or swap.
Any idea what it was trying to do? Or how I should debug this? Here's the stack trace:
pi@blue-server:~/testdir $ pipenv install requests
Creating a virtualenv for this project…
Using /usr/bin/python3 (3.5.3) to create virtualenv…
Traceback (most recent call last):
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/expect.py", line 109, in expect_loop
return self.timeout()
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/expect.py", line 82, in timeout
raise TIMEOUT(msg)
pexpect.exceptions.TIMEOUT: <pexpect.popen_spawn.PopenSpawn object at 0xb555c950>
searcher: searcher_re:
0: EOF
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pi/.local/bin/pipenv", line 11, in <module>
sys.exit(cli())
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/cli.py", line 478, in uninstall
keep_outdated=keep_outdated,
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/core.py", line 2077, in do_uninstall
ensure_project(three=three, python=python)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/core.py", line 620, in ensure_project
three=three, python=python, site_packages=site_packages
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/core.py", line 569, in ensure_virtualenv
do_create_virtualenv(python=python, site_packages=site_packages)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/core.py", line 936, in do_create_virtualenv
click.echo(crayons.blue(c.out), err=True)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/delegator.py", line 99, in out
self.__out = self._pexpect_out
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/delegator.py", line 87, in _pexpect_out
result += self.subprocess.read()
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/spawnbase.py", line 441, in read
self.expect(self.delimiter)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/spawnbase.py", line 341, in expect
timeout, searchwindowsize, async_)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/spawnbase.py", line 369, in expect_list
return exp.expect_loop(timeout)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/expect.py", line 119, in expect_loop
return self.timeout(e)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/expect.py", line 82, in timeout
raise TIMEOUT(msg)
pexpect.exceptions.TIMEOUT: <pexpect.popen_spawn.PopenSpawn object at 0xb553a710>
searcher: searcher_re:
0: EOF
<pexpect.popen_spawn.PopenSpawn object at 0xb553a710>
searcher: searcher_re:
0: EOF
Here's the environment info:
pi@blue-server:~/foo $ uname -a
Linux blue-server 4.14.34+ #1110 Mon Apr 16 14:51:42 BST 2018 armv6l GNU/Linux
pi@blue-server:~/foo $ lsb_release -a
No LSB modules are available.
Distributor ID: Raspbian
Description: Raspbian GNU/Linux 9.4 (stretch)
Release: 9.4
Codename: stretch
pi@blue-server:~/foo $ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
pi@blue-server:~/foo $ free -m
              total        used        free      shared  buff/cache   available
Mem:            433          26         260           3         145         353
Swap:            99           0          99
Additional info
I noticed it was timing out inside a subprocess call. Using pdb, I traced it down to this command:
<Command ['/usr/bin/python3.5', '-m', 'pipenv.pew', 'new', 'foo-su43ObVR', '-d', '-p', '/usr/bin/python3.5']>
I tried replicating that call from the command line and it completed without error:
pi@blue-server:~/foo $ /usr/bin/python3.5 -m pipenv.pew new 'asdf' -d -p /usr/bin/python3.5
Already using interpreter /usr/bin/python3.5
Using base prefix '/usr'
New python executable in /home/pi/.local/share/virtualenvs/asdf/bin/python3.5
Also creating executable in /home/pi/.local/share/virtualenvs/asdf/bin/python
Installing setuptools, pip, wheel...done.
That seems hopeful, but I still don't know what causes the timeout.
TL;DR: the fix is to extend the timeout
Solved it. Because the RPi Zero is slow, it was timing out. Taking a clue from the discussion on GitHub, I noticed that it's now possible to extend the default timeout with environment variables. So this solved the problem:
pi@blue-server:~/testdir $ export PIPENV_TIMEOUT=500
pi@blue-server:~/testdir $ pipenv install requests
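To keep the longer timeout for future shells, it can also be appended to ~/.bashrc (500 seconds is simply the value that worked here):
echo 'export PIPENV_TIMEOUT=500' >> ~/.bashrc
source ~/.bashrc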
