pip not reading the ~/.pip/pip.conf - python-3.x

This pertains to JFrog Artifactory. pypi-public is our virtual repo, and our internal pypi-internal is associated with pypi-public. I can see the package vapi_common in the web UI.
The below command is able to find the package:
pip search vapi_common --index=https://<username>:<apikey>@company.jfrog.io/artifactory/api/pypi/pypi-public/simple
However, if I use the same index-url in ~/.pip/pip.conf
[global]
index-url = https://<username>:<apikey>@company.jfrog.io/artifactory/api/pypi/pypi-public/simple
and then use the below command
pip search vapi_common -vvv fails with the below error. As you can see, it is trying to reach pypi.org and is not honoring the index URL given in pip.conf:
pip search vapi_common -vvv
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "POST /pypi HTTP/1.1" 200 419
ERROR: Exception:
Traceback (most recent call last):
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 228, in _main
status = self.run(options, args)
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pip/_internal/commands/search.py", line 60, in run
pypi_hits = self.search(query, options)
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pip/_internal/commands/search.py", line 80, in search
hits = pypi.search({'name': query, 'summary': query}, 'or')
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/xmlrpc/client.py", line 1109, in __call__
return self.__send(self.__name, args)
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/xmlrpc/client.py", line 1450, in __request
response = self.__transport.request(
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pip/_internal/network/xmlrpc.py", line 45, in request
return self.parse_response(response.raw)
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/xmlrpc/client.py", line 1341, in parse_response
return u.close()
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/xmlrpc/client.py", line 655, in close
raise Fault(**self._stack[0])
xmlrpc.client.Fault: <Fault -32500: "RuntimeError: PyPI's XMLRPC API is currently disabled due to unmanageable load and will be deprecated in the near future. See https://status.python.org/ for more information.">

Please note that you yourself use pip search --index=…. That is, you should use the option index in pip.conf, not index-url: index is for pip search, index-url is for pip download/install.
See the docs at https://pip.pypa.io/en/stable/reference/pip_search/#options
Fix config:
pip config set global.index https://<username>:<apikey>@company.jfrog.io/artifactory/api/pypi/pypi-public/simple
Perhaps even
pip config set global.index `pip config get global.index-url`
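Putting the two options together, a ~/.pip/pip.conf that serves both pip search and pip install/download would carry both keys. The placeholders and URL below are carried over from the question; that Artifactory's /simple endpoint also answers pip search is an assumption based on the working command above.

```ini
[global]
# used by pip install / pip download
index-url = https://<username>:<apikey>@company.jfrog.io/artifactory/api/pypi/pypi-public/simple
# used by pip search
index = https://<username>:<apikey>@company.jfrog.io/artifactory/api/pypi/pypi-public/simple
```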

Unable to run index migration in OpenSearch

I have a Docker Compose setup where a Django backend, OpenSearch, and OpenSearch Dashboards are running. I have connected the backend to OpenSearch and I'm able to query it successfully. I'm trying to create indexes using this command inside the Docker container.
./manage.py opensearch --rebuild
Reference: https://django-opensearch-dsl.readthedocs.io/en/latest/getting_started/#create-and-populate-opensearchs-indices
I get the following error when I run the above command:
root@ed186e462ca3:/app# ./manage.py opensearch --rebuild
/usr/local/lib/python3.6/site-packages/OpenSSL/crypto.py:8: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography import utils, x509
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 224, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 37, in load_command_class
return module.Command()
File "/usr/local/lib/python3.6/site-packages/django_opensearch_dsl/management/commands/opensearch.py", line 32, in __init__
if settings.TESTING: # pragma: no cover
File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 80, in __getattr__
val = getattr(self._wrapped, name)
AttributeError: 'Settings' object has no attribute 'TESTING'
Sentry is attempting to send 1 pending error messages
Waiting up to 2 seconds
Press Ctrl-C to quit
I'm not sure where I'm going wrong. Any help would be greatly appreciated.
TIA
For future reference, this was indeed a bug in django-opensearch-dsl which was fixed in the 0.3.0 release.
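For anyone stuck on a version before 0.3.0: the traceback shows the management command unconditionally reading a settings.TESTING attribute that plain Django projects don't define, so defining it yourself should also work around the crash. A sketch, assuming a standard settings.py:

```python
# settings.py -- workaround sketch for django-opensearch-dsl < 0.3.0,
# whose management command unconditionally reads settings.TESTING.
# The attribute name comes straight from the traceback above.
import sys

# True when running under `manage.py test`, False otherwise
TESTING = "test" in sys.argv
```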

Jupyter labextension Fail to build, error package.json cannot be found

I am having an issue trying to run 'jupyter labextension build'. Any guidance would be greatly appreciated.
Here is the jupyter labextension list output:
JupyterLab v3.0.14
C:\Users\xxxx.conda\envs\jz\share\jupyter\labextensions
@jupyter-widgets/jupyterlab-manager v3.0.0 enabled ok (python, jupyterlab_widgets)
@voila-dashboards/jupyterlab-preview v2.0.2 enabled ok (python, voila)
Other labextensions (built into JupyterLab)
app dir: C:\Users\xxxx.conda\envs\jz\share\jupyter\lab
jupyterlab-plotly v4.14.3 enabled ok
plotlywidget v4.14.3 enabled ok
As you can see, everything is pretty much up to date and in 'sync'.
The log looks like this:
Building extension in C:\Users\xxxxx
Traceback (most recent call last):
File "C:\Users\xxxxx.conda\envs\jz\lib\site-packages\jupyterlab\debuglog.py", line 47, in debug_logging
yield
File "C:\Users\xxxxx.conda\envs\jz\lib\site-packages\jupyterlab\labextensions.py", line 128, in start
ans = self.run_task()
File "C:\Users\xxxxx.conda\envs\jz\lib\site-packages\jupyterlab\labextensions.py", line 227, in run_task
build_labextension(self.extra_args[0], logger=self.log, development=self.development, static_url=self.static_url or None, source_map = self.source_map,
File "C:\Users\xxxxx.conda\envs\jz\lib\site-packages\jupyterlab\federated_labextensions.py", line 168, in build_labextension
builder = _ensure_builder(ext_path, core_path)
File "C:\Users\xxxxx.conda\envs\jz\lib\site-packages\jupyterlab\federated_labextensions.py", line 226, in _ensure_builder
with open(osp.join(ext_path, 'package.json')) as fid:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\xxxxx\package.json'
Exiting application: lab
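The traceback shows _ensure_builder opening package.json relative to the extension path passed to jupyter labextension build; here that resolves to the bare user directory, which is not an extension source tree. A minimal sketch of the failing check (the helper name below is hypothetical):

```python
import json
import os.path as osp

def read_extension_package_json(ext_path):
    # Mirrors what jupyterlab's _ensure_builder does: it opens
    # <ext_path>/package.json, so running `jupyter labextension build`
    # against a directory that is not an extension source tree fails
    # with exactly the FileNotFoundError shown in the traceback.
    with open(osp.join(ext_path, "package.json")) as fid:
        return json.load(fid)
```

So the command needs to be run with the extension's source directory (the one containing package.json) as its argument, not the home directory.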

Issue running fortinet.fortios in Ansible playbook (bad host file? plugins not installed?)

I am getting an error with this playbook and am not sure where to look. Perhaps something isn't defined right in my host file? (I'm told the playbook is good)
YML Playbook
- hosts: fortigates
  collections:
    - fortinet.fortios
  connection: httpapi
  vars:
    vdom: "root"
    ansible_httpapi_use_ssl: yes
    ansible_httpapi_validate_certs: no
    ansible_httpapi_port: 443
  tasks:
    - name: Configure global attributes.
      fortios_system_global:
        vdom: "{{ vdom }}"
        system_global:
          admintimeout: "23"
          hostname: "FortiGate02"
Host file
[fortigates]
fortigate01 ansible_host=192.168.0.103 ansible_user="admin" ansible_password="password"
[fortigates:vars]
ansible_network_os=fortinet.fortios.fortios
#ansible_python_interpreter=/usr/bin/python3
Error Output
TASK [Configure global attributes.] ****************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.module_utils.connection.ConnectionError: addinfourl instance has no attribute 'getheaders'
fatal: [fortigate01]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File "/root/.ansible/tmp/ansible-local-454799bt3QT/ansible-tmp-1593138436.55-45584-34169098305172/AnsiballZ_fortios_system_global.py", line 102, in \n _ansiballz_main()\n File "/root/.ansible/tmp/ansible-local-454799bt3QT/ansible-tmp-1593138436.55-45584-34169098305172/AnsiballZ_fortios_system_global.py", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File "/root/.ansible/tmp/ansible-local-454799bt3QT/ansible-tmp-1593138436.55-45584-34169098305172/AnsiballZ_fortios_system_global.py", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.fortinet.fortios.plugins.modules.fortios_system_global', init_globals=None, run_name='main', alter_sys=True)\n File "/usr/lib/python2.7/runpy.py", line 188, in run_module\n fname, loader, pkg_name)\n File "/usr/lib/python2.7/runpy.py", line 82, in _run_module_code\n mod_name, mod_fname, mod_loader, pkg_name)\n File "/usr/lib/python2.7/runpy.py", line 72, in _run_code\n exec code in run_globals\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/modules/fortios_system_global.py", line 2075, in \n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/modules/fortios_system_global.py", line 2043, in main\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/modules/fortios_system_global.py", line 1544, in fortios_system\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/modules/fortios_system_global.py", line 1533, in system_global\n File 
"/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/module_utils/fortios/fortios.py", line 173, in set\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/module_utils/fortios/fortios.py", line 146, in get_mkey\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/module_utils/fortios/fortios.py", line 137, in get_mkeyname\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/module_utils/fortios/fortios.py", line 126, in schema\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible/module_utils/connection.py", line 185, in rpc\nansible.module_utils.connection.ConnectionError: addinfourl instance has no attribute 'getheaders'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
I have installed the Ansible Galaxy package, as per the documentation
# ansible-galaxy collection install fortinet.fortios
Process install dependency map
Starting collection install process
Skipping 'fortinet.fortios' as it is already installed
Same issue on Ubuntu 18.04 (WSL).
I fixed it by installing ansible with pip3.
# remove ansible
sudo apt remove ansible
# install python3 & pip3
sudo apt install python3 python3-pip
# install ansible with pip3
pip3 install ansible --user
# update the environment PATH variable for ansible commands
echo "export PATH=$PATH:$HOME/.local/bin" >> ~/.bashrc
source ~/.bashrc
# install fortios module
ansible-galaxy collection install fortinet.fortios
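Note that the traceback shows the module executing under /usr/lib/python2.7, which is where the missing-getheaders error comes from. As an alternative to reinstalling Ansible, uncommenting the interpreter line already present in the question's host file, so the module runs under Python 3, may also avoid the error:

```ini
[fortigates:vars]
ansible_network_os=fortinet.fortios.fortios
ansible_python_interpreter=/usr/bin/python3
```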

Getting SSL Handshake Error while running my ansible-playbook

When I'm trying to run my playbook I'm getting an error which I believe is related to some sort of SSL certificate validation, but I'm not sure of the actual reason for it.
I tried a lot of configurations, but the ones that I believe worked for me are below:
Troubleshooting Steps:
Add a pip global trust profile under $HOME/.config/pip/pip.conf with the below content:
[global]
trusted-host = pypi.python.org
               pypi.org
               files.pythonhosted.org
pip install --upgrade pip. This was not a strictly necessary step, but as nothing else was working I tried it.
pip install pyopenssl. This step actually resolved my issue, as my ansible playbook was constantly throwing SSL handshake and certificate-verify-failed errors.
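Consolidated, the troubleshooting steps above look like this (the config path is pip's per-user location from the first step; the two pip commands need network access, so they are shown as comments):

```shell
# Step 1: create the per-user pip config with the trusted hosts.
# Continuation lines of a multi-line value must be indented.
mkdir -p "$HOME/.config/pip"
cat > "$HOME/.config/pip/pip.conf" <<'EOF'
[global]
trusted-host = pypi.python.org
               pypi.org
               files.pythonhosted.org
EOF

# Steps 2 and 3 (run these yourself; they reach out to PyPI):
#   pip install --upgrade pip
#   pip install pyopenssl
```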
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "From cffi callback :\nTraceback (most recent call last):\n File
\"/usr/lib/python2.7/site-packages/OpenSSL/SSL.py\", line 309, in
wrapper\n _lib.X509_up_ref(x509)\nAttributeError: 'module' object has
no attribute 'X509_up_ref'\nTraceback (most recent call last):\n File
\"/root/.ansible/tmp/ansible-tmp-1550051069.59-
120598072724498/AnsiballZ_azure_rm_virtualnetwork.py\", line 113, in
\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-
1550051069.59-120598072724498/AnsiballZ_azure_rm_virtualnetwork.py\",
line 105, in _ansiballz_main\n invoke_module(zipped_mod, temp_path,
ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1550051069.59-
120598072724498/AnsiballZ_azure_rm_virtualnetwork.py\", line 48, in
invoke_module\n imp.load_module('main', mod, module, MOD_DESC)\n
File \"/tmp/ansible_azure_rm_virtualnetwork_payload_TxAf7f/main.py\",
line 349, in \n File
\"/tmp/ansible_azure_rm_virtualnetwork_payload_TxAf7f/main.py\", line
345, in main\n File
\"/tmp/ansible_azure_rm_virtualnetwork_payload_TxAf7f/main.py\", line
201, in init\n File
mp/ansible_azure_rm_virtualnetwork_payload_TxAf7f/ansible_azure_rm_virtua
lnetwork_payload.zip/ansible/module_utils/azure_rm_common.py\", line 301,
in init\n File
lnetwork_payload.zip/ansible/module_utils/azure_rm_common.py\", line
1021, in init\n File \"/usr/lib/python2.7/site-
packages/msrestazure/azure_active_directory.py\", line 453, in init\n
self.set_token()\n File \"/usr/lib/python2.7/site-
packages/msrestazure/azure_active_directory.py\", line 480, in
set_token\n raise_with_traceback(AuthenticationError, \"\", err)\n
File \"/usr/lib/python2.7/site-packages/msrest/exceptions.py\", line 48,
in raise_with_traceback\n raise
error\nmsrest.exceptions.AuthenticationError: , SSLError:
HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max
retries exceeded with url: /1564e0a7-162f-4a3c-b5f3-
837525c8ad64/oauth2/token (Caused by SSLError(SSLError(\"bad handshake:`
Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate
verify failed')],)\",),))\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
If anyone could explain what exactly the cause of this error is, it would be very helpful to know the basics required when working with modules like these.
Thanks!!
Python (and therefore Ansible) communicates over SSL through the OpenSSL library, so OpenSSL is the binary through which SSL is validated in Python.
Even outside of Ansible, trying to install any package with pip will throw an error if OpenSSL is not available; the exact behavior also depends on the pip and Python versions.
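The answer's point, that Python's TLS behavior comes from the OpenSSL build it links against, can be checked directly:

```python
# Show which OpenSSL Python's ssl module was compiled against.
# A very old version here is a common cause of handshake and
# certificate-verification failures like the one in the question.
import ssl

print(ssl.OPENSSL_VERSION)
```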

Python 3.7 and Dataflow - SSL Certificate Issue

I need to use the google cloud api to write my Dataflow jobs.
As I understand it, I can't use pip install google-cloud-dataflow since Apache Beam won't work on Python 3, so I've been using googleapiclient.discovery. However, when I issue the build() command, it bombs out citing the error:
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1045)
Background notes:
I'm sitting behind a Corporate Proxy, with HTTP(S)_PROXY set at the environment level
I also have CA_BUNDLE and REQUESTS_CA_BUNDLE set to my custom certs
I've installed certifi, but no love
I've attempted to run /Applications/Python\ 3.6/Install\ Certificates.command but couldn't find the .command in my virtualenv. Also, I would prefer not to go down this path as it will make my Prod deployment a nightmare.
Here's my code:
from oauth2client.client import GoogleCredentials
from googleapiclient.discovery import build
credentials = GoogleCredentials.get_application_default()
dataflow = build('dataflow', 'v1b3', credentials=credentials)
Result:
Traceback (most recent call last):
File "test_dataflow_creds.py", line 6, in
dataflow = build('dataflow', 'v1b3', credentials=credentials)
File "/Users/user/.pyenv/versions/unit-test-3.7/lib/python3.7/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
return wrapped(*args, **kwargs)
File "/Users/user/.pyenv/versions/unit-test-3.7/lib/python3.7/site-packages/googleapiclient/discovery.py", line 222, in build
requested_url, discovery_http, cache_discovery, cache)
File "/Users/user/.pyenv/versions/unit-test-3.7/lib/python3.7/site-packages/googleapiclient/discovery.py", line 269, in _retrieve_discovery_doc
resp, content = http.request(actual_url)
File "/Users/user/.pyenv/versions/unit-test-3.7/lib/python3.7/site-packages/httplib2/init.py", line 1924, in request
cachekey,
File "/Users/user/.pyenv/versions/unit-test-3.7/lib/python3.7/site-packages/httplib2/init.py", line 1595, in _request
conn, request_uri, method, body, headers
File "/Users/user/.pyenv/versions/unit-test-3.7/lib/python3.7/site-packages/httplib2/init.py", line 1501, in _conn_request
conn.connect()
File "/Users/user/.pyenv/versions/unit-test-3.7/lib/python3.7/site-packages/httplib2/init.py", line 1291, in connect
self.sock = self._context.wrap_socket(sock, server_hostname=self.host)
File "/Users/user/.pyenv/versions/3.7.0/lib/python3.7/ssl.py", line 412, in wrap_socket
session=session
File "/Users/user/.pyenv/versions/3.7.0/lib/python3.7/ssl.py", line 850, in _create
self.do_handshake()
File "/Users/user/.pyenv/versions/3.7.0/lib/python3.7/ssl.py", line 1108, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1045)
tl;dr: got it working by exporting all certs to a common file, then appending them to the cert file at the path specified by certifi.
steps:
In Firefox > Preferences > View Certificates > Your Certificates, export all the required ones.
Concatenate all of the above .crt files into one big bundle.
In bash, run python -m requests.certs to get the certs file python is using.
Append the bundled certs from step 2 above to the file from step 3.
Done
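Steps 2-4 above, as a self-contained shell sketch. The .crt files and the local bundle path are placeholders standing in for the certificates exported from Firefox and for the file that `python -m requests.certs` prints on a real system:

```shell
# Placeholder certs standing in for the ones exported from Firefox (step 1)
mkdir -p exported-certs
printf -- '-----BEGIN CERTIFICATE-----\n(cert 1)\n-----END CERTIFICATE-----\n' > exported-certs/corp-root.crt
printf -- '-----BEGIN CERTIFICATE-----\n(cert 2)\n-----END CERTIFICATE-----\n' > exported-certs/corp-proxy.crt

# Step 2: concatenate all exported certs into one bundle
cat exported-certs/*.crt > corp-bundle.crt

# Step 3 on a real system would be:  CERT_FILE="$(python -m requests.certs)"
# Using a local stand-in here so the sketch runs anywhere:
CERT_FILE=./certifi-cacert.pem
touch "$CERT_FILE"

# Step 4: append the corporate bundle to the trusted CA file
cat corp-bundle.crt >> "$CERT_FILE"
grep -c 'BEGIN CERTIFICATE' "$CERT_FILE"   # -> 2
```

Note that upgrading certifi will replace its bundled cacert.pem, so the append has to be redone after upgrades.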
