I'm unable to create an Ansible dynamic inventory for Azure. I get the following error:
bash-5.1# ansible-inventory -i inventory_azure_rm.yaml --graph -vvv
ansible-inventory [core 2.12.2]
config file = /playbook/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-inventory
python version = 3.9.7 (default, Nov 24 2021, 21:15:59) [GCC 10.3.1 20211027]
jinja version = 3.0.3
libyaml = False
Using /playbook/ansible.cfg as config file
host_list declined parsing /playbook/inventory_azure_rm.yaml as it did not pass its verify_file() method
toml declined parsing /playbook/inventory_azure_rm.yaml as it did not pass its verify_file() method
[WARNING]: * Failed to parse /playbook/inventory_azure_rm.yaml with script plugin: problem running /playbook/inventory_azure_rm.yaml --list ([Errno 13] Permission denied:
'/playbook/inventory_azure_rm.yaml')
File "/usr/lib/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/lib/python3.9/site-packages/ansible/plugins/inventory/script.py", line 150, in parse
raise AnsibleParserError(to_native(e))
[WARNING]: * Failed to parse /playbook/inventory_azure_rm.yaml with auto plugin: name 'client_secret' is not defined
File "/usr/lib/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/lib/python3.9/site-packages/ansible/plugins/inventory/auto.py", line 58, in parse
plugin.parse(inventory, loader, path, cache=cache)
File "/root/.ansible/collections/ansible_collections/azure/azcollection/plugins/inventory/azure_rm.py", line 219, in parse
self._credential_setup()
File "/root/.ansible/collections/ansible_collections/azure/azcollection/plugins/inventory/azure_rm.py", line 240, in _credential_setup
self.azure_auth = AzureRMAuth(**auth_options)
File "/root/.ansible/collections/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py", line 1522, in __init__
self.azure_credential_track2 = client_secret.ClientSecretCredential(client_id=self.credentials['client_id'],
[WARNING]: * Failed to parse /playbook/inventory_azure_rm.yaml with yaml plugin: Plugin configuration YAML file, not YAML inventory
File "/usr/lib/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/lib/python3.9/site-packages/ansible/plugins/inventory/yaml.py", line 112, in parse
raise AnsibleParserError('Plugin configuration YAML file, not YAML inventory')
[WARNING]: * Failed to parse /playbook/inventory_azure_rm.yaml with ini plugin: Invalid host pattern 'plugin:' supplied, ending in ':' is not allowed, this character is reserved to provide a port.
File "/usr/lib/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/lib/python3.9/site-packages/ansible/plugins/inventory/ini.py", line 136, in parse
raise AnsibleParserError(e)
[WARNING]: Unable to parse /playbook/inventory_azure_rm.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
@all:
|--@ungrouped:
The inventory_azure_rm.yaml file is:
plugin: azure.azcollection.azure_rm
auth_source: credential_file
plain_host_names: yes
include_vm_resource_groups:
- <redacted>
keyed_groups:
- key: tags.applicationRole
separator: ""
The ansible.cfg file is:
[defaults]
inventory = inventory_azure_rm.yaml
[inventory]
enable_plugins = host_list, script, auto, yaml, ini, toml
Ansible Azure collection version
bash-5.1# ansible-galaxy collection list
# /root/.ansible/collections/ansible_collections
Collection Version
------------------ -------
azure.azcollection 1.11.0
I would appreciate any help on trying to solve this.
Thank you.
Update:
Fixed inventory_azure_rm.yaml file permissions.
bash-5.1# ls -la inventory_azure_rm.yaml
-rw-r--r-- 1 root root 200 Feb 24 17:27 inventory_azure_rm.yaml
Updated the error stack trace in the problem description after running the command again.
Update 2:
The Azure credentials file looks like this:
bash-5.1# cat ~/.azure/credentials
[default]
subscription_id=<redacted>
client_id=<redacted>
secret=<redacted>
tenant=<redacted>
cloud_environment=AzureCloud
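As an aside, the same service principal values can also be supplied as environment variables instead of a credentials file. This is only a sketch: the values are placeholders, and the inventory would need auth_source: auto or env rather than credential_file for these to be picked up.
# placeholders; export the real service principal values
export AZURE_SUBSCRIPTION_ID=<redacted>
export AZURE_CLIENT_ID=<redacted>
export AZURE_SECRET=<redacted>
export AZURE_TENANT=<redacted>
# then re-run the parse
ansible-inventory -i inventory_azure_rm.yaml --graph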
I finally managed to fix the problem with parsing the dynamic inventory. I was previously doing the following:
pip install -r https://raw.githubusercontent.com/ansible-collections/azure/dev/requirements-azure.txt && \
ansible-galaxy collection install azure.azcollection:1.11.0
I've changed two things:
Inverted the order of installing the collection and its dependencies: first install azure.azcollection, and only after that its dependencies.
Installed the azure.azcollection dependencies from the requirements-azure.txt that ships with the collection itself, instead of from GitHub.
This is the working code:
ansible-galaxy collection install azure.azcollection:1.11.0 && \
pip install -r ~/.ansible/collections/ansible_collections/azure/azcollection/requirements-azure.txt
The difference between the requirements-azure.txt file from GitHub at https://raw.githubusercontent.com/ansible-collections/azure/dev/requirements-azure.txt and the local requirements-azure.txt at ~/.ansible/collections/ansible_collections/azure/azcollection/requirements-azure.txt is the azure-mgmt-network package version: the online version pins 19.1.0 and the local (working) version pins 12.0.0.
bash-5.1# diff -w requirements-azure.txt ~/.ansible/collections/ansible_collections/azure/azcollection/requirements-azure.txt
--- requirements-azure.txt
+++ /root/.ansible/collections/ansible_collections/azure/azcollection/requirements-azure.txt
@@ -19,7 +19,7 @@
azure-mgmt-monitor==3.0.0
azure-mgmt-managedservices==1.0.0
azure-mgmt-managementgroups==0.2.0
-azure-mgmt-network==19.1.0
+azure-mgmt-network==12.0.0
azure-mgmt-nspkg==2.0.0
azure-mgmt-privatedns==0.1.0
azure-mgmt-redis==5.0.0
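If you want to double-check which SDK version actually ended up installed (assuming pip here is the same interpreter that Ansible runs under), something like this does it:
pip show azure-mgmt-network | grep -i ^version
ansible-galaxy collection list azure.azcollection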
Related
Hello everyone,
I am facing an issue with the dynamic inventory for Azure.
I'm getting the following error:
ansible-inventory -i test.azure_rm.yaml --graph -vvv
ansible-inventory [core 2.13.7]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-inventory
python version = 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
jinja version = 3.0.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /root/test.azure_rm.yaml as it did not pass its verify_file() method
script declined parsing /root/test.azure_rm.yaml as it did not pass its verify_file() method
Using inventory plugin 'ansible_collections.azure.azcollection.plugins.inventory.azure_rm' to process inventory source '/root/test.azure_rm.yaml'
toml declined parsing /root/test.azure_rm.yaml as it did not pass its verify_file() method
[WARNING]: * Failed to parse /root/test.azure_rm.yaml with auto plugin: Failed to get credentials. Either pass as
parameters, set environment variables, define a profile in ~/.azure/credentials, or install Azure CLI and log in (az login).
File "/usr/lib/python3/dist-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/lib/python3/dist-packages/ansible/plugins/inventory/auto.py", line 59, in parse
plugin.parse(inventory, loader, path, cache=cache)
File "/root/.ansible/collections/ansible_collections/azure/azcollection/plugins/inventory/azure_rm.py", line 220, in parse
self._credential_setup()
File "/root/.ansible/collections/ansible_collections/azure/azcollection/plugins/inventory/azure_rm.py", line 241, in _credential_setup
self.azure_auth = AzureRMAuth(**auth_options)
File "/root/.ansible/collections/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py", line 1479, in init
self.fail("Failed to get credentials. Either pass as parameters, set environment variables, "
File "/root/.ansible/collections/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py", line 1605, in fail
self._fail_impl(msg)
File "/root/.ansible/collections/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py", line 1608, in _default_fail_impl
raise AzureRMAuthException(msg)
[WARNING]: * Failed to parse /root/test.azure_rm.yaml with yaml plugin: Plugin configuration YAML file, not YAML inventory
File "/usr/lib/python3/dist-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/lib/python3/dist-packages/ansible/plugins/inventory/yaml.py", line 114, in parse
raise AnsibleParserError('Plugin configuration YAML file, not YAML inventory')
[WARNING]: * Failed to parse /root/test.azure_rm.yaml with ini plugin: Invalid host pattern 'plugin:' supplied, ending in ':' is not allowed, this character is reserved to provide a port.
File "/usr/lib/python3/dist-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/lib/python3/dist-packages/ansible/plugins/inventory/ini.py", line 136, in parse
raise AnsibleParserError(e)
[WARNING]: Unable to parse /root/test.azure_rm.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
@all:
|--@ungrouped:
I am able to run az vm list:
az vm list | wc -l
1341
My Azure inventory YAML is below:
cat test.azure_rm.yaml
plugin: azure.azcollection.azure_rm
include_vm_resource_groups:
- '*'
auth_source: auto
I've configured the credentials, but I can't list the inventory using the dynamic inventory plugin. I have azure_rm.py in the same directory.
This pertains to JFrog Artifactory. pypi-public is our virtual repo, and our internal repo pypi-internal is associated with pypi-public. I can see the package vapi_common in the web UI.
The command below is able to find the package:
pip search vapi_common --index=https://<username>:<apikey>@company.jfrog.io/artifactory/api/pypi/pypi-public/simple
However, if I use the same index-url in ~/.pip/pip.conf
[global]
index-url = https://<username>:<apikey>@company.jfrog.io.jfrog.io/artifactory/api/pypi/pypi-public/simple
and then run the command below, it fails with the error shown. As you can see, it is trying to reach pypi.org instead of honoring the index URL given in pip.conf:
pip search vapi_common -vvv
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "POST /pypi HTTP/1.1" 200 419
ERROR: Exception:
Traceback (most recent call last):
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 228, in _main
status = self.run(options, args)
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pip/_internal/commands/search.py", line 60, in run
pypi_hits = self.search(query, options)
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pip/_internal/commands/search.py", line 80, in search
hits = pypi.search({'name': query, 'summary': query}, 'or')
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/xmlrpc/client.py", line 1109, in __call__
return self.__send(self.__name, args)
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/xmlrpc/client.py", line 1450, in __request
response = self.__transport.request(
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pip/_internal/network/xmlrpc.py", line 45, in request
return self.parse_response(response.raw)
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/xmlrpc/client.py", line 1341, in parse_response
return u.close()
File "/home/varmour/.pyenv/versions/3.8.8/lib/python3.8/xmlrpc/client.py", line 655, in close
raise Fault(**self._stack[0])
xmlrpc.client.Fault: <Fault -32500: "RuntimeError: PyPI's XMLRPC API is currently disabled due to unmanageable load and will be deprecated in the near future. See https://status.python.org/ for more information.">
Note that you yourself use pip search --index=…. That is, you should set the option index in pip.conf, not index-url: index is used by pip search, while index-url is used by pip download/install.
See the docs at https://pip.pypa.io/en/stable/reference/pip_search/#options
Fix config:
pip config set global.index https://<username>:<apikey>@company.jfrog.io.jfrog.io/artifactory/api/pypi/pypi-public/simple
Perhaps even
pip config set global.index `pip config get global.index-url`
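Putting it together, if you want both pip install and pip search to go through Artifactory, you can set both keys. This is just a sketch using the placeholders from the question: quote the URL so the shell does not interpret the angle brackets, and replace them with real values for your instance.
pip config set global.index-url 'https://<username>:<apikey>@company.jfrog.io/artifactory/api/pypi/pypi-public/simple'
pip config set global.index 'https://<username>:<apikey>@company.jfrog.io/artifactory/api/pypi/pypi-public/simple'
pip config list   # verify which values pip will actually use
As the traceback above already shows, PyPI itself has disabled its XML-RPC search API, so pip search only works against an index that still answers it, such as the Artifactory endpoint that the first command in the question queries successfully.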
Does anyone have a good example of testing a Python program using Robot Framework?
I am trying to run my Python program (chaptermarkers.py) with an argument of --test and check the results. Passing the argument has been an odyssey of Google searches for two days.
[file: common_resources.robot]
*** Settings ***
Library OperatingSystem
*** Variables ***
${CHAPTERMARKERS_EXEC} chaptermarkers
${LOG LEVEL} DEBUG
*** Keywords ***
# TODO
[Test Cases: file: default_suite.robot]
*** Settings ***
Documentation *Test Chapter Markers runs but has error in filename*
Metadata Github https://github.com/cbitterfield/chaptermarkers
Metadata Version 1.0.0
Metadata Executed At ${HOST}
# External libraries imports
Library Process
Library String
Resource common_resources.robot
*** Variables ***
${EXPECTED_MESSAGE} Movie Filename
${REPORT FILE} report.html
${LOG FILE} logfile.html
${LOG LEVEL} DEBUG
${OUTPUT DIR} /Users/colin/IdeaProjects/chaptermarkers
${test} --test
*** Test Cases ***
Scenerio test chaptermarkers run
[Tags] DEBUG
[Documentation] Verifies that chaptermarkers is executed well and without errors
${result}= Run process ${CHAPTERMARKERS_EXEC} ${test}
Should Contain ${result.stdout} ${EXPECTED_MESSAGE}
Should Be Empty ${result.stderr}
No matter how I try to put something after the program, I get a missing file or directory error.
[Crazy unusable error messages]
$ robot --loglevel DEBUG --log log.html --report report.html stests/default_suite.robot
[ ERROR ] Error in file '/Users/colin/IdeaProjects/chaptermarkers/stests/default_suite.robot' on line 25: Invalid variable name '${test} --test'.
==============================================================================
Default Suite :: *Test Chapter Markers runs but has error in filename*
==============================================================================
Scenerio test chaptermarkers run :: Verifies that chaptermarkers i... | FAIL |
Variable '${test}' not found. Did you mean:
${TEST_TAGS}
${TEST_NAME}
------------------------------------------------------------------------------
Default Suite :: *Test Chapter Markers runs but has error in filen... | PASS |
0 critical tests, 0 passed, 0 failed
1 test total, 0 passed, 1 failed
==============================================================================
Output: /Users/colin/IdeaProjects/chaptermarkers/output.xml
[ ERROR ] Unexpected error: FileNotFoundError: [Errno 2] No such file or directory: '/Users/colin/IdeaProjects/chaptermarkers/env/lib/python3.8/site-packages/robot/htmldata/rebot/log.html'
Traceback (most recent call last):
File "/Users/colin/IdeaProjects/chaptermarkers/env/lib/python3.8/site-packages/robot/utils/application.py", line 83, in _execute
rc = self.main(arguments, **options)
File "/Users/colin/IdeaProjects/chaptermarkers/env/lib/python3.8/site-packages/robot/run.py", line 451, in main
writer.write_results(settings.get_rebot_settings())
File "/Users/colin/IdeaProjects/chaptermarkers/env/lib/python3.8/site-packages/robot/reporting/resultwriter.py", line 65, in write_results
self._write_log(results.js_result, settings.log, config)
File "/Users/colin/IdeaProjects/chaptermarkers/env/lib/python3.8/site-packages/robot/reporting/resultwriter.py", line 79, in _write_log
self._write('Log', LogWriter(js_result).write, path, config)
File "/Users/colin/IdeaProjects/chaptermarkers/env/lib/python3.8/site-packages/robot/reporting/resultwriter.py", line 86, in _write
writer(path, *args)
File "/Users/colin/IdeaProjects/chaptermarkers/env/lib/python3.8/site-packages/robot/reporting/logreportwriters.py", line 43, in write
self._write_file(path, config, LOG)
File "/Users/colin/IdeaProjects/chaptermarkers/env/lib/python3.8/site-packages/robot/reporting/logreportwriters.py", line 36, in _write_file
writer.write(template)
File "/Users/colin/IdeaProjects/chaptermarkers/env/lib/python3.8/site-packages/robot/htmldata/htmlfilewriter.py", line 33, in write
for line in HtmlTemplate(template):
File "/Users/colin/IdeaProjects/chaptermarkers/env/lib/python3.8/site-packages/robot/htmldata/normaltemplate.py", line 28, in __iter__
with codecs.open(self._path, encoding='UTF-8') as file:
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/codecs.py", line 905, in open
file = builtins.open(filename, mode, buffering)
(env) razzamataz:chaptermarkers colin$
A few things I noticed:
There need to be at least two spaces between ${test} and --test in your Variables section; you provided only one. The same applies to the Run Process line: there should be at least two spaces between the executable and its argument. The rule is that RF uses two (or more) spaces as the delimiter.
Your test case name is indented. RF interprets it as a keyword call, which is unexpected. It should start at the beginning of the line without indentation.
The missing-file error looks weird and I don't know the reason. The file is part of the RF library, and if you installed RF correctly it should be there. I suggest you start a new virtual environment and try again if that's not too complex.
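A minimal way to try that last suggestion, assuming a Unix-like shell and the project path from the traceback:
cd /Users/colin/IdeaProjects/chaptermarkers
python3 -m venv env2                 # fresh virtual environment alongside the old one
source env2/bin/activate
pip install robotframework           # reinstall RF so the htmldata templates are present
robot --loglevel DEBUG --log log.html --report report.html stests/default_suite.robot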
P.S. Just FYI, all RF built-in libraries can be found here:
https://robotframework.org/#libraries.
I am getting an error with this playbook and am not sure where to look. Perhaps something isn't defined right in my host file? (I'm told the playbook is good)
YML Playbook
- hosts: fortigates
collections:
- fortinet.fortios
connection: httpapi
vars:
vdom: "root"
ansible_httpapi_use_ssl: yes
ansible_httpapi_validate_certs: no
ansible_httpapi_port: 443
tasks:
- name: Configure global attributes.
fortios_system_global:
vdom: "{{ vdom }}"
system_global:
admintimeout: "23"
hostname: "FortiGate02"
Host file
[fortigates]
fortigate01 ansible_host=192.168.0.103 ansible_user="admin" ansible_password="password"
[fortigates:vars]
ansible_network_os=fortinet.fortios.fortios
#ansible_python_interpreter=/usr/bin/python3
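For reference, the failure below is from a normal playbook run along these lines; the file names here are just placeholders for whatever yours are called, and adding -vvv shows the full traceback the error mentions.
ansible-playbook -i hosts fortigate_global.yml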
Error Output
TASK [Configure global attributes.] ****************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.module_utils.connection.ConnectionError: addinfourl instance has no attribute 'getheaders'
fatal: [fortigate01]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File "/root/.ansible/tmp/ansible-local-454799bt3QT/ansible-tmp-1593138436.55-45584-34169098305172/AnsiballZ_fortios_system_global.py", line 102, in \n _ansiballz_main()\n File "/root/.ansible/tmp/ansible-local-454799bt3QT/ansible-tmp-1593138436.55-45584-34169098305172/AnsiballZ_fortios_system_global.py", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File "/root/.ansible/tmp/ansible-local-454799bt3QT/ansible-tmp-1593138436.55-45584-34169098305172/AnsiballZ_fortios_system_global.py", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.fortinet.fortios.plugins.modules.fortios_system_global', init_globals=None, run_name='main', alter_sys=True)\n File "/usr/lib/python2.7/runpy.py", line 188, in run_module\n fname, loader, pkg_name)\n File "/usr/lib/python2.7/runpy.py", line 82, in _run_module_code\n mod_name, mod_fname, mod_loader, pkg_name)\n File "/usr/lib/python2.7/runpy.py", line 72, in _run_code\n exec code in run_globals\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/modules/fortios_system_global.py", line 2075, in \n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/modules/fortios_system_global.py", line 2043, in main\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/modules/fortios_system_global.py", line 1544, in fortios_system\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/modules/fortios_system_global.py", line 1533, in system_global\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/module_utils/fortios/fortios.py", line 173, in set\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/module_utils/fortios/fortios.py", line 146, in get_mkey\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/module_utils/fortios/fortios.py", line 137, in get_mkeyname\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible_collections/fortinet/fortios/plugins/module_utils/fortios/fortios.py", line 126, in schema\n File "/tmp/ansible_fortios_system_global_payload_CQaHFo/ansible_fortios_system_global_payload.zip/ansible/module_utils/connection.py", line 185, in rpc\nansible.module_utils.connection.ConnectionError: addinfourl instance has no attribute 'getheaders'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
I have installed the collection from Ansible Galaxy, as per the documentation:
# ansible-galaxy collection install fortinet.fortios
Process install dependency map
Starting collection install process
Skipping 'fortinet.fortios' as it is already installed
Same issue on Ubuntu 18.04 (WSL).
I fixed it by installing ansible with pip3.
# remove ansible
sudo apt remove ansible
# install python3 & pip3
sudo apt install python3 python3-pip
# install ansible with pip3
pip3 install ansible --user
# update the environment PATH variable for ansible commands
echo "export PATH=$PATH:$HOME/.local/bin" >> ~/.bashrc
source ~/.bashrc
# install fortios module
ansible-galaxy collection install fortinet.fortios
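A quick sanity check that the pip3-installed Ansible, rather than any leftover apt copy, is the one actually on the PATH:
which ansible        # should resolve to ~/.local/bin/ansible
ansible --version    # should report the pip-installed version running under Python 3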
When attempting to build with Pants, I am seeing the following error:
File "build/bdist.macosx-10.10-intel/egg/pants/contrib/go/tasks/go_fetch.py", line 154, in _transitive_download_remote_libs
all_known_addresses)
File "build/bdist.macosx-10.10-intel/egg/pants/contrib/go/tasks/go_fetch.py", line 105, in _transitive_download_remote_libs
fetcher.fetch(go_remote_lib.import_path, dest=tmp_fetch_root, rev=go_remote_lib.rev)
File "build/bdist.macosx-10.10-intel/egg/pants/contrib/go/subsystems/fetchers.py", line 437, in fetch
github_root, github_rev = self._map_import_path(import_path, rev)
File "/Users/chad/.cache/pants/setup/bootstrap/pants.mbFDa8/install/lib/python2.7/site-packages/pants/util/memo.py", line 95, in memoize
result = func(*args, **kwargs)
File "build/bdist.macosx-10.10-intel/egg/pants/contrib/go/subsystems/fetchers.py", line 454, in _map_import_path
raise self.FetchError('Invalid gopkg.in package and rev in: {}'.format(import_path))
Exception message: Invalid gopkg.in package and rev in: gopkg.in/amz.v1/aws
Here is the contents of my BUILD file:
# Auto-generated by pants!
# To re-generate run: `pants buildgen.go --materialize --remote`
go_remote_library(rev='v1')
Looking into the code, I see that the error comes from a failure to match a regex in fetchers.py, on line 453.
I am running Pants version 0.0.59 on Mac OS X 10.10 (Yosemite)
Noting that @Huckphin stumbled on a bug here in pantsbuild.pants<=0.0.59. He filed an issue and now things are fixed up for handling gopkg.in remote import paths that point to sub-packages in the remote repo. The fix will be released with the regular Friday release on 11/20/2015 in 0.0.60.
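Once 0.0.60 is released, picking up the fix is just a matter of moving to that version. A sketch assuming Pants was installed with pip; repos that pin Pants through a bootstrap script set the version there instead:
pip install --upgrade "pantsbuild.pants==0.0.60"
pip show pantsbuild.pants    # confirm the upgraded version
# then re-run the build that hit the gopkg.in fetch error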