I'm starting out with Ansible, trying to create VMs and other resources in Azure.
I'm a bit stuck on authentication. This is the command I used to create what I thought I needed:
az ad sp create-for-rbac --name AzureTools --password "A Password I Made Up"
Then I made the ~/.ansible/credentials file with the following contents:
[default]
subscription_id=my-sub-id
client_id=the appId from when I ran the previous command
secret='A Password I Made Up'
tenant=the tenantid from the above command
And when I try to run the Ansible playbook, I get an "Invalid client secret is provided" error. See the full error below:
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_QL57O_/ansible_module_azure_rm_virtualmachine.py\", line 1553, in <module>\n main()\n File \"/tmp/ansible_QL57O_/ansible_module_azure_rm_virtualmachine.py\", line 1550, in main\n AzureRMVirtualMachine()\n File \"/tmp/ansible_QL57O_/ansible_module_azure_rm_virtualmachine.py\", line 651, in __init__\n supports_check_mode=True)\n File \"/tmp/ansible_QL57O_/ansible_modlib.zip/ansible/module_utils/azure_rm_common.py\", line 265, in __init__\n File \"/usr/local/lib/python2.7/dist-packages/msrestazure/azure_active_directory.py\", line 440, in __init__\n self.set_token()\n File \"/usr/local/lib/python2.7/dist-packages/msrestazure/azure_active_directory.py\", line 473, in set_token\n raise_with_traceback(AuthenticationError, \"\", err)\n File \"/usr/local/lib/python2.7/dist-packages/msrest/exceptions.py\", line 48, in raise_with_traceback\n raise error\nmsrest.exceptions.AuthenticationError: , InvalidClientError: (invalid_client) AADSTS70002: Error validating credentials. AADSTS50012: Invalid client secret is provided.\r\nTrace ID: 34de605e-5d21-4be2-84c1-27759ffe0000\r\nCorrelation ID: e62ed2ee-46b8-4847-9c1d-0c1e24ab711a\r\nTimestamp: 2018-03-08 21:00:55Z\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 0
So, what am I missing? Is the secret not supposed to be that password? If not, what should it be? All the docs just say "just put your secret here" but they don't explain what it is or where it comes from.
Environment: Ubuntu 16.04 running in a vm in Azure.
ansible 2.4.3.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/path/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Nov 20 2017, 18:23:56) [GCC 5.4.0 20160609]
Please let me know if I've missed providing any info.
Thanks in advance!
In the secret line, you should remove the single quotes. I tested this in my lab: with single quotes I get the same error you do.
The second problem is that the credentials file should be created at ~/.azure/credentials, not under ~/.ansible. For more information, please refer to this link.
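For reference, a minimal sketch of what ~/.azure/credentials should look like (all values here are placeholders; the secret is the plain password you passed to az ad sp create-for-rbac, with no surrounding quotes):

```ini
[default]
subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
client_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
secret=A Password I Made Up
tenant=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```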
Issue
I have created a pipeline in Azure DevOps (using the classic editor - no YAML), which configures some Azure resources including an Azure Key Vault. The pipeline was working successfully, when I ran it last time in August 2020. Now it is February 2021 and the same unmodified pipeline crashes with the first Azure CLI task (see error logs at the end).
The code that causes the issue is
az keyvault set-policy \
    --resource-group $RESOURCEGROUP_NAME \
    --name $KEYVAULT_NAME \
    --object-id $APPREGISTRATION_OBJECTID \
    --secret-permissions get list set
From the log of the last successful run (in August 2020) I can see that Azure CLI version 2.0.16 was used. When I run it today (in February 2021) and see it crash, it uses Azure CLI version 2.19.1. I assume the Azure CLI task always uses the most recent version.
Idea
So my suspicion is that something changed in the Azure CLI library. But neither can I figure out what the problem is, nor are there any settings to downgrade to my original version.
Since the pipeline runs on an Ubuntu Linux machine, I tried to replace the Azure CLI task with a Bash script task in order to control the Azure CLI version. I tried the following code, but I was only able to downgrade to version 2.0.81, which strangely dates back to February 2020 according to the release notes: https://learn.microsoft.com/en-us/cli/azure/release-notes-azure-cli?tabs=azure-cli#february-04-2020
sudo apt-get update
sudo apt-get install ca-certificates curl apt-transport-https lsb-release gnupg
curl -sL https://packages.microsoft.com/keys/microsoft.asc |
    gpg --dearmor |
    sudo tee /etc/apt/trusted.gpg.d/microsoft.gpg > /dev/null
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" |
sudo tee /etc/apt/sources.list.d/azure-cli.list
sudo apt-get update
sudo apt-get remove azure-cli
sudo apt-get install azure-cli=2.0.81+ds-4ubuntu0.2
I can verify that Azure CLI 2.0.81 was installed. But unfortunately I still see the same error log (see below).
Any suggestions are highly appreciated!
Error logs
ERROR: The command failed with an unexpected error. Here is the traceback:
ERROR: APIVersion 2020-04-01-preview is not available
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/knack/cli.py", line 233, in invoke
cmd_result = self.invocation.execute(args)
File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/azure/cli/core/commands/__init__.py", line 664, in execute
raise ex
File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/azure/cli/core/commands/__init__.py", line 727, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/azure/cli/core/commands/__init__.py", line 720, in _run_job
six.reraise(*sys.exc_info())
File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/six.py", line 703, in reraise
raise value
File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/azure/cli/core/commands/__init__.py", line 698, in _run_job
result = cmd_copy(params)
File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/azure/cli/core/commands/__init__.py", line 331, in __call__
return self.handler(*args, **kwargs)
File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/azure/cli/core/__init__.py", line 807, in default_command_handler
client = client_factory(cmd.cli_ctx, command_args) if client_factory else None
File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/azure/cli/command_modules/keyvault/_client_factory.py", line 124, in _keyvault_mgmt_client_factory
return getattr(get_mgmt_service_client(cli_ctx, resource_type), client_name)
File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/azure/mgmt/keyvault/_key_vault_management_client.py", line 157, in vaults
raise NotImplementedError("APIVersion {} is not available".format(api_version))
NotImplementedError: APIVersion 2020-04-01-preview is not available
Edit - 03/01/2021
I previously created the key vault with the help of a resource template (json-file) from the deployment pipeline and there is one particular line that just caught my attention:
"apiVersion": "2016-10-01",
If I create the key vault manually in Azure, the exported template has the same API version. This API version does not match the one mentioned in the error logs, but I am not sure whether they should match.
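For context, the key vault resource in such a template sits roughly like this (the name and parameter references below are illustrative placeholders, not my actual template):

```json
{
  "type": "Microsoft.KeyVault/vaults",
  "apiVersion": "2016-10-01",
  "name": "[parameters('keyVaultName')]",
  "location": "[resourceGroup().location]",
  "properties": {}
}
```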
ERROR: APIVersion 2020-04-01-preview is not available
@Kevin Lu-MSFT: My Azure CLI output looks very similar to yours, but the error persists.
Why do I get Python errors when running an Azure CLI command?
As far as I know, the Azure CLI is written in Python, so when a command fails you see Python errors.
Here is the source code of the Azure CLI.
ERROR: APIVersion 2020-04-01-preview is not available
I have tested the same script in an Azure CLI task with all kinds of Linux agents (e.g. Ubuntu 16.04, 18.04, 20.04) in Azure DevOps, and they all work fine.
Here is my sample:
The Azure CLI version: 2.19.1 (latest)
The error message is related to the API version, but we cannot set that in the Azure CLI, so this could be related to the Azure CLI version or to the key vault itself.
You could try my Azure CLI task settings, or create a new Azure Key Vault and run the same script against it to check whether it works.
I'm trying to use the azure_rm plugin for Ansible to generate a dynamic inventory for VMs in Azure, but am getting a "batched request" error of 403 when I try to run the sanity-check command:
$ ansible all -m ping
[WARNING]: * Failed to parse /project/ansible/inventory.azure_rm.yml with
ansible_collections.azure.azcollection.plugins.inventory.azure_rm plugin: a batched request failed with status code 403, url
/subscriptions/<redacted>/resourceGroups/<redacted>/providers/Microsoft.Compute/virtualMachines
...
Here are the specifics of my macOS setup:
$ ansible --version
ansible 2.10.3
config file = /project/ansible/ansible.cfg
configured module search path = ['/Users/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.10.3_1/libexec/lib/python3.9/site-packages/ansible
executable location = /usr/local/Cellar/ansible/2.10.3_1/libexec/bin/ansible
python version = 3.9.0 (default, Dec 6 2020, 18:02:34) [Clang 12.0.0 (clang-1200.0.32.27)]
This is the inventory.azure_rm.yml file:
plugin: azure_rm
include_vm_resource_groups:
  - <redacted>
auth_source: auto
keyed_groups:
  - prefix: tag
    key: tags
And I've also added this to the local ansible.cfg file:
inventory = ./inventory.azure_rm.yml
I've also defined the particulars for authenticating to Azure as environment variables:
$ env | grep AZURE
AZURE_TENANT=<redacted>
AZURE_CLIENT_ID=<redacted>
AZURE_USE_PRIVATE_IP=yes
AZURE_SECRET=<redacted>
AZURE_SUBSCRIPTION_ID=<redacted>
These are the same "credentials" that I used with Terraform to create the VMs I'm now trying to inventory dynamically, so they should be good. So I'm at a bit of a loss as to what is behind the 403 error.
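As a quick sanity check (a hypothetical stdlib-only helper, not part of Ansible), I could verify that every variable the plugin expects is actually exported before blaming permissions:

```python
import os

# Environment variables the azure_rm inventory plugin reads for
# service-principal authentication.
REQUIRED = ["AZURE_SUBSCRIPTION_ID", "AZURE_CLIENT_ID", "AZURE_SECRET", "AZURE_TENANT"]

def missing_azure_vars(env=None):
    """Return the names of required Azure auth variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    print(missing_azure_vars() or "all Azure auth variables are set")
```

An empty result only means the variables are present; it says nothing about whether their values are still valid in Azure.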
I then added a -vvvv option to the command and got some additional info:
$ ansible all -m ping -vvvv
ansible 2.10.3
config file = /Users/me/project/ansible/ansible.cfg
configured module search path = ['/Users/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.10.3_1/libexec/lib/python3.9/site-packages/ansible
executable location = /usr/local/Cellar/ansible/2.10.3_1/libexec/bin/ansible
python version = 3.9.0 (default, Dec 6 2020, 18:02:34) [Clang 12.0.0 (clang-1200.0.32.27)]
Using /Users/me/project/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /Users/me/project/ansible/inventory.azure_rm.yml as it did not pass its verify_file() method
script declined parsing /Users/me/project/ansible/inventory.azure_rm.yml as it did not pass its verify_file() method
redirecting (type: inventory) ansible.builtin.azure_rm to azure.azcollection.azure_rm
Loading collection azure.azcollection from /Users/me/.ansible/collections/ansible_collections/azure/azcollection
toml declined parsing /Users/me/project/ansible/inventory.azure_rm.yml as it did not pass its verify_file() method
[WARNING]: * Failed to parse /Users/me/project/ansible/inventory.azure_rm.yml with
ansible_collections.azure.azcollection.plugins.inventory.azure_rm plugin: a batched request failed with status code 403, url
/subscriptions/<redacted>/resourceGroups/<redacted>/providers/Microsoft.Compute/virtualMachines
File "/usr/local/Cellar/ansible/2.10.3_1/libexec/lib/python3.9/site-packages/ansible/inventory/manager.py", line 289, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/local/Cellar/ansible/2.10.3_1/libexec/lib/python3.9/site-packages/ansible/plugins/inventory/auto.py", line 59, in parse
plugin.parse(inventory, loader, path, cache=cache)
File "/Users/me/.ansible/collections/ansible_collections/azure/azcollection/plugins/inventory/azure_rm.py", line 206, in parse
self._get_hosts()
File "/Users/me/.ansible/collections/ansible_collections/azure/azcollection/plugins/inventory/azure_rm.py", line 263, in _get_hosts
self._process_queue_batch()
File "/Users/me/.ansible/collections/ansible_collections/azure/azcollection/plugins/inventory/azure_rm.py", line 405, in _process_queue_batch
raise AnsibleError("a batched request failed with status code {0}, url {1}".format(status_code, result.url))
Has anyone come across this before and figured out a fix? I'm assuming the Service Principal I'm using is missing some role or permission, but I have no idea which one, given that the same SP was used to provision the VMs in the first place.
Install the collection to get the latest version and then try this:
plugin: azure.azcollection.azure_rm
This ensures you're using the latest plugin, not the built-in version, which won't contain bug fixes or support newer API versions.
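A sketch of the updated inventory file under that assumption (install the collection with ansible-galaxy collection install azure.azcollection; as far as I know the file name must still end in azure_rm.yml or azure_rm.yaml to pass the plugin's verify_file() check):

```yaml
plugin: azure.azcollection.azure_rm
include_vm_resource_groups:
  - <redacted>
auth_source: auto
keyed_groups:
  - prefix: tag
    key: tags
```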
We are sitting behind a firewall and are trying to run a Docker image (cBioPortal). Docker itself could be installed through a proxy, but now we encounter the following issue:
Starting validation...
INFO: -: Unable to read xml containing cBioPortal version.
DEBUG: -: Requesting cancertypes from portal at 'http://cbioportal-container:8081'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Error occurred during validation step:
Traceback (most recent call last):
File "/cbioportal/core/src/main/scripts/importer/validateData.py", line 4491, in request_from_portal_api
response.raise_for_status()
File "/usr/local/lib/python3.5/dist-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Timeout for url: http://cbioportal-container:8081/api-legacy/cancertypes
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/metaImport.py", line 127, in <module>
exitcode = validateData.main_validate(args)
File "/cbioportal/core/src/main/scripts/importer/validateData.py", line 4969, in main_validate
portal_instance = load_portal_info(server_url, logger)
File "/cbioportal/core/src/main/scripts/importer/validateData.py", line 4622, in load_portal_info
parsed_json = request_from_portal_api(path, api_name, logger)
File "/cbioportal/core/src/main/scripts/importer/validateData.py", line 4495, in request_from_portal_api
) from e
ConnectionError: Failed to fetch metadata from the portal at [http://cbioportal-container:8081/api-legacy/cancertypes]
Now we know that it is a firewall issue, because everything works when we install it outside the firewall. But we do not yet know how to change the firewall. Our idea was to look up the files and lines that throw the errors, but we do not know how to inspect the files since they are inside the Docker container.
So we cannot just do something like
vim /cbioportal/core/src/main/scripts/importer/validateData.py
...because there is nothing at that path on the host. Of course we know this file is inside the Docker image, but as I said, we don't know how to look into it. At the moment we do not know how to solve this riddle; any help is appreciated.
Maybe you still need this.
You can access this Python file within the container by using docker-compose exec cbioportal sh or docker-compose exec cbioportal bash.
Then you can use cd, cat, vi, vim, etc. to access the path given in your post.
I'm not sure which command you're actually running, but when I did the import call like
docker-compose run --rm cbioportal metaImport.py -u http://cbioportal:8080 -s study/lgg_ucsf_2014/lgg_ucsf_2014/ -o
I had to replace http://cbioportal:8080 with the server's IP address.
Also note that the studies path is one level deeper than in the official documentation.
In cBioPortal behind a proxy, the study import is only available in offline mode:
First you need to get inside the container
docker exec -it cbioportal-container bash
Then generate portal info folder
cd $PORTAL_HOME/core/src/main/scripts
./dumpPortalInfo.pl $PORTAL_HOME/my_portal_info_folder
Then import the study offline. The -o flag is important to overwrite despite warnings.
cd $PORTAL_HOME/core/src/main/scripts
./importer/metaImport.py -p $PORTAL_HOME/my_portal_info_folder -s /study/lgg_ucsf_2014 -v -o
Hope this helps.
I’m trying to install Anaconda, Python 3 and Jupyter notebooks on an AWS EC2 instance. I’m running Ubuntu on the instance. I’ve installed Python using Anaconda. I’ve set the default Python to the Anaconda version. I created a Jupyter notebook config file. In the Jupyter notebook config file I added:
c = get_config()
# Notebook config this is where you saved your pem cert
c.NotebookApp.certfile = u'/home/ubuntu/certs/mycert.pem'
# Run on all IP addresses of your instance
c.NotebookApp.ip = '*'
# Don't open browser by default
c.NotebookApp.open_browser = False
# Fix port to 8888
c.NotebookApp.port = 8888
I also created a directory for the certs using the code below:
mkdir certs
cd certs
sudo openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
But when I try to run Jupyter Notebook with the command below:
jupyter notebook
I get the error message below. My end goal is to be able to launch Jupyter Notebook on the AWS EC2 instance and then connect to it remotely in a browser on my laptop. Does anyone know what my issue might be?
Error:
Writing notebook server cookie secret to /run/user/1000/jupyter/notebook_cookie_secret
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/traitlets/traitlets.py", line 528, in get
value = obj._trait_values[self.name]
KeyError: 'allow_remote_access'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/notebook/notebookapp.py", line 864, in _default_allow_remote
addr = ipaddress.ip_address(self.ip)
File "/home/ubuntu/anaconda3/lib/python3.7/ipaddress.py", line 54, in ip_address
address)
ValueError: '' does not appear to be an IPv4 or IPv6 address
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/bin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/jupyter_core/application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "</home/ubuntu/anaconda3/lib/python3.7/site-packages/decorator.py:decorator-gen-7>", line 2, in initialize
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/notebook/notebookapp.py", line 1630, in initialize
self.init_webapp()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/notebook/notebookapp.py", line 1378, in init_webapp
self.jinja_environment_options,
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/notebook/notebookapp.py", line 159, in __init__
default_url, settings_overrides, jinja_env_options)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/notebook/notebookapp.py", line 252, in init_settings
allow_remote_access=jupyter_app.allow_remote_access,
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/traitlets/traitlets.py", line 556, in __get__
return self.get(obj, cls)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/traitlets/traitlets.py", line 535, in get
value = self._validate(obj, dynamic_default())
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/notebook/notebookapp.py", line 867, in _default_allow_remote
for info in socket.getaddrinfo(self.ip, self.port, 0, socket.SOCK_STREAM):
File "/home/ubuntu/anaconda3/lib/python3.7/socket.py", line 748, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
Go to your AWS instance's security group and configure an inbound rule as shown in the screenshot below:
If you are sure that AWS is configured correctly for permissions, check that your network is not blocking the outbound traffic. You could try port tunneling when SSHing into your instance:
ssh -i <your-key.pem> -L 8888:127.0.0.1:8888 <user>@<ec2-host>
(the key file, user, and host placeholders are yours to fill in). Then you can access Jupyter locally by going to localhost:8888 in your browser.
In the Jupyter notebook config file that you shared in the question above, a few lines seem to be missing.
To configure the jupyter config file thoroughly, follow these steps:
cd ~/.jupyter/
vi jupyter_notebook_config.py
Insert this at the beginning of the document:
c = get_config()
# Kernel config
c.IPKernelApp.pylab = 'inline' # if you want plotting support always in your notebook
# Notebook config
c.NotebookApp.certfile = u'/home/ubuntu/certs/mycert.pem' #location of your certificate file
c.NotebookApp.ip = '0.0.0.0'
c.NotebookApp.open_browser = False #so that the notebook does not open a browser by default
c.NotebookApp.password = u'sha1:98ff0e580111:12798c72623a6eecd54b51c006b1050f0ac1a62d' #the hashed password we generated above
# Set the port to 8888, the port we set up in the AWS EC2 set-up
c.NotebookApp.port = 8888
Once you enter these above lines, make sure you save the config file before you exit the vi editor!
And also, most importantly, remember to replace sha1:98ff0e580111:12798c72623a6eecd54b51c006b1050f0ac1a62d with your own hashed password!
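The hash itself can be generated with notebook.auth.passwd(). As a rough illustration of the legacy algo:salt:digest format (this is my understanding of how it is constructed, using only the standard library; prefer the real helper for your actual config):

```python
import hashlib
import random

def notebook_passwd(passphrase, algorithm='sha1'):
    """Sketch of the legacy Jupyter password hash 'algo:salt:digest':
    a random 12-hex-digit salt, then digest = H(passphrase + salt)."""
    salt = '%012x' % random.getrandbits(48)
    h = hashlib.new(algorithm)
    h.update(passphrase.encode('utf-8') + salt.encode('ascii'))
    return ':'.join((algorithm, salt, h.hexdigest()))

print(notebook_passwd('my-secret'))
```

The printed value is what goes into c.NotebookApp.password.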
Note that since the config file above specifies port 8888, the same port is added to the security group (Custom TCP type, TCP protocol, port range 8888, custom source).
Now you are good to go!
Type the following command:
screen
This command creates a separate screen session for your Jupyter process logs, so you can continue to do other work on the EC2 instance.
And now start the jupyter notebook by typing the command:
jupyter notebook
To visit the jupyter notebook from the browser in your local machine:
Your EC2 instance will have a long url, like this:
ec2-52-39-239-66.us-west-2.compute.amazonaws.com
Visit that URL in your browser locally. Make sure to have https at the beginning and port 8888 at the end, as shown below.
https://ec2-52-39-239-66.us-west-2.compute.amazonaws.com:8888/
You can start the Jupyter server using the following command:
jupyter notebook --ip=*
If you want to keep it running even after the terminal is closed, use:
nohup jupyter notebook --ip=* > nohup_jupyter.out &
Remember to open the port 8888 in the AWS EC2 security group inbound to Anywhere (0.0.0.0/0, ::/0)
Then you can access Jupyter at http://<ec2-public-ip>:8888
Hope this helps. It's just a one-liner solution!
I am able to create a resource group with the following Ansible playbook in the Azure Cloud Shell, but not from my local PC. Why? I tried recreating the application/secrets multiple times, but nothing worked.
- name: Create Azure Kubernetes Service
  hosts: localhost
  connection: local
  vars:
    resource_group: birdy71
    location: westeurope
    aks_name: birdy7-cluster
    username: birdy7
    ssh_key: "ssh-rsa xxxxxxxx"
    client_id: "xxxx"
    client_secret: "xxx"
    tenant: "xxx"
    subscription_id: "xxx"
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: "{{ resource_group }}"
        location: "{{ location }}"
        client_id: "{{ client_id }}"
        secret: "{{ client_secret }}"
        subscription_id: "{{ subscription_id }}"
        tenant: "{{ tenant }}"
In the Azure Cloud Shell I removed the ~/.azure folder completely, but it works nonetheless. On my local PC I get this error: AADSTS7000215: Invalid client secret is provided.
But how can that be? The secret works fine when used from within the Azure Cloud Shell.
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Timestamp: 2019-03-20 13: 34: 02Z
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/Users/tobias/.ansible/tmp/ansible-tmp-1553088840.81-75656009010434/AnsiballZ_azure_rm_resourcegroup.py\", line 113, in <module>\n _ansiballz_main()\n File \"/Users/tobias/.ansible/tmp/ansible-tmp-1553088840.81-75656009010434/AnsiballZ_azure_rm_resourcegroup.py\", line 105, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/Users/tobias/.ansible/tmp/ansible-tmp-1553088840.81-75656009010434/AnsiballZ_azure_rm_resourcegroup.py\", line 48, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/var/folders/fl/pps_zz4s3lx6569226xr_2bh0000gn/T/ansible_azure_rm_resourcegroup_payload_CeouHT/__main__.py\", line 256, in <module>\n File \"/var/folders/fl/pps_zz4s3lx6569226xr_2bh0000gn/T/ansible_azure_rm_resourcegroup_payload_CeouHT/__main__.py\", line 252, in main\n File \"/var/folders/fl/pps_zz4s3lx6569226xr_2bh0000gn/T/ansible_azure_rm_resourcegroup_payload_CeouHT/__main__.py\", line 136, in __init__\n File \"/var/folders/fl/pps_zz4s3lx6569226xr_2bh0000gn/T/ansible_azure_rm_resourcegroup_payload_CeouHT/ansible_azure_rm_resourcegroup_payload.zip/ansible/module_utils/azure_rm_common.py\", line 301, in __init__\n File \"/var/folders/fl/pps_zz4s3lx6569226xr_2bh0000gn/T/ansible_azure_rm_resourcegroup_payload_CeouHT/ansible_azure_rm_resourcegroup_payload.zip/ansible/module_utils/azure_rm_common.py\", line 1021, in __init__\n File \"/Users/tobias/.venv/azure2/lib/python2.7/site-packages/msrestazure/azure_active_directory.py\", line 453, in __init__\n self.set_token()\n File \"/Users/tobias/.venv/azure2/lib/python2.7/site-packages/msrestazure/azure_active_directory.py\", line 480, in set_token\n raise_with_traceback(AuthenticationError, \"\", err)\n File \"/Users/tobias/.venv/azure2/lib/python2.7/site-packages/msrest/exceptions.py\", line 48, in raise_with_traceback\n raise error\nmsrest.exceptions.AuthenticationError: , InvalidClientError: (invalid_client) 
AADSTS7000215: Invalid client secret is provided.\r\nTrace ID: c7fab593-93e7-415f-a3e8-5ba973e81e00\r\nCorrelation ID: 5ee1181d-f0ac-4c08-a0e7-dfba9c722073\r\nTimestamp: 2019-03-20 13:34:02Z\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
I can reproduce the error that happened to you:
The error shows the core problem you hit:
InvalidClientError: (invalid_client) AADSTS7000215: Invalid client secret is provided.
So you must have entered the wrong secret for the service principal. A service principal's secret is only shown at creation time, so if you really do not remember it, I suggest resetting it with the CLI command az ad sp credential reset.
Also, you can check whether your service principal's secret is correct with the CLI command:
az login --service-principal --username APP_ID --password PASSWORD --tenant TENANT_ID
In addition, when you use the Cloud Shell to run Ansible, Azure provides credentials automatically. See Automatic credential configuration:
When signed into the Cloud Shell, Ansible authenticates with Azure to manage infrastructure without any additional configuration.
The screenshot below is the result of my test:
The solution was to regenerate my client secret until I got one without special characters like "&" and "\". :-(
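To catch this earlier next time, here is a small hypothetical helper (not part of any Azure tooling) that flags characters which commonly break shell, YAML, or .ini quoting in a generated secret:

```python
# Characters that often cause quoting trouble in shells, YAML, or .ini files.
AWKWARD = set('&\\\'"$`%;')

def awkward_chars(secret):
    """Return the sorted list of potentially problematic characters in `secret`."""
    return sorted(set(secret) & AWKWARD)

print(awkward_chars('p&ss\\word'))  # -> ['&', '\\']
```

If this returns a non-empty list, regenerating the secret (or resetting it with az ad sp credential reset) may be easier than fighting the quoting.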