Why does the Ansible remote copy not work from local - Linux

I just started learning Ansible and am writing my first copy task, but I am clueless why it's not working.
---
- hosts: osa
  tasks:
    - name: Copying file to remote host
      copy: src=/tmp/foo.txt dest=/tmp/bar.txt
I ran the playbook, but nothing happened and there was no error either.
# /usr/bin/ansible-playbook /home/spatel/ansible/first-playbook.yml
PLAY [osa] **************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************
ok: [10.5.1.160]
TASK [Copying file to remote host] **************************************************************************************************************************************************
ok: [10.5.1.160]
PLAY RECAP **************************************************************************************************************************************************************************
10.5.1.160 : ok=2 changed=0 unreachable=0 failed=0
My /etc/ansible/hosts file has the remote host's IP address:
[osa]
10.5.1.160
Verbose output:
# ansible-playbook -vv first-playbook.yml
ansible-playbook 2.4.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Nov 6 2016, 00:28:07) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: first-playbook.yml ********************************************************************************************************************************************************
1 plays in first-playbook.yml
PLAY [osa] **************************************************************************************************************************************************************************
META: ran handlers
TASK [Copying file to remote host] **************************************************************************************************************************************************
task path: /home/spatel/ansible/first-playbook.yml:5
ok: [10.5.1.160] => {"changed": false, "checksum": "4383f040dc0303e55260bca327cc0eeb213b04b5", "failed": false, "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/tmp/bar.txt", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 8, "state": "file", "uid": 0}
META: ran handlers
META: ran handlers
PLAY RECAP **************************************************************************************************************************************************************************
10.5.1.160 : ok=1 changed=0 unreachable=0 failed=0
UPDATE
I have noticed it's copying the file to localhost, not the remote host. This makes no sense: why does the output say it is executing the playbook on the remote host (10.5.1.160)?
# ls -l /tmp/bar.txt
-rw-r--r--. 1 root root 16 May 24 10:45 /tmp/bar.txt

I had a similar problem where it was looking for the file on the remote server. You are running your tasks on the remote server (hosts: osa), but the file is on localhost. I could solve it with delegate_to: localhost.
---
- hosts: osa
  tasks:
    - name: Copying file to remote host
      delegate_to: localhost
      copy: src=/tmp/foo.txt dest=/tmp/bar.txt
However, I first had to add a task to read the files on localhost, and then provide them afterwards to the remote host:
- name: read files
  # This needs to run on localhost, because that's where
  # the keys are stored.
  delegate_to: localhost
  command: cat {{ item }}
  # Register the results of this task in a variable called "files"
  register: files
  with_fileglob:
    - "/tmp/*.txt"

- name: show what was stored in the files variable
  debug:
    var: files

- name: Copying file to remote host
  delegate_to: localhost
  copy: src="{{ item.stdout }}" dest=/tmp/bar.txt
  with_items: "{{ files.results }}"
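As an aside, since with_fileglob expands on the controller anyway, the copy module can loop over the local files directly and skip the intermediate cat step; a minimal sketch:

```yaml
- name: Copy every local /tmp/*.txt to the remote host
  copy:
    # with_fileglob expands on the controller, so src points at local files
    src: "{{ item }}"
    dest: /tmp/
  with_fileglob:
    - "/tmp/*.txt"
```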

Related

Creating an Ansible playbook for getting the Apache status and displaying just the status

I have written an Ansible playbook which is as follows, but the problem is that it shows the status as disabled when I launch the playbook, while when I check the server status manually on my Ubuntu server, it shows active.
Please help and suggest what mistake I am making in writing the playbook.
Note: My goal is to get the status of the Apache server (is it in started or stopped status?) and either print it on the screen or append it to some abc.txt file; it should work every time and update the abc.txt file with the status.
testfile.yml
---
- name: "To check if the apache is running or not"
  hosts: webserver1
  become: yes
  tasks:
    - name: Check Apache2 service status
      service:
        name: apache2
        state: started
      register: apache2_status
    # - name: apache2 status
    #   command: service apache2 status
    #   register: apache2_status
    - name: Print Apache2 service status
      debug:
        msg: "Apache2 service is {{ 'started' if apache2_status.status == 'started' else 'stopped' }}"
Running the following ansible command to run the playbook
ansible-playbook testfile.yml -i inventory --ask-become-pass
output
PLAY [To check if the apache is running or not] ********************************************************************************
TASK [Gathering Facts] *********************************************************************************************************
ok: [webserver1]
TASK [Check Apache2 service status] ********************************************************************************************
ok: [webserver1]
TASK [Print Apache2 service status] ********************************************************************************************
ok: [webserver1] => {
"msg": "Apache2 service is stopped"
}
PLAY RECAP *********************************************************************************************************************
webserver1 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And when I check the same manually on my Ubuntu webserver1:
ubuntu@ubuntu1:~$ service apache2 status
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2022-12-31 08:56:30 UTC; 5h 28min ago
Docs: https://httpd.apache.org/docs/2.4/
Process: 632 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
Main PID: 708 (apache2)
Tasks: 55 (limit: 1695)
Memory: 7.5M
CPU: 2.724s
CGroup: /system.slice/apache2.service
├─708 /usr/sbin/apache2 -k start
├─713 /usr/sbin/apache2 -k start
└─714 /usr/sbin/apache2 -k start
Dec 31 08:56:29 ubuntu1 systemd[1]: Starting The Apache HTTP Server...
Dec 31 08:56:30 ubuntu1 apachectl[685]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name,>
Dec 31 08:56:30 ubuntu1 systemd[1]: Started The Apache HTTP Server.
I have tried to run the Ansible playbook with different parameters but have not been able to produce my expected output.
My goal is to get the status of the Apache server (is it in started or stopped status?) and either print it on the screen or append it to some abc.txt file; it should work every time and update the abc.txt file with the new status.
For example, given the inventory
shell> cat hosts
[webservers]
webserver1 ansible_host=10.1.0.74
[webservers:vars]
ansible_connection=ssh
ansible_user=admin
ansible_become=yes
ansible_become_user=root
ansible_become_method=sudo
ansible_python_interpreter=/bin/python3.6
The playbook below makes sure in the first task that the web server is running. The second task collects the status of all services. The third task writes the status to a file on the controller.
- name: Get status of webserver and append the new status to file
  hosts: webservers
  tasks:
    - name: Start lighttpd if not running already
      service:
        name: lighttpd
        state: started
      register: status
    - debug:
        var: status.state
    - name:
      service_facts:
    - debug:
        var: ansible_facts.services['lighttpd.service']
    - lineinfile:
        create: true
        path: /tmp/webservers.status
        line: "{{ '%Y-%m-%d %H:%M:%S'|strftime }} {{ item }} {{ s.state }} {{ s.status }}"
      loop: "{{ ansible_play_hosts }}"
      run_once: true
      delegate_to: localhost
      vars:
        s: "{{ hostvars[item].ansible_facts.services['lighttpd.service'] }}"
gives
PLAY [Get status of webserver and append the new status to file] *****************************
TASK [Start lighttpd if not running already] *************************************************
ok: [webserver1]
TASK [debug] *********************************************************************************
ok: [webserver1] =>
status.state: started
TASK [service_facts] *************************************************************************
ok: [webserver1]
TASK [debug] *********************************************************************************
ok: [webserver1] =>
  ansible_facts.services['lighttpd.service']:
    name: lighttpd.service
    source: systemd
    state: running
    status: disabled
TASK [lineinfile] ****************************************************************************
changed: [webserver1 -> localhost] => (item=webserver1)
PLAY RECAP ***********************************************************************************
webserver1: ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
and appends the status of the webserver(s) to the file
shell> cat /tmp/webservers.status
2023-01-01 01:03:15 webserver1 running disabled
2023-01-01 01:03:32 webserver1 running disabled
I'm pretty sure your problem is that you want to check the state, but you are effectively checking whether the service is enabled or not.
(I.e. a service can be currently running but not enabled; and even though your service is enabled, that is not what you specified in Ansible, so it is not collecting that for you.)
I'd suggest using systemd instead of service, like this: set the state to started, but in check mode. The module won't actually do anything, but it will set "changed" to true if a change would have been made.
This is more Ansible-ish in my opinion than calling a shell command.
(I tested this on Fedora -- adjust the service name as you need to.)
---
- hosts: localhost
  tasks:
    - name: check if apache is running
      ansible.builtin.systemd:
        # on Ubuntu, I think this is apache2
        name: httpd
        state: started
      register: httpd_status
      check_mode: true
    - name: debug
      ansible.builtin.debug:
        msg: "Apache2 service is {{ httpd_status.changed | ternary('stopped','started') }}"
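To also cover the append-to-a-file part of the question, the registered result could be written out with lineinfile; a hedged sketch building on the task above (the abc.txt name comes from the question, the /tmp location is an assumption):

```yaml
- name: Append the current status to abc.txt on the controller
  delegate_to: localhost
  lineinfile:
    create: true
    path: /tmp/abc.txt  # assumed location
    line: "{{ inventory_hostname }} apache is {{ httpd_status.changed | ternary('stopped','started') }}"
```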

Using relative path in yaml file for id_rsa file

I'm trying to use "~/.ssh/id_rsa" as the key path in cluster.yaml to deploy a Kubernetes cluster, but when it's called, the error says 'No such file or directory: ~/.ssh/id_rsa'.
node_defaults:
  keyfile: "~/.ssh/id_rsa"
  username: "nikhil"

nodes:
  - name: "k8s-control-plane"
    address: "10.0.0.1"
    internal_address: "192.***.**.***"
    roles: ["control-plane", "worker"]

cluster_name: "k8s-stack.testcluster.com"
It works fine if I use an absolute path for keyfile (/home/user/.ssh/id_rsa), but I'm facing this issue when using a relative path like the above.
Some environment variables are available in Ansible. For example, the playbook
shell> cat pb.yml
- hosts: test_11
  gather_facts: true
  tasks:
    - command: whoami
      register: result
    - debug:
        var: result.stdout
    - command: echo $HOME
      register: result
    - debug:
        var: result.stdout
    - debug:
        var: ansible_env
shows the environment variables collected by setup (gather_facts: true)
shell> ansible-playbook pb.yml
PLAY [test_11] *******************************************************************************
TASK [Gathering Facts] ***********************************************************************
ok: [test_11]
TASK [command] *******************************************************************************
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
result.stdout: admin
TASK [command] *******************************************************************************
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
result.stdout: /home/admin
TASK [debug] *********************************************************************************
ok: [test_11] =>
  ansible_env:
    BLOCKSIZE: K
    HOME: /home/admin
    LANG: C.UTF-8
    LOGNAME: admin
    MAIL: /var/mail/admin
    MM_CHARSET: UTF-8
    PATH: /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/admin/bin
    PWD: /home/admin
    SHELL: /bin/sh
    SSH_CLIENT: 10.1.0.184 58084 22
    SSH_CONNECTION: 10.1.0.184 58084 10.1.0.61 22
    SSH_TTY: /dev/pts/1
    TERM: xterm-256color
    USER: admin
PLAY RECAP ***********************************************************************************
test_11: ok=6 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Notes
Some environment variables are available in the configuration files.
See How do I access shell environment variables?
See Running under fakeroot
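Within a playbook, the tilde can also be expanded explicitly with Ansible's expanduser filter; a minimal sketch (the keyfile value is taken from the question):

```yaml
- hosts: localhost
  tasks:
    - name: Expand ~ in the keyfile path
      debug:
        msg: "{{ keyfile | expanduser }}"
      vars:
        keyfile: "~/.ssh/id_rsa"
```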

connect() got an unexpected keyword argument 'sock'

I'm getting the below error when trying to connect to routers, and I'm running the latest version of Paramiko. What could be causing this error, and is there any workaround?
Update:
It's a simple script using Ansible to collect "show system uptime" command output.
I have an inventory file which contains the IP addresses of the routers.
$ cat uptime.yaml
---
- name: Get device uptime
  hosts:
    - all
  connection: local
  gather_facts: no
  vars_prompt:
    - name: username
      prompt: UsernName
      private: no
    - name: password
      prompt: Password
      private: yes
  tasks:
    - name: get uptime using ansible moduel
      junos_command:
        commands:
          - show system uptime
        provider:
          host: "{{ansible_host}}"
          port: 22
          username: "{{ username }}"
          password: "{{ password }}"
>>> print(paramiko.__version__)
2.7.2
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]. This feature
will be removed from ansible-core in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
UsernName: regress
Password:
PLAY [Get device uptime] ************************************************************************************************************************************************************************
TASK [get uptime using ansible moduel] **********************************************************************************************************************************************************
[WARNING]: ['connection local support for this module is deprecated and will be removed in version 2.14, use connection ansible.netcommon.netconf']
fatal: [R1_re0]: FAILED! => {"changed": false, "msg": "connect() got an unexpected keyword argument 'sock'"}
fatal: [R2_re0]: FAILED! => {"changed": false, "msg": "connect() got an unexpected keyword argument 'sock'"}
PLAY RECAP **************************************************************************************************************************************************************************************
R1_re0 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
R2_re0 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
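The warning in the output already points at the replacement: the ansible.netcommon.netconf connection plugin. A sketch of that newer style, untested against these routers and assuming the junipernetworks.junos collection is installed:

```yaml
---
- name: Get device uptime
  hosts: all
  # per the deprecation warning, use the netconf connection plugin
  connection: ansible.netcommon.netconf
  gather_facts: no
  vars:
    ansible_network_os: junipernetworks.junos.junos
  tasks:
    - name: get uptime
      junipernetworks.junos.junos_command:
        commands:
          - show system uptime
```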

ImportError: No module named influxdb. Failed to import the required Python library (influxdb)

I have a problem managing InfluxDB through Ansible using the "influxdb_database" module. Even though it prints an error about a Python dependency, it fails only when the container where the Ansible playbook runs is on a different VM from the one where InfluxDB is hosted. I run the Ansible playbook from a Docker container, and if I run the container on the host where InfluxDB is installed, it works fine: it manages to create the db. But when the same container (created from the same image as the one mentioned before) runs on a different VM from the one where InfluxDB is hosted, it fails with the error pasted below. So I am now confused by the error about the Python dependency and do not understand where the problem is.
Ansible playbook:
- hosts: "tag_deployment_sysmiromis:&tag_service_tick_yes"
  user: centos
  become: yes
  tasks:
    - name: Install InfluxDB package
      yum: name="influxdb-{{ frame_tick_influxdb_package_version }}" state=present disable_gpg_check=yes
      register: frame_yum_run
      retries: 10
      until: frame_yum_run is success
    - name: Restrict InfluxDB user login
      user:
        name: "influxdb"
        group: "influxdb"
        shell: /sbin/nologin
    - name: Enable InfluxDB service
      systemd:
        name: influxdb
        enabled: yes
        state: started
    - name: Create InfluxDB data directory
      file:
        path: "{{ frame_tick_influxdb_data_directory }}"
        owner: influxdb
        group: influxdb
        state: directory
        mode: 0750
    - name: Create database
      influxdb_database:
        hostname: localhost
        database_name: miroslav
Ansible log on the failed task:
TASK [Create database] ***********************************************************************************************************************************************************
task path: /app/lib/ansible/playbooks/influx.yml:6
Using module file /usr/lib/python3.8/site-packages/ansible/modules/database/influxdb/influxdb_database.py
Pipelining is enabled.
<10.246.44.196> ESTABLISH SSH CONNECTION FOR USER: centos
<10.246.44.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="centos"' -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/be4c96d801 10.246.44.196 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-brzvkupumuacfsjirccgazqszuzzfwwx ; /usr/bin/python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<10.246.44.196> (1, b'\n{"msg": "Failed to import the required Python library (influxdb) on frame-tick10-246-44-196\'s Python /usr/bin/python. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter", "failed": true, "exception": "Traceback (most recent call last):\\n File \\"/tmp/ansible_influxdb_database_payload_IrdxhN/ansible_influxdb_database_payload.zip/ansible/module_utils/influxdb.py\\", line 23, in <module>\\n from influxdb import InfluxDBClient\\nImportError: No module named influxdb\\n", "invocation": {"module_args": {"username": "root", "retries": 3, "use_udp": true, "proxies": {}, "database_name": "miroslav", "hostname": "localhost", "udp_port": 4444, "ssl": false, "state": "present", "timeout": null, "password": "root", "validate_certs": true, "port": 8086}}}\n', b'OpenSSH_8.1p1, OpenSSL 1.1.1g 21 Apr 2020\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 10.246.44.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 2147\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\n')
<10.246.44.196> Failed to connect to the host via ssh: OpenSSH_8.1p1, OpenSSL 1.1.1g 21 Apr 2020
debug1: Reading configuration data /etc/ssh/ssh_config
debug2: resolve_canonicalize: hostname 10.246.44.196 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 2147
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_influxdb_database_payload_IrdxhN/ansible_influxdb_database_payload.zip/ansible/module_utils/influxdb.py", line 23, in <module>
from influxdb import InfluxDBClient
ImportError: No module named influxdb
fatal: [10.246.44.196]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"database_name": "miroslav",
"hostname": "localhost",
"password": "root",
"port": 8086,
"proxies": {},
"retries": 3,
"ssl": false,
"state": "present",
"timeout": null,
"udp_port": 4444,
"use_udp": true,
"username": "root",
"validate_certs": true
}
}
}
MSG:
Failed to import the required Python library (influxdb) on frame-tick10-246-44-196's Python /usr/bin/python. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter
Sounds like you're dealing with the same issue as me. I was struggling to find out what was wrong; then I read the requirements and saw this. I'm using a newer version of influxdb than this module supports, so I get the same error as you:
Requirements
The below requirements are needed on the host that executes this module.
python >= 2.6
influxdb >= 0.9 & <= 1.2.4
requests
https://docs.ansible.com/ansible/latest/modules/influxdb_database_module.html
UPDATE:
I have found a way to interact with InfluxDB using its HTTP API instead of the modules, since they don't work here. This involves editing influxdb.conf to enable the API:
# modifying the influxdb.conf is required to be able to use the influxDB API
- name: Enable http
  lineinfile:
    path: /etc/influxdb/influxdb.conf
    regexp: 'Determines whether HTTP endpoint is enabled.'
    line: ' enabled = true'
- name: Enable bind address :8086
  lineinfile:
    path: /etc/influxdb/influxdb.conf
    regexp: '# bind-address = ":8086"'
    line: ' bind-address = ":8086"'
- name: Restart influxdb
  systemd:
    name: influxdb
    state: restarted
- name: Create influxDB database via api
  uri:
    url: "http://localhost:8086/query"
    method: POST
    body: 'q=CREATE DATABASE "grafanadb"'
    body_format: form-urlencoded
- name: create root user in influxdb
  uri:
    url: "http://localhost:8086/query"
    method: POST
    body: "q=CREATE USER user WITH PASSWORD 'pass' WITH ALL PRIVILEGES"
    body_format: form-urlencoded
- name: create grafana user in influxdb
  uri:
    url: "http://localhost:8086/query"
    method: POST
    body: "q=CREATE USER grafana WITH PASSWORD 'grafana'"
    body_format: form-urlencoded
- name: Grant all privileges to grafana user on grafanadb
  uri:
    url: "http://localhost:8086/query"
    method: POST
    body: "q=GRANT ALL ON grafanadb TO grafana"
    body_format: form-urlencoded
In order to get the influxdb_database module working, you have to make sure you have influxdb-python installed on your machine. It is also recommended to have influxdb installed, as you might want to directly access the database from the command line.
For CentOS7/RHEL7 installations this can be done as follows:
yum install python-pip
pip install influxdb
CentOS8/RHEL8:
dnf install python3-pip
pip3 install influxdb
Note: You have to use a different pip depending on the release, as the default Python interpreter is different for CentOS 7 and 8; therefore the Python interpreter used by influxdb_database will be different too.
Therefore, the playbook would look something like this:
- name: Install applications for CentOS 7
  yum:
    name:
      - influxdb
      - python-pip

- name: Install applications for CentOS 8
  yum:
    name:
      - influxdb
      - python3-pip

- name: Install required pip packages
  pip:
    name:
      - influxdb
For debian/ubuntu setups you might do the following:
apt-get install python-influxdb
or
- name: Install applications for Debian/Ubuntu
  apt:
    name:
      - python-influxdb
If you are trying to connect to a remote InfluxDB you should ensure that you are authenticating over SSL. You have to manually enable this as it is not enabled by default.
This is what a remote influx database creation would thus look like:
- name: Create database using custom credentials
  influxdb_database:
    hostname: "{{ influxurl }}"
    username: "{{ influxusername }}"
    password: "{{ influxpassword }}"
    database_name: "{{ influxdbv7 }}"
    port: "{{ influxport }}"
    ssl: yes
    validate_certs: yes
Note: I have tested this setup with CentOS 7/8. It possibly works fine with Ubuntu/Debian setups too. For some reason CentOS 7 required me to disable validate_certs, otherwise it fails. Possibly it's a bug.
Tested version:
Database: InfluxDB version 1.8.3
Ansible: version 2.9
What I was missing was influxdb installed on the targeted host. Once it was installed, the influxdb Ansible module started working fine.
I also struggled with this issue. Downgrading the requests Python package helped me:
pip install requests==2.6.0
(2.25.1 did not work for me)
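The same pin can be applied from a playbook with the pip module; a small sketch (the version number is the one from the comment above):

```yaml
- name: Pin requests to a version the influxdb module works with
  pip:
    name: requests
    version: "2.6.0"
```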

Ansible and route53 unsupported parameter for module: connection

I'm trying to run an Ansible playbook against Amazon's Route 53 service, but I get the error in the title.
$ ansible-playbook play-dns.yml
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [configure dns] *********************************************************
failed: [localhost] => {"failed": true}
msg: unsupported parameter for module: connection
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #/home/myuser/play-dns.retry
localhost : ok=1 changed=0 unreachable=0 failed=1
This is my play:
$ cat play-dns.yml
---
- hosts: localhost
  tasks:
    - name: configure dns
      route53:
        command: create
        aws_access_key: 'XXXXXXXXXXXXXXXXXXXX'
        aws_secret_key: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
        zone: myzone.info
        record: test.myzone.info
        type: A
        ttl: 7200
        value: 1.1.1.1
        wait: yes
        connection: local
And this is my Ansible hosts file:
$ cat /etc/ansible/hosts|grep localhost
[localhost]
localhost ansible_connection=local
If I remove ansible_connection=local from the hosts file
$ cat /etc/ansible/hosts|grep localhost
[localhost]
localhost
then I get this error:
$ ansible-playbook play-dns.yml
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
fatal: [localhost] => SSH Error: ssh: connect to host localhost port 22: Connection refused
while connecting to 127.0.0.1:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
TASK: [configure dns] *********************************************************
fatal: [localhost] => SSH Error: ssh: connect to host localhost port 22: Connection refused
while connecting to 127.0.0.1:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #/home/myuser/play-dns.retry
localhost : ok=0 changed=0 unreachable=2 failed=0
What am I doing wrong?
Your issue is simply indentation. Ansible is parsing your playbook and seeing the connection line as a parameter to the route53 module, which then complains that connection is not a valid parameter for the module.
Instead, you simply need to unindent the line to the same level as hosts so that Ansible parses it as a directive for the play overall rather than for the module:
---
- hosts: localhost
  connection: local
  tasks:
    - name: configure dns
      route53:
        command: create
        aws_access_key: 'XXXXXXXXXXXXXXXXXXXX'
        aws_secret_key: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
        zone: myzone.info
        record: test.myzone.info
        type: A
        ttl: 7200
        value: 1.1.1.1
        wait: yes
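Note that connection is also valid as a task-level keyword, as long as it sits at the same indentation level as name rather than inside the module's parameters; a minimal sketch (credentials elided):

```yaml
- hosts: localhost
  tasks:
    - name: configure dns
      connection: local   # task keyword, sibling of "name"
      route53:
        command: create
        zone: myzone.info
        record: test.myzone.info
        type: A
        ttl: 7200
        value: 1.1.1.1
```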
