Ansible task failed while executing the specific long running task - linux

While running all the tasks in one go, Ansible dropped the SSH session in the middle of a task that takes 26 hours to complete; the connection was closed after about 6 hours of execution. The target server's SSH configuration for keeping the session alive is as follows:
ClientAliveInterval 172000
ClientAliveCountMax 10
Ansible task:
- name: Executing script
  remote_user: "{{ admin_user }}"
  become: yes
  shell: sudo -u test bash ./customscript.sh > /log_dir/customscript.log 2>&1
  args:
    chdir: "deployment_source/common"
  tags:
    - custom-test
Find the error log below:
22:11:44 TASK [role-deployment : Executing script] ************
22:11:44 fatal: [x.x.x.x]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to x.x.x.x closed.\r\n", "unreachable": true}
22:11:44
22:11:44 NO MORE HOSTS LEFT *************************************************************
22:11:44 to retry, use: --limit @/opt/ansible/test/deployment.retry
22:11:44
22:11:44 PLAY RECAP *********************************************************************
22:11:44 x.x.x.x : ok=6 changed=2 unreachable=1 failed=0
Could you tell me what causes the disconnection and how I can solve it?

You should never expect a network connection to remain stable for that long. Ansible has an async mechanism for working with long-running jobs.
Refactor your task to be:
- name: Executing script
  remote_user: "{{ admin_user }}"
  become: yes
  shell: sudo -u test bash ./customscript.sh > /log_dir/customscript.log 2>&1
  args:
    chdir: "deployment_source/common"
  async: 180000
  poll: 60
  tags:
    - custom-test
This allows the task to run for as much as 50 hours (180000 seconds) while checking for completion every 60 seconds.
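If even the periodic polling connection proves fragile, Ansible's fire-and-forget pattern avoids holding any SSH session open: launch the job with poll: 0 and check on it later with async_status. A sketch reusing the task from above (the register/variable names are my own):

```yaml
# Start the script in the background and return immediately.
- name: Executing script in background
  remote_user: "{{ admin_user }}"
  become: yes
  shell: sudo -u test bash ./customscript.sh > /log_dir/customscript.log 2>&1
  args:
    chdir: "deployment_source/common"
  async: 180000      # allow up to 50 hours
  poll: 0            # do not wait; fire and forget
  register: script_job

# Each status check re-establishes a fresh connection,
# so no single SSH session has to survive 26 hours.
- name: Wait for the script to complete
  become: yes
  async_status:
    jid: "{{ script_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 3000      # 3000 checks x 60 s delay = 50 hours
  delay: 60
```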


Creating a ansible playbook for getting the apache status and want to display just the status

I have written the Ansible playbook below, but the problem is that it reports the service as stopped when I launch the playbook, while checking the same server manually on my Ubuntu machine shows it is active.
Please help me find the mistake I am making in the playbook.
Note: my goal is to get the status of the Apache server (is it in a started or stopped state?) and either print it on the screen or append it to some abc.txt file; it should work every time and update abc.txt with the status.
testfile.yml
---
- name: "To check if the apache is running or not"
  hosts: webserver1
  become: yes
  tasks:
    - name: Check Apache2 service status
      service:
        name: apache2
        state: started
      register: apache2_status
    # - name: apache2 status
    #   command: service apache2 status
    #   register: apache2_status
    - name: Print Apache2 service status
      debug:
        msg: "Apache2 service is {{ 'started' if apache2_status.status == 'started' else 'stopped' }}"
Running the following ansible command to run the playbook
ansible-playbook testfile.yml -i inventory --ask-become-pass
output
PLAY [To check if the apache is running or not] ********************************************************************************
TASK [Gathering Facts] *********************************************************************************************************
ok: [webserver1]
TASK [Check Apache2 service status] ********************************************************************************************
ok: [webserver1]
TASK [Print Apache2 service status] ********************************************************************************************
ok: [webserver1] => {
"msg": "Apache2 service is stopped"
}
PLAY RECAP *********************************************************************************************************************
webserver1 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And When I check the same manually on my ubuntu webserver1
ubuntu@ubuntu1:~$ service apache2 status
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2022-12-31 08:56:30 UTC; 5h 28min ago
Docs: https://httpd.apache.org/docs/2.4/
Process: 632 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
Main PID: 708 (apache2)
Tasks: 55 (limit: 1695)
Memory: 7.5M
CPU: 2.724s
CGroup: /system.slice/apache2.service
├─708 /usr/sbin/apache2 -k start
├─713 /usr/sbin/apache2 -k start
└─714 /usr/sbin/apache2 -k start
Dec 31 08:56:29 ubuntu1 systemd[1]: Starting The Apache HTTP Server...
Dec 31 08:56:30 ubuntu1 apachectl[685]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name,>
Dec 31 08:56:30 ubuntu1 systemd[1]: Started The Apache HTTP Server.
I have tried running the Ansible playbook with different parameters but have not been able to produce my expected output: the status of the Apache server (started or stopped?), either printed on the screen or appended to some abc.txt file, with the file updated with the new status on every run.
For example, given the inventory
shell> cat hosts
[webservers]
webserver1 ansible_host=10.1.0.74
[webservers:vars]
ansible_connection=ssh
ansible_user=admin
ansible_become=yes
ansible_become_user=root
ansible_become_method=sudo
ansible_python_interpreter=/bin/python3.6
The playbook below makes sure in the first task that the web server is running. The second task collects the status of all services. The third task writes the status to a file on the controller.
- name: Get status of webserver and append the new status to file
  hosts: webservers
  tasks:
    - name: Start lighttpd if not running already
      service:
        name: lighttpd
        state: started
      register: status
    - debug:
        var: status.state
    - service_facts:
    - debug:
        var: ansible_facts.services['lighttpd.service']
    - lineinfile:
        create: true
        path: /tmp/webservers.status
        line: "{{ '%Y-%m-%d %H:%M:%S'|strftime }} {{ item }} {{ s.state }} {{ s.status }}"
      loop: "{{ ansible_play_hosts }}"
      run_once: true
      delegate_to: localhost
      vars:
        s: "{{ hostvars[item].ansible_facts.services['lighttpd.service'] }}"
gives
PLAY [Get status of webserver and append the new status to file] *****************************
TASK [Start lighttpd if not running already] *************************************************
ok: [webserver1]
TASK [debug] *********************************************************************************
ok: [webserver1] =>
status.state: started
TASK [service_facts] *************************************************************************
ok: [webserver1]
TASK [debug] *********************************************************************************
ok: [webserver1] =>
ansible_facts.services['lighttpd.service']:
name: lighttpd.service
source: systemd
state: running
status: disabled
TASK [lineinfile] ****************************************************************************
changed: [webserver1 -> localhost] => (item=webserver1)
PLAY RECAP ***********************************************************************************
webserver1: ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
and appends the status of the webserver(s) to the file
shell> cat /tmp/webservers.status
2023-01-01 01:03:15 webserver1 running disabled
2023-01-01 01:03:32 webserver1 running disabled
I'm pretty sure your problem is that you want to check the state, but you are effectively checking whether the service is enabled or not.
(I.e. a service can be currently running but not enabled; and even though your service is enabled, that is not what your expression is actually collecting for you.)
I'd suggest using systemd instead of service, like this: set the state to started, but in check mode. The module won't actually do anything, but it will set changed to true if a change would have been made.
This is more Ansible-ish, in my opinion, than calling a shell command.
(I tested this on Fedora; adjust the service name as you need to.)
---
- hosts: localhost
  tasks:
    - name: check if apache is running
      ansible.builtin.systemd:
        # on Ubuntu, I think this is apache2
        name: httpd
        state: started
      register: httpd_status
      check_mode: true
    - name: debug
      ansible.builtin.debug:
        msg: "Apache2 service is {{ httpd_status.changed | ternary('stopped','started') }}"
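To also append the result to a file as the question asks, one more task can reuse the registered variable; a sketch under the same check-mode assumption (the abc.txt path comes from the question, the timestamp format is my own choice):

```yaml
- name: Append Apache status to abc.txt on the controller
  ansible.builtin.lineinfile:
    create: true
    path: /tmp/abc.txt
    line: "{{ '%Y-%m-%d %H:%M:%S' | strftime }} {{ inventory_hostname }} apache is {{ httpd_status.changed | ternary('stopped', 'started') }}"
  delegate_to: localhost
```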

ImportError: No module named influxdb. Failed to import the required Python library (influxdb)

I have a problem managing InfluxDB through Ansible using the influxdb_database module. Although it prints an error about a Python dependency, it fails only when the container running the Ansible playbook is on a different VM from the one hosting InfluxDB. I run the playbook from a Docker container: if the container runs on the host where InfluxDB is installed, it works fine and manages to create the database. But when the same container (created from the same image) runs on a different VM, it fails with the error pasted below. So I am confused by the Python dependency error and do not understand where the problem is.
Ansible playbook:
- hosts: "tag_deployment_sysmiromis:&tag_service_tick_yes"
  user: centos
  become: yes
  tasks:
    - name: Install InfluxDB package
      yum: name="influxdb-{{ frame_tick_influxdb_package_version }}" state=present disable_gpg_check=yes
      register: frame_yum_run
      retries: 10
      until: frame_yum_run is success
    - name: Restrict InfluxDB user login
      user:
        name: "influxdb"
        group: "influxdb"
        shell: /sbin/nologin
    - name: Enable InfluxDB service
      systemd:
        name: influxdb
        enabled: yes
        state: started
    - name: Create InfluxDB data directory
      file:
        path: "{{ frame_tick_influxdb_data_directory }}"
        owner: influxdb
        group: influxdb
        state: directory
        mode: 0750
    - name: Create database
      influxdb_database:
        hostname: localhost
        database_name: miroslav
Ansible log on failed task
TASK [Create database] ***********************************************************************************************************************************************************
task path: /app/lib/ansible/playbooks/influx.yml:6
Using module file /usr/lib/python3.8/site-packages/ansible/modules/database/influxdb/influxdb_database.py
Pipelining is enabled.
<10.246.44.196> ESTABLISH SSH CONNECTION FOR USER: centos
<10.246.44.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="centos"' -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/be4c96d801 10.246.44.196 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-brzvkupumuacfsjirccgazqszuzzfwwx ; /usr/bin/python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<10.246.44.196> (1, b'\n{"msg": "Failed to import the required Python library (influxdb) on frame-tick10-246-44-196\'s Python /usr/bin/python. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter", "failed": true, "exception": "Traceback (most recent call last):\\n File \\"/tmp/ansible_influxdb_database_payload_IrdxhN/ansible_influxdb_database_payload.zip/ansible/module_utils/influxdb.py\\", line 23, in <module>\\n from influxdb import InfluxDBClient\\nImportError: No module named influxdb\\n", "invocation": {"module_args": {"username": "root", "retries": 3, "use_udp": true, "proxies": {}, "database_name": "miroslav", "hostname": "localhost", "udp_port": 4444, "ssl": false, "state": "present", "timeout": null, "password": "root", "validate_certs": true, "port": 8086}}}\n', b'OpenSSH_8.1p1, OpenSSL 1.1.1g 21 Apr 2020\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug2: resolve_canonicalize: hostname 10.246.44.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 2147\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\n')
<10.246.44.196> Failed to connect to the host via ssh: OpenSSH_8.1p1, OpenSSL 1.1.1g 21 Apr 2020
debug1: Reading configuration data /etc/ssh/ssh_config
debug2: resolve_canonicalize: hostname 10.246.44.196 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 2147
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_influxdb_database_payload_IrdxhN/ansible_influxdb_database_payload.zip/ansible/module_utils/influxdb.py", line 23, in <module>
from influxdb import InfluxDBClient
ImportError: No module named influxdb
fatal: [10.246.44.196]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"database_name": "miroslav",
"hostname": "localhost",
"password": "root",
"port": 8086,
"proxies": {},
"retries": 3,
"ssl": false,
"state": "present",
"timeout": null,
"udp_port": 4444,
"use_udp": true,
"username": "root",
"validate_certs": true
}
}
}
MSG:
Failed to import the required Python library (influxdb) on frame-tick10-246-44-196's Python /usr/bin/python. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter
Sounds like you're dealing with the same issue as me. I was struggling to find out what was wrong, then I read the requirements and saw this. I'm using a newer version of influxdb than this module supports, so I get the same error as you.
Requirements
The below requirements are needed on the host that executes this module.
python >= 2.6
influxdb >= 0.9 & <= 1.2.4
requests
https://docs.ansible.com/ansible/latest/modules/influxdb_database_module.html
UPDATE:
I found a way to interact with InfluxDB via its HTTP API instead of the modules, since they don't work for me. This involves editing influxdb.conf to enable the API:
# modifying the influxdb.conf is required to be able to use the influxDB API
- name: Enable http
  lineinfile:
    path: /etc/influxdb/influxdb.conf
    regexp: 'Determines whether HTTP endpoint is enabled.'
    line: '  enabled = true'
- name: Enable bind address :8086
  lineinfile:
    path: /etc/influxdb/influxdb.conf
    regexp: '# bind-address = ":8086"'
    line: '  bind-address = ":8086"'
- name: Restart influxdb
  systemd:
    name: influxdb
    state: restarted
- name: Create influxDB database via api
  uri:
    url: "http://localhost:8086/query"
    method: POST
    body: 'q=CREATE DATABASE "grafanadb"'
    body_format: form-urlencoded
- name: create root user in influxdb
  uri:
    url: "http://localhost:8086/query"
    method: POST
    body: "q=CREATE USER user WITH PASSWORD 'pass' WITH ALL PRIVILEGES"
    body_format: form-urlencoded
- name: create grafana user in influxdb
  uri:
    url: "http://localhost:8086/query"
    method: POST
    body: "q=CREATE USER grafana WITH PASSWORD 'grafana'"
    body_format: form-urlencoded
- name: Grant all privileges to grafana user on grafanadb
  uri:
    url: "http://localhost:8086/query"
    method: POST
    body: "q=GRANT ALL ON grafanadb TO grafana"
    body_format: form-urlencoded
In order to get the influxdb_database module working, you have to make sure you have influxdb-python installed on the target machine. It is also recommended to have influxdb itself installed, as you might want to access the database directly from the command line.
For CentOS7/RHEL7 installations this can be done as follows:
yum install python-pip
pip install influxdb
CentOS8/RHEL8:
dnf install python3-pip
pip3 install influxdb
Note: you have to use a different Python version because the default Python interpreter differs between CentOS 7 and CentOS 8; therefore the interpreter used by influxdb_database will differ too.
Therefore, the playbook would look something like this:
- name: Install applications for CentOS 7
  yum:
    name:
      - influxdb
      - python-pip
- name: Install applications for CentOS 8
  yum:
    name:
      - influxdb
      - python3-pip
- name: Install required pip packages
  pip:
    name:
      - influxdb
For debian/ubuntu setups you might do the following:
apt-get install python-influxdb
or
- name: Install applications for Debian/Ubuntu
  apt:
    name:
      - python-influxdb
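If the library is installed for one Python but Ansible runs the module under another (the situation the error message hints at with ansible_python_interpreter), pinning the interpreter in the inventory usually resolves it. A minimal sketch, where the host and interpreter path are assumptions for illustration:

```ini
[influx_hosts]
10.246.44.196 ansible_python_interpreter=/usr/bin/python3
```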
If you are trying to connect to a remote InfluxDB you should ensure that you are authenticating over SSL. You have to manually enable this as it is not enabled by default.
This is what a remote influx database creation would thus look like:
- name: Create database using custom credentials
  influxdb_database:
    hostname: "{{ influxurl }}"
    username: "{{ influxusername }}"
    password: "{{ influxpassword }}"
    database_name: "{{ influxdbv7 }}"
    port: "{{ influxport }}"
    ssl: yes
    validate_certs: yes
Note: I have tested this setup with CentOS 7/8. It possibly works fine with Ubuntu/Debian setups too. For some reason CentOS 7 required me to disable validate_certs, otherwise it fails; possibly it's a bug.
Tested version:
Database: InfluxDB version 1.8.3
Ansible: version 2.9
What I was missing was influxdb installed on the target host. Once it was installed, the influxdb Ansible module started working fine.
I also struggled with this issue. Downgrading the requests Python package helped me:
pip install requests==2.6.0
(2.25.1 did not work for me.)

Why does the ansible remote copy not work from local

I just started learning Ansible and am writing my first playbook using the copy module, but I am clueless as to why it's not working.
---
- hosts: osa
  tasks:
    - name: Copying file to remote host
      copy: src=/tmp/foo.txt dest=/tmp/bar.txt
I ran the playbook, but nothing happened and there was no error either.
# /usr/bin/ansible-playbook /home/spatel/ansible/first-playbook.yml
PLAY [osa] **************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************
ok: [10.5.1.160]
TASK [Copying file to remote host] **************************************************************************************************************************************************
ok: [10.5.1.160]
PLAY RECAP **************************************************************************************************************************************************************************
10.5.1.160 : ok=2 changed=0 unreachable=0 failed=0
My /etc/ansible/hosts file has Remote host IP address:
[osa]
10.5.1.160
Verbose output:
# ansible-playbook -vv first-playbook.yml
ansible-playbook 2.4.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Nov 6 2016, 00:28:07) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: first-playbook.yml ********************************************************************************************************************************************************
1 plays in first-playbook.yml
PLAY [osa] **************************************************************************************************************************************************************************
META: ran handlers
TASK [Copying file to remote host] **************************************************************************************************************************************************
task path: /home/spatel/ansible/first-playbook.yml:5
ok: [10.5.1.160] => {"changed": false, "checksum": "4383f040dc0303e55260bca327cc0eeb213b04b5", "failed": false, "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/tmp/bar.txt", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 8, "state": "file", "uid": 0}
META: ran handlers
META: ran handlers
PLAY RECAP **************************************************************************************************************************************************************************
10.5.1.160 : ok=1 changed=0 unreachable=0 failed=0
UPDATE
I have noticed it is copying the file to localhost, not the remote host. This makes no sense: why does the output say the playbook is executing on the remote host (10.5.1.160)?
# ls -l /tmp/bar.txt
-rw-r--r--. 1 root root 16 May 24 10:45 /tmp/bar.txt
I had a similar problem where it was looking for a file on the remote server. You are running your tasks on the remote server (hosts: osa) but the file is on localhost. I could solve it with delegate_to: localhost.
---
- hosts: osa
  tasks:
    - name: Copying file to remote host
      delegate_to: localhost
      copy: src=/tmp/foo.txt dest=/tmp/bar.txt
However, I first had to add a task to read the files from localhost, and then provide them to the remote host afterwards:
- name: read files
  # This needs to run on localhost, because that's where
  # the keys are stored.
  delegate_to: localhost
  command: cat {{ item }}
  # Register the results of this task in a variable called
  # "files"
  register: files
  with_fileglob:
    - "/tmp/*.txt"
- name: show what was stored in the files variable
  debug:
    var: files
- name: Copying file content to remote host
  # write the content read above (item.stdout); the loop must use
  # the "files" variable registered earlier
  copy: content="{{ item.stdout }}" dest=/tmp/bar.txt
  with_items: "{{ files.results }}"
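For reference, copy already resolves src on the controller, so the same goal can usually be achieved without the intermediate cat task. A minimal sketch using the paths from the question (the dest naming is my own choice):

```yaml
- name: Copy matching local files to the remote host
  copy:
    src: "{{ item }}"                      # path on the controller
    dest: "/tmp/{{ item | basename }}"     # keep the original file name
  with_fileglob:
    - "/tmp/*.txt"
```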

Gitlab CI 9.5 service is not running

I have been searching for a solution on the web for two weeks and I really need some help.
I am facing 3 problems:
Linux gitlab-runner is not running
I have tried installing gitlab-runner in every way (GitLab's official repository, manually, Docker).
Every time I launch the command "gitlab-runner status" the answer is always "The server is not running." I have tried countless times to uninstall the service and re-install it, but it does not want to work. I have registered runners of all kinds, with and without sudo, without any success. This is my server setup:
Config
Ubuntu 16.04.1
Docker container gitlab 9.4.3
Ports:
webservice: 8088
https: 4433
ssh: 2222
gitlab-runner 9.5.0
How to reproduce
Register a shell runner at http://192.168.1.10:8088/
Launch the command "sudo service gitlab-runner status"
Loaded: loaded (/etc/systemd/system/gitlab-runner.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since ven. 2017-08-25 15:17:45 CEST; 45s ago
Process: 13201 ExecStart=/usr/bin/gitlab-ci-multi-runner run --working-directory /home/gitlab-runner --config /etc/gitlab-runner/config.toml --service gitlab-runner --syslog --user gitlab-runner (code=exited, status=1/FAILURE)
Main PID: 13201 (code=exited, status=1/FAILURE)
systemd1: gitlab-runner.service: Unit entered failed state.
systemd1: gitlab-runner.service: Failed with result 'exit-code'.
Windows gitlab-runner Error 500
Because of my problem installing gitlab-runner on Linux, I tried installing it on another computer running Windows 10.
It worked, and the command "gitlab-runner status" finally answered "Service is running" (but this is just a temporary solution; I really need to make it work on Linux).
Anyway, I added a CI script to a test program and launched the job, but it kept looping over and over.
When I launch the command "gitlab-runner --debug run":
...
passfile: true
extension: cmd
job=183 project=19 runner=679ccd01
Using Shell executor... job=183 project=19 runner=679ccd01
Waiting for signals... job=183 project=19 runner=679ccd01
WARNING: Job failed: exit status 128 job=183 project=19 runner=679ccd01
WARNING: Submitting job to coordinator... failed job=183 runner=679ccd01 status=500 Internal Server Error
WARNING: Submitting job to coordinator... failed job=183 runner=679ccd01 status=500 Internal Server Error
...
Gitlab.com and the run command
So I decided to add my project on gitlab.com to test it.
git@gitlab.com:sandbox_test/test_ci.git
Once again the job looped infinitely until I launched the command "gitlab-runner run" on my Windows computer.
Dialing: tcp gitlab.com:443 ...
Feeding runners to channel builds=0
Checking for jobs... received job=30315630 repo_url=https://gitlab.com/sandbox_test/test_ci.git runner=d98c0af1
Failed to requeue the runner: builds=1 runner=d98c0af1
Running with gitlab-ci-multi-runner 9.5.0 (413da38)
on Windows_shell_gitlab_com (d98c0af1) job=30315630 project=3992201 runner=d98c0af1
Shell configuration: environment: []
dockercommand: []
command: cmd
arguments:
- /C
passfile: true
extension: cmd
job=30315630 project=3992201 runner=d98c0af1
Using Shell executor... job=30315630 project=3992201 runner=d98c0af1
Waiting for signals... job=30315630 project=3992201 runner=d98c0af1
Job succeeded job=30315630 project=3992201 runner=d98c0af1
Why is it necessary to launch the run command to make my job work on gitlab.com?
I expect that when I run a new job it will be picked up automatically, without launching gitlab-runner manually on the CI computer.
Script .gitlab-ci.yml (validates on CI Lint):
stages:
  - build
  - test
  - deploy
build:
  stage: build
  script:
    - echo "building"
test:
  stage: test
  script:
    - echo "test"
I really need answers very fast; thanks for your help.
Best regards, Clement
UPDATE 1
I have resolved part of my problems:
Linux gitlab-runner is not running
Launch the command "gitlab-runner run --working-directory /home/gitlab-runner --config /etc/gitlab-runner/config.toml --service gitlab-runner --syslog --user gitlab-runner"
First error: chdir /home/gitlab-runner: no such file or directory
Solution: sudo mkdir /home/gitlab-runner
Second error: open /etc/gitlab-runner/config.toml: permission denied
Solution: sudo chmod 755 /etc/gitlab-runner/config.toml

Ansible and route53 unsupported parameter for module: connection

I'm trying to run an Ansible playbook with Amazon's Route53 service but I get the error in the title.
$ ansible-playbook play-dns.yml
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [configure dns] *********************************************************
failed: [localhost] => {"failed": true}
msg: unsupported parameter for module: connection
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/myuser/play-dns.retry
localhost : ok=1 changed=0 unreachable=0 failed=1
This is my play:
$ cat play-dns.yml
---
- hosts: localhost
  tasks:
    - name: configure dns
      route53:
        command: create
        aws_access_key: 'XXXXXXXXXXXXXXXXXXXX'
        aws_secret_key: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
        zone: myzone.info
        record: test.myzone.info
        type: A
        ttl: 7200
        value: 1.1.1.1
        wait: yes
        connection: local
And this is my Ansible hosts file:
$ cat /etc/ansible/hosts|grep localhost
[localhost]
localhost ansible_connection=local
If I remove ansible_connection=local from the hosts file
$ cat /etc/ansible/hosts|grep localhost
[localhost]
localhost
then I get this error:
$ ansible-playbook play-dns.yml
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
fatal: [localhost] => SSH Error: ssh: connect to host localhost port 22: Connection refused
while connecting to 127.0.0.1:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
TASK: [configure dns] *********************************************************
fatal: [localhost] => SSH Error: ssh: connect to host localhost port 22: Connection refused
while connecting to 127.0.0.1:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/myuser/play-dns.retry
localhost : ok=0 changed=0 unreachable=2 failed=0
What am I doing wrong?
Your issue is simply indentation. Ansible is parsing your playbook and seeing the connection line as a parameter to the route53 module, which then complains that connection is not a valid parameter for the module.
Instead, you need to unindent the line to the same level as hosts so that Ansible parses it as a play-level keyword rather than a module parameter:
---
- hosts: localhost
  connection: local
  tasks:
    - name: configure dns
      route53:
        command: create
        aws_access_key: 'XXXXXXXXXXXXXXXXXXXX'
        aws_secret_key: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
        zone: myzone.info
        record: test.myzone.info
        type: A
        ttl: 7200
        value: 1.1.1.1
        wait: yes
