connect() got an unexpected keyword argument 'sock' - python-3.x

I'm getting the error below when trying to connect to routers, and I'm running the latest version of Paramiko. What could be causing this error, and is there a workaround?
Update:
It's a simple script using Ansible to collect the output of the "show system uptime" command.
I have an inventory file that contains the IP addresses of the routers.
$ cat uptime.yaml
---
- name: Get device uptime
  hosts:
    - all
  connection: local
  gather_facts: no
  vars_prompt:
    - name: username
      prompt: Username
      private: no
    - name: password
      prompt: Password
      private: yes
  tasks:
    - name: get uptime using ansible module
      junos_command:
        commands:
          - show system uptime
        provider:
          host: "{{ ansible_host }}"
          port: 22
          username: "{{ username }}"
          password: "{{ password }}"
>>> print(paramiko.__version__)
2.7.2
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]. This feature
will be removed from ansible-core in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
Username: regress
Password:
PLAY [Get device uptime] ************************************************************************************************************************************************************************
TASK [get uptime using ansible module] **********************************************************************************************************************************************************
[WARNING]: ['connection local support for this module is deprecated and will be removed in version 2.14, use connection ansible.netcommon.netconf']
fatal: [R1_re0]: FAILED! => {"changed": false, "msg": "connect() got an unexpected keyword argument 'sock'"}
fatal: [R2_re0]: FAILED! => {"changed": false, "msg": "connect() got an unexpected keyword argument 'sock'"}
PLAY RECAP **************************************************************************************************************************************************************************************
R1_re0 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
R2_re0 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
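The deprecation warning in the output itself points at a fix: it recommends the ansible.netcommon.netconf connection plugin instead of connection: local with a provider dict. Below is a sketch of the play rewritten that way; the junipernetworks.junos.junos_command FQCN and the variable names are assumptions based on current collection conventions, not tested against this setup:

```yaml
# Sketch (untested): use the netconf connection plugin as the warning suggests.
# Credentials then come from inventory variables (ansible_user / ansible_password)
# rather than a provider dict inside the module.
- name: Get device uptime
  hosts: all
  connection: ansible.netcommon.netconf
  gather_facts: no
  tasks:
    - name: get uptime
      junipernetworks.junos.junos_command:
        commands:
          - show system uptime
```

This removes the local paramiko/provider code path that raises the connect() error, assuming the junipernetworks.junos and ansible.netcommon collections are installed.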

Related

Creating an Ansible playbook for getting the Apache status and displaying just the status

I have written an Ansible playbook, which is as follows, but the problem is that it reports the service as stopped when I launch the playbook, while checking the server status manually on my Ubuntu server shows it as active.
Please help and suggest what mistake I am making in writing the playbook.
Note: my goal is to get the status of the Apache server (is it in started or stopped status?), and either print it on the screen or append it to some abc.txt file; it should work every time and update the abc.txt file with the status.
testfile.yml
---
- name: "To check if the apache is running or not"
  hosts: webserver1
  become: yes
  tasks:
    - name: Check Apache2 service status
      service:
        name: apache2
        state: started
      register: apache2_status
    # - name: apache2 status
    #   command: service apache2 status
    #   register: apache2_status
    - name: Print Apache2 service status
      debug:
        msg: "Apache2 service is {{ 'started' if apache2_status.status == 'started' else 'stopped' }}"
Running the following ansible command to run the playbook
ansible-playbook testfile.yml -i inventory --ask-become-pass
output
PLAY [To check if the apache is running or not] ********************************************************************************
TASK [Gathering Facts] *********************************************************************************************************
ok: [webserver1]
TASK [Check Apache2 service status] ********************************************************************************************
ok: [webserver1]
TASK [Print Apache2 service status] ********************************************************************************************
ok: [webserver1] => {
"msg": "Apache2 service is stopped"
}
PLAY RECAP *********************************************************************************************************************
webserver1 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And when I check the same manually on my ubuntu webserver1:
ubuntu@ubuntu1:~$ service apache2 status
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2022-12-31 08:56:30 UTC; 5h 28min ago
Docs: https://httpd.apache.org/docs/2.4/
Process: 632 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
Main PID: 708 (apache2)
Tasks: 55 (limit: 1695)
Memory: 7.5M
CPU: 2.724s
CGroup: /system.slice/apache2.service
├─708 /usr/sbin/apache2 -k start
├─713 /usr/sbin/apache2 -k start
└─714 /usr/sbin/apache2 -k start
Dec 31 08:56:29 ubuntu1 systemd[1]: Starting The Apache HTTP Server...
Dec 31 08:56:30 ubuntu1 apachectl[685]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name,>
Dec 31 08:56:30 ubuntu1 systemd[1]: Started The Apache HTTP Server.
I have tried running the Ansible playbook with different parameters but have not been able to produce my expected output.
My goal is to get the status of the Apache server (is it in started or stopped status?), and either print it on the screen or append it to some abc.txt file; it should work every time and update the abc.txt file with the new status.
For example, given the inventory
shell> cat hosts
[webservers]
webserver1 ansible_host=10.1.0.74
[webservers:vars]
ansible_connection=ssh
ansible_user=admin
ansible_become=yes
ansible_become_user=root
ansible_become_method=sudo
ansible_python_interpreter=/bin/python3.6
The playbook below makes sure, in the first task, that the web server is running. The second task collects the status of all services. The third task writes the status to a file on the controller:
- name: Get status of webserver and append the new status to file
  hosts: webservers
  tasks:
    - name: Start lighttpd if not running already
      service:
        name: lighttpd
        state: started
      register: status
    - debug:
        var: status.state
    - service_facts:
    - debug:
        var: ansible_facts.services['lighttpd.service']
    - lineinfile:
        create: true
        path: /tmp/webservers.status
        line: "{{ '%Y-%m-%d %H:%M:%S'|strftime }} {{ item }} {{ s.state }} {{ s.status }}"
      loop: "{{ ansible_play_hosts }}"
      run_once: true
      delegate_to: localhost
      vars:
        s: "{{ hostvars[item].ansible_facts.services['lighttpd.service'] }}"
gives
PLAY [Get status of webserver and append the new status to file] *****************************
TASK [Start lighttpd if not running already] *************************************************
ok: [webserver1]
TASK [debug] *********************************************************************************
ok: [webserver1] =>
status.state: started
TASK [service_facts] *************************************************************************
ok: [webserver1]
TASK [debug] *********************************************************************************
ok: [webserver1] =>
ansible_facts.services['lighttpd.service']:
  name: lighttpd.service
  source: systemd
  state: running
  status: disabled
TASK [lineinfile] ****************************************************************************
changed: [webserver1 -> localhost] => (item=webserver1)
PLAY RECAP ***********************************************************************************
webserver1: ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
and appends the status of the webserver(s) to the file
shell> cat /tmp/webservers.status
2023-01-01 01:03:15 webserver1 running disabled
2023-01-01 01:03:32 webserver1 running disabled
I'm pretty sure your problem is that you want to check the state, but you are effectively checking whether the service is enabled or not.
(i.e. a service can be currently running without being enabled; and even though your service is in fact enabled, that is not what the `status` field you are testing reports, so the comparison never matches.)
I'd suggest using systemd instead of service, like this: set the state to started, but in check mode. The module won't actually do anything, but it will report "changed" as true if a change would have been made.
This is more Ansible-ish in my opinion than calling a shell command.
(I tested this on Fedora -- adjust the service name as you need to.)
---
- hosts: localhost
  tasks:
    - name: check if apache is running
      ansible.builtin.systemd:
        # on Ubuntu, I think this is apache2
        name: httpd
        state: started
      register: httpd_status
      check_mode: true
    - name: debug
      ansible.builtin.debug:
        msg: "Apache2 service is {{ httpd_status.changed | ternary('stopped','started') }}"
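To cover the other half of the stated goal (appending the status to abc.txt every run), the check-mode result can be fed into lineinfile on the controller. A sketch, assuming the httpd_status variable registered by the check-mode task above and an arbitrary /tmp/abc.txt path:

```yaml
# Sketch (untested): append a timestamped status line to abc.txt on the
# controller, reusing the httpd_status variable from the check-mode task.
- name: Append Apache status to abc.txt
  delegate_to: localhost
  ansible.builtin.lineinfile:
    create: true
    path: /tmp/abc.txt
    line: "{{ '%Y-%m-%d %H:%M:%S' | strftime }} {{ inventory_hostname }} {{ httpd_status.changed | ternary('stopped', 'started') }}"
```

lineinfile only adds the line if it is not already present, so identical statuses within the same second would not be duplicated; use a timestamped line as shown to get one entry per run.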

Using relative path in yaml file for id_rsa file

I'm trying to use "~/.ssh/id_rsa" as the key path in cluster.yaml to deploy a Kubernetes cluster, but when it's used, the error says 'No such file or directory: ~/.ssh/id_rsa'.
node_defaults:
  keyfile: "~/.ssh/id_rsa"
  username: "nikhil"
nodes:
  - name: "k8s-control-plane"
    address: "10.0.0.1"
    internal_address: "192.***.**.***"
    roles: ["control-plane", "worker"]
cluster_name: "k8s-stack.testcluster.com"
It works fine if I use the absolute path for keyfile (/home/user/.ssh/id_rsa), but I face this issue when using a relative path like the one above.
Some environment variables are available in Ansible. For example, the playbook
shell> cat pb.yml
- hosts: test_11
  gather_facts: true
  tasks:
    - command: whoami
      register: result
    - debug:
        var: result.stdout
    - command: echo $HOME
      register: result
    - debug:
        var: result.stdout
    - debug:
        var: ansible_env
shows the environment variables collected by setup (gather_facts: true)
shell> ansible-playbook pb.yml
PLAY [test_11] *******************************************************************************
TASK [Gathering Facts] ***********************************************************************
ok: [test_11]
TASK [command] *******************************************************************************
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
result.stdout: admin
TASK [command] *******************************************************************************
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
result.stdout: /home/admin
TASK [debug] *********************************************************************************
ok: [test_11] =>
ansible_env:
  BLOCKSIZE: K
  HOME: /home/admin
  LANG: C.UTF-8
  LOGNAME: admin
  MAIL: /var/mail/admin
  MM_CHARSET: UTF-8
  PATH: /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/admin/bin
  PWD: /home/admin
  SHELL: /bin/sh
  SSH_CLIENT: 10.1.0.184 58084 22
  SSH_CONNECTION: 10.1.0.184 58084 10.1.0.61 22
  SSH_TTY: /dev/pts/1
  TERM: xterm-256color
  USER: admin
PLAY RECAP ***********************************************************************************
test_11: ok=6 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Notes
Some environment variables are available in the configuration files.
See How do I access shell environment variables?
See Running under fakeroot
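Building on the environment variables shown above: if the tool reading cluster.yaml does not expand "~" itself but does run the file through Jinja2 templating (an assumption about this particular deployment tool, not verified), the HOME environment variable can be used to construct an absolute path. A sketch:

```yaml
# Sketch (untested): build an absolute key path from the HOME environment
# variable instead of relying on "~" expansion. The env lookup resolves on
# the controller; node_defaults mirrors the cluster.yaml structure above.
node_defaults:
  keyfile: "{{ lookup('env', 'HOME') }}/.ssh/id_rsa"
  username: "nikhil"
```

If the file is not templated at all, the safest route remains the absolute path that already works.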

Dry run in Ansible not showing errors as in actual run

I have written a playbook to add one entry to resolv.conf and add a new user. The group I am specifying is not present on the target system; however, I do not get the error in a dry run (using the --check option), but I do get it in an actual run.
[root@ansible-controller ansible-test-project]# ansible-playbook lineinfile-playbook.yaml -i inventory.txt --check
PLAY [Add a new nameserver and webuser] ****************************************************************
TASK [Gathering Facts] *********************************************************************************
ok: [ansible-target]
TASK [Update entry into /etc/resolv.conf] **************************************************************
changed: [ansible-target]
TASK [Add user web_user] *******************************************************************************
changed: [ansible-target]
PLAY RECAP *********************************************************************************************
ansible-target : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[root@ansible-controller ansible-test-project]# ansible-playbook lineinfile-playbook.yaml -i inventory.txt
PLAY [Add a new nameserver and webuser] ****************************************************************
TASK [Gathering Facts] *********************************************************************************
ok: [ansible-target]
TASK [Update entry into /etc/resolv.conf] **************************************************************
changed: [ansible-target]
TASK [Add user web_user] *******************************************************************************
fatal: [ansible-target]: FAILED! => {"changed": false, "msg": "Group developers does not exist"}
PLAY RECAP *********************************************************************************************
ansible-target : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
[root@ansible-controller ansible-test-project]#
My playbook
- name: 'Add a new nameserver and webuser'
  hosts: targets
  tasks:
    - name: 'Update entry into /etc/resolv.conf'
      lineinfile:
        path: /etc/resolv.conf
        line: 'nameserver 10.1.250.10'
    - name: 'Add user web_user'
      user:
        name: web_user
        uid: 1040
        group: developers
The issue is that in check mode, the user module only checks whether or not the user exists. The solution is to add a task to your playbook that ensures the group exists as well:
- name: add developers group
  group:
    name: developers
    gid: 1000

- name: add user web_user
  user:
    name: web_user
    uid: 1040
    group: developers
Running this against a host that has neither the developers group nor the web_user user results in:
TASK [add developers group] ******************************************************************************************************************************************************************
changed: [node0]
TASK [add user web_user] *********************************************************************************************************************************************************************
changed: [node0]
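Alternatively, if you want the dry run itself to fail when the group is missing (rather than having the playbook create it), a pre-flight check is an option. A sketch, assuming the ansible.builtin.getent module, which gathers facts and therefore also executes under --check:

```yaml
# Sketch (untested): fail early, even under --check, if the group is absent.
# getent is a fact-gathering module, so it runs in check mode too, and it
# fails by default when the requested key does not exist.
- name: Verify the developers group exists
  ansible.builtin.getent:
    database: group
    key: developers
```

Placed before the user task, this makes the dry run and the actual run fail at the same point.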

Why does the ansible remote copy not work from local

I just started learning Ansible and am writing my first playbook using the copy module, but I am clueless as to why it's not working.
---
- hosts: osa
  tasks:
    - name: Copying file to remote host
      copy: src=/tmp/foo.txt dest=/tmp/bar.txt
I ran the playbook, but nothing happened and there was no error either.
# /usr/bin/ansible-playbook /home/spatel/ansible/first-playbook.yml
PLAY [osa] **************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************
ok: [10.5.1.160]
TASK [Copying file to remote host] **************************************************************************************************************************************************
ok: [10.5.1.160]
PLAY RECAP **************************************************************************************************************************************************************************
10.5.1.160 : ok=2 changed=0 unreachable=0 failed=0
My /etc/ansible/hosts file has Remote host IP address:
[osa]
10.5.1.160
Verbose output:
# ansible-playbook -vv first-playbook.yml
ansible-playbook 2.4.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Nov 6 2016, 00:28:07) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: first-playbook.yml ********************************************************************************************************************************************************
1 plays in first-playbook.yml
PLAY [osa] **************************************************************************************************************************************************************************
META: ran handlers
TASK [Copying file to remote host] **************************************************************************************************************************************************
task path: /home/spatel/ansible/first-playbook.yml:5
ok: [10.5.1.160] => {"changed": false, "checksum": "4383f040dc0303e55260bca327cc0eeb213b04b5", "failed": false, "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/tmp/bar.txt", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 8, "state": "file", "uid": 0}
META: ran handlers
META: ran handlers
PLAY RECAP **************************************************************************************************************************************************************************
10.5.1.160 : ok=1 changed=0 unreachable=0 failed=0
UPDATE
I have noticed it's copying the file to localhost, not the remote host. This makes no sense; why does the output say the playbook is executing on the remote host (10.5.1.160)?
# ls -l /tmp/bar.txt
-rw-r--r--. 1 root root 16 May 24 10:45 /tmp/bar.txt
I had a similar problem where it was looking for a file on the remote server. You are running your tasks on the remote server (hosts: osa) but the file is on localhost. I could solve it with delegate_to: localhost.
---
- hosts: osa
  tasks:
    - name: Copying file to remote host
      delegate_to: localhost
      copy: src=/tmp/foo.txt dest=/tmp/bar.txt
However, I first had to add a task to read the files from localhost and then provide them to the remote host afterwards:
- name: read files
  # This needs to run on localhost, because that's where
  # the keys are stored.
  delegate_to: localhost
  command: cat {{ item }}
  # Register the results of this task in a variable called
  # "files"
  register: files
  with_fileglob:
    - "/tmp/*.txt"

- name: show what was stored in the files variable
  debug:
    var: files

- name: Copying file to remote host
  # item.stdout holds the file contents read above, so pass it to copy's
  # "content" parameter ("src" expects a path); loop over the registered
  # "files" variable
  copy:
    content: "{{ item.stdout }}"
    dest: /tmp/bar.txt
  with_items: "{{ files.results }}"
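To confirm where the file actually ends up, it helps to check the remote path explicitly instead of inferring it from the play recap. A sketch using the stat module (the /tmp/bar.txt path is taken from the playbook above):

```yaml
# Sketch (untested): check on the managed host whether /tmp/bar.txt exists.
# Note there is no delegate_to here, so this runs on the remote host.
- name: Check for the file on the remote host
  ansible.builtin.stat:
    path: /tmp/bar.txt
  register: bar

- name: Report the result
  ansible.builtin.debug:
    var: bar.stat.exists
```

If this prints false while the file exists locally, the copy ran on the controller (for example, because of delegate_to: localhost or ansible_connection=local in the inventory).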

Ansible and route53 unsupported parameter for module: connection

I'm trying to run an Ansible playbook with Amazon's Route53 service but I get the error in the title.
$ ansible-playbook play-dns.yml
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [configure dns] *********************************************************
failed: [localhost] => {"failed": true}
msg: unsupported parameter for module: connection
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/myuser/play-dns.retry
localhost : ok=1 changed=0 unreachable=0 failed=1
This is my play:
$ cat play-dns.yml
---
- hosts: localhost
  tasks:
    - name: configure dns
      route53:
        command: create
        aws_access_key: 'XXXXXXXXXXXXXXXXXXXX'
        aws_secret_key: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
        zone: myzone.info
        record: test.myzone.info
        type: A
        ttl: 7200
        value: 1.1.1.1
        wait: yes
        connection: local
And this is my Ansible hosts file:
$ cat /etc/ansible/hosts|grep localhost
[localhost]
localhost ansible_connection=local
If I remove ansible_connection=local from the hosts file
$ cat /etc/ansible/hosts|grep localhost
[localhost]
localhost
then I get this error:
$ ansible-playbook play-dns.yml
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
fatal: [localhost] => SSH Error: ssh: connect to host localhost port 22: Connection refused
while connecting to 127.0.0.1:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
TASK: [configure dns] *********************************************************
fatal: [localhost] => SSH Error: ssh: connect to host localhost port 22: Connection refused
while connecting to 127.0.0.1:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/myuser/play-dns.retry
localhost : ok=0 changed=0 unreachable=2 failed=0
What am I doing wrong?
Your issue is simply indentation. Ansible is parsing your playbook and seeing the connection line as a parameter to the route53 module, which then complains that connection is not a valid parameter for the module.
Instead, you simply need to unindent the line to the same level as hosts, so that Ansible parses it as a parameter of the play overall rather than of the module:
---
- hosts: localhost
  connection: local
  tasks:
    - name: configure dns
      route53:
        command: create
        aws_access_key: 'XXXXXXXXXXXXXXXXXXXX'
        aws_secret_key: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
        zone: myzone.info
        record: test.myzone.info
        type: A
        ttl: 7200
        value: 1.1.1.1
        wait: yes
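An alternative (a sketch, not part of the original answer) keeps the play free to target other hosts and delegates only the route53 task to the controller, since route53 talks to the AWS API rather than to the managed host:

```yaml
# Sketch (untested): delegate just this task to the controller instead of
# setting connection: local for the whole play. Credentials omitted here;
# they can come from environment variables or the module parameters as above.
- name: configure dns
  route53:
    command: create
    zone: myzone.info
    record: test.myzone.info
    type: A
    ttl: 7200
    value: 1.1.1.1
    wait: yes
  delegate_to: localhost
```

This is useful when the same play also contains tasks that genuinely need to run on remote hosts.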
