Creating an Ansible playbook to get the Apache status and display just the status - linux

I have written an Ansible playbook (below), but the problem is that it reports the service as stopped when I launch the playbook, while checking the same service manually on my Ubuntu server shows it as active.
Please help and suggest what mistake I am making in the playbook.
Note: my goal is to get the status of the Apache server (that is, whether it is in started or stopped status) and either print it on the screen or append it to some abc.txt file; it should work every time and update abc.txt with the current status.
testfile.yml
---
- name: "To check if the apache is running or not"
  hosts: webserver1
  become: yes
  tasks:
    - name: Check Apache2 service status
      service:
        name: apache2
        state: started
      register: apache2_status
    # - name: apache2 status
    #   command: service apache2 status
    #   register: apache2_status
    - name: Print Apache2 service status
      debug:
        msg: "Apache2 service is {{ 'started' if apache2_status.status == 'started' else 'stopped' }}"
I run the playbook with the following command:
ansible-playbook testfile.yml -i inventory --ask-become-pass
output
PLAY [To check if the apache is running or not] ********************************************************************************
TASK [Gathering Facts] *********************************************************************************************************
ok: [webserver1]
TASK [Check Apache2 service status] ********************************************************************************************
ok: [webserver1]
TASK [Print Apache2 service status] ********************************************************************************************
ok: [webserver1] => {
"msg": "Apache2 service is stopped"
}
PLAY RECAP *********************************************************************************************************************
webserver1 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And when I check the same manually on my Ubuntu host webserver1:
ubuntu@ubuntu1:~$ service apache2 status
● apache2.service - The Apache HTTP Server
     Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2022-12-31 08:56:30 UTC; 5h 28min ago
       Docs: https://httpd.apache.org/docs/2.4/
    Process: 632 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
   Main PID: 708 (apache2)
      Tasks: 55 (limit: 1695)
     Memory: 7.5M
        CPU: 2.724s
     CGroup: /system.slice/apache2.service
             ├─708 /usr/sbin/apache2 -k start
             ├─713 /usr/sbin/apache2 -k start
             └─714 /usr/sbin/apache2 -k start
Dec 31 08:56:29 ubuntu1 systemd[1]: Starting The Apache HTTP Server...
Dec 31 08:56:30 ubuntu1 apachectl[685]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name,>
Dec 31 08:56:30 ubuntu1 systemd[1]: Started The Apache HTTP Server.
I have tried running the playbook with different parameters but have not been able to get my expected output.
To restate my goal: I want to get the status of the Apache server (is it started or stopped?) and either print it on the screen or append it to some abc.txt file; it should work every time and update abc.txt with the new status.

For example, given the inventory
shell> cat hosts
[webservers]
webserver1 ansible_host=10.1.0.74
[webservers:vars]
ansible_connection=ssh
ansible_user=admin
ansible_become=yes
ansible_become_user=root
ansible_become_method=sudo
ansible_python_interpreter=/bin/python3.6
The first task of the playbook below makes sure the web server is running. The second task collects the status of all services. The third task writes the status to a file on the controller.
- name: Get status of webserver and append the new status to file
  hosts: webservers
  tasks:
    - name: Start lighttpd if not running already
      service:
        name: lighttpd
        state: started
      register: status
    - debug:
        var: status.state
    - name:
      service_facts:
    - debug:
        var: ansible_facts.services['lighttpd.service']
    - lineinfile:
        create: true
        path: /tmp/webservers.status
        line: "{{ '%Y-%m-%d %H:%M:%S'|strftime }} {{ item }} {{ s.state }} {{ s.status }}"
      loop: "{{ ansible_play_hosts }}"
      run_once: true
      delegate_to: localhost
      vars:
        s: "{{ hostvars[item].ansible_facts.services['lighttpd.service'] }}"
gives
PLAY [Get status of webserver and append the new status to file] *****************************
TASK [Start lighttpd if not running already] *************************************************
ok: [webserver1]
TASK [debug] *********************************************************************************
ok: [webserver1] =>
status.state: started
TASK [service_facts] *************************************************************************
ok: [webserver1]
TASK [debug] *********************************************************************************
ok: [webserver1] =>
  ansible_facts.services['lighttpd.service']:
    name: lighttpd.service
    source: systemd
    state: running
    status: disabled
TASK [lineinfile] ****************************************************************************
changed: [webserver1 -> localhost] => (item=webserver1)
PLAY RECAP ***********************************************************************************
webserver1: ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
and appends the status of the webserver(s) to the file
shell> cat /tmp/webservers.status
2023-01-01 01:03:15 webserver1 running disabled
2023-01-01 01:03:32 webserver1 running disabled
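The same pattern adapted to the apache2 service from the question might look like this (a sketch, not tested on the OP's host; writing to /tmp/abc.txt on the controller is an assumption):
- name: Get status of apache2 and append the new status to file
  hosts: webserver1
  become: yes
  tasks:
    - name: Make sure apache2 is running
      service:
        name: apache2
        state: started

    - name: Collect service facts
      service_facts:

    - name: Append the status to abc.txt on the controller   # path is assumed
      lineinfile:
        create: true
        path: /tmp/abc.txt
        line: "{{ '%Y-%m-%d %H:%M:%S'|strftime }} {{ inventory_hostname }} {{ s.state }} {{ s.status }}"
      delegate_to: localhost
      vars:
        s: "{{ ansible_facts.services['apache2.service'] }}"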

I'm pretty sure your problem is that you want to check the state, but you are effectively checking whether the service is enabled or not.
(i.e. a service can be currently running but not enabled; and even though your service is enabled, that is not what you asked Ansible for, so it is not collecting that value for you)
I'd suggest using systemd instead of service, like this: set the state to started, but in check mode. The module won't actually do anything, but it will set "changed" to true if a change would have been made.
This is more Ansible-ish in my opinion than calling a shell command.
(I tested this on Fedora -- adjust the service name as you need to.)
---
- hosts: localhost
  tasks:
    - name: check if apache is running
      ansible.builtin.systemd:
        # on Ubuntu, I think this is apache2
        name: httpd
        state: started
      register: httpd_status
      check_mode: true
    - name: debug
      ansible.builtin.debug:
        msg: "Apache2 service is {{ httpd_status.changed | ternary('stopped','started') }}"

Related

Using relative path in yaml file for id_rsa file

I'm trying to use "~/.ssh/id_rsa" as the key path in cluster.yaml to deploy a Kubernetes cluster, but when it's used, the error says 'No such file or directory: ~/.ssh/id_rsa'.
node_defaults:
  keyfile: "~/.ssh/id_rsa"
  username: "nikhil"
nodes:
  - name: "k8s-control-plane"
    address: "10.0.0.1"
    internal_address: "192.***.**.***"
    roles: ["control-plane", "worker"]
cluster_name: "k8s-stack.testcluster.com"
It works fine if I use an absolute path for keyfile (/home/user/.ssh/id_rsa), but I'm facing this issue when using a home-relative path as shown above.
Some environment variables are available in Ansible. For example, the playbook
shell> cat pb.yml
- hosts: test_11
  gather_facts: true
  tasks:
    - command: whoami
      register: result
    - debug:
        var: result.stdout
    - command: echo $HOME
      register: result
    - debug:
        var: result.stdout
    - debug:
        var: ansible_env
shows the environment variables collected by setup (gather_facts: true)
shell> ansible-playbook pb.yml
PLAY [test_11] *******************************************************************************
TASK [Gathering Facts] ***********************************************************************
ok: [test_11]
TASK [command] *******************************************************************************
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
result.stdout: admin
TASK [command] *******************************************************************************
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
result.stdout: /home/admin
TASK [debug] *********************************************************************************
ok: [test_11] =>
  ansible_env:
    BLOCKSIZE: K
    HOME: /home/admin
    LANG: C.UTF-8
    LOGNAME: admin
    MAIL: /var/mail/admin
    MM_CHARSET: UTF-8
    PATH: /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/admin/bin
    PWD: /home/admin
    SHELL: /bin/sh
    SSH_CLIENT: 10.1.0.184 58084 22
    SSH_CONNECTION: 10.1.0.184 58084 10.1.0.61 22
    SSH_TTY: /dev/pts/1
    TERM: xterm-256color
    USER: admin
PLAY RECAP ***********************************************************************************
test_11: ok=6 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Notes
Some environment variables are available in the configuration files.
See How do I access shell environment variables?
See Running under fakeroot
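If the tool reading cluster.yaml does not expand ~ itself, one option on the Ansible side is to build an absolute path from one of these environment variables, e.g. with the env lookup (a sketch; the key_path variable name is made up for illustration):
- hosts: localhost
  vars:
    key_path: "{{ lookup('env', 'HOME') }}/.ssh/id_rsa"   # expands to e.g. /home/nikhil/.ssh/id_rsa
  tasks:
    - debug:
        var: key_path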

GKE deployment ReactJS app CrashLoopBackoff

I have built a Docker image for my ReactJS app. I ran the image locally and tested it, and it works fine.
I'm using Google Cloud Build, which automatically pushes my container image to the gcr.io container repo.
Then I try to create a deployment from my container image, which resides in gcr.io.
But the deployment does not finish successfully. It gives me:
Pod errors: CrashLoopBackOff
Does not have minimum availability
Here is my Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD [ "npm", "start" ]
When I look at the pod -> container logs, I see my app has been restarting without any error log.
I 2020-04-27T15:07:57.505370601Z > react-scripts start
I 2020-04-27T15:07:57.505375001Z
I 2020-04-27T15:07:59.392880949Z ℹ 「wds」: Project is running at http://10.48.0.14/
I 2020-04-27T15:07:59.393329591Z ℹ 「wds」: webpack output is served from
I 2020-04-27T15:07:59.393494921Z ℹ 「wds」: Content not from webpack is served from /usr/src/app/public
I 2020-04-27T15:07:59.393641770Z ℹ 「wds」: 404s will fallback to /
I 2020-04-27T15:07:59.393881277Z Starting the development server...
I suspect what happens is that Kubernetes does not wait long enough and tries to restart the application.
I'm not using a deployment.yaml but simply using the GCP console.
There is no health endpoint in my ReactJS app.
This is the output for kubectl describe pod ...
Name: helloworld-gke-7fd977fd94-kvrcj
Namespace: default
Priority: 0
Node: gke-helloworld-gke-default-pool-a23be758-g8q7/10.182.0.2
Start Time: Mon, 27 Apr 2020 17:15:18 +0200
Labels: app=hello
pod-template-hash=7fd977fd94
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container hello-app
Status: Running
IP: 10.48.0.15
IPs: <none>
Controlled By: ReplicaSet/helloworld-gke-7fd977fd94
Containers:
hello-app:
Container ID: docker://389151ed6...
Image: gcr.io/tuition-h...
Image ID: docker-pullable://gcr.io/...
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 27 Apr 2020 18:28:00 +0200
Finished: Mon, 27 Apr 2020 18:28:02 +0200
Ready: False
Restart Count: 19
Requests:
cpu: 100m
Environment:
PORT: 8080
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sqpm7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-sqpm7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sqpm7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 4m17s (x320 over 74m) kubelet, gke-helloworld-gke-default-pool-... Back-off restarting failed container

Why does the Ansible remote copy not work from local

I just started learning Ansible and am writing my first playbook using the copy module, but I am clueless as to why it's not working.
---
- hosts: osa
  tasks:
    - name: Copying file to remote host
      copy: src=/tmp/foo.txt dest=/tmp/bar.txt
I ran the playbook, but nothing happened and there was no error either.
# /usr/bin/ansible-playbook /home/spatel/ansible/first-playbook.yml
PLAY [osa] **************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************
ok: [10.5.1.160]
TASK [Copying file to remote host] **************************************************************************************************************************************************
ok: [10.5.1.160]
PLAY RECAP **************************************************************************************************************************************************************************
10.5.1.160 : ok=2 changed=0 unreachable=0 failed=0
My /etc/ansible/hosts file has the remote host's IP address:
[osa]
10.5.1.160
Verbose output:
# ansible-playbook -vv first-playbook.yml
ansible-playbook 2.4.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Nov 6 2016, 00:28:07) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: first-playbook.yml ********************************************************************************************************************************************************
1 plays in first-playbook.yml
PLAY [osa] **************************************************************************************************************************************************************************
META: ran handlers
TASK [Copying file to remote host] **************************************************************************************************************************************************
task path: /home/spatel/ansible/first-playbook.yml:5
ok: [10.5.1.160] => {"changed": false, "checksum": "4383f040dc0303e55260bca327cc0eeb213b04b5", "failed": false, "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/tmp/bar.txt", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 8, "state": "file", "uid": 0}
META: ran handlers
META: ran handlers
PLAY RECAP **************************************************************************************************************************************************************************
10.5.1.160 : ok=1 changed=0 unreachable=0 failed=0
UPDATE
I have noticed it is copying the file to localhost, not the remote host. This makes no sense: why does the output say it is executing the playbook on the remote host (10.5.1.160)?
# ls -l /tmp/bar.txt
-rw-r--r--. 1 root root 16 May 24 10:45 /tmp/bar.txt
I had a similar problem where it was looking for a file on the remote server. You are running your tasks on the remote servers (hosts: osa), but the file is on localhost. I could solve it with delegate_to: localhost.
---
- hosts: osa
  tasks:
    - name: Copying file to remote host
      delegate_to: localhost
      copy: src=/tmp/foo.txt dest=/tmp/bar.txt
However, I first had to add a task to read the files from localhost and then provide them afterwards to the remote host:
- name: read files
  # This needs to run on localhost, because that's where
  # the keys are stored.
  delegate_to: localhost
  command: cat {{ item }}
  # Register the results of this task in a variable called
  # "files"
  register: files
  with_fileglob:
    - "/tmp/*.txt"
- name: show what was stored in the files variable
  debug:
    var: files
- name: Copying file to remote host
  delegate_to: localhost
  copy: src="{{ item.stdout }}" dest=/tmp/bar.txt
  with_items: "{{ files.results }}"
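As a side note, with_fileglob is evaluated on the controller and the copy module reads src from the controller by default, so a shorter variant of the read-then-copy idea could be (a sketch, assuming the *.txt files live on the control node and should land in /tmp/ on the remote host):
- name: Copy all matching files from the controller to the remote host
  copy:
    src: "{{ item }}"   # with_fileglob yields controller-side paths
    dest: /tmp/
  with_fileglob:
    - "/tmp/*.txt"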

Ansible task fails while executing a specific long-running task

While running all the tasks, Ansible disconnected the SSH session in the middle of a task that takes 26 hours to complete; the session was dropped after about 6 hours of execution. The target server's SSH configuration to keep the session alive is as follows:
ClientAliveInterval 172000
ClientAliveCountMax 10
Ansible task:
- name: Executing script
  remote_user: "{{admin_user}}"
  become: yes
  shell: sudo -u test bash ./customscript.sh > /log_dir/customscript.log 2>&1
  args:
    chdir: "deployment_source/common"
  tags:
    - custom-test
Find the error log below:
22:11:44 TASK [role-deployment : Executing script] ************
22:11:44 fatal: [x.x.x.x]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to x.x.x.x closed.\r\n", "unreachable": true}
22:11:44
22:11:44 NO MORE HOSTS LEFT *************************************************************
22:11:44 to retry, use: --limit #/opt/ansible/test/deployment.retry
22:11:44
22:11:44 PLAY RECAP *********************************************************************
22:11:44 x.x.x.x : ok=6 changed=2 unreachable=1 failed=0
Kindly advise: what is causing the disconnection, and how can I solve it?
You should never expect a network connection to be stable for that long.
There's an async mechanism in Ansible for working with long-running jobs.
Refactor your code to be:
- name: Executing script
  remote_user: "{{admin_user}}"
  become: yes
  shell: sudo -u test bash ./customscript.sh > /log_dir/customscript.log 2>&1
  args:
    chdir: "deployment_source/common"
  async: 180000
  poll: 60
  tags:
    - custom-test
This allows your task to run for up to 50 hours and checks for completion every 60 seconds.
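If you would rather not keep a connection polling at all, the same async mechanism supports a fire-and-forget pattern: start the task with poll: 0 and check on it later with async_status (a sketch based on the task above; the long_job and job_result variable names are arbitrary):
- name: Start the script in the background
  remote_user: "{{admin_user}}"
  become: yes
  shell: sudo -u test bash ./customscript.sh > /log_dir/customscript.log 2>&1
  args:
    chdir: "deployment_source/common"
  async: 180000
  poll: 0
  register: long_job

- name: Wait up to 50 hours for the script to finish, checking every minute
  remote_user: "{{admin_user}}"
  become: yes
  async_status:
    jid: "{{ long_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 3000
  delay: 60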

Ansible and route53 unsupported parameter for module: connection

I'm trying to run an Ansible playbook with Amazon's Route53 service but I get the error in the title.
$ ansible-playbook play-dns.yml
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [configure dns] *********************************************************
failed: [localhost] => {"failed": true}
msg: unsupported parameter for module: connection
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #/home/myuser/play-dns.retry
localhost : ok=1 changed=0 unreachable=0 failed=1
This is my play:
$ cat play-dns.yml
---
- hosts: localhost
  tasks:
    - name: configure dns
      route53:
        command: create
        aws_access_key: 'XXXXXXXXXXXXXXXXXXXX'
        aws_secret_key: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
        zone: myzone.info
        record: test.myzone.info
        type: A
        ttl: 7200
        value: 1.1.1.1
        wait: yes
        connection: local
And this is my Ansible hosts file:
$ cat /etc/ansible/hosts|grep localhost
[localhost]
localhost ansible_connection=local
If I remove ansible_connection=local from the hosts file
$ cat /etc/ansible/hosts|grep localhost
[localhost]
localhost
then I get this error:
$ ansible-playbook play-dns.yml
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
fatal: [localhost] => SSH Error: ssh: connect to host localhost port 22: Connection refused
while connecting to 127.0.0.1:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
TASK: [configure dns] *********************************************************
fatal: [localhost] => SSH Error: ssh: connect to host localhost port 22: Connection refused
while connecting to 127.0.0.1:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #/home/myuser/play-dns.retry
localhost : ok=0 changed=0 unreachable=2 failed=0
What am I doing wrong?
Your issue is simply indentation. Currently Ansible is parsing your playbook and seeing the connection line as a parameter to the route53 module, which then complains that connection is not a valid parameter for the module.
Instead, you simply need to unindent the line to the same level as hosts so that Ansible parses it as a parameter to the play rather than the module:
---
- hosts: localhost
  connection: local
  tasks:
    - name: configure dns
      route53:
        command: create
        aws_access_key: 'XXXXXXXXXXXXXXXXXXXX'
        aws_secret_key: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
        zone: myzone.info
        record: test.myzone.info
        type: A
        ttl: 7200
        value: 1.1.1.1
        wait: yes
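Alternatively, the ansible_connection=local entry you already had in the inventory does the same job once the stray connection: line is removed from the task, so the play-level connection: local above is only needed if you drop it from the hosts file:
[localhost]
localhost ansible_connection=local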
