Ansible - List of Linux security updates needed on remote servers

I want to run a playbook that accurately reports whether one of the remote servers requires security updates. The Ansible control node is CentOS 7; the remote servers are Amazon Linux.
A remote server highlights something like the following at login:
https://aws.amazon.com/amazon-linux-2/
8 package(s) needed for security, out of 46 available
Run "sudo yum update" to apply all updates.
To confirm this, I put together a playbook (below), cobbled from many sources, that performs this function to a degree: it suggests whether the remote server requires security updates, but it doesn't say what those updates are.
- name: check if security updates are needed
  hosts: elk
  tasks:
    - name: check yum security updates
      shell: "yum updateinfo list all security"
      changed_when: false
      register: security_update
    - debug: msg="Security update required"
      when: security_update.stdout != "0"
    - name: list some packages
      yum: list=available
Then, when I run my updates install playbook:
- hosts: elk
  remote_user: ansadm
  become: yes
  become_method: sudo
  tasks:
    - name: Move repos from backup to yum.repos.d
      shell: mv -f /backup/* /etc/yum.repos.d/
      register: shell_result
      failed_when: '"No such file or directory" in shell_result.stderr_lines'
    - name: Remove redhat.repo
      shell: rm -f /etc/yum.repos.d/redhat.repo
      register: shell_result
      failed_when: '"No such file or directory" in shell_result.stderr_lines'
    - name: add line to yum.conf
      lineinfile:
        dest: /etc/yum.conf
        line: exclude=kernel* redhat-release*
        state: present
        create: yes
    - name: yum makecache
      shell: yum makecache
      register: shell_result
      failed_when: '"There are no enabled repos" in shell_result.stderr_lines'
    - name: install all security patches
      yum:
        name: '*'
        state: latest
        security: yes
        bugfix: yes
        skip_broken: yes
After the install, you get something similar to the following (by the way, these outputs are from different servers):
https://aws.amazon.com/amazon-linux-2/
No packages needed for security; 37 packages available
Run "sudo yum update" to apply all updates.
But if I run my list-security-updates playbook again, it gives a false positive: it still reports that security updates are needed.
PLAY [check if security updates are needed] ************************************
TASK [Gathering Facts] *********************************************************
ok: [10.10.10.192]
TASK [check yum security updates] **********************************************
ok: [10.10.10.192]
TASK [debug] *******************************************************************
ok: [10.10.10.192] => {
"msg": "Security update required"
}
TASK [list some packages] ******************************************************
ok: [10.10.10.192]
PLAY RECAP *********************************************************************
10.10.10.192 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[ansadm@ansible playbooks]$
What do I need to omit/include in the playbook so that it reflects the state after the updates have been installed?
Thanks in advance :)

So I ran your yum command locally on my system and got the following:
local-user@server:/home/local-user> yum updateinfo list all security
Loaded plugins: ulninfo
local_repo | 2.9 kB 00:00:00
updateinfo list done
Now, granted, our systems may produce different output here, but it serves the purpose of my explanation. The entire output of the command is saved to your register, but your when conditional says to run only when that output is not EXACTLY "0".
So unless you pare that response down with some awk or sed, anything in the output beyond literally the single character "0" means that debug task is always going to fire.
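A minimal sketch (untested) of one way to make the check reflect the real state: "yum -q check-update --security" exits with return code 100 when security updates are pending and 0 when none are, so the exit code, rather than a raw string comparison, can drive the condition, and the captured lines show which packages are involved:
- name: check for pending security updates
  # exit code 100 means updates are pending, 0 means none; anything else is a real failure
  command: yum -q check-update --security
  register: security_update
  changed_when: false
  failed_when: security_update.rc not in [0, 100]
- name: report which security updates are pending
  debug:
    msg: "{{ security_update.stdout_lines }}"
  when: security_update.rc == 100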


Ansible reboot not rebooting even though '/var/run/reboot-required' exists

I have tested this playbook with updates, so I know that the credentials work, as well as the elevation to sudo. I have a test server with an existing /var/run/reboot-required file, but I cannot get my Ansible playbook to reboot the server. This is an Ubuntu server. The playbook is currently:
---
- hosts: server
  vars:
    ansible_user: sudo_user
    ansible_password: "password"
  become: yes
  become_user: sudo_user
  tasks:
    - name: Check if reboot required
      stat:
        path: /var/run/reboot-required
      register: reboot_required_file
    - name: Reboot if required
      reboot:
      when: reboot_required_file.stat.exists == true
I've tried variations of this playbook and I can't get it to reboot the server. The playbook returns:
PLAY [server] *******************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************
ok: [server]
PLAY RECAP **********************************************************************************************************************************************************************
server : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I've also tried just doing a shell command:
- name:
  shell: if [ -f /var/run/reboot-required ]; then init 6; else wall "reboot not required"; fi
  ignore_errors: true
This also doesn't work.
Cheers
You may need to fix the alignment (indentation) of the tasks in your playbook. I performed the operation successfully by running the following playbook:
---
- hosts: nodes
  become: yes
  become_user: sudo_user
  tasks:
    - name: Check if reboot required
      stat:
        path: /var/run/reboot-required
      register: reboot_required_file
    - name: Display variable
      debug:
        msg: '{{ reboot_required_file.stat.exists }}'
    - name: Reboot if required
      reboot:
      when: reboot_required_file.stat.exists == true
To have Ansible prompt for the become password, run the playbook as follows and enter the password when prompted:
ansible-playbook playbook.yml --ask-become-pass
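Alternatively (a sketch only, not part of the original answer), the become password can be supplied as a connection variable instead of being typed at a prompt; ansible_become_password is a standard Ansible variable, and the vault reference below is a hypothetical placeholder:
# Hypothetical group_vars/host_vars entry; keep real passwords in Ansible Vault,
# never in plain text.
ansible_user: sudo_user
ansible_become_password: "{{ vaulted_become_password }}"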

Is it possible to set up your local environment with Packer (like with an Ansible playbook)?

I am trying to set up my development environment (on my local Linux PC) via some automated mechanism.
I tried Ansible playbooks; they work quite well.
Let's assume a playbook like this ("SetupDevEnv.yml"):
---
- hosts: localhost
  tasks:
    - name: install packages
      apt:
        state: present
        name:
          - gcc
          - clang
          - vscode
          - ...
After running:
$ ansible-playbook SetupDevEnv.yml
my local PC is ready to go with all the toolchain tools needed (gcc, clang, vscode, ...).
Is it possible to do the same thing with packer?
If yes, what would it look like?
Would it also be possible to use the existing Ansible playbooks?
Remark 1: The important point here is that I want to do it for my local PC (localhost) only. I do not want to generate a VM or a Docker container that I have to log in to afterward.
The idea is: the "SetupDevEnv.yml" (or the Packer file) is located in the repository. The developer checks out the repository, runs the setup generation, and starts to work.
Remark 2: To clarify, my question is: what would the Packer HCL/JSON file look like that does the same as "SetupDevEnv.yml"? Or what would a Packer HCL/JSON file look like that uses "SetupDevEnv.yml", if that is possible?
Sure. Supposing you're installing Packer using the APT repositories provided by Hashicorp on Ubuntu 20.04, that could be accomplished with the following playbook:
---
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Add Hashicorp's GPG key
      apt_key:
        state: present
        url: https://apt.releases.hashicorp.com/gpg
    - name: Add Hashicorp's repository
      apt_repository:
        mode: 0644
        repo: deb [arch=amd64] https://apt.releases.hashicorp.com focal main
        state: present
        update_cache: yes
    - name: Install Packer
      apt:
        name: packer
        state: present
Best regards!

Changing ansible loop due to v2.11 deprecation

I'm running a playbook which defines several packages to install via apt:
- name: Install utility packages common to all hosts
  apt:
    name: "{{ item }}"
    state: present
    autoclean: yes
  with_items:
    - aptitude
    - jq
    - curl
    - git-core
    - at
    ...
A recent ansible update on my system now renders this message concerning the playbook above:
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via squash_actions is deprecated. Instead of
using a loop to supply multiple items and specifying `name: {{ item }}`, please use `name: [u'aptitude',
u'jq', u'curl', u'git-core', u'at', u'heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp',
u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi']` and remove the loop.
If I'm understanding this correctly, Ansible now wants this list of packages as an array, which leaves this:
name: [u'aptitude', u'jq', u'curl', u'git-core', u'at','heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp',u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi']
Is there a better way? It just seems like I'll be scrolling right forever in Vim trying to maintain this. Either that, or word-wrap it and deal with a word cloud of packages.
You can code the array in YAML style to make it more readable:
- name: Install utility packages common to all hosts
  apt:
    name:
      - aptitude
      - jq
      - curl
      - git-core
      - at
    state: present
    autoclean: yes
I had this same question, and it looks like each set of packages with the same state will have to be its own task. Looking at Ansible's documentation, they have a task for each state as an example, so I took that example, split my packages up based on their states, followed Ignacio's example, and it ended up working perfectly.
So basically it would look like this:
- name: Install packages required for log-deployment
  apt:
    name:
      - gcc
      - python-devel
    state: latest
    autoclean: yes

- name: Install packages required for log-deployment
  apt:
    name:
      - python
      - mariadb
      - mysql-devel
    state: installed
Hope that makes sense and helps!
I came across this exact same problem, but with a much longer list of apps, held in a vars file. This is the code I implemented to get around the problem: the list of apps is placed in the "apps" variable, and Ansible iterates over it.
- name: Install default applications
  apt:
    name: "{{ item }}"
    state: latest
  loop: "{{ apps }}"
  when: ansible_distribution == 'Ubuntu' or ansible_distribution == 'Debian'
  tags:
    - instapps
The file holding the list of apps to install lives in the defaults directory of the role for this task, namely the "common" role:
roles
  - common
    - defaults
      - main.yml
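For illustration, a minimal sketch of what roles/common/defaults/main.yml might contain (the package names here are placeholders, not taken from the original post):
# roles/common/defaults/main.yml (hypothetical contents)
apps:
  - aptitude
  - jq
  - curl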
For the record, you can also have this written this way:
- name: Install utility packages common to all hosts
  apt:
    state: present
    autoclean: yes
    pkg: [
      "aptitude",
      "jq",
      "curl",
      "git-core",
      "at",
    ]
However, if you are upgrading existing roles to the new apt requirements, Ignacio's accepted answer is far better, as all you need to do is add some indentation to the already existing entries.

Error provisioning Cassandra with an Ansible playbook (Vagrant VM)

I'm trying to install and provision (using Ansible) Cassandra in a virtual machine, but I have the following issue:
FAILED! => {"changed": false, "failed": true,
"msg":"AnsibleUndefinedVariable: 'SimpleSnitch' is undefined"}
This issue occurs in the fifth task, "Change /etc/cassandra.yaml". The .yml file looks like this:
- name: "add datastax cassandra debian repository"
apt_repository: repo='deb http://debian.datastax.com/community stable main'
- name: "Add datastax repo key"
apt_key: url=http://debian.datastax.com/debian/repo_key
- name: "Install cassandra"
apt: name=dsc30 state=latest update_cache=yes install_recommends=yes
- name: "Install cassandra-tools"
apt: name=cassandra-tools state=latest update_cache=yes install_recommends=yes
- name: "Change /etc/cassandra.yaml"
template: src=cassandra.yaml.j2 dest=/etc/cassandra/cassandra.yaml
- name: "Restart cassandra"
service: name=cassandra state=restarted
- name: stop cassandra
service: name=cassandra state=stopped
- name: clear test data
shell: rm -rf /var/lib/cassandra/data/system/*
- name: clear test data
shell: rm -rf /var/lib/cassandra/data/system_data/*
- name: start cassandra
service: name=cassandra state=started
- name: "Stop to back the Cassandra node"
pause: seconds=30
Thanks in advance
This kind of error usually means that you forgot to wrap a string in quotes. The problem isn't in the playbook file you've pasted, though. Somewhere else you're setting some kind of snitch variable used by the cassandra.yaml.j2 template, and you have forgotten to wrap the "SimpleSnitch" value in quotes, so Ansible is mistakenly interpreting it as an undefined variable name.
Also, even for dev clusters, there's generally no reason not to get into the habit of using GossipingPropertyFileSnitch. It's very simple to configure and will put you in a good position to migrate your config to a prod cluster someday.
In your cassandra.yaml.j2 file, you are using a variable named SimpleSnitch in the endpoint_snitch: section. That is why you are getting this error:
FAILED! => {"changed": false, "failed": true, "msg":"AnsibleUndefinedVariable: 'SimpleSnitch' is undefined"}
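For illustration, a hypothetical pair of lines showing the likely culprit and two ways to fix it (the exact variable names in the real template are unknown):
# In cassandra.yaml.j2 -- this asks Ansible to resolve a variable called SimpleSnitch:
endpoint_snitch: {{ SimpleSnitch }}
# Either hard-code the literal value...
endpoint_snitch: SimpleSnitch
# ...or define a real variable in your vars (e.g. cassandra_endpoint_snitch: "SimpleSnitch",
# with the value quoted) and reference that instead:
endpoint_snitch: {{ cassandra_endpoint_snitch }}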

Running Forever in Ansible Provision Never Fires or Always Hangs

I've come across a problem with Ansible hanging when trying to start a forever process on a node. I have a very simple API server I'm creating in Vagrant and provisioning with Ansible like so:
---
- hosts: all
  sudo: yes
  roles:
    - Stouts.nodejs
    - Stouts.mongodb
  tasks:
    - name: Install Make Dependencies
      apt: name={{ item }} state=present
      with_items:
        - gcc
        - make
        - build-essential
    - name: Run NPM Update
      shell: /usr/bin/npm update
    - name: Create MongoDB Database Folder
      shell: /bin/mkdir -p /data/db
      notify:
        - mongodb restart
    - name: Generate Dummy Data
      command: /usr/bin/node /vagrant/dataGen.js
    - name: "Install forever (to run Node.js app)."
      npm: name=forever global=yes state=latest
    - name: "Check list of Node.js apps running."
      command: /usr/bin/forever list
      register: forever_list
      changed_when: false
    - name: "Start example Node.js app."
      command: /usr/bin/forever start /vagrant/server.js
      when: "forever_list.stdout.find('/vagrant/server.js') == -1"
But even though Ansible acts like everything is fine, no forever process is started. When I change a few lines to remove the when: statement and force it to run, Ansible just hangs, presumably while running the forever process, and never finishes bringing up the VM to the point where I can interact with it.
I've referenced essentially two points online; the only sources I can find.
As stated in the comments, the variable content needs to be included in your question for anyone to provide a definitive answer, but to work around this I suggest you do it like so:
- name: "Check list of Node.js apps running."
command: /usr/bin/forever list|grep '/vagrant/server.js'|wc -l
register: forever_list
changed_when: false
- name: "Start example Node.js app."
command: /usr/bin/forever start /vagrant/server.js
when: forever_list.stdout == "0"
which should prevent ansible from starting the JS app if it's already running.
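As an alternative sketch (untested), grep's exit status can drive the condition directly, avoiding the string comparison on the wc output:
# grep -q exits 0 when the app is found in forever's list and 1 when it is not;
# any other return code is treated as a genuine failure.
- name: "Check whether the example Node.js app is already running."
  shell: /usr/bin/forever list | grep -q '/vagrant/server.js'
  register: forever_list
  changed_when: false
  failed_when: forever_list.rc not in [0, 1]
- name: "Start example Node.js app."
  command: /usr/bin/forever start /vagrant/server.js
  when: forever_list.rc == 1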
