Changing Ansible loop due to v2.11 deprecation - Linux

I'm running a playbook that defines several packages to install via apt:
- name: Install utility packages common to all hosts
  apt:
    name: "{{ item }}"
    state: present
    autoclean: yes
  with_items:
    - aptitude
    - jq
    - curl
    - git-core
    - at
    ...
A recent Ansible update on my system now produces this warning for the playbook above:
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via squash_actions is deprecated. Instead of
using a loop to supply multiple items and specifying `name: {{ item }}`, please use `name: [u'aptitude',
u'jq', u'curl', u'git-core', u'at', u'heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp',
u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi']` and remove the loop.
If I'm understanding this correctly, Ansible now wants the list of packages as an array, which leaves this:
name: [u'aptitude', u'jq', u'curl', u'git-core', u'at', u'heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp', u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi']
Is there a better way? It just seems like I'll be scrolling right forever in Vim trying to maintain this. Either that, or word-wrap it and deal with a word cloud of packages.

You can code the array in YAML style to make it more readable:
- name: Install utility packages common to all hosts
  apt:
    name:
      - aptitude
      - jq
      - curl
      - git-core
      - at
    state: present
    autoclean: yes

I had this same question, and it looks like each set of packages with the same state will have to be its own block. Ansible's documentation has a block for each state as an example, so I took that example, cut up my packages based on their states, followed Ignacio's example, and it ended up working perfectly.
So basically it would look like this:
- name: Install packages required for log-deployment
  apt:
    name:
      - gcc
      - python-devel
    state: latest
    autoclean: yes

- name: Install packages required for log-deployment
  apt:
    name:
      - python
      - mariadb
      - mysql-devel
    state: installed
Hope that makes sense and helps!

I came across this exact same problem, but with a much longer list of apps, held in a vars file. This is the code I implemented to get around the problem. The list of apps is placed in the "apps" variable and Ansible iterates over it.
- name: Install default applications
  apt:
    name: "{{ item }}"
    state: latest
  loop: "{{ apps }}"
  when: ansible_distribution == 'Ubuntu' or ansible_distribution == 'Debian'
  tags:
    - instapps
The file holding the list of apps to install is in the defaults directory of the role for this task - namely the "common" role:
roles
- common
  - defaults
    - main.yml
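For illustration, the defaults file might define the variable like this (the package names here are just placeholders for your own list):

```yaml
# roles/common/defaults/main.yml (hypothetical contents)
apps:
  - aptitude
  - jq
  - curl
  - git-core
  - at
```

Note that on newer Ansible releases the loop can also be dropped entirely, since apt accepts a list directly: `name: "{{ apps }}"`. That avoids the per-item deprecation warning altogether.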

For the record, you can have this written this way:
- name: Install utility packages common to all hosts
  apt:
    state: present
    autoclean: yes
    pkg: [
      "aptitude",
      "jq",
      "curl",
      "git-core",
      "at",
    ]
However, if you are upgrading existing roles to the new apt requirements, Ignacio's accepted answer is far better, as all you need to do is add some indentation to the already existing entries.

Related

Is it possible to set up your local environment with Packer (like with an Ansible playbook)?

I am trying to set up my development environment (on my local Linux PC) via some automatic mechanism.
I tried Ansible playbooks; they work quite well.
Let's assume a playbook like this ("SetupDevEnv.yml"):
---
- hosts: localhost
  tasks:
    - name: install packages
      apt:
        state: present
        name:
          - gcc
          - clang
          - vscode
          - ...
After running:
$ ansible-playbook SetupDevEnv.yml
my local PC is ready to go with all the toolchain tools needed (gcc, clang, vscode, ...).
Is it possible to do the same thing with Packer?
If yes, what would it look like?
Would it also be possible to use the existing Ansible playbooks?
Remark 1: The important point here is that I want to do it for my local PC (localhost) only. I do not want to generate a VM or a Docker container that I have to log in to afterward.
The idea is: the "SetupDevEnv.yml" (or the Packer file) is located in the repository. The developer checks out the repository, runs the setup generation, and starts to work.
Remark 2: To clarify the question: what would the Packer HCL/JSON file look like that does the same as "SetupDevEnv.yml"? Or what would the Packer HCL/JSON file look like that uses "SetupDevEnv.yml"? If this is possible.
Sure. Supposing you're installing Packer from the APT repositories provided by HashiCorp on Ubuntu 20.04, that could be accomplished with the following playbook:
---
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Add Hashicorp's GPG key
      apt_key:
        state: present
        url: https://apt.releases.hashicorp.com/gpg

    - name: Add Hashicorp's repository
      apt_repository:
        mode: 0644
        repo: deb [arch=amd64] https://apt.releases.hashicorp.com focal main
        state: present
        update_cache: yes

    - name: Install Packer
      apt:
        name: packer
        state: present
Best regards!

Ansible: How to add a Linux module command path (opkg not in PATH)

My Linux box (BusyBox) mostly uses read-only filesystems. I have the option to install my programs to a different path, like PATH=/opt/bin:/opt/sbin. The package manager also sits in this folder (executable: /opt/bin/opkg).
When I want to use the Ansible opkg module, I get the following error:
"Failed to find required executable opkg in paths: /bin:/usr/bin:/bin:/usr/sbin:/sbin"
Question: How can I tell Ansible to look for the opkg executable in a different path?
Any ideas are welcome!
Thank you!
I found some useful links:
https://docs.ansible.com/ansible/latest/reference_appendices/faq.html
Ansible - Set environment path as inventory variable
And here is my example:
---
- hosts: CBOX-0001
  gather_facts: True
  gather_subset:
    - "!all"
  environment:
    PATH: "/opt/bin:/opt/sbin:/usr/bin:/usr/sbin:{{ ansible_env.PATH }}"
  collections:
    - community.general
  tasks:
    - name: "install opkg packages"
      opkg:
        name: "{{ item }}"
        state: present
      with_items:
        - screen
        - mc
        - rclone
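If the modified PATH is only needed for the package tasks, the same environment keyword can also be scoped to a single task instead of the whole play. A sketch based on the playbook above:

```yaml
- name: "install opkg packages"
  opkg:
    name: "{{ item }}"
    state: present
  # environment is a valid keyword at task level too; the PATH
  # value here mirrors the play-level example above
  environment:
    PATH: "/opt/bin:/opt/sbin:{{ ansible_env.PATH }}"
  with_items:
    - screen
    - mc
    - rclone
```

This keeps the rest of the play running with the default environment, which can matter on boxes where other tools shadow names in /opt/bin.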

Ansible - List of Linux security updates needed on remote servers

I wanted to run a playbook that accurately reports whether one of the remote servers requires security updates. Ansible server = CentOS 7, remote servers Amazon Linux.
Remote server would highlight on startup something like below:
https://aws.amazon.com/amazon-linux-2/
8 package(s) needed for security, out of 46 available
Run "sudo yum update" to apply all updates.
To confirm this, I put together a playbook, cobbled from many sources (below), that performs that function to a degree. It does suggest whether the remote server requires security updates, but doesn't say what these updates are.
- name: check if security updates are needed
  hosts: elk
  tasks:
    - name: check yum security updates
      shell: "yum updateinfo list all security"
      changed_when: false
      register: security_update

    - debug: msg="Security update required"
      when: security_update.stdout != "0"

    - name: list some packages
      yum: list=available
Then, when I run my updates install playbook:
- hosts: elk
  remote_user: ansadm
  become: yes
  become_method: sudo
  tasks:
    - name: Move repos from backup to yum.repos.d
      shell: mv -f /backup/* /etc/yum.repos.d/
      register: shell_result
      failed_when: '"No such file or directory" in shell_result.stderr_lines'

    - name: Remove redhat.repo
      shell: rm -f /etc/yum.repos.d/redhat.repo
      register: shell_result
      failed_when: '"No such file or directory" in shell_result.stderr_lines'

    - name: add line to yum.conf
      lineinfile:
        dest: /etc/yum.conf
        line: exclude=kernel* redhat-release*
        state: present
        create: yes

    - name: yum clean
      shell: yum makecache
      register: shell_result
      failed_when: '"There are no enabled repos" in shell_result.stderr_lines'

    - name: install all security patches
      yum:
        name: '*'
        state: latest
        security: yes
        bugfix: yes
        skip_broken: yes
After install, you would get something similar to the below (btw - these are outputs from different servers):
https://aws.amazon.com/amazon-linux-2/
No packages needed for security; 37 packages available
Run "sudo yum update" to apply all updates.
But if I run my list-security-updates playbook again, it gives a false positive, as it still reports that security updates are needed:
PLAY [check if security updates are needed] ************************************
TASK [Gathering Facts] *********************************************************
ok: [10.10.10.192]
TASK [check yum security updates] **********************************************
ok: [10.10.10.192]
TASK [debug] *******************************************************************
ok: [10.10.10.192] => {
"msg": "Security update required"
}
TASK [list some packages] ******************************************************
ok: [10.10.10.192]
PLAY RECAP *********************************************************************
10.10.10.192 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[ansadm@ansible playbooks]$
What do I need to omit/include in playbook to reflect the changes after the install of the updates?
Thanks in advance :)
So I ran your yum command locally on my system and I get the following:
45) local-user@server:/home/local-user> yum updateinfo list all security
Loaded plugins: ulninfo
local_repo | 2.9 kB 00:00:00
updateinfo list done
Now granted, our systems may have different output here, but it will serve the purpose of my explanation. The output of the entire command is saved to your register, but your when conditional says to run when the output of that command is not EXACTLY "0".
So unless you pare that response down with some awk or sed, any time it responds with more text than literally just the character "0", that debug task is always going to fire off.
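One way to pare it down is to count matching lines and compare the number rather than the raw text. A sketch, assuming Amazon Linux's updateinfo output tags each advisory line with a severity word (the grep pattern is a guess you would adapt to your actual output):

```yaml
- name: count pending security updates
  # grep -Eic counts lines containing a severity keyword, case-insensitively;
  # "|| true" keeps the task from failing when grep matches nothing (rc 1)
  shell: "yum -q updateinfo list security | grep -Eic 'critical|important|moderate|low' || true"
  changed_when: false
  register: security_count

- debug:
    msg: "Security update required"
  when: security_count.stdout | int > 0
```

Casting stdout through the `int` filter makes the comparison numeric, so trailing whitespace or header noise in the register no longer produces false positives.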

Ansible: ignoring errors in a loop

I need to install a number of packages on Linux boxes. Some (a few) of the packages may be missing for various reasons (OS version, essentially):
- vars:
    pkgs:
      - there_1
      - not_there_1
      - there_2
      ...
but I would like to manage them from a single playbook. So I cannot stick them all in a single
yum: state=latest name="{{ pkgs }}"
because missing packages would mess up the transaction so that nothing gets installed.
However, the obvious (and slow) one-by-one install also fails, because the first missing package blows the entire loop out of the water:
- name: Packages after not_there_1 are not installed
  yum: state=latest name="{{ item }}"
  ignore_errors: yes
  with_items: "{{ pkgs }}"
Is there a way to ignore errors within a loop in such a way that all items be given a chance? (i.e. install errors behave as a continue in the loop)
If you need to loop a set of tasks as a unit, it would be -so- nice if we could use with_items on an error handling block, right?
Until that feature comes around, you can accomplish the same thing with include_tasks and with_items. Doing this should allow a block to handle failed packages, or you could even include some checks and package installs in the sub-tasks if you wanted.
First set up a Sub-Tasks.yml to contain your install tasks:
Sub-Tasks.yml
- name: Install package and handle errors
  block:
    - name: Install package
      yum: state=latest name="{{ package_name }}"
  rescue:
    - debug:
        msg: "I caught an error with {{ package_name }}"
Then your playbook will setup a loop of these tasks:
- name: Install all packages ignoring errors
  include_tasks: Sub-Tasks.yml
  vars:
    package_name: "{{ item }}"
  with_items:
    - "{{ pkgs }}"

Running Forever in Ansible Provision Never Fires or Always Hangs

I've come across a problem with Ansible hanging when trying to start a forever process on an Ansible node. I have a very simple API server I'm creating in Vagrant and provisioning with Ansible like so:
---
- hosts: all
  sudo: yes
  roles:
    - Stouts.nodejs
    - Stouts.mongodb
  tasks:
    - name: Install Make Dependencies
      apt: name={{ item }} state=present
      with_items:
        - gcc
        - make
        - build-essential

    - name: Run NPM Update
      shell: /usr/bin/npm update

    - name: Create MongoDB Database Folder
      shell: /bin/mkdir -p /data/db
      notify:
        - mongodb restart

    - name: Generate Dummy Data
      command: /usr/bin/node /vagrant/dataGen.js

    - name: "Install forever (to run Node.js app)."
      npm: name=forever global=yes state=latest

    - name: "Check list of Node.js apps running."
      command: /usr/bin/forever list
      register: forever_list
      changed_when: false

    - name: "Start example Node.js app."
      command: /usr/bin/forever start /vagrant/server.js
      when: "forever_list.stdout.find('/vagrant/server.js') == -1"
But even though Ansible acts like everything is fine, no forever process is started. When I change a few lines to remove the when: statement and force it to run, Ansible just hangs, possibly running the process (forever, I presume) but never launching the VM to where I can interact with it.
I've referenced essentially two points online; the only sources I can find.
As stated in the comments, the variable content needs to be included in your question for anyone to provide a correct answer, but to overcome this I suggest you do it like so:
- name: "Check list of Node.js apps running."
command: /usr/bin/forever list|grep '/vagrant/server.js'|wc -l
register: forever_list
changed_when: false
- name: "Start example Node.js app."
command: /usr/bin/forever start /vagrant/server.js
when: forever_list.stdout == "0"
which should prevent ansible from starting the JS app if it's already running.
