I'm trying to install Docker and create a Docker container from within a local Ansible playbook containing multiple plays, adding the user to the docker group in between:
- hosts: localhost
  connection: local
  become: yes
  gather_facts: no
  tasks:
    - name: install docker
      ansible.builtin.apt:
        update_cache: yes
        pkg:
          - docker.io
          - python3-docker
    - name: Add current user to docker group
      ansible.builtin.user:
        name: "{{ lookup('env', 'USER') }}"
        append: yes
        groups: docker
    - name: Ensure that docker service is running
      ansible.builtin.service:
        name: docker
        state: started

- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Create docker container
      community.docker.docker_container:
        image: ...
        name: ...
When executing this playbook with ansible-playbook, I'm getting a permission denied error at the "Create docker container" task. Rebooting and running the playbook again resolves the error.
I have tried manually executing some of the commands suggested here and then running the playbook again, which works, but I'd like to do everything from within the playbook.
Adding a task like
- name: allow user changes to take effect
  ansible.builtin.shell:
    cmd: exec sg docker newgrp `id -gn`
does not work.
How can I refresh the Linux user group assignments from within the playbook?
I'm on Ubuntu 18.04.
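One workaround that is sometimes suggested (not something tried above) is to sidestep the group refresh entirely and run the container task with become: yes, since root can reach the Docker socket without being in the docker group. A minimal sketch, reusing the placeholders from the question:

- hosts: localhost
  connection: local
  become: yes          # root does not need docker group membership
  gather_facts: no
  tasks:
    - name: Create docker container
      community.docker.docker_container:
        image: ...
        name: ...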
I wanted to run a playbook that accurately reports whether one of the remote servers requires security updates. The Ansible server is CentOS 7; the remote servers are Amazon Linux.
A remote server would highlight something like the below on startup:
https://aws.amazon.com/amazon-linux-2/
8 package(s) needed for security, out of 46 available
Run "sudo yum update" to apply all updates.
To confirm this, I put a playbook together, cobbled from many sources (below), that performs that function to a degree. It does indicate whether the remote server requires security updates, but it doesn't say what those updates are.
- name: check if security updates are needed
  hosts: elk
  tasks:
    - name: check yum security updates
      shell: "yum updateinfo list all security"
      changed_when: false
      register: security_update
    - debug: msg="Security update required"
      when: security_update.stdout != "0"
    - name: list some packages
      yum: list=available
Then, when I run my updates install playbook:
- hosts: elk
  remote_user: ansadm
  become: yes
  become_method: sudo
  tasks:
    - name: Move repos from backup to yum.repos.d
      shell: mv -f /backup/* /etc/yum.repos.d/
      register: shell_result
      failed_when: '"No such file or directory" in shell_result.stderr_lines'
    - name: Remove redhat.repo
      shell: rm -f /etc/yum.repos.d/redhat.repo
      register: shell_result
      failed_when: '"No such file or directory" in shell_result.stderr_lines'
    - name: add line to yum.conf
      lineinfile:
        dest: /etc/yum.conf
        line: exclude=kernel* redhat-release*
        state: present
        create: yes
    - name: yum makecache
      shell: yum makecache
      register: shell_result
      failed_when: '"There are no enabled repos" in shell_result.stderr_lines'
    - name: install all security patches
      yum:
        name: '*'
        state: latest
        security: yes
        bugfix: yes
        skip_broken: yes
After the install, you get something similar to the below (btw, these outputs are from different servers):
https://aws.amazon.com/amazon-linux-2/
No packages needed for security; 37 packages available
Run "sudo yum update" to apply all updates.
But if I run my "list security updates" playbook again, it gives a false positive, as it still reports that security updates are needed:
PLAY [check if security updates are needed] ************************************
TASK [Gathering Facts] *********************************************************
ok: [10.10.10.192]
TASK [check yum security updates] **********************************************
ok: [10.10.10.192]
TASK [debug] *******************************************************************
ok: [10.10.10.192] => {
"msg": "Security update required"
}
TASK [list some packages] ******************************************************
ok: [10.10.10.192]
PLAY RECAP *********************************************************************
10.10.10.192 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[ansadm@ansible playbooks]$
What do I need to omit/include in the playbook so that it reflects the changes after the updates have been installed?
Thanks in advance :)
So I ran your yum command locally on my system and I get the following:
45) local-user#server:/home/local-user> yum updateinfo list all security
Loaded plugins: ulninfo
local_repo | 2.9 kB 00:00:00
updateinfo list done
Now, granted, our systems may have different output here, but it will serve the purpose of my explanation. The output of the entire command is saved to your register, but your when conditional says to run only when the output of that command is not EXACTLY "0".
So unless you pare that response down with some awk or sed, and it comes back with anything more than literally just the character "0", that debug task is always going to fire off.
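As an illustration only (my own sketch, not part of the original answer), one way to make that conditional testable is to register the advisory list itself and check whether it is empty, which also covers the "doesn't say what these updates are" part. A rough sketch, assuming yum's updateinfo output; it drops the "all" keyword, since "all" also lists advisories that are already installed, which may be what causes the false positive after patching:

- name: check yum security updates
  shell: "yum -q updateinfo list security"
  register: security_update
  changed_when: false

- name: report which security updates are required
  debug:
    msg: "{{ security_update.stdout_lines }}"
  when: security_update.stdout_lines | length > 0

Depending on the yum version, the output may still contain header or trailer lines (such as "updateinfo list done"), so some of the grep/awk trimming the answer mentions may still be needed.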
I have been using the Helm charts to install Elasticsearch and Kibana into Kubernetes.
Using the default configuration everything went OK, but I want to enable security on both Elasticsearch and Kibana.
I did what's recommended in the documentation; security was enabled for Elasticsearch, but I have a problem upgrading Kibana with the security configuration. It gives me this error:
Error: release helm-kibana-security failed: timed out waiting for the condition
once I run make (from /kibana/examples/security).
I even tried to install it directly without using the Makefile:
helm install --wait --timeout=600 --values ./security.yml --name helm-kibana-security ../../
but I'm having the same issue. Can anyone help me, please?
"failed: timed out waiting for the condition"
This message occurs when you install a release with the --wait flag but the pods are unable to start for some reason.
The problem is most likely in "./security.yml"
Try running the commands below to debug the issue:
kubectl describe pod kibana-pod-name
kubectl logs kibana-pod-name
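A couple of further commands can also help narrow down a --wait timeout (general suggestions on my part, not from the original answer; the release name is the one used in the question):

kubectl get pods                                            # find the real Kibana pod name and its status
kubectl get events --sort-by=.metadata.creationTimestamp    # recent scheduling / volume-mount errors
helm status helm-kibana-security                            # overall state of the release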
This is the security.yml file:
---
elasticsearchHosts: "https://security-master:9200"

extraEnvs:
  - name: 'ELASTICSEARCH_USERNAME'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username
  - name: 'ELASTICSEARCH_PASSWORD'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password

kibanaConfig:
  kibana.yml: |
    server.ssl:
      enabled: true
      key: /usr/share/kibana/config/certs/kibana/kibana.key
      certificate: /usr/share/kibana/config/certs/kibana/kibana.crt
    xpack.security.encryptionKey: something_at_least_32_characters
    elasticsearch.ssl:
      certificateAuthorities: /usr/share/kibana/config/certs/elastic-certificate.pem
      verificationMode: certificate

protocol: https

secretMounts:
  - name: elastic-certificate-pem
    secretName: elastic-certificate-pem
    path: /usr/share/kibana/config/certs
  - name: kibana-certificates
    secretName: kibana-certificates
    path: /usr/share/kibana/config/certs/kibana
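One thing worth noting (my observation, not part of the question): this values file assumes the secrets it references already exist in the cluster; if any of them is missing, the Kibana pod stays stuck on a volume mount and the release times out exactly like this. Assuming they still need to be created, something along these lines would do it (secret names taken from the file above, file paths are placeholders):

kubectl create secret generic elastic-credentials \
  --from-literal=username=elastic --from-literal=password='<your-password>'
kubectl create secret generic elastic-certificate-pem \
  --from-file=elastic-certificate.pem
kubectl create secret generic kibana-certificates \
  --from-file=kibana.crt --from-file=kibana.key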
I'm running a playbook which defines several packages to install via apt:
- name: Install utility packages common to all hosts
  apt:
    name: "{{ item }}"
    state: present
    autoclean: yes
  with_items:
    - aptitude
    - jq
    - curl
    - git-core
    - at
    ...
A recent Ansible update on my system now produces this message concerning the playbook above:
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via squash_actions is deprecated. Instead of
using a loop to supply multiple items and specifying `name: {{ item }}`, please use `name: [u'aptitude',
u'jq', u'curl', u'git-core', u'at', u'heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp',
u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi']` and remove the loop.
If I'm understanding this correctly, Ansible now wants this list of packages as an array, which leaves this:
name: [u'aptitude', u'jq', u'curl', u'git-core', u'at','heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp',u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi']
Is there a better way? It just seems like I'll be scrolling right forever in Vim trying to maintain this; either that, or word-wrap it and deal with a word cloud of packages.
You can write the array in YAML block style to make it more readable:
- name: Install utility packages common to all hosts
  apt:
    name:
      - aptitude
      - jq
      - curl
      - git-core
      - at
    state: present
    autoclean: yes
I had this same question, and it looks like each set of packages with the same state will have to be its own task. Looking at Ansible's documentation, they have a task for each state as an example, so I took that example, split my packages up based on their states, followed Ignacio's example, and it ended up working perfectly.
So basically it would look like this:
- name: Install packages required for log-deployment
  apt:
    name:
      - gcc
      - python-devel
    state: latest
    autoclean: yes

- name: Install packages required for log-deployment
  apt:
    name:
      - python
      - mariadb
      - mysql-devel
    state: present
Hope that makes sense and helps!
I came across this exact same problem, but with a much longer list of apps, held in a vars file. This is the code I implemented to get around the problem. The list of apps is placed into the "apps" variable and Ansible iterates over that.
- name: Install default applications
  apt:
    name: "{{ item }}"
    state: latest
  loop: "{{ apps }}"
  when: ansible_distribution == 'Ubuntu' or ansible_distribution == 'Debian'
  tags:
    - instapps
The file holding the list of apps to install is in the defaults directory of the role for this task, namely the "common" role:
roles
  - common
    - defaults
      - main.yml
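For illustration (the original answer does not show it), the defaults/main.yml would simply define the apps list; the package names here are placeholders:

---
apps:
  - aptitude
  - jq
  - curl
  - git-core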
For the record, you can also have this written this way:
- name: Install utility packages common to all hosts
  apt:
    state: present
    autoclean: yes
    pkg: [
      "aptitude",
      "jq",
      "curl",
      "git-core",
      "at",
    ]
However, if you are upgrading existing roles to the new apt requirements, Ignacio's accepted answer is far better, as all you need to do is add some indentation to the already existing entries.
I've come across a problem with Ansible hanging when trying to start a forever process on an Ansible node. I have a very simple API server I'm creating in Vagrant and provisioning with Ansible like so:
---
- hosts: all
  sudo: yes
  roles:
    - Stouts.nodejs
    - Stouts.mongodb
  tasks:
    - name: Install Make Dependencies
      apt: name={{ item }} state=present
      with_items:
        - gcc
        - make
        - build-essential
    - name: Run NPM Update
      shell: /usr/bin/npm update
    - name: Create MongoDB Database Folder
      shell: /bin/mkdir -p /data/db
      notify:
        - mongodb restart
    - name: Generate Dummy Data
      command: /usr/bin/node /vagrant/dataGen.js
    - name: "Install forever (to run Node.js app)."
      npm: name=forever global=yes state=latest
    - name: "Check list of Node.js apps running."
      command: /usr/bin/forever list
      register: forever_list
      changed_when: false
    - name: "Start example Node.js app."
      command: /usr/bin/forever start /vagrant/server.js
      when: "forever_list.stdout.find('/vagrant/server.js') == -1"
But even though Ansible acts like everything is fine, no forever process is started. When I change a few lines to remove the when: statement and force it to run, Ansible just hangs, possibly running the forever process (forever, I presume) but never getting the VM to the point where I can interact with it.
I've referenced essentially two points online; the only sources I can find.
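As a general aside (not from the question or the answer that follows): when a task that launches a daemonizing process appears to hang, Ansible's async/poll "fire and forget" mode is sometimes used so the play does not sit waiting on it. A minimal sketch based on the task above:

- name: "Start example Node.js app (fire and forget)."
  command: /usr/bin/forever start /vagrant/server.js
  async: 60   # allow up to 60 seconds for the command
  poll: 0     # do not wait for it to finish
  when: "forever_list.stdout.find('/vagrant/server.js') == -1"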
As stated in the comments, the variable content needs to be included in your question for anyone to provide a definitive answer, but to work around this I suggest you do it like so:
- name: "Check list of Node.js apps running."
command: /usr/bin/forever list|grep '/vagrant/server.js'|wc -l
register: forever_list
changed_when: false
- name: "Start example Node.js app."
command: /usr/bin/forever start /vagrant/server.js
when: forever_list.stdout == "0"
which should prevent Ansible from starting the JS app if it's already running (note that the pipeline requires the shell module, since the command module does not process pipes).