apt cache update keeps failing in Ansible playbook - linux

This is the content of my Ansible playbook. I am trying to install PHP 8.1 using the ppa:ondrej/php repo, but I keep getting this error:
fatal: [debian-machine]: FAILED! => {"changed": false, "msg": "apt cache update failed"}
The playbook deploys a Laravel app, but I need to install PHP 8.1 before moving on to the app deployment. This is my playbook below.
---
- name: A playbook for laravel deployment
  hosts: laravelserver
  become: yes
  remote_user: root
  vars:
    password: "*******"
    ip: "xxx.xxx.xxx.xxx"
  tasks:
    - name: Update and Upgrade apt repository
      apt:
        upgrade: yes
        update_cache: yes
        cache_valid_time: 86400
    - name: Installation of Apache Server
      apt:
        name: apache2
        state: present
    - name: Enabling Apache Service
      service:
        name: apache2
        enabled: yes
    - name: Installing PHP dependencies
      apt:
        name:
          - ca-certificates
          - apt-transport-https
          - software-properties-common
        state: present
    - name: Adding Deb Sury Repo
      apt_repository:
        repo: ppa:ondrej/php
        state: present
    - name: Update and Upgrade apt repository
      apt:
        update_cache: yes
    - name: Installation of PHP8.1 and its dependencies
      apt:
        name:
          - php8.1
          - php8.1-mysql
          - libapache2-mod-php
          - php8.1-imap
          - php8.1-ldap
          - php8.1-xml
          - php8.1-fpm
          - php8.1-curl
          - php8.1-mbstring
          - php8.1-zip
        state: present
How can I stop this error from occurring when I run the playbook?
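One likely cause, offered as a hedged guess since only the error message is shown: the target host appears to be Debian (debian-machine), and ppa:ondrej/php is an Ubuntu PPA, so apt_repository writes a source that apt cannot resolve and the following cache update fails. A minimal sketch of adding Sury's Debian repository instead; the key URL, keyring path, and filename below are assumptions to verify against the Sury setup instructions for your release:

- name: Add Sury repository signing key (Debian, not the Ubuntu PPA)
  ansible.builtin.get_url:
    url: https://packages.sury.org/php/apt.gpg
    dest: /usr/share/keyrings/sury-php.gpg
    mode: "0644"

- name: Add Sury PHP repository for Debian
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/usr/share/keyrings/sury-php.gpg] https://packages.sury.org/php/ {{ ansible_distribution_release }} main"
    state: present
    filename: sury-php

- name: Update apt cache
  ansible.builtin.apt:
    update_cache: yes

If the hosts really are Ubuntu, the PPA form is fine, and the failure is more likely a network, proxy, or key problem; in that case running apt-get update manually on the box usually shows the underlying apt error.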

Related

pm2 unable to register service on port when started with ansible

I have the following Ansible playbook:
- name: "update apt package."
become: yes
apt:
update_cache: yes
- name: "update packages"
become: yes
apt:
upgrade: yes
- name: "Remove dependencies that are no longer required"
become: yes
apt:
autoremove: yes
- name: "Install Dependencies"
become: yes
apt:
name: ["nodejs", "npm"]
state: latest
update_cache: yes
- name: Install pm2
become: yes
npm:
name: pm2
global: yes
production: yes
state: present
- name: Creates Directory
become: yes
file:
path: ~/backend
state: directory
mode: 0755
- name: Copy backend dist files web server
become: yes
copy:
src: ~/project/artifact.tar.gz
dest: ~/backend/artifact.tar.gz
- name: Extract backend files
become: yes
shell: |
cd ~/backend
tar -vxf artifact.tar.gz
#pwd
- name: Executing node
become: true
become_method: sudo
become_user: root
shell: |
cd ~/backend
npm install
npm run build
pm2 stop default
pm2 start npm --name "backend" -- run start --port 3030
pm2 status
cd ~/backend/dist
pm2 start main.js --update-env
pm2 status
I have an issue with the task Executing node. On my remote server, if I log in via SSH as root and run each of the commands manually, pm2 starts the services as expected, which I can confirm from pm2 status. I can also verify that the service is bound to port 3030 and listening on it (as expected).
But when I try doing the same through the Ansible playbook, PM2 does start the service, but for some reason it does not bind it to port 3030; I can see there is nothing listening on 3030.
Can someone please help?
Major edit 1
I also tried breaking up the entire task into smaller ones and running them as individual commands through the command module, but the results are still the same.
Updated roles playbook:
- name: "update apt package."
become: yes
apt:
update_cache: yes
- name: "update packages"
become: yes
apt:
upgrade: yes
- name: "Remove dependencies that are no longer required"
become: yes
apt:
autoremove: yes
- name: "Install Dependencies"
become: yes
apt:
name: ["nodejs", "npm"]
state: latest
update_cache: yes
- name: Install pm2
become: yes
npm:
name: pm2
global: yes
production: yes
state: present
- name: Creates Directory
become: yes
file:
path: ~/backend
state: directory
mode: 0755
- name: Copy backend dist files web server
become: yes
copy:
src: ~/project/artifact.tar.gz
dest: ~/backend/artifact.tar.gz
#dest: /home/ubuntu/backend/artifact.tar.gz
- name: Extract backend files
become: yes
shell: |
cd ~/backend
tar -vxf artifact.tar.gz
- name: NPM install
become: true
command: npm install
args:
chdir: /root/backend
register: shell_output
- name: NPM build
become: true
command: npm run build
args:
chdir: /root/backend
register: shell_output
- name: PM2 start backend
become: true
command: pm2 start npm --name "backend" -- run start
args:
chdir: /root/backend
register: shell_output
- name: PM2 check backend status in pm2
become: true
command: pm2 status
args:
chdir: /root/backend
register: shell_output
- name: PM2 start main
become: true
command: pm2 start main.js --update-env
args:
chdir: /root/backend/dist
register: shell_output
- debug: var=shell_output
No change in the result even with the above, while running the same steps manually, either each command individually in the bash shell or through a .sh script (as below), works just fine.
#!/bin/bash
cd ~/backend
pm2 start npm --name "backend" -- run start
pm2 status
cd dist
pm2 start main.js --update-env
pm2 status
pm2 save
Sometimes even the manual steps do not get the port up and listening while the service is still up, but that happens rarely and I am unable to reproduce it consistently.
Why is this not working? That is my primary question. What am I doing wrong? My assumption is that, conceptually, it should work.
My secondary question is: how can I make this work?
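One thing worth checking, as a guess rather than a confirmed diagnosis: Ansible's shell and command tasks run in a non-interactive, non-login shell, so anything exported from /root/.bashrc or /root/.profile (a PORT or NODE_ENV variable, an nvm-managed PATH, and so on) is not loaded the way it is in your manual SSH session. If the app only binds to 3030 when such a variable is present, passing the environment explicitly on the task should make the Ansible run behave like the manual one. A minimal sketch; the variable names are placeholders for whatever your app actually reads:

- name: PM2 start backend with an explicit environment
  become: true
  environment:
    PORT: "3030"          # hypothetical: whichever variable the app uses to pick its port
    NODE_ENV: production  # hypothetical
  command: pm2 start npm --name "backend" -- run start
  args:
    chdir: /root/backend

Alternatively, wrapping the commands in bash -lc inside a shell task forces a login shell and loads the same profile files your interactive session uses, which makes it easier to compare the two environments.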

How to apply changes to a Linux user's group assignments inside a local Ansible playbook?

I'm trying to install docker and create a docker image within a local Ansible playbook containing multiple plays, adding the user to the docker group in between:
- hosts: localhost
  connection: local
  become: yes
  gather_facts: no
  tasks:
    - name: install docker
      ansible.builtin.apt:
        update_cache: yes
        pkg:
          - docker.io
          - python3-docker
    - name: Add current user to docker group
      ansible.builtin.user:
        name: "{{ lookup('env', 'USER') }}"
        append: yes
        groups: docker
    - name: Ensure that docker service is running
      ansible.builtin.service:
        name: docker
        state: started
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Create docker container
      community.docker.docker_container:
        image: ...
        name: ...
When executing this playbook with ansible-playbook, I'm getting a permission denied error at the "Create docker container" task. Rebooting and running the playbook again resolves the error.
I have tried manually executing some of the commands suggested here and then running the playbook again, which works, but I'd like to do everything from within the playbook.
Adding a task like
- name: allow user changes to take effect
  ansible.builtin.shell:
    cmd: exec sg docker newgrp `id -gn`
does not work.
How can I refresh the Linux user group assignments from within the playbook?
I'm on Ubuntu 18.04.
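A pattern that often sidesteps this, offered as a sketch rather than a confirmed fix: group membership is only re-evaluated when a new login session starts, and with connection: local the second play reuses the same session, so the freshly added docker group never becomes effective. Running the container task with become: yes avoids the problem entirely, because root does not need the docker group to reach the socket; over SSH you could instead add an ansible.builtin.meta: reset_connection task after the group change to force a fresh session.

- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Create docker container (as root, so the new group membership is not needed yet)
      become: yes
      community.docker.docker_container:
        image: ...
        name: ...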

Why does the profile not get loaded properly?

I have run a playbook with the following content on the host:
---
- name: Test
  hosts: debian
  vars_files:
    - "./secret.vault.yaml"
  tasks: # Roles, modules, and any variables
    - name: Install aptitude using apt
      apt: name=aptitude state=latest update_cache=yes force_apt_get=yes
    - name: Install required system packages
      apt: name={{ item }} state=latest update_cache=yes
      loop:
        [
          "apt-transport-https",
          "ca-certificates",
          "curl",
          "software-properties-common",
          "python3-pip",
          "virtualenv",
          "python3-setuptools",
        ]
    - name: Install snap
      apt:
        update_cache: yes
        name: snapd
    - name: Install git
      apt:
        update_cache: yes
        name: git
    - name: Install certbot
      apt:
        update_cache: yes
        name: certbot
    - name: Install htop
      apt:
        update_cache: yes
        name: htop
    - name: Ensure group "sudo" exists
      group:
        name: sudo
        state: present
    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/debian/gpg
        state: present
    - name: Add Docker Repository
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/debian buster stable
        state: present
    - name: Index new repo into the cache
      apt:
        name: "*"
        state: latest
        update_cache: yes
        force_apt_get: yes
    - name: Update apt and install docker-ce
      apt:
        update_cache: yes
        name: docker-ce
        state: latest
    - name: Ensure group "docker" exists
      group:
        name: docker
        state: present
    - name: Add admin user
      user:
        name: admin
        comment: administrator
        groups: sudo, docker
        password: "{{ adminpw | password_hash('sha512') }}"
    - name: Ensure docker-compose is installed and available
      get_url:
        url: https://github.com/docker/compose/releases/download/1.25.4/docker-compose-{{ ansible_system }}-{{ ansible_userspace_architecture }}
        dest: /usr/local/bin/docker-compose
        mode: "u=rwx,g=rx,o=rx"
    - name: Copy SSH file
      copy:
        src: ~/.ssh
        dest: /home/admin/
        force: yes
        owner: admin
        group: admin
        remote_src: yes
When I try to log in with ssh admin@xxx.xxx.xxx.xxx, the .profile does not get loaded correctly; only after typing the bash command does the shell behave properly.
I triggered the playbook as follows:
ansible-playbook playbook.yaml -i ./hosts -u root --ask-vault-pass
What am I doing wrong?
It appears, based on your "after typing bash" statement, that you are expecting the user's shell to be /bin/bash but it is not; if that's your question, then you need to update the user: task to specify the shell you want:
- name: Add admin user
  user:
    name: admin
    shell: /bin/bash
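For context, a sketch of the same change merged into the original task, with the other parameters carried over from the playbook above; since the admin user may already exist at this point, re-running the task simply updates the login shell:

- name: Add admin user
  user:
    name: admin
    comment: administrator
    shell: /bin/bash
    groups: sudo, docker
    password: "{{ adminpw | password_hash('sha512') }}"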

Ansible - List of Linux security updates needed on remote servers

I want to run a playbook that accurately reports whether one of the remote servers requires security updates. Ansible server = CentOS 7, remote servers = Amazon Linux.
On startup, a remote server would highlight something like the following:
https://aws.amazon.com/amazon-linux-2/
8 package(s) needed for security, out of 46 available
Run "sudo yum update" to apply all updates.
To confirm this, I put a playbook together (below), cobbled from many sources, that performs that function to a degree. It does suggest whether the remote server requires security updates, but it doesn't say what those updates are.
- name: check if security updates are needed
  hosts: elk
  tasks:
    - name: check yum security updates
      shell: "yum updateinfo list all security"
      changed_when: false
      register: security_update
    - debug: msg="Security update required"
      when: security_update.stdout != "0"
    - name: list some packages
      yum: list=available
Then, when I run my updates install playbook:
- hosts: elk
  remote_user: ansadm
  become: yes
  become_method: sudo
  tasks:
    - name: Move repos from backup to yum.repos.d
      shell: mv -f /backup/* /etc/yum.repos.d/
      register: shell_result
      failed_when: '"No such file or directory" in shell_result.stderr_lines'
    - name: Remove redhat.repo
      shell: rm -f /etc/yum.repos.d/redhat.repo
      register: shell_result
      failed_when: '"No such file or directory" in shell_result.stderr_lines'
    - name: add line to yum.conf
      lineinfile:
        dest: /etc/yum.conf
        line: exclude=kernel* redhat-release*
        state: present
        create: yes
    - name: yum clean
      shell: yum make-cache
      register: shell_result
      failed_when: '"There are no enabled repos" in shell_result.stderr_lines'
    - name: install all security patches
      yum:
        name: '*'
        state: latest
        security: yes
        bugfix: yes
        skip_broken: yes
After the install, you get something similar to the output below (btw, these outputs are from different servers):
https://aws.amazon.com/amazon-linux-2/
No packages needed for security; 37 packages available
Run "sudo yum update" to apply all updates.
But if I run my list-security-updates playbook again, it gives a false positive: it still reports that security updates are needed.
PLAY [check if security updates are needed] ************************************
TASK [Gathering Facts] *********************************************************
ok: [10.10.10.192]
TASK [check yum security updates] **********************************************
ok: [10.10.10.192]
TASK [debug] *******************************************************************
ok: [10.10.10.192] => {
"msg": "Security update required"
}
TASK [list some packages] ******************************************************
ok: [10.10.10.192]
PLAY RECAP *********************************************************************
10.10.10.192 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[ansadm#ansible playbooks]$
What do I need to omit/include in the playbook to reflect the changes after the updates are installed?
Thanks in advance :)
So I ran your yum command locally on my system and I get the following.
45) local-user@server:/home/local-user> yum updateinfo list all security
Loaded plugins: ulninfo
local_repo | 2.9 kB 00:00:00
updateinfo list done
Now granted, our systems may have different output here, but it will serve the purpose of my explanation. The output of the entire command is saved to your register, but your when conditional says to run when the output of that command is not EXACTLY "0".
So unless you pare that response down with some awk's or sed's so that it returns nothing more than literally the character "0", that debug task is always going to fire.
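A sketch of one way to make the check reflect reality, assuming the package lines of yum updateinfo list security contain a severity field such as important/Sec. (as they do on the systems I have seen; adjust the grep pattern to your actual output): count the package lines instead of comparing the whole output with "0".

- name: check yum security updates
  # grep -c exits non-zero when the count is 0, hence the || true
  shell: "yum -q updateinfo list security | grep -c '/Sec' || true"
  changed_when: false
  register: security_update

- name: report pending security updates
  debug:
    msg: "{{ security_update.stdout }} security update(s) required"
  when: security_update.stdout | int > 0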

Remove/resolve Travis CI weird messages when deploying to npm

When the package is being deployed to the npm registry, some weird additional messages appear in the Travis CI console:
Standard messages:
Deploying application
NPM API key format changed recently. If your deployment fails, check your API key in ~/.npmrc.
http://docs.travis-ci.com/user/deployment/npm/
~/.npmrc size: 48
+ my-package@1.0.0
Then weird messages follow:
Already up-to-date!
Not currently on any branch.
nothing to commit, working tree clean
Dropped refs/stash@{0} (bff3fdd...1c6d37a)
.travis.yml file:
dist: trusty
sudo: required
env:
  - CXX="g++-4.8"
addons:
  apt:
    sources:
      - ubuntu-toolchain-r-test
    packages:
      - g++-4.8
language: node_js
node_js:
  - 5
  - 6
  - 7
deploy:
  provider: npm
  email: me@me.xx
  api_key:
    secure: ja...w=
  on:
    tags: true
    branch: master
How do I get rid of these messages, and why are they there?
Cheers!
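Those git lines (Not currently on any branch, Dropped refs/stash@{0}) look like the working-copy cleanup Travis performs before deploying: it stashes changes produced by the build, checks out the commit being deployed, and drops the stash afterwards. If that is what you are seeing, the documented skip_cleanup option suppresses it; a sketch of the relevant part of .travis.yml:

deploy:
  provider: npm
  email: me@me.xx
  api_key:
    secure: ja...w=
  skip_cleanup: true   # keep the working tree as built instead of stashing and re-checking out
  on:
    tags: true
    branch: master

Note that with skip_cleanup enabled, whatever the build wrote into the working directory is published as-is, so make sure .npmignore or the files field in package.json excludes anything you do not want in the package.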
