Why does the profile not get loaded properly? - linux

I have run a playbook with the following content on the host:
---
- name: Test
  hosts: debian
  vars_files:
    - "./secret.vault.yaml"
  tasks: # Roles, modules, and any variables
    - name: Install aptitude using apt
      apt: name=aptitude state=latest update_cache=yes force_apt_get=yes
    - name: Install required system packages
      apt: name={{ item }} state=latest update_cache=yes
      loop:
        [
          "apt-transport-https",
          "ca-certificates",
          "curl",
          "software-properties-common",
          "python3-pip",
          "virtualenv",
          "python3-setuptools",
        ]
    - name: Install snap
      apt:
        update_cache: yes
        name: snapd
    - name: Install git
      apt:
        update_cache: yes
        name: git
    - name: Install certbot
      apt:
        update_cache: yes
        name: certbot
    - name: Install htop
      apt:
        update_cache: yes
        name: htop
    - name: Ensure group "sudo" exists
      group:
        name: sudo
        state: present
    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/debian/gpg
        state: present
    - name: Add Docker Repository
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/debian buster stable
        state: present
    - name: Index new repo into the cache
      apt:
        name: "*"
        state: latest
        update_cache: yes
        force_apt_get: yes
    - name: Update apt and install docker-ce
      apt:
        update_cache: yes
        name: docker-ce
        state: latest
    - name: Ensure group "docker" exists
      group:
        name: docker
        state: present
    - name: Add admin user
      user:
        name: admin
        comment: administrator
        groups: sudo, docker
        password: "{{ adminpw | password_hash('sha512') }}"
    - name: Ensure docker-compose is installed and available
      get_url:
        url: https://github.com/docker/compose/releases/download/1.25.4/docker-compose-{{ ansible_system }}-{{ ansible_userspace_architecture }}
        dest: /usr/local/bin/docker-compose
        mode: "u=rwx,g=rx,o=rx"
    - name: Copy SSH file
      copy:
        src: ~/.ssh
        dest: /home/admin/
        force: yes
        owner: admin
        group: admin
        remote_src: yes
When I try to log in with ssh admin@xxx.xxx.xxx.xxx, the .profile does not get loaded correctly; even after typing the bash command, the shell still does not load it properly.
I triggered the playbook as follows:
ansible-playbook playbook.yaml -i ./hosts -u root --ask-vault-pass
What am I doing wrong?

It appears, based on your "after typing bash" statement, that you are expecting the user's shell to be /bin/bash, but it is not. If that is your question, then you need to update the user: task to specify the shell you want:
- name: Add admin user
  user:
    name: admin
    shell: /bin/bash
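For completeness, a hedged sketch of the full task from the question with the shell added; append: yes is an assumption (so reruns do not strip group memberships), everything else is taken from the original playbook:
- name: Add admin user
  user:
    name: admin
    comment: administrator
    shell: /bin/bash          # the fix: give the user a bash login shell
    groups: sudo, docker
    append: yes               # assumption: keep existing group memberships on reruns
    password: "{{ adminpw | password_hash('sha512') }}"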

Related

apt cache update keeps failing in Ansible playbook

This is the content of my Ansible playbook; I am trying to install PHP 8.1 using the ppa:ondrej/php repo. I keep getting this error:
fatal: [debian-machine]: FAILED! => {"changed": false, "msg": "apt cache update failed"}
The playbook is meant to deploy a Laravel app, but I need to install PHP 8.1 first before moving on to the app deployment. This is my playbook below.
---
- name: A playbook for laravel deployment
  hosts: laravelserver
  become: yes
  remote_user: root
  vars:
    password: "*******"
    ip: "xxx.xxx.xxx.xxx"
  tasks:
    - name: Update and Upgrade apt repository
      apt:
        upgrade: yes
        update_cache: yes
        cache_valid_time: 86400
    - name: Installation of Apache Server
      apt:
        name: apache2
        state: present
    - name: Enabling Apache Service
      service:
        name: apache2
        enabled: yes
    - name: Installing PHP dependencies
      apt:
        name:
          - ca-certificates
          - apt-transport-https
          - software-properties-common
        state: present
    - name: Adding Deb Sury Repo
      apt_repository:
        repo: ppa:ondrej/php
        state: present
    - name: Update and Upgrade apt repository
      apt:
        update_cache: yes
    - name: Installation of PHP8.1 and its dependencies
      apt:
        name:
          - php8.1
          - php8.1-mysql
          - libapache2-mod-php
          - php8.1-imap
          - php8.1-ldap
          - php8.1-xml
          - php8.1-fpm
          - php8.1-curl
          - php8.1-mbstring
          - php8.1-zip
        state: present
Please, how can I stop this error from appearing when I run the playbook?
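To see what is actually failing behind the generic "apt cache update failed" message, one option (a diagnostic sketch, not a fix) is to register the result of the cache update and print it:
- name: Update and Upgrade apt repository
  apt:
    update_cache: yes
  register: apt_update_result
  ignore_errors: yes          # let the play continue so the detail can be printed

- name: Show the full apt error detail
  debug:
    var: apt_update_result
Re-running the playbook with -vvv prints the same module output without any playbook changes.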

pm2 unable to register service on port when started with ansible

I have the following Ansible playbook:
- name: "update apt package."
become: yes
apt:
update_cache: yes
- name: "update packages"
become: yes
apt:
upgrade: yes
- name: "Remove dependencies that are no longer required"
become: yes
apt:
autoremove: yes
- name: "Install Dependencies"
become: yes
apt:
name: ["nodejs", "npm"]
state: latest
update_cache: yes
- name: Install pm2
become: yes
npm:
name: pm2
global: yes
production: yes
state: present
- name: Creates Directory
become: yes
file:
path: ~/backend
state: directory
mode: 0755
- name: Copy backend dist files web server
become: yes
copy:
src: ~/project/artifact.tar.gz
dest: ~/backend/artifact.tar.gz
- name: Extract backend files
become: yes
shell: |
cd ~/backend
tar -vxf artifact.tar.gz
#pwd
- name: Executing node
become: true
become_method: sudo
become_user: root
shell: |
cd ~/backend
npm install
npm run build
pm2 stop default
pm2 start npm --name "backend" -- run start --port 3030
pm2 status
cd ~/backend/dist
pm2 start main.js --update-env
pm2 status
The issue is with the task Executing node. If I log onto the remote server via SSH as the root user and run each of the commands manually, pm2 starts the services as expected, which I can also confirm from pm2 status. I can also verify that the service is bound to port 3030 and listening on it (as expected).
But when I try doing the same through the Ansible playbook, pm2 does start the service, yet for some reason it does not bind it to port 3030; I can see that nothing is listening on 3030.
Can someone please help?
Major edit 1
I also tried breaking the entire task up into smaller ones and running them as individual commands through the command module, but the results are still the same.
Updated roles playbook:
- name: "update apt package."
become: yes
apt:
update_cache: yes
- name: "update packages"
become: yes
apt:
upgrade: yes
- name: "Remove dependencies that are no longer required"
become: yes
apt:
autoremove: yes
- name: "Install Dependencies"
become: yes
apt:
name: ["nodejs", "npm"]
state: latest
update_cache: yes
- name: Install pm2
become: yes
npm:
name: pm2
global: yes
production: yes
state: present
- name: Creates Directory
become: yes
file:
path: ~/backend
state: directory
mode: 0755
- name: Copy backend dist files web server
become: yes
copy:
src: ~/project/artifact.tar.gz
dest: ~/backend/artifact.tar.gz
#dest: /home/ubuntu/backend/artifact.tar.gz
- name: Extract backend files
become: yes
shell: |
cd ~/backend
tar -vxf artifact.tar.gz
- name: NPM install
become: true
command: npm install
args:
chdir: /root/backend
register: shell_output
- name: NPM build
become: true
command: npm run build
args:
chdir: /root/backend
register: shell_output
- name: PM2 start backend
become: true
command: pm2 start npm --name "backend" -- run start
args:
chdir: /root/backend
register: shell_output
- name: PM2 check backend status in pm2
become: true
command: pm2 status
args:
chdir: /root/backend
register: shell_output
- name: PM2 start main
become: true
command: pm2 start main.js --update-env
args:
chdir: /root/backend/dist
register: shell_output
- debug: var=shell_output
There is no change in the result even with the above, while running the same commands manually, either one by one in the bash shell or through a .sh script (as below), works just fine.
#!/bin/bash
cd ~/backend
pm2 start npm --name "backend" -- run start
pm2 status
cd dist
pm2 start main.js --update-env
pm2 status
pm2 save
Sometimes even the manual steps do not get the port up and listening while the service is still up, but this happens rarely and I am unable to reproduce it consistently.
My primary question is why this is not working; what am I doing wrong? My assumption is that, conceptually, it should work.
My secondary question is: how can I make this work?
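One way to make the missing binding visible from within the play itself is a verification task right after the pm2 start step. This is only a diagnostic sketch; the port and localhost address are taken from the question:
- name: Verify that something is listening on port 3030
  become: true
  wait_for:
    host: 127.0.0.1
    port: 3030
    timeout: 30        # fail the play here if nothing binds to the port in time
  register: port_3030_check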

How can I reinstall java with ansible using the DNF command

I've been trying to remove some Java packages and reinstall them to prevent a bug on Rocky Linux, but I am having trouble doing so with the DNF module.
My problem might come from using the shell command "rpm -qa | grep java" to gather the packages that I need to reinstall, but I just can't tell.
Here's my code:
---
- name: Rocky | Java reinstall to prevent bugs
  hosts: "fakeHost"
  gather_facts: false
  become: true
  tasks:
    #Ping the server
    - name: Test reachability
      ping:
    #Check if the path exists
    - name: Check java file path
      stat:
        path: /usr/lib/jvm/java
      register: dir_name
    #Report if the dir exists
    - name: Report if the dir exists
      debug:
        msg: "The directory exists"
      when:
        - dir_name.stat.exists
    #Load up all the java packages that the machine has
    - name: grep all java file
      shell: "rpm -qa | grep java"
      args:
        warn: false # prevent false change
      register: java_files
      when:
        - dir_name.stat.exists
    #Display all the java files of the machine
    - name: Show all java java_files
      debug:
        msg: "{{ item }}"
      loop:
        - "{{ java_files.stdout_lines }}"
      when:
        - dir_name.stat.exists
    #Uninstall each java file with the DNF command
    - name: Uninstall all the java files
      dnf:
        name: "{{ item }}"
        state: absent
        autoremove: no
      loop:
        - "{{ java_files.stdout_lines }}"
      when:
        - dir_name.stat.exists
    #Install each java file with the DNF command
    - name: Install all the java files
      dnf:
        name: "{{ item }}"
        state: present
      loop:
        - "{{ java_files.stdout_lines }}"
      when:
        - dir_name.stat.exists
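As a side note, the dnf module accepts a list for name, so the registered package names could also be passed in a single call instead of a loop. A minimal sketch, reusing the variables from the playbook above:
- name: Reinstall all the java packages in one transaction
  dnf:
    name: "{{ java_files.stdout_lines }}"   # the whole list from rpm -qa | grep java
    state: present
  when:
    - dir_name.stat.exists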

Ansible password setup in user module. It didn't set properly

I'm new to Ansible. I'm setting up my new instance on DigitalOcean to configure a new user. I have the playbook for the setup, and everything looks okay when I run it, but when I tried to check whether my password works, it didn't.
I ran
sudo apt-get update
to see if the password was working. It didn't work.
---
- name: Configure Server
  hosts: sample_server
  gather_facts: no
  remote_user: root
  vars:
    username: sample_user
    password: sample_password
  tasks:
    - name: Update apt cache
      apt: update_cache=yes
    - name: Safe aptitude upgrade
      apt: upgrade=safe
      async: 600
      poll: 5
    - name: Add my user
      user:
        name: "{{ username }}"
        password: "{{ password }}"
        update_password: always
        shell: /bin/bash
        groups: sudo
        append: yes
        generate_ssh_key: yes
        ssh_key_bits: 2048
        state: present
    - name: Add my workstation user's public key to the new user
      authorized_key:
        user: "{{ username }}"
        key: "{{ lookup('file', 'certificates/id_rsa.pub') }}"
        state: present
    - name: Change SSH port
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: "^Port"
        line: "Port 30000"
        state: present
      # notify:
      #   - Restart SSH
    - name: Remove root SSH access
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: "^PermitRootLogin"
        line: "PermitRootLogin no"
        state: present
      # notify:
      #   - Restart SSH
    - name: Remove password SSH access
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: "^PasswordAuthentication"
        line: "PasswordAuthentication no"
        state: present
      # notify:
      #   - Restart SSH
    - name: Reboot the server
      service: name=ssh state=restarted
  handlers:
    - name: Restart SSH
      service: name=ssh state=restarted
Any ideas on this? Thanks.
The Ansible user module takes passwords as crypted values, and Jinja2 filters can handle generating the encrypted password. You can modify your user creation task like this:
password: "{{ password | password_hash('sha512') }}"
Hope that helps.

Ansible file copy with sudo fails after upgrading to 1.9

In a playbook, I copy files using sudo. It used to work, until we migrated to Ansible 1.9; since then, it fails with the following error message:
"ssh connection closed waiting for sudo password prompt"
I provide the ssh and sudo passwords (through the Ansible prompt), and all the other commands running through sudo are successful (only the file copy and template fail).
My command is:
ansible-playbook -k --ask-become-pass --limit=testhost -C -D playbooks/debug.yml
and the playbook contains:
- hosts: designsync
  gather_facts: yes
  tasks:
    - name: Make sure the syncmgr home folder exists
      action: file path=/home/syncmgr owner=syncmgr group=syncmgr mode=0755 state=directory
      sudo: yes
      sudo_user: syncmgr
    - name: Copy .cshrc file
      action: copy src=roles/designsync/files/syncmgr.cshrc dest=/home/syncmgr/.cshrc owner=syncmgr group=syncmgr mode=0755
      sudo: yes
      sudo_user: syncmgr
Is this a bug or did I miss something?
François.
Your playbook should look like:
- hosts: designsync
  gather_facts: yes
  tasks:
    - name: Make sure the syncmgr home folder exists
      sudo: yes
      sudo_user: syncmgr
      file:
        path: "/home/syncmgr"
        owner: syncmgr
        group: syncmgr
        mode: 0755
        state: directory
    - name: Copy .cshrc file
      sudo: yes
      sudo_user: syncmgr
      copy:
        src: "roles/designsync/files/syncmgr.cshrc"
        dest: "/home/syncmgr/.cshrc"
        owner: syncmgr
        group: syncmgr
        mode: 0755
Depending on the exact version of Ansible you're using, there may be a bug with sudo_user (I have experienced it myself).
Try changing your playbooks from "sudo_user" to "remote_user".
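As an aside, Ansible 1.9 also introduced the become keywords as the successors of sudo/sudo_user; a sketch of the copy task using them, with the same values as above:
- name: Copy .cshrc file
  become: yes               # replaces sudo: yes
  become_user: syncmgr      # replaces sudo_user: syncmgr
  copy:
    src: "roles/designsync/files/syncmgr.cshrc"
    dest: "/home/syncmgr/.cshrc"
    owner: syncmgr
    group: syncmgr
    mode: 0755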
