I'm new to Ansible. I'm setting up a new DigitalOcean instance and configuring a new user on it. I have a playbook for this, and everything looks fine when I run it, but when I tried to check whether the user's password works, it didn't.
I ran
sudo apt-get update
to check whether the password works. It didn't.
---
- name: Configure Server
  hosts: sample_server
  gather_facts: no
  remote_user: root
  vars:
    username: sample_user
    password: sample_password
  tasks:
    - name: Update apt cache
      apt: update_cache=yes
    - name: Safe aptitude upgrade
      apt: upgrade=safe
      async: 600
      poll: 5
    - name: Add my user
      user:
        name: "{{ username }}"
        password: "{{ password }}"
        update_password: always
        shell: /bin/bash
        groups: sudo
        append: yes
        generate_ssh_key: yes
        ssh_key_bits: 2048
        state: present
    - name: Add my workstation user's public key to the new user
      authorized_key:
        user: "{{ username }}"
        key: "{{ lookup('file', 'certificates/id_rsa.pub') }}"
        state: present
    - name: Change SSH port
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: "^Port"
        line: "Port 30000"
        state: present
      # notify:
      #   - Restart SSH
    - name: Remove root SSH access
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: "^PermitRootLogin"
        line: "PermitRootLogin no"
        state: present
      # notify:
      #   - Restart SSH
    - name: Remove password SSH access
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: "^PasswordAuthentication"
        line: "PasswordAuthentication no"
        state: present
      # notify:
      #   - Restart SSH
    - name: Reboot the server
      service: name=ssh state=restarted
  handlers:
    - name: Restart SSH
      service: name=ssh state=restarted
Any ideas? Thanks.
The Ansible user module takes passwords as crypted values, and Jinja2 filters can handle generating the encrypted password for you. You can modify your user creation task like this:
password: "{{ password | password_hash('sha512') }}"
Hope that helps.
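For context, a minimal sketch of the complete task with the filter applied (variable names mirror the question's playbook, trimmed to the essentials):
- name: Add my user
  user:
    name: "{{ username }}"
    # password_hash('sha512') turns the plain-text variable into the
    # crypted value the user module expects in /etc/shadow
    password: "{{ password | password_hash('sha512') }}"
    update_password: always
    shell: /bin/bash
    groups: sudo
    append: yes
    state: present
With the plain-text value, the module writes the literal string into /etc/shadow; since that string is not a valid hash, the login password never matches.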
I've been trying to remove some Java packages and reinstall them to work around a bug on Rocky Linux, but I'm having trouble doing so with the dnf module.
My problem might come from using the shell command "rpm -qa | grep java" to gather the packages I need to reinstall, but I just can't tell.
Here's my code:
---
- name: Rocky | Java reinstall to prevent bugs
  hosts: "fakeHost"
  gather_facts: false
  become: true
  tasks:
    # Ping the server
    - name: Test reachability
      ping:
    # Check if the path exists
    - name: Check java file path
      stat:
        path: /usr/lib/jvm/java
      register: dir_name
    # Report if the dir exists
    - name: Report if the dir exists
      debug:
        msg: "The directory exists"
      when:
        - dir_name.stat.exists
    # Load up all the java files that the machine has
    - name: grep all java file
      shell: "rpm -qa | grep java"
      args:
        warn: false # prevent false change
      register: java_files
      when:
        - dir_name.stat.exists
    # Display all the java files of the machine
    - name: Show all java java_files
      debug:
        msg: "{{ item }}"
      loop:
        - "{{ java_files.stdout_lines }}"
      when:
        - dir_name.stat.exists
    # Uninstall each java file with the DNF command
    - name: Uninstall all the java files
      dnf:
        name: "{{ item }}"
        state: absent
        autoremove: no
      loop:
        - "{{ java_files.stdout_lines }}"
      when:
        - dir_name.stat.exists
    # Install each java file with the DNF command
    - name: Install all the java files
      dnf:
        name: "{{ item }}"
        state: present
      loop:
        - "{{ java_files.stdout_lines }}"
      when:
        - dir_name.stat.exists
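A side note on the loops, as an observation beyond what the question asks: writing loop: with a single list item wraps stdout_lines in another list, so item becomes the entire list rather than one package per iteration. A sketch of the flat form for the uninstall task:
- name: Uninstall all the java packages
  dnf:
    name: "{{ item }}"
    state: absent
    autoremove: no
  # stdout_lines is already a list, so loop over it directly;
  # item is then a single package name
  loop: "{{ java_files.stdout_lines }}"
  when:
    - dir_name.stat.exists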
We have a playbook to create a Linux VM in Microsoft Azure, followed by a role that does post-install work such as installing application packages. The playbook runs fine and deploys the VM in Azure; however, the second role, which configures the VM after deployment, never runs on the VM, because we are not able to pass the VM's IP/hostname to it.
What we want to achieve is to deploy the VM using an Ansible playbook/role, then run the post-install roles against the machine to handle the business-specific tasks.
Path:
Below is the directory where all the Ansible plays and roles live; the roles folder contains all the post-install tasks. I believe there is a better way than keeping test-creatVM-sur.yml in there alongside the roles, but as a learner I'm struggling a bit.
$ ls -l /home/azure1/ansible_Dir
-rw-r-----. 1 azure1 hal 1770 Sep 17 17:03 test-creatVM-sur.yml
-rw-r-----. 1 azure1 hal 320 Sep 17 22:30 licence-test.yml
drwxr-x---. 6 azure1 hal 4096 Sep 17 21:46 roles
My main Play file:
$ cat licence-test.yml
---
- name: create vm
  hosts: localhost
  connection: local
  become: yes
  become_method: sudo
  become_user: root
  vars:
    Res_Group: "some_value"
    LOCATION: "some_value"
    VNET: "some_value"
    IMAGE_ID: "some_value"
    SUBNET: "some_value"
    KEYDATA: "some_value"
    DISK_SIZE: 100
    DISK_TYPE: Premium_LRS
  tasks:
    - name: include task
      include_tasks:
        file: creattest_VM.yml # <-- this portion works fine
- hosts: "{{ VM_NAME }}" # <-- this does not work; it cannot fetch the newly created VM name
  become: yes
  become_method: sudo
  become_user: root
  roles:
    - azure_license
...
The play (test-creatVM-sur.yml) that creates the VM in Azure is below:
---
- name: Create Network Security Group that allows SSH
  azure_rm_securitygroup:
    resource_group: "{{ Res_Group }}"
    location: "{{ LOCATION }}"
    name: "{{ VM_NAME }}-nsg"
    rules:
      - name: SSH
        protocol: Tcp
        destination_port_range: 22
        access: Allow
        priority: 100
        direction: Inbound
- name: Create virtual network interface card
  azure_rm_networkinterface:
    resource_group: "{{ Res_Group }}"
    location: "{{ LOCATION }}"
    name: "{{ VM_NAME }}-nic1"
    subnet: "{{ SUBNET }}"
    virtual_network: "{{ VNET }}"
    security_group: "{{ VM_NAME }}-nsg"
    enable_accelerated_networking: True
    public_ip: no
    state: present
- name: Create VM
  azure_rm_virtualmachine:
    resource_group: "{{ Res_Group }}"
    location: "{{ LOCATION }}"
    name: "{{ VM_NAME }}"
    vm_size: Standard_D4s_v3
    admin_username: automation
    ssh_password_enabled: false
    ssh_public_keys:
      - path: /home/automation/.ssh/authorized_keys
        key_data: "{{ KEYDATA }}"
    network_interfaces: "{{ VM_NAME }}-nic1"
    os_disk_name: "{{ VM_NAME }}-osdisk"
    managed_disk_type: "{{ DISK_TYPE }}"
    os_disk_caching: ReadWrite
    os_type: Linux
    image:
      id: "{{ IMAGE_ID }}"
      publisher: redhat
    plan:
      name: rhel-lvm78
      product: rhel-byos
      publisher: redhat
- name: Add disk to VM
  azure_rm_manageddisk:
    name: "{{ VM_NAME }}-datadisk01"
    location: "{{ LOCATION }}"
    resource_group: "{{ Res_Group }}"
    disk_size_gb: "{{ DISK_SIZE }}"
    managed_by: "{{ VM_NAME }}"
- name: "wait for 3 Min"
  pause:
    minutes: 3
...
Edit:
I managed to move the vars into a separate name_vars.yml file loaded with include_vars.
---
- name: create vm
  hosts: localhost
  connection: local
  become: yes
  become_method: sudo
  become_user: root
  tasks:
    - include_vars: name_vars.yml
    - include_tasks: creattest_VM.yml
- name: Apply license hardening stuff
  hosts: "{{ VM_NAME }}"
  become: yes
  become_method: sudo
  become_user: root
  roles:
    - azure_license
...
It works after some dirty hacking, but that doesn't look proper: I am creating an inventory file named test with VM_NAME in it, and also passing VM_NAME as an extra variable with -e, as below.
$ ansible-playbook -i test -e VM_NAME=mylabhost01.hal.com licence-test.yml -k -u my_user_id
Any help will be much appreciated.
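One common pattern for this (a sketch, not from the original thread) is to register the new VM in the in-memory inventory with add_host at the end of creattest_VM.yml, so the follow-up play can target it without a static inventory file or the -e hack. The exact return fields of the *_info module below are from memory and may vary by azure collection version, so treat them as assumptions:
- name: Look up the new VM's NIC to get its private IP
  azure_rm_networkinterface_info:
    resource_group: "{{ Res_Group }}"
    name: "{{ VM_NAME }}-nic1"
  register: nic_info

- name: Add the new VM to the in-memory inventory
  add_host:
    name: "{{ VM_NAME }}"
    # assumed field path; check your collection's return docs
    ansible_host: "{{ nic_info.networkinterfaces[0].ip_configurations[0].private_ip_address }}"
    groups: new_vms
The second play can then use hosts: new_vms instead of hosts: "{{ VM_NAME }}".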
I have run a playbook with the following content on a host:
---
- name: Test
  hosts: debian
  vars_files:
    - "./secret.vault.yaml"
  tasks: # Roles, modules, and any variables
    - name: Install aptitude using apt
      apt: name=aptitude state=latest update_cache=yes force_apt_get=yes
    - name: Install required system packages
      apt: name={{ item }} state=latest update_cache=yes
      loop:
        [
          "apt-transport-https",
          "ca-certificates",
          "curl",
          "software-properties-common",
          "python3-pip",
          "virtualenv",
          "python3-setuptools",
        ]
    - name: Install snap
      apt:
        update_cache: yes
        name: snapd
    - name: Install git
      apt:
        update_cache: yes
        name: git
    - name: Install certbot
      apt:
        update_cache: yes
        name: certbot
    - name: Install htop
      apt:
        update_cache: yes
        name: htop
    - name: Ensure group "sudo" exists
      group:
        name: sudo
        state: present
    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/debian/gpg
        state: present
    - name: Add Docker Repository
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/debian buster stable
        state: present
    - name: Index new repo into the cache
      apt:
        name: "*"
        state: latest
        update_cache: yes
        force_apt_get: yes
    - name: Update apt and install docker-ce
      apt:
        update_cache: yes
        name: docker-ce
        state: latest
    - name: Ensure group "docker" exists
      group:
        name: docker
        state: present
    - name: Add admin user
      user:
        name: admin
        comment: administrator
        groups: sudo, docker
        password: "{{ adminpw | password_hash('sha512') }}"
    - name: Ensure docker-compose is installed and available
      get_url:
        url: https://github.com/docker/compose/releases/download/1.25.4/docker-compose-{{ ansible_system }}-{{ ansible_userspace_architecture }}
        dest: /usr/local/bin/docker-compose
        mode: "u=rwx,g=rx,o=rx"
    - name: Copy SSH file
      copy:
        src: ~/.ssh
        dest: /home/admin/
        force: yes
        owner: admin
        group: admin
        remote_src: yes
When I try to log in with ssh admin@xxx.xxx.xxx.xxx, the .profile does not get loaded correctly; only after typing the bash command does the shell display properly.
I triggered the playbook as follows:
ansible-playbook playbook.yaml -i ./hosts -u root --ask-vault-pass
What am I doing wrong?
Based on your "after typing bash" statement, it appears you expect the user's shell to be /bin/bash, but it is not. If that's your question, you need to update the user: task to specify the shell you want:
- name: Add admin user
  user:
    name: admin
    shell: /bin/bash
I am trying to use the ansible-pull method to run a playbook with extra vars supplied at runtime.
Here is how I need to run my playbook with vars:
ansible-playbook decode.yml --extra-vars "host_name=xxxxxxx bind_password=xxxxxxxxx swap_disk=xxxxx"
The bind_password holds the base64-encoded value of the admin password, and I have tried writing the playbook below for it.
I am able to debug every value and get it correctly, but after decoding the password I am not getting the exact value, and I'm not sure whether I am doing it correctly or not.
---
- name: Install and configure AD authentication
  hosts: test
  become: yes
  become_user: root
  vars:
    hostname: "{{ host_name }}"
    diskname: "{{ swap_disk }}"
    password: "{{ bind_password }}"
  tasks:
    - name: Ansible prompt example.
      debug:
        msg: "{{ bind_password }}"
    - name: Ansible prompt example.
      debug:
        msg: "{{ host_name }}"
    - name: Ansible prompt example.
      debug:
        msg: "{{ swap_disk }}"
    - name: Setup the hostname
      command: hostnamectl set-hostname --static "{{ host_name }}"
    - name: decode passwd
      command: export passwd=$(echo "{{ bind_password }}" | base64 --decode)
    - name: print decoded password
      shell: echo "$passwd"
      register: mypasswd
    - name: debug decode value
      debug:
        msg: "{{ mypasswd }}"
But we can decode a base64 value on the command line with:
echo "encodedvalue" | base64 --decode
How can I run this playbook with ansible-pull as well?
Later I want to convert this playbook into roles (role1) and run it the same way: how can we run a role-based playbook using ansible-pull?
The problem is not b64decoding your value. Your command should not cause any problems and probably gives the expected result if you type it manually in your terminal.
But Ansible creates an SSH connection for each task, so each shell/command task starts in a new session. Exporting an env var in one command task and then using that env var in the next shell task will therefore never work.
Moreover, why handle all this with so many command/shell tasks when you have all the needed tools directly in Ansible? Here is a possible rewrite of your last three tasks that fits into a single one.
- name: debug decoded value of bind_password
  debug:
    msg: "{{ bind_password | b64decode }}"
I am trying to update permissions on all the shell scripts in a particular directory on remote servers using Ansible, but it gives me an error:
- name: update permissions
  file: dest=/home/goldy/scripts/*.sh mode=a+x
This is the error I am getting:
fatal: [machineA]: FAILED! => {"changed": false, "msg": "file (/home/goldy/scripts/*.sh) is absent, cannot continue", "path": "/home/goldy/scripts/*.sh", "state": "absent"}
to retry, use: --limit @/var/lib/jenkins/workspace/copy/copy.retry
What am I doing wrong here?
You should run a task with the find module to collect all the .sh files in that directory and register the result in a variable, then run a second task with the file module to update the permissions on each file that was found.
Check this sample playbook:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: parse /tmp directory
      find:
        paths: /tmp
        patterns: '*.sh'
      register: list_of_files

    - debug:
        var: item.path
      with_items: "{{ list_of_files.files }}"

    - name: change permissions
      file:
        path: "{{ item.path }}"
        mode: a+x
      with_items: "{{ list_of_files.files }}"