Extend Volume Group using Ansible - linux

I have been trying to extend the VG via Ansible, passing the pvname through a variable, but I really don't understand why it is not working.
Below you can see my code.
Variable file:
new_disk:
  - diskname: /dev/sdc

pvname: /dev/sdb1, dev/sdc1
vgname: datavg

lvm_settings:
  - lv_name: datalv
    lv_size: +100%FREE
    fs_name: ansible_fs_test
    lvpath: /dev/mapper/datavg-datalv
    filesystem_type: ext4
tasks file:
- include_vars: "{{ vm_name }}.yml"

- name: First disk partition settings
  block:
    - name: Create a new primary partition
      community.general.parted:
        device: "{{ item.diskname }}"
        number: 1
        state: present
      with_items: "{{ new_disk }}"
      register: partition_status
  rescue:
    - name: Debug messages to check the error
      debug:
        msg: "{{ partition_status }}"

- name: Extending the Volume Group
  community.general.lvg:
    vg: "{{ vgname }}"
    pvs: "{{ pvname }}"
    pvresize: yes
Below, you can see the error message:
TASK [resize_fs_linux : Extending the Volume Group] *************************************************
fatal: [10.1.33.225]: FAILED! => {"changed": false, "msg": "Device /home/icc-admin/ dev/sdc1 not found."}
Do you have any idea why it is not working?
I really appreciate your help and time
Best Regards,
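Side note: judging by the error message, the second PV is written as "dev/sdc1" without a leading slash, so LVM resolves it relative to the current working directory (/home/icc-admin/). Below is a sketch of the physical-volume variables with the slash added and pvname written as a YAML list rather than a comma-separated string; this flat layout is an assumption on my part, not the original file:

vgname: datavg
pvname:
  - /dev/sdb1
  - /dev/sdc1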

For now, it works that way:
Variable file
diskname:
  - /dev/sdb
  - /dev/sdc

disks_settings:
  - vgname: datavg
    pvname:
      - /dev/sdb1
      - /dev/sdc1

lvm_settings:
  - vgname: datavg
    lv_name: datalv
    lv_size: +100%FREE
    fs_name: ansible_fs_test
    lvpath: /dev/mapper/datavg-datalv
    filesystem_type: ext4
Tasks file:
---
# tasks file for resize_fs_linux
- include_vars: "{{ vm_name }}.yml"

- name: First disk partition settings
  block:
    - name: Create a new primary partition
      community.general.parted:
        device: "{{ item }}"
        number: 1
        state: present
      with_items: "{{ diskname }}"
      register: partition_status
      run_once: true
  rescue:
    - name: Debug messages to check the error
      debug:
        msg: "{{ partition_status }}"

- name: Extending the Volume Group
  community.general.lvg:
    vg: "{{ item.vgname }}"
    pvs: "{{ item.pvname }}"
    pvresize: yes
  with_items: "{{ disks_settings }}"

- name: Increasing the filesystems
  community.general.lvol:
    vg: "{{ vgname }}"
    lv: "{{ item.lv_name }}"
    size: "{{ item.lv_size }}"
    resizefs: true
  with_items: "{{ lvm_settings }}"

Related

FAILED! => {"msg": "'item' is undefined"}

I am trying to create a partition and mount point on Azure disks, which get attached to the VM on creation as part of Terraform. The disks should be created based on user input through Jenkins.
Each disk is passed with a LUN number, and I am fetching the device name (sdc, sdd, etc.) for each disk using that LUN number and grep. The tasks in my_tasks.yml are to be looped with include_tasks in playbook.yml as below:
my_tasks.yml
---
- parted:
    device: "{{ volumename.stdout }}"
    number: 1
    state: present

- filesystem:
    fstype: xfs
    dev: "{{ volumename.stdout }}"

- mount:
    fstype: xfs
    opts: noatime
    src: "{{ volumename.stdout }}"
    path: "{{ item.mountpoint }}"
    state: mounted

- command: blkid -s UUID -o value {{ volumename.stdout }}
  register: volumename_disk

- blockinfile:
    path: /etc/fstab
    state: present
    block: |
      UUID={{ volumename_disk.stdout }} {{ volumename.stdout }} xfs defaults,noatime,nofail 0 0
playbook.yml
---
- hosts: "{{ host }}"
  become: true
  become_method: sudo
  become_user: root
  vars:
    mount: "{{ lookup('file', '/home/xyz/vars.txt') }}"
  tasks:
    - name: Generate the Lun_Name
      shell: "tree /dev/disk/azure/scsi1 | grep -i lun | awk '{print $2}'"
      register: lun
    - set_fact:
        lun_name: "{{ lun_name|default([]) + [ { 'name': lun.stdout } ] }}"
    - debug:
        msg: "LUN is: {{ lun_name }}"
    - name: Generate the Volume_Name
      shell: echo "$(ls -l /dev/disk/azure/scsi1 |grep lun |egrep -o "([^\/]+$)")"
      register: volumename
    - set_fact:
        volumenames: "{{ volumenames|default([]) + [ { 'name': volumename.stdout } ] }}"
    - debug:
        msg: "VOLUMENAME is: {{ volumenames }}"
    # - debug:
    #     msg: "the mountpoints are {{ mount }}"
    - set_fact:
        mountpoint: "{{ lookup('file', '/home/xyz/vars.txt').split(',') }}"
    - debug:
        msg: "the mountpoints are {{ mountpoint }}"
      # loop: "{{ mountpoint }}"
    - include_tasks: my_tasks.yml
      loop: "{{ item.volumenames | list }} {{ item.mountpoint | list }}"
      loop_control:
        loop_var: "{{ item }}"
fatal: [10.102.26.74]: FAILED! => {"msg": "'item' is undefined"}
The issue seems to be with the loop on include_tasks; I'm able to get the loop working for mountpoint, which runs after set_fact in playbook.yml.
How can I resolve this issue or improve the code?
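One direction that may help, sketched here without testing against the Azure setup above: loop_control.loop_var expects a literal variable name rather than a Jinja2 expression, and the device names and mount points can be paired with the zip filter before being handed to include_tasks. The variable name disk_item below is chosen purely for illustration, and splitting volumename.stdout into a list is an assumption about its content:

- include_tasks: my_tasks.yml
  loop: "{{ volumename.stdout.split() | zip(mountpoint) | list }}"
  loop_control:
    loop_var: disk_item   # a literal name, not "{{ item }}"

Inside my_tasks.yml the tasks would then refer to "/dev/{{ disk_item.0 }}" for the device and "{{ disk_item.1 }}" for the mount point.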

Multiple with_items in an Ansible module block

I want to create multiple logical volumes with a variable file, but it returns a syntax error, "found character that cannot start any token". I have tried it in different ways, but it still doesn't work.
main.yml
---
- name: playbook for create volume groups
  hosts: localhost
  become: true
  tasks:
    - include_vars: vars.yml
    - name: Create a logical volume
      lvol:
        vg: vg03
        lv: "{{ item.var1 }}"
        size: "{{ item.var2 }}"
      with_items:
        - { var1: "{{ var_lv_name }}", var2: "{{ var_lv_size }}" }
vars.yml
var_lv_name:
  - lv05
  - lv06
var_lv_size:
  - 1g
  - 1g
Use with_together. Test it first. For example,
- debug:
    msg: "Create lv: {{ item.0 }} size: {{ item.1 }}"
  with_together:
    - "{{ var_lv_name }}"
    - "{{ var_lv_size }}"
gives (abridged)
msg: 'Create lv: lv05 size: 1g'
msg: 'Create lv: lv06 size: 1g'
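Applied to the original lvol task, the same pattern would look roughly like the sketch below (untested, using the variables from vars.yml):

- name: Create a logical volume
  lvol:
    vg: vg03
    lv: "{{ item.0 }}"
    size: "{{ item.1 }}"
  with_together:
    - "{{ var_lv_name }}"
    - "{{ var_lv_size }}"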
Optionally, put the declaration below into the file vars.yml
var_lv: "{{ var_lv_name|zip(var_lv_size) }}"
This creates the list
var_lv:
  - [lv05, 1g]
  - [lv06, 1g]
Use it in the code. The simplified task below gives the same results
- debug:
    msg: "Create lv: {{ item.0 }} size: {{ item.1 }}"
  loop: "{{ var_lv }}"
The previous answer is totally correct, but in my humble opinion we should be moving to the new way of doing things, with loop and filters.
Here's my answer:
---
- name: playbook for create volume groups
  hosts: localhost
  gather_facts: no
  become: true
  vars_files: vars.yml
  tasks:
    - name: Create a logical volume
      lvol:
        vg: vg03
        lv: "{{ item[0] }}"
        size: "{{ item[1] }}"
      loop: "{{ var_lv_name | zip(var_lv_size) | list }}"
This uses the new loop keyword together with filters like zip, turning the result into a list type for iteration in the loop.

Ansible to execute a task only when multiple files exist

I want to execute a task only when multiple files exist. If only a single file exists, I need to skip the task. How can I achieve this?
I am unable to achieve this with the playbook below:
---
- name: Standardize
  hosts: test
  gather_facts: false
  vars:
    file_vars:
      - {id: 1, name: /etc/h_cm}
      - {id: 2, name: /etc/H_CM}
  tasks:
    - block:
        - name: Check if both exists
          stat:
            path: "{{ item.name }}"
          with_items: "{{ file_vars }}"
          register: cm_result
        - name: Move both files
          shell: mv "{{ item.item }}" /tmp/merged
          with_items: "{{ cm_result.results }}"
          when: item.stat.exists
After the "Check if both exists" task, you can add a set_fact task like this one:
- name: set facts
  set_fact:
    files_exist: "{{ (files_exist | default([])) + [item.stat.exists] }}"
  with_items: "{{ cm_result.results }}"
And you change your move files task to:
- name: Move both files
  debug:
    msg: "{{ item.stat.exists }}"
  with_items: "{{ cm_result.results }}"
  when: false not in files_exist
You have to specify shell: mv "{{ item.item.name }}" /tmp/merged instead of shell: mv "{{ item.item }}" /tmp/merged.
Check whether the below works:
- name: Standardize
  hosts: test
  gather_facts: false
  become: yes  ## If needed
  vars:
    file_vars:
      - {id: 1, name: /etc/h_cm}
      - {id: 2, name: /etc/H_CM}
  tasks:
    - block:
        - name: Check if both files exist
          stat:
            path: "{{ item.name }}"
          with_items: "{{ file_vars }}"
          register: cm_result
        - debug:
            var: item.stat.exists
          loop: "{{ cm_result.results }}"
        - name: Create a dummy list
          set_fact:
            file_state: []
        - name: Add true to list if file exists
          set_fact:
            file_state: "{{ file_state }} + ['{{ item.stat.exists }}']"
          loop: "{{ cm_result.results }}"
          when: item.stat.exists == true
        - name: Move both files
          shell: mv "{{ item.item.name }}" /tmp/merged
          loop: "{{ cm_result.results }}"
          when: file_state|length > 1
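As a side note, the intermediate list can also be skipped by deriving the condition directly from the registered stat results with the selectattr filter; a sketch of that shortcut (my own variant, not part of either answer above):

- name: Move both files only when every file exists
  shell: mv "{{ item.item.name }}" /tmp/merged
  with_items: "{{ cm_result.results }}"
  when: (cm_result.results | selectattr('stat.exists') | list | length) == (file_vars | length)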

Ansible vmware_guest optional disk with Ansible Tower survey

I have a playbook for the creation of a VM from a template in VMware ESXi 6.7. My playbook is below. I want to configure the second (and possibly subsequent) disks only if the DISK1_SIZE_GB variable is > 0. This is not working. I've also tried using 'when: DISK1_SIZE_GB is defined' with no luck. I'm using a survey in Ansible Tower, with the 2nd disk configuration being an optional answer. In this case I get an error about 0 being an invalid disk size, or, when I check for variable definition, an error about DISK1_SIZE_GB being undefined. Either way, the 'when' conditional doesn't seem to be working.
If I hardcode the size, as in the first 'disk' entry, it works fine; the same goes if I enter a valid size from Ansible Tower. I need to NOT configure additional disks unless the size is defined in the Tower survey.
Thanks!
---
- name: Create a VM from a template
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Clone a template to a VM
      vmware_guest:
        hostname: "{{ lookup('env', 'VMWARE_HOST') }}"
        username: "{{ lookup('env', 'VMWARE_USER') }}"
        password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
        validate_certs: 'false'
        name: "{{ HOSTNAME }}"
        template: RHEL-Server-7.7
        datacenter: Production
        folder: Templates
        state: poweredon
        hardware:
          num_cpus: "{{ CPU_NUM }}"
          memory_mb: "{{ MEM_MB }}"
        disk:
          - size_gb: 20
            autoselect_datastore: true
          - size_gb: "{{ DISK1_SIZE_GB }}"
            autoselect_datastore: true
            when: DISK1_SIZE_GB > 0
        networks:
          - name: "{{ NETWORK }}"
            type: static
            ip: "{{ IP_ADDR }}"
            netmask: "{{ NETMASK }}"
            gateway: "{{ GATEWAY }}"
            dns_servers: "{{ DNS_SERVERS }}"
            start_connected: true
        wait_for_ip_address: yes
AFAIK this can't be accomplished in a single task. You were on the right track with when: DISK1_SIZE_GB is defined if disk: were a task and not a parameter, though. Below is how I would approach this.
Create two survey questions:
DISK1_SIZE_GB - integer - required answer - enforce a non-zero minimum value such as 20 (since you're deploying RHEL)
DISK2_SIZE_GB - integer - optional answer - no minimum or maximum value
Create disk 1 in your existing vmware_guest task:
disk:
  - size_gb: "{{ DISK1_SIZE_GB }}"
    autoselect_datastore: true
Create a new vmware_guest_disk task which runs immediately afterwards and conditionally adds the second disk:
- name: Add second hard disk if necessary
  vmware_guest_disk:
    hostname: "{{ lookup('env', 'VMWARE_HOST') }}"
    username: "{{ lookup('env', 'VMWARE_USER') }}"
    password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
    validate_certs: 'false'
    name: "{{ HOSTNAME }}"
    datacenter: Production
    folder: Templates
    state: poweredon
    disk:
      - size_gb: "{{ DISK2_SIZE_GB }}"
        autoselect_datastore: true
  when: DISK2_SIZE_GB is defined

How to create Azure vm with data disk and then format it via ansible

I can do each part separately but cannot combine them, since I don't know the disk device name.
My configuration:
- name: Create Virtual Machine
  azure_rm_virtualmachine:
    resource_group: "{{ resource_group }}"
    name: "{{ item }}"
    vm_size: "{{ flavor }}"
    managed_disk_type: "{{ disks.disk_type }}"
    network_interface_names: "NIC-{{ item }}"
    ssh_password_enabled: false
    admin_username: "{{ cloud_config.admin_username }}"
    image:
      offer: "{{ image.offer }}"
      publisher: "{{ image.publisher }}"
      sku: "{{ image.sku }}"
      version: "{{ image.version }}"
    tags:
      Node: "{{ tags.Node }}"
    ssh_public_keys:
      - path: "/home/{{ cloud_config.admin_username }}/.ssh/authorized_keys"
        key_data: "{{ cloud_config.ssh.publickey }}"
    data_disks:
      - lun: 0
        disk_size_gb: "{{ disks.disk_size }}"
        caching: "{{ disks.caching }}"
        managed_disk_type: "{{ disks.disk_type }}"
The other part formats and mounts the disk:
- name: partition new disk
  shell: 'echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdc'
  args:
    executable: /bin/bash

- name: Makes file system on block device
  filesystem:
    fstype: xfs
    dev: /dev/sdc1

- name: new dir to mount
  file: path=/hadoop state=directory

- name: mount the dir
  mount:
    path: /hadoop
    src: /dev/sdc1
    fstype: xfs
    state: mounted
My question: the device name cannot be configured. It can be /dev/sdc or /dev/sdb. For AWS EC2 I can set volumes[device_name], but I don't find such a field in Azure. How can I fix this?
/dev/sdb is used for the temporary disk by default, but sometimes it is used by my data disk.
I found a workaround to check the device name before formatting. I know it's not a smart way.
- name: check device name which should be parted
  shell: parted -l
  register: device_name

- name: Show middle device name
  debug:
    msg: "{{ device_name.stderr.split(':')[1] }}"
  register: mid_device

- name: Display real device name
  debug:
    msg: "{{ mid_device.msg.split()[0] }}"
  register: real_device

- name: partition new disk
  shell: 'echo -e "n\np\n1\n\n\nw" | fdisk {{ real_device.msg }}'
  args:
    executable: /bin/bash

- name: Makes file system on block device
  filesystem:
    fstype: xfs
    dev: "{{ real_device.msg }}1"

- name: new dir to mount
  file: path=/hadoop state=directory

- name: mount the dir
  mount:
    path: /hadoop
    src: "{{ real_device.msg }}1"
    fstype: xfs
    state: mounted
We can use a symlink rather than /dev/sdb to format the data disk; the links are located under /dev/disk/azure.
You can run the command "tree /dev/disk/azure" to see the detailed structure.
Here is my example to format one data disk. If there are more disks, you can change the symlink to /dev/disk/azure/scsi1/lun1, /dev/disk/azure/scsi1/lun2, /dev/disk/azure/scsi1/lun3, and so on (see the loop sketch after the tasks below).
- name: use parted to make label
  shell: "parted /dev/disk/azure/scsi1/lun0 mklabel msdos"
  args:
    executable: /bin/bash

- name: partition new disk
  shell: "parted /dev/disk/azure/scsi1/lun0 mkpart primary 1 100%"
  args:
    executable: /bin/bash

- name: inform the OS of partition table changes (partprobe)
  command: partprobe

- name: Makes file system on block device with xfs file system
  filesystem:
    fstype: xfs
    dev: /dev/disk/azure/scsi1/lun0-part1

- name: create data dir for mounting
  file: path=/data state=directory

- name: Get UUID of the new filesystem
  shell: |
    blkid -s UUID -o value $(readlink -f /dev/disk/azure/scsi1/lun0-part1)
  register: uuid

- name: show real uuid
  debug:
    msg: "{{ uuid.stdout }}"

- name: mount the dir
  mount:
    path: /data
    src: "UUID={{ uuid.stdout }}"
    fstype: xfs
    state: mounted

- name: check disk status
  shell: df -h | grep /dev/sd
  register: df2_status

- debug: var=df2_status.stdout_lines
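If several data disks need the same treatment, the tasks above could be moved into their own file and looped over the LUN links, for example (a sketch only; the file name and the lun/path pairs are made-up placeholders):

- name: Partition, format and mount each data disk
  include_tasks: format_data_disk.yml   # hypothetical file holding the tasks above
  loop:
    - { lun_link: /dev/disk/azure/scsi1/lun0, mount_path: /data0 }
    - { lun_link: /dev/disk/azure/scsi1/lun1, mount_path: /data1 }
  loop_control:
    loop_var: disk

Inside the included file, the hard-coded /dev/disk/azure/scsi1/lun0 and /data would become "{{ disk.lun_link }}" and "{{ disk.mount_path }}".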
Maybe you can try the azure_rm_managed_disk module and then attach the disk to the VM. Then you have all the properties of the disk.
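A minimal sketch of that idea, assuming the resource_group and disks variables from the question and sticking to parameters I'm reasonably sure the module supports; the disk name is a hypothetical example:

- name: Create a managed data disk and attach it to the VM
  azure_rm_managed_disk:
    resource_group: "{{ resource_group }}"
    name: "{{ item }}-datadisk0"          # hypothetical disk name
    disk_size_gb: "{{ disks.disk_size }}"
    storage_account_type: "{{ disks.disk_type }}"
    managed_by: "{{ item }}"              # the VM to attach the disk to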
If you need LVM...
- name: Mount disks with logical volume management
  block:
    - name: Add disks to logical volume group
      community.general.lvg:
        vg: "{{ my_volume_group }}"
        pvs: "{{ my_physical_devices }}"
    - name: Manage logical volume
      community.general.lvol:
        vg: "{{ my_volume_group }}"
        lv: "{{ my_logical_volume }}"
        size: "{{ my_volume_size }}"
    - name: Manage mount point
      ansible.builtin.file:
        path: "{{ my_path }}"
        state: directory
        mode: 0755
    - name: Manage file system
      community.general.filesystem:
        dev: /dev/{{ my_volume_group }}/{{ my_logical_volume }}
        fstype: "{{ my_fstype }}"
    - name: Mount volume
      ansible.posix.mount:
        path: "{{ my_path }}"
        state: mounted
        src: /dev/{{ my_volume_group }}/{{ my_logical_volume }}
        fstype: "{{ my_fstype }}"
        opts: defaults,nodev
