I am trying to create a partition and mount point on the Azure disks that are attached to the VM at creation time as part of Terraform. The disks should be created based on user input passed through Jenkins.
Each disk is passed with a LUN number, and I am fetching the device name (sdc, sdd, etc.) for each disk using that LUN number and grep. The tasks in my_tasks.yml are meant to be looped with include_tasks from playbook.yml as shown below:
my_tasks.yml
---
- parted:
    device: "{{ volumename.stdout }}"
    number: 1
    state: present
- filesystem:
    fstype: xfs
    dev: "{{ volumename.stdout }}"
- mount:
    fstype: xfs
    opts: noatime
    src: "{{ volumename.stdout }}"
    path: "{{ item.mountpoint }}"
    state: mounted
- command: blkid -s UUID -o value {{ volumename.stdout }}
  register: volumename_disk
- blockinfile:
    path: /etc/fstab
    state: present
    block: |
      UUID={{ volumename_disk.stdout }} {{ volumename.stdout }} xfs defaults,noatime,nofail 0 0
playbook.yml
---
- hosts: "{{ host }}"
  become: true
  become_method: sudo
  become_user: root
  vars:
    mount: "{{ lookup('file', '/home/xyz/vars.txt') }}"
  tasks:
    - name: Generate the Lun_Name
      shell: "tree /dev/disk/azure/scsi1 | grep -i lun | awk '{print $2}'"
      register: lun
    - set_fact:
        lun_name: "{{ lun_name|default([]) + [ { 'name': lun.stdout } ] }}"
    - debug:
        msg: "LUN is: {{ lun_name }}"
    - name: Generate the Volume_Name
      shell: echo "$(ls -l /dev/disk/azure/scsi1 |grep lun |egrep -o "([^\/]+$)")"
      register: volumename
    - set_fact:
        volumenames: "{{ volumenames|default([]) + [ { 'name': volumename.stdout } ] }}"
    - debug:
        msg: "VOLUMENAME is: {{ volumenames }}"
    # - debug:
    #     msg: "the mountpoints are {{ mount }}"
    - set_fact:
        mountpoint: "{{ lookup('file', '/home/xyz/vars.txt').split(',') }}"
    - debug:
        msg: "the mountpoints are {{ mountpoint }}"
      # loop: "{{ mountpoint }}"
    - include_tasks: my_tasks.yml
      loop: "{{ item.volumenames | list }} {{ item.mountpoint | list }}"
      loop_control:
        loop_var: "{{ item }}"
This fails with:
fatal: [10.102.26.74]: FAILED! => {"msg": "'item' is undefined"}
The issue seems to be with the loop on include_tasks; I am able to get the loop working for mountpoint, which runs after set_fact in playbook.yml.
How can I resolve this issue or improve the code?
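One way to resolve this (a sketch, not tested against the original environment, assuming volumenames ends up as a list with one dict per disk and that it lines up with the mountpoint list by index) is to zip the two lists and let include_tasks loop over the pairs with the default item loop variable; note that loop_var must be a literal name, never a Jinja expression:
    - include_tasks: my_tasks.yml
      loop: "{{ volumenames | zip(mountpoint) | list }}"
Inside my_tasks.yml each pair is then referenced as item.0.name (the bare device name, e.g. sdc) and item.1 (the mount path), for example:
- parted:
    device: "/dev/{{ item.0.name }}"
    number: 1
    state: present
The remaining tasks in my_tasks.yml would reference item.0.name and item.1 in the same way.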
Related
I want to have a text file which contains names and passwords:
name: "Peter", "Joe", "Mark"
password: "smith", "biden", "garyy"
And I have a playbook like this:
---
- hosts: myhosts
  become: yes
  remote_user: root1
  become_user: root
  vars_files:
    - vars.yml
  vars:
    ansible_ssh_private_key_file: "{{ key }}"
  tasks:
    - name: Create users
      user: name="{{ item.name }}" shell=/bin/bash home="/srv/{{ item.name }}" groups=root generate_ssh_key=yes ssh_key_bits=2048
      loop: "{{ lookup('file', 'userspasswd.txt', wantList=True)| list }}"
    - name: Set password to users
      shell: echo "{{ item.name }}:{{ item.password }}" | sudo chpasswd
      no_log: True
      loop: "{{ lookup('file', 'userspasswd.txt', wantList=True)| list }}"
I am getting an error like this:
fatal: [xxx.xxx.xxx.xxx]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'name'\n\nThe error appears to be in '/home/root1/Documents/ansiblekernel/main.yml': line 12, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Create users\n ^ here\n"}
Is there a correct way of doing this? I am new to this.
Given the file
shell> cat userspasswd.txt
name: "Peter", "Joe", "Mark"
password: "smith", "biden", "garyy"
Neither wantList=True nor the list filter will help you parse the file, because it is not valid YAML. If you can't change the structure of the file, you'll have to parse it yourself.
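For comparison, if the structure of the file could be changed, a valid YAML version (a hypothetical alternative, not the file from the question) could be read directly with vars_files or include_vars:
users:
  - name: Peter
    password: smith
  - name: Joe
    password: biden
  - name: Mark
    password: garyy
and iterated with loop: "{{ users }}", referencing item.name and item.password.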
Declare the variables
userspasswd_lines: "{{ lookup('file', 'userspasswd.txt').splitlines() }}"
userspasswd_values: "{{ userspasswd_lines|
                        map('split', ':')|
                        map('last')|
                        map('regex_replace', '\"', '')|
                        map('split', ',')|
                        map('map', 'trim')|
                        list }}"
userspasswd_dict: "{{ dict(userspasswd_values.0|
                      zip(userspasswd_values.1)) }}"
give
userspasswd_lines:
  - 'name: "Peter", "Joe", "Mark"'
  - 'password: "smith", "biden", "garyy"'
userspasswd_values:
  - - Peter
    - Joe
    - Mark
  - - smith
    - biden
    - garyy
userspasswd_dict:
  Joe: biden
  Mark: garyy
  Peter: smith
Iterate the dictionary. Test it
- name: Create users
  debug:
    msg: |
      name: {{ item }}
      shell: /bin/bash
      home: /srv/{{ item }}
      groups: root
      generate_ssh_key: yes
      ssh_key_bits: 2048
  loop: "{{ userspasswd_dict.keys()|list }}"
- name: Set password to users
  debug:
    msg: 'echo "{{ item.key }}:{{ item.value }}" | sudo chpasswd'
  loop: "{{ userspasswd_dict|dict2items }}"
gives
TASK [Create users] **************************************************************************
ok: [test_11] => (item=Peter) =>
  msg: |-
    name: Peter
    shell: /bin/bash
    home: /srv/Peter
    groups: root
    generate_ssh_key: yes
    ssh_key_bits: 2048
ok: [test_11] => (item=Joe) =>
  msg: |-
    name: Joe
    shell: /bin/bash
    home: /srv/Joe
    groups: root
    generate_ssh_key: yes
    ssh_key_bits: 2048
ok: [test_11] => (item=Mark) =>
  msg: |-
    name: Mark
    shell: /bin/bash
    home: /srv/Mark
    groups: root
    generate_ssh_key: yes
    ssh_key_bits: 2048
TASK [Set password to users] *****************************************************************
ok: [test_11] => (item={'key': 'Peter', 'value': 'smith'}) =>
  msg: echo "Peter:smith" | sudo chpasswd
ok: [test_11] => (item={'key': 'Joe', 'value': 'biden'}) =>
  msg: echo "Joe:biden" | sudo chpasswd
ok: [test_11] => (item={'key': 'Mark', 'value': 'garyy'}) =>
  msg: echo "Mark:garyy" | sudo chpasswd
Example of a complete playbook for testing
- hosts: myhosts
  vars:
    userspasswd_lines: "{{ lookup('file', 'userspasswd.txt').splitlines() }}"
    userspasswd_values: "{{ userspasswd_lines|
                            map('split', ':')|
                            map('last')|
                            map('regex_replace', '\"', '')|
                            map('split', ',')|
                            map('map', 'trim')|
                            list }}"
    userspasswd_dict: "{{ dict(userspasswd_values.0|
                          zip(userspasswd_values.1)) }}"
  tasks:
    - block:
        - debug:
            var: userspasswd_lines
        - debug:
            var: userspasswd_values
        - debug:
            var: userspasswd_dict
      run_once: true
    - name: Create users
      debug:
        msg: |
          name: {{ item }}
          shell: /bin/bash
          home: /srv/{{ item }}
          groups: root
          generate_ssh_key: yes
          ssh_key_bits: 2048
      loop: "{{ userspasswd_dict.keys()|list }}"
    - name: Set password to users
      debug:
        msg: 'echo "{{ item.key }}:{{ item.value }}" | sudo chpasswd'
      loop: "{{ userspasswd_dict|dict2items }}"
I want to create multiple logical volumes with a variable file, but it returns the syntax error found character that cannot start any token. I have tried different ways but it still doesn't work.
main.yml
---
- name: playbook for create volume groups
  hosts: localhost
  become: true
  tasks:
    - include_vars: vars.yml
    - name: Create a logical volume
      lvol:
        vg: vg03
        lv: "{{ item.var1 }}"
        size: "{{ item.var2 }}"
      with_items:
        - { var1: "{{ var_lv_name }}", var2: "{{ var_lv_size }}" }
vars.yml
var_lv_name:
  - lv05
  - lv06
var_lv_size:
  - 1g
  - 1g
Use with_together. Test it first. For example,
- debug:
    msg: "Create lv: {{ item.0 }} size: {{ item.1 }}"
  with_together:
    - "{{ var_lv_name }}"
    - "{{ var_lv_size }}"
gives (abridged)
msg: 'Create lv: lv05 size: 1g'
msg: 'Create lv: lv06 size: 1g'
Optionally, put the declaration below into the file vars.yml
var_lv: "{{ var_lv_name|zip(var_lv_size) }}"
This creates the list
var_lv:
  - [lv05, 1g]
  - [lv06, 1g]
Use it in the code. The simplified task below gives the same results
- debug:
    msg: "Create lv: {{ item.0 }} size: {{ item.1 }}"
  loop: "{{ var_lv }}"
The previous answer is totally correct, but in my humble opinion we should be getting used to the new way of doing things with loop and filters.
Here's my answer:
---
- name: playbook for create volume groups
  hosts: localhost
  gather_facts: no
  become: true
  vars_files: vars.yml
  tasks:
    - name: Create a logical volume
      lvol:
        vg: vg03
        lv: "{{ item[0] }}"
        size: "{{ item[1] }}"
      loop: "{{ var_lv_name | zip(var_lv_size) | list }}"
This answer uses the new loop keyword together with filters such as zip, turning the result into a list for iteration in the loop.
I have to traverse a package list file which contains a list of packages with their architectures. How can I feed this input to my playbook? I found a way to get the package names alone, but the architecture is not coming through. This is my package list file:
nginx | x86_64
telnet| x86_64
openssh | i386
This is my playbook
- name: get contents of package.txt
  command: cat "/root/packages.txt"
  register: _packages
- name: get contents of architecture from packages.txt
  command: cat "/root/packages.txt" | awk '{print $3}'
  register: _arch
- name: Filter
  theforeman.foreman.content_view_filter:
    username: "admin"
    password: "mypass"
    server_url: "myhost"
    name: "myfilter"
    organization: "COT"
    content_view: "M_view"
    filter_type: "rpm"
    architecture: "{{ _arch }}"
    package_name: "{{ item }}"
    inclusion: True
  loop: "{{ _packages.stdout_lines }}"
  loop: "{{ _arch.stdout_lines }}"
Any help would be appreciated.
The required output is that the package name and architecture are read from packages.txt by the playbook.
Try this playbook:
- name: Reproduce issue
  hosts: localhost
  gather_facts: no
  tasks:
    - name: get contents of package.txt
      command: cat "/root/packages.txt"
      register: _packages
    - debug:
        msg: "package: {{ line.0 }}, arch: {{ line.1 }}"
      loop: "{{ _packages.stdout_lines }}"
      vars:
        line: "{{ item.split('|')|list }}"
result:
ok: [localhost] => (item=nginx | x86_64) => {
    "msg": "package: nginx , arch: x86_64 "
}
ok: [localhost] => (item=telnet| x86_64) => {
    "msg": "package: telnet, arch: x86_64 "
}
ok: [localhost] => (item=openssh | i386) => {
    "msg": "package: openssh , arch: i386 "
}
for your case:
- name: Filter
  theforeman.foreman.content_view_filter:
    :
    :
    architecture: "{{ line.1 }}"
    package_name: "{{ line.0 }}"
    inclusion: True
  loop: "{{ _packages.stdout_lines }}"
  vars:
    line: "{{ item.split('|')|list }}"
Depending on your Ansible version, you could also write line: "{{ item | split('|') | list }}".
You need to split up the line into the necessary values by filtering.
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    - name: Gather package list
      shell:
        cmd: cat package.txt
      register: _packages
    - name: Show packages
      debug:
        msg: "Name: {{ item.split('|')[0] }}, Arch: {{ item.split('|')[1] }}"
      loop: "{{ _packages.stdout_lines }}"
Further Documentation
Playbook filters - Manipulating strings
Jinja Template Designer - Filters
Further Q&A
Split string into list in Jinja?
I have been trying to extend the VG via Ansible, passing the pvname as a variable, but I really don't understand why it is not working.
Below you can see my code.
Variable file:
new_disk:
  - diskname: /dev/sdc
    pvname: /dev/sdb1, dev/sdc1
    vgname: datavg
lvm_settings:
  - lv_name: datalv
    lv_size: +100%FREE
    fs_name: ansible_fs_test
    lvpath: /dev/mapper/datavg-datalv
    filesystem_type: ext4
tasks file:
- include_vars: "{{ vm_name }}.yml"
- name: First disk partition settings
  block:
    - name: Create a new primary partition
      community.general.parted:
        device: "{{ item.diskname }}"
        number: 1
        state: present
      with_items: "{{ new_disk }}"
      register: partition_status
  rescue:
    - name: Debug messages to check the error
      debug:
        msg: "{{ partition_status }}"
- name: Extending the Volume Group
  community.general.lvg:
    vg: "{{ vgname }}"
    pvs: "{{ pvname }}"
    pvresize: yes
Below, you can see the error message:
TASK [resize_fs_linux : Extending the Volume Group] **********************************************************************************************************************************************************
fatal: [10.1.33.225]: FAILED! => {"changed": false, "msg": "Device /home/icc-admin/ dev/sdc1 not found."}
Do you have any idea why it is not working?
I really appreciate your help and time
Best Regards,
For me, it works this way:
Variable file
diskname:
  - /dev/sdb
  - /dev/sdc
disks_settings:
  - vgname: datavg
    pvname:
      - /dev/sdb1
      - /dev/sdc1
lvm_settings:
  - vgname: datavg
    lv_name: datalv
    lv_size: +100%FREE
    fs_name: ansible_fs_test
    lvpath: /dev/mapper/datavg-datalv
    filesystem_type: ext4
Tasks file:
---
# tasks file for resize_fs_linux
- include_vars: "{{ vm_name }}.yml"
- name: First disk partition settings
  block:
    - name: Create a new primary partition
      community.general.parted:
        device: "{{ item }}"
        number: 1
        state: present
      with_items: "{{ diskname }}"
      register: partition_status
      run_once: true
  rescue:
    - name: Debug messages to check the error
      debug:
        msg: "{{ partition_status }}"
- name: Extending the Volume Group
  community.general.lvg:
    vg: "{{ item.vgname }}"
    pvs: "{{ item.pvname }}"
    pvresize: yes
  with_items: "{{ disks_settings }}"
- name: Increasing the filesystems
  community.general.lvol:
    vg: "{{ vgname }}"
    lv: "{{ item.lv_name }}"
    size: "{{ item.lv_size }}"
    resizefs: true
  with_items: "{{ lvm_settings }}"
I can do these steps separately, but I cannot combine them, since I don't know the disk device name.
My configuration:
- name: Create Virtual Machine
  azure_rm_virtualmachine:
    resource_group: "{{ resource_group }}"
    name: "{{ item }}"
    vm_size: "{{ flavor }}"
    managed_disk_type: "{{ disks.disk_type }}"
    network_interface_names: "NIC-{{ item }}"
    ssh_password_enabled: false
    admin_username: "{{ cloud_config.admin_username }}"
    image:
      offer: "{{ image.offer }}"
      publisher: "{{ image.publisher }}"
      sku: "{{ image.sku }}"
      version: "{{ image.version }}"
    tags:
      Node: "{{ tags.Node }}"
    ssh_public_keys:
      - path: "/home/{{ cloud_config.admin_username }}/.ssh/authorized_keys"
        key_data: "{{ cloud_config.ssh.publickey }}"
    data_disks:
      - lun: 0
        disk_size_gb: "{{ disks.disk_size }}"
        caching: "{{ disks.caching }}"
        managed_disk_type: "{{ disks.disk_type }}"
The other part, to format and mount the disk:
- name: partition new disk
  shell: 'echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdc'
  args:
    executable: /bin/bash
- name: Makes file system on block device
  filesystem:
    fstype: xfs
    dev: /dev/sdc1
- name: new dir to mount
  file: path=/hadoop state=directory
- name: mount the dir
  mount:
    path: /hadoop
    src: /dev/sdc1
    fstype: xfs
    state: mounted
My question: the device name cannot be configured. It can be /dev/sdc or /dev/sdb. For AWS EC2 I can set volumes[device_name], but I can't find such a field in Azure. How can I fix this?
/dev/sdb is used for the temporary disk by default, but sometimes it was used by my data disk.
I found a workaround to check the device name before formatting.
I know it's not a smart way.
- name: check device name which should be parted
  shell: parted -l
  register: device_name
- name: Show middle device name
  debug:
    msg: "{{ device_name.stderr.split(':')[1] }}"
  register: mid_device
- name: Display real device name
  debug:
    msg: "{{ mid_device.msg.split()[0] }}"
  register: real_device
- name: partition new disk
  shell: 'echo -e "n\np\n1\n\n\nw" | fdisk {{ real_device.msg }}'
  args:
    executable: /bin/bash
- name: Makes file system on block device
  filesystem:
    fstype: xfs
    dev: "{{ real_device.msg }}1"
- name: new dir to mount
  file: path=/hadoop state=directory
- name: mount the dir
  mount:
    path: /hadoop
    src: "{{ real_device.msg }}1"
    fstype: xfs
    state: mounted
We can use the symlinks rather than /dev/sdb to format the data disk; the links are located in /dev/disk/azure.
You can run "tree /dev/disk/azure" to see the detailed structure.
Here is my example to format one data disk. If there are more disks, you can change the symlink to /dev/disk/azure/scsi1/lun1, /dev/disk/azure/scsi1/lun2, /dev/disk/azure/scsi1/lun3, and so on (see the looping sketch after the example).
- name: use parted to make label
  shell: "parted /dev/disk/azure/scsi1/lun0 mklabel msdos"
  args:
    executable: /bin/bash
- name: partition new disk
  shell: "parted /dev/disk/azure/scsi1/lun0 mkpart primary 1 100%"
  args:
    executable: /bin/bash
- name: inform the OS of partition table changes (partprobe)
  command: partprobe
- name: Makes file system on block device with xfs file system
  filesystem:
    fstype: xfs
    dev: /dev/disk/azure/scsi1/lun0-part1
- name: create data dir for mounting
  file: path=/data state=directory
- name: Get UUID of the new filesystem
  shell: |
    blkid -s UUID -o value $(readlink -f /dev/disk/azure/scsi1/lun0-part1)
  register: uuid
- name: show real uuid
  debug:
    msg: "{{ uuid.stdout }}"
- name: mount the dir
  mount:
    path: /data
    src: "UUID={{ uuid.stdout }}"
    fstype: xfs
    state: mounted
- name: check disk status
  shell: df -h | grep /dev/sd
  register: df2_status
- debug: var=df2_status.stdout_lines
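For several data disks, the same steps can be looped over the LUN numbers (a sketch, assuming the tasks above are moved into a hypothetical file called format_disk.yml and that the disks were attached with LUNs 0, 1 and 2):
- name: Format and mount every data disk
  include_tasks: format_disk.yml   # hypothetical file holding the tasks above
  loop: [0, 1, 2]                  # LUN numbers used when attaching the disks
  loop_control:
    loop_var: lun_number
Inside format_disk.yml, /dev/disk/azure/scsi1/lun0 becomes /dev/disk/azure/scsi1/lun{{ lun_number }}, the partition becomes /dev/disk/azure/scsi1/lun{{ lun_number }}-part1, and each disk gets its own mount path.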
Maybe you can try the azure_rm_manageddisk module and then attach the disk to the VM. Then you have all the properties of the disk.
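A minimal sketch of that idea (assuming the azure.azcollection collection is installed; the disk name is a placeholder and "{{ item }}" reuses the VM name from the question's loop):
- name: Create a managed data disk and attach it to the VM
  azure_rm_manageddisk:
    resource_group: "{{ resource_group }}"
    name: "{{ item }}-datadisk01"          # placeholder disk name
    disk_size_gb: "{{ disks.disk_size }}"
    storage_account_type: "{{ disks.disk_type }}"
    managed_by: "{{ item }}"               # name of the existing VM to attach to
    lun: 0                                 # LUN under which the disk is attached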
If you need LVM...
- name: Mount disks with logical volume management
  block:
    - name: Add disks to logical volume group
      community.general.lvg:
        vg: "{{ my_volume_group }}"
        pvs: "{{ my_physical_devices }}"
    - name: Manage logical volume
      community.general.lvol:
        vg: "{{ my_volume_group }}"
        lv: "{{ my_logical_volume }}"
        size: "{{ my_volume_size }}"
    - name: Manage mount point
      ansible.builtin.file:
        path: "{{ my_path }}"
        state: directory
        mode: 0755
    - name: Manage file system
      community.general.filesystem:
        dev: /dev/{{ my_volume_group }}/{{ my_logical_volume }}
        fstype: "{{ my_fstype }}"
    - name: Mount volume
      ansible.posix.mount:
        path: "{{ my_path }}"
        state: mounted
        src: /dev/{{ my_volume_group }}/{{ my_logical_volume }}
        fstype: "{{ my_fstype }}"
        opts: defaults,nodev
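For context, the variables this block expects could be supplied like this (placeholder example values, not part of the original answer):
my_volume_group: datavg
my_physical_devices:
  - /dev/sdc1
my_logical_volume: datalv
my_volume_size: 100%FREE
my_path: /data
my_fstype: xfs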