How to copy a file to a remote host when `ansible_connection` is local? - azure

I am creating an Azure VM with Ansible using the azure_rm_virtualmachine module. In this case the host is localhost (ansible_connection=local). I need to copy an SSH private key that is ansible-vault encrypted. How can I do this?
Here's what I have already tried:
Use command and run scp: the problem is that the file arrives still encrypted.
Decrypt the file, scp, then re-encrypt: the problem is that if the scp command fails after decryption, the file is left lying around decrypted.
Does anyone have an idea how to approach this problem?
FYI: while creating the VM I added my public key there, so I can access the machine.

As far as I understand your use case, you first create a new VM in Azure, and then you want to send a new private key to that fresh VM. I have two options for you.
Split into two plays
In the same playbook, you can have two different plays:
---
- name: Provisioning of my pretty little VM in Azure
  hosts: localhost
  vars:
    my_vm_name: myprettyvm
    my_resource_group: myprettygroup
    …
  tasks:
    - name: Create the VM
      azure_rm_virtualmachine:
        resource_group: "{{ my_resource_group }}"
        name: "{{ my_vm_name }}"
        …

- name: Configure my pretty little VM
  hosts: myprettyvm
  vars:
    my_priv_key: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  tasks:
    - name: Copy my private key
      copy:
        content: "{{ my_priv_key }}"
        dest: /root/.ssh/id_rsa
Delegate to localhost
Only one play in your playbook, but you delegate the provisioning task to localhost.
---
- name: Creation of my pretty little VM in Azure
  hosts: myprettyvm
  gather_facts: no
  vars:
    my_vm_name: myprettyvm
    my_resource_group: myprettygroup
    …
    my_priv_key: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  tasks:
    - name: Create the VM
      azure_rm_virtualmachine:
        resource_group: "{{ my_resource_group }}"
        name: "{{ my_vm_name }}"
        …
      delegate_to: localhost

    - name: Copy my private key
      copy:
        content: "{{ my_priv_key }}"
        dest: /root/.ssh/id_rsa
Don't forget to set gather_facts to no, since the target host is the VM that does not exist yet, so no facts are available.
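As an aside (not part of the original answer, just a hedged sketch): the vaulted variable can be produced with ansible-vault encrypt_string, and the copied key should get restrictive permissions. The variable name and key path below are only examples.

ansible-vault encrypt_string --stdin-name 'my_priv_key' < ~/.ssh/id_rsa

    - name: Copy my private key
      copy:
        content: "{{ my_priv_key }}"
        dest: /root/.ssh/id_rsa
        mode: '0600'

Run the playbook with --ask-vault-pass or --vault-password-file; the variable is then decrypted only in memory when the copy task runs, so no decrypted copy is ever left on the controller.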

Related

Deploying multiple VMs with ansible

I'm learning Ansible to create Linux VMs on Azure, and I used the sample playbook from this link (https://learn.microsoft.com/en-us/azure/developer/ansible/vm-configure?tabs=ansible) to create one VM on Azure. If I want to deploy 10 VMs exactly like this with ansible-playbook, how should I do it? Please help. Thanks in advance.
Update: I tried it like this but the script fails after creating two public IP addresses.
- name: Create Azure VM
  hosts: localhost
  connection: local
  tasks:
    - name: Create resource group to hold VM
      azure_rm_resourcegroup:
        name: TestingResource
        location: eastus
    - name: Create virtual network
      azure_rm_virtualnetwork:
        resource_group: TestingResource
        name: testingvnet
        address_prefixes: "10.0.0.0/16"
    - name: Add subnet
      azure_rm_subnet:
        resource_group: TestingResource
        name: testingsubnet
        address_prefix: "10.0.1.0/24"
        virtual_network: testingvnet
    - name: Create public IP address
      azure_rm_publicipaddress:
        resource_group: TestingResource
        allocation_method: Static
        name: "{{ item }}" #CHANGE HERE
      loop:
        - testingpublicIP2
        - testingpublicIP3
      register: output_ip_address
    #- name: Dump public IP for VM which will be created
    #  debug:
    #    msg: "The public IP is {{ output_ip_address.state.ip_address }}."
    - name: Create Network Security Group that allows SSH
      azure_rm_securitygroup:
        resource_group: TestingResource
        name: TestingSecurityGroup
        rules:
          - name: SSH
            protocol: Tcp
            destination_port_range: 22
            access: Allow
            priority: 1001
            direction: Inbound
    - name: Create virtual network interface card
      azure_rm_networkinterface:
        resource_group: TestingResource
        name: "{{ item }}" #CHANGE HERE
        loop:
          - TestingNIC2
          - TestingNIC3
        virtual_network: testingvnet
        subnet: testingsubnet
        public_ip_name: "{{ item }}" #CHANGE HERE
        loop:
          - testingpublicIP2
          - testingpublicIP3
        security_group: TestingSecurityGroup
    - name: Create VM
      azure_rm_virtualmachine:
        resource_group: TestingResource
        name: "{{ item }}" #CHANGE HERE VM NAME
        loop:
          - TestingResource2
          - TestingResource3
        vm_size: Standard_B2s
        admin_username: admin
        admin_password: password#123
        ssh_password_enabled: true
        network_interfaces: "{{ item }}" #CHANGE HERE
        loop:
          - TestingNIC2
          - TestingNIC3
        image:
          offer: UbuntuServer
          publisher: Canonical
          sku: '18.04-LTS'
          version: latest
You can use loops to create multiple VMs through Ansible, as you showed in the question, but it's better to use a list variable for the loop so that you don't need to write out all the elements every time. Variables can also be used for other things, like the resource group name, location, and so on, that are used multiple times in the code. Here is the example:
- hosts: localhost
  vars:
    resource_group: myResourceGroup
    ...
  tasks:
    - name: Create resource group to hold VM
      azure_rm_resourcegroup:
        name: "{{ resource_group }}"
        location: eastus
    ...
And the variable for the loop:
loop: "{{ var_list }}"
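For illustration, here is a hedged sketch (not from the original answer) of how the NIC and VM tasks could be driven from one list variable; note that a task accepts only a single loop keyword, so looping each module parameter separately, as in the question, will not work. The variable name vm_list and its fields are made up for the example; the other values are taken from the question.

vars:
  vm_list:
    - { name: TestingResource2, nic: TestingNIC2, public_ip: testingpublicIP2 }
    - { name: TestingResource3, nic: TestingNIC3, public_ip: testingpublicIP3 }

tasks:
  - name: Create virtual network interface card
    azure_rm_networkinterface:
      resource_group: TestingResource
      name: "{{ item.nic }}"
      virtual_network: testingvnet
      subnet: testingsubnet
      public_ip_name: "{{ item.public_ip }}"
      security_group: TestingSecurityGroup
    loop: "{{ vm_list }}"

  - name: Create VM
    azure_rm_virtualmachine:
      resource_group: TestingResource
      name: "{{ item.name }}"
      vm_size: Standard_B2s
      admin_username: admin
      admin_password: password#123
      ssh_password_enabled: true
      network_interfaces: "{{ item.nic }}"
      image:
        offer: UbuntuServer
        publisher: Canonical
        sku: '18.04-LTS'
        version: latest
    loop: "{{ vm_list }}"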
I found out it's quite easy with Terraform to deploy multiple VMs on Azure just by changing one variable in the configuration file. Here's the configuration file that I used (https://github.com/RichardPhilipsRoy/terraform-azure-linuxVMs).

How to run an ansible role after first running a task in azure

We have a playbook to create a Linux VM in Microsoft Azure, followed by a role to do post-install things like installing application packages. The playbook runs fine and is able to deploy the VM in Azure; however, the second role, which is meant to configure the VM after deployment, does not run on the VM because we are not able to pass the VM (IP/hostname) to the second role.
What we want to achieve is to deploy the VM using an Ansible playbook/role, then run the roles after the machine is deployed to carry out the business-specific configuration.
Path:
Below is the path where all the Ansible plays and roles live. The roles folder contains all the post-install tasks. I believe there would be a better way, keeping test-creatVM-sur.yml itself within a role, but as a learner I'm struggling a bit.
$ ls -l /home/azure1/ansible_Dir
-rw-r-----. 1 azure1 hal 1770 Sep 17 17:03 test-creatVM-sur.yml
-rw-r-----. 1 azure1 hal 320 Sep 17 22:30 licence-test.yml
drwxr-x---. 6 azure1 hal 4096 Sep 17 21:46 roles
My main Play file:
$ cat licence-test.yml
---
- name: create vm
  hosts: localhost
  connection: local
  become: yes
  become_method: sudo
  become_user: root
  vars:
    Res_Group: "some_value"
    LOCATION: "some_value"
    VNET: "some_value"
    IMAGE_ID: "some_value"
    SUBNET: "some_value"
    KEYDATA: "some_value"
    DISK_SIZE: 100
    DISK_TYPE: Premium_LRS
  tasks:
    - name: include task
      include_tasks:
        file: creattest_VM.yml      <-- This portion works fine

- hosts: "{{ VM_NAME }}"            <-- this portion does not work, as it cannot resolve the newly created VM name
  become: yes
  become_method: sudo
  become_user: root
  roles:
    - azure_license
...
The play (test-creatVM-sur.yml) that creates the VM in Azure is below:
---
- name: Create Network Security Group that allows SSH
  azure_rm_securitygroup:
    resource_group: "{{ Res_Group }}"
    location: "{{ LOCATION }}"
    name: "{{ VM_NAME }}-nsg"
    rules:
      - name: SSH
        protocol: Tcp
        destination_port_range: 22
        access: Allow
        priority: 100
        direction: Inbound

- name: Create virtual network interface card
  azure_rm_networkinterface:
    resource_group: "{{ Res_Group }}"
    location: "{{ LOCATION }}"
    name: "{{ VM_NAME }}-nic1"
    subnet: "{{ SUBNET }}"
    virtual_network: "{{ VNET }}"
    security_group: "{{ VM_NAME }}-nsg"
    enable_accelerated_networking: True
    public_ip: no
    state: present

- name: Create VM
  azure_rm_virtualmachine:
    resource_group: "{{ Res_Group }}"
    location: "{{ LOCATION }}"
    name: "{{ VM_NAME }}"
    vm_size: Standard_D4s_v3
    admin_username: automation
    ssh_password_enabled: false
    ssh_public_keys:
      - path: /home/automation/.ssh/authorized_keys
        key_data: "{{ KEYDATA }}"
    network_interfaces: "{{ VM_NAME }}-nic1"
    os_disk_name: "{{ VM_NAME }}-osdisk"
    managed_disk_type: "{{ DISK_TYPE }}"
    os_disk_caching: ReadWrite
    os_type: Linux
    image:
      id: "{{ IMAGE_ID }}"
      publisher: redhat
    plan:
      name: rhel-lvm78
      product: rhel-byos
      publisher: redhat

- name: Add disk to VM
  azure_rm_manageddisk:
    name: "{{ VM_NAME }}-datadisk01"
    location: "{{ LOCATION }}"
    resource_group: "{{ Res_Group }}"
    disk_size_gb: "{{ DISK_SIZE }}"
    managed_by: "{{ VM_NAME }}"

- name: "wait for 3 Min"
  pause:
    minutes: 3
...
Edit:
I managed to move the vars into a separate file, name_vars.yml, loaded with include_vars.
---
- name: create vm
  hosts: localhost
  connection: local
  become: yes
  become_method: sudo
  become_user: root
  tasks:
    - include_vars: name_vars.yml
    - include_tasks: creattest_VM.yml

- name: Apply license hardening stuff
  hosts: "{{ VM_NAME }}"
  become: yes
  become_method: sudo
  become_user: root
  roles:
    - azure_license
...
It works after doing a somewhat dirty hack, but it doesn't look proper, as I am creating an inventory file, test, putting the VM_NAME there, and also passing it as an extra variable with -e below.
$ ansible-playbook -i test -e VM_NAME=mylabhost01.hal.com licence-test.yml -k -u my_user_id
Any help will be much appreciated.
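One common pattern for this (not from the original thread, just a hedged sketch) is to register the new VM into an in-memory group with add_host at the end of creattest_VM.yml, and then target that group in the second play instead of "{{ VM_NAME }}". This assumes VM_NAME resolves via DNS; otherwise set ansible_host to the VM's IP obtained from the Azure modules.

    - name: Add the freshly created VM to an in-memory group
      add_host:
        name: "{{ VM_NAME }}"
        groups: new_vms
        ansible_user: automation

- name: Apply license hardening stuff
  hosts: new_vms
  become: yes
  become_method: sudo
  become_user: root
  roles:
    - azure_license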

Ansible-AWX get file from remote Windows to local linux

Hello to all the Stack Overflow community.
I'm seeking your help because I've been trying to get a file from a remote Windows host onto local Linux using Ansible AWX, and I can't get it to work. Below I share the playbook and most of the tests I've done, but none of them worked.
I'm picking up the latest file in a Windows directory and trying to transfer it to the local AWX side, either inside the Docker container or on the Linux server where AWX is running.
Test_1: It says the file was copied, but when I go inside the Docker container there is nothing there. I can't find an answer and couldn't find anything on Google.
Test_2: Didn't work. It says it can't authenticate to the Linux server.
Test_3: The task became idle and I had to restart the Docker container to be able to stop it. It gets crazy. No idea why.
Test_4: It says the connection unexpectedly closed.
I didn't want to include the output, to reduce noise and because I can't share the information. I removed names and IPs from the playbook as well.
I'm connecting to the Windows server using AD.
Please, I don't know what else to do. Thanks for your help in advance.
---
- name: Get file from Windows to Linux
  hosts: all # remote windows server ip
  gather_facts: true
  become: true
  vars:
    local_dest_path_test1: \var\lib\awx\public\               # Inside AWX docker
    local_dest_path_test2: \\<linux_ip>\home\user_name\temp\  # Outside AWX docker in the linux server
    local_dest_path_test3: /var/lib/awx/public/               # Inside AWX docker
    # Source file in remote windows server
    src_file: C:\temp\
  tasks:
    # Getting file information to be copied
    - name: Get files in a folder
      win_find:
        paths: "{{ src_file }}"
      register: found_files

    - name: Get latest file
      set_fact:
        latest_file: "{{ found_files.files | sort(attribute='creationtime',reverse=true) | first }}"

    # Test 1
    - name: copy files from Windows to Linux
      win_copy:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test1 }}"
        remote_src: yes

    # Test 2
    - name: copy files from Windows to Linux
      win_copy:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test2 }}"
        remote_src: yes
      become: yes
      become_method: su
      become_flags: logon_type=new_credentials logon_flags=netcredentials_only
      vars:
        ansible_become_user: <linux_user_name>
        ansible_become_pass: <linux_user_password>
        ansible_remote_tmp: <linux_remote_path>

    # Test 3
    - name: Fetch latest file to linux
      fetch:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test3 }}"
        flat: yes
        fail_on_missing: yes
      delegate_to: 127.0.0.1

    # Test 4
    - name: Transfer file from Windows to Linux
      synchronize:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test3 }}"
        mode: pull
      delegate_to: 127.0.0.1
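For what it's worth, a hedged sketch of the usual approach (not from the original post): fetch already copies from the remote host to the controller, so it should not be delegated to localhost. Also note that in AWX the "local" side is the task container, so the destination path ends up inside that container unless it is a mounted volume, which would explain why the files from Test 1 and Test 3 are not visible on the underlying host.

    - name: Fetch latest file to the AWX controller
      fetch:
        src: "{{ latest_file.path }}"
        dest: /var/lib/awx/public/   # path inside the AWX task container unless mounted
        flat: yes
        fail_on_missing: yes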

Ansible Azure Dynamic Inventory and Sharing variables between hosts in a single playbook

Problem: referencing a fact about a host (in this case, the private IP) from another host in a playbook using a wildcard only seems to work in the hosts part of a play, not inside a task; vm_ubuntu* cannot be used in a task.
In a single playbook, I have a couple of hosts, and because the inventory is dynamic, I don't have the hostname ahead of time, as Azure appends an identifier to it after it has been created.
I am using Terraform to create the VMs.
And using the Azure dynamic inventory method.
I am calling my playbook like this, where myazure_rm.yml is a bog standard azure dynamic inventory method, as of the time of this writing.
ansible-playbook -i ./myazure_rm.yml ./bwaf-playbook.yaml --key-file ~/.ssh/id_rsa --u azureuser
My playbook looks like this ( abbreviated ).
- hosts: vm_ubuntu*
  tasks:
    - name: housekeeping
      set_fact:
        vm_ubuntu_private_ip: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"

    - debug: var=vm_ubuntu_private_ip

- hosts: vm_bwaf*
  connection: local
  vars:
    vm_bwaf_private_ip: "{{ private_ipv4_addresses | join }}"
    vm_bwaf_public_ip: "{{ public_ipv4_addresses | join }}"
    vm_ubuntu_private_ip: "{{ hostvars['vm_ubuntu*']['ip'] }}"
    api_url: "http://{{ vm_bwaf_public_ip }}:8000/restapi/{{ api_version }}"
I am answering my own question to get rep, and of course to help others.
I also want to thank the person (https://stackoverflow.com/users/4281353/mon) who came up with this first, which appears here: How do I set register a variable to persist between plays in ansible?
- name: "Save private ip to dummy host"
add_host:
name: "dummy_host"
ip: "{{ vm_ubuntu_private_ip }}"
And then this can be referenced from the other host's play in the playbook like this:
- hosts: vm_bwaf*
  connection: local
  vars:
    vm_bwaf_private_ip: "{{ private_ipv4_addresses | join }}"
    vm_bwaf_public_ip: "{{ public_ipv4_addresses | join }}"
    vm_ubuntu_private_ip: "{{ hostvars['dummy_host']['ip'] }}"

Gitlab CI Review Apps - get information from a deployment script back into a gitlab ci environment variable

I use GitLab CI for the automated tests. Now I have extended it to allow for review apps deployed on DigitalOcean droplets via an Ansible playbook.
This is working very well, but I need to get a variable from Ansible back to .gitlab-ci.yml, and I can't find a way to do it.
.gitlab-ci.yml
Deploy for Review:
  before_script: []
  stage: review
  script: 'cd /home/playbooks/oewm/deployment && ansible-playbook -i inventories/review --extra-vars "do_name=$CI_PIPELINE_ID api_git_branch=$CI_BUILD_REF_NAME" digitalocean.yml'
  environment:
    name: review/$CI_BUILD_REF_NAME
    url: http://$IP_FROM_ANSIBLE
    on_stop: "Stop Review"
  only:
    - branches
  when: manual
  tags:
    - deploy
The relevant parts from the playbook:
- name: Create DO Droplet
  delegate_to: localhost
  local_action:
    module: digital_ocean
    state=present
    command=droplet
    name=review-{{ do_name }}
    api_token={{ do_token }}
    region_id={{ do_region }}
    image_id={{ do_image }}
    size_id={{ do_size }}
    ssh_key_ids={{ do_ssh }}
    wait_timeout=500
  register: my_droplet

- name: print info about droplet
  delegate_to: localhost
  local_action:
    module: debug
    msg="ID is {{ my_droplet.droplet.id }} IP is {{ my_droplet.droplet.ip_address }}"
So how can I get the droplet ID and IP to GitLab CI?
(The ID is needed for the later Stop action; the IP is to be shown to the developer.)
Ansible is itself a YAML-configured scripting tool and a nearly Turing-complete automation environment. Why not have it write a file called "./ip_address.sh" somewhere, and then dot-source that .sh in your GitLab CI?
The very top level of all this, in .gitlab-ci.yml, would have this:
script:
  - ./run_ansible.sh ./out/run_file_generated_from_ansible.sh
  - . ./out/run_file_generated_from_ansible.sh
  - echo $IP_FROM_ANSIBLE
environment:
  name: review/$CI_BUILD_REF_NAME
  url: http://$IP_FROM_ANSIBLE
  on_stop: "Stop Review"
Writing the two shell scripts above is left as an exercise for the reader. The magic happens inside the Ansible "playbook", which is really just a script, where YOU "export a variable to disk" under the filename "./out/run_file_generated_from_ansible.sh".
What you didn't make clear is what you need to do in GitLab CI with that variable, where it ends up, and what happens next. So above, I'm just showing a way you could "export" an IP address via a temporary on-disk file.
You could save that exported value as an artifact and capture it in other stages as well, so such "artifact-exports" can be passed among stages, if you put them all in a directory called ./out and then declare an artifacts statement in gitlab-ci.yml.
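The answer leaves the export task itself to the reader; a minimal hedged sketch of what it could look like inside the playbook (the destination path mirrors the one used above, and my_droplet is the register from the question) is:

- name: Export droplet facts for GitLab CI
  delegate_to: localhost
  copy:
    dest: ./out/run_file_generated_from_ansible.sh
    content: |
      export IP_FROM_ANSIBLE={{ my_droplet.droplet.ip_address }}
      export DROPLET_ID_FROM_ANSIBLE={{ my_droplet.droplet.id }}

Dot-sourcing that file in the CI job then makes $IP_FROM_ANSIBLE and the droplet ID available to later script lines, and the ./out directory can be declared as an artifact to pass them to other stages.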
I finally got the setup to run. My solution uses AWS Route53 for dynamic hostname generation. (A problem I ignored too long: I needed hostnames for the different review apps.)
Step 1:
Build the hostname dynamically. For that I used $CI_PIPELINE_ID. I created a subdomain on Route53; for this example, call it review.mydomain.com. The Ansible playbook takes the IP of the created droplet and creates a record on Route53 with the pipeline id, e.g. 1234.review.mydomain.com.
Now my .gitlab-ci.yml knows this hostname (because it can build it at any time), so there is no more need to get the DigitalOcean droplet IP out of the Ansible script.
Step 2:
After review, the user should be able to stop/destroy the droplet. For this I need the droplet ID I get when the droplet is created.
But the destroy is a different playbook, which will run later, invoked by a developer.
So I need a way to store variables somewhere.
But wait: now that I know which host it is, I can just create a facts file on that host, storing the ID for me. When I need to destroy the host, Ansible provides me with the facts, and I know the ID.
In the playbook it looks like this:
Role: digitalocean
---
- name: Create DO Droplet
  delegate_to: localhost
  local_action:
    module: digital_ocean
    state=present
    command=droplet
    name=oewm-review-{{ do_name }}
    api_token={{ do_token }}
    region_id={{ do_region }}
    image_id={{ do_image }}
    size_id={{ do_size }}
    ssh_key_ids={{ do_ssh }}
    wait_timeout=500
  register: my_droplet

- name: print info about droplet
  delegate_to: localhost
  local_action:
    module: debug
    msg="DO-ID:{{ my_droplet.droplet.id }}"

- name: print info about droplet
  delegate_to: localhost
  local_action:
    module: debug
    msg="DO-IP:{{ my_droplet.droplet.ip_address }}"

# DNS
- name: Get existing host information
  route53:
    command: get
    zone: "{{ r53_zone }}"
    record: "{{ do_name }}.review.mydomain.com"
    type: A
    aws_access_key: "{{ r53_access_key }}"
    aws_secret_key: "{{ r53_secret_key }}"
  register: currentip

- name: Add DNS Record for Web-Application
  route53:
    command: create
    zone: "{{ r53_zone }}"
    record: "{{ do_name }}.review.mydomain.com"
    type: A
    ttl: 600
    value: "{{ my_droplet.droplet.ip_address }}"
    aws_access_key: "{{ r53_access_key }}"
    aws_secret_key: "{{ r53_secret_key }}"
  when: currentip.set.value is not defined

- name: Add DNS Record for API
  route53:
    command: create
    zone: "{{ r53_zone }}"
    record: "api.{{ do_name }}.review.mydomain.com"
    type: A
    ttl: 600
    value: "{{ my_droplet.droplet.ip_address }}"
    aws_access_key: "{{ r53_access_key }}"
    aws_secret_key: "{{ r53_secret_key }}"
  when: currentip.set.value is not defined

- name: Add new droplet to host group
  add_host:
    hostname: "{{ my_droplet.droplet.ip_address }}"
    groupname: api,web-application
    ansible_user: root
    api_domain: "api.{{ do_name }}.review.mydomain.com"
    app_domain: "{{ do_name }}.review.mydomain.com"

- name: Wait until SSH is available on {{ my_droplet.droplet.ip_address }}
  local_action:
    module: wait_for
    host: "{{ my_droplet.droplet.ip_address }}"
    port: 22
    delay: 5
    timeout: 320
    state: started
Playbook digitalocean.yml:
---
- name: Launch DO Droplet
  hosts: all
  run_once: true
  gather_facts: false
  roles:
    - digitalocean

- name: Store Facts
  hosts: api
  tasks:
    - name: Ensure facts directory exists
      file:
        path: "/etc/ansible/facts.d"
        state: directory
    - name: store variables on host for later fact gathering
      template:
        src={{ playbook_dir }}/roles/digitalocean/templates/digitalocean.fact.js2
        dest="/etc/ansible/facts.d/digitalocean.fact"
        mode=0644

- name: Deploy
  hosts: api
  roles:
    - deployroles
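The content of digitalocean.fact.js2 is not shown in the answer; a hedged sketch that would make ansible_local.digitalocean.DO_ID (used in the destroy playbook below) resolve is a JSON custom-facts template along these lines:

{
  "DO_ID": "{{ my_droplet.droplet.id }}",
  "DO_IP": "{{ my_droplet.droplet.ip_address }}"
}

Once written to /etc/ansible/facts.d/digitalocean.fact, fact gathering on the droplet exposes these values under ansible_local.digitalocean.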
Playbook digitalocean_destroy.yml:
- name: Add Host to Inventory
  hosts: all
  vars:
    r53_zone: review.mydomain.com
    r53_access_key: "xxxx"
    r53_secret_key: "xxxx"
  tasks:
    - name: Get existing DNS host information
      route53:
        command: get
        zone: "{{ r53_zone }}"
        record: "{{ do_name }}.review.mydomain.com"
        type: A
        aws_access_key: "{{ r53_access_key }}"
        aws_secret_key: "{{ r53_secret_key }}"
      register: currentip

    - name: Remove DNS Record for Web-Application
      route53:
        command: delete
        zone: "{{ r53_zone }}"
        record: "{{ do_name }}.review.mydomain.com"
        type: A
        ttl: 600
        value: "{{ my_droplet.droplet.ip_address }}"
        aws_access_key: "{{ r53_access_key }}"
        aws_secret_key: "{{ r53_secret_key }}"
      when: currentip.set.value is defined

    - name: Remove DNS Record for API
      route53:
        command: delete
        zone: "{{ r53_zone }}"
        record: "api.{{ do_name }}.review.mydomain.com"
        type: A
        ttl: 600
        value: "{{ my_droplet.droplet.ip_address }}"
        aws_access_key: "{{ r53_access_key }}"
        aws_secret_key: "{{ r53_secret_key }}"
      when: currentip.set.value is defined

    - name: Add droplet to host group
      add_host:
        hostname: "{{ do_name }}.review.mydomain.com"
        groupname: api,web-application
        ansible_user: root

- name: Digitalocean
  hosts: api
  vars:
    do_token: xxxxx
  tasks:
    - name: Delete Droplet
      delegate_to: localhost
      local_action:
        module: digital_ocean
        state=deleted
        command=droplet
        api_token={{ do_token }}
        id="{{ ansible_local.digitalocean.DO_ID }}"
Relevant parts from .gitlab-ci.yml:
Deploy for Review:
  before_script: []
  stage: review
  script:
    - 'cd /home/playbooks/myname/deployment && ansible-playbook -i inventories/review --extra-vars "do_name=$CI_PIPELINE_ID api_git_branch=$CI_BUILD_REF_NAME" digitalocean.yml'
  environment:
    name: review/$CI_BUILD_REF_NAME
    url: http://$CI_PIPELINE_ID.review.mydomain.com
    on_stop: "Stop Review"
  only:
    - branches
  when: manual
  tags:
    - deploy

Stop Review:
  before_script: []
  stage: review
  variables:
    GIT_STRATEGY: none
  script:
    - 'cd /home/playbooks/myname/deployment && ansible-playbook -i inventories/review --extra-vars "do_name=$CI_PIPELINE_ID" digitalocean_destroy.yml'
  when: manual
  environment:
    name: review/$CI_BUILD_REF_NAME
    action: stop
  only:
    - branches
  tags:
    - deploy

# STAGING
Deploy to Staging:
  before_script: []
  stage: staging
  script:
    - 'cd /home/playbooks/myname/deployment && ansible-playbook -i inventories/staging --extra-vars "api_git_branch=$CI_BUILD_REF_NAME" deploy.yml'
  environment:
    name: staging
    url: https://staging.mydomain.com
  when: manual
  tags:
    - deploy
