Splunk forwarder using Ansible - Linux

I would like to monitor multiple logs on the universal forwarder. How can I do this? Also, when I set forward-server I am running into an error, and with enable boot-start I somehow have to accept the license manually to finish the installation. Any suggestions, please?
- name: connect forward server to Splunk server
  command: "{{ splunkbin }} add forward-server {{ item }} -auth {{ splunkcreds }}"
  with_items: "{{ splunkserver }}"
  when: splunkserver is defined
  notify: restart_splunk

- name: Enable Boot Start
  command: "{{ splunkbin }} enable boot-start"

- name: add temporary monitor to create directory
  command: "{{ splunkbin }} add monitor /etc/hosts -auth {{ splunkcreds }}"
  notify: restart_splunk

Use the following to accept the license without prompting:
- name: Enable Boot Start
  command: "{{ splunkbin }} enable boot-start --accept-license"
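To monitor multiple logs, the same `add monitor` task can loop over a list of paths. A minimal sketch, where the `splunklogs` variable and its example paths are assumptions (define them in your inventory or group_vars):

```yaml
# Assumed variable, e.g. in group_vars:
# splunklogs:
#   - /var/log/messages
#   - /var/log/secure
- name: add monitors for multiple logs
  command: "{{ splunkbin }} add monitor {{ item }} -auth {{ splunkcreds }}"
  with_items: "{{ splunklogs }}"
  when: splunklogs is defined
  notify: restart_splunk
```

Each path becomes its own monitored input, and the handler restarts Splunk once after all monitors are added.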

Related

Ansible Azure Dynamic Inventory and Sharing variables between hosts in a single playbook

Problem: referencing a fact about a host (in this case, its private IP) from another host in a playbook using a wildcard only seems to work in the "hosts" part of a playbook, not inside a task. vm_ubuntu* cannot be used in a task.
In a single playbook I have a couple of hosts, and because the inventory is dynamic, I don't have the hostname ahead of time, as Azure appends an identifier after the VM has been created.
I am using Terraform to create the VMs, and the Azure dynamic inventory method.
I am calling my playbook like this, where myazure_rm.yml is a bog-standard Azure dynamic inventory config, as of the time of this writing:
ansible-playbook -i ./myazure_rm.yml ./bwaf-playbook.yaml --key-file ~/.ssh/id_rsa -u azureuser
My playbook looks like this (abbreviated):
- hosts: vm_ubuntu*
  tasks:
    - name: housekeeping
      set_fact:
        vm_ubuntu_private_ip: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"

    - debug: var=vm_ubuntu_private_ip
- hosts: vm_bwaf*
  connection: local
  vars:
    vm_bwaf_private_ip: "{{ private_ipv4_addresses | join }}"
    vm_bwaf_public_ip: "{{ public_ipv4_addresses | join }}"
    vm_ubuntu_private_ip: "{{ hostvars['vm_ubuntu*']['ip'] }}"
    api_url: "http://{{ vm_bwaf_public_ip }}:8000/restapi/{{ api_version }}"
I am answering my own question to get rep, and to help others of course.
I also want to thank the person (https://stackoverflow.com/users/4281353/mon) who came up with this first, which appears here: How do I register a variable to persist between plays in Ansible?
- name: "Save private ip to dummy host"
  add_host:
    name: "dummy_host"
    ip: "{{ vm_ubuntu_private_ip }}"
And then this can be referenced in the other host in the playbook like this:
- hosts: vm_bwaf*
  connection: local
  vars:
    vm_bwaf_private_ip: "{{ private_ipv4_addresses | join }}"
    vm_bwaf_public_ip: "{{ public_ipv4_addresses | join }}"
    vm_ubuntu_private_ip: "{{ hostvars['dummy_host']['ip'] }}"
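An alternative to the dummy host, assuming the ubuntu VMs land in an inventory group (the group name `vm_ubuntu` below is an assumption; adjust it to whatever your dynamic inventory actually produces), is to read the fact off the first member of that group via the `groups` magic variable. `hostvars` persists across plays, so this works as long as the first play gathered facts for those hosts:

```yaml
- hosts: vm_bwaf*
  connection: local
  vars:
    # 'vm_ubuntu' is an assumed group name from the dynamic inventory
    vm_ubuntu_private_ip: "{{ hostvars[groups['vm_ubuntu'][0]]['ansible_default_ipv4']['address'] }}"
```

This avoids the extra add_host task, at the cost of depending on the group naming of the inventory plugin.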

How do we use an encoded value in a playbook and decode it whenever needed?

I am trying to use the ansible-pull method for running playbooks with extra vars at run time.
Here is how I need to run my playbook with vars:
ansible-playbook decode.yml --extra-vars "host_name=xxxxxxx bind_password=xxxxxxxxx swap_disk=xxxxx"
The bind_password will have the base64-encoded value of the admin password,
and I have tried writing the playbook below for it.
I am able to debug every value and get it correctly, but after decoding the password I am not getting the exact value, or I am not sure whether I am doing it correctly.
---
- name: Install and configure AD authentication
  hosts: test
  become: yes
  become_user: root
  vars:
    hostname: "{{ host_name }}"
    diskname: "{{ swap_disk }}"
    password: "{{ bind_password }}"
  tasks:
    - name: Ansible prompt example.
      debug:
        msg: "{{ bind_password }}"
    - name: Ansible prompt example.
      debug:
        msg: "{{ host_name }}"
    - name: Ansible prompt example.
      debug:
        msg: "{{ swap_disk }}"
    - name: Setup the hostname
      command: hostnamectl set-hostname --static "{{ host_name }}"
    - name: decode passwd
      command: export passwd=$(echo "{{ bind_password }}" | base64 --decode)
    - name: print decoded password
      shell: echo "$passwd"
      register: mypasswd
    - name: debug decode value
      debug:
        msg: "{{ mypasswd }}"
While we can decode a base64 value on the shell with:
echo "encodedvalue" | base64 --decode
how can I run this playbook with ansible-pull as well?
Later I want to convert this playbook into a role (role1) and run it that way. How can we run a role-based playbook using ansible-pull?
The problem is not b64decoding your value. Your command should not cause any problems and probably gives the expected result if you type it manually in your terminal.
But Ansible creates a new connection for each task, so each shell/command task starts in a new session. Exporting an env var in one command task and using that env var in the next shell task will therefore never work.
Moreover, why handle all this with so many command/shell tasks when you have the needed tools directly in Ansible? Here is a possible rewrite of your last three tasks that fits into a single one.
- name: debug decoded value of bind_password
  debug:
    msg: "{{ bind_password | b64decode }}"
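If the decoded value is needed by later tasks rather than just for debugging, it can be stored once with set_fact. This is a minimal sketch, not from the original answer; the fact name `decoded_bind_password` is made up for illustration:

```yaml
- name: store decoded password for later tasks
  set_fact:
    decoded_bind_password: "{{ bind_password | b64decode }}"
  no_log: true  # keep the plaintext password out of the task output

- name: use the decoded value without printing it
  debug:
    msg: "decoded password has {{ decoded_bind_password | length }} characters"
```

Unlike an exported shell variable, a fact set this way survives across tasks (and plays) for that host.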

How to get unmounted device from Ansible facts

I would like to implement the following via an Ansible playbook:
1. Get Ansible host facts.
2. Run through ansible_mounts.device using ansible_devices.
3. If any device is not in ansible_mounts.device, print it into a file.
Below is my playbook:
- hosts: all
  become: true
  tasks:
    - name: list all mounted device
      shell: /bin/echo {{ item.device }} >> /root/mounted
      with_items: "{{ ansible_mounts }}"
      register: mounted_device
    - name: list all umount disks
      shell: /bin/echo {{ item }}
      with_items: "{{ ansible_devices.keys() }}"
      when: '{{ item }} not in {{ mounted_device }}'
However, mounted_device is always a list of all the information in the ansible_mounts element, whereas I thought it should be a list of devices like "/dev/xvda1". (In /root/mounted it actually is "/dev/xvda1".)
Can anyone please help with this? Or is there a more elegant way to achieve the goal?
Whilst you could get something to work using the approach you are taking, I would not recommend it as it will be complicated and fragile.
AWS provides a special API endpoint that will expose information about your running instance. This endpoint is accessible (from your running instance) at http://169.254.169.254.
Information about block devices is located at http://169.254.169.254/latest/meta-data/block-device-mapping/ which will give you a list of block devices. The primary block device is named 'ami', and then any subsequent EBS volumes are named 'ebs2', 'ebs3', ..., 'ebsn'. You can then visit http://169.254.169.254/latest/meta-data/block-device-mapping/ebs2 which will simply return the OS device name mapped to that block device (i.e. 'sdb').
Taking this info, here is some example code to access the data for the first additional EBS volume:
- name: Set EBS name to query
  set_fact:
    ebs_volume: ebs2

- name: Get device mapping data
  uri:
    url: "http://169.254.169.254/latest/meta-data/block-device-mapping/{{ ebs_volume }}"
    return_content: yes
  register: ebs_volume_data

- name: Display returned data
  debug:
    msg: "{{ ebs_volume_data.content }}"
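For the original, cloud-agnostic goal, the set of unmounted devices can also be computed purely from the gathered facts with the `difference` filter. A sketch, assuming standard Linux device naming (sdX/xvdX; the regex would need adjusting for NVMe names):

```yaml
- name: compute devices that have no mounted partition
  set_fact:
    unmounted_devices: >-
      {{ ansible_devices.keys() | list
         | difference(ansible_mounts | map(attribute='device')
                      | map('regex_replace', '^/dev/([a-z]+).*$', '\1')
                      | list) }}

- name: write unmounted devices to a file
  copy:
    content: "{{ unmounted_devices | join('\n') }}"
    dest: /root/unmounted
```

This avoids the per-item shell tasks entirely: the comparison happens in Jinja2, and a single copy task writes the result.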

Gitlab CI Review Apps - get information from a deployment script back into a gitlab ci environment variable

I use GitLab CI for automated tests. Now I have extended it to allow for review apps deployed on DigitalOcean droplets via an Ansible playbook.
This is working very well, but I need to get a variable from Ansible back into .gitlab-ci.yml, and I can't find a way to do it.
.gitlab-ci.yml
Deploy for Review:
  before_script: []
  stage: review
  script: 'cd /home/playbooks/oewm/deployment && ansible-playbook -i inventories/review --extra-vars "do_name=$CI_PIPELINE_ID api_git_branch=$CI_BUILD_REF_NAME" digitalocean.yml'
  environment:
    name: review/$CI_BUILD_REF_NAME
    url: http://$IP_FROM_ANSIBLE
    on_stop: "Stop Review"
  only:
    - branches
  when: manual
  tags:
    - deploy
the relevant parts from the playbook:
- name: Create DO Droplet
  delegate_to: localhost
  digital_ocean:
    state: present
    command: droplet
    name: "review-{{ do_name }}"
    api_token: "{{ do_token }}"
    region_id: "{{ do_region }}"
    image_id: "{{ do_image }}"
    size_id: "{{ do_size }}"
    ssh_key_ids: "{{ do_ssh }}"
    wait_timeout: 500
  register: my_droplet

- name: print info about droplet
  delegate_to: localhost
  debug:
    msg: "ID is {{ my_droplet.droplet.id }} IP is {{ my_droplet.droplet.ip_address }}"
So how can I get the droplet ID and IP to GitLab CI?
(The ID is needed for the later stop action, the IP to be shown to the developer.)
Ansible is a YAML-configured scripting tool and a nearly Turing-complete automation environment in its own right. Why not have it write a file called ./ip_address.sh somewhere, and then dot-include that .sh into your GitLab CI job?
The very top level of all this, in .gitlab-ci.yml would have this:
script:
  - ./run_ansible.sh ./out/run_file_generated_from_ansible.sh
  - . ./out/run_file_generated_from_ansible.sh
  - echo $IP_FROM_ANSIBLE
environment:
  name: review/$CI_BUILD_REF_NAME
  url: http://$IP_FROM_ANSIBLE
  on_stop: "Stop Review"
Writing the two shell scripts above is left as an exercise for the reader. The magic happens inside the Ansible playbook, which is really just a script, where YOU "export a variable to disk" with the filename ./out/run_file_generated_from_ansible.sh.
What you didn't make clear is what you need to do in GitLab CI with that variable, where it ends up, and what happens next. So above, I'm just showing a way you could "export" an IP address via a temporary on-disk file.
You could save that exported value as an artifact and capture it in other stages as well; such "artifact exports" can be passed among stages if you put them all in a directory called ./out and declare an artifacts statement in .gitlab-ci.yml.
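A minimal sketch of the Ansible side of this (the filename matches the one assumed above; the task is illustrative, not from the original answer):

```yaml
# In the playbook: write the droplet IP into a dot-includable shell file
# on the CI runner (my_droplet is the registered droplet result).
- name: export droplet IP for GitLab CI
  delegate_to: localhost
  copy:
    content: "export IP_FROM_ANSIBLE={{ my_droplet.droplet.ip_address }}\n"
    dest: ./out/run_file_generated_from_ansible.sh
    mode: "0755"
```

In .gitlab-ci.yml, an `artifacts: paths: [out/]` declaration on the job would then make the file available to later stages.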
I finally got the setup to run. My solution uses AWS Route 53 for dynamic hostname generation. (A problem I had ignored too long: I needed hostnames for the different review apps.)
Step 1:
Build the hostname dynamically. For that I used $CI_PIPELINE_ID. I created a subdomain on Route 53; for this example we call it review.mydomain.com. The Ansible playbook takes the IP from the created droplet and creates a record on Route 53 with the pipeline ID: 1234.review.mydomain.com.
Now my .gitlab-ci.yml knows this hostname (because it can build it at any time), so there is no more need to get the DigitalOcean droplet IP out of the Ansible script.
Step 2:
After review, the user should be able to stop/destroy the droplet. For this I need the droplet ID I get when the droplet is created.
But the destroy is a different playbook, which will run later, invoked by a developer.
So I need a way to store variables somewhere.
But wait: now that I know which host it is, I can just create a facts file on that host, storing the ID for me. When I need to destroy the host, Ansible provides me with the facts, and I know the ID.
In the playbook it looks like this:
Role: digitalocean
---
- name: Create DO Droplet
  delegate_to: localhost
  digital_ocean:
    state: present
    command: droplet
    name: "oewm-review-{{ do_name }}"
    api_token: "{{ do_token }}"
    region_id: "{{ do_region }}"
    image_id: "{{ do_image }}"
    size_id: "{{ do_size }}"
    ssh_key_ids: "{{ do_ssh }}"
    wait_timeout: 500
  register: my_droplet

- name: print info about droplet
  delegate_to: localhost
  debug:
    msg: "DO-ID:{{ my_droplet.droplet.id }}"

- name: print info about droplet
  delegate_to: localhost
  debug:
    msg: "DO-IP:{{ my_droplet.droplet.ip_address }}"

# DNS
- name: Get existing host information
  route53:
    command: get
    zone: "{{ r53_zone }}"
    record: "{{ do_name }}.review.mydomain.com"
    type: A
    aws_access_key: "{{ r53_access_key }}"
    aws_secret_key: "{{ r53_secret_key }}"
  register: currentip

- name: Add DNS Record for Web-Application
  route53:
    command: create
    zone: "{{ r53_zone }}"
    record: "{{ do_name }}.review.mydomain.com"
    type: A
    ttl: 600
    value: "{{ my_droplet.droplet.ip_address }}"
    aws_access_key: "{{ r53_access_key }}"
    aws_secret_key: "{{ r53_secret_key }}"
  when: currentip.set.value is not defined

- name: Add DNS Record for API
  route53:
    command: create
    zone: "{{ r53_zone }}"
    record: "api.{{ do_name }}.review.mydomain.com"
    type: A
    ttl: 600
    value: "{{ my_droplet.droplet.ip_address }}"
    aws_access_key: "{{ r53_access_key }}"
    aws_secret_key: "{{ r53_secret_key }}"
  when: currentip.set.value is not defined

- name: Add new droplet to host group
  add_host:
    hostname: "{{ my_droplet.droplet.ip_address }}"
    groupname: api,web-application
    ansible_user: root
    api_domain: "api.{{ do_name }}.review.mydomain.com"
    app_domain: "{{ do_name }}.review.mydomain.com"

- name: Wait until SSH is available on {{ my_droplet.droplet.ip_address }}
  delegate_to: localhost
  wait_for:
    host: "{{ my_droplet.droplet.ip_address }}"
    port: 22
    delay: 5
    timeout: 320
    state: started
Playbook digitalocean.yml:
---
- name: Launch DO Droplet
  hosts: all
  run_once: true
  gather_facts: false
  roles:
    - digitalocean

- name: Store Facts
  hosts: api
  tasks:
    - name: Ensure facts directory exists
      file:
        path: "/etc/ansible/facts.d"
        state: directory
    - name: store variables on host for later fact gathering
      template:
        src: "{{ playbook_dir }}/roles/digitalocean/templates/digitalocean.fact.js2"
        dest: /etc/ansible/facts.d/digitalocean.fact
        mode: 0644

- name: Deploy
  hosts: api
  roles:
    - deployroles
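The digitalocean.fact template itself isn't shown in the original. Given the later lookup of ansible_local.digitalocean.DO_ID, its content is presumably a small JSON document along these lines (assumed, not from the original):

```yaml
{
  "DO_ID": "{{ my_droplet.droplet.id }}"
}
```

Files in /etc/ansible/facts.d/ with a .fact extension containing JSON are picked up automatically by fact gathering and exposed under ansible_local.<filename>.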
Playbook digitalocean_destroy.yml:
- name: Add Host to Inventory
  hosts: all
  vars:
    r53_zone: review.mydomain.com
    r53_access_key: "xxxx"
    r53_secret_key: "xxxx"
  tasks:
    - name: Get existing DNS host information
      route53:
        command: get
        zone: "{{ r53_zone }}"
        record: "{{ do_name }}.review.mydomain.com"
        type: A
        aws_access_key: "{{ r53_access_key }}"
        aws_secret_key: "{{ r53_secret_key }}"
      register: currentip

    - name: Remove DNS Record for Web-Application
      route53:
        command: delete
        zone: "{{ r53_zone }}"
        record: "{{ do_name }}.review.mydomain.com"
        type: A
        ttl: 600
        value: "{{ currentip.set.value }}"  # my_droplet is not defined in this playbook
        aws_access_key: "{{ r53_access_key }}"
        aws_secret_key: "{{ r53_secret_key }}"
      when: currentip.set.value is defined

    - name: Remove DNS Record for API
      route53:
        command: delete
        zone: "{{ r53_zone }}"
        record: "api.{{ do_name }}.review.mydomain.com"
        type: A
        ttl: 600
        value: "{{ currentip.set.value }}"
        aws_access_key: "{{ r53_access_key }}"
        aws_secret_key: "{{ r53_secret_key }}"
      when: currentip.set.value is defined

    - name: Add droplet to host group
      add_host:
        hostname: "{{ do_name }}.review.mydomain.com"
        groupname: api,web-application
        ansible_user: root

- name: Digitalocean
  hosts: api
  vars:
    do_token: xxxxx
  tasks:
    - name: Delete Droplet
      delegate_to: localhost
      digital_ocean:
        state: deleted
        command: droplet
        api_token: "{{ do_token }}"
        id: "{{ ansible_local.digitalocean.DO_ID }}"
The relevant parts from .gitlab-ci.yml:
Deploy for Review:
  before_script: []
  stage: review
  script:
    - 'cd /home/playbooks/myname/deployment && ansible-playbook -i inventories/review --extra-vars "do_name=$CI_PIPELINE_ID api_git_branch=$CI_BUILD_REF_NAME" digitalocean.yml'
  environment:
    name: review/$CI_BUILD_REF_NAME
    url: http://$CI_PIPELINE_ID.review.mydomain.com
    on_stop: "Stop Review"
  only:
    - branches
  when: manual
  tags:
    - deploy

Stop Review:
  before_script: []
  stage: review
  variables:
    GIT_STRATEGY: none
  script:
    - 'cd /home/playbooks/myname/deployment && ansible-playbook -i inventories/review --extra-vars "do_name=$CI_PIPELINE_ID" digitalocean_destroy.yml'
  when: manual
  environment:
    name: review/$CI_BUILD_REF_NAME
    action: stop
  only:
    - branches
  tags:
    - deploy

# STAGING
Deploy to Staging:
  before_script: []
  stage: staging
  script:
    - 'cd /home/playbooks/myname/deployment && ansible-playbook -i inventories/staging --extra-vars "api_git_branch=$CI_BUILD_REF_NAME" deploy.yml'
  environment:
    name: staging
    url: https://staging.mydomain.com
  when: manual
  tags:
    - deploy

How to add MAILTO to a cron.d cron_file in Ansible?

I'm using Ansible to create a cron.d file using the cron_file parameter.
But how can I add a MAILTO to the file?
It seems the env=true is only for crontab, not cron.d files. Am I wrong?
Since Ansible 2.0 there is the cronvar module:
# modify /etc/cron.d/sweep_for_rebel_code
- cronvar:
    name: MAILTO
    value: vader@evilempire.com
    cron_file: sweep_for_rebel_code
See the official documentation at https://docs.ansible.com/ansible/latest/modules/cronvar_module.html
This works for me with Ansible 2.1:
- cron:
    cron_file: ansible_test
    env: "{{ item.env }}"
    name: "{{ item.name }}"
    job: "{{ item.job }}"
    user: vagrant
  with_items:
    - env: true
      name: MAILTO
      job: test@test.com
    - env: false
      name: cmd
      job: /bin/true
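For reference, the loop above should produce roughly this /etc/cron.d/ansible_test file (approximate; the exact marker comments Ansible writes may differ between versions):

```
MAILTO=test@test.com
#Ansible: cmd
* * * * * vagrant /bin/true
```

With env: true the name/job pair becomes an environment line at the top of the file; with env: false it becomes a normal job entry (defaulting to every minute, since no schedule fields were given), prefixed with the user because this is a cron.d file.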
