How to set the Ansible Mitogen strategy in a playbook? - python-3.x

I'm trying to get the Mitogen strategy working in my Ansible playbook, following the Mitogen tutorial. My Python version is 3.6 and my Ansible version is 2.7.10.
Mitogen is installed in /usr/local/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy.
When I try to add the keys to my playbook as:
- hosts: "{{ host_group | default('host-list') }}"
  ...
  strategy: mitogen_linear
  strategy_plugins: /usr/local/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy
I get the following error:
ERROR! 'strategy_plugins' is not a valid attribute for a Play
Also, I tried configuring it as an environment variable when executing the playbook:
command = ['ANSIBLE_STRATEGY_PLUGINS=/usr/local/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy', 'ANSIBLE_STRATEGY=mitogen_linear', 'ansible-playbook', '-ihosts', 'ansible_scripts/inventory.yml']
process = subprocess.Popen(command, stdout=subprocess.PIPE)
This fails with:
FileNotFoundError: [Errno 2] No such file or directory: 'ANSIBLE_STRATEGY_PLUGINS=/usr/local/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy': 'ANSIBLE_STRATEGY_PLUGINS=/usr/local/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy'
How can I correctly configure the Mitogen strategy for my playbook? How can I make it work?

The "No such file or directory" error suggests a misconfiguration. Try the configuration file instead. For example, put the following into the [defaults] section:
$ grep strategy /etc/ansible/ansible.cfg
strategy_plugins = /usr/local/ansible/plugins/ansible_mitogen/plugins/strategy
strategy = mitogen_linear
Adapt the configuration file and paths to your needs. (This works for me with Ansible 2.7.9 and mitogen-0.2.6.)
FWIW, if you want to automate the installation and configuration, see plugins.yml and the example vars.
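As for the FileNotFoundError: subprocess.Popen treats the first element of the argument list as the executable to run, so KEY=value assignments cannot be prepended to the command. They belong in the env parameter instead. A minimal sketch of the corrected call (paths and arguments taken from the question):

import os
import subprocess

# Inherit the current environment and add the Ansible settings,
# instead of passing them as argv entries.
env = os.environ.copy()
env['ANSIBLE_STRATEGY_PLUGINS'] = '/usr/local/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy'
env['ANSIBLE_STRATEGY'] = 'mitogen_linear'

command = ['ansible-playbook', '-i', 'hosts', 'ansible_scripts/inventory.yml']
process = subprocess.Popen(command, stdout=subprocess.PIPE, env=env)
stdout, _ = process.communicate()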

Related

Issues with Running Ansible Playbook on Linux T2 Instance Localhost

I am trying to figure out why my Ansible playbook is not working. I have tried 20 different ways of indenting it, with no luck.
I am currently launching an Amazon Linux t2 instance and then installing Ansible using the following commands:
sudo yum update -y
sudo amazon-linux-extras install ansible2 -y
Then I create a playbook first.yml using "vim first.yml", and the playbook looks like this:
---
- name: update web servers
  hosts: localhost
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
I run the playbook using "ansible-playbook first.yml" and get the following error:
ERROR! We were unable to read either as JSON nor YAML, these are the
errors we got from each: JSON: No JSON object could be decoded
Syntax Error while loading YAML. mapping values are not allowed in
this context
The error appears to be in '/home/ec2-user/first.yml': line 7, column
8, but may be elsewhere in the file depending on the exact syntax
problem.
The offending line appears to be:
tasks:
^ here
I would appreciate any help, thank you!
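For reference, ansible-playbook can validate a playbook without running it, which usually pinpoints the offending indentation; under the setup described above, the check would be:

ansible-playbook first.yml --syntax-check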

Handle YUM package installation deployment for target environments (dev/prod/systest) using an Ansible playbook

I need to handle the YUM package installation deployment process, with different versions/packages, for target environments (dev/prod/systest) using an Ansible playbook.
NOTE: I have gone through the group_vars and host_vars concepts but did not understand whether multiple packages with different versions can be handled for deployment in multiple environments based on input.
As you found out, this separation can be achieved by using group_vars and host_vars. These are loaded relative to the path of the inventory file.
Simple example tasks like the ones below will install different versions in the dev and prod environments.
Example playbook1.yml:
- hosts: appservers
  tasks:
    - name: install app-a
      yum:
        name: 'app-a-{{ app_a_version }}'
    - name: install app-b
      yum:
        name: 'app-b-{{ app_b_version }}'
Consider the example directory structure separating each environment's inventory:
dev/hosts
prod/hosts
systest/hosts
Each inventory file will contain hosts/groups for that environment.
Dev environment:
Example dev/hosts:
[appservers]
appserver1.dev
appserver2.dev
Then we can have variables specific to this environment in dev/group_vars/appservers.yml:
---
app_a_version: 1.1
app_b_version: 5.5
This will install app-a-1.1 and app-b-5.5 when run as:
ansible-playbook playbook1.yml -i dev/hosts
Prod environment:
Example prod/hosts:
[appservers]
appserver1.prod
appserver2.prod
And variables defined in prod/group_vars/appservers.yml:
app_a_version: 1.0
app_b_version: 5.0
In prod it will install app-a-1.0 and app-b-5.0 when run as:
ansible-playbook playbook1.yml -i prod/hosts
host_vars work in a similar way and can be used to provide variables specific to each host of the inventory rather than to groups.
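For instance, a hypothetical dev/host_vars/appserver1.dev.yml could pin a different version for that single host, since host_vars take precedence over group_vars:

---
# Overrides app_a_version from dev/group_vars/appservers.yml for this host only
app_a_version: 1.2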

Playbook failing when using Azure dynamic inventory

As the title suggests, I'm using an Azure dynamic inventory file and am having an issue running a playbook against the collected inventory.
I'm using Ansible 2.9.1 and followed the instructions found here to set up the inventory file.
$ ansible --version
ansible 2.9.1
  config file = None
  configured module search path = ['/home/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/myuser/.local/lib/python3.6/site-packages/ansible
  executable location = /home/myuser/.local/bin/ansible
  python version = 3.6.9 (default, Sep 11 2019, 16:40:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
My inventory file:
plugin: azure_rm
include_vm_resource_groups:
  - mytestrg
auth_source: cli
cloud_environment: AzureUSGovernment
hostvar_expressions:
  ansible_connection: "'winrm'"
  ansible_user: "'azureuser'"
  ansible_password: "'Password1'"
  ansible_winrm_server_cert_validation: "'ignore'"
keyed_groups:
  - prefix: some_tag
    key: tags.sometag | default('none')
exclude_host_filters:
  - powerstate != 'running'
Simple ad-hoc commands, like ping, are successful when using the inventory file. What I'm not able to get working, though, is running a playbook against it.
My playbook:
- hosts: all
  name: Run whoami
  tasks:
    - win_command: whoami
      register: whoami_out
    - debug:
        var: whoami_out
Command I'm using to run the playbook:
ansible-playbook -i ./inventory_azure_rm.yaml whoami.yaml
Regardless of the hosts I target the playbook against, it fails with:
[WARNING]: Could not match supplied host pattern, ignoring:
playbooks/whoami.yaml
[WARNING]: No hosts matched, nothing to do
Any advice on how I can get past this? I appreciate any assistance!
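One thing worth checking in a case like this is what the dynamic inventory plugin actually returns; for example:

ansible-inventory -i ./inventory_azure_rm.yaml --graph

If the expected hosts and groups show up there, the problem likely lies in how the playbook is being invoked rather than in the inventory itself.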

puppet-5 hiera not working

I recently upgraded Puppet from version 3 to version 5. Everything is working fine with the new version, but the Hiera configuration for Puppet 5 is not working as expected. I think I am missing something that would deploy the changes to the remote node. Please advise what I should do here. Below are the configurations for my setup.
1) hiera.yaml
cat /etc/puppetlabs/code/environments/hiera.yaml
version: 5
hierarchy:
  - name: "Master"
    path: "environments/%{environment}/data/%{trusted.certname}.yaml"
    data_hash: yaml_data
    datadir: /etc/puppetlabs/code/
2) My environment YAML files are kept at:
cat /etc/puppetlabs/code/environments/staging/data/puppetsr7.demo.com.yaml
demo::configuration::phpini::memory_limit: '64'
3) But when I run the command on my remote node, nothing changes:
/opt/puppetlabs/bin/puppet agent
4) To troubleshoot, I ran the command:
puppet lookup --explain demo::configuration::phpini::memory_limit --environment staging --node puppetsr7.demo.com
and got the following output:
Searching for "lookup_options"
  Global Data Provider (hiera configuration version 5)
    Using configuration "/etc/puppetlabs/code/environments/hiera.yaml"
    Hierarchy entry "Master"
      Path "/etc/puppetlabs/code/environments/staging/data/puppetsr7.demo.com.yaml"
        Original path: "environments/%{environment}/data/%{trusted.certname}.yaml"
        Found key: "lookup_options" value: nil
  Module data provider for module "demo" not found
Searching for "demo::configuration::phpini::memory_limit"
  Global Data Provider (hiera configuration version 5)
    Using configuration "/etc/puppetlabs/code/environments/hiera.yaml"
    Hierarchy entry "Master"
      Path "/etc/puppetlabs/code/environments/staging/data/puppetsr7.demo.com.yaml"
        Original path: "environments/%{environment}/data/%{trusted.certname}.yaml"
        Found key: "demo::configuration::phpini::memory_limit" value: "64"
The CLI lookup shows the proper value, 64, which I need applied to php.ini on the remote node, changing the value from 512 to 64.
But I don't know how to proceed further from here, as I am stuck now. Please help me troubleshoot this.
What I did is keep the required class in the site.pp file as well, since I want it executed through Hiera data:
"demo::configuration::phpini::memory_limit: '64'" in the Hiera file and demo::configuration::phpini::memory_limit in site.pp.
Hoping that someone can help with this.
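For context, Hiera's automatic parameter lookup only applies to classes that are actually declared in the catalog, so the class has to be included for the value to reach the node. A minimal site.pp sketch (class and node names assumed from the lookup key in the question):

# site.pp -- assumes demo::configuration::phpini exposes a memory_limit
# parameter, which Hiera then binds from the key shown above
node 'puppetsr7.demo.com' {
  include demo::configuration::phpini
}

With the class declared, running the agent in the foreground with puppet agent -t should apply the '64' value that the lookup already resolves.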

How to store ansible_become_pass in a vault and how to use it?

I am a newbie to ansible and I am using a very simple playbook to issue sudo apt-get update and sudo apt-get upgrade on a couple of servers.
This is the playbook I am using:
---
- name: Update Servers
  hosts: my-servers
  become: yes
  become_user: root
  tasks:
    - name: update packages
      apt: update_cache=yes
    - name: upgrade packages
      apt: upgrade=dist
and this is an extract from my ~/.ansible/inventory/hosts file:
[my-servers]
san-francisco ansible_host=san-francisco ansible_ssh_user=user ansible_become_pass=<my_sudo_password_for_user_on_san-francisco>
san-diego ansible_host=san-diego ansible_ssh_user=user ansible_become_pass=<my_sudo_password_for_user_on_san-diego>
This is what I get if I launch the playbook:
$ ansible-playbook update-servers-playbook.yml
PLAY [Update Servers] **********************************************************
TASK [setup] *******************************************************************
ok: [san-francisco]
ok: [san-diego]
TASK [update packages] *********************************************************
ok: [san-francisco]
ok: [san-diego]
TASK [upgrade packages] ********************************************************
ok: [san-francisco]
ok: [san-diego]
PLAY RECAP *********************************************************************
san-francisco : ok=3 changed=0 unreachable=0 failed=0
san-diego : ok=3 changed=0 unreachable=0 failed=0
What is bothering me is the fact that I have the password for my user user stored in plaintext in my ~/.ansible/inventory/hosts file.
I have read about vaults, and I have also read about best practices for variables and vaults, but I do not understand how to apply this to my very minimal use case.
I also tried using lookups. These generally work in the inventory file as well, and I am able to do something like this:
[my-servers]
san-francisco ansible_host=san-francisco ansible_ssh_user=user ansible_become_pass="{{ lookup('env', 'ANSIBLE_BECOME_PASSWORD_SAN_FRANCISCO') }}"
where the password would be stored in an environment variable called ANSIBLE_BECOME_PASSWORD_SAN_FRANCISCO; but there is no way to look up variables in vaults, as far as I know.
So, how could I organize my files such that I can look up my passwords from somewhere and have them safely stored?
You need to create some vaulted variable files and then either include them in your playbooks or on the command line.
If you change your inventory file to use a variable for the become pass, this variable can be vaulted:
[my-servers]
san-francisco ansible_host=san-francisco ansible_ssh_user=user ansible_become_pass='{{ sanfrancisco_become_pass }}'
san-diego ansible_host=san-diego ansible_ssh_user=user ansible_become_pass='{{ sandiego_become_pass }}'
Then use ansible-vault create vaulted_vars.yml to create a vaulted file with the following contents:
sanfrancisco_become_pass: <my_sudo_password_for_user_on_san-francisco>
sandiego_become_pass: <my_sudo_password_for_user_on_san-diego>
Then either include the vaulted file as extra vars like this:
ansible-playbook -i ~/.ansible/inventory/hosts playbook.yml --ask-vault-pass -e @~/.ansible/inventory/vault_vars
Or include the vars file in your playbook with an include_vars task:
- name: include vaulted variables
  include_vars: ~/.ansible/inventory/vault_vars
The best way to solve this problem is to use host_vars. The easiest setup is to just put the ansible_become_pass in Vault encrypted files in the corresponding host_vars directories like this:
myplaybook.yml
host_vars/onehost.com/crypted
host_vars/otherhost.com/crypted
In the crypted files you place the assignment of the ansible_become_pass variable:
ansible_become_pass: SuperSecre3t
Create the file with ansible-vault create, edit it with ansible-vault edit.
Following the advice in the Ansible docs, you need to create an additional file per host that assigns ansible_become_pass from the crypted variable, which has a different name. That way it is possible to search for ansible_become_pass in the project files.
myplaybook.yml
host_vars/onehost.com/plain
host_vars/onehost.com/crypted
host_vars/otherhost.com/plain
host_vars/otherhost.com/crypted
where a plain file contains something like this:
ansible_become_pass: "{{ vaulted_become_pass }}"
and the crypted file sets vaulted_become_pass as shown above.
All crypted files must be encrypted with the same key and ansible-playbook must be called with --ask-vault-pass.
Start by setting up an inventory with your own relevant settings. These settings assume that you have already set up an RSA key pair to access your server; you should be able to ssh into your server with ssh remoteuser@155.42.88.199:
[local]
localhost ansible_connection=local
[remote]
155.42.88.199 ansible_connection=ssh ansible_user=remoteuser ansible_become_user=root ansible_become=yes ansible_ssh_private_key_file=<private_key_file_path>
You need to store your root password in a file (I called mine 'my_vault.yml'). You can do this with the following command:
~/.ansible$ ansible-vault create my_vault.yml
Simply store your remote server password as follows (do not include the '<>' tags):
su_password: <myreallyspecialpassword>
The password will now be encrypted by Vault, and the only way to view it is to enter the following command:
~/.ansible$ ansible-vault edit my_vault.yml
We now need to include our 'my_vault.yml' file in our playbook. We can do this by using vars_files to get the value of su_password. We then create a var titled ansible_become_pass, which is given the value from our my_vault.yml file, allowing our remoteuser to su once on the server.
---
- name: My Awesome Playbook
  hosts: remote
  become: yes
  vars_files:
    - ~/.ansible/my_vault.yml
  vars:
    ansible_become_pass: '{{ su_password }}'
  roles:
    - some_awesome_role
As we are using Vault, each time we want to run this playbook we need to use the following command:
ansible-playbook myawesome_playbook.yml --ask-vault-pass
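If typing the vault password on every run becomes tedious, ansible-vault also accepts a password file; a sketch (file name assumed):

echo 'my-vault-password' > ~/.vault_pass.txt
chmod 600 ~/.vault_pass.txt
ansible-playbook myawesome_playbook.yml --vault-password-file ~/.vault_pass.txt

Keep that file out of version control, since it holds the vault password in plaintext.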
