I have a Python script that executes an ansible-playbook run programmatically; Python calls the Ansible API, but the play is not getting executed. I believe it is because start_at_task is set to None.
What should the value of start_at_task be? Could somebody help me?
Ansible Version: 2.9.9
Python Version: 3.6.8
This is my run_playbook method
def run_playbook(play_book, extra_vars, servers, inventory_path, tags='all'):
    base_playbook_path = os.environ.get('PLAYBOOK_PATH',
                                        '/hom/playbooks/')
    playbook_path = base_playbook_path + play_book
    context.CLIARGS = ImmutableDict(tags=tags, connection='paramiko', remote_user='xyz', listtags=False, listtasks=False,
                                    listhosts=False, syntax=False, module_path=None, forks=100,
                                    private_key_file='/var/lib/jenkins/.ssh/xyz.pem', ssh_common_args=None, ssh_extra_args=None,
                                    sftp_extra_args=None, scp_extra_args=None, become=None, become_method=None,
                                    become_user=None, verbosity=True, check=False, start_at_task=None)
    loader = DataLoader()
    loader.load_from_file(base_playbook_path + '.vault_pass.txt')
    inventory = InventoryManager(loader=loader, sources=inventory_path)
    inventory.subset(servers)
    variable_manager = VariableManager(loader=loader, inventory=inventory)
    variable_manager._extra_vars = extra_vars
    passwords = {}
    playbook = PlaybookExecutor(playbooks=[playbook_path],
                                inventory=inventory,
                                variable_manager=variable_manager,
                                loader=loader, passwords=passwords)
    result = playbook.run()
    return result
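For reference, a call to this method might look like the following (a minimal sketch; the playbook name, extra vars, host subset, and inventory path are hypothetical values, not from the question). Note that start_at_task=None mirrors the ansible-playbook CLI default and simply means "start at the first task", so it should not by itself prevent the play from running:

# Hypothetical invocation of run_playbook() above; all values are examples.
result = run_playbook(
    play_book='kernel.yml',            # resolved against PLAYBOOK_PATH
    extra_vars={'env': 'staging'},     # merged into the play's variables
    servers='vm1',                     # host pattern passed to inventory.subset()
    inventory_path='/etc/ansible/hosts',
)
print('PlaybookExecutor exit code: %s' % result)  # 0 means all hosts succeeded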
and this is a simple playbook that prints the kernel version:
---
- name: Get Kernel Versions
  gather_facts: no
  hosts: all
  become: yes
  become_method: sudo
  tasks:
    - name: Fetch Kernel Version
      shell: cat /etc/redhat-release
      register: os_release
    - debug:
        msg: "{{ os_release.stdout }}"
Output:
PLAY [Get Kernel Versions] ****************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************
Related
I am trying to run a task with the no_log: true attribute to keep a password out of the logs, but I am getting a "the output has been hidden due to the fact that 'no_log: true' was specified for this result" failure.
Here is the task:
- name: set variable for secured vault password
  set_fact:
    secure_vault_password: "{{ secure_vault_password_string.stdout_lines[0] }}"
    vault_password: ''
  when: secure_vault_password == ''
  no_log: true
I have tried commenting out the no_log line and also changing true to false, but my playbook still fails. Any ideas?
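For what it's worth, that quoted message means the task itself failed and Ansible censored the details. A common debugging pattern (a sketch, not from the original task) is to gate no_log behind a variable so the real error can be revealed temporarily:

- name: set variable for secured vault password
  set_fact:
    secure_vault_password: "{{ secure_vault_password_string.stdout_lines[0] }}"
  when: secure_vault_password == ''
  # Run with -e debug_secrets=true to see the underlying failure, then turn it back off.
  no_log: "{{ not (debug_secrets | default(false)) }}"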
Running Python 3.6 on Red Hat.
I've installed an extremely complicated project. Five engineers worked on it for years, but they are all gone now, so I have no one to ask.
At the top level of the project:
setup.py
Inside of that:
# Python command line utilities will be installed in a PATH-accessible bin/
entry_points={
    'console_scripts': [
        'blueflow-connector-aims = app.connectors.aims.__main__:cli',
        'blueflow-connector-csv = app.connectors.csv.__main__:cli',
        'blueflow-connector-discovery = app.connectors.discovery.__main__:cli',
        'blueflow-connector-fingerprint = app.connectors.fingerprint.__main__:cli',
        'blueflow-connector-mock = app.connectors.mock.__main__:cli',
        'blueflow-connector-nessusimport = app.connectors.nessusimport.nessusimport:cli',
        'blueflow-connector-netflow = app.connectors.netflow.__main__:cli',
        'blueflow-connector-passwords = app.connectors.passwords.__main__:cli',
        'blueflow-connector-portscan = app.connectors.portscan.__main__:cli',
        'blueflow-connector-pulse = app.connectors.pulse.pulse:cli',
        'blueflow-connector-qualysimport = app.connectors.qualysimport.qualysimport:cli',
        'blueflow-connector-sleep = app.connectors.sleep.__main__:cli',
        'blueflow-connector-splunk = app.connectors.splunk.__main__:cli',
        'blueflow-connector-tms = app.connectors.tms.__main__:cli',
    ]
},
I ran:
pipenv shell
to create a shell.
If I try a console script:
blueflow-connector-mock
I get:
bash: blueflow-connector-mock: command not found
The lookup falls through to bash, which is clearly a mistake. I also tried:
python3 blueflow-connector-mock
which gives me:
python3: can't open file 'blueflow-connector-mock': [Errno 2] No such file or directory
How do I activate these console scripts, so they will actually run?
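Note that setuptools generates console_scripts wrappers only when the package is installed; pipenv shell alone does not create them. A sketch of the usual fix (assuming the Pipfile sits next to setup.py):

# Install the project in editable mode into the pipenv virtualenv;
# this generates the console_scripts wrappers in the venv's bin/ directory.
pipenv install -e .

# Afterwards, inside `pipenv shell` (or via `pipenv run`), the script resolves:
blueflow-connector-mock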
I'm getting a ModuleNotFoundError: No module named 'encodings' error from uwsgi when the virtualenv path is different from the project home.
Environment:
OS: debian bullseye
uwsgi version: 2.0.19.1-debian
python version: 3.9
Error scenario:
virtualenv: /home/venvs/py39
project home: /opt/local/apps/myproject
However, the error does not appear when the project home is inside the virtualenv, i.e.:
virtualenv: /home/venvs/py39
project home: /home/venvs/py39/apps/myproject
The failing configuration:
[uwsgi]
project-home = /opt/local/apps/MyProject
plugins-dir = /usr/lib/uwsgi/plugins
plugin = python39
pythonpath = %(project-home)
virtualenv = /home/venvs/py39
master = 1
chdir = %(project-home)
socket = /var/run/uwsgi/%n.sock
chmod-socket = 666
manage-script-name = True
python-path = %(project-home)
module = wsgi
callable = app
uid = www-data
gid = www-data
processes = 8
log-date = true
The error message:
Python version: 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
!!! Python Home is not a directory: /home/venvs/py39 !!!
Set PythonHome to /home/venvs/py39
Python path configuration:
PYTHONHOME = '/home/venvs/py39'
PYTHONPATH = (not set)
program name = '/home/venvs/py39/bin/python'
isolated = 0
environment = 1
user site = 1
import site = 1
sys._base_executable = '/home/venvs/py39/bin/python'
sys.base_prefix = '/home/venvs/py39'
sys.base_exec_prefix = '/home/venvs/py39'
sys.platlibdir = 'lib'
sys.executable = '/home/venvs/py39/bin/python'
sys.prefix = '/home/venvs/py39'
sys.exec_prefix = '/home/venvs/py39'
sys.path = [
'/home/venvs/py39/lib/python39.zip',
'/home/venvs/py39/lib/python3.9',
'/home/venvs/py39/lib/python3.9/lib-dynload',
]
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
Note that PYTHONHOME (and PYTHONPATH) are set by uwsgi using info from the config.
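As a side note: the 'encodings' module is imported during interpreter startup, so this failure generally means Python could not find its standard library under the PYTHONHOME that uwsgi set. A quick sanity check outside uwsgi (a sketch using the paths from this question):

# Run the venv's interpreter directly; a stale or broken venv fails the same way.
/home/venvs/py39/bin/python -c "import encodings, sys; print(sys.version, sys.prefix)"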
Working configuration, i.e. when the project home is a directory inside the venv:
[uwsgi]
project-home = /home/venvs/py39/apps/MyProject
plugins-dir = /usr/lib/uwsgi/plugins
plugin = python39
pythonpath = %(project-home)
virtualenv = /home/venvs/py39
master = 1
chdir = %(project-home)
socket = /var/run/uwsgi/%n.sock
chmod-socket = 666
manage-script-name = True
python-path = %(project-home)
module = wsgi
callable = app
uid = www-data
gid = www-data
processes = 8
log-date = true
The above config is successful, and this message appears in the logs:
*** Operational MODE: preforking ***
added /home/venvs/py39/apps/MyProject to pythonpath.
*** uWSGI is running in multiple interpreter mode ***
It's as if the project path is not added to the Python path (done by uwsgi) in the first scenario but works in the second one.
Has anyone else come across this?
I did an upgrade from Debian 10 to 11; the virtualenv was still Python 3.7, so I just deleted the virtualenv, recreated it with Python 3.9, and it was working again.
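A minimal sketch of that fix (the requirements file path is hypothetical; reinstall dependencies however the project normally does):

# Remove the stale virtualenv still tied to the pre-upgrade Python 3.7
rm -rf /home/venvs/py39

# Recreate it with Python 3.9 and reinstall the application's dependencies
python3.9 -m venv /home/venvs/py39
/home/venvs/py39/bin/pip install -r /opt/local/apps/myproject/requirements.txt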
I would like to use the Ansible 2.9.9 Python API to fetch a config file from the servers in my hosts file and parse it into JSON format.
I don't know how to call an existing Ansible task using the Python API.
Going by the Ansible API documentation, how do I integrate my task with the sample code below?
Sample.py
#!/usr/bin/env python
import json
import shutil
from ansible.module_utils.common.collections import ImmutableDict
from ansible.parsing.dataloader import DataLoader
from ansible.vars.manager import VariableManager
from ansible.inventory.manager import InventoryManager
from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.plugins.callback import CallbackBase
from ansible import context
import ansible.constants as C

class ResultCallback(CallbackBase):
    """A sample callback plugin used for performing an action as results come in

    If you want to collect all results into a single object for processing at
    the end of the execution, look into utilizing the ``json`` callback plugin
    or writing your own custom callback plugin
    """
    def v2_runner_on_ok(self, result, **kwargs):
        """Print a json representation of the result

        This method could store the result in an instance attribute for retrieval later
        """
        host = result._host
        print(json.dumps({host.name: result._result}, indent=4))

# since the API is constructed for CLI it expects certain options to always be set in the context object
context.CLIARGS = ImmutableDict(connection='local', module_path=['/to/mymodules'], forks=10, become=None,
                                become_method=None, become_user=None, check=False, diff=False)

# initialize needed objects
loader = DataLoader()  # Takes care of finding and reading yaml, json and ini files
passwords = dict(vault_pass='secret')

# Instantiate our ResultCallback for handling results as they come in. Ansible expects this to be one of its main display outlets
results_callback = ResultCallback()

# create inventory, use path to host config file as source or hosts in a comma separated string
inventory = InventoryManager(loader=loader, sources='localhost,')

# variable manager takes care of merging all the different sources to give you a unified view of variables available in each context
variable_manager = VariableManager(loader=loader, inventory=inventory)

# create data structure that represents our play, including tasks, this is basically what our YAML loader does internally.
play_source = dict(
    name="Ansible Play",
    hosts='localhost',
    gather_facts='no',
    tasks=[
        dict(action=dict(module='shell', args='ls'), register='shell_out'),
        dict(action=dict(module='debug', args=dict(msg='{{shell_out.stdout}}')))
    ]
)

# Create play object, playbook objects use .load instead of init or new methods,
# this will also automatically create the task objects from the info provided in play_source
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)

# Run it - instantiate task queue manager, which takes care of forking and setting up all objects to iterate over host list and tasks
tqm = None
try:
    tqm = TaskQueueManager(
        inventory=inventory,
        variable_manager=variable_manager,
        loader=loader,
        passwords=passwords,
        stdout_callback=results_callback,  # Use our custom callback instead of the ``default`` callback plugin, which prints to stdout
    )
    result = tqm.run(play)  # most interesting data for a play is actually sent to the callback's methods
finally:
    # we always need to cleanup child procs and the structures we use to communicate with them
    if tqm is not None:
        tqm.cleanup()

    # Remove ansible tmpdir
    shutil.rmtree(C.DEFAULT_LOCAL_TMP, True)
sum.yml: generates a summary file for each host
- hosts: staging
  tasks:
    - name: pt_mysql_sum
      shell: PTDEST=/tmp/collected;mkdir -p $PTDEST;cd /tmp;wget percona.com/get/pt-mysql-summary;chmod +x pt*;./pt-mysql-summary -- --user=adm --password=***** > $PTDEST/pt-mysql-summary.txt;cat $PTDEST/pt-mysql-summary.out;
      register: result
      environment:
        http_proxy: http://proxy.example.com:8080
        https_proxy: https://proxy.example.com:8080
    - name: ansible_result
      debug: var=result.stdout_lines
    - name: fetch_log
      fetch:
        src: /tmp/collected/pt-mysql-summary.txt
        dest: /tmp/collected/pt-mysql-summary-{{ inventory_hostname }}.txt
        flat: yes
hosts file
[staging]
vm1 ansible_ssh_host=10.40.50.41 ansible_ssh_user=testuser ansible_ssh_pass=*****
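To connect the two pieces, the tasks from sum.yml can be expressed in the same play_source dict format the sample code feeds to Play().load(). A sketch under this question's own values (note that reaching the staging hosts also requires connection and credential settings in CLIARGS instead of connection='local'):

# Sketch: sum.yml rewritten as a play_source dict for Play().load() above.
inventory = InventoryManager(loader=loader, sources='hosts')  # the hosts file shown above
variable_manager = VariableManager(loader=loader, inventory=inventory)

play_source = dict(
    name="pt_mysql_sum",
    hosts='staging',
    gather_facts='no',
    tasks=[
        dict(action=dict(module='shell',
                         args='PTDEST=/tmp/collected;mkdir -p $PTDEST;...'),  # full command as in sum.yml
             register='result',
             environment=dict(http_proxy='http://proxy.example.com:8080',
                              https_proxy='https://proxy.example.com:8080')),
        dict(action=dict(module='debug', args=dict(var='result.stdout_lines'))),
        dict(action=dict(module='fetch', args=dict(
            src='/tmp/collected/pt-mysql-summary.txt',
            dest='/tmp/collected/pt-mysql-summary-{{ inventory_hostname }}.txt',
            flat='yes'))),
    ]
)
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)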
I am trying to parse a YAML input from a file:
root: {
  children : { key: "test-key", version: "{{ test_version | default( '1.0.0-SNAPSHOT' ) }}"}
}
I am using ruamel.yaml; the section of code that performs the load is configured to preserve quotes, and then I manually add a new entry:
yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
yaml.width = 4096
yaml.indent(sequence=4, offset=2)

with open(yml_file, 'r') as file:
    print("Modifying file: '%s'..." % str(file))
    data = yaml.load(file)

data['root'][new_project_name.lower()] = {'key': "%s" % new_project_name.lower(),
                                          'test_version': "{{ %s_version | default(\'1.0.0-SNAPSHOT\') }}"
                                                          % new_project_name.lower()}

with open(yml_file, 'w') as file:
    yaml.dump(data, file)
The thing is that when the file gets written with the new entry, everything ends up on the same line, so the newlines (CR LF) do not seem to be preserved (it even seems to load the file without them). Do you know if there is any way to preserve them?
The output is (everything on the same line):
root: {children : { key: "test-key", version: "{{ test_version | default( '1.0.0-SNAPSHOT' ) }}"}}
Within flow style, ruamel.yaml preserves neither comments nor spacing. If you care
about layout, so that your YAML is easier for humans to read, you should use flow
style at most for leaf nodes, if at all. That is the default dump style when using
YAML(typ='fast'). When you have nested flow style, as in your input, the flow style
on those nodes is preserved and standard formatting is applied: everything goes on
one line, except for wrapping when the line becomes too large.
Setting the indent level only affects block-style constructs.
You should change the input to leaf-node-only flow style, for better readability:
root:
  children: {key: "test-key", version: "{{ test_version | default( '1.0.0-SNAPSHOT' ) }}"}
This loads to the same data structure as your input does.
With that you can now do:
import sys
import ruamel.yaml

yml_file = 'input.yaml'
new_project_name = 'NPN'

yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
yaml.width = 4096

with open(yml_file, 'r') as file:
    print("Modifying file: '%s'..." % str(file))
    data = yaml.load(file)

npn_lower = new_project_name.lower()
data['root'][npn_lower] = m = ruamel.yaml.comments.CommentedMap([
    ('key', "%s" % npn_lower),
    ('test_version', "{{ %s_version | default(\'1.0.0-SNAPSHOT\') }}" % npn_lower)
])
m.fa.set_flow_style()

with open('output.yaml', 'w') as fp:
    yaml.dump(data, fp)
which prints:
Modifying file: '<_io.TextIOWrapper name='input.yaml' mode='r' encoding='UTF-8'>'...
and has as output.yaml:
root:
  children: {key: "test-key", version: "{{ test_version | default( '1.0.0-SNAPSHOT' ) }}"}
  npn: {key: npn, test_version: "{{ npn_version | default('1.0.0-SNAPSHOT') }}"}
Things to note:
- Add a CommentedMap, as on a normal dict you cannot individually set flow style,
  and you need to here because the new entry is no longer nested inside a
  flow-style node. The elements are added as a list of tuples because, with older
  versions of Python, key order is not guaranteed to match your input. You can
  also create an empty CommentedMap() and add the key/value pairs one at a time
  (see the sketch after this list).
- While trying things out (and with the code presented here), it is always a bad
  idea to overwrite the input file, as your input then has to be reverted before
  every test run.
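Here is the incremental alternative mentioned in the first note, as a short sketch reusing the names from the code above:

# Build the new mapping one key/value pair at a time instead of from a list of tuples.
m = ruamel.yaml.comments.CommentedMap()
m['key'] = npn_lower
m['test_version'] = "{{ %s_version | default('1.0.0-SNAPSHOT') }}" % npn_lower
m.fa.set_flow_style()  # dump this node in flow style, matching the existing entries
data['root'][npn_lower] = m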