I would like to use the Ansible 2.9.9 Python API to fetch a config file from the servers in my hosts file and parse the result into JSON format.
I don't know how to call an existing Ansible task (the playbook below) through the Python API.
Going by the Ansible API documentation, how do I integrate my task with the sample code?
Sample.py
#!/usr/bin/env python
import json
import shutil
from ansible.module_utils.common.collections import ImmutableDict
from ansible.parsing.dataloader import DataLoader
from ansible.vars.manager import VariableManager
from ansible.inventory.manager import InventoryManager
from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.plugins.callback import CallbackBase
from ansible import context
import ansible.constants as C
class ResultCallback(CallbackBase):
    """A sample callback plugin used for performing an action as results come in
    If you want to collect all results into a single object for processing at
    the end of the execution, look into utilizing the ``json`` callback plugin
    or writing your own custom callback plugin
    """
    def v2_runner_on_ok(self, result, **kwargs):
        """Print a json representation of the result
        This method could store the result in an instance attribute for retrieval later
        """
        host = result._host
        print(json.dumps({host.name: result._result}, indent=4))
# since the API is constructed for CLI it expects certain options to always be set in the context object
context.CLIARGS = ImmutableDict(connection='local', module_path=['/to/mymodules'], forks=10, become=None,
                                become_method=None, become_user=None, check=False, diff=False)
# initialize needed objects
loader = DataLoader() # Takes care of finding and reading yaml, json and ini files
passwords = dict(vault_pass='secret')
# Instantiate our ResultCallback for handling results as they come in. Ansible expects this to be one of its main display outlets
results_callback = ResultCallback()
# create inventory, use path to host config file as source or hosts in a comma separated string
inventory = InventoryManager(loader=loader, sources='localhost,')
# variable manager takes care of merging all the different sources to give you a unified view of variables available in each context
variable_manager = VariableManager(loader=loader, inventory=inventory)
# create data structure that represents our play, including tasks, this is basically what our YAML loader does internally.
play_source = dict(
    name="Ansible Play",
    hosts='localhost',
    gather_facts='no',
    tasks=[
        dict(action=dict(module='shell', args='ls'), register='shell_out'),
        dict(action=dict(module='debug', args=dict(msg='{{shell_out.stdout}}')))
    ]
)
# Create play object, playbook objects use .load instead of init or new methods,
# this will also automatically create the task objects from the info provided in play_source
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
# Run it - instantiate task queue manager, which takes care of forking and setting up all objects to iterate over host list and tasks
tqm = None
try:
    tqm = TaskQueueManager(
        inventory=inventory,
        variable_manager=variable_manager,
        loader=loader,
        passwords=passwords,
        stdout_callback=results_callback,  # Use our custom callback instead of the ``default`` callback plugin, which prints to stdout
    )
    result = tqm.run(play)  # most interesting data for a play is actually sent to the callback's methods
finally:
    # we always need to cleanup child procs and the structures we use to communicate with them
    if tqm is not None:
        tqm.cleanup()
    # Remove ansible tmpdir
    shutil.rmtree(C.DEFAULT_LOCAL_TMP, True)
sum.yml: the playbook that generates a summary file for each host
- hosts: staging
  tasks:
    - name: pt_mysql_sum
      shell: PTDEST=/tmp/collected;mkdir -p $PTDEST;cd /tmp;wget percona.com/get/pt-mysql-summary;chmod +x pt*;./pt-mysql-summary -- --user=adm --password=***** > $PTDEST/pt-mysql-summary.txt;cat $PTDEST/pt-mysql-summary.txt;
      register: result
      environment:
        http_proxy: http://proxy.example.com:8080
        https_proxy: https://proxy.example.com:8080
    - name: ansible_result
      debug: var=result.stdout_lines
    - name: fetch_log
      fetch:
        src: /tmp/collected/pt-mysql-summary.txt
        dest: /tmp/collected/pt-mysql-summary-{{ inventory_hostname }}.txt
        flat: yes
hosts file
[staging]
vm1 ansible_ssh_host=10.40.50.41 ansible_ssh_user=testuser ansible_ssh_pass=*****
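One way to run the existing sum.yml through the API, rather than re-describing its tasks in a play_source dict, is ansible.executor.playbook_executor.PlaybookExecutor with a callback that accumulates results so they can be dumped as JSON at the end. The following is an untested sketch, not a verified answer: it assumes the inventory file is named hosts, that sum.yml sits in the working directory, and it relies on overriding the executor's private _tqm._stdout_callback attribute to install the custom callback, which is a common but unofficial pattern on Ansible 2.9.
#!/usr/bin/env python
# Sketch: run sum.yml via the Ansible 2.9 Python API and print per-host results as JSON.
import json

from ansible import context
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.inventory.manager import InventoryManager
from ansible.module_utils.common.collections import ImmutableDict
from ansible.parsing.dataloader import DataLoader
from ansible.plugins.callback import CallbackBase
from ansible.vars.manager import VariableManager

class JsonResultCallback(CallbackBase):
    """Accumulate task results per host instead of printing them as they arrive."""
    def __init__(self, *args, **kwargs):
        super(JsonResultCallback, self).__init__(*args, **kwargs)
        self.results = {}

    def v2_runner_on_ok(self, result, **kwargs):
        self.results.setdefault(result._host.name, []).append(result._result)

# PlaybookExecutor reads a number of CLI-style options from the global context object
context.CLIARGS = ImmutableDict(connection='smart', module_path=None, forks=10, remote_user=None,
                                private_key_file=None, ssh_common_args=None, ssh_extra_args=None,
                                sftp_extra_args=None, scp_extra_args=None, become=None,
                                become_method=None, become_user=None, check=False, diff=False,
                                verbosity=0, syntax=False, start_at_task=None, tags={},
                                listhosts=False, listtasks=False, listtags=False)

loader = DataLoader()
inventory = InventoryManager(loader=loader, sources=['hosts'])  # the inventory file shown above
variable_manager = VariableManager(loader=loader, inventory=inventory)

pbex = PlaybookExecutor(playbooks=['sum.yml'], inventory=inventory,
                        variable_manager=variable_manager, loader=loader, passwords={})
callback = JsonResultCallback()
pbex._tqm._stdout_callback = callback  # swap in the collecting callback (private attribute)
pbex.run()

print(json.dumps(callback.results, indent=4))
If relying on the private _tqm attribute feels too fragile, the TaskQueueManager route from the sample above also works: translate the tasks from sum.yml into the play_source dict and pass the same collecting callback as stdout_callback.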
Related
I'm attempting to acquire the "show system storage" output for both routing engines on my MX960 via the PyEZ RPC dev.rpc.get_system_storage, but I don't see how to add the command option "invoke-on other-routing-engine" to the RPC. Juniper's website shows how to add command options to an RPC's method argument list when you need to specify an option that either takes no value or requires one. However, my Juniper emits two separate RPCs when I pipe the command to display xml rpc:
root#Staging-MX1> show system storage invoke-on other-routing-engine | display xml rpc
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/17.3R3/junos">
<rpc>
<other-routing-engine>
</other-routing-engine>
</rpc>
<rpc>
<get-system-storage>
</get-system-storage>
</rpc>
<cli>
<banner></banner>
</cli>
</rpc-reply>
How am I supposed to apply the "invoke-on other-routing-engine" option? Here is my code so far, for reference:
import sys
from jnpr.junos import Device
from lxml import etree
hostname = "XXX.XXX.XXX.XXX"
junos_username = "root"
junos_password = "XXXXXXX"
testport = XXXX
dev = Device(host=hostname, user=junos_username, passwd=junos_password, mode='telnet', port=testport, timeout=10)
dev.open()
data = dev.rpc.get_system_storage({'format':'text'})
output = etree.tostring(data, encoding='unicode')
print(output)
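Since the CLI shows invoke-on other-routing-engine being emitted as its own RPC rather than as an argument of get-system-storage, one possible fallback, not verified on this platform, is to send the whole CLI command through PyEZ's Device.cli() helper and take the text output. The sketch below reuses the connection variables from the snippet above; treat it as a workaround sketch, not the RPC-level answer.
# Workaround sketch: run the full CLI command (including the invoke-on option)
# through PyEZ's Device.cli() instead of the structured get_system_storage RPC.
from jnpr.junos import Device

dev = Device(host=hostname, user=junos_username, passwd=junos_password,
             mode='telnet', port=testport, timeout=10)
dev.open()
# cli() returns the text output; PyEZ flags it as a debugging convenience, hence warning=False
output = dev.cli('show system storage invoke-on other-routing-engine', format='text', warning=False)
print(output)
dev.close()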
I have the configmap.yml below, and I want to patch/update its date field from a Python script running in a container in a Kubernetes deployment. I searched various sites but couldn't find any reference for doing this. Any reference or code sample would be a great help.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-configmap
  labels:
    app: test
    parameter-type: sample
data:
  storage.ini: |
    [DateInfo]
    date=1970-01-01T00:00:00.01Z
I went through the reference code below, but couldn't figure out what the content of body should be, which parameters I should use, and which I can ignore.
partially update the specified ConfigMap
from __future__ import print_function
import time
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint

configuration = kubernetes.client.Configuration()
# Configure API key authorization: BearerToken
configuration.api_key['authorization'] = 'YOUR_API_KEY'
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
# configuration.api_key_prefix['authorization'] = 'Bearer'
# Defining host is optional and defaults to http://localhost
configuration.host = "http://localhost"

# Enter a context with an instance of the API kubernetes.client
with kubernetes.client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = kubernetes.client.CoreV1Api(api_client)
    name = 'name_example'  # str | name of the ConfigMap
    namespace = 'namespace_example'  # str | object name and auth scope, such as for teams and projects
    body = None  # object |
    pretty = 'pretty_example'
    dry_run = 'dry_run_example'
    field_manager = 'field_manager_example'
    force = True
    try:
        api_response = api_instance.patch_namespaced_config_map(name, namespace, body, pretty=pretty, dry_run=dry_run, field_manager=field_manager, force=force)
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling CoreV1Api->patch_namespaced_config_map: %s\n" % e)
The body parameter in patch_namespaced_config_map is the actual ConfigMap data that you want to patch, and it first needs to be obtained with read_namespaced_config_map.
The following steps are required for all operations that take a body argument:
1. Get the data using the read_*/get_* method.
2. Use the data returned in the first step in the API call that modifies the object.
Further, in most cases it is enough to pass the required arguments, namely name, namespace and body (a short patch sketch follows the parameter list below), but here is the info about each:
Parameters:
name (str): name of the ConfigMap
namespace (str): object name and auth scope, such as for teams and projects
body (object): the ConfigMap data to apply as the patch
pretty (str, optional): If 'true', then the output is pretty printed.
dry_run (str, optional): When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
field_manager (str, optional): fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).
force (bool, optional): Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. The force flag must be unset for non-apply patch requests.
Review the K8s python client README for the list of all supported APIs and their usage.
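For the ConfigMap in the question, the read-then-patch flow could look roughly like the sketch below. It assumes the object is named sample-configmap in the default namespace and that the script runs inside the cluster with a service account allowed to patch ConfigMaps; use config.load_kube_config() instead when running outside the cluster.
from datetime import datetime, timezone
from kubernetes import client, config

config.load_incluster_config()  # in-cluster service account credentials
api = client.CoreV1Api()

name, namespace = 'sample-configmap', 'default'  # assumed names

# Step 1: read the current ConfigMap
cm = api.read_namespaced_config_map(name, namespace)

# Step 2: rewrite the date line inside the storage.ini entry
now = datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%S') + 'Z'
cm.data['storage.ini'] = '[DateInfo]\ndate={}\n'.format(now)

# Step 3: send back only the changed key; a plain dict acts as a strategic merge patch body
body = {'data': {'storage.ini': cm.data['storage.ini']}}
resp = api.patch_namespaced_config_map(name, namespace, body)
print('Patched', resp.metadata.name)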
I am struggling with command-line parsing and argparse: how to handle global variables, subcommands, and optional parameters to those subcommands.
I'm writing a Python 3 wrapper around python-libvirt to manage my VMs. The wrapper will handle creation, removal, stop/start, snapshots, etc.
A partial list of the options follows, showing the different ways params are passed to my script:
# Connection option for all commands:
# ---
# vmman.py [-c hypervisor] (defaults to qemu:///system)
# Generic VM commands:
# ---
# vmman.py show : list all vms, with their state
# vmman.py {up|down|reboot|rm} domain : boots, shuts down, reboots
#                                       or deletes the domain
# Snapshot management:
# ---
# vmman.py lssnap domain : list snapshots attached to domain
# vmman.py snaprev domain [snapsname] : reverts domain to latest
#                                       snapshot or to snapname
# Resource management:
# ---
# vmman.py domain resdel [disk name] [net iface]
And here is some code used to test the first subcommand:
def setConnectionString(args):
    print('Arg = %s' % args.cstring)
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers()
parserConnect = subparsers.add_parser('ConnectionURI')
parserConnect.set_defaults(func=setConnectionString)
parserConnect.add_argument('-c', '--connect', dest='host')
args = parser.parse_args()
args.func(args)
print("COMPLETED")
Now, the argparse docs on docs.python.org are dense and a bit confusing for a Python newbie like me... I would have expected the output to be something like:
`Arg = oslo`
What I get is:
[10:21:40|jfgratton#bergen:kvmman.py]: ./argstest.py -c oslo
usage: argstest.py [-h] {ConnectionURI} ...
argstest.py: error: invalid choice: 'connectionURI' (choose from 'ConnectionURI')
I'm obviously missing something, and I'm only testing the one I thought would be the easiest of the lot (the global param); I haven't even figured out yet how to handle optional subparams and the rest.
Your error output lists 'connectionURI' with a lowercase 'c' as an invalid choice, while it also says "choose from 'ConnectionURI'" with a capital 'C'.
Fix: Call your test with:
./argstest.py ConnectionURI oslo
Maybe you should start simple (without subparsers) and build from there:
import argparse
def setConnectionString(hostname):
print('Arg = {}'.format(hostname))
parser = argparse.ArgumentParser(description='python3 wrapper around python-libvirt to manage VMs')
parser.add_argument('hostname')
args = parser.parse_args()
setConnectionString(args.hostname)
print("COMPLETED")
I would like to query Windows using a file extension as a parameter (e.g. ".jpg") and be returned the path of whatever app windows has configured as the default application for this file type.
Ideally the solution would look something like this:
from stackoverflow import get_default_windows_app
default_app = get_default_windows_app(".jpg")
print(default_app)
"c:\path\to\default\application\application.exe"
I have been investigating the winreg builtin library, which holds the registry information for Windows, but I'm having trouble understanding its structure, and the documentation is quite complex.
I'm running Windows 10 and Python 3.6.
Does anyone have any ideas to help?
The registry isn't a simple well-structured database; the Windows shell executor has some pretty complex logic to it. But for the simple cases, this should do the trick:
import shlex
import winreg
def get_default_windows_app(suffix):
    class_root = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, suffix)
    with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, r'{}\shell\open\command'.format(class_root)) as key:
        command = winreg.QueryValueEx(key, '')[0]
    return shlex.split(command)[0]
>>> get_default_windows_app('.pptx')
'C:\\Program Files\\Microsoft Office 15\\Root\\Office15\\POWERPNT.EXE'
Though some error handling should definitely be added too.
Added some improvements to the nice code by Hetzroni, in order to handle more cases:
import os
import shlex
import winreg
def get_default_windows_app(ext):
    try:  # UserChoice\ProgId lookup initial
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r'SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\FileExts\{}\UserChoice'.format(ext)) as key:
            progid = winreg.QueryValueEx(key, 'ProgId')[0]
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r'SOFTWARE\Classes\{}\shell\open\command'.format(progid)) as key:
            path = winreg.QueryValueEx(key, '')[0]
    except:  # UserChoice\ProgId not found
        try:
            class_root = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, ext)
            if not class_root:  # No reference from ext
                class_root = ext  # Try direct lookup from ext
            with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, r'{}\shell\open\command'.format(class_root)) as key:
                path = winreg.QueryValueEx(key, '')[0]
        except:  # Ext not found
            path = None
    # Path clean up, if any
    if path:  # Path found
        path = os.path.expandvars(path)  # Expand env vars, e.g. %SystemRoot% for ext .txt
        path = shlex.split(path, posix=False)[0]  # posix False for Windows operation
        path = path.strip('"')  # Strip quotes
    # Return
    return path
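Usage is the same as in the original answer; on a typical machine the .txt handler, for instance, resolves through the %SystemRoot% expansion mentioned in the comments to something like:
>>> get_default_windows_app('.txt')
'C:\\WINDOWS\\system32\\NOTEPAD.EXE'
The returned executable can then be launched against a file with, say, subprocess.run([path, filename]).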
I'm trying to write a Python script to talk to my instance of Jenkins. I am using the newest version of the jenkinsapi module and querying Jenkins 1.509.3.
I can get a job list like follows:
l=j.get_jobs_list()
where j is an instance of jenkinsapi.Jenkins (I used the requester from jenkinsapi.utils.requester to skip ssl verification)
However, when I try to get more information on an individual job with
j.get_job(l[0])
it fails with this error: Inappropriate content found at [some_address] and what is returned is a bunch of HTML (that looks like the starting page for my instance, the one you see when you log in) instead of anything that should look like the response. Pasting [some_address] into the browser gives me what I expect as a response.
While I can get some information on the Jenkins instance, what I am really interested in is info on individual jobs. Any ideas how to fix it and get the job info?
Using Python 3.6, python-jenkins 1.0.1 and Jenkins 2.121.1, the following works nicely:
import jenkins
from pprint import pprint

IP = 'localhost'
USERNAME = 'my_username'
PW = 'my_password'

def get_version(server):
    user = server.get_whoami()
    version = server.get_version()
    print('Hello %s from Jenkins %s' % (user['fullName'], version))

def get_jobs(server):
    jobs = server.get_jobs()  # List[dict]
    print("Here are top 5 jobs")
    pprint(jobs[:5])
    return jobs

def get_job(server, job_name):
    job_config = server.get_job_config(job_name)  # XML
    job_info = server.get_job_info(job_name)  # dict
    print("\n --- JOB CONFIG --- ")
    print(job_config)
    print("\n --- JOB INFO --- ")
    pprint(job_info)

if __name__ == "__main__":
    _server = jenkins.Jenkins(IP, username=USERNAME, password=PW)
    get_version(_server)
    _jobs = get_jobs(_server)
    get_job(_server, _jobs[0]['name'])
The Jenkins API I was using is documented here: https://python-jenkins.readthedocs.io/en/latest/index.html