Register a variable output with Ansible CLI / Ad-Hoc - linux

Can I register the output of a task? Is there an argument to the ansible command for that?
This is my command:
ansible all -m ios_command -a"commands='show run'" -i Resources/Inventory/hosts
I need this, because the output is a dictionary and I only need the value for one key. If this is not possible, is there a way to save the value of that key to a file?

I have found that you can convert Ansible output to JSON when executing playbooks by putting "ANSIBLE_STDOUT_CALLBACK=json" before the "ansible-playbook" command. Example:
ANSIBLE_STDOUT_CALLBACK=json ansible-playbook Resources/.Scripts/.Users.yml
This will give you a large output, because it also shows each host's facts, but it will have a key for each host on each task.
This method is not possible with the ansible command, but its output is similar to JSON. It just shows "10.20.30.111 | SUCCESS =>" before the main bracket.
Source

Set the following in your ansible.cfg under the [defaults] group
bin_ansible_callbacks=True
Then as @D_Esc mentioned, you can use
ANSIBLE_STDOUT_CALLBACK=json ansible all -m ios_command -a"commands='show run'" -i Resources/Inventory/hosts
and get the JSON output, which you can try to parse.
I have not found a way to register the output to a variable using ad-hoc commands.
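If the end goal is just to save one key's value to a file, one option is to parse that JSON from the shell instead of registering a variable. A minimal sketch, assuming the json callback is enabled as described above, a single target host, and that the key you want is stdout (inspect the raw JSON first and adjust the jq filter to the key you actually need):
# Run the ad-hoc command with the json callback and extract one key with jq,
# writing its value to a file. The jq path below is an assumption.
ANSIBLE_STDOUT_CALLBACK=json \
  ansible all -m ios_command -a "commands='show run'" -i Resources/Inventory/hosts \
  | jq -r '.plays[].tasks[].hosts[].stdout[0]' > show_run.txt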

Related

How to comment all the uncommented lines in a file using puppet module

I have an sshd_config configuration file which contains commented as well as uncommented lines. I want to comment all the uncommented lines in that file using Puppet. Is there any optimal/simple way to do this? Or is there a way to run a bash command (maybe sed to replace) via Puppet? I am not sure that using a bash command is the right approach.
It would be really helpful if someone could guide me with this. Thanks in advance!
Is there any optimal/simple way to do this?
There is no built-in resource type or well-known module that specifically ensures that non-blank lines of a file start with a # character.
Or is there a way to run a bash command (maybe sed to replace) via Puppet?
Yes, the Exec resource type. That's your best bet short of writing a custom resource type.
I am not sure that using a bash command is the right approach.
In a general sense, it's not. Appropriate, specific resource types are better than Exec. But when you don't have a suitable one and can't be bothered to make one, Exec is available.
It might look like this:
# The file to work with, so that we don't have to repeat ourselves
$target_file = '/etc/ssh/sshd_config'

exec { "Comment uncommented ${target_file} lines":
  # Specifying the command in array form avoids complicated quoting or any
  # risk of Puppet word-splitting the command incorrectly
  command  => ['sed', '-i', '-e', '/^[[:space:]]*[^#]/ s/^/# /', $target_file],

  # If we didn't specify a search path then we would need to use fully-qualified
  # command names in 'command' above and 'onlyif' below
  path     => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],

  # The file needs to be modified only if it contains any non-blank, uncommented
  # lines. Testing that via an 'onlyif' ensures that Puppet will not
  # run 'sed' or (more importantly) report the file changed when it does
  # not initially contain any lines that need to be commented
  onlyif   => [['grep', '-q', '^[[:space:]]*[^#]', $target_file]],

  # This is the default provider for any target node where the rest of this
  # resource would work anyway. Specifying it explicitly will lead to a more
  # informative diagnostic if there is an attempt to apply this resource to
  # a system to which it is unsuited.
  provider => 'posix',
}
That does not rely on bash or any other shell to run the commands, but it does rely on sed and grep being available in one of the specified directories. In fact, it relies specifically on GNU sed or one that supports an -i option with the same semantics. Notably, that does not include BSD-style sed, such as you will find on macOS.
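To try the resource on a single node before rolling it into a module, one option is to save it as a standalone manifest and apply it locally (the manifest name here is just illustrative):
# Dry run first: report what would change without editing sshd_config
puppet apply --noop comment_sshd.pp
# Then apply for real
puppet apply comment_sshd.pp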

Get All AWS Lambda Functions and Their Tags and Output to CSV

My bash script runs to retrieve lambda functions and their tags.
It runs OK and does what it needs to do; however, I need the output written to a .txt or .csv file, in a readable format.
Below is the script I have:
#!/bin/bash

while read -r name; do
  aws lambda list-functions | jq -r ".Functions[].FunctionArn" | xargs -I {} aws lambda list-tags --resource {} --query '{"{}":Tags}' --output text
done
Below is what a returned value looks like after the script runs:
ARN:AWS:LAMBDA:EU-WEST-1:1939999:FUNCTION:example-lambda EXX dev example-lambda False release-1.1.9 False True
I need to get all the items returned and lined up neatly in a txt or csv file. Any help would be appreciated.
I would recommend using the resourcegroupstaggingapi API to solve this problem. This API allows you to get all resources of a specific type along with their tags.
To get all your Lambda functions for your default region and their tags you can run the following command:
aws resourcegroupstaggingapi get-resources --resource-type-filters "lambda"
The output of this command can now be parsed with jq. The great thing about jq is that you can manipulate the output to be CSV.
To get CSV output with two columns (ARN, Tags) you can run the following command:
aws \
  resourcegroupstaggingapi \
  get-resources \
  --resource-type-filters "lambda" \
  | jq -r '.ResourceTagMappingList[] | [.ResourceARN, ((.Tags | map([.Key, .Value] | join("="))) | join(","))] | @csv'
The advantage of this approach is that you only have a single HTTP call making it relatively fast. The disadvantage is that you only get the ARN and the tags.
As shimo mentioned in a comment to the question, a way to save the output of a command to a file is to use the > operator.
The > operator replaces the existing content of the file with the output of the command. If you want to append the output of multiple commands to the same file, use the >> operator instead.
You can also pipe the output to the tee command: it prints to your screen and writes to a file (tee -a appends to the end of the file instead of overwriting it).
I found this tutorial helpful. Based on what you have written, you could pipe the output of your command into a CSV file, or concatenate it into an array and then write it to a file with a newline character at the end of each line.
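Putting the pieces together, a minimal sketch that writes the ARN and tags to a CSV file (the header row and the file name are illustrative, not part of the original answer):
# Write a header row, then append the ARN/tags rows produced by jq
echo '"ARN","Tags"' > lambda_tags.csv
aws resourcegroupstaggingapi get-resources --resource-type-filters "lambda" \
  | jq -r '.ResourceTagMappingList[] | [.ResourceARN, ((.Tags | map([.Key, .Value] | join("="))) | join(","))] | @csv' \
  >> lambda_tags.csv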

How to pass url as configuration parameter in Behave Python

I am starting to use behave and selenium to write automated tests. I want to create a configuration parameter called url and:
- be able to set its default value
- be able to pass it as an argument from the command line
I know I should be able to use userdata to achieve this, but I can't figure out how exactly. Can anybody help? :)
You can pass any variable your behave execution needs directly via the CLI; in my project we use it on Jenkins CI (shell step) like so:
python -m behave -D platform=desktop -D os=linux -D test.environment=$environment -D browser=remote.chrome -D jenkins.job=$JOB_NAME $TAGS -t=~ignore --no-skipped --no-capture --junit --junit-directory junit_reports
In our behave.ini:
[behave.userdata]
browser=chrome
platform=desktop ;this should be configurable via behave #tags
os=windows
test.environment=staging
Then in Python code just access the data:
if context.config.userdata['browser'].lower() == ApplicationDriversEnum.SELENIUM_CHROME:
    driver = __create_chrome_driver(context)
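Applied to the url parameter from the original question, a minimal sketch (the default value and the environment.py hook are illustrative): add url=https://staging.example.com under [behave.userdata] in behave.ini as the default, override it on the command line with behave -D url=https://prod.example.com, and read it in environment.py:
# environment.py
def before_all(context):
    # userdata.get() returns the -D value if given, otherwise the behave.ini
    # default, otherwise the fallback supplied here
    context.base_url = context.config.userdata.get("url", "http://localhost:8000")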

How can I write own cloud-config in cloud-init?

cloud-init is a powerful way to inject user-data into a VM instance, and its existing modules provide lots of possibilities.
To make it easier to use, I want to define my own tag like the coreos one below; see the details in running coreos in openstack:
#cloud-config

coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
So I could have something like the example below, using my own defined tag/config myapp:
#cloud-config

myapp:
  admin: admin
  database: 192.168.2.3
I am new to cloud-init. Is this called a module? The modules document at http://cloudinit.readthedocs.org/en/latest/topics/modules.html is empty.
Can you provide some information describing how I can write my own module?
You need to write a "cc" module in a suitable directory, and modify a few configurations. It is not terribly easy, but certainly doable (we use it a lot).
Find the directory for cloudconfig modules. On Amazon Linux, this is /usr/lib/python2.6/site-packages/cloudinit/config/, but the directory location differs in different cloud init versions and distributions. The easiest way to find this is to find a file named cc_mounts.py.
Add a new file there, in your case cc_myapp.py. Copy some existing script as a base to know what to write there. The important function is def handle(name,cfg,cloud,log,args): which is basically the entrypoint for your script.
Implement your logic. The cfg parameter has a python object which is the parsed YAML config file. So for your case you would do something like:
myapp = cfg.get('myapp')
admin = myapp.get('admin')
database = myapp.get('database')
Ensure your script gets called by cloud-init. If your distribution uses the standard cloud-init setup, just adding the file might work. Otherwise you might need to add it to /etc/cloud/cloud.cfg.d/defaults.cfg or directly to /etc/cloud/cloud.cfg. There are keys called cloud_init_modules, cloud_config_modules, etc. which correspond to different parts of the init process where you can get your script run. If this does not work straight out of the box, you'll probably have to do a bit of investigation to find out how the modules are called on your system. For example, Amazon Linux used to have a hardcoded list of modules inside the init.d script, ignoring any lists specified in configuration files.
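For example, on a typical setup the registration might just mean appending your module name to the relevant list (a sketch only; keep the existing entries for your distribution, and note that the right list and file vary, as described above):
# /etc/cloud/cloud.cfg (or a drop-in under /etc/cloud/cloud.cfg.d/)
cloud_config_modules:
  - mounts
  # ...existing modules for your distribution...
  - myapp        # runs cc_myapp.py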
Also note that by default your script will run only once per instance, meaning that rerunning cloud-init will not run your script again. You need to either mark the script as being per boot by setting frequency to always in the configuration file listing your module, or remove the marker file saying that the script has run, which lives somewhere under /var/lib/cloud like in /var/lib/cloud/instances/i-86ecc763/sem/config_mounts.
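Putting those steps together, a minimal cc_myapp.py might look like this (a sketch only: the handle() signature follows the older module style quoted above, and the config file it writes is purely illustrative):
# cc_myapp.py, placed alongside cc_mounts.py in cloudinit/config/
def handle(name, cfg, cloud, log, args):
    # cfg is the parsed #cloud-config YAML; pull out our own top-level key
    myapp = cfg.get('myapp')
    if not myapp:
        log.debug("no 'myapp' key in cloud-config, skipping %s", name)
        return
    admin = myapp.get('admin')
    database = myapp.get('database')
    # do whatever the application needs with the values, e.g. write a config file
    with open('/etc/myapp.conf', 'w') as f:
        f.write("admin=%s\ndatabase=%s\n" % (admin, database))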
Pasting my notes for you:
Config: after installing cloud-init in the VM, if you want root access with a password, do the simple config below.
Modify /etc/cloud/cloud.cfg like below:
users:
  - default
disable_root: 0
ssh_pwauth: 1
Note: ssh_pwauth will modify PasswordAuthentication in sshd_config automatically; 1 means yes.
Usage:
The behavior of cloud-init can be configured using user data. User data can be supplied by the user when starting the instance (user data is limited to 16K).
There are several ways to do this (tested):
user-data script
$ cat myscript.sh
#!/bin/sh
echo "Hello World. The time is now $(date -R)!" | tee /root/output.txt
When starting the instance, add the parameter --user-data myscript.sh, and the instance will run the script during startup, once and only once.
cloud config syntax:
It is YAML-based, see http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/files/head:/doc/examples/
run script
#cloud-config

runcmd:
  - [ ls, -l, / ]
  - [ sh, -xc, "echo $(date) ': hello world!'" ]
  - [ sh, -c, echo "=========hello world'=========" ]
  - ls -l /root
  - [ wget, "http://slashdot.org", -O, /tmp/index.html ]
change hostname, password
#cloud-config

chpasswd:
  list: |
    root:123456
  expire: False
ssh_pwauth: True
hostname: test
include format
Run URL scripts: this will download the scripts at the given URLs and execute them in sequence, which can help to manage the scripts centrally.
#include
http://hostname/script1
http://hostname/script2

egrep command with piped variable in ssh throwing No Such File or Directory error

OK, here I am again, struggling with ssh. I'm trying to retrieve some data from a remote log file based on tokens. I'm trying to pass multiple tokens to an egrep command via ssh:
IFS=$'\n'
commentsArray=($(ssh $sourceUser@$sourceHost "$(egrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log)"))
echo ${commentsArray[0]}
echo ${commentsArray[1]}
commax=${#commentsArray[@]}
echo $commax
where $v is something like the line below, but its length is dynamic, meaning it can contain many file names separated by pipes.
UserComments/propagateBundle-2013-10-22--07:05:37.jar|UserComments/propagateBundle-2013-10-22--07:03:57.jar
The output which I get is:
oracle@172.18.12.42's password:
bash: UserComments/propagateBundle-2013-10-22--07:03:57.jar/New: No such file or directory
bash: line 1: UserComments/propagateBundle-2013-10-22--07:05:37.jar/nouserinput: No such file or directory
0
Something worth noting is that my log file data has spaces in it. So, in the code piece I've given, the actual comments which I want to extract start after the jar file name, like: UserComments/propagateBundle-2013-10-22--07:03:57.jar/
The actual comment is 'New Life Starts here', but the output shows that we only get up to 'New' and then it breaks at the space. I tried setting IFS but it was of no use. Probably I need to set it on the remote side, but I don't know how I should do that.
Any help?
Your command is trying to run the egrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log on the local machine, and pass the result of that as the command to run via SSH.
I suspect that you meant for that command to be run on the remote machine. Remove the inner $() to get that to happen (and fix the quoting):
commentsArray=($(ssh $sourceUser@$sourceHost "egrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'"))
You should use fgrep to avoid special regex interpretation of your input (again with the grep run on the remote side, not inside a local command substitution):
commentsArray=($(ssh $sourceUser@$sourceHost "fgrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'"))
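As a side note on the space-splitting concern from the question: once the grep runs remotely, a robust way (bash 4+; a sketch using the same placeholder variables) to get one array element per output line, regardless of spaces, is mapfile rather than word splitting with a modified IFS:
# Read each line of the remote output into its own array element,
# without relying on word splitting or IFS
mapfile -t commentsArray < <(ssh "$sourceUser@$sourceHost" "egrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'")
echo "${commentsArray[0]}"
echo "${#commentsArray[@]}"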