I am trying to load the contents of a json file and assign them to variables.
My JSON file looks like this:
{ "master":{ "key1":"value1", "key2":"value2", "key3":"value3" } }
On my local machine, I was able to use the following manifest to load and parse the JSON file; it worked just fine.
$master_hash = loadjson('some_file.json')
$key1 = $master_hash['master']['key1']
$key2 = $master_hash['master']['key2']
$key3 = $master_hash['master']['key3']
However, when I move it to the Puppet master, this fails because it looks for the JSON file on the Puppet master! In my earlier question, Puppet : How to load file from agent, I was told to use a function, and that worked fine for one fact; in this case, though, I need to generate a number of facts depending on the contents of the JSON file. How can I achieve this?
Functions like loadjson() execute on the machine which is compiling the catalog. In the majority of cases this means that the function executes on the master. Since some_file.json doesn't exist on the master it won't load the file.
If you want to transfer information from the agent to the master then you need to use a fact to do so. Facts are synced to the agent machine and executed at the start of the run, and their values are sent back to the master.
The answer to your previous question was a good base, but I'll expand on it a bit here:
# module_name/lib/facter/master_hash.rb
require 'json'

Facter.add(:master_hash) do
  setcode do
    # read the JSON file on the agent; the parsed hash becomes the fact's value
    f = File.read('/path/to/some_file.json')
    master_hash = JSON.parse(f)
    master_hash
  end
end
The last line of the setcode block gets returned as the value of the fact. In this case it would expose a $::master_hash fact containing the hash from the parsed JSON.
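On the agent's next run the fact is synced and evaluated, so the master can then consume it in the manifest much like the original local version. A minimal sketch, assuming a Facter/Puppet combination that preserves structured (hash) fact values rather than stringifying them:

$key1 = $::master_hash['master']['key1']
$key2 = $::master_hash['master']['key2']
$key3 = $::master_hash['master']['key3']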
I have a JMeter test where a CSV file contains multiple rows of comma-separated values, for example: internalID, drivername, usreg, canadareg. I am using the CSV file to compare the values with database table values. To do the comparison, I add a JDBC request with the query 'select internalID, drivername, usreg, canadareg from data where internalid = '${internalID}'' and provide variable names to store the column results. I then use a JSR223 sampler with Groovy and read the variables in the script by declaring String a = vars.get("dintID_${counter}"), where dintID is the variable name provided in the JDBC request. The issue is that when I run the test, the first line of data in the CSV file is processed successfully, and the second line's data is passed to the SQL statement correctly, but vars.get("dintID_${counter}") always returns the previous record; it never advances to the next internalID (dintID). I have checked that my counter is incrementing. I have no idea how to resolve this. Does anyone know what mistake I am making?
If you take a look at the JSR223 Sampler documentation you will see that:
The JSR223 test elements have a feature (compilation) that can significantly increase performance. To benefit from this feature:
Use Script files instead of inlining them. This will make JMeter compile them if this feature is available on ScriptEngine and cache them.
Or Use Script Text and check Cache compiled script if available property.
When using this feature, ensure your script code does not use JMeter variables or JMeter function calls directly in script code as caching would only cache first replacement. Instead use script parameters.
So if counter is a JMeter variable, the inlined ${counter} will always resolve to its initial value and won't change on subsequent iterations.
You therefore need to change the line to:
String a = vars.get('dintID_' + vars.get('counter'))
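As a broader, purely illustrative sketch of the comparison logic with that cache-safe lookup (the variable prefix dintID, the counter variable, and the CSV column internalID are taken from the question; everything else is assumed):

// JSR223 Sampler, Groovy. Read JMeter variables through the vars binding at run time
// instead of inlining ${...} in the script text, so the cached compiled script stays valid.
def counter = vars.get('counter')                    // current loop counter
String dbInternalId  = vars.get('dintID_' + counter) // JDBC result variable for this row
String csvInternalId = vars.get('internalID')        // value from the current CSV row

if (dbInternalId != csvInternalId) {
    log.warn('Mismatch on iteration ' + counter + ': CSV=' + csvInternalId + ' DB=' + dbInternalId)
}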
More information on Groovy scripting in JMeter: Apache Groovy - Why and How You Should Use It
I want to find the target branch when a pull request is submitted on GitHub, in my Jenkins pipeline. To achieve this I am doing the following:
I am invoking a Windows batch file from my Jenkinsfile, which in turn invokes a Node.js script. This script internally calls GitHub APIs to get the target branch, which is then supposed to be set in a variable in the Jenkinsfile (code snippet given below):
Jenkinsfile
env.TARGET_BRANCH = bat "GetTargetBranchFromGit.bat ${env.BRANCH_NAME}"
BatchFile:
node getTargetBranchForPR.js %1
But unfortunately, the variable env.TARGET_BRANCH is not getting set to the target branch even though the nodejs script gets the right value. I am in fact not able to return the value from the batch file. Could someone please help me here?
What @npocmaka mentions is the right way: How to do I get the output of a shell command executed using into a variable from Jenkinsfile (groovy)?
According to Jenkins' documentation:
returnStdout (optional): If checked, standard output from the task is returned as the step value as a String, rather than being printed to the build log. (Standard error, if any, will still be printed to the log.) You will often want to call .trim() on the result to strip off a trailing newline.
So your code should look like:
env.TARGET_BRANCH = bat(
    script: "GetTargetBranchFromGit.bat ${env.BRANCH_NAME}",
    returnStdout: true
).trim()
If you get back more than expected you probably need to parse it.
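On Windows the captured output often also contains the echoed command line, so a hedged sketch of that parsing step could keep only the last line (assuming getTargetBranchForPR.js prints just the branch name):

def output = bat(script: "GetTargetBranchFromGit.bat ${env.BRANCH_NAME}", returnStdout: true).trim()
// keep only the final line of output, which should be the branch name printed by the script
env.TARGET_BRANCH = output.readLines().last().trim()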
I am creating something along the lines of a text adventure game. I have a .yaml file that is my input. This file looks something like this:
node_type:
    action
title:
    Do some stuff
info:
    This does some stuff and things
script:
    'print("hello world")
    print(ret_val)
    foo.bar(True)
    ret_val = (foo.bar() == True)
    if (thing):
        print(thing)
    print(ret_val)
    '
My end goal is to have my Python program run the script portion of the YAML file exactly as if it had been copy-pasted into the main code. (I know there are about ten bazillion security reasons I should not be running user input like this, but I am the only one writing these nodes and the only one using this program, so I'm mostly just ignoring this fact...)
Currently my attempt goes like this: I load my YAML file as a dict using PyYAML:
node = yaml.safe_load(open('file.yaml'))
Then I'm trying to use exec to run my code and hitting a lot of problems: I can't run if statements (I simply get a syntax error), and I can't get any sort of return value from my code. I've tried this as a workaround:
def main():
    ret_val = "test"
    thing = exec(node['script'], globals(), locals())
    print(ret_val)
which when run with the above .yaml file prints
>> hello world
>> test
>> True
>> test
For some reason it does not actually modify any of my main variables, even though I fed them to exec.
Is there any way for me to work around these issues, or is there an altogether better way to be doing this?
One way of doing this would be to parse the code out and save it to a .py file, from which it can be imported dynamically, for example by importlib.
You might want to encapsulate parsed code into a function, which you can then easily call to invoke your action. Also, it would make sense to specify some default imports there.
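A minimal sketch of that approach, assuming the node dict loaded above and a writable working directory; the module name node_action, the wrapper function run, and its parameter list are made-up names for illustration:

import importlib.util
import textwrap

def build_action_module(node, path="node_action.py"):
    # Wrap the script body in a function so its variables live in one namespace
    # and a result can be handed back to the caller.
    body = textwrap.indent(node["script"], "    ")
    with open(path, "w") as f:
        f.write("def run(foo, thing, ret_val):\n")
        f.write(body)
        f.write("\n    return ret_val\n")
    # Import the freshly written file as a module.
    spec = importlib.util.spec_from_file_location("node_action", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Hypothetical usage:
# module = build_action_module(node)
# ret_val = module.run(foo, thing, "test")

Default imports could be written to the file before the function definition in the same way.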
I've seen someone check whether an agent's MAC address matches a specific regular expression before running the specified classes below. The example is something like this:
if $is_virtual == "true" and $kernel == "Linux" and $macaddress =~ /^02:00:0A/ {
  include nmonitor
  include rootsh
  include checkmk-agent
  include backuppcacc
  include onecontext
  include sysstatpkg
  include ensurekvmsudo
  include cronntpdate
}
That's all there is in that particular manifest file. Below is a similar manifest example, this time matching nodes via a regular expression:
node /^mi-cloud-(dev|stg|prd)-host/ {
  if $is_virtual == 'false' {
    include etchosts
    include checkmk-agent
    include nmonitor
    include rootsh
    include sysstatpkg
    include cronntpdate
    include fstab-ds-dev
  }
}
I've been asked whether a similar concept can be applied to checking the agent's hostname against a master file of hostnames that are allowed to run, or not.
I am not sure whether it can be done, but the rough idea goes something like this:
file { 'hostmasterfile.ini':
  ensure  => present,
  source  => 'puppet:///test/hostmaster.ini',
  content => $hostname,
}

$coname = content

# Usually the start / head of the manifest
if $hostname == $coname {
  include <a>
  include <b>
}
Note: $fqdn is out of the question.
To my knowledge, I have not seen any sample manifest that matches this request. What's more, it goes against the standard practice of keeping things easy to manage and not putting all your eggs in one basket.
An ex-colleague of mine claims the idea above is about self-provisioning. However, that concept is non-existent in Puppet (he posed that question at a workshop a few months back). I am not sure how true that is, though.
If the above can be done, any suggestions on how to do it? Or is it best to go back to the standard one-manifest-per-node approach for easy maintenance?
Thanks very much.
M
Well, you can replace your node blocks with if constructs.
if $hostname == 'host1' {
  # manifest for host1 here
}
You can combine this with some sort of ini file (e.g., using the generate() function). If the <a> and <b> for the include statements are then fetched from your ini file as well, you have constructed a crude ENC.
Note that this has security implications - any agent can claim to have any host name. It's even very simple to do:
FACTER_hostname=kerberos01 puppet agent --test
Any node can receive the catalog for kerberos01 this way. (node blocks rely on $certname instead, which cannot be forged.)
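For instance, a minimal hedged sketch of keying off the certificate name through the $trusted hash (available in recent Puppet versions; on older 3.x releases trusted_node_data must be enabled), reusing class names from the question purely as placeholders:

if $trusted['certname'] =~ /^mi-cloud-(dev|stg|prd)-host/ {
  include nmonitor
  include rootsh
}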
I could not decipher your precise intent from your question, but I suspect that you really want an ENC or a Hiera based approach.
Edit after feedback from your first comment:
To make the master read contents from local files, you should
get rid of the file { 'hostmasterfile.ini': } - it only allows you to set contents, not retrieve them
initialize the variable content using the file function (this will make all nodes fail if the file is not readable)
The code could look like this (assuming that there can be multiple host names in the ini file).
$ini_data = file('/etc/puppet/files/test/hostmaster.ini')
Next step would be a regex lookup like this:
if $ini_data =~ /name=$hostname/ {
Unfortunately, this does not work! Puppet will not expand variable values in regular expressions, apparently.
You can use this (kind of silly) workaround:
$ini_lookup = regsubst($ini_data, "name=$hostname", '__FOUND__')
if $ini_lookup =~ /__FOUND__/ {
...
}
Final remark about security: If your team is adamant about not using $certname for this lookup (although it should be easy to map host names to cert names), you should consider adding the host name to your trusted facts.
I've got a question about program architecture.
Say you've got 100 different log files with different formats and you need to parse and put that info into an SQL database.
My view of it is like this:
use a general config file like:
program1->name1("apache",/var/log/apache.log) (modulename,path to logfile1)
program2->name2("exim",/var/log/exim.log) (modulename,path to logfile2)
....
sqldb->configuration
use something like a module (1 file per program) type1.module (regexp, logstructure(somevariables), sql(tables and functions))
fork or thread processes (don't know what is better on Linux now) for different programs.
So the question is: is my view of this correct? Should I use one module per program (web/MTA/iptables),
or is there some better way? I think some regexps would be the same, like date/time/IP/URL. What should I do with those? Or what have I missed?
example: mta exim4 mainlog

2011-04-28 13:16:24 1QFOGm-0005nQ-Ig <= exim#mydomain.org.ua H=localhost (exim.mydomain.org.ua) [127.0.0.1]:51127 I=[127.0.0.1]:465 P=esmtpsa X=TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32 CV=no A=plain_server:spam S=763 id=1303985784.4db93e788cb5c#mydomain.org.ua T="test" from <exim#exim.mydomain.org.ua> for test#domain.ua
Everything of interest in that line (timestamp, message ID, sender, host, IP, and so on) is already parsed and will be put into the sqldb.incoming table. Right now I have a structure in Perl to hold every parsed variable, like $exim->{timestamp} or $exim->{host}->{ip}.
My program will do something like tail -f on the file and parse it line by line.
Flexibility: let's say I want to add support for an Apache server (just timestamp, user IP and file downloaded). All I need to know is which logfile to parse, what the regexp should be, and what the SQL structure should be. So I'm planning to have each of these as a module: just fork or thread the main process with parameters (logfile, filetype). Maybe later I would add some options for what not to parse (maybe some log level is low and you just don't see much there).
I would do it like this:
Create a config file that is formatted like this: appname:logpath:logformatname
Create a collection of Perl classes that inherit from a base parser class.
Write a script which loads the config file and then loops over its contents, passing each iteration to its appropriate handler object.
If you want an example of steps 1 and 2, we have one in our project. See MT::FileMgr and MT::FileMgr::* here.
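A rough sketch of what the class hierarchy in step 2 might look like; the package names, field names, and the Apache regex below are invented for illustration and are not taken from MT::FileMgr:

package LogParser::Base;
use strict;
use warnings;

sub new {
    my ($class, %args) = @_;
    my $self = { logfile => $args{logfile} };
    return bless $self, $class;
}

# Each subclass supplies its own regex and field mapping here.
sub parse_line { die "parse_line() must be implemented by a subclass" }

package LogParser::Apache;
use parent -norequire, 'LogParser::Base';

sub parse_line {
    my ($self, $line) = @_;
    # very rough Apache access-log pattern: client IP, timestamp, request path
    if ($line =~ /^(\S+) \S+ \S+ \[([^\]]+)\] "(?:GET|POST) (\S+)/) {
        return { ip => $1, timestamp => $2, path => $3 };
    }
    return undef;
}

1;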
The log-monitoring tool wots could do a lot of the heavy lifting for you here. It runs as a daemon, watching as many log files as you could want, running any combination of perl regexes over them and executing something when matches are found.
I would be inclined to modify wots itself (which its licence freely allows) to support a database write method - have a look at its existing handle_* methods.
Most of the hard work has already been done for you, and you can tackle the interesting bits.
I think File::Tail is a nice fit.
You can make an array of File::Tail objects and poll them with select like this:
while (1) {
    ($nfound, $timeleft, @pending) =
        File::Tail::select(undef, undef, undef, $timeout, @files);
    unless ($nfound) {
        # timeout - do something else here, if you need to
    } else {
        foreach (@pending) {
            # here you can handle log messages depending on filename
            print $_->{"input"} . " (" . localtime(time) . ") " . $_->read;
        }
    }
}
(from the Perl File::Tail documentation)