This question is a follow-up to an earlier question I posted.
To enhance my templates further, I need multiple loops in the template, with the values substituted from a HashMap derived from a configuration file.
My HashMap looks something like this, let's say envVar:
envVar = [
    PROJECT: name,
    env: [
        [name: param1, value: value1],
        [name: param2, value: value2],
        [name: param3, value: value3]
    ],
    ports: [
        [protocol: port1, port: 1000],
        [protocol: port2, port: 2000]
    ]
]
My template looks like this:
- env:
<< NEED TO LOOP HERE WITH env variables
- name : param1
value : value1
....
>>
project: "${PROJECT}"
......
......
......
ports:
<< NEED TO LOOP HERE WITH ports variables
- port: 1000
protocol : port1
....
>>
My code snippet is below:
def f = new File('output.template')
def engine = new groovy.text.GStringTemplateEngine()
//....
//envVar are derived from another file
//....
def Template = engine.createTemplate(f).make(envVar)
println "${Template}"
Can someone explain how to modify the above code snippet and templates so that the values from envVar are substituted properly by the template engine?
You need an each loop for every collection. Your template file needs to look like this:
#cat output.template
- env:<% env.each { v -> %>
- name : ${v.name}
value : ${v.value}<% } %>
project: "${PROJECT}"
......
......
......
ports:<% ports.each { v -> %>
- port: ${v.port}
protocol: ${v.protocol}<% } %>
Then, your main script should look like this:
def f = new File('output.template')
def engine = new groovy.text.GStringTemplateEngine()
def envVar = [
PROJECT: 'projectName',
env:[
[name:'param1', value:'value1'],
[name:'param2', value:'value2'],
[name:'param3', value:'value3']
],
ports:[
[protocol:'port1', port:1000],
[protocol:'port2', port:2000]
]
]
def template = engine.createTemplate(f).make(envVar)
println template.toString().trim()
Output:
- env:
- name : param1
value : value1
- name : param2
value : value2
- name : param3
value : value3
project: "projectName"
......
......
......
ports:
- port: 1000
protocol: port1
- port: 2000
protocol: port2
To navigate through your envVar map you can use the following:
envVar.env.each{
println "name : ${it.name}"
println "value : ${it.value}"
}
envVar.ports.each{
println "port : ${it.port}"
println "protocol : ${it.protocol}"
}
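For comparison only (not part of the Groovy answer), the same loop-and-substitute logic can be sketched in Python; the data and layout mirror the template above:

```python
# Same data shape as envVar in the Groovy script above.
env_var = {
    "PROJECT": "projectName",
    "env": [{"name": "param1", "value": "value1"},
            {"name": "param2", "value": "value2"}],
    "ports": [{"protocol": "port1", "port": 1000},
              {"protocol": "port2", "port": 2000}],
}

# Build the output line by line, looping once per env entry and once per port.
lines = ["- env:"]
for v in env_var["env"]:
    lines.append("  - name : " + v["name"])
    lines.append("    value : " + v["value"])
lines.append('project: "' + env_var["PROJECT"] + '"')
lines.append("ports:")
for v in env_var["ports"]:
    lines.append("  - port: " + str(v["port"]))
    lines.append("    protocol: " + v["protocol"])
rendered = "\n".join(lines)
print(rendered)
```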
I am trying to generate a file by template rendering and pass it to the user data of an EC2 instance. I am using a third-party Terraform provider to generate an Ignition file from YAML.
data "ct_config" "worker" {
content = data.template_file.file.rendered
strict = true
pretty_print = true
}
data "template_file" "file" {
...
...
template = file("${path.module}/example.yml")
vars = {
script = file("${path.module}/script.sh")
}
}
example.yml
storage:
files:
- path: "/opt/bin/script"
mode: 0755
contents:
inline: |
${script}
Error:
Error: Error unmarshaling yaml: yaml: line 187: could not find expected ':'
on ../../modules/launch_template/launch_template.tf line 22, in data "ct_config" "worker":
22: data "ct_config" "worker" {
If I change ${script} to sample data, it works. Also, no matter what I put in script.sh, I get the same error.
You want this outcome (pseudocode):
storage:
files:
- path: "/opt/bin/script"
mode: 0755
contents:
inline: |
{{content of script file}}
In your current implementation, every line of script.sh after the first is not indented, so a YAML decoder will not interpret the block as the intended value (the entire script.sh content).
Using indent you can correct the indentation, and using the newer templatefile function you get a slightly cleaner setup for the template:
data "ct_config" "worker" {
content = local.ct_config_content
strict = true
pretty_print = true
}
locals {
ct_config_content = templatefile("${path.module}/example.yml", {
script = indent(10, file("${path.module}/script.sh"))
})
}
For clarity, here is the example.yml template file (from the original question) to use with the code above:
storage:
files:
- path: "/opt/bin/script"
mode: 0755
contents:
inline: |
${script}
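A quick Python sketch (illustrative only, with a hypothetical script body) of what Terraform's indent(10, ...) does: every line except the first gets the given number of spaces prepended, so the interpolated script lines up under the inline: | block scalar:

```python
def tf_indent(n, text):
    """Mimic Terraform's indent(): pad every line except the first."""
    pad = " " * n
    lines = text.split("\n")
    return lines[0] + "".join("\n" + pad + l for l in lines[1:])

script = "#!/bin/sh\necho hello\n"  # hypothetical script.sh content
print(tf_indent(10, script))
```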
I had this exact issue with ct_config and figured it out today. You need to base64encode your script to ensure it's written correctly without newlines: otherwise, the newlines in your script make it to CT, which attempts to build an Ignition file, which cannot have raw newlines, causing the error you ran into originally.
Once encoded, you then just need to tell CT to !!binary the file to ensure Ignition correctly base64 decodes it on deploy:
data "template_file" "file" {
...
...
template = file("${path.module}/example.yml")
vars = {
script = base64encode(file("${path.module}/script.sh"))
}
}
And the corresponding example.yml:
storage:
files:
- path: "/opt/bin/script"
mode: 0755
contents:
inline: !!binary |
${script}
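For intuition, a small Python sketch (illustrative only, with a hypothetical script body) of why the base64 route avoids the problem: the encoded string is a single token with no raw newlines, and decoding it, which is what Ignition does for !!binary, restores the exact bytes:

```python
import base64

script = b"#!/bin/sh\necho hello\n"  # hypothetical script.sh content
encoded = base64.b64encode(script).decode("ascii")

# The encoded form contains no newlines, so it cannot break the YAML structure.
assert "\n" not in encoded
# Decoding restores the script byte-for-byte.
assert base64.b64decode(encoded) == script
print(encoded)
```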
I am having trouble with an Ansible custom module. I wrote the module, and its output looks like this:
ok: [localhost] => {
"msg": {
"ansible_facts": {
"device_id": "/dev/sdd"
},
"changed": true,
"failed": false
}
}
My custom module:
#!/usr/bin/env python
from ansible.module_utils.basic import AnsibleModule
import json

def find_uuid():
    with open("/etc/ansible/roles/addDisk/library/debug/disk_fact.json") as disk_fact_file, \
         open("/etc/ansible/roles/addDisk/library/debug/device_links.json") as device_links_file:
        disk_fact_data = json.load(disk_fact_file)
        device_links_data = json.load(device_links_file)
    device = []
    for p in disk_fact_data['guest_disk_facts']:
        disk = disk_fact_data['guest_disk_facts'][p]
        if disk['controller_key'] == 1000 and disk['unit_number'] == 3:
            uuid = disk['backing_uuid'].split('-')[4]
            ids = device_links_data['ansible_facts']['ansible_device_links']['ids']
            for key in ids:
                for d in ids[key]:
                    if uuid in d and key not in device:
                        device.append(key)
    if len(device) == 1:
        return True, {"device_id": "/dev/" + device[0]}
    # Always return a tuple so the caller can unpack safely.
    return False, None

def main():
    module = AnsibleModule(argument_spec={})
    check, json_data = find_uuid()
    if check:
        module.exit_json(changed=True, ansible_facts=json_data)
    else:
        module.fail_json(msg="error find device")

if __name__ == '__main__':
    main()
I want to use the device_id variable in other tasks. I think I should handle this with the module.exit_json method, but how can I do that?
I want to use device_id variable on the other tasks
The thing you are looking for is register:, in order to make that value persist into the "host facts" for the host against which the task ran. Then you can go with a "push" model, in which you set that fact upon every other host that interests you, or with a "pull" model, wherein interested hosts reach out to get the value at the time they need it.
Let's look at both cases, for comparison.
First, capture that value, and I'll use a host named "alpha" for ease of discussion:
- hosts: alpha
tasks:
- name: find the uuid task
# or whatever you have called your custom task
find_uuid:
register: the_uuid_result
Now the output is available on the host "alpha" as {{ vars["the_uuid_result"]["device_id"] }}, which will be /dev/sdd in your example above. One can also abbreviate that as {{ the_uuid_result.device_id }}.
In the "push" model, you can now iterate over all hosts, or just those in a specific group, that should also receive that device_id fact; for this example, let's target an existing group of hosts named "need_device_id":
- hosts: alpha # just as before, but for context ...
tasks:
- find_uuid:
register: the_uuid_result
# now propagate out the value
- name: declare device_id fact on other hosts
set_fact:
device_id: '{{ the_uuid_result.device_id }}'
delegate_to: '{{ item }}'
with_items: '{{ groups["need_device_id"] }}'
And, finally, in contrast, one can reach over and pull that fact if host "beta" needs to look up the device_id that host "alpha" discovered:
- hosts: alpha
# as before
- hosts: beta
tasks:
- name: set the device_id fact on myself from alpha
set_fact:
device_id: '{{ hostvars["alpha"]["the_uuid_result"]["device_id"] }}'
You could also run that same set_fact: device_id: ... step on "alpha" itself, in order to keep the "local" variable named the_uuid_result from leaking out of alpha's playbook. Up to you.
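A minimal sketch of that last variant, keeping a clean fact on "alpha" itself (task names are illustrative):

```yaml
- hosts: alpha
  tasks:
    - name: find the uuid task
      find_uuid:
      register: the_uuid_result
    - name: expose only the fact, local to alpha
      set_fact:
        device_id: '{{ the_uuid_result.device_id }}'
```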
I want to write a job DSL build script for Jenkins in Groovy which automatically creates deploy jobs for our projects. Each project has a general yml file with Ansible roles and hosts parameters, which I want to read and use to configure the job.
The problem is that so far I'm using snakeyaml to read the yml file, but it returns an ArrayList (of maps) which I cannot use efficiently.
Does anyone know a better solution?
my yml sample file:
---
- hosts: app.host
roles:
- role: app-db
db_name: myproje_db
db_port: "3306"
migrate_module: "my-proje-api"
- role: java-app
app_name: "myproje-api"
app_artifact_name: "my-proje-api"
app_links:
- myproje_db
I read the file from the workspace in my main Groovy script:
InputStream configFile = streamFileFromWorkspace('data/config.yml')
and process it in another function of another class:
public String configFileReader(def out, InputStream configFile){
def map
Yaml configFileYml = new Yaml()
map = configFileYml.load(configFile)
}
The load returns the configuration as an ArrayList rather than a map.
That is the expected output: this configuration starts with a "-", which represents a list. It is "a collection of hosts, and each host has a set of roles".
If you want to iterate over each host, you can do:
Yaml configFileYml = new Yaml()
configFileYml.load(configFile).each { host -> ... }
When this configuration is read, it is equivalent to the following structure (in Groovy notation):
[ // collection of map (host)
[ // 1 map for each host
hosts:"app.host",
roles:[ // collection of map (role)
[ // 1 map for each role
role: 'app-db',
db_name: 'myproje_db',
db_port: "3306",
migrate_module: "my-proje-api"
],
[
role: 'java-app',
app_name: "myproje-api",
app_artifact_name: "my-proje-api",
app_links:['myproje_db']
]
]
]
]
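Outside Groovy, the same traversal can be sketched in Python against a literal copy of that structure (for illustration only):

```python
# A list of host maps, each holding a list of role maps, mirroring the
# structure SnakeYAML produces from the yml file above.
config = [
    {
        "hosts": "app.host",
        "roles": [
            {"role": "app-db", "db_name": "myproje_db", "db_port": "3306",
             "migrate_module": "my-proje-api"},
            {"role": "java-app", "app_name": "myproje-api",
             "app_artifact_name": "my-proje-api", "app_links": ["myproje_db"]},
        ],
    },
]

# Walk hosts, then the roles of each host.
for host in config:
    print(host["hosts"])
    for role in host["roles"]:
        print("  role:", role["role"])
```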
Is there a way to use a variable defined in some manifest with Hiera?
This is how I tried it:
manifest.pp
if $::ipaddress_bond0 {
$primary_interface = 'bond0'
notify{"$primary_interface":}
}
else {
$primary_interface = 'eth0'
notify{"$primary_interface":}
}
hiera.yaml
some_config:
server:
foo:
bar: "%{::primary_interface}"
Yes it is possible. Look at the example:
test.pp
class nodes::test
{
$value1 = 'abc'
$value2 = hiera('test::value2')
$value3 = hiera('test::value3')
notify{ " v1 ${value1}": }
notify{ " v2 ${value2}": }
notify{ " v3 ${value3}": }
}
include nodes::test
test.yaml
test::value2: "%{value1}"
test::value3: "%{value4}"
Run the test:
puppet apply test.pp
Notice: v1 abc
Notice: v2 abc
Notice: v3
Keep in mind that using Puppet variables in Hiera is really bad practice.
The following Groovy script fails when run from the command line:
@Grab("org.apache.poi:poi:3.9")
println "test"
Error:
unexpected token: println # line 2, column 1.
println "test"
^
1 error
Removing the Grab, it works!
Anything I missed?
$>groovy -v
Groovy Version: 2.1.7 JVM: 1.7.0_25 Vendor: Oracle Corporation OS: Linux
Annotations can only be applied to certain targets. See SO: Why can't I do a method call after a @Grab declaration in a Groovy script?
@Grab("org.apache.poi:poi:3.9")
dummy = null
println "test"
Alternatively you can use grab as a method call:
import static groovy.grape.Grape.grab
grab(group: "org.apache.poi", module: "poi", version: "3.9")
println "test"
For more information refer to Groovy Language Documentation > Dependency management with Grape.
File 'Grabber.groovy':
package org.taste
import groovy.grape.Grape
// List<List> artifacts => [[<group>, <module>, <version>, [<classifier>]], ..]
static def grab(List<List> artifacts) {
    ClassLoader classLoader = new groovy.lang.GroovyClassLoader()
    def eal = Grape.getEnableAutoDownload()
    artifacts.each { artifact ->
        Map param = [
            classLoader: classLoader,
            group      : artifact.get(0),
            module     : artifact.get(1),
            version    : artifact.get(2),
            classifier : (artifact.size() < 4) ? null : artifact.get(3)
        ]
        println param
        Grape.grab(param)
    }
    Grape.setEnableAutoDownload(eal)
}
Usage:
package org.taste
import org.taste.Grabber
Grabber.grab([
[ "org.codehaus.groovy.modules.http-builder", "http-builder", '0.7.1'],
[ "org.postgresql", "postgresql", '42.3.1', null ],
[ "com.oracle.database.jdbc", "ojdbc8", '12.2.0.1', null]
])