Terraform: YAML file rendering issue in the storage section of a Container Linux Config for Flatcar OS

I am trying to generate a file by template rendering to pass to the user data of an EC2 instance. I am using the third-party ct Terraform provider to generate an Ignition file from the YAML.
data "ct_config" "worker" {
content = data.template_file.file.rendered
strict = true
pretty_print = true
}
data "template_file" "file" {
...
...
template = file("${path.module}/example.yml")
vars = {
script = file("${path.module}/script.sh")
}
}
example.yml
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: |
          ${script}
Error:
Error: Error unmarshaling yaml: yaml: line 187: could not find expected ':'
on ../../modules/launch_template/launch_template.tf line 22, in data "ct_config" "worker":
22: data "ct_config" "worker" {
If I change ${script} to sample data, it works. Also, no matter what I put in script.sh, I get the same error.

You want this outcome (pseudocode):
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: |
          {{content of script file}}
In your current implementation, every line of script.sh after the first is inserted without indentation, so a YAML decoder will not interpret the block as you intend (the entire script.sh content as the file's contents).
Using indent you can correct the indentation, and using the newer templatefile function you get a slightly cleaner setup for the template:
data "ct_config" "worker" {
content = local.ct_config_content
strict = true
pretty_print = true
}
locals {
ct_config_content = templatefile("${path.module}/example.yml", {
script = indent(10, file("${path.module}/script.sh"))
})
}
For clarity, here is the example.yml template file (from the original question) to use with the code above:
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: |
          ${script}
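For illustration, suppose script.sh contains just two lines; indent(10, ...) prefixes every line after the first with ten spaces, so the rendered document keeps the whole script inside the inline: | block (a sketch assuming a hypothetical two-line script.sh):
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: |
          #!/bin/bash
          echo "hello"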

I had this exact issue with ct_config, and figured it out today. You need to base64encode your script to ensure it's written correctly without newlines: without that, the newlines in your script make it through to CT, which attempts to build an Ignition file, which cannot contain them, causing the error you originally ran into.
Once encoded, you then just need to tell CT to treat the file as !!binary so that Ignition correctly base64-decodes it on deploy:
data "template_file" "file" {
...
...
template = file("${path.module}/example.yml")
vars = {
script = base64encode(file("${path.module}/script.sh"))
}
}
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: !!binary |
          ${script}
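For illustration, the rendered template then carries the whole script as a single base64 line, so no raw newlines reach CT (a sketch, shortened here; IyEvYmluL2Jhc2gK is the base64 of #!/bin/bash followed by a newline):
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: !!binary |
          IyEvYmluL2Jhc2gK...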

Related

Combine outputs of mutually exclusive processes in a Nextflow (DSL2) pipeline

I have a DSL2 workflow in Nextflow set up like this:
nextflow.enable.dsl=2

// process 1, mutually exclusive with process 2 below
process bcl {
    tag "bcl2fastq"
    publishDir params.outdir, mode: 'copy', pattern: 'fastq/**fastq.gz'
    publishDir params.outdir, mode: 'copy', pattern: 'fastq/Stats/*'
    publishDir params.outdir, mode: 'copy', pattern: 'InterOp/*'
    publishDir params.outdir, mode: 'copy', pattern: 'Run*.xml'
    beforeScript 'export PATH=/opt/tools/bcl2fastq/bin:$PATH'

    input:
    path runfolder
    path samplesheet

    output:
    path 'fastq/Stats/', emit: bcl_ch
    path 'fastq/**fastq.gz', emit: fastqc_ch
    path 'InterOp/*', emit: interop_ch
    path 'Run*.xml'

    script:
    // processing omitted
}
// Process 2, note the slightly different outputs
process bcl_convert {
    tag "bcl-convert"
    publishDir params.outdir, mode: 'copy', pattern: 'fastq/**fastq.gz'
    publishDir params.outdir, mode: 'copy', pattern: 'fastq/Reports/*'
    publishDir params.outdir, mode: 'copy', pattern: 'InterOp/*'
    publishDir params.outdir, mode: 'copy', pattern: 'Run*.xml'
    beforeScript 'export PATH=/opt/tools/bcl-convert/:$PATH'

    input:
    path runfolder
    path samplesheet

    output:
    path 'fastq/Reports/', emit: bcl_ch
    path 'fastq/**fastq.gz', emit: fastqc_ch
    path 'InterOp/', emit: interop_ch
    path 'Run*.xml'

    script:
    // processing omitted
}
// downstream process that needs either the first or the second to work, agnostic
process fastqc {
    cpus 12
    publishDir "${params.outdir}/", mode: "copy"
    module 'conda//anaconda3'
    conda '/opt/anaconda3/envs/tools/'

    input:
    path fastq_input

    output:
    path "fastqc", emit: fastqc_output

    script:
    """
    mkdir -p fastqc
    fastqc -t ${task.cpus} $fastq_input -o fastqc
    """
}
Now I have a variable, params.bcl_convert, which can be used to switch from one process to the other, and I set up the workflow like this:
workflow {
    runfolder_repaired = "${params.runfolder}".replaceFirst(/$/, "/")
    runfolder = Channel.fromPath(runfolder_repaired, type: 'dir')
    sample_data = Channel.fromPath(params.samplesheet, type: 'file')

    if (!params.bcl_convert) {
        bcl(runfolder, sample_data)
    } else {
        bcl_convert(runfolder, sample_data)
    }
    fastqc(bcl.out.mix(bcl_convert.out)) // Problematic line
}
The problem lies in the problematic line: I'm not sure how (and whether it is possible) to have fastqc take the output of bcl or bcl_convert (but only fastqc_ch, not the rest), regardless of which process generated it.
Some of the things I've tried include (inspired by https://github.com/nextflow-io/nextflow/issues/1646, although that one uses the output of a process):
if (!params.bcl_convert) {
    def bcl_out = bcl(runfolder, sample_data).out
} else {
    def bcl_out = bcl_convert(runfolder, sample_data).out
}
fastqc(bcl_out.fastqc_ch)
But then compilation fails with Variable "runfolder" already defined in the process scope, even when using the approach in a similar way to that post:
def result_bcl2fastq = !params.bcl_convert ? bcl(runfolder, sample_data) : Channel.empty()
def result_bclconvert = params.bcl_convert ? bcl_convert(runfolder, sample_data) : Channel.empty()
I thought about using conditionals in a single script; however, the outputs from the two processes differ, so it's not really possible.
The only way I got it to work is by duplicating all outputs, like:
if (!params.bcl_convert) {
bcl(runfolder, sample_data)
fastqc(bcl.out.fastqc_ch)
} else {
bcl_convert(runfolder, sample_data)
fastqc(bcl_convert.out.fastqc_ch
}
However, this looks to me like an unnecessary complication. Is what I want to do actually possible?
I was able to figure this out with a lot of trial and error.
Assigning a variable to a process invocation acts like the .out property of that process. So I assigned the same variable in both branches, gave the two exclusive processes the same output names (as seen in the question), and then accessed the emitted channels directly, without using .out:
workflow {
    runfolder_repaired = "${params.runfolder}".replaceFirst(/$/, "/")
    runfolder = Channel.fromPath(runfolder_repaired, type: 'dir')
    sample_data = Channel.fromPath(params.samplesheet, type: 'file')

    if (!params.bcl_convert) {
        bcl_out = bcl(runfolder, sample_data)
    } else {
        bcl_out = bcl_convert(runfolder, sample_data)
    }
    fastqc(bcl_out.fastqc_ch)
}
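The same selection can be written a little more compactly with a ternary, along the lines of the pattern in the linked issue. A minimal sketch (channel setup trimmed), assuming both processes emit fastqc_ch as in the question:
workflow {
    runfolder = Channel.fromPath(params.runfolder, type: 'dir')
    sample_data = Channel.fromPath(params.samplesheet, type: 'file')

    // Only one branch is invoked; the assigned variable exposes the emits
    bcl_out = params.bcl_convert
        ? bcl_convert(runfolder, sample_data)
        : bcl(runfolder, sample_data)
    fastqc(bcl_out.fastqc_ch)
}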

Load yaml config with array type

I am writing code that tries to load a config.yaml file:
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
pub struct MyConfig {
    foo: String,
    conf: Vec<String>,
}

impl ::std::default::Default for MyConfig {
    fn default() -> Self {
        Self { foo: "".into(), conf: vec![] }
    }
}

let cfg: MyConfig = confy::load("config")?;
println!("{:#?}", cfg);
Config.yaml file
foo: "test"
conf:
gg
gb
gg
bb
Output
MyConfig {
    foo: "",
    conf: [],
}
I have kept config.yaml in the same folder from which it is called, but it looks like it is not able to load the file at all. What am I missing?
EDIT: When I changed the extension from yaml to toml and provided the full path, it found the file, but the structure it expects is:
config.toml
foo = "test"
conf = ["gg","gb","gv","gx"]
full path
confy::load("c:/test/config")?;
I tried keeping it in multiple places without luck; it looks like it requires the full path.
But I got the output:
MyConfig {
    foo: "test",
    conf: [
        "gg",
        "gb",
        "gv",
        "gx",
    ],
}
While David Chopin's answer is correct that the YAML is not right, there are a couple of deeper issues.
Firstly, while it is not really documented, looking at the confy source, it expects TOML-formatted data, not YAML; for simple cases the two can look similar. (Turns out this is not 100% correct: the GitHub page says you can switch to YAML using features = ["yaml_conf"] in the Cargo.toml file.)
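For reference, a hypothetical Cargo.toml sketch of that switch (the version number is illustrative, and recent confy versions also require disabling the default TOML feature; check the confy README for your version):
[dependencies]
confy = { version = "0.5", default-features = false, features = ["yaml_conf"] }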
Secondly, I'm guessing the root problem is that confy is not finding your configuration file.
The docs for confy::load state:
Load an application configuration from disk
A new configuration file is created with default values if none exists.
So, I think it's looking somewhere else, not finding your file, and instead of raising an error it creates a nice default file in that location and returns that default to you.
I believe it should be formatted as follows:
foo: "test"
conf:
- gg
- gb
- gg
- bb
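As a side note on the path problem from the question's edit: newer confy versions also expose confy::load_path, which takes an explicit file path instead of deriving a platform-specific location from the application name. A minimal sketch, assuming a confy version that ships load_path and the yaml_conf feature:
use serde::{Deserialize, Serialize};

#[derive(Debug, Default, Serialize, Deserialize)]
pub struct MyConfig {
    foo: String,
    conf: Vec<String>,
}

fn main() -> Result<(), confy::ConfyError> {
    // Load from an explicit location rather than the platform default
    let cfg: MyConfig = confy::load_path("c:/test/config.yml")?;
    println!("{:#?}", cfg);
    Ok(())
}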

How to look up a map defined in a YAML file from a Groovy script and assign it to a variable, based on input provided from the Pipeline UI

How can I pass a map of variables from a YAML file to a variable in the Jenkinsfile, or just print it to a text file?
For example, I have a test.yaml file that contains:
processor-create:
{
service: true
ingress: true
path: /tmp/data
},
processo-update:
{
service: false
ingress: false
path: /tmp/data
}
I will provide the input service_name: processor-create as a parameter from the pipeline. It has to look for that service in test.yaml; then whatever variables "processor-create" has should be assigned to a variable or printed into another text file, so that I can pass that file as an extra-vars file to the Ansible script in the next stage. Thanks.
Are you sure this is a valid YAML file?
To me, the right syntax should be:
processo-update:
  ingress: false
  path: /tmp/data
  service: false
processor-create:
  ingress: true
  path: /tmp/data
  service: true
To parse the YAML you can use SnakeYAML, with something similar to:
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml

Yaml parser = new Yaml()
map = parser.load(new File('text.yml').text)
println map[args[0]]
Read this post for a more elaborate example: https://groovy-lang.gitlab.io/101-scripts/basico/config_script-en.html
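To cover the writing-out part of the question, a minimal sketch along the same lines: it selects the block for the service name passed in from the pipeline and dumps it back to YAML, so the next stage can hand the file to Ansible with --extra-vars "@extra_vars.yml" (file names are the question's; Yaml.dump() is plain SnakeYAML):
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml

// Service name provided as a pipeline parameter, e.g. 'processor-create'
def serviceName = args[0]

def config = new Yaml().load(new File('test.yaml').text)
def vars = config[serviceName]

// Serialize the selected block back to YAML for the Ansible stage
new File('extra_vars.yml').text = new Yaml().dump(vars)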

How to create secured files in Puppet5 with Hiera?

I want to create an SSL certificate and try to secure this operation.
I am using Puppet 5.5.2 and the hiera-eyaml gem.
I created a simple manifest:
cat /etc/puppetlabs/code/environments/production/manifests/site.pp

package { 'tree':
  ensure => installed,
}
package { 'httpd':
  ensure => installed,
}

$filecrt = lookup('files')
create_resources('file', $filecrt)
Hiera config
---
version: 5
defaults:
  # The default value for "datadir" is "data" under the same directory as the hiera.yaml
  # file (this file)
  # When specifying a datadir, make sure the directory exists.
  # See https://puppet.com/docs/puppet/latest/environments_about.html for further details on environments.
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Secret data: per-node, per-datacenter, common"
    lookup_key: eyaml_lookup_key # eyaml backend
    paths:
      - "nodes/%{facts.fqdn}.eyaml"
      - "nodes/%{trusted.certname}.eyaml" # Include explicit file extension
      - "location/%{facts.whereami}.eyaml"
      - "common.eyaml"
    options:
      pkcs7_private_key: /etc/puppetlabs/puppet/eyaml/keys/private_key.pkcs7.pem
      pkcs7_public_key: /etc/puppetlabs/puppet/eyaml/keys/public_key.pkcs7.pem
  - name: "YAML hierarchy levels"
    paths:
      - "common.yaml"
      - "nodes/%{facts.fqdn}.yaml"
      - "nodes/%{::trusted.certname}.yaml"
And common.yaml
---
files:
'/etc/httpd/conf/server.crt':
ensure: present
mode: '0600'
owner: 'root'
group: 'root'
content: 'ENC[PKCS7,{LOT_OF_STRING_SKIPPED}+uaCmcHgDAzsPD51soM+AIkIlv0ANpUXzBpwM3tqQ3ysFtz81S0xuVbKvslK]'
But I get an error while applying the manifest:
Error: Evaluation Error: Error while evaluating a Function Call, create_resources(): second argument must be a hash (file: /etc/puppetlabs/code/environments/production/manifests/site.pp, line: 12, column: 1) on node test1.com
I really don't know what to do.
The problem appears to be that the indentation in common.yaml isn't right: currently, files will be null rather than a hash, which explains the error message. Also, the file should be called common.eyaml, otherwise the ENC string won't be decrypted. Try:
---
files:
  '/etc/httpd/conf/server.crt':
    ensure: present
    mode: '0600'
    owner: 'root'
    group: 'root'
    content: 'ENC[PKCS7{LOTS_OF_STRING_SKIPPED}UXzBpwM3tqQ3ysFtz81S0xuVbKvslK]'
There is an online YAML parser at http://yaml-online-parser.appspot.com/ if you want to see the difference the indentation makes.
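For instance, pasting a flat version of the data into such a parser shows files coming back as null, which is exactly what create_resources() then rejects (a sketch of the equivalent JSON, assuming the flat indentation from the question):
{
  "files": null,
  "/etc/httpd/conf/server.crt": null,
  "ensure": "present",
  ...
}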
I found another solution.
It was a problem with lookup and hashes. When I have multiple lines in a Hiera hash, I must specify them: https://docs.puppet.com/puppet/4.5/function.html#lookup
So I decided to look up only the 'content' value:
cat site.pp

$filecrt = lookup('files')

file { 'server.crt':
  ensure  => present,
  path    => '/etc/httpd/conf/server.crt',
  content => $filecrt,
  owner   => 'root',
  group   => 'root',
  mode    => '0600',
}
and Hiera
---
files: 'ENC[PKCS7{LOT_OF_STRING_SKIPPED}+uaCmcHgDAzsPD51soM+AIkIlv0ANpUXzBpwM3tqQ3ysFtz81S0xuVbKvslK]'
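As a refinement of the first attempt, lookup can also be told what type to expect; a mis-parsed or missing hash then fails with a clear type error at lookup time instead of inside create_resources(). A sketch based on the lookup docs linked above ('first' is the default merge behavior):
$filecrt = lookup('files', Hash, 'first')
create_resources('file', $filecrt)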

How to read an Ansible YAML config file in Groovy

I want to write a Jenkins job DSL build script in Groovy which automatically creates a deploy job for our projects. Each project has a general YAML file for Ansible roles and host parameters, which I want to read and use to configure the job.
The problem is that so far I'm using SnakeYAML to read the YAML file, but it returns an ArrayList (of maps) which I cannot use efficiently.
Does anyone know a better solution?
My sample YAML file:
---
- hosts: app.host
  roles:
    - role: app-db
      db_name: myproje_db
      db_port: "3306"
      migrate_module: "my-proje-api"
    - role: java-app
      app_name: "myproje-api"
      app_artifact_name: "my-proje-api"
      app_links:
        - myproje_db
I read the file from the workspace in my main Groovy script:
InputStream configFile = streamFileFromWorkspace('data/config.yml')
and process it in another function of another class:
import org.yaml.snakeyaml.Yaml

public String configFileReader(def out, InputStream configFile) {
    def map
    Yaml configFileYml = new Yaml()
    map = configFileYml.load(configFile)
}
It returns the loaded object as an ArrayList.
That's the expected output: this configuration starts with a "-", which represents a list. It's a collection of hosts, and each host has a set of roles.
If you want to iterate over each host, you can do:
Yaml configFileYml = new Yaml()
configFileYml.load(configFile).each { host -> ... }
When this configuration is read, it's equivalent to the following structure (in Groovy format):
[ // collection of maps (hosts)
    [ // 1 map for each host
        hosts: "app.host",
        roles: [ // collection of maps (roles)
            [ // 1 map for each role
                role: 'app-db',
                db_name: 'myproje_db',
                db_port: "3306",
                migrate_module: "my-proje-api"
            ],
            [
                role: 'java-app',
                app_name: "myproje-api",
                app_artifact_name: "my-proje-api",
                app_links: ['myproje_db']
            ]
        ]
    ]
]
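Building on that structure, a minimal Groovy sketch of walking the parsed list and pulling fields out (the file path matches the question; the printed fields are just examples):
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml

def hosts = new Yaml().load(new File('data/config.yml').text)

// Each element of the list is a map with 'hosts' and 'roles' keys
hosts.each { host ->
    println "hosts: ${host.hosts}"
    host.roles.each { role ->
        println "  role: ${role.role}"
    }
}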
