Is there a way to update or merge string literals with kustomize?

I'm trying to manage Argo CD projects with helm definitions using kustomize.
Unfortunately Argo manages helm values with string literals, which gives me headaches in conjunction with kustomize configuration.
I have this base/application.yml:
apiVersion: argoproj.io/v1alpha1
kind: Application
spec:
  source:
    chart: something
    helm:
      values: |
        storageClass: cinder-csi
        # ... many more lines, identical in every stage
and I'd like to create variants using kustomize overlays, adding a single line to the base values that only matters for the dev stage.
This is NOT working; it simply replaces the existing base definition.
overlay/dev/kustomization.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
  - target:
      kind: Application
    patch: |-
      - op: add
        path: /spec/source/helm/values
        value: "storageSize: 1Gi"
To me it seems kustomize cannot append to string literals. My current solution requires repeating the whole values string literal in every stage variant, with just a few lines of difference, which heavily violates DRY principles.
Any help is appreciated.

There's an open PR to add support for arbitrary YAML in the values field. If merged, I would expect it to be available in 2.4. Reviews/testing are appreciated if you have time!
One workaround is to use the parameters field and set parameters individually. It's not ideal, but it might help until 2.4 is released.
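A minimal sketch of that workaround (storageClass and storageSize come from the question; the rest is my assumption): each value becomes an entry in spec.source.helm.parameters, which kustomize can patch as a structured list instead of an opaque string:

apiVersion: argoproj.io/v1alpha1
kind: Application
spec:
  source:
    chart: something
    helm:
      parameters:
        # the base sets the values shared by every stage ...
        - name: storageClass
          value: cinder-csi
        # ... and a dev overlay can append its own list entry:
        - name: storageSize
          value: "1Gi"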

Related

Azure Pipeline Matrix Strategy Variable Expansion problem in conjunction with templates

For frequently used tasks in Azure Pipelines I created a separate repository with a YAML template; I'll show you the relevant part of it:
create-and-upload-docu.yml:
parameters:
  - name: Documentation
    type: string
    default: ''
  - name: Language
    type: string
    default: ''
  - name: ArchiveBaseDir
    type: string
    default: ''
steps:
  - script: |
      ARCHIVERELPATH=${{parameters.Documentation}}-${{parameters.Language}}.zip
      ARCHIVEDIR=$(echo -n ${{parameters.ArchiveBaseDir}} | sed -e 's#/$##')/${{parameters.Documentation}}/${{parameters.Language}}
      echo "##vso[task.setvariable variable=archiveRelPath;isOutput=true]$ARCHIVERELPATH"
      echo "##vso[task.setvariable variable=archiveDir;isOutput=true]$ARCHIVEDIR"
    name: ${{parameters.Documentation}}_${{parameters.Language}}_params
  - task: DeleteFiles@1
    inputs:
      Contents: '$(Build.ArtifactStagingDirectory)/$(${{parameters.Documentation}}_${{parameters.Language}}_params.archiveRelPath)'
The relevant part: the script step has a name that is unique within a job, so I can use this kind of expansion to reference variables set within the template:
$(${{parameters.Documentation}}_${{parameters.Language}}_params.archiveRelPath)
This works fine as long as I call the template with fixed values, like:
- template: create-and-upload-docu.yml@templates
  parameters:
    Documentation: 'adocuvalue'
    Language: 'en_US'
    ArchiveBaseDir: '$(Build.ArtifactStagingDirectory)/build/'
But now I want to use a matrix to build a few documentation variants in a few languages:
jobs:
  - job: Documentation_CI
    displayName: "Docu CI"
    timeoutInMinutes: 30
    strategy:
      matrix:
        main_en_US:
          Documentation: main
          Language: en_US
        main_de_AT:
          Documentation: main
          Language: de_AT
    steps:
      - checkout: self
      - template: create-and-upload-docu.yml@templates
        parameters:
          Documentation: ${{variables.Documentation}}
          Language: ${{variables.Language}}
          ArchiveBaseDir: '$(Ws)/build/'
But at the time the ${{}} expressions are expanded, the matrix variables are apparently not yet set; this means the template's script step is named __params and the pipeline fails with the following error:
Publishing build artifacts failed with an error: Input required: ArtifactName
Is there a somewhat simple way to achieve what I want (being able to set some variables within templates with a unique naming scheme)?
- Can I use ${{ }} expressions with some different naming to get at the hard-coded matrix-style variables?
- Can I work around my problem in any simple way?
Additional info: we run Azure DevOps Server 2020 on-premises.
Is there a somewhat simple way to achieve what I want (being able to set some variables within templates with a unique naming scheme)?
Sorry for any inconvenience.
I am afraid there is no way to resolve this at the moment.
Just as you tested, the ${{}} syntax is parsed at compile time. We cannot get the value when it is used as a task's name or display name, since it is resolved at compile time, but the matrix variables have not been set yet at that point. That is the reason why the step ends up named __params.
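To illustrate the timing difference (this sketch and its variable name are mine, not from the question): ${{ }} is resolved before the matrix expands, while $( ) macros are resolved at runtime, after the matrix variables exist:

jobs:
  - job: demo
    strategy:
      matrix:
        first:
          MyVar: hello
    steps:
      # ${{ variables.MyVar }} is expanded at compile time, before the
      # matrix sets MyVar, so it renders as an empty string.
      # $(MyVar) is expanded at runtime and prints "hello".
      - script: echo "compile-time '${{ variables.MyVar }}' vs runtime '$(MyVar)'"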
There is a feature request about this, and you could add your request for this feature on our UserVoice site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21), which is our main forum for product suggestions.

How to construct urls for job names for downloading latest artifacts in GitLab CI?

I am using the download-latest-artifact feature.
It is not clear to me how the job name I need to pass should be encoded: my job name contains spaces, equal signs, and brackets:
build win: [USE_PYTHON=ON]
I know that spaces are replaced by + signs, but what about the other characters?
Changing the job name is not an option, because I use the matrix feature and it creates names like these.
Thanks a lot for your help!
Example CI YAML:
build win:
  ...
  parallel:
    matrix:
      - USE_PYTHON: ["USE_PYTHON=ON", "USE_PYTHON=OFF"]
You can percent-encode them, e.g. %20 for a space.
You can find the encodings here:
https://www.w3schools.com/tags/ref_urlencode.ASP
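For example, using GitLab's URL scheme for downloading the latest artifacts of a job (the host, project path, and ref below are placeholders), the job name build win: [USE_PYTHON=ON] would be encoded roughly like this:

# ' ' -> %20 (or + in the query string), ':' -> %3A, '[' -> %5B, '=' -> %3D, ']' -> %5D
https://gitlab.example.com/<namespace>/<project>/-/jobs/artifacts/<ref>/download?job=build%20win%3A%20%5BUSE_PYTHON%3DON%5D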

Dynamically set file path in log4rs

I already asked this question in the Rust subreddit but wanted to ask it here too.
I'm using the log4rs crate and want to find a way to generate more than one log file. I have a YAML file set up with the file appender created, and am trying to have the path be unique so it doesn't have to either append or truncate the original file.
appenders:
  file:
    kind: file
    path: "log/{h({d(%H:%M:%S)})}.log"
But this does not work and gives me this error:
log4rs: error deserializing appender file: The filename, directory name, or volume label syntax is incorrect. (os error 123)
I know that log4rs has a way to do patterns but it doesn't seem to work specifically for the path parameter.
I also saw this other crate called log4rs_routing_appender which looks promising but I don't know if I will need to use that.
Finally, I want to be able to do this non-programmatically (i.e. with only one YAML file), and am wondering if that's possible within log4rs.
Thanks a lot!
I do not believe what you want is possible with YAML configuration. However, log4rs provides another way to build its logger, through log4rs::Config::builder():
use log::LevelFilter;
use log4rs::append::file::FileAppender;
use log4rs::config::{Appender, Config, Root};
use log4rs::encode::pattern::PatternEncoder;

fn main() {
    // Get the current date/time, formatted without characters that are
    // invalid in file names (':' is what triggers os error 123 on Windows).
    let date = chrono::Utc::now().format("%Y-%m-%d_%H-%M-%S");
    // Create the log file appender, naming the file after the current date.
    let logfile = FileAppender::builder()
        .encoder(Box::new(PatternEncoder::default()))
        .build(format!("log/{}.log", date))
        .unwrap();
    // Register the appender and use it as the root logger at info level.
    let config = Config::builder()
        .appender(Appender::builder().build("logfile", Box::new(logfile)))
        .build(Root::builder().appender("logfile").build(LevelFilter::Info))
        .unwrap();
    // Initialize log4rs with the assembled config.
    log4rs::init_config(config).unwrap();
    log::info!("Hello, world!");
}
I understand that you want to use YAML configuration. However, as you said, patterns do not seem to work with the path variable, seeing as writing this fails:
path:
  pattern: "log/requests-{d}-{m}-{n}.log"
Another option would be to manually parse the YAML with serde_yaml (which log4rs actually uses internally) and substitute custom variables with a regex.
I realized the rolling_file kind automatically appends incrementing numbers to the log file names! Here is an example of what I did:
appenders:
  default:
    kind: console
    encoder:
      kind: pattern
      pattern: "{h({d(%H:%M:%S)})} - {m}{n}"
  log_file:
    kind: rolling_file
    append: true
    path: "logs/log.log"
    encoder:
      pattern: "{h({d(%m-%d-%Y %H:%M:%S)})} - {m}{n}"
    policy:
      kind: compound
      trigger:
        kind: size
        limit: 10mb
      roller:
        kind: fixed_window
        base: 1
        count: 100
        pattern: "logs/log{}.log"
root:
  level: info
  appenders:
    - default
    - log_file
This generates log{}.log files (with {} replaced by an incrementing number) in the logs folder once the log file reaches 10 MB. Since I set append: true, the log file keeps accumulating across runs until it reaches the size limit.
Hopefully this helps others too!

Puppet hiera equivalent in Ansible

hiera.yaml:
---
:hierarchy:
  - node/%{host_fqdn}
  - site_config/%{host_site_name}
  - site_config/perf_%{host_performance_class}
  - site_config/%{host_type}_v%{host_type_version}
  - site/%{host_site_name}
  - environments/%{site_environment}
  - types/%{host_type}_v%{host_type_version}
  - hosts
  - sites
  - users
  - common
# options are native, deep, deeper
:merge_behavior: deeper
We currently have this hiera config, so the config gets merged in the sequence common.yaml > users.yaml > sites.yaml > hosts.yaml > types/xxx_vxxx.yaml > etc. For the variable top hierarchies, values get overwritten only if the corresponding file exists.
E.g.:
common.yaml:
server:
  instance_type: m3.medium
site_config/mysite.yaml:
server:
  instance_type: m4.large
So for all other sites, the instance type will be m3.medium, but only for mysite it will be m4.large.
How can I achieve the same in Ansible?
I think that @Xiong is right that you should go the variables way in Ansible.
You can set up flexible inventory with vars precedence from general to specific.
But you can try this snippet if it helps:
---
- hosts: loc-test
  tasks:
    - include_vars: hiera/{{ item }}
      with_items:
        - common.yml
        - "node/{{ ansible_fqdn }}/users.yml"
        - "node/{{ ansible_fqdn }}/sites.yml"
        - "node/{{ ansible_fqdn }}/types/{{ host_type }}_v{{ host_type_version }}.yml"
      failed_when: false
    - debug: var=server
This will try to load variables from files with structure similar to your question.
Nonexistent files are ignored (because of failed_when: false).
Files are loaded in order of this list (from top to bottom), overwriting previous values.
Gotchas:
- All variables that you use in the list must be defined (e.g. host_type in this example can't be defined in common.yml), because the list of items to iterate is templated before the whole loop is executed (see the update below for a workaround).
- Ansible overwrites (replaces) dicts by default, while I guess your use case expects merging behavior. This can be achieved with the hash_behaviour setting, but that is unusual for Ansible playbooks.
P.S. You may invert the top-to-bottom merge behavior by changing with_items to with_first_found and reversing the list (from specific to general). In that case Ansible will load variables from the first file found, as sketched below.
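A rough sketch of that variant (same illustrative paths as in the snippet above):

# Load variables only from the first file that exists, most specific first.
- include_vars: "{{ item }}"
  with_first_found:
    - "hiera/node/{{ ansible_fqdn }}/users.yml"
    - "hiera/common.yml"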
Update: use variables from previous includes in file path.
You can split the loop into multiple tasks, so Ansible will evaluate each task's result before templating the next file's include path.
Make hiera_inc.yml:
- include_vars: hiera/common.yml
  failed_when: false
- include_vars: hiera/node/{{ ansible_fqdn }}/users.yml
  failed_when: false
- include_vars: hiera/node/{{ ansible_fqdn }}/sites.yml
  failed_when: false
- include_vars: hiera/node/{{ ansible_fqdn }}/types/{{ host_type | default('none') }}_v{{ host_type_version | default('none') }}.yml
  failed_when: false
And in your main playbook:
- include: hiera_inc.yml
This looks a bit clumsy, but this way you can define host_type in common.yaml and it will be honored in the path templating of subsequent tasks.
With Ansible 2.2 it will be possible to include_vars into a named variable (instead of the global host space), so you can include_vars into, say, hiera_facts and use the combine filter to merge them without altering the global hash behavior.
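A minimal sketch of that approach, assuming Ansible >= 2.2 (the hiera_common/hiera_node/hiera_merged names are illustrative):

- include_vars:
    file: hiera/common.yml
    name: hiera_common
  failed_when: false
- include_vars:
    file: "hiera/node/{{ ansible_fqdn }}/users.yml"
    name: hiera_node
  failed_when: false
# Deep-merge the layers, the more specific one winning, without
# changing the global hash_behaviour setting.
- set_fact:
    hiera_merged: "{{ hiera_common | default({}) | combine(hiera_node | default({}), recursive=True) }}"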
I'm not familiar with Puppet, so this may not be a direct mapping. But what I understand your question to be is "how do I use values in one shared location but override their definitions for different servers?". In Ansible, you do this with variables.
You can define variables directly in your inventory. You can define variables in host- and group-specific files. You can define variables at a playbook level. You can define variables at a role level. Heck, you can even define variables with command-line switches.
Between all of these places, you should be able to define overrides to suit your situation. You'll probably want to take a look at the documentation section on how to decide where to define a variable for more info.
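For instance, a conventional inventory layout (file names illustrative) where more specific files override more general ones, loosely mirroring a hiera hierarchy:

inventory/
  group_vars/
    all.yml                  # common defaults (like common.yaml)
    mysite.yml               # overrides for hosts in the mysite group
  host_vars/
    web01.example.com.yml    # host-specific overrides (like node/%{host_fqdn})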
It seems a little more basic than Hiera, but somebody has created a simple Ansible lookup plugin with similar syntax:
https://github.com/sailthru/ansible-oss/tree/master/tools/echelon

Hiera only run the first match

I would like to create default users on all servers, and in addition to those default users, create specific ones on specific servers.
My problem is that when I run puppet agent -t, Puppet only creates the users from the first match. If the server matches node/%{::fqdn}, only the specific users are created, not the default ones.
In /etc/puppet/hiera.yaml I have the following:
:backends:
  - yaml
:yaml:
  :datadir: "/etc/puppet/hieradata"
:hierarchy:
  - node/%{::fqdn}
  - common
How can I set up hiera so that the common file is always applied?
Please use hiera hash merge. Define the merge behaviour in hiera.yaml; possible values are native, deep, and deeper, e.g.:
:merge_behavior: deeper
And then just use hiera_hash. According to the documentation:
In a deeper hash merge, Hiera recursively merges keys and values in each source hash.
Here you have the merge behaviours shown in examples.
UPDATE:
I have set up the following simple example:
hiera.yaml:
:hierarchy:
  - apps
  - common
:merge_behavior: deeper
apps.yaml:
test_hash:
  abc1:
    value: apps
  abc2:
    value: apps
common.yaml:
test_hash:
  abc1:
    value: comm
  abc3:
    value: comm
test_hash.pp:
class test_hash
{
  $normal_hash = hiera('test_hash')
  $hiera_hash  = hiera_hash('test_hash')
  notify { " normal: ${normal_hash}": }
  notify { " hiera : ${hiera_hash}": }
}
include test_hash
Then run the manifest with puppet apply test_hash.pp.
The result:
Notice:  normal: {"abc1"=>{"value"=>"apps"}, "abc2"=>{"value"=>"apps"}}
Notice:  hiera : {"abc1"=>{"value"=>"apps"}, "abc3"=>{"value"=>"comm"}, "abc2"=>{"value"=>"apps"}}
UPDATE2:
You can also consider using the merge function from stdlib, but to use it you will probably have to change your architecture a bit, e.g.:
In common define the common values, in node/%{::fqdn} define the node-specific values, and then use them as in this example:
$common_hash = hiera('something_from_common')
$node_hash   = hiera('something_from_fqdn')
# later arguments win, so node-specific values override the common ones
$merged_hash = merge($common_hash, $node_hash)
(Yes it is a bit ugly :))
