I'm very new to SaltStack and still learning, so if there is a better way of doing things please let me know.
I'm trying to pass in override values from the command line and have them cascade into the pillar files that use those variables. The plan is to have a default.sls file containing variables that all the other pillars will use, and to be able to override the default.sls variables from the command line.
Here is what I'm doing:
File structure:
/pillar/top.sls
/pillar/default.sls
/pillar/test.sls
In /pillar/top.sls
base:
  '*':
    - default
  'test':
    - match: glob
    - test
In /pillar/default.sls
{% set home_location = salt['cmd.shell']('eval echo "~ec2-user"') %}
environment: dev
uniqueid: all
side: a
region: myregion
app: myapp
ipaddress: {{ grains['ip4_interfaces']['eth0'][0] }}
hostname: {{ grains['host'] }}
homedir: {{ home_location }}
docker:
  - pip:
      - version: 4.0.2
In /pillar/test.sls
test:
  - log:
      - group: logs-{{ pillar['environment'] }}-{{ pillar['uniqueid'] }}-{{ pillar['app'] }}
      - stream: logs_{{ pillar['side'] }}_mysupertest_{{ pillar['ipaddress'] }}
Here is the command I am running locally (masterless):
salt-call --id 'test' pillar.items --local pillar='{"environment":"uat","side":"b"}'
It throws the following error:
SaltRenderError: Jinja variable 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'environment'
[CRITICAL] Pillar render error: Rendering SLS 'test' failed. Please see master log for details.
So it's not getting the values from the default.sls file. What am I doing wrong?
Also, I have tried adding the following to the test.sls file, with the same results:
include:
  - pillar://default
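For what it's worth, Salt's include directive takes a plain pillar name rather than a pillar:// URL. Even with a correct include, though, each pillar SLS is rendered before the merged pillar data exists, so {{ pillar['environment'] }} can still be undefined while test.sls renders. A minimal, untested sketch of the include form plus a shared-file workaround (the defaults.yaml file is hypothetical and would live alongside the pillar files):

include:
  - default

{% import_yaml 'defaults.yaml' as defaults %}

test:
  - log:
      - group: logs-{{ defaults.environment }}-{{ defaults.uniqueid }}-{{ defaults.app }}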
I have a role to set up a NATS cluster. I've used host_vars to define which node is the master node, like below:
is_master: true
Then in the setup-nats.yml task file, I used the following to extract the master node's IP address based on that host_var and then use it as a variable for the Jinja2 template. However, the variable doesn't get passed down to the template and I get the error 'master_ip' is undefined.
- name: Set master IP
  set_fact:
    set_master_ip: "{{ ansible_facts['default_ipv4']['address'] }}"
    cacheable: yes
  when: is_master

- name: debug
  debug:
    msg: "{{ set_master_ip }}"
  run_once: true
- name: generate nats-server.conf for the slave nodes
  template:
    src: nats-server-slave.conf.j2
    dest: /etc/nats-server.conf
    owner: nats
    group: nats
    mode: 0644
  when:
    - is_master == false
  vars:
    master_ip: "{{ set_master_ip }}"
  notify: nats-server
The variable is used like below in the Jinja2 template:
routes = [
  nats-route://ruser:{{ nats_server_password }}@{{ master_ip }}:6222
]
}
Questions:
Is this approach according to the best practices?
What is the correct way of doing the above so the variable is passed down to the template?
Test Output:
I'm using Molecule to test my Ansible role, and even though the IP address is visible in the debug task, it doesn't get passed down to the template:
TASK [nats : Set master IP] ****************************************************
ok: [target1]
skipping: [target2]
skipping: [target3]
TASK [nats : debug] ************************************************************
ok: [target1] =>
msg: 10.0.2.15
TASK [nats : generate nats-server.conf for the slave nodes] ********************
skipping: [target1]
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: {{ set_master_ip }}: 'set_master_ip' is undefined
fatal: [target2]: FAILED! => changed=false
msg: 'AnsibleUndefinedVariable: {{ set_master_ip }}: ''set_master_ip'' is undefined'
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: {{ set_master_ip }}: 'set_master_ip' is undefined
fatal: [target3]: FAILED! => changed=false
msg: 'AnsibleUndefinedVariable: {{ set_master_ip }}: ''set_master_ip'' is undefined'
Any help is appreciated, thanks in advance.
UPDATE: I suspect the issue has something to do with the variable being scoped to the host it was set on, but I cannot find a way to fix it (I might be wrong though).
Far from best practice IMO, but to answer your direct question: your problem is not passing the variable to your template, but the fact that it is not assigned to all hosts in your play loop (and hence is undefined on any non-master node). The following (untested) addresses that issue while keeping the same task structure; note that set_fact combined with run_once assigns the fact to every host in the play, which is what makes this work.
- name: Set master IP for all nodes
  ansible.builtin.set_fact:
    master_ip: "{{ hostvars | dict2items | map(attribute='value')
      | selectattr('is_master', 'defined') | selectattr('is_master')
      | map(attribute='ansible_facts.default_ipv4.address') | first }}"
    cacheable: yes
  run_once: true
- name: Show calculated master IP (making sure it is assigned everywhere)
  ansible.builtin.debug:
    msg: "{{ master_ip }}"
- name: generate nats-server.conf for the slave nodes
  ansible.builtin.template:
    src: nats-server-slave.conf.j2
    dest: /etc/nats-server.conf
    owner: nats
    group: nats
    mode: 0644
  when: not is_master | bool
  notify: nats-server
Ideas for enhancement (non-exhaustive):
Select your master based on a group membership in the inventory rather than on a host attribute. This makes gathering the IP easier (e.g. master_ip: "{{ hostvars[groups.master | first].ansible_facts.default_ipv4.address }}"); see the sketch below.
Set the IP as a play var, or directly inside the inventory for the node group, rather than in a set_fact task.
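A minimal, untested sketch of the group-based idea, using a YAML inventory (group and host names are illustrative):

# inventory.yml (hypothetical layout: one-node 'master' group, the rest in 'slaves')
all:
  children:
    master:
      hosts:
        target1:
    slaves:
      hosts:
        target2:
        target3:

# play snippet: every host resolves the same master_ip, no set_fact needed
- hosts: all
  vars:
    master_ip: "{{ hostvars[groups['master'] | first].ansible_facts.default_ipv4.address }}"

Note this still relies on fact gathering having run for the master host.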
I am using an if condition that references a grain item inside a state triggered by the reactor,
and I got the error message: Jinja variable 'dict object' has no attribute 'environment'
=================================================
REACTOR config:
cat /etc/salt/master.d/reactor.conf
reactor:
  - 'my/custom/event':
    - salt://reactor/test.sls
==============================
test.sls
cat /srv/salt/reactor/test.sls
sync_grains:
  local.saltutil.sync_grains:
    - tgt: {{ data['id'] }}

{% if grains['environment'] in ["prod", "dev", "migr"] %}
test_if_this_works:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - dummy_state
{% endif %}
===================================
dummy_state/init.sls
cat /srv/salt/dummy_state/init.sls
create_a_directory:
  file.directory:
    - name: /tmp/my_test_dir
    - user: root
    - group: root
    - makedirs: True
=================================================
salt 'salt-redhat-23.test.local' grains.item environment
salt-redhat-23.test.local:
    ----------
    environment:
        prod
=================================================
salt-redhat-23 ~]# cat /etc/salt/grains
role: MyServer
environment: prod
================================================
If I change test.sls to use a grain that the salt-master provides by default instead of the custom grain, it works. It also works without the if condition in the state.
Do you know why this is happening?
Thank you all in advance.
Issue resolved.
You cannot use custom grains with the reactor directly; as far as I can tell, the reactor SLS is rendered on the master, where the minion's custom grains are not available. You need to call another state to be able to add the condition there.
for instance:
cat /etc/salt/master.d/reactor.conf
reactor:
  - 'my/custom/event':
    - salt://reactor/test.sls
test.sls
# run a state using reactor
test_if_this_works:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - reactor.execute
execute.sls
{% set tst = grains['environment'] %}
{% if tst in ['prod', 'dev', 'test', 'migr'] %}
create_a_directory:
  file.directory:
    - name: /tmp/my_test_dir
    - user: root
    - group: root
    - makedirs: True
{% endif %}
This works with the if condition; if you try to put the if statement in test.sls instead, it will not.
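An alternative, untested sketch: the reactor SLS does have access to the event payload, so the minion could send the grain value along with the event and the reactor could branch on data['data'] directly (the environment payload key here is simply whatever the minion chooses to send):

# on the minion, e.g.:
#   salt-call event.send 'my/custom/event' environment=prod

# /srv/salt/reactor/test.sls
{% if data['data'].get('environment') in ['prod', 'dev', 'migr'] %}
test_if_this_works:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - dummy_state
{% endif %}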
Can someone help me find the best way to declare all variables inside a file and reference that file's path in an Ansible playbook? Here's my playbook:
---
- hosts: myhost
  vars:
    - var: /root/dir.txt
  tasks:
    - name: print variables
      debug:
        msg: "username: {{ username }}, password: {{ password }}"
These are the contents inside dir.txt
username=test1
password=mypassword
When I run this, I get the following error:
TASK [print variables] *********************************************************************************************************
fatal: [121.0.0.7]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'username' is undefined\n\nThe error appears to be in '/root/test.yml': line 6, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: print variables\n ^ here\n"}
Expected output is to print the variables by this method
Any help would be appreciated
There are two reasons why your code won't work:
A variables file must be in YAML or JSON format for Ansible to load it.
Such variables files are loaded with the vars_files directive or the include_vars task, rather than with vars.
So a YAML file named /root/dir.yml:
username: test1
password: mypassword
Can be used in a playbook like:
---
- hosts: myhost
  vars_files:
    - /root/dir.yml
  tasks:
    - name: print variables
      debug:
        msg: "username: {{ username }}, password: {{ password }}"
test_env_template.yml
variables:
  - name: DB_HOSTNAME
    value: 10.123.56.222
  - name: DB_PORTNUMBER
    value: 1521
  - name: USERNAME
    value: TEST
  - name: PASSWORD
    value: TEST
  - name: SCHEMANAME
    value: SCHEMA
  - name: ACTIVEMQNAME
    value: 10.123.56.223
  - name: ACTIVEMQPORT
    value: 8161
and many more variables in the list.
I want to iterate through all the variables in test_env_template.yml with a loop to replace the values in a file. Is there a way to do that, rather than referencing each value separately like ${{ variables.ACTIVEMQNAME }}, given that the number of variables in the template is dynamic?
In short, no. There is no easy way to get only the pipeline variables that come from your template. You can read environment variables, but there you will get regular env variables plus pipeline variables mapped to env variables.
You can list them via env | sort, but I'm pretty sure that this is not what you want.
You can't display only template-specific variables, but you can get all pipeline variables this way:
steps:
  - pwsh: |
      Write-Host "${{ convertToJson(variables) }}"
and then you will get
{
system: build,
system.hosttype: build,
system.servertype: Hosted,
system.culture: en-US,
system.collectionId: be1a2b52-5ed1-4713-8508-ed226307f634,
system.collectionUri: https://dev.azure.com/thecodemanual/,
system.teamFoundationCollectionUri: https://dev.azure.com/thecodemanual/,
system.taskDefinitionsUri: https://dev.azure.com/thecodemanual/,
system.pipelineStartTime: 2021-09-21 08:06:07+00:00,
system.teamProject: DevOps Manual,
system.teamProjectId: 4fa6b279-3db9-4cb0-aab8-e06c2ad550b2,
system.definitionId: 275,
build.definitionName: kmadof.devops-manual 123 ,
build.definitionVersion: 1,
build.queuedBy: Krzysztof Madej,
build.queuedById: daec281a-9c41-4c66-91b0-8146285ccdcb,
build.requestedFor: Krzysztof Madej,
build.requestedForId: daec281a-9c41-4c66-91b0-8146285ccdcb,
build.requestedForEmail: krzysztof.madej@hotmail.com,
build.sourceVersion: 583a276cd9a0f5bf664b4b128f6ad45de1592b14,
build.sourceBranch: refs/heads/master,
build.sourceBranchName: master,
build.reason: Manual,
system.pullRequest.isFork: False,
system.jobParallelismTag: Public,
system.enableAccessToken: SecretVariable,
DB_HOSTNAME: 10.123.56.222,
DB_PORTNUMBER: 1521,
USERNAME: TEST,
PASSWORD: TEST,
SCHEMANAME: SCHEMA,
ACTIVEMQNAME: 10.123.56.223,
ACTIVEMQPORT: 8161
}
If you prefix them, then you can try to filter them; a sketch of that follows below.
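A minimal sketch of the prefix idea (the MYAPP_ prefix is hypothetical, and plain grep over env is used here rather than jq, since the convertToJson output above is not strict JSON):

steps:
  - bash: |
      # pipeline variables are mapped to environment variables,
      # so a shared prefix makes them easy to single out
      env | sort | grep '^MYAPP_'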
I wonder if it is possible to mix map arguments in an Azure Pipelines template YAML, and how to do it.
The two scenarios shown below do the same thing: place a template parameter as an env argument in a task. In the second, though, I'm trying to do it through two maps instead of a single one. That could be useful when the values serve different purposes (in the eyes of someone extending the template) but both are going to be used as 'env' under the hood.
This works fine:
Main Pipeline:
...
extends:
  template: templates/deploy/v1/deployment.job.yaml#infrastructure-templates
  parameters:
    name: dev
    variableGroup: 'AzureDevopsVariableGroupName'
    secretEnvVariables:
      SECRET1: ${SECRET1}
      SECRET2: ${SECRET2}
Target Template:
parameters:
  - name: secretEnvVariables
    type: object

jobs:
  ...
  steps:
    - bash: |
        #!/bin/bash
        echo "SECRET1 = ${SECRET1}"
        ...
      displayName: Substitute Env VARS on files
      enabled: true
      env:
        ${{ parameters.secretEnvVariables }}
This doesn't work (and I wonder if it is possible to make it work):
Main Pipeline:
...
extends:
  template: templates/deploy/v1/deployment.job.yaml#infrastructure-templates
  parameters:
    name: dev
    variableGroup: 'AzureDevopsVariableGroupName'
    secretEnvVariables:
      SECRET1: ${SECRET1}
      SECRET2: ${SECRET2}
    moreVariables:
      VAR1: ${VAR1}
Target Template:
parameters:
  - name: secretEnvVariables
    type: object
  - name: moreVariables
    type: object

jobs:
  ...
  steps:
    - bash: |
        #!/bin/bash
        echo "SECRET1 = ${SECRET1}"
        echo "VAR = ${VAR1}"
        ...
      displayName: Substitute Env VARS on files
      enabled: true
      env:
        ${{ parameters.secretEnvVariables }}
        ${{ parameters.moreVariables }}
Can it be done? How to do it?
I am doing something similar; this isn't well documented, but you can use objects to accommodate this.
Here is the combo of environment and region deployment:
- name: environmentObjects
  type: object
  default:
    - environmentName: 'dev'
      regionAbrvs: ['eus']
    - environmentName: 'uat'
      regionAbrvs: ['eus', 'cus']
From there it would be a loop to access each one, like:
- ${{ each environmentObject in parameters.environmentObjects }}:
  - ${{ each regionAbrv in environmentObject.regionAbrvs }}:
This should work for your scenario as well.
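A minimal, untested sketch of how the nested loop could expand into concrete stages (the stage, job, and step names are illustrative):

parameters:
  - name: environmentObjects
    type: object
    default:
      - environmentName: 'dev'
        regionAbrvs: ['eus']
      - environmentName: 'uat'
        regionAbrvs: ['eus', 'cus']

stages:
  - ${{ each environmentObject in parameters.environmentObjects }}:
    - ${{ each regionAbrv in environmentObject.regionAbrvs }}:
      - stage: deploy_${{ environmentObject.environmentName }}_${{ regionAbrv }}
        jobs:
          - job: deploy
            steps:
              - script: echo "deploying ${{ environmentObject.environmentName }} to ${{ regionAbrv }}"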