I am using an if condition that references a grain item inside a state triggered by the reactor,
and I get the error message: Jinja variable 'dict object' has no attribute 'environment'
=================================================
REACTOR config:
cat /etc/salt/master.d/reactor.conf
reactor:
  - 'my/custom/event':
    - salt://reactor/test.sls
==============================
test.sls
cat /srv/salt/reactor/test.sls
sync_grains:
  local.saltutil.sync_grains:
    - tgt: {{ data['id'] }}
{% if grains['environment'] in ["prod", "dev", "migr"] %}
test_if_this_works:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - dummy_state
{% endif %}
===================================
dummy_state/init.sls
cat /srv/salt/dummy_state/init.sls
create_a_directory:
  file.directory:
    - name: /tmp/my_test_dir
    - user: root
    - group: root
    - makedirs: True
=================================================
salt 'salt-redhat-23.test.local' grains.item environment
salt-redhat-23.test.local:
    ----------
    environment:
        prod
=================================================
salt-redhat-23 ~]# cat /etc/salt/grains
role: MyServer
environment: prod
================================================
If I change test.sls to use one of the grains the salt-master provides by default instead of the custom grain, it works. It also works if I remove the if condition from the state.
Do you know why this is happening?
Thank you all in advance.
Issue resolved.
You cannot use custom grains in a reactor SLS directly, because the reactor SLS is rendered on the salt-master, which does not have the minion's custom grains. You need to have the reactor apply another state on the minion and put the condition there.
For instance:
cat /etc/salt/master.d/reactor.conf
reactor:
  - 'my/custom/event':
    - salt://reactor/test.sls
test.sls
# run a state using reactor
test_if_this_works:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - reactor.execute
execute.sls
{% set tst = grains['environment'] %}
{% if tst in ['prod', 'dev', 'test', 'migr'] %}
create_a_directory:
  file.directory:
    - name: /tmp/my_test_dir
    - user: root
    - group: root
    - makedirs: True
{% endif %}
This works with the if condition because execute.sls is rendered on the minion, where the custom grain is available. If you put the if statement in test.sls itself, it will not work.
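As an alternative, here is a minimal sketch (untested): instead of rendering the grain in the reactor SLS at all, push the condition into minion-side targeting with a grain-based tgt_type, so the master never needs to see the custom grain. On older Salt releases the key may be expr_form instead of tgt_type.
test_if_this_works:
  local.state.apply:
    - tgt: 'environment:(prod|dev|migr)'
    - tgt_type: grain_pcre
    - arg:
      - dummy_state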
Related
I'm trying to create a sudo file for each user.
Playbook:
- name:
  hosts: all
  gather_facts: false
  tasks:
    - name:
      template:
        src: sudo.j2
        dest: "/etc/sudoers.d/{{ item.name }}"
      loop: "{{ userinfo }}"
      when: "'admins' in item.groupname"
Var file:
userinfo:
  - groupname: admins
    name: bill
  - groupname: admins
    name: bob
  - groupname: devs
    name: bea
Jinja file:
{% for item in userinfo %}
{% if item.groupname=="admins" %}
{{item.name}} ALL=ALL NOPASSWD:ALL
{% endif %}
{% endfor %}
What I get is two files, but each contains the information for both users:
bill ALL=ALL NOPASSWD:ALL
bob ALL=ALL NOPASSWD:ALL
How do I make it work so that each file contains only that user's information?
The issue is that you have two loops: one in the playbook and one in the Jinja template. Try leaving only the templated line in the template file:
{{ item.name }} ALL=ALL NOPASSWD:ALL
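Putting it together, a minimal sketch of the fixed setup. The loop and when condition in the playbook already restrict rendering to the admins, so each rendered file contains only the current item; the validate option is an optional extra (not in the original playbook) that rejects files with sudoers syntax errors before they are installed.
- name: create per-user sudoers files
  hosts: all
  gather_facts: false
  tasks:
    - name: render one sudoers file per admin
      template:
        src: sudo.j2
        dest: "/etc/sudoers.d/{{ item.name }}"
        validate: "visudo -cf %s"   # optional: refuse to install a broken sudoers file
      loop: "{{ userinfo }}"
      when: "'admins' in item.groupname"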
After a production deployment, the application does not get the endpoint from environment.url in .gitlab-ci.yml, but a combination of the group name, project name and base domain:
<groupname>-<projectname>.basedomain.
The GitLab project belongs to a GitLab group, which has a Kubernetes cluster. The group has a base domain which is used in .gitlab-ci.yml:
//part of .gitlab-ci.yml
...
apply production secret configuration:
  stage: prepare-deploy
  extends: .auto-deploy
  needs: ["build", "generate production configuration"]
  dependencies:
    - generate production configuration
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - kubectl create secret generic tasker-secrets-development --from-file=config.tar --dry-run -o yaml | kubectl apply -f -
  environment:
    name: production
    url: http://app.$KUBE_INGRESS_BASE_DOMAIN
    action: prepare
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
...
I expected http://app.$KUBE_INGRESS_BASE_DOMAIN as the endpoint for the application.
The Ingress (I removed the minio part):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "appname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version| replace "+" "_" }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    cert-manager.io/cluster-issuer: {{ .Values.leIssuer }}
    acme.cert-manager.io/http01-edit-in-place: "true"
{{- if .Values.ingress.annotations }}
{{ toYaml .Values.ingress.annotations | indent 4 }}
{{- end }}
{{- with .Values.ingress.modSecurity }}
{{- if .enabled }}
    nginx.ingress.kubernetes.io/modsecurity-transaction-id: "$server_name-$request_id"
    nginx.ingress.kubernetes.io/modsecurity-snippet: |
      SecRuleEngine {{ .secRuleEngine | default "DetectionOnly" | title }}
{{- range $rule := .secRules }}
{{ (include "secrule" $rule) | indent 6 }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.prometheus.metrics }}
    nginx.ingress.kubernetes.io/server-snippet: |-
      location /metrics {
        deny all;
      }
{{- end }}
spec:
{{- if .Values.ingress.tls.enabled }}
  tls:
    - hosts:
{{- if .Values.service.commonName }}
        - {{ template "hostname" .Values.service.commonName }}
{{- end }}
        - {{ template "hostname" .Values.service.url }} <<<<<<<<<<<<<<<<<<<
{{- if .Values.service.additionalHosts }}
{{- range $host := .Values.service.additionalHosts }}
        - {{ $host }}
{{- end -}}
{{- end }}
      secretName: {{ .Values.ingress.tls.secretName | default (printf "%s-cert" (include "fullname" .)) }}
{{- end }}
  rules:
    - host: {{ template "hostname" .Values.service.url }} <<<<<<<<<<<<<<<<<
      http:
        &httpRule
        paths:
          - path: /
            backend:
              serviceName: {{ template "fullname" . }}
              servicePort: {{ .Values.service.externalPort }}
{{- if .Values.service.commonName }}
    - host: {{ template "hostname" .Values.service.commonName }}
      http:
        <<: *httpRule
{{- end -}}
{{- if .Values.service.additionalHosts }}
{{- range $host := .Values.service.additionalHosts }}
    - host: {{ $host }}
      http:
        <<: *httpRule
{{- end -}}
{{- end -}}
What I have done so far:
- Removed the deployment from the cluster, cleared the GitLab runner's cache, cleared the GitLab cluster cache. Deleted the environment (stop and delete). Created a new environment 'production' with the right URL under Operations > Environments > production > Edit. After a push, the URL was replaced with the wrong one again.
- Hard-coded the URL in the Ingress (at the arrows in the snippet); that worked.
- Changed the value in .gitlab-ci.yml to omit http://. No result.
- Checked that 'apply production secret configuration' in .gitlab-ci.yml is actually used, by adding echo 'message!'. Conclusion: this part of the file runs for production.
- Set a CI/CD variable under Settings > CI/CD: GITLAB_ENVIRONMENT_URL. No effect.
UPDATE:
Maybe .Values.gitlab.app is used for the URL.
The file .gitlab-ci.yml includes a template which overrides the value:
//.gitlab-ci.yml
include:
  - template: Jobs/Deploy.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab-foss/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml
The override in the template:
.production: &production_template
  extends: .auto-deploy
  stage: production
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy delete canary
    - auto-deploy delete rollout
    - auto-deploy persist_environment_url
  environment:
    name: production
    url: http://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN <<<<<<<<<<<<<<
  artifacts:
    paths: [environment_url.txt, tiller.log]
    when: always
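If the included template's environment URL is indeed winning, one possible fix (a sketch, relying on GitLab merging project-level job keys over those from included templates) is to redeclare the production job in the project's own .gitlab-ci.yml and set the URL back:
include:
  - template: Jobs/Deploy.gitlab-ci.yml

production:
  environment:
    name: production
    url: http://app.$KUBE_INGRESS_BASE_DOMAIN   # overrides the template's url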
How do I print a newline character when sending emails? I'm sending to Gmail. The character \n is printed literally. I even tried a </br> tag and YAML multiline, and none of them work.
- alert: KubernetesPodImagePullBackOff
  expr: kube_pod_container_status_waiting_reason{reason=~"ContainerCreating|CrashLoopBackOff|ErrImagePull|ImagePullBackOff"} > 0
  for: 1s
  labels:
    severity: warning
  annotations:
    summary: "Kubernetes pod crash looping (instance {{ $labels.instance }})"
    description: "Pod {{ $labels.pod }} is crash looping\n VALUE = {{ $value }}\n LABELS: {{ $labels }}"
You need to override the default email template in Alertmanager.
Replace something like
{{ .Annotations.description }}
in the template with
{{ .Annotations.description | safeHtml }}
I wrote my own email template; if you don't have one, you can create it from
https://github.com/prometheus/alertmanager/blob/master/template/default.tmpl
and edit
{{ range .Annotations.SortedPairs }} - {{ .Name }} = {{ .Value }}
in the same way, with
{{ .Value | safeHtml }}
Also read this answer:
prometheus using html content in alerts annotations and using it in email template
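For completeness, a minimal sketch of wiring a custom template into Alertmanager; the file path, receiver name, and template name below are assumptions for illustration, not from the original question:
# alertmanager.yml (sketch)
templates:
  - /etc/alertmanager/templates/*.tmpl   # assumed location of the edited template file

receivers:
  - name: email-alerts                   # hypothetical receiver name
    email_configs:
      - to: you@example.com
        # point the email body at the custom template definition
        html: '{{ template "email.custom.html" . }}'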
I'm very new to SaltStack and still learning, so if there is a better way of doing things please let me know.
I'm trying to pass in override values from the command line and have those overrides cascade into the other pillar files. The plan is a default.sls file containing variables that all the other pillars use, with the ability to override the default.sls variables from the command line.
Here is what I'm doing:
File structure:
/pillar/top.sls
/pillar/default.sls
/pillar/test.sls
In /pillar/top.sls
base:
  '*':
    - default
  'test':
    - match: glob
    - test
In /pillar/default.sls
{% set home_location = salt['cmd.shell']('eval echo "~ec2-user"') %}
environment: dev
uniqueid: all
side: a
region: myregion
app: myapp
ipaddress: {{ grains['ip4_interfaces']['eth0'][0] }}
hostname: {{ grains['host'] }}
homedir: {{ home_location }}
docker:
  - pip:
    - version: 4.0.2
In /pillar/test.sls
test:
  - log:
    - group: logs-{{ pillar['environment'] }}-{{ pillar['uniqueid'] }}-{{ pillar['app'] }}
    - stream: logs_{{ pillar['side'] }}_mysupertest_{{ pillar['ipaddress'] }}
Here is the command I am running locally (masterless):
salt-call --id 'test' pillar.items --local pillar='{"environment":"uat","side":"b"}'
It throws the following error:
SaltRenderError: Jinja variable 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'environment'
[CRITICAL] Pillar render error: Rendering SLS 'test' failed. Please see master log for details.
So it's not getting the values from the default.sls file. What am I doing wrong?
Also, I have tried adding the following to the test.sls file, with the same result:
include:
  - pillar://default
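As an aside (a syntax note, not a full answer): Salt pillar include statements take SLS names rather than pillar:// URLs, so an include of default.sls would normally be written as:
include:
  - default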
I have a file with variables that I use in my playbook:
net_interfaces:
  ...
  - name: "eth0"
    ip: "192.168.1.100"
    mask: "255.255.255.0"
    gateway: "192.168.1.1"
  ...
and I want to deploy some configs with these variables, for example ifcfg-eth0:
DEVICE={{ item.name }}
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR={{ item.ip }}
NETMASK={{ item.mask }}
GATEWAY={{ item.gateway }}
but sometimes there is no gateway variable for an item, and in that case I want to omit the line
GATEWAY={{ item.gateway }}
from the config file on the target machine. How can I achieve this without creating another task for certain hosts?
Add condition:
{% if item.gateway is defined %}
GATEWAY={{ item.gateway }}
{% endif %}
Another (and often better) way is to use the 'default' filter, which lets you check whether a variable is defined and fall back to a default value when it is not. Example:
{{ my_string_value | default("awesome") }}
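Applied to this template, a hypothetical example (the fallback address 192.168.1.1 is made up for illustration; use this only if a universal default gateway actually makes sense for your hosts):
GATEWAY={{ item.gateway | default("192.168.1.1") }}
Note that 'default' always emits the line; if there is no sensible fallback value, the 'is defined' condition above is the right tool, since it omits the line entirely.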