I am trying to run a task with the no_log: true attribute to keep a password out of the logs, but I am receiving the failure "the output has been hidden due to the fact that 'no_log: true' was specified for this result".
Here is the task:
- name: set variable for secured vault password
  set_fact:
    secure_vault_password: "{{ secure_vault_password_string.stdout_lines[0] }}"
    vault_password: ''
  when: secure_vault_password == ''
  no_log: true
I have tried commenting out the no_log line and also changing true to false, but my playbook still fails. Any ideas, please?
I'm trying to use Azure Pipelines to build a tar.gz package on a Linux build server. Is there a way in VSTS/Azure Pipelines code to increment the default major:minor:patch based on the source branch?
Example:
If source branch is Develop: major:minor(increment):patch
If source branch is bugfix: major:minor:patch(increment)
If source branch is main: major(increment):minor:patch
Please let me know your suggestions. Thank you
There is no built-in feature for your requirement.
There is a counter expression in DevOps pipelines; it can auto-increase a variable on each pipeline run, but it is forcibly increased every time the pipeline runs and cannot be increased or stopped arbitrarily.
The counter expression is per pipeline, and it has two parameters, prefix
and seed. The seed is scoped to the prefix.
When a prefix is set and run for the first time, the counter
result will start from the seed value. But for later runs based on
the same prefix, the counter result will ignore the seed value; it
will be 'last counter result + 1'.
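For reference, a minimal sketch of the counter expression (the variable name and the choice of prefix here are just illustrative — using the branch name as the prefix gives an independent counter per branch):

```yaml
variables:
  # starts at 0 the first time a given branch-name prefix is seen,
  # then returns 'last result + 1' on every subsequent run
  patch: $[counter(variables['Build.SourceBranchName'], 0)]
```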
Since the counter expression has this limitation, I wrote a freer design for you to manage the major, minor, and patch.
The below is the YAML definition of the pipeline.
trigger:
- none

pool:
  vmImage: ubuntu-latest

parameters: # The defaults for increasing major, minor, and patch are all false. Change them to true to auto-increase by default.
- name: increase_major
  default: false
  type: boolean
  values:
  - true
  - false
- name: increase_minor
  default: false
  type: boolean
  values:
  - true
  - false
- name: increase_patch
  default: false
  type: boolean
  values:
  - true
  - false

variables:
- name: Personal_Access_Token
  value: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # Put your personal access token here.
- ${{ if eq(parameters.increase_major, false) }}:
  - name: increase_major
    value: false
- ${{ if eq(parameters.increase_major, true) }}:
  - name: increase_major
    value: true
- ${{ if eq(parameters.increase_minor, false) }}:
  - name: increase_minor
    value: false
- ${{ if eq(parameters.increase_minor, true) }}:
  - name: increase_minor
    value: true
- ${{ if eq(parameters.increase_patch, false) }}:
  - name: increase_patch
    value: false
- ${{ if eq(parameters.increase_patch, true) }}:
  - name: increase_patch
    value: true

steps:
#===============================
# Put your pipeline steps here.
#===============================
# The task below auto-increases the major, minor, and patch variables.
- task: PythonScript@0
  displayName: Change the Major, Minor, Patch.
  inputs:
    scriptSource: inline
    script: |
      import json
      import requests

      # Variable definitions
      org_name = "BowmanCP"
      project_name = "BowmanCP"
      pipeline_definition_id = "376"
      personal_access_token = "$(Personal_Access_Token)"
      key = 'variables'
      increase_major_control = '$(increase_major)'
      increase_minor_control = '$(increase_minor)'
      increase_patch_control = '$(increase_patch)'
      major = "major"
      minor = "minor"
      patch = "patch"

      # Logic for auto-increasing major, minor, and patch
      def get_value_of_specific_variable(json_content, key, var_name):
          data = json.loads(json_content)
          return data[key][var_name].get('value')

      def auto_increase(json_content, key, var_name):
          data = json.loads(json_content)
          data[key][var_name]['value'] = str(int(get_value_of_specific_variable(json_content, key, var_name)) + 1)
          return data

      def get_pipeline_definition(org_name, project_name, pipeline_definition_id, personal_access_token):
          url = "https://dev.azure.com/"+org_name+"/"+project_name+"/_apis/build/definitions/"+pipeline_definition_id+"?api-version=6.0"
          headers = {
              'Authorization': 'Basic '+personal_access_token
          }
          response = requests.request("GET", url, headers=headers)
          return response.text

      def update_pipeline_definition(org_name, project_name, pipeline_definition_id, json_content, key, var_name):
          json_data = auto_increase(json_content, key, var_name)
          url = "https://dev.azure.com/"+org_name+"/"+project_name+"/_apis/build/definitions/"+pipeline_definition_id+"?api-version=6.0"
          headers = {
              'Authorization': 'Basic '+personal_access_token,
              'Content-Type': 'application/json'
          }
          requests.request("PUT", url, headers=headers, data=json.dumps(json_data))

      # Increase major, minor, patch as requested
      if increase_major_control == 'true':
          update_pipeline_definition(org_name, project_name, pipeline_definition_id, get_pipeline_definition(org_name, project_name, pipeline_definition_id, personal_access_token), key, major)
      if increase_minor_control == 'true':
          update_pipeline_definition(org_name, project_name, pipeline_definition_id, get_pipeline_definition(org_name, project_name, pipeline_definition_id, personal_access_token), key, minor)
      if increase_patch_control == 'true':
          update_pipeline_definition(org_name, project_name, pipeline_definition_id, get_pipeline_definition(org_name, project_name, pipeline_definition_id, personal_access_token), key, patch)
When you run it, ticking the checkboxes determines whether the major, minor, and patch variables are increased.
The major, minor, and patch variables themselves are defined under the pipeline's Variables.
Design ideas:
Use the Definitions - Get REST API to get the pipeline information.
Use code to perform the auto-increase.
Use the Definitions - Update REST API to change the pipeline definition (the variables of the pipeline).
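One detail worth noting about the script above: the `Authorization: Basic` header expects the base64 encoding of `:<PAT>` (empty username, PAT as password), not the raw token, so the `Personal_Access_Token` variable should hold the encoded value. A small sketch of how that encoding is built (the token value is a placeholder):

```python
import base64

def basic_auth_header(personal_access_token):
    # Azure DevOps Basic auth: base64-encode ":<PAT>" (empty username before the colon)
    token = base64.b64encode((":" + personal_access_token).encode("ascii")).decode("ascii")
    return "Basic " + token

print(basic_auth_header("xxxx"))
```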
I have a Python script to execute ansible-playbook programmatically; Python calls the Ansible API, but the play is not getting executed. I believe it is because start_at_task is set to None.
What should the value of start_at_task be? Could somebody help me?
Ansible Version: 2.9.9
Python Version: 3.6.8
This is my run_playbook method:
def run_playbook(play_book, extra_vars, servers, inventory_path, tags='all'):
    base_playbook_path = os.environ.get('PLAYBOOK_PATH', '/hom/playbooks/')
    playbook_path = base_playbook_path + play_book
    context.CLIARGS = ImmutableDict(tags=tags, connection='paramiko', remote_user='xyz', listtags=False,
                                    listtasks=False, listhosts=False, syntax=False, module_path=None, forks=100,
                                    private_key_file='/var/lib/jenkins/.ssh/xyz.pem', ssh_common_args=None,
                                    ssh_extra_args=None, sftp_extra_args=None, scp_extra_args=None, become=None,
                                    become_method=None, become_user=None, verbosity=True, check=False,
                                    start_at_task=None)
    loader = DataLoader()
    loader.load_from_file(base_playbook_path + '.vault_pass.txt')
    inventory = InventoryManager(loader=loader, sources=inventory_path)
    inventory.subset(servers)
    variable_manager = VariableManager(loader=loader, inventory=inventory)
    variable_manager._extra_vars = extra_vars
    passwords = {}
    playbook = PlaybookExecutor(playbooks=[playbook_path],
                                inventory=inventory,
                                variable_manager=variable_manager,
                                loader=loader, passwords=passwords)
    result = playbook.run()
    return result
and this is a simple playbook that prints the kernel version:
---
- name: Get Kernel Versions
  gather_facts: no
  hosts: all
  become: yes
  become_method: sudo
  tasks:
    - name: Fetch Kernel Version
      shell: cat /etc/redhat-release
      register: os_release
    - debug:
        msg: "{{ os_release.stdout }}"
Output:
PLAY [Get Kernel Versions] ****************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************
0
Here is an Ansible playbook to check whether a config file exists, capture that output using register, and send the output to a CSV file. If the file exists, it should produce 1 in the CSV file. But I am getting the error: "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout'\n\nThe error appears to be in ..../..../../main.yml, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be\n\n\n"
---
- hosts: all
  tasks:
    - name: Gather the metadata
      shell: curl -H Metadata:true "http://169.254.169.254/metadata/instance"
      register: vm_medtadata
    - name: Assign the meta json to variable
      set_facts:
        meta: "{{ vm_metadata.stdout }}"
    - name: setting the facts for csv
      set_fact:
        vm_resourcegroup: "{{ meta.compute.resourceGroupName }}"
    - name: check config file exist
      stat:
        path: /etc/example.config
      register: file_status
    - name: create local file with file existance status
      local_action: copy content = "{{vm_resourcegroup}} {{ansible_hostname}} {{file_status.stdout}}" \n dest= "{{build_source_dir}}/src/ansible/ansible_file_status{{anisible_hostname}}.csv "
...
local_action: copy content = "{{vm_resourcegroup}} {{ansible_hostname}} {{file_status.stdout}}" \n dest= "{{build_source_dir}}/src/ansible/ansible_file_status{{anisible_hostname}}.csv "
You have a misunderstanding about stat: it does not produce a .stdout property, but rather a .stat property with several sub-fields (for example, file_status.stat.exists).
Also, your local_action appears to have a stray \n in it; perhaps you meant to include that inside the double quotes?
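Applying both fixes, the final task might look something like this sketch (file_status.stat.exists in place of .stdout, mapped to 1/0 to match the goal of writing 1 when the file exists; path and variable names taken from the question):

```yaml
- name: create local file with file existence status
  local_action:
    module: copy
    content: "{{ vm_resourcegroup }} {{ ansible_hostname }} {{ 1 if file_status.stat.exists else 0 }}\n"
    dest: "{{ build_source_dir }}/src/ansible/ansible_file_status{{ ansible_hostname }}.csv"
```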
I need to pass the word true or false to a data template file in Terraform. However, if I try to provide the value directly, it comes out as 0 or 1 due to the interpolation syntax. I tried \\true\\ as recommended in https://www.terraform.io/docs/configuration/interpolation.html, but that results in \true\, which obviously isn't right. Same with \\false\\ = \false\.
To complicate matters, I also have a scenario where I need to pass it the value of a variable, which can be either true or false.
Any ideas?
# control whether to enable REST API and set other port defaults
data "template_file" "master_spark_defaults" {
template = "${file("${path.module}/templates/spark/spark-defaults.conf")}"
vars = {
spark_server_port = "${var.application_port}"
spark_driver_port = "${var.spark_driver_port}"
rest_port = "${var.spark_master_rest_port}"
history_server_port = "${var.history_server_port}"
enable_rest = "${var.spark_master_enable_rest}"
}
}
var.spark_master_enable_rest can be either true or false. I tried setting the variable as "\\${var.spark_master_enable_rest}\\", but again this resulted in either \true\ or \false\.
Edit 1:
Here is the relevant portion of conf file in question:
spark.ui.port ${spark_server_port}
# set to default worker random number.
spark.driver.port ${spark_driver_port}
spark.history.fs.logDirectory /var/log/spark
spark.history.ui.port ${history_server_port}
spark.worker.cleanup.enabled true
spark.worker.cleanup.appDataTtl 86400
spark.master.rest.enabled ${enable_rest}
spark.master.rest.port ${rest_port}
I think you must be overthinking this.
If I set my var value as
spark_master_enable_rest = "true"
then I get:
spark.worker.cleanup.enabled true
spark.worker.cleanup.appDataTtl 86400
spark.master.rest.enabled true
in my result when I apply.
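In other words (assuming pre-0.12 Terraform, which the 0/1 normalization behavior suggests), the fix is usually to declare the variable as a string rather than a bare boolean, so the literal word is passed through to the template; a sketch:

```hcl
variable "spark_master_enable_rest" {
  # declared as a string so Terraform does not normalize the value to 0/1
  type    = "string"
  default = "true"
}
```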
I ended up creating a cloud-config script to find/replace the 0/1 in the file:
part {
  content_type = "text/x-shellscript"
  content      = <<SCRIPT
#!/bin/sh
sed -i.bak -e '/spark.master.rest.enabled/s/0/false/' -e '/spark.master.rest.enabled/s/1/true/' /opt/spark/conf/spark-defaults.conf
SCRIPT
}
I have stored an entire YAML file in a Map yamlConfig, which I am able to print and check.
The output when I run the code yamlConfig.each { k, v -> println "${k}:${v}" } is:
Host:localhost
Port:10000
application:[name:ABC, preferences:[UUID:d3f3278e, server:localhost:2222]]
services:[[name:XYZ, instances:1, start:true]]
dataSets:[[name:ABC], [name:XYZ]]
Now, I am trying to fetch a value from the Map using the following code:
println yamlConfig.get("services").getAt("name")
However, I am getting the value [XYZ]. Instead, I need the result XYZ, without the square brackets.
The YAML file I am using:
Host: localhost
Port: 10000
application:
name: ABC
preferences:
UUID: d3f3278e
server: localhost:2222
services:
- name: XYZ
instances: 1
start: true
data:
- name: ABC
- name: XYZ
This is because of the - character placed before your name property. It makes the YAML parser treat what's inside the services section as an array.
When you ask for the name property with yamlConfig['services']['name'], Groovy gives you all the matching properties of the items in the services array, and it can only return them in another array.
So either remove the - or use yamlConfig['services'][0]['name'].
yamlConfig.get("services")
returns a list, not a single service; therefore, when you apply getAt to the returned list of services, it returns a list of names.
yamlConfig.get("services").getAt('name')
actually does
yaml['services'].collect { it['name'] }
so in order to get the name of a specific service, you need to do something like this:
println yaml['services'][0]['name']
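For comparison, the same shape expressed in Python (the data here just mirrors the YAML above, purely for illustration):

```python
yaml_config = {
    "services": [
        {"name": "XYZ", "instances": 1, "start": True},
    ],
}

# indexing into the list first yields a single service map
print(yaml_config["services"][0]["name"])   # XYZ

# mapping over the whole list (what getAt/collect does) yields a list of names
print([s["name"] for s in yaml_config["services"]])  # ['XYZ']
```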