Import JSON as a variable in a shell script on Windows and Linux? (Azure DevOps)

I want to read data from a JSON file into my Azure DevOps pipeline as a variable.
I saw this post:
import Azure Devops pipeline variables from json file
But the answer there is only PowerShell-compatible.
Is there any solution that works with both Linux and Windows agents?

First, PowerShell works not only on Windows but also on Linux:
Install PowerShell on Linux
Second, for your requirement, I wrote a demo YAML for you (based on Python):
1. Get the value at a specific route in the JSON file and set it as a pipeline variable.
trigger:
- none

pool:
  vmImage: ubuntu-latest

steps:
- task: PythonScript@0
  inputs:
    scriptSource: 'inline'
    script: |
      import json

      def get_json_content_of_specific_route(json_file_path, json_route):
          length = len(json_route)
          # open the json file
          with open(json_file_path, 'r') as f:
              data = json.load(f)
          content = data
          # walk the route: data[json_route[0]][json_route[1]]...[json_route[length-1]]
          for i in range(length):
              if i == 0:
                  content = data[json_route[i]]
              else:
                  content = content[json_route[i]]
          return content

      json_file_path = 'Json_Files/parameters.json'  # path of the json file in the repository
      route = ['parameters', 'secretsPermissions', 'value']  # route of the content you want to get
      content = get_json_content_of_specific_route(json_file_path, route)
      print(content)
      # logging command: set the value as the pipeline variable myJobVar
      print("##vso[task.setvariable variable=myJobVar]" + str(content))

- powershell: |
    Write-Host $(myJobVar)  # the content is available after the logging command sets the variable
The variables obtained from the JSON this way are set temporarily and can be used within the pipeline run.
See using logging commands to set variables.
With the JSON file at Json_Files/parameters.json in the repository, the variable is set and read successfully during the pipeline run.
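For the cross-platform part of the question, the same ##vso logging command can also be emitted from a Bash step, so the approach is not tied to PowerShell. A hedged sketch (not part of the original answer; the value is a placeholder):

- bash: |
    echo "##vso[task.setvariable variable=myJobVar]some-value"   # set the variable from Bash (Linux agents)
- script: |
    echo myJobVar is $(myJobVar)                                 # read it back in a later step (works on Windows and Linux agents)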
2. If you want permanent variables, there are two situations.
1) If it is a classic pipeline, you need to use these REST APIs to achieve your requirement:
Get pipeline definition
Change pipeline definition
A Python script that can change the variables of a classic pipeline:
import json
import requests

org_name = "xxx"
project_name = "xxx"
pipeline_definition_id = "xxx"
personal_access_token = "xxx"  # the Basic auth value: base64 of ":<PAT>"

key = 'variables'
var_name = 'BUILDNUMBER'

# get the current pipeline definition
url = "https://dev.azure.com/"+org_name+"/"+project_name+"/_apis/build/definitions/"+pipeline_definition_id+"?api-version=6.0"
payload = {}
headers = {
    'Authorization': 'Basic ' + personal_access_token
}
response = requests.request("GET", url, headers=headers, data=payload)
print(response.text)
json_content = response.text

def get_content_of_json(json_content, key, var_name):
    data = json.loads(json_content)
    return data[key][var_name].get('value')

def change_content_of_json(json_content, key, var_name):
    data = json.loads(json_content)
    # increment the BUILDNUMBER variable by 1
    data[key][var_name]['value'] = str(int(get_content_of_json(json_content, key, var_name)) + 1)
    return data

json_data = change_content_of_json(json_content, key, var_name)

# put the updated definition back
url2 = "https://dev.azure.com/"+org_name+"/"+project_name+"/_apis/build/definitions/"+pipeline_definition_id+"?api-version=6.0"
payload2 = json.dumps(json_data)
headers2 = {
    'Authorization': 'Basic ' + personal_access_token,
    'Content-Type': 'application/json'
}
response2 = requests.request("PUT", url2, headers=headers2, data=payload2)
2) If your pipeline is a YAML (non-classic) pipeline, you need to change the variables section of the YAML definition.
After you change the YAML definition during the pipeline run, you can follow this to push the changes back:
Push Back Changes to Repository
Alternatively, you can base your pipeline on a variable group and update the variable group via the REST API:
Variablegroups - Update
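For illustration, a minimal sketch of that last option (not from the original answer): organization, project, group id, group name and variable values below are placeholders, and the exact request body should be checked against the Variablegroups - Update reference above.

import base64
import json
import requests

# Hedged sketch: update an existing variable group; all names, ids and tokens are placeholders.
org, project, group_id = "myorg", "myproject", 1
pat = "xxx"  # personal access token
auth = str(base64.b64encode(bytes(':' + pat, 'ascii')), 'ascii')

url = f"https://dev.azure.com/{org}/{project}/_apis/distributedtask/variablegroups/{group_id}?api-version=6.0-preview.2"
body = {
    "name": "my-variable-group",        # existing group name (placeholder)
    "type": "Vsts",
    "variables": {
        "BUILDNUMBER": {"value": "42"}  # variable to create or update
    }
}
headers = {"Authorization": "Basic " + auth, "Content-Type": "application/json"}
response = requests.put(url, headers=headers, data=json.dumps(body))
print(response.status_code, response.text)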

Related

Dialogflow CX python SDK not picking up credentials set in the environment variable GOOGLE_APPLICATION_CREDENTIALS

I have hosted an API to detect intent in Dialogflow CX using Python Flask. I have the environment variable GOOGLE_APPLICATION_CREDENTIALS correctly set to the service account credential JSON. While calling the API, it results in a Default Credentials error. If I try to print the value of GOOGLE_APPLICATION_CREDENTIALS within the Python code, it shows None, whereas if I check the environment variable on the server using printenv GOOGLE_APPLICATION_CREDENTIALS, I get the location of my JSON file. What could have gone wrong? Could anybody help with this, please?
UPDATE
Now I tried adding the environment variable in the code. This time detect_intent doesn't give any response; it just times out after 300 seconds.
Here's the code:
from flask import make_response, jsonify, request
from flask_restful import Resource, reqparse
import os
from google.cloud import dialogflowcx_v3

_KEY = 'abcd-chat-bot-e440cw2t611c.json'
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = _KEY

class DetectIntentLatest(Resource):
    @classmethod
    def post(cls):
        parser = reqparse.RequestParser()
        parser.add_argument('text')
        parser.add_argument('session_id')
        args = parser.parse_args()
        text = args['text']
        session_id = args['session_id']

        project_id = "abcd-chat-bot"
        location_id = "global"
        agent_id = "grt31207-18q3-1a22-6dqa-44856u783364"
        agent = f"projects/{project_id}/locations/{location_id}/agents/{agent_id}"
        language_code = "en-us"
        session_path = f"{agent}/sessions/{session_id}"
        print(f"Session path: {session_path}\n")

        client_options = None
        client = dialogflowcx_v3.SessionsClient()

        # Initialize request argument(s)
        query_input = dialogflowcx_v3.QueryInput()
        query_input.text.text = text
        query_input.language_code = language_code
        print("_____________________ QUERY INPUT _____________________")
        print(query_input)
        request = dialogflowcx_v3.DetectIntentRequest(
            session=session_path,
            query_input=query_input,
        )

        # Make the request
        response = client.detect_intent(request=request)

        # Handle the response
        print("_____________________ RESPONSE _____________________")
        print(response)
        return make_response(jsonify(status=1, result='success'), 200)
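For the credentials part specifically, a hedged sketch (not from the original post) of handing the service account file to the client explicitly, so the client no longer depends on the GOOGLE_APPLICATION_CREDENTIALS lookup inside the Flask process; the key file name is the placeholder used above:

from google.oauth2 import service_account
from google.cloud import dialogflowcx_v3

# Hedged sketch: load the key file directly and pass the credentials to the client.
creds = service_account.Credentials.from_service_account_file('abcd-chat-bot-e440cw2t611c.json')
client = dialogflowcx_v3.SessionsClient(credentials=creds)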

How do I access Databricks Repos metadata?

Is there a way to access data such as the repo URL and branch name inside a notebook within a Repo? Perhaps something in dbutils.
You can use the Repos API for that - specifically the Get command. You can extract the notebook path from the notebook context available via dbutils, and then do two queries:
Get the repo ID by path via the Workspace API (a repo path always consists of 3 components: /Repos, a directory (per-user or custom), and the actual repository name)
Fetch the repo data
Something like this:
import json
import requests

ctx = json.loads(
    dbutils.notebook.entry_point.getDbutils().notebook().getContext().toJson())
notebook_path = ctx['extraContext']['notebook_path']
repo_path = '/'.join(notebook_path.split('/')[:4])
api_url = ctx['extraContext']['api_url']
api_token = ctx['extraContext']['api_token']

repo_dir_data = requests.get(f"{api_url}/api/2.0/workspace/get-status",
                             headers={"Authorization": f"Bearer {api_token}"},
                             json={"path": repo_path}).json()
repo_id = repo_dir_data['object_id']

repo_data = requests.get(f"{api_url}/api/2.0/repos/{repo_id}",
                         headers={"Authorization": f"Bearer {api_token}"}
                         ).json()
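The repo URL and branch name can then be read from that second response (field names per the Repos API Get response), assuming the calls above succeeded:

# The Repos Get response contains the repository metadata, including url and branch
repo_url = repo_data.get('url')
repo_branch = repo_data.get('branch')
print(f"Repo URL: {repo_url}, branch: {repo_branch}")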

How to create Wiki Subpages in Azure Devops thru Python?

I have 4 pandas dataframes and need to create 4 subpages in my Azure DevOps wiki, one from each dataframe: Sub1 from the first dataframe, Sub2 from the second, and so on. Each subpage should render its dataframe as a table.
Is it possible to create subpages through the API?
I have referenced the following docs, but I am unable to make sense of them. Any inputs will be helpful. Thanks.
https://github.com/microsoft/azure-devops-python-samples/blob/main/API%20Samples.ipynb
https://learn.microsoft.com/en-us/rest/api/azure/devops/wiki/pages/create%20or%20update?view=azure-devops-rest-6.0
To create a wiki subpage, you should use the Pages - Create Or Update API and specify the path as pagename/subpagename. Regarding how to use the API in Python, you could use the Azure DevOps Python API and refer to the sample below:
def create_or_update_page(self, parameters, project, wiki_identifier, path, version, comment=None, version_descriptor=None):
    """CreateOrUpdatePage.
    [Preview API] Creates or edits a wiki page.
    :param :class:`<WikiPageCreateOrUpdateParameters> <azure.devops.v6_0.wiki.models.WikiPageCreateOrUpdateParameters>` parameters: Wiki create or update operation parameters.
    :param str project: Project ID or project name
    :param str wiki_identifier: Wiki ID or wiki name.
    :param str path: Wiki page path.
    :param String version: Version of the page on which the change is to be made. Mandatory for `Edit` scenario. To be populated in the If-Match header of the request.
    :param str comment: Comment to be associated with the page operation.
    :param :class:`<GitVersionDescriptor> <azure.devops.v6_0.wiki.models.GitVersionDescriptor>` version_descriptor: GitVersionDescriptor for the page. (Optional in case of ProjectWiki).
    :rtype: :class:`<WikiPageResponse> <azure.devops.v6_0.wiki.models.WikiPageResponse>`
    """
    route_values = {}
    if project is not None:
        route_values['project'] = self._serialize.url('project', project, 'str')
    if wiki_identifier is not None:
        route_values['wikiIdentifier'] = self._serialize.url('wiki_identifier', wiki_identifier, 'str')
    query_parameters = {}
    if path is not None:
        query_parameters['path'] = self._serialize.query('path', path, 'str')
    if comment is not None:
        query_parameters['comment'] = self._serialize.query('comment', comment, 'str')
    if version_descriptor is not None:
        if version_descriptor.version_type is not None:
            query_parameters['versionDescriptor.versionType'] = version_descriptor.version_type
        if version_descriptor.version is not None:
            query_parameters['versionDescriptor.version'] = version_descriptor.version
        if version_descriptor.version_options is not None:
            query_parameters['versionDescriptor.versionOptions'] = version_descriptor.version_options
    additional_headers = {}
    if version is not None:
        additional_headers['If-Match'] = version
    content = self._serialize.body(parameters, 'WikiPageCreateOrUpdateParameters')
    response = self._send(http_method='PUT',
                          location_id='25d3fbc7-fe3d-46cb-b5a5-0b6f79caf27b',
                          version='6.0-preview.1',
                          route_values=route_values,
                          query_parameters=query_parameters,
                          additional_headers=additional_headers,
                          content=content)
    response_object = models.WikiPageResponse()
    response_object.page = self._deserialize('WikiPage', response)
    response_object.eTag = response.headers.get('ETag')
    return response_object
For more details, you can refer to the link below:
https://github.com/microsoft/azure-devops-python-api/blob/451cade4c475482792cbe9e522c1fee32393139e/azure-devops/azure/devops/v6_0/wiki/wiki_client.py#L107
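For context, a minimal sketch of calling that method through the published client. This is an assumption-laden example: organization, project, wiki name, PAT and page path are placeholders, and the client-factory attribute can vary slightly between package versions.

from azure.devops.connection import Connection
from azure.devops.v6_0.wiki.models import WikiPageCreateOrUpdateParameters
from msrest.authentication import BasicAuthentication

# Hedged sketch: create the subpage "Sub1" under the existing page "Parent".
connection = Connection(base_url="https://dev.azure.com/myorg",    # placeholder organization
                        creds=BasicAuthentication('', 'my-pat'))   # personal access token
wiki_client = connection.clients_v6_0.get_wiki_client()

params = WikiPageCreateOrUpdateParameters(content="content of the subpage")
wiki_client.create_or_update_page(parameters=params,
                                  project="myproject",              # placeholder project
                                  wiki_identifier="myproject.wiki", # placeholder wiki name
                                  path="Parent/Sub1",               # pagename/subpagename
                                  version=None)                     # only required when editing an existing page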
I was able to achieve this with the REST API:
import requests
import base64
import pandas as pd

pat = 'TO BE FILLED BY YOU'  # CONFIDENTIAL
authorization = str(base64.b64encode(bytes(':'+pat, 'ascii')), 'ascii')

headers = {
    'Accept': 'application/json',
    'Authorization': 'Basic '+authorization
}

df = pd.read_csv('sf_metadata.csv')  # METADATA OF 3 TABLES
df.set_index('TABLE_NAME', inplace=True, drop=True)
df_test1 = df.loc['CURRENCY']
x1 = df_test1.to_html()  # CONVERTING TO HTML TO PRESERVE THE TABULAR STRUCTURE

# JSON FOR PUT REQUEST
SamplePage1 = {
    "content": x1
}

# API CALL TO AZURE DEVOPS WIKI
response = requests.put(
    url="https://dev.azure.com/xxx/yyy/_apis/wiki/wikis/yyy.wiki/pages?path=SamplePag2&api-version=6.0",
    headers=headers, json=SamplePage1)
print(response.text)
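To turn the created page into a subpage, the path query parameter should be the parent page name followed by the subpage name, as noted in the first answer. A hedged variant of the call above, with placeholder page names and assuming the parent page already exists:

# Hedged sketch: path=Parent/Sub1 creates Sub1 as a subpage of the existing "Parent" page
response = requests.put(
    url="https://dev.azure.com/xxx/yyy/_apis/wiki/wikis/yyy.wiki/pages?path=Parent/Sub1&api-version=6.0",
    headers=headers, json=SamplePage1)
print(response.text)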
Based on @usr_lal123's answer, here is a function that can update a wiki page or create it if it doesn't exist:
import requests
import base64

pat = ''  # Personal Access Token to be created by you
authorization = str(base64.b64encode(bytes(':'+pat, 'ascii')), 'ascii')

def update_or_create_wiki_page(organization, project, wikiIdentifier, path):
    # Check if the page exists by performing a Get
    headers = {
        'Accept': 'application/json',
        'Authorization': 'Basic '+authorization
    }
    response = requests.get(url=f"https://dev.azure.com/{organization}/{project}/_apis/wiki/wikis/{wikiIdentifier}/pages?path={path}&api-version=6.0", headers=headers)

    # An existing page returns an ETag in its response, which is required when updating the page
    version = ''
    if response.ok:
        version = response.headers['ETag']

    # Modify the headers
    headers['If-Match'] = version

    pageContent = {
        "content": "[[_TOC_]] \n ## Section 1 \n normal text"
                   + "\n ## Section 2 \n [ADO link](https://azure.microsoft.com/en-us/products/devops/)"
    }
    response = requests.put(
        url=f"https://dev.azure.com/{organization}/{project}/_apis/wiki/wikis/{wikiIdentifier}/pages?path={path}&api-version=6.0", headers=headers, json=pageContent)
    print("response.text: ", response.text)

How to trigger a dataflow with a cloud function? (Python SDK)

I have a cloud function that is triggered by Cloud Pub/Sub. I want the same function to trigger Dataflow using the Python SDK. Here is my code:
import base64

def hello_pubsub(event, context):
    if 'data' in event:
        message = base64.b64decode(event['data']).decode('utf-8')
    else:
        message = 'hello world!'
    print('Message of pubsub : {}'.format(message))
I deploy the function this way:
gcloud beta functions deploy hello_pubsub --runtime python37 --trigger-topic topic1
You have to embed your pipeline's Python code in your function. When your function is called, you simply call the pipeline's main function, which executes the pipeline defined in your file.
If you developed and tested your pipeline in Cloud Shell and you already ran it as a Dataflow pipeline, your code should have this structure:
def run(argv=None, save_main_session=True):
    # Parse arguments
    # Set options
    # Start the Pipeline in the p variable
    # Perform your transforms in the Pipeline
    # Run your Pipeline
    result = p.run()
    # Wait for the end of the pipeline
    result.wait_until_finish()
Then call this function with the correct arguments, especially runner=DataflowRunner, so that the Python code submits the pipeline to the Dataflow service, as in the sketch below.
Delete the final result.wait_until_finish(), because your function won't live for the whole duration of the Dataflow job.
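A minimal sketch of that call, assuming your pipeline module exposes a run(argv) function like the skeleton above; the module name, project id, region and bucket are placeholders:

# Hedged sketch: my_pipeline, the project id, region and bucket are placeholders.
from my_pipeline import run  # hypothetical module containing the run() skeleton above

def hello_pubsub(event, context):
    run(argv=[
        '--runner=DataflowRunner',             # submit to the Dataflow service instead of running locally
        '--project=my-gcp-project',            # placeholder project id
        '--region=europe-west1',               # placeholder region
        '--temp_location=gs://my-bucket/tmp',  # placeholder temp/staging bucket
    ])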
You can also use a template if you want.
You can use Cloud Dataflow templates to launch your job. You will need to code the following steps:
Retrieve credentials
Generate Dataflow service instance
Get GCP PROJECT_ID
Generate template body
Execute template
Here is an example using your base code (feel free to split it into multiple methods to reduce the amount of code inside the hello_pubsub method).
from googleapiclient.discovery import build
import base64
import google.auth
import os

def hello_pubsub(event, context):
    if 'data' in event:
        message = base64.b64decode(event['data']).decode('utf-8')
    else:
        message = 'hello world!'

    # Retrieve credentials and build the Dataflow service client
    credentials, _ = google.auth.default()
    service = build('dataflow', 'v1b3', credentials=credentials)

    gcp_project = os.environ["GCLOUD_PROJECT"]

    template_path = "gs://template_file_path_on_storage/"  # path of the template file on Cloud Storage
    template_body = {
        "parameters": {
            "keyA": "valueA",
            "keyB": "valueB",
        },
        "environment": {
            "envVariable": "value"
        }
    }
    request = service.projects().templates().launch(projectId=gcp_project, gcsPath=template_path, body=template_body)
    response = request.execute()
    print(response)
In the template_body variable, the parameters values are the arguments that will be sent to your pipeline, and the environment values are used by the Dataflow service (serviceAccount, workers and network configuration).
LaunchTemplateParameters documentation
RuntimeEnvironment documentation

HashiCorp Vault Python hvac read

I would like to read my secret from a pod with Python.
I tried this:
import os
import hvac
f = open('/var/run/secrets/kubernetes.io/serviceaccount/token')
jwt = f.read()
client = hvac.Client()
client = hvac.Client(url='https://vault.mydomain.internal')
client.auth_kubernetes("default", jwt)
print(client.read('secret/pippo/pluto'))
I'm sure that secret/pippo/pluto exists and that I'm properly authenticated, but I always receive None in answer to my print.
Where can I look to solve this?
Thanks a lot.
If you read a KV value from Vault, you need the mount point and the path.
Example:
vault_client.secrets.kv.v1.read_secret(
    path=path,
    mount_point=mount_point
)
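If the engine mounted at secret/ is KV version 2 (the default in recent Vault versions), the data sits under a data/ prefix and hvac exposes a dedicated helper; a hedged sketch using the questioner's path:

# Hedged sketch for a KV v2 mount: the v2 helper adds the data/ prefix for you.
secret = vault_client.secrets.kv.v2.read_secret_version(
    path='pippo/pluto',        # path inside the mount, without the mount name
    mount_point='secret'
)
print(secret['data']['data'])  # for KV v2 the key/value pairs live under data.data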
I've tried the method you provided in my k8s Python 3 pod, and I can get the Vault secret data successfully.
You need to specify the correct Vault token parameter in your hvac.Client and disable the client.auth_kubernetes method.
Give it a shot, and remember your code should run in the k8s Python container instead of on your host machine.
import hvac
f = open('/var/run/secrets/kubernetes.io/serviceaccount/token')
jwt = f.read()
print("jwt:", jwt)
f.close()
client = hvac.Client(url='http://vault:8200', token='your_vault_token')
# res = client.auth_kubernetes("envelope-creator", jwt)
res = client.is_authenticated()
print("res:", res)
hvac_secrets_data_k8s = client.read('secret/data/compliance')
print("hvac_secrets_data_k8s:", hvac_secrets_data_k8s)
Below is the result:
92:qfedu shawn$ docker exec -it 202a119367a4 bash
airflow@airflow-858d8c6fcf-bgmwn:~$ ls
airflow-webserver.pid airflow.cfg config dags logs test_valut_in_webserver.py unittests.cfg webserver_config.py
airflow@airflow-858d8c6fcf-bgmwn:~$ python test_valut_in_webserver.py
jwt: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia
res: True
hvac_secrets_data_k8s: {'request_id': '80caf0cb-8c12-12d2-6517-530eecebd1e0', 'lease_id': '', 'renewable': False, 'lease_duration': 0, 'data': {'data': {'s3AccessKey': 'XXXX', 's3AccessKeyId': 'XXXX', 'sftpPassword': 'XXXX', 'sftpUser': 'XXXX'}, 'metadata': {'created_time': '2020-02-07T14:04:26.7986128Z', 'deletion_time': '', 'destroyed': False, 'version': 4}}, 'wrap_info': None, 'warnings': None, 'auth': None}
As @shawn mentioned above, the commands below work for me as well:
import hvac
vault_url = 'https://<vault url>:8200/'
vault_token = '<vault token>'
ca_path = '/run/secrets/kubernetes.io/serviceaccount/ca.crt'
secret_path = '<secret path in vault>'
client = hvac.Client(url=vault_url, token=vault_token, verify=ca_path)
client.is_authenticated()
read_secret_result = client.read(secret_path)
print(read_secret_result)
print(read_secret_result['data']['username'])
print(read_secret_result['data']['password'])
Note: ca_path is where the pod stores k8s CA and usually it should be found under "/run/secrets/kubernetes.io/serviceaccount/ca.crt"
I found it easier to use hvac for authentication, and then use the API directly.
You can skip this step and use a root/dev token for testing:
import hvac as h
import getpass

client = h.Client(url='https://<vault url>:8200/')
username = input("username")
password = getpass.getpass()
# log in with the userpass auth method so the client obtains a token
# (the method name may vary slightly between hvac versions)
client.auth.userpass.login(username=username, password=password)
print(client.token)
del username, password
Get the list of mounts
import requests,json
vault_url = 'https://<vault url>:8200/'
vault_token = '<vault token>'
headers = {
    'X-Vault-Token': vault_token
}
response = requests.get(vault_url+'v1/sys/mounts', headers=headers)
json.loads(response.text).keys()  # the ones ending with / are your mount names
Then get the password (you have to create one first):
mount = '<mount name>'
secret = '<secret name>'
response = requests.get(vault_url+'v1/'+mount+'/'+secret, headers=headers)
response.text
For the username/password login to get access to a secret created by root, you have to add the path in the JSON under Policies, i.e. grant the user's policy access to that secret path.
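A hedged sketch of granting that access through hvac (the policy name and path are placeholders; drop the data/ segment for a KV v1 mount):

import hvac

# Hedged sketch: write a policy that allows reading the secret path, using an admin/root token.
client = hvac.Client(url='https://<vault url>:8200/', token='<admin or root token>')
policy = '''
path "secret/data/pippo/pluto" {
  capabilities = ["read"]
}
'''
client.sys.create_or_update_policy(name='read-pippo', policy=policy)  # placeholder policy name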
