How to read settings from a different block in a kinto.ini file - python-3.x

I created a separate block in my kinto.ini file and I want to use those settings in my program.
#kinto.ini
[mysetting]
name = json
username = jsonmellow
password = *********
[app:main]
use = egg:kinto
kinto.storage_url = postgre//
If I use Kinto's 'config.get_settings' function, it gives me the settings of the default "app:main" block only. So how can I get the other settings from the "mysetting" block?

You can use a prefix for your settings, like:
[app:main]
...
mysetting.name = json
mysetting.username = jsonmellow
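Settings declared with a prefix like this land in the ordinary settings dictionary under their prefixed keys; a minimal sketch, assuming a Pyramid Configurator named config as in the question:
settings = config.get_settings()
name = settings['mysetting.name']          # 'json'
username = settings['mysetting.username']  # 'jsonmellow'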
But if you still need a separate section in the ini file, see: How can I access a custom section in a Pyramid .ini file?
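If you do keep a separate [mysetting] section, the standard-library configparser can read it directly, independently of Kinto's settings machinery; a minimal sketch, assuming the kinto.ini layout from the question:
from configparser import ConfigParser

# Parse the ini file and pull values from the custom [mysetting] section.
parser = ConfigParser()
parser.read('kinto.ini')

name = parser.get('mysetting', 'name')          # 'json'
username = parser.get('mysetting', 'username')  # 'jsonmellow'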

How to programmatically retrieve the workspace URL and clusterOwnerUserId?

I would like to programmatically create the URL to download a file.
To do this I need the workspaceUrl and clusterOwnerUserId.
How can I retrieve those in a Databricks notebook?
# how to get the `workspaceUrl` and `clusterOwnerUserId`?
tmp_file = '/tmp/output_abcd.xlsx'
filestore_file = '/FileStore/output_abcd.xlsx'
# code to create file omitted for brevity ...
dbutils.fs.cp(f'file:{tmp_file}', filestore_file)
downloadUrl = f'https://{workspaceUrl}/files/output_abcd.xlsx?o={clusterOwnerUserId}'
displayHTML(f"<a href='{downloadUrl}'>download</a>")
The variables are available in the Spark conf, e.g.:
clusterOwnerUserId = spark.conf.get('spark.databricks.clusterUsageTags.orgId')
workspaceUrl = spark.conf.get('spark.databricks.workspaceUrl')
You can then use the details as follows:
tmp_file = '/tmp/output_abcd.xlsx'
filestore_file = '/FileStore/output_abcd.xlsx'
# code to create file omitted for brevity ...
dbutils.fs.cp(f'file:{tmp_file}', filestore_file)
downloadUrl = f'https://{workspaceUrl}/files/output_abcd.xlsx?o={clusterOwnerUserId}'
displayHTML(f"<a href='{downloadUrl}'>download</a>")
Files in the Databricks FileStore at /FileStore/my-stuff/my-file.txt are accessible at:
"https://databricks-instance-name.cloud.databricks.com/files/my-stuff/my-file.txt"
I don't think you need the o=... part. That is the workspace ID, by the way, not the cluster owner's user ID.
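Putting that together, the download link from the question can be built without the query parameter; a minimal sketch for a Databricks notebook (where spark and displayHTML are predefined), reusing the file name from the question:
workspaceUrl = spark.conf.get('spark.databricks.workspaceUrl')

# /files/ maps to /FileStore/, so the o= workspace ID can be dropped.
downloadUrl = f'https://{workspaceUrl}/files/output_abcd.xlsx'
displayHTML(f"<a href='{downloadUrl}'>download</a>")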

Get auto-generated OutputFileDatasetConfig destination

From the OutputFileDatasetConfig documentation for the destination class member,
If set to None, we will copy the output to the workspaceblobstore datastore, under the path /dataset/{run-id}/{output-name}
Given a handle to such an OutputFileDatasetConfig with destination set to None, how can I get the generated destination without recomputing the default myself, since the default may be subject to change?
If you do not want to pass a name and path, then the run details should provide the run ID, and the path can be created from it. Ideally you would pass these details explicitly; if they are not passed, the recommended approach is to consume the output in an intermediate step so the SDK can handle it for you, as in this PythonScriptStep:
from azureml.core import Dataset
from azureml.data import OutputFileDatasetConfig
from azureml.pipeline.steps import PythonScriptStep

# Destination is left as None, so the SDK picks the default path.
dataprep_output = OutputFileDatasetConfig()
input_dataset = Dataset.get_by_name(workspace, 'raw_data')

dataprep_step = PythonScriptStep(
    name="prep_data",
    script_name="dataprep.py",
    compute_target=cluster,
    arguments=[input_dataset.as_named_input('raw_data').as_mount(), dataprep_output]
)
from azureml.core import ScriptRunConfig

output = OutputFileDatasetConfig()
src = ScriptRunConfig(source_directory=path,
                      script='script.py',
                      compute_target=ct,
                      environment=env,
                      arguments=["--output_path", output])
run = exp.submit(src, tags=tags)

############### INSIDE script.py
import argparse, os

parser = argparse.ArgumentParser()
parser.add_argument("--output_path")
args = parser.parse_args()

# The parent of the mounted output path is the auto-generated destination.
mount_point = os.path.dirname(args.output_path)
os.makedirs(mount_point, exist_ok=True)
print("mount_point : " + mount_point)
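For reference, the default destination quoted from the documentation can be written out concretely. This is only a sketch of the documented pattern (which, as the question notes, may change); run is an azureml.core.Run handle and output_name is a hypothetical stand-in for the output's registered name:
# Documented default: /dataset/{run-id}/{output-name} on the workspaceblobstore datastore.
default_destination = f"/dataset/{run.id}/{output_name}"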

How do I escape true/false in Terraform?

I need to pass the word true or false to a data template file in Terraform. However, if I provide the value directly, it comes out as 0 or 1 due to interpolation syntax. I tried \\true\\ as recommended in https://www.terraform.io/docs/configuration/interpolation.html, but that results in \true\, which obviously isn't right; the same happens with \\false\\ becoming \false\.
To complicate matters, I also have a scenario where I need to pass it the value of a variable, which can either equal true or false.
Any ideas?
# control whether to enable REST API and set other port defaults
data "template_file" "master_spark_defaults" {
template = "${file("${path.module}/templates/spark/spark- defaults.conf")}"
vars = {
spark_server_port = "${var.application_port}"
spark_driver_port = "${var.spark_driver_port}"
rest_port = "${var.spark_master_rest_port}"
history_server_port = "${var.history_server_port}"
enable_rest = "${var.spark_master_enable_rest}"
}
}
var.spark_master_enable_rest can be either true or false. I tried setting the variable as "\\${var.spark_master_enable_rest}\\", but again this resulted in either \true\ or \false\.
Edit 1:
Here is the relevant portion of conf file in question:
spark.ui.port ${spark_server_port}
# set to default worker random number.
spark.driver.port ${spark_driver_port}
spark.history.fs.logDirectory /var/log/spark
spark.history.ui.port ${history_server_port}
spark.worker.cleanup.enabled true
spark.worker.cleanup.appDataTtl 86400
spark.master.rest.enabled ${enable_rest}
spark.master.rest.port ${rest_port}
I think you may be overthinking this. If I set my variable value as a string:
spark_master_enable_rest = "true"
then I get:
spark.worker.cleanup.enabled true
spark.worker.cleanup.appDataTtl 86400
spark.master.rest.enabled true
in the rendered result when I apply.
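The point is that the value reaches the template as a string rather than a bare boolean, so interpolation passes the text through untouched. A minimal sketch of the matching declaration, assuming the Terraform 0.11-era syntax the question is using:
variable "spark_master_enable_rest" {
  type    = "string"
  default = "true"
}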
I ended up creating a cloud-config script to find/replace the 0/1 in the file:
part {
  content_type = "text/x-shellscript"
  content      = <<SCRIPT
#!/bin/sh
sed -i.bak -e '/spark.master.rest.enabled/s/0/false/' -e '/spark.master.rest.enabled/s/1/true/' /opt/spark/conf/spark-defaults.conf
SCRIPT
}

Setting the current test insertion within the DUT model

We have evolved our Origen usage such that we have a params file and a flow file for each test module (scan, mbist, etc.). We are now at the point where we need to take the test insertion into account when handling the DUT model and the test flow generation. I can see here that using a job flag is the preferred method for specifying test-insertion specifics in the flow file, and this video shows how to specify a test insertion when simulating the test flow. My question is: how can a test insertion be specified when not generating a flow, only loading params files into the DUT model? Take this parameter set that defines some test conditions for a scan/ATPG test module.
scan.define_params :test_flows do |p|
  p.flows.ws1.chain = [:vmin, :vmax]
  p.flows.ft1.chain = [:vmin, :vmax]
  p.flows.ws1.logic = [:vmin, :vmax]
  p.flows.ft1.logic = [:vmin]
  p.flows.ws1.delay = [:pmax]
  p.flows.ft1.delay = [:pmin]
end
You can see in the parameter-set hierarchy that there are two test insertions defined: 'ws1' and 'ft1'. Am I right to assume that the --job option only sets a flag somewhere when used with the origen testers:run command? Or can this option be applied to origen i, such that just loading some parameter sets will have access to the selected job?
Thanks
There's no built-in way to do what you want here, but given that you are using parameters in this example, the way I would do it is to align your parameter contexts with the job names:
scan.define_params :ws1 do |p|
  p.flows.chain = [:vmin, :vmax]
  p.flows.logic = [:vmin, :vmax]
  p.flows.delay = [:pmax]
end

scan.define_params :ft1 do |p|
  p.flows.chain = [:vmin, :vmax]
  p.flows.logic = [:vmin]
  p.flows.delay = [:pmin]
end
There are various ways to actually set the current context; one way would be to have a target setup per job:
# target/ws1.rb
MyDUT.new
dut.params = :ws1

# target/ft1.rb
MyDUT.new
dut.params = :ft1
This assumes that the scan object is configured to track the parameter context of the top-level DUT: http://origen-sdk.org/origen//guides/models/parameters/#Tracking_the_Context_of_Another_Object

SoapUI + Groovy: getting test data from 3 different environments

In SoapUI, we have 3 different environments and 3 different test-data property files.
So my questions are:
How to set 3 different endpoints in SoapUI?
How to get test data as per the environment using Groovy?
I'll try to answer your questions.
1. How to set 3 different endpoints in SoapUI.
Set your test steps' URL with a property, like:
http://${#Project#endpoint}
Then add the endpoint property to your test data file.
2. How to get test data per environment using Groovy.
If you have a typical property file with key=value pairs, you can use the code shown below:
// Read the property file for the current environment.
def properties = new java.util.Properties()
properties.load(new java.io.FileInputStream("/tmp/sample.properties"))

def proj = testRunner.testCase.testSuite.project
def names = properties.propertyNames()

// Copy every property from the file into the project-level properties.
while (names.hasMoreElements()) {
    def name = names.nextElement()
    log.info name + " " + properties.getProperty(name)
    proj.setPropertyValue(name, properties.getProperty(name))
}
This saves all the properties at the project level; if you prefer to save them at the testCase or testSuite level, use testRunner.testCase or testRunner.testCase.testSuite instead of testRunner.testCase.testSuite.project.
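Once loaded this way, any property from the file (for example, endpoint) can then be referenced in a test step as ${#Project#endpoint}, which is exactly how the endpoint expansion from part 1 gets resolved per environment.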
Hope this helps,
