Run Estimator in Child run - azure-machine-learning-service

Can't create a child run of an Estimator (Estimator, TensorFlow and PyTorch).
I tried to use the submit_child method:
run = experiment.start_logging()
estimator = TensorFlow(source_directory='.',
                       compute_target=cpu_cluster,
                       entry_script='keras.py',
                       pip_packages=["keras"],
                       max_run_duration_seconds=1200,
                       )
run.submit_child(estimator)
TrainingException: TrainingException:
Message: ['_parent_run_id'] parameters cannot be overridden. Allowed parameters are: script_params, inputs and source_directory_data_store.
InnerException None
ErrorResponse
{
  "error": {
    "message": "['_parent_run_id'] parameters cannot be overridden. Allowed parameters are: script_params, inputs and source_directory_data_store."
  }
}
Any workaround or best practice for this scenario?

In the submit_child method, an Estimator object is being passed where a run configuration object is expected for the config parameter. For details on how to configure a run, see the supported configuration types:
ScriptRunConfig
AutoMLConfig
Pipeline
PublishedPipeline
PipelineEndpoint
Please follow the link below to submit a child run using ScriptRunConfig:
https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py#submit-child-config--tags-none----kwargs-
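A minimal sketch of that approach, assuming the azureml-core SDK v1 and reusing the cpu_cluster and keras.py names from the question (the experiment name is hypothetical, the exact ScriptRunConfig arguments depend on your SDK version, and dependencies such as keras would be supplied through an Environment rather than pip_packages):

from azureml.core import Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()
experiment = Experiment(workspace=ws, name='child-run-demo')

# Parent run created by start_logging(), as in the question
parent_run = experiment.start_logging()

# Describe the training script with a ScriptRunConfig instead of an Estimator
src = ScriptRunConfig(source_directory='.',
                      script='keras.py',
                      compute_target=cpu_cluster)

# submit_child accepts a run configuration object, so this avoids the
# '_parent_run_id' TrainingException raised when an Estimator is passed
child_run = parent_run.submit_child(src)
child_run.wait_for_completion(show_output=True)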

Related

Puppet assign class parameters in multiple places

I'm learning Puppet (v6) and trying to understand how to set class parameters when a specific node needs an additional parameter but uses the same class. I may be a little fuzzy on the terminology, but here's what I'm working on:
MyNode1 needs sshd configured to use a banner and timeout, so using ghoneycutt-ssh, I include the ssh class with parameters:
/modules/MyModule/manifests/MySSH.pp
# Configures SSH
class MyModule::MySSH {
  # Using ssh module
  class { '::ssh':
    sshd_client_alive_count_max => 0,
    sshd_client_alive_interval  => 900,
    sshd_config_banner          => '/etc/MyBanner.txt',
  }
}
Now I have a second node MyNode2, which requires MySSH above, and also needs to disable forwarding. I started with something like this, where I define only the additional parameter in its own class:
/modules/MyModule/manifests/MySSH_Node2.pp
class MyModule::MySSH_Node2 {
  class { '::ssh':
    sshd_allow_tcp_forwarding => 'no',
  }
}
Then I define MyNode2 to include both in my site definition, hoping that Puppet merges my ::ssh declarations:
/manifests/site.pp
node MyNode1 {
  include MyModule::MySSH
}
node MyNode2 {
  include MyModule::MySSH
  include MyModule::MySSH_Node2
}
I understand that the above example doesn't work, due to the error Duplicate declaration: Class[Ssh]. I also tried overriding the class with a new parameter:
class MyModule::MySSH_Node2 {
  Class[ssh] {
    sshd_allow_tcp_forwarding => 'no',
  }
}
But it seems this is not allowed either: Error: Resource Override can only operate on resources, got: Class[ssh]-Type
I'm not sure what the best way to add parameters is. I know I can create a manifest that includes all the parameters needed for this node and apply that instead, but then I end up with duplicate code everywhere.
Is there a reasonable way in modern Puppet to assign and merge class parameters like this?

TF Keras Model Serving REST API JSON Input Format

So I tried following this guide and deployed the model using the Docker TensorFlow Serving image. Let's say there are 4 features: feat1, feat2, feat3 and feat4. I tried to hit the prediction endpoint {url}/predict with this JSON body:
{
  "instances": [
    {
      "feat1": 26,
      "feat2": 16,
      "feat3": 20.2,
      "feat4": 48.8
    }
  ]
}
I got a 400 response code:
{
  "error": "Failed to process element: 0 key: feat1 of 'instances' list. Error: Invalid argument: JSON object: does not have named input: feat"
}
This is the signature passed to model.save():
signatures = {
    'serving_default':
        _get_serve_tf_examples_fn(model,
                                  tf_transform_output).get_concrete_function(
                                      tf.TensorSpec(
                                          shape=[None],
                                          dtype=tf.string,
                                          name='examples')),
}
I understand from this signature that in every instances element the only accepted field is "examples", but when I tried to pass only that field with an empty string:
{
  "instances": [
    {
      "examples": ""
    }
  ]
}
I also got a bad request: {"error": "Name: <unknown>, Feature: feat1 (data type: int64) is required but could not be found.\n\t [[{{node ParseExample/ParseExampleV2}}]]"}
I couldn't find in the guide how to build the JSON request body the right way; it would be really helpful if anyone could point this out or give references on this matter.
In that example, the serving function expects a serialized tf.train.Example proto as input. This page explains how binary data can be passed to a deployed model as a string (explaining why the signature expects a tensor of strings). So what you need to do is build an Example proto containing your features and send that over. It could look something like this:
import base64
import tensorflow as tf

features = {'feat1': 26, 'feat2': 16, 'feat3': 20.2, 'feat4': 48.8}

# Create an Example proto from your feature dict.
feature_spec = {
    k: tf.train.Feature(float_list=tf.train.FloatList(value=[float(v)]))
    for k, v in features.items()
}
example = tf.train.Example(
    features=tf.train.Features(feature=feature_spec)).SerializeToString()

# Encode your serialized Example using base64 so it can be added into your
# JSON payload.
b64_example = base64.b64encode(example).decode()
result = [{'examples': {'b64': b64_example}}]
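As a hedged usage sketch, assuming TensorFlow Serving's default REST path and a hypothetical model name my_model (the question does not give the host, port or model name), the payload could then be posted with the requests library:

import json
import requests

# TF Serving's REST predict endpoint is /v1/models/<model_name>:predict;
# adjust the host, port and model name to match your deployment.
url = 'http://localhost:8501/v1/models/my_model:predict'
payload = {'instances': result}

response = requests.post(url, data=json.dumps(payload))
print(response.json())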
What is the output of saved_model_cli show --dir /path/to/model --all? You should follow the output to serialize your request.
I tried to solve this problem by changing the serving input signature, but it raised another exception. This problem is already solved; check it out here.

Graphql Dataloader File Structure and Context

Let me preface this by saying I am not a JavaScript developer, so I'm probably missing something very obvious. I'm a data warehouse developer, and creating a GraphQL server that can communicate with our DW got dropped in my lap.
I've been trying to get dataloaders to work on my graphql server by using a single object in the context, containing multiple dataloaders. I'm then trying to call the appropriate dataloader in the resolver. However, I've been unable to get this to work correctly. The consolidated dataloader object only works if I individually reference the dataloaders in the server context.
I'm trying to follow a similar pattern with the loaders as I have with my models: each is broken out into a separate file, then consolidated into a single object via recursion through the file structure.
For example, I have an object called loaders which contains two loaders, countryLoader and marketsectorLoader, each defined in a separate file under the "loaders" directory. In my server context, the following works:
import * as loaders from "./loaders"
graphQLServer.use('/graphql', bodyParser.json(),
  graphqlExpress({
    schema,
    context: {
      countryLoader: loaders.countryLoader()
    }
  })
)
I can then call this in my resolver:
StateProvince: {
  Country: (parent, args, { countryLoader }) => {
    countryLoader.load(parent.Country_fkey)
  }
},
This functions correctly, batching and returning the correct query result, but I'd prefer not to have to declare each specific dataloader from the loaders object as part of the context. However, I've been unable to figure out the syntax to use the loaders object in the context and call the appropriate individual dataloader in the appropriate resolver.
I've tried several variants of the following example:
https://github.com/relay-tools/react-relay-network-layer/blob/master/examples/dataLoaderPerBatchRequest.js
which seems to be using the type of technique I'm trying to leverage:
// context snippet:
context: {
  request: req, // just for example, pass request to context
  dataLoaders: initDataLoaders(),
},
However, no luck. I suspect the issue is with my resolver syntax, but I'm not sure, and I haven't been able to find working examples with multiple dataloaders.
If I'm reading your code correctly, importing your loaders using a wildcard import like this:
import * as loaders from './loaders'
results in an object wherein each property is a function that creates an instance of a particular DataLoader. So we just need to iterate through each property. For example:
// Using forEach
const dataLoaders = {}
Object.keys(loaders).forEach(loaderName => {
  dataLoaders[loaderName] = loaders[loaderName]()
})

// Or using reduce
const dataLoaders = Object.keys(loaders).reduce((result, loaderName) => {
  result[loaderName] = loaders[loaderName]()
  return result
}, {})
Using lodash, you can also just do something like:
const dataLoaders = _.mapValues(loaders, loader => loader())

Can Verigy test methods' params, aliases and methods be defined at different times?

I am importing all of my Verigy 93k test methods' parameters based on an ASCII file I received from Verigy. At the time of import, the test method attribute aliases and methods won't be known. Can they be created statically at a later time, by various developers? The code below is just a snippet of the test method param hash I am trying to auto-create.
thx
add_tml :my93k,
  class_name: 'my93k',
  Functional: {
    class_name: 'Functional',
    'ErrorMap.DutCyclesPerTesterCycles' => [:string, '1'],
    'ErrorMap.EdgesPerTesterCycle' => [:string, '4'],
    'ErrorMap.Location' => [:string, 'RAM'],
    # Attribute aliases can be defined like this:
    aliases: {
    },
    # Define any methods you want the test method to have
    methods: {
    }
  },
  my_other_test: {
    # Define another test in exactly the same way...
  }
end
There is no way to do that today, but I don't think it would be hard to add that capability if you want.
From your example above, test_methods.my93k.Functional will return an instance of OrigenTesters::SmartestBasedTester::Base::TestMethod, which is defined here: https://github.com/Origen-SDK/origen_testers/blob/5b89bf287b3d307bd6708c878666f3609a5fd3af/lib/origen_testers/smartest_based_tester/base/test_method.rb
The contents of the hash assigned to :Functional above are passed in as the initialization options when the TestMethod instance is instantiated.
If you look at the implementation of the initialize method, you will see where it defines the aliases and methods.
You could expose that same functionality via some new methods to provide an API for developers to add additional aliases and methods later in the process. e.g. test_methods.my93k.Functional.add_alias(:blah).

Overriding parameters in Puppet modules

I want to override parameters of base nodes. What I want to get is a pattern like this:
# File manifests/nodes.pp
node myDefault {
  class { 'my::common::puppet_setup':
    service  => 'enable',
    pushable => 'disable',
  }
  # Do lots of default things ...
}

node 'myFirstNode' inherits myDefault {
  # Do something ...
}

node 'mySecondNode' inherits myDefault {
  class { 'my::common::puppet_setup::params':
    service  => 'disable',
    pushable => 'enable',
  }
}
I understood from the Puppet documentation that I could do this by writing my module like this:
# File modules/my/manifests/common/puppet_setup.pp
class my::common::puppet_setup (
  $pushable = $my::common::puppet_setup::params::pushable,
  $service  = $my::common::puppet_setup::params::service,
) inherits my::common::puppet_setup::params {
  # package that configures puppet node
  # input value validation
  validate_re($pushable, ['^enable$', '^disable$', '^ignore$', ])
  validate_re($service, ['^enable$', '^disable$', '^ignore$', '^cron$', ])
  # setup puppet, start or disable agent, put ssh keys for push ...
}

class my::common::puppet_setup::params {
  $pushable = 'enable'
  $service = 'enable'
  $puppetserver = 'puppet.my.site.de'
  case $::osfamily {
    'Debian': {
    }
    default: {
      fail("not implemented yet for ${::operatingsystem}")
    }
  }
}
The documentation on the Puppet website says:
When a derived class is declared, its base class is automatically declared first (if it wasn't already declared elsewhere).
But I get this error (some indentation added):
mySecondNode# puppet agent --test --environment dev_my
Error: Could not retrieve catalog from remote server:
Error 400 on SERVER: Duplicate declaration:
Class[My::Common::Puppet_setup::Params] is already declared;
cannot redeclare at /.../puppet/manifests/nodes.pp:16 on node mySecondNode
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
I've been reading on this for a week and I guess my understanding is totally wrong somewhere, although I used the puppetlabs ntp module as an example.
What am I missing?
You should check the Inheritance section of http://docs.puppetlabs.com/puppet/latest/reference/lang_node_definitions.html
Puppet treats node definitions like classes. It does not mash the two together and then compile the mix; instead, it compiles the base class, then compiles the derived class, which gets a parent scope and special permission to modify resource attributes from the base class.
One good solution is to use roles and profiles; there's a great blog post about it:
http://garylarizza.com/blog/2014/02/17/puppet-workflow-part-2/
You can use virtual resources:
http://docs.puppetlabs.com/guides/virtual_resources.html
