Logging and Fetching Run Parameters in AzureML

I am able to log and fetch metrics to AzureML using Run.log; however, I also need a way to log run parameters, like learning rate or momentum. I can't seem to find anything in the AzureML Python SDK documentation to achieve this. However, if I use MLflow's mlflow.log_param, I am able to log parameters, and they even show up nicely on the AzureML Studio dashboard (bottom right of the image).
Again, I am able to fetch this using MLflow's get_params() function, but I can't find a way to do this using just AzureML's Python SDK. Is there a way to do this directly using azureml?

Retrieving logged run parameters like learning rate or momentum is not possible with AzureML alone; it is tied to MLflow and azureml-core, and without those two packages the run parameters cannot be retrieved.
pip install azureml-core mlflow azureml-mlflow
You need to install these three packages to retrieve the run parameters. Link
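For reference, a minimal sketch of that MLflow-backed approach, assuming a workspace config.json is available locally; the experiment name and the parameter names are only illustrative:

import mlflow
from azureml.core import Workspace
from mlflow.tracking import MlflowClient

ws = Workspace.from_config()                             # load the AzureML workspace
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())    # route MLflow tracking to AzureML
mlflow.set_experiment("my-experiment")                   # hypothetical experiment name

with mlflow.start_run() as run:
    mlflow.log_param("learning_rate", 0.01)              # appears under Parameters in Studio
    mlflow.log_param("momentum", 0.9)
    mlflow.log_metric("loss", 0.42)

# Fetch the logged parameters back through the MLflow client
client = MlflowClient()
params = client.get_run(run.info.run_id).data.params
print(params)                                            # {'learning_rate': '0.01', 'momentum': '0.9'}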

Related

Is it possible to stream Cloud Build logs with the Node.js library?

Some context: Our Cloud Build process relies on manual triggers and about 8 substitutions to customize deploys to various firebase projects, hosting sites, and preview channels. Previously we used a bash script and gcloud to automate the selection of these substitution options, the "updating" of the trigger (via gcloud beta builds triggers import: our needs require us to use a single trigger, it's a long story), and the "running" of the trigger.
This bash script was hard to work with and improve, and through the import-run shenanigans actually led to some faulty deploys that caused all kinds of chaos: not great.
However, recently I found a way to pass substitution variables as part of a manual trigger operation using the Node.js library for Cloud Build (runTrigger with subs passed as part of the request)!
Problem: So I'm converting our build utility to Node, which is great, but as far as I can tell there isn't a native way to stream build logs from a running build to the console (except maybe with exec, but that feels hacky).
Am I missing something? Or should I be looking at one of the logging libraries?
I've tried my best scanning Google's docs and APIs (Cloud Build REST, the Node client library, etc.) but to no avail.

What is the cause of LIBRARY_MANAGEMENT_FAILED while trying to run a notebook with a custom library on Synapse?

Today, when we tried running our notebooks defined in Synapse, we constantly received the error 'LIBRARY_MANAGEMENT_FAILED'. We are using the approach from https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-manage-python-packages#storage-account to manage custom libraries, and it was working fine up until this point. Additionally, we tried a separate method of providing the Spark pool with the custom library via workspace packages, but after 10 minutes of loading the custom package it times out with a failure.
When we remove the python folder completely from storage, the Spark pools run notebooks normally.
Yesterday everything was working properly. The problem also cannot be in the custom library itself, because it fails even with an empty python folder.
There were issues on Microsoft's side; these were resolved and it started working again the next day.

Can you link a Python script to Android Studio?

My aim is to create a recommendation app for Android which communicates with a script on a server to perform the machine learning parts and to generate recommendations for the current user. My original idea was to use TensorFlow, but I am wondering if I could also write a Python script which can be called as a REST API? Would the data be best passed in JSON format?
You can create a Python Flask API and send a POST request to it. From that, you can use the data passed in to run your ML parts on that data. However, you will need to find a way to run those ML parts after you pass your data to your Flask API. To get started, I would watch a YouTube tutorial on how to make and use a Flask API. It is not terribly difficult to get up and running.
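For illustration, a minimal sketch of such a Flask API; the /recommend route, the user_id field, and get_recommendations() are hypothetical placeholders for your own recommendation logic:

from flask import Flask, request, jsonify

app = Flask(__name__)

def get_recommendations(user_id):
    # placeholder for the actual ML / recommendation code
    return ["item_1", "item_2"]

@app.route("/recommend", methods=["POST"])
def recommend():
    data = request.get_json()                  # JSON body sent by the Android app
    recs = get_recommendations(data["user_id"])
    return jsonify({"recommendations": recs})  # JSON response back to the app

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

The Android side can then issue a plain HTTP POST with a JSON body (for example via Retrofit or OkHttp) and parse the JSON response.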

Post Service call with Multiple records

I would like to know how to post multiple records to SAP using "BatchRequestBuilder" along with a ChangeSet. I am using a custom OData service call (ODataCreateRequestBuilder), not the VDM model. I didn't find any blog or documentation to start with.
Can you please help me in this regard.
Updated:
Below is what I am trying to post to SAP
[{"purchaseSchAgrmntNo":"","customerMaterialNumber":"","plant":"","vendorNo":""},{"purchaseSchAgrmntNo":"","customerMaterialNumber":"","plant":"","vendorNo":""}]
SAP SDK version : 3.9.0
I have added the below code with only one create request.
ChangeSet changeSet = new ChangeSetBuilder()
        .addCreateRequest(ODataCreateRequestBuilder
                .withEntity(sapConfig.getServiceUrlRepriceList(), sapConfig.getEntityRepriceList())
                .withBodyAsMap(responseBody)
                .build())
        .build();
BatchResult batchResult = BatchRequestBuilder.withService("URL?")
        .addChangeSet(changeSet)
        .build()
        .execute(httpClient);
Can you let me know if this is correct? Also, let me know what I have to pass to withService. Is it the service URL?
Thanks,
Arun Pai
The BatchRequestBuilder is actually not directly part of the SAP Cloud SDK but a dependency that the SDK internally uses to execute batch requests. That is why on the SDK level there is no documentation on how to use it.
Roughly, a batch request consists of multiple change sets, which in turn group together multiple operations. The ChangeSetBuilder allows you to build up change sets which you can then pass to a BatchRequestBuilder.
So if you want to run create requests in batch mode you would want to leverage public ChangeSetBuilder addCreateRequest(ODataCreateRequest oDataCreateRequest).
You can take a look at how the SAP Cloud SDK uses these classes to build up batch requests to get an idea of how it works in detail. As a starting point, look towards BatchFluentHelperBasic. However, unless the service you want to query is unknown at compile time, I recommend that you leverage the generator to generate this code so that you can use the VDM instead, which simplifies this.
If you extend your question to hold more specific information on what you actually want to achieve I can expand my answer to give a more concrete example. Also please include the SDK version you are using.

Fetch Google Analytics API with Python and Google2Pandas

My plan is to fetch data from the GA API with Python 3 and google2pandas.
My problem so far is that I don't know where to start. When I look at the google2pandas README it looks easy, but I have trouble building my own script from it and implementing the OAuth2 stuff.
What is the right way to start with these boilerplates?
All those functions are a bit confusing to me.
What do I really need in order to use the Analytics v4 API and fetch some simple stuff for my dashboard? Which parameters do I have to set, and how or where in the file should I do that? Another question: do I have to use those functions in a new Python file, or can I start with the _panalysis_ga.py?
It would be really helpful if you can guide me here or at least steer me in the right direction with some example.
The link to the repository kind of has the answer, but I appreciate it's not always clear if you've never seen it before. There is no need to do anything for the OAuth2 process, as the library seems to take care of that.
Use pip to install the google2pandas library on your machine.
You then need to create a GCP account if you don't already have one, and follow step 1 here to get the credentials.
You can then use the Quick Demo shown in the README file of the repository (modify the query to your needs).
EDIT
Look into the New and Improved section of the README file, as it is the most up-to-date one.
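As a rough sketch of what the v4 Quick Demo boils down to (the class name, secrets file path, view ID, and the dimension/metric names below are assumptions to double-check against the README):

from google2pandas import GoogleAnalyticsQueryV4

query = {
    'reportRequests': [{
        'viewId': 'XXXXXXXX',                                    # your GA view ID
        'dateRanges': [{'startDate': '7daysAgo', 'endDate': 'today'}],
        'dimensions': [{'name': 'ga:date'}],
        'metrics': [{'expression': 'ga:pageviews'}],
    }]
}

conn = GoogleAnalyticsQueryV4(secrets='client_secrets.json')     # credentials from the GCP step above
df = conn.execute_query(query)                                   # returns a pandas DataFrame
print(df.head())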
