I've started experimenting with Azure ML Studio: playing with the templates, uploading data into it, and working with it right away.
The problem is, I can't figure out how to tie these algorithms to real-time data. Can I define a data source as input, or configure Azure ML Studio so that it runs on data I've specified?
Azure ML Studio is for experimenting to find a proper solution to your problem set. You can upload data, then sample, split, and train your algorithms to obtain "trained models". Once you're comfortable with the results, you can convert that "training experiment" into a "predictive experiment". From then on, your experiment will no longer be training but predicting results based on user input.
To do so, publish the experiment as a web service. Once it's published, you can find it under the Web Services tab and run samples against it. There's a manual input dialog (the entry boxes depend on the features you were using in your data samples), some documentation, and REST API information for single-query and batch query processing. Under the batch section you can even find sample code for connecting to the published web service.
From there, any platform that can talk to a REST API can call the published web service and get the results.
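For illustration, here is a minimal Python sketch of a single-query (request-response) call. The endpoint URL, API key, and column names below are placeholders; copy the real values from your web service's dashboard in Azure ML Studio.

```python
import json
import urllib.request

# Placeholders: copy the real URL and API key from the web service
# dashboard; column names must match the features in your experiment.
URL = "https://<region>.services.azureml.net/workspaces/<ws-id>/services/<service-id>/execute?api-version=2.0&details=true"
API_KEY = "<your-api-key>"

payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["feature1", "feature2"],
            "Values": [["value1", "value2"]],
        }
    },
    "GlobalParameters": {},
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + API_KEY,
    },
)

with urllib.request.urlopen(req) as response:
    print(json.loads(response.read()))
```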
The article below walks through converting a training experiment to a predictive experiment:
https://azure.microsoft.com/en-us/documentation/articles/machine-learning-walkthrough-5-publish-web-service/
Hope this helps!
I'm working on testing SMART on FHIR applications, specifically the asbi screening application here: https://github.com/asbi-cds-tools/asbi-screening-app .
I'm able to get the app to run locally and I can test the app using the SmartHealthIt testing tool here: https://launch.smarthealthit.org/ .
The application runs and I am able to complete the questionnaire. When I hit the final submit button everything seems to complete without error.
However, none of the survey data seem to be written to the patient record.
Does https://launch.smarthealthit.org/ support writing data from the SMART on FHIR application under test? Is there an example application that does this?
Does the Cerner application (https://code.cerner.com/developer/smart-on-fhir/apps) support writing patient data from a SMART on FHIR application? Is there an example application that demonstrates this?
Is there a different sandbox that supports this functionality?
The SMART App Launcher is a simulator that replicates the process of launching a SMART on FHIR app. Whether writing data is permitted ultimately comes down to whether the FHIR server accepts write operations like create and update. Per its CapabilityStatement, the SMART R4 open endpoint does for various resources. Cerner and Epic support write operations as well. Your best bet is to review the documentation for the sandbox(es) you're interested in and determine which available capabilities align with your desired workflow.
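For a concrete picture of what a write looks like: against an open (no-auth) FHIR endpoint, creating a resource is just an HTTP POST of the resource body. Below is a minimal Python sketch posting a QuestionnaireResponse; the base URL and patient ID are assumptions, so copy the exact FHIR server URL shown in the launcher.

```python
import requests

# Assumed base URL for the SMART R4 open endpoint; confirm in the launcher UI.
FHIR_BASE = "https://launch.smarthealthit.org/v/r4/fhir"

# Minimal QuestionnaireResponse; the patient reference is a placeholder.
resource = {
    "resourceType": "QuestionnaireResponse",
    "status": "completed",
    "subject": {"reference": "Patient/example"},
    "item": [
        {
            "linkId": "1",
            "text": "How often do you have a drink containing alcohol?",
            "answer": [{"valueString": "Monthly or less"}],
        }
    ],
}

resp = requests.post(
    f"{FHIR_BASE}/QuestionnaireResponse",
    json=resource,
    headers={"Content-Type": "application/fhir+json"},
)
resp.raise_for_status()
print("Created:", resp.json().get("id"))  # the server assigns an ID on create
```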
I'm developing a time series model to analyze the download traffic inside my organization. Now I'm trying to find a way to run this code automatically every day and create alerts whenever I find anomalies (high download volumes), so it isn't necessary to do this manually. I'd also like to create a dashboard or an easy way to visualize the plots I'm getting.
It'd be something similar to workbooks but with a deeper analysis.
Thanks!
I'm using Azure ML through the web UI and running a time series forecasting AutoML training job. In the Explanations tab for a model, how can I upload the actual data for the forecast period for comparison? See the red-circled box in the image below.
We are currently developing test-set ingestion in the UI; for now, however, there is no way to upload test data through the UI to populate these graphs. This experience can only be accessed by kicking off an explanation through the SDK with the test data. We refer to this as "interpretability at inference time" and have documentation on how to do this here: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-aml#interpretability-at-inference-time
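To sketch what the SDK route can look like (a rough illustration only; `fitted_model`, `X_train`, `X_test`, and `feature_names` are placeholders, and the exact steps may differ for AutoML forecasting models):

```python
from azureml.core.run import Run
from azureml.interpret import ExplanationClient
from interpret.ext.blackbox import TabularExplainer

# Assumes this runs inside a training script attached to a run.
run = Run.get_context()

explainer = TabularExplainer(
    fitted_model,            # the trained model (placeholder)
    X_train,                 # data used to initialize the explainer (placeholder)
    features=feature_names,  # column names (placeholder)
)

# Explain the model on the held-out data for the forecast period.
global_explanation = explainer.explain_global(X_test)

# Upload the explanation to the run so it appears in the studio UI.
client = ExplanationClient.from_run(run)
client.upload_model_explanation(global_explanation, comment="test-set explanation")
```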
Test-set ingestion is scoped to land in private preview before the end of June. Let's keep in touch to ensure you get early access.
Thanks,
Sabina
We are researching Stream.io and Stream Framework.
We want to build a high-volume feed with many producers (sources) that includes highly personal messages (private messages).
To build this feed and make it relevant for all subscribers, we will need to use our own ML model for feed personalization.
We found the page below describing their solution for personalization, but it might scale badly for running and developing our own ML model:
https://go.getstream.io/knowledge/volumes-and-pricing/can-i
Questions:
1. How do we integrate/add our own ML model into a GetStream.io feed?
2. Should we move to the Stream Framework instead, and how do we connect our own ML model to that feed solution?
Thanks for pointing us in the right direction!
We have the ability to work with your team to incorporate ML models into Stream. The model has to be close to the data, otherwise lag becomes an issue. If you use the Stream Framework, you're working with Python and your own instance of Cassandra, which we stopped using ourselves because of performance and scalability issues. If you'd like to discuss options, you can reach out via the form on our site.
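If you stay on the hosted service, one common pattern (a sketch, not an official integration; `my_model.score` is a hypothetical stand-in for your own model) is to read a feed through the stream-python client and re-rank the activities yourself before display:

```python
import stream

# Credentials from your GetStream.io dashboard (placeholders here).
client = stream.connect("<api_key>", "<api_secret>")

def personalized_feed(user_id, my_model, limit=50):
    """Fetch a user's timeline and re-rank it with your own ML model."""
    feed = client.feed("timeline", user_id)
    activities = feed.get(limit=limit)["results"]
    # my_model.score is hypothetical: it should return a relevance score
    # for a (user, activity) pair based on your own features.
    return sorted(
        activities,
        key=lambda activity: my_model.score(user_id, activity),
        reverse=True,
    )
```

The trade-off the answer above points at still applies: scoring on your side adds a network round trip per request, so the model needs to be fast or the feed will feel laggy.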
I'm trying to deploy a web app that takes one web input, then runs "Set Column In Dataset" a few times for each model, and then sends out a web output for each model.
Right now I have it set up with a few web inputs, a model that runs for each, and a web output for each. It works for now, but it's a hassle because every time I want to add a new model to be predicted I have to add a bunch of stuff in both Azure and my web application. I'm just wondering if there is an easier way I'm missing.
I am not quite sure I understand the workflow you described. Can you provide more details on what you are trying to accomplish with your web app and your experiment? For example, what do you mean when you say "I have to add a bunch of stuff"?
Azure ML does support multiple web service inputs and outputs. Adding a new model to the experiment requires you to re-deploy your web service.
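For reference, a request against a web service with multiple input ports just names each port in the payload (a sketch; the port and column names are placeholders that must match your experiment):

```python
# Sketch of a request body for a web service with two named input ports.
# Port names ("input1", "input2") and columns must match your experiment.
payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["feature_a"],
            "Values": [["value_a"]],
        },
        "input2": {
            "ColumnNames": ["feature_b"],
            "Values": [["value_b"]],
        },
    },
    "GlobalParameters": {},
}
# POST this payload to the service's execute endpoint as shown earlier.
```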