Azure Machine Learning, 1 web input with multiple outputs?

I'm trying to deploy a web app that takes one web input, runs "Set Column In Dataset" a few times for each model, and then sends out a web output for each model.
Right now the way I have it set up, I have a few web inputs, then a model that runs for each, and then a web output for each. It works for now, but it's a hassle because every time I want to add a new model to be predicted I have to add a bunch of stuff in both Azure and my web application. Just wondering if there is an easier way I'm missing.

I am not quite sure I understand the workflow you described. Can you provide more details on what you are trying to accomplish with your web app and your experiment? For example, what do you mean when you say "I have to add a bunch of stuff"?
Azure ML does support multiple web service inputs and outputs. Adding a new model to the experiment requires you to re-deploy your web service.
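For what it's worth, here is a rough sketch of what a single call against such a service can look like: one web service input in the request, and each Web Service Output module coming back as its own named entry in the response. The endpoint URL, API key, and input/output names below are placeholders, not values from this thread:

    // Hypothetical sketch of a request-response call to an Azure ML (classic)
    // web service; endpoint, key, and input/output names are placeholders.
    const endpoint =
      "https://ussouthcentral.services.azureml.net/workspaces/<workspace-id>" +
      "/services/<service-id>/execute?api-version=2.0&details=true";

    async function score() {
      const res = await fetch(endpoint, {
        method: "POST",
        headers: {
          Authorization: "Bearer <api-key>",
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          Inputs: {
            // A single web service input feeding the experiment.
            input1: {
              ColumnNames: ["feature1", "feature2"],
              Values: [["1.0", "2.0"]],
            },
          },
          GlobalParameters: {},
        }),
      });
      const data = await res.json();
      // With multiple Web Service Output modules in the experiment, each one
      // comes back as a separately named entry under Results.
      console.log(data.Results.output1, data.Results.output2);
    }

    score();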

Related

Is it possible to combine multiple Application Insights application maps?

I currently have a microservice architecture consisting of ~20 services, and each service has its own dedicated Application Insights instance per environment (dev/test/prod).
What I would like to do, if possible, is aggregate all of the different application maps into one global application map so that I can easily see everything through a single pane of glass (per environment) rather than having to drill into each individual service's application map.
The only way to do this, from what I've found, is to have every service point to the same App Insights resource. However, I would imagine that this approach would make it difficult to track metrics for an individual service, since the metrics would be based on the entire environment's architecture rather than each service. Is there some way to build a workbook that combines all of the application maps?
Any ideas on how to approach this? Thanks in advance.
If your microservices are instrumented with the Application Insights SDKs or rely on auto-instrumentation, then it should work out of the box. Application Insights will discover which components a particular app talks to, and you should be able to get the full map by clicking on "Update map components".
[Screenshot: one app view]
[Screenshot: whole connected microservice universe view]
If "Update map components" is greyed out then something wrong with distributed tracing instrumentation.

Using real-time data with Azure Machine Learning Studio?

I've started experimenting with Azure ML Studio and started playing with the templates: you upload data into it and can immediately start working with it.
The problem is, I can't seem to figure out how to tie these algorithms to real-time data. Can I define a data source as input, or can I configure Azure ML Studio in a way that it runs on data that I've specified?
Azure ML Studio is for experimenting to find a proper solution to the problem set you have. You can upload data, sample and split it, and train your algorithms to obtain "trained models". Once you feel comfortable with the results, you can turn that "training experiment" into a "predictive experiment". From there on, your experiment will not be training but will be predicting results based on user input.
To do so, you can publish the experiment as a web service. Once you've published it, you can find your web service under the web services tab and run samples with it. There's a manual input dialog (the entry boxes here depend on the features you were using in your data samples), some documentation, and REST API info for single-query and BATCH-query processing with the web service. Under batch you can even find sample code to connect to the published web service.
From there on, any platform that can talk to a REST API can call the published web service and get the results.
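As a rough sketch of the BATCH flow (every URL, key, and blob location below is a placeholder, and the request shapes are illustrative only; the single-query call is similar but posts the input rows inline, as in the sketch in the first thread above):

    // Rough, hypothetical sketch of the Batch Execution Service (BES) flow.
    const base =
      "https://ussouthcentral.services.azureml.net/workspaces/<workspace-id>" +
      "/services/<service-id>";
    const headers = {
      Authorization: "Bearer <api-key>",
      "Content-Type": "application/json",
    };

    async function runBatchJob() {
      // 1. Submit a job pointing at input data in blob storage.
      const submit = await fetch(`${base}/jobs?api-version=2.0`, {
        method: "POST",
        headers,
        body: JSON.stringify({
          Input: {
            ConnectionString: "<storage-connection-string>",
            RelativeLocation: "mycontainer/input.csv",
          },
          GlobalParameters: {},
        }),
      });
      const jobId = await submit.json();

      // 2. Start the job; afterwards, poll .../jobs/{jobId}?api-version=2.0
      //    until its status reports completion.
      await fetch(`${base}/jobs/${jobId}/start?api-version=2.0`, {
        method: "POST",
        headers,
      });
    }

    runBatchJob();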
Find below the article about converting from training to predictive experiments:
https://azure.microsoft.com/en-us/documentation/articles/machine-learning-walkthrough-5-publish-web-service/
Hope this helps!

Azure ML App - Complete Experience - Train automatically and Consume

I played around a bit with Azure ML Studio. So as I understand it, the process goes like this:
a) Create training experiment. Train it with data.
b) Create Scoring experiment. This will include the trained model from the training experiment. Expose this as a service to be consumed over REST.
Maybe a stupid question, but what is the recommended way to get the complete experience, like the one I get when I use an app like https://datamarket.azure.com/dataset/amla/mba (Frequently Bought Together API built with Azure Machine Learning)?
I mean the following:
a) Expose 2 or more services - one to train the model and the other to consume (test) the trained model.
b) User periodically sends training data to train the model
c) The trained model(s) now get saved and are available for consumption
d) User is now able to send a dataframe to get the predicted results.
Is there an additional wrapper that needs to be built?
If there is a link documenting this, please point me to the same.
The Azure ML retraining API is designed to handle the workflow you describe:
http://azure.microsoft.com/en-us/documentation/articles/machine-learning-retrain-models-programmatically/
Hope this helps,
Roope - Microsoft Azure ML Team
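To make the shape of that retraining API concrete, here is a rough sketch of the step from the linked article where the scoring endpoint is PATCHed to pick up a newly retrained model file; every name, key, and URL below is a placeholder:

    // Hypothetical sketch: after the training web service has produced a new
    // .ilearner file in blob storage, update the scoring endpoint to use it.
    async function swapTrainedModel() {
      const endpointUrl =
        "https://management.azureml.net/workspaces/<workspace-id>" +
        "/webservices/<service-id>/endpoints/<endpoint-name>";

      const res = await fetch(endpointUrl, {
        method: "PATCH",
        headers: {
          Authorization: "Bearer <endpoint-api-key>",
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          Resources: [
            {
              Name: "<trained-model-name>",
              Location: {
                BaseLocation: "https://<storage-account>.blob.core.windows.net",
                RelativeLocation: "mycontainer/retrained-model.ilearner",
              },
            },
          ],
        }),
      });
      console.log("PATCH status:", res.status);
    }

    swapTrainedModel();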
You could also take a look at Azure Data Factory. I have written a custom activity to do exactly this, putting the logic to retrain the model inside the custom activity.

Node API Architecture on AWS

I'm making an app that will have:
iOS and Android apps
A web-based "dashboard" to display data gathered from the mobile apps
The app requires that end-users create an account with us (we most likely will NOT use Facebook/Twitter logins).
Everything is/will be hosted on AWS using EC2/RDS/S3 (all encapsulated in Elastic Beanstalk)
| Web Browser | <----> | sails.js app | <-------> | actionhero.js API |
                                                            ⬆
| Mobile app(s) | <-----------------------------------------/
So far, I've built most of the backing API in actionhero.js, hosted on AWS.
It made sense to me to separate the API and the web app, because the web app is only for a small subset of users -- I'd expect 50x the traffic from our mobile apps over the web app. We could scale the API to serve the mobile users without unnecessarily scaling the sails.js app.
My questions are:
(biggest unknown) How should I handle authentication? The sails.js app needs to be able to make requests to the API, and so do the mobile applications.
I was looking at the oauth2orize node module for creating our own Auth server, but it is designed for Connect/Express, so I don't think I could leverage it in the actionhero.js-based API.
If the solution is to create an OAuth server, am I supposed to host that on its own EC2 instance?
(AWS-specific question) I don't fully understand the use case for creating what AWS describes as a "worker tier" environment. Would there be a reason that the API would fall into that category?
If I want to run a data querying and aggregation task, I would create a separate node process for that, correct? If so, would that background worker have to exist on its own EC2 instance?
Sails.js and Actionhero.js both provide heavy support for socket.io. Should communication between the Sails app and my API happen over a persistent WebSocket connection? Will that scale if I need to create new instances in the future?
This seems like a fairly typical pattern; I'd like to hear if there are any big red flags in this design, before I paint myself into a corner. :-) THANKS!
Bonus question (specific to AWS Elastic Beanstalk)
Will I create separate "Applications" for the sails.js server and the API server? It looks like that's the only way to set it up, anyhow, but I want to make sure.
We have used Node and Beanstalk for a couple of applications now. For authentication, you can create an account for the user when they first access the app, and store the account id on the device. If you want them to be able to log in from multiple devices, you'll need to provide some way for them to identify themselves, either an id/password or Facebook login. It's not that tough to set up. Use sessions to allow them to log in and stay logged in; we generally just store the user id in the session.
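A minimal sketch of that pattern, shown with express-session (Sails is built on Express and has its own session configuration, but the idea is the same); the routes and the user lookup are illustrative placeholders:

    import express from "express";
    import session from "express-session";

    const app = express();
    app.use(express.json());
    app.use(
      session({
        secret: "<session-secret>", // placeholder
        resave: false,
        saveUninitialized: false,
      })
    );

    // Hypothetical user lookup; replace with your own account store.
    function lookupUser(username: string, password: string) {
      return username === "demo" && password === "demo" ? { id: 1 } : null;
    }

    app.post("/login", (req, res) => {
      const user = lookupUser(req.body.username, req.body.password);
      if (!user) return res.status(401).end();
      (req.session as any).userId = user.id; // session now identifies the user
      res.json({ ok: true });
    });

    app.get("/me", (req, res) => {
      const userId = (req.session as any).userId;
      if (!userId) return res.status(401).end();
      res.json({ userId });
    });

    app.listen(3000);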
A worker tier is for something you want to decouple from your app -- something you want done where you don't need to know whether it succeeded or failed. A notification server is a prime example: you send the info for the notification into an SQS queue, which then gets delivered to the worker tier, which does the work. We are just trying to figure this out now.
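A sketch of that decoupling, assuming the @aws-sdk/client-sqs package; the queue URL and payload shape are placeholders. The web tier drops the message on the queue and moves on; a Beanstalk worker environment receives it later via HTTP POST:

    import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

    const sqs = new SQSClient({ region: "us-east-1" });

    async function queueNotification(userId: number, message: string) {
      await sqs.send(
        new SendMessageCommand({
          QueueUrl: "https://sqs.us-east-1.amazonaws.com/<account-id>/<queue>",
          MessageBody: JSON.stringify({ userId, message }),
        })
      );
      // The caller doesn't wait on, or care about, delivery success/failure.
    }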
A big aggregation process -- yes, I'd take it elsewhere, so it's not eating up your production server(s). You might want to do some data aggregation on an ongoing basis, as transactions are saved, so it accumulates. Big rollups after the fact can be time consuming and fragile.
Sounds like yes, they would be separate applications.
A good tip: we use Grunt, a Node-based task runner, to create the zip files for the app. We check the latest code out of SVN, clean it up by doing things like removing .svn directories, apply our configuration to the config files with simple string replacement, then zip up the resulting output. This then gets loaded into Beanstalk. It takes all the guesswork and time out of doing a new deployment; we can get a new build up in minutes that way.
Beanstalk can be very frustrating. When it fails, it's not very good at telling you why.

Need to poll a bunch of web services from a Linux server - What is the most efficient way?

So basically, I'm looking to build a web app that aggregates a bunch of data from various web services and presents the data visually. To achieve what I want, I will basically need to regularly poll these web services, and store the resulting data from each call in a database. This data would then be queried by the web app etc.
I'm looking to build the web app using PHP (CodeIgniter), but I'm not entirely sure how to go about the polling component. I'm coming from a .NET background and am still getting used to the Linux/web world. I would normally solve this problem by simply writing a .NET Windows service. I want all of this to run on a Linux box, however, so if anyone could recommend any technologies for this sort of thing, that would be great.
Thanks in advance!
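For illustration, the usual Linux counterpart of a Windows service is either a cron job invoking a small script or a long-running daemon. Below is a minimal sketch of one polling pass, written in Node/TypeScript purely as an example (the endpoints and the database step are hypothetical; the same shape works in PHP invoked from cron):

    // Hypothetical list of web services to poll.
    const services = [
      "https://api.example.com/metrics",
      "https://api.example.org/status",
    ];

    async function pollOnce() {
      for (const url of services) {
        try {
          const res = await fetch(url);
          const payload = await res.json();
          await saveToDatabase(url, payload); // hypothetical DB write
        } catch (err) {
          console.error(`poll failed for ${url}:`, err);
        }
      }
    }

    // Hypothetical persistence step; replace with your own DB layer.
    async function saveToDatabase(source: string, payload: unknown) {
      console.log(source, JSON.stringify(payload).slice(0, 80));
    }

    // Run once per invocation; schedule with cron, e.g.:
    //   */5 * * * * /usr/bin/node /opt/poller/poll.js
    pollOnce();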
