Azure Machine Learning Shutdown Managed Instance - azure-machine-learning-service

We recently upgraded from Azure ML SDK v1 to SDK v2 to get built-in support for HTTPS endpoints. Azure ML no longer uses Container Instances. With Container Instances we could shut down our endpoints overnight to save costs, using the Azure Management API:
'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.ContainerInstance/containerGroups/{containerInstance}/stop?api-version=2021-10-01'
Now, using SDK v2, there is no option to shut down our endpoint resources. The only workaround I know of is to delete the endpoint (managed instance) and recreate it the next day.
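For reference, the delete-and-recreate workaround looks roughly like this with the v2 Python SDK. This is only a sketch; the workspace details, endpoint name, and auth mode are placeholders for our setup:

    from azure.identity import DefaultAzureCredential
    from azure.ai.ml import MLClient
    from azure.ai.ml.entities import ManagedOnlineEndpoint

    # Placeholder workspace details
    ml_client = MLClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",
        resource_group_name="<resource-group>",
        workspace_name="<workspace>",
    )

    # Evening: delete the managed online endpoint (and its deployments) so it stops billing
    ml_client.online_endpoints.begin_delete(name="my-endpoint").result()

    # Morning: recreate the endpoint; the deployments then have to be recreated as well
    endpoint = ManagedOnlineEndpoint(name="my-endpoint", auth_mode="key")
    ml_client.online_endpoints.begin_create_or_update(endpoint).result()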
As a reference, the User Guide for Azure Machine Learning provides information on how to reduce costs, but it doesn't cover Endpoints.
Since deployment now takes much longer, I was wondering whether there is another option to reduce our costs without deleting the endpoints.
Let me know if you need more information on my issue. Thanks in advance for your help :)

Related

Azure function vs ASP.NET Core Worker Service?

How does an Azure Function differ from an ASP.NET Core Worker Service?
Do both of these cover the same use cases?
How do I decide which one to use?
Update:
Azure Functions is designed for Azure. So when you use Azure, or other Azure services that integrate with Azure Functions, you should consider Azure Functions, whose built-in features can simplify your code and logic. But Azure Functions also incurs its own charges. And yes, if you don't use Azure, just use an ASP.NET Core Worker Service.
Azure Functions are totally different from ASP.NET Core Worker Services.
The benefit of Azure Functions is that they support many triggers, such as the Blob Storage trigger and the Event Hubs trigger, and integrate with other Azure services.
With these built-in features it's easy to create a function for a specific task. For example, if you upload an image to Azure Blob Storage and then want to resize it, you can build a blob-triggered function with very little code.
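To illustrate, a blob-triggered function can be as small as this rough sketch (shown with the Python programming model for Functions; a C# function would be analogous, and the container path and connection setting name are placeholders):

    import logging
    import azure.functions as func

    app = func.FunctionApp()

    # Fires whenever a blob lands in the "images" container (path and connection are placeholders)
    @app.blob_trigger(arg_name="blob", path="images/{name}", connection="AzureWebJobsStorage")
    def on_image_uploaded(blob: func.InputStream):
        # A real function would resize the image here; this one just logs what arrived
        logging.info("New image %s (%d bytes)", blob.name, blob.length)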
Worker Services are a good fit for any kind of background processing. If you want to work with Azure services such as Blob Storage or Event Hubs, you can achieve that too, but you need to do a lot more work yourself.
In the end, it depends on your use case which one you should choose; pick the simpler, more efficient option.
I think the answer Fred was looking for is: why would someone use Azure Functions over an ASP.NET Core hosted service? And the answer isn't just that you have an Azure account so everything is easy and integrated. It depends: traffic, cost, expertise, etc.
Actually, Microsoft publishes guidelines on when to use Azure Functions versus, say, an IHostedService running in a Linux container hosted in AKS. I'll post the link, but it basically comes down to whether your workload is mission critical or not. If you want the best performance, lowest cost, and reliability, a hosted .NET Core service running in a Linux container is an excellent alternative.
Here are the mission-critical guidelines from Microsoft; hope this helps!
Design recommendations
Azure Functions:
Consider Azure Functions for simple business process scenarios which don't have the same stringent business requirements as business-critical system flows.
Low-critical scenarios can also be hosted as separate containers within AKS to drive consistency, provided affinity and anti-affinity requirements are fully considered when collocating containers on nodes.
Excerpt taken from here:
https://learn.microsoft.com/en-us/azure/architecture/framework/mission-critical/mission-critical-application-platform

Azure Dynamic App Service instance that starts up and shuts down automatically based on the current needs

I am new to Microsoft Azure and Google Cloud and am currently comparing these two cloud providers before starting a new project. I am planning to write a web application using either Google Cloud App Engine or Azure App Service.
I want to start with a very basic service instance that I call via HTTPS. To reduce charges it would be nice to pay only for the minutes the service is actually used, i.e. the instance only runs when needed.
Google Cloud offers dynamic instances, where compute instances are shut down when idle and started for incoming requests. That seems much cheaper for a seldom-used prototype and a first foray into cloud services.
Instances are resident or dynamic. A dynamic instance starts up and shuts down automatically based on the current needs. [...] When an application is not being used at all, App Engine turns off its associated dynamic instances, but readily reloads them as soon as they are needed.
Unfortunately, in the Azure documentation I only found the Overview of autoscale in Microsoft Azure Virtual Machines, Cloud Services, and Web Apps, which does not cover automatic instance shutdown when idle. The Start/Stop VMs during off-hours solution in Azure Automation does not satisfy my need either, because I am looking for a plain compute instance, not a full VM.
Is there an equivalent on Azure that automatically starts up and shuts down App Service instances based on usage, i.e. incoming requests?
I will decide which provider to use depending on this functionality. Does anybody have experience with this on Azure? Thank you.
You can't do that with Azure App Service alone as of now (24-Feb-2019). But you could use an Azure Function to fire up an App Service instance and then forward all incoming traffic to the app hosted in that App Service via an Azure Functions proxy; see this description on learn.microsoft.com. I have been planning to try this for a while too. In theory it should work... From experience, App Service instances start quickly, so the warm-up time should be acceptable. Even better, you could keep a free or shared App Service plan instance with your app running and forward the Azure Function calls to it by default. Under increasing load, move the app to a pre-configured plan that supports autoscaling.
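As a rough illustration of the "wake it up on demand" part, the function could start the App Service through the management SDK before the proxy forwards traffic. This is only a sketch, assuming the Azure Python management libraries; the resource names are placeholders:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.web import WebSiteManagementClient

    # Placeholder identifiers
    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    APP_NAME = "<web-app-name>"

    client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    def ensure_app_running():
        # Check the current state and start the web app only if it is stopped
        site = client.web_apps.get(RESOURCE_GROUP, APP_NAME)
        if site.state != "Running":
            client.web_apps.start(RESOURCE_GROUP, APP_NAME)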
Of course, you could try to implement the entire app via a set of Azure Functions, which are fully "dynamic" in your terminology. Depending on the architecture of your application, this might actually be the best choice.
The Autoscale feature of Azure lets you scale out and scale in based on configurable criteria; take a look here. You are limited by your pricing tier. Maybe this example will give you some insight.

Alternative to running a Windows service in the Azure cloud

We currently have a Windows service that sends notification emails to users after doing some processing on a database (SQL database). It runs once a day.
We want to move this to the Azure cloud. One option is to put it on an Azure VM as-is, but I am looking for a better solution.
I have read about recurring and on-demand WebJobs, but I am not sure whether that is the best solution.
Also, is there any way to update the service's configuration in App.config without re-deploying the service code to the cloud? I mean, can we manage the configuration from the Azure portal?
Thanks in advance.
Update 11/4/2016
Since this was written, there are 2 additional features available in Azure that are both excellent choices depending on what functionality you need:
Azure Functions (which is based on the WebJobs described below): serverless code that can be triggered/invoked in various ways, and has scaling support (see the timer-trigger sketch below this list).
Azure Service Fabric: Microservice platform, with support for actor model, stateful and stateless services.
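For the once-a-day job in the question, a timer-triggered function is the closest fit. Here is a minimal sketch, assuming the Azure Functions Python v2 programming model; the CRON schedule and the notification logic are placeholders:

    import logging
    import azure.functions as func

    app = func.FunctionApp()

    # Runs once a day at 06:00 UTC (NCRONTAB format: second minute hour day month day-of-week)
    @app.schedule(schedule="0 0 6 * * *", arg_name="timer", run_on_startup=False)
    def daily_notifications(timer: func.TimerRequest):
        logging.info("Daily notification run started")
        # Placeholder: query the SQL database and send the notification emails here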
You've got 3 basic options:
Windows service running on VM
WebJob
Cloud service
There's a lot of information out there on the tradeoffs between these choices, but here's a brief summary.
VM - Advantages: you can move your service basically as it is without having to change much or any of your code. VMs also have the easiest connectivity with other resources in Azure (blob storage, virtual networks, etc). The disadvantage is that you're giving up all of the PaaS advantages and are still stuck managing your own VM infrastructure.
WebJob - Advantages: multiple invocation options (queues, blobs, manual, queue receive loops, continuous while-loop style, etc.), plus scheduled invocation (which would cover your case). Easy to deploy (can ship with a website, as a console app, automatically through Kudu), has some built-in logging in the Azure portal - and yes, to answer your question, you can alter the configuration in the portal itself for connection strings and app settings.
Disadvantages - you'll need to update your code, you don't have access to the underlying resources (if you need that), and - more something to keep in mind than a disadvantage - it uses the same resources as the web app it's deployed with.
WebJobs are the newest of these options, but they also appear to be under active development to increase their functionality and usefulness.
Cloud Service - like a managed VM, has some deployment options, access to underlying VM if needed. Would require some code changes from your existing service.
There's nothing you've mentioned in your use case that makes me think a WebJob shouldn't be the first thing you try.
(Edit: Troy Hunt has a great and relatively recent blog post illustrating most of the points I've mentioned about Web Jobs above: http://www.troyhunt.com/2015/01/azure-webjobs-are-awesome-and-you.html)

How to turn on/off cloud instances during office hours

I've got my head around creating cloud instances in AWS, Azure and Rackspace. However, I need to turn my instances off at the end of the day and on in the morning, as this will halve my hosting cost (they are for development).
I've looked at a few management services but they blew my brains out. Is there a simple way to do this?
Azure
REST:
You can do this for Azure deployments by using the Windows Azure Service Management REST API. Because it is REST, you can access it from most programming languages.
You could have an application running on your local machine that schedules calls to these services to delete your deployment at the end of office hours and then create it again in the morning.
PowerShell:
Or you can manage your deployments in the same way, but using the Azure PowerShell cmdlets instead of REST. I have done it this way myself and it works nicely.
To help you get started there is a nice tutorial on how to use PowerShell to deploy Azure applications.
Also, in case you didn't already know, there is a 3-month free trial for Azure if you are simply looking to cut costs while developing.
Approach
You could always roll your own solution, insofar as most cloud providers offer an API to start/stop instances on demand (or even on a schedule), which is of course what those management services use as well. The AmazonEC2 Java interface, for example, offers all the relevant methods (amongst many others), specifically:
StartInstances()
StopInstances()
RebootInstances()
Via Scripting (EC2)
The simplest approach for Amazon EC2 would be to craft a few Python scripts with the excellent boto library (an integrated interface to current and future infrastructural services offered by Amazon Web Services), which exposes all of the EC2 methods mentioned above; you could then run those scripts on demand or via your operating system's scheduler.
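A minimal version of such a script, sketched here with boto3 (boto's current successor); the region and instance IDs are placeholders:

    import sys
    import boto3

    INSTANCE_IDS = ["i-0123456789abcdef0"]              # placeholder development instances
    ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

    def main(action: str) -> None:
        if action == "stop":
            ec2.stop_instances(InstanceIds=INSTANCE_IDS)
        elif action == "start":
            ec2.start_instances(InstanceIds=INSTANCE_IDS)
        else:
            raise SystemExit("usage: instances.py start|stop")

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "")

Hooked up to cron or the Windows Task Scheduler, this covers the morning/evening switch described above.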
Via Continuous Integration / Automation (EC2)
Another option would be to use a continuous integration server as an automation engine (a sometimes overlooked aspect of these systems), in case you happen to run one anyway; it would allow you to start/stop instances both on demand and on a schedule, similar to cron.
We do exactly this by means of the Bamboo AWS Plugin (it's Open Source and the code is available on Bitbucket), see my answer to How to start and stop an Amazon EC2 instance programmatically in java for more details on this approach. While Atlassian Bamboo is a commercial offering, there should be something similar available for popular Open Source CI solutions like e.g. Jenkins as well.
NOTE: As of June 2013, IaaS instances can be placed in a "stopped (deallocated)" state. In this state you are only billed for the storage of any disks associated with the VM. The original answer below describes a VM instance that is in a "stopped" but not deallocated state. The deallocated state is currently the default for VM stop actions taken via the Azure management portal.
The only way to accomplish this in Windows Azure today is to delete the deployment.
If you stop the service, you are still billed (like renting office space, you pay for it even if you aren't in it), and you can't set the instance count to zero. An option you may use is to just reduce the instance count to the absolute minimum (1) and then scale it back up during the hours you need it. But the cost benefit of this will depend on the size of your instances.
Old thread I know, but Microsoft introduced 'Runbooks' for Azure in 2014 that you can use for automation, including scheduled startups and shutdowns. As mentioned above, be sure you are in stopped (deallocated) state, as opposed to just stopped, in order to prevent charges.
More info:
Script to stop your VMs
Azure automation, official MS docs.
Yes, Automation runbooks are available and let us schedule the job. I created a script for stopping (deallocating) Azure VMs.
https://gallery.technet.microsoft.com/Deallocate-all-VM-under-79049c69
Please read about how to use runbooks: http://azure.microsoft.com/blog/2014/06/19/azure-automation-runbook-management/
Deallocation and stop are different; a VM that is merely stopped still incurs cost.
The best article on automation and switching VMs on/off that I have found so far [05 February 2015]: http://clemmblog.azurewebsites.net/using-azure-automation-start-und-stop-virtual-machines-schedule/
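For completeness, the same stop (deallocate) / start cycle can also be scripted directly against the management API, for example with the current Azure Python management SDK. This is only a sketch; the subscription, resource group, and VM names are placeholders:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
    RESOURCE_GROUP = "<resource-group>"    # placeholder
    VM_NAMES = ["dev-vm-1", "dev-vm-2"]    # placeholder development machines

    compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    def stop_for_the_night():
        # Deallocate (not just power off) so that compute charges stop
        for name in VM_NAMES:
            compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, name).result()

    def start_for_the_day():
        for name in VM_NAMES:
            compute.virtual_machines.begin_start(RESOURCE_GROUP, name).result()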
Recommended solution for AWS:
The AWS Data Pipeline is uniquely suited to this task. Data Pipeline uses AWS technologies and can be configured to run AWS CLI commands on a set schedule with no external dependencies. Data Pipeline can write logs to S3 and runs in the context of an IAM role, which eliminates key management requirements. Data Pipeline is also cost-effective; for example, the Data Pipeline free tier can be used to stop and start instances once per day.
https://aws.amazon.com/premiumsupport/knowledge-center/stop-start-ec2-instances/
Refer to this article; there are several options to turn your instances on/off in AWS:
AWS Data Pipeline
AWS Lambda scheduled events
Scheduled Cron on EC2 instance
Scheduled Scaling of Auto Scaling Group
So in your case I'd recommend the following:
For AWS:
Through shell commands such as AWS CLI commands: see Turn on/off Cloud instances using AWS Data Pipeline. This method starts and terminates a separate EC2 instance for each AWS API call, and its running time affects your bill.
Through programming languages like Node.js or Python: see Turn on/off Cloud instances using AWS Lambda (a sketch follows after these recommendations). A task running twice a day, typically for less than 3 seconds with memory consumption up to 128 MB, typically costs less than $0.0004 USD/month.
For Azure and Rackspace (or other platforms you may have):
Use the respective platform APIs, via tools like the above, to start/stop instances on demand.
You may also consider setting up per-boot scripts that run each time your instance is started.
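A sketch of the Lambda variant mentioned above: one handler that reads the desired action from the scheduled event's input, so a single function can serve both the morning and the evening schedule rule. The event field name and instance IDs are assumptions:

    import boto3

    ec2 = boto3.client("ec2")
    INSTANCE_IDS = ["i-0123456789abcdef0"]  # placeholder

    def handler(event, context):
        # The two schedule rules pass {"action": "start"} or {"action": "stop"} as the event input
        action = event.get("action", "stop")
        if action == "start":
            ec2.start_instances(InstanceIds=INSTANCE_IDS)
        else:
            ec2.stop_instances(InstanceIds=INSTANCE_IDS)
        return {"action": action, "instances": INSTANCE_IDS}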
AWS
The AWS SDK is your best bet, but I am using TotalCloud.io to start and stop instances under the free tier. Very customizable.
Easy to set up.

Is it possible to deploy an application using cassandra database on Windows Azure?

I recently got a trial version of Windows Azure and wanted to know if there is any way I can deploy an application using Cassandra.
I can't speak specifically to whether Cassandra works in Azure, unfortunately. That's likely a question for that product's development team.
But the challenge you'll face with this, MySQL, or any other role-hosted database is persistence. Azure roles are in and of themselves not persistent, so whatever back-end store Cassandra is using would need to be placed onto something like an Azure Drive (which is persisted to Azure Blob Storage). However, this would limit the scalability of the solution.
Basically, you run Cassandra as a worker role in Azure. Then, you can mount an Azure drive when a worker starts up and unmount when it shuts down.
This provides some insight re: how to use Cassandra on Azure: http://things.smarx.com/#Run Cassandra
Some help w/ Azure drives: http://azurescope.cloudapp.net/CodeSamples/cs/792ce345-256b-4230-a62f-903f79c63a67/
This should not limit your scalability at all. Just spin up another Cassandra instance whenever processing throughput or contiguous storage becomes an issue.
You might want to check out AppHarbor. AppHarbor is a .NET PaaS built on top of Amazon. It gives users the portability and infrastructure of Amazon, and it provides a number of the rich services that Azure offers, such as background tasks and load balancing, plus some that Azure doesn't, like third-party add-ons, dead-simple deployment, and more. They already have add-ons for CouchDB, MongoDB, and Redis; if Cassandra got high enough on the requested-features list, I'm sure they could set it up.
