What is the difference between Azure Machine Learning Studio and Azure Machine Learning Workbench? What is the intended difference? And is it expected that Workbench is heading towards deprecation in favor of Studio?
I have gathered an assorted collection of differences:
Studio has a hard limit of 10 GB total input of training data per module, whereas Workbench has a variable limit by price.
Studio appears to have a more fully-featured GUI and user-friendly deployment tools, whereas Workbench appears to have more powerful / customizable deployment tools.
etc.
However, I have also found several scattered references claiming that Studio is a renamed, updated version of Workbench, even though both services appear to still be offered.
For a fresh Data Scientist looking to adopt the Microsoft stack (potentially on an enterprise scale within the medium-term and for the long-term), which offering should I prefer?
Azure Machine Learning Workbench is a preview downloadable application. It provides a UI for many of the Azure Machine Learning CLI commands, particularly around experiment submission for Python-based jobs to a DSVM or HDI. The Azure Machine Learning CLI is made up of many key functions, such as job submission and the creation of real-time web services. The Workbench installer provides a way to install everything required to participate in the preview.
Azure Machine Learning Studio is an older product that provides a drag-and-drop interface for creating simple machine learning processes. It has limitations on the size of the data that can be handled (about 10 GB of processing). Learnings and customer requests based on this service have contributed to the design of the new Azure Machine Learning CLI mentioned above.
It should be added that Azure Machine Learning Workbench has been deprecated since September 2018 and has been replaced by the Azure Machine Learning service, which became generally available in December 2018. The core functionality is still intact, but some major architectural changes to point out are:
A simplified Azure resources model
New portal UI to manage your experiments and compute targets
A new, more comprehensive Python SDK
A new expanded Azure CLI extension for machine learning
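To give a feel for the new Python SDK mentioned above, here is a minimal sketch of submitting a training script as an experiment run. It assumes a recent azureml-core package and a config.json downloaded from the workspace in the portal; "train.py", "demo-experiment" and "cpu-cluster" are placeholder names, not anything prescribed by the service.

    # Minimal sketch, assuming a recent azureml-core and a config.json
    # (downloaded from the Azure ML workspace) next to this script.
    from azureml.core import Workspace, Experiment, ScriptRunConfig

    ws = Workspace.from_config()                   # subscription/resource group/workspace from config.json
    exp = Experiment(workspace=ws, name="demo-experiment")

    # "train.py" and "cpu-cluster" are placeholders for your training script
    # and an existing compute target in the workspace.
    run_config = ScriptRunConfig(source_directory=".",
                                 script="train.py",
                                 compute_target="cpu-cluster")

    run = exp.submit(run_config)                   # submit the job to the compute target
    run.wait_for_completion(show_output=True)      # stream logs until the run finishes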
We plan to migrate or modernize all of our applications to Azure and are looking for a tool to discover and assess the applications and report the following:
Technology stack, version, external dependencies (such as databases, Kafka, etc.) and server details such as OS version, CPU, memory, and disk.
I did go through
Azure App Service Migration Assistant
Azure Migrate
for "Discovery & Assessment".
I don't find that these reports contain the required information. Is there a single tool that analyzes and provides all of the required information, like Cloudamize, Movere, etc.?
As a single tool that analyzes and provides all of the required information, you can make use of AHEAD Software.
These tools have been designed specifically for application migration and can offer a detailed report on the technology stack, version, external dependencies, and server information like the OS version, CPU, memory, and storage. They can also provide you with information on guiding principles and assist in the planning and execution of the migration process.
Note that if you use any third-party tools, make sure to register with those third-party services and integrate them into the Azure Migrate interface, so that you have a consistent experience across the variety of tools.
Reference:
Azure Migrate: A Suite Ride to the Cloud
I have my project collection running in Azure DevOps Services (online), and I would like to migrate it to Azure DevOps Server (on-premises).
Help me out here with the incompatibility issues I will be facing and how to overcome them.
Options to migrate from Azure DevOps Services (online) to Azure DevOps Server (on-premises):
Are there any services available in Azure to successfully achieve the above migration without any data loss?
Must I use a third-party tool to do the above migration without any data loss?
Also, help me out here with the downtime required for a 100 GB project collection with multiple repositories.
Project collection size: 100 GB
One of the previous answers (since deleted?) captured most of the critical points, including that no tool can migrate 100% of the data with zero loss. (In fact, a fully lossless migration is not feasible, because some automatically generated and configuration values, such as work item IDs, will inherently be different between the two instances.) Therefore, the only way to get a zero-data-loss migration would be to lift and shift the complete project collection image from Azure DevOps Services to Azure DevOps Server, which is not supported by the official Azure DevOps migration tool. Given that, the only way left to migrate data is using the Azure DevOps APIs.
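To make "using the Azure DevOps APIs" concrete, below is a minimal sketch that reads work item IDs and fields from a Services organization over the REST API; a real migrator would also have to handle history, links, attachments, throttling, and writing into the target Server. The organization, project, and personal access token values are placeholders.

    # Minimal sketch of reading work items over the Azure DevOps REST API.
    # "my-org", "MyProject" and the PAT are placeholders; paging and error
    # handling are omitted for brevity.
    import requests

    ORG = "my-org"
    PROJECT = "MyProject"
    PAT = "<personal-access-token>"
    AUTH = ("", PAT)  # basic auth: empty user name, PAT as the password

    # 1. Run a WIQL query to list the work item IDs to migrate.
    wiql_url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=6.0"
    wiql = {"query": "SELECT [System.Id] FROM WorkItems ORDER BY [System.Id]"}
    ids = [w["id"] for w in requests.post(wiql_url, json=wiql, auth=AUTH).json()["workItems"]]

    # 2. Fetch the work items in batches (the API accepts up to 200 IDs per call).
    items_url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/workitems"
    for chunk in (ids[i:i + 200] for i in range(0, len(ids), 200)):
        resp = requests.get(items_url,
                            params={"ids": ",".join(map(str, chunk)), "api-version": "6.0"},
                            auth=AUTH)
        for item in resp.json()["value"]:
            print(item["id"], item["fields"]["System.Title"])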
So, the best approach is to understand what data cannot be migrated by the migration tools that you are evaluating, and then decide what works best for you. Also, it will not be a black and white selection when it comes to choosing a migration solution. First, you need to define the must-haves you expect from migration and then evaluate the different migrators available in the market. Here are a few common selection criteria:
Data Loss:
Understand what data can and cannot be migrated by the migration solution. Ideally, the tool should be able to migrate work items (along with history, attachments, mentions, and inline images), test management (including test results), source code, dashboards, and areas and iterations. For builds and pipelines, you can use the native export-import feature, as they require manual changes to tweak the connections anyway.
Zero Downtime:
Downtime adds operational costs and impacts development operations, as teams cannot use the Azure DevOps tools. Understand thoroughly whether there is any scenario in which downtime will be required for any type of data.
Ease of Use:
Some tools are a collection of unsupported scripts (Naked Agility) that require a very high degree of sophistication to use. These can be extremely expensive (even though the scripts are open source), error prone, and can hinder operations.
Project Consolidation or Customized Templates:
Analyze whether you want to consolidate multiple projects into one project while migrating, or whether the templates need to be customized. If that is the need, evaluate whether the migration tool can support such configuration with ease and has a UI to do so. Manually configuring mappings for each project can be tedious and highly error prone.
Migration Time:
Many migration tools migrate projects one by one, consuming a lot of effort and time to migrate data spread across multiple projects. Understand how many projects can be migrated in parallel to speed up the migration.
Reverse Synchronization:
Do you want to keep the data in sync between Services and Server for some time post-migration? Will data be integrated bidirectionally or unidirectionally? Answer these questions and then evaluate whether the migration solution will meet those requirements.
Commercial Support:
Migration can be tricky and time-consuming, as, over time, different teams have created all the odd stuff in there. Better to have a team of experts do the migration for you while you focus on defining requirements and validating the completeness of migration.
I hope this helps. Full disclosure: I work for OpsHub, where we are experts at data migration; using the OpsHub Azure DevOps Migrator, we have migrated multiple organizations to and from Azure DevOps Services and Server over the last decade. Contact us if you need more help.
I am exploring the idea of hosting my CD environment in Windows Azure. I read that the current release of the DMS does not play ball in the cloud, however, no detailed explanation was given. Apparently Azure support is planned for second quarter 2013, but in the meantime, I'd like to know why it doesn't work so that I can explore potential workarounds.
For instance, is the issue related to sticky sessions (or lack thereof)? Or, is it related to the DMS compatibility with SQL Azure?
It will be an issue with sticky sessions. Since the DMS does all of its work server-side, it needs proper session state management to work. You could do this on Azure using IaaS, but then you would be responsible for installing and maintaining the Sitecore deployment on the OS rather than using the built-in deployment features.
See this post by Jakob Leander for more info: http://www.sitecore.net/Community/Technical-Blogs/Jakob-Leander/Posts/2013/01/Why-we-love-Sitecore-on-Azure.aspx
I am conducting some research on emerging web technologies and have created a very simple Azure website which makes use of WebSockets and MongoDB as the database. I have managed to get all the components working together and now must perform load testing on the application.
The main criterion is the maximum user load that the app can support. At the moment there is one web role instance, so I would probably need to test the maximum user load for that instance, then try with two instances, and so on.
I found some solutions online such as Loadstorm, however I cannot afford to pay to use these services so I need to be able to do this from my own development machine OR from another cloud service.
I have come across Visual Studio Load Tests and they seem quite useful, however it seems they require VS Ultimate and an active MSDN subscription - the prerequisites are listed here. Also, from this video which shows the basics of load tests, it seems like these load tests are created completely separately from the actual web project, so does that mean I can only see metrics related to the user? i.e. I cannot see the amount of RAM being used, processor, etc.
Any suggestions?
You might create a Linux virtual machine in Azure itself or another hosting provider and use ApacheBench (ab) or JMeter to do simple load testing on your application. Be aware that in such a setup your benchmark servers may be a bottleneck themselves.
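If you want a quick scripted smoke test before setting up ab or JMeter, a few lines of Python can already give rough throughput and latency numbers. This is only a minimal sketch (the URL, concurrency, and request counts are placeholders, and it exercises plain HTTP GETs rather than your WebSocket path), and the same caveat applies: the machine running it can easily become the bottleneck.

    # Minimal concurrent load-test sketch; not a substitute for ab or JMeter.
    # TARGET_URL, WORKERS and REQUESTS are placeholder values to tune.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    TARGET_URL = "http://your-app.example.net/"
    WORKERS = 20       # concurrent "users"
    REQUESTS = 500     # total requests to send

    def hit(_):
        start = time.time()
        resp = requests.get(TARGET_URL, timeout=30)
        return time.time() - start, resp.status_code

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        t0 = time.time()
        results = list(pool.map(hit, range(REQUESTS)))
        elapsed = time.time() - t0

    latencies = sorted(r[0] for r in results)
    errors = sum(1 for _, status in results if status >= 400)
    print(f"{REQUESTS} requests in {elapsed:.1f}s -> {REQUESTS / elapsed:.1f} req/s")
    print(f"median latency {latencies[len(latencies) // 2] * 1000:.0f} ms, {errors} errors")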
Another approach is to use online load testing services which allow some free usage, such as:
loader.io, by SendGrid Labs
LoadStorm
Blazemeter
Blitz
Neotys
Loadimpact
For load testing, LoadStorm is very reasonably priced, especially compared to on-premises software (and has a free tier with up to 25 virtual clients). You can install tools such as JMeter yourself, but you'll still need machines (or VMs) to host and run them, and you need to make sure that the load-generator machines aren't the bottleneck in your tests.
When you run your tests, you may want to consider separating your web tier from MongoDB. MongoDB will consume as much memory as possible (as that's what gives MongoDB its speed). In a real-world scenario, you'll likely have MongoDB in its own environment. So for your tests, I'd consider offloading MongoDB to its own instance(s), and 10gen has a Worker Role setup that's fairly straightforward to install.
Also remember that NIC bandwidth is 100Mbps per core, which could be a limiting factor on your tests, depending on how much load you're driving.
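To put that in perspective: 100 Mbps is roughly 12.5 MB/s, so a single-core instance serving, say, 50 KB responses (a hypothetical payload size) would saturate its NIC at around 250 responses per second, regardless of how much CPU headroom is left.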
One alternative to self-hosting MongoDB: Offload MongoDB to a hoster such as MongoLab. This will allow you to test the capacity of your web app without worrying about the details around MongoDB setup, configuration, optimization, etc. Currently MongoLab offers their free tier hosted in Azure, US West and US East data centers.
Editing my response, I didn't read the question carefully.
Check out this thread for various tools and links:
Open source Tool for Stress, Load and Performance testing
If you are interested in capturing performance counters from the application under test, you can look at some of the latest features added to the Visual Studio cloud-based load testing service.
http://blogs.msdn.com/b/visualstudioalm/archive/2014/04/07/get-application-performance-data-during-load-runs-with-visual-studio-online.aspx
To get more info on Visual Studio Cloud Load Testing solution - https://www.visualstudio.com/features/vso-cloud-load-testing-vs
I'm writing an application in Node.js for a spare-time, bootstrap project. I have a Windows background and Windows Azure with three-month free trial currently seems like the simplest way to develop, deploy and host the project.
However Windows Azure appears to get expensive after the free trial expires, and in any case I'd like the option to host on non-MS platforms, so I have a couple of questions:
I can see from the tutorial that I need some Windows-specific code to import the port number at which the app should listen - are there many more examples of Windows or Azure specific code requirements further down the line?
I'd like to take a NoSQL approach to data storage since I'm more interested in flexibility and performance than in referential integrity or structural consistency - would it be difficult to wrap Azure Tables in a data access layer that would be reasonably portable to other NoSQL databases such as MongoDB or the various cloud offerings?
Finally, the catch-all question - is there anything else I should be looking out for?
Tackling your second question: there are modules in the NPM registry that can help you here.
Firstly, Microsoft has recently released the Azure SDK for Node as an NPM module. This has a rich API that will help you interface with Azure Tables.
There are also NoSQL clients available in the NPM registry for most solutions (including MongoDB).
If you keep your data access simple, you should be able to make use of the various NoSQL clients that are available and create a nice little module layer that sits above all the ones you need to support.
You could even create a public github repository and submit your hard work into the NPM registry for other people to help you develop.
I have built an app on Windows Azure's Node.js support as well, and there is virtually no lock-in if you stick to npm modules and open platforms.
You should also check into Microsoft's BizSpark program - you get two years of two reserved instances for free, plus storage. It's a great program.