Azure WebJobs vs SSIS packages

I have been tasked with creating a scheduled job that first calls an API, converts the response to a new format, and then passes that data to another API. It doesn't sound like there is any logic in between.
The company I work for has a lot of SSIS packages doing a variety of things, but also a healthy Azure platform with a few WebJobs running. Several developers on my team have expressed a dislike for SSIS packages, so I would like to implement this in Azure, but I want to make sure that is the most reasonable thing to do.
What I am asking for is a pro/con list showing where each option is strong or weak. A good answer will help readers decide whether their specific situation is best solved with an SSIS package or an Azure WebJob, assuming the needed environment is already set up for either.
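For concreteness, the WebJob flavor of this task can be a plain console app deployed as a scheduled (triggered) WebJob. Here is a minimal sketch; both endpoint URLs and the Transform step are hypothetical placeholders:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// A plain console app deployed as a scheduled (triggered) WebJob.
// Both endpoint URLs and the Transform step are hypothetical placeholders.
internal static class Program
{
    private static readonly HttpClient Http = new HttpClient();

    private static async Task Main()
    {
        // 1. Call the source API.
        string source = await Http.GetStringAsync("https://source.example.com/api/data");

        // 2. Convert the response to the new format.
        string converted = Transform(source);

        // 3. Pass the converted payload to the destination API.
        HttpResponseMessage response = await Http.PostAsync(
            "https://destination.example.com/api/data",
            new StringContent(converted, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
    }

    // Placeholder: reshape the source payload into the destination contract here.
    private static string Transform(string payload) => payload;
}
```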

Related

How can I migrate from Azure DevOps Services to Azure DevOps Server

I have my project collection running in Azure DevOps Online (Services), and I would like to migrate it to an on-premises Azure DevOps Server.
Help me out with the incompatibility issues I will face and how to overcome them.
What are the options to migrate from Azure DevOps Online (Services) to Azure DevOps Server (on-premises)?
Is there any service available in Azure to achieve the above migration without any data loss?
Must I use a third-party tool to do the migration without any data loss?
Help me out with the downtime required for a 100 GB project collection with multiple repositories.
Project collection size: 100 GB
One of the previous answers (since deleted?) captured most of the critical points, including that no tool can migrate 100% of the data with zero loss: some automatically generated and configuration values, such as work item IDs, will inherently differ between two instances. Therefore, the only way to get a zero-data-loss migration is to lift and shift the complete project collection image from Azure DevOps Services to Azure DevOps Server, which is not supported by the official Azure DevOps migration tool. Given that, the only way left to migrate data is using the Azure DevOps APIs.
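To illustrate the API route, here is a minimal sketch (not any specific vendor's implementation) of reading a work item and its revision history out of Services with the .NET client libraries; the organization URL, personal access token, and work item ID are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;

// Reads one work item and its revision history from Azure DevOps Services.
// The organization URL, PAT, and work item ID are placeholders; a real
// migration would also walk attachments, links, test results, and so on.
internal static class MigrationSketch
{
    private static async Task Main()
    {
        var connection = new VssConnection(
            new Uri("https://dev.azure.com/your-org"),
            new VssBasicCredential(string.Empty, "your-personal-access-token"));

        var client = connection.GetClient<WorkItemTrackingHttpClient>();

        var workItem = await client.GetWorkItemAsync(id: 1);
        var revisions = await client.GetRevisionsAsync(id: 1);

        Console.WriteLine($"Work item {workItem.Id} has {revisions.Count} revisions.");
    }
}
```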
So, the best approach is to understand what data cannot be migrated by the migration tools you are evaluating, and then decide what works best for you. Also, it will not be a black-and-white choice when it comes to selecting a migration solution. First, define the must-haves you expect from the migration, and then evaluate the different migrators available in the market. Here are a few common selection criteria:
Data Loss:
Understand what data can and cannot be migrated by the migration solution. Ideally, the tool should be able to migrate work items (along with history, attachments, mentions, and inline images), test management including test results, source code, dashboards, and areas and iterations. For builds and pipelines, you can use the native export/import feature, as they require manual changes to tweak the connections anyway.
Zero Downtime:
Downtime adds operational costs and impacts development operations, as teams cannot use the Azure DevOps tools. Verify thoroughly that there is no scenario in which downtime will be required for any type of data.
Ease of Use:
Some tools are a collection of unsupported scripts (Naked Agility) that require a very high degree of sophistication to use. These can be extremely expensive (even though the scripts are open source), error-prone, and can hinder operations.
Project Consolidation or Customized Templates:
Analyze whether you want to consolidate multiple projects into one project while migrating, or whether the templates need to be customized. If that is the need, evaluate whether the migration tool can support such configuration with ease and has a UI to do so. Manually configuring mappings for each project can be tedious and highly error-prone.
Migration Time:
Many migration tools migrate projects one by one, consuming a lot of effort and time when data is spread across multiple projects. Understand how many projects can be migrated in parallel to speed up the migration.
Reverse Synchronization:
Do you want to keep the data in sync between Services and Server for some time post-migration? Will data be integrated bidirectionally or unidirectionally? Answer these questions, and then evaluate whether the migration solution will meet those requirements.
Commercial Support:
Migration can be tricky and time-consuming, as, over time, different teams have created all the odd stuff in there. Better to have a team of experts do the migration for you while you focus on defining requirements and validating the completeness of migration.
I hope this helps. Full disclosure: I work for OpsHub, where we are experts at data migration and, using the OpsHub Azure DevOps Migrator, have migrated multiple organizations to and from Azure DevOps Services and Server over the last decade. Contact us if you need more help.

Is Pulumi that magical when compared to using Azure .NET SDK?

I'm in a dilemma here about which SE site to ask this question on, so please help me out if it should be somewhere else.
I've been looking into Infrastructure as Code solutions.
I didn't like Terraform much. The lack of IntelliSense makes discoverability harder than programmers have come to expect.
I've been considering ARM templates. I like that the templates are made available as we create resources in the portal, but they seem way less readable and harder to maintain afterwards.
Then I found Pulumi and love their idea compared to Terraform. The way I see it, their approach is also declarative like the options above, but we can use decent programming languages to get the job done.
For loops are a must.
Cool, I like that! But since we like using C# (or other alternatives), why don't we just use SDKs to manage our infrastructure as code?
Pulumi has compared itself with cloud SDKs, positioning its solution as much safer and advocating that, if we just used a cloud SDK ourselves, our solution wouldn't be as reliable.
To what extent is this really true, I wonder?
Last year, I wrote some libraries that used Azure Service Bus queues/topics. There were several integration tests that would run in parallel, and I needed to isolate them by creating new queues/topics, which I did using Microsoft.Azure.ServiceBus.Management.ManagementClient.
It really didn't seem like I had to learn anything at all.
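For reference, the isolation setup was roughly the following (a sketch from memory; the connection string and naming scheme are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus.Management;

// Create an isolated queue per test run, then clean it up afterwards.
// The connection string and naming scheme are placeholders.
internal static class QueueIsolation
{
    private static async Task Main()
    {
        var client = new ManagementClient("<service-bus-connection-string>");

        string queueName = $"it-{Guid.NewGuid():N}";
        await client.CreateQueueAsync(queueName);

        // ... run the integration test against queueName ...

        await client.DeleteQueueAsync(queueName);
        await client.CloseAsync();
    }
}
```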
Getting to the point now, and not dismissing Pulumi's innovation, which I think is great:
Will Pulumi really add that much benefit compared to using the Azure SDKs?
What's been your experience with it?
A Pulumi developer here, so I'm definitely biased. I suspect the SO community may find your question violating some of the guidance, but I hope my answer survives :)
One upside of using Pulumi is that you get access to multiple providers with consistent developer experience. You may be using exclusively Azure, but you might at some point start combining it with things like building and publishing Docker images, deploying Kubernetes applications, or Datadog dashboards. All can be done from the same program or solution.
Now, the biggest difference with imperative SDKs is the notion of desired-state configuration. A Pulumi program describes the graph of resources and dependencies between them (what), not the steps to provision them (how). When you have an environment that lives for months and years, there's a big difference between evolving a single definition with baby steps and applying incremental changes (Pulumi) and writing a bunch of update scripts/programs to bring each environment to the new state (SDK).
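To make the what-vs-how point concrete, here is a minimal Pulumi C# sketch using the Azure Native provider (resource names are illustrative). The program only declares the desired end state; every `pulumi up` diffs that declaration against the stack's current state:

```csharp
using System.Threading.Tasks;
using Pulumi;
using Pulumi.AzureNative.Resources;
using Pulumi.AzureNative.Storage;

// A declarative Pulumi program: it describes *what* should exist, not the
// steps to create it. Repeated runs of `pulumi up` converge the environment
// to this state rather than re-executing imperative provisioning scripts.
class MyStack : Stack
{
    public MyStack()
    {
        var resourceGroup = new ResourceGroup("app-rg");

        var storage = new StorageAccount("appsa", new StorageAccountArgs
        {
            // Referencing resourceGroup.Name records a dependency edge
            // in the resource graph.
            ResourceGroupName = resourceGroup.Name,
            Sku = new Pulumi.AzureNative.Storage.Inputs.SkuArgs
            {
                Name = SkuName.Standard_LRS,
            },
            Kind = Kind.StorageV2,
        });
    }
}

class Program
{
    static Task<int> Main() => Deployment.RunAsync<MyStack>();
}
```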
How do you maintain multiple environments that may be similar but still different? (production vs staging vs test vs dev) How do you make sure that your short-lived infra that you created for nightly tests reflects the reality of production? What happens when an SDK program fails in the middle - can you retry running it again or will it create duplicate resources/fail with another error? How do you get a simple overview of changes over time in git? Concurrency control? Change history?
All the things above are baked into Pulumi and require manual consideration with a cloud SDK.

Save-AzureVMImage Generalized vs. Specialized

I've looked extensively at the Azure documentation regarding saving VM images. I understand there are two types available, Generalized and Specialized. I've read explanations of what the differences are. However, these appear to be written mostly for those very familiar with Azure concepts or IT in general. I'm more on the development side.
To my problem... I have an Azure-hosted image, which I've used as a build agent for TeamCity. Our application isn't vanilla in the sense that we can't just install Visual Studio and be done (I wish). We have about 20 or so third-party dependencies to install on the main OS disk, with lots of configuration required (system variables, etc.) to get it all to work.
So finally, to my question: which is the right version to use, Specialized or Generalized? I want to spawn 4 copies of this server in the same cloud service.
Any advice is greatly appreciated.
Generalized, since you want them in the same cloud service. Generalized images start up differently: they configure themselves to take on new identities during their first startup.

Is there anything like ServerSpec for Azure

I've been bitten by the test-driven infrastructure bug. My current project is using Azure, including SQL Azure, Azure tables, cloud services, and mobile services. Configuring an entire environment is somewhat complex. Now I'm looking for a testing framework that I can use to verify that the environment is configured correctly. Something like "Confirm that there's a mobile service endpoint named foo, that it has APNS and GCM endpoints, and that there is a Google API key and Apple push certificate associated." There is more, but that is complex enough that existing tools don't seem to cover it, yet simple enough to describe in a single sentence.
Because of the number of products, I have to use both the PowerShell module and the cross-platform CLI to script the setup. The cross-platform CLI looks like the easiest way to get data out (it uses Node and can easily dump JSON data), but I'm at a loss as to how to even start with testing JSON dumps from a Node module that was never really intended to be used as a module.
The PowerShell module is buggy and doesn't have any ability to read mobile services information.
There is a Ruby gem for managing Azure, but it's very limited, so my hope of working entirely in Ruby was dashed. There too, I'm not sure how one would use ServerSpec to test a remote node without actually running anything on that node.
I'd like to stay within the realm of something that would be understandable by another Azure developer (e.g. JavaScript, PowerShell, and potentially Ruby) and not have to start from scratch with something like Erlang or Brainf**k.
Corey - this is a big area of ongoing build-out on Azure right now, which is why you are finding limited support. Resource Manager is aimed at driving programmable infrastructure (http://azure.microsoft.com/en-us/documentation/articles/xplat-cli-azure-resource-manager/) but doesn't yet encapsulate all Azure service offerings.
There are also the Management Libraries (for .NET) - http://www.bradygaster.com/post/getting-started-with-the-windows-azure-management-libraries - or, at the most basic level, there is the pure REST API that you can code directly against if there are bits missing from the above (which is likely) - http://msdn.microsoft.com/en-us/library/azure/ee460799.aspx
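In the meantime, one pragmatic pattern is to shell out to the cross-platform CLI from an ordinary test, parse the JSON it emits, and assert on it. A rough sketch follows; the exact CLI command and the JSON properties asserted on are assumptions you would adapt to the real output:

```csharp
using System;
using System.Diagnostics;
using Newtonsoft.Json.Linq;

// Shell out to the xplat CLI, capture its JSON output, and assert on it.
// The exact command ("mobile show foo --json") and the JSON shape are
// assumptions -- adapt them to what the CLI actually emits.
public static class EnvironmentChecks
{
    private static string RunCli(string arguments)
    {
        var startInfo = new ProcessStartInfo("azure", arguments)
        {
            RedirectStandardOutput = true,
            UseShellExecute = false,
        };
        using (var process = Process.Start(startInfo))
        {
            string output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            return output;
        }
    }

    public static void AssertMobileServiceConfigured()
    {
        JObject service = JObject.Parse(RunCli("mobile show foo --json"));

        if ((string)service["name"] != "foo")
            throw new Exception("Expected a mobile service named 'foo'.");
    }
}
```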

A little confused about Azure

I've been reading about Azure's storage system, worker roles, and web roles.
Do you HAVE to develop an application specifically for Azure with all this? It looks like you can remote desktop into Azure and set up an application in IIS like you normally can on a Windows server, right? I'm a little confused because the docs read like you need to develop an Azure-specific application.
Looking to move to the cloud, but I don't want to have to rework my application for it.
Thanks for any clarification.
Changes to the ASP.NET application are minimal (for the most part, the web application will just work in Azure).
But you don't remote connect to deploy. You actually build a package (zip) with a manifest (xml) which has information about how to deploy your app, and you give it to Azure. In turn, Azure will take care of allocating servers and deploying your app.
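For a feel of what that manifest looks like, here is a trimmed ServiceDefinition.csdef declaring a single web role (the role and endpoint names are illustrative):

```xml
<!-- ServiceDefinition.csdef: tells Azure which roles to provision.
     Role and endpoint names here are illustrative. -->
<ServiceDefinition name="MyApp"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1" vmsize="Small">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="HttpIn" endpointName="HttpIn" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```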
There are several elements to think about here -
Code-wise - to a large degree this is 'just' .NET running on IIS and Windows, so everything is very familiar and all the past learnings, best practices, etc. apply.
On top of that, you may want to leverage some Azure-specific capabilities - for example table storage, or queues, or interacting with your deployment - for which you might need to learn a few more APIs, but these aren't big, are well thought out, and are kept quite simple, so there's not a big learning curve. Good architecture, of course, would look to abstract these away to prevent/reduce lock-in, but that's a design choice.
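As an example of how small those APIs are, creating and writing to a table with the classic storage client library is only a few lines (a sketch assuming the Microsoft.WindowsAzure.Storage package; the connection string is a placeholder):

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Classic storage client library: the table API surface is small.
// The connection string is a placeholder.
public static class TableExample
{
    public static void Run()
    {
        var account = CloudStorageAccount.Parse("<storage-connection-string>");
        CloudTableClient tableClient = account.CreateCloudTableClient();

        CloudTable table = tableClient.GetTableReference("orders");
        table.CreateIfNotExists();

        // Entities are just a partition key, a row key, and properties.
        var entity = new DynamicTableEntity("customer-1", "order-42");
        entity.Properties["Total"] = new EntityProperty(99.95);
        table.Execute(TableOperation.InsertOrReplace(entity));
    }
}
```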
Outside the code, however, there's a bit more to think about -
You'd want to think about your deployment - because RDP-ing into a machine and making changes that way takes away many of the benefits of PaaS - namely the ability of the platform to 'self-heal' by automatically re-deploying your application should a server fail.
You'd also want to think about monitoring, which needs to be done slightly differently.
Last - the cloud enables different scenarios and provides a scale-out model rather than a scale-up model, which you might want to take advantage of, but it might require doing things a little bit differently.
So - bottom line - yes, you could probably get an application into Azure very quickly without really having to learn much of anything, but to do things properly, and to really gain from the platform, you'd want to learn a bit more about it. The good thing is, it's not much, and it all feels very familiar - just another 'framework' for .NET (and Java, among others...).
You can just build a pretty vanilla web application with a SQL backend and get it to work on Azure with minimal Azure dependencies. This application will then be pretty portable to another server or cloud platform.
But as you have seen, there are a number of Azure-specific features. These are generally optional and you can do without them, although they are useful when building highly scalable sites.
Azure is a platform, so under normal circumstances you should not need to remote desktop in fiddle with stuff. RDP is really just for use in desperate debugging situations.
