I've been reading about Azure's storage system, and about worker roles and web roles.
Do you HAVE to develop an application specifically for Azure with this? It looks like you can remote desktop into Azure and set up an application in IIS like you normally can on a Windows server, right? I'm a little confused because the docs read as though you need to develop an Azure-specific application.
Looking to move to the cloud, but I don't want to have to rework my application for it.
Thanks for any clarification.
Changes to the ASP.NET application are minimal (for the most part the web application will just work in Azure).
But you don't remote-connect to deploy. You actually build a package (zip) with a manifest (XML) that describes how to deploy your app, and you hand it to Azure. In turn, Azure will take care of allocating servers and deploying your app.
There are several elements to think about here -
Code-wise - to a large degree this is 'just' .NET running on IIS and Windows, so everything is very familiar and all the past learnings, best practices, etc. apply.
On top of that you may want to leverage some Azure-specific capabilities - for example table storage, or queues, or interacting with your deployment - for which you might need to learn a few more APIs, but these aren't big, and are well thought out and kept quite simple, so there's not a big learning curve. Good architecture, of course, would look to abstract these away to prevent/reduce lock-in, but that's a design choice.
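For example, dropping a message on an Azure queue from your web app is only a few lines with the storage client library. This is just a minimal sketch - the connection string and queue name are placeholders, and the exact method names vary slightly between storage SDK versions:

```csharp
using Microsoft.WindowsAzure.Storage;        // classic storage client library
using Microsoft.WindowsAzure.Storage.Queue;

class QueueExample
{
    static void Main()
    {
        // Placeholder connection string - in a real role you'd read this
        // from the service configuration or app settings.
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var client = account.CreateCloudQueueClient();

        // Create the queue if it doesn't exist yet and drop a message on it.
        var queue = client.GetQueueReference("orders");
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage("hello from the web role"));
    }
}
```

Wrapping calls like this behind your own interface is one simple way to keep that lock-in surface small.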
Outside the code, however, there's a bit more to think about -
You'll want to think about your deployment - because RDP-ing into a machine and making changes that way takes away many of the benefits of PaaS - namely the ability of the platform to 'self-heal' by automatically re-deploying your application should a server fail.
You'll also want to think about monitoring - which would need to be done slightly differently.
Last - the cloud enables different scenarios, and provides a scale-out model rather than a scale-up model, which you might want to take advantage of, but it might require doing things a little bit differently.
So - bottom line - yes - you could probably get an application into Azure very quickly, without really having to learn much or anything, but to do things properly, and to really gain from the platform, you'll want to learn a bit more about it. The good thing is - it's not much, and it all feels very familiar, just another 'framework' for .NET (and Java, amongst others....)
You can just build a pretty vanilla web application with a SQL backend and get it to work on Azure with minimal Azure dependencies. This application will then be pretty portable to another server or cloud platform.
But as you have seen, there are a number of Azure-specific features. These are generally optional and you can do without them, although they are useful in building highly scalable sites.
Azure is a platform, so under normal circumstances you should not need to remote desktop in and fiddle with stuff. RDP is really just for use in desperate debugging situations.
I'm facing a dilemma here about which SE site to ask this question on, so please help me out if it should be somewhere else.
I've been looking into Infrastructure as Code solutions.
Didn't like Terraform too much. The lack of IntelliSense makes discoverability harder than programmers have been used to.
I've been considering ARM templates. I like that the templates are made available as we create resources in the portal, but they seem way less readable and harder to maintain afterwards.
Then I found Pulumi and love their idea compared to Terraform. The way I see it, their approach is also declarative like the options above, but we can use decent programming languages to get the job done.
The for loops alone are a must.
Cool, I like that! But since we like using C# (or other alternatives), then why don't we just use the SDKs to manage our infrastructure as code?
Pulumi has compared themselves with cloud SDKs, positioning their solution as much safer and arguing that, if we just use a cloud SDK ourselves, our solution wouldn't be that reliable.
To what extent is this really true, I wonder?
Last year, I wrote some libraries that used Azure Service Bus queues/topics. There were several integration tests that would run in parallel, and I needed to isolate them by creating new queues/topics, so I used Microsoft.Azure.ServiceBus.Management.ManagementClient to do this.
It really didn't seem like I had to learn anything at all.
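Roughly, the per-test setup looked something like this (simplified; the fixture and queue naming here are just illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus.Management;

// Creates an isolated queue per test run and removes it afterwards.
class TemporaryQueue : IAsyncDisposable
{
    private readonly ManagementClient _client;
    public string QueueName { get; } = $"it-{Guid.NewGuid():N}";

    public TemporaryQueue(string connectionString) =>
        _client = new ManagementClient(connectionString);

    public Task CreateAsync() => _client.CreateQueueAsync(QueueName);

    public async ValueTask DisposeAsync()
    {
        await _client.DeleteQueueAsync(QueueName);
        await _client.CloseAsync();
    }
}
```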
Getting to the point now. Not dismissing Pulumi's innovation, which I think is great:
Will Pulumi really add that much benefit compared to using the Azure SDKs?
What's been your experience with it?
A Pulumi developer here, so I'm definitely biased. I suspect the SO community may find your question violating some of the guidance, but I hope my answer survives :)
One upside of using Pulumi is that you get access to multiple providers with consistent developer experience. You may be using exclusively Azure, but you might at some point start combining it with things like building and publishing Docker images, deploying Kubernetes applications, or Datadog dashboards. All can be done from the same program or solution.
Now, the biggest difference with imperative SDKs is the notion of desired-state configuration. A Pulumi program describes the graph of resources and dependencies between them (what), not the steps to provision them (how). When you have an environment that lives for months and years, there's a big difference between evolving a single definition with baby steps and applying incremental changes (Pulumi) and writing a bunch of update scripts/programs to bring each environment to the new state (SDK).
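To make the "what, not how" point concrete, here's a rough sketch of a Pulumi C# program (assuming the Pulumi.AzureNative packages; the resource names are purely illustrative). You declare the resources - including inside a plain for loop - and the engine computes the diff against the current state on every update:

```csharp
using Pulumi;
using Pulumi.AzureNative.Resources;
using Pulumi.AzureNative.Storage;

class MyStack : Stack
{
    public MyStack()
    {
        // One resource group plus a handful of storage accounts,
        // declared as desired state rather than as imperative steps.
        var resourceGroup = new ResourceGroup("rg-demo");

        for (var i = 0; i < 3; i++)
        {
            new StorageAccount($"sa{i}", new StorageAccountArgs
            {
                ResourceGroupName = resourceGroup.Name,
                Sku = new Pulumi.AzureNative.Storage.Inputs.SkuArgs
                {
                    Name = SkuName.Standard_LRS,
                },
                Kind = Kind.StorageV2,
            });
        }
    }
}

class Program
{
    static System.Threading.Tasks.Task<int> Main() => Deployment.RunAsync<MyStack>();
}
```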
How do you maintain multiple environments that may be similar but still different? (production vs staging vs test vs dev) How do you make sure that your short-lived infra that you created for nightly tests reflects the reality of production? What happens when an SDK program fails in the middle - can you retry running it again or will it create duplicate resources/fail with another error? How do you get a simple overview of changes over time in git? Concurrency control? Change history?
All the things above are baked into Pulumi and require manual consideration with a cloud SDK.
Currently, our product is a web application with SQL Server as DBMS, ASP.NET backend, and classic HTML/JavaScript/CSS frontend. The product is actively developed and each month we have to deploy a new version of it to production.
During this deployment, we update all the components listed above (apply some SQL scripts, update binaries and client files) but we deploy only the delta (the set of files that have changed since the last release). This has some benefits, e.g. we do not reset custom data/configs/client adjustments.
Now we are going to move into clouds like Azure, AWS, etc., adjust the product architecture to be compliant with Docker/Kubernetes, and provide the product as SaaS.
And now the question itself: "Which deployment approach is recommended in the cloud?" Can we keep applying only the delta, or do we have to reorganize the process to always deploy from scratch?
If there are some Internet resources I have missed, please share.
This question is extremely broad but maybe some clarification could steer you in the right direction anyway:
Source code deployments (like applying deltas) and container deployments are two very different directions, in the sense that the tooling you invest in during the entire SDLC CAN differ substantially. Some testing pipelines/products focus heavily (or exclusively) on working with one or the other. There will be tools that can handle both, of course.
They also differ in the problems they're attempting to solve and come with some pros and cons:
Source Code Deployments/Apply Diffs:
Good for small teams and quick deployments as they're simple to understand and set up.
Starts to introduce risk when you need to upgrade the host OS or application dependencies.
Starts to introduce risk when the hosts in production begin to drift (have more differing files than expected) more dramatically over time.
Slack has a good write-up of their experience here.
Container deployments
Provides isolation between the application (developer space) and the host OS (sysadmin/ops space). This usually means each side can work independently of the other.
Gives an "artifact" that won't change between deployments, i.e. the container tagged v1 will always be the same unless you do something really funky. You can't really guarantee this with source code deployments.
The practice of isolating stateless components makes autoscaling those components very easy, and you can eventually spend more time on the harder ones (usually stateful).
Introduces a new abstraction with new concerns that your team will have to mature into. Testing pipelines, dev tooling, monitoring/logging architectures might all need to be adjusted over time, and that comes with cost and risk.
Stateful containers are hardly a solved problem (i.e. shoving an existing database into a container can be a surprising challenge).
In order to work with Kubernetes, you need to have a containerized application. That doesn't mean you need to containerize your entire product overnight. Splitting out the front end to deploy with CloudFront/S3 and containerizing a stateless app will get your feet wet.
Some books that talk about DevOps philosophies (in which this transition plays a part):
The DevOps Handbook
Accelerate
Effective DevOps
SRE book
I've been bitten by the test-driven infrastructure bug. My current project is using Azure, including SQL Azure, Azure tables, cloud services, and mobile services. Configuring an entire environment is somewhat complex. Now I'm looking for a testing framework that I can use to verify that the environment is configured correctly. Something like "Confirm that there's a mobile service endpoint named foo, that it has APNS and GCM endpoints, and that there is a Google API key and Apple push certificate associated." There is more, but that is complex enough that existing tools don't seem to cover it, yet simple enough to describe in a single sentence.
Because of the number of products, I have to use both the PowerShell module and the cross-platform CLI to script the setup. The cross-platform CLI looks like the easiest way to get data out (it uses Node and can easily dump JSON data), but I'm at a loss as to how to even start with testing JSON dumps from a Node module that was never really intended to be used as a module.
The PowerShell module is buggy and doesn't have any ability to read mobile services information.
There is a Ruby gem for managing Azure, but it's very limited, so my hope of being able to work entirely in Ruby was dashed. There too, I'm not sure how one would use ServerSpec to test a remote node without actually running anything on the remote node.
I'd like to stay within the realm of something that would be understandable by another Azure developer (e.g. JavaScript, PowerShell, and potentially Ruby) and not have to start from scratch with something like Erlang or Brainf**k.
Corey - this is a big area of ongoing build-out on Azure right now, which is why you are finding limited support. Resource Manager is aimed at driving programmable infrastructure (http://azure.microsoft.com/en-us/documentation/articles/xplat-cli-azure-resource-manager/) but doesn't yet encapsulate all Azure service offerings.
There are also the Management Libraries (for .NET) - http://www.bradygaster.com/post/getting-started-with-the-windows-azure-management-libraries - or, at the most basic level, there is the pure REST API that you can code directly against if there are bits missing from the above (which is likely) - http://msdn.microsoft.com/en-us/library/azure/ee460799.aspx
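If you do end up going against the raw REST API, the calls are plain HTTPS with a management certificate, so they're easy to wrap in whatever test framework you like. A minimal C# sketch (the subscription id, certificate path, and x-ms-version string are placeholders you'd substitute; note HttpClientHandler.ClientCertificates is available on modern .NET, while older .NET Framework versions would use WebRequestHandler instead):

```csharp
using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;

class ListHostedServices
{
    static async Task Main()
    {
        // Placeholders: your subscription id and the management certificate
        // (.pfx) that has been uploaded to the subscription.
        var subscriptionId = "<subscription-id>";
        var cert = new X509Certificate2("management.pfx", "<password>");

        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(cert);

        using (var client = new HttpClient(handler))
        {
            // The Service Management API requires an x-ms-version header;
            // check the docs for the version string each operation expects.
            client.DefaultRequestHeaders.Add("x-ms-version", "2014-06-01");

            var url = $"https://management.core.windows.net/{subscriptionId}/services/hostedservices";
            var xml = await client.GetStringAsync(url);

            // The response is XML describing the hosted services; from here
            // you can parse it and assert on it in your test framework.
            Console.WriteLine(xml);
        }
    }
}
```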
I'm not very skilled with Azure, and googling hasn't given me any more answers on this topic.
I have an ASP.NET web page that uses the R (D)COM interface to do some complex calculations. I'm evaluating moving everything to the Azure platform.
I saw that it's easy to move web pages to Azure; however, since I need RSERVER installed on the machine, I need to move everything.
I was thinking of creating a VHD image and publishing the entire image to Azure, but I'm not sure this is the best solution.
I am not familiar with RSERVER, but here are some guidelines you may follow:
By default all Windows Azure servers run in 64-bit mode. This is important for the COM interfaces.
You may run any executable as a Startup Task in a regular Windows Azure Web/Worker role. Frankly, you can create very complex startup scripts. You may use the Windows Azure Bootstrapper to ease the solution. The trick is that RSERVER must support an unattended/silent install.
I would stick to the least friction solution - which would be using a normal Windows Azure Web Role and a Startup Task.
If that is not working for you, you may consider preparing a VHD image and use the Windows Azure VM Role.
I've written a very similar answer to what I'd write to you here. The thing is, the Azure VM role is technically a good solution, depending on what you need to do with it. You can generally create really good solutions with a fairly minimal amount of effort to let legacy code work with Azure, despite all the shortcomings of the VM role.
In general, if you have a lot of custom installation you need to do, create the Azure VM role, absolutely. But make sure you get the communication with it right - it's not going to behave exactly like a web or worker role. Although, if I remember correctly, you still have endpoints and configuration there, so you can expose your programming to the outside. Personally, however, my architectures are much more queue-based (as described in the answer highlighted above), so I'd opt for writing a bridge program in the VM.
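As a rough illustration of that bridge idea (just a sketch - the queue names are made up, and RunCalculation stands in for whatever call you make into the locally installed R/COM component), the program in the VM would simply poll a request queue and push results back:

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class Bridge
{
    static void Main()
    {
        // Placeholder connection string - use your storage account in practice.
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var client = account.CreateCloudQueueClient();

        var requests = client.GetQueueReference("calc-requests");
        var results = client.GetQueueReference("calc-results");
        requests.CreateIfNotExists();
        results.CreateIfNotExists();

        while (true)
        {
            var message = requests.GetMessage();
            if (message == null) { Thread.Sleep(1000); continue; }

            // Stand-in for the call into the locally installed R/COM component.
            var output = RunCalculation(message.AsString);

            results.AddMessage(new CloudQueueMessage(output));
            requests.DeleteMessage(message);
        }
    }

    static string RunCalculation(string input) => input; // replace with the real R call
}
```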
I believe that the MVC Mini Profiler is a bit of a 'God-send'.
I have incorporated it in a new MVC project which is targeting the Azure platform.
My question is - how to handle profiling across server (role instance) barriers?
Is this is even possible?
I don't understand why you would need to profile these apps any differently. You want to profile how your app behaves on the production server - go ahead and do it.
A single request will still be executed on a single instance, and you'll get the data from that same instance. If you want to profile services located on a different physical tier as well, that would require a different approach involving communication through internal endpoints, which I'm sure the mini profiler doesn't support out of the box. However, the modification shouldn't be that complicated.
That said, if you did want to profile physically separated tiers, I would go about it in a different way: profile each tier independently, because that's how I would go about optimizing it. If you wrap the call to your other tier in a profiler step, you can see where the problem lies and still be able to solve it.
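For example, something along these lines on the calling tier (a sketch only; depending on your package version the namespace is MvcMiniProfiler or StackExchange.Profiling, and the pricing-tier URL is obviously a placeholder):

```csharp
using System.Net.Http;
using StackExchange.Profiling;

public class QuoteService
{
    private static readonly HttpClient Client = new HttpClient();

    public string GetQuote(string pricingTierUrl)
    {
        // Everything inside the using block is recorded as a single step,
        // so this tier's profile shows how long the cross-tier call took.
        using (MiniProfiler.Current.Step("Call pricing tier"))
        {
            return Client.GetStringAsync(pricingTierUrl).Result;
        }
    }
}
```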
By default the mvc-mini-profiler stores and delivers its results using HttpRuntime.Cache. This is going to cause some problems in a multi-instance environment.
If you are using multiple instances, then some ways you might be able to make this work are:
to change the Http Cache to an AppFabric Cache implementation (or some MemCached implementation)
to use an alternative Storage strategy for your profile results (the code includes SqlServerStorage as an example - see the sketch below)
Obviously, whichever strategy you choose will require more time/resources than just the single instance implementation.
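For the SqlServerStorage route, the wiring is roughly a one-liner at startup. A sketch against the older profiler API (again, the namespace may be MvcMiniProfiler or StackExchange.Profiling depending on your version, and the connection string is a placeholder):

```csharp
using StackExchange.Profiling;
using StackExchange.Profiling.Storage;

public static class ProfilerConfig
{
    public static void Configure(string connectionString)
    {
        // Store results in a shared SQL database instead of HttpRuntime.Cache,
        // so a result written by one instance can be served by any instance.
        MiniProfiler.Settings.Storage = new SqlServerStorage(connectionString);
    }
}
```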