VMSS custom image pros/cons - Azure

I need to install .NET Framework 4.7 on my VMSS. I tried using the script extension, but since I need to reboot the machine after the installation, it was a bit complex.
I decided to go with a custom image instead. I created a VM, installed the .NET Framework, and then captured it to an image. It was a painless process.
My question is: it seems that if my VMSS is using a custom image, I cannot update it to use a marketplace image. Are there any other things I lose by using custom images?

George! Long time no see :).
This is a great question, but I don't think it's documented anywhere. I threw together a quick blog post describing the pros and cons: https://negatblog.wordpress.com/2018/06/28/custom-vs-platform-images-for-scale-sets/. Here's the summary:
Platform images tend to allow for greater scale
Some features only support platform images
When using custom images, you control the image lifecycle, so you don't need to worry about the image being removed unexpectedly
Deployment speed can differ between the two (either way can be faster depending on the scenario)
With custom images, you can actually capture data disks along with the OS disk, allowing you to easily initialize each VM in the scale set with data
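To make the custom-image side concrete, here's a minimal sketch (using the Az PowerShell module; all resource names are placeholders, and exact parameters may vary by module version) of pointing a new scale set at a captured image by its resource ID rather than a marketplace publisher/offer/SKU:

    $image = Get-AzImage -ResourceGroupName "myRg" -ImageName "myDotNet47Image"

    # The scale set model references the custom image by resource ID
    # instead of a marketplace publisher/offer/SKU
    $config = New-AzVmssConfig -Location $image.Location -SkuCapacity 2 `
        -SkuName "Standard_D2s_v3" -UpgradePolicyMode "Manual"
    Set-AzVmssStorageProfile -VirtualMachineScaleSet $config `
        -OsDiskCreateOption "FromImage" -OsDiskOsType "Windows" `
        -ManagedDisk "Standard_LRS" -ImageReferenceId $image.Id
    Set-AzVmssOsProfile -VirtualMachineScaleSet $config `
        -ComputerNamePrefix "web" -AdminUsername "azureuser" `
        -AdminPassword "<placeholder>"

    # Minimal networking: attach the instances to an existing subnet
    $subnet = (Get-AzVirtualNetwork -ResourceGroupName "myRg" -Name "myVnet").Subnets[0]
    $ipConfig = New-AzVmssIpConfig -Name "ipcfg" -SubnetId $subnet.Id
    Add-AzVmssNetworkInterfaceConfiguration -VirtualMachineScaleSet $config `
        -Name "niccfg" -Primary $true -IPConfiguration $ipConfig

    New-AzVmss -ResourceGroupName "myRg" -VMScaleSetName "myScaleSet" `
        -VirtualMachineScaleSet $config

With a marketplace image you'd set a publisher/offer/SKU in the storage profile instead, and, as you observed, a live scale set can't be switched between the two models.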
Hope this helps! :)
-Neil

Related

Web application deployment approaches

Currently, our product is a web application with SQL Server as the DBMS, an ASP.NET backend, and a classic HTML/JavaScript/CSS frontend. The product is actively developed, and each month we have to deploy a new version of it to production.
During this deployment we update all the components listed above (apply some SQL scripts, update binaries and client files), but we deploy only the delta (the set of files that changed since the last release). This has some benefits: for example, we do not reset custom data/configs/client adjustments.
Now we are going to move into clouds like Azure or AWS, adjust the product architecture to work with Docker/Kubernetes, and provide the product as SaaS.
And now the question itself: which deployment approach is recommended in the cloud? Can we keep applying only the delta, or do we have to reorganize the process to always deploy from scratch?
If there are some Internet resources I have missed, please share.
This question is extremely broad but maybe some clarification could steer you in the right direction anyway:
Source code deployments (like applying deltas) and container deployments are two very different directions, in the sense that the tooling you invest in during the entire SDLC CAN differ substantially. Some testing pipelines/products focus heavily (or exclusively) on working with one or the other. There will be tools that can handle both, of course.
They also differ in the problems they're attempting to solve, and each comes with some pros and cons:
Source Code Deployments/Apply Diffs:
Good for small teams and quick deployments, as they're simple to understand and set up.
Starts to introduce risk when you need to upgrade the host OS or application dependencies.
Starts to introduce risk when the hosts in production begin to drift (have more differing files than expected) more dramatically over time.
Slack has a good write-up of their experience here.
Container deployments
Provides isolation between the application (developer space) and the host OS (sysadmin/ops space). This usually means the two sides can work independently of each other.
Gives you an "artifact" that won't change between deployments, i.e. the container image tagged v1 will always be the same unless you do something really funky; you can't really guarantee this with diff-based deployments (see the sketch below).
The practice of isolating stateless components makes autoscaling those components very easy, so you can eventually spend more time on the harder ones (usually the stateful ones).
Introduces a new abstraction with new concerns that your team will have to mature into. Testing pipelines, dev tooling, and monitoring/logging architectures might all need to be adjusted over time, and that comes with cost and risk.
Stateful containers are hardly a solved problem (e.g. shoving an existing database into a container can be a surprising challenge).
In order to work with Kubernetes, you need to have a containerized application. That doesn't mean you need to containerize your entire product overnight. Splitting out the front end to deploy with CloudFront/S3 and containerizing one stateless app will get your feet wet.
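As a rough illustration of the "immutable artifact" point (the registry name, image name, and tag here are hypothetical placeholders):

    # Build the image once and tag it; this exact artifact is what gets promoted
    docker build -t myregistry.azurecr.io/myapp:v1 .
    docker push myregistry.azurecr.io/myapp:v1

    # Staging and production both pull the identical bytes; a redeploy means
    # pulling the same tag again, not re-applying a diff on top of a drifted host
    docker pull myregistry.azurecr.io/myapp:v1
    docker run -d -p 80:80 myregistry.azurecr.io/myapp:v1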
Some books that talk about DevOps philosophies (in which this transition plays a part):
The DevOps Handbook
Accelerate
Effective DevOps
The SRE Book (Site Reliability Engineering)

How to fit Docker into this architecture?

I'm fairly new to Docker and trying to understand how it can be used in real-life, enterprise-level applications.
Here are the components (all hosted in Azure) that make up the website:
Web services
Web App
Azure search
Document DB
Web jobs
How can one utilize Docker in this scenario?
I don't think you need Docker at all; it only introduces extra management overhead. What you have fits the PaaS scenario perfectly, and Azure gives you much more than you would get with Docker (you'd probably have to spend years trying to replicate the same functionality).
As you tagged your question with Service Fabric: you won't need that either, although it's a great framework providing lots of microservice-based architecture orchestration out of the box. It can utilize Docker to host services (I think on Linux it uses Docker out of the box).
So unless you have a specific problem, I wouldn't look in this direction; concentrate on improving your application's functionality and quality instead, since the existing services already fit best.
I think the main takeaway here is: why do you want Docker at all? You don't seem to provide any reason for it, and there's no point in using Docker if you don't know why you want to use it.
All the services you listed are PaaS, so introducing Docker anywhere here (except for the Web App) would only increase your administrative overhead; why would you need that? Web Apps can be painlessly converted to Docker (Web Apps on Linux can launch Docker containers, and you can even use private registries), as sketched below.
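If you did want to go that route, a minimal sketch with the Az PowerShell module might look like this (all names are hypothetical placeholders, and the exact parameters may vary by module version):

    # Create a Linux App Service plan and a Web App that runs a container image
    New-AzAppServicePlan -ResourceGroupName "myRg" -Name "myPlan" `
        -Location "westeurope" -Tier "Basic" -Linux
    New-AzWebApp -ResourceGroupName "myRg" -Name "myContainerApp" `
        -Location "westeurope" -AppServicePlan "myPlan" `
        -ContainerImageName "myregistry.azurecr.io/myapp:v1"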
Also, it is not considered best practice to host your persistent data (Document DB in your case) in containers; it can be done, but a lot of people would argue against it.
PS: this question is mostly opinion-based and hence should be closed.

Save-AzureVMImage Generalized vs. Specialized

I've looked extensively at the Azure documentation regarding saving VM images. I understand there are two types available, Generalized and Specialized, and I've read explanations of the differences. However, these appear to be written mostly for those very familiar with Azure concepts or IT in general. I'm more on the development side.
On to my problem: I have an Azure-hosted image which I've used as a build agent for TeamCity. Our application isn't vanilla in the sense that we can just install Visual Studio and be done (I wish). We have about 20 or so third-party dependent applications to install onto the main OS disk, with lots of configuration required (system variables, etc.) to get it all working.
So, finally, to my question: which is the right type to use, Specialized or Generalized? I want to spawn 4 copies of this server in the same cloud service.
Any advice is greatly appreciated.
Generalized, since you want them to be in the same cloud service. Generalized images start up differently: they configure themselves to take on new identities during their first start-up.
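Roughly, the capture flow looks like this (a sketch using the classic ASM cmdlets that Save-AzureVMImage belongs to; the service, VM, and image names are placeholders):

    # 1. Inside the VM: generalize it, stripping machine-specific identity.
    #    The VM shuts down when sysprep finishes.
    & "$env:windir\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown

    # 2. From your workstation: capture the stopped VM as a generalized image.
    Save-AzureVMImage -ServiceName "myCloudService" -Name "buildAgentVm" `
        -ImageName "buildAgentImage" -OSState Generalized

Each of the 4 copies you provision from that image then takes on its own identity on first boot, whereas a Specialized image is an exact clone of the original machine.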

A little confused about Azure

I've been reading about Azure's storage system, worker roles, and web roles.
Do you HAVE to develop an application specifically for Azure to use these? It looks like you can remote desktop into Azure and set up an application in IIS like you normally can on a Windows server, right? I'm a little confused, because the docs read like you need to develop an Azure-specific application.
I'm looking to move to the cloud, but I don't want to have to rework my application for it.
Thanks for any clarification.
Changes to the ASP.NET application are minimal (for the most part, the web application will just work in Azure).
But you don't remote-connect to deploy. You actually build a package (zip) with a manifest (xml) that has information about how to deploy your app, and you hand it to Azure. In turn, Azure takes care of allocating servers and deploying your app.
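For instance, once Visual Studio (or the cspack tool) has produced the .cspkg package and its .cscfg configuration, the hand-off can be scripted with the classic Azure PowerShell cmdlets (the service name and file names below are placeholders):

    # Push the package and its configuration to a hosted service slot;
    # Azure allocates the role instances and deploys the app from there.
    New-AzureDeployment -ServiceName "myCloudService" -Slot "Production" `
        -Package ".\MyApp.cspkg" `
        -Configuration ".\ServiceConfiguration.Cloud.cscfg"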
There are several elements to think about here -
Code-wise, to a large degree this is 'just' .NET running on IIS and Windows, so everything is very familiar and all the past learnings, best practices, etc. apply.
On top of that, you may want to leverage some Azure-specific capabilities (for example table storage, queues, or interacting with your deployment), for which you might need to learn a few more APIs. These aren't big; they're well thought out and kept quite simple, so there's not a big learning curve. Good architecture, of course, would look to abstract these away to prevent/reduce lock-in, but that's a design choice.
Outside the code, however, there's a bit more to think about -
You'll want to think about your deployment, because RDP-ing into a machine and making changes that way takes away many of the benefits of PaaS, namely the ability of the platform to 'self-heal' by automatically re-deploying your application should a server fail.
You'll also want to think about monitoring, which needs to be done slightly differently.
Last, the cloud enables different scenarios and provides a scale-out model rather than a scale-up model, which you might want to take advantage of, but it might require doing things a little bit differently.
So, bottom line: yes, you could probably get an application running in Azure very quickly without really having to learn much of anything, but to do things properly and really gain from the platform, you'll want to learn a bit more about it. The good thing is, it's not much, and it all feels very familiar: just another 'framework' for .NET (and Java, amongst others...).
You can just build a pretty vanilla web application with a SQL backend and get it to work on Azure with minimal Azure dependencies. That application will then be fairly portable to another server or cloud platform.
But as you have seen, there are a number of Azure-specific features. These are generally optional and you can do without them, although they are useful when building highly scalable sites.
Azure is a platform, so under normal circumstances you should not need to remote desktop in and fiddle with stuff. RDP is really just for desperate debugging situations.

Moving R project to Azure

I'm not very skilled with Azure, and googling hasn't given me any answers on this topic.
I have an ASP.NET web page that uses the R-(D)COM interface to do some complex calculations. I'm evaluating moving everything to the Azure platform.
I saw that it's easy to move web pages to Azure; however, since I need RSERVER installed on the machine, I need to move everything.
I was thinking of creating a VHD machine and publishing the entire image on Azure, but I'm not sure this is the best solution.
I am not familiar with RSERVER, but here are some guidelines you may follow:
By default, all Windows Azure servers run in 64-bit mode. This is important for the COM interfaces.
You may run any executable as a Startup Task in a regular Windows Azure Web/Worker role. Frankly, you can create very complex startup scripts, and you may use the Windows Azure Bootstrapper to ease the solution. The trick is that RSERVER must support an unattended/silent install.
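For reference, a startup task is declared in the service's manifest; a hypothetical fragment might look like this (the script name is a placeholder, and the silent-install switch it would invoke depends entirely on the RSERVER installer):

    <!-- ServiceDefinition.csdef (fragment) -->
    <Startup>
      <!-- Runs elevated; the role won't start until the task exits -->
      <Task commandLine="InstallRServer.cmd" executionContext="elevated" taskType="simple" />
    </Startup>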
I would stick to the least-friction solution, which would be using a normal Windows Azure Web Role and a Startup Task.
If that doesn't work for you, you may consider preparing a VHD image and using the Windows Azure VM Role.
I've written a very similar answer to what I'd write for you here. The thing is, the Azure VM Role is technically a good solution, depending on what you need to do with it. You can generally create really good solutions with a fairly minimal amount of effort to make legacy code work with Azure, despite all the shortcomings of the VM Role.
In general, if you have a lot of custom installation to do, then absolutely, create the Azure VM Role. But make sure you get the communication with it right; it's not going to behave exactly like a web or worker role. Although, if I remember correctly, you still have endpoints and configuration there, so you can expose your program to the outside. Personally, however, my architectures are much more queue-based (as described in the answer linked above), so I'd opt for writing a bridge program in the VM.
