When using Azure web/worker roles, users can specify osVersion to explicitly set the "Guest OS image" version. This ensures that when Microsoft issues new critical updates, they first appear as a newer Guest OS image version, which users can explicitly select and test their service against.
How is the same achieved with Azure Service Fabric? Suppose I deployed my service into Azure Service Fabric and it's been running for a month, then Microsoft issues updates for the OS on the server where the service is running - how are they applied such that I can test them first to ensure they don't break the service?
Brett is correct. An SF cluster is based on Azure VMSS, and the expectation is that the customer is responsible for patching the OS. https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-upgrade/
We have heard from the majority of SF customers that this is not at all desirable and that they do not want to be responsible for OS patching.
The feature to enable opt-in automatic OS patching is indeed a very high priority within the Azure Compute team. The exact details of how best to offer this are still in design; however, the intent is to have this functionality enabled before the end of the year.
Although that is the right long-term solution, to mitigate this issue in the short term the SF team is working on a set of steps that will enable customers to opt into having their VMs patched using Windows Update (WU) in a safe manner. Once the steps are tested out, we will blog about it and publish a document detailing the steps. Expect that in the next couple of months.
As I understand it, you are currently responsible for managing patching on SF cluster nodes yourself. Apparently moving this to be an SF-managed feature is planned, but I have no idea how far down the road it might be.
I personally would make this a high priority. Having used Cloud Services for many years, I have come to rely on never having to patch my VMs manually. SF is a big step backwards in this particular area.
It'd be great to hear from an Azure PM on this...
Automatic image-based patching in Service Fabric, like Cloud Services.
Today you do not have that option. The image-based patching capability is a work in progress. I posted a roadmap to get there on the team blog: https://blogs.msdn.microsoft.com/azureservicefabric/2017/01/09/os-patching-for-vms-running-service-fabric/ Try out the script and report any issues you hit. Looking forward to your feedback.
Lots of parts of Service Fabric are huge rolling dumpster fires of backwards steps. Whole new hosts of problems have been introduced that the IIS/WAS/WCF team already solved and that now need to be developed for all over again. The concept of releasing a PaaS platform while requiring OS patch management is laughable. To add insult to injury, there is no migration path from "classic cloud PaaS" to this stuff. WEEEE, I get to write my very own service host, something that was provided out of the box for a decade by WAS. Not all of us were scared off by the ability to control all aspects of service host communication via configuration. Now we get to use code, so a tweak to channel configuration requires a full patch/release cycle!
Currently, our product is a web application with SQL Server as DBMS, ASP.NET backend, and classic HTML/JavaScript/CSS frontend. The product is actively developed and each month we have to deploy a new version of it to production.
During this deployment, we update all the components listed above (apply some SQL scripts, update binaries and client files), but we deploy only the delta (the set of files that changed since the last release). This has some benefits, such as not resetting custom data/configs/client adjustments.
Now we are going to move into clouds like Azure, AWS, etc., adjust the product architecture to be compliant with Docker/Kubernetes, and provide the product as SaaS.
And now the question itself: which deployment approach is recommended in the cloud? Can we keep applying only the delta, or do we have to reorganize the process to always deploy from scratch?
If there are some Internet resources I have missed, please share.
This question is extremely broad but maybe some clarification could steer you in the right direction anyway:
Source code deployments (like applying deltas) and container deployments are two very different directions, in the sense that the tooling you invest in during the entire SDLC CAN differ substantially. Some testing pipelines/products focus heavily (or exclusively) on working with one or the other. There will be tools that can handle both, of course.
They also differ in the problems they're attempting to solve and come with some pros and cons:
Source Code Deployments/Apply Diffs:
Good for small teams and quick deployments as they're simple to understand and set up.
Starts to introduce risk when you need to upgrade the host OS or application dependencies.
Starts to introduce risk when the hosts in production begin to drift (accumulate more differing files than expected) more dramatically over time; see the sketch after this list.
Slack has a good write-up of their experience here.
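To make the drift point concrete, here is a minimal sketch (my own, not from the answer above) of how you might detect drift after months of delta deployments: hash the files on a host against a manifest recorded at release time. The manifest path and JSON format are assumptions for illustration only.

```python
# Sketch: detect drift between a release manifest and what is actually on a host.
# Assumes a manifest.json of {"relative/path": "sha256hex"} written at release time.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def find_drift(deploy_root: str, manifest_file: str) -> dict:
    """Compare deployed files against the recorded manifest."""
    root = Path(deploy_root)
    manifest = json.loads(Path(manifest_file).read_text())
    drift = {"missing": [], "changed": [], "unexpected": []}
    for rel, expected in manifest.items():
        path = root / rel
        if not path.exists():
            drift["missing"].append(rel)
        elif sha256_of(path) != expected:
            drift["changed"].append(rel)
    known = set(manifest)
    for path in root.rglob("*"):
        if path.is_file() and str(path.relative_to(root)) not in known:
            drift["unexpected"].append(str(path.relative_to(root)))
    return drift

if __name__ == "__main__":
    # Hypothetical paths; point these at your real deployment root and manifest.
    print(find_drift(r"C:\inetpub\product", "manifest.json"))
```

Anything showing up in "changed" or "unexpected" is exactly the kind of host-to-host divergence that delta deployments tend to accumulate.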
Container deployments
Provides isolation between the application (developer space) and the host OS (sysadmin/ops space). This usually means the two sides can work independently of each other.
Gives an "artifact" that won't change between deployments, i.e. the image tagged v1 will always be the same unless you do something really funky; you can't really guarantee that with source-code deployments.
The practice of isolating stateless components makes autoscaling those components very easy, and you can eventually spend more time on the harder ones (usually stateful).
Introduces a new abstraction with new concerns that your team will have to mature into. Testing pipelines, dev tooling, and monitoring/logging architectures might all need to be adjusted over time, and that comes with cost and risk.
Stateful containers are hardly a solved problem (e.g. shoving an existing database into a container can be a surprising challenge).
In order to work with Kubernetes, you need to have a containerized application. That doesn't mean you need to containerize your entire product overnight. Splitting out the front end to deploy with CloudFront/S3 and containerizing a stateless app will get your feet wet.
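As a rough illustration (my sketch, not part of the answer) of what "stateless and containerizable" means in practice: the service reads all configuration from environment variables and keeps no local files that must survive a redeploy, which is what makes deploying from scratch safe. Flask and the endpoint names here are assumptions for the example.

```python
# Sketch of a stateless service: config from the environment, no local state,
# so every deployment can be a fresh container image rather than a delta.
import os
from flask import Flask, jsonify

app = Flask(__name__)

# Environment-specific values are injected at deploy time (e.g. by Kubernetes),
# never patched into files on the host.
DB_CONNECTION = os.environ.get("DB_CONNECTION", "")
APP_VERSION = os.environ.get("APP_VERSION", "dev")

@app.route("/healthz")
def healthz():
    # Health endpoint the orchestrator can probe before routing traffic.
    return jsonify(status="ok", version=APP_VERSION)

@app.route("/api/items")
def items():
    # A real implementation would query the database via DB_CONNECTION;
    # the point is that no request depends on files written locally.
    return jsonify(items=[], has_db=bool(DB_CONNECTION))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```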
Some books that talk about DevOps philosophies (in which this transition plays a part):
The DevOps Handbook
Accelerate
Effective DevOps
SRE book
I'm trying to come up with a solution for achieving geo-redundancy (2+ datacentres) while using Service Fabric reliable Actors/Services to manage state. The documentation insinuates here that geo-replication is possible:
"This may happen when, for example, if you aren't geo replicated and your entire cluster is in one data center, and the entire data center goes down."
but doesn't explain how to switch it on.
Does anybody know if it's a planned feature for ASF that just hasn't been released yet, or whether it's present but not fully explored yet?
Alternatively, does anybody have any recommended approaches for cross-DC resilience when the state required to run the app is stored using ASF's StateManager?
thanks,
Alex
Alex,
Apparently the Service Fabric team has yet to crack this problem (more info below). However, you should be able to set up a geo-HA Service Fabric cluster on Azure yourself. Here's an example of that:
https://alexandrebrisebois.wordpress.com/2016/05/31/deploy-a-geo-ha-service-fabric-cluster-on-azure/
Not today, but this is a common request that we continue to investigate.
The core Service Fabric clustering technology knows nothing about Azure regions and can be used to combine machines running anywhere in the world, so long as they have network connectivity to each other. However, the Service Fabric cluster resource in Azure is regional, as are the virtual machine scale sets that the cluster is built on. In addition, there is an inherent challenge in delivering strongly consistent data replication between machines spread far apart. We want to ensure that performance is predictable and acceptable before supporting cross-regional clusters. Source: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-common-questions
Cheers,
Paulo
There is no reason you cannot install a series of nodes in different regions as part of the same Fabric, and use placement constraints to control service allocation. As long as the nodes can properly communicate with each other, there should be no problem with this.
If you're using Azure, you should deploy them to Virtual Networks, and link them together using VPNs. You could even cross to on-prem.
I believe the answer would be to use a custom replicator implementation and to bridge multiple clusters with ExpressRoute.
I am exploring the idea of hosting my CD environment in Windows Azure. I read that the current release of the DMS does not play ball in the cloud; however, no detailed explanation was given. Apparently Azure support is planned for the second quarter of 2013, but in the meantime I'd like to know why it doesn't work so that I can explore potential workarounds.
For instance, is the issue related to sticky sessions (or the lack thereof)? Or is it related to DMS compatibility with SQL Azure?
It will be an issue with sticky sessions. As the DMS does all its work server side, it needs proper session state management to work. You could do this on Azure using IaaS, but then you would be responsible for installing and maintaining the Sitecore deployment on the OS rather than using the built-in deployment features.
See this post by Jakob Leander for more info: http://www.sitecore.net/Community/Technical-Blogs/Jakob-Leander/Posts/2013/01/Why-we-love-Sitecore-on-Azure.aspx
I'm not very skilled with Azure, and googling hasn't given me much of an answer on this topic.
I have an ASP.NET web page that uses the R-(D)COM interface to do some complex calculations. I'm evaluating moving everything to the Azure platform.
I saw that it's easy to move web pages to Azure; however, since I need RSERVER installed on the machine, I need to move everything.
I was thinking of creating a VHD image and publishing the entire image to Azure, but I'm not sure this is the best solution.
I am not familiar with RSERVER, but here are some guidelines you may follow:
By default, all Windows Azure servers run in 64-bit mode. This is important for the COM interfaces.
You may run any executable as a Startup Task in a regular Windows Azure Web/Worker role. Frankly, you can create very complex startup scripts. You may use the Windows Azure Bootstrapper to ease the solution. The trick is that RSERVER must support unattended/silent install.
I would stick to the least friction solution - which would be using a normal Windows Azure Web Role and a Startup Task.
If that is not working for you, you may consider preparing a VHD image and using the Windows Azure VM Role.
I've written a very similar answer to what I'd write to you here. The thing is, the Azure VM role is technically a good solution, depending on what you need to do with it. You can generally create really good solutions with a fairly minimal amount of effort to let legacy code work with Azure, despite all the shortcomings of the VM role.
In general, if you have a lot of custom installation to do, create the Azure VM role, absolutely. But make sure you get the communication with it right. It's not going to behave exactly like a web or worker role. Although, if I remember correctly, you still have endpoints and configuration there, so you can expose your programming to the outside. Personally, however, my architectures are much more queue-based (as described in the answer highlighted above), so I'd opt for writing a bridge program in the VM.
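To illustrate the queue-based bridge idea (my sketch, not the answerer's code): a small worker inside the VM polls an Azure Storage queue for calculation requests, shells out to the locally installed R, and posts results to another queue, so the web role only ever talks to the queues, never to the VM directly. The queue names, Rscript invocation, and message format are assumptions.

```python
# Sketch of a queue-based bridge running inside the VM that hosts RSERVER/R.
# Assumes the azure-storage-queue package and two queues: r-requests and r-results.
import os
import subprocess
import time
from azure.storage.queue import QueueClient

CONN = os.environ["STORAGE_CONNECTION_STRING"]
requests_q = QueueClient.from_connection_string(CONN, "r-requests")
results_q = QueueClient.from_connection_string(CONN, "r-results")

def run_r(script_body: str) -> str:
    """Run an R snippet with the locally installed Rscript and return its output."""
    proc = subprocess.run(
        ["Rscript", "-e", script_body],  # assumes Rscript is on PATH inside the VM
        capture_output=True, text=True, timeout=300,
    )
    return proc.stdout if proc.returncode == 0 else "ERROR: " + proc.stderr

while True:
    for msg in requests_q.receive_messages(messages_per_page=5):
        # Message content is assumed to be the R expression to evaluate.
        results_q.send_message(run_r(msg.content))
        requests_q.delete_message(msg)
    time.sleep(5)  # simple polling interval; tune or rely on visibility timeouts
```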
I intend to build software for SMEs on the Azure platform that can be provisioned for different clients. What I mean is, once a client signs up, a new instance is automatically created for them on the Azure platform.
Does anyone have any experience with building such solutions, or are there any commercial packages like that available?
thanks
It sounds like you're planning to have a single-tenant system, where each instance is slightly different from the others and is customized slightly differently for each client. If this is the case, Azure in general will not be a great platform for you. It thrives on providing a dynamic quantity of exactly-alike instances. Furthermore, having one instance per client is a bad idea, as instances are slightly volatile. MS may choose to bring one down for an upgrade, or an instance may simply crash, and the SLA is only enforced when 2+ instances are running.
I'd like to suggest that you consider a multi-tenant environment, where your system shards itself virtually via the database/architecture/etc. Do not tie the number of instances to the number of clients, but to actual load.
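As a small illustration of the virtual-sharding idea (my own sketch, with hypothetical names): resolve the tenant from the request's host name and look up its database/shard, so one pool of identical instances serves every client and instance count is driven purely by load.

```python
# Sketch: tenant resolution for a multi-tenant system where all instances are identical.
# The subdomain convention and shard map are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Tenant:
    name: str
    shard: str          # which database/shard holds this tenant's data
    plan: str = "basic"

# In practice this map would live in a shared catalog database or cache,
# not in code; every instance reads the same catalog.
TENANT_CATALOG = {
    "contoso": Tenant("contoso", shard="sqlshard-01"),
    "fabrikam": Tenant("fabrikam", shard="sqlshard-02"),
}

def resolve_tenant(host_header: str) -> Tenant:
    """Map e.g. 'contoso.myapp.example.com' to its tenant record."""
    subdomain = host_header.split(".")[0].lower()
    tenant = TENANT_CATALOG.get(subdomain)
    if tenant is None:
        raise LookupError(f"Unknown tenant: {subdomain}")
    return tenant

# Any instance can serve any tenant; a new sign-up adds a catalog row, not a new instance.
print(resolve_tenant("contoso.myapp.example.com").shard)
```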
Now, if you want to spin up exactly-alike instances when new clients sign up, check out a dynamic scaling service for Azure called AzureWatch at http://www.paraleap.com - its main premise is to scale your instances to load, but with a few simple queue/table inserts it can programmatically scale you up or down. Contact me there if you think this will work for you, and I'll be glad to explain how this can be done.
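To show the kind of signal a scaler like AzureWatch reacts to (a generic sketch, not AzureWatch's own API): read the approximate depth of a work queue and turn it into a scale decision. The queue name, thresholds, and the scale_to placeholder are assumptions.

```python
# Sketch: turn queue depth into a scale decision (the signal a scaling service acts on).
import os
from azure.storage.queue import QueueClient

MESSAGES_PER_INSTANCE = 100   # assumed capacity of one worker instance
MIN_INSTANCES, MAX_INSTANCES = 2, 20

def desired_instance_count(queue: QueueClient) -> int:
    depth = queue.get_queue_properties().approximate_message_count or 0
    wanted = max(MIN_INSTANCES, -(-depth // MESSAGES_PER_INSTANCE))  # ceiling division
    return min(wanted, MAX_INSTANCES)

def scale_to(count: int) -> None:
    # Placeholder: here you would call your platform's management API,
    # or drop a row into a control table/queue that the scaler watches.
    print(f"scale worker role to {count} instances")

if __name__ == "__main__":
    q = QueueClient.from_connection_string(
        os.environ["STORAGE_CONNECTION_STRING"], "work-items")
    scale_to(desired_instance_count(q))
```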