I'm currently building an application using the Reliable Actors framework on Azure Service Fabric. As we scale up, I'm looking at doing blue/green deployments. I can see how to do this with a stateless system. Is there a way to do this using stateful actors?
Service Fabric is all about rolling upgrades, rather than deployment swaps like a VIP swap. Both stateless and stateful services are upgraded the same way, but there are a few additional nuances for stateful services that I'll mention later.
By rolling upgrades, I mean upgrades to an application are done in place, one upgrade domain at a time, so that there is no downtime and no sudden switch. A rolling upgrade in Service Fabric can be done in a safe "managed" mode where the platform will perform health checks before moving on to the next upgrade domain, and will automatically roll back if health checks fail.
OK, that all sounds nice. But how do you do blue/green deployments when upgrades are always rolling upgrades?
This is where application types and versions come in. Instead of having two "environments" that can hold two running applications, Service Fabric has the concept of versioned application types from which application instances can be created. Here's an example of how this works:
Let's say I want to make an application called Foo. My Foo application is defined as an application type, call it FooType. This is similar to defining a class in C#. And like a class in C#, I can create instances of my type. Each instance has a unique name, similar to how each object instance of a class has a unique variable name. But unlike classes in C#, my FooType has a version number. I can then "register" the application type and version in my cluster:
FooType 1.0
With that registered, I can create an instance of that application:
"fabric:/FooApp" of FooType 1.0
Now, let's say I develop version 2.0 of my application. So I register version 2.0 of my FooType in the cluster:
FooType 1.0
FooType 2.0
Now I have both versions of FooType registered, and I still have an instance of 1.0 running:
"fabric:/FooApp" of FooType 1.0
Here's where it gets fun. I can do some interesting things:
I can take "fabric:/FooApp" - an instance of FooType 1.0 - and upgrade it to FooType 2.0. This will be a rolling upgrade of that running application.
Or... I can leave "fabric:/FooApp" alone, and create a new instance of my version 2.0 application:
"fabric:/FooApp" of FooType 1.0
"fabric:/FooAppv2Test" of FooType 2.0
Now I have two applications, running side-by-side, in the same cluster. One is an instance of 1.0, and the other is an instance of 2.0. With some configuring of ports and application endpoints, I can ensure users are still going to the 1.0 instance while I test out the 2.0 instance.
Great, so all my tests pass against the 2.0 instance, and now I can safely take the 1.0 instance and upgrade it to 2.0 of FooType. Again, this is a rolling upgrade of that instance (fabric:/FooApp); it's not migrating users to the new instance (fabric:/FooAppv2Test). Later I'll go and delete fabric:/FooAppv2Test, because that was just for testing.
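Sketched with the same cmdlets, assuming FooType 2.0 has already been copied and registered the same way 1.0 was:

```powershell
# Create a side-by-side test instance from the 2.0 type
New-ServiceFabricApplication -ApplicationName "fabric:/FooAppv2Test" `
    -ApplicationTypeName "FooType" -ApplicationTypeVersion "2.0"

# After testing passes, roll the production instance forward in place.
# -Monitored runs health checks per upgrade domain, and
# -FailureAction Rollback reverts automatically if a health check fails.
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/FooApp" `
    -ApplicationTypeVersion "2.0" -Monitored -FailureAction Rollback

# The test instance was only for validation, so remove it
Remove-ServiceFabricApplication -ApplicationName "fabric:/FooAppv2Test" -Force
```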
One of the benefits of blue/green though is being able to swap back to the other deployment if the new one fails. Well, you still have both 1.0 and 2.0 of FooType registered. So if your application started misbehaving after the upgrade from 1.0 to 2.0, you can just "upgrade" it back to 1.0! In fact, you can "upgrade" an application instance between as many different versions of its application type as you want! And you don't need to have instances of all your application versions running like you do in a swapping environment, you just have the different versions registered and a single application instance that can "upgrade" between versions.
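In cmdlet terms, that "swap back" is just another monitored upgrade targeting the old version, which works as long as 1.0 is still registered:

```powershell
# Roll fabric:/FooApp back by "upgrading" it to the still-registered 1.0
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/FooApp" `
    -ApplicationTypeVersion "1.0" -Monitored -FailureAction Rollback
```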
I mentioned caveats with stateful services. The big thing to remember with stateful services is that the application state - your users' data - is contained in the application instance (fabric:/FooApp), so for your users to see their data you need to keep them on that instance. That's why we do rolling upgrades instead of deployment swaps.
This is just the basic idea. There are other ways you can play around with application types, versions, and instances depending on what your goals are and how your application works, but that's for another time.
Related
We use an "enterprise" app which deploys Hazelcast. Every minor version upgrade is a pain, because the vendor has to become involved.
Any chance Hazelcast.org could deploy a hazelcast-api.jar on which the legacy app could depend at compile-time, leaving us free to deploy the compatible version of our choice for runtime use?
Thanks, Robin.
Robin,
Currently it's not possible, but we at Hazelcast have this requirement in mind for future Hazelcast versions (maybe in Hazelcast 4).
Let me know if you have any questions.
When using Azure web/worker roles, users can specify osVersion to explicitly set the "Guest OS image" version. This ensures that when Microsoft issues new critical updates, they first show up in a newer OS image, which users can explicitly specify and test their service on.
How is the same achieved with Azure Service Fabric? Suppose I deployed my service into Azure Service Fabric and it's been running for a month, then Microsoft issues updates for the OS on the server where the service is running - how are they applied such that I can test them first to ensure they don't break the service?
Brett is correct. A Service Fabric cluster is based on Azure VM scale sets (VMSS), and the expectation is that the customer is responsible for patching the OS. https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-upgrade/
We have heard from a majority of SF customers that this is not at all desirable and that they do not want to be responsible for OS patching.
The feature to enable opt-in automatic OS patching is indeed a very high priority within the Azure Compute team. The exact details of how best to offer this are still in design, but the intent is to have this functionality enabled before the end of the year.
Although that is the right long-term solution, to mitigate this issue in the short term the SF team is working on a set of steps that will enable customers to opt into having their VMs patched via Windows Update in a safe manner. Once the steps are tested out, we will blog about it and publish a document detailing the steps. Expect that in the next couple of months.
As I understand it, you are currently responsible for managing patching on SF cluster nodes yourself. Apparently moving this to be an SF-managed feature is planned, but I have no idea how far down the road it might be.
I personally would make this a high priority. Having used Cloud Services for many years, I have come to rely on never having to patch my VMs manually. SF is a large step backwards in this particular area.
It'd be great to hear from an Azure PM on this...
Is automatic image-based patching, like Cloud Services has, available in Service Fabric?
Today you do not have that option. The image-based patching capability is a work in progress. I posted a roadmap to get there on the team blog: https://blogs.msdn.microsoft.com/azureservicefabric/2017/01/09/os-patching-for-vms-running-service-fabric/ Try out the script and report any issues you hit. Looking forward to your feedback.
Lots of parts of Service Fabric are huge rolling dumpster fires, and big steps backwards. Whole new hosts of problems have been introduced that the IIS/WAS/WCF teams already solved, and now they need to be solved all over again. The concept of releasing a PaaS platform while requiring OS patch management is laughable. To add insult to injury, there is no migration path from "classic cloud PaaS" to this stuff. WEEEE, I get to write my very own service host - something that was provided out of the box for a decade by WAS. Not all of us were scared by the ability to control all aspects of service host communication via configuration. Now we get to use code, so a tweak to channel configuration requires a full patch/release cycle!
I have a number of microservices in a distributed system - one of which I have recently renamed to better reflect its bounded context and disambiguate with another similarly named service.
The service was on version 3.1.0 at the point of renaming. My question is, what do I do with the version now? Is it 4.0.0? Or is this conceptually now a new service, replacing the old one and starting again from 1.0.0?
I would lean towards the latter option, but I'm also versioning the db schema to match the service, and I don't want to end up in the position where the service is 1.0.0 but the db schema is 3.1.0...
You should bump to version 4.0.0. The point of a version is not the name of the service, but a hint to its history and lineage. In this example, though the method of calling the executable has changed, the lineage of the database is intact, and you want to preserve the history that the previous version existed.
Bumping the major version already signals to all users that incompatible changes have occurred, so no one will pick this new invocation version up by accident.
Semantic versioning is deliberately underspecified so that its core tenets can still be followed even when people disagree over the details. That is, semantic versioning only specifies what you must do, not what you should do.
A security flaw in Groovy was detected in versions 1.7 to 2.4.3:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3253
The MethodClosure class in runtime/MethodClosure.java in Apache Groovy 1.7.0 through 2.4.3 allows remote attackers to execute arbitrary code or cause a denial of service via a crafted serialized object.
Does this affect "typical" Grails projects that retrieve data from user input (web forms), the DB, web services, etc. and assume this is all text, not serialized objects? In other words, is there any of this happening implicitly that we should be aware of?
Otherwise, what should we look for to ensure this bug isn't affecting us?
Grails has an issue for exactly this, but no one has been able to show any way to exploit it in a Grails app yet: https://github.com/grails/grails-core/issues/9113
In short: "the plan is 2.5.1 and 3.0.4 will have Groovy 2.4.4"
I intend to build software for SMEs on the Azure platform that can be provisioned for different clients. What I mean is, once a client signs up, a new instance is automatically created for them on the Azure platform.
Does anyone have any experience with building such solutions, or are there any commercial packages like that available?
Thanks
It sounds like you're planning to have a single-tenant system, where each instance is slightly different from the others and is customized for each client. If this is the case, Azure in general will not be a great platform for you. It thrives on providing a dynamic quantity of exactly-alike instances. Furthermore, having one instance per client is a bad idea, as instances are slightly volatile: MS may choose to bring one down for an upgrade, or an instance may simply crash, and the SLA is only enforced when 2+ instances are running.
I'd like to suggest that you consider a multi-tenant environment, where your system shards itself virtually via database/architecture/etc. Do not tie the number of instances to the number of clients, but to actual load.
Now, if you want to spin up identical instances when new clients sign up, check out a dynamic scaling service for Azure called AzureWatch at http://www.paraleap.com - its main premise is to scale your instances to load, but with a few simple queue/table inserts it can programmatically scale you up or down. Contact me there if you think this will work for you, and I'll be glad to explain how this can be done.