I have an Active/Standby pair of ASAs that I need to upgrade from 9.1.5 to 9.1.7.
I am going to upgrade the Standby unit first and then force it to become active.
In case any unexpected problems show up on 9.1.7, I want to wait a week before upgrading the other ASA.
My concern is whether this pair of ASAs can still perform hot standby despite the version difference.
This is what Cisco says: you don't need to maintain the same version while you upgrade, and failover works regardless of the minor version number, so I believe you are good to go:
http://www.cisco.com/c/en/us/support/docs/security/asa-5500-x-series-next-generation-firewalls/111867-asa-failover-upgrade.html
Perform Zero-Downtime Upgrades for Failover Pairs
The two units in a failover configuration should have the same major (first number) and minor (second number) software version. However, you do not need to maintain version parity on the units during the upgrade process; you can have different versions of the software running on each unit and still maintain failover support. In order to ensure long-term compatibility and stability, Cisco recommends that you upgrade both units to the same version as soon as possible.
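In practice, the standby-first approach you describe maps onto roughly the following commands, run from the active unit. This is only a sketch: the image filename and server are placeholders for the actual 9.1.7 image for your platform, and you should follow the procedure in the linked document.

    ! copy the 9.1.7 image to both units (filename/server are placeholders)
    copy ftp://<server>/asa917-smp-k8.bin disk0:/asa917-smp-k8.bin
    failover exec mate copy /noconfirm ftp://<server>/asa917-smp-k8.bin disk0:/asa917-smp-k8.bin
    ! point the boot variable at the new image and save (the config replicates to the standby)
    configure terminal
    boot system disk0:/asa917-smp-k8.bin
    write memory
    ! reload only the standby so it boots 9.1.7
    failover reload-standby
    ! once "show failover" reports the mate as Standby Ready, force it to take over
    no failover active

At that point the upgraded unit is active and the 9.1.5 unit is standing by, which is exactly the mixed-version state the quote above says is supported for the duration of the upgrade.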
Related
I have recently upgraded our Cassandra cluster from 3.11 to 4.0, with the long-term goal of also upgrading the Java version. I did not want to do both of these things at once for obvious reasons; however, we have now been running on Cassandra 4.0 for just over two weeks, and I'm looking to upgrade the Java version from JDK 8 to JDK 11 and also move from the CMS garbage collector to G1GC.
We wanted to get an idea of what the impact of moving to G1GC would be before going big bang across all nodes.
Is it safe to use a different garbage collector on different nodes, or should this be something set up in a test environment to monitor?
Thanks in advance.
Yes! That is actually the recommended practice when changing/testing new GC types, assuming that you cannot fully simulate production workloads in a lower environment.
I'd advise making the switch on one or two nodes and then monitoring their performance relative to the CMS nodes.
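For reference, on those canary nodes the switch is mostly a matter of which GC flags are enabled in the JVM options file. A sketch, assuming Cassandra 4.0 on JDK 11; the exact file name (e.g. conf/jvm11-server.options) and the values are starting points to check against your own configuration, not tuned recommendations:

    # comment out the CMS section (-XX:+UseConcMarkSweepGC and friends), then enable G1:
    -XX:+UseG1GC
    -XX:MaxGCPauseMillis=300
    -XX:G1RSetUpdatingPauseTimePercent=5
    -XX:InitiatingHeapOccupancyPercent=70

Restart the node after editing the file and compare GC pause times and throughput against the CMS nodes before rolling the change out further.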
Logically you can do it, since they are separate Java processes running on different machines. That said, since the actual intention behind this activity is testing, you should analyze the impact in a test environment first and then apply the change in production if the test results look good.
I would like to discuss the best practices/approaches engineers use when upgrading Elasticsearch clusters. I believe this post may serve as a good collection of strategies and steps that guarantee no data loss, minimal downtime, and continued scalability and availability of the Elasticsearch service.
To start, we can break the upgrade into two parts:
1) Performing upgrade on master nodes:
Since master nodes do not contain any data and are responsible for controlling the cluster, I believe we can safely run terraform apply to add all the upgraded master node VMs and then remove the old ones.
2) Performing upgrade on data nodes:
As many people already know, there are certain limitations on the ability to update data nodes. We cannot afford to completely deallocate a VM and replace it with another one. A good practice, in my opinion, is to:
a) Stop shard allocation to the old VM (as sketched after this list)
b) Run terraform apply to create the new, upgraded data node VM (manually modifying the Terraform state so that the old VM is not destroyed)
c) Allow traffic (indexing) to the new VM and use the Elasticsearch APIs to move the data from the old VM to the new one
d) Manually change the Terraform state so that it can delete the old VM.
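To make steps (a), (b) and (d) concrete, this is roughly what I have in mind; the node name and the Terraform resource address are made-up placeholders:

    # (a) keep shards off the node that is being replaced
    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.exclude._name": "old-data-node-1"
      }
    }

    # (b)/(d) remove the old VM from state so apply creates the new one without destroying it;
    # deal with the old VM manually once the data has moved
    terraform state rm '<address.of.old_data_node_vm>'
    terraform apply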
These are just idealistic steps, I would like to see your opinion and strategies to perform safe elasticsearch upgrades via Terraform.
The reference manual has guidelines regarding removing master-eligible nodes that you must respect in versions 7 and later. It's much trickier to get this right in earlier versions because of the discovery.zen.minimum_master_nodes setting.
Your strategy for data nodes sounds slow and expensive given that you might be moving many terabytes of data around for each node. It's normally better to restart/upgrade larger data nodes "in place", detaching and reattaching the underlying storage if needed.
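For an in-place restart/upgrade, the usual pattern is to temporarily limit shard allocation while the node is down and re-enable it once the node has rejoined. A sketch of the standard rolling-restart settings:

    # before stopping the node
    PUT _cluster/settings
    { "persistent": { "cluster.routing.allocation.enable": "primaries" } }
    POST _flush

    # upgrade and restart the node, wait for it to rejoin the cluster, then re-enable allocation
    PUT _cluster/settings
    { "persistent": { "cluster.routing.allocation.enable": null } }

Wait for the cluster to go green again before moving on to the next node.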
Is it possible to create two or more datacentres in YugaByte DB?
Each datacentre would have its own RF, and the datacentres may be asynchronously replicated.
We are currently working on a distributed database where we read from or write to the local datacentre; only if the local datacentre fails to serve us is the remote (geo) datacentre queried.
Is this sort of solution supported in YugaByte? If not, we may face write latency due to nodes being distributed across different geographical locations.
This feature is currently on our roadmap and scheduled for a beta release in 4Q 2019. You can read more about it here: YugaByte Multi-Region 2DC deployment
I've got a stateful service running in a Service Fabric cluster that I now know fails to honor a cancellation token passed into it. My fault.
I'm ready to release the fix, but during the upgrade process, I'm expecting the service replica on the faulty primary node to get stuck since it won't honor the token passed in.
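For context, the fix itself is just making RunAsync observe the token; a simplified sketch of what the corrected service looks like (the class name and timings are illustrative, not my actual code):

    using System;
    using System.Fabric;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Services.Runtime;

    internal sealed class MyStatefulService : StatefulService
    {
        public MyStatefulService(StatefulServiceContext context) : base(context) { }

        protected override async Task RunAsync(CancellationToken cancellationToken)
        {
            while (!cancellationToken.IsCancellationRequested)
            {
                // ... do one unit of work here ...

                // Observing the token is what lets Service Fabric close the replica
                // cleanly when it demotes or moves the primary during an upgrade.
                await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
            }
        }
    }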
I can use Restart-ServiceFabricDeployedCodePackage or even Restart-ServiceFabricNode to manually take down the stuck replica, but that will result in a brief service interruption during the upgrade process.
Is there any way to release this fix with zero downtime?
This is not possible for a stateful service using the Service Fabric infrastructure; you will need to have downtime during this upgrade. Once you have a version that honors the cancellation token, you will be fine.
That said, depending on the use of the state, and if you have a load balancer between your clients and the service, you can stand up another service instance on the new, fixed version and use the load balancer to drain your traffic across to the new version, upgrade the old one, drain back to it, and then drop the second service you created. This will allow for a zero-downtime scenario.
The only workarounds I can think of are worse since they turn off parts of health checks during upgrades and "force" the process to come down. This doesn't make things more graceful or improve downtime, and has a side effect of potentially causing other health issues to be ignored.
There's always some downtime, even with the fully rolling upgrades, since swapping a primary to another node is never instantaneous and callers need to discover the new location. With those commands, you're just converting a more graceful shutdown and cleanup into a failure, which results in the same primary swap. Shouldn't be a huge difference since clients (and SF) have to deal with failure normally anyway.
I'd keep using those commands since they give you good manual control over which replicas/processes to poke when things get stuck.
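If you do stick with that approach, the invocation looks roughly like this (a sketch; the node, application and package names are placeholders, and it assumes you have already connected to the cluster with Connect-ServiceFabricCluster):

    # restart only the stuck code package on the node hosting the wedged primary
    Restart-ServiceFabricDeployedCodePackage -NodeName "_Node_0" `
        -ApplicationName "fabric:/MyApp" `
        -ServiceManifestName "MyStatefulServicePkg" `
        -CodePackageName "Code"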
I have a .NET application on a Windows machine and a Cassandra database on a Linux (CentOS) server. The Windows machine's clock is sometimes a couple of seconds behind, and when that happens the delete or update queries do not take effect.
Does Cassandra require all servers to have the same time? Does the Cassandra driver send my query with a timestamp? (I just write simple delete or update queries, with no timestamp or TTL.)
Update: I use the Datastax C# driver
Cassandra operates on the "last write wins" principle. The system time is therefore critically important in determining which writes succeed. Google for "cassandra time synchronization" to find articles that explain why time synchronization is important and suggest a method to solve the problem using an internal NTP pool. Those articles specifically refer to AWS architecture, but the principles apply to any Cassandra installation.
The client timestamps are used to order mutation operations relative to each other.
The DataStax C# driver uses a timestamp generator to create them, so it's important for all the hosts running the client drivers (the origin of the execution request) to have their clocks in sync with each other.
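Beyond keeping the client clocks in sync (NTP), you can also control the write timestamp explicitly per statement. A minimal sketch with the DataStax C# driver, using made-up contact point, keyspace and table names:

    using System;
    using Cassandra;   // DataStax C# driver (CassandraCSharpDriver package)

    class TimestampExample
    {
        static void Main()
        {
            var cluster = Cluster.Builder()
                .AddContactPoint("10.0.0.1")          // placeholder contact point
                .Build();
            var session = cluster.Connect("my_keyspace");

            // Setting the write timestamp explicitly makes the ordering visible and
            // controllable; note that UtcNow still reads the local clock, so clock
            // synchronization (NTP) remains the real fix for skew between machines.
            var stmt = new SimpleStatement("DELETE FROM my_table WHERE id = ?", 42)
                .SetTimestamp(DateTimeOffset.UtcNow);

            session.Execute(stmt);
        }
    }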