I renamed my microservice, what do I do with the semantic version?

I have a number of microservices in a distributed system, one of which I recently renamed to better reflect its bounded context and to disambiguate it from another, similarly named service.
The service was on version 3.1.0 at the point of renaming. My question is, what do I do with the version now? Is it 4.0.0? Or is this conceptually now a new service, replacing the old one and starting again from 1.0.0?
I would lean towards the latter option, but I'm also versioning the db schema to match the service, and I don't want to end up in the position where the service is 1.0.0 but the db schema is 3.1.0...

You should bump to version 4.0.0. The version is not tied to the name of the service; it is a hint to the service's history and lineage. In this example, even though the way the executable is invoked has changed, the lineage of the database is intact, and you want to preserve the history that the previous versions existed.
Bumping the major version already signals to all users that incompatible changes have occurred, so no one will pick up the renamed service by accident.
Semantic versioning is deliberately underspecified so that its core tenets can still be followed even when people disagree on cases like this one: it only specifies what you must do, not what you should do.

Related

What semantic versioning number has to increase when a REST API doesn't change but the code behind it has breaking changes?

When using semantic versioning, what if the REST API endpoints of a backend (used by a frontend application) do NOT change, but the code inside the codebase DOES change in a backwards-incompatible way (a breaking change)? Does this mean the major number has to increase, or the minor?
Also, suppose there is a bug, and fixing it requires adding a new parameter to a function that is used in multiple places in the codebase (which, if I'm correct, makes it a breaking change). Does the major version have to increase even though it is a bug fix (which would normally only bump the patch version)?
PS: we want to use semantic versioning in a small development team (5 developers). The projects are for paying clients, but we want to follow a good standard for making new releases and for judging the impact of a release from its version number. I don't think this changes how we have to use semantic versioning, though (feel free to correct me if I'm wrong here).

What is the best way to resolve CouchDB document conflicts across 2 DB instances?

I have an application running on NodeJS and I am trying to make it a distributed app. All write requests go to the Node application, which writes to CouchDB A and, on success of that, writes to CouchDB B. We read data through an ELB (which reads from the two DBs). It's working fine.
But I recently hit a problem: CouchDB B went down, and after it came back up there is now a document _rev mismatch between the two instances.
What would be the best approach to resolve the above scenario without any downtime?
If your CouchDB A & CouchDB B are in the same data centre, then @Flimzy's suggestion of using CouchDB 2.0 in a clustered deployment is a good one. You can have n CouchDB nodes configured in a cluster with a load balancer sitting above the cluster, delivering HTTP(S) traffic to any node that is "up".
If A & B are geographically separated, you can use CouchDB Replication to move data from A-->B and B-->A which would keep both instances perfectly in sync. A & B could each be clusters of 3 or more CouchDB 2.0 nodes, or single instances of CouchDB 1.7.
None of these solutions will "fix" the problem you are seeing when two copies of the database are modified in different ways at the same time. This "conflict" state is CouchDB's way of preventing data loss when two writes clash. Your app can resolve the conflict by picking a winning revision or writing a new one (see the sketch below). It's not a fault condition; it's CouchDB helping your application avoid losing data when concurrent writes collide in a distributed system.
You can read more about document conflicts in this blog post series.
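To make the "pick a winning revision" step concrete, here is a minimal sketch of the HTTP flow, written in Java purely for illustration (the application in the question is NodeJS, but the CouchDB conflict API is plain HTTP, so the same requests apply). The database URL, document id, and the hard-coded losing revision are placeholders; a real implementation would parse the _conflicts array out of the JSON response.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: resolve a conflicted CouchDB document by keeping the current winning
// revision and deleting every revision listed in _conflicts.
public class ResolveConflict {
    public static void main(String[] args) throws Exception {
        String dbUrl = "http://localhost:5984/mydb";   // placeholder
        String docId = "some-doc-id";                  // placeholder
        HttpClient http = HttpClient.newHttpClient();

        // 1. Fetch the document together with its conflicting revisions.
        HttpResponse<String> doc = http.send(
                HttpRequest.newBuilder(URI.create(dbUrl + "/" + docId + "?conflicts=true"))
                        .GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(doc.body()); // contains "_conflicts": ["2-...", ...] if conflicted

        // 2. Decide on a winner (here we simply keep the revision CouchDB already
        //    picked) and delete each losing revision. Deleting the losers is what
        //    removes the conflict; alternatively, merge the bodies and PUT a new
        //    winning revision. In practice, parse the JSON (e.g. with Jackson)
        //    instead of using this hard-coded placeholder.
        for (String losingRev : new String[] { "2-abc123" /* placeholder from _conflicts */ }) {
            HttpResponse<String> del = http.send(
                    HttpRequest.newBuilder(URI.create(dbUrl + "/" + docId + "?rev=" + losingRev))
                            .DELETE().build(),
                    HttpResponse.BodyHandlers.ofString());
            System.out.println("deleted " + losingRev + ": " + del.statusCode());
        }
    }
}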
If both of your 1.6.x nodes are syncing their databases using standard replication, turning off one node shouldn't be an issue. When the node comes back up it receives all the updates without creating conflicts, because there was no way to create them while the node was down.
If you experience conflicts during normal operation, unfortunately there is no general way to resolve them automatically. However, in most cases you can find a strategy of marking the affected document subtrees in a way that lets you determine which version is the most recent (or the more important) one.
To detect docs that have conflicts you can use standard views: a doc passed to a view's map function has a _conflicts property if conflicting revisions exist, so an appropriate view lets you find conflicted docs and merge them (see the sketch below). In any case, regardless of how you detect conflicts, you need external code to resolve them.
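A minimal sketch of such a detection view, again in Java over plain HTTP for illustration only; the database URL and the _design/conflicts name are arbitrary placeholders, and the map function simply checks doc._conflicts as described above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: install a view that emits every document that currently has
// conflicting revisions, then query it.
public class ConflictsView {
    public static void main(String[] args) throws Exception {
        String dbUrl = "http://localhost:5984/mydb"; // placeholder
        HttpClient http = HttpClient.newHttpClient();

        // The map function is ordinary CouchDB JavaScript: it emits the list of
        // conflicting revisions for any doc whose _conflicts property is set.
        String designDoc = "{ \"views\": { \"conflicts\": { \"map\":"
                + " \"function(doc) { if (doc._conflicts) { emit(doc._id, doc._conflicts); } }\" } } }";

        http.send(HttpRequest.newBuilder(URI.create(dbUrl + "/_design/conflicts"))
                        .header("Content-Type", "application/json")
                        .PUT(HttpRequest.BodyPublishers.ofString(designDoc)).build(),
                HttpResponse.BodyHandlers.ofString());

        // Query the view; each row identifies a document that needs external resolution code.
        HttpResponse<String> rows = http.send(
                HttpRequest.newBuilder(URI.create(dbUrl + "/_design/conflicts/_view/conflicts"))
                        .GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(rows.body());
    }
}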
If your conflicting data is numeric by nature, consider using CRDT structures and standard map/reduce to obtain the final value. If your data is text-like you can also try CRDTs, but to obtain reasonable performance you need to use reducers written in Erlang.
As for 2.x: I do not recommend using 2.x for your case (actually, for any real case except experiments). First, using 2.x will not remove conflicts, so it does not solve your problem. Also, taking into account that 2.x requires a lot of poorly documented manual operations across nodes and is unable to rebalance, you will get more pain than value.
BTW, using any cluster solution makes very little sense for just two nodes.
As for the above-mentioned CVE-2017-12635 and CouchDB 1.6.x: you can apply this patch https://markmail.org/message/kunbxk7ppzoehih6 to cover the vulnerability.

Configuring Distributed Objects Dynamically

I'm currently evaluating using Hazelcast for our software. I'd be glad if you could help me elucidate the following.
I have one specific requirement: I want to be able to configure distributed objects (say maps, queues, etc.) dynamically. That is, I can't have all the configuration data at hand when I start the cluster. I want to be able to initialise (and dispose) services on-demand, and their configuration possibly to change in-between.
The version I'm evaluating is 3.6.2.
The documentation I have available (Reference Manual, Deployment Guide, as well as the "Mastering Hazelcast" e-book) is very skimpy on details w.r.t. this subject, and even partially contradictory.
So, to clarify an intended usage: I want to start the cluster; then, at some point, create, say, a distributed map structure, use it across the nodes; then dispose it and use a map with a different configuration (say, number of backups, eviction policy) for the same purposes.
The documentation mentions, and this is to be expected, that bad things will happen if nodes have different configurations for the same distributed object. That makes perfect sense and is fine; I can ensure that the configs will be consistent.
Looking at the code, it would seem possible to do what I intend: when creating a distributed object, if it doesn't already have a proxy, the HazelcastInstance will look at its Config to create a new one and store it in its local list of proxies. When that object is destroyed, its proxy is removed from the list. On the next invocation, it would be reloaded from the Config. Furthermore, that Config is writeable, so if it has been changed in between, the changes should be picked up.
So this would seem like it should work, but given how silent the documentation is on the matter, I'd like some confirmation.
Is there any reason why the above shouldn't work?
If it should work, is there any reason not to do the above? For instance, are there plans to change the code in future releases in a way that would prevent this from working?
If so, is there any alternative?
Changing the configuration on the fly for an already created distributed object is not possible with the current version, though there is a plan to add this feature in a future release. Once created, the map configs stay at the node level, not at the cluster level.
As long as you are creating the distributed map fresh from the config, using it, and then destroying it, your approach should work without any issues.
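For reference, a minimal sketch of the create / use / destroy / reconfigure / recreate cycle described above, against the 3.6.x Java API. The map name and the particular settings (backup count, eviction policy) are illustrative only, and, as noted in the question, the same config change has to be applied consistently on every node.

import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

// Sketch: a map is created from the member's MapConfig, used, destroyed, and
// then recreated after the (writable) config has been changed.
public class DynamicMapConfig {
    public static void main(String[] args) {
        Config config = new Config();
        config.addMapConfig(new MapConfig("orders").setBackupCount(1));
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        // First incarnation of the map, built from the current MapConfig.
        IMap<String, String> orders = hz.getMap("orders");
        orders.put("order-1", "pending");
        orders.destroy(); // drops the proxy (and the data) cluster-wide

        // Mutate the member's config before the map is created again.
        // This must be done the same way on every node.
        hz.getConfig().addMapConfig(new MapConfig("orders")
                .setBackupCount(2)
                .setEvictionPolicy(EvictionPolicy.LRU));

        // The next getMap() builds a fresh proxy, which picks up the new MapConfig.
        IMap<String, String> ordersAgain = hz.getMap("orders");
        ordersAgain.put("order-2", "pending");

        hz.shutdown();
    }
}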

iOS Core Data lightweight migration in new version

I have an app with multiple updates already on the App Store. A funny thing happened: I thought that lightweight migration happens automatically; however, my recent discovery that I need to add
NSDictionary *storeOptions = @{NSMigratePersistentStoresAutomaticallyOption: @YES, NSInferMappingModelAutomaticallyOption: @YES};
to my persistentStoreCoordinator shook my confidence when I realized I already have 5 Core Data models.
The question is: when I add the above line in the next version of the app, is it going to work for everyone when they update? Because right now, everything that happens when they open the app ... is a fancy CRASH.
Thx
It will work if automatic lightweight migration is possible for the migration you're trying to perform. Whether this will work depends on the differences between the model used for the existing data and the current version of the model. Many common changes permit automatic lightweight migration but not all. You'll need to review the docs on this kind of migration and decide whether it will work in your case.
If it doesn't work, there are other ways to handle it, for example by creating a mapping file to tell Core Data how to make changes that it can't infer automatically.

core data NSPersistentStore issue

I am developing an application that is rolled out in stages. Each sprint brings database changes, so Core Data migration has been implemented. So far we have had 3 staged releases. Whenever the upgrades are done successively, the application runs fine. But whenever I try to upgrade directly from version 1 to version 3, an 'unable to add persistent store' error occurs. Can someone help me with this issue?
Core Data migration does not have a concept of versions as you would expect them. As far as Core Data is concerned there are only two versions, the version of the NSPersistentStore and the version you are currently using.
To use lightweight migration, you must test every version of your store and make sure that it will migrate to the current version directly. If it does not then you cannot use lightweight migration for that specific use case and you either need to develop a migration model or come up with another solution.
Personally, on iOS, I avoid heavy migration as it is very expensive in terms of memory and time. If I cannot use a lightweight migration, I will most often explore export/import solutions (exporting to JSON, for example, and importing into the new model) or look at refreshing the data from the server.
My problem was that I was trying to change an attribute's data type during automatic lightweight migration, and automatic lightweight Core Data migration does not support data type changes. I resolved the issue by resetting the data type to the older one.
