We are implementing a Puppet module for a storage subsystem. We are writing our own types and providers, and we will have types like volume, host, etc. related to the storage subsystem.
We have made our types ensurable, and creation and deletion are working fine.
Our question is: how do we implement modification of an existing resource?
Suppose a volume resource has been created and now I want to change the expiration hours of the volume. How do I implement this in my provider?
Is it by creating a new ensure value like modify, or is there some other way?
how to implement the modification of an existing resource? Suppose a volume resource has been created and now I want to change the expiration hours of the volume, how do I implement this in my provider? Is it by creating a new ensure value like modify or is there some other way?
No, you do not create a special ensure value. That would be hard to work with, because it would require that your manifests be aware of whether the resource needs to be created. Remember always that your manifests describe the target state of each resource, irrespective (to a first approximation) of their current state or even whether they exist.
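For example, a manifest only declares the desired state; converging whatever currently exists toward that state is the provider's job. The type and attribute names below simply follow the question:

    volume { 'vol01':
      ensure           => present,
      expiration_hours => 24,   # changing this value later should just update the existing volume
    }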
The custom type documentation is a little vague here, however, because the implementation is basically open. You can do whatever makes sense for you. But there are two particularly common models:
The provider's property setter methods (also) modify the physical resource's properties if they are out of sync, on a property-by-property basis.
The provider implements flushing, so resource properties are synchronized with the system, directly or indirectly, by the provider's flush method.
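Here is a minimal sketch of the second (flush-based) model for a hypothetical volume provider; StorageApi is a placeholder for whatever client library talks to your device. In the first model you would simply perform the modification directly inside the expiration_hours= setter instead of recording it for flush.

    Puppet::Type.type(:volume).provide(:storage_api) do
      desc 'Illustrative provider sketch; StorageApi is a placeholder for your own client.'

      def initialize(value = {})
        super(value)
        @property_flush = {}
      end

      def exists?
        StorageApi.volume_exists?(resource[:name])
      end

      def create
        StorageApi.create_volume(resource[:name],
                                 expiration_hours: resource[:expiration_hours])
      end

      def destroy
        StorageApi.delete_volume(resource[:name])
      end

      # Getter: report the current value from the device.
      def expiration_hours
        StorageApi.get_volume(resource[:name])[:expiration_hours]
      end

      # Setter: Puppet calls this only when the property is out of sync.
      # Here we just record the desired value; model 1 would apply it immediately.
      def expiration_hours=(value)
        @property_flush[:expiration_hours] = value
      end

      # Model 2: push all recorded changes to the device in one call.
      def flush
        StorageApi.modify_volume(resource[:name], @property_flush) unless @property_flush.empty?
        @property_flush = {}
      end
    end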
Related
Some existing resources will be re-created if parameters are changed; one example is ebs_block_device, which will even re-create the EC2 instance on AWS if you change e.g. the volume_size parameter.
Is there a list of such Terraform resources/parameters, so that we can use them carefully?
In Terraform's model, the decision about whether a particular change can be made using an in-place update or whether it requires replacing the whole object is made dynamically by the provider during the planning step.
Unfortunately, that means that there isn't any way to systematically enumerate all of the kinds of changes that might cause each of those results: those rules are represented as executable code inside the provider plugin, rather than as declarative metadata in the static provider schema (which you can see with the terraform providers schema command).
Although it's true that in many cases any change to a particular argument will require replacement, Terraform is designed to allow providers to implement more specific rules if necessary, such as when a remote system allows upgrading a database engine to a newer version in-place but requires replacement to downgrade the same database engine. A provider implements that during the planning step for that resource type, by comparing the previous value with the new value to determine whether the new value is "older" than the previous value, following whatever rules the remote system requires.
Because of that, the only totally reliable way to determine whether a particular change will require replacement is to try making the change and run terraform plan to see how the provider proposes to implement it.
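As an illustration only (the resource and attribute names are placeholders, and the exact formatting varies by Terraform version and provider), a change that the provider decides to implement by replacement shows up in the plan roughly like this:

      # aws_instance.example must be replaced
    -/+ resource "aws_instance" "example" {
          ~ ebs_block_device {
              ~ volume_size = 8 -> 16 # forces replacement
            }
        }

    Plan: 1 to add, 0 to change, 1 to destroy.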
Sometimes provider developers include details in their own documentation about which changes will require replacement, but in other cases they will assume that you're familiar with the behavior of the underlying API and rely on that API's own documentation to learn about what can be changed with an in-place update.
Global class?
I'd like to use a module in 2 other modules, and use defaults. Then I want to update the module after the application is initialized (and connected to the DB).
How can this be achieved?
Example use case:
The logger module is started with a default configuration. It will fetch a custom one from the database after the database is connected.
The database module uses that same logger (with the default configuration until it gets its configuration from that same database).
In many other languages I could create a class, then use instances of it, and finally update the class (not the instance) with the new configuration. The updated values would then be shared across the instances.
Some ideas that came into my mind:
Maybe I am thinking about it wrong way?
Can I use global variables?
I can use a local shared resource (file for example) to trigger change after startup is completed and connections are established/configuration fetched.
Another problem: How to avoid strong coupling between the modules?
Maybe I am thinking about it wrong way?
Right or wrong isn't really black and white here; it's more about the benefits of modularity.
Can I use global variables?
You can, but you probably shouldn't.
Modularity in Node.js offers all sorts of benefits. Using a global variable creates a global environment dependency that breaks some of the fundamental tenets of modularity.
Instead, it generally makes more sense to create a single module that encapsulates the shared instance that you wish to use. When that module is initialized, it creates the shared instance and stores it locally in its own module level variable. Then, when other modules require() or import this module, it exports that shared instance. In this way you both retain the modularity and all the benefits of it and you get a common, shared instance that everyone who wants to can use.
The only downside? One line of code is required in any module that wants to use the shared resource to import that shared resource. That one line of code helps you retain all the benefits of modularity while still getting access to a shared resource.
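A minimal sketch of that pattern, with hypothetical file names (logger.js, db.js): logger.js builds one shared instance with a default configuration and also exposes a way to reconfigure it once the real settings have been fetched. fetchLoggerConfig is a stand-in for your own DB query.

    // logger.js - creates and exports a single shared logger instance
    const defaults = { level: 'info', destination: 'console' };
    let config = { ...defaults };

    module.exports = {
      log(message) {
        if (config.destination === 'console') {
          console.log(`[${config.level}] ${message}`);
        }
      },
      // call this after the database is connected and the stored config is loaded
      configure(overrides) {
        config = { ...config, ...overrides };
      }
    };

    // db.js - any other module gets the very same instance with one require()
    const logger = require('./logger');

    async function connect() {
      logger.log('connecting to the database...');
      const stored = await fetchLoggerConfig();   // hypothetical query helper
      logger.configure(stored);                   // every module now sees the new settings
    }

    module.exports = { connect };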
I can use a local shared resource (file for example) to trigger change after startup is completed and connections are established/configuration fetched.
It isn't clear what you mean by this. Any modular, shared resource (without globals) as described above can capture a configuration and preserve that configuration change.
How to avoid strong coupling between the modules?
This is indeed one of the reasons to avoid globals: they create strong coupling. Any module that exports or shares (in any way) some shared resource creates some level of coupling. The code using the shared resource has to know what the interface to that resource is, and that cannot be avoided if you want to use it. You can often take advantage of existing interfaces (like EventEmitters) to avoid reinventing a lot of new interface, but the caller still needs to know which common interface is being used and how.
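If you want the coupling even looser, one option (sketched below with hypothetical file names, as three separate files) is to share nothing but an EventEmitter: the database module announces the new configuration as an event, and the logger reacts to it without either module importing the other.

    // config-events.js - the only thing the modules share
    const EventEmitter = require('events');
    module.exports = new EventEmitter();

    // logger.js - listens for configuration changes
    const configEvents = require('./config-events');
    let level = 'info';
    configEvents.on('logger-config', (cfg) => {
      level = cfg.level || level;
    });

    // db.js - announces the configuration once it has been fetched
    const configEvents = require('./config-events');
    async function connect() {
      // ...open the connection, load the stored settings, then:
      configEvents.emit('logger-config', { level: 'debug' });
    }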
We are writing a Puppet module for a networked storage device. We are implementing custom types and providers, and for one of the types we will have, on average, around 40k objects.
Fetching that many resources through pre-fetch and self.instances would be very performance-intensive. Is it mandatory to implement self.instances and pre-fetch methods for a provider? What will we lose if we do not implement them?
Is it mandatory to implement self.instances and pre-fetch methods for a provider?
No, it is not.
What will we lose if we do not implement them?
You'll not be able to use a Resources resource to purge unmanaged resources of the type in question. But with so many resources, I'm inclined to think that you'll be actively managing only a few, and will not want to purge the others anyway.
You'll not be able to use the puppet resource command to enumerate all resources of your custom type.
You'll also need either to retrieve individual resource properties on demand, when the provider's getter methods are invoked, or else to track whether each individual resource's state has been retrieved and have each getter first load the state if needed. Either is doable.
For comparison, consider File resources, which seem conceptually similar to what you have in mind. Puppet makes no attempt to prefetch all File resources of a given system, and there is no practical way it could do so.
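A sketch of the on-demand approach (no self.instances and no self.prefetch), again with a hypothetical StorageApi client: each resource queries the device for its own state the first time a getter needs it, and memoizes the result for the rest of the run.

    Puppet::Type.type(:volume).provide(:storage_api) do
      def exists?
        !current_state.nil?
      end

      def expiration_hours
        current_state[:expiration_hours]
      end

      def expiration_hours=(value)
        StorageApi.modify_volume(resource[:name], expiration_hours: value)
      end

      def create
        StorageApi.create_volume(resource[:name],
                                 expiration_hours: resource[:expiration_hours])
      end

      def destroy
        StorageApi.delete_volume(resource[:name])
      end

      private

      # Ask the device about this one volume only, and only once per run.
      def current_state
        return @current_state if defined?(@current_state)
        @current_state = StorageApi.get_volume(resource[:name]) # nil if it does not exist
      end
    end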
I'm currently evaluating Hazelcast for use in our software. I'd be glad if you could help me clarify the following.
I have one specific requirement: I want to be able to configure distributed objects (say maps, queues, etc.) dynamically. That is, I can't have all the configuration data at hand when I start the cluster. I want to be able to initialise (and dispose) services on-demand, and their configuration possibly to change in-between.
The version I'm evaluating is 3.6.2.
The documentation I have available (the Reference Manual, the Deployment Guide, as well as the "Mastering Hazelcast" e-book) is very skimpy on details w.r.t. this subject, and even partially contradictory.
So, to clarify an intended usage: I want to start the cluster; then, at some point, create, say, a distributed map structure, use it across the nodes; then dispose it and use a map with a different configuration (say, number of backups, eviction policy) for the same purposes.
The documentation mentions, and this is to be expected, that bad things will happen if nodes have different configurations for the same distributed object. That makes perfect sense and is fine; I can ensure that the configs will be consistent.
Looking at the code, it would seem to be possible to do what I intend: when creating a distributed object, if it doesn't already have a proxy, the HazelcastInstance will go look at its Config to create a new one and store it in its local list of proxies. When that object is destroyed, its proxy is removed from the list. On the next invocation, it would go reload from the Config. Furthermore, that config is writeable, so if it has been changed in-between, it should pick up those changes.
So this would seem like it should work, but given how silent the documentation is on the matter, I'd like some confirmation.
Is there any reason why the above shouldn't work?
If it should work, is there any reason not to do the above? For instance, are there plans to change the code in future releases in a way that would prevent this from working?
If so, is there any alternative?
Changing the configuration on the fly for an already created distributed object is not possible with the current version, though there is a plan to add this feature in a future release. Once created, the map configs stay at the node level, not at the cluster level.
As long as you create the distributed map fresh from the config, use it, and then destroy it, your approach should work without any issues.
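A rough Java sketch of that create, use, destroy, re-configure cycle against a single embedded member (the map name and settings are arbitrary, and the behaviour is worth verifying against your 3.6.2 setup):

    import com.hazelcast.config.MapConfig;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;

    public class DynamicMapConfigSketch {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // First incarnation: one synchronous backup.
            hz.getConfig().addMapConfig(new MapConfig("work").setBackupCount(1));
            IMap<String, String> map = hz.getMap("work");  // proxy built from the current config
            map.put("k", "v");
            map.destroy();                                 // drops the proxy (and its data)

            // Change the config, then recreate the structure under the same name.
            hz.getConfig().addMapConfig(
                    new MapConfig("work").setBackupCount(2).setTimeToLiveSeconds(3600));
            IMap<String, String> reconfigured = hz.getMap("work");  // new proxy, new config
            reconfigured.put("k", "v");

            hz.shutdown();
        }
    }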
I am exploring the notion of using Hazelcast (or another caching framework) to advertise services within a cluster. Ideally, when a cluster member departs, its services (or the objects advertising them) should be removed from the cache.
Is this at all possible?
It is possible for sure.
The question is which solution you prefer.
If the services can be stored in a map, you could create a map with a TTL of, say, a few minutes, and have each member refresh its own entries to prevent its services from expiring.
An alternative is to listen for member changes using a MembershipListener and, once a member leaves, remove from the map the services that belong to that member (see the sketch after this answer).
If you don't like either of these, you could create your own SPI-based implementation. The SPI is the lower-level infrastructure Hazelcast uses to build its distributed data structures. It is a lot more work, but also gives a lot more flexibility.
So there are many solutions.
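A hedged Java sketch of the map-based options (the map name, keys, and endpoint values are made up): entries are written with a TTL and refreshed by their owning member, and a MembershipListener additionally removes a departed member's entries right away.

    import java.util.concurrent.TimeUnit;

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;
    import com.hazelcast.core.MemberAttributeEvent;
    import com.hazelcast.core.MembershipEvent;
    import com.hazelcast.core.MembershipListener;

    public class ServiceRegistrySketch {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            final IMap<String, String> services = hz.getMap("services");
            String memberUuid = hz.getCluster().getLocalMember().getUuid();

            // Option 1: advertise with a TTL; the owning member re-puts this entry periodically.
            services.put(memberUuid + "/billing", "http://10.0.0.5:8080", 2, TimeUnit.MINUTES);

            // Option 2: clean up a departed member's entries as soon as it leaves.
            hz.getCluster().addMembershipListener(new MembershipListener() {
                @Override
                public void memberRemoved(MembershipEvent event) {
                    String gone = event.getMember().getUuid();
                    for (String key : services.keySet()) {
                        if (key.startsWith(gone + "/")) {
                            services.remove(key);
                        }
                    }
                }

                @Override
                public void memberAdded(MembershipEvent event) {
                }

                @Override
                public void memberAttributeChanged(MemberAttributeEvent event) {
                }
            });
        }
    }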