YUI config object - yui

I noticed reading the YUI documentation that it says:
YUI ( o* )
Parameters:
o* Up to five optional configuration objects. This object is
stored in YUI.config. See config for
the list of supported properties.
What would be the reasoning behind limiting it to 5 configuration objects? There appear to be more than 5 possibilities when looking at the config class, so why the limit?

That means it supports YUI({ someConfig: value }, { anotherConfig: value }, { aThird: config }, { aFourth: config }, { andFinally: aFifth });
Each object can contain any amount of configuration. The constructor supports multiple config objects for flexibility in larger systems, but is limited to five for code size maintenance. It's very unlikely implementers will use even a second config object since more complex apps can simply build up a single config object prior to YUI instantiation.
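For example, a minimal sketch of the single-object approach (the config keys shown are just common YUI 3 settings used for illustration):
// Build up one config object ahead of time, then pass it to YUI once.
var config = {
    debug: true,
    fetchCSS: false
};
YUI(config).use('node', function (Y) {
    // ... application code ...
});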
If a real need arises to have more than five, a reasonably justified feature request could be submitted to raise the limit. Personally, I don't see this happening.

What's the strategy for implementing an attribute of type schema.TypeMap in Terraform Provider SDKv2?

Context: we're developing a TF Provider using TF Provider SDKv2.
Consider a resource that has an attribute of type schema.TypeMap that should support updates. Semantically, it represents a set of settings for a resource.
"settings": {
Type: schema.TypeMap,
Optional: true,
ForceNew: false,
Elem: schema.TypeString,
},
resource "tv" "example" {
...
settings = {
"brightest" = "23"
"contrast" = "56"
}
}
Now let's say tv has 50 settings, each with a default value, and we don't want to make a user specify all 50; we only want a user to specify the settings they want to override (for example, the default value for contrast is 50 but the user wants it to be 56, which is why they added "contrast" = "56" under the settings attribute).
We can see 2 possible implementation options that would support Update for the `settings` attribute:
Don't use DiffSuppressFunc and store only the overridden settings in TF state. Since hardcoding the default setting values in the client (the TF Provider) is undesirable, this option requires the API to return overridden = true or something similar to indicate whether a value was overridden or left at its default.
Save all 50 settings in TF state and use DiffSuppressFunc to suppress the diff between the mostly empty tv.example.settings in main.tf and the 50 settings saved in TF state. However, in this scenario it's a little bit unclear how Update would be implemented.
Which option is used typically?
The only example I've found is airflow_config_overrides attribute from GCP TF Provider which solves this issue in a different way.
In order to create predictable behavior when arguments to one resource use values derived from arguments to another, Terraform has some specific constraints on what providers are allowed to do with attributes during plan and apply which I'll summarize here:
If an argument is set (present and not null) in the configuration, the provider must set the value to exactly what the module author specified when generating a plan.
If an argument is known (not "known after apply") at the planning phase, then the final value during the apply phase must exactly match the value from the planning phase.
(There are more details in the internal documentation Resource Instance Change Lifecycle, but that is intended to be read by developers of SDKs rather than developers of providers using the SDK, so it uses some terminology that the SDK normally abstracts away.)
Since you are defining settings as a single argument of a map type, the rules above apply to that argument as a whole, rather than to individual elements. Therefore it is not possible to mix both author-provided keys and provider-defaulted keys in the same map. Terraform prohibits this so that a module author can write tv.example.settings somewhere else in their module and know it will evaluate to exactly the value they wrote.
However, if you declare an argument that cannot be set in the configuration -- Computed: true without Optional: true -- then your provider can freely set that argument to whatever it wants as long as it still upholds the second rule above, of consistency between plan and apply.
Therefore the compromise I would suggest here is to declare a pair of attributes where one is author-settable and the other one is not:
"settings": {
Type: schema.TypeMap,
Optional: true,
Elem: schema.TypeString,
},
"all_settings": {
Type: schema.TypeMap,
Computed: true,
Elem: schema.TypeString,
},
(all_settings might not be the best name. Other names I considered were final_settings, combined_settings, etc.)
With this approach, you can set a value for all_settings either during plan (CustomizeDiff) or apply (Create and Update), depending on whether you have all of the information you need during planning or not.
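For illustration, here is a rough sketch of the apply-time variant (resourceTvUpdate, defaultSettings, and the remote API call are hypothetical names, not something from your provider; it assumes the provider knows the defaults):
package tv

import (
    "context"

    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Hypothetical table of provider-known defaults for every setting.
var defaultSettings = map[string]string{
    "contrast": "50",
    // ... remaining defaults ...
}

func resourceTvUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
    // Start from the defaults, then overlay whatever the module author set.
    all := make(map[string]interface{}, len(defaultSettings))
    for k, v := range defaultSettings {
        all[k] = v
    }
    for k, v := range d.Get("settings").(map[string]interface{}) {
        all[k] = v
    }

    // ... call the remote API with the merged map here ...

    // Record the merged result in the Computed-only attribute.
    if err := d.Set("all_settings", all); err != nil {
        return diag.FromErr(err)
    }
    return nil
}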
The one operation that will require some extra special care with this design is the one the SDK calls Read, because in that case you'll need to decide whether and how to incorporate potential changes made outside of Terraform into the settings value. If you don't update it at all then Terraform will not detect and "repair" changes made outside of Terraform, because it will not see the settings as having changed.
Given the intended meaning you described in your question, I can imagine two possible implementations depending on what the provider has access to:
If the provider itself knows what all of the default values are, Read could first update all_settings and then copy from there to settings the subset of values that aren't the defaults.
This is the most robust solution because it allows Terraform to recognize when a change outside of Terraform has changed one of the settings from its default to a non-default value, and so to propose to reconcile it by "removing" that setting. (That is, it will show in Terraform's UI as a removal, but your implementation will internally interpret that as "set it back to the default".)
If only the remote system knows the defaults and the provider cannot know them then I would probably compromise by first updating all_settings and then copying into settings only the subset of keys that match what was previously set in settings.
This will allow Terraform to detect when a setting the author configured was changed outside of Terraform, but it won't detect situations where a setting that wasn't managed in Terraform (i.e. originally set to the default) changes to a different value.
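Sketching the first of those options (reusing the hypothetical defaultSettings table from the earlier sketch; readTvSettings stands in for whatever client call fetches the remote settings):
func resourceTvRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
    // Hypothetical client call returning the full remote settings map.
    remote, err := readTvSettings(ctx, meta, d.Id())
    if err != nil {
        return diag.FromErr(err)
    }

    all := make(map[string]interface{}, len(remote))
    overridden := make(map[string]interface{})
    for k, v := range remote {
        all[k] = v
        // Only values that differ from the known default are surfaced in the
        // author-facing settings map.
        if def, ok := defaultSettings[k]; !ok || def != v {
            overridden[k] = v
        }
    }

    if err := d.Set("all_settings", all); err != nil {
        return diag.FromErr(err)
    }
    if err := d.Set("settings", overridden); err != nil {
        return diag.FromErr(err)
    }
    return nil
}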
If you don't foresee a need for module authors to refer to the settings they didn't explicitly set elsewhere in the module (that is, if they are unlikely to want to write tv.example.all_settings somewhere) then you could potentially just ignore this part entirely, in which case this reduces to what you described as option 1, with the exception that the question would not be whether the author set the value to the default or not but instead whether the author set the value at all. In that case, you're essentially saying that setting a particular setting in the map represents an intent for Terraform to manage that setting, and leaving a setting unset represents the intent to let it be chosen by the server and arbitrarily drift in future.
Any other option you can think of which conforms to the requirements I summarized at the start of this answer would be fine too. However, I don't think that your option 2 meets those requirements because it suggests that settings would have a different value after apply than was set in the configuration, which would violate at least one of the requirements depending on whether you deal with it during the planning phase or the apply phase.

Additive deserializing with Serde

I'd like to additively deserialize multiple files over the same data structure, where "additively" means that each new file deserializes by overwriting the fields that it effectively contains, leaving unmodified the ones that it does not. The context is config files; deserialize an "app" config provided by the app, then override it with a per-"user" config file.
I use "file" hear for the sake of clarity; this could be any deserializing data source.
Note: After writing the below, I realized maybe the question boils down to: is there a clever use of #[serde(default = ...)] to provide a default from an existing data structure? I'm not sure if that's (currently) possible.
Example
Data structure
struct S {
    x: f32,
    y: String,
}
"App" file (using JSON for example):
{ "x": 5.0, "y": "app" }
"User" file overriding only "y":
{ "y": "user" }
Expected deserializing (app, then user):
assert_eq!(s.x, 5.0);
assert_eq!(s.y, "user");
Expected solution
I'm ignoring on purpose any "dynamic" solution storing all config settings into, say, a single HashMap; although this works and is flexible, this is fairly inconvenient to use at runtime, and potentially slower. So I'm calling this approach out of scope for this question.
The data structure can contain other structs. Avoid having to write too much per-struct code manually (like implementing Deserialize by hand). A typical config file for a moderate-sized app can contain hundreds of settings; I don't want the burden of having to maintain all of that.
All fields can be expected to implement Default. The idea is that the first deserialized file would fallback on Default::default() for all missing fields, while subsequent ones would fallback on already-existing values if not explicitly overridden in the new file.
Avoid having to change every single field of every single struct to Option<T> just for the sake of serializing/deserializing. This would make runtime usage very painful, and due to the property above there would never be a None value anyway once deserialization completed (since, if a field is missing from all files, it defaults to Default::default()).
I'm fine with a solution containing only a fixed number (2) of overriding files ("app" and "user" in example above).
Current partial solution
I know how to do the first part of falling back to Default; this is well documented. Simply use #[serde(default)] on all structs.
One approach would be to simply deserialize twice with #[serde(default)] and override any field which is equal to its default in the app config with its value in the user config. But this 1) probably requires all fields to implement Eq or PartialEq, and 2) is potentially expensive and not very elegant (lose the info during deserialization, then try to somehow recreate it).
I have a feeling I possibly need a custom Deserializer to hold a reference/value of the existing data structure, which I would fallback to when a field is not found, since the default one doesn't provide any user context when deserializing. But I'm not sure how to keep track of which field is currently being deserialized.
Any hint or idea much appreciated, thanks!
Frustratingly, serde::Deserialize has a method called deserialize_in_place that is explicitly omitted from docs.rs and is considered "part of the public API but hidden from rustdoc to hide it from newbies". This method does exactly what you're asking for (deserialize into an existing &mut T object), especially if you implement it yourself to ensure that only provided keys are overridden and other keys are ignored.
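As a minimal sketch of the "implement it yourself" route (assuming serde and serde_json; the hand-rolled visitor below merges a JSON object into an existing S, touching only the keys that are actually present):
use serde::de::{Deserializer, IgnoredAny, MapAccess, Visitor};
use std::fmt;

#[derive(Debug, Default)]
struct S {
    x: f32,
    y: String,
}

// Visitor that writes into an existing S instead of building a new one.
struct SMergeVisitor<'a>(&'a mut S);

impl<'de, 'a> Visitor<'de> for SMergeVisitor<'a> {
    type Value = ();

    fn expecting(&self, f: &mut fmt::Formatter) -> fmt::Result {
        f.write_str("a map with optional keys `x` and `y`")
    }

    fn visit_map<A: MapAccess<'de>>(self, mut map: A) -> Result<(), A::Error> {
        while let Some(key) = map.next_key::<String>()? {
            match key.as_str() {
                // Only keys present in the input overwrite existing fields.
                "x" => self.0.x = map.next_value()?,
                "y" => self.0.y = map.next_value()?,
                _ => { map.next_value::<IgnoredAny>()?; }
            }
        }
        Ok(())
    }
}

fn merge_from_json(s: &mut S, json: &str) -> Result<(), serde_json::Error> {
    let mut de = serde_json::Deserializer::from_str(json);
    de.deserialize_map(SMergeVisitor(s))
}

fn main() {
    let mut s = S::default();
    merge_from_json(&mut s, r#"{ "x": 5.0, "y": "app" }"#).unwrap(); // "app" file
    merge_from_json(&mut s, r#"{ "y": "user" }"#).unwrap();          // "user" file
    assert_eq!(s.x, 5.0);
    assert_eq!(s.y, "user");
}
This is, of course, per-struct code; a hand-written deserialize_in_place involves the same kind of boilerplate, so for a config with many nested structs you would likely want to generate it (for example with a macro of your own).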

Shall I set an empty string computed string attribute for Terraform resource?

context: I'm adding a new resource to TF Provider.
I've got an API that optionally returns a string attribute, so I represent it as:
"foo": {
Type: schema.TypeString,
Computed: true,
Optional: true,
},
Question: if the API returns no value / an empty string for response.foo, shall I still set an empty string for the foo attribute in my resource schema, or should I not set any value at all (e.g., null)?
(Hello! I'm the same person who wrote the answer you included in your screenshot.)
If both approaches -- returning null or returning an empty string -- were equally viable from a technical standpoint then I would typically prefer to use null to represent the absence of a value, since that is clearly distinct from an empty string which for some situations would otherwise be a valid present value for the attribute.
However, since it seems like you are using the old SDK ("SDKv2") here, you will probably be constrained from a technical standpoint: SDKv2 was designed for Terraform v0.11 and earlier and so it predates the idea of attributes being null and so there is no way in its API to specify that. You may be able to "trick" the SDK into effectively returning null by not calling d.Set("foo", ...) at all in your Create function, but there is no API provided to unset an attribute and so once you've set it to something non-null there would typically be no way to get it to go back to being null again.
Given that, I'd suggest it's better to be consistent and always use "" when using the old SDK, because that way users of the provider won't have to deal with the inconsistency of the value sometimes being null and sometimes being "" in this case.
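A minimal sketch of that convention (assuming SDKv2's helper/schema; the package name and the respFoo parameter are placeholders for whatever your provider and API client use):
package provider

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

// setFoo normalizes an absent API value to "" so the attribute is always a
// known empty string rather than sometimes null.
func setFoo(d *schema.ResourceData, respFoo *string) error {
    foo := ""
    if respFoo != nil {
        foo = *respFoo
    }
    return d.Set("foo", foo)
}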
When using the modern Terraform Plugin Framework this limitation doesn't apply, because that framework was designed with the modern Terraform language in mind. You aren't using that framework and so this part of the answer probably won't help you right now, but I'm mentioning it just in case someone else finds this answer in future who might already be using or be considering use of the new framework.

Writing ENV variables to configure an npm module

I currently have a project in a loose ES6 module format and my database connection is hard coded. I want to turn this into an npm module and am now facing the issue of how best to allow the end user to configure the code. My first attempt was to rewrite it as classes to be instantiated, but that made using the code more convoluted than before, so I am looking at alternatives and exploring my configuration options. It looks like writing to process.env would be the way, but I am pondering potential issues, no-nos, and other options I have not considered.
Is having the user write config to process.env an acceptable method of configuring an npm module? It's a bit like a global write, so I am dealing with namespace considerations, for one. I have also considered using package.json, but that's not going to work for things like credentials. Likewise, using an rc file is cumbersome. I have not found any docs on the proper methodology, if any exists.
process.env['MY_COOL_MODULE_DB'] = ...
There are basically 5ish options as I see it:
hardcode - not an option
create a configured scope such as classes - what I have now and bleh
use a config such as node-config - not really a user friendly option for npm
store as globals/env. As suggested in a comment, I can wrap that process in an exported function and thereby ensure that I have a distinct, non-colliding namespace while abstracting that from the end user
Ask the user to create some .rc file - I would if I were big time like AWS, but not in this case.
I mention this npm use case, but this really applies to the general challenge of configuring code that is exported as functions. I have use cases for classes, but when the only need is creating a configured scope at the expense (in my case) of more complex code, I am not sure it's worth it.
Update: I realize this is a bit of a discussion question, but it's helped me wrap my brain around the options. I think something like this:
// options.js
let options = {}
export function setOptions(o) { options = o }
export function getOptions() { return options }
Then have the user call setOptions() and call getOptions() internally. I realize that since Node requires the module just once, my options object will stay configured as I pass it around.
NPM modules should IMO be agnostic as to where configuration is stored. That should be left up to the developer, and they may pick their favorite method (env vars, rc files, JSON files, whatever).
The configuration can be passed to your module in various ways. A common way is to export a function that takes an options object:
export default options => {
    let db = database.connect(options.database);
    // ...
}
From there, it really depends on what exactly your module provides. If it's just a bunch of loosely coupled functions, you can just return an object:
export default options => {
    let db = database.connect(options.database);
    return {
        getUsers() { return db.getUsers() }
    }
}
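Usage from the consumer's side might then look like this (the module name, factory, and environment variable are illustrative; the point is that the caller decides where the configuration comes from):
// app.js (consumer code, in an ES module so top-level await is available)
import createStore from 'my-cool-module';

const store = createStore({
    database: process.env.MY_COOL_MODULE_DB,
});

const users = await store.getUsers();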
If you want to allow multiple versions of that object to exist simultaneously, you can use classes:
class MyClass {
    constructor(options) {
        // ...
    }
    // ...
}

export default options => {
    return new MyClass(options)
}
Or export the entire class itself.
If the number of configuration options is limited (say 3 or less), you can also allow them to be passed as separate arguments, instead of passing an object.

Using Spring Cache with Redis: how to configure Spring Cache Redis as read-only

When I use Spring Cache with Redis, I use it in two apps: one reads and writes, the other only reads. How can I configure this?
I tried doing it like this, but it does not work!
@Cacheable(value = "books", key = "#isbn", condition = "false")
Can anyone help?
You have misunderstood the purpose of the @Cacheable annotation's "condition" attribute. Per the documentation...
If true, the method is cached - if not, it behaves as if the method is
not cached, that is, executed every single time no matter what values
are in the cache or what arguments are used.
The condition attribute just determines whether the cache (e.g. Redis) is consulted first, before executing the (potentially expensive) method. If condition evaluates to false, then the method will always be executed and the result subsequently cached.
In the read-only app, I am assuming you want the cache consulted first, if the value is not in the cache, then execute the method, however, DO NOT cache the result. Is this correct?
If so, then you need only specify the unless attribute instead of the condition attribute, like so...
@Cacheable(value="books", key="#isbn", unless="true")
void someBookMutatingOperation(String isbn, ...) { .. }
If, however, you want to avoid the cacheable method invocation in the read-only (version of the) app altogether and just consult the cache regardless of whether a value actually exists in the cache or not, then your problem is quite a bit more complex/difficult.
Spring's Cache Abstraction operates on the premise that if a value is not in the cache then it will return null to indicate a cache miss, which is then followed by a subsequent method invocation. Only when a cache returns a value for the specified key(s) will the method invocation be avoided.
Without a custom extension (perhaps using (additional) AOP interceptors) there is no way to avoid the OOTB behavior.
I will not elaborate on this latter technique unless your use case requires it.
Hope this helps.
@John Blum
thanks! happy new year.
Your answer inspired me. I have read part of the Spring cache source code: the CacheInterceptor class and the CacheAspectSupport class.
private Object execute(CacheOperationInvoker invoker, CacheOperationContexts contexts) {
    // Process any early evictions
    processCacheEvicts(contexts.get(CacheEvictOperation.class), true, ExpressionEvaluator.NO_RESULT);

    // Check if we have a cached item matching the conditions
    Cache.ValueWrapper cacheHit = findCachedItem(contexts.get(CacheableOperation.class));

    // Collect puts from any @Cacheable miss, if no cached item is found
    List<CachePutRequest> cachePutRequests = new LinkedList<CachePutRequest>();
    if (cacheHit == null) {
        collectPutRequests(contexts.get(CacheableOperation.class), ExpressionEvaluator.NO_RESULT, cachePutRequests);
    }

    Cache.ValueWrapper result = null;

    // If there are no put requests, just use the cache hit
    if (cachePutRequests.isEmpty() && !hasCachePut(contexts)) {
        result = cacheHit;
    }

    // Invoke the method if we don't have a cache hit
    if (result == null) {
        result = new SimpleValueWrapper(invokeOperation(invoker));
    }

    // Collect any explicit @CachePuts
    collectPutRequests(contexts.get(CachePutOperation.class), result.get(), cachePutRequests);

    // Process any collected put requests, either from @CachePut or a @Cacheable miss
    for (CachePutRequest cachePutRequest : cachePutRequests) {
        cachePutRequest.apply(result.get());
    }

    // Process any late evictions
    processCacheEvicts(contexts.get(CacheEvictOperation.class), false, result.get());

    return result.get();
}
I think the cachePutRequest execution should be prevented: if there is no cache hit, the body of the @Cacheable method is invoked but the result should not be cached. Will using unless prevent the method invocation? Is this correct?
@Tonney Bing
First of all, my apologies for misguiding you on my previous answer...
If condition evaluates to false, then the method will always be
executed and the result subsequently cached.
The last part is NOT true. In fact, the condition attribute does prevent the @Cacheable method result from being cached. But neither the condition nor the unless attribute prevents the @Cacheable service method from being invoked.
Also, my code example above was not correct. The unless attribute needs to be set to true to prevent caching of the @Cacheable method result.
After re-reading this section in the Spring Reference Guide, I came to realize my mistake and wrote an example test class to verify Spring's "conditional" caching behavior.
So...
With respect to your business use case, the way I understand it based on your original question and, subsequently, your response to my previous answer, you have a @Cacheable service method whose invocation needs to be suppressed in the read-only app regardless of whether the value is in the cache or not! In other words, the value should always be retrieved from the cache and the @Cacheable service method should NOT be invoked in read-only mode.
Now to avoid polluting your application code with Spring infrastructure component references, and specifically, with a Spring CacheManager, this is a good example of a "cross-cutting concern" (since multiple, mutating-based application service operations may exist) and therefore, can be handled appropriately using AOP.
I have coded such an example satisfying your requirements here.
This is a self-contained test class. The key characteristics of this test class include...
The use of external configuration (by way of the app.mode.read-only System property) to determine if the app is in read-only mode.
The use of AOP and a custom Aspect to control whether the subsequent invocation of the Join Point (i.e. the @Cacheable service method) is allowed (no, in a read-only context). In addition, I appropriately set the order in which the Advice (namely, the @Cacheable-based advice along with the handleReadOnlyMode advice in the UseCacheExclusivelyInReadOnlyModeAspect Aspect) should fire, based on precedence.
Take note of the @Cacheable annotation on the service method...
@Cacheable(value = "Factorials", unless = "T(java.lang.System).getProperty('app.mode.read-only', 'false')")
public Long factorial(long number) { .. }
You can see the intended behavior with the System.err output statements in the test class.
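If the linked test class is not to hand, here is a rough sketch of that kind of aspect (the class name matches the one mentioned above, but the pointcut, cache name, and service method are hypothetical, reusing the books/isbn example from the question rather than the Factorials test):
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.core.Ordered;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class UseCacheExclusivelyInReadOnlyModeAspect implements Ordered {

    private final CacheManager cacheManager;

    public UseCacheExclusivelyInReadOnlyModeAspect(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    // Hypothetical pointcut wrapping the @Cacheable service method.
    @Around("execution(* example.BookService.findBook(..)) && args(isbn)")
    public Object handleReadOnlyMode(ProceedingJoinPoint joinPoint, String isbn) throws Throwable {
        if (!Boolean.getBoolean("app.mode.read-only")) {
            // Normal mode: let the cache interceptor and the method run as usual.
            return joinPoint.proceed();
        }
        // Read-only mode: consult the cache directly and never invoke the method.
        Cache books = cacheManager.getCache("books");
        Cache.ValueWrapper cached = (books != null ? books.get(isbn) : null);
        return (cached != null ? cached.get() : null);
    }

    @Override
    public int getOrder() {
        // Fire before the cache interceptor so it can be short-circuited in read-only mode.
        return Ordered.HIGHEST_PRECEDENCE;
    }
}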
Hope this helps!

Resources