Log4Net: how to configure Threshold at run time?

I need to configure the Threshold dynamically at run time; I might have an integer or a string representation of the level in a .NET variable. Instead of accepting just one of these representations, log4net expects both of them to be specified in the constructor, which seems very strange to me:
public Level(int level, string levelName);
The log4net team did not even bother to explain why; all I found is this:
http://logging.apache.org/log4net/release/sdk/log4net.Core.LevelConstructor1.html
They do not even explain what this int level means. Where can I find a mapping for it? They just say that "higher values represent more severe levels", which is not enough.
I expected to have something like this:
appender.Threshold = new Level(level);
where level might be of string or int type.

The Level class has public static members for predefined levels such as Off, Error, and Warn.
You probably want to set the Threshold property to one of these predefined levels, e.g.:
myAppender.Threshold = Level.Warn;
Alternatively, as described in the documentation, you can set the level to one of the entries in a repository's LevelMap.
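For example, if the level name arrives as a string at run time, a minimal sketch (assuming myAppender is an AppenderSkeleton-derived appender, which is what exposes Threshold) could look like this:

string levelName = "WARN"; // value known only at run time
log4net.Core.Level level = log4net.LogManager.GetRepository().LevelMap[levelName];
if (level != null)
{
    myAppender.Threshold = level;
}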
UPDATE
Do you mean I have to work it out from their order? E.g. does Off map to 0 (or 1)?
Not at all: Off maps to Level.Off.Value; and similarly for the others.

Related

jOOQ - converting nested objects

The problem I have is how to map a jOOQ select query to an object. If I use the default jOOQ mapper, it works, but all fields must be mentioned, and in exact order. If I use SimpleFlatMapper, I have problems with MULTISET.
The problem with simple flat mapper:
class Student {
    private final String id; // type was missing in the original; assuming String to match STUDENT.ID
    Set<String> bookIds;
}
private static final SelectQueryMapper<Student> studentMapper =
        SelectQueryMapperFactory.newInstance().newMapper(Student.class);

var students = studentMapper.asList(
    context.select(
            STUDENT.ID.as("id"),
            multiset(
                select(BOOK.ID).from(BOOK).where(BOOK.STUDENT_ID.eq(STUDENT.ID))
            ).convertFrom(r -> r.intoSet(BOOK.ID)).as("bookIds"))
        .from(STUDENT).where(STUDENT.ID.eq("<id>"))
);
For the bookIds attribute, SimpleFlatMapper returns a set of exactly one String, ["[[book_id_1], [book_id_2]]"], instead of ["book_id_1", "book_id_2"].
As I already mentioned, this works with the default jOOQ mapper, but in my case not all attributes are mentioned in the columns, and it is possible that attributes will be added which are not present in the table.
The question is: is there any way to tell SimpleFlatMapper that the mapping is one-to-one (Set to Set), or to get a default jOOQ mapper that ignores non-matching and out-of-order fields?
Also, what is the best approach in this situation?
Once you start using jOOQ's MULTISET and ad-hoc conversion capabilities, I doubt you still need third parties like SimpleFlatMapper, which I don't think can deserialise jOOQ's internally generated JSON serialisation format (currently an array of arrays, not an array of objects; there's no specification for this format, and it might change in any version).
Just use ad-hoc converters.
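For instance, reusing the query from the question, a fully ad-hoc converted version might look like this (a sketch: it assumes a Student(String id, Set<String> bookIds) constructor, which isn't in the original class):

List<Student> students = context
    .select(
        STUDENT.ID,
        multiset(
            select(BOOK.ID).from(BOOK).where(BOOK.STUDENT_ID.eq(STUDENT.ID))
        ).convertFrom(r -> r.intoSet(BOOK.ID)))
    .from(STUDENT)
    .where(STUDENT.ID.eq("<id>"))
    .fetch(Records.mapping(Student::new)); // org.jooq.Records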
If I use default jooq mapper, it works but all fields must be mentioned, and in exact order
You should see that as a feature, not a bug. It increases type safety and forces you to think about your exact projection, helping you avoid projecting too much data (which would heavily slow down your queries!)
But you don't have to use the programmatic RecordMapper approach that is currently being advocated in the jOOQ manual and blog posts. The "old" reflective DefaultRecordMapper will continue to work, where you simply have to have matching column aliases / target type getters/setters/member names.
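In that case, a sketch based on the question's query would be to alias the projection to match the member names and let reflection do the rest:

List<Student> students = context
    .select(
        STUDENT.ID.as("id"),
        multiset(
            select(BOOK.ID).from(BOOK).where(BOOK.STUDENT_ID.eq(STUDENT.ID))
        ).convertFrom(r -> r.intoSet(BOOK.ID)).as("bookIds"))
    .from(STUDENT)
    .where(STUDENT.ID.eq("<id>"))
    .fetchInto(Student.class); // reflective DefaultRecordMapper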

What's the usage of a field's SQLDataType in jOOQ's auto-generated classes?

When generating jOOQ classes via the jOOQ code generator, each field has a SQLDataType associated with it, like below.
public final TableField<EventsRecord, LocalDateTime> CREATED_AT = createField(DSL.name("CREATED_AT"), SQLDataType.LOCALDATETIME(6).nullable(false), this, "");
What is the purpose of having a SQLDataType with each generated field? We already have a return type, and client code is likely to use that type for compile-time checks.
Why do we still need to know the actual SQLDataType in the generated classes/fields?
By client type, you probably mean the LocalDateTime type, i.e. the <T> type that you will find throughout the jOOQ API. Sure, that's the type you care about, but jOOQ, internally, will care about the org.jooq.DataType instead. Your example already gives away two ideas why this may be useful:
There's a precision of 6 fractional digits on LOCALDATETIME(6), which is used (among other things):
In CAST expressions. Try DSL.cast(inline("2000-01-01 00:00:00"), EVENTS.CREATED_AT),
In DDL statements. Try DSLContext.meta(EVENTS). You should see a CREATE TABLE statement with the appropriate data type
In the optimistic locking feature, to create modification timestamps with the right precision.
There's an indication whether the column is nullable, which is used (again among other things):
In DDL statements, see above
In the implicit join feature, to decide whether to produce an INNER JOIN or a LEFT JOIN
There are many other properties a DataType can have, which would be interesting for jOOQ at runtime, including:
Custom data type bindings
Character set
Collation
Converters
Default value
Whether it is an identity
Besides, a String is not a String. For example, it could mean CHAR(2), CHAR(5), VARCHAR(100), CLOB, which are all quite different things in some dialects.
It would be a shame if your runtime meta model didn't have this information available.
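As a rough sketch of the first two points (assuming a DSLContext named ctx and the generated EVENTS table):

import static org.jooq.impl.DSL.cast;
import static org.jooq.impl.DSL.inline;

// The CAST picks up the timestamp precision from CREATED_AT's DataType, so it
// can render e.g. CAST('2000-01-01 00:00:00' AS timestamp(6)) where supported:
Field<LocalDateTime> expr = cast(inline("2000-01-01 00:00:00"), EVENTS.CREATED_AT);

// The DDL export relies on precision and nullability as well; this should print
// a CREATE TABLE statement whose CREATED_AT column is a non-nullable timestamp(6):
System.out.println(ctx.meta(EVENTS).ddl());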

Why is VkShaderStageFlagBits a bitmask?

In Vulkan you pass the VkPipelineShaderStageCreateInfo's to the VkGraphicsPipelineCreateInfo structure, and presumably there is supposed to be one VkPipelineShaderStageCreateInfo per shader stage (for example the vertex and fragment shaders).
So why exactly is the stage field of type VkShaderStageFlagBits? Is this just to stay close to some kind of Vulkan convention?
My confusion is that I am led to believe the only reason you would use a bitmask in this way is if you need to combine bits together (for example, the general flags field in all Vulkan structures). I tried to find the answer in the Vulkan spec, and it confused me even more! That is because it defines two bits, VK_SHADER_STAGE_ALL_GRAPHICS and VK_SHADER_STAGE_ALL, as follows:
VK_SHADER_STAGE_ALL_GRAPHICS is a combination of bits used as shorthand to specify all graphics stages defined above (excluding the compute stage).
VK_SHADER_STAGE_ALL is a combination of bits used as shorthand to specify all shader stages supported by the device, including all additional stages which are introduced by extensions.
Well, if they are supposed to be "shorthand" for specifying all bits, does this mean one shader stage is supposed to be able to represent a version of all the stages?
Thanks in advance!
Exactly; this is mostly to keep the API consistent. VkShaderStageFlagBits is used in several spots where a bit mask makes more sense than at pipeline creation time.
An example where it makes sense are descriptor set layout bindings where you use the flag mask to specify what stages can access your descriptors (samplers, uniform buffer object, etc.).
So if you want one UBO to be accessible from the vertex and fragment stages and another one from the geometry and tessellation stages, you'd use different stage flag bit combinations when setting up the VkDescriptorSetLayoutBinding. Such stage combinations are pretty common here.
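For example, a UBO binding that both the vertex and fragment stages can access might be set up like this (minimal sketch):

VkDescriptorSetLayoutBinding uboBinding = {0};
uboBinding.binding         = 0;
uboBinding.descriptorType  = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
uboBinding.descriptorCount = 1;
// stageFlags is a VkShaderStageFlags mask, so combining bits is expected here:
uboBinding.stageFlags      = VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT;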
Vulkan uses fields of type Vk*FlagBits (e.g. VkShaderStageFlagBits) when exactly one of the defined values is expected, and uses the corresponding Vk*Flags type (always a typedef for VkFlags, which is itself just a typedef for uint32_t, e.g. typedef VkFlags VkShaderStageFlags;) when a combination of zero, one, or more of the defined values is expected.
There are two reasons for this:
It gives a signal (albeit subtle) about whether exactly one value is expected/allowed or some combination of values is expected.
Many compilers will give warnings when assigning a combination of bit values to a field of enum type, which in practice helps enforce (1). This is because to do bitwise operations on enum values, they're first promoted to an integer type, and the result is an integer type, and typical settings for most compilers yield a warning (often promoted to error) when doing an implicit conversion from integer to enum type, since the integer may not be one of the enumerated values.
So VkPipelineShaderStageCreateInfo::stage is VkShaderStageFlagBits because exactly one shader stage is valid there, and you'll probably get a warning if you try to set it to something silly like VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT.
But VkDescriptorSetLayoutBinding::stageFlags is VkShaderStageFlags because it's common and expected to include multiple stages there, and you won't get a compiler warning if you set it to VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT.
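To illustrate the difference (a sketch; the exact diagnostic depends on your compiler and warning settings):

VkPipelineShaderStageCreateInfo stageInfo = {0};
stageInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
stageInfo.stage = VK_SHADER_STAGE_VERTEX_BIT; // OK: exactly one stage
// stageInfo.stage = VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT;
// ^ in C++ this is an int-to-enum conversion and typically warns or fails
//   to compile; plain C is more permissive

VkDescriptorSetLayoutBinding binding = {0};
binding.stageFlags = VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT; // fine: VkShaderStageFlags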

Puppet Include vs Class and Best Practices

When should I be using an include vs a class declaration? I am exploring creating a profile module right now, but am struggling with methodology and how I should lay things out.
A little background, I'm using the puppet-labs java module which can be found here.
My ./modules/profile/manifests/init.pp looks like this:
class profile {
  ## Hiera Lookups
  $java_version = hiera('profile::jdk::package')

  class { 'java':
    package => $java_version,
  }
}
This works fine, but I know that I can also remove the class { 'java': ... } block and instead use include java. My question relates to two things. One, if I wanted to use an include statement for whatever reason, how could I still pass the package version from Hiera to it? Second, is there a preferred method of doing this? Is include something I really shouldn't be using, or are there advantages and disadvantages to each method?
My long-term goal is building out profile-like modules for my environment. Likely I would have a default profile that applies to all of my servers, and then profiles for different application loadouts. I could include the profiles into a role and apply things to my individual nodes at that level. Does this make sense?
Thanks!
When should I be using an include vs a class declaration?
Where a class declares another, internal-only class that belongs to the same module, you can consider using a resource-like class declaration. That leverages your knowledge of the implementation details of the module, as you need to be able to prove that no other declaration of the class in question will be evaluated before the resource-like one. If ever that constraint is violated, catalog building will fail.
Under all other circumstances, you should use include or one of its siblings, require and contain.
One, if I wanted to use an include statement for whatever reason, how could I still pass the package version from hiera to it?
Exactly the same way you would specify any other class parameter via Hiera. I already answered that for you.
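For the java class from the question, that means putting the value under a <class name>::<parameter name> key in your Hiera data, e.g. (a sketch; the package name is hypothetical):

# common.yaml
java::package: "openjdk-8-jdk"

and then declaring the class with a plain

include java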
Second, is there a preferred method of doing this?
Yes, see above.
Is the include something I really shouldn't be using, or are there advantages and disadvantages to each method?
The include is what you should be using. This is your default, with require and contain as alternatives for certain situations. The resource-like declaration syntax seemed good to the Puppet team when they first introduced it, in Puppet 2.6, along with parameterized classes themselves. But it turns out that that syntax introduced deep design problems into the language, and it has been a source of numerous bugs and headaches. Automatic data binding was introduced in Puppet 3 in part to address many of those, allowing you to assign values to class parameters without using resource-like declarations.
The resource-like syntax has the single advantage -- if you want to consider it one -- that the parameter values are expressed directly in the manifest. Conventional Puppet wisdom holds that it is better to separate data from code, however, so as to avoid needing to modify manifests as configuration requirements change. Thus, expressing parameter values directly in the manifest is a good idea only if you are confident that they will never change. The most significant category of such cases is when a class has read data from an external source (i.e. looked it up via Hiera), and wants to pass those values on to another class.
The resource-like syntax has the great disadvantage that if a resource-like declaration of a given class is evaluated anywhere during the construction of a catalog for a given target node, then it must be the first declaration of that class that is evaluated. In contrast, any number of include-like declarations of the same class can be evaluated, whether instead of or in addition to a resource-like declaration.
Classes are singletons, so multiple declarations have no more effect on the target node than a single declaration. Allowing them is extremely convenient. Evaluation order of Puppet manifests is notoriously hard to predict, however, so if there is a resource-like declaration of a given class somewhere in the manifest set, it is very difficult in the general case to ensure that it is the first declaration of that class that is evaluated. That difficulty can be managed in the special case I described above. This falls into the more general category of evaluation-order dependencies, and you should take care to ensure that your manifest set is free of those.
There are other issues with the resource-like syntax, but none as significant as the evaluation-order dependency.
Clarification with respect to automated data binding
Automated data binding, mentioned above, associates keys identifying class parameters with corresponding values for those parameters. Compound values are supported if the back end supports them, which the default YAML back end in fact does. Your comments on this answer suggest that you do not yet fully appreciate these details, and in particular that you do not recognize the significance of keys identifying (whole) class parameters.
I take your example of a class that could on one hand be declared via this resource-like declaration:
class { 'elasticsearch':
  config => { 'cluster.name' => 'clustername', 'node.name' => 'nodename' }
}
To use an include-like declaration instead, we must provide a value for the class's "config" parameter in the Hiera data. The key for this value will be elasticsearch::config (<fully-qualified classname> :: <parameter name>). The associated value is wanted puppet-side as a hash (a.k.a. "associative array", a.k.a. "map"), so that's how it is specified in the YAML-format Hiera data:
elasticsearch::config:
  "cluster.name": "clustername"
  "node.name": "nodename"
The hash nature of the value would be clearer if there were more than one entry. If you're unfamiliar with YAML, then it would probably be worth your while to at least skim a primer, such as the one at yaml.org.
With that data in place, we can now declare the class in our Puppet manifests simply via
include 'elasticsearch'

Usage of a correct collection type

I am looking for a native, or a custom-type that covers the following requirements:
A Generic collection that contains only unique objects like a HashSet<T>
It implements INotifyCollectionChanged
It implements IEnumerable<T> (duh) and must be wrappable by a ReadOnlyCollection<T> (duh, duh)
It should work with both small and large numbers of items (perhaps changing inner behaviour?)
The signature of the type must be like UniqueList<T> (like a list, not a key/value pair)
It does not have to be sortable.
Searchability is not a "must-have".
The main purpose of this is to set up a small mesh/network between related objects.
So this network can only contain unique objects, and there has to be a mechanism that notifies the application when changes in the collection happen. Since it is for a proof of concept, the scope is purely within the assembly (no databases or file systems are of any importance).
What is a proper native type for this or what are the best ingredients to create a composite?
Sounds like you could just wrap HashSet<T> in your own type extremely easily, just to implement INotifyCollectionChanged. You can easily proxy everything you need - e.g. GetEnumerator can just call set.GetEnumerator() etc. Implementing INotifyCollectionChanged should just be a matter of raising the event when an element is added or removed. You probably want to make sure you don't raise the event if either you add an element which is already present or remove an element which isn't already present. HashSet<T>.Add/Remove both return bool to help you with this though.
I wouldn't call it UniqueList<T> though, as that suggests list-like behaviour such as maintaining ordering. I'd call it ObservableSet<T> or something like that.
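A minimal sketch of that wrapper (an untested outline, not a definitive implementation; add Contains, Clear, etc. as needed):

using System.Collections;
using System.Collections.Generic;
using System.Collections.Specialized;

public class ObservableSet<T> : IEnumerable<T>, INotifyCollectionChanged
{
    private readonly HashSet<T> set = new HashSet<T>();

    public event NotifyCollectionChangedEventHandler CollectionChanged;

    public int Count { get { return set.Count; } }

    public bool Add(T item)
    {
        // HashSet<T>.Add returns false for duplicates, so no event fires then.
        if (!set.Add(item)) return false;
        var handler = CollectionChanged;
        if (handler != null)
            handler(this, new NotifyCollectionChangedEventArgs(
                NotifyCollectionChangedAction.Add, item));
        return true;
    }

    public bool Remove(T item)
    {
        // Likewise, removing an absent element raises no event.
        if (!set.Remove(item)) return false;
        var handler = CollectionChanged;
        if (handler != null)
            handler(this, new NotifyCollectionChangedEventArgs(
                NotifyCollectionChangedAction.Remove, item));
        return true;
    }

    public IEnumerator<T> GetEnumerator() { return set.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}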
