ServiceStack.Text JsonConfig Scoping Ignoring Attributes

I'm looking to implement logic via attributes on a .NET Core API project so that, depending on an attribute, serialization or deserialization will ignore certain properties.
E.g.
If a property were decorated with [OutputOnly], users could not pass it in via the API, but the API would be able to return its value.
Conversely, [InputOnly] would let users pass the value in, but the API would not return it.
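A minimal sketch of the marker attributes I have in mind (the names are just my working assumption):
using System;

[AttributeUsage(AttributeTargets.Property)]
public class OutputOnlyAttribute : Attribute { }   // the API may return this, but callers can't set it

[AttributeUsage(AttributeTargets.Property)]
public class InputOnlyAttribute : Attribute { }    // callers may set this, but the API never returns it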
The issue I am having is that JsConfig is static, and the property that enables ignoring fields (IgnoreAttributesNamed) is also a singleton and not part of the scoping functionality in JsConfig.With().
My current idea is to have an InputFormatter and an OutputFormatter in .NET Core that handle this logic, but I need to be able to configure which properties are ignored in those contexts.
Any suggestions would be greatly appreciated :)

I don't really understand what the goal is here. You would use a Request DTO to define which parameters a Service accepts and a Response DTO to define what your Service returns; the explicit purpose of the Request/Response DTOs is to define your Service's contract, i.e. the most important contract in your system, whose well-defined interface is used to encapsulate your system's capabilities and is what all consumers of your APIs bind to.
The C# POCOs used to define your Request/Response DTO classes should be considered a DSL for defining the inputs/outputs of your API. Trying to collapse and merge the explicit intent of your APIs into multiple competing types with custom attributes is self-defeating: it adds unnecessary confusion and blurs their explicit definition, which defeats the primary purpose of having service contracts, which is what all metadata services look at for documenting your API and generating the typed bindings in the different supported languages.
So the approach and desired goal of using custom attributes to control serialization behavior, so you can reuse the same types in different contracts, is highly discouraged. Should you wish to continue with this approach, you can refer to this answer for the different ways to ignore properties in ServiceStack.Text, specifically the ShouldSerialize() API, which will allow you to dynamically specify which fields ServiceStack.Text should serialize. If you intend to implement a convention, you can delegate the implementation to a custom extension method, e.g.:
class MyRequest
{
    public bool? ShouldSerialize(string fieldName) =>
        MyUtils.ShouldSerialize(GetType(), fieldName);
}
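As an illustration only, here is one way the hypothetical MyUtils.ShouldSerialize could honour the [OutputOnly]/[InputOnly] markers from the question. MyUtils, the attribute names and the "serializing a response" flag are all assumptions, not ServiceStack APIs, so check against the linked answer exactly when the hook gets consulted:
using System;
using System.Threading;

// Hypothetical convention helper - not part of ServiceStack.Text.
public static class MyUtils
{
    // Your Input/Output formatters would set this before (de)serializing.
    public static readonly AsyncLocal<bool> IsSerializingResponse = new AsyncLocal<bool>();

    public static bool? ShouldSerialize(Type type, string fieldName)
    {
        var prop = type.GetProperty(fieldName);
        if (prop == null)
            return null;                       // unknown field: leave default behavior

        // Never write [InputOnly] values into responses.
        if (IsSerializingResponse.Value && prop.IsDefined(typeof(InputOnlyAttribute), true))
            return false;

        // Never emit [OutputOnly] values when handling request input.
        if (!IsSerializingResponse.Value && prop.IsDefined(typeof(OutputOnlyAttribute), true))
            return false;

        return null;                           // no opinion: use default serialization
    }
}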
Other than the linked answer, the only other opportunity to manipulate serialization is potentially to use the built-in AutoMapping utils for selecting which properties should be copied over, or the Object Dictionary APIs for converting a C# type into an object dictionary and manipulating it that way, then dehydrating it back into the C# type after applying your conventions.
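A rough sketch of the Object Dictionary route, assuming ServiceStack.Text's ToObjectDictionary()/FromObjectDictionary<T>() extension methods and the hypothetical InputOnlyAttribute from the question:
using System.Reflection;
using ServiceStack;    // assumed namespace of the Object Dictionary extension methods

public static class DtoConventions
{
    // Returns a copy of the DTO with [InputOnly] properties stripped out,
    // e.g. before returning it from an API.
    public static T StripInputOnly<T>(this T dto)
    {
        var map = dto.ToObjectDictionary();                 // POCO -> Dictionary<string, object>
        foreach (var prop in typeof(T).GetProperties())
        {
            if (prop.IsDefined(typeof(InputOnlyAttribute), true))
                map.Remove(prop.Name);                      // drop values that shouldn't be returned
        }
        return map.FromObjectDictionary<T>();               // Dictionary -> fresh POCO
    }
}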

Related

NestJS Mapped Types and DTO usage

I'm confused about the mapped types in NestJS.
The documentation says that PartialType creates a new class, making its validation decorators optional.
So, we use it in our validation pipes as we do with the original classes.
I'm wondering if it's the normal usage of the derived classes.
I mean, to make it easy to create a partial update DTO.
And if so, why is it in the swagger package (or graphql) and not in a utils package in the core?
So, there are actually three mapped-types implementations in Nest: the base @nestjs/mapped-types, the one in @nestjs/swagger, and the one in @nestjs/graphql. The reason for these packages is to allow devs to define a base class, and then define classes that extend off this base class but use different decorators for validations/schema definitions. These mixin methods become useful because they read the existing metadata on the class and make modifications to it for the resulting class, like making every property optional, or leaving out key properties, like a password field on a User class.
The @nestjs/mapped-types package deals with class-validator and class-transformer metadata and is the basis for the other two packages. If the metadata doesn't exist, nothing is affected in terms of metadata, and the types are the only thing updated.
@nestjs/swagger's mapped types update the swagger schema metadata so that your OpenAPI spec shows up properly.
Similarly, the @nestjs/graphql mapped types update the GraphQL schema so that Apollo doesn't throw exceptions on partial updates.
The fact that all of this metadata is handled differently, and doesn't overlap, is the reason there are three different ways to deal with it; it also helps keep the base package small, rather than requiring the metadata keys from the other two packages.

Codeigniter4 - Difference between Libraries, helper and model

I'm taking my first steps in CodeIgniter 4.
Now I'm asking myself: what are the big differences between a "Model", where I first do all the database-related things, a "Helper", where I define a set of functions, and a "Library"?
In which cases should I create my own library, helper, or model?
The CI4 docs don't give me an answer for that, so I hope someone can explain it for me (and others).
The documentation is pretty straightforward when it comes to Models; there are really no caveats there. A Model is a class that represents a single table in the database and it provides you with a wide variety of related functionality: built-in CRUD methods, the ability to save Entities, transparent use of Query Builder methods, etc.
In general, you would typically have one Model class per database table that you're working with. That being said, you do not necessarily need Models in order to work with a database; however if you expect to need the functionality a Model provides, it can be used to save you the work of doing it yourself.
The documentation is indeed far more opaque on the differences between Libraries and Helpers. As such, I've found the most objective measure of difference to be in how CodeIgniter itself utilizes them, and the distinction is rather subtle:
Libraries provide their functionality through methods that exist within the namespace of the class they're defined in.
Helpers provide their functionality through functions that exist within the namespace of the importing class.
(NB: In PHP, a method is simply the name for a function that's defined within a class)
In other words, Libraries are typically non-static classes that (like all non-static classes) must be 'constructed' before use. The methods defined within that class reside within the namespace of the class itself, not the class they're called from.
For example, to gain access to the current Session object, I retrieve an instance of the Session Library class: $session = session(); Using $session, I can then invoke methods provided by that Session class, like $session->destroy().
On the other hand, Helpers are typically just a collection of functions that are imported into the namespace of the importing class itself. They are called in the current context and their use is not predicated upon the use of an object.
For example, if I loaded the Array Helper (helper('array');), it would grant me access to a handful of functions that I could call directly from the current context (e.g. $result = array_deep_search(...)). I didn't have to create an object first, that function was imported directly into my current namespace.
This distinction could matter for a couple of reasons, the biggest of which is probably naming conflicts. If you have two Helpers, each with an identically-named function, you can't import both of those functions at the same time. On the other hand, you could have one hundred different classes with the destroy() method defined, because each of those method definitions resides within the namespace of the defining class itself and is invoked through an instance of that specific class.
You may notice that all of CI's Helper functions are prefixed with the name of the Helper itself, e.g. 'array' or 'form'; this is to prevent the aforementioned naming conflict.
This probably doesn't answer when you want to use one or the other, and in truth that really comes down to your opinion. In the end, it's all just namespacing. You can put things wherever you want, as long as the code that needs it knows where to look for it. I personally try to follow CI's lead on things, so I tend to follow their distinction between the two.

What is the best way to restrict strings in an Object Oriented model?

I need to select a modeling method for documenting extensions to an existing collection of web services. The method/tool needs to be usable by technical business analysts. The existing API is defined in XML Schema. XML Schema works well, with one exception. Take a PaymentInformation class as an example. One partner might accept Visa and Mastercard, for example. Another also accepts Amex. We want to be able to extend PaymentInformation for PartnerA and PartnerB.
class PaymentInformation
    method     // CASH, CC
    ccNumber
    ccType     // MC, V, AMEX

class PaymentInformationPartnerA
    method     // CASH, CC, PAYPAL
    ccNumber
    ccType     // MC, V
The problem with XML Schema is that to apply a restriction to a class requires redefining the whole type. This seems like a maintenance nightmare. UML doesn't seem to support restricted strings (patterns, length, etc). What tool/method do you recommend for this? We have a preference, but not a requirement for Eclipse IDE.
You can add a UML constraint or a condition on your class. This is either a graphical note or information hand-coded directly on the UML metamodel.
The UML model is already XMI 2.1, i.e. essentially XML but with specific rules.
Don't do that. If PaymentInformationPartnerA extends PaymentInformation, then anywhere PaymentInformation is used you should be able to use PaymentInformationPartnerA, whereas you are saying that for some uses (assigning a value of "AMEX" to ccType) it is not; the subtype restricts the base type, which breaks substitutability.
You're probably better off putting the constraint as a pre-condition of the endpoint receiving the message rather than as a constraint on the message type itself.

Meta Programming, what's it good for?

So meta programming: the idea that you can modify classes/objects at runtime, injecting new methods and properties. I know it's good for framework development; I've been working with Grails, and that framework adds a bunch of methods to your classes at runtime. You have a name property on a User object, and bam, you get a findByName method injected at runtime.
Has my description completely described the concept?
What else is it good for (specific examples) other than framework development?
To me, meta-programming is "a program that writes programs".
Meta-programming is especially good for reuse, because it supports generalization: you can define a family of concepts that belong to a particular pattern. Then, through variability you can apply that concept in similar, but different scenarios.
The simplest example is Java's getters and setters, as mentioned by @Sjoerd:
Both getters and setters follow a well-defined pattern: a getter returns a class member, and a setter sets a class member's value. Usually you build what is called a template to allow application and reuse of that particular pattern. How a template works depends on the meta-programming/code-generation approach being used.
If you want a getter or setter to behave in a slightly different way, you may add some parameters to your template. This is variability. For instance, if you want to add additional processing code when getting/setting, you may add a block of code as a variability parameter. Mixing custom code and generated code can be tricky. ABSE is currently the only MDSD approach that I know of that natively supports custom code directly as a template parameter.
Meta programming is not only adding methods at runtime; it can also be automatically creating code at compile time, i.e. code generating code.
Web services (i.e. the methods are defined in the WSDL, and you want to use them as if they were real methods on an object)
Avoiding boilerplate code. For example, in Java you should use getters and setters, but these can be made automatically for most properties.
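As a toy illustration of "code generating code" (a sketch, not tied to any particular framework or tool), the following C# program writes out getter/setter boilerplate for a class from a list of property names and types:
using System;
using System.Collections.Generic;
using System.Text;

// A tiny "program that writes programs": generates C# auto-property
// boilerplate from a list of (type, name) pairs. Purely illustrative.
class PropertyGenerator
{
    static string GenerateClass(string className, IEnumerable<(string Type, string Name)> fields)
    {
        var sb = new StringBuilder();
        sb.AppendLine($"public class {className}");
        sb.AppendLine("{");
        foreach (var f in fields)
            sb.AppendLine($"    public {f.Type} {f.Name} {{ get; set; }}");
        sb.AppendLine("}");
        return sb.ToString();
    }

    static void Main()
    {
        // Emits a User class with two generated properties.
        Console.WriteLine(GenerateClass("User",
            new[] { ("string", "Name"), ("int", "Age") }));
    }
}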

Using Factories in Presenters in a Model View Presenter and Domain Driven Design Project

In domain driven design, it appears to be a good practice to use Factories to create your domain objects in your domain layer (as opposed to using a direct constructor or IoC).
But what about using the domain object factories in a presenter layer? For instance, say I was creating a domain object from user input obtained from the presenter.
Here's an example: say I have a Configuration domain object that has a number of decimal settings.
public class Configuration : PersistantObject
{
    public decimal temperature { get; set; }
    ...(times 20)
    public decimal gravity { get; set; }
}
In order to create this object in the domain layer, rather than the presenter layer, I would have to pass each of these decimal values as function parameters, creating an unwieldy function definition and call, i.e.:
ConfigurationService.CreateConfiguration(temperature, ...(x20), gravity);
The perhaps better solution would be to create the Configuration object in the presenter layer, and assign all the values of the configuration object directly from the user input, skipping a lengthy function call.
Configuration config = ConfigurationFactory.CreateNewConfiguration();
config.temperature = temperature;
..(x20).. = ...;
config.gravity = gravity;
ConfigurationService.SaveNewConfiguration(config);
But I'm wondering if this approach is wrong and why?
If both of these approaches are wrong, what is the best approach for creating a lengthy object from user input and why?
Thanks!
I'd advise against letting your domain objects out of the domain layer and into the presentation layer. Keep the presentation layer focused on presentation.
For this reason, I construct Data Transfer Objects to shuffle data to and from the domain and presentation layers. In your case, have the dialog populate a DTO that is passed to your service and translated into the corresponding domain object.
You wouldn't want to construct domain objects from DTOs every time, though. Consider the case where a DTO represents only a subset of a domain object. Re-constructing an existing domain object from such a DTO would give you a partial domain object. You'd probably want to maintain a light-weight cache that held the full domain object so you could do a proper update.
Essentially, you'd arrive at the DTO solution if you applied the Introduce Parameter Object refactoring.
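A rough sketch of what that could look like, where ConfigurationDto and the mapping code are illustrative assumptions and the repeated settings are elided as in the question:
// Hypothetical DTO shuttled from the presentation layer to the service layer.
public class ConfigurationDto
{
    public decimal Temperature { get; set; }
    // ... the other ~20 settings, elided as in the question ...
    public decimal Gravity { get; set; }
}

public class ConfigurationService
{
    public void SaveNewConfiguration(ConfigurationDto dto)
    {
        // The DTO is translated into the domain object inside the domain layer,
        // e.g. via the factory from the question, and then persisted.
        Configuration config = ConfigurationFactory.CreateNewConfiguration();
        config.temperature = dto.Temperature;
        // ... copy the remaining settings ...
        config.gravity = dto.Gravity;
        // repository.Save(config);   // persistence detail elided
    }
}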
There are two main ways I would handle this.
1) If this is set up through a dialog, I would create classes implementing the command pattern and bind a dialog with the object in question, for example CmdCreateConfigurationService and CmdEditConfigurationService.
CmdCreateConfigurationService would rely on a factory class and the minimum parameters you need to select the correct ConfigurationService.
You set up an IConfigurationServiceEditor interface and pass it as one of the parameters to CmdEditConfigurationService. In the IConfigurationServiceEditor interface you define as many methods as you need to make the transfer of information to and from the dialog as easy and painless as possible. I recommend using a collection of keys and values. The command object knows how to set up the ConfigurationService from this collection, and the dialog knows to expect this collection when setting up.
Regardless of the data structure, you do the work of filling out the ConfigurationService in the command object. By having non-dialog/form/screen objects implement IConfigurationServiceEditor, you can automate your testing and, in certain circumstances, make configuration of complex objects easier.
I developed this method for CAD/CAM software that has several dozen parametric shapes, each having from 4 to 40 entries.
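A loose sketch of the shape described above; every type and method name here is an illustrative assumption rather than a prescribed API:
using System.Collections.Generic;

// Anything that can supply the configuration values: a dialog, a form,
// or a plain object used in automated tests.
public interface IConfigurationServiceEditor
{
    IDictionary<string, string> GetValues();          // key/value collection gathered from the UI
    void ShowErrors(IEnumerable<string> errors);      // feedback channel back to the editor
}

public interface ICommand
{
    void Execute();
}

public class CmdCreateConfigurationService : ICommand
{
    private readonly IConfigurationServiceEditor editor;

    public CmdCreateConfigurationService(IConfigurationServiceEditor editor)
    {
        this.editor = editor;
    }

    public void Execute()
    {
        // The command, not the dialog, knows how to turn the key/value
        // collection into a configured object and hand it to the domain layer.
        IDictionary<string, string> values = editor.GetValues();
        // ... validate, map values onto the ConfigurationService, save ...
    }
}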
