Override JsonDeserializer Behavior - sap-cloud-sdk

The generator has created a field of type OffsetDateTime:
@Nullable
@ElementName("DocDate")
private OffsetDateTime docDate;
But the server actually returns dates in the format yyyy-MM-dd, e.g. 2021-03-07.
When using the generated code I get the following warnings:
WARN - Not deserializable: 2021-03-07
What is the correct way to override the deserialization of these fields, or to get them to deserialize correctly?

An OffsetDateTime should have both a date and a time. The data your service responds with is lacking the time part. As per the OData V4 ABNF, this is not allowed (assuming your service is a V4 service):
dateTimeOffsetValue = year "-" month "-" day "T" hour ":" minute [ ":" second [ "." fractionalSeconds ] ] ( "Z" / SIGN hour ":" minute )
One way to solve this is to change the property type. You could either:
Change it to Edm.Date in the specification
Or change it to LocalDate in the generated code.
Of course this only makes sense if the service will always respond with a date.
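For illustration, with the second option the generated field from the question would simply become (a sketch reusing the names above):
@Nullable
@ElementName("DocDate")
private LocalDate docDate;
Any generated getters, setters or builder methods for the field would need the same type change.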
Edit: If you really need to register a custom type adapter (e.g. because the service violates the JSON format) you could override the GsonVdmAdapterFactory:
@Override
@SuppressWarnings( "unchecked" )
public <T> TypeAdapter<T> create( @Nonnull final Gson gson, @Nonnull final TypeToken<T> type )
{
    if( LocalDate.class.isAssignableFrom(type.getRawType()) ) {
        return (TypeAdapter<T>) new CustomLocalDateTypeAdapter();
    } else {
        return super.create(gson, type);
    }
}
However, this also requires changing the generated code, because there is currently no convenience to pass a custom type adapter as a parameter. Change @JsonAdapter(com.sap.cloud.sdk.datamodel.odatav4.adapter.GsonVdmAdapterFactory.class) to reference your custom factory.
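The CustomLocalDateTypeAdapter referenced above is not part of the SDK; a minimal sketch, assuming the service sends plain ISO dates (yyyy-MM-dd), could look like this:
import java.io.IOException;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import com.google.gson.TypeAdapter;
import com.google.gson.stream.JsonReader;
import com.google.gson.stream.JsonToken;
import com.google.gson.stream.JsonWriter;

public class CustomLocalDateTypeAdapter extends TypeAdapter<LocalDate>
{
    @Override
    public LocalDate read( final JsonReader in ) throws IOException
    {
        if( in.peek() == JsonToken.NULL ) {
            in.nextNull();
            return null;
        }
        // parse "2021-03-07" style values
        return LocalDate.parse(in.nextString(), DateTimeFormatter.ISO_LOCAL_DATE);
    }

    @Override
    public void write( final JsonWriter out, final LocalDate value ) throws IOException
    {
        if( value == null ) {
            out.nullValue();
        } else {
            out.value(value.format(DateTimeFormatter.ISO_LOCAL_DATE));
        }
    }
}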
Still, I'd recommend using one of the above workarounds until the service is fixed.

Related

What type of data should be passed to domain events?

I've been struggling with this for a few days now, and I'm still not clear on the correct approach. I've seen many examples online, but each one does it differently. The options I see are:
Pass only primitive values
Pass the complete model
Pass new instances of value objects that refer to changes in the domain/model
Create a specific DTO/object for each event with the data.
This is what I am currently doing, but I'm not convinced by it. The example is in PHP, but I think it's perfectly understandable.
MyModel.php
class MyModel {
    //...
    private MediaId $id;
    private Thumbnails $thumbnails;
    private File $file;
    //...

    public function delete(): void
    {
        $this->record(
            new MediaDeleted(
                $this->id->asString(),
                [
                    'name' => $this->file->name(),
                    'thumbnails' => $this->thumbnails->toArray(),
                ]
            )
        );
    }
}
MediaDeleted.php
final class MediaDeleted extends AbstractDomainEvent
{
    public function name(): string
    {
        return $this->payload()['name'];
    }

    /**
     * @return array<ThumbnailArray>
     */
    public function thumbnails(): array
    {
        return $this->payload()['thumbnails'];
    }
}
As you can see, I am passing the ID as a string, the filename as a string, and an array of the Thumbnail value object's properties to the MediaDeleted event.
How do you see it? What type of data is preferable to pass to domain events?
Updated
@pgorecki's answer has convinced me, so I'll add an example to confirm whether this approach is correct, in order not to change too much.
It would now look like this:
public function delete(): void
{
    $this->record(
        new MediaDeleted(
            $this->id,
            new MediaDeletedEventPayload($this->file->copy(), $this->thumbnails->copy())
        )
    );
}
I'll explain a bit:
The ID of the aggregate is still outside the DTO, because MediaDeleted extends an abstract class that needs the ID parameter. The only thing I'm changing is the $payload array, which becomes the MediaDeletedEventPayload DTO. To this DTO I pass copies of the value objects related to the change in the domain; this way I pass the objects reliably and avoid strange behaviour from sharing the same instance.
What do you think about it?
A domain event is simply a data-holding structure or class (DTO), with all the information related to what just happened in the domain, and no logic. So I'd say the last option, creating a specific DTO/object for each event with the data, is the best choice. Why not start with a less-is-more approach: think about the consumers of the event and what data they might need.
Also, being able to serialize and deserialize the event objects is a good practice, since you could want to send them via a message broker (although this relates more to integration events than domain events).
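For illustration, such a payload DTO could look roughly like this (a sketch based on the names in the question, using PHP 8 constructor promotion; the accessor names are assumptions):
final class MediaDeletedEventPayload
{
    public function __construct(
        private File $file,
        private Thumbnails $thumbnails
    ) {
    }

    public function file(): File
    {
        return $this->file;
    }

    public function thumbnails(): Thumbnails
    {
        return $this->thumbnails;
    }
}
The event then exposes whatever its consumers need through typed getters instead of digging into an untyped array.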

The performance issue of validating entity using value object

I have the following value object code which validates CustCode by some expensive database operations.
public class CustCode : ValueObject<CustCode>
{
    private CustCode(string code) { Value = code; }

    public static Result<CustCode> Create(string code)
    {
        if (string.IsNullOrWhiteSpace(code))
            return Result.Failure<CustCode>("Code should not be empty");

        // validate if the value is still valid against the database. Expensive and slow
        if (!ValidateDB(code)) // Or web api calls
            return Result.Failure<CustCode>("Database validation failed.");

        return Result.Success<CustCode>(new CustCode(code));
    }

    public string Value { get; }

    // other methods omitted ...
}

public class MyEntity
{
    CustCode CustCode { get; }
    ....
It works fine when there is only one or a few entity instances with the type. However, it becomes very slow for method like GetAll() which returns a lot of entities with the type.
public async IAsyncEnumerable<MyEntity> GetAll()
{
    string line;
    using var sr = File.OpenText(_config.FileName);
    while ((line = await sr.ReadLineAsync()) != null)
    {
        yield return new MyEntity(CustCode.Create(line).Value); // CustCode.Create called many times
    }
}
Since the data in the file was already validated before saving, it doesn't actually need to be validated again. Should another Create function be added that doesn't validate the value? What's the idiomatic DDD way to do this?
I generally attempt not to have the domain call out to retrieve any additional data. Everything the domain needs to do its job should be passed in.
Since value objects represent immutable state, it stands to reason that once one has been created its values are fine. To this end, perhaps the initial database validation can be performed in the integration/application "layer", and the CustCode then created using only the value(s) provided.
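A rough sketch of that split, assuming Result comes from something like CSharpFunctionalExtensions as in the question (ICustCodeChecker and CustomerRegistrationService are made-up names for illustration):
// Value object: only self-contained invariants, no I/O.
public class CustCode : ValueObject<CustCode>
{
    private CustCode(string code) { Value = code; }

    public string Value { get; }

    public static Result<CustCode> Create(string code) =>
        string.IsNullOrWhiteSpace(code)
            ? Result.Failure<CustCode>("Code should not be empty")
            : Result.Success(new CustCode(code));
}

// Abstraction over the expensive database/web API check.
public interface ICustCodeChecker
{
    Task<bool> ExistsAsync(string code);
}

// Application layer: the expensive check happens here, once, up front.
public class CustomerRegistrationService
{
    private readonly ICustCodeChecker _checker;

    public CustomerRegistrationService(ICustCodeChecker checker) => _checker = checker;

    public async Task<Result<CustCode>> RegisterAsync(string code)
    {
        var custCode = CustCode.Create(code);
        if (custCode.IsFailure)
            return custCode;

        return await _checker.ExistsAsync(code)
            ? custCode
            : Result.Failure<CustCode>("Database validation failed.");
    }
}
With this split, the GetAll() loop only pays for the cheap format check, since the file contents were already validated when they were saved.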
Just wanted to add an additional point to @Eben Roux's answer:
In many cases the validation result from a database query is dependent on when you run the query.
For example, when you want to check if a phone number exists or if some product is in stock. The answers to those queries can change any second, and thus are not suited to allow or prevent the creation of a value object.
You may create a "valid" object that (unknowingly) becomes invalid in the very next second (or the other way around). So why bother running an expensive validation if the validation result is not reliable?

How to hardcode the entity varchar value?

The requirement is to store a hardcoded value for a varchar column defined in an entity file (.eti). I tried adding it to the default option, but it is not reflected.
The default option works well with boolean values (true/false), typelists (you can choose a default typecode) and monetary amounts, but it looks like it is not possible to specify a default varchar.
Therefore the easiest way would be to create a preupdate rule which inserts that default value every time when you create a new record in the database.
Preupdate rule example:
@gw.rules.RuleName("YourEntityAssignDefaultValue")
internal class YourEntityAssignDefaultValueRule {
    static function doCondition(yourEntity : entity.YourEntity) : boolean {
        return yourEntity.New
    }

    static function doAction(yourEntity : entity.YourEntity, actions : gw.rules.Action) {
        yourEntity.yourColumn = "defaultValue"
    }
}
You can also achieve this through getter and setter properties in an appropriate enhancement class:
public property get PolicyNumber(): String {
    return this.PolicyNumber
}
Somewhere in a class you must then assign the value to the PolicyNumber field; after that it will be reflected.

NodaPatternConverter for Instant with numeric (unix) format in 2.x.x

As I read at https://nodatime.org/2.0.x/userguide/migration-to-2, support for numeric formatting of Instants has been removed.
Is there currently a way to create a NodaPatternConverter that will convert to a unix / ticks format and back?
SystemClock.Instance.GetCurrentInstant().ToString( "d", CultureInfo.InvariantCulture );
Results in the following exception:
NodaTime.Text.InvalidPatternException : The standard format "d" is not valid for the NodaTime.Instant type.
The solution I ended up implementing, based on Jon's suggestion:
public class InstantUnixTicksConverter : NodaConverterBase<Instant>
{
    protected override Instant ReadJsonImpl( JsonReader reader, JsonSerializer serializer )
    {
        string text = reader.Value.ToString();
        if ( !long.TryParse( text, out var ticks ) )
        {
            throw new InvalidNodaDataException( $"Value '{text}' cannot be parsed as numeric format {reader.TokenType}." );
        }
        return Instant.FromUnixTimeTicks( ticks );
    }

    protected override void WriteJsonImpl( JsonWriter writer, Instant value, JsonSerializer serializer )
    {
        writer.WriteValue( value.ToUnixTimeTicks() );
    }
}
Well, you can implement IPattern<T> yourself. Your parser would just need to use long.Parse and then call Instant.FromUnixTimeTicks. The formatter would just need to call Instant.ToUnixTimeTicks and format the result. Ideally, do both of those in the invariant culture.
You can then pass that pattern to the NodaPatternConverter<T> constructor - or just implement JsonConverter directly, to be honest.
Note that this will only give you tick resolution, which matches 1.x but may lose data with 2.x values.
I'd strongly encourage you to move away from this format as soon as you can though.
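That said, if you do need it, a rough sketch of such a pattern could look like this (not from the answer; the class name is made up and error handling is kept minimal):
using System;
using System.Globalization;
using System.Text;
using NodaTime;
using NodaTime.Text;

public sealed class UnixTicksPattern : IPattern<Instant>
{
    // Parse a tick count in the invariant culture.
    public ParseResult<Instant> Parse(string text) =>
        long.TryParse(text, NumberStyles.Integer, CultureInfo.InvariantCulture, out var ticks)
            ? ParseResult<Instant>.ForValue(Instant.FromUnixTimeTicks(ticks))
            : ParseResult<Instant>.ForException(
                () => new FormatException($"'{text}' is not a valid tick count."));

    // Format as the Unix tick count, again invariant.
    public string Format(Instant value) =>
        value.ToUnixTimeTicks().ToString(CultureInfo.InvariantCulture);

    public StringBuilder AppendFormat(Instant value, StringBuilder builder) =>
        builder.Append(Format(value));
}
An instance of this could then be passed to the NodaPatternConverter<Instant> constructor, with the same tick-resolution caveat as above.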

Default values for properties in Azure Table Storage

I am using Azure table storage and have questions about how nulls and default values for entities work.
Specifically, I have a class that extends TableServiceEntity. The default constructor for this class is setting default values for various properties like so:
public class MyEntity : TableServiceEntity
{
    public MyEntity() : this("invalid", "invalid") {}

    public MyEntity(string f1, string f2)
    {
        Field1 = f1;
        Field2 = f2;
    }

    public string Field1 { get; set; }
    public string Field2 { get; set; }
}
I tested this class locally (on the emulator) by constructing the following entity:
MyEntity e = new MyEntity("hello", null);
I uploaded the entity and then retrieved it locally and the two fields were set to "hello" and null, respectively, as expected.
However, when I uploaded the same entity to the Azure cloud, what I received back was "hello" and "invalid", respectively, for the two properties.
My code that saves the entity is below:
public class MyTable : TableServiceContext
{
    ...

    public void AddEntry(MyEntity e)
    {
        this.AddObject("MyTable", e);
        this.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
    }
}
I was able to fix this by making the default constructor take no arguments, but now I feel like I have a fundamental misunderstanding of how table storage works. Is it true that when you specify defaults for properties of a TableServiceEntity, those become the defaults for each row in the table in the cloud but not in the emulator (i.e. cloud vs. SQL Express)? If so, why can't I override those defaults with null in the cloud? Is there any documentation that explains how default constructors and nulls work in Azure table storage?
Yes, there is a difference between how table storage behaves in the emulator and in the cloud. The emulator, implemented on SQL Server, returns all columns defined for a table, even if they are not defined for a row, irrespective of the column's value (null / non-null). In the cloud, a property set to null is neither stored nor returned in the REST call.
A quick fix would be to check for null in the property setter and only mutate the property if the value passed in is not null.
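A minimal sketch of that idea, reusing Field2 from the question (the backing-field wiring is an assumption, not code from the answer):
public class MyEntity : TableServiceEntity
{
    private string _field2 = "invalid"; // default survives if the cloud omits the column

    public string Field2
    {
        get { return _field2; }
        set
        {
            // Ignore null assignments so the property is only mutated by real values.
            if (value != null)
            {
                _field2 = value;
            }
        }
    }
}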
Devstorage and real storage behave differently in some cases, but I've never seen them handle NULL values differently. And I've certainly never seen it change a value from NULL to "invalid", as you seem to be implying. Are you sure you didn't accidentally upload the wrong values to the cloud? You may want to try again, and use Fiddler to look at the actual request and response values.
