Call GetOrAddAsync in the OnOpenAsync method - azure

I use a StatefulService with an IReliableDictionary.
Currently, I call StateManager.GetOrAddAsync<IReliableDictionary> everywhere I need this dictionary.
Is it best practice to call StateManager.GetOrAddAsync<IReliableDictionary> only once, in the OnOpenAsync method of the StatefulService, and store the result in a member field?

It does not matter much. I asked the product team and got this response:
You can cache the result of GetOrAddAsync locally but it doesn't matter since the statemanager does that for you automatically. Some folks think it's easier to keep a local, but I never do because now you have some lifecycle things to deal with (you have a ref to the thing not protected by state manager acquisition locks so you can see some different exceptions, but nothing you wouldn't have to deal with anyway).
Italic text inserted by me.

As per the official documentation here, it's not recommended to store references to reliable collections.
We don't recommend that you save references to reliable collection instances in class member variables or properties. Special care must be taken to ensure that the reference is set to an instance at all times in the service lifecycle. The Reliable State Manager handles this work for you, and it's optimized for repeat visits.
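For completeness, here is a minimal sketch of the pattern the documentation recommends (the service and dictionary names are made up for illustration): acquire the dictionary through the StateManager wherever it is needed instead of caching it in a field.

using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class OrderService : StatefulService
{
    public OrderService(StatefulServiceContext context) : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            // Cheap on repeat calls: the Reliable State Manager caches the instance internally.
            var orders = await StateManager
                .GetOrAddAsync<IReliableDictionary<string, int>>("orders");

            using (var tx = StateManager.CreateTransaction())
            {
                await orders.AddOrUpdateAsync(tx, "order-1", 1, (key, old) => old + 1);
                await tx.CommitAsync();
            }

            await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
        }
    }
}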

Related

How to handle a non-deletable resource when implementing a Terraform provider

I am currently working on managing a resource with Terraform that has no delete method, and Terraform insists there must be one.
1 error occurred:
* resource xray_db_sync_time: Delete must be implemented
The API I am trying to implement is here, and as you can see, there is no "Delete". You can't remove this sync timer. I am open to ideas. The code being worked on is here.
This is a situation where you, as the provider developer, will need to make a judgement call about how best to handle this mismatch between Terraform's typical resource instance lifecycle and the actual lifecycle of the object type you're intending to represent.
Broadly speaking, there are two options:
You could make the Delete function immediately return an error, explaining that this object is not deletable. This could be an appropriate approach if the user might be surprised or harmed by the object continuing to exist even though Terraform has no record of it. I would informally call this the "explicit approach", because it makes the user aware that something unusual is happening and requires them to explicitly confirm that they want Terraform to just "forget" the object rather than destroying it, using terraform state rm.
You could make the Delete function just call d.SetId("") (indicating to the SDK that the object no longer exists) and return successfully without taking any other action. I'll call this the "implicit approach", because a user not paying close attention may be fooled into thinking the object was actually deleted, due to the provider not giving any feedback that it just silently discarded the object.
Both of these options have advantages and disadvantages, and so ultimately the final decision is up to you. Terraform and its SDK will support either strategy, but you will need to implement some sort of Delete function, even if it doesn't do anything, to satisfy the API contract.
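To make the two options concrete, here is a rough sketch of what each Delete could look like with the plugin SDKv2 (the function and resource names are mine, not taken from the linked code):

package provider

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Option 1, the "explicit approach": refuse to delete and tell the user how
// to drop the object from state themselves.
func resourceDbSyncTimeDeleteExplicit(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	return diag.Errorf(
		"xray_db_sync_time cannot be deleted through the API; run 'terraform state rm' if you want Terraform to stop managing it",
	)
}

// Option 2, the "implicit approach": silently forget the object and report success.
func resourceDbSyncTimeDeleteImplicit(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	// Clearing the ID tells the SDK the object should be dropped from state;
	// nothing is called on the remote API.
	d.SetId("")
	return nil
}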
You are also missing a Create for this API endpoint. With only Update and Read supported, you will need to extend Create to be the same as Update except for additionally adding the resource to the state. You can easily invoke the Update function within the Create function for this behavior.
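For example, a rough sketch of that delegation, assuming Update and Read functions along the lines of resourceDbSyncTimeUpdate and resourceDbSyncTimeRead already exist (names hypothetical, matching the Delete sketch above):

// Create behaves like Update, then records the object in state under a fixed
// ID, since the API only ever exposes a single sync-time object.
func resourceDbSyncTimeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	if diags := resourceDbSyncTimeUpdate(ctx, d, meta); diags.HasError() {
		return diags
	}
	d.SetId("db_sync_time") // hypothetical singleton ID
	return resourceDbSyncTimeRead(ctx, d, meta)
}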
For the delete function, this should actually be easier than you might expect. The Terraform provider SDKv2 and your resource code will Read the resource prior to attempting the delete to verify that it actually exists (this probably requires no extra effort on your part). You would then remove the resource from state with d.SetId(""), where d is of type *schema.ResourceData; the SDK also does this automatically as long as Delete returns no errors. Therefore, you could define a Delete that merely returns warnings or errors of an appropriate Go type, and if you do not need even that (likely, given the minimal functionality), you could simply return nil. Part of this is speculation based on what your code probably looks like, but in general this all holds true.

How to use UUIDs in Neo4j, to keep pointers to nodes elsewhere?

I figured out thanks to some other questions that Neo4j makes use of ids for its nodes that could get recycled in case of node deletion.
That's a real concern for me as I need to store a reference to my node in another database (relational this time) in order to keep some sort of "pinned" nodes.
I've tried using https://github.com/graphaware/neo4j-uuid to generate them automatically, but I did not succeed: all my queries kept running indefinitely.
My new idea is to make a new field in each of my nodes that I would manually fill with a UUID generated by NodeJs package uuid through uuid.v4().
I also came across the concept of indexing multiple times, which is totally unclear to me, but it seems that I should run this query:
CREATE INDEX ON :MyNodeLabel(myUUIDField)
If you think that it doesn't make sense at all don't hesitate to come up with another proposition. I am open to all kinds of suggestions.
Thanks for your help.
I would consider using the APOC library's apoc.uuid.install procedure.
Definitely create a unique constraint on the label and attribute you are going to use. This will not only create an index but also guarantee uniqueness of the attribute in the label namespace.
CREATE CONSTRAINT ON (mynode:MyNodeLabel) ASSERT mynode.myUUIDField IS UNIQUE
Then call the apoc.uuid.install procedure. This will create UUIDs in the attribute myUUIDField on all of the existing MyNodeLabel nodes and on any new ones.
CALL apoc.uuid.install('MyNodeLabel', {addToExistingNodes: true, uuidProperty: 'myUUIDField'}) yield label, installed, properties
NOTE: you will have to install APOC and set apoc.uuid.enabled=true in the neo4j.conf file.

Is it a good idea to rely on a given aggregate's history with Event Sourcing?

I'm currently dealing with a situation in which I need to make a decision based on whether it's the first time my aggregate got into a situation (an Order was bought).
I can solve this problem in two ways:
Introduce in my aggregate a field stating whether an order has ever been bought (or maybe the number of bought orders);
Look up in the aggregate's history for any OrderWasBought event.
Is option 2 ever acceptable? For some reason I think option 1 is safer/cleaner in the general case, but I lack experience in these matters.
Thanks
IMHO both effectively do the same thing: the field stating that an order was bought needs to be hydrated somehow. This happens as part of the replay, which basically means nothing more than that when an OrderWasBought event is encountered, the field gets set.
So it does not make any difference whether you look at the field or look for the existence of the event; at least it makes no difference as far as the effective result is concerned.
Talking about efficiency, it may be the better idea to use the field, since that way it gets hydrated as part of the replay, which needs to be run anyway. So you don't have to search the list of events again; you can simply look at the (cached) value in the field.
So, in the end, to cut a long story short: It doesn't matter. Use what feels better to you. If the history of an aggregate gets lengthy, you may be better off using the field approach in terms of performance.
PS: Of course, this depends on the implementation of how aggregates are being loaded – is the aggregate able to access its own event history at all? If not, setting a field while the aggregate is being replayed is your only option, anyway. Please note that the aggregate does not (and should not!) have access to the underlying repository, so it can not load its history on its own.
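A minimal sketch of option 1, in C# with made-up type names, to show that the field is nothing more than derived state rebuilt on replay:

using System;

// Hypothetical event and aggregate; the flag is rebuilt every time the
// aggregate is replayed from its event history.
public sealed record OrderWasBought(DateTimeOffset At);

public sealed class Order
{
    public bool WasEverBought { get; private set; }

    // Called for every historical event during replay and for every new event.
    public void Apply(object @event)
    {
        if (@event is OrderWasBought)
        {
            WasEverBought = true;
        }
    }

    public void Buy(DateTimeOffset now)
    {
        if (!WasEverBought)
        {
            // first-purchase behaviour goes here
        }
        Apply(new OrderWasBought(now));
    }
}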
Option 2 is valid as long as the use case doesn't need the previous state of the aggregate. Replaying events only restores a read-only state; if the current command doesn't care about it, searching for a certain event may be a valid, simple solution.
If you fear "breaking encapsulation", this concern may not apply. Event sourcing and aggregates are mainly concepts; they don't impose a certain OO approach. The Event Store contains the business state expressed as a stream of events. You can read it and use it as an immutable collection any time. I would replay events only if I needed a certain complex state restored. But in your case here, the simpler 'has event' solution, encapsulated as a service, should work very well.
That being said, there's nothing wrong with always replaying events to restore state and have that field. It's mostly a matter of style: choose between a consistent way of doing things or adapting to the simplest solution for a given case.
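And a sketch of option 2 as an encapsulated lookup, reusing the hypothetical OrderWasBought event from the sketch above and assuming some event-store abstraction that can return a raw stream:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical event-store abstraction; the real interface depends on your store.
public interface IEventStore
{
    IReadOnlyList<object> LoadStream(Guid aggregateId);
}

public sealed class OrderPurchaseHistory
{
    private readonly IEventStore _store;

    public OrderPurchaseHistory(IEventStore store) => _store = store;

    // Answers the question straight from the stream, without rehydrating the aggregate.
    public bool HasEverBeenBought(Guid orderId) =>
        _store.LoadStream(orderId).OfType<OrderWasBought>().Any();
}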

What to write in this contract

I'm designing an academic decision support system and have to write documentation for the project. The part I am stuck on is writing contracts.
I have a use case, Generate custom reports.
The interaction the user performs with the system is setParametersforReport().
In this function they set attributes, like student_rollNumber, marks, warning count, or anything else they want to see on the report.
However, I am not sure what to write in the contract's postcondition.
The 3 things that I should mention are:
Instances created
Associations formed or broken
Attributes changed
I don't know what to write there or how to explain it, since nothing is actually being created. I have all the data I need in the database and I am accessing it without classes. I am confused because a database instance can't be created.
Any help will be appreciated.
Postconditions are used to specify the state of the system at the end of the operation's execution. In your case it looks like the state of the system at the end is the same as the state at the beginning, since you're not modifying the database (and you're not storing the report instance either). Therefore I don't see the point of defining a contract for this operation.

Supplying UITableView Core Data the old-fashioned way

Does anyone have an example of how to efficiently provide a UITableView with data from a Core Data model, preferably including the use of sections (via a referenced property), without the use of NSFetchedResultsController?
How was this done before NSFetchedResultsController became available? Ideally the sample should only get the data that's being viewed and make extra requests when necessary.
Thanks,
Tim
For the record, I agree with CommaToast that there's at best a very limited set of reasons to implement an alternative version of NSFetchedResultsController. Indeed I'm unable to think of an occasion when I would advocate doing so.
That being said, for the purpose of education, I'd imagine that:
upon creation, NSFetchedResultsController runs the relevant NSFetchRequest against the managed object context to create the initial result set;
subsequently — if it has a delegate — it listens for NSManagedObjectContextObjectsDidChangeNotification from the managed object context. Upon receiving that notification it updates its result set.
Fetch requests sit atop predicates, and predicates can't always be broken down into the keys they reference (e.g., if you create one via predicateWithBlock:). Furthermore, although the inserted and deleted lists are quite explicit, the list of changed objects doesn't provide clues as to how those objects have changed. So I'd imagine it just reruns the predicate supplied in the fetch request against the combined set of changed and inserted records, then suitably accumulates the results, dropping anything from the deleted set that it did previously consider a result.
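As a rough sketch of that basic mechanism, in Swift with made-up entity and property names, and without any of the fetch-limit optimizations discussed below:

import CoreData
import UIKit

// Rough sketch (hypothetical entity "Item", sorted by "name"): fetch once,
// then re-run the fetch whenever the context reports object changes.
final class ItemListDataSource: NSObject, UITableViewDataSource {
    private let context: NSManagedObjectContext
    private weak var tableView: UITableView?
    private var items: [NSManagedObject] = []

    init(context: NSManagedObjectContext, tableView: UITableView) {
        self.context = context
        self.tableView = tableView
        super.init()
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(contextObjectsDidChange(_:)),
            name: .NSManagedObjectContextObjectsDidChange,
            object: context
        )
        reload()
    }

    @objc private func contextObjectsDidChange(_ note: Notification) {
        // Naive version: just re-run the fetch. A smarter version would merge
        // the inserted/updated/deleted sets from the notification's userInfo.
        reload()
    }

    private func reload() {
        let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
        request.sortDescriptors = [NSSortDescriptor(key: "name", ascending: true)]
        items = (try? context.fetch(request)) ?? []
        tableView?.reloadData()
    }

    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        items.count
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = items[indexPath.row].value(forKey: "name") as? String
        return cell
    }
}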
There are probably more efficient things you could do whenever dealing with a fetch request with a fetch limit. Obvious observations, straight off the top of my head:
if you already had enough objects, none of those were deleted or modified, and none of the newly inserted or modified objects have a higher sort position than the objects you had, then there are obviously no changes to propagate and you needn't run a new query;
even if you've lost some of the objects you had, if you kept whichever was lowest then you've got an upper bound for everything that didn't change, so if the changed and inserted ones together with those you already had make more than enough, then you can also avoid a new query.
The logical extension would seem to be that you need to re-interrogate the managed object context only if you come out in a position where the deletions, insertions and changes modify your sorted list so that, before you chop it down to the given fetch limit, the bottom object isn't one you had from last time. The reasoning is that you don't already know anything about the stored objects you don't have hold of versus the insertions and modifications; you only know how those you don't have hold of compare to those you previously had.
