Design pattern for caching nested (or child) objects

I have a class as below. I have a concept of caching in my application.
At some point, suppose one Profile has 25 ABC objects, and the data is stored in the DB.
Now, while using my application, I may or may not use all 25 ABC objects. How do I design my classes to cache this data? One approach is to fetch all 25 ABC objects and cache the Profile object (but that would take a performance hit).
Usually you cache the objects as required, but how do I do that here? I cannot cache the ABC objects separately, because the other fields in the Profile class are required along with them.
class Profile
{
    String name;
    String country;
    // ...and 5 more fields
    List<ABC> obj;
}
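
One way to read "cache the objects as required" is to split the cache: keep the Profile's scalar fields in one cache entry and load (and cache) the ABC list only when it is first needed, keyed by the profile id. A rough sketch of that idea, written in C# with made-up names (ProfileCache, IProfileStore and its two load methods are illustrative, not from the question):

using System.Collections.Concurrent;
using System.Collections.Generic;

public class Abc { public string Id { get; set; } }

public class Profile
{
    public string Name { get; set; }
    public string Country { get; set; }
    // ...5 more fields; the child list is cached separately, not held here
}

public interface IProfileStore                     // hypothetical DB gateway
{
    Profile LoadProfileWithoutChildren(string id);
    List<Abc> LoadChildren(string profileId);
}

public class ProfileCache
{
    private readonly ConcurrentDictionary<string, Profile> _profiles = new();
    private readonly ConcurrentDictionary<string, List<Abc>> _children = new();
    private readonly IProfileStore _db;

    public ProfileCache(IProfileStore db) => _db = db;

    // Cheap: caches only the scalar fields of the Profile.
    public Profile GetProfile(string id) =>
        _profiles.GetOrAdd(id, _db.LoadProfileWithoutChildren);

    // The expensive part is deferred until a caller actually asks for the children.
    public IReadOnlyList<Abc> GetChildren(string profileId) =>
        _children.GetOrAdd(profileId, _db.LoadChildren);
}

The price of splitting the cache this way is that invalidation has to cover both maps, i.e. evict a profile and its children together.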

Related

Redis Caching - Is it a bad practice to store duplicate data

Is it a bad practice to store duplicate data in Redis cache?
I am experimenting with a GraphQL caching solution, but I have a few tables which I query by a combination of keys and never by their primary key, and that appears to be a bit of an issue for me.
Let's consider these tables:
Products - id, ...
Images - id, productId, size
I need to be able to get the images (multiple) by productId, or a single row by a combination of productId and size.
What I currently store is something in the form of
{
images:productId:1:size:sm: {...},
images:productId:1:size:xs: {...},
images:productId:1: ['images:productId:1:size:sm', 'images:productId:1:size:xs']
}
The third object contains references to all of the available images in cache for the product, so I end up performing two queries to retrieve the data.
If I want one, I can directly go ahead and get it. If I want all of them, I first have to hit the third key, and then use the keys within it to get the actual objects.
Is this a bad idea? Should I bother with it, or just go with the simpler form
{
images:productId:1:size:sm: {...},
images:productId:1:size:xs: {...},
images:productId:1: [ {...}, {...} ] // Copies of the two objects from above
}
To provide some context, some of these objects might become a bit large over time, because they might contain long text/HTML from rich text editors.
I read that hashes compress data better, so I organized them so that they are placed in a single hash; that way invalidation becomes easier too (I don't care about invalidating some of them, they will always be invalidated at once).
It is a multi-tenant system, where I would be using a tenant id to scope the data to specific users.
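
For reference, the "references" layout described above is straightforward to express with two round trips. A minimal sketch using StackExchange.Redis in C#, where the index is kept as a Redis set of keys rather than a serialized array (key names and serialization are illustrative, and the tenant-id prefix is left out):

using System.Linq;
using StackExchange.Redis;

public class ImageCache
{
    private readonly IDatabase _db;
    public ImageCache(IConnectionMultiplexer mux) => _db = mux.GetDatabase();

    public void CacheImage(int productId, string size, string json)
    {
        var key = $"images:productId:{productId}:size:{size}";
        _db.StringSet(key, json);
        // The index holds references only, never a copy of the payload.
        _db.SetAdd($"images:productId:{productId}", key);
    }

    // Single image: one GET.
    public RedisValue GetImage(int productId, string size) =>
        _db.StringGet($"images:productId:{productId}:size:{size}");

    // All images: read the index, then one MGET for the payloads.
    public RedisValue[] GetAllImages(int productId)
    {
        RedisKey[] keys = _db.SetMembers($"images:productId:{productId}")
                             .Select(v => (RedisKey)(string)v)
                             .ToArray();
        return keys.Length == 0 ? new RedisValue[0] : _db.StringGet(keys);
    }
}

The trade-off against the duplicate-object layout is one extra round trip per "get all", in exchange for not storing (and re-invalidating) the large HTML payloads twice.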

Repository methods for querying children of an aggregate root

I have an Order aggregate root class containing child value objects:
class Order {
  val id: String
  val lines: Seq[OrderLine]
  val destination: Destination
  // ...other fields omitted
}
This is a CQRS read model, represented by an order-search microservice responsible for searching orders by some filter.
There is an OrderApplicationService that uses an OrderRepository (I am not sure it is a pure repository in DDD terms):
trait OrderRepository {
  def search(filter: OrderFilter): Seq[Order]
  def findById(orderId: String): Order
}
and an ElasticSearchOrderRepository which uses Elasticsearch as the search engine.
Due to new requirements I need a new API method for the UI that will search for all destinations across the orders by some filter. It should be a /destinations endpoint that calls the repository to find the data. Performance is important in this case, so searching for all orders and then mapping them to destinations doesn't seem like a good solution.
What is the most appropriate option to solve this?
Add a new method to OrderRepository, e.g. def searchOrderDestinations(filter: DestinationFilter): Seq[Destination]
Create a new repository:
trait OrderDestinationRepository {
  def searchOrderDestinations(filter: DestinationFilter): Seq[Destination]
}
The same goes for the application service: do I need to create a new DestinationAppService?
Are these options applicable, or is there a better solution?
Thanks in advance!
This is a CQRS read model
Perfect - create and update a list of your orders indexed by destination, and use that to serve the query results.
Think "relational database that includes the data you need to create the view, and an index". Queries go to the database, which acts as a cache for the information. A background process (async) runs to update the information in database.
How often you run that process will depend on how stale the data in the view can be. How bad is it for the business if the view shows results as of 10 minutes ago? as of 1 minute ago? as of an hour ago?
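
To make that shape concrete, here is a rough sketch (in C# rather than the question's Scala; IOrderSource, DestinationReadModel and the refresh interval are all hypothetical) of a destination-indexed read model that a background task keeps up to date and the /destinations endpoint queries directly:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public record Destination(string City, string Country);

public interface IOrderSource
{
    // Pulls the current destinations from the write side / search index.
    Task<IReadOnlyList<Destination>> LoadDestinationsAsync(CancellationToken ct);
}

public class DestinationReadModel
{
    private volatile IReadOnlyList<Destination> _destinations = Array.Empty<Destination>();

    // Served directly by the /destinations endpoint; no scan over full orders.
    public IReadOnlyList<Destination> Search(Func<Destination, bool> filter) =>
        _destinations.Where(filter).ToList();

    // Background refresh; the interval decides how stale the view may be.
    public async Task RunRefreshLoopAsync(IOrderSource source, TimeSpan interval, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            _destinations = await source.LoadDestinationsAsync(ct);
            await Task.Delay(interval, ct);
        }
    }
}

Whether this lives behind the existing OrderRepository or a new OrderDestinationRepository is then mostly a packaging decision; the important part is that the query hits data already indexed by destination.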

ServiceStack.OrmLite: Slow write/reads?

UPDATE June 30
A cleaner benchmark was made in this follow-up question, and mythz found an issue and resolved it:
ServiceStack benchmark continued: why does persisting a simple (complex) to JSON slow down SELECTs?
ARE WRITE/READ SPEEDS REASONABLE?
In my trials with OrmLite, I am going to test converting all our current data/objects from our own implementation for saving to the database, and switch over to OrmLite.
However, I did a simple benchmark/speed test, where I compared our current serialize-and-write-to-DB path, as well as read-from-DB-and-deserialize, against OrmLite.
What I found was that ServiceStack is much slower than how we currently do it (we currently just serialize the object using FastSerializer and write the byte[] data to a BLOB field, so it's fast to write and read, but of course with obvious drawbacks).
The test used the Customer class, which has a bunch of properties (used in our products, so it's a class that is used every day in our current versions).
If I create 10 000 such objects, then measure how long it takes to persist those to a MySQL database (serialization + write to DB), the results are:
UPDATE
As the "current implementation" is cheating (its just BLOBing a byte[] to database), I implemented a simple RelationDbHandler that persists the 10 000 objects in the normal way, with a simple SQL query. Results are added below.
WRITE 10 000 objects:
Current implementation: 33 seconds
OrmLite (using .Save): 94 seconds
Relational approach: 24.7 seconds
READ 10 000 objects:
Current implementation: 1.5 seconds
OrmLite (using Select<>): 28 seconds
Relational approach: 16 seconds
I am running it locally, on an SSD disk, with no other load on the CPU or disk.
I expected our current implementation to be faster, but not that much faster.
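For scale, that is roughly 9.4 ms per object with OrmLite Save() versus about 2.5 ms with the plain relational inserts on the write side (94 s and 24.7 s over 10 000 objects), and about 2.8 ms versus 1.6 ms per object on the read side (28 s and 16 s).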
I read some benchmark material on the ServiceStack webpage (https://github.com/ServiceStack/ServiceStack/wiki/Real-world-performance), but most of the links are dead. Some claim that reading 25 000 rows takes 245 ms, but I have no idea what a row looks like.
Question 1: Are there any benchmarks I can read more about?
Question 2: The Customer object is specified below. Does mythz think the write/read times above are reasonable?
TEST CASE:
This is the Customer object as it looks in the database after OrmLite created the table. I only populate 5 properties; one is "complex" (so only one field has a JSON-serialized representation in the row), but since all fields are written, I don't think that matters much?
Code to save using OrmLite:
public void MyTestMethod<T>(T coreObject) where T : CoreObject
{
    long id = 0;
    using (var _db = _dbFactory.Open())
    {
        id = _db.Insert<T>(coreObject, selectIdentity: true);
    }
}
Code to read all from table:
internal List<T> FetchAll<T>()
{
    using (var _db = _dbFactory.Open())
    {
        List<T> list = _db.Select<T>();
        return list;
    }
}
Use Insert() for inserting rows. Save() will check whether the record already exists and update it if it does; it also populates auto-increment primary keys, if any were created.
There are also InsertAsync() APIs available, but Oracle's official MySql.Data NuGet package didn't have a proper async implementation, in which case using https://github.com/mysql-net/MySqlConnector can yield better results: install the ServiceStack.OrmLite.MySqlConnector NuGet package and use its MySqlConnectorDialect.Provider.
You'll also get better performance using .NET Core, which will use the latest 8.x MySql.Data NuGet package.
Note: the results in https://github.com/tedekeroth/OrmLiteVsFastSerializer are not comparable; that benchmark is essentially comparing using MySQL as NoSQL blob storage vs a quasi-relational model in OrmLite with multiple complex-type blobbed fields.
In my tests I've also noticed several serialization exceptions being swallowed, which will negatively impact performance. You can have exceptions bubble up by configuring this on startup:
OrmLiteConfig.ThrowOnError = JsConfig.ThrowOnError = true;
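
Putting those suggestions together, a minimal setup might look like the sketch below (the connection string, the Customer shape and the table creation are placeholders; it assumes the ServiceStack.OrmLite.MySqlConnector package is installed):

using System.Threading.Tasks;
using ServiceStack.DataAnnotations;
using ServiceStack.OrmLite;
using ServiceStack.Text;

public class Customer
{
    [AutoIncrement]
    public long Id { get; set; }
    public string Name { get; set; }
}

public static class OrmLiteSetup
{
    public static async Task RunAsync(string connectionString)
    {
        // Surface serialization errors instead of having them swallowed.
        OrmLiteConfig.ThrowOnError = JsConfig.ThrowOnError = true;

        // MySqlConnector dialect instead of Oracle's MySql.Data provider.
        var dbFactory = new OrmLiteConnectionFactory(connectionString, MySqlConnectorDialect.Provider);

        using var db = dbFactory.Open();
        db.CreateTableIfNotExists<Customer>();

        // Insert() skips Save()'s existence check; the async APIs work properly with MySqlConnector.
        var id = await db.InsertAsync(new Customer { Name = "test" }, selectIdentity: true);
    }
}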

Preferred way to store a child object in Azure Table Storage

I did a little experiment with storing child objects in Azure Table Storage today.
Something like Person.Project, where Person is the table entity and Person is just a POCO. The only way I was able to achieve this was by serializing the Project into a byte[]. It might be what is needed, but is there another way around it?
Thanks
Rasmus
Personally I would prefer to store the Project in a different table with the same partition key that its parent has, i.e. its Person's partition key. That ensures that the person and the underlying projects will be stored in the same storage cluster. On the code side, I would like to have some attributes on top of the reference properties, for example [Reference(typeof(Person))] and [Collection(typeof(Project))], and in the data context class I can use some extension method to retrieve the child elements on demand.
In terms of the original question though, you certainly can store both parent and child in the same table - were you seeing an error when trying to do so?
One other thing you sacrifice by separating parent and child into separate tables is the ability to group updates into a transaction. Say you created a new 'person' and added a number of projects for that person: if they are in the same table with the same partition key, you can send the multiple inserts as one atomic operation. With a multi-table approach, you're going to have to manage atomicity yourself (if that's a requirement of your data consistency model).
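
For illustration, the single-table, single-partition batch looks roughly like this with the classic table SDK (Microsoft.Azure.Cosmos.Table here; the entity shapes and row key scheme are made up for the example):

using Microsoft.Azure.Cosmos.Table;

public class PersonEntity : TableEntity
{
    public PersonEntity() { }
    public PersonEntity(string personId) : base(personId, "person") { }
    public string Name { get; set; }
}

public class ProjectEntity : TableEntity
{
    public ProjectEntity() { }
    // Same PartitionKey as the parent Person, so both fit in one batch.
    public ProjectEntity(string personId, string projectId) : base(personId, "project_" + projectId) { }
    public string Title { get; set; }
}

public static class PersonStore
{
    public static void InsertPersonWithProjects(CloudTable table, PersonEntity person, params ProjectEntity[] projects)
    {
        // All operations share one partition key, so this runs as a single
        // atomic entity group transaction (limit: 100 operations per batch).
        var batch = new TableBatchOperation();
        batch.Insert(person);
        foreach (var project in projects)
            batch.Insert(project);
        table.ExecuteBatch(batch);
    }
}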
I'm presuming that when you say person is just a POCO you mean Project is just a POCO?
My preferred method is to store the child object in its own Azure table with the same partition key and row key as the parent. The main reason is that this allows you to run queries against this child object if you have to. You can't run just one query that uses properties from both parent and child, but at least you can run queries against the child entity. Another advantage is that the child class can take up more space: the limit on how much data you can store in a single property is less than the amount you can store in a row.
If neither of these things is a concern for you, then what you've done is perfectly acceptable.
I have come across a similar problem and have implemented a generic object flattener/recomposer API that will flatten your complex entities into flat EntityProperty dictionaries and make them writable to Table Storage, in the form of DynamicTableEntity.
The same API will then recompose the entire complex object back from the EntityProperty dictionary of the DynamicTableEntity.
Have a look at: https://www.nuget.org/packages/ObjectFlattenerRecomposer/
Usage:
// Flatten the complex object (e.g. of type Order) and convert it to an EntityProperty dictionary
Dictionary<string, EntityProperty> flattenedProperties = EntityPropertyConverter.Flatten(order);

// Create a DynamicTableEntity and set its PK and RK
DynamicTableEntity dynamicTableEntity = new DynamicTableEntity(partitionKey, rowKey);
dynamicTableEntity.Properties = flattenedProperties;
// Write the DynamicTableEntity to Azure Table Storage using the client SDK

// Read the entity back from Azure Table Storage as a DynamicTableEntity using the same PK and RK
DynamicTableEntity entity = [Read from Azure using the PK and RK];

// Convert the DynamicTableEntity back to the original complex object
Order order = EntityPropertyConverter.ConvertBack<Order>(entity.Properties);

CoreData design pattern: persisting a single object of many - or - how many NSManagedObjectContexts should I have?

I'm converting an app from SQLitePersistentObjects to CoreData.
In the app, I have a class that I generate many* instances of from an XML file retrieved from my server. The UI can trigger actions that will require me to save some* of those objects until the next invocation of the app.
Other than having a single NSManagedObjectContext for each of these objects (shared only with their subservient objects, which can include blobs), I can't see how I can have fine-grained control (i.e. at the object level) over which objects are persisted. If I try to have a single context for all newly created objects, I get an exception when I try to move one of my objects to a new context so I can persist it on its own. I'm guessing this is because the objects it owns are left in the 'old' context.
The other option I see is to have a single context, persist all my objects and then delete the ones I don't need later; this feels like it's going to hit the database too much, but maybe CoreData does magic.
So:
1. Am I missing something basic about the way my CoreData app should be architected?
2. Is having a context per object a good design pattern?
3. Is there a better way to move objects between contexts to avoid 2?
* where "many" means "tens, maybe hundreds, not thousands" and "some" is at least one order of magnitude less than "many"
Also cross posted to the Apple forums.
Core Data is really not an object persistence framework. It is an object graph management framework that just happens to be able to persist that graph to disk (see this previous SO answer for more info). So trying to use Core Data to persist just some of the objects in an object graph is going to be working against the grain. Core Data would much rather manage the entire graph of all objects that you're going to create. So, the options are not perfect, but I see several (including some you mentioned):
1. You could create all the objects in the Core Data context, then delete the ones you don't want to save. Until you save the context, everything is in-memory, so there won't be any "going back to the database" as you suggest. Even after saving to disk, Core Data is very good at caching instances in the contexts' row cache and there is surprisingly little overhead to just letting it do its thing and not worrying about what's on disk and what's in memory.
2. If you can create all the objects first, then do all the processing in-memory before deciding which objects to save, you can create a single NSManagedObjectContext with a persistent store coordinator having only an in-memory persistent store. When you decide which objects to save, you can then add a persistent (XML/binary/SQLite) store to the persistent store coordinator, assign the objects you want to save to that store (using the context's - (void)assignObject:(id)object toPersistentStore:(NSPersistentStore *)store method) and then save the context.
3. You could create all the objects outside of Core Data, then copy the objects to be saved into a Core Data context.
4. You can create all the objects in a single in-memory context and write your own methods to copy those objects' properties and relationships to a new context to save just the instances you want. Unless the entities in your model have many relationships, this isn't that hard (see this page for tips on migrating objects from one store to another using a multi-pass approach; it describes the technique in the context of versioning managed object models and is no longer needed in 10.5 for that purpose, but the technique would apply to your use case as well).
Personally, I would go with option 1 -- let Core Data do its thing, including managing deletions from your object graph.
