How does AutoMapper store all its maps in memory?

I want to know how AutoMapper stores all its MappingConfigurations in memory. Also, how can I remove all the mappings from memory without using Mapper.Reset?
I am using AutoMapper 3.2.1.

Related

Queries in Hazelcast

I have a Map that uses MapStore, so some objects are not loaded into memory. How can I search for a required object if it isn't in memory?
Does the 'read-through' feature work for queries?
You can query Hazelcast for data held in Hazelcast, or for data external to Hazelcast, using the same SQL:
SELECT * FROM etc.
For the latter, see the documentation.
Unfortunately, there is not currently an implementation for Mongo, so for now you are blocked, sorry.
Read-through (or query-through) would also require that the remote store have the same format as the IMap, which is not otherwise required for MapStore.
If you can't host all your Mongo data in Hazelcast (which would eliminate the need to query Mongo), then you could consider some sort of flyweight design pattern and perhaps hold a projection.
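To make the in-memory limitation concrete: predicate and SQL queries against an IMap only see entries that have already been loaded into memory; they do not fall through to the MapStore. Below is a minimal sketch of a MapStore against the Hazelcast 3.x API; the ExternalMapStore name and the key/value types are illustrative, and the method bodies are placeholders rather than a Mongo implementation.

import com.hazelcast.core.MapStore;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

public class ExternalMapStore implements MapStore<Long, String> {

    // MapLoader side: fetch a single value from the external store.
    @Override
    public String load(Long key) {
        return null; // placeholder
    }

    @Override
    public Map<Long, String> loadAll(Collection<Long> keys) {
        Map<Long, String> result = new HashMap<>();
        for (Long key : keys) {
            String value = load(key);
            if (value != null) {
                result.put(key, value);
            }
        }
        return result;
    }

    // Returning all keys here makes Hazelcast eagerly load every entry,
    // which is one way to make the whole data set visible to queries.
    @Override
    public Iterable<Long> loadAllKeys() {
        return null; // placeholder: returning null disables eager loading
    }

    @Override
    public void store(Long key, String value) { /* write-through */ }

    @Override
    public void storeAll(Map<Long, String> map) { /* batch write-through */ }

    @Override
    public void delete(Long key) { /* remove from external store */ }

    @Override
    public void deleteAll(Collection<Long> keys) { /* batch remove */ }
}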

Is there a memory limit that we can configure in Hazelcast for all maps combined?

I would like to know if there is a memory limit that we can configure in Hazelcast for all maps combined, say at the instance level.
Since my Hazelcast node runs embedded inside my app's JVM, I would like to put this restriction in place.
How would that work when you reach the memory limit? Which map will evict the entries?
Since you're starting Hazelcast with a defined heap size, you can use heap-usage-based eviction on all maps. Or you can just use a single map if you want a single eviction configuration.
@Haresh, you have the option to define an eviction policy with USED_HEAP_PERCENTAGE or FREE_HEAP_PERCENTAGE, as described here: https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#configuring-map-eviction
You need to define this eviction config for all map configs, and for the default map config as well. This will help you limit heap usage, but it can cause other unexpected behaviors, as @sertug mentioned.
Assume you have 2 maps, a and b. You fill up a and only have enough room for 1 more entry in the JVM. When you start pushing data to b, it will only store the last entry and evict the previous one, since a map can only evict records from its own entries, not from a different map.
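To illustrate the suggested setup, here is a minimal sketch using the Hazelcast 3.x programmatic config, applying FREE_HEAP_PERCENTAGE eviction through the default map config so it covers every map on the embedded member; the 20% threshold is just an example value.

import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MaxSizeConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class HeapBoundedMember {
    public static void main(String[] args) {
        Config config = new Config();

        // The "default" map config applies to every map created without
        // an explicit config of its own.
        MapConfig defaultMapConfig = config.getMapConfig("default");
        defaultMapConfig.setEvictionPolicy(EvictionPolicy.LRU);

        // Start evicting when free heap drops below 20% of max heap.
        // Note the caveat above: each map evicts only its own entries.
        defaultMapConfig.setMaxSizeConfig(new MaxSizeConfig(
                20, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_PERCENTAGE));

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}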

Does index on IMap work with Kryo serialization?

I've created a Hazelcast IMap and defined an index on a value field. Does the index work with Kryo serialization? I remember that in earlier versions of Hazelcast, indexes used to work only when the in-memory format was OBJECT.
Indexes do indeed work with Kryo serialization. When an entry is being indexed, Hazelcast deserializes it to extract the indexed fields.
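A short sketch of the scenario against the Hazelcast 3.x API; it assumes a Kryo-based StreamSerializer is already registered for the value type in the serialization config, and the Person class and "people" map name are illustrative.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.query.Predicates;
import java.io.Serializable;
import java.util.Collection;

public class IndexedMapExample {

    // Stand-in value type; in the Kryo scenario a StreamSerializer
    // would be registered for it instead of relying on Serializable.
    public static class Person implements Serializable {
        private final int age;
        public Person(int age) { this.age = age; }
        public int getAge() { return age; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<Long, Person> people = hz.getMap("people");

        // Ordered index on the "age" attribute; Hazelcast deserializes
        // each entry once at index time to extract the field, even when
        // the in-memory format is BINARY.
        people.addIndex("age", true);

        people.put(1L, new Person(42));
        Collection<Person> adults =
                people.values(Predicates.greaterEqual("age", 18));
        System.out.println(adults.size());
    }
}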

Hazelcast 3.4: how to avoid deserialization from near cache and get original item

Starting from version 3.x, Hazelcast returns a copy of the original object stored in a distributed map with near cache enabled, as opposed to version 2.5, where the original object was returned.
That older behavior allowed local modification of entries stored in the map, and GET operations were fast.
Now, version 3.x stores a binary object in the near cache, which causes deserialization on every GET and significantly impacts performance.
Is it possible to configure Hazelcast 3.4.2's map near cache to return a reference to the original object, rather than a copy of the original entry?
In the <near-cache> section, if you set
<in-memory-format>OBJECT</in-memory-format>
AND
<cache-local-entries>true</cache-local-entries>
you should get the same instance returned.
This works for both client and member.
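For an embedded member, the same two settings can be applied programmatically. Here is a minimal sketch against the Hazelcast 3.x config API; the "orders" map name is just an example.

import com.hazelcast.config.Config;
import com.hazelcast.config.InMemoryFormat;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class NearCacheObjectFormat {
    public static void main(String[] args) {
        Config config = new Config();

        NearCacheConfig nearCache = new NearCacheConfig();
        // Store near-cache entries deserialized, so GETs skip deserialization.
        nearCache.setInMemoryFormat(InMemoryFormat.OBJECT);
        // Also cache entries owned by this member locally.
        nearCache.setCacheLocalEntries(true);

        config.getMapConfig("orders").setNearCacheConfig(nearCache);
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}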
I do not think there is a way to get the original item.
To avoid deserialization, you could try setting
<in-memory-format>OBJECT</in-memory-format>
in the <near-cache> configuration. This way, Hazelcast will store data in the near cache in object form, and deserialization will not be needed. But I suspect this only works if you configure the <near-cache> on the client side, because if the <near-cache> is on the node, you will still need serialization to pass the object from the node to the client.

ServiceStack cache size

In ServiceStack, when using the in-memory cache, is there a way to find the actual size of the cached objects in bytes?
The in-memory cache just stores everything in a ConcurrentDictionary; there's no built-in way to count the bytes.
One solution would be to create your own fork of it and add whatever instrumentation you need on each write.
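The instrumentation idea is language-agnostic: wrap the cache and measure each value's serialized size as it is written. A rough sketch of that approach in Java (ServiceStack's cache itself is C#; the SizeTrackingCache class and everything in it is illustrative):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class SizeTrackingCache {
    private final ConcurrentHashMap<String, Object> store = new ConcurrentHashMap<>();
    private final AtomicLong approxBytes = new AtomicLong();

    public void set(String key, Serializable value) throws IOException {
        // Serialize once on write to estimate the entry's footprint.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(value);
        }
        store.put(key, value);
        approxBytes.addAndGet(buf.size());
        // For brevity this ignores overwrites and removals, which would
        // need the old entry's size subtracted.
    }

    public Object get(String key) {
        return store.get(key);
    }

    public long approximateSizeInBytes() {
        return approxBytes.get();
    }
}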
