Cache-aside pattern on Azure with Redis Cache

We are trying to implement the cache-aside pattern on Azure. When reading data, we first check whether it exists in the cache; if it does, we serve it from the cache. Otherwise we fetch it from the database, populate the cache, and return it.
If the cache is not accessible (due to a transient or non-transient issue), we ignore it.
On update, however, we first update the database and then delete the cache key. What should we do if the cache is not accessible? For transient errors we can implement a retry strategy. If the cache is still not accessible after retrying, we should roll back our database transaction; otherwise, if the cache comes back later, it would be out of sync with the database. But while we are retrying, anyone who reads this data will get an updated value that may later be rolled back (if the cache never responds).
Thanks In Advance
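The flow the question describes (read-through on a miss, delete-on-update, tolerate a dead cache) can be sketched as follows. This is only an illustration: in-memory Maps stand in for Azure Cache for Redis and the database, all names are made up, and a real Redis client would make these calls asynchronously.

```javascript
// Cache-aside sketch. In-memory Maps stand in for the Redis cache and
// the database; a real client would be async and have real failures.
const cache = new Map();
const db = new Map([['42', { id: '42', name: 'widget' }]]);

function read(id) {
  try {
    if (cache.has(id)) return cache.get(id);   // cache hit: serve from cache
  } catch (e) { /* cache unreachable: fall through to the database */ }
  const row = db.get(id);                      // cache miss: load from the DB
  try { cache.set(id, row); } catch (e) { /* best effort: ignore cache errors */ }
  return row;
}

function update(id, value) {
  db.set(id, value);        // 1. write to the system of record first
  try {
    cache.delete(id);       // 2. then invalidate the cache key
  } catch (e) {
    // Retry with backoff here. If the cache stays down, a short TTL on
    // every cached entry bounds how long a stale value can be served,
    // without coupling the committed DB transaction to cache availability.
  }
}
```

Putting a TTL on every cached entry is a common alternative to rolling back the database transaction: readers may briefly see stale data, but the cache converges with the database on its own once it is reachable again.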

Related

Redis write-behind and cache-aside strategy

I want to implement write-behind and cache-aside strategies at the same time. Basically, I want to read data from the cache if it exists there; otherwise, fetch the data from the database and set it in the cache so the next request can read it from the cache.
I also want update functionality: when anyone updates data, the cache is updated first, and then after some interval (e.g. every 2 minutes) the cache automatically updates the database.
I don't know which Redis function can do this for me; can anybody point me to a proper resource? Sample code with this functionality would be very helpful.
I have googled a lot, and everything explains the write-behind and cache-aside strategies, but nothing shows the actual code and functions to use for a write-behind cache in Node.js.
I am using Node.js for this application.
Thanks a lot!
Take a look at the following project, which uses RedisGears (https://oss.redislabs.com/redisgears/) to implement write-behind and write-through on Redis:
https://github.com/RedisGears/rgsync
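rgsync runs the write-behind logic inside Redis itself, but the core idea can also be sketched in plain Node.js. Again an illustration only: Maps stand in for Redis and the database, and the names are made up.

```javascript
// Write-behind sketch: writes go to the cache immediately and are flushed
// to the database in the background. Maps stand in for Redis and the DB.
const cache = new Map();
const db = new Map();
const dirty = new Set();   // keys written to the cache but not yet persisted

function write(key, value) {
  cache.set(key, value);   // update the cache synchronously
  dirty.add(key);          // remember that the DB is now behind
}

function flush() {
  // persist all pending writes, then mark everything clean
  for (const key of dirty) db.set(key, cache.get(key));
  dirty.clear();
}

// In a real app the flush would run on a timer, e.g. every 2 minutes:
// setInterval(flush, 2 * 60 * 1000);
```

Note the trade-off: between flushes the database is stale, and a crash loses the dirty keys, which is exactly why tools like rgsync keep the queue inside Redis rather than in process memory.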

DDD aggregate repository and caching repository

I have a product repository, and I want to use Redis as a cache, so I created a cache repository.
When I want to get a product, I first go to the cache repository; if the product does not exist there, I query the main database. If the product exists there, I write it to the cache and return it.
Option 1) Inject the cache repository into the product repository via DI and use it there.
Option 2) Get the cache repository in the application layer, in the command handler, alongside the product repository, and use both separately.
It seems to me that you are driven by a technical requirement (i.e. the usage of Redis) rather than a business requirement (i.e. why do you need caching? A performance issue? Latency?).
But, to sum up a great post from another SO thread, Which layer should I implement caching of lookup data from database in a DDD application?, you have the following options:
Manage the cache in the application layer, directly in the application service. This way, you have full control over whether to use the cache for each query/command.
Hide the cache in the repository. But then every client of your repository will use the cache, and that may be something you want control over.
Either way, one of the most common approaches is the proxy pattern, where the method call is first intercepted by a proxy whose role is to return data from the cache if it already has it, and otherwise delegate the call to the original object.
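A minimal sketch of that proxy, with illustrative names and a Map standing in for the main database:

```javascript
// The "real" repository: looks products up in the main database
// (a Map here, purely for illustration).
class ProductRepository {
  constructor(rows) { this.rows = rows; }
  findById(id) { return this.rows.get(id); }
}

// The proxy: same interface, intercepts reads, serves from its cache
// when possible and delegates to the wrapped repository on a miss.
class CachingProductRepository {
  constructor(inner) { this.inner = inner; this.cache = new Map(); }
  findById(id) {
    if (this.cache.has(id)) return this.cache.get(id);  // cache hit
    const product = this.inner.findById(id);            // delegate on a miss
    if (product !== undefined) this.cache.set(id, product);
    return product;
  }
}
```

Because both classes expose the same interface, the command handler only depends on "a product repository" and does not know whether caching is happening, which keeps the caching decision out of the domain model.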

How to change the Hazelcast cache refresh

We have a Hazelcast problem. For the last few days it has not been refreshing itself, and we have to manually refresh the cache from the web console.
What can I do about this?
The other problem is: how can I force Hazelcast to read from the DB if an entry does not exist in the cache?
It is not clear what you mean by refresh. Normally, users configure a TTL for the entries in a map. You can also implement a MapStore that reads from the DB when an entry is not available in the cache: when your application reads an entry that doesn't exist in the cache, Hazelcast calls the MapStore to read it from the DB. After the TTL elapses, the entry is removed from the cache, and the next time you read it, it is refreshed.
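As a rough illustration, the TTL and MapStore described above are both configured on the map; `com.example.ProductMapStore` is a placeholder for your own Java class implementing Hazelcast's `MapStore` interface:

```xml
<!-- Illustrative hazelcast.xml fragment: entries expire after one hour,
     and the MapStore loads missing entries from the DB on a cache miss. -->
<map name="products">
  <time-to-live-seconds>3600</time-to-live-seconds>
  <map-store enabled="true">
    <class-name>com.example.ProductMapStore</class-name>
  </map-store>
</map>
```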

What if data cached in Varnish is changed?

What if some data saved in the Varnish cache is changed later on the backend server? When a request comes in, does Varnish return the old data or the updated data?
The old data. To be precise: it returns the data as it was at the time it was cached, as long as the expiry time of the cached object has not yet been reached. If you want it to update before then, you need to purge or ban the item in the cache. See the chapter on purging and banning in the Varnish documentation for implementation details.
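For example, a common VCL 4.0 snippet (adapted from the purging examples in the Varnish documentation) lets the backend invalidate an object with an HTTP PURGE request from localhost after it changes:

```vcl
# Only allow PURGE from localhost; everyone else gets a 405.
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Not allowed"));
        }
        return (purge);
    }
}
```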

Is it possible to securely delete data from memcached?

As the title says: if I store something (e.g. website session data) in memcached, is it possible to remove the data securely so that it would not be evident in a later memory dump?
I assume delete just unassigns the memory rather than wiping it. Could I manually junk the allocated memory by updating the key with random data before deleting it?
Obviously, encrypting the data before storing it would be a solution, but it also adds a performance overhead.
You can't: replacing a value is another allocation and will not overwrite the old value in memory.
Check the FAQ. If you want to secure your data because you are in a hostile environment, use SASL authentication.
And make sure no one has access to memcached from the outside! Bind it to localhost.
Excerpt from the manual:
When do expired cached items get deleted from the cache?
memcached uses lazy expiration, which means it uses no extra CPU expiring items. When an item is requested (a get request), it checks the expiration time to see if the item is still valid before returning it to the client.
Similarly, when adding a new item to a full cache, it will look for expired items to replace before replacing the least used items in the cache.
