Is there a way to modify a JCache's expiry policy that has been declared declaratively?

I'm using the Hazelcast 3.12 JCache implementation. My cache is declared declaratively (via hazelcast-config.xml), but I want to be able to change the ExpiryPolicy's duration dynamically, as the timeout value is only available programmatically.
I've looked at the docs for ICache but do not see any methods that would allow me to retrieve and/or modify the expiry policy for the cache as a whole.
My cache is declared as:
<cache name="toto">
    <async-backup-count>1</async-backup-count>
    <backup-count>1</backup-count>
    <cache-entry-listeners>
        <cache-entry-listener old-value-required="true">
            <cache-entry-listener-factory class-name="cache.UserRolesEntryListenerStaticFactory"/>
        </cache-entry-listener>
    </cache-entry-listeners>
    <key-type class-name="java.lang.String"/>
    <management-enabled>true</management-enabled>
    <statistics-enabled>true</statistics-enabled>
    <quorum-ref>1</quorum-ref>
    <partition-lost-listeners></partition-lost-listeners>
</cache>
and I would like to set/update the expiry policy when it is retrieved as:
@Produces
@Singleton
@UserRolesCache
public Cache<String, String[]> createUserRoleCache(@HazelcastDistributed CacheManager cacheManager) {
    Cache cache = cacheManager.getCache("toto");
    // get expiry timeout from a 3rd service
    int timeout = configService.getCacheExpiry();
    // how to set the expiry policy here???
    // cache.setExpiryPolicy(.....) ?????
}
Is this feasible using the JCache or Hazelcast API?

There is no explicit way to change the default expiry policy of a Cache (in either the JCache or Hazelcast-specific APIs).
You can achieve the same effect by configuring your Cache with a custom Factory<ExpiryPolicy>. In your ExpiryPolicy implementation you can consult your external service for the current expiry duration and return that, so the JCache implementation applies it. Note that since the expiry policy is queried on each entry creation/access/update, the ExpiryPolicy method implementations should not involve any remote service queries or database access, otherwise you will experience high latency. For example, it is best to register your expiry policy as a listener on your external service (if supported), or to have a separate executor schedule queries to the external service for updates.
Example implementation using JCache API:
class Scratch {
    public static void main(String[] args) throws InterruptedException {
        MutableConfiguration<String, String[]> roleCacheConfig = new MutableConfiguration<>();
        // other config here...
        roleCacheConfig.setExpiryPolicyFactory(new CustomExpiryPolicyFactory());
        CachingProvider provider = Caching.getCachingProvider();
        CacheManager manager = provider.getCacheManager();
        Cache<String, String[]> cache = manager.createCache("test", roleCacheConfig);
        cache.put("a", new String[]{}); // consults getExpiryForCreation
        Thread.sleep(1000);
        cache.get("a"); // consults getExpiryForAccess
        Thread.sleep(1000);
        cache.put("a", new String[]{}); // consults getExpiryForUpdate
    }

    public static class CustomExpiryPolicyFactory implements Factory<ExpiryPolicy>, Serializable {
        @Override
        public ExpiryPolicy create() {
            // initialize the custom expiry policy: at this point
            // the custom expiry policy should be registered as a listener to the
            // external service publishing expiry info, or otherwise configured
            // to poll the external service for new expiry info
            CustomExpiryPolicy expiryPolicy = new CustomExpiryPolicy(120, 30, 120);
            return expiryPolicy;
        }
    }

    public static class CustomExpiryPolicy implements ExpiryPolicy {
        private volatile Duration expiryForCreation;
        private volatile Duration expiryForAccess;
        private volatile Duration expiryForUpdate;

        public CustomExpiryPolicy(long secondsForCreation,
                                  long secondsForAccess,
                                  long secondsForUpdate) {
            this.expiryForCreation = new Duration(TimeUnit.SECONDS, secondsForCreation);
            this.expiryForAccess = new Duration(TimeUnit.SECONDS, secondsForAccess);
            this.expiryForUpdate = new Duration(TimeUnit.SECONDS, secondsForUpdate);
        }

        // assuming this is invoked from the external service whenever there is a change
        public void onExpiryChanged(long secondsForCreation,
                                    long secondsForAccess,
                                    long secondsForUpdate) {
            this.expiryForCreation = new Duration(TimeUnit.SECONDS, secondsForCreation);
            this.expiryForAccess = new Duration(TimeUnit.SECONDS, secondsForAccess);
            this.expiryForUpdate = new Duration(TimeUnit.SECONDS, secondsForUpdate);
        }

        @Override
        public Duration getExpiryForCreation() {
            return expiryForCreation;
        }

        @Override
        public Duration getExpiryForAccess() {
            return expiryForAccess;
        }

        @Override
        public Duration getExpiryForUpdate() {
            return expiryForUpdate;
        }
    }
}
You can supply your custom expiry policy factory class name in declarative Hazelcast XML config like this:
<cache name="toto">
    <async-backup-count>1</async-backup-count>
    <backup-count>1</backup-count>
    ...
    <expiry-policy-factory class-name="com.example.cache.CustomExpiryPolicyFactory"/>
    ...
</cache>
As a side note, ICache, the Hazelcast-specific extension of the Cache interface, has methods that let you perform operations on a key or set of keys with a custom expiry policy specified per key (but not change the cache-wide expiry policy).
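For illustration, a minimal sketch of those per-key ICache overloads (not verified against a running cluster; the key, value, and 60-second duration are made up, and ModifiedExpiryPolicy is one of the standard javax.cache.expiry policies):

```java
import java.util.concurrent.TimeUnit;

import javax.cache.Cache;
import javax.cache.expiry.Duration;
import javax.cache.expiry.ModifiedExpiryPolicy;

import com.hazelcast.cache.ICache;

public class PerKeyExpirySketch {
    static void putWithCustomExpiry(Cache<String, String[]> cache) {
        // unwrap the JCache Cache to the Hazelcast-specific ICache
        ICache<String, String[]> icache = cache.unwrap(ICache.class);
        // this put applies a 60-second modified-expiry policy to this key only;
        // the cache-wide default policy is left untouched
        icache.put("some-key", new String[]{"admin"},
                new ModifiedExpiryPolicy(new Duration(TimeUnit.SECONDS, 60)));
        // recent 3.x versions also expose a per-key setter for an existing entry
        icache.setExpiryPolicy("some-key",
                new ModifiedExpiryPolicy(new Duration(TimeUnit.SECONDS, 120)));
    }
}
```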

Does the WindowsAzure.Storage library for Xamarin use NSUrlSession for file uploads?

Problem Statement: We have a requirement to upload log data to Azure Storage from a Xamarin.iOS application. The logs are not created by the user of the application, and there's no constraint on the user to keep the application open for any amount of time after the logs are generated. We want to reliably upload our logs with a couple points in mind:
The user might send the app into the background
The file sizes can be up to 15MB
We don't care when we get them. We're open to scheduling a task for this.
In looking at potential solutions to this problem, the Xamarin documentation states that in iOS7+:
NSURLSession allows us to create tasks to:
Transfer content through network and device interruptions.
Upload and download large files ( Background Transfer Service ).
So it seems like NSURLSession is a good candidate for this sort of work, but I wonder if I am reinventing the wheel. Does the WindowsAzure.Storage client library respect app backgrounding with an upload implementation based on NSURLSession, or if I want to upload the data in the background, is it necessary to upload to an intermediate server I control with a POST method, and then relay data to Azure Storage? There doesn't seem to be any indication from the public Azure documentation that uploads can be done via scheduled task.
I got this working. I've simplified the classes and methods into a single method. Only the necessities are here.
public void UploadFile(File playbackFile)
{
    /// Specify your credentials
    var sasURL = "?<the sastoken>";
    /// Azure blob storage URL
    var storageAccount = "https://<yourstorageaccount>.blob.core.windows.net/<your container name>";
    /// specify a UNIQUE session name
    var configuration =
        NSUrlSessionConfiguration.CreateBackgroundSessionConfiguration("A background session name");
    /// create the session with a delegate to receive callbacks and debug
    var session = NSUrlSession.FromConfiguration(
        configuration,
        new YourSessionDelegate(),
        new NSOperationQueue());
    /// Construct the blob endpoint
    var url = $"{storageAccount}/{playbackFile.Name}{sasURL}";
    var uploadUrl = NSUrl.FromString(url);
    /// Add any headers for Blob PUT. x-ms-blob-type is REQUIRED
    var dic = new NSMutableDictionary();
    dic.Add(new NSString("x-ms-blob-type"), new NSString("BlockBlob"));
    /// Create the request with NSMutableUrlRequest
    /// A default NSUrlRequest.FromURL() is immutable with a GET method
    var request = new NSMutableUrlRequest(uploadUrl);
    request.Headers = dic;
    request.HttpMethod = "PUT";
    /// Create the task
    var uploadTask = session.CreateUploadTask(
        request,
        NSUrl.FromFilename(playbackFile.FullName));
    /// Start the task
    uploadTask.Resume();
}
/// Delegate to receive callbacks. Implementations are omitted for brevity
public class YourSessionDelegate : NSUrlSessionDataDelegate
{
    public override void DidBecomeInvalid(NSUrlSession session, NSError error)
    {
        Console.WriteLine(error.Description);
    }

    public override void DidSendBodyData(NSUrlSession session, NSUrlSessionTask task, long bytesSent, long totalBytesSent, long totalBytesExpectedToSend)
    {
        Console.WriteLine(bytesSent);
    }

    public override void DidReceiveData(NSUrlSession session, NSUrlSessionDataTask dataTask, NSData data)
    {
        Console.WriteLine(data);
    }

    public override void DidCompleteWithError(NSUrlSession session, NSUrlSessionTask task, NSError error)
    {
        var uploadTask = task as NSUrlSessionUploadTask;
        Console.WriteLine(error?.Description);
    }

    public override void DidReceiveResponse(NSUrlSession session, NSUrlSessionDataTask dataTask, NSUrlResponse response, Action<NSUrlSessionResponseDisposition> completionHandler)
    {
        Console.WriteLine(response);
    }

    public override void DidFinishEventsForBackgroundSession(NSUrlSession session)
    {
        using (AppDelegate appDelegate = UIApplication.SharedApplication.Delegate as AppDelegate)
        {
            var handler = appDelegate.BackgroundSessionCompletionHandler;
            if (handler != null)
            {
                appDelegate.BackgroundSessionCompletionHandler = null;
                handler();
            }
        }
    }
}
Helpful documentation:
https://learn.microsoft.com/en-us/rest/api/storageservices/put-blob
https://developer.apple.com/documentation/foundation/nsmutableurlrequest/1408793-setvalue
https://learn.microsoft.com/en-us/dotnet/api/foundation.insurlsessiontaskdelegate?view=xamarin-ios-sdk-12
Hopefully someone finds this useful and spends less time on this than I did. Thanks @SushiHangover for pointing me in the right direction.

Hazelcast IMap TTL Expiry

How do I invoke a method to sync the data to some DB or Kafka once the TTL set during the put method of the IMap class expires?
e.g. IMap.put(key, value, TTL, TimeUnit.SECONDS);
If the above TTL is set to, say, 10 seconds, I must call some store or other mechanism that syncs that key and value to the DB or Kafka in real time. As of now, when I tried the store method, it is called immediately instead of after the 10-second wait.
You may register an EntryExpiredListener in your map config.
It feeds on two sources of expiration-based eviction: max-idle-seconds and time-to-live-seconds.
Example Listener class:
@Slf4j
public class MyExpiredEntryListener implements EntryExpiredListener<String, String> {
    @Override
    public void entryExpired(EntryEvent<String, String> event) {
        log.info("entry Expired {}", event);
    }
}
You can add this configuration programmatically, or set the map config via the XML config file.
Example usage:
public static void main(String[] args) {
    Config config = new Config();
    MapConfig mapConfig = config.getMapConfig("myMap");
    mapConfig.setTimeToLiveSeconds(10);
    mapConfig.setEvictionPolicy(EvictionPolicy.RANDOM);
    HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    IMap<String, String> map = hz.getMap("myMap");
    map.addEntryListener(new MyExpiredEntryListener(), true);
    for (int i = 0; i < 100; i++) {
        String uuid = UUID.randomUUID().toString();
        map.put(uuid, uuid);
    }
}
You will see log lines like the one below when running this implementation:
entry Expired EntryEvent{entryEventType=EXPIRED, member=Member [192.168.1.1]:5701 - ca76c6d8-abe0-4efe-a6a6-24330657675b this, name='myMap', key=70ee594c-ffea-4584-aefe-1148b9fcdf9f, oldValue=70ee594c-ffea-4584-aefe-1148b9fcdf9f, value=null, mergingValue=null}
Also, you can use other entry listeners according to your requirements.
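The declarative equivalent of the TTL part of the programmatic config above would look roughly like this (a sketch against the Hazelcast 3.x XML schema; the listener itself is still registered in code):

```xml
<map name="myMap">
    <time-to-live-seconds>10</time-to-live-seconds>
    <eviction-policy>RANDOM</eviction-policy>
</map>
```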

Global Static Dictionary Initialization from Database in Webapi

I want to initialize a global Dictionary from the database in my Web API. Do I need to inject my DbContext in Global.asax or OWIN Startup? Any example would be much appreciated.
Any kind of initialization can be done in your custom OWIN Startup class, like this:
using Microsoft.Owin;
using Microsoft.Owin.Security.OAuth;
using Owin;
using System;

[assembly: OwinStartup(typeof(YourNamespace.Startup))]
namespace YourNamespace
{
    public class Startup
    {
        public Dictionary<string, string> Table { get; private set; }

        public void Configuration(IAppBuilder app)
        {
            // token generation
            app.UseOAuthAuthorizationServer(new OAuthAuthorizationServerOptions
            {
                AllowInsecureHttp = false,
                TokenEndpointPath = new PathString("/token"),
                AccessTokenExpireTimeSpan = TimeSpan.FromHours(8),
                Provider = new SimpleAuthorizationServerProvider()
            });
            // token consumption
            app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());
            app.UseWebApi(WebApiConfig.Register());
            Table = ... Connect from DB and fill your table logic ...
        }
    }
}
After that you can use your Startup.Table property from your application.
In general, it is bad practice to access objects through static fields in ASP.NET applications, because this may lead to bugs that are hard to detect and reproduce; this is especially true for mutable/non-thread-safe objects like Dictionary.
I assume you want to cache some DB data in memory to avoid excessive SQL queries. It is a good idea to use standard ASP.NET caching for this purpose:
public IDictionary GetDict() {
    var dict = HttpRuntime.Cache.Get("uniqueCacheKey") as IDictionary;
    if (dict == null) {
        dict = doLoadDictionaryFromDB(); // your code that loads data from DB
        HttpRuntime.Cache.Add("uniqueCacheKey", dict,
            null, Cache.NoAbsoluteExpiration,
            new TimeSpan(0, 5, 0), // cache at least for 5 minutes after last access
            CacheItemPriority.Normal, null);
    }
    return dict;
}
This approach lets you choose an appropriate expiration policy (without reinventing the wheel with a static dictionary).
If you still want to use a static dictionary, you can populate it on application start (global.asax):
void Application_Start(object sender, EventArgs e)
{
    // your code that initializes the dictionary with data from DB
}

Azure Redis Cache and MVC Child Actions

I have successfully implemented Azure Redis Cache using the Microsoft RedisOutputCacheProvider from NuGet which works as expected for general pages.
[ChildActionOnly]
[ChildActionOutputCache(CacheProfile.StaticQueryStringComponent)]
public ActionResult Show(int id)
{
    // some code
}
However, I can't seem to get it to work for child actions. Prior to using Redis Cache, it was working using the default OutputCacheProvider.
Does anyone have any ideas, or is it simply a limitation?
Thanks in advance
In your Global.asax.cs, set a custom child action output cache that talks to Redis:
protected void Application_Start()
{
    // Register Custom Memory Cache for Child Action Method Caching
    OutputCacheAttribute.ChildActionCache = new CustomMemoryCache("My Cache");
}
This cache should derive from MemoryCache and implement the following members:
/// <summary>
/// A Custom MemoryCache Class.
/// </summary>
public class CustomMemoryCache : MemoryCache
{
    public CustomMemoryCache(string name)
        : base(name)
    {
    }

    public override bool Add(string key, object value, DateTimeOffset absoluteExpiration, string regionName = null)
    {
        // Do your custom caching here, in my example I'll use standard Http Caching
        HttpContext.Current.Cache.Add(key, value, null, absoluteExpiration.DateTime,
            System.Web.Caching.Cache.NoSlidingExpiration, System.Web.Caching.CacheItemPriority.Normal, null);
        return true;
    }

    public override object Get(string key, string regionName = null)
    {
        // Do your custom caching here, in my example I'll use standard Http Caching
        return HttpContext.Current.Cache.Get(key);
    }
}
More info on my blog post

Is it possible to inject an instance of an object into a service at runtime?

I have created a plugin which inspects a param in the query string, loads a user object based on this ID, and populates any request DTO with it. (All my request DTOs inherit from BaseRequest, which has a CurrentUser property.)
public class CurrentUserPlugin : IPlugin
{
    public IAppHost CurrentAppHost { get; set; }

    public void Register(IAppHost appHost)
    {
        CurrentAppHost = appHost;
        appHost.RequestFilters.Add(ProcessRequest);
    }

    public void ProcessRequest(IHttpRequest request, IHttpResponse response, object obj)
    {
        var requestDto = obj as BaseRequest;
        if (requestDto == null) return;
        if (request.QueryString["userid"] == null)
        {
            throw new ArgumentNullException("No userid provided");
        }
        var dataContext = CurrentAppHost.TryResolve<IDataContext>();
        requestDto.CurrentUser = dataContext.FindOne<User>(ObjectId.Parse(requestDto.uid));
        if (requestDto.CurrentUser == null)
        {
            throw new ArgumentNullException(string.Format("User [userid:{0}] not found", requestDto.uid));
        }
    }
}
I need to have this User object available in my services, but I don't want to inspect the DTO every time and extract it from there. Is there a way to make data from plugins globally available to my services? I am also wondering if there is another way of instantiating this object, as the plugin is not run for my unit tests, since I call my service directly.
So, my question is: instead of using plugins, can I inject a user instance into my services at runtime? I am already using IoC to inject different database handlers depending on whether I'm running in test mode, but I can't see how to achieve this for the User object, which would need to be instantiated at the beginning of each request.
Below is an example of how I inject my DataContext in appHost.
container.Register(x => new MongoContext(x.Resolve<MongoDatabase>()));
container.RegisterAutoWiredAs<MongoContext, IDataContext>();
Here is an example of my BaseService. Ideally I would like to have a CurrentUser property on my service also.
public class BaseService : Service
{
    public BaseService(IDataContext dataContext, User user)
    {
        DataContext = dataContext;
        CurrentUser = user; // How can this be injected at runtime?
    }

    public IDataContext DataContext { get; private set; }
    public User CurrentUser { get; set; }
}
Have you thought about using the IHttpRequest Items dictionary to store objects? You can access these Items from any filter or service, or anywhere you can access IHttpRequest. See the source for IHttpRequest.
Just be mindful of the order in which your attributes, services, and plugins execute, and when you store the item in the Items dictionary.
Adding:
We don't want to use HttpContext inside the Service because we want to use the Service in our tests directly.
Advantages for living without it:

If you don't need to access the HTTP Request context there is nothing stopping you from having your same IService implementation processing requests from a message queue which we've done for internal projects (which incidentally is the motivation behind the asynconeway endpoint, to signal requests that are safe for deferred execution).

http://www.servicestack.net/docs/framework/accessing-ihttprequest
And we don't use HTTP calls to run tests.
So our solution is:
public class UserService
{
    private readonly IDataContext _dataContext;

    public UserService(IDataContext dataContext)
    {
        _dataContext = dataContext;
    }

    public User GetUser()
    {
        var uid = HttpContext.Current.Request.QueryString["userId"];
        return _dataContext.Get<User>(uid);
    }
}
and
container.Register(x => new UserService(x.Resolve<IDataContext>()).GetUser()).ReusedWithin(ReuseScope.Request);
This is the service signature:
public SomeService(IDataContext dataContext, User user) { }
Any suggestions?
I need to have this User object available in my services but I don't want to inspect the DTO every time and extract from there
How will your application know about the user if you're not passing the 'userid' in the query string? Could you store the user data in the Session? Using a Session assumes the client is connected to your app and persists a session id (ss-id or ss-pid cookie in ServiceStack) in the client that can be looked up on the server to get the session data. If you can use the Session, you can retrieve the data from your service with something like
base.Session["UserData"] or base.SessionAs<User>();
Note: you will need to save your User data to the Session
Is there a way to make data from plugins globally available to my services? but I can't see how to achieve this for User object which would need to be instantiated at the beginning of each request.
This sounds like you want a global request filter. You're kind of already doing this, but you're wrapping it in a Plugin.
