Add cache at runtime to GridGain / reconfigure GridGain at runtime - gridgain

Is there a means of reconfiguring the GridCacheConfiguration at runtime for GridGain?
The end goal is to be able to add a grid cache at runtime after having started up the Grid.
final GridConfiguration gridConfiguration = new GridConfiguration();
gridConfiguration.setMarshaller(new GridOptimizedMarshaller());
Grid grid = GridGain.start(gridConfiguration);
...
<later on>
GridCacheConfiguration newCacheConfig = ...; //defines newConfig
grid.configuration().setCacheConfiguration(newCacheConfig);
grid.cache("newConfig"); // <-- throws a cache not defined error!

Adding caches usually has to do with handling different data types (generics), which GridGain addresses with GridCacheProjections, like so:
GridCacheProjection<Integer, MyType> prj = cache.projection(Integer.class, MyType.class);
You can create as many different projections from the same cache as needed. In addition to specifying data types, you can also use projections to turn cache flags on and off, or to provide a filtered view of the cache with projection filters.
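The actual GridCacheProjection API is Java and the cache enforces the types itself, but the underlying idea - a typed, filtered view over one shared cache - can be sketched in a few self-contained lines (all names here are invented for illustration, not GridGain's API):

```javascript
// Illustrative only: GridGain's real projections are Java (GridCacheProjection).
// This sketch shows the idea of a typed, filtered view over one shared cache.
function projection(cache, keyCheck, valueCheck) {
  return {
    get(key) {
      const value = cache.get(key);
      return keyCheck(key) && value !== undefined && valueCheck(value) ? value : undefined;
    },
    put(key, value) {
      if (!keyCheck(key) || !valueCheck(value)) {
        throw new TypeError('entry does not match projection types');
      }
      cache.set(key, value);
    },
  };
}

const cache = new Map();           // stands in for the shared cache
cache.set(1, 'one');               // an <Integer, String>-style entry
cache.set('k', 42);                // a differently-typed entry in the same cache

const intToString = projection(
  cache,
  (k) => Number.isInteger(k),
  (v) => typeof v === 'string'
);

console.log(intToString.get(1));   // 'one'
console.log(intToString.get('k')); // undefined: filtered out of this view
```

Writes through the view land in the same underlying cache, which mirrors how all projections share one cache's data.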


How can I manipulate the Audit screen (SM205510) through code

I'm trying to manipulate the Audit screen (SM205510) through code, using a graph object. The operation of the screen has processes that seem to work when a screen ID is selected in the header. This is my code to create a new record:
using PX.Data;
using PX.Objects.SM;
var am = PXGraph.CreateInstance<AUAuditMaintenance>();
AUAuditSetup auditsetup = new AUAuditSetup();
auditsetup.ScreenID = "GL301000";
auditsetup = am.Audit.Insert(auditsetup);
am.Actions.PressSave();
Now, when I execute the code above, it creates a record in the AUAuditSetup table just fine - but it doesn't automatically create the AUAuditTable records the way the screen auto-generates them (I realize the records aren't in the database yet). How can I get the graph object to auto-generate those AUAuditTable records in the cache, the way the screen does?
I've tried looking at the source code for the Audit screen - but it just shows blank, like there's nothing there. I look in the code repository in Visual Studio and I don't see any file for AUAuditMaintenance either, so I can't see any process that I could run in the graph object that would populate those AUAuditTable records.
Any help would be appreciated.
Thanks...
If I had such a need to manipulate Audit screen records, I'd rather create my own graph and probably generate a DAC class. I'd also add one more column, UsrIsArtificial, defaulting to false, and then manage the records as ordinary ones; each time my code added something, it would set UsrIsArtificial to true.
You can hardly find how those records are managed at graph level, because they are created and handled not at graph level but at framework level. Also, think twice or even more about the design: writing directly into the audit history may confuse users about what was caused by a user and what was caused by your code. From that point of view, I'd rather add one more table than add confusion to the existing one.
Acumatica support provided this solution, which works beautifully (Hat tip!):
var screenID = "GL301000"; //"SO303000";
var g = PXGraph.CreateInstance<AUAuditMaintenance>();
// Set header current
g.Audit.Current = g.Audit.Search<AUAuditSetup.screenID>(screenID);
if (g.Audit.Current == null) // If no current record, insert one
{
    var header = new AUAuditSetup();
    header.ScreenID = screenID;
    header.Description = "Test Audit";
    header = g.Audit.Insert(header);
}
foreach (AUAuditTable table in g.Tables.Select())
{
    table.IsActive = true;
    // Sets current for detail
    g.Tables.Current = g.Tables.Update(table);
    foreach (AUAuditField field in g.Fields.Select())
    {
        field.IsActive = false;
        g.Fields.Update(field);
    }
}
g.Actions.PressSave();

Any way to view cache entry metadata from Hazelcast (ie: date added, last accessed, etc)?

I'm using Hazelcast with a distributed cache in embedded mode. My cache is defined with an Eviction Policy and an Expiry Policy.
CacheSimpleConfig cacheSimpleConfig = new CacheSimpleConfig()
        .setName(CACHE_NAME)
        .setKeyType(UserRolesCacheKey.class.getName())
        .setValueType((new String[0]).getClass().getName())
        .setStatisticsEnabled(false)
        .setManagementEnabled(false)
        .setReadThrough(true)
        .setWriteThrough(true)
        .setInMemoryFormat(InMemoryFormat.OBJECT)
        .setBackupCount(1)
        .setAsyncBackupCount(1)
        .setEvictionConfig(new EvictionConfig()
                .setEvictionPolicy(EvictionPolicy.LRU)
                .setSize(1000)
                .setMaximumSizePolicy(EvictionConfig.MaxSizePolicy.ENTRY_COUNT))
        .setExpiryPolicyFactoryConfig(
                new ExpiryPolicyFactoryConfig(
                        new TimedExpiryPolicyFactoryConfig(ACCESSED,
                                new DurationConfig(1800, TimeUnit.SECONDS))));

hazelcastInstance.getConfig().addCacheConfig(cacheSimpleConfig);

ICache<UserRolesCacheKey, String[]> userRolesCache =
        hazelcastInstance.getCacheManager().getCache(CACHE_NAME);

MutableCacheEntryListenerConfiguration<UserRolesCacheKey, String[]> listenerConfiguration =
        new MutableCacheEntryListenerConfiguration<>(
                new UserRolesCacheListenerFactory(), null, false, false);

userRolesCache.registerCacheEntryListener(listenerConfiguration);
The problem I am having is that my Listener seems to be firing prematurely in a production environment; the listener is executed (REMOVED) even though the cache has been recently queried.
As the expiry listener fires, I get the CacheEntry, but I'd like to be able to see the reason for the Expiry, whether it was Evicted (due to MaxSize policy), or Expired (Duration). If Expired, I'd like to see the timestamp of when it was last accessed. If Evicted, I'd like to see the number of entries in the cache, etc.
Are these stats/metrics/metadata available anywhere via Hazelcast APIs?
Local cache statistics (entry count, eviction count) are available using ICache#getLocalCacheStatistics(). Notice that you need to setStatisticsEnabled(true) in your cache configuration for statistics to be available. Also, notice that the returned CacheStatistics object only reports statistics for the local member.
When seeking info on a single cache entry, you can use the EntryProcessor functionality to unwrap the MutableEntry to the Hazelcast-specific class com.hazelcast.cache.impl.CacheEntryProcessorEntry and inspect that one. The Hazelcast-specific implementation provides access to the CacheRecord that provides metadata like creation/accessed time.
Caveat: the Hazelcast-specific implementation may change between versions. Here is an example:
cache.invoke(KEY, (EntryProcessor<String, String, Void>) (entry, arguments) -> {
    CacheEntryProcessorEntry hzEntry = entry.unwrap(CacheEntryProcessorEntry.class);
    // getRecord does not update the entry's access time
    System.out.println(hzEntry.getRecord().getLastAccessTime());
    return null;
});

Dropped and Recreated ArangoDB Databases Retain Collections

A deployment script creates and configures databases, collections, etc. The script includes code to drop the databases before beginning, so that testing can proceed normally. After dropping the database and re-adding it:
var graphmodule = require("org/arangodb/general-graph");
var graphList = graphmodule._list();
var dbList = db._listDatabases();
for (var j = 0; j < dbList.length; j++) {
    if (dbList[j] == 'myapp')
        db._dropDatabase('myapp');
}
db._createDatabase('myapp');
db._useDatabase('myapp');
db._create('appcoll'); // Collection already exists error occurs here
The collections that had previously been added to myapp remain in myapp, but they are empty. This isn't exactly a problem for my particular use case, since the collections are empty and I had planned to rebuild them anyway, but I'd prefer a clean slate for testing, and this behavior seems odd.
I've tried closing the shell and restarting the database between the drop and the add, but that didn't resolve the issue.
Is there a way to cleanly remove and re-add a database?
The collections should be dropped when db._dropDatabase() is called.
However, if you run db._dropDatabase('mydb'); directly followed by db._createDatabase('mydb'); and then retrieve the list of collections via db._collections(), it will show the collections of the current database - which is likely the _system database, if you were able to run those commands at all.
That means you are probably looking at the collections in the _system database all the time unless you change the database via db._useDatabase(name);. Does this explain it?
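The pitfall can be reproduced without ArangoDB at all. This self-contained mock of arangosh's db object (entirely invented for illustration; none of this is ArangoDB code) shows that creating a database does not switch to it:

```javascript
// Mock of arangosh's `db` object (invented for illustration only), showing that
// _createDatabase does NOT switch the current database; only _useDatabase does.
function makeDb() {
  const databases = { _system: new Set() };
  let current = '_system';
  return {
    _currentDatabase: () => current,
    _createDatabase: (name) => { databases[name] = new Set(); },
    _dropDatabase: (name) => { delete databases[name]; },
    _useDatabase: (name) => { current = name; },
    _create: (coll) => { databases[current].add(coll); },
    _collections: () => [...databases[current]],
  };
}

const db = makeDb();
db._create('appcoll');               // runs while the current db is _system
db._createDatabase('myapp');         // creates myapp, but does not switch to it
console.log(db._currentDatabase());  // '_system'
console.log(db._collections());      // ['appcoll'] - still _system's collections

db._useDatabase('myapp');            // only now does the context switch
console.log(db._collections());      // [] - a genuinely clean slate
```

So in the original script, the `db._create('appcoll')` call that fails is most likely operating on whichever database was current, not on the freshly created one.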
ArangoDB stores additional information for managed graphs; therefore, when working with named graphs, you should use the graph management functions to delete graphs, to make sure nothing remains in the system:
var graph_module = require("org/arangodb/general-graph");
graph_module._drop("social", true);
The current implementation of the graph viewer in the management interface stores your view preferences (like the attribute that should become the label of a graph) in your browser's local storage, so that's out of the reach of these functions.

Incremental loading in Azure Mobile Services

Given the following code:
listView.ItemsSource =
    App.azureClient.GetTable<SomeTable>().ToIncrementalLoadingCollection();
We get incremental loading without further changes.
But what if we modify the read.js server side script to e.g. use mssql to query another table instead. What happens to the incremental loading? I'm assuming it breaks; if so, what's needed to support it again?
And what if the query used the untyped version instead, e.g.
App.azureClient.GetTable("SomeTable").ReadAsync(...)
Could incremental loading be somehow supported in this case, or must it be done "by hand" somehow?
Bonus points for insights on how Azure Mobile Services implements incremental loading between the server and the client.
The incremental loading collection works by sending the $top and $skip query parameters (those are also sent when you do a query by using the .Take and .Skip methods in the table). So if you want to modify the read script to do something other than the default behavior, while still maintaining the ability to use that table with an incremental loading collection, you need to take those values into account.
To do that, you can ask for the query components, which will contain the values, as shown below:
function read(query, user, request) {
    var queryComponents = query.getComponents();
    console.log('query components: ', queryComponents); // useful to see all information
    var top = queryComponents.take;
    var skip = queryComponents.skip;
    // do whatever you want with those values, then call request.respond(...)
}
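Whatever the script queries instead of the default table (via mssql or otherwise), the rows it passes to request.respond(...) should honor those components. Here is a minimal sketch of just the slicing step, with a hypothetical in-memory row set standing in for the custom query's result:

```javascript
// Hypothetical helper: apply the incremental-loading paging parameters
// ($skip/$top, surfaced as queryComponents.skip/take) to an arbitrary row set.
function applyPaging(rows, queryComponents) {
  const skip = queryComponents.skip || 0;
  // `take` may be undefined when the client did not send $top.
  const take = queryComponents.take;
  return rows.slice(skip, take === undefined ? undefined : skip + take);
}

const rows = [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }];
console.log(applyPaging(rows, { skip: 0, take: 2 })); // first page: ids 1, 2
console.log(applyPaging(rows, { skip: 2, take: 2 })); // next page: ids 3, 4
console.log(applyPaging(rows, { skip: 4, take: 2 })); // last page: id 5 only
```

With a real mssql query you would more likely fold skip/take into the SQL itself rather than slice in memory, but the contract with the client is the same: each response must contain exactly the requested window.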
The way it's implemented at the client is by using a class which implements the ISupportIncrementalLoading interface. You can see it (and the full source code for the client SDKs) in the GitHub repository, or more specifically the MobileServiceIncrementalLoadingCollection class (the method is added as an extension in the MobileServiceIncrementalLoadingCollectionExtensions class).
And the untyped table does not have that method - as you can see in the extension class, it's only added to the typed version of the table.
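Doing it "by hand" for the untyped table would mean sending $skip and $top yourself and advancing $skip by the page size until a short page comes back. A small sketch of the query strings such a loop would generate (the helper names are invented; pass each string to your read call):

```javascript
// Hypothetical sketch: the query strings a hand-rolled incremental loader
// would send, advancing $skip by the page size on each request.
function pagedQuery(skip, top) {
  return `$skip=${skip}&$top=${top}`;
}

function* pagedQueries(pageSize, total) {
  for (let skip = 0; skip < total; skip += pageSize) {
    yield pagedQuery(skip, pageSize);
  }
}

console.log([...pagedQueries(50, 120)]);
// ['$skip=0&$top=50', '$skip=50&$top=50', '$skip=100&$top=50']
```

In practice you would not know the total up front; the loop would instead stop when a response contains fewer rows than the page size.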

How do I disable caching on certain nodes in MVCSiteMapProvider?

I need to have a menu structure that changes depending on what page the user is currently viewing. Hence I need to disable caching for certain nodes as these may change for every request. How do I do this?
I have tried setting up the DynamicNode in the following way:
var dynamicNode = new DynamicNode()
{
    Title = title,
    Action = actionName,
    Controller = controllerName,
    RouteValues = routeValues,
    Attributes = attributes,
    ChangeFrequency = ChangeFrequency.Always,
    LastModifiedDate = DateTime.Now,
    UpdatePriority = UpdatePriority.Automatic,
};
But that seems to have no effect.
I have also set cacheDuration="0" in the Web.config file, no effect.
I've also set the following in the GetCacheDescription method of the DynamicNodeProvider:
return new CacheDescription("GuideDynamicNodeProvider")
{
    AbsoluteExpiration = DateTime.Now,
};
Also with no effect.
Am I using these settings incorrectly? The documentation on this aspect is rather lacking.
Disabling caching for specific nodes is not supported. However, you can disable caching for the entire sitemap by setting the cache duration to 0.
If what you are trying to do is refresh nodes when the data changes, you can use the SiteMapCacheReleaseAttribute or call SiteMaps.ReleaseSiteMap() when the data is updated.
On the other hand, if data is updated in your database from a source that is not under your control, you can implement ICacheDependency yourself to create a SqlCacheDependency class and then inject it using DI. Have a look at the RuntimeFileCacheDependency class to see how that can be done.
Note that the reason the ChangeMonitor is put into a list is so it will support the RuntimeCompositeCacheDependency, which allows you to configure multiple cache dependencies for the same cache.
