I am using Mondrian as my OLAP server engine.
I have a scenario where some of my dimension data changes. When this happens I would like to clear Mondrian's cache.
I cannot work out how to get a handle to Mondrian's cache control.
I have a reference to an OlapConnection object, but I could not find any method on it that returns the CacheControl.
Any suggestions?
Yosi
The answer given by bhuang3 is right. To access the cache control from an olap4j connection:
OlapConnection.unwrap(mondrian.rolap.RolapConnection.class).getCacheControl(null)
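Spelled out as a minimal sketch (unwrap can throw SQLException; the null argument is the optional PrintWriter for trace output):
import mondrian.olap.CacheControl;
import mondrian.rolap.RolapConnection;
import org.olap4j.OlapConnection;

// olapConnection is the org.olap4j.OlapConnection you already hold
RolapConnection rolapConnection = olapConnection.unwrap(RolapConnection.class);
CacheControl cacheControl = rolapConnection.getCacheControl(null); // null = no trace output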
Well, you can use the following APIs to flush a cube's cache:
mondrian.olap.CacheControl cacheControl = connection.getCacheControl(null); // null = no trace PrintWriter
mondrian.olap.Schema schema = connection.getSchema();
mondrian.olap.Cube cube = schema.lookupCube(cubeName, false); // false = do not fail if the cube is not found
mondrian.olap.CacheControl.CellRegion cellRegion = cacheControl.createMeasuresRegion(cube);
cacheControl.flush(cellRegion);
Or you can flush the whole schema cache:
cacheControl.flushSchemaCache();
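Since the question is about changing dimension data, a more targeted option is to flush only the cells touched by a changed member. A hedged sketch (how you look the member up depends on your schema):

// flush the cells for one changed member (and its descendants),
// crossjoined with the cube's measures
void flushChangedMember(mondrian.olap.CacheControl cacheControl,
                        mondrian.olap.Cube cube,
                        mondrian.olap.Member changedMember) {
    mondrian.olap.CacheControl.CellRegion memberRegion =
        cacheControl.createMemberRegion(changedMember, true); // true = include descendants
    mondrian.olap.CacheControl.CellRegion measuresRegion =
        cacheControl.createMeasuresRegion(cube);
    cacheControl.flush(cacheControl.createCrossjoinRegion(memberRegion, measuresRegion));
}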
Alternatively, the Mondrian documentation on cache control covers this in more detail.
How to set the Hybris DB transaction timeout using an Oracle database?
I have tried the following code, but it has no effect. Thanks in advance.
TransactionTemplate template = new TransactionTemplate(manager);
template.setTimeout(1); // TransactionTemplate timeout is in seconds
You can set db.connectionparam.oracle.jdbc.ReadTimeout or db.connectionparam.oracle.net.READ_TIMEOUT in local.properties (both in milliseconds).
Check this post for the difference : https://stackoverflow.com/a/18513472/1140748
To explain a bit: in the platform, AbstractTenant has a method extractCustomDBParams that picks up keys starting with db.connectionparam.
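For example, in local.properties (the values are illustrative, in milliseconds):

# illustrative timeouts, in milliseconds
db.connectionparam.oracle.jdbc.ReadTimeout=60000
db.connectionparam.oracle.net.READ_TIMEOUT=60000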
Using Raven client and server #30155. I'm basically doing the following in a controller:
public ActionResult Update(string id, EditModel model)
{
var store = provider.StartTransaction(false);
var document = store.Load<T>(id);
model.UpdateEntity(document); // overwrite document property values with those of the edit model
document.Update(store); // tell document to update itself if it passes some conflict checking
}
Then in document.Update, I try do this:
var old = store.Load<T>(this.Id);
if (old.Date != this.Date)
{
// Resolve conflicts that occur by moving document period
}
store.Update(this);
Now, I run into the problem that old is loaded from memory instead of from the database and already contains the updated values, so it never enters the conflict check.
I tried working around the problem by changing the Controller.Update method into:
public ActionResult Update(string id, EditModel model)
{
var store = provider.StartTransaction(false);
var document = store.Load<T>(id);
store.Dispose();
model.UpdateEntity(document); // overwrite document property values with those of the edit model
store = provider.StartTransaction(false);
document.Update(store); // tell document to update itself if it passes some conflict checking
}
This results in me getting a Raven.Client.Exceptions.NonUniqueObjectException with the message: "Attempted to associate a different object with id".
Now, the questions:
Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
RavenDB relies on being able to serve documents from memory (which is faster). By checking for already-tracked objects with the same id, it prevents hard-to-debug errors.
EDIT: See Rayen's comment below. If you enable concurrency checking / provide an e-tag in the Store call, you can bypass the error.
Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
Apparently not.
What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
I went with refactoring the document.Update method to also have an optional parameter to receive the old date period, since #1 and #2 don't seem possible.
RavenDB supports optimistic concurrency out of the box. The only thing you need to do is enable it:
session.Advanced.UseOptimisticConcurrency = true;
See:
http://ravendb.net/docs/article-page/3.5/Csharp/client-api/session/configuration/how-to-enable-optimistic-concurrency
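In context, a hedged sketch (the document type and store setup are illustrative):

using (var session = store.OpenSession())
{
    session.Advanced.UseOptimisticConcurrency = true;
    var document = session.Load<MyDocument>(id); // the session remembers the e-tag it loaded
    document.Date = newDate;
    session.SaveChanges(); // throws ConcurrencyException if the document changed meanwhile
}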
I am replicating docs from DB A to DB B. Every time a doc from DB A arrives in DB B, I want to run a 'stored procedure' that removes most of its fields (DB A is private, but has attachments that I want to make publicly available).
So far I've seen that this might be achieved using the _changes feed (continuous) and then running an 'update' handler on each document.
The document update handlers doc: https://wiki.apache.org/couchdb/Document_Update_Handlers
This seems like something that CouchDB would implement for me... (and I'm not really sure yet how to do the above).
Is there something like a 'hook' that can be run on every document that enters the database?
== EDIT ==
It seems that I would want to somehow include the update handler command in the replication trigger?
It sounds like, with some changes to how you're storing documents, you may be able to benefit from CouchDB's filtered replication, as sketched below. You'd need to store the attachments in documents that can be copied unmodified between the two databases.
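As a hedged sketch of that option (names are illustrative), you would add a filter function in a design document on DB A:

{
  "_id": "_design/filters",
  "filters": {
    "public": "function (doc, req) { return doc.public === true; }"
  }
}

and reference it when starting the replication:

{
  "source": "db_a",
  "target": "db_b",
  "continuous": true,
  "filter": "filters/public"
}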
If that's not an option, then you could potentially use the transform-pouch plugin plus PouchDB's .replicate.from() method to manage the replication.
Some quick pseudo-code for this idea looks a bit like this:
var PouchDB = require('pouchdb');
PouchDB.plugin(require('transform-pouch'));
var dbA = new PouchDB('a'); // "a" could be a URL to CouchDB or Cloudant
var dbB = new PouchDB('b');
dbB.transform({
incoming: function (doc) {
// do something to the document before storage
return doc;
}
});
dbB.replicate.from(dbA);
In theory, that (or something like it) should do what you're wanting... or at least give you a framework in which to do it. ^_^
Hope that helps!
Is there any fast way to remove all data from the local database? Like SQL 'drop database'?
I was looking through the documentation but haven't found anything interesting yet.
The "CLI" way
Using the provided fdbcli interface, you can clear all the keys in the database using a single clearrange command, like this:
fdb> writemode on
fdb> clearrange "" \xFF
Committed (68666816293119)
Be warned that it executes instantly and that there is no undo possible!
Also, any application still connected to the database may continue reading/writing data using cached directory subspace prefixes, which may introduce data corruption! You should make sure to only use this method when nothing is actively using the cluster.
This method requires your cluster to be in a working state. It will not immediately reclaim the disk space used, nor will it reset the cluster's read version.
The "hard" way
If you have a single-node cluster, you can stop the fdb service, remove all files in its data_dir folder, restart the service, and then using fdbcli, execute the configure new single ssd command.
This will reclaim the disk space used previously, and reset everything back to the post-install state.
You can do this by clearing the entire range of keys.
In Python, it looks like this:
Database.clear_range('', '\xFF')
Where '' is the default slice begin, and '\xFF' is the default slice end, according to the clear_range documentation.
You can find more information on clear_range for the API you're using in the documentation.
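A runnable version of the above (the API version is illustrative; use whatever your installation supports):

import fdb

fdb.api_version(620)
db = fdb.open()

# clear every key in the user keyspace
db.clear_range(b'', b'\xff')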
To do this programmatically in Java (a self-contained sketch; the API version is illustrative):
import com.apple.foundationdb.Database;
import com.apple.foundationdb.FDB;
import com.apple.foundationdb.subspace.Subspace;

Database db = FDB.selectAPIVersion(620).open();
db.run(tx -> {
    // clear every key from 0x00 up to (but not including) 0xFF
    final byte[] st = new Subspace(new byte[]{(byte) 0x00}).getKey();
    final byte[] en = new Subspace(new byte[]{(byte) 0xFF}).getKey();
    tx.clear(st, en);
    return null;
});
I am using Apache Shiro in my webapp.
I store some parameters in the session notably the primary key of an object stored in the database.
When the user logs in, I load the object from the database and save the primary key in the session. Then within the app the user can edit the object's data and either hit a cancel or a save button.
Both buttons trigger an RPC that sends the updated data to the server. The object is then updated in the database using the primary key stored in the session.
If the user remains active in the app (making some RPCs), everything works fine. But if he stays inactive for 3 minutes and subsequently makes an RPC, Shiro's SecurityUtils.getSubject().getSession() returns null.
The session timeout is set to 1,200,000 ms (20 min) so I don't think this is the issue.
When I go through the sessions stored in my session manager's cache, I can see the user's session (org.apache.shiro.session.mgt.SimpleSession,id=6de78f10-b58e-496c-b40a-e2a9a4ad069c). But when I try to get the session ID from the cookie and call SecurityUtils.getSecurityManager().getSession(key) to get the session (where key is a SessionKey implementation), I get an exception.
When I try building a new subject from the session ID I lose all the attributes saved in the session.
I am happy to post some code to help resolve the issue but I tried so many workarounds that I don't know where to start... So please let me know what you need.
Alternatively, if someone knows a better-documented framework than Shiro, I am all ears (Shiro's lack of documentation makes it really too time-consuming).
The issue was related to the session config in the INI file. As usual with Shiro, order mattered, and some of my lines were out of place.
Below is the config that worked for me:
sessionDAO = org.apache.shiro.session.mgt.eis.EnterpriseCacheSessionDAO
#sessionDAO.activeSessionsCacheName = dropship-activeSessionCache
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
sessionManager.sessionDAO = $sessionDAO
# cookie for single sign on
cookie = org.apache.shiro.web.servlet.SimpleCookie
cookie.name = www.foo.com.session
cookie.path = /
sessionManager.sessionIdCookie = $cookie
# 1,800,000 milliseconds = 30 mins
sessionManager.globalSessionTimeout = 1800000
sessionValidationScheduler = org.apache.shiro.session.mgt.ExecutorServiceSessionValidationScheduler
sessionValidationScheduler.interval = 1800000
sessionManager.sessionValidationScheduler = $sessionValidationScheduler
securityManager.sessionManager = $sessionManager
cacheManager = org.apache.shiro.cache.ehcache.EhCacheManager
securityManager.cacheManager = $cacheManager
It sounds as if you have sorted out your problem already. As you discovered, the main thing to keep in mind with the Shiro INI file is that order matters; the file is parsed in order, which can actually be useful for constructing objects used in the configuration.
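For example (class name illustrative), a $reference only resolves if the object was defined on an earlier line:

[main]
myRealm = com.example.security.MyRealm
securityManager.realm = $myRealm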
Since you mentioned Shiro's lack of documentation, I wanted to go ahead and point out two tutorials that I found helpful when starting:
http://www.javacodegeeks.com/2012/05/apache-shiro-part-1-basics.html
and
http://www.ibm.com/developerworks/web/library/wa-apacheshiro/.
There are quite a few other blog posts that provide good information to supplement the official documentation if you look around.
Good luck!