We are caching an object with the following class format.
public class CachedObject
{
    @javax.persistence.Transient
    public List<ObjectA> objectAs;

    @javax.persistence.Transient
    public Date cacheExpirationDate;
}
We have a process that checks the expiration date on the cached object and refreshes the cache before it expires. Another process updates the cached object via spymemcached's replace method whenever there are updates; in our case, the updates normally happen to objectAs. The cached object is never evicted. We also started memcached with the default slab size settings (1 MB).
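For reference, a minimal sketch of the set/replace pattern we use (the key name, TTL, and client setup here are illustrative, not our production values):

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

// CachedObject must be Serializable for spymemcached's default transcoder.
MemcachedClient client =
        new MemcachedClient(new InetSocketAddress("localhost", 11211));

// Initial population, 1-hour TTL:
client.set("cached-object", 3600, cachedObject);

// Update path: replace succeeds only if the key already exists.
client.replace("cached-object", 3600, cachedObject);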
The issue: when we fetch some objects, we find that the cached object is not null but its objectAs is null, and we subsequently get NullPointerExceptions when trying to do anything with objectAs. I checked, and we always set the objectAs value before caching/updating. I also saw the incomplete cached object through the telnet interface; when I refreshed the cache, it started showing the full object.
Can someone please suggest what could be going wrong?
After running some more unit/integration tests, I found a case where this field was set to null. So my problem, as stated in the question, has nothing to do with the memcached server; it is an application issue.
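For anyone hitting the same symptom: a simple guard before every cache write would have caught this earlier. A sketch (validateForCache is a hypothetical helper, not part of spymemcached):

// Call before every set/replace so an incomplete object never
// reaches the cache in the first place.
private void validateForCache(CachedObject obj) {
    if (obj.objectAs == null || obj.cacheExpirationDate == null) {
        throw new IllegalStateException(
                "Refusing to cache incomplete CachedObject: " + obj);
    }
}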
What is the difference between WKWebsiteDataStore.DefaultDataStore and the one present in the web view's configuration instance, Configuration.WebsiteDataStore?
If I delete a specific cookie from the HttpCookieStore by accessing the default website data store, would that be in sync with the HttpCookieStore present in Configuration.WebsiteDataStore when the web view loads?
Unless you pass a WKWebsiteDataStore.nonPersistent() data store (which is new every time) to your web view's configuration, it will already have the default one in there (which is shared and always the same).
You can check that yourself by running
webview.configuration.websiteDataStore == WKWebsiteDataStore.default()
And it will return true.
So everything is definitely in sync, since they are the same instance.
I'm using Hazelcast with a distributed cache in embedded mode. My cache is defined with an eviction policy and an expiry policy.
CacheSimpleConfig cacheSimpleConfig = new CacheSimpleConfig()
        .setName(CACHE_NAME)
        .setKeyType(UserRolesCacheKey.class.getName())
        .setValueType((new String[0]).getClass().getName())
        .setStatisticsEnabled(false)
        .setManagementEnabled(false)
        .setReadThrough(true)
        .setWriteThrough(true)
        .setInMemoryFormat(InMemoryFormat.OBJECT)
        .setBackupCount(1)
        .setAsyncBackupCount(1)
        // evict least-recently-used entries once the cache holds 1000 entries
        .setEvictionConfig(new EvictionConfig()
                .setEvictionPolicy(EvictionPolicy.LRU)
                .setSize(1000)
                .setMaximumSizePolicy(EvictionConfig.MaxSizePolicy.ENTRY_COUNT))
        // expire entries 1800 s (30 min) after their last access
        .setExpiryPolicyFactoryConfig(
                new ExpiryPolicyFactoryConfig(
                        new TimedExpiryPolicyFactoryConfig(ACCESSED,
                                new DurationConfig(1800, TimeUnit.SECONDS))));
hazelcastInstance.getConfig().addCacheConfig(cacheSimpleConfig);

ICache<UserRolesCacheKey, String[]> userRolesCache =
        hazelcastInstance.getCacheManager().getCache(CACHE_NAME);

MutableCacheEntryListenerConfiguration<UserRolesCacheKey, String[]> listenerConfiguration =
        new MutableCacheEntryListenerConfiguration<>(
                new UserRolesCacheListenerFactory(), null, false, false);
userRolesCache.registerCacheEntryListener(listenerConfiguration);
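For context, the factory creates a standard JSR-107 CacheEntryListener. A rough sketch of that shape (simplified, not the exact production code):

import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryExpiredListener;
import javax.cache.event.CacheEntryListenerException;
import javax.cache.event.CacheEntryRemovedListener;

public class UserRolesCacheListenerFactory implements Factory<UserRolesCacheListener> {
    @Override
    public UserRolesCacheListener create() {
        return new UserRolesCacheListener();
    }
}

class UserRolesCacheListener implements
        CacheEntryRemovedListener<UserRolesCacheKey, String[]>,
        CacheEntryExpiredListener<UserRolesCacheKey, String[]> {

    @Override
    public void onRemoved(Iterable<CacheEntryEvent<? extends UserRolesCacheKey, ? extends String[]>> events)
            throws CacheEntryListenerException {
        // This is the callback that fires (REMOVED) in the problem below.
        events.forEach(e -> System.out.println("REMOVED: " + e.getKey()));
    }

    @Override
    public void onExpired(Iterable<CacheEntryEvent<? extends UserRolesCacheKey, ? extends String[]>> events)
            throws CacheEntryListenerException {
        events.forEach(e -> System.out.println("EXPIRED: " + e.getKey()));
    }
}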
The problem I am having is that my listener seems to fire prematurely in the production environment: the listener is executed (REMOVED) even though the cache has recently been queried.
When the expiry listener fires I get the CacheEntry, but I'd like to see the reason for the expiry: whether the entry was evicted (due to the max-size policy) or expired (duration). If it expired, I'd like to see the timestamp of when it was last accessed; if it was evicted, the number of entries in the cache, and so on.
Are these stats/metrics/metadata available anywhere via Hazelcast APIs?
Local cache statistics (entry count, eviction count) are available via ICache#getLocalCacheStatistics(). Note that you need to setStatisticsEnabled(true) in your cache configuration for statistics to be available, and that the returned CacheStatistics object only reports statistics for the local member.
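For example, a minimal sketch reusing the userRolesCache handle from your question (assuming statistics have been enabled):

import com.hazelcast.cache.CacheStatistics;

// Local-member statistics only; other members keep their own counters.
CacheStatistics stats = userRolesCache.getLocalCacheStatistics();
System.out.println("Owned entries: " + stats.getOwnedEntryCount());
System.out.println("Evictions:     " + stats.getCacheEvictions());
System.out.println("Last access:   " + stats.getLastAccessTime());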
When seeking info on a single cache entry, you can use the EntryProcessor functionality to unwrap the MutableEntry into the Hazelcast-specific class com.hazelcast.cache.impl.CacheEntryProcessorEntry and inspect it. That implementation provides access to the CacheRecord, which carries metadata such as creation and last-access time.
Caveat: the Hazelcast-specific implementation may change between versions. Here is an example:
cache.invoke(KEY, (EntryProcessor<String, String, Void>) (entry, arguments) -> {
    CacheEntryProcessorEntry hzEntry = entry.unwrap(CacheEntryProcessorEntry.class);
    // getRecord does not update the entry's access time
    System.out.println(hzEntry.getRecord().getLastAccessTime());
    return null;
});
Using Raven client and server #30155. I'm basically doing the following in a controller:
public ActionResult Update(string id, EditModel model)
{
    var store = provider.StartTransaction(false);
    var document = store.Load<T>(id);
    model.UpdateEntity(document); // overwrite document property values with those of the edit model
    document.Update(store);       // tell document to update itself if it passes some conflict checking
}
Then in document.Update, I try to do this:
var old = store.Load<T>(this.Id);
if (old.Date != this.Date)
{
    // Resolve conflicts that occur by moving document period
}
store.Update(this);
Now I run into the problem that old is loaded from memory instead of the database and already contains the updated values; thus, it never enters the conflict check.
I tried working around the problem by changing the Controller.Update method to:
public ActionResult Update(string id, EditModel model)
{
    var store = provider.StartTransaction(false);
    var document = store.Load<T>(id);
    store.Dispose();
    model.UpdateEntity(document); // overwrite document property values with those of the edit model
    store = provider.StartTransaction(false);
    document.Update(store);       // tell document to update itself if it passes some conflict checking
}
This results in a Raven.Client.Exceptions.NonUniqueObjectException with the message: Attempted to associate a different object with id
Now, the questions:
Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
RavenDB leans on being able to serve documents from memory (which is faster). By checking for existing objects with the same id, hard-to-debug errors are prevented.
EDIT: See the comment from Rayen below. If you enable concurrency checking / provide the etag in the Store call, you can bypass the error.
Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
Apparently not.
What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
I went with refactoring the document.Update method to take an optional parameter carrying the old date period, since #1 and #2 don't seem possible.
RavenDB supports optimistic concurrency out of the box; the only thing you need to do is enable it:
session.Advanced.UseOptimisticConcurrency = true;
See:
http://ravendb.net/docs/article-page/3.5/Csharp/client-api/session/configuration/how-to-enable-optimistic-concurrency
I am using Apache Shiro in my webapp.
I store some parameters in the session, notably the primary key of an object stored in the database.
When the user logs in, I load the object from the database and save its primary key in the session. Within the app, the user can then edit the object's data and hit either a cancel or a save button.
Both buttons trigger an RPC that sends the updated data to the server; the object is then updated in the database using the primary key stored in the session.
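In code, the pattern is roughly this (a sketch; "userPk" is a placeholder attribute name):

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.session.Session;

// On login: stash the primary key in the Shiro session.
Session session = SecurityUtils.getSubject().getSession();
session.setAttribute("userPk", primaryKey);

// On a later RPC: read it back to update the database row.
Long pk = (Long) SecurityUtils.getSubject().getSession().getAttribute("userPk");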
If the user remains active in the app (making some RPCs), everything works fine. But if he stays inactive for 3 minutes and subsequently makes an RPC, Shiro's SecurityUtils.getSubject().getSession() returns null.
The session timeout is set to 1,200,000 ms (20 minutes), so I don't think that is the issue.
When I go through the sessions stored in my session manager's cache, I can see the user's session (org.apache.shiro.session.mgt.SimpleSession,id=6de78f10-b58e-496c-b40a-e2a9a4ad069c), but when I try to get the session ID from the cookie and call SecurityUtils.getSecurityManager().getSession(key) to get the session (where key is a SessionKey implementation), I get an exception.
When I try building a new subject from the session ID, I lose all the attributes saved in the session.
I am happy to post some code to help resolve the issue, but I have tried so many workarounds that I don't know where to start, so please let me know what you need.
Alternatively, if someone knows a better-documented framework than Shiro, I am all ears (Shiro's lack of documentation makes it really too time-consuming).
The issue was related to the session config in the INI file. As usual with Shiro, order matters, and some of my lines were out of place.
Below is the config that worked for me:
sessionDAO = org.apache.shiro.session.mgt.eis.EnterpriseCacheSessionDAO
#sessionDAO.activeSessionsCacheName = dropship-activeSessionCache
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
sessionManager.sessionDAO = $sessionDAO
# cookie for single sign on
cookie = org.apache.shiro.web.servlet.SimpleCookie
cookie.name = www.foo.com.session
cookie.path = /
sessionManager.sessionIdCookie = $cookie
# 1,800,000 milliseconds = 30 mins
sessionManager.globalSessionTimeout = 1800000
sessionValidationScheduler = org.apache.shiro.session.mgt.ExecutorServiceSessionValidationScheduler
sessionValidationScheduler.interval = 1800000
sessionManager.sessionValidationScheduler = $sessionValidationScheduler
securityManager.sessionManager = $sessionManager
cacheManager = org.apache.shiro.cache.ehcache.EhCacheManager
securityManager.cacheManager = $cacheManager
It sounds as if you have already sorted out your problem. As you discovered, the main thing to keep in mind with the Shiro INI file is that order matters; the file is parsed in order, which can actually be useful for constructing objects used later in the configuration.
Since you mentioned Shiro's lack of documentation, I wanted to go ahead and point out two tutorials that I found helpful when starting:
http://www.javacodegeeks.com/2012/05/apache-shiro-part-1-basics.html
and
http://www.ibm.com/developerworks/web/library/wa-apacheshiro/.
There are quite a few other blog posts that provide good information to supplement the official documentation if you look around.
Good luck!
When running iisdirinfo on IIS7, I'm seeing an error:
.dll,1,GET,HEAD,POST,DEBUG
BUILD FAILED
C:\iisinfo.build(9,2):
Error retrieving info for virtual directory 'WebServices' on 'localhost:80' (wesite: Webservice).
Object reference not set to an instance of an object.
Total time: 0.5 seconds.
This happens after a number of the properties are displayed correctly, so I guess it's being tripped up by a property later on.
Has anyone seen this, or any ideas on what could be causing the issue?
So I forked the repo, added some extra debugging, and it seems the failure was down to a null value for a property.
Using appcmd, it looks like:
<redirectHeaders>
</redirectHeaders>
I've added a null check to the code and will be submitting a pull request later today.