JSF - Extensive Read-Write to Application-Scoped Bean

In my XPages web app (XPages is a Lotus Notes technology based on JSF), I need a dynamic map to store session IDs and their last accessed times (in milliseconds). This is implemented as a TreeMap inside an application-scoped bean. Each initial access to the app registers the current session in the TreeMap in this bean. Only a limited number of session entries are permitted in the map, and excess sessions are not registered. The map is also cleared of old session entries once in a while so that new sessions can be registered. I need to know whether this is an acceptable approach/use of an application bean. I know I could store the session entries temporarily in an external (non-Lotus Notes) DB, but the company I'm working for doesn't allow me to do so. Will this approach lead me to potential problems? If yes, is there another way for me to do this?

This sounds like a perfectly valid use of an application bean, but I'd offer two suggestions. The first is to use a ConcurrentSkipListMap instead of a TreeMap. The former is thread-safe, while the latter is not. When interacting with the lower scopes, thread safety is typically not crucial, as each user can only write to their own session, view, and request scopes, but all users can write to the application scope, so it's conceivable that concurrent writes could occur, especially in applications with heavy user load.

The second suggestion is to urge caution with how much information about each session is stored in an application bean. Since the bean will be accessible to all users, it is theoretically possible to inadvertently expose too much information about a user to other users. If you're only storing the session name or ID in addition to the last access time, you'll be fine. But if you're actually storing a pointer to each user's session scope, you may accidentally provide a window into data a user has cached that other users shouldn't have access to. I've never actually seen someone get bitten by this, but it's always important to keep this in mind when storing any user-specific information in the application scope.
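To illustrate the first suggestion, here is a minimal sketch of an application-scoped bean backed by a ConcurrentSkipListMap, assuming the class is registered as an application-scoped managed bean in faces-config.xml; the class name, entry cap, and expiry age are hypothetical placeholders.

import java.io.Serializable;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

public class SessionRegistry implements Serializable {
    private static final long serialVersionUID = 1L;

    private static final int MAX_ENTRIES = 500;                // arbitrary cap
    private static final long MAX_AGE_MS = 30L * 60L * 1000L;  // arbitrary expiry: 30 minutes

    // session ID -> last accessed time in milliseconds; thread-safe, sorted by key
    private final ConcurrentSkipListMap<String, Long> sessions =
            new ConcurrentSkipListMap<String, Long>();

    public boolean register(String sessionId) {
        purgeExpired();
        Long now = Long.valueOf(System.currentTimeMillis());
        if (sessions.containsKey(sessionId)) {
            sessions.put(sessionId, now);   // refresh an existing entry
            return true;
        }
        if (sessions.size() >= MAX_ENTRIES) {
            return false;                   // excess sessions are not registered
        }
        // note: the size check and the put are not atomic, so the cap is approximate under heavy concurrency
        return sessions.putIfAbsent(sessionId, now) == null;
    }

    // Drop entries whose last access is older than MAX_AGE_MS
    private void purgeExpired() {
        long cutoff = System.currentTimeMillis() - MAX_AGE_MS;
        Iterator<Map.Entry<String, Long>> it = sessions.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue().longValue() < cutoff) {
                it.remove();
            }
        }
    }
}

Because only the session ID and a timestamp are stored, this also stays on the safe side of the second suggestion.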

Indeed, this is a good use of the application scope. Still, a TreeMap isn't the best fit for your situation; there are a couple of problems with it:
Concurrency problems when two requests want to modify the data in your container.
If your application must scale horizontally, each node will end up with its own independent TreeMap, and the copies will not stay in sync.
A good approach would be to use a cache system. There are good cache libraries that meet these requirements. I've tested Ehcache and it provides both concurrent data handling and support for deployments with two or more nodes, and you can configure an eviction algorithm such as LRU (least recently used) or FIFO (first in, first out), as sketched below.
Using an external database to handle the session IDs could cost some time to get/set the data (it may be very little, but it is still a disk I/O operation). For that problem, you can use BigMemory as an external store that lives in RAM, or a NoSQL database like BigTable.
Note: I do not work for Ehcache, nor am I associated with it commercially; I've tested it and it fulfills my needs. There are other caching libraries, such as JBoss Cache, that you can evaluate and use.
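As a rough illustration of the Ehcache approach (written against the Ehcache 2.x API; details vary between versions), here is a minimal sketch. The cache name, capacity, and idle timeout are placeholder values.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.store.MemoryStoreEvictionPolicy;

public class SessionCacheExample {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create();

        // Hold at most 1000 entries, evict by LRU, expire entries idle for 30 minutes
        CacheConfiguration config = new CacheConfiguration("sessions", 1000)
                .memoryStoreEvictionPolicy(MemoryStoreEvictionPolicy.LRU)
                .timeToIdleSeconds(30 * 60);
        Cache sessions = new Cache(config);
        manager.addCache(sessions);

        // register a session ID with its last accessed time
        sessions.put(new Element("session-abc-123", Long.valueOf(System.currentTimeMillis())));

        // look it up again
        Element entry = sessions.get("session-abc-123");
        if (entry != null) {
            System.out.println("last accessed: " + entry.getObjectValue());
        }

        manager.shutdown();
    }
}

With this in place the size limit and the clearing of old entries are handled by the cache's eviction and expiry settings instead of by hand-written map maintenance.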

Related

node.js keep a small in-memory database

I have an API service in Node.js; basically, it gets an id from the request, reads the record with that id from the database, and returns it in the response.
While there are many clients with different ids, usually only about 10-20 of them are used in a given timespan.
Is it a good idea to create an object with the ids as keys and store the resulting record along with a last_requested time, to emulate a small database with fast access? Whenever a record is requested I will update the last_requested field with new Date(). Also, I would create a setInterval() to delete keys which have not been used for some time.
Records in the database do not change often, and when they do I can restart the service (there are several instances running simultaneously via PM2, so they can be gracefully restarted).
If the required id is not found in this "database", a request to the real database will be performed and the result will be stored in the object under a new key.
You're talking about caching, and it's very useful if:
You have a lot of reads but not a lot of writes, i.e. lots of people request a record and it rarely changes.
You have a lot of free memory, or not many records.
You have a good indication of when to invalidate the cache.
For trivial use cases (i.e. under 50 requests/second), you probably don't need an in-memory cache in front of the database. Moreover, database access is very fast if you use the tools the database gives you (like persistent connection pools, consistent parameterized queries, the query cache, etc.).
It all depends on your specific use case. But I wouldn't do it until I actually start encountering performance problems and determine that the database is the bottleneck.
It's not just a good idea; caching is a necessity at different levels of a computational system. Caching starts at the CPU level (L1, L2, L3) and the OS level, and goes up to the application level, which must be handled by the developer.
Even if you have a well-structured database with good indexes, there is still overhead from the TCP/IP communication between your app and the database. So if you are going to access some rows frequently, it pays to keep them in your app's process.
The good news is that a Node.js app is a single process resident in memory (unlike PHP or other scripting setups where processes come and go), so you can keep frequently required data loaded and skip the database access.
The best mechanism to store the records is an LRU (least-recently-used) cache. There are several LRU cache packages available for node.js:
https://github.com/adzerk/node-lru-native
https://github.com/isaacs/node-lru-cache
https://www.npmjs.com/package/simple-lru-cache
In an LRU cache you can define how much memory the cache can use, the expiry age of each item, and how many items it can store; or you can write your own!
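The packages listed above are the Node.js implementations; purely as a language-agnostic illustration of the LRU idea they implement (bounded size, evict the least recently used entry), here is a minimal sketch using Java's LinkedHashMap in access-order mode. The class name and capacity are arbitrary placeholders.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: a LinkedHashMap in access-order mode evicts the
// least recently used entry once the capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true
        this.capacity = capacity;
    }

    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<String, String>(2); // arbitrary capacity
        cache.put("a", "record A");
        cache.put("b", "record B");
        cache.get("a");             // touch "a" so it becomes most recently used
        cache.put("c", "record C"); // evicts "b", the least recently used entry
        System.out.println(cache.keySet()); // prints [a, c]
    }
}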

out of process session state in orchard

We were using the default (in-proc) session state in our application, which we've built on top of Orchard. Now management has decided to install a load balancer in between. To keep our sessions working, I thought of going with out-of-process session state. However, I am a bit confused about whether I should enable it in the 'Orchard.web' module or in the specific modules where I've used sessions.
I was trying to find out whether Orchard supports out-of-proc sessions in some other way, or whether it should work the same way as in a normal ASP.NET application.
Any help would be appreciated.
First off - I'm pretty certain that the Orchard team recommends avoiding session state at all costs. Anything that has session state is (by definition) stateful and that makes scaling outwards harder. However, assuming that you cannot avoid it:
1) It's just an ASP.NET application, so the normal rules apply. Ensure the same machine key is set in the config on every server, configure the session state mechanism of your choice (SQL Server / state server) and configure the appropriate values in web.config (see the sketch after this answer).
However:
2) The standard ASP.NET session state implementation has really poor locking. This can lead to bad responsiveness issues for your pages. Check out this excellent question (and linked posts) on session state performance. You should evaluate for yourself whether you have any need for locked session state. We had to remove session state entirely in order to provide acceptable performance for our applications (and we've never looked back or found a reasonable argument for session over caching since).
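For illustration, here is a rough sketch of the kind of web.config entries point 1) refers to; the key values and connection string are placeholders, not working values, and you would adjust them for whichever session-state mode you pick.

<system.web>
  <!-- Same machineKey on every server behind the load balancer (placeholder values) -->
  <machineKey validationKey="REPLACE_WITH_SHARED_VALIDATION_KEY"
              decryptionKey="REPLACE_WITH_SHARED_DECRYPTION_KEY"
              validation="SHA1" decryption="AES" />

  <!-- Out-of-process session state; use mode="StateServer" or mode="SQLServer" -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=YOUR_SQL_SERVER;Integrated Security=SSPI"
                cookieless="false" timeout="20" />
</system.web>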
The classic solution to scaling is to use sticky sessions. Most load balancers have this setting, and it will allow you to keep using in-proc session state. If you don't plan on auto-scaling, so you will always have a fixed number of servers behind your LB, then it is a solution you should carefully consider.
Going out of proc can give you some headaches, like having to mark all the classes you put in session as Serializable.

Minimal multithreaded transaction with Hibernate

I'm using Hibernate in an embedded Jetty server, and I want to be able to parallelize my data processing with some multithreading and still have it all be in the same transaction. As Sessions are not thread-safe, this means I need a way to get multiple Sessions attached to the same transaction, which means I need to switch away from the "thread" session context I've been using.
By my understanding of the documentation, this means I need to switch to JTA session context, but I'm having trouble getting that to work. My research so far seems to indicate that it requires something external to Hibernate in the server to provide transaction management, and that Jetty does not have such a thing built in, so I would have to pull in some additional library to do it. The top candidates I keep running across for that generally seem to be large packages that do all sorts of other stuff too, which seems wasteful, confusing, and distracting when I'm just looking for the one specific feature.
So, what is the minimal least disruptive setup and configuration change that will allow getCurrentSession() to return Sessions attached to the same transaction in different threads?
While I'm at it, I know that fetching objects in one thread and altering them in another is not safe, but what about reading their properties in another thread, for example calling toString() or a side effect free getter?

Azure Websites and stateful webApp

I have a naïve version of a PokerApp running as an Azure Website.
The server stores the state of the tables in its memory (whose turn it is, blind values, cards…), etc.
The problem here is that I don't know how much I can rely on the web server's memory to be "permanent". A simple restart of the server would cause that memory to be lost, and therefore all the games in progress before the restart would get lost / cause trouble.
I've read about using Table Storage to keep session data and share it between instances, but in my case it's not just a string of text that I want to share but, let's say for example, a Lobby object which contains all the info associated with the games.
This is very roughly the structure of the object I have in memory
After some of your comments, you can see that the object that needs to be stored is quite big and is being updated almost constantly. I don't know how well serializing and deserializing is going to work for me here...
Should I consider an Azure VM, which I'm hoping will have persistent memory, instead of a Website?
Or is there a better approach to achieve something like this?
Thanks all for the answers and comments, you've made it clear that one can't rely on local memory when working on the cloud.
I'm going to do some refactoring and optimize the "state" object and then use a caching service.
Two questions come to my mind though, and once you throw some light on these I promise I will shut up and accept #astaykov's great answer.
CONCURRENCY AT INSTANCE LEVEL - I have classic thread locks in my app to avoid concurrency problems, so I'm hoping there is something equivalent for those caching services you guys propose?
Also, I have a few timeouts per table (increase blinds, number of seconds the players have to act…). Let's say a user has just folded a hand; he's finished interacting with the state object, so I update the cache. While that state object (to which the timers belong) is cached, my timers will stop ticking…
I know I'm not explaining myself very well here but I hope you guys see my point.
I'd suggest using the Azure Redis Cache.
Here is a nice sample of how to build an MVC app with Redis Cache in 15 minutes.
You can, of course, use the Azure Managed Cache. Or end up with Azure Tables. And Azure Tables can hold much more than just a string. But I believe the caching solutions would have lower latency in communication.
Either way, your objects have to be serializable. And yes, the objects will get serialized/deserialized on every access. You can do it manually, or let the framework do it for you. From what I've read, Newtonsoft.Json is quite a good and optimized JSON serializer/deserializer.
UPDATE
As for a VM running in the cloud: a VM will be restarted sooner or later! The application pool will recycle, planned maintenance will occur, unplanned maintenance will occur, a hard disk will fail, a memory module will fail, an unforeseen disaster will happen.
Only one thing is for sure: if you want your data to survive server crashes, change the way you think about and design software, and take the data out of (local) memory. Or just live with the fact that the application may lose state sometimes.
Second update - for the clocks
Well, you have to use your imagination and experience. I would question whether your clocks work at all in the context of an ASP.NET app (unless all of them are static properties of a static type, which would be a little hell). My approach would be to extend my app heavily to the client as well (JavaScript). There are a lot of great frameworks out there - SignalR, AngularJS, KnockoutJS, none of them to be underestimated! By extending your object model to the client, you can maintain the player objects' lifetime on the client (keeping the clocks ticking) and send updates from the client to the server for all those events. If you take a look at SignalR, you can keep real-time communication between multiple clients (say, players) and the server. And the server side of SignalR scales out nicely with Azure Service Bus and even Redis.

Domino Database connection for a Java bean architecture

We are moving our multi-database web application from LS to a Java beans architecture, but are struggling to decide how best to handle database connections and what scope we should use for them.
If we use sessionScope then connections to 5-6 databases will be created per call for each user. If we use an applicationScope bean for the database connections then they will remain open until the server is restarted, causing memory leaks. I understand that certain values, such as System Configuration values which rarely change, can be cached at applicationScope level, but I am concerned about the rest of the connections.
My question really is: what's the best way to handle Domino database connections (Domino objects are not serializable) without hurting performance or causing memory leaks or automatic GC issues?
This is a tough one because it deals with architecting a specific solution vs just some generic "this works better than that" advice. We have had great success architecting a consumer XPage application so that data is retrieved from additional databases. Sort of a front end with database backends but with Domino.
We don't use applicationScope for anything, because there is nothing global to the application; but even if there were, there is enough chatter out there to indicate that applicationScope is perhaps not as ubiquitous as it sounds, and therefore you have to monitor your objects closely.
You already figured out the Domino object issue so that has to be done no matter which approach you choose.
Depending on your application you may be staring down some major rearchitecting but my recommendation is to try it with the sessionScope first and see how it performs. Do some benchmarking. If it works fast enough then go with that but as you develop your beans you should pay VERY close attention to performance optimization. The multiple database calls could be an issue but you really won't know until you play with it a little bit.
One thing that will help is that if you build your classes and beans using a more detailed architecture than you think you need at first (don't try to pile everything into a single class or bean), not only will it be easier to adapt your architecture if needed, but you will also start to see design patterns that you may not have even known were possibilities.
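As an illustration of the Domino object issue mentioned above (Domino objects are not serializable, so they should not live in your scoped beans), here is a minimal sketch of a serializable holder class that copies the values it needs out of a lotus.domino.Document and recycles the handle straight away; the class, field, and item names are hypothetical.

import java.io.Serializable;
import lotus.domino.Document;
import lotus.domino.NotesException;

// Hypothetical example: copy plain values out of the Domino object, then recycle it,
// so only serializable data ends up in sessionScope.
public class ConfigEntry implements Serializable {
    private static final long serialVersionUID = 1L;

    private String title;
    private String value;

    public static ConfigEntry fromDocument(Document doc) throws NotesException {
        ConfigEntry entry = new ConfigEntry();
        try {
            entry.title = doc.getItemValueString("Title"); // hypothetical item names
            entry.value = doc.getItemValueString("Value");
        } finally {
            doc.recycle(); // release the Domino handle as soon as we are done with it
        }
        return entry;
    }

    public String getTitle() { return title; }
    public String getValue() { return value; }
}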
As Russell mentions, there is no one way to do this, and each approach will have its pros and cons.
There is a wrapped document class, DominoDocument, that you can use to store document information:
public static DominoDocument wrap(java.lang.String database,
lotus.domino.Database db,
java.lang.String parentId,
java.lang.String form,
java.lang.String computeWithForm,
java.lang.String concurrencyMode,
boolean allowDeletedDocs,
java.lang.String saveLinksAs)
Javadoc is here:
http://public.dhe.ibm.com/software/dw/lotus/Domino-Designer/JavaDocs/XPagesExtAPI/8.5.2/com/ibm/xsp/model/domino/wrapped/DominoDocument.html
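For illustration, a rough sketch of a call to this factory method follows; whether null is acceptable for the optional parameters, and the form name used, are assumptions on my part rather than anything documented here.

import com.ibm.xsp.model.domino.wrapped.DominoDocument;
import lotus.domino.Database;

// Hypothetical usage of the wrap() factory shown above.
public DominoDocument wrapExample(Database db) throws Exception {
    DominoDocument doc = DominoDocument.wrap(
            db.getFilePath(), // database path
            db,               // lotus.domino.Database handle
            null,             // parentId: no parent document (assumption)
            "Person",         // form name (hypothetical)
            null,             // computeWithForm (assumption)
            null,             // concurrencyMode (assumption)
            false,            // allowDeletedDocs
            null);            // saveLinksAs (assumption)
    doc.setValue("FirstName", "Ada"); // hypothetical field
    doc.save();
    return doc;
}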
However, this just does some of the recycle() handling in the background, so you are still going to have the same overhead generated by creating and recycling the database objects.
The main overhead you will find is creating the connection to the database in your Java code. Once that connection is made, everything else is relatively fast.
When testing this for performance, I would recommend using the XPages Toolkit. Videos on how to use it are part of the XPages Masterclass on OpenNTF.
http://www.openntf.org/internal/home.nsf/project.xsp?action=openDocument&name=XPages%20Masterclass
