This is more of a non-tech question.
We intend to use OrganizationServiceContext with Linq as opposed to calling OrganizationServiceProxy.
My question is: what should the lifetime of the context be? Should it be instantiated once per method, or can you keep it around for the life of the web application using a singleton approach?
What would the pros/cons be? Any advice?
Thanks in advance
You should never keep a datacontext around for the life of a web application. The application lifecycle is managed outside of your code.
There is also a world of pain around saving changes when other users are saving at the same time. Data contexts should be managed only for the life of the request, and calling save changes should never persist bits and pieces of other users' requests while they are still being processed.
If you want to reduce reads, then use caching.
If you want to manage concurrency use transactions with a unit of work.
Just to expand a little on Gats' answer, which is entirely correct: we create new context objects pretty much for each separate method we have. Even for Silverlight, where we know we're running for one user at a time, managing what is in the context at any given moment is too painful to be worth avoiding the creation of a new context object.
I have a naïve version of a PokerApp running as an Azure Website.
The server stores the state of the tables in its memory (whose turn it is, blind values, cards, and so on).
The problem here is that I don't know how much I can rely on the WebServer's memory to be "permanent". A simple restart of the server would cause that memory to be lost and therefore all the games in progress before the restart would get lost / cause trouble.
I've read about using Table Storage to keep session data and share it between instances, but in my case it's not just a string of text that I want to share but, for example, a Lobby object which contains all the info associated with the games.
This is very roughly the structure of the object I have in memory
After some of your comments: as you can see, the object that needs to be stored is quite big and is being modified almost constantly. I don't know how well serializing and deserializing it is going to work for me here...
Should I consider an Azure VM, which I'm hoping would have persistent memory, instead of a Website?
Or is there a better approach to achieve something like this?
Thanks all for the answers and comments, you've made it clear that one can't rely on local memory when working on the cloud.
I'm going to do some refactoring and optimize the "state" object and then use a caching service.
Two questions come to mind though, and once you throw some light on these I promise I will shut up and accept #astaykov's great answer.
CONCURRENCY AT INSTANCE LEVEL - I have classic thread locks in my app to avoid concurrency problems, so I'm hoping there is something equivalent for those caching services you guys propose?
Also, I have a few timeouts per table (increase blinds, number of seconds the players have to act…). Let's say a user has just folded a hand; he's finished interacting with the state object, so I update the cache. While that state object (to which the timers belong) is sitting in the cache, my timers will stop ticking…
I know I'm not explaining myself very well here but I hope you guys see my point.
I'd suggest using the Azure Redis Cache.
Here is a nice sample showing how to build an MVC app with Redis Cache in 15 minutes.
You can, of course, use the Azure Managed Cache. Or end up with Azure Tables. And Azure Tables can hold much more than just a string. But I believe the caching solutions would have lower latency in communication.
Either way, your objects have to be serializable. And yes - the objects will get serialized/deserialized on every access. You can do it manually, or let the framework do it for you. From what I've read, NewtonSoft.JSON is quite a good and well-optimized JSON serializer/deserializer.
UPDATE
As for a VM running in the cloud - a VM will be restarted sooner or later! The application pool will recycle, planned maintenance will occur, unplanned maintenance will occur, a hard disk will fail, a memory module will fail, an unforeseen disaster will happen.
Only one thing is for sure - if you want your data to survive server crashes, change the way you think about and design software, and take the data out of (local) memory. Or just live with the fact that the application may lose state at some point.
Second update - for the clocks
Well, you have to play with your imagination and experience here. I would question whether your clocks work at all in the context of an ASP.NET app (unless all of them are static properties of a static type, which would be a little hell). My approach would be to heavily extend the app to the client side as well (JavaScript). There are a lot of great frameworks out there - SignalR, AngularJS, KnockoutJS - none of them to be underestimated! By extending your object model to the client, you can maintain the player objects' lifetime on the client (keeping the clock ticking) and send updates from the client to the server for all of those events. If you take a look at SignalR, you can keep real-time communication between multiple clients (say players) and the server. And the server side of SignalR scales out nicely with Azure Service Bus and even Redis.
I have a couple of questions regarding EJB transactions. I have a situation where a process has become longer running than originally intended and is sometimes failing because server timeouts are being exceeded. While I have increased the timeouts for now (both total transaction and max transaction), I know that for a long-running process it makes more sense to segment the work as much as possible into smaller units that don't fail based on a timeout. As a result, I'm looking for some thoughts or references regarding the next course of action based on the background below and the questions that follow.
Environment:
EJB 3.1, JPA 2.0, WebSphere 8.5
Background:
I built a set of POJOs to do some batch-oriented work for an enterprise application. They are non-EJB POJOs that were intended to implement several business processes (5 related, sequential processes, each depending on its predecessor). The POJOs are in a plain Java project, not an EJB project.
However, these POJOs access an EJB facade for database access via JPA. The abstract core of the 5 business processes does the JNDI lookup for the EJB facade in order to return the domain objects for processing. Originally, the design was to run entirely on the server; however, a need arose to initiate these processes externally. As a result, I created an EJB wrapper so that the processes could be called remotely (individually or as a single process based on a common strategy interface). Unfortunately, the size of the data, both row width and row count, has grown well beyond the original intent.
The processing time required to complete these batch processes has increased significantly (from around a couple of hours to around half a day, and it could grow beyond that). Only one of the 5 processes made sense to multi-thread (and I did implement it multi-threaded). Since I have the wrapper EJB to initiate one or all of them, I have decided to create a new container transaction for each process, as opposed to the single default transaction of "required" when I run them all as a single process. Since the one process is multi-threaded, it would make sense to attempt to create a new transaction per thread; however, being a group of POJOs, they do not have transaction capability on their own.
Question:
So my question is, what makes more sense and why? Re-engineer the POJOs to be EJBs themselves and have the wrapper EJB instantiate each process as a child process, where each can have its own transaction and, more importantly, the multi-threaded process can create a transaction per thread? Or does it make more sense to attempt to create a UserTransaction in the POJOs from a JNDI lookup in the container and try to manage it as if it were a bean-managed transaction (if that's even a viable solution)? I know this may be application dependent, but what is reasonable with regard to timeouts for a Java EE container? Obviously, I don't want runaway processes, but I want to make sure that I can complete these batch processes.
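For context, a minimal sketch of that second option, a POJO demarcating its own transaction via a JNDI lookup of the container's UserTransaction, would look something like the following. Class and method names are hypothetical, and containers generally reject this if the calling EJB is already running inside a container-managed transaction, which is part of why I question its viability.

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class BatchStep {

    public void runStep() throws Exception {
        // Standard JNDI name for the container-provided UserTransaction
        UserTransaction utx = (UserTransaction) new InitialContext()
                .lookup("java:comp/UserTransaction");
        utx.begin();
        try {
            doWork();   // hypothetical unit of work (JPA calls via the EJB facade)
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }

    private void doWork() {
        // ... business logic for one segment of the batch ...
    }
}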
Unfortunately, this application has already been deployed as a production system. Re-engineering, though it may be little more than assembling the strategy logic into EJBs, is a large change to the functionality.
I did look around for some other threads here and via general internet searches, but thought I would see if anyone had compelling arguments for one over the other or another solution entirely. Additional links that talk about a topic such as this are appreciated. I wrestled with whether to post this since some may construe this as subjective, however, I felt the narrowed topic was worth the post and potentially relevant to others attempting processes like this.
This is not direct answer to your question, but something you could consider.
WebSphere 8.5, especially for this kind of application, provides a batch container. The batch function accommodates applications that must perform batch work alongside transactional applications. Batch work might take hours or even days to finish and uses large amounts of memory or processing power while it runs. You can reuse your Java classes in batch applications, batch steps can be run in parallel across a cluster, and the container provides transaction checkpoint management.
Take a look at following resources:
IBM Education Assistant - Batch applications
Getting started with the batch environment
Since I really didn't get a whole lot of response or thoughts for this question over the past couple of weeks, I figured I would answer this question to hopefully help others in making a decision if they run across this or a similar situation.
Ultimately, I re-engineered one of the POJOs into an EJB that acts as a wrapper for calling the other POJOs. The wrapper EJB performs the same activity as when it was just a POJO, except that I added transaction semantics (REQUIRES_NEW) on the primary method. The primary method calls the other POJOs based on a strategy pattern, so each call (or POJO) gets its own transaction. Other methods in the EJB that call the primary method were defined with NOT_SUPPORTED so that I could separate the transactions for each call to the primary method and not join an existing transaction.
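For anyone facing the same decision, a stripped-down sketch of that wrapper looks roughly like this (class and method names are illustrative, not the actual production code). The important details are REQUIRES_NEW on the primary method, NOT_SUPPORTED on the method that loops over all processes, and going through the container proxy so the attributes are actually applied:

import java.util.List;

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

// Hypothetical strategy interface implemented by each of the 5 POJO processes
interface BatchProcess {
    void execute();
}

@Stateless
public class BatchProcessWrapperBean {

    @Resource
    private SessionContext ctx;

    // Runs outside any transaction so each process below really gets its own one
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public void runAll(List<BatchProcess> processes) {
        // Go through the container proxy; a direct this.runOne(...) call would
        // bypass the interceptor and REQUIRES_NEW would silently be ignored
        BatchProcessWrapperBean self =
                ctx.getBusinessObject(BatchProcessWrapperBean.class);
        for (BatchProcess process : processes) {
            self.runOne(process);
        }
    }

    // Each process (strategy implementation) gets its own container transaction
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void runOne(BatchProcess process) {
        process.execute();
    }
}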
Full disclosure: the addition of transaction semantics initially increased the processing time significantly (to the order of days), but the process did not fail due to exceeding transaction timeouts. The slowdown was the result of some unexpected problems with JPA many-to-one relationships that were bringing back too much data - data retrieved purely as a result of the many-to-one relationship. As I mentioned originally, some of my data row widths increased unexpectedly. That extra data was in the related table object, but the query did not need it at the time. I corrected those issues by changing my queries (creating objects for SELECT NEW queries, changing relationships to FetchType.LAZY, etc.).
Going forward, if I am able to dedicate enough time, I will transform the rest of those POJOs into EJBs. The POJO doing the most significant amount of threaded work has been implemented as a Callable that is run via an ExecutorService. If I can transform that one, the plan will be to give each thread its own transaction. However, while I'm not sure yet, it appears that my container may already be creating transactions for each thread group (of 10 threads), judging by the status updates I'm seeing. I will have to do more investigation.
The Problem
Our Liferay system is the basis for synchronizing data with other web applications.
And we use Model Listeners for that purpose.
There are a lot of web-service calls and database updates made through the listeners, and consequently the corresponding action in Liferay is too slow.
For example:
On adding a User in Liferay we need to fire a lot of web-service calls to add user details and update other systems with the user data, and also update some Liferay custom tables. So adding a User takes a lot of time, and in a few rare cases the request may time out!
Since the code in the UserListener only depends on the user details, and the User would still be added in Liferay even if there were an exception in the UserListener, we have thought of the following solution.
We also have a scheduler in Liferay which fixes things up if there was some exception while executing code in the listeners.
Proposed Solution
We thought of making the code in the UserListener asynchronous by using the Concurrency API.
So here are my questions:
Is it recommended to have concurrent code in Model Listeners?
If yes, will it have any adverse effects, such as transaction issues, if we also update Liferay custom tables through this code?
What can be other general Pros and Cons of this approach?
Is there any better way to push real-time updates to other systems without hampering the user experience?
Thank you for any help on this matter
It makes sense that you want to use Concurrency to solve this issue.
Doing intensive work like invoking web services in the thread that modifies the model is not really a good idea, quite apart from the impact it will have on the user experience.
Firing off threads within the models' listeners may be somewhat complex and hard to maintain.
You could explore using Liferay's Message Bus paradigm where you can send a message to a disconnected message receiver which will then do all the intensive work outside of the model listener's calling thread.
Read more about the message bus here:
Message Bus Developer Guide
Message Bus Wiki
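As a very rough illustration of that approach: the model listener only publishes a lightweight message, and a separately registered receiver does the slow web-service calls outside the listener's thread. This is a sketch assuming Liferay 6.x-era messaging APIs; the destination name "myapp/user-sync" and the class names are placeholders, and the destination and listener still need to be registered in your plugin's messaging configuration.

import com.liferay.portal.kernel.messaging.Message;
import com.liferay.portal.kernel.messaging.MessageBusUtil;
import com.liferay.portal.kernel.messaging.MessageListener;
import com.liferay.portal.kernel.messaging.MessageListenerException;

public class UserSyncMessaging {

    // Called from the UserListener's onAfterCreate/onAfterUpdate - cheap and fast
    public static void publishUserChange(long userId) {
        Message message = new Message();
        message.put("userId", userId);
        MessageBusUtil.sendMessage("myapp/user-sync", message);
    }

    // Registered against the same destination; runs outside the listener's thread
    public static class UserSyncListener implements MessageListener {

        public void receive(Message message) throws MessageListenerException {
            long userId = message.getLong("userId");
            // ... call the external web services and update the custom tables here ...
        }
    }
}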
We are moving our multi-database web application from LS to a Java beans architecture, but are struggling to decide how best to handle database connections and what scope should we use for them.
If we use sessionScope, then connections to 5-6 databases will be created per call for each user. If we use an applicationScope bean for the database connections, then they will remain open until the server is restarted, causing memory leaks. I understand that certain values, such as system configuration values which rarely change, can be cached at applicationScope level, but I am concerned about the rest of the connections.
My question really is: what is the best way to handle Domino database connections (Domino objects are not serializable) without hurting performance or running into memory leaks or automatic GC issues?
This is a tough one because it deals with architecting a specific solution versus just giving some generic "this works better than that" advice. We have had great success architecting a consumer XPage application so that data is retrieved from additional databases - sort of a front end with database back ends, but with Domino.
We use no applicationScope anything, because there is nothing global to the application. But even if there was, there is enough chatter out there to indicate that applicationScope is perhaps not as ubiquitous as it sounds, and therefore you have to monitor your objects closely.
You already figured out the Domino object issue so that has to be done no matter which approach you choose.
Depending on your application you may be staring down some major re-architecting, but my recommendation is to try it with sessionScope first and see how it performs. Do some benchmarking. If it works fast enough then go with that, but as you develop your beans you should pay VERY close attention to performance optimization. The multiple database calls could be an issue, but you really won't know until you play with it a little bit.
One thing that will help: if you build your classes and beans using a more detailed architecture than you think you need at first (don't try to pile everything into a single class or bean), not only will it be easier to adapt your architecture if needed, but you will also start to see design patterns that you may not have even known were possibilities.
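To make the sessionScope approach concrete, here is a minimal sketch of a session-friendly bean, assuming the XPages Extension Library's ExtLibUtil is available for obtaining the current session; the database path, view name and item names are made up. The point is that Domino handles live only inside the method and get recycled, while the bean itself stores nothing but serializable values.

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import com.ibm.xsp.extlib.util.ExtLibUtil;

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;
import lotus.domino.Session;
import lotus.domino.View;

public class ConfigBean implements Serializable {

    private static final long serialVersionUID = 1L;

    // Only plain serializable values are kept in the scope
    private final Map<String, String> config = new HashMap<String, String>();

    public String getConfigValue(String key) throws NotesException {
        if (config.isEmpty()) {
            loadConfig();
        }
        return config.get(key);
    }

    private void loadConfig() throws NotesException {
        Session session = ExtLibUtil.getCurrentSession();
        Database db = null;
        View view = null;
        try {
            db = session.getDatabase(session.getServerName(), "apps/config.nsf");
            view = db.getView("Keywords");
            Document doc = view.getFirstDocument();
            while (doc != null) {
                config.put(doc.getItemValueString("Key"),
                           doc.getItemValueString("Value"));
                Document next = view.getNextDocument(doc);
                doc.recycle();   // release each handle before moving on
                doc = next;
            }
        } finally {
            // Recycle everything; the bean keeps only the extracted strings
            if (view != null) { view.recycle(); }
            if (db != null) { db.recycle(); }
        }
    }
}

Registered at sessionScope in faces-config.xml, a bean like this never puts a non-serializable Domino handle into the scope.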
As Russell mentions, there is no one way to do this and each will have their pros/cons.
There is a Wrapped Document class you can use to store Document information.
public static DominoDocument wrap(java.lang.String database,
lotus.domino.Database db,
java.lang.String parentId,
java.lang.String form,
java.lang.String computeWithForm,
java.lang.String concurrencyMode,
boolean allowDeletedDocs,
java.lang.String saveLinksAs)
Javadoc is here:
http://public.dhe.ibm.com/software/dw/lotus/Domino-Designer/JavaDocs/XPagesExtAPI/8.5.2/com/ibm/xsp/model/domino/wrapped/DominoDocument.html
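For illustration, a hypothetical call using that signature; the database path, form name and the way the back-end Database handle is obtained are placeholders.

import com.ibm.xsp.model.domino.wrapped.DominoDocument;

import lotus.domino.Database;

public class WrapExample {

    public DominoDocument wrapNewDocument(Database db) {
        return DominoDocument.wrap(
                "apps/data.nsf",   // database path used by the wrapper
                db,                // back-end lotus.domino.Database handle
                null,              // parentId - not a response document
                "Person",          // form
                null,              // computeWithForm
                null,              // concurrencyMode
                false,             // allowDeletedDocs
                null);             // saveLinksAs
    }
}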
However, this just does some of the handling of recycle() in the background, so you are still going to have the same overhead generated by creating and recycling the database objects.
The main overhead you will find is creating the connection to the database in your Java code. Once that connection is made, everything else is relatively fast.
I would recommend when testing this for performance that you use the XPages Toolkit. Videos on how to use it are part of the XPages Masterclass on openNTF.
http://www.openntf.org/internal/home.nsf/project.xsp?action=openDocument&name=XPages%20Masterclass
Without getting into all of the gory details, I am trying to design a service-based solution that will be consumed by several client applications. The solution allows admins to create and modify document templates which are used by regular users to perform data entry. It is my intent to make the application a learning tool for best practices, techniques, etc.
And, at the same time, I have to accommodate a schizophrenic environment because the 'powers that be' can never stick to their decisions regarding technologies and tools. For example, I am using Linq-to-SQL today because they aren't ready to go to EF4, but there is also discussion about switching over to NHibernate. So, I have to make the code as persistence ignorant as possible to minimize the work required should we change OR/M tools.
At this point, I am also limited to using the partial class approach to extend the Linq-to-SQL classes so they implement interfaces defined in my business layer. I cannot go with POCOs because management insists that we leverage all built-in tooling, etc. so I must support the Linq-to-SQL designer.
That said, my service interface has a StartSession method that accepts a template identifier in its signature. The operation flows like this:
If a session already exists in the database for the current user and specified template, update the record to show the current activity. If not, create a new session object.
The session is associated with an instance of the template, call it the "form". So if the session is new, I need to retrieve the template information to create the new "form", associate it with the session then save the session to the database. On the other hand, if the session already existed, then I need to also load the "form" with the data entered by the user and stored in the session previously.
Finally, the session (with form definition and data) is returned to the caller.
My first objective is to create clean separation between the logical layers of my application. The second is to maintain persistence ignorance (as mentioned above). Third, I have to be able to test everything so all dependencies must be externalized for easy mocking. I am using Unity as an IoC tool to help in this area.
To accomplish this, I have defined my service class and data contracts as needed to support the service interface. The service class will have a dependency injected from the business layer that actually performs the work. And here's where it has gotten messy for me.
I've been trying to go the Unit of Work and Repository route to help with persistence ignorance. I have an ITemplateRepository and an ISessionRepository which I can access from my IUnitOfWork implementation. The service class gets an instance of my SessionManager class (in my BLL) injected. The SessionManager receives the IUnitOfWork implementation through constructor injection and delegates all persistence to the UoW, but I find myself playing a shell game with the various pieces of logic.
Should all of the logic described above be in the SessionManager class, or perhaps in the UoW implementation? I want as little logic as possible in the repository implementations, because changing the data access platform could otherwise result in unwanted changes to the application logic. Since my repository works against an interface, how do I best go about creating the new session (keeping in mind that a valid session has a reference to the template, er, form being used)? Would it be better to use POCOs after all, even though I have to support the designer, and use a tool like AutoMapper inside the repository implementation to handle translating the objects?
Ugh!
I know I am just stuck in analysis paralysis, so a little nudge is probably all I need. What would be ideal is if someone could provide an example of how you would solve the problem given the business rules and architectural constraints I've defined.
If you don't use POCOs then you're not really going to be data store agnostic. And using POCOs will allow you to get your system up and running with memory-based repositories, which is what you'll likely want to use for your unit tests anyhow.
AutoMapper sounds nice, but I wouldn't consider it a deal breaker. Mapping POCOs to EF4, LinqToSql or nHibernate isn't that time consuming unless you have hundreds of tables. When/if your POCOs begin to diverge from your persistence layer, you might find that AutoMapper won't really fit the bill.